CN107396037B - Video monitoring method and device - Google Patents

Video monitoring method and device

Info

Publication number
CN107396037B
Authority
CN
China
Prior art keywords
type
world
scene
coordinates
world coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610322001.7A
Other languages
Chinese (zh)
Other versions
CN107396037A (en)
Inventor
孙杰
韩杰茜
全晓臣
呼志刚
浦世亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201610322001.7A priority Critical patent/CN107396037B/en
Publication of CN107396037A publication Critical patent/CN107396037A/en
Application granted granted Critical
Publication of CN107396037B publication Critical patent/CN107396037B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06 Management of faults, events, alarms or notifications
    • H04L41/0604 Management of faults, events, alarms or notifications using filtering, e.g. reduction of information by using priority, element types, position or time
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Abstract

Embodiments of the invention provide a video monitoring method and device. The method comprises the following steps: when a moving target is detected during video monitoring of a target scene, obtaining the target image coordinates of the moving target in the image three-dimensional coordinate system of the video frame in which it appears; calculating first-type world coordinates corresponding to the target image coordinates based on a preset first conversion relationship between the image three-dimensional coordinate system and a world three-dimensional coordinate system corresponding to the target scene, the first-type world coordinates being the coordinates of the moving target in the world three-dimensional coordinate system; obtaining second-type world coordinates corresponding to a preset picture warning position; and judging whether the change in the positional relationship between the first-type world coordinates and the second-type world coordinates conforms to a predetermined alarm rule, and if so, triggering an alarm event for the target scene. The scheme improves the triggering accuracy of alarm events.

Description

Video monitoring method and device
Technical Field
The invention relates to the technical field of video monitoring, in particular to a video monitoring method and device.
Background
In real life, certain positions in a real scene need to be placed under guard to detect intrusion, crossing, entering, leaving, and similar events. Such positions are referred to as scene alert positions; their types include, but are not limited to, alert lines, alert surfaces, and alert volumes.
In the prior art, in order to guard a scene alert position, a 2D picture alert position is usually set in the video monitoring picture corresponding to the target scene in which the scene alert position is located, typically by drawing it directly with a mouse. When the change in the positional relationship between a moving target appearing in the video monitoring picture and the picture alert position conforms to a predetermined alarm rule, an alarm event is triggered; specifically, the alarm is triggered when the change in the positional relationship between the image coordinates of the picture alert position and the image coordinates of the moving target (both coordinates in the two-dimensional image coordinate system) conforms to the predetermined alarm rule.
However, the picture alert position set in the prior art is based only on the 2D plane and carries no three-dimensional spatial information about the real scene, such as the height above the ground or the relative distances between objects, which easily leads to misjudgment. For example, suppose a picture alert position is drawn directly on the video monitoring picture to guard a flower bed, so that an alarm should be triggered when a person enters the flower bed but not when a pedestrian merely walks past on the roadside. Because of the perspective relationship, the passing pedestrian may nevertheless be judged to have entered the scene alert position, triggering an alarm that is in fact a false alarm.
Disclosure of Invention
The embodiment of the invention aims to provide a video monitoring method and a video monitoring device so as to improve the triggering accuracy of an alarm event. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a video monitoring method, including:
when a moving target is detected in a video monitoring process aiming at a target scene, obtaining a target image coordinate of the moving target under an image three-dimensional coordinate system of a video frame where the moving target is located;
calculating a first type of world coordinate corresponding to the target image coordinate based on a preset first conversion relation between the image three-dimensional coordinate system and a world three-dimensional coordinate system corresponding to the target scene, wherein the first type of world coordinate is the coordinate of the moving target in the world three-dimensional coordinate system;
obtaining a second-class world coordinate corresponding to a preset picture alert position, wherein the second-class world coordinate is a world coordinate corresponding to an image coordinate of the picture alert position, the second-class world coordinate is determined based on the first conversion relationship, and the picture alert position is as follows: an image position which is set in a video monitoring picture of the target scene without a moving target and corresponds to a scene warning position in the target scene in advance, wherein the second type of world coordinates are world coordinates of the scene warning position in the world three-dimensional coordinate system;
and judging whether the position relation change of the first type world coordinate and the second type world coordinate accords with a preset alarm rule, and if so, triggering an alarm event aiming at the target scene.
In a second aspect, an embodiment of the present invention provides a video monitoring apparatus, including:
the target image coordinate determination module is used for obtaining target image coordinates of the moving target in an image three-dimensional coordinate system of a video frame when the moving target is detected in the video monitoring process aiming at a target scene;
the first-class world coordinate determination module is used for calculating first-class world coordinates corresponding to the target image coordinates based on a preset first conversion relation between the image three-dimensional coordinate system and a world three-dimensional coordinate system corresponding to the target scene, wherein the first-class world coordinates are coordinates of the moving target in the world three-dimensional coordinate system;
a second-class world coordinate determination module, configured to obtain a second-class world coordinate corresponding to a preset picture alert position, where the second-class world coordinate is a world coordinate corresponding to an image coordinate of the picture alert position, the second-class world coordinate is determined based on the first conversion relationship, and the picture alert position is: an image position which is set in a video monitoring picture of the target scene without a moving target and corresponds to a scene warning position in the target scene in advance, wherein the second type of world coordinates are world coordinates of the scene warning position in the world three-dimensional coordinate system;
the position relation determining module is used for judging whether the position relation change of the first type world coordinate and the second type world coordinate accords with a preset alarm rule or not, and if so, the alarm event generating module is executed;
the alarm event generation module is used for triggering an alarm event aiming at the target scene.
In the embodiment of the invention, when a moving target is detected, its first-type world coordinates in the world three-dimensional coordinate system are determined, and it is judged whether the change in the positional relationship between the first-type world coordinates and the second-type world coordinates corresponding to the image coordinates of the picture alert position conforms to a predetermined alarm rule; if so, an alarm event for the target scene is triggered. The picture alert position is an image position, set in advance in a video monitoring picture of the target scene containing no moving target, that corresponds to a scene alert position in the target scene. Compared with the prior art, positional relationships are matched based on coordinate information in the real-world three-dimensional coordinate system rather than on the 2D plane of the two-dimensional image coordinate system, so false alarms are reduced and the triggering accuracy of alarm events is improved.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings described below are only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a first flowchart of a video monitoring method according to an embodiment of the present invention;
fig. 2 is a second flowchart of a video monitoring method according to an embodiment of the present invention;
fig. 3 is a third flowchart of a video monitoring method according to an embodiment of the present invention;
fig. 4 is a fourth flowchart of a video monitoring method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a video monitoring apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a basis for calculating depth information for a binocular camera;
FIG. 7 is a schematic diagram illustrating a drawing of a warning line of a picture according to an embodiment of the present invention;
FIG. 8 is a schematic drawing of a picture warning surface according to an embodiment of the present invention;
FIG. 9 is a schematic drawing of a picture warning surface according to an embodiment of the present invention;
FIG. 10 is a schematic drawing diagram of a frame alert volume according to an embodiment of the present invention;
fig. 11 is a flowchart illustrating an extraction process of a moving object in an image captured by a binocular camera.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a video monitoring method and a video monitoring device, which are used for improving the triggering accuracy of an alarm event.
First, a video monitoring method provided by an embodiment of the present invention is described below.
It should be noted that the execution body of the video monitoring method provided by the embodiment of the present invention may be a video monitoring apparatus. The video monitoring apparatus may run on a device connected to an image capture device, and the image capture device may have a moving-target detection function. For example, the image capture device may be a camera, and the connected device may be a hard-disk video recorder or a terminal, where the hard-disk video recorder or terminal obtains the video stream of the target scene from the image capture device in real time and outputs it; the video stream is composed of video frames. Of course, the video monitoring apparatus may also run in the image capture device itself, in which case the image capture device may have a moving-target detection function.
As shown in fig. 1, a video monitoring method provided in an embodiment of the present invention may include the following steps:
s101, when a moving target is detected in a video monitoring process aiming at a target scene, obtaining a target image coordinate of the moving target under an image three-dimensional coordinate system of a video frame where the moving target is located;
When a moving target is detected during video monitoring of a target scene, the video monitoring apparatus can obtain the target image coordinates of the moving target in the image three-dimensional coordinate system of the video frame in which the moving target appears, where the moving target may be detected by any existing moving-target detection method. It should be noted that the target image coordinates need only uniquely identify the moving target, so their number is not limited here; in addition, since detecting a moving target requires analysing several consecutive video frames, the video frame in which the moving target appears is in fact a plurality of consecutive video frames, and the target image coordinates are accordingly coordinate information in those consecutive video frames.
It will be understood by those skilled in the art that the image three-dimensional coordinate system is defined relative to the image, the world three-dimensional coordinate system is defined relative to the target scene in the real environment, and the camera three-dimensional coordinate system is defined relative to the image capture device (e.g., a camera). Specifically, the image three-dimensional coordinate system is calibrated in units of pixels: its X and Y coordinates are the pixel position in the image, and its Z coordinate is the depth information of the image, i.e., the depth value of the pixel. The camera three-dimensional coordinate system is centred on the image capture device, with its origin at the position of the device, and is an absolute coordinate system of the objective world. The world three-dimensional coordinate system, also called the real-world three-dimensional coordinate system, is likewise an absolute coordinate system of the objective world. It should be emphasized that, in the embodiment of the present invention, the origin of the world three-dimensional coordinate system is the projection of the position of the image capture device onto the horizontal plane, i.e., the projection of the origin of the camera three-dimensional coordinate system, and the three axes of the camera and world three-dimensional coordinate systems may be oriented in the same way.
Note that the depth information of the image may be obtained as follows: a binocular camera is used as the image capture device, and the depth information of each pixel, i.e., the actual distance of each pixel relative to the binocular camera, is obtained with existing techniques; alternatively, the depth information may be obtained by acquiring a depth map corresponding to the image using methods such as a TOF (time-of-flight) sensor or structured light. Taking binocular camera technology as an example, binocular stereoscopic vision is a vision system that imitates the human eyes: it obtains the depth information of a target by acquiring images of the target captured by two cameras at different positions and computing the target's parallax in those images (point Q in fig. 6). The principle of the binocular camera is shown in fig. 6: with x_left and x_right known in the left and right CCD image planes, and the baseline b and the focal length f known, the distance l between Q and the binocular camera, i.e., the depth value, can be obtained by the following formula:
l = (f × b) / (x_left − x_right)
According to the principle, the actual distance of each pixel point in the video frame relative to the binocular camera can be obtained, namely the depth information of each pixel point can be obtained.
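For illustration only (this sketch is not part of the original patent text), the disparity-to-depth relation above can be written as a small helper, assuming a rectified stereo pair, a baseline in metres and a focal length expressed in pixels:

def binocular_depth(x_left, x_right, baseline_m, focal_px):
    # x_left, x_right: horizontal pixel coordinates of the same point Q in the
    # rectified left and right images; baseline_m: baseline b in metres;
    # focal_px: focal length f in pixels. Returns l = f * b / (x_left - x_right).
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity

# Example: f = 800 px, b = 0.12 m, disparity = 16 px  ->  depth l = 6.0 m
print(binocular_depth(416.0, 400.0, baseline_m=0.12, focal_px=800.0))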
In addition, when the image capture device is a binocular camera, the basic idea of extracting a moving target from the captured pictures follows the flow shown in fig. 11: a feature-point calculation step extracts feature points from the two binocular images (left and right); a feature-point matching step selects feature points in the left image and matches them to the most correlated positions in the right image; a depth-image generation step computes the final depth image from the matching information; and a three-dimensional data calculation step computes the three-dimensional coordinate data of the target from the depth-image information combined with the binocular calibration parameters. A depth background modeling step performs background modeling on the depth map so as to filter out its background, and a depth foreground generation step uses the background model to extract the foreground of the depth map, i.e., the moving target. It is understood that, after the moving target is extracted, its target image coordinates in the image three-dimensional coordinate system of the video frame can be determined.
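As a rough illustration of such a pipeline (and only as an illustration: OpenCV's StereoSGBM matcher and MOG2 background subtractor are stand-ins assumed here for the feature matching and depth background modeling steps, not the patent's own algorithms), the flow of fig. 11 could be sketched as follows:

import cv2
import numpy as np

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
bg_model = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def extract_moving_targets(left_gray, right_gray, baseline_m, focal_px):
    # 1) feature matching between the left and right images -> disparity map
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    # 2) depth image generation: disparity -> depth (metres)
    depth = np.where(disparity > 0, focal_px * baseline_m / disparity, 0.0)
    # 3) depth background modeling and depth foreground generation
    depth_8u = np.clip(depth * 10.0, 0, 255).astype(np.uint8)  # coarse scaling for the model
    fg_mask = bg_model.apply(depth_8u)
    # 4) connected foreground regions are the moving targets; their pixel
    #    coordinates plus depth give (x, y, z) image coordinates
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    targets = []
    for c in contours:
        if cv2.contourArea(c) < 200:           # ignore small noise blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        z = float(np.median(depth[y:y + h, x:x + w]))
        targets.append((x + w / 2.0, y + h, z))  # e.g. bottom-centre of the target
    return targets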
As will be understood by those skilled in the art, when the image capture device is a binocular camera, the target image coordinates of the moving target in the image three-dimensional coordinate system of the video frame can be obtained using existing techniques; the same applies when the image capture device is based on a TOF (time-of-flight) sensor, structured light, or the like. The way the image capture device detects moving targets may likewise be implemented with any existing technique and is not limited here.
S102, calculating a first type of world coordinate corresponding to the target image coordinate based on a preset first conversion relation between the image three-dimensional coordinate system and a world three-dimensional coordinate system corresponding to the target scene;
and the first type world coordinate is the coordinate of the moving object in the world three-dimensional coordinate system.
In order to improve the triggering accuracy of alarm events, the embodiment of the present invention decides whether to trigger an alarm event by judging the change in the positional relationship between the moving target and the scene alert position in the real target scene, i.e., the relationship between the coordinates of the moving target and the coordinates of the scene alert position, both in the world three-dimensional coordinate system. Based on this idea, after the target image coordinates of the moving target in the image three-dimensional coordinate system of the video frame are obtained, the first-type world coordinates corresponding to the target image coordinates can be calculated from the preset first conversion relationship between the image three-dimensional coordinate system and the world three-dimensional coordinate system corresponding to the target scene, and subsequent processing can then be performed with the calculated first-type world coordinates.
The first conversion relation is characterized by a conversion formula of an image three-dimensional coordinate system and a camera three-dimensional coordinate system and a conversion formula of the camera three-dimensional coordinate system and a world three-dimensional coordinate system;
wherein, the conversion formula of the image three-dimensional coordinate system and the camera three-dimensional coordinate system is as follows:
[Formula image in the original publication: conversion from image coordinates (x, y, z) to camera coordinates (Xc, Yc, Zc).]
the conversion formula of the camera three-dimensional coordinate system and the world three-dimensional coordinate system is as follows:
[Formula image in the original publication: conversion from camera coordinates (Xc, Yc, Zc) to world coordinates (X, Y, Z).]
wherein (x, y, z) are the image coordinates in the image three-dimensional coordinate system, (Xc, Yc, Zc) are the camera coordinates in the camera three-dimensional coordinate system, (X, Y, Z) are the world coordinates in the world three-dimensional coordinate system, x1, x2, x3, x4 and w are constants determined by the image capture device that captured the video frames, θ is the pitch angle of the image capture device, ψ is its tilt angle, and Hcam is its height above the ground; the origin of the world three-dimensional coordinate system is the projection of the origin of the camera three-dimensional coordinate system.
It is to be understood that after the first conversion relationship is determined, the first type world coordinates corresponding to each target image coordinate may be calculated after the target image coordinates are determined.
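Since the two conversion formulas appear only as figure references in the published text, the sketch below substitutes a conventional pinhole back-projection for the image-to-camera step and a pitch/tilt rotation plus a height offset for the camera-to-world step. The intrinsics fx, fy, cx, cy, the rotation order and the choice of the world Z axis as the vertical are all assumptions made for illustration; they stand in for, but are not, the patent's constants x1-x4 and w.

import numpy as np

def image_to_camera(x, y, z, fx, fy, cx, cy):
    # Pinhole back-projection (assumed model): image point (x, y) with depth z
    # -> camera coordinates (Xc, Yc, Zc).
    return np.array([(x - cx) * z / fx, (y - cy) * z / fy, z])

def camera_to_world(p_cam, theta, psi, h_cam):
    # Assumed extrinsics: rotate by pitch theta and tilt psi, then lift by the
    # camera height h_cam so that the world origin is the projection of the
    # camera centre onto the ground (world Z taken as the vertical axis).
    r_pitch = np.array([[1, 0, 0],
                        [0, np.cos(theta), -np.sin(theta)],
                        [0, np.sin(theta),  np.cos(theta)]])
    r_tilt = np.array([[np.cos(psi), -np.sin(psi), 0],
                       [np.sin(psi),  np.cos(psi), 0],
                       [0, 0, 1]])
    return r_tilt @ r_pitch @ p_cam + np.array([0.0, 0.0, h_cam])

def image_to_world(x, y, z, intrinsics, theta, psi, h_cam):
    # First conversion relationship (illustrative): image (x, y, z) -> world (X, Y, Z).
    return camera_to_world(image_to_camera(x, y, z, *intrinsics), theta, psi, h_cam)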
S103, obtaining a second world coordinate corresponding to a preset picture warning position;
the second type world coordinate is a world coordinate corresponding to the image coordinate of the picture warning position, the second type world coordinate is determined based on the first conversion relation, and the picture warning position is as follows: and the second type of world coordinates are world coordinates of the scene warning position in the world three-dimensional coordinate system. The triggering accuracy of the alarm event needs to be improved by detecting the position relation between the moving target and the scene alert position in the real target scene, so that the second-class world coordinates corresponding to the preset picture alert position can be obtained.
It can be understood that, because the preset picture alert position is fixed, the second-type world coordinates corresponding to its image coordinates can be calculated in advance so that they can be used directly during video monitoring; it is equally reasonable to calculate them during the first round of video monitoring and reuse them directly afterwards. The image coordinates of the picture alert position that are used need only uniquely determine the picture alert position, so their number is not limited here; moreover, the second-type world coordinates correspond one-to-one with those image coordinates, so the number of second-type world coordinates is likewise not limited.
Specifically, the picture alert position may be an image position determined from second-type reference image coordinates, where the second-type reference image coordinates are image coordinates obtained by performing a pixel conversion operation on first-type reference image coordinates, and the first-type reference image coordinates are the image coordinates included in a virtual projection in the video monitoring picture containing no moving target. The pixel conversion operation uses the first conversion relationship, the height relationship between the scene alert position and its real projection, and a second conversion relationship, where the second conversion relationship is the inverse of the first conversion relationship, the real projection is the projection of the scene alert position onto a predetermined reference plane in the target scene, and the virtual projection is the position of that real projection in the video monitoring picture containing no moving target. It can be understood that the predetermined reference plane may be, but is not limited to, the ground.
It is emphasized that the picture alert position corresponds in shape to the scene alert position in the target scene. Specifically, for the target scene, the scene alert position that is set may be a scene alert line, in which case the picture alert position is a picture alert line; it may be a scene alert surface, in which case the picture alert position is a picture alert surface; or it may be a scene alert volume, in which case the picture alert position is a picture alert volume.
For clarity of layout, the setting process of the various picture alert positions will be described later with reference to specific embodiments.
S104, judging whether the position relation change of the first type world coordinate and the second type world coordinate accords with a preset alarm rule, if so, executing S105;
After the first-type world coordinates and the second-type world coordinates are obtained, it can be judged whether the change in their positional relationship conforms to the predetermined alarm rule, i.e., what the positional relationship between the moving target and the scene alert position in the target scene is, and different operations are performed according to the judgment result. Specifically, when the change in the positional relationship between the first-type and second-type world coordinates is judged to conform to the predetermined alarm rule, the triggering condition of the alarm event is satisfied, so S105 may be performed; when it is judged not to conform to the predetermined alarm rule, the triggering condition is not satisfied and no processing is required.
And S105, triggering an alarm event aiming at the target scene.
When the change in the positional relationship between the first-type world coordinates and the second-type world coordinates is judged to conform to the predetermined alarm rule, an alarm event for the target scene can be triggered. The output form of the alarm event is not limited in the embodiments of the present invention; for example, it may be an audible alarm, a short-message alarm, a dialog-box alarm, and so on.
In the embodiment of the invention, when a moving target is detected, its first-type world coordinates in the world three-dimensional coordinate system are determined, and it is judged whether the change in the positional relationship between the first-type world coordinates and the second-type world coordinates corresponding to the image coordinates of the picture alert position conforms to a predetermined alarm rule; if so, an alarm event for the target scene is triggered. The picture alert position is an image position, set in advance in a video monitoring picture of the target scene containing no moving target, that corresponds to a scene alert position in the target scene. Compared with the prior art, positional relationships are matched based on coordinate information in the real-world three-dimensional coordinate system rather than on the 2D plane of the two-dimensional image coordinate system, so false alarms are reduced and the triggering accuracy of alarm events is improved.
The following describes a video monitoring method provided by the embodiment of the present invention, taking a scene guard position as a scene guard line as an example.
It should be noted that the execution body of the video monitoring method provided by the embodiment of the present invention may be a video monitoring apparatus. The video monitoring apparatus may run on a device connected to an image capture device, and the image capture device may have a moving-target detection function. For example, the image capture device may be a camera, and the connected device may be a hard-disk video recorder or a terminal, where the hard-disk video recorder or terminal obtains the video stream of the target scene from the image capture device in real time and outputs it; the video stream is composed of video frames. Of course, the video monitoring apparatus may also run in the image capture device itself, in which case the image capture device may have a moving-target detection function.
As shown in fig. 2, the video monitoring method provided in the embodiment of the present invention may include the following steps:
s201, when a moving target is detected in a video monitoring process aiming at a target scene, obtaining a target image coordinate of the moving target in an image three-dimensional coordinate system of a video frame where the moving target is located;
s202, calculating a first type of world coordinate corresponding to the target image coordinate based on a preset first conversion relation between the image three-dimensional coordinate system and a world three-dimensional coordinate system corresponding to the target scene;
and the first type world coordinate is the coordinate of the moving object in the world three-dimensional coordinate system.
In this embodiment, the contents of S201 to S202 are similar to those of S101 to S102 in the above embodiment, and are not described herein again.
S203, obtaining a second type world coordinate corresponding to a preset picture warning line;
The second-type world coordinates are world coordinates corresponding to the image coordinates of the picture warning line and are determined based on the first conversion relationship. The picture warning line is an image position, set in advance in a video monitoring picture of the target scene containing no moving target, that corresponds to a scene warning line in the target scene; the second-type world coordinates are thus the world coordinates of the scene warning line in the world three-dimensional coordinate system.
Since the triggering accuracy of the alarm event is to be improved by examining the positional relationship between the moving target and the scene warning line in the real target scene, the second-type world coordinates corresponding to the preset picture warning line are obtained.
It can be understood that, because the preset picture warning line is fixed, the second-type world coordinates corresponding to its image coordinates can be calculated in advance so that they can be used directly during video monitoring; it is equally reasonable to calculate them during the first round of video monitoring and reuse them directly afterwards. The image coordinates of the picture warning line that are used need only uniquely determine the picture warning line, so their number is not limited here; moreover, the second-type world coordinates correspond one-to-one with those image coordinates, so the number of second-type world coordinates is likewise not limited.
Specifically, the picture warning line is an image position determined from second-type reference image coordinates, where the second-type reference image coordinates are image coordinates obtained by performing a pixel conversion operation on first-type reference image coordinates, and the first-type reference image coordinates are the image coordinates included in a virtual projection in the video monitoring picture containing no moving target. The pixel conversion operation uses the first conversion relationship, the height relationship between the scene warning line and its real projection, and a second conversion relationship, where the second conversion relationship is the inverse of the first conversion relationship, the real projection is the projection of the scene warning line onto a predetermined reference plane in the target scene, and the virtual projection is the position of that real projection in the video monitoring picture containing no moving target. It can be understood that the predetermined reference plane may be, but is not limited to, the ground.
For clarity of the layout of the scheme, the setting process of the picture warning lines corresponding to the scene warning lines is described in detail later.
S204, judging whether the change in the positional relationship between the first-type world coordinates and the second-type world coordinates satisfies: the range formed by the second-type world coordinates changes from not including the first-type world coordinates, to including them, to not including them again; if so, executing S205;
After the first-type and second-type world coordinates are obtained, it can be judged whether the change in their positional relationship satisfies this condition, and different operations are performed according to the judgment result. Specifically, when the range formed by the second-type world coordinates changes from not including the first-type world coordinates, to including them, to not including them again, the moving target in the real target scene has crossed the scene warning line, i.e., the change in the positional relationship conforms to the predetermined line-crossing alarm rule, and S205 may be executed; when this condition is not satisfied, the moving target has not crossed the scene warning line, i.e., the change does not conform to the predetermined line-crossing alarm rule, and no processing is needed.
It is emphasized that, to ensure the validity of the judgment result, whether the change in the positional relationship between the first-type and second-type world coordinates conforms to the predetermined line-crossing alarm rule should be judged with the first-type world coordinates ordered by acquisition time, from the earliest to the most recent.
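One conventional way to realize such a line-crossing test in world coordinates (an illustrative substitute for the inclusion-based formulation above, not the patent's exact procedure) is to project the successive first-type world coordinates onto the ground plane, ignoring the height of the line as a simplification, and check whether any step of the resulting trajectory intersects the warning-line segment:

def _cross(o, a, b):
    # z-component of the 2-D cross product of vectors OA and OB
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, q1, q2):
    # True if segment p1-p2 properly crosses segment q1-q2
    d1 = _cross(q1, q2, p1)
    d2 = _cross(q1, q2, p2)
    d3 = _cross(p1, p2, q1)
    d4 = _cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def crossed_warning_line(track_world_xy, line_p, line_q):
    # track_world_xy: the moving target's successive (X, Y) world positions,
    # ordered from the earliest acquisition time to the latest, as required above.
    # line_p, line_q: the two endpoints of the scene warning line in world (X, Y).
    for prev, cur in zip(track_world_xy, track_world_xy[1:]):
        if segments_intersect(prev, cur, line_p, line_q):
            return True
    return False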
And S205, triggering an alarm event aiming at the target scene.
In this embodiment, S205 is similar to S105 of the above embodiments, and is not described herein again.
Compared with the prior art, this scheme matches positional relationships based on coordinate information in the real-world three-dimensional coordinate system rather than on the 2D plane of the two-dimensional image coordinate system, which reduces false alarms and improves the triggering accuracy of alarm events.
The following describes an example of a setting process of a picture warning line corresponding to a scene warning line.
The setting process of the picture warning line corresponding to the scene warning line may include:
A1: determining the virtual projection, related to the scene warning line, that the user has set in a video monitoring picture of the target scene containing no moving target;
the scene warning line can be a line segment suspended in the air or a line segment located on the ground. Before drawing the scene fence, the user needs to draw a virtual projection related to the scene fence in the video monitoring picture by taking the ground as a predetermined reference plane, wherein the relative position relationship between the virtual projection related to the scene fence and other objects in the video monitoring picture is equivalent to: and the relative position relation between the real projection corresponding to the scene warning line and the corresponding other objects in the target scene. Specifically, as shown in fig. 7(a) -7 (c), the process of drawing the virtual projection associated with the scene alert line is performed, wherein the user can place a mouse in the video monitoring picture without a moving object by using the ground as a predetermined reference plane, the mouse is changed into a crosshair to indicate a starting point at which the setting of the virtual projection associated with the scene alert line can be started, the mouse is moved to a position at which the starting point of the virtual projection associated with the scene alert line is desired to be set, a left mouse button is clicked to complete the setting of the starting point, the mouse is moved, a connecting line of a dotted line type is formed between the starting point and a mouse pointer, the position at which the line segment of the virtual projection associated with the scene alert line is dynamically displayed, the mouse is moved to a position at which the ending point is desired to be set, the left mouse button is clicked to complete the setting of the ending point, so far, the user manually sets a virtual projection associated with the scene fence, as shown in fig. 7 (c).
It is emphasized that the video surveillance apparatus may have a virtual projection setting function so that a user may manually set a virtual projection associated with the scene fence on the video surveillance screen. In addition, the preset reference plane is selected from the ground instead of an object with an unfixed structural form, such as a wall surface or a desktop, so that the effectiveness of the set virtual projection is ensured.
B1: calculating first-class reference world coordinates corresponding to first-class reference image coordinates included in virtual projections related to the scene warning line based on the first conversion relation, wherein the first-class reference image coordinates uniquely determine the virtual projections related to the scene warning line;
after the virtual projection related to the scene fence is determined, in order to realize the drawing of the picture fence, the coordinate points on the virtual projection related to the scene fence can be converted into a world three-dimensional coordinate system, so that the coordinate information related to the real projection of the scene fence can be obtained. Specifically, based on the first conversion relationship, a first kind of reference world coordinates corresponding to first kind of reference image coordinates included in the virtual projection about the scene fence may be calculated, where the first kind of reference world coordinates may uniquely determine the real projection of the scene fence.
It should be noted that the first type of reference image coordinates included in the virtual projection associated with the scene fence line may at least include: two endpoints of the virtual projection related to the scene fence only need to ensure that the virtual projection related to the scene fence can be uniquely determined through the first-class reference image coordinates, and the specific number is not limited herein. In addition, it should be emphasized that the angle of the virtual projection related to the scene fence in the video surveillance picture relative to other objects and the angle in the horizontal direction of the ground are respectively equivalent to: the real projection of the scene warning line is relative to the angle of other corresponding objects in the target scene and the angle of the ground horizontal direction in the target scene.
C1: obtaining a distance value from the scene warning line to the real projection of the scene warning line;
The distance value may be input manually by the user. The scene warning line may be a line segment parallel or not parallel to the ground; specifically, when the scene warning line is parallel to the ground, there is a single distance value from the scene warning line to its real projection, i.e., all points on the scene warning line are at the same distance from their corresponding points on the real projection; when the scene warning line is not parallel to the ground, there may be multiple distance values, i.e., the distances from the points on the scene warning line to their corresponding projection points are not all the same.
D1: determining a second type of reference world coordinate corresponding to the first type of reference world coordinate based on the distance value, wherein the height difference between the second type of reference world coordinate and the first type of reference world coordinate is the distance value;
after the distance value and the first-class reference world coordinate corresponding to the real projection are determined, a second-class reference world coordinate corresponding to the scene warning line can be determined, wherein the second-class reference world coordinate corresponding to the first-class reference world coordinate can uniquely determine the scene warning line.
E1: calculating the second type reference image coordinates corresponding to the second type reference world coordinates based on the second conversion relation;
after the second-type reference world coordinates are determined, second-type reference image coordinates corresponding to the second-type reference world coordinates, that is, image coordinates of an identification position of a picture warning line under the three-dimensional coordinates of the image, may be calculated based on the second conversion relationship, where the second-type reference image coordinates may uniquely determine the picture warning line corresponding to the scene warning line. The first-type reference world coordinate, the second-type reference world coordinate, the first-type reference image coordinate and the second-type reference image coordinate are the same in number and have corresponding relations.
F1: and drawing a picture warning line corresponding to the scene warning line in the video monitoring picture without the moving target based on the second type of reference image coordinates.
After the second-type reference image coordinates are determined, the picture warning line corresponding to the scene warning line can be drawn in the video monitoring picture containing no moving target, which completes the setting process of the picture warning line; the result can be as shown in figs. 7(d) and 7(e).
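Reusing the illustrative image_to_world conversion sketched earlier (with the same assumed intrinsics, rotation convention and vertical world Z axis), the A1-F1 steps could look roughly as follows; world_to_image here is simply the assumed inverse of that sketch, standing in for the second conversion relationship:

def world_to_image(p_world, intrinsics, theta, psi, h_cam):
    # Assumed second conversion relationship: inverse of image_to_world above.
    fx, fy, cx, cy = intrinsics
    r_pitch = np.array([[1, 0, 0],
                        [0, np.cos(theta), -np.sin(theta)],
                        [0, np.sin(theta),  np.cos(theta)]])
    r_tilt = np.array([[np.cos(psi), -np.sin(psi), 0],
                       [np.sin(psi),  np.cos(psi), 0],
                       [0, 0, 1]])
    p_cam = (r_tilt @ r_pitch).T @ (np.asarray(p_world) - np.array([0.0, 0.0, h_cam]))
    Xc, Yc, Zc = p_cam
    return np.array([fx * Xc / Zc + cx, fy * Yc / Zc + cy, Zc])

def picture_warning_line(virtual_projection_img, height_m, intrinsics, theta, psi, h_cam):
    # virtual_projection_img: the first-type reference image coordinates, e.g. the
    # two endpoints of the virtual projection, each given as (x, y, z) with depth z.
    # height_m: the user-supplied distance value from the scene warning line to its
    # real projection (C1). Returns the second-type reference image coordinates
    # from which the picture warning line is drawn (E1/F1).
    drawn = []
    for (x, y, z) in virtual_projection_img:
        ground_world = image_to_world(x, y, z, intrinsics, theta, psi, h_cam)       # B1
        raised_world = ground_world + np.array([0.0, 0.0, height_m])                # D1
        drawn.append(world_to_image(raised_world, intrinsics, theta, psi, h_cam))   # E1
    return drawn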
It should be emphasized that the setting process of the picture warning line corresponding to the scene warning line given above in A1-F1 is only an example and should not be construed as limiting the embodiments of the present invention.
The following describes a video monitoring method provided by the embodiment of the present invention, taking a scene guard position as a scene guard surface as an example.
It should be noted that the execution body of the video monitoring method provided by the embodiment of the present invention may be a video monitoring apparatus. The video monitoring apparatus may run on a device connected to an image capture device, and the image capture device may have a moving-target detection function. For example, the image capture device may be a camera, and the connected device may be a hard-disk video recorder or a terminal, where the hard-disk video recorder or terminal obtains the video stream of the target scene from the image capture device in real time and outputs it; the video stream is composed of video frames. Of course, the video monitoring apparatus may also run in the image capture device itself, in which case the image capture device may have a moving-target detection function.
As shown in fig. 3, the video monitoring method provided in the embodiment of the present invention may include the following steps:
s301, when a moving target is detected in a video monitoring process aiming at a target scene, obtaining a target image coordinate of the moving target under an image three-dimensional coordinate system of a video frame where the moving target is located;
s302, calculating a first type of world coordinate corresponding to the target image coordinate based on a preset first conversion relation between the image three-dimensional coordinate system and a world three-dimensional coordinate system corresponding to the target scene;
in this embodiment, the contents of S301 to S302 are similar to those of S101 to S102 in the above embodiment, and are not described herein again.
And the first type world coordinate is the coordinate of the moving object in the world three-dimensional coordinate system.
S303, obtaining a second type world coordinate corresponding to a preset picture warning surface;
The second-type world coordinates are world coordinates corresponding to the image coordinates of the picture warning surface and are determined based on the first conversion relationship. The picture warning surface is an image position, set in advance in a video monitoring picture of the target scene containing no moving target, that corresponds to a scene warning surface in the target scene; the second-type world coordinates are thus the world coordinates of the scene warning surface in the world three-dimensional coordinate system.
Since the triggering accuracy of the alarm event is to be improved by examining the positional relationship between the moving target and the scene warning surface in the real target scene, the second-type world coordinates corresponding to the preset picture warning surface are obtained.
It can be understood that, because the preset picture warning surface is fixed, the second-type world coordinates corresponding to its image coordinates can be calculated in advance so that they can be used directly during video monitoring; it is equally reasonable to calculate them during the first round of video monitoring and reuse them directly afterwards. The image coordinates of the picture warning surface that are used need only uniquely determine the picture warning surface, so their number is not limited here; moreover, the second-type world coordinates correspond one-to-one with those image coordinates, so the number of second-type world coordinates is likewise not limited.
Specifically, the picture warning surface is an image position determined from second-type reference image coordinates, where the second-type reference image coordinates are image coordinates obtained by performing a pixel conversion operation on first-type reference image coordinates, and the first-type reference image coordinates are the image coordinates included in a virtual projection in the video monitoring picture containing no moving target. The pixel conversion operation uses the first conversion relationship, the height relationship between the scene warning surface and its real projection, and a second conversion relationship, where the second conversion relationship is the inverse of the first conversion relationship, the real projection is the projection of the scene warning surface onto a predetermined reference plane in the target scene, and the virtual projection is the position of that real projection in the video monitoring picture containing no moving target. It can be understood that the predetermined reference plane may be, but is not limited to, the ground.
For the sake of clear layout of the scheme, the following detailed description is made on the setting process of the picture alert surface corresponding to the scene alert surface.
S304, judging whether the change in the positional relationship between the first-type world coordinates and the second-type world coordinates satisfies: the range formed by the second-type world coordinates changes from not including the first-type world coordinates to including them; if so, executing S305;
After the first-type and second-type world coordinates are obtained, it can be judged whether the change in their positional relationship satisfies this condition, and different operations are performed according to the judgment result. Specifically, when the range formed by the second-type world coordinates changes from not including the first-type world coordinates to including them, the moving target in the real target scene has entered the scene warning surface, i.e., the change in the positional relationship conforms to the predetermined surface-entering alarm rule, and S305 may be executed; when this condition is not satisfied, the moving target has not entered the scene warning surface, i.e., the change does not conform to the predetermined surface-entering alarm rule, and no processing is needed.
For the case where the scene alert position is a scene warning surface, in another implementation of the embodiment of the present invention, judging whether the change in the positional relationship between the first-type world coordinates and the second-type world coordinates conforms to a predetermined alarm rule may include:
judging whether the change in the positional relationship satisfies: the range formed by the second-type world coordinates changes from not including the first-type world coordinates, to including them, to not including them again; if so, the change conforms to the predetermined surface-crossing alarm rule and S305 may be executed; otherwise, no processing is performed;
alternatively,
judging whether the change in the positional relationship satisfies: the range formed by the second-type world coordinates changes from including the first-type world coordinates to not including them; if so, the change conforms to the predetermined surface-leaving alarm rule and S305 may be executed; otherwise, no processing is performed;
alternatively,
judging whether the change in the positional relationship satisfies: the range formed by the second-type world coordinates includes the first-type world coordinates; if so, the change conforms to the predetermined surface-intrusion alarm rule and S305 may be executed; otherwise, no processing is performed.
It is emphasized that, to ensure the validity of the judgment result, whether the change in the positional relationship between the first-type and second-type world coordinates conforms to the predetermined surface-entering, surface-crossing, surface-leaving or surface-intrusion alarm rule should be judged with the first-type world coordinates ordered by acquisition time, from the earliest to the most recent.
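As with the warning line, one conventional way to realize these inclusion tests (an illustrative substitute, not the patent's exact formulation) is to treat the second-type world coordinates of the warning surface as a polygon on the ground plane and test the successive target positions against it, ordered by acquisition time:

def point_in_polygon(pt, polygon):
    # Ray-casting test: is the world (X, Y) point inside the polygon formed by
    # the second-type world coordinates of the scene warning surface?
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def surface_rule_matched(track_world_xy, polygon, rule):
    # track_world_xy: target positions ordered from earliest to latest acquisition
    # time. rule: "enter", "leave", "cross" or "intrude".
    flags = [point_in_polygon(p, polygon) for p in track_world_xy]
    if rule == "enter":                      # outside -> inside
        return any(not a and b for a, b in zip(flags, flags[1:]))
    if rule == "leave":                      # inside -> outside
        return any(a and not b for a, b in zip(flags, flags[1:]))
    if rule == "cross":                      # outside -> inside -> outside
        entered = False
        for prev, cur in zip(flags, flags[1:]):
            if not prev and cur:
                entered = True
            elif entered and prev and not cur:
                return True
        return False
    if rule == "intrude":                    # currently inside
        return any(flags)
    raise ValueError("unknown rule: " + rule)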
S305, triggering an alarm event aiming at the target scene.
In this embodiment, S305 is similar to S105 of the above embodiments, and is not described herein again.
Compared with the prior art, in this scheme the position relation matching is carried out based on coordinate information in the real-world three-dimensional coordinate system rather than on a 2D plane in the image two-dimensional coordinate system, so that false alarms are reduced and the triggering accuracy of the alarm event is improved.
The following describes an example of the setting process of a picture warning surface corresponding to a scene warning surface.
Specifically, in an embodiment of the present invention, the scene warning surface is a scene warning surface parallel to the ground, and the setting process of the picture warning surface corresponding to the scene warning surface is as follows:
A2: determining a virtual projection, related to the scene warning surface, which is set by a user in a video monitoring picture of the target scene in which no moving target exists;
The scene warning surface may be suspended in the air or located on the ground. Before drawing the picture warning surface, the user needs to draw, in the video monitoring picture and with the ground as the predetermined reference plane, a virtual projection related to the scene warning surface, where the relative position relationship between this virtual projection and other objects in the video monitoring picture is equivalent to the relative position relationship between the real projection of the scene warning surface in the target scene and the corresponding other objects. Specifically, figs. 8(a)-8(f) show the process of drawing the virtual projection related to the scene warning surface: the user places the mouse in the video frame picture, with the ground as the predetermined reference plane, in the video monitoring picture without a moving target, and the cursor changes to a crosshair, indicating that the starting endpoint of the virtual projection can be set; the user moves the mouse to the position where the starting endpoint is desired and clicks the left mouse button, completing the setting of the starting endpoint; as the mouse moves, a connecting line (a dotted line) appears between the starting endpoint and the mouse pointer, dynamically showing the position of the corresponding edge of the virtual projection; the user moves the mouse to the second endpoint of the virtual projection and clicks the left mouse button to finish drawing the second endpoint; the remaining endpoints are drawn in the same way, and when the last endpoint coincides with the starting endpoint, a planar closed loop is formed and the drawing of the virtual projection related to the scene warning surface is completed.
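As a rough illustration of this interaction (not the patent's actual tool), the following sketch collects the endpoints of a virtual projection through mouse clicks using OpenCV; the dynamic dotted preview line is omitted, and the window name, radius and all other names are assumptions for the example.

```python
import cv2
import numpy as np

# Assumed interactive sketch: collect the endpoints of the virtual projection by
# left-clicking in a video frame without a moving target; clicking close to the
# starting endpoint closes the planar loop.
def draw_virtual_projection(frame, window="set virtual projection", close_radius=8):
    points = []

    def on_mouse(event, x, y, flags, param):
        if event == cv2.EVENT_LBUTTONDOWN:
            points.append((x, y))

    cv2.namedWindow(window)
    cv2.setMouseCallback(window, on_mouse)
    while True:
        canvas = frame.copy()
        if len(points) >= 2:
            pts = np.array(points, dtype=np.int32)
            cv2.polylines(canvas, [pts], False, (0, 255, 255), 2)
        cv2.imshow(window, canvas)
        if cv2.waitKey(30) == 27:          # Esc aborts the drawing
            break
        if len(points) > 2 and \
                abs(points[-1][0] - points[0][0]) <= close_radius and \
                abs(points[-1][1] - points[0][1]) <= close_radius:
            points[-1] = points[0]         # snap the last endpoint onto the first
            break
    cv2.destroyWindow(window)
    return points                          # first-type reference image coordinates
```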
It is emphasized that the video monitoring apparatus may have a virtual projection setting function, so that the user can manually set the virtual projection related to the scene warning surface in the video monitoring picture. In addition, the ground is chosen as the predetermined reference plane, rather than an object whose structural form is not fixed, such as a wall surface or a desktop, which ensures the validity of the set virtual projection.
B2: calculating first-class reference world coordinates corresponding to first-class reference image coordinates included in the virtual projection related to the scene warning surface based on the first conversion relation, wherein the first-class reference image coordinates uniquely determine the virtual projection related to the scene warning surface;
after the virtual projection related to the scene warning surface is determined, in order to realize the drawing of the picture warning surface, a coordinate point on the virtual projection related to the scene warning surface can be converted into a world three-dimensional coordinate system, so that coordinate information related to the real projection of the scene warning surface is obtained. Specifically, based on the first conversion relationship, a first kind of reference world coordinates corresponding to first kind of reference image coordinates included in the virtual projection on the scene-alert surface may be calculated, where the first kind of reference world coordinates can uniquely determine the real projection of the scene-alert surface.
It should be noted that the number of the first type reference image coordinates is not limited herein, as long as it is ensured that the virtual projection associated with the scene surveillance plane can be uniquely determined by the first type reference image coordinates. In addition, it should be emphasized that the angle of the virtual projection associated with the scene-surveillance surface with respect to other objects in the video surveillance picture and the angle in the horizontal direction of the ground are respectively equivalent to: the angle of the real projection of the scene warning surface in the target scene relative to other corresponding objects and the angle in the horizontal direction of the ground.
It will be appreciated that the scene-warning surface may be circular or square, etc., as is reasonable.
C2: obtaining a distance value from the scene warning surface to the real projection of the scene warning surface;
The distance value may be manually input by the user. The scene warning surface may be parallel to the ground or not parallel to the ground; specifically, when the scene warning surface is parallel to the ground, there is a single distance value from the scene warning surface to its real projection, that is, the distances from all points on the scene warning surface to the corresponding points of the real projection are the same; when the scene warning surface is not parallel to the ground, there are multiple distance values, that is, the distances from the points on the scene warning surface to the corresponding points of the real projection are not all the same.
D2: determining a second type of reference world coordinate corresponding to the first type of reference world coordinate based on the distance value, wherein the height difference between the second type of reference world coordinate and the first type of reference world coordinate is the distance value;
after the distance value and the first-type reference world coordinates corresponding to the real projection are determined, the second-type reference world coordinates corresponding to the scene warning surface can be determined, where the second-type reference world coordinates can uniquely determine the scene warning surface.
E2: calculating the second type reference image coordinates corresponding to the second type reference world coordinates based on the second conversion relation;
after the second-type reference world coordinates are determined, the second-type reference image coordinates corresponding to them, that is, the image coordinates of the position where the picture warning surface is identified in the image three-dimensional coordinate system, may be calculated based on the second conversion relation. The second-type reference image coordinates can uniquely determine the picture warning surface corresponding to the scene warning surface. The first-type reference world coordinates, the second-type reference world coordinates, the first-type reference image coordinates and the second-type reference image coordinates are the same in number and correspond to one another.
F2: and drawing a picture warning surface corresponding to the scene warning surface in the video monitoring picture without the moving target based on the second type of reference image coordinates.
After the second-type reference image coordinates are determined, the picture warning surface corresponding to the scene warning surface is drawn in the video monitoring picture without the moving target, so that the setting process of the picture warning surface is completed; the setting result can be as shown in figs. 8(g) and 8(h).
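Steps B2-F2 can be summarized by the following minimal sketch, assuming hypothetical helpers image_to_world() and world_to_image() that implement the first and second conversion relations; it is only an outline of the pipeline, not the patent's implementation, and all names are illustrative.

```python
# Minimal sketch of B2-F2, assuming image_to_world() and world_to_image()
# implement the first and second conversion relations between the image and
# world three-dimensional coordinate systems.
def build_picture_alert_surface(projection_image_coords, height,
                                image_to_world, world_to_image):
    # B2: first-type reference image coords -> first-type reference world coords
    ref_world = [image_to_world(p) for p in projection_image_coords]
    # C2/D2: lift every point of the real projection by the user-supplied distance
    lifted_world = [(X, Y, Z + height) for (X, Y, Z) in ref_world]
    # E2: second-type reference world coords -> second-type reference image coords
    picture_coords = [world_to_image(w) for w in lifted_world]
    # F2: these image coordinates are drawn as the picture warning surface
    return picture_coords
```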
Specifically, in another embodiment of the present invention, the scene alert surface is a scene alert surface perpendicular to the ground, and the setting process of the picture alert surface corresponding to the scene alert surface may include:
A3: determining a virtual projection, related to the scene warning surface, which is set by a user in a video monitoring picture of the target scene in which no moving target exists;
The scene warning surface may be suspended in the air, or one of its sides may touch the ground. Before drawing the picture warning surface, the user needs to draw, in the video monitoring picture and with the ground as the predetermined reference plane, a virtual projection related to the scene warning surface, where the relative position relationship between this virtual projection and other objects in the video monitoring picture is equivalent to the relative position relationship between the real projection of the scene warning surface in the target scene and the corresponding other objects. Specifically, figs. 9(a)-9(c) show the process of drawing the virtual projection related to the scene warning surface: the user places the mouse in the video frame picture, with the ground as the predetermined reference plane, in the video monitoring picture without the moving target, and the cursor changes to a crosshair, indicating that the setting of the starting endpoint of the virtual projection can begin; the user moves the mouse to the position where the starting endpoint is desired and clicks the left mouse button, completing the setting of the starting endpoint; as the mouse moves, a connecting line (a dotted line) appears between the starting endpoint and the mouse pointer, dynamically showing the position of the virtual projection line segment; the user moves the mouse to the second endpoint of the virtual projection and clicks the left mouse button to finish drawing the second endpoint, which completes the drawing of the virtual projection related to the scene warning surface perpendicular to the ground.
It is emphasized that the video monitoring apparatus may have a virtual projection setting function, so that the user can manually set the virtual projection related to the scene warning surface in the video monitoring picture. In addition, the ground is chosen as the predetermined reference plane, rather than an object whose structural form is not fixed, such as a wall surface or a desktop, which ensures the validity of the set virtual projection.
B3: calculating first-class reference world coordinates corresponding to first-class reference image coordinates included in the virtual projection related to the scene alert surface based on the first conversion relation, wherein the first-class reference image coordinates uniquely determine the virtual projection related to the scene alert surface;
after the virtual projection related to the scene warning surface is determined, in order to realize the drawing of the picture warning surface, a coordinate point on the virtual projection related to the scene warning surface can be converted into a world three-dimensional coordinate system, so that coordinate information related to the real projection of the scene warning surface is obtained. Specifically, based on the first conversion relationship, a first kind of reference world coordinates corresponding to first kind of reference image coordinates included in the virtual projection on the scene-alert surface may be calculated, where the first kind of reference world coordinates can uniquely determine the real projection of the scene-alert surface.
It should be noted that the number of the first type reference image coordinates is not limited herein, as long as it is ensured that the virtual projection associated with the scene surveillance plane can be uniquely determined by the first type reference image coordinates. In addition, it should be emphasized that the angle of the virtual projection associated with the scene-surveillance surface with respect to other objects in the video surveillance picture and the angle in the horizontal direction of the ground are respectively equivalent to: the angle of the real projection of the scene warning surface in the target scene relative to other corresponding objects and the angle in the horizontal direction of the ground.
Both the left side and the right side of the scene warning surface perpendicular to the ground are perpendicular to the ground.
C3: obtaining a first-type distance value from the upper side of the scene warning surface to the real projection of the scene warning surface, and a second-type distance value from the lower side of the scene warning surface to the real projection of the scene warning surface;
The first-type distance value and the second-type distance value may be manually input by the user. Specifically, when the upper side or the lower side of the scene warning surface is parallel to the ground, the corresponding distance value is a single value, that is, the distances from that side to the corresponding points of the real projection are all the same; when the upper side or the lower side of the scene warning surface is not parallel to the ground, there are multiple corresponding distance values, that is, the distances from that side to the corresponding points of the real projection are not all the same.
D3: determining a second type of reference world coordinate corresponding to the first type of reference world coordinate and corresponding to the upper side and a second type of reference world coordinate corresponding to the lower side, wherein the height difference between the second type of reference world coordinate corresponding to the upper side and the first type of reference world coordinate is the first type distance value, and the height difference between the second type of reference world coordinate corresponding to the lower side and the first type of reference world coordinate is the second type distance value;
after the first-type distance value, the second-type distance value and the first-type reference world coordinates corresponding to the real projection are determined, the second-type reference world coordinates corresponding to the upper side and the second-type reference world coordinates corresponding to the lower side, both corresponding to the first-type reference world coordinates, can be determined; together, the second-type reference world coordinates corresponding to the upper side and those corresponding to the lower side uniquely determine the scene warning surface perpendicular to the ground.
E3: calculating second type reference image coordinates corresponding to the obtained second type reference world coordinates based on the second conversion relation;
after the second-type reference world coordinates are determined, the second-type reference image coordinates corresponding to them, that is, the image coordinates of the position where the picture warning surface is identified in the image three-dimensional coordinate system, may be calculated based on the second conversion relation. The second-type reference image coordinates can uniquely determine the picture warning surface corresponding to the scene warning surface. The first-type reference image coordinates and the first-type reference world coordinates are the same in number and correspond to one another, and the second-type reference world coordinates and the second-type reference image coordinates are the same in number and correspond to one another.
F3: drawing the picture warning surface corresponding to the scene warning surface in the video monitoring picture without the moving target based on the second-type reference image coordinates. After the second-type reference image coordinates are determined, the picture warning surface corresponding to the scene warning surface is drawn in the video monitoring picture without the moving target, so that the setting process of the picture warning surface is completed; the setting result can be as shown in figs. 9(d) and 9(e).
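For the perpendicular case A3-F3, a corresponding sketch under the same assumptions (illustrative image_to_world() and world_to_image() helpers, heights supplied by the user) lifts the two endpoints of the drawn ground segment to the lower-side and upper-side heights to obtain the four corners of the warning surface.

```python
# Sketch of A3-F3 for a surface perpendicular to the ground, using the same
# assumed image_to_world()/world_to_image() helpers as the previous sketch.
def build_vertical_alert_surface(segment_image_coords, upper_height, lower_height,
                                 image_to_world, world_to_image):
    p0, p1 = [image_to_world(p) for p in segment_image_coords]  # B3
    corners_world = [                                           # C3/D3
        (p0[0], p0[1], p0[2] + lower_height),
        (p1[0], p1[1], p1[2] + lower_height),
        (p1[0], p1[1], p1[2] + upper_height),
        (p0[0], p0[1], p0[2] + upper_height),
    ]
    return [world_to_image(w) for w in corners_world]           # E3/F3
```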
It should be emphasized that the setting processes of the picture warning surfaces corresponding to the scene warning surfaces given in A2-F2 and A3-F3 are only examples and should not be construed as limiting the embodiments of the present invention.
The following describes the video monitoring method provided by the embodiment of the present invention by taking the case where the scene warning position is a scene warning body as an example.
It should be noted that the execution main body of the video monitoring method provided by the embodiment of the present invention may be a video monitoring apparatus, and the video monitoring apparatus may run in a device connected to an image acquisition device, where the image acquisition device may have a moving target detection function. For example, the image acquisition device may be a camera, and the device connected to the image acquisition device may be a digital hard disk video recorder or a terminal, where the digital hard disk video recorder or the terminal can obtain a video stream of the target scene from the image acquisition device in real time and output the obtained video stream, the video stream being composed of video frames. Of course, the video monitoring apparatus may also run in the image acquisition device itself, where the image acquisition device may have a moving target detection function.
As shown in fig. 4, the video monitoring method provided in the embodiment of the present invention may include the following steps:
S401, when a moving target is detected in a video monitoring process aiming at a target scene, obtaining a target image coordinate of the moving target under an image three-dimensional coordinate system of a video frame where the moving target is located;
S402, calculating a first type of world coordinate corresponding to the target image coordinate based on a preset first conversion relation between the image three-dimensional coordinate system and a world three-dimensional coordinate system corresponding to the target scene;
and the first type world coordinate is the coordinate of the moving object in the world three-dimensional coordinate system.
In this embodiment, the contents of S401 to S402 are similar to those of S101 to S102 in the above embodiment, and are not described herein again.
S403, obtaining second world coordinates corresponding to a preset picture warning body;
The second-type world coordinates are world coordinates corresponding to the image coordinates of the picture warning body, the second-type world coordinates are determined based on the first conversion relation, and the picture warning body is: an image position which is set in advance in a video monitoring picture of the target scene without a moving target and corresponds to a scene warning body in the target scene; the second-type world coordinates are world coordinates of the scene warning body in the world three-dimensional coordinate system.
The triggering accuracy of the alarm event needs to be improved by detecting the position relation of the moving target and the scene alert body in the real target scene, so that the second-class world coordinates corresponding to the image coordinates of the preset picture alert body can be obtained.
It can be understood that, because the preset picture warning body is determined, the second-type world coordinates corresponding to the image coordinates of the preset picture warning body can be obtained by pre-calculation so as to be directly used in the video monitoring process; of course, it is also reasonable to calculate the second-type world coordinates corresponding to the image coordinates of the preset picture warning body in the first video monitoring process and use them directly in subsequent video monitoring processes. The image coordinates of the picture warning body to be used need only uniquely determine the picture warning body, so the number of such image coordinates is not limited here; in addition, the second-type world coordinates correspond one-to-one to the image coordinates of the picture warning body to be used, so the number of the second-type world coordinates is not limited here either.
Specifically, the image alert body is specifically: an image position determined based on second-type reference image coordinates, the second-type reference image coordinates being: executing pixel conversion operation on first type reference image coordinates to obtain image coordinates, wherein the first type reference image coordinates are image coordinates included in virtual projection in a video monitoring picture without a moving target; wherein the pixel conversion operation is a conversion operation utilizing the first conversion relationship, the height relationship between the scene alert object and the real projection, and the second conversion relationship; the second conversion relation is the inverse conversion relation to the first conversion relation, the real projection is the projection of the scene alert body on the predetermined reference plane in the target scene, and the virtual projection is the position of the real projection in the video monitoring picture without the moving target. It will be appreciated that the predetermined reference plane may be the ground, although it is not limited thereto.
For clarity of the scheme, the setting process of the picture warning body corresponding to the scene warning body is described in detail later.
S404, judging whether the change in the position relationship between the first-type world coordinates and the second-type world coordinates satisfies: from the range formed by the second-type world coordinates including the first-type world coordinates to not including the first-type world coordinates; if so, executing S405;
after the first-type world coordinates and the second-type world coordinates are obtained, it can be judged whether the change in the position relationship between them satisfies: from the range formed by the second-type world coordinates including the first-type world coordinates to not including the first-type world coordinates, and different operations are performed according to the judgment result. Specifically, when it is judged that the change in the position relationship satisfies this condition, the moving target in the real target scene has left the scene warning body, that is, the change in the position relationship conforms to the predetermined leaving-warning-body alarm rule, and S405 may be executed at this time; when it is judged that the change in the position relationship does not satisfy this condition, the moving target in the real target scene has not left the scene warning body, that is, the change in the position relationship does not conform to the predetermined leaving-warning-body alarm rule, and no processing is needed at this time.
For the case that the scene-alert position is the scene-alert object, in another implementation manner of the embodiment of the present invention, the determining whether the change in the position relationship between the first type world coordinate and the second type world coordinate meets a predetermined alarm rule may include:
judging whether the change in the position relationship between the first-type world coordinates and the second-type world coordinates satisfies: from the range formed by the second-type world coordinates not including the first-type world coordinates, to including the first-type world coordinates, and then to not including the first-type world coordinates again; if so, the change in the position relationship conforms to the predetermined crossing-warning-body alarm rule, and S405 may be executed; otherwise, no processing is performed;
alternatively,
judging whether the change in the position relationship between the first-type world coordinates and the second-type world coordinates satisfies: from the range formed by the second-type world coordinates not including the first-type world coordinates to including the first-type world coordinates; if so, the change in the position relationship conforms to the predetermined entering-warning-body alarm rule, and S405 may be executed; otherwise, no processing is performed;
alternatively,
judging whether the change in the position relationship between the first-type world coordinates and the second-type world coordinates satisfies: the range formed by the second-type world coordinates includes the first-type world coordinates; if so, the change in the position relationship conforms to the predetermined intrusion-warning-body alarm rule, and S405 may be executed; otherwise, no processing is performed.
It is emphasized that, to ensure the validity of the judgment result, whether the change in the position relationship between the first-type world coordinates and the second-type world coordinates conforms to the predetermined entering-warning-body alarm rule, crossing-warning-body alarm rule, leaving-warning-body alarm rule or intrusion-warning-body alarm rule can be judged with the first-type world coordinates ordered by acquisition time, from the earliest to the most recent.
S405, triggering an alarm event aiming at the target scene.
In this embodiment, S405 is similar to S105 of the above embodiment, and is not described herein again.
Compared with the prior art, in this scheme the position relation matching is carried out based on coordinate information in the real-world three-dimensional coordinate system rather than on a 2D plane in the image two-dimensional coordinate system, so that false alarms are reduced and the triggering accuracy of the alarm event is improved.
The following describes an example of the setting process of a picture warning body corresponding to a scene warning body. Specifically, the setting process of the picture warning body corresponding to the scene warning body is as follows:
A4: determining a virtual projection, related to the scene warning body, which is set by a user in a video monitoring picture of the target scene in which no moving target exists;
The scene warning body may be suspended in the air or located on the ground. Before drawing the picture warning body, the user needs to draw, in the video monitoring picture and with the ground as the predetermined reference plane, a virtual projection related to the scene warning body, where the relative position relationship between this virtual projection and other objects in the video monitoring picture is equivalent to the relative position relationship between the real projection of the scene warning body in the target scene and the corresponding other objects. Specifically, figs. 10(a)-10(f) show the process of drawing the virtual projection related to the scene warning body: the user places the mouse in the video frame picture, with the ground as the predetermined reference plane, in the video monitoring picture without the moving target, and the cursor changes to a crosshair, indicating that the starting endpoint of the virtual projection can be set; the user moves the mouse to the position where the starting endpoint is desired and clicks the left mouse button, completing the setting of the starting endpoint; as the mouse moves, a connecting line (a dotted line) appears between the starting endpoint and the mouse pointer, dynamically showing the position of the corresponding edge of the virtual projection; the user moves the mouse to the second endpoint of the virtual projection and clicks the left mouse button to finish drawing the second endpoint; the remaining endpoints are drawn in the same way, and when the last endpoint coincides with the starting endpoint, a planar closed loop is formed and the drawing of the virtual projection related to the scene warning body is completed.
It is emphasized that the video monitoring apparatus may have a virtual projection setting function, so that the user can manually set the virtual projection related to the scene warning body in the video monitoring picture. In addition, the ground is chosen as the predetermined reference plane, rather than an object whose structural form is not fixed, such as a wall surface or a desktop, which ensures the validity of the set virtual projection.
B4: calculating first-class reference world coordinates corresponding to first-class reference image coordinates included in the virtual projection related to the scene alert body based on the first conversion relation, wherein the first-class reference image coordinates uniquely determine the virtual projection related to the scene alert body;
after the virtual projection related to the scene alert body is determined, in order to realize the drawing of the picture alert body, a coordinate point on the virtual projection related to the scene alert body can be converted into a world three-dimensional coordinate system, so that coordinate information related to the real projection of the scene alert body can be obtained. Specifically, based on the first conversion relationship, a first kind of reference world coordinates corresponding to first kind of reference image coordinates included in the virtual projection of the scene alert body may be calculated, where the first kind of reference world coordinates can uniquely determine the real projection of the scene alert body.
It should be noted that the number of the first type reference image coordinates is not limited herein, as long as it is ensured that the virtual projection associated with the scene-alert object can be uniquely determined by the first type reference image coordinates. In addition, it should be emphasized that the angle of the virtual projection associated with the scene-alert object in the video surveillance picture with respect to other objects and the angle in the horizontal direction on the ground are respectively equivalent to: the angle of the real projection of the scene warning body in the target scene relative to other corresponding objects and the angle in the horizontal direction of the ground. Wherein, the scene warning body can be a cube or a cylinder, and the upper surface and the lower surface of the cube or the cylinder are parallel, and the other surfaces except the upper surface and the lower surface are vertical to the ground.
C4: obtaining a first distance value from the upper surface of the scene warning body to the real projection of the scene warning body and a second distance value from the lower surface of the scene warning body to the real projection of the scene warning body;
The first-type distance value and the second-type distance value may be manually input by the user. The upper surface and the lower surface of the scene warning body may be parallel or not parallel to the ground; specifically, when the upper surface or the lower surface of the scene warning body is parallel to the ground, the corresponding distance value is a single value, that is, the distances from all points on that surface to the corresponding points of the real projection are the same; when the upper surface or the lower surface of the scene warning body is not parallel to the ground, there are multiple corresponding distance values, that is, the distances from the points on that surface to the corresponding points of the real projection are not all the same.
D4: determining second-type reference world coordinates corresponding to the upper surface and second-type reference world coordinates corresponding to the lower surface, both corresponding to the first-type reference world coordinates, where the height difference between the second-type reference world coordinates corresponding to the upper surface and the first-type reference world coordinates is the corresponding first-type distance value, and the height difference between the second-type reference world coordinates corresponding to the lower surface and the first-type reference world coordinates is the corresponding second-type distance value;
after the first-type distance value, the second-type distance value and the first-type reference world coordinates corresponding to the real projection are determined, the second-type reference world coordinates corresponding to the upper surface and the second-type reference world coordinates corresponding to the lower surface, both corresponding to the first-type reference world coordinates, can be determined; together, the second-type reference world coordinates corresponding to the upper surface and those corresponding to the lower surface uniquely determine the scene warning body.
E4: calculating second type reference image coordinates corresponding to the obtained second type reference world coordinates based on the second conversion relation;
after the second-type reference world coordinates are determined, the second-type reference image coordinates corresponding to them, that is, the image coordinates of the position where the picture warning body is identified in the image three-dimensional coordinate system, may be calculated based on the second conversion relation. The second-type reference image coordinates can uniquely determine the picture warning body corresponding to the scene warning body. The first-type reference image coordinates and the first-type reference world coordinates are the same in number, and the second-type reference world coordinates and the second-type reference image coordinates are the same in number and correspond to one another.
F4: and drawing the picture warning body corresponding to the scene warning body in the video monitoring picture without the moving target based on the second type of reference image coordinates.
After the second-type reference image coordinates are determined, the picture warning body corresponding to the scene warning body is drawn in the video monitoring picture without the moving target, so that the setting process of the picture warning body is completed; the setting result can be as shown in figs. 10(g) and 10(h).
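As an illustration of how the resulting scene warning body can be used at detection time, the following sketch treats the body as a prism, that is, the drawn footprint polygon extruded between the lower-surface and upper-surface heights, and reuses the illustrative point_in_polygon() helper from the earlier sketch; the prism model and all names are assumptions for the example, not the patent's definition.

```python
# Assumed prism model for the scene warning body: the footprint polygon
# (in world X-Y) extruded between the lower-surface and upper-surface heights.
def inside_alert_body(world_point, footprint_xy, lower_height, upper_height):
    X, Y, Z = world_point
    return lower_height <= Z <= upper_height and point_in_polygon((X, Y), footprint_xy)
```

The same enter, leave, cross and intrude transition checks shown earlier then apply, with inside_alert_body() used in place of the plain polygon test.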
It should be emphasized that the setting process of the picture warning body corresponding to the scene warning body given above in A4-F4 is only an example and should not be construed as limiting the embodiments of the present invention.
Corresponding to the method embodiment, the embodiment of the invention provides a video monitoring device. As shown in fig. 5, the apparatus may include:
a target image coordinate determining module 510, configured to, when a moving target is detected in a video monitoring process for a target scene, obtain target image coordinates of the moving target in an image three-dimensional coordinate system of a video frame where the moving target is located;
a first-class world coordinate determining module 520, configured to calculate a first-class world coordinate corresponding to the target image coordinate based on a preset first conversion relationship between the image three-dimensional coordinate system and a world three-dimensional coordinate system corresponding to the target scene, where the first-class world coordinate is a coordinate of the moving target in the world three-dimensional coordinate system;
a second-type world coordinate determining module 530, configured to obtain a second-type world coordinate corresponding to a preset frame alert position, where the second-type world coordinate is a world coordinate corresponding to an image coordinate of the frame alert position, the second-type world coordinate is determined based on the first conversion relationship, and the frame alert position is: an image position which is set in a video monitoring picture of the target scene without a moving target and corresponds to a scene warning position in the target scene in advance, wherein the second type of world coordinates are world coordinates of the scene warning position in the world three-dimensional coordinate system;
a position relation determining module 540, configured to determine whether a change in the position relation between the first-type world coordinates and the second-type world coordinates conforms to a predetermined alarm rule, and if so, trigger the alarm event generating module;
the alarm event generating module 550 is configured to trigger an alarm event for the target scenario.
Compared with the prior art, in this scheme the position relation matching is carried out based on coordinate information in the real-world three-dimensional coordinate system rather than on a 2D plane in the image two-dimensional coordinate system, so that false alarms are reduced and the triggering accuracy of the alarm event is improved.
Specifically, the image warning position specifically includes: an image position determined based on second-type reference image coordinates, the second-type reference image coordinates being: executing pixel conversion operation on first type reference image coordinates to obtain image coordinates, wherein the first type reference image coordinates are image coordinates included in virtual projection in the video monitoring picture without the moving object;
wherein the pixel conversion operation is a conversion operation utilizing the first conversion relationship, the height relationship between the scene surveillance position and the real projection, and a second conversion relationship;
the second conversion relation is a conversion relation inverse to the first conversion relation, the real projection is a projection of the scene alert position on a predetermined reference plane in the target scene, and the virtual projection is a position of the real projection in the video monitoring picture without the moving target.
Specifically, in an implementation manner of the present invention, the scene alert position is: a scene warning line;
the position relation determining module 540 may include:
a first position relation determination unit, configured to determine whether a change in the position relation between the first-type world coordinates and the second-type world coordinates satisfies: from the range formed by the second-type world coordinates not including the first-type world coordinates, to including the first-type world coordinates, and then to not including the first-type world coordinates again; if so, the change in the position relation between the first-type world coordinates and the second-type world coordinates conforms to a predetermined crossing-warning-line alarm rule.
Specifically, in an implementation manner of the present invention, the scene alert position is: a scene warning surface;
the position relation determining module 540 may include:
a second position relation determination unit, configured to determine whether a change in the position relation between the first-type world coordinates and the second-type world coordinates satisfies: from the range formed by the second-type world coordinates not including the first-type world coordinates, to including the first-type world coordinates, and then to not including the first-type world coordinates again; if so, the change in the position relation between the first-type world coordinates and the second-type world coordinates conforms to a predetermined crossing-warning-surface alarm rule;
alternatively,
a third position relation determining unit, configured to determine whether a change in a position relation between the first type world coordinate and the second type world coordinate satisfies: from the range formed by the second type of world coordinates not including the first type of world coordinates to including the first type of world coordinates, if so, the change of the position relation between the first type of world coordinates and the second type of world coordinates is shown to accord with the preset alarm entering surface alarm rule;
alternatively,
a fourth position relation determination unit, configured to determine whether a change in a position relation between the first type world coordinate and the second type world coordinate satisfies: from the range formed by the second type of world coordinates including the first type of world coordinates to the range not including the first type of world coordinates, if so, the change of the position relation between the first type of world coordinates and the second type of world coordinates is shown to accord with a preset alarm surface departure rule;
alternatively,
a fifth position relation determination unit, configured to determine whether a change in a position relation between the first type world coordinate and the second type world coordinate satisfies: and the range formed by the second type of world coordinates comprises the first type of world coordinates, and if so, the change of the position relationship between the first type of world coordinates and the second type of world coordinates is shown to accord with a preset intrusion alert surface alarm rule.
Specifically, in an implementation manner, the scene alert position is: a scene warning body;
the position relation determining module 540 may include:
a sixth position relation determination unit, configured to determine whether a change in the position relation between the first-type world coordinates and the second-type world coordinates satisfies: from the range formed by the second-type world coordinates not including the first-type world coordinates, to including the first-type world coordinates, and then to not including the first-type world coordinates again; if so, the change in the position relation between the first-type world coordinates and the second-type world coordinates conforms to a predetermined crossing-warning-body alarm rule;
alternatively,
a seventh positional relationship determination unit, configured to determine whether a change in the positional relationship between the first-type world coordinate and the second-type world coordinate satisfies: from the range formed by the second type of world coordinates not including the first type of world coordinates to including the first type of world coordinates, if so, the change of the position relation between the first type of world coordinates and the second type of world coordinates is shown to accord with the preset alarm entering rule of the alarm body;
alternatively,
an eighth location relation determining unit, configured to determine whether a change in the location relation between the first-class world coordinates and the second-class world coordinates satisfies: from the range formed by the second type of world coordinates including the first type of world coordinates to the range not including the first type of world coordinates, if so, the change of the position relation between the first type of world coordinates and the second type of world coordinates is shown to accord with the preset alarm rule of leaving the alarm body;
alternatively,
a ninth positional relationship determination unit, configured to determine whether a change in the positional relationship between the first type world coordinate and the second type world coordinate satisfies: and the range formed by the second type of world coordinates comprises the first type of world coordinates, and if so, the change of the position relationship between the first type of world coordinates and the second type of world coordinates is shown to accord with a preset alarm rule of the intrusion alert body.
Specifically, the picture warning lines corresponding to the scene warning lines are set by a picture warning line setting module;
the screen warning line setting module may include:
the first virtual projection determining unit is used for determining virtual projections which are set in a video monitoring picture aiming at a target scene and do not have a moving target and are related to a scene warning line by a user;
a first reference world coordinate determination unit, configured to calculate, based on the first conversion relation, first-type reference world coordinates corresponding to first-type reference image coordinates included in the virtual projection related to the scene warning line, where the first-type reference image coordinates uniquely determine the virtual projection related to the scene warning line;
the first distance value determining unit is used for obtaining a distance value from the scene warning line to the real projection of the scene warning line;
a second reference world coordinate determination unit, configured to determine, based on the distance value, a second type of reference world coordinate corresponding to the first type of reference world coordinate, where a height difference between the second type of reference world coordinate and the first type of reference world coordinate is the distance value;
the first reference image coordinate determining unit is used for calculating second type reference image coordinates corresponding to the second type reference world coordinates based on the second conversion relation;
and the picture warning line drawing unit is used for drawing the picture warning line corresponding to the scene warning line in the video monitoring picture without the moving target based on the second type of reference image coordinates.
Specifically, the scene warning surface is a scene warning surface parallel to the ground, and the picture warning surface corresponding to the scene warning surface is set by the first picture warning surface setting module;
the first picture warning surface setting module may include:
a second virtual projection determination unit, configured to determine a virtual projection, related to the scene warning surface, set by the user in the video monitoring picture for the target scene in which no moving target exists;
A third reference world coordinate determination unit, configured to calculate, based on the first conversion relationship, a first type of reference world coordinates corresponding to first type of reference image coordinates included in a virtual projection associated with the scene surveillance surface, where the first type of reference image coordinates uniquely determine the virtual projection associated with the scene surveillance surface;
the second distance value determining unit is used for obtaining a distance value from the scene warning surface to the real projection of the scene warning surface;
a fourth reference world coordinate determination unit, configured to determine, based on the distance value, a second type of reference world coordinate corresponding to the first type of reference world coordinate, where a height difference between the second type of reference world coordinate and the first type of reference world coordinate is the distance value;
the second reference image coordinate determining unit is used for calculating second type reference image coordinates corresponding to the second type reference world coordinates based on the second conversion relation;
and the first picture warning surface drawing unit is used for drawing the picture warning surface corresponding to the scene warning surface in the video monitoring picture without the moving target based on the second type of reference image coordinates.
Specifically, the scene warning surface is a scene warning surface perpendicular to the ground, and the picture warning surface corresponding to the scene warning surface is set by the second picture warning surface setting module;
the second picture warning surface setting module may include:
the third virtual projection determining unit is used for determining a virtual projection which is set in a video monitoring picture aiming at a target scene and does not have a moving target and is related to the scene warning surface by a user;
a fifth reference world coordinate determination unit, configured to calculate, based on the first conversion relationship, a first type of reference world coordinates corresponding to first type of reference image coordinates included in a virtual projection associated with the scene surveillance surface, where the first type of reference image coordinates uniquely determine the virtual projection associated with the scene surveillance surface;
the third distance value determining unit is used for obtaining a first distance value from the upper side of the scene warning surface to the real projection of the scene warning surface and a second distance value from the lower side of the scene warning surface to the real projection of the scene warning surface;
a sixth reference world coordinate determination unit, configured to determine a second type of reference world coordinate corresponding to the first type of reference world coordinate and corresponding to the upper side and a second type of reference world coordinate corresponding to the lower side, where a height difference between the second type of reference world coordinate corresponding to the upper side and the first type of reference world coordinate is the first type distance value, and a height difference between the second type of reference world coordinate corresponding to the lower side and the first type of reference world coordinate is the second type distance value;
a third reference image coordinate determination unit, configured to calculate, based on the second conversion relationship, second type reference image coordinates corresponding to the obtained second type reference world coordinates;
and the second picture warning surface drawing unit is used for drawing the picture warning surface corresponding to the scene warning surface in the video monitoring picture without the moving target based on the second type of reference image coordinates.
Specifically, the picture alert body corresponding to the scene alert body is set by a picture alert body setting module;
the screen alert body setting module may include:
a fourth virtual projection determination unit, configured to determine a virtual projection related to the scene alert body, which is set by a user in a video frame for a target scene in which no moving object exists;
a seventh reference world coordinate determination unit, configured to calculate, based on the first conversion relationship, a first type of reference world coordinates corresponding to first type of reference image coordinates included in a virtual projection associated with the scene alert object, where the first type of reference image coordinates uniquely determine the virtual projection associated with the scene alert object;
the fourth distance value determining unit is used for obtaining a first type of distance value from the upper surface of the scene warning body to the real projection of the scene warning body and a second type of distance value from the lower surface of the scene warning body to the real projection of the scene warning body;
an eighth reference world coordinate determination unit, configured to determine second-type reference world coordinates corresponding to the upper surface and second-type reference world coordinates corresponding to the lower surface, both corresponding to the first-type reference world coordinates, where the height difference between the second-type reference world coordinates corresponding to the upper surface and the first-type reference world coordinates is the corresponding first-type distance value, and the height difference between the second-type reference world coordinates corresponding to the lower surface and the first-type reference world coordinates is the corresponding second-type distance value;
a fourth reference image coordinate determination unit, configured to calculate, based on the second conversion relationship, second type reference image coordinates corresponding to the obtained second type reference world coordinates;
and the picture warning body drawing unit is used for drawing the picture warning body corresponding to the scene warning body in the video monitoring picture without the moving target based on the second type of reference image coordinates.
Specifically, the first conversion relationship is characterized by a conversion formula of an image three-dimensional coordinate system and a camera three-dimensional coordinate system, and a conversion formula of the camera three-dimensional coordinate system and a world three-dimensional coordinate system;
the conversion formula of the image three-dimensional coordinate system and the camera three-dimensional coordinate system is as follows:
Figure GDA0002366536030000331 (formula image: conversion between the image three-dimensional coordinate system and the camera three-dimensional coordinate system)
the conversion formula of the camera three-dimensional coordinate system and the world three-dimensional coordinate system is as follows:
Figure GDA0002366536030000332 (formula image: conversion between the camera three-dimensional coordinate system and the world three-dimensional coordinate system)
wherein (x, y, z) are the image coordinates in the image three-dimensional coordinate system, (Xc, Yc, Zc) are the camera coordinates in the camera three-dimensional coordinate system, and (X, Y, Z) are the world coordinates in the world three-dimensional coordinate system; X1, X2, X3, X4 and w are constant values obtained based on the image acquisition device acquiring the video frames; θ is the pitch angle of the image acquisition device, ψ is the tilt angle of the image acquisition device, and Hcam is the height of the image acquisition device above the ground; and the origin of the world three-dimensional coordinate system is the projection of the origin of the camera three-dimensional coordinate system.
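The exact conversion formulas are given in the figures referenced above. Purely as a generic illustration of the kind of chain involved (camera coordinates to world coordinates and back), the following sketch assumes a rotation built from the pitch angle θ and tilt angle ψ together with the camera height Hcam, with the world origin taken as the projection of the camera origin; it is not the patent's formula, and the matrix construction is an assumption for the example.

```python
import numpy as np

# Generic sketch only: assumed rotation order (pitch about X, then tilt about Z)
# and an assumed translation of Hcam along the world Z axis.
def rotation_pitch_tilt(theta, psi):
    rx = np.array([[1, 0, 0],
                   [0, np.cos(theta), -np.sin(theta)],
                   [0, np.sin(theta),  np.cos(theta)]])   # pitch about the X axis
    rz = np.array([[np.cos(psi), -np.sin(psi), 0],
                   [np.sin(psi),  np.cos(psi), 0],
                   [0, 0, 1]])                            # tilt about the Z axis
    return rz @ rx

def camera_to_world(cam_point, theta, psi, hcam):
    r = rotation_pitch_tilt(theta, psi)
    t = np.array([0.0, 0.0, hcam])                        # camera sits Hcam above the world origin
    return r @ np.asarray(cam_point, dtype=float) + t

def world_to_camera(world_point, theta, psi, hcam):
    r = rotation_pitch_tilt(theta, psi)
    t = np.array([0.0, 0.0, hcam])
    return r.T @ (np.asarray(world_point, dtype=float) - t)
```

In this sketch world_to_camera() is simply the inverse of camera_to_world(), mirroring the relationship between the first conversion relation and the second conversion relation described above.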
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The embodiments in this specification are described in a progressive, interrelated manner; identical and similar parts of the embodiments may be cross-referenced, and each embodiment focuses on its differences from the others. In particular, the system embodiment is described briefly because it is substantially similar to the method embodiment; for relevant details, reference may be made to the corresponding parts of the description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (18)

1. A video surveillance method, comprising:
when a moving target is detected in a video monitoring process aiming at a target scene, obtaining a target image coordinate of the moving target under an image three-dimensional coordinate system of a video frame where the moving target is located;
calculating a first type of world coordinate corresponding to the target image coordinate based on a preset first conversion relation between the image three-dimensional coordinate system and a world three-dimensional coordinate system corresponding to the target scene, wherein the first type of world coordinate is the coordinate of the moving target in the world three-dimensional coordinate system;
obtaining second-type world coordinates corresponding to a preset picture alert position, wherein the second-type world coordinates are world coordinates corresponding to the image coordinates of the picture alert position and are determined based on the first conversion relationship, and the picture alert position is: an image position determined based on second-type reference image coordinates, the second-type reference image coordinates being image coordinates obtained by performing a pixel conversion operation on first-type reference image coordinates, and the first-type reference image coordinates being image coordinates included in a virtual projection in a video monitoring picture without a moving target; wherein the pixel conversion operation is a conversion operation utilizing the first conversion relationship, the height relationship between the scene alert position and the real projection, and a second conversion relationship; the second conversion relationship is the inverse of the first conversion relationship, the real projection is the projection of the scene alert position onto a preset reference plane in the target scene, and the virtual projection is the position of the real projection in the video monitoring picture without the moving target; the second-type world coordinates are the world coordinates of the scene alert position in the world three-dimensional coordinate system; and the relative position relation between the virtual projection and other objects in the video monitoring picture is equivalent to the relative position relation between the real projection and the corresponding other objects in the target scene;
and judging whether the position relation change of the first type world coordinate and the second type world coordinate accords with a preset alarm rule, and if so, triggering an alarm event aiming at the target scene.
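As a rough illustration of the method of claim 1, the following Python sketch wires the claimed steps together; image_to_world (standing in for the first conversion relationship), the target dictionaries, and the alarm_rule object are hypothetical names introduced only for this example.

```python
def monitor_frame(moving_targets, alert_world_coords, image_to_world, alarm_rule):
    """One monitoring iteration: map each detected moving target's image
    coordinates to first-type world coordinates and test the change in the
    positional relation against the second-type world coordinates of the
    preset picture alert position."""
    alarm_events = []
    for target in moving_targets:
        first_type_coords = image_to_world(target["image_coords"])  # first-type world coordinates
        if alarm_rule.relation_change_matches(first_type_coords, alert_world_coords):
            alarm_events.append({"target": target, "rule": alarm_rule.name})
    return alarm_events
```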
2. The method according to claim 1, wherein the scene alert position is: a scene warning line;
the judging whether the position relation change of the first type world coordinate and the second type world coordinate accords with a preset alarm rule comprises the following steps:
judging whether the change of the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates changes from not including the first type world coordinates, to including them, and then to not including them; if so, the change of the position relation between the first type world coordinate and the second type world coordinate accords with a preset line-crossing alarm rule.
3. The method according to claim 1, wherein the scene alert position is: a scene warning surface;
the judging whether the position relation change of the first type world coordinate and the second type world coordinate accords with a preset alarm rule comprises the following steps:
judging whether the change of the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates changes from not including the first type world coordinates, to including them, and then to not including them; if so, the change of the position relation between the first type world coordinate and the second type world coordinate accords with a preset warning-surface crossing alarm rule;
alternatively,
judging whether the change of the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates changes from not including the first type world coordinates to including them; if so, the change accords with a preset alarm rule for entering the warning surface;
alternatively,
judging whether the change of the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates changes from including the first type world coordinates to not including them; if so, the change accords with a preset alarm rule for leaving the warning surface;
alternatively,
judging whether the change of the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates includes the first type world coordinates; if so, the change accords with a preset alarm rule for intrusion into the warning surface.
4. The method according to claim 1, wherein the scene alert position is: a scene warning body;
the judging whether the position relation change of the first type world coordinate and the second type world coordinate accords with a preset alarm rule comprises the following steps:
judging whether the change of the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates changes from not including the first type world coordinates, to including them, and then to not including them; if so, the change of the position relation between the first type world coordinate and the second type world coordinate accords with a preset warning-body crossing alarm rule;
alternatively,
judging whether the change of the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates changes from not including the first type world coordinates to including them; if so, the change accords with a preset alarm rule for entering the warning body;
alternatively,
judging whether the change of the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates changes from including the first type world coordinates to not including them; if so, the change accords with a preset alarm rule for leaving the warning body;
alternatively,
judging whether the change of the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates includes the first type world coordinates; if so, the change accords with a preset alarm rule for intrusion into the warning body.
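Claims 2 to 4 apply the same containment test to a warning line, a warning surface, or a warning body; only the geometry of the range formed by the second-type world coordinates differs. A minimal sketch of the four transition rules, assuming the per-frame containment test has already been reduced to a boolean:

```python
def classify_transition(containment_history):
    """containment_history: chronological list of booleans, True when the range
    formed by the second-type world coordinates contained the first-type world
    coordinates in that frame. Returns the names of the matching alarm rules."""
    h = containment_history
    events = []
    if len(h) >= 2 and not h[-2] and h[-1]:
        events.append("enter")    # outside -> inside
    if len(h) >= 2 and h[-2] and not h[-1]:
        events.append("leave")    # inside -> outside
    if len(h) >= 3 and not h[-3] and h[-2] and not h[-1]:
        events.append("cross")    # outside -> inside -> outside
    if h and h[-1]:
        events.append("intrude")  # currently inside
    return events
```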
5. The method according to claim 2, wherein the setting procedure of the picture warning line corresponding to the scene warning line is as follows:
determining a virtual projection which is set in a video monitoring picture aiming at a target scene and has no moving target and is related to a scene warning line by a user;
calculating first-class reference world coordinates corresponding to first-class reference image coordinates included in virtual projections related to the scene alert line based on the first conversion relation, wherein the first-class reference image coordinates uniquely determine the virtual projections related to the scene alert line;
obtaining a distance value from the scene warning line to a real projection of the scene warning line;
determining a second type of reference world coordinate corresponding to the first type of reference world coordinate based on the distance value, wherein the height difference between the second type of reference world coordinate and the first type of reference world coordinate is the distance value;
calculating second type reference image coordinates corresponding to the second type reference world coordinates based on the second conversion relation;
and drawing a picture warning line corresponding to the scene warning line in the video monitoring picture without the moving target based on the second type of reference image coordinates.
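A minimal sketch of the drawing-time conversion described in claim 5, assuming hypothetical image_to_world and world_to_image helpers standing in for the first and second conversion relationships:

```python
def picture_warning_line(projection_image_coords, height, image_to_world, world_to_image):
    """Convert the user-drawn virtual projection (first-type reference image
    coordinates) to first-type reference world coordinates, raise them by the
    scene warning line's height above its real projection, and map the
    resulting second-type reference world coordinates back to image space."""
    picture_coords = []
    for x, y in projection_image_coords:
        X, Y, Z = image_to_world((x, y))               # first-type reference world coordinate
        lifted = (X, Y, Z + height)                     # second-type reference world coordinate
        picture_coords.append(world_to_image(lifted))   # second-type reference image coordinate
    return picture_coords
```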
6. The method according to claim 3, wherein the scene warning surface is a scene warning surface parallel to the ground, and the setting process of the picture warning surface corresponding to the scene warning surface is as follows:
determining a virtual projection which is set in a video monitoring picture aiming at a target scene and has no moving target and is related to the scene warning surface by a user;
calculating first-class reference world coordinates corresponding to first-class reference image coordinates included in the virtual projection related to the scene alert surface based on the first conversion relation, wherein the first-class reference image coordinates uniquely determine the virtual projection related to the scene alert surface;
obtaining a distance value from the scene warning surface to a real projection of the scene warning surface;
determining a second type of reference world coordinate corresponding to the first type of reference world coordinate based on the distance value, wherein the height difference between the second type of reference world coordinate and the first type of reference world coordinate is the distance value;
calculating second type reference image coordinates corresponding to the second type reference world coordinates based on the second conversion relation;
and drawing a picture warning surface corresponding to the scene warning surface in the video monitoring picture without the moving target based on the second type of reference image coordinates.
7. The method according to claim 3, wherein the scene warning surface is a scene warning surface perpendicular to the ground, and the setting process of the picture warning surface corresponding to the scene warning surface is as follows:
determining a virtual projection which is set in a video monitoring picture aiming at a target scene and has no moving target and is related to the scene warning surface by a user;
calculating first-class reference world coordinates corresponding to first-class reference image coordinates included in the virtual projection related to the scene alert surface based on the first conversion relation, wherein the first-class reference image coordinates uniquely determine the virtual projection related to the scene alert surface;
obtaining a first distance value from the upper side of the scene warning surface to the real projection of the scene warning surface and a second distance value from the lower side of the scene warning surface to the real projection of the scene warning surface;
determining a second type of reference world coordinate corresponding to the first type of reference world coordinate and corresponding to the upper side and a second type of reference world coordinate corresponding to the lower side, wherein the height difference between the second type of reference world coordinate corresponding to the upper side and the first type of reference world coordinate is the first type distance value, and the height difference between the second type of reference world coordinate corresponding to the lower side and the first type of reference world coordinate is the second type distance value;
calculating second type reference image coordinates corresponding to the obtained second type reference world coordinates based on the second conversion relation;
and drawing a picture warning surface corresponding to the scene warning surface in the video monitoring picture without the moving target based on the second type of reference image coordinates.
8. The method according to claim 4, wherein the setting procedure of the picture warning body corresponding to the scene warning body is as follows:
determining a virtual projection which is set by a user in a video frame aiming at a target scene and without a moving target and is related to a scene warning body;
calculating first-class reference world coordinates corresponding to first-class reference image coordinates included in a virtual projection related to the scene alert body based on the first conversion relation, wherein the first-class reference image coordinates uniquely determine the virtual projection related to the scene alert body;
obtaining a first distance value from the upper surface of the scene warning body to the real projection of the scene warning body and a second distance value from the lower surface of the scene warning body to the real projection of the scene warning body;
determining, for the first type of reference world coordinates, a second type of reference world coordinate corresponding to the upper surface and a second type of reference world coordinate corresponding to the lower surface, wherein the height difference between the upper-surface second type of reference world coordinate and the first type of reference world coordinate is the corresponding first distance value, and the height difference between the lower-surface second type of reference world coordinate and the first type of reference world coordinate is the corresponding second distance value;
calculating second type reference image coordinates corresponding to the obtained second type reference world coordinates based on the second conversion relation;
and drawing a picture warning body corresponding to the scene warning body in the video monitoring picture without the moving target based on the second type of reference image coordinates.
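The warning-body case of claim 8 applies the same conversion twice, once with the upper-surface distance and once with the lower-surface distance; a minimal sketch under the same assumed helpers as above:

```python
def picture_warning_body(projection_image_coords, upper_height, lower_height,
                         image_to_world, world_to_image):
    """Produce the image-space outlines of the warning body's upper and lower
    faces by lifting the virtual projection by the two distance values."""
    upper_face, lower_face = [], []
    for x, y in projection_image_coords:
        X, Y, Z = image_to_world((x, y))                       # first-type reference world coordinate
        upper_face.append(world_to_image((X, Y, Z + upper_height)))
        lower_face.append(world_to_image((X, Y, Z + lower_height)))
    return upper_face, lower_face
```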
9. The method according to any one of claims 1-8, wherein the first transformation relationship is characterized by a transformation formula of an image three-dimensional coordinate system and a camera three-dimensional coordinate system, and a transformation formula of the camera three-dimensional coordinate system and a world three-dimensional coordinate system;
the conversion formula of the image three-dimensional coordinate system and the camera three-dimensional coordinate system is as follows:
[equation published as an image in the original: Figure FDA0002366536020000061]
the conversion formula of the camera three-dimensional coordinate system and the world three-dimensional coordinate system is as follows:
[equation published as an image in the original: Figure FDA0002366536020000062]
wherein (x, y, z) are the image coordinates in the image three-dimensional coordinate system, (Xc, Yc, Zc) are the camera coordinates in the camera three-dimensional coordinate system, (X, Y, Z) are the world coordinates in the world three-dimensional coordinate system, X1, X2, X3, X4 and w are constant values obtained based on the image acquisition device that acquires the video frames, θ is the pitch angle of the image acquisition device, ψ is the tilt angle of the image acquisition device, and Hcam is the height of the image acquisition device from the ground; and the origin of the world three-dimensional coordinate system is the projection of the origin of the camera three-dimensional coordinate system.
10. A video monitoring apparatus, comprising:
the target image coordinate determination module is used for obtaining target image coordinates of the moving target in an image three-dimensional coordinate system of a video frame when the moving target is detected in the video monitoring process aiming at a target scene;
the first-class world coordinate determination module is used for calculating first-class world coordinates corresponding to the target image coordinates based on a preset first conversion relation between the image three-dimensional coordinate system and a world three-dimensional coordinate system corresponding to the target scene, wherein the first-class world coordinates are coordinates of the moving target in the world three-dimensional coordinate system;
a second-class world coordinate determination module, configured to obtain second-class world coordinates corresponding to a preset picture alert position, where the second-class world coordinates are world coordinates corresponding to the image coordinates of the picture alert position and are determined based on the first conversion relationship, and the picture alert position is: an image position determined based on second-type reference image coordinates, the second-type reference image coordinates being image coordinates obtained by performing a pixel conversion operation on first-type reference image coordinates, and the first-type reference image coordinates being image coordinates included in a virtual projection in a video monitoring picture without a moving target; where the pixel conversion operation is a conversion operation utilizing the first conversion relationship, the height relationship between the scene alert position and the real projection, and a second conversion relationship; the second conversion relationship is the inverse of the first conversion relationship, the real projection is the projection of the scene alert position onto a preset reference plane in the target scene, and the virtual projection is the position of the real projection in the video monitoring picture without the moving target; the second-class world coordinates are the world coordinates of the scene alert position in the world three-dimensional coordinate system; and the relative position relation between the virtual projection and other objects in the video monitoring picture is equivalent to the relative position relation between the real projection and the corresponding other objects in the target scene;
the position relation determination module is used for judging whether the position relation change of the first type world coordinate and the second type world coordinate accords with a preset alarm rule, and, if so, for invoking the alarm event generation module;
the alarm event generation module is used for triggering an alarm event aiming at the target scene.
11. The apparatus according to claim 10, wherein the scene alert position is: a scene warning line;
the position relation determination module includes:
a first position relation determination unit, configured to determine whether the change in the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates changes from not including the first type world coordinates, to including them, and then to not including them; if so, the change in the position relation between the first type world coordinate and the second type world coordinate accords with a preset line-crossing alarm rule.
12. The apparatus according to claim 10, wherein the scene alert position is: a scene warning surface;
the position relation determination module includes:
a second position relation determination unit, configured to determine whether the change in the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates changes from not including the first type world coordinates, to including them, and then to not including them; if so, the change in the position relation between the first type world coordinate and the second type world coordinate accords with a preset warning-surface crossing alarm rule;
alternatively,
a third position relation determination unit, configured to determine whether the change in the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates changes from not including the first type world coordinates to including them; if so, the change accords with a preset alarm rule for entering the warning surface;
alternatively,
a fourth position relation determination unit, configured to determine whether the change in the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates changes from including the first type world coordinates to not including them; if so, the change accords with a preset alarm rule for leaving the warning surface;
alternatively,
a fifth position relation determination unit, configured to determine whether the change in the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates includes the first type world coordinates; if so, the change accords with a preset alarm rule for intrusion into the warning surface.
13. The apparatus according to claim 10, wherein the scene alert position is: a scene warning body;
the position relation determination module includes:
a sixth position relation determination unit, configured to determine whether the change in the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates changes from not including the first type world coordinates, to including them, and then to not including them; if so, the change in the position relation between the first type world coordinate and the second type world coordinate accords with a preset warning-body crossing alarm rule;
alternatively,
a seventh position relation determination unit, configured to determine whether the change in the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates changes from not including the first type world coordinates to including them; if so, the change accords with a preset alarm rule for entering the warning body;
alternatively,
an eighth position relation determination unit, configured to determine whether the change in the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates changes from including the first type world coordinates to not including them; if so, the change accords with a preset alarm rule for leaving the warning body;
alternatively,
a ninth position relation determination unit, configured to determine whether the change in the position relation between the first type world coordinate and the second type world coordinate satisfies the following: the range formed by the second type world coordinates includes the first type world coordinates; if so, the change accords with a preset alarm rule for intrusion into the warning body.
14. The apparatus according to claim 11, wherein the picture warning line corresponding to the scene warning line is set by a picture warning line setting module;
the picture warning line setting module comprises:
the first virtual projection determining unit is used for determining virtual projections which are set in a video monitoring picture aiming at a target scene and do not have a moving target and are related to a scene warning line by a user;
a first reference world coordinate determination unit, configured to calculate first type reference world coordinates corresponding to first type reference image coordinates included in a virtual projection related to the scene fence, where the first type reference image coordinates uniquely determine the virtual projection related to the scene fence;
the first distance value determining unit is used for obtaining a distance value from the scene warning line to the real projection of the scene warning line;
a second reference world coordinate determination unit, configured to determine, based on the distance value, a second type of reference world coordinate corresponding to the first type of reference world coordinate, where a height difference between the second type of reference world coordinate and the first type of reference world coordinate is the distance value;
the first reference image coordinate determining unit is used for calculating second type reference image coordinates corresponding to the second type reference world coordinates based on the second conversion relation;
and the picture warning line drawing unit is used for drawing the picture warning line corresponding to the scene warning line in the video monitoring picture without the moving target based on the second type of reference image coordinates.
15. The apparatus according to claim 12, wherein the scene warning surface is a scene warning surface parallel to the ground, and the picture warning surface corresponding to the scene warning surface is set by a first picture warning surface setting module;
the first picture warning surface setting module comprises:
the second virtual projection determining unit is used for determining a virtual projection which is set in a video monitoring picture aiming at a target scene and does not have a moving target and is related to the scene warning surface by a user;
a third reference world coordinate determination unit, configured to calculate, based on the first conversion relationship, a first type of reference world coordinates corresponding to first type of reference image coordinates included in a virtual projection associated with the scene surveillance surface, where the first type of reference image coordinates uniquely determine the virtual projection associated with the scene surveillance surface;
the second distance value determining unit is used for obtaining a distance value from the scene warning surface to the real projection of the scene warning surface;
a fourth reference world coordinate determination unit, configured to determine, based on the distance value, a second type of reference world coordinate corresponding to the first type of reference world coordinate, where a height difference between the second type of reference world coordinate and the first type of reference world coordinate is the distance value;
the second reference image coordinate determining unit is used for calculating second type reference image coordinates corresponding to the second type reference world coordinates based on the second conversion relation;
and the first picture warning surface drawing unit is used for drawing the picture warning surface corresponding to the scene warning surface in the video monitoring picture without the moving target based on the second type of reference image coordinates.
16. The apparatus according to claim 12, wherein the scene warning surface is a scene warning surface perpendicular to the ground, and the picture warning surface corresponding to the scene warning surface is set by a second picture warning surface setting module;
the second picture warning surface setting module comprises:
the third virtual projection determining unit is used for determining a virtual projection which is set in a video monitoring picture aiming at a target scene and does not have a moving target and is related to the scene warning surface by a user;
a fifth reference world coordinate determination unit, configured to calculate, based on the first conversion relationship, a first type of reference world coordinates corresponding to first type of reference image coordinates included in a virtual projection associated with the scene surveillance surface, where the first type of reference image coordinates uniquely determine the virtual projection associated with the scene surveillance surface;
the third distance value determining unit is used for obtaining a first distance value from the upper side of the scene warning surface to the real projection of the scene warning surface and a second distance value from the lower side of the scene warning surface to the real projection of the scene warning surface;
a sixth reference world coordinate determination unit, configured to determine a second type of reference world coordinate corresponding to the first type of reference world coordinate and corresponding to the upper side and a second type of reference world coordinate corresponding to the lower side, where a height difference between the second type of reference world coordinate corresponding to the upper side and the first type of reference world coordinate is the first type distance value, and a height difference between the second type of reference world coordinate corresponding to the lower side and the first type of reference world coordinate is the second type distance value;
a third reference image coordinate determination unit, configured to calculate, based on the second conversion relationship, second type reference image coordinates corresponding to the obtained second type reference world coordinates;
and the second picture warning surface drawing unit is used for drawing the picture warning surface corresponding to the scene warning surface in the video monitoring picture without the moving target based on the second type of reference image coordinates.
17. The apparatus according to claim 13, wherein the picture warning body corresponding to the scene warning body is set by a picture warning body setting module;
the picture warning body setting module includes:
a fourth virtual projection determination unit, configured to determine a virtual projection related to the scene alert body, which is set by a user in a video frame for a target scene in which no moving object exists;
a seventh reference world coordinate determination unit, configured to calculate, based on the first conversion relationship, a first type of reference world coordinates corresponding to first type of reference image coordinates included in a virtual projection associated with the scene alert object, where the first type of reference image coordinates uniquely determine the virtual projection associated with the scene alert object;
the fourth distance value determining unit is used for obtaining a first type of distance value from the upper surface of the scene warning body to the real projection of the scene warning body and a second type of distance value from the lower surface of the scene warning body to the real projection of the scene warning body;
an eighth reference world coordinate determination unit, configured to determine, for the first type of reference world coordinates, a second type of reference world coordinate corresponding to the upper surface and a second type of reference world coordinate corresponding to the lower surface, where the height difference between the upper-surface second type of reference world coordinate and the first type of reference world coordinate is the corresponding first type of distance value, and the height difference between the lower-surface second type of reference world coordinate and the first type of reference world coordinate is the corresponding second type of distance value;
a fourth reference image coordinate determination unit, configured to calculate, based on the second conversion relationship, second type reference image coordinates corresponding to the obtained second type reference world coordinates;
and the picture warning body drawing unit is used for drawing the picture warning body corresponding to the scene warning body in the video monitoring picture without the moving target based on the second type of reference image coordinates.
18. The apparatus according to any one of claims 10-17, wherein the first transformation relationship is characterized by a transformation formula of an image three-dimensional coordinate system and a camera three-dimensional coordinate system, and a transformation formula of the camera three-dimensional coordinate system and a world three-dimensional coordinate system;
the conversion formula of the image three-dimensional coordinate system and the camera three-dimensional coordinate system is as follows:
[equation published as an image in the original: Figure FDA0002366536020000121]
the conversion formula of the camera three-dimensional coordinate system and the world three-dimensional coordinate system is as follows:
[equation published as an image in the original: Figure FDA0002366536020000122]
wherein (x, y, z) are the image coordinates in the image three-dimensional coordinate system, (Xc, Yc, Zc) are the camera coordinates in the camera three-dimensional coordinate system, (X, Y, Z) are the world coordinates in the world three-dimensional coordinate system, X1, X2, X3, X4 and w are constant values obtained based on the image acquisition device that acquires the video frames, θ is the pitch angle of the image acquisition device, ψ is the tilt angle of the image acquisition device, and Hcam is the height of the image acquisition device from the ground; and the origin of the world three-dimensional coordinate system is the projection of the origin of the camera three-dimensional coordinate system.
CN201610322001.7A 2016-05-16 2016-05-16 Video monitoring method and device Active CN107396037B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610322001.7A CN107396037B (en) 2016-05-16 2016-05-16 Video monitoring method and device


Publications (2)

Publication Number Publication Date
CN107396037A CN107396037A (en) 2017-11-24
CN107396037B true CN107396037B (en) 2020-04-03

Family

ID=60338476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610322001.7A Active CN107396037B (en) 2016-05-16 2016-05-16 Video monitoring method and device

Country Status (1)

Country Link
CN (1) CN107396037B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108566534A (en) * 2018-04-23 2018-09-21 Oppo广东移动通信有限公司 Alarm method, device, terminal based on video monitoring and storage medium
CN111538009B (en) * 2019-01-21 2022-09-16 杭州海康威视数字技术股份有限公司 Radar point marking method and device
CN111508027B (en) * 2019-01-31 2023-10-20 杭州海康威视数字技术股份有限公司 Method and device for calibrating external parameters of camera
CN113068000B (en) * 2019-12-16 2023-07-18 杭州海康威视数字技术股份有限公司 Video target monitoring method, device, equipment, system and storage medium
CN113452954B (en) * 2020-03-26 2023-02-28 浙江宇视科技有限公司 Behavior analysis method, apparatus, device and medium
CN113476835A (en) * 2020-10-22 2021-10-08 青岛海信电子产业控股股份有限公司 Picture display method and device
CN113106839B (en) * 2021-03-29 2023-06-06 杭州海康威视数字技术股份有限公司 Control method, device, equipment and system of lifting bridge

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2209091B1 (en) * 2009-01-16 2012-08-08 Honda Research Institute Europe GmbH System and method for object motion detection based on multiple 3D warping and vehicle equipped with such system
CN102650514A (en) * 2012-05-03 2012-08-29 秦毅 Stereoscopic vision system and application thereof to real time monitoring of three-dimensional safety warning area
CN103745484A (en) * 2013-12-31 2014-04-23 国家电网公司 Worker target safety early-warning method for hot-line work on electric power facility
CN104680557A (en) * 2015-03-10 2015-06-03 重庆邮电大学 Intelligent detection method for abnormal behavior in video sequence image
CN104954747A (en) * 2015-06-17 2015-09-30 浙江大华技术股份有限公司 Video monitoring method and device
CN105141885A (en) * 2014-05-26 2015-12-09 杭州海康威视数字技术股份有限公司 Method for video monitoring and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2893521A1 (en) * 2012-09-07 2015-07-15 Siemens Schweiz AG Methods and apparatus for establishing exit/entry criteria for a secure location


Also Published As

Publication number Publication date
CN107396037A (en) 2017-11-24

Similar Documents

Publication Publication Date Title
CN107396037B (en) Video monitoring method and device
CN108694741B (en) Three-dimensional reconstruction method and device
CN105225230B (en) A kind of method and device of identification foreground target object
TWI508027B (en) Three dimensional detecting device and method for detecting images thereof
WO2016199244A1 (en) Object recognition device and object recognition system
CN110067274B (en) Equipment control method and excavator
CN109191533B (en) Tower crane high-altitude construction method based on fabricated building
KR101759798B1 (en) Method, device and system for generating an indoor two dimensional plan view image
EP2476999B1 (en) Method for measuring displacement, device for measuring displacement, and program for measuring displacement
WO2015098222A1 (en) Information processing device, information processing method, and program
CN109791607A (en) It is detected from a series of images of video camera by homography matrix and identifying object
WO2020006551A1 (en) Computer vision systems and methods for modeling three dimensional structures using two-dimensional segments detected in digital aerial images
US20120162412A1 (en) Image matting apparatus using multiple cameras and method of generating alpha maps
CN108111802B (en) Video monitoring method and device
CN111465937B (en) Face detection and recognition method employing light field camera system
CN116778094B (en) Building deformation monitoring method and device based on optimal viewing angle shooting
KR101474521B1 (en) Method and apparatus for building image database
CN107103582B (en) The matching process of robot visual guidance positioning image characteristic point
JP7437930B2 (en) Mobile objects and imaging systems
KR101559739B1 (en) System for merging virtual modeling and image data of cameras
JP5960471B2 (en) Image monitoring device
JP2013200840A (en) Video processing device, video processing method, video processing program, and video display device
JP6546898B2 (en) Three-dimensional space identification apparatus, method, and program
JP6213106B2 (en) Image processing device
CN110723073B (en) Automobile A column perspective method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant