CN108111802B - Video monitoring method and device - Google Patents
- Publication number
- CN108111802B (application CN201611046620.4A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- type
- alert
- dimensional coordinates
- scene
- Prior art date
- Legal status
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
- G06T2207/10021—Stereoscopic video; Stereoscopic image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Abstract
An embodiment of the invention provides a video monitoring method and device. The method comprises the following steps: when a target moving object is detected during video monitoring of a target scene, determining first-type three-dimensional coordinates of the target moving object in the space coordinate system corresponding to a virtual three-dimensional space, where the virtual three-dimensional space is formed in advance based on target three-dimensional information; obtaining second-type three-dimensional coordinates of a space alert position in that space coordinate system, where the space alert position is drawn in the virtual three-dimensional space in advance; judging, according to the first-type and second-type three-dimensional coordinates, whether the positional relationship between the target moving object and the scene alert position in the target scene satisfies a predetermined alarm rule; and if so, triggering an alarm event for the target scene. This scheme can improve the triggering accuracy of alarm events.
Description
Technical Field
The invention relates to the technical field of video monitoring, in particular to a video monitoring method and device.
Background
In real life, certain positions in a real scene need to be armed against intrusion, line crossing, entering, leaving, and the like. A position that needs to be armed in a real scene is called a scene alert position; its types include, but are not limited to, an alert line, an alert surface, and an alert body.
In the prior art, to arm a scene alert position, a 2D picture alert position is usually set in the video monitoring picture corresponding to the target scene containing that position, typically by drawing it directly with a mouse. When the change in the positional relationship between a moving target appearing in the video monitoring picture and the picture alert position conforms to a predetermined alarm rule, an alarm event is triggered; concretely, the comparison is made between the image coordinates (coordinates in the image's two-dimensional coordinate system) of the picture alert position and the image coordinates of the moving target.
However, a picture alert position set this way exists only on a 2D plane and carries no three-dimensional spatial information about the real scene, such as heights above the ground or relative distances between objects, so misjudgments occur easily. For example, in Fig. 1 a picture alert position is drawn directly on the video monitoring picture to guard a flower bed: an alarm should be raised when a person enters the flower bed, but not when a pedestrian merely walks past it on the roadside. Because of the perspective relationship, however, the pedestrian in the right-hand picture of Fig. 1 is judged to have entered the scene alert position and an alarm is triggered, which is a false alarm.
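The misjudgment can be reproduced numerically: projection discards depth, so two points at very different depths can land on the same pixel. A toy illustration with a pinhole model (the focal length and all coordinates below are made-up values, not from the patent):

```python
def project(point_3d, focal=1000.0):
    """Pinhole projection: drop depth, scale x/y by focal/z (toy model)."""
    x, y, z = point_3d
    return (focal * x / z, focal * y / z)

# Flower bed corner (close to the camera) and a pedestrian on the path
# behind it (farther away): different 3D positions, identical 2D pixels.
flower_bed_point = (1.0, 0.5, 5.0)    # inside the guarded area
pedestrian_point = (2.0, 1.0, 10.0)   # outside it, just walking past

px_bed = project(flower_bed_point)
px_person = project(pedestrian_point)
# A 2D alert region cannot tell them apart -> false alarm;
# comparing the depth (z) values would.
```

Both points project to the same pixel, which is exactly the perspective ambiguity that the patent's 3D-coordinate comparison avoids.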
Disclosure of Invention
An embodiment of the invention aims to provide a video monitoring method and device that improve the triggering accuracy of alarm events. The specific technical scheme is as follows:
in a first aspect, a video monitoring method provided in an embodiment of the present invention includes:
when a target moving object is detected during video monitoring of a target scene, determining first-type three-dimensional coordinates of the target moving object in the space coordinate system corresponding to a virtual three-dimensional space, where the virtual three-dimensional space is formed in advance based on target three-dimensional information, and the target three-dimensional information is image three-dimensional information of the target scene acquired by an image acquisition device when no moving object is present in the target scene;
obtaining second-type three-dimensional coordinates of a space alert position in the space coordinate system corresponding to the virtual three-dimensional space, where the space alert position is drawn in the virtual three-dimensional space in advance and its relative position in the virtual three-dimensional space is the same as the relative position of the scene alert position in the target scene;
judging, according to the first-type and second-type three-dimensional coordinates, whether the positional relationship between the target moving object and the scene alert position in the target scene satisfies a predetermined alarm rule;
and if so, triggering an alarm event for the target scene.
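The four steps of the first aspect can be sketched as a single monitoring iteration. All function names and the box-shaped alarm rule below are illustrative assumptions, not terminology from the patent:

```python
def monitor_frame(detect_moving_object, get_object_coords,
                  alert_coords, alarm_rule, trigger_alarm):
    """One iteration of the claimed method (hypothetical interface):
    1. detect a moving object, 2. determine its first-type 3D coordinates,
    3. use the alert position's second-type 3D coordinates (drawn in advance),
    4. test the alarm rule and trigger an alarm event if it holds."""
    obj = detect_moving_object()
    if obj is None:
        return False
    first_type = get_object_coords(obj)        # first-type coordinates
    second_type = alert_coords                 # second-type coordinates
    if alarm_rule(first_type, second_type):
        trigger_alarm()
        return True
    return False

# Toy run: an object at z=1 inside an alert box spanning z in [0, 2].
fired = monitor_frame(
    detect_moving_object=lambda: "object",
    get_object_coords=lambda o: (0.0, 0.0, 1.0),
    alert_coords=((-1.0, -1.0, 0.0), (1.0, 1.0, 2.0)),
    alarm_rule=lambda p, box: all(lo <= c <= hi
                                  for c, lo, hi in zip(p, box[0], box[1])),
    trigger_alarm=lambda: None,
)
```

The alarm rule is passed in as a callback because, as the later optional steps show, crossing, entering, leaving, and intruding each need a different test.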
Optionally, the step of determining first-type three-dimensional coordinates of the target moving object in the space coordinate system corresponding to the virtual three-dimensional space includes:
determining the image three-dimensional information of the target moving object acquired by the image acquisition device;
obtaining the image three-dimensional information of a predetermined reference object, the predetermined reference object being a fixed object in the target scene;
determining the relative positional relationship between the target moving object and the predetermined reference object according to their respective image three-dimensional information;
obtaining the reference three-dimensional coordinates of the predetermined reference object in the space coordinate system corresponding to the virtual three-dimensional space;
and obtaining the first-type three-dimensional coordinates of the target moving object in that space coordinate system according to the relative positional relationship and the reference three-dimensional coordinates.
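A minimal sketch of these sub-steps, under the assumption that the image three-dimensional information yields 3D vectors, so that the relative positional relationship is a simple coordinate offset (the patent does not fix a representation):

```python
def first_type_coords(obj_image_3d, ref_image_3d, ref_space_3d):
    """Compute the moving object's coordinates in the virtual space's
    coordinate system via a fixed reference object (hypothetical math):
    offset in image 3D space + known space coordinates of the reference."""
    # relative positional relationship of object w.r.t. the reference
    offset = tuple(o - r for o, r in zip(obj_image_3d, ref_image_3d))
    # transfer the offset onto the reference's space coordinates
    return tuple(s + d for s, d in zip(ref_space_3d, offset))

# A lamp post (fixed reference) sits at (10, 0, 3) in the virtual space;
# the detected person is offset (1, 2, 0) from it in the captured 3D data.
person = first_type_coords(
    obj_image_3d=(4.0, 5.0, 2.0),
    ref_image_3d=(3.0, 3.0, 2.0),
    ref_space_3d=(10.0, 0.0, 3.0),
)
```

Using a fixed reference object means the camera pose never has to be solved explicitly; only the reference's space coordinates must be known in advance.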
Optionally, the space alert position is drawn as follows:
acquiring gesture operation of a user in the virtual three-dimensional space;
and drawing the space alert position in the virtual three-dimensional space according to the operation track of the gesture operation.
Optionally, the space alert position is drawn as follows:
receiving three-dimensional coordinates of at least two fixed points input by a user, the at least two fixed points uniquely determining the space alert position;
and drawing the space alert position in the virtual three-dimensional space according to the three-dimensional coordinates of the at least two fixed points.
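For instance, if the space alert position is axis-aligned, two diagonal fixed points determine it uniquely. A sketch under that assumption:

```python
def alert_region_from_points(p1, p2):
    """Build an axis-aligned alert region (min corner, max corner)
    from two user-entered diagonal fixed points; the two points
    uniquely determine the region regardless of entry order."""
    lo = tuple(min(a, b) for a, b in zip(p1, p2))
    hi = tuple(max(a, b) for a, b in zip(p1, p2))
    return lo, hi

# Two diagonal corners of a warning body around a flower bed.
region = alert_region_from_points((2.0, 5.0, 0.0), (4.0, 1.0, 3.0))
```

Sorting the coordinates per axis makes the region independent of which corner the user enters first, which is one reasonable reading of "uniquely determine".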
Optionally, the type of the space alert position is an alert line;
the step of judging whether the position relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule or not according to the first three-dimensional coordinate and the second three-dimensional coordinate comprises the following steps:
judging whether the change in the positional relationship between the first-type and second-type three-dimensional coordinates satisfies: the range formed by the second-type three-dimensional coordinates first does not include the first-type three-dimensional coordinates, then includes them, and then does not include them again; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined crossing-alert-line alarm rule.
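The crossing rule is thus a sequence test over consecutive video frames: outside the range, then inside, then outside again. A sketch assuming a per-frame list of containment flags has already been computed:

```python
def crossed(inside_flags):
    """True if the per-frame inside/outside sequence contains the pattern
    outside -> inside -> outside, i.e. the object passed through the
    alert line's range rather than merely touching or staying in it."""
    for i in range(len(inside_flags) - 2):
        if (not inside_flags[i]) and inside_flags[i + 1]:
            # entered the range; did it leave again at some later frame?
            if any(not f for f in inside_flags[i + 2:]):
                return True
    return False
```

Note that merely entering and staying inside does not fire this rule; that case is handled by the separate entering/intrusion rules below.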
Optionally, the type of the space alert position is an alert surface;
the step of judging whether the position relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule or not according to the first three-dimensional coordinate and the second three-dimensional coordinate comprises the following steps:
judging whether the change in the positional relationship between the first-type and second-type three-dimensional coordinates satisfies: the range formed by the second-type three-dimensional coordinates first does not include the first-type three-dimensional coordinates, then includes them, and then does not include them again; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined crossing-alert-surface alarm rule;
or,
judging whether the change in the positional relationship between the first-type and second-type three-dimensional coordinates satisfies: the range formed by the second-type three-dimensional coordinates changes from not including the first-type three-dimensional coordinates to including them; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined entering-alert-surface alarm rule;
or,
judging whether the change in the positional relationship between the first-type and second-type three-dimensional coordinates satisfies: the range formed by the second-type three-dimensional coordinates changes from including the first-type three-dimensional coordinates to not including them; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined leaving-alert-surface alarm rule;
or,
judging whether the positional relationship between the first-type and second-type three-dimensional coordinates satisfies: the range formed by the second-type three-dimensional coordinates includes the first-type three-dimensional coordinates; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined intruding-into-alert-surface alarm rule.
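The four surface rules differ only in which inside/outside transition of the first-type coordinates they react to. An illustrative classifier over a per-frame containment history (the mapping of histories to rule names is an assumption, not the patent's wording):

```python
def classify(inside_flags):
    """Map an inside/outside history against an alert surface onto the
    matching rule names: 'enter' (outside->inside), 'leave'
    (inside->outside), 'cross' (entered earlier, then left), and
    'intrude' (currently inside the range)."""
    events = set()
    entered_at = None
    for i in range(1, len(inside_flags)):
        prev, cur = inside_flags[i - 1], inside_flags[i]
        if not prev and cur:
            events.add("enter")
            entered_at = i
        if prev and not cur:
            events.add("leave")
            if entered_at is not None:   # entered earlier, now left
                events.add("cross")
    if inside_flags and inside_flags[-1]:
        events.add("intrude")
    return events
```

Keeping the transition logic separate from the geometry means the same classifier serves alert surfaces and alert bodies; only the containment test changes.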
Optionally, the type of the space alert position is an alert body;
the step of judging whether the position relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule or not according to the first three-dimensional coordinate and the second three-dimensional coordinate comprises the following steps:
judging whether the change in the positional relationship between the first-type and second-type three-dimensional coordinates satisfies: the range formed by the second-type three-dimensional coordinates first does not include the first-type three-dimensional coordinates, then includes them, and then does not include them again; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined crossing-alert-body alarm rule;
or,
judging whether the change in the positional relationship between the first-type and second-type three-dimensional coordinates satisfies: the range formed by the second-type three-dimensional coordinates changes from not including the first-type three-dimensional coordinates to including them; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined entering-alert-body alarm rule;
or,
judging whether the change in the positional relationship between the first-type and second-type three-dimensional coordinates satisfies: the range formed by the second-type three-dimensional coordinates changes from including the first-type three-dimensional coordinates to not including them; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined leaving-alert-body alarm rule;
or,
judging whether the positional relationship between the first-type and second-type three-dimensional coordinates satisfies: the range formed by the second-type three-dimensional coordinates includes the first-type three-dimensional coordinates; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined intruding-into-alert-body alarm rule.
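For an alert body, the "range formed by the second-type three-dimensional coordinates" amounts to a 3D containment test, on top of which the same enter/leave/cross/intrude transitions apply. A sketch that approximates the body by the bounding box of its corner points (an assumption; the patent does not restrict the body's shape):

```python
def inside_body(point, corners):
    """Test whether a first-type 3D coordinate lies inside the alert
    body, approximated here by the axis-aligned bounding box of the
    body's corner points."""
    for axis in range(3):
        values = [c[axis] for c in corners]
        if not (min(values) <= point[axis] <= max(values)):
            return False
    return True

# An alert body around a doorway, given by its 8 corner points.
body = [(0, 0, 0), (2, 0, 0), (0, 1, 0), (2, 1, 0),
        (0, 0, 3), (2, 0, 3), (0, 1, 3), (2, 1, 3)]
```

Evaluating this per frame yields exactly the inside/outside flag sequence that the transition rules above consume.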
In a second aspect, a video monitoring apparatus provided in an embodiment of the present invention includes:
the device comprises a first coordinate determining module, a second coordinate obtaining module, a judging module, a processing module and a drawing module;
the first coordinate determination module is used for determining a first type of three-dimensional coordinates of a target moving object in a space coordinate system corresponding to a virtual three-dimensional space when the target moving object is detected in a video monitoring process aiming at the target scene, wherein the virtual three-dimensional space is formed in advance based on target three-dimensional information, and the target three-dimensional information is image three-dimensional information of the target scene acquired by image acquisition equipment when no moving object exists in the target scene;
the second coordinate obtaining module is configured to obtain a second type of three-dimensional coordinates of a space alert position in a space coordinate system corresponding to the virtual three-dimensional space, where the space alert position is previously drawn in the virtual three-dimensional space by the drawing module, and a relative positional relationship of the space alert position in the virtual three-dimensional space is the same as a relative positional relationship of the scene alert position in the target scene;
the judging module is used for judging whether the position relation between the target moving object in the target scene and the scene warning position meets a preset alarm rule or not according to the first type of three-dimensional coordinates and the second type of three-dimensional coordinates;
and the processing module is used for triggering an alarm event aiming at the target scene when the judgment result of the judgment module is yes.
Optionally, the first coordinate determination module includes:
a first information obtaining unit configured to determine three-dimensional information of an image of the target moving object acquired by the image acquisition device;
a second information obtaining unit, configured to obtain three-dimensional information of an image of a predetermined reference object, where the predetermined reference object is a fixed object in the target scene;
a relative position relation determining unit configured to determine a relative position relation between the target moving object and the predetermined reference object based on the image three-dimensional information of the target moving object and the image three-dimensional information of the predetermined reference object;
a third information obtaining unit, configured to obtain a reference three-dimensional coordinate of the predetermined reference object in a space coordinate system corresponding to a virtual three-dimensional space;
and the fourth information obtaining unit is used for obtaining the first type of three-dimensional coordinates of the target moving object in a space coordinate system corresponding to the virtual three-dimensional space according to the relative position relation and the reference three-dimensional coordinates.
Optionally, the rendering module includes:
the gesture acquisition unit is used for acquiring gesture operation of a user in the virtual three-dimensional space;
and the first drawing unit is used for drawing the space alert position in the virtual three-dimensional space according to the operation track of the gesture operation.
Optionally, the rendering module includes:
the receiving unit is used for receiving three-dimensional coordinates of at least two fixed points input by a user, and the at least two fixed points uniquely determine a space alert position;
and the second drawing unit is used for drawing the space alert position in the virtual three-dimensional space according to the three-dimensional coordinates of the at least two fixed points.
Optionally, the type of the space alert position is an alert line;
the judging module comprises:
a first judging unit, configured to judge whether a change in a positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies: and if the first type of three-dimensional coordinates are not included, the first type of three-dimensional coordinates are included and the first type of three-dimensional coordinates are not included in the range formed by the second type of three-dimensional coordinates, judging that the position relation between the target moving object in the target scene and the scene alert position meets a preset alarm rule for crossing the alert line.
Optionally, the type of the space alert position is an alert surface;
the judging module comprises:
a second judging unit, configured to judge whether the change in the positional relationship between the first-type and second-type three-dimensional coordinates satisfies: the range formed by the second-type three-dimensional coordinates first does not include the first-type three-dimensional coordinates, then includes them, and then does not include them again; and if so, to judge that the positional relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined crossing-alert-surface alarm rule;
or,
a third judging unit, configured to judge whether the change in the positional relationship between the first-type and second-type three-dimensional coordinates satisfies: the range formed by the second-type three-dimensional coordinates changes from not including the first-type three-dimensional coordinates to including them; and if so, to judge that the positional relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined entering-alert-surface alarm rule;
or,
a fourth judging unit, configured to judge whether the change in the positional relationship between the first-type and second-type three-dimensional coordinates satisfies: the range formed by the second-type three-dimensional coordinates changes from including the first-type three-dimensional coordinates to not including them; and if so, to judge that the positional relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined leaving-alert-surface alarm rule;
or,
a fifth judging unit, configured to judge whether the positional relationship between the first-type and second-type three-dimensional coordinates satisfies: the range formed by the second-type three-dimensional coordinates includes the first-type three-dimensional coordinates; and if so, to judge that the positional relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined intruding-into-alert-surface alarm rule.
Optionally, the type of the space alert position is an alert body;
the judging module comprises:
a sixth judging unit, configured to judge whether the change in the positional relationship between the first-type and second-type three-dimensional coordinates satisfies: the range formed by the second-type three-dimensional coordinates first does not include the first-type three-dimensional coordinates, then includes them, and then does not include them again; and if so, to judge that the positional relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined crossing-alert-body alarm rule;
or,
a seventh judging unit, configured to judge whether the change in the positional relationship between the first-type and second-type three-dimensional coordinates satisfies: the range formed by the second-type three-dimensional coordinates changes from not including the first-type three-dimensional coordinates to including them; and if so, to judge that the positional relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined entering-alert-body alarm rule;
or,
an eighth judging unit, configured to judge whether the change in the positional relationship between the first-type and second-type three-dimensional coordinates satisfies: the range formed by the second-type three-dimensional coordinates changes from including the first-type three-dimensional coordinates to not including them; and if so, to judge that the positional relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined leaving-alert-body alarm rule;
or,
a ninth judging unit, configured to judge whether the positional relationship between the first-type and second-type three-dimensional coordinates satisfies: the range formed by the second-type three-dimensional coordinates includes the first-type three-dimensional coordinates; and if so, to judge that the positional relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined intruding-into-alert-body alarm rule.
In the embodiment of the invention, when a target moving object is detected, first-type three-dimensional coordinates of the target moving object in the space coordinate system corresponding to a virtual three-dimensional space are determined, where the virtual three-dimensional space is formed in advance based on target three-dimensional information, and the target three-dimensional information is image three-dimensional information of the target scene acquired by the image acquisition device when no moving object is present in the target scene; second-type three-dimensional coordinates of a space alert position in that space coordinate system are obtained, where the space alert position is drawn in the virtual three-dimensional space in advance and its relative position in the virtual three-dimensional space is the same as that of the scene alert position in the target scene; whether the positional relationship between the target moving object and the scene alert position in the target scene satisfies a predetermined alarm rule is judged according to the two types of three-dimensional coordinates; and if so, an alarm event is triggered for the target scene. Compared with the prior art, positional relationships are matched using coordinate information in the space coordinate system corresponding to the virtual three-dimensional space, rather than on a 2D plane in the image's two-dimensional coordinate system, which reduces false alarms and improves the triggering accuracy of alarm events.
Drawings
To illustrate the embodiments of the present invention or the prior-art technical solutions more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a diagram illustrating a picture alert position in the prior art;
fig. 2 is a flowchart of a video monitoring method according to an embodiment of the present invention;
fig. 3 is another flowchart of a video monitoring method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a video monitoring apparatus according to an embodiment of the present invention;
fig. 5 is another schematic structural diagram of a video monitoring apparatus according to an embodiment of the present invention;
fig. 6 is a schematic view of a virtual three-dimensional space A containing a moving object and a space alert position.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
In order to solve the problem of the prior art, embodiments of the present invention provide a video monitoring method and apparatus to improve the triggering accuracy of an alarm event.
First, a video monitoring method provided by an embodiment of the present invention is described below.
It should be noted that the execution body of the video monitoring method provided by the embodiment of the present invention may be a video monitoring apparatus. The apparatus may run in a device connected to an image acquisition device, where the image acquisition device has a moving-object detection function and an image-three-dimensional-information acquisition function. For example, the image acquisition device may be a binocular camera, and the connected device may be a hard-disk video recorder or a terminal that obtains the video stream of the target scene from the image acquisition device in real time and outputs it, the video stream being composed of video frames. Of course, the video monitoring apparatus may also run inside the image acquisition device itself, provided the device has those two functions. It can be understood that so-called image three-dimensional information is image information formed by adding depth information to two-dimensional image information; image acquisition devices capable of acquiring it include, but are not limited to, binocular cameras. The moving-object detection function is any function capable of detecting a moving object, and may be implemented in any existing manner.
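As noted above, image three-dimensional information is two-dimensional image information plus depth, which is exactly what a binocular camera measures. A standard pinhole back-projection sketch (the intrinsic parameters fx, fy, cx, cy are made-up values, not from the patent):

```python
def pixel_to_3d(u, v, depth, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Back-project a pixel (u, v) with a measured depth into a 3D
    point in the camera frame, using a standard pinhole camera model:
    x = (u - cx) * depth / fx, y = (v - cy) * depth / fy, z = depth."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point maps straight onto the optical axis.
on_axis = pixel_to_3d(320.0, 240.0, 5.0)     # (0.0, 0.0, 5.0)
off_axis = pixel_to_3d(720.0, 240.0, 4.0)    # (2.0, 0.0, 4.0)
```

Points recovered this way supply the image three-dimensional information from which the virtual three-dimensional space and the first-type coordinates are built.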
As shown in fig. 2, the video monitoring method provided in the embodiment of the present invention may include the following steps:
S201, when a target moving object is detected during video monitoring of a target scene, determining first-type three-dimensional coordinates of the target moving object in the space coordinate system corresponding to a virtual three-dimensional space;
when a target moving object is detected in a video monitoring process for a target scene, the video monitoring device may determine a first type of three-dimensional coordinates of the target moving object in a space coordinate system corresponding to a virtual three-dimensional space, where the virtual three-dimensional space is formed in advance based on target three-dimensional information, and the target three-dimensional information is image three-dimensional information of the target scene acquired by an image acquisition device when the moving object does not exist in the target scene.
It should be noted that, when no moving object exists in the target scene, the image acquisition device may acquire the three-dimensional image information of the target scene and send the three-dimensional image information to the video monitoring apparatus; accordingly, the video monitoring apparatus can take the received image three-dimensional information as target three-dimensional information and form a virtual three-dimensional space based on the target three-dimensional information. For example, a Virtual Reality (VR) technology widely applied in the prior art may be used to form the Virtual three-dimensional space based on the target three-dimensional information, but is not limited thereto.
It can be understood that the determined first type of three-dimensional coordinates can uniquely identify the target moving object; the number of the first type of three-dimensional coordinates is not limited herein. Since detecting the target moving object requires analyzing a plurality of consecutive video frames, the video frames in which the target moving object appears are a plurality of consecutive video frames, and accordingly the first type of three-dimensional coordinates relate to the plurality of consecutive video frames. In addition, after detecting the target moving object, the image acquisition device can notify the video monitoring apparatus and send the acquired image three-dimensional information of the target moving object to the video monitoring apparatus, so that the video monitoring apparatus determines the first type of three-dimensional coordinates of the target moving object in the space coordinate system corresponding to the virtual three-dimensional space.
Specifically, in an implementation manner, as shown in fig. 3, the step of determining a first type of three-dimensional coordinates of the target moving object in a space coordinate system corresponding to a virtual three-dimensional space may include:
S301, determining the image three-dimensional information of the target moving object acquired by the image acquisition device;
S302, obtaining the image three-dimensional information of a predetermined reference object, where the predetermined reference object is a fixed object in the target scene;
the predetermined reference object is a predetermined fixed object located in the target scene, and the three-dimensional information of the acquired image is invariant for the fixed object, and correspondingly, the three-dimensional coordinate information in the virtual three-dimensional space is also invariant, so that the first type of three-dimensional coordinates corresponding to the target moving object can be determined based on the relative position relationship between the fixed object and the target moving object in the target scene and the three-dimensional coordinates of the fixed object in the virtual three-dimensional space. Based on the processing idea, after the image three-dimensional information of the target moving object is obtained, the image three-dimensional information of the predetermined reference object can be obtained, and then the subsequent determination step of the relative position relationship is performed, wherein the image three-dimensional information of the predetermined reference object can be determined in advance and stored.
S303, determining the relative position relationship between the target moving object and the preset reference object according to the image three-dimensional information of the target moving object and the image three-dimensional information of the preset reference object;
S304, obtaining a reference three-dimensional coordinate of the predetermined reference object in the space coordinate system corresponding to the virtual three-dimensional space;
the reference three-dimensional coordinates of the predetermined reference object in the space coordinate system corresponding to the virtual three-dimensional space can be determined in advance and stored. Further, it can be understood that, since the predetermined reference object is a fixed object, a corresponding virtual object exists in the virtual three-dimensional space, and the three-dimensional coordinates of the virtual object are the reference three-dimensional coordinates corresponding to the predetermined reference object.
S305, obtaining a first type of three-dimensional coordinates of the target moving object in a space coordinate system corresponding to the virtual three-dimensional space according to the relative position relation and the reference three-dimensional coordinates.
It should be emphasized that the specific implementation of the above step of determining the first type of three-dimensional coordinates of the target moving object in the space coordinate system corresponding to the virtual three-dimensional space is merely an exemplary illustration and should not be construed as a limitation to the embodiments of the present invention.
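The determination of steps S301 to S305 can be pictured as a coordinate translation: the object's offset from the fixed reference object in the image three-dimensional information equals its offset from the reference object's virtual counterpart in the space coordinate system. The sketch below is only an illustration under simplifying assumptions (each object is reduced to a single point, and all names are invented for the example), not the patented implementation:

```python
def first_type_coordinates(obj_img_info, ref_img_info, ref_world_coord):
    """Translate the moving object's image-space position into the virtual
    three-dimensional space using a fixed reference object (S301-S305 sketch).

    obj_img_info / ref_img_info: illustrative (x, y, depth) positions recovered
    from the binocular camera; ref_world_coord: the stored reference
    three-dimensional coordinate of the fixed object in the virtual space.
    """
    # S303: relative position of the object with respect to the reference
    offset = [o - r for o, r in zip(obj_img_info, ref_img_info)]
    # S305: shift the stored reference coordinate (obtained in S304) by that offset
    return [w + d for w, d in zip(ref_world_coord, offset)]
```

In practice a detected object would contribute many such points, one set per consecutive video frame.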
S202, obtaining a second type of three-dimensional coordinates of a space alert position in the space coordinate system corresponding to the virtual three-dimensional space;
the space alert position is drawn in the virtual three-dimensional space in advance, and the relative position relationship of the space alert position in the virtual three-dimensional space is the same as the relative position relationship of the scene alert position in the target scene.
After the virtual three-dimensional space is formed, the space alert position can be drawn in the virtual three-dimensional space manually, following the principle that the relative position relationship of the space alert position in the virtual three-dimensional space is the same as the relative position relationship of the scene alert position in the target scene; the second type of three-dimensional coordinates of the space alert position in the space coordinate system corresponding to the virtual three-dimensional space is then determined and stored for subsequent use. Specifically, since the type of the scene alert position may be an alert line, an alert surface or an alert body, the type of the space alert position may correspondingly be an alert line, an alert surface or an alert body, where the type of the scene alert position corresponds to the type of the space alert position, that is: when the type of the scene alert position is an alert line, the type of the space alert position is an alert line; when the type of the scene alert position is an alert surface, the type of the space alert position is an alert surface; and when the type of the scene alert position is an alert body, the type of the space alert position is an alert body. Specifically, in one implementation manner, the drawing manner of the space alert position may include:
acquiring gesture operation of a user in the virtual three-dimensional space;
and drawing the space alert position in the virtual three-dimensional space according to the operation track of the gesture operation.
In this implementation manner, the user may perform a gesture operation in the virtual three-dimensional space according to the type of the scene alert position and the relative position relationship of the scene alert position in the target scene; accordingly, the video monitoring apparatus may collect the gesture operation of the user in the virtual three-dimensional space and draw the space alert position in the virtual three-dimensional space according to the operation trajectory of the gesture operation, where the operation trajectory of the gesture operation is the space alert position.
Specifically, in another specific implementation manner, the drawing manner of the space alert position may include:
receiving three-dimensional coordinates of at least two fixed points input by a user, wherein the at least two fixed points uniquely determine a space alert position;
and drawing the space alert position in the virtual three-dimensional space according to the three-dimensional coordinates of the at least two fixed points.
In this specific implementation manner, the user can calculate the three-dimensional coordinate range of the space alert position according to the type of the scene alert position, the relative position relationship of the scene alert position in the target scene, and the space coordinate system corresponding to the virtual three-dimensional space, and then input the three-dimensional coordinates of at least two fixed points that uniquely determine the space alert position; accordingly, the video monitoring apparatus can receive the three-dimensional coordinates of the at least two fixed points input by the user, and draw the space alert position in the virtual three-dimensional space according to the three-dimensional coordinates of the at least two fixed points. For example: if the type of the scene alert position is an alert line, the user can input two fixed points, which are the two endpoints of the space alert line; if the type of the scene alert position is an alert surface, the number of fixed points input by the user may be equal to the number of vertices of the space alert surface, the fixed points corresponding to the vertices.
The foregoing drawing manners of the space alert position are merely exemplary and should not be construed as limiting the embodiments of the present invention.
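Both drawing manners ultimately yield a set of fixed points with three-dimensional coordinates in the space coordinate system, so the stored representation can be sketched as below; the function name, type labels, and minimum point counts are assumptions for illustration only:

```python
def build_alert_position(kind, points):
    """Validate user-entered fixed points and record a space alert position.

    kind is one of "line", "surface", "body" (illustrative labels); points are
    (x, y, z) triples in the space coordinate system of the virtual
    three-dimensional space.
    """
    minimum = {"line": 2, "surface": 3, "body": 4}
    if kind not in minimum:
        raise ValueError("unknown alert position type: %r" % kind)
    if len(points) < minimum[kind]:
        raise ValueError("%s needs at least %d fixed points" % (kind, minimum[kind]))
    if kind == "line" and len(points) != 2:
        # an alert line is uniquely determined by exactly its two endpoints
        raise ValueError("an alert line is determined by exactly two endpoints")
    return {"type": kind, "points": [tuple(p) for p in points]}
```

For example, the two endpoints of a space alert line at a height of 1.75 m could be entered as `build_alert_position("line", [(0, 1.75, 0), (3, 1.75, 0)])`.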
S203, judging whether the position relation between the target moving object and the scene warning position in the target scene meets a preset alarm rule or not according to the first type of three-dimensional coordinates and the second type of three-dimensional coordinates;
the relative position relationship between the target moving object and the space alert position in the virtual three-dimensional space is equal to that: therefore, after the first three-dimensional coordinates and the second three-dimensional coordinates are obtained, whether the position relationship between the target moving object and the scene alert position in the target scene meets the preset alarm rule or not can be judged according to the first three-dimensional coordinates and the second three-dimensional coordinates.
Because the types of the space alert position differ, the specific implementation manners for judging, according to the first type of three-dimensional coordinates and the second type of three-dimensional coordinates, whether the position relationship between the target moving object in the target scene and the scene alert position meets the predetermined alarm rule also differ. These specific implementation manners are described below for the cases where the type of the space alert position is an alert line, an alert surface, and an alert body, respectively.
Optionally, in a specific implementation manner, the type of the space alert position is an alert line;
correspondingly, the step of judging whether the position relationship between the target moving object and the scene alert position in the target scene meets a preset alarm rule according to the first type of three-dimensional coordinates and the second type of three-dimensional coordinates comprises the following steps:
judging whether the change of the position relationship between the first type of three-dimensional coordinates and the second type of three-dimensional coordinates satisfies: the range formed by the second type of three-dimensional coordinates changes from not including the first type of three-dimensional coordinates, to including them, and then to not including them; if so, judging that the position relationship between the target moving object in the target scene and the scene alert position meets a predetermined crossing-alert-line alarm rule.
Optionally, in a specific implementation, the type of the space alert position is an alert surface;
correspondingly, the step of judging whether the position relationship between the target moving object and the scene alert position in the target scene meets a preset alarm rule according to the first type of three-dimensional coordinates and the second type of three-dimensional coordinates comprises the following steps:
judging whether the change of the position relationship between the first type of three-dimensional coordinates and the second type of three-dimensional coordinates satisfies: the range formed by the second type of three-dimensional coordinates changes from not including the first type of three-dimensional coordinates, to including them, and then to not including them; if so, judging that the position relationship between the target moving object in the target scene and the scene alert position meets a predetermined crossing-alert-surface alarm rule;
or,
judging whether the change of the position relationship between the first type of three-dimensional coordinates and the second type of three-dimensional coordinates satisfies: the range formed by the second type of three-dimensional coordinates changes from not including the first type of three-dimensional coordinates to including them; if so, judging that the position relationship between the target moving object in the target scene and the scene alert position meets a predetermined entering-alert-surface alarm rule;
or,
judging whether the change of the position relationship between the first type of three-dimensional coordinates and the second type of three-dimensional coordinates satisfies: the range formed by the second type of three-dimensional coordinates changes from including the first type of three-dimensional coordinates to not including them; if so, judging that the position relationship between the target moving object in the target scene and the scene alert position meets a predetermined leaving-alert-surface alarm rule;
or,
judging whether the change of the position relationship between the first type of three-dimensional coordinates and the second type of three-dimensional coordinates satisfies: the range formed by the second type of three-dimensional coordinates keeps including the first type of three-dimensional coordinates; if so, judging that the position relationship between the target moving object in the target scene and the scene alert position meets a predetermined intruding-alert-surface alarm rule.
Optionally, in another specific implementation manner, the type of the space alert position is an alert body;
correspondingly, the step of judging whether the position relationship between the target moving object and the scene alert position in the target scene meets a preset alarm rule according to the first type of three-dimensional coordinates and the second type of three-dimensional coordinates comprises the following steps:
judging whether the change of the position relationship between the first type of three-dimensional coordinates and the second type of three-dimensional coordinates satisfies: the range formed by the second type of three-dimensional coordinates changes from not including the first type of three-dimensional coordinates, to including them, and then to not including them; if so, judging that the position relationship between the target moving object in the target scene and the scene alert position meets a predetermined crossing-alert-body alarm rule;
or,
judging whether the change of the position relationship between the first type of three-dimensional coordinates and the second type of three-dimensional coordinates satisfies: the range formed by the second type of three-dimensional coordinates changes from not including the first type of three-dimensional coordinates to including them; if so, judging that the position relationship between the target moving object in the target scene and the scene alert position meets a predetermined entering-alert-body alarm rule;
or,
judging whether the change of the position relationship between the first type of three-dimensional coordinates and the second type of three-dimensional coordinates satisfies: the range formed by the second type of three-dimensional coordinates changes from including the first type of three-dimensional coordinates to not including them; if so, judging that the position relationship between the target moving object in the target scene and the scene alert position meets a predetermined leaving-alert-body alarm rule;
or,
judging whether the change of the position relationship between the first type of three-dimensional coordinates and the second type of three-dimensional coordinates satisfies: the range formed by the second type of three-dimensional coordinates keeps including the first type of three-dimensional coordinates; if so, judging that the position relationship between the target moving object in the target scene and the scene alert position meets a predetermined intruding-alert-body alarm rule.
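All of the judgments above, for alert lines, alert surfaces, and alert bodies alike, reduce to how the containment of the first type of three-dimensional coordinates within the range formed by the second type of three-dimensional coordinates changes over consecutive video frames. The following is one illustrative reading of that decision logic, with invented names, not the patented implementation:

```python
def classify_event(contained):
    """Classify an alarm event from the per-frame containment history.

    `contained` is a list of booleans over consecutive video frames: True when
    the first type of three-dimensional coordinates fall inside the range
    formed by the second type of three-dimensional coordinates.
    """
    if not contained:
        return None
    if any(contained) and not contained[0] and not contained[-1]:
        return "cross"    # outside -> inside -> outside
    if not contained[0] and contained[-1]:
        return "enter"    # outside -> inside
    if contained[0] and not contained[-1]:
        return "leave"    # inside -> outside
    if all(contained):
        return "intrude"  # inside throughout the observed frames
    return None
```

An object that passes straight through the alert position produces the history `[False, True, False]` and is classified as a crossing event.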
S204, when the judgment result is yes, triggering an alarm event for the target scene.
When it is judged that the position relationship between the target moving object and the scene alert position in the target scene meets the predetermined alarm rule, an alarm event for the target scene can be triggered. The output form of the alarm event is not limited in the embodiment of the present invention; for example, the specific form may be a sound alarm, a short message alarm, a dialog box alarm, or the like.
For example, in a scene A, if an alarm is required when a moving object with a height exceeding 175 cm passes through a certain ground position, the alarm requirement can be met by the scheme provided by the embodiment of the present invention; the specific process is as follows:
setting the scene alert position as a scene alert line, where the scene alert line is parallel to the ground position and is 175 cm away from it;
the video monitoring device can pre-construct a virtual three-dimensional space a corresponding to the scene A, wherein the virtual three-dimensional space a is formed in advance based on target three-dimensional information, and the target three-dimensional information is image three-dimensional information of the target scene collected by the binocular camera when no moving object exists in the scene A; the video monitoring device collects gesture operation of a user in the virtual three-dimensional space a, and draws a space warning line corresponding to the scene warning line in the virtual three-dimensional space a according to an operation track of the gesture operation, wherein the operation track of the gesture operation is a straight line, and a relative position relationship of the space warning line in the virtual three-dimensional space a is equal to a relative position relationship of the scene warning line in a scene a, as shown in fig. 6;
as shown in fig. 6, when detecting the first moving object on the left side, the video monitoring apparatus determines a three-dimensional coordinate Y of the first moving object in the space coordinate system corresponding to the virtual three-dimensional space a, obtains a three-dimensional coordinate X of the space alert line in the space coordinate system corresponding to the virtual three-dimensional space a, and judges whether the change of the position relationship between the three-dimensional coordinate Y and the three-dimensional coordinate X satisfies: the range formed by the three-dimensional coordinate X changes from not including the three-dimensional coordinate Y, to including it, and then to not including it; because the height of the first moving object is less than 175 cm, the range formed by the three-dimensional coordinate X never includes the three-dimensional coordinate Y, so the judgment result is no, and therefore no alarm is given when the first moving object passes through the ground position;
as shown in fig. 6, when detecting the second moving object on the right side, the video monitoring apparatus determines a three-dimensional coordinate Z of the second moving object in the space coordinate system corresponding to the virtual three-dimensional space a, obtains the three-dimensional coordinate X of the space alert line in the space coordinate system corresponding to the virtual three-dimensional space a, and judges whether the change of the position relationship between the three-dimensional coordinate Z and the three-dimensional coordinate X satisfies: the range formed by the three-dimensional coordinate X changes from not including the three-dimensional coordinate Z, to including it, and then to not including it; because the height of the second moving object exceeds 175 cm, the judgment result is yes, and therefore an alarm is given when the second moving object passes through the ground position. It is emphasized that the two gray arrows in fig. 6 are merely used to indicate the traveling direction of the moving objects and do not have any limiting meaning.
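Under the scene-A setup, the decisive test is whether any of the object's first-type coordinates reach the 175 cm height of the space alert line while the object passes the ground position. A minimal sketch, assuming the object is represented as a set of (x, y, z) points with y as height in metres (all names are invented for the example):

```python
def crosses_at_wire_height(object_points, wire_height=1.75):
    """Return True when the object's point set reaches the space alert line.

    object_points: illustrative (x, y, z) first-type coordinates of the moving
    object, with y the height above the ground; wire_height: the alert line's
    height (1.75 m in the scene-A example).
    """
    # the object intersects the horizontal wire only if its topmost point
    # is at least as high as the wire
    top = max(y for (_, y, _) in object_points)
    return top >= wire_height
```

A 1.6 m object passes under the line without triggering the rule, while a 1.8 m object intersects it and triggers the alarm, matching the two cases of fig. 6.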
In the embodiment of the present invention, when a target moving object is detected, a first type of three-dimensional coordinates of the target moving object in a space coordinate system corresponding to a virtual three-dimensional space is determined, where the virtual three-dimensional space is formed in advance based on target three-dimensional information, and the target three-dimensional information is the image three-dimensional information of the target scene acquired by the image acquisition device when no moving object exists in the target scene; a second type of three-dimensional coordinates of a space alert position in the space coordinate system corresponding to the virtual three-dimensional space is obtained, where the space alert position is drawn in the virtual three-dimensional space in advance, and the relative position relationship of the space alert position in the virtual three-dimensional space is the same as the relative position relationship of the scene alert position in the target scene; whether the position relationship between the target moving object in the target scene and the scene alert position meets a predetermined alarm rule is judged according to the first type of three-dimensional coordinates and the second type of three-dimensional coordinates; and when the judgment result is yes, an alarm event for the target scene is triggered. Compared with the prior art, position relationship matching is carried out based on coordinate information in the space coordinate system corresponding to the virtual three-dimensional space rather than on a 2D plane in an image two-dimensional coordinate system, so that false alarms are reduced and the triggering accuracy of alarm events is improved.
Corresponding to the foregoing method embodiment, an embodiment of the present invention further provides a video monitoring apparatus, as shown in fig. 4, the apparatus may include:
a first coordinate determination module 410, a second coordinate obtaining module 420, a judgment module 430, a processing module 440 and a drawing module 450;
the first coordinate determining module 410 is configured to, when a target moving object is detected in a video monitoring process for a target scene, determine a first type of three-dimensional coordinates of the target moving object in a space coordinate system corresponding to a virtual three-dimensional space, where the virtual three-dimensional space is formed in advance based on target three-dimensional information, and the target three-dimensional information is image three-dimensional information of the target scene acquired by an image acquisition device when no moving object exists in the target scene;
the second coordinate obtaining module 420 is configured to obtain a second type of three-dimensional coordinates of a space alert position in a space coordinate system corresponding to the virtual three-dimensional space, where the space alert position is previously drawn in the virtual three-dimensional space by the drawing module 450, and a relative positional relationship of the space alert position in the virtual three-dimensional space is the same as a relative positional relationship of the scene alert position in the target scene;
the determining module 430 is configured to determine whether a position relationship between the target moving object in the target scene and the scene alert position satisfies a predetermined alarm rule according to the first three-dimensional coordinate and the second three-dimensional coordinate;
the processing module 440 is configured to trigger an alarm event for the target scene when the determination result of the determining module is yes.
In the embodiment of the present invention, when a target moving object is detected, a first type of three-dimensional coordinates of the target moving object in a space coordinate system corresponding to a virtual three-dimensional space is determined, where the virtual three-dimensional space is formed in advance based on target three-dimensional information, and the target three-dimensional information is the image three-dimensional information of the target scene acquired by the image acquisition device when no moving object exists in the target scene; a second type of three-dimensional coordinates of a space alert position in the space coordinate system corresponding to the virtual three-dimensional space is obtained, where the space alert position is drawn in the virtual three-dimensional space in advance, and the relative position relationship of the space alert position in the virtual three-dimensional space is the same as the relative position relationship of the scene alert position in the target scene; whether the position relationship between the target moving object in the target scene and the scene alert position meets a predetermined alarm rule is judged according to the first type of three-dimensional coordinates and the second type of three-dimensional coordinates; and when the judgment result is yes, an alarm event for the target scene is triggered. Compared with the prior art, position relationship matching is carried out based on coordinate information in the space coordinate system corresponding to the virtual three-dimensional space rather than on a 2D plane in an image two-dimensional coordinate system, so that false alarms are reduced and the triggering accuracy of alarm events is improved.
Optionally, in a specific implementation manner, as shown in fig. 5, the first coordinate determination module 410 may include:
a first information obtaining unit 411, configured to determine the image three-dimensional information of the target moving object acquired by the image acquisition device;
a second information obtaining unit 412, configured to obtain the image three-dimensional information of a predetermined reference object, where the predetermined reference object is a fixed object in the target scene;
a relative positional relationship determination unit 413 configured to determine a relative positional relationship between the target moving object and the predetermined reference object based on the image three-dimensional information of the target moving object and the image three-dimensional information of the predetermined reference object;
a third information obtaining unit 414, configured to obtain a reference three-dimensional coordinate of the predetermined reference object in a space coordinate system corresponding to a virtual three-dimensional space;
a fourth information obtaining unit 415, configured to obtain, according to the relative position relationship and the reference three-dimensional coordinate, a first type of three-dimensional coordinates of the target moving object in a space coordinate system corresponding to the virtual three-dimensional space.
Optionally, in an implementation manner, the drawing module 450 may include:
the gesture acquisition unit is used for acquiring gesture operation of a user in the virtual three-dimensional space;
and the first drawing unit is used for drawing the space alert position in the virtual three-dimensional space according to the operation track of the gesture operation.
Optionally, in an implementation manner, the drawing module 450 may include:
the receiving unit is used for receiving three-dimensional coordinates of at least two fixed points input by a user, and the at least two fixed points uniquely determine a space alert position;
and the second drawing unit is used for drawing the space alert position in the virtual three-dimensional space according to the three-dimensional coordinates of the at least two fixed points.
Optionally, in one implementation, the type of the space alert position is an alert line;
the determining module 430 may include:
a first judging unit, configured to judge whether the change of the position relationship between the first type of three-dimensional coordinates and the second type of three-dimensional coordinates satisfies: the range formed by the second type of three-dimensional coordinates changes from not including the first type of three-dimensional coordinates, to including them, and then to not including them; and if so, judge that the position relationship between the target moving object in the target scene and the scene alert position meets a predetermined crossing-alert-line alarm rule.
Optionally, in one implementation, the type of the space alert position is an alert surface;
the determining module 430 may include:
a second judging unit, configured to judge whether the change of the position relationship between the first type of three-dimensional coordinates and the second type of three-dimensional coordinates satisfies: the range formed by the second type of three-dimensional coordinates changes from not including the first type of three-dimensional coordinates, to including them, and then to not including them; and if so, judge that the position relationship between the target moving object in the target scene and the scene alert position meets a predetermined crossing-alert-surface alarm rule;
or,
a third judging unit, configured to judge whether the change of the position relationship between the first type of three-dimensional coordinates and the second type of three-dimensional coordinates satisfies: the range formed by the second type of three-dimensional coordinates changes from not including the first type of three-dimensional coordinates to including them; and if so, judge that the position relationship between the target moving object in the target scene and the scene alert position meets a predetermined entering-alert-surface alarm rule;
alternatively, the first and second electrodes may be,
a fourth judging unit, configured to judge whether a change in a positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies: from the range formed by the second type of three-dimensional coordinates to the range including the first type of three-dimensional coordinates, if so, judging that the position relation between the target moving object in the target scene and the scene alert position meets a preset alert-surface-leaving alert rule;
alternatively, the first and second electrodes may be,
a fifth judging unit, configured to judge whether a change in a positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies: and the range formed by the second type of three-dimensional coordinates comprises the first type of three-dimensional coordinates, if so, the position relation between the target moving object in the target scene and the scene alert position is judged to meet the preset intrusion alert surface alarm rule.
Optionally, in one implementation, the type of the space alert position is an alert body;
the determining module 430 may include:
a sixth judging unit, configured to judge whether the change in the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates changes from not including the first-type three-dimensional coordinates, to including them, and then to not including them again; if so, it is judged that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for crossing the alert body;
alternatively,
a seventh judging unit, configured to judge whether the change in the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates changes from not including the first-type three-dimensional coordinates to including them; if so, it is judged that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for entering the alert body;
alternatively,
an eighth judging unit, configured to judge whether the change in the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates changes from including the first-type three-dimensional coordinates to not including them; if so, it is judged that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for leaving the alert body;
alternatively,
a ninth judging unit, configured to judge whether the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates includes the first-type three-dimensional coordinates; if so, it is judged that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for intruding into the alert body.
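The four containment-transition rules for alert surfaces and alert bodies (cross, enter, leave, intrude) can be sketched as a classifier over a per-frame history of "was the target inside the alert range?" booleans. This is an illustrative sketch only; the function name and the history representation are assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the containment-transition alarm rules described above.
# was_inside_history: list of bools, True when the first-type coordinates fell
# inside the range formed by the second-type coordinates in that frame.

def classify_event(was_inside_history):
    """Map a containment history to one of the alarm rules, or None."""
    h = was_inside_history
    # Cross: outside -> inside -> outside again.
    if len(h) >= 3 and not h[0] and any(h[1:-1]) and not h[-1]:
        return "cross"
    # Enter: the latest transition was outside -> inside.
    if len(h) >= 2 and not h[-2] and h[-1]:
        return "enter"
    # Leave: the latest transition was inside -> outside.
    if len(h) >= 2 and h[-2] and not h[-1]:
        return "leave"
    # Intrude: currently inside the alert range.
    if h and h[-1]:
        return "intrude"
    return None

print(classify_event([False, True]))         # enter
print(classify_event([True, False]))         # leave
print(classify_event([False, True, False]))  # cross
print(classify_event([True, True]))          # intrude
```

The same classifier serves both the alert-surface and alert-body cases; only the underlying containment test (point versus surface region, or point versus volume) would differ.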
It is noted that, herein, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Also, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The embodiments in this specification are described in a related manner; identical or similar parts among the embodiments may be cross-referenced, and each embodiment focuses on its differences from the others. In particular, since the system embodiment is substantially similar to the method embodiment, its description is brief, and reference may be made to the corresponding parts of the method embodiment for relevant details.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (12)
1. A video surveillance method, comprising:
when a target moving object is detected during video monitoring of a target scene, determining first-type three-dimensional coordinates of the target moving object in a space coordinate system corresponding to a virtual three-dimensional space, wherein the virtual three-dimensional space is formed in advance based on target three-dimensional information, and the target three-dimensional information is image three-dimensional information of the target scene acquired by an image acquisition device when no moving object is present in the target scene;
obtaining a second type of three-dimensional coordinates of a space alert position in a space coordinate system corresponding to the virtual three-dimensional space, wherein the space alert position is drawn in the virtual three-dimensional space in advance, and the relative position relationship of the space alert position in the virtual three-dimensional space is the same as the relative position relationship of the scene alert position in the target scene;
judging, according to the first-type three-dimensional coordinates and the second-type three-dimensional coordinates, whether the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule;
when the judgment result is yes, triggering an alarm event aiming at the target scene;
the step of determining the first type of three-dimensional coordinates of the target moving object in the space coordinate system corresponding to the virtual three-dimensional space includes:
determining image three-dimensional information of the target moving object acquired by the image acquisition equipment;
obtaining image three-dimensional information of a predetermined reference object, wherein the predetermined reference object is a fixed object in the target scene;
determining the relative position relation of the target moving object and the preset reference object according to the image three-dimensional information of the target moving object and the image three-dimensional information of the preset reference object;
obtaining a reference three-dimensional coordinate of the preset reference object in a space coordinate system corresponding to a virtual three-dimensional space;
and obtaining a first type of three-dimensional coordinates of the target moving object in a space coordinate system corresponding to the virtual three-dimensional space according to the relative position relation and the reference three-dimensional coordinates.
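The coordinate-determination steps of claim 1 can be sketched numerically as follows. This is an illustrative sketch only: the patent does not specify the exact transform, so a simple translation offset between aligned, equal-scale coordinate systems is assumed, and all names are hypothetical.

```python
# Hypothetical sketch of claim 1's coordinate determination: derive the moving
# object's first-type coordinates from its position relative to a fixed
# reference object whose virtual-space coordinates are known.

def relative_offset(obj_img_xyz, ref_img_xyz):
    """Relative position of the moving object w.r.t. the reference object,
    computed from the image three-dimensional information of both."""
    return tuple(o - r for o, r in zip(obj_img_xyz, ref_img_xyz))

def to_space_coords(offset, ref_space_xyz):
    """First-type coordinates in the virtual space: the reference object's
    coordinates plus the relative offset (assumes aligned, equal-scale axes)."""
    return tuple(r + d for r, d in zip(ref_space_xyz, offset))

obj_img = (2.0, 3.0, 1.0)      # moving object, image 3D information
ref_img = (1.0, 1.0, 1.0)      # fixed reference object, image 3D information
ref_space = (10.0, 20.0, 0.0)  # reference object in the virtual space

offset = relative_offset(obj_img, ref_img)  # (1.0, 2.0, 0.0)
print(to_space_coords(offset, ref_space))   # (11.0, 22.0, 0.0)
```

In practice the mapping from image coordinates to the virtual space would involve the camera's calibrated rotation and scale, not just a translation.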
2. The method according to claim 1, wherein the spatial alert location is plotted in a manner comprising:
acquiring gesture operation of a user in the virtual three-dimensional space;
and drawing the space alert position in the virtual three-dimensional space according to the operation track of the gesture operation.
3. The method according to claim 1, wherein the spatial alert location is plotted in a manner comprising:
receiving three-dimensional coordinates of at least two fixed points input by a user, wherein the at least two fixed points uniquely determine a space alert position;
and drawing the space alert position in the virtual three-dimensional space according to the three-dimensional coordinates of the at least two fixed points.
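One way two fixed points can uniquely determine a space alert position, as in claim 3, is as opposite corners of an axis-aligned box. This is only an illustrative assumption (the patent does not fix the geometry), and the names are hypothetical.

```python
# Hypothetical sketch of claim 3: two user-supplied fixed points taken as
# opposite corners of an axis-aligned alert body in the virtual space.

def box_from_points(p1, p2):
    """Return (min_corner, max_corner) of the box spanned by p1 and p2."""
    lo = tuple(min(a, b) for a, b in zip(p1, p2))
    hi = tuple(max(a, b) for a, b in zip(p1, p2))
    return lo, hi

def contains(box, p):
    """True if point p lies inside the box (inclusive of its faces)."""
    lo, hi = box
    return all(l <= v <= h for l, v, h in zip(lo, p, hi))

box = box_from_points((3.0, 0.0, 2.0), (1.0, 4.0, 0.0))
print(box)                             # ((1.0, 0.0, 0.0), (3.0, 4.0, 2.0))
print(contains(box, (2.0, 2.0, 1.0)))  # True
```

An alert line or alert surface would analogously be determined by its fixed points (e.g. two endpoints for a line).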
4. A method according to any one of claims 1 to 3, wherein the type of spatial alert location is an alert line;
the step of judging, according to the first-type three-dimensional coordinates and the second-type three-dimensional coordinates, whether the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule comprises:
judging whether the change in the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates changes from not including the first-type three-dimensional coordinates, to including them, and then to not including them again; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for crossing the alert line.
5. A method according to any one of claims 1 to 3, wherein the type of spatial alert location is an alert surface;
the step of judging, according to the first-type three-dimensional coordinates and the second-type three-dimensional coordinates, whether the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule comprises:
judging whether the change in the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates changes from not including the first-type three-dimensional coordinates, to including them, and then to not including them again; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for crossing the alert surface;
alternatively,
judging whether the change in the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates changes from not including the first-type three-dimensional coordinates to including them; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for entering the alert surface;
alternatively,
judging whether the change in the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates changes from including the first-type three-dimensional coordinates to not including them; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for leaving the alert surface;
alternatively,
judging whether the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates includes the first-type three-dimensional coordinates; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for intruding into the alert surface.
6. A method according to any one of claims 1 to 3, wherein the type of spatial alert location is an alert body;
the step of judging, according to the first-type three-dimensional coordinates and the second-type three-dimensional coordinates, whether the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule comprises:
judging whether the change in the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates changes from not including the first-type three-dimensional coordinates, to including them, and then to not including them again; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for crossing the alert body;
alternatively,
judging whether the change in the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates changes from not including the first-type three-dimensional coordinates to including them; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for entering the alert body;
alternatively,
judging whether the change in the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates changes from including the first-type three-dimensional coordinates to not including them; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for leaving the alert body;
alternatively,
judging whether the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates includes the first-type three-dimensional coordinates; if so, judging that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for intruding into the alert body.
7. A video monitoring apparatus, comprising:
the device comprises a first coordinate determining module, a second coordinate obtaining module, a judging module, a processing module and a drawing module;
the first coordinate determination module is used for determining a first type of three-dimensional coordinates of a target moving object in a space coordinate system corresponding to a virtual three-dimensional space when the target moving object is detected in a video monitoring process aiming at the target scene, wherein the virtual three-dimensional space is formed in advance based on target three-dimensional information, and the target three-dimensional information is image three-dimensional information of the target scene acquired by image acquisition equipment when no moving object exists in the target scene;
the second coordinate obtaining module is configured to obtain a second type of three-dimensional coordinates of a space alert position in a space coordinate system corresponding to the virtual three-dimensional space, where the space alert position is previously drawn in the virtual three-dimensional space by the drawing module, and a relative positional relationship of the space alert position in the virtual three-dimensional space is the same as a relative positional relationship of the scene alert position in the target scene;
the judging module is used for judging whether the position relation between the target moving object in the target scene and the scene warning position meets a preset alarm rule or not according to the first type of three-dimensional coordinates and the second type of three-dimensional coordinates;
the processing module is used for triggering an alarm event aiming at the target scene when the judgment result of the judgment module is yes;
wherein the first coordinate determination module comprises:
a first information obtaining unit configured to determine three-dimensional information of an image of the target moving object acquired by the image acquisition device;
a second information obtaining unit, configured to obtain three-dimensional information of an image of a predetermined reference object, where the predetermined reference object is a fixed object in the target scene;
a relative position relation determining unit configured to determine a relative position relation between the target moving object and the predetermined reference object based on the image three-dimensional information of the target moving object and the image three-dimensional information of the predetermined reference object;
a third information obtaining unit, configured to obtain a reference three-dimensional coordinate of the predetermined reference object in a space coordinate system corresponding to a virtual three-dimensional space;
and the fourth information obtaining unit is used for obtaining the first type of three-dimensional coordinates of the target moving object in a space coordinate system corresponding to the virtual three-dimensional space according to the relative position relation and the reference three-dimensional coordinates.
8. The apparatus of claim 7, wherein the rendering module comprises:
the gesture acquisition unit is used for acquiring gesture operation of a user in the virtual three-dimensional space;
and the first drawing unit is used for drawing the space alert position in the virtual three-dimensional space according to the operation track of the gesture operation.
9. The apparatus of claim 7, wherein the rendering module comprises:
the receiving unit is used for receiving three-dimensional coordinates of at least two fixed points input by a user, and the at least two fixed points uniquely determine a space alert position;
and the second drawing unit is used for drawing the space alert position in the virtual three-dimensional space according to the three-dimensional coordinates of the at least two fixed points.
10. A device according to any one of claims 7 to 9, wherein the type of spatial alert location is an alert line;
the judging module comprises:
a first judging unit, configured to judge whether the change in the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates changes from not including the first-type three-dimensional coordinates, to including them, and then to not including them again; if so, it is judged that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for crossing the alert line.
11. A device according to any one of claims 7 to 9, wherein the type of spatial alert location is an alert surface;
the judging module comprises:
a second judging unit, configured to judge whether the change in the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates changes from not including the first-type three-dimensional coordinates, to including them, and then to not including them again; if so, it is judged that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for crossing the alert surface;
alternatively,
a third judging unit, configured to judge whether the change in the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates changes from not including the first-type three-dimensional coordinates to including them; if so, it is judged that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for entering the alert surface;
alternatively,
a fourth judging unit, configured to judge whether the change in the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates changes from including the first-type three-dimensional coordinates to not including them; if so, it is judged that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for leaving the alert surface;
alternatively,
a fifth judging unit, configured to judge whether the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates includes the first-type three-dimensional coordinates; if so, it is judged that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for intruding into the alert surface.
12. A device according to any one of claims 7 to 9, wherein the type of spatial alert location is an alert body;
the judging module comprises:
a sixth judging unit, configured to judge whether the change in the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates changes from not including the first-type three-dimensional coordinates, to including them, and then to not including them again; if so, it is judged that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for crossing the alert body;
alternatively,
a seventh judging unit, configured to judge whether the change in the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates changes from not including the first-type three-dimensional coordinates to including them; if so, it is judged that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for entering the alert body;
alternatively,
an eighth judging unit, configured to judge whether the change in the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates changes from including the first-type three-dimensional coordinates to not including them; if so, it is judged that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for leaving the alert body;
alternatively,
a ninth judging unit, configured to judge whether the positional relationship between the first-type three-dimensional coordinates and the second-type three-dimensional coordinates satisfies the following: the range formed by the second-type three-dimensional coordinates includes the first-type three-dimensional coordinates; if so, it is judged that the positional relationship between the target moving object in the target scene and the scene alert position meets a preset alarm rule for intruding into the alert body.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611046620.4A CN108111802B (en) | 2016-11-23 | 2016-11-23 | Video monitoring method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108111802A CN108111802A (en) | 2018-06-01 |
CN108111802B true CN108111802B (en) | 2020-06-26 |
Family
ID=62204988
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611046620.4A Active CN108111802B (en) | 2016-11-23 | 2016-11-23 | Video monitoring method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108111802B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110636204B (en) * | 2018-06-22 | 2021-04-20 | 杭州海康威视数字技术股份有限公司 | Face snapshot system |
CN111063145A (en) * | 2019-12-13 | 2020-04-24 | 北京都是科技有限公司 | Intelligent processor for electronic fence |
CN113068000B (en) * | 2019-12-16 | 2023-07-18 | 杭州海康威视数字技术股份有限公司 | Video target monitoring method, device, equipment, system and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011155393A (en) * | 2010-01-26 | 2011-08-11 | Denso It Laboratory Inc | Device and method for displaying image of vehicle surroundings |
CN102436676A (en) * | 2011-09-27 | 2012-05-02 | 夏东 | Three-dimensional reestablishing method for intelligent video monitoring |
CN104902246A (en) * | 2015-06-17 | 2015-09-09 | 浙江大华技术股份有限公司 | Video monitoring method and device |
CN104954747A (en) * | 2015-06-17 | 2015-09-30 | 浙江大华技术股份有限公司 | Video monitoring method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||