CN114608441A - Method for setting up dynamic visual security fence - Google Patents

Method for setting up dynamic visual security fence

Info

Publication number
CN114608441A
CN114608441A
Authority
CN
China
Prior art keywords
dynamic target
dynamic
camera
image
binocular
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011405817.9A
Other languages
Chinese (zh)
Inventor
于海斌
崔龙
白宁
王宏伟
刘钊铭
张峰
田申
许伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Institute of Automation of CAS
Original Assignee
Shenyang Institute of Automation of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Institute of Automation of CAS filed Critical Shenyang Institute of Automation of CAS
Priority to CN202011405817.9A priority Critical patent/CN114608441A/en
Publication of CN114608441A publication Critical patent/CN114608441A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/002Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01PMEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P13/00Indicating or recording presence, absence, or direction, of movement
    • G01P13/02Indicating direction only, e.g. by weather vane
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01PMEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P3/00Measuring linear or angular speed; Measuring differences of linear or angular speeds
    • G01P3/36Devices characterised by the use of optical means, e.g. using infrared, visible, or ultraviolet light
    • G01P3/38Devices characterised by the use of optical means, e.g. using infrared, visible, or ultraviolet light using photographic means
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19613Recognition of a predetermined image pattern or behaviour pattern indicating theft or intrusion

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Power Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Emergency Alarm Devices (AREA)

Abstract

The invention relates to the field of safety protection for unmanned automated workshops in automatic factories, and in particular to a method for establishing a dynamic visual security fence, comprising the following steps: arrange several groups of binocular cameras in the workshop; calibrate all binocular cameras; acquire the three-dimensional coordinates of the diagonal vertices of the early warning area and the shutdown warning area and store them in the corresponding model databases as the basis for evaluating the threat level of a dynamic target; obtain, by a three-frame difference method, a feature image combining the video data collected by the left and right cameras, extract the dynamic target from the feature image, and record the vertex coordinates of the rectangle bounding the dynamic target; compute the three-dimensional coordinates, speed, and direction of motion of the dynamic target by the triangulation principle; and comprehensively judge the degree and trend of the dynamic target's intrusion. Because the binocular triangulation step starts only when a foreign object intrudes into the safety alert area, the amount of computation is greatly reduced and the running efficiency of the program is markedly improved.

Description

Method for setting up dynamic visual security fence
Technical Field
The invention relates to the field of safety protection of unmanned automatic workshops of an automatic factory, in particular to a method for establishing a dynamic visual safety fence.
Background
Fine-ore smelting and loading/unloading workshops for mine metal smelting are usually built in suburbs or remote areas of cities. Production with the large automatic hoisting facilities in such workshops is mostly carried out by manually operated semi-automatic equipment, and because the environment is harsh and the unavoidable movement of operators creates visual blind spots, intrusion by unauthorized personnel poses a series of potential dangers on the production floor. The irregularity and random shapes of the ore stockpiles also make conventional safety fences difficult to install and use. With the application of video detection technology in smart factories, the goal of unattended operation of large loading/unloading workshops has been partly achieved, but traditional video monitoring has limitations: operations staff can only remotely monitor part of the workshop, the video streamed back to the main control center still requires manual screening to extract valuable information, and fatigue or negligence of the monitoring personnel still causes judgment errors, whose subjectivity further increases the false-alarm probability.
Disclosure of Invention
The invention aims to provide a binocular-vision-based method for establishing a dynamic visual security fence that can rapidly acquire the three-dimensional coordinates of a dynamic target.
The purpose of the invention can be realized by the following technical scheme: a dynamic visual security fence establishment method, comprising the steps of:
1) arranging a plurality of groups of binocular cameras in a workshop, and enabling a monitoring range formed by all the binocular cameras to cover a workshop area;
2) calibrating all binocular cameras to obtain internal parameters and external parameters of the binocular cameras;
3) dividing an early warning area and a shutdown warning area within the workshop area covered by the monitoring range formed by all the binocular cameras, acquiring the three-dimensional coordinate data of the diagonal vertices of the early warning area and the shutdown warning area through the obtained internal and external parameters of the binocular cameras, establishing the cuboid spaces of the early warning area and the shutdown warning area, and storing them in the corresponding model databases as the basis for evaluating the threat level of a dynamic target;
4) the left camera and the right camera of each binocular camera simultaneously acquire video data, a characteristic image obtained by combining the video data acquired by the left camera and the right camera is obtained through a three-frame difference method, a dynamic target is extracted from the characteristic image, and vertex coordinate data of a rectangular range of the dynamic target are recorded;
5) each binocular camera calculates the three-dimensional coordinate value, the speed and the motion direction of the dynamic target according to the vertex coordinate data of the dynamic target rectangular range and the corresponding characteristic image by the triangulation principle and sends the three-dimensional coordinate value, the speed and the motion direction to the background processor;
6) and the background processor comprehensively judges the intrusion degree and the intrusion trend of the dynamic target according to the combination of the three-dimensional coordinate value, the speed and the motion direction of the dynamic target.
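As a minimal sketch of step 3, each warning area can be stored as an axis-aligned cuboid defined by its two diagonal vertices, and a target's three-dimensional coordinates tested against it; the class and names below are illustrative, not from the patent.

```python
# Sketch of step 3: each warning region is an axis-aligned cuboid given by
# two diagonal vertices (as measured by the calibrated binocular cameras),
# and a dynamic target's 3-D coordinates are tested for containment.
# The class name Region and method contains() are illustrative assumptions.

class Region:
    def __init__(self, v_min, v_max):
        # Store component-wise min/max so either order of the two diagonal
        # vertices works.
        self.lo = tuple(min(a, b) for a, b in zip(v_min, v_max))
        self.hi = tuple(max(a, b) for a, b in zip(v_min, v_max))

    def contains(self, p):
        # Point p = (X, Y, Z) lies in the cuboid iff every coordinate is
        # between the corresponding lo and hi bounds.
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

# Illustrative regions: the shutdown area nested inside the early warning area.
early_warning = Region((0.0, 0.0, 0.0), (10.0, 8.0, 3.0))
shutdown = Region((2.0, 2.0, 0.0), (8.0, 6.0, 3.0))

target = (5.0, 4.0, 1.2)
print(early_warning.contains(target), shutdown.contains(target))  # True True
```

A point inside the shutdown cuboid would then map to the highest threat level in step 6.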
The step 2) is specifically as follows:
sequentially calibrating the binocular cameras by a checkerboard calibration method to obtain the normalized focal lengths f_u, f_v of the two cameras and the principal-point offsets u_0, v_0 between the pixel coordinates of the image center and the image origin;
The normalized focal lengths f_u, f_v are obtained from:

f_u = f / d_u,    f_v = f / d_v

where f is the focal length of the camera, and d_u and d_v are the sizes of a unit pixel along the camera's u-axis and v-axis respectively;
obtaining the internal parameter matrix K of the left and right cameras of the binocular camera:

K = | f_u   0    u_0 |
    |  0   f_v   v_0 |
    |  0    0     1  |
obtaining the external parameters of the left and right cameras of the binocular camera through stereo calibration, the external parameters comprising a rotation matrix R and a translation vector T.
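The intrinsic parameters above can be sketched in a few lines; the numeric values (an 8 mm lens with 4 µm pixels and a 1280×720 principal point) are purely illustrative.

```python
# Sketch of step 2's intrinsic model: normalized focal lengths f_u = f/d_u,
# f_v = f/d_v and the internal parameter matrix K. All numbers illustrative.

def intrinsic_matrix(f, du, dv, u0, v0):
    """Build K = [[f/du, 0, u0], [0, f/dv, v0], [0, 0, 1]]."""
    return [[f / du, 0.0, u0],
            [0.0, f / dv, v0],
            [0.0, 0.0, 1.0]]

# Example: 8 mm lens, 4 um square pixels, principal point at (640, 360).
K = intrinsic_matrix(8e-3, 4e-6, 4e-6, 640.0, 360.0)
print(K[0][0], K[1][1])  # f_u = f_v, roughly 2000 pixels
```

In practice these values come from the checkerboard calibration rather than datasheet numbers.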
The step 4) is specifically as follows:
the left camera and the right camera each select three consecutive frames from their respective video data, denoted I_{k-2}, I_{k-1}, I_k; the current frame I_k is differenced with the previous frame I_{k-1} and with the frame before that, I_{k-2}, to obtain difference images D_1(x1, y1) and D_2(x2, y2); processing the difference images D_1(x1, y1) and D_2(x2, y2) yields the feature image F(x, y).
Processing the difference images D_1(x1, y1) and D_2(x2, y2) to obtain the feature image F(x, y), specifically:

the difference images D_1(x1, y1) and D_2(x2, y2) are thresholded against a preset threshold C_n:

T_1(x1, y1) = 1 if D_1(x1, y1) > C_n, else 0
T_2(x2, y2) = 1 if D_2(x2, y2) > C_n, else 0

T_1 and T_2 denote the image regions retained by thresholding, where 1 marks a valid pixel and 0 marks a pixel discarded for falling below the preset threshold;

the pixel-wise AND operation then filters out the influence region formed by the dynamic target's change of shape, yielding the feature image F(x, y), namely:

F(x, y) = T_1(x1, y1) ∧ T_2(x2, y2)
and selecting the minimum inscribed rectangle of the dynamic target in the characteristic image F (x, y), and recording the vertex coordinate data of the dynamic target rectangle range.
The step 5) calculates the three-dimensional coordinate value, the speed and the movement direction of the dynamic target by the triangulation principle, and specifically comprises the following steps:
within the vertex coordinates of the dynamic target's rectangular range, the left and right cameras match their respective images inside the current rectangular range, and the original pixel coordinates of at least 3 pairs of feature points are stored, the original pixel coordinates being those of the matched feature points inside the minimum inscribed rectangle of the dynamic target;
the disparity value is then obtained from the pixel coordinates (u_l, v_l) and (u_r, v_r) of the same pair of feature points on the left and right images, namely:

d = u_l − u_r
restoring three-dimensional coordinate values (X, Y, Z) of the dynamic target in the detected environment by combining the triangulation principle;
then, according to the position of the three-dimensional coordinate value (X, Y, Z) of the dynamic target in the detected environment, acquiring the speed V of the dynamic target according to the two characteristic images at the interval of delta t time;
merging the two characteristic images according to the time interval delta t, selecting a reference line vector on the merged characteristic images, obtaining the position connecting lines of the same target point on the merged characteristic images at two moments, and obtaining the motion direction of the dynamic target by utilizing the included angle between the reference line vector and the connecting lines.
The obtaining of the speed V of the dynamic target according to the two characteristic images at the interval of Δ t time specifically includes:
when the three-dimensional coordinates (X, Y, Z) of the dynamic target lie in the early warning region, the position of the dynamic target is determined again after a set interval Δt, giving the forward speed V of the dynamic target, namely:

V = sqrt((X_2 − X_1)^2 + (Y_2 − Y_1)^2 + (Z_2 − Z_1)^2) / Δt

where (X_1, Y_1, Z_1) are the three-dimensional coordinates at which the binocular camera first determines that the dynamic target has entered the early warning region, and (X_2, Y_2, Z_2) are the three-dimensional coordinates of the dynamic target as determined by the binocular camera after the interval Δt.
The obtaining of the motion direction of the dynamic target by using the included angle between the reference line vector and the connection line specifically includes:
superimposing the feature image in which the binocular camera first judges the dynamic target to have entered the early warning area onto the feature image obtained by the binocular camera after the interval Δt, and selecting on the merged feature image a reference line vector r and the connecting line p between the two feature points of the dynamic target at times t_1 and t_2;

the included angle between the reference line r and the connecting line p gives the advancing direction of the dynamic target, namely:

θ = arccos( (r · p) / (|r| · |p|) )
the step 6) is specifically as follows:
(1) when the dynamic target is a fixed-shape, marked and registered operating tool, the dynamic visual detection function is masked for it, it is judged not to belong to the early-warning-area database, and no alarm action is taken;
(2) when the system database holds no mark or registration information for the dynamic target and its three-dimensional coordinates (X, Y, Z) are judged to have entered the early warning area, the speed and advancing direction of the dynamic target are acquired;
if the advancing speed V and direction θ of the dynamic target are both below their preset thresholds, then although the dynamic target is in the early warning area its behavior does not threaten equipment or production, and the early warning may be omitted;
if at least one of the advancing speed V and the direction θ of the dynamic target exceeds its threshold, early warning measures are taken;
(3) if the three-dimensional coordinates (X, Y, Z) of the dynamic target are judged to have broken into the shutdown warning area, the alarm starts immediately and all automated equipment in the corresponding production area is safely braked.
The invention has the following beneficial effects and advantages:
1. the depth value of the target to be detected can be obtained, and the real three-dimensional coordinates of the detected object restored;
2. the area in which the dynamic target lies can be judged from its known three-dimensional coordinates, its direction and speed obtained, the foreign-object intrusion level comprehensively judged, and reasonable countermeasures taken;
3. the binocular triangulation step starts only when a foreign object breaks into the safety alert area, which greatly reduces the amount of computation and markedly improves the running efficiency of the program;
4. establishing a database of the three-dimensional coordinates of the vertices of the warning areas greatly reduces the judgment time and computation of the safety system; database changes are not limited by the layout of the field hardware equipment and can be flexibly upgraded and rebuilt.
Drawings
FIG. 1 is a flowchart of a method for extracting dynamic targets and determining intrusion level according to the present invention;
FIG. 2 is a schematic diagram of the present invention for obtaining the direction of motion of a dynamic target;
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
As shown in fig. 1, which is a flowchart of a method for extracting a dynamic target and determining an intrusion degree according to the present invention, the present invention is a method for setting up a dynamic visual security fence, comprising the following steps:
1) Several groups of binocular cameras are arranged in the areas of the workshop requiring safety protection, so that the picture detected by the cameras covers the areas that personnel can freely access, including closed areas that could be illegally crossed, but excluding the movable automated equipment inside the danger area.
2) The binocular cameras are calibrated in turn by a checkerboard calibration method to obtain the normalized focal lengths f_u, f_v of the two cameras and the principal-point offsets u_0, v_0 between the pixel coordinates of the image center and the image origin.
The normalized focal lengths f_u, f_v are calculated as:

f_u = f / d_u,    f_v = f / d_v

where f is the focal length of the camera, and d_u and d_v are the sizes of a unit pixel along the camera's u-axis and v-axis respectively.
The internal parameter matrix K of the left and right cameras of the binocular camera is obtained:

K = | f_u   0    u_0 |
    |  0   f_v   v_0 |
    |  0    0     1  |
The external parameters of the left and right cameras of the binocular camera are obtained through stereo calibration, comprising a rotation matrix R and a translation vector T.
3) The calibrated binocular cameras are used to calculate the three-dimensional coordinate values of the vertices of the early warning area and the shutdown warning area, and a database of the diagonal vertices of the cuboids corresponding to the monitored warning areas is constructed for later evaluation of the threat level of a dynamic target.
The dynamic target is then extracted by a three-frame difference method, and the pixel coordinates of the same feature point within the dynamic target are further extracted.
4) The left and right cameras each select three consecutive frames from their respective video data, denoted I_{k-2}, I_{k-1}, I_k; the current frame I_k is differenced with the previous frame I_{k-1} and with the frame before that, I_{k-2}, to obtain difference images D_1(x1, y1) and D_2(x2, y2); processing the difference images D_1(x1, y1) and D_2(x2, y2) yields the feature image F(x, y).
Processing the difference images D_1(x1, y1) and D_2(x2, y2) to obtain the feature image F(x, y), specifically:

the difference images D_1(x1, y1) and D_2(x2, y2) are thresholded against a preset threshold C_n:

T_1(x1, y1) = 1 if D_1(x1, y1) > C_n, else 0
T_2(x2, y2) = 1 if D_2(x2, y2) > C_n, else 0

T_1 and T_2 denote the image regions retained by thresholding, where 1 marks a valid pixel and 0 marks a pixel discarded for falling below the preset threshold;

the pixel-wise AND operation then filters out the influence region formed by the dynamic target's change of shape, yielding the feature image F(x, y), namely:

F(x, y) = T_1(x1, y1) ∧ T_2(x2, y2)
and selecting the minimum inscribed rectangle of the dynamic target in the characteristic image F (x, y), and recording the vertex coordinate data of the dynamic target rectangle range.
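The three-frame difference, thresholding, and pixel-wise AND described above can be sketched on tiny list-of-lists "frames"; the function names and the toy frames below are illustrative, not from the patent.

```python
# Minimal sketch of step 4's three-frame difference on small grayscale frames
# represented as lists of lists. C_n is the preset threshold; the pixel-wise
# AND of the two thresholded difference images gives the feature image F.

def frame_diff(a, b, cn):
    # Threshold |a - b| against cn: 1 = valid pixel, 0 = discarded.
    return [[1 if abs(pa - pb) > cn else 0 for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

def feature_image(ik2, ik1, ik, cn):
    d1 = frame_diff(ik, ik1, cn)   # current frame vs previous frame
    d2 = frame_diff(ik, ik2, cn)   # current frame vs the frame before that
    # Pixel-wise AND removes the region affected by the target's shape change.
    return [[p & q for p, q in zip(r1, r2)] for r1, r2 in zip(d1, d2)]

# Toy frames: a blob brightening in the middle pixel across three frames.
ik2 = [[10, 10, 10], [10, 10, 10]]
ik1 = [[10, 50, 10], [10, 10, 10]]
ik  = [[10, 90, 10], [10, 10, 10]]
print(feature_image(ik2, ik1, ik, cn=20))  # [[0, 1, 0], [0, 0, 0]]
```

The minimum inscribed rectangle of the nonzero pixels in F then gives the vertex coordinates recorded above.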
Using the vertex coordinates of the foreign-object rectangle, the foreign object is matched between the left and right images only within that rectangular range, and finally the original pixel coordinates of the 3 or more most reliable pairs of feature points are stored.
The disparity value is calculated from the pixel coordinates (u_l, v_l) and (u_r, v_r) of the same feature point on the left and right images:

d = u_l − u_r
The three-dimensional coordinate values (X, Y and Z) of the dynamic target under the detected environment can be restored by combining the triangulation principle.
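A hedged sketch of that recovery step for a rectified stereo pair follows; the patent invokes the triangulation principle without spelling out the formulas, so the standard pinhole relations Z = f_u·b/d, X = (u_l − u_0)·Z/f_u, Y = (v_l − v_0)·Z/f_v are assumed, with the baseline b taken from the extrinsic translation. All numbers are illustrative.

```python
# Assumed standard triangulation for a rectified binocular pair: the patent's
# d = u_l - u_r disparity plus the pinhole back-projection. The baseline b
# would come from the extrinsic translation between the two cameras.

def triangulate(ul, vl, ur, fu, fv, u0, v0, b):
    d = ul - ur                  # disparity of the matched feature pair
    Z = fu * b / d               # depth from the triangulation principle
    X = (ul - u0) * Z / fu       # back-project through the left camera
    Y = (vl - v0) * Z / fv
    return X, Y, Z

# Illustrative: 2000-pixel normalized focal length, 0.25 m baseline,
# a feature at (700, 400) left / (650, 400) right -> 50 px disparity.
X, Y, Z = triangulate(ul=700.0, vl=400.0, ur=650.0,
                      fu=2000.0, fv=2000.0, u0=640.0, v0=360.0, b=0.25)
print(round(Z, 3))  # depth: 2000 * 0.25 / 50 = 10.0
```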
If the three-dimensional coordinates (X, Y, Z) of the intruding foreign object are detected to lie within the early warning region, the position of the foreign object is determined again after an interval Δt, giving the advancing speed V of the intruding foreign object:

V = sqrt((X_2 − X_1)^2 + (Y_2 − Y_1)^2 + (Z_2 − Z_1)^2) / Δt

where (X_1, Y_1, Z_1) are the three-dimensional coordinates at which the binocular camera first determines that the dynamic target has entered the early warning region, and (X_2, Y_2, Z_2) are the three-dimensional coordinates of the dynamic target as determined by the binocular camera after the interval Δt.
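Assuming the speed is straight-line displacement over Δt, as the formula above states, the computation is a one-liner:

```python
# Forward speed of step 5: Euclidean distance between the two position fixes
# (X1, Y1, Z1) and (X2, Y2, Z2) divided by the interval dt. Values illustrative.

import math

def forward_speed(p1, p2, dt):
    return math.dist(p1, p2) / dt

v = forward_speed((1.0, 2.0, 0.0), (4.0, 6.0, 0.0), dt=2.0)
print(v)  # 5 m travelled in 2 s -> 2.5
```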
As shown in fig. 2, a schematic diagram of obtaining the direction of motion of a dynamic target according to the present invention, the included angle between a reference line vector and a connecting line gives the direction of motion of the dynamic target, specifically:

superimposing the feature image in which the binocular camera first judges the dynamic target to have entered the early warning area onto the feature image obtained by the binocular camera after the interval Δt, and selecting on the merged feature image a reference line vector r and the connecting line p between the two feature points of the dynamic target at times t_1 and t_2;

the included angle between the reference line r and the connecting line p gives the advancing direction of the dynamic target, namely:

θ = arccos( (r · p) / (|r| · |p|) )
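The angle computation can be sketched with plain vectors; the reference direction and the displacement used below are illustrative.

```python
# Direction of motion as the angle between the reference line vector r and the
# line p joining the target's positions at t1 and t2:
# theta = arccos(r.p / (|r||p|)). Vector values are illustrative.

import math

def direction_angle(r, p):
    dot = sum(a * b for a, b in zip(r, p))
    norm_r = math.hypot(*r)
    norm_p = math.hypot(*p)
    return math.degrees(math.acos(dot / (norm_r * norm_p)))

r = (1.0, 0.0)          # reference line along the image u-axis
p = (1.0, 1.0)          # target moved diagonally between t1 and t2
print(direction_angle(r, p))  # ~45 degrees
```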
5) An alarm threshold is set according to the type of the monitored area, and the safe alert speed interval and safe advancing-direction interval are set according to the layout of the monitored scene. The logic for determining the dynamic target's intrusion level is as follows:
1. If the dynamic target is a fixed-shape operating tool, such as a truck, that is marked and registered in the system, the dynamic visual detection function is masked for it; it does not belong to the early-warning-area database and no alarm action is taken.
2. If the system database holds no mark or registration information for the dynamic target and its three-dimensional coordinates (X, Y, Z) are judged to have entered the early warning area, the speed and advancing-direction factors of the object are introduced. If the advancing speed V and direction θ of the foreign object both remain within the preset thresholds, the foreign object is in the early warning area but its behavior does not threaten equipment or production, and the early warning can be omitted; if at least one of the advancing speed V and the direction θ of the foreign object exceeds its threshold, early warning measures such as an audible-visual alarm are taken.
3. If the three-dimensional coordinates (X, Y, Z) of the foreign object are judged to have broken into the shutdown warning area, the alarm program is started immediately and all automated equipment in the corresponding production area is safely braked.
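The three decision rules can be sketched as one function; the threshold constants and the returned labels are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of the intrusion-level logic: registered tools are ignored;
# the shutdown area always trips the alarm; in the early warning area the
# speed and direction are checked against thresholds. V_MAX, THETA_MAX and
# the level names are illustrative assumptions.

V_MAX = 0.5        # assumed safe forward-speed threshold (m/s)
THETA_MAX = 30.0   # assumed safe advancing-direction threshold (degrees)

def intrusion_level(registered, in_warning, in_shutdown, v=0.0, theta=0.0):
    if registered:                    # marked, registered operating tool
        return "ignore"
    if in_shutdown:                   # break-in: alarm and safe-brake equipment
        return "alarm_and_brake"
    if in_warning:
        if v < V_MAX and theta < THETA_MAX:
            return "no_warning"       # present but not threatening
        return "early_warning"        # audible-visual warning measures
    return "none"

print(intrusion_level(False, True, False, v=1.2, theta=10.0))  # early_warning
```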

Claims (8)

1. A method of dynamic visual security fence establishment, comprising the steps of:
1) arranging a plurality of groups of binocular cameras in a workshop, and enabling a monitoring range formed by all the binocular cameras to cover a workshop area;
2) calibrating all binocular cameras to obtain internal parameters and external parameters of the binocular cameras;
3) dividing an early warning area and a shutdown warning area within the workshop area covered by the monitoring range formed by all the binocular cameras, acquiring the three-dimensional coordinate data of the diagonal vertices of the early warning area and the shutdown warning area through the obtained internal and external parameters of the binocular cameras, establishing the cuboid spaces of the early warning area and the shutdown warning area, and storing them in the corresponding model databases as the basis for evaluating the threat level of a dynamic target;
4) the left camera and the right camera of each binocular camera simultaneously acquire video data, a characteristic image obtained by combining the video data acquired by the left camera and the right camera is obtained through a three-frame difference method, a dynamic target is extracted from the characteristic image, and vertex coordinate data of a rectangular range of the dynamic target are recorded;
5) each binocular camera calculates the three-dimensional coordinate value, the speed and the motion direction of the dynamic target according to the vertex coordinate data of the dynamic target rectangular range and the corresponding characteristic image by the triangulation principle and sends the three-dimensional coordinate value, the speed and the motion direction to the background processor;
6) and the background processor comprehensively judges the intrusion degree and the intrusion trend of the dynamic target according to the combination of the three-dimensional coordinate value, the speed and the motion direction of the dynamic target.
2. The method as claimed in claim 1, wherein the step 2) is specifically as follows:
sequentially calibrating the binocular cameras by a checkerboard calibration method to obtain the normalized focal lengths f_u, f_v of the two cameras and the principal-point offsets u_0, v_0 between the pixel coordinates of the image center and the image origin;
The normalized focal lengths f_u, f_v are obtained from:

f_u = f / d_u,    f_v = f / d_v

where f is the focal length of the camera, and d_u and d_v are the sizes of a unit pixel along the camera's u-axis and v-axis respectively;
obtaining the internal parameter matrix K of the left and right cameras of the binocular camera:

K = | f_u   0    u_0 |
    |  0   f_v   v_0 |
    |  0    0     1  |
obtaining the external parameters of the left and right cameras of the binocular camera through stereo calibration, the external parameters comprising a rotation matrix R and a translation vector T.
3. The dynamic visual security fence establishment method according to claim 1, wherein said step 4) is specifically:
the left camera and the right camera each select three consecutive frames from their respective video data, denoted I_{k-2}, I_{k-1}, I_k; the current frame I_k is differenced with the previous frame I_{k-1} and with the frame before that, I_{k-2}, to obtain difference images D_1(x1, y1) and D_2(x2, y2); processing the difference images D_1(x1, y1) and D_2(x2, y2) yields the feature image F(x, y).
4. The method as claimed in claim 3, wherein processing the difference images D_1(x1, y1) and D_2(x2, y2) to obtain the feature image F(x, y) specifically comprises:
thresholding the difference images D_1(x1, y1) and D_2(x2, y2) against a preset threshold C_n:

T_1(x1, y1) = 1 if D_1(x1, y1) > C_n, else 0
T_2(x2, y2) = 1 if D_2(x2, y2) > C_n, else 0

T_1 and T_2 denote the image regions retained by thresholding, where 1 marks a valid pixel and 0 marks a pixel discarded for falling below the preset threshold;
using the pixel-wise AND operation to filter out the influence region formed by the dynamic target's change of shape, thereby obtaining the feature image F(x, y), namely:
F(x,y)=T1(x1,y1)∧T2(x2,y2)
and selecting the minimum inscribed rectangle of the dynamic target in the characteristic image F (x, y), and recording the vertex coordinate data of the dynamic target rectangle range.
5. The method as claimed in claim 1, wherein the step 5) calculates the three-dimensional coordinate value, the speed and the moving direction of the dynamic target by triangulation, specifically:
within the vertex coordinates of the dynamic target's rectangular range, the left and right cameras match their respective images inside the current rectangular range, and the original pixel coordinates of at least 3 pairs of feature points are stored, the original pixel coordinates being those of the matched feature points inside the minimum inscribed rectangle of the dynamic target;
the disparity value is then obtained from the pixel coordinates (u_l, v_l) and (u_r, v_r) of the same pair of feature points on the left and right images, namely:

d = u_l − u_r
restoring three-dimensional coordinate values (X, Y, Z) of the dynamic target in the detected environment by combining the triangulation principle;
then, according to the position of the three-dimensional coordinate value (X, Y, Z) of the dynamic target in the detected environment, acquiring the speed V of the dynamic target according to the two characteristic images at the interval of delta t time;
merging the two characteristic images according to the time interval delta t, selecting a reference line vector on the merged characteristic images, obtaining the position connecting lines of the same target point on the merged characteristic images at two moments, and obtaining the motion direction of the dynamic target by utilizing the included angle between the reference line vector and the connecting lines.
6. The method as claimed in claim 5, wherein the obtaining of the velocity V of the dynamic target according to the two characteristic images at the interval Δ t time comprises:
when the three-dimensional coordinates (X, Y, Z) of the dynamic target are within the early warning area, the position of the dynamic target is determined again after a set time Δt, so as to obtain the advancing speed V of the dynamic target, namely:
V = √((X2 − X1)² + (Y2 − Y1)² + (Z2 − Z1)²) / Δt
where (X1, Y1, Z1) is the three-dimensional coordinate at which the binocular camera first determines that the dynamic target has entered the early warning area, and (X2, Y2, Z2) is the three-dimensional coordinate of the dynamic target determined by the binocular camera after the time interval Δt.
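The speed computation in claim 6 reduces to the Euclidean distance between the two fixes divided by the interval. A minimal sketch, with positions given as (X, Y, Z) triples:

```python
import math

def forward_speed(p1, p2, dt):
    """Advancing speed V between two binocular fixes p1 = (X1, Y1, Z1)
    and p2 = (X2, Y2, Z2) taken dt seconds apart."""
    dx, dy, dz = (b - a for a, b in zip(p1, p2))
    return math.sqrt(dx * dx + dy * dy + dz * dz) / dt
```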
7. The method as claimed in claim 5, wherein obtaining the moving direction of the dynamic target from the included angle between the reference line vector and the connecting line comprises:
merging the feature image in which the binocular camera first determines that the dynamic target has entered the early warning area with the feature image determined by the binocular camera after the interval Δt, and selecting a reference line vector l on the merged feature image;
taking the connecting line s between the positions of the same feature point of the dynamic target at times t1 and t2;
obtaining the advancing direction of the dynamic target from the included angle θ between the reference line vector l and the connecting line s, namely:
θ = arccos((l · s) / (|l| |s|))
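The included angle in claim 7 is the standard angle between two vectors via the normalised dot product. A minimal two-dimensional sketch in the merged image plane (the vectors here are illustrative, not from the patent):

```python
import math

def heading_angle(ref, conn):
    """Included angle theta (degrees) between the reference line vector
    `ref` and the connecting line `conn` of the target positions at t1
    and t2, both given as 2-D (dx, dy) vectors in the merged image."""
    dot = ref[0] * conn[0] + ref[1] * conn[1]
    norm = math.hypot(*ref) * math.hypot(*conn)
    return math.degrees(math.acos(dot / norm))
```

For example, a target moving perpendicular to the reference line yields θ = 90°, while motion along the reference line yields θ = 0°.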
8. The method as claimed in claim 1, wherein step 6) is specifically as follows:
(1) when the dynamic target has a fixed shape and is marked as a registered operating tool, the dynamic visual detection function is masked, the dynamic target is judged not to belong to the early-warning-area database, and no alarm operation is taken;
(2) when the system database contains no mark or registration information for the dynamic target and its three-dimensional coordinates (X, Y, Z) indicate that it has entered the early warning area, the speed and advancing direction of the dynamic target are acquired;
if the advancing speed V and the direction θ of the dynamic target are both smaller than their preset thresholds, then although the dynamic target is within the early warning area, its behaviour trend poses no threat to equipment or production, and the early warning can be omitted;
if at least one of the advancing speed V and the direction θ of the dynamic target exceeds its threshold, early warning measures are taken;
(3) if the three-dimensional coordinates (X, Y, Z) of the dynamic target are judged to have intruded into the alarm shutdown area, the alarm is started immediately and all automated equipment in the corresponding production area is safely braked.
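The three-branch decision of claim 8 can be sketched as a small state function. The threshold values `V_max` and `theta_max`, and the returned action labels, are illustrative assumptions, not values from the patent:

```python
def fence_action(registered, in_warning_zone, in_alarm_zone, V, theta,
                 V_max=0.5, theta_max=30.0):
    """Sketch of the decision logic: registered tools are ignored,
    alarm-zone intrusion stops production, and warning-zone targets are
    assessed by speed V and direction angle theta (assumed thresholds)."""
    if registered:
        return "ignore"            # (1) registered tool: detection masked
    if in_alarm_zone:
        return "alarm_and_brake"   # (3) intrusion into alarm shutdown area
    if in_warning_zone:
        if V < V_max and theta < theta_max:
            return "no_warning"    # (2) behaviour trend is not a threat
        return "early_warning"     # speed or direction exceeds threshold
    return "ignore"                # outside both monitored areas
```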
CN202011405817.9A 2020-12-03 2020-12-03 Method for setting up dynamic visual security fence Withdrawn CN114608441A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011405817.9A CN114608441A (en) 2020-12-03 2020-12-03 Method for setting up dynamic visual security fence

Publications (1)

Publication Number Publication Date
CN114608441A true CN114608441A (en) 2022-06-10

Family

ID=81856284

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011405817.9A Withdrawn CN114608441A (en) 2020-12-03 2020-12-03 Method for setting up dynamic visual security fence

Country Status (1)

Country Link
CN (1) CN114608441A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105869166A (en) * 2016-03-29 2016-08-17 北方工业大学 Human body action identification method and system based on binocular vision
CN109285309A (en) * 2018-09-30 2019-01-29 国网黑龙江省电力有限公司电力科学研究院 A kind of intrusion target real-time detecting system based on transmission system
CN110853002A (en) * 2019-10-30 2020-02-28 上海电力大学 Transformer substation foreign matter detection method based on binocular vision

Similar Documents

Publication Publication Date Title
CN110660186B (en) Method and device for identifying target object in video image based on radar signal
CN110232380B (en) Fire night scene restoration method based on Mask R-CNN neural network
KR101935399B1 (en) Wide Area Multi-Object Monitoring System Based on Deep Neural Network Algorithm
JP5551595B2 (en) Runway monitoring system and method
KR101735365B1 (en) The robust object tracking method for environment change and detecting an object of interest in images based on learning
CN104318206B (en) A kind of obstacle detection method and device
CN112800860B (en) High-speed object scattering detection method and system with coordination of event camera and visual camera
CN107657626B (en) Method and device for detecting moving target
CN110853002A (en) Transformer substation foreign matter detection method based on binocular vision
CN111932596B (en) Method, device and equipment for detecting camera occlusion area and storage medium
CN105976398A (en) Daylight fire disaster video detection method
JP3486229B2 (en) Image change detection device
CN114241370A (en) Intrusion identification method and device based on digital twin transformer substation and computer equipment
KR101125936B1 (en) Motion Monitoring Apparatus for Elevator Security and Method thereof
CN111046809A (en) Obstacle detection method, device and equipment and computer readable storage medium
JP5222908B2 (en) Collapse detection system and collapse detection method
CN116524017B (en) Underground detection, identification and positioning system for mine
CN114608441A (en) Method for setting up dynamic visual security fence
CN116452976A (en) Underground coal mine safety detection method
Zhu et al. Detection and recognition of abnormal running behavior in surveillance video
JPH05300516A (en) Animation processor
US20230022429A1 (en) 2021-07-26 2023-01-26 Systems and methods for efficiently sensing collision threats
CN114372966A (en) Camera damage detection method and system based on average light stream gradient
JP4998955B2 (en) Collapse detection system and method
Son et al. Detection of nearby obstacles with monocular vision for earthmoving operations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220610