CN114758305A - Method for constructing intrusion early warning monitoring database - Google Patents
- Publication number: CN114758305A (application CN202210674136.5A)
- Authority
- CN
- China
- Prior art keywords
- target
- monitoring
- data
- picture
- pictures
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F18/00—Pattern recognition › G06F18/20—Analysing › G06F18/23—Clustering techniques
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F18/00—Pattern recognition › G06F18/20—Analysing › G06F18/24—Classification techniques
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F18/00—Pattern recognition › G06F18/20—Analysing › G06F18/25—Fusion techniques
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a method for constructing an intrusion early warning monitoring database, comprising the following steps: shooting a plurality of pictures of target protection information with cameras arranged at multiple angles, the categories of target protection information covering all target persons and target vehicles; fusing the multi-angle pictures of each target person or target vehicle into a three-dimensional image of that person or vehicle to generate target protection data; acquiring pictures of natural information and generating natural data through deep learning; and entering the target protection data and the natural data together into a monitoring database. By building a monitoring database of target persons and target vehicles and matching against it whenever a monitoring camera captures a picture, the invention improves the accuracy of identifying target persons and target vehicles and reduces the false alarm rate.
Description
Technical Field
The invention relates to the technical field of monitoring and early warning, in particular to a method for constructing an intrusion early warning monitoring database.
Background
In areas that need protection, such as large factories, power stations and farms, intrusion detection has traditionally required many patrol personnel to watch for external intrusion. Because patrol personnel rely on visual observation, some intrusions inevitably slip through. At present, large numbers of monitoring cameras are therefore deployed to monitor the surroundings of such areas in real time, so that even the slightest disturbance is captured promptly and uploaded to the backend.
However, when a monitoring camera films the field environment around the area, it is difficult to distinguish whether a person or a passing vehicle in the picture is permitted to enter the protected area. As a result, an early warning alarm may be raised even when a person who is allowed to enter the protected area passes by. The accuracy of identifying whether persons and vehicles are permitted to enter the protected area therefore needs to be improved, so that it can be judged accurately whether an early warning is required, and the false alarm rate is reduced.
Disclosure of Invention
The invention aims to provide a method for constructing an intrusion early warning monitoring database that builds a monitoring database of target persons and target vehicles and matches monitoring pictures against it, thereby improving the accuracy of identifying target persons and target vehicles and reducing the false alarm rate.
In order to achieve the above object, the embodiments of the present invention provide the following technical solutions:
a method for constructing an intrusion early warning monitoring database comprises the following steps:
step S1, shooting a plurality of pictures of target protection information using cameras arranged at a plurality of angles, wherein the categories of the target protection information cover all target persons and target vehicles;
step S2, fusing a plurality of multi-angle pictures of each target person or target vehicle in the target protection information into a three-dimensional image of the target person or target vehicle to generate target protection data;
step S3, acquiring a plurality of pictures of natural information, and generating natural data through deep learning;
and step S4, inputting the target protection data and the natural data into a monitoring database together.
In this scheme, the multi-angle pictures of each target person and target vehicle are fused into a three-dimensional image, generating target protection data that are stored in the monitoring database. Because every target person and target vehicle is photographed and fused from multiple angles, their full-body features can be recognized more accurately and quickly in practical application, which improves the accuracy of identification and judgment.
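Steps S1–S4 can be sketched as a small pipeline. The sketch below is purely illustrative: the function names, the dictionary layout, and the `fuse_3d`/`learn_natural` callbacks are assumptions, not part of the patent.

```python
# Hypothetical sketch of steps S1-S4; all names are illustrative.

def build_monitoring_database(target_pictures, natural_pictures, fuse_3d, learn_natural):
    """S1/S2: fuse the multi-angle pictures of every target person or vehicle
    into 3-D target protection data; S3: learn natural data from pictures of
    natural information; S4: store both together in the monitoring database."""
    database = {"target": {}, "natural": None}
    for target_id, pictures in target_pictures.items():
        # each target's multi-angle pictures are fused into one 3-D record
        database["target"][target_id] = fuse_3d(pictures)
    database["natural"] = learn_natural(natural_pictures)
    return database
```

The callbacks let the sketch stay agnostic about the actual fusion and deep-learning implementations, which the patent describes separately.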
Step S5, identifying and classifying the monitoring data through deep learning: if the data are classified as natural data, they are not tracked; if they are classified as target protection data, the monitoring data are compared with the target protection data in the monitoring database. If the monitoring data match target protection data in the monitoring database, they are not tracked; otherwise tracking and alarming are carried out.
The step S2 includes the following steps. The multi-angle pictures of the target persons are denoted $P_{n,i}$, where $P_{n,i}$ represents the $i$-th picture of the $n$-th target person, $i \in \{1,\dots,I\}$, $I$ is the total number of pictures of the $n$-th target person, $n \in \{1,\dots,N\}$, and $N$ is the total number of target persons.
A three-dimensional coordinate system is set, and all $I$ pictures of the $n$-th target person are fused into a three-dimensional image of that person:
$$C_{datapeo}=\sum_{n=1}^{N}\sum_{i=1}^{I}\omega_{n,i}\Big[\min\lVert c^{*}-c_{n,i}\rVert+\lambda_{X}f^{X}_{n,i}(\alpha)+\lambda_{Y}f^{Y}_{n,i}(r)+\lambda_{Z}f^{Z}_{n,i}(\beta)\Big]$$
wherein $C_{datapeo}$ denotes the three-dimensional images of all target persons; $\omega_{n,i}$ is the picture weight; $\lVert c^{*}-c_{n,i}\rVert$ is the distance between the ideal picture cluster center $c^{*}$ and the cluster center $c_{n,i}$ of the $i$-th picture of the $n$-th target person, and $\min$ indicates that this distance is reduced to a minimum;
$\lambda_{X}$, $\lambda_{Y}$, $\lambda_{Z}$ are the weights on the X, Y and Z axes centered on the origin of the three-dimensional coordinate system; $f^{X}_{n,i}(\alpha)$ is the linear fitting data of the $i$-th picture of the $n$-th target person on the X axis, where $\alpha$ is the angle between the line from any pixel point of the picture to the origin and the X axis; $f^{Y}_{n,i}(r)$ is the linear fitting data on the Y axis, where $r$ is the distance from any pixel point of the picture to the origin, and a circle drawn with radius $r$ passes through the Y axis; $f^{Z}_{n,i}(\beta)$ is the linear fitting data on the Z axis, where $\beta$ is the angle between the line from any pixel point of the picture to the origin and the Z axis.
In this scheme, three-dimensional modeling is carried out from the three axial directions of the target person using parameters such as angle and radius. Because there are a large number of pixel points, a complete three-dimensional image with highly accurate overall features is finally formed through continuous iteration.
The multi-angle pictures of the target vehicles are denoted $Q_{m,j}$, where $Q_{m,j}$ represents the $j$-th picture of the $m$-th target vehicle, $j \in \{1,\dots,J\}$, $J$ is the total number of pictures of the $m$-th target vehicle, $m \in \{1,\dots,M\}$, and $M$ is the total number of target vehicles.
A three-dimensional coordinate system is set, and all $J$ pictures of the $m$-th target vehicle are fused into a three-dimensional image of that vehicle:
$$C_{datacar}=\sum_{m=1}^{M}\sum_{j=1}^{J}\omega_{m,j}\Big[\min\lVert c^{*}-c_{m,j}\rVert+\lambda_{X}f^{X}_{m,j}(\alpha)+\lambda_{Y}f^{Y}_{m,j}(r)+\lambda_{Z}f^{Z}_{m,j}(\beta)\Big]$$
wherein $C_{datacar}$ denotes the three-dimensional images of all target vehicles; $\omega_{m,j}$ is the picture weight; $\lVert c^{*}-c_{m,j}\rVert$ is the distance between the ideal picture cluster center $c^{*}$ and the cluster center $c_{m,j}$ of the $j$-th picture of the $m$-th target vehicle, and $\min$ indicates that this distance is reduced to a minimum;
$\lambda_{X}$, $\lambda_{Y}$, $\lambda_{Z}$ are the weights on the X, Y and Z axes centered on the origin of the three-dimensional coordinate system; $f^{X}_{m,j}(\alpha)$ is the linear fitting data of the $j$-th picture of the $m$-th target vehicle on the X axis, where $\alpha$ is the angle between the line from any pixel point of the picture to the origin and the X axis; $f^{Y}_{m,j}(r)$ is the linear fitting data on the Y axis, where $r$ is the distance from any pixel point of the picture to the origin, and a circle drawn with radius $r$ passes through the Y axis; $f^{Z}_{m,j}(\beta)$ is the linear fitting data on the Z axis, where $\beta$ is the angle between the line from any pixel point of the picture to the origin and the Z axis.
In step S5, before the monitoring data are identified and classified through deep learning, the method further includes the following steps:
acquiring the monitoring pictures shot by all monitoring cameras at the same time $t$, denoted $P^{num}_{t}$, where $num$ is the number of the monitoring camera;
fusing the monitoring pictures $P^{num}_{t}$ shot by all monitoring cameras at time $t$, filtering out the repeated parts, and forming an integrated monitoring picture $P_{t}$; detecting and identifying the moving objects in the monitoring picture $P_{t}$, any moving object found forming the monitoring data.
The step of detecting and identifying a moving object in the monitoring picture $P_{t}$ includes: establishing a plane coordinate system, acquiring the coordinates of the pixel points in the monitoring picture $P_{t}$, and determining the pixel points to form a moving object if the coordinates of the same pixel point change between times $t+1$ and $t+t'$.
Compared with the prior art, the invention has the beneficial effects that:
(1) The method fuses the multi-angle pictures of target persons and target vehicles into a three-dimensional image of each target person and target vehicle, generates target protection data, and stores them in a monitoring database. Through multi-angle shooting and fusion, each target person and target vehicle can be identified more accurately and quickly by its full-body features in practical application, which improves the accuracy of identification and judgment and reduces the false alarm rate;
(2) Based on the multi-angle pictures of a target person, three-dimensional modeling is carried out along the three axial directions using parameters such as angle and radius. Because there are a large number of pixel points, a complete three-dimensional image with highly accurate overall features is finally formed through continuous iteration.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a linear fit on the X-axis according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a linear fit on the Y-axis according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a Z-axis linear fit according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Also, in the description of the present invention, the terms "first", "second", and the like are used for distinguishing between descriptions and not necessarily for describing a relative importance or implying any actual relationship or order between such entities or operations.
Embodiment:
the invention is realized by the following technical scheme, as shown in fig. 1, a method for constructing an intrusion early warning monitoring database comprises the following steps:
In step S1, a plurality of pictures of target protection information are shot using cameras arranged at a plurality of angles, where the categories of target protection information cover all target persons and target vehicles.
The plurality of cameras are fixedly arranged in one scene so that all cameras can photograph a given target together, each camera shooting the target from a different angle and direction, much like multiple spotlights aimed at the same target from different angles and directions.
Before the monitoring database is built, all persons and vehicles admitted to the area are photographed by these cameras to obtain the pictures of target protection information. The target protection information here consists of target persons and target vehicles; this embodiment only exemplifies the categories of persons and vehicles, but the method can be extended in the same way to categories such as unmanned aerial vehicles. A person allowed to enter the area is called a target, and no alarm is required when a target enters or leaves the area; a person not allowed to enter is called a non-target, and an alarm must be raised when a non-target enters or leaves. The same principle applies to vehicles, unmanned aerial vehicles and the like.
Step S2, merging the multi-angle pictures of each target person or target vehicle in the target protection information into a three-dimensional image of the target person or target vehicle, and generating target protection data.
The cameras arranged at a plurality of angles shoot several multi-angle pictures of each target person, denoted $P_{n,i}$, where $P_{n,i}$ represents the $i$-th picture of the $n$-th target person, $i \in \{1,\dots,I\}$, $I$ is the total number of pictures of the $n$-th target person, $n \in \{1,\dots,N\}$, and $N$ is the total number of target persons. For example, $P_{256,35}$ denotes the 35th picture of the 256th target person; the target persons are ordered arbitrarily, as are their pictures.
Because the positions and shooting angles of the cameras are fixed, the pictures shot by all the cameras span a three-dimensional space. A three-dimensional coordinate system is set in this space, its origin is determined, and the three-dimensional image of any target person can be fused from all the pictures of that person:
$$C_{datapeo}=\sum_{n=1}^{N}\sum_{i=1}^{I}\omega_{n,i}\Big[\min\lVert c^{*}-c_{n,i}\rVert+\lambda_{X}f^{X}_{n,i}(\alpha)+\lambda_{Y}f^{Y}_{n,i}(r)+\lambda_{Z}f^{Z}_{n,i}(\beta)\Big]$$
wherein $C_{datapeo}$ denotes the three-dimensional images of all target persons, the subscript peo marking the category of target persons; likewise $C_{datacar}$ denotes the three-dimensional images of all target vehicles, the subscript car marking the vehicle category. $\omega_{n,i}$ is the picture weight. $\lVert c^{*}-c_{n,i}\rVert$ is the distance between the ideal picture cluster center $c^{*}$ and the cluster center $c_{n,i}$ of the $i$-th picture of the $n$-th target person. The ideal picture cluster center is preset in advance; it can be understood as the position where the target person ought to sit in the picture, generally the center of the picture. $\min$ means minimizing this distance so as to place the target person at the desired position in the picture. The distance cannot actually reach zero, because in three-dimensional space every person's body shape differs and the cluster center must be corrected at different angles; the distance is therefore only reduced as far as possible rather than forced to zero.
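As an illustration of the cluster-center idea, the sketch below computes a picture's cluster center as the centroid of the target's pixel coordinates (shown in 2-D for brevity) and its distance to the ideal center, i.e. the quantity $\lVert c^{*}-c_{n,i}\rVert$ the fusion tries to minimize; all function names are hypothetical.

```python
import math

def cluster_center(pixels):
    """Centroid of the target's pixel coordinates in one picture."""
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def center_distance(pixels, ideal_center):
    """Distance between the picture's cluster center and the ideal center.
    The fusion reduces this as far as possible without forcing it to zero."""
    cx, cy = cluster_center(pixels)
    ix, iy = ideal_center
    return math.hypot(cx - ix, cy - iy)
```

In the patent's setting the ideal center would typically be the picture center, and the correction is repeated at each shooting angle.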
$\lambda_{X}$, $\lambda_{Y}$, $\lambda_{Z}$ are the weights on the X, Y and Z axes centered on the origin of the three-dimensional coordinate system. Referring to FIG. 2, $f^{X}_{n,i}(\alpha)$ is the linear fitting data of the $i$-th picture of the $n$-th target person on the X axis, where $\alpha$ is the angle between the line from any pixel point a of the picture to the origin and the X axis. Referring to FIG. 3, $f^{Y}_{n,i}(r)$ is the linear fitting data of the $i$-th picture of the $n$-th target person on the Y axis, where $r$ is the distance from any pixel point a of the picture to the origin; a circle drawn with radius $r$ passes through the Y axis, for example at a point y on the Y axis. Referring to FIG. 4, $f^{Z}_{n,i}(\beta)$ is the linear fitting data of the $i$-th picture of the $n$-th target person on the Z axis, where $\beta$ is the angle between the line from any pixel point a of the picture to the origin and the Z axis.
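The per-axis quantities can be illustrated with basic trigonometry; the function name is an assumption, with `alpha` the angle to the X axis, `beta` the angle to the Z axis, and `r` the pixel-to-origin distance used for the Y-axis circle.

```python
import math

def axis_quantities(px, py, pz):
    """For a pixel at (px, py, pz), return the angle alpha between the
    pixel-to-origin line and the X axis, the angle beta to the Z axis,
    and the radius r of the circle used in the Y-axis fit."""
    r = math.sqrt(px * px + py * py + pz * pz)
    alpha = math.acos(px / r)  # angle with the X axis
    beta = math.acos(pz / r)   # angle with the Z axis
    return alpha, beta, r
```

These values would feed the linear fits $f^{X}$, $f^{Y}$, $f^{Z}$ for every pixel of every picture during the iterative fusion.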
According to this scheme, the multi-angle pictures are fused into a three-dimensional image. During fusion, three-dimensional modeling is carried out from the three axial directions of the target person using parameters such as angle and radius; because there are a large number of pixel points, a complete three-dimensional image is finally formed through continuous iteration.
Similarly, the three-dimensional image of any target vehicle can be fused from all the pictures of that vehicle.
The multi-angle pictures of the target vehicles are denoted $Q_{m,j}$, where $Q_{m,j}$ represents the $j$-th picture of the $m$-th target vehicle, $j \in \{1,\dots,J\}$, $J$ is the total number of pictures of the $m$-th target vehicle, $m \in \{1,\dots,M\}$, and $M$ is the total number of target vehicles.
A three-dimensional coordinate system is set, and all $J$ pictures of the $m$-th target vehicle are fused into a three-dimensional image of that vehicle:
$$C_{datacar}=\sum_{m=1}^{M}\sum_{j=1}^{J}\omega_{m,j}\Big[\min\lVert c^{*}-c_{m,j}\rVert+\lambda_{X}f^{X}_{m,j}(\alpha)+\lambda_{Y}f^{Y}_{m,j}(r)+\lambda_{Z}f^{Z}_{m,j}(\beta)\Big]$$
wherein $C_{datacar}$ denotes the three-dimensional images of all target vehicles; $\omega_{m,j}$ is the picture weight; $\lVert c^{*}-c_{m,j}\rVert$ is the distance between the ideal picture cluster center $c^{*}$ and the cluster center $c_{m,j}$ of the $j$-th picture of the $m$-th target vehicle, and $\min$ indicates that this distance is reduced to a minimum;
$\lambda_{X}$, $\lambda_{Y}$, $\lambda_{Z}$ are the weights on the X, Y and Z axes centered on the origin of the three-dimensional coordinate system; $f^{X}_{m,j}(\alpha)$ is the linear fitting data of the $j$-th picture of the $m$-th target vehicle on the X axis, where $\alpha$ is the angle between the line from any pixel point of the picture to the origin and the X axis; $f^{Y}_{m,j}(r)$ is the linear fitting data on the Y axis, where $r$ is the distance from any pixel point of the picture to the origin, and a circle drawn with radius $r$ passes through the Y axis; $f^{Z}_{m,j}(\beta)$ is the linear fitting data on the Z axis, where $\beta$ is the angle between the line from any pixel point of the picture to the origin and the Z axis.
The three-dimensional images of all target persons and target vehicles form the target protection database.
And step S3, acquiring a plurality of pictures of natural information, and generating natural data through deep learning.
The application scenario of this scheme is to monitor only intruding persons and vehicles. For example, during intrusion early warning, no alarm is needed when ordinary small animals or birds enter, nor when leaves and the like blow in. Therefore, a large number of pictures of natural information are obtained from the network or by real shooting; the categories of natural information include animals, birds, plants and the like, and natural data can be generated through existing deep learning techniques.
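The patent leaves this step to existing deep learning. As a minimal stand-in (not the patent's method), the sketch below learns one prototype feature vector per natural-information category and labels new pictures by nearest prototype; the function names and the feature representation are assumptions.

```python
def train_natural_data(labelled_features):
    """Average the feature vectors of each natural-information category
    (animal, bird, plant, ...) into one prototype per category."""
    prototypes = {}
    for label, features in labelled_features.items():
        n = len(features)
        dim = len(features[0])
        prototypes[label] = [sum(f[d] for f in features) / n for d in range(dim)]
    return prototypes

def classify_natural(feature, prototypes):
    """Return the nearest-prototype label for a new picture's feature vector."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: sq_dist(feature, prototypes[label]))
```

A real system would replace the averaged prototypes with a trained deep network, but the stored-label interface toward the monitoring database would be similar.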
And step S4, inputting the target protection data and the natural data into a monitoring database together.
The aim is to improve the accuracy of identifying target persons and target vehicles. Since the possible appearances of non-target persons and non-target vehicles are far more numerous than those of target persons and target vehicles, the method judges against the smaller set: a person monitored on site is matched against the target persons stored in the monitoring database. If the match succeeds, the person is a target person and no alarm is raised; if it fails, the person is a non-target person and an alarm is raised.
The target protection data (i.e., target persons and target vehicles) formed in step S2 are thus entered into the monitoring database together with the natural data (i.e., animals, birds, plants, etc.) formed in step S3. Subsequent judgments match directly against the monitoring database. When new target persons or target vehicles appear over time, their target protection data can be added directly through the steps above; when target persons or target vehicles are removed, their target protection data can be deleted directly, so that they become non-target persons and non-target vehicles that trigger an alarm.
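The add/delete maintenance described above might look like this; the class and method names are illustrative assumptions.

```python
class MonitoringDatabase:
    """Hypothetical store for target protection data; targets can be added
    or removed as the set of admitted persons/vehicles changes."""

    def __init__(self):
        self.targets = {}  # target id -> fused 3-D data
        self.natural = {}  # category  -> natural data

    def add_target(self, target_id, fused_3d):
        self.targets[target_id] = fused_3d

    def remove_target(self, target_id):
        # once deleted, the person/vehicle is treated as non-target and alarmed
        self.targets.pop(target_id, None)

    def is_target(self, target_id):
        return target_id in self.targets
```

The key property is that membership in `targets` alone decides whether a matched object is alarmed, so admission changes never require retraining the natural-data classifier.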
In step S5, the monitoring data are identified and classified through deep learning. If the monitoring data are classified as natural data, they are not tracked; if they are classified as target protection data, the monitoring data are compared with the target protection data in the monitoring database. If the monitoring data match target protection data in the monitoring database, they are not tracked; otherwise tracking and alarming are carried out.
After the monitoring database is formed, the monitoring pictures shot by all monitoring cameras (the cameras arranged in the monitoring site of the protected area) at the same time $t$ are acquired in real time, denoted $P^{num}_{t}$, where $num$ is the number of the camera; the numbering scheme is user-defined.
The monitoring pictures $P^{num}_{t}$ shot by all monitoring cameras at the same time $t$ are fused, the repeated parts are filtered out, and an integrated monitoring picture $P_{t}$ is formed. The moving objects in the monitoring picture $P_{t}$ are detected and identified; any moving object found forms the monitoring data.
When detecting and identifying the moving objects in the monitoring picture $P_{t}$, a plane coordinate system is established and the coordinates $(x, y)$ of the pixel points in $P_{t}$ are acquired. If, at any moment between $t+1$ and $t+t'$, the coordinates $(x, y)$ of the same pixel point change, the image formed by the pixel points concerned is determined to be a moving object; $t'$ is a user-defined time period.
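A minimal sketch of this pixel-change test, assuming the integrated pictures for the window from $t+1$ to $t+t'$ are given as 2-D grids of pixel values; all names are hypothetical.

```python
def detect_moving_object(frames):
    """Given the integrated monitoring pictures for a time window as a list
    of equally sized 2-D grids, return the set of (x, y) coordinates whose
    pixel value changed relative to the first frame; those pixels are
    treated as belonging to a moving object."""
    moving = set()
    first = frames[0]
    for later in frames[1:]:
        for y, row in enumerate(later):
            for x, value in enumerate(row):
                if value != first[y][x]:
                    moving.add((x, y))
    return moving
```

A production system would add noise thresholds and connected-component grouping so that isolated flickering pixels are not reported as objects.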
After a moving object is determined, the monitoring data are identified and classified through existing deep learning. This classification is also based on the monitoring database, since the deep learning network must be trained in advance, and the database contains not only target protection data but also natural data. Existing deep learning can easily identify whether a moving object is a person, a car, an animal, a bird or a plant; its principle is not repeated here.
If the monitoring data are classified as animals, birds, plants and the like, i.e., natural data, they are neither tracked nor alarmed. If they are classified as a person or a vehicle, i.e., target protection data, the monitoring data are compared with the target protection data in the monitoring database. If the match succeeds, the monitoring data are not tracked, since the identified object is a target person or target vehicle; if the match fails, tracking and alarming are carried out, since the identified object is a non-target person or non-target vehicle.
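The step S5 decision can be sketched as follows, with hypothetical `classify` and `match` callbacks standing in for the deep-learning classifier and the database comparison.

```python
def handle_monitoring_data(sample, database, classify, match):
    """Step S5 decision: natural data are ignored; a person/vehicle is only
    tracked and alarmed if it fails to match the stored target protection
    data. `database` is expected to expose a "target" collection."""
    category = classify(sample)        # e.g. "natural", "person", "vehicle"
    if category == "natural":
        return "ignore"                # animals, birds, plants, leaves ...
    if match(sample, database["target"]):
        return "ignore"                # known target person / target vehicle
    return "track_and_alarm"           # non-target: track and raise alarm
```

Keeping the two checks in this order means the database comparison only runs for objects already classified as persons or vehicles, which matches the patent's false-alarm-reduction goal.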
Each moving object in the monitoring picture $P_{t}$ is identified and monitored in the same way, and multiple moving objects can be monitored and identified at the same time.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (5)
1. A method for constructing an intrusion early warning monitoring database, characterized by comprising the following steps:
step S1, shooting a plurality of pictures of target protection information using cameras arranged at a plurality of angles, wherein the categories of the target protection information cover all target persons and target vehicles;
step S2, fusing a plurality of multi-angle pictures of each target person or target vehicle in the target protection information into a three-dimensional image of the target person or target vehicle to generate target protection data;
step S3, acquiring a plurality of pictures of natural information, and generating natural data through deep learning;
and step S4, inputting the target protection data and the natural data into a monitoring database together.
2. The method for constructing an intrusion early warning monitoring database according to claim 1, characterized in that: in step S5, the monitoring data are identified and classified through deep learning; if the data are classified as natural data, they are not tracked; if they are classified as target protection data, the monitoring data are compared with the target protection data in the monitoring database; if the monitoring data match target protection data in the monitoring database, they are not tracked, otherwise tracking and alarming are carried out.
3. The method for constructing an intrusion early warning monitoring database according to claim 1, characterized in that the step S2 includes the following steps: the multi-angle pictures of the target persons are denoted $P_{n,i}$, where $P_{n,i}$ represents the $i$-th picture of the $n$-th target person, $i \in \{1,\dots,I\}$, $I$ is the total number of pictures of the $n$-th target person, $n \in \{1,\dots,N\}$, and $N$ is the total number of target persons;
setting a three-dimensional coordinate system, and fusing all $I$ pictures of the $n$-th target person into a three-dimensional image of that person:
$$C_{datapeo}=\sum_{n=1}^{N}\sum_{i=1}^{I}\omega_{n,i}\Big[\min\lVert c^{*}-c_{n,i}\rVert+\lambda_{X}f^{X}_{n,i}(\alpha)+\lambda_{Y}f^{Y}_{n,i}(r)+\lambda_{Z}f^{Z}_{n,i}(\beta)\Big]$$
wherein $C_{datapeo}$ denotes the three-dimensional images of all target persons; $\omega_{n,i}$ is the picture weight; $\lVert c^{*}-c_{n,i}\rVert$ is the distance between the ideal picture cluster center $c^{*}$ and the cluster center $c_{n,i}$ of the $i$-th picture of the $n$-th target person, and $\min$ indicates that this distance is reduced to a minimum;
$\lambda_{X}$, $\lambda_{Y}$, $\lambda_{Z}$ are the weights on the X, Y and Z axes centered on the origin of the three-dimensional coordinate system; $f^{X}_{n,i}(\alpha)$ is the linear fitting data of the $i$-th picture of the $n$-th target person on the X axis, where $\alpha$ is the angle between the line from any pixel point of the picture to the origin and the X axis; $f^{Y}_{n,i}(r)$ is the linear fitting data on the Y axis, where $r$ is the distance from any pixel point of the picture to the origin, and a circle drawn with radius $r$ passes through the Y axis; $f^{Z}_{n,i}(\beta)$ is the linear fitting data on the Z axis, where $\beta$ is the angle between the line from any pixel point of the picture to the origin and the Z axis.
4. The method for constructing an intrusion early-warning monitoring database according to claim 1, wherein in step S5, before identifying and classifying the monitoring data through deep learning, the method further comprises:
acquiring the monitoring pictures shot by all monitoring cameras at the same time t, where P_t^num denotes the monitoring picture shot by a monitoring camera at time t and num is the number of that camera;
fusing the monitoring pictures shot by all monitoring cameras at the same time t and filtering out the repeated parts to form an integrated monitoring picture P_t; detecting and identifying moving objects in the monitoring picture P_t, and, if a moving object is found, forming monitoring data from the found moving object.
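The fusion-and-deduplication step of claim 4 can be sketched by keying pixels on shared global coordinates, so that regions seen by several cameras are kept only once. `fuse_frames` and the dict-of-pixels representation are illustrative assumptions; the patent does not specify how overlap is detected:

```python
def fuse_frames(frames):
    """frames: one dict per camera at the same time t, mapping a global
    (x, y) coordinate to a pixel value. Coordinates covered by more than
    one camera are kept once (first camera wins), mimicking the
    'filter out repeated parts' step of claim 4."""
    fused = {}
    for frame in frames:
        for coord, value in frame.items():
            fused.setdefault(coord, value)  # keep only the first occurrence
    return fused
```

A fused frame therefore contains every coordinate seen by any camera exactly once, which is what the integrated monitoring picture P_t represents.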
5. The method for constructing an intrusion early-warning monitoring database according to claim 4, wherein the step of detecting and identifying moving objects in the monitoring picture P_t comprises: establishing a plane coordinate system, acquiring the coordinates of the pixel points in the monitoring picture P_t, and determining a pixel point as belonging to a moving object if its coordinates change from time t+1 to time t+t'.
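The coordinate-change test of claim 5 reduces to comparing pixel positions at two instants. `moving_pixels` and the id-to-coordinate representation are illustrative assumptions standing in for the plane coordinate system of the claim:

```python
def moving_pixels(frame_a, frame_b):
    """frame_a, frame_b: dicts mapping a tracked pixel id to its (x, y)
    coordinates at times t+1 and t+t'. A pixel whose coordinates differ
    between the two frames is flagged as part of a moving object."""
    return {pid for pid in frame_a.keys() & frame_b.keys()
            if frame_a[pid] != frame_b[pid]}
```

Only pixels present in both frames are compared; a pixel whose position is unchanged is treated as background.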
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210674136.5A CN114758305A (en) | 2022-06-15 | 2022-06-15 | Method for constructing intrusion early warning monitoring database |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114758305A true CN114758305A (en) | 2022-07-15 |
Family
ID=82336438
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210674136.5A Pending CN114758305A (en) | 2022-06-15 | 2022-06-15 | Method for constructing intrusion early warning monitoring database |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114758305A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006146378A (en) * | 2004-11-17 | 2006-06-08 | Hitachi Ltd | Monitoring system using multiple camera |
WO2012002601A1 (en) * | 2010-07-01 | 2012-01-05 | (주)비전에스티 | Method and apparatus for recognizing a person using 3d image information |
CN103198595A (en) * | 2013-03-11 | 2013-07-10 | 成都百威讯科技有限责任公司 | Intelligent door and window anti-invasion system |
CN106652291A (en) * | 2016-12-09 | 2017-05-10 | 华南理工大学 | Indoor simple monitoring and alarming system and method based on Kinect |
CN106951846A (en) * | 2017-03-09 | 2017-07-14 | 广东中安金狮科创有限公司 | A kind of face 3D models typing and recognition methods and device |
CN107093171A (en) * | 2016-02-18 | 2017-08-25 | 腾讯科技(深圳)有限公司 | A kind of image processing method and device, system |
CN108550184A (en) * | 2018-04-04 | 2018-09-18 | 北京天目智联科技有限公司 | A kind of biological characteristic 3D 4 D datas recognition methods and system based on light-field camera |
CN112257533A (en) * | 2020-10-14 | 2021-01-22 | 吉林大学 | Perimeter intrusion detection and identification method |
CN112800918A (en) * | 2021-01-21 | 2021-05-14 | 北京首都机场航空安保有限公司 | Identity recognition method and device for illegal moving target |
CN114022443A (en) * | 2021-11-03 | 2022-02-08 | 福建省农业科学院植物保护研究所 | Cross-border invading biological intelligent quick screening system |
Non-Patent Citations (4)
Title |
---|
A. OLINE et al.: "Exploring Three-Dimensional Visualization for Intrusion Detection", IEEE Workshop on Visualization for Computer Security * |
JANEZ PERŠ et al.: "Dana36: A Multi-camera Image Dataset for Object Identification in Surveillance Scenarios", 2012 IEEE Ninth International Conference on Advanced Video and Signal-Based Surveillance * |
LIU Liyang et al.: "Research and Development of a 3D Safety Tracking and Protection System for Substations" (变电站三维安全跟踪防护系统的研究与开发), Northeast Electric Power Technology (《东北电力技术》) * |
GONG Dezhong: "Applied Research on 3D Video Surveillance Systems" (基于三维视频监控系统的应用研究), Journal of Hubei Police Officer College (《湖北警官学院学报》) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101850286B1 (en) | A deep learning based image recognition method for CCTV | |
CN108062349B (en) | Video monitoring method and system based on video structured data and deep learning | |
CN108053427B (en) | Improved multi-target tracking method, system and device based on KCF and Kalman | |
CN110419048B (en) | System for identifying defined objects | |
CN104933730B (en) | Detected using the multi views people of half exhaustive search | |
US8744125B2 (en) | Clustering-based object classification | |
CN108052859B (en) | Abnormal behavior detection method, system and device based on clustering optical flow characteristics | |
Alexandrov et al. | Analysis of machine learning methods for wildfire security monitoring with an unmanned aerial vehicles | |
CN111832400B (en) | Mask wearing condition monitoring system and method based on probabilistic neural network | |
CN112068111A (en) | Unmanned aerial vehicle target detection method based on multi-sensor information fusion | |
US11475671B2 (en) | Multiple robots assisted surveillance system | |
CN113298053B (en) | Multi-target unmanned aerial vehicle tracking identification method and device, electronic equipment and storage medium | |
CN109218667B (en) | Public place safety early warning system and method | |
CN101751744A (en) | Detection and early warning method of smoke | |
CN113076899B (en) | High-voltage transmission line foreign matter detection method based on target tracking algorithm | |
CN110751081B (en) | Construction safety monitoring method and device based on machine vision | |
CN106341661A (en) | Patrol robot | |
CN112132047A (en) | Community patrol system based on computer vision | |
CN116343330A (en) | Abnormal behavior identification method for infrared-visible light image fusion | |
CN110619276A (en) | Anomaly and violence detection system and method based on unmanned aerial vehicle mobile monitoring | |
CN104933436A (en) | Vision-based multi-camera factory monitoring including dynamic integrity grading | |
CN113989702A (en) | Target identification method and device | |
CN112149618A (en) | Pedestrian abnormal behavior detection method and device suitable for inspection vehicle | |
CN116704411A (en) | Security control method, system and storage medium based on Internet of things | |
CN114120171A (en) | Fire smoke detection method, device and equipment based on video frame and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20220715 |