CN110719444A - Multi-sensor fusion omnibearing monitoring and intelligent camera shooting method and system - Google Patents

Multi-sensor fusion omnibearing monitoring and intelligent camera shooting method and system

Info

Publication number
CN110719444A
Authority
CN
China
Prior art keywords
monitoring
camera
intelligent
network
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911078804.2A
Other languages
Chinese (zh)
Inventor
刘通
程江华
杨明胜
罗笑冰
杜湘瑜
张亮
王洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN201911078804.2A
Publication of CN110719444A
Status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a multi-sensor fusion omni-directional monitoring and intelligent camera shooting method and system. The system comprises a 360-degree fixed camera module, a 360-degree scanning camera module, an intelligent processing and control module and a comprehensive information display terminal. Sensing information from four fixed-focus cameras, one zoom camera and one radar sensor is fused, achieving the omni-directional monitoring and intelligent shooting goals of 'seeing fully', 'seeing clearly' and 'understanding' the monitored target. The system can be widely applied to important sites such as prisons, firearm depots and oil depots; it realizes omni-directional monitoring and intelligent shooting of sensitive areas, supports snapshot and early warning of targets of interest, and improves the intelligence level of security monitoring.

Description

Multi-sensor fusion omnibearing monitoring and intelligent camera shooting method and system
Technical Field
The invention relates to a multi-sensor fusion omni-directional monitoring and intelligent camera shooting method and system.
Background
With the advancement of science and technology, video surveillance is widely applied in many fields, such as security supervision in banks, recording of traffic violations, 'sky-eye' systems for public-security investigation, examination-room proctoring, and inspection of power systems. Video monitoring has thus become part of everyday life, which has driven its huge market.
Monitoring devices currently on the market fall roughly into two categories. The first is a monitoring unit built from fixed-focus cameras: it is simple and inexpensive, but it is only suitable for monitoring a small area and cannot acquire detailed information, because the fixed focal length does not allow the picture to be zoomed in or out. For example, such a device can detect that a person has entered the monitored area, but it cannot zoom in on the person to obtain detailed target information. The second is a monitoring unit built from a zoom camera. Such equipment is more expensive and has a wide zoom range (products on the market reach up to 50x optical zoom), but zooming must be controlled manually: an operator has to lock onto the target and adjust the focal length by hand to zoom in, so fully automatic intelligent monitoring cannot be achieved and human resources are consumed.
In the field of security monitoring, the traditional fixed-focus camera therefore cannot obtain detailed information about a target far from the camera and has a limited monitoring range, while the zoom camera requires manual control of direction and focal length, or cruises automatically according to a fixed rule; it cannot actively photograph a target, and omni-directional monitoring cannot be realized.
Disclosure of Invention
In view of the above problems, the present invention provides a multi-sensor fusion omni-directional monitoring and intelligent camera shooting method and system.
The multi-sensor fusion omni-directional monitoring and intelligent camera system provided by the invention comprises a 360-degree fixed camera module, a 360-degree scanning camera module, an intelligent processing and control module and a comprehensive information display terminal;
the 360-degree fixed camera module comprises 4 fixed-focus cameras and realizes omni-directional monitoring display, so that the monitored scene can be seen fully;
the 360-degree scanning camera module comprises a zoom camera, a radar and a pan-tilt unit; the pan-tilt unit realizes 360-degree scanning, the radar senses the distance to the target, and the zoom camera is automatically controlled to adjust its focal length, so that targets at different distances are optically magnified and the monitored target can be seen clearly;
the intelligent processing and control module deploys a target detection and recognition algorithm to intelligently recognize targets of interest in the monitored scene, so that the scene is easier to understand;
the intelligent processing and control module controls the pan-tilt unit through a rectangular-wave (PWM) generator with a pulse frequency of 50 Hz and an adjustable duty cycle; different duty cycles correspond to fixed rotation angles measured from a calibrated zero position, a duty cycle of 5-95% corresponding to a pan-tilt rotation angle of 0-360 degrees, so that the rotation of the pan-tilt unit is controlled;
the intelligent processing and control module receives the laser ranging signal through a USB-to-TTL interface and converts it into target distance information; the zoom camera is focused in preset-position mode, with 6 preset positions corresponding to 6 distance ranges; the preset position is selected from the distance information, and the zoom camera is then commanded over a USB serial port to adjust its focal length, so that a clear image is obtained;
after the intelligent processing and control module receives the laser ranging signal and controls the zoom camera to focus, it intelligently recognizes the image captured by the zoom camera; specifically, a multi-model fusion target detection algorithm is deployed on the ARM board, and images in which the algorithm detects a pedestrian are captured as snapshots;
the comprehensive information display terminal displays the omni-directional monitoring information and the intelligent snapshots, meeting the video monitoring requirements of high-security sites.
The invention fuses the sensing information of four fixed-focus cameras, one zoom camera and one radar sensor, and achieves the omni-directional monitoring and intelligent shooting goals of 'seeing fully', 'seeing clearly' and 'understanding' the monitored target.
Drawings
FIG. 1 is a schematic view (top view) of the 360-degree fixed camera module,
FIG. 2 is a schematic diagram of the 360-degree scanning camera module,
FIG. 3 is a schematic diagram of the VGG16 network architecture,
FIG. 4 is the network structure of ResNet50,
FIG. 5 is a diagram of the multi-model fusion architecture,
FIG. 6 is a diagram of the comprehensive information display terminal system interface.
Detailed Description
The invention provides a multi-sensor fusion omni-directional monitoring and intelligent camera shooting method and system. Its beneficial effect is that sensing information from four fixed-focus cameras, one zoom camera and one radar sensor is fused, achieving the omni-directional monitoring and intelligent shooting goals of 'seeing fully', 'seeing clearly' and 'understanding' the monitored target. The system can be widely applied to important sites such as prisons, firearm depots and oil depots; it realizes omni-directional monitoring and intelligent shooting of sensitive areas, supports snapshot and early warning of targets of interest, and improves the intelligence level of security monitoring.
(1) 360-degree fixed camera module
The 360-degree fixed camera module is responsible for capturing a 360-degree panoramic picture in the horizontal direction. It is assembled from 4 fixed-focus cameras with an angle of 90 degrees between adjacent cameras; each camera lens has a focal length of 3.6 mm, a monitoring field of view of 85-105 degrees and a monitoring distance of 0-10 m. All cameras have the same pitch angle and are tilted 30 degrees downward from the horizontal. The total horizontal monitoring angle of the 4 cameras is therefore greater than 360 degrees, so 360-degree omni-directional monitoring in the horizontal direction is realized and the 'seeing fully' monitoring goal is achieved. The overlap angle of adjacent cameras in the horizontal direction is greater than 10 degrees, which facilitates accurate stitching of the images captured by adjacent cameras. An assembly schematic of the 360-degree fixed camera module is shown in FIG. 1.
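A quick arithmetic sketch of the coverage claim above, assuming a per-camera field of view of 105 degrees (an illustrative value at the upper end of the stated 85-105 degree range):

```python
# Hypothetical coverage check for the 360-degree fixed camera module.
NUM_CAMERAS = 4
SPACING_DEG = 90.0   # angle between the optical axes of adjacent cameras
FOV_DEG = 105.0      # assumed per-camera horizontal field of view (upper end of 85-105 deg)

total_coverage_deg = NUM_CAMERAS * FOV_DEG   # 420 deg of raw horizontal coverage
overlap_deg = FOV_DEG - SPACING_DEG          # 15 deg overlap between adjacent cameras

print(f"raw horizontal coverage: {total_coverage_deg:.0f} deg (> 360 deg)")
print(f"adjacent-camera overlap: {overlap_deg:.0f} deg (> 10 deg)")
```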
(2) 360-degree scanning camera module
The 360-degree scanning camera module is responsible for capturing high-definition images of targets of interest. It comprises a pan-tilt unit, a laser ranging sensor and a zoom camera, as shown in FIG. 2. The pan-tilt unit can rotate through 360 degrees; the laser ranging sensor and the zoom camera are fixed on the same support column, at the same horizontal position and with the same vertical tilt, both inclined 30 degrees downward. The central server reads the distance measured by the laser ranging sensor through a serial port, parses it, computes the focal-length range for the zoom camera and sends a control command to the control module; the focal length of the zoom camera is thereby adjusted, a clear image is obtained, and the 'seeing clearly' monitoring goal is achieved.
(3) Intelligent processing and control module
The processor of the intelligent processing and control module is an ARM board, which mainly realizes three functions:
(3.1) Pan-tilt control
The intelligent processing and control module controls the pan-tilt motor through a rectangular-wave (PWM) generator with a pulse frequency of 50 Hz and an adjustable duty cycle. The motor is calibrated at a zero-degree position and rotates to a fixed angle determined by the duty cycle: a duty cycle of 5-95% corresponds to a pan-tilt rotation of 0-360 degrees, so the rotation of the pan-tilt unit is controlled.
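A minimal sketch of the angle-to-duty-cycle mapping described above, assuming (as an illustration) a linear mapping of 0-360 degrees onto a 5-95% duty cycle at 50 Hz; the `set_pwm` interface is hypothetical and stands in for whatever PWM driver the ARM board actually exposes.

```python
PWM_FREQ_HZ = 50                   # pulse frequency stated in the description
DUTY_MIN, DUTY_MAX = 5.0, 95.0     # duty-cycle range in percent
ANGLE_MIN, ANGLE_MAX = 0.0, 360.0  # pan-tilt rotation range in degrees

def angle_to_duty(angle_deg: float) -> float:
    """Linearly map a pan-tilt angle (0-360 deg) to a duty cycle (5-95%)."""
    angle = max(ANGLE_MIN, min(ANGLE_MAX, angle_deg))
    span = (angle - ANGLE_MIN) / (ANGLE_MAX - ANGLE_MIN)
    return DUTY_MIN + span * (DUTY_MAX - DUTY_MIN)

def rotate_pan_tilt(pwm, angle_deg: float) -> None:
    """Command the pan-tilt unit to a fixed angle from the calibrated zero position.

    `pwm` is a hypothetical driver object exposing set_pwm(frequency_hz, duty_percent).
    """
    pwm.set_pwm(PWM_FREQ_HZ, angle_to_duty(angle_deg))
```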
(3.2) Laser ranging signal processing and zoom camera control
The intelligent processing and control module receives the laser ranging signal through a USB-to-TTL interface and converts it into target distance information. The zoom camera is focused in preset-position mode, with 6 preset positions corresponding to 6 distance ranges; the preset position is selected from the distance information, and the zoom camera is then commanded over a USB serial port to adjust its focal length and obtain a clear image.
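A minimal sketch of the distance-to-preset selection described above. The six distance break points and the serial command format are illustrative assumptions only; the description does not specify them.

```python
import serial  # pyserial, for the USB serial link to the zoom camera

# Assumed distance break points (metres) separating the 6 preset positions;
# the actual boundaries are not given in the description.
DISTANCE_BREAKS_M = [5, 10, 20, 40, 80]

def distance_to_preset(distance_m: float) -> int:
    """Map a measured target distance to a zoom-camera preset index (1-6)."""
    for i, upper in enumerate(DISTANCE_BREAKS_M):
        if distance_m <= upper:
            return i + 1
    return 6

def focus_on_target(port: str, distance_m: float) -> None:
    """Select the preset for the measured distance and send it to the camera.

    The ASCII command format below is hypothetical; a real zoom camera would
    use its own serial protocol.
    """
    preset = distance_to_preset(distance_m)
    with serial.Serial(port, baudrate=9600, timeout=1) as ser:
        ser.write(f"PRESET {preset}\r\n".encode("ascii"))
```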
(3.3) Intelligent recognition and snapshot
After the intelligent processing and control module receives the laser ranging signal and controls the zoom camera to focus, it intelligently recognizes the image captured by the zoom camera. Specifically, a multi-model fusion target detection algorithm is deployed on the ARM board, and images in which the algorithm detects a pedestrian are captured as snapshots. The multi-model fusion target detection algorithm uses transfer learning: publicly released pre-trained object detection models are transferred to the recognition of a specific target. In each deep convolutional neural network, the layers preceding the fully connected layers are frozen, so that the parameters of the frozen layers receive no gradient updates during training; this frozen convolutional part is referred to as the feature extractor and produces the feature maps, while only the newly built layers are optimized. The feature maps of the extractors are then concatenated, and new fully connected layers are built to carry the classification output of the whole model: 3 fully connected layers are used, with dropout (random deactivation) as the optimization method to improve the robustness of the model. Specifically, the invention fuses the VGG16 network with the ResNet50 network. The VGG16 feature extractor refers to the convolution block structure of the VGG16 network, and the ResNet50 feature extractor refers to the residual block structure of ResNet50.
the VGG16 network is composed of 13 convolutional layers and 3 fully-connected layers, the network structure is shown in FIG. 3, and the biggest feature is to extract image features through the combination and stacking of 3 × 3 filters. For a specific target, the method extracts abundant detail features and enhances the distinguishing capability of the features on interested regions and non-interested regions. The VGG16 network feature extractor used by the present invention is part of the convolution block structure shown in dashed box in FIG. 3.
The ResNet50 network includes 49 convolutional layers and 1 full link layer, and the network structure is shown in FIG. 4. Because the network adds the identity mapping layer to directly connect the shallow network and the deep network, the connection method adopting the mode has the advantages that the effect is not degraded along with the increase of the network depth, and the convergence effect is good. By utilizing the characteristic, the problems of loss and under-fitting of the VGG16 network characteristics can be solved. The ResNet50 network feature extractor used by the present invention is part of the residual block structure shown in dashed outline in FIG. 4.
A model diagram of VGG16 fused with ResNet50 feature extractor is shown in FIG. 5.
The traditional machine learning framework needs a large amount of labelled training data, which consumes substantial manpower and material resources; without large amounts of labelled data, many studies and applications cannot proceed. Conventional machine learning also assumes that training data and test data follow the same distribution, and in many cases this assumption does not hold. Viewed from another angle, if a large amount of training data with a different distribution is available, discarding it entirely is wasteful; how to make reasonable use of such data is the main problem transfer learning addresses. Transfer learning migrates knowledge from existing data to assist future learning: its goal is to use knowledge learned in one environment to help the learning task in a new environment, so it does not make the identical-distribution assumption of conventional machine learning. Current work on transfer learning can be divided into three parts: instance-based transfer learning in a homogeneous space, feature-based transfer learning in a homogeneous space, and transfer learning across heterogeneous spaces. The invention adopts feature-based transfer learning in a homogeneous space: a common feature representation is found in the feature spaces of the source and target domains, the difference between the two domains is reduced, and the detection performance of the model in the target domain is improved. Because the specific targets selected by the invention are people and vehicles, which are well covered by mature object detection datasets, the high-level network feature layers are general with respect to shape, texture and similar properties; using the large amount of labelled data available in the object detection field for transfer learning greatly reduces training time while still achieving good results. The features extracted by the feature extraction layers of different models are distributed differently in feature space; with transfer learning, the feature extraction abilities of different models can be migrated, a new fully connected layer is built, and training is carried out again in the new target domain on this basis, so that the different models are fitted better. The invention uses the VGG16 and ResNet50 models provided by the Keras library, with pre-trained weights trained on the ImageNet dataset, for transfer learning.
Based on the transfer learning method, the VGG16 and ResNet50 feature extractors are transferred from the general object detection domain to the pedestrian detection domain and the two models are fused. Compared with a single model, the multi-model fusion network extracts a total of 2560 feature maps of size 7 x 7 from the image under test: the VGG16 feature extractor extracts 512 feature maps and the ResNet50 feature extractor extracts 2048. Because the ResNet50 model uses skip connections, image information about the specific target, such as texture and edges, is propagated to the deeper layers of the network, so the extracted pedestrian image information is richer. The invention fuses the feature extractors of the transferred models, rebuilds the fully connected layers, freezes the feature extractors of both models, and trains the fused model with randomly initialized fully connected layer parameters.
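A minimal Keras sketch of the fusion described above, under stated assumptions: both backbones are loaded with ImageNet weights and frozen, a 224 x 224 input is assumed (which yields the 7 x 7 feature maps, 512 from VGG16 and 2048 from ResNet50), the feature maps are concatenated, and three newly built fully connected layers with dropout are trained for the pedestrian / non-pedestrian decision. The hidden-layer sizes and dropout rate are illustrative choices, not values taken from the description, and the per-backbone input preprocessing is omitted for brevity.

```python
from tensorflow.keras import layers, models, applications

INPUT_SHAPE = (224, 224, 3)  # assumed input size; gives 7 x 7 backbone feature maps

inputs = layers.Input(shape=INPUT_SHAPE)

# Pre-trained feature extractors transferred from ImageNet and frozen, so their
# parameters receive no gradient updates during training.
vgg = applications.VGG16(weights="imagenet", include_top=False, input_tensor=inputs)
resnet = applications.ResNet50(weights="imagenet", include_top=False, input_tensor=inputs)
vgg.trainable = False
resnet.trainable = False

# 7 x 7 x 512 (VGG16) and 7 x 7 x 2048 (ResNet50) feature maps -> 7 x 7 x 2560.
fused = layers.Concatenate(axis=-1)([vgg.output, resnet.output])

# Newly built, randomly initialized fully connected layers (3 layers, with dropout).
x = layers.Flatten()(fused)
x = layers.Dense(512, activation="relu")(x)   # illustrative layer size
x = layers.Dropout(0.5)(x)                    # "random deactivation" (dropout)
x = layers.Dense(128, activation="relu")(x)   # illustrative layer size
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # pedestrian vs. non-pedestrian

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

Training then only updates the fully connected layers, matching the frozen-feature-extractor strategy described above.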
The multi-model fusion target detection algorithm uses transfer learning to speed up training of the fused model, and adopts the multi-model fusion strategy to improve the reliability of pedestrian detection.
(4) Comprehensive information display terminal
The comprehensive information display terminal displays the monitoring data of the 360-degree fixed camera module and the 360-degree scanning camera module, together with the intelligent analysis and snapshot results of the intelligent processing and control module. As shown in FIG. 6, the system interface is divided into three areas: 'see fully', 'see clearly' and 'understand'. The 'see fully' area displays the omni-directional monitoring information obtained from the 4 fixed-focus cameras; the 'see clearly' area displays the clear image acquired by the zoom camera, whose focal length is adjusted automatically; the 'understand' area displays intelligent snapshots of pedestrian targets of interest.

Claims (4)

1. A multi-sensor fusion omni-directional monitoring and intelligent camera system, comprising a 360-degree fixed camera module, a 360-degree scanning camera module, an intelligent processing and control module and a comprehensive information display terminal, wherein:
the 360-degree fixed camera module comprises 4 fixed-focus cameras and realizes omni-directional monitoring display, so that the monitored scene can be seen fully;
the angle between adjacent cameras among the 4 cameras is 90 degrees, the focal length of each camera lens is 3.6 mm, the monitoring field of view is 85-105 degrees, and the monitoring distance is 0-10 m;
the 360-degree scanning camera module comprises a zoom camera, a radar and a pan-tilt unit; the pan-tilt unit realizes 360-degree scanning, the radar senses the distance to the target, and the zoom camera is automatically controlled to adjust its focal length, so that targets at different distances are optically magnified and the monitored target can be seen clearly;
the pan-tilt unit rotates through 360 degrees; the laser ranging sensor and the zoom camera are fixed on the same support column, at the same horizontal position and with the same vertical tilt, both inclined 30 degrees downward;
the central server reads and parses the distance measured by the laser ranging sensor through a serial port, computes the focal-length range for the zoom camera and sends a control instruction to the control module, so that the focal length of the zoom camera is adjusted, a clear image is obtained, and the 'seeing clearly' monitoring goal is achieved;
the intelligent processing and control module deploys a target detection and recognition algorithm to intelligently recognize targets of interest in the monitored scene;
the intelligent processing and control module controls the pan-tilt unit through a rectangular-wave (PWM) generator with a pulse frequency of 50 Hz and an adjustable duty cycle; different duty cycles correspond to fixed rotation angles measured from a calibrated zero position, a duty cycle of 5-95% corresponding to a pan-tilt rotation angle of 0-360 degrees, so that the rotation of the pan-tilt unit is controlled;
the intelligent processing and control module receives the laser ranging signal through a USB-to-TTL interface and converts it into target distance information; the zoom camera is focused in preset-position mode, with 6 preset positions corresponding to 6 distance ranges; the preset position is selected from the distance information, and the zoom camera is then commanded over a USB serial port to adjust its focal length, so that a clear image is obtained;
after the intelligent processing and control module receives the laser ranging signal and controls the zoom camera to focus, it intelligently recognizes the image captured by the zoom camera; specifically, a multi-model fusion target detection algorithm is deployed on the ARM board, and images in which the algorithm detects a pedestrian are captured as snapshots;
the comprehensive information display terminal displays the omni-directional monitoring information and the intelligent snapshots, meeting the video monitoring requirements of high-security sites.
2. The multi-sensor fusion omni-directional monitoring and intelligent camera system according to claim 1, wherein the 4 cameras have the same pitch angle and are all inclined 30 degrees downward from the horizontal; the total horizontal monitoring angle of the 4 cameras is greater than 360 degrees, realizing 360-degree omni-directional monitoring in the horizontal direction; and the overlap angle of adjacent cameras in the horizontal direction is greater than 10 degrees, enabling accurate stitching of the images captured by adjacent cameras.
3. The multi-sensor fusion omni-directional monitoring and intelligent camera system according to claim 1, wherein the target detection algorithm transfers pre-trained object detection models to the recognition of a specific target; the layers preceding the fully connected layers of each deep convolutional neural network are frozen, so that the parameters of the frozen layers receive no gradient updates during training, and the frozen part, referred to as the feature extractor, produces the feature maps; the feature maps are then concatenated, and new fully connected layers are built to carry the classification output of the whole model, using 3 fully connected layers with dropout (random deactivation) for parameter optimization; specifically, a VGG16 network and a ResNet50 network are fused, wherein the VGG16 feature extractor refers to the convolution block structure of the VGG16 network and the ResNet50 feature extractor refers to the residual block structure of ResNet50;
the VGG16 network consists of 13 convolutional layers and 3 fully connected layers and extracts image features through combinations and stacks of 3 x 3 filters,
the ResNet50 network comprises 49 convolutional layers and 1 fully connected layer, with identity-mapping (shortcut) connections that link shallow layers directly to deep layers, compensating for the feature loss and under-fitting problems of the VGG16 network.
4. The multi-sensor fusion omni-directional monitoring and intelligent camera system according to claim 3, wherein the transfer learning is feature-based transfer learning in a homogeneous space: a common feature representation is found in the feature spaces of the source and target domains to reduce the difference between the two domains; the VGG16 and ResNet50 models provided by the Keras library are used, with pre-trained weights trained on the ImageNet dataset.
CN201911078804.2A 2019-11-07 2019-11-07 Multi-sensor fusion omnibearing monitoring and intelligent camera shooting method and system Pending CN110719444A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911078804.2A CN110719444A (en) 2019-11-07 2019-11-07 Multi-sensor fusion omnibearing monitoring and intelligent camera shooting method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911078804.2A CN110719444A (en) 2019-11-07 2019-11-07 Multi-sensor fusion omnibearing monitoring and intelligent camera shooting method and system

Publications (1)

Publication Number Publication Date
CN110719444A (en) 2020-01-21

Family

ID=69213777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911078804.2A Pending CN110719444A (en) 2019-11-07 2019-11-07 Multi-sensor fusion omnibearing monitoring and intelligent camera shooting method and system

Country Status (1)

Country Link
CN (1) CN110719444A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004266669A (en) * 2003-03-03 2004-09-24 Sony Corp Monitoring camera and image pickup method
CN202886832U (en) * 2012-09-27 2013-04-17 中国科学院宁波材料技术与工程研究所 360-degree panoramic camera
CN103546692A (en) * 2013-11-04 2014-01-29 苏州科达科技股份有限公司 Method and system achieving integrated camera automatic focusing
CN104822052A (en) * 2015-04-23 2015-08-05 暨南大学 Substation electrical equipment inspection system and method
CN105120245A (en) * 2015-10-08 2015-12-02 深圳九星智能航空科技有限公司 UAV (unmanned aerial vehicle) for panoramic surveillance
CN206850908U (en) * 2017-07-10 2018-01-05 沈峘 The measuring system that a kind of spliced panorama camera merges with tracking head
CN109995982A (en) * 2017-12-29 2019-07-09 浙江宇视科技有限公司 A kind of method, apparatus that motor-driven lens focus automatically and video camera
CN207706329U (en) * 2018-01-12 2018-08-07 深圳市派诺创视科技有限公司 A kind of panorama safety defense monitoring system
CN109543632A (en) * 2018-11-28 2019-03-29 太原理工大学 A kind of deep layer network pedestrian detection method based on the guidance of shallow-layer Fusion Features
CN109948557A (en) * 2019-03-22 2019-06-28 中国人民解放军国防科技大学 Smoke detection method with multi-network model fusion

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022000300A1 (en) * 2020-06-30 2022-01-06 深圳市大疆创新科技有限公司 Image processing method, image acquisition apparatus, unmanned aerial vehicle, unmanned aerial vehicle system, and storage medium
CN111753925A (en) * 2020-07-02 2020-10-09 广东技术师范大学 Multi-model fusion medical image classification method and equipment
CN112230681A (en) * 2020-09-28 2021-01-15 西安交通大学 Multi-motor disc suspension control system and method
CN112286190A (en) * 2020-10-26 2021-01-29 中国人民解放军国防科技大学 Security patrol early warning method and system
CN113301256A (en) * 2021-05-23 2021-08-24 成都申亚科技有限公司 Camera module with low power consumption and multi-target continuous automatic monitoring function and camera shooting method thereof
CN113301256B (en) * 2021-05-23 2023-12-22 成都申亚科技有限公司 Low-power-consumption multi-target continuous automatic monitoring camera module and camera method thereof
CN113645437A (en) * 2021-06-01 2021-11-12 安徽振鑫智慧工程技术有限公司 Application device and use method of smart city emergency management system
CN113538584A (en) * 2021-09-16 2021-10-22 北京创米智汇物联科技有限公司 Camera auto-negotiation monitoring processing method and system and camera
CN113538584B (en) * 2021-09-16 2021-11-26 北京创米智汇物联科技有限公司 Camera auto-negotiation monitoring processing method and system and camera
CN117312828A (en) * 2023-09-28 2023-12-29 光谷技术有限公司 Public facility monitoring method and system

Similar Documents

Publication Publication Date Title
CN110719444A (en) Multi-sensor fusion omnibearing monitoring and intelligent camera shooting method and system
Dilshad et al. Applications and challenges in video surveillance via drone: A brief survey
CN108111818B (en) Moving target actively perceive method and apparatus based on multiple-camera collaboration
CN110830756B (en) Monitoring method and device
Wheeler et al. Face recognition at a distance system for surveillance applications
CN109872483B (en) Intrusion alert photoelectric monitoring system and method
JP4188394B2 (en) Surveillance camera device and surveillance camera system
CN101119482B (en) Overall view monitoring method and apparatus
CN113850137A (en) Power transmission line image online monitoring method, system and equipment
CN104813339A (en) Methods, devices and systems for detecting objects in a video
CN109816702A (en) A kind of multiple target tracking device and method
CN104079916A (en) Panoramic three-dimensional visual sensor and using method
CN114905512B (en) Panoramic tracking and obstacle avoidance method and system for intelligent inspection robot
Nyberg et al. Unpaired thermal to visible spectrum transfer using adversarial training
Stone et al. Skyline-based localisation for aggressively manoeuvring robots using UV sensors and spherical harmonics
CN118555462B (en) Bionic eagle eye monitoring equipment
CN111800588A (en) Optical unmanned aerial vehicle monitoring system based on three-dimensional light field technology
CN109636763A (en) A kind of intelligence compound eye monitoring system
KR101836882B1 (en) All directional Camera and Pan Tilt Zoom Camera Linkage Possible Photographing Apparatus and Method Thereof
CN112488022B (en) Method, device and system for monitoring panoramic view
CN109785562A (en) A kind of vertical photoelectricity ground based threats warning system and suspicious object recognition methods
CN112364793A (en) Target detection and fusion method based on long-focus and short-focus multi-camera vehicle environment
CN112001224A (en) Video acquisition method and video acquisition system based on convolutional neural network
CN111399014A (en) Local stereoscopic vision infrared camera system and method for monitoring wild animals
CN111225182A (en) Image acquisition equipment, method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200121