CN117963194A - Self-tracking gimbal lighting system for an electric power infrastructure tethered unmanned aerial vehicle - Google Patents
- Publication number
- CN117963194A (application CN202410386450.2A)
- Authority
- CN
- China
- Prior art keywords
- aerial vehicle
- unmanned aerial
- illumination
- target
- monitoring
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000010276 construction Methods 0.000 title claims description 34
- 238000005286 illumination Methods 0.000 claims abstract description 228
- 238000012544 monitoring process Methods 0.000 claims abstract description 131
- 238000012545 processing Methods 0.000 claims abstract description 47
- 238000001931 thermography Methods 0.000 claims abstract description 13
- 238000004891 communication Methods 0.000 claims abstract description 7
- 238000001228 spectrum Methods 0.000 claims description 38
- 238000004458 analytical method Methods 0.000 claims description 26
- 230000003595 spectral effect Effects 0.000 claims description 22
- 238000001514 detection method Methods 0.000 claims description 20
- 239000013598 vector Substances 0.000 claims description 15
- 230000004927 fusion Effects 0.000 claims description 12
- 238000012937 correction Methods 0.000 claims description 11
- 230000003321 amplification Effects 0.000 claims description 9
- 238000000926 separation method Methods 0.000 claims description 8
- 230000005856 abnormality Effects 0.000 claims description 6
- 238000004364 calculation method Methods 0.000 claims description 6
- 238000009792 diffusion process Methods 0.000 claims description 5
- 230000007246 mechanism Effects 0.000 claims description 5
- 238000012706 support-vector machine Methods 0.000 claims description 5
- 238000013507 mapping Methods 0.000 claims description 4
- 230000009977 dual effect Effects 0.000 claims description 3
- 230000011218 segmentation Effects 0.000 claims description 3
- 238000000034 method Methods 0.000 description 18
- 230000008569 process Effects 0.000 description 14
- 230000000694 effects Effects 0.000 description 12
- 238000005516 engineering process Methods 0.000 description 7
- 238000013461 design Methods 0.000 description 6
- 238000004422 calculation algorithm Methods 0.000 description 5
- 238000013135 deep learning Methods 0.000 description 3
- 238000005457 optimization Methods 0.000 description 3
- 230000002159 abnormal effect Effects 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 239000000284 extract Substances 0.000 description 2
- 230000007774 longterm Effects 0.000 description 2
- 238000010801 machine learning Methods 0.000 description 2
- 238000006243 chemical reaction Methods 0.000 description 1
- 239000003086 colorant Substances 0.000 description 1
- 238000013136 deep learning model Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 238000007537 lampworking Methods 0.000 description 1
- 238000012423 maintenance Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 238000007619 statistical method Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 238000012549 training Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U20/00—Constructional aspects of UAVs
- B64U20/80—Arrangement of on-board electronics, e.g. avionics systems or wiring
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64D—EQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENT OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
- B64D47/00—Equipment not otherwise provided for
- B64D47/02—Arrangements or adaptations of signal or lighting devices
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U10/00—Type of UAV
- B64U10/60—Tethered aircraft
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U20/00—Constructional aspects of UAVs
- B64U20/80—Arrangement of on-board electronics, e.g. avionics systems or wiring
- B64U20/87—Mounting of imaging devices, e.g. mounting of gimbals
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2101/00—UAVs specially adapted for particular uses or applications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2101/00—UAVs specially adapted for particular uses or applications
- B64U2101/30—UAVs specially adapted for particular uses or applications for imaging, photography or videography
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2201/00—UAVs characterised by their flight controls
- B64U2201/10—UAVs characterised by their flight controls autonomous, i.e. by navigating independently from ground or air stations, e.g. by using inertial navigation systems [INS]
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02B—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
- Y02B20/00—Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
- Y02B20/40—Control techniques providing energy savings, e.g. smart controller or presence detection
Landscapes
- Engineering & Computer Science (AREA)
- Aviation & Aerospace Engineering (AREA)
- Mechanical Engineering (AREA)
- Remote Sensing (AREA)
- Microelectronics & Electronic Packaging (AREA)
- Image Processing (AREA)
Abstract
The application relates to the technical field of unmanned aerial vehicle illumination, and discloses a self-tracking gimbal lighting system for an electric power infrastructure tethered unmanned aerial vehicle (UAV). The system comprises a central processing module, a positioning module, a monitoring module, a UAV, and a lighting module; the central processing module is connected to the positioning module, the monitoring module, and the lighting module. The lighting module comprises a height sensing unit, an illumination lamp, an angle adjustment unit, a brightness adjustment unit, and a lamp detection unit. The monitoring module comprises a camera, an infrared thermal imaging detector, an intelligent recognition unit, an interference unit, and a portrait enlargement unit, and is used to monitor the position and face orientation of construction personnel. The positioning module comprises an RTK base station and an onboard RTK mobile station, and provides high-precision positioning information for the UAV. The UAV comprises a communication unit and a flight adjustment unit; the lighting module and the monitoring module are mounted at the lower end of the UAV, and the positioning module is housed inside the UAV.
Description
Technical Field
The application relates to the technical field of unmanned aerial vehicle illumination, and in particular to a self-tracking gimbal lighting system for an electric power infrastructure tethered unmanned aerial vehicle (UAV).
Background
A tethered UAV combines illumination with round-the-clock endurance: because power is supplied continuously through the tether, the UAV's endurance problem is solved, enabling uninterrupted 24-hour monitoring and video recording with a wide viewing angle and complete coverage.
In the prior art, tethered-UAV lighting equipment is generally positioned above a worker's head or to one side of the area requiring illumination, and the tethered-UAV lighting system illuminates the site from there. Because the orientation of the work position changes as construction proceeds, this single illumination mode cannot illuminate the work area accurately when the worker faces away from the light, and illumination shadows are easily produced.
Disclosure of Invention
The application provides a self-tracking gimbal lighting system for an electric power infrastructure tethered UAV, which provides omnidirectional illumination that follows the working direction of construction personnel.
In a first aspect, the application provides an electric power infrastructure tethered UAV self-tracking gimbal lighting system, comprising: a central processing module, a positioning module, a monitoring module, a UAV, and a lighting module; the central processing module is connected to the positioning module, the monitoring module, and the lighting module. The lighting module comprises a height sensing unit, an illumination lamp, an angle adjustment unit, a brightness adjustment unit, and a lamp detection unit, and is specifically configured to: acquire, through the height sensing unit, height distance data between the UAV and the target area under construction; illuminate the target area with the illumination lamp according to the height distance data; adjust the lamp's illumination angle through the angle adjustment unit; and adjust the lamp's illumination brightness through the brightness adjustment unit. The monitoring module comprises a camera, an infrared thermal imaging detector, an intelligent recognition unit, an interference unit, and a portrait enlargement unit, and is used to monitor the position and face orientation of construction personnel. The positioning module comprises an RTK base station and an onboard RTK mobile station, and provides high-precision positioning information for the UAV. The UAV comprises a communication unit and a flight adjustment unit; the lighting module and the monitoring module are both mounted at the lower end of the UAV, and the positioning module is housed inside the UAV.
With reference to the first aspect, in a first implementation of the first aspect, the system is specifically configured to: acquire target monitoring image data of the target area through the monitoring module, and identify construction personnel in the target monitoring image data to obtain their number N, where N is a positive integer; if N = 1, extract the first target object from the target monitoring image data, analyze UAV illumination parameters for the first target object using a single-target monitoring strategy, generate a first UAV illumination parameter combination for the first target object, and perform UAV self-tracking gimbal illumination on the first target object using that combination; and if N > 1, extract a plurality of second target objects from the target monitoring image data, analyze UAV illumination parameters for them using a multi-target monitoring strategy, generate a second UAV illumination parameter combination for the second target objects, and perform UAV self-tracking gimbal illumination on them using that combination.
With reference to the first aspect, in a second implementation of the first aspect, the system is specifically configured to: if N = 1, perform target object recognition on the target monitoring image data to obtain the first target object, and segment its face image region to obtain first face image data of the first target object; acquire first object position information of the first target object based on the single-target monitoring strategy and the camera; enlarge the first face image data through the intelligent recognition unit and the portrait enlargement unit to obtain second face image data of the first target object; analyze the movement trajectory of the first target object from the first object position information to obtain its movement trajectory information, and analyze the second face image data to obtain its face orientation information; calculate the UAV's illumination direction parameter from the movement trajectory information and face orientation information, monitor the UAV's height above the ground through the height sensing unit, calculate the UAV's first illumination brightness parameter from that height data, and generate the first UAV illumination parameter combination from the illumination direction parameter and the first illumination brightness parameter; transmit the first UAV illumination parameter combination to the flight adjustment unit and the positioning module through the central processing module, and generate a first monitoring instruction for the first target object through the flight adjustment unit and the positioning module; and execute the first monitoring instruction through the UAV, so that the UAV performs self-tracking gimbal illumination on the first target object.
With reference to the first aspect, in a third implementation of the first aspect, the system is specifically configured to: if N > 1, perform target object recognition on the target monitoring image data to obtain a plurality of second target objects; monitor the positions of the second target objects through the camera and the infrared thermal imaging detector to obtain second object position information for each; perform a range calculation over the second target objects from the second object position information to obtain their target range information; calculate the UAV position parameter, UAV height parameter, and illumination range parameter from the target range information, monitor the UAV's height above the ground through the height sensing unit, calculate the UAV's second illumination brightness parameter from that height data, and generate the second UAV illumination parameter combination from the UAV position parameter, UAV height parameter, illumination range parameter, and second illumination brightness parameter; transmit the second UAV illumination parameter combination to the lighting module and the positioning module through the central processing module, and generate second monitoring instructions for the second target objects through the lighting module and the positioning module; and execute the second monitoring instructions through the UAV, so that the UAV performs self-tracking gimbal illumination on the plurality of second target objects.
With reference to the first aspect, in a fourth implementation of the first aspect, the monitoring module is specifically configured to: acquire multi-channel spectral images of the target area through a plurality of preset image acquisition channels to obtain initial spectral image data for each channel; acquire the spectral distribution data of each channel and apply spectral correction to the initial spectral image data accordingly to obtain target spectral image data for each channel; fuse the multiple target spectral image data into fused spectral image data, and analyze the image features of the fused data to obtain a spectral image feature set; and enhance the monitored region of the fused spectral image data according to the feature set to obtain the target monitoring image data.
With reference to the first aspect, in a fifth implementation of the first aspect, the lamp detection unit is specifically configured to: measure the current and illumination duration of the illumination lamp to obtain initial current data and initial illumination duration data; standardize the initial current data into target current data and the initial illumination duration data into target illumination duration data; perform feature recognition on the target current data and target illumination duration data to obtain current features and illumination duration features; apply vector mapping and attention-mechanism weighting to the current and illumination duration features to obtain an illumination lamp detection vector; and input the detection vector into a preset support vector machine model for lamp damage abnormality detection, obtaining a lamp damage abnormality detection result.
With reference to the first aspect, in a sixth implementation of the first aspect, the portrait enlargement unit is specifically configured to: encode the first face image data with a pre-trained diffusion autoencoder to obtain a first and a second image coding subspace; extract portrait features from the two subspaces through the cross-attention inverse linear interpolation branch of a preset dual-branch identity separation network, obtaining a first initial portrait feature for the first subspace and a second initial portrait feature for the second; extract hidden features from the two subspaces through the multi-layer perceptron branch of the same network, obtaining a first and a second initial hidden feature; fuse the first initial portrait feature with the first initial hidden feature into a first fused portrait feature, and the second initial portrait feature with the second initial hidden feature into a second fused portrait feature; and decode and enlarge the first and second fused portrait features to obtain the second face image data of the first target object.
With reference to the first aspect, in a seventh implementation of the first aspect, the positioning module is further configured to: send a flight repositioning instruction to the UAV according to the first or second monitoring instruction; receive the flight repositioning instruction through the UAV and reposition the UAV according to the first or second UAV illumination parameter combination to obtain repositioning data; and, according to the repositioning data, correct the first or second UAV illumination parameter combination and perform self-tracking gimbal illumination.
With reference to the first aspect, in an eighth implementation of the first aspect, the positioning module includes a BeiDou, GPS, and GLONASS tri-mode positioning system, and the onboard RTK mobile station is configured with dual antennas to provide accurate positioning information and stable heading information.
With reference to the first aspect, in a ninth implementation of the first aspect, the camera is an ultra-telephoto visible-light high-definition network camera.
According to the technical solution provided by the application, omnidirectional, dynamically tracked illumination of construction personnel can be realized, effectively improving the illumination quality and construction safety of the site. The central processing module, positioning module, monitoring module, UAV, and lighting module work together: through intelligent recognition and automatic adjustment, the UAV's position and the lamp's brightness and angle are adapted to the specific position and face orientation of construction personnel, ensuring the best lighting effect. The monitoring module, including the camera, infrared thermal imaging detector, and intelligent recognition unit, accurately captures the position and activity of construction personnel in real time, providing reliable data support for precise illumination. The positioning module uses high-precision RTK positioning with a base station and an onboard mobile station, combined with a BeiDou/GPS/GLONASS tri-mode positioning system and a dual-antenna design, giving the UAV accurate position and stable heading information so that it can move quickly and precisely to the optimal illumination position. The lamp detection unit monitors the working state of the illumination lamp in real time, such as its current and illumination duration, and detects and handles lamp faults promptly, ensuring stable operation and long-term reliability of the lighting system. The portrait enlargement unit suits night-time or low-visibility environments; by enlarging and analyzing the face image, it further improves recognition accuracy and illumination targeting, so that illumination fully follows the working direction of construction personnel.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of an embodiment of the electric power infrastructure tethered UAV self-tracking gimbal lighting system according to an embodiment of the present application.
Detailed Description
The embodiments of the application provide a self-tracking gimbal lighting system for an electric power infrastructure tethered UAV. The terms "first," "second," "third," "fourth," and the like in the description, the claims, and the drawings, if any, are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments described herein can be implemented in orders other than those illustrated or described. Furthermore, the terms "comprises," "comprising," and any variation thereof are intended to cover a non-exclusive inclusion, so that a process, system, article, or apparatus comprising a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, system, article, or apparatus.
For ease of understanding, the specific workings of an embodiment of the present application are described below with reference to Fig. 1. An embodiment of the electric power infrastructure tethered UAV self-tracking gimbal lighting system comprises:
The electric power infrastructure tethered UAV self-tracking gimbal lighting system includes: a central processing module 101, a positioning module 102, a monitoring module 103, a UAV 104, and a lighting module 105. The central processing module 101 is connected to the positioning module 102, the monitoring module 103, and the lighting module 105. The lighting module 105 includes a height sensing unit 1051, an illumination lamp 1052, an angle adjustment unit 1053, a brightness adjustment unit 1054, and a lamp detection unit 1055, and is specifically configured to: acquire height distance data between the UAV 104 and the target area under construction through the height sensing unit 1051; illuminate the target area with the illumination lamp 1052 according to the height distance data, adjusting the lamp's illumination angle through the angle adjustment unit 1053 and its brightness through the brightness adjustment unit 1054. The monitoring module 103 includes a camera 1031, an infrared thermal imaging detector 1032, an intelligent recognition unit 1033, an interference unit 1034, and a portrait enlargement unit 1035, and monitors the position and face orientation of construction personnel. The positioning module 102 includes an RTK base station 1021 and an onboard RTK mobile station 1022, and provides high-precision positioning information for the UAV 104. The UAV 104 includes a communication unit 1041 and a flight adjustment unit 1042; the lighting module 105 and the monitoring module 103 are both mounted at the lower end of the UAV 104, and the positioning module 102 is housed inside the UAV 104.
Specifically, the central processing module serves as the core of the system and coordinates the work of every module to ensure efficient operation of the whole. It processes data received from the positioning module and the monitoring module in real time, and on that basis controls the UAV's flight path and the working states of the lighting and monitoring modules. The lighting module is designed for varied illumination demands: the height sensing unit lets the system automatically adjust the lamp's brightness and angle according to the UAV's flight height for the best lighting effect; the angle adjustment unit and the brightness adjustment unit provide further flexible adjustment options so that lighting can be adapted to the situation at hand; and the lamp detection unit monitors the operating state of the lighting system to keep the lighting effect stable. The monitoring module integrates a camera and an infrared thermal imaging detector for comprehensive monitoring of the construction site; its intelligent recognition unit identifies the face orientation and position of construction personnel and, together with the portrait enlargement unit, can accurately monitor each person's state even in complex environments, safeguarding construction safety. The positioning module provides high-precision position information to the UAV through RTK technology, ensuring that it can fly precisely to the designated position for illumination and monitoring; the paired RTK base station and onboard RTK mobile station greatly improve positioning accuracy and reliability. The UAV itself is designed for flexibility and practicality: the communication unit and the flight adjustment unit allow it to fly stably in complex environments while maintaining real-time communication with the ground control center.
By integrating the lighting module and the monitoring module at the lower end of the UAV, the aircraft can carry out lighting and monitoring work more flexibly, while housing the positioning module inside preserves the compactness and overall performance of the system. The design of the whole system fully considers the special requirements of electric power infrastructure construction and, through efficient coordination and control of each module, provides a new lighting and monitoring solution for construction sites, improving both construction efficiency and construction safety.
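As a structural sketch only, the module hierarchy described above might be wired up as follows; every class and field name here is illustrative and not taken from the patent:

```python
from dataclasses import dataclass

# Hypothetical wiring of the modules described above; names are illustrative.

@dataclass
class LightingModule:
    height_sensor: object        # height sensing unit 1051
    lamp: object                 # illumination lamp 1052
    angle_adjuster: object       # angle adjustment unit 1053
    brightness_adjuster: object  # brightness adjustment unit 1054
    lamp_detector: object        # lamp detection unit 1055

@dataclass
class MonitoringModule:
    camera: object               # camera 1031
    ir_detector: object          # infrared thermal imaging detector 1032
    recognizer: object           # intelligent recognition unit 1033
    jammer: object               # interference unit 1034
    portrait_enlarger: object    # portrait enlargement unit 1035

@dataclass
class PositioningModule:
    rtk_base: object             # RTK base station 1021
    rtk_rover: object            # onboard RTK mobile station 1022

@dataclass
class CentralProcessingModule:
    # The central processing module 101 is connected to the other modules
    # and coordinates their work.
    positioning: PositioningModule
    monitoring: MonitoringModule
    lighting: LightingModule
```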
Optionally, the electric power infrastructure tethered UAV self-tracking gimbal lighting system is specifically configured to: acquire target monitoring image data of the target area through the monitoring module, and identify construction personnel in it to obtain their number N, where N is a positive integer; if N = 1, extract the first target object from the target monitoring image data, analyze UAV illumination parameters for it using a single-target monitoring strategy, generate a first UAV illumination parameter combination, and perform UAV self-tracking gimbal illumination on the first target object using that combination; and if N > 1, extract a plurality of second target objects, analyze UAV illumination parameters for them using a multi-target monitoring strategy, generate a second UAV illumination parameter combination, and perform UAV self-tracking gimbal illumination on them using that combination.
Specifically, monitoring image data of the target area is obtained through the monitoring module and analyzed to identify personnel on the construction site. In this process the number N of construction personnel in the image data is determined, and the lighting strategy is chosen accordingly. When the system detects exactly one worker (N = 1), that worker is automatically extracted as the first target object and analyzed under the single-target monitoring strategy: UAV illumination parameters are analyzed for the first target object to generate the parameter combination best suited to it. This combination then controls the lighting equipment on the UAV, ensuring it automatically tracks and effectively illuminates the first target object, improving the safety and efficiency of the site. When the monitoring image data show more than one worker (N > 1), the multi-target monitoring strategy is adopted: all workers are identified and extracted as second target objects, and a comprehensive UAV illumination parameter analysis is performed over them. This analysis considers how to illuminate several workers simultaneously and generates a parameter combination for the plurality of second target objects, ensuring the UAV can perform effective self-tracking gimbal illumination on multiple targets at once, improving efficiency and safety while keeping the whole site lit. With this intelligent analysis and the ability to adjust illumination parameters automatically, the system flexibly adapts its lighting strategy to the actual number and positions of site personnel, so that the site remains efficient and safe at night or in poorly lit environments.
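The count-then-dispatch logic just described can be condensed into a short sketch; `detector`, `single_strategy`, and `multi_strategy` are hypothetical stand-ins for the personnel recognition step and the two monitoring strategies:

```python
def dispatch_lighting_strategy(frame, detector, single_strategy, multi_strategy):
    """Count construction personnel in a monitoring frame and pick a strategy.

    `detector` is any person detector returning one entry per person; the
    two strategy callables stand in for the single- and multi-target
    illumination parameter analyses. All names are illustrative.
    """
    persons = detector(frame)       # e.g. a list of bounding boxes
    n = len(persons)
    if n == 0:
        return None                 # no one to illuminate
    if n == 1:                      # first target object
        return single_strategy(frame, persons[0])
    return multi_strategy(frame, persons)   # plural second target objects
```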
Optionally, the electric power infrastructure tethered UAV self-tracking gimbal lighting system is specifically configured to: if N = 1, perform target object recognition on the target monitoring image data to obtain the first target object, and segment its face image region to obtain first face image data; acquire first object position information of the first target object based on the single-target monitoring strategy and the camera; enlarge the first face image data through the intelligent recognition unit and the portrait enlargement unit to obtain second face image data; analyze the movement trajectory of the first target object from the first object position information, and analyze the second face image data to obtain its face orientation information; calculate the UAV's illumination direction parameter from the movement trajectory and face orientation information, monitor the UAV's height above the ground through the height sensing unit, calculate the first illumination brightness parameter from that height, and generate the first UAV illumination parameter combination from the illumination direction and first illumination brightness parameters; transmit the combination to the flight adjustment unit and the positioning module through the central processing module, and generate a first monitoring instruction for the first target object; and execute the first monitoring instruction through the UAV, so that it performs self-tracking gimbal illumination on the first target object.
Specifically, if the number of construction personnel N = 1, the camera in the monitoring module captures monitoring image data of the target area. Target object recognition is performed on the captured data, and the first target object is accurately identified from it. During recognition, an image processing algorithm segments the face image region of the target object, yielding the first face image data of the first target object. To further improve the monitoring effect, the intelligent recognition unit and the portrait enlargement unit enlarge the extracted first face image data into higher-definition second face image data, improving face recognition accuracy and monitoring image quality so that the UAV can lock onto the target more precisely during the lighting task. Based on the single-target monitoring strategy, position information of the first target object is collected: the camera and the positioning module work together to image the target and acquire its position data in real time, and analysis of this position information yields the movement trajectory of the first target object. Face orientation analysis is then performed on the second face image data to obtain the face orientation of the first target object; correct face orientation information tells the system how to adjust the illumination angle for the best lighting effect. The UAV's illumination direction parameter is calculated from the movement trajectory and face orientation of the first target object, while the height sensing unit monitors the UAV's height above the ground and the first illumination brightness parameter is calculated from that height; these two parameters directly determine the effect and accuracy of the UAV's illumination. The resulting first UAV illumination parameter combination is passed to the central processing module, which forwards it to the flight adjustment unit and the positioning module; these generate the first monitoring instruction, which the UAV executes to perform accurate self-tracking gimbal illumination of the first target object.
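A minimal sketch of how the first UAV illumination parameter combination could be derived from the trajectory, face orientation, and height data follows; the inverse-square brightness rule, the `standoff_m` aim point, and the lamp intensity `lamp_candela` are illustrative assumptions, since the patent states no formulas:

```python
import math

def single_target_parameters(track_xy, face_yaw_deg, uav_height_m,
                             standoff_m=3.0, target_lux=200.0,
                             lamp_candela=100_000.0):
    """Sketch of the first illumination parameter combination (N = 1).

    `track_xy` holds the worker's recent ground positions; `face_yaw_deg`
    comes from face-orientation analysis of the enlarged face image.
    """
    (x0, y0), (x1, y1) = track_xy[-2], track_xy[-1]
    move_yaw = math.degrees(math.atan2(y1 - y0, x1 - x0))
    # Light from the side the worker faces (falling back to the movement
    # direction) so the work surface in front of them is not in shadow.
    aim_yaw = face_yaw_deg if face_yaw_deg is not None else move_yaw
    # Gimbal pitch: aim from the UAV down to a point standoff_m ahead.
    pitch_deg = math.degrees(math.atan2(uav_height_m, standoff_m))
    # First illumination brightness parameter: hold roughly target_lux on
    # the ground under an inverse-square assumption (E = I / d^2).
    dist = math.hypot(uav_height_m, standoff_m)
    dimming = min(1.0, target_lux * dist ** 2 / lamp_candela)
    return {"yaw_deg": aim_yaw, "pitch_deg": pitch_deg, "dimming": dimming}
```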
Optionally, the electric power infrastructure tethered UAV self-tracking gimbal lighting system is specifically configured to: if N > 1, perform target object recognition on the target monitoring image data to obtain a plurality of second target objects; monitor their positions through the camera and the infrared thermal imaging detector to obtain second object position information for each; perform a range calculation over the second target objects from that position information to obtain their target range information; calculate the UAV position parameter, UAV height parameter, and illumination range parameter from the target range information, monitor the UAV's height above the ground through the height sensing unit, calculate the second illumination brightness parameter from that height, and generate the second UAV illumination parameter combination from the UAV position, UAV height, illumination range, and second illumination brightness parameters; transmit the combination to the lighting module and the positioning module through the central processing module, and generate second monitoring instructions for the second target objects; and execute the second monitoring instructions through the UAV, so that it performs self-tracking gimbal illumination on the plurality of second target objects.
Specifically, when several construction personnel work on the site, the camera and the infrared thermal imaging detector scan and capture the scene to obtain target monitoring image data. After processing, the system identifies the plurality of second target objects, i.e., the workers, in the image; the camera and infrared thermal imaging detector together provide accurate second object position information for each. A range calculation is then performed over the identified second target objects: analyzing each object's position determines the relative positions and distribution of the workers across the site, which in turn determines the illumination range the UAV must cover and how each worker can be illuminated effectively. From this target range information, the UAV position parameter, UAV height parameter, and illumination range parameter are calculated; the UAV's flight height is monitored by the height sensing unit, which accurately measures its height above the ground. Precise calculation of these flight and illumination parameters ensures that the UAV can hover at the optimal height and position to provide the most effective illumination coverage. The second illumination brightness parameter is then calculated, adjusted dynamically from the UAV's flight height and the intended illumination range so that the lighting effect meets the site's needs. All of these parameters together form the second UAV illumination parameter combination, which the central processing module transmits to the lighting module and the positioning module; these generate the second monitoring instructions, and the UAV executes them, adjusting its flight position, height, and illumination intensity to automatically track and illuminate the plurality of second target objects. This guarantees that the UAV responds flexibly to changes on the site and provides continuous, efficient lighting and monitoring for construction personnel; the whole design emphasizes real-time performance, flexibility, and automation, aiming to improve site safety and working efficiency.
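For the multi-target case, the range calculation and the resulting position, height, range, and brightness parameters might look like the sketch below; the centroid-plus-radius range model, the fixed beam half-angle, and the 5 m minimum hover height are all assumptions:

```python
import math

def multi_target_parameters(positions, beam_half_angle_deg=30.0,
                            target_lux=150.0, lamp_candela=100_000.0):
    """Sketch of the second illumination parameter combination (N > 1).

    `positions` are the workers' ground coordinates obtained from the
    camera and the infrared thermal imaging detector.
    """
    xs, ys = zip(*positions)
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)   # hover above the group
    radius = max(math.hypot(x - cx, y - cy) for x, y in positions)
    # Height at which a lamp cone of the given half-angle covers the whole
    # group; the 5 m floor is an assumed safe minimum hover height.
    height = max(radius / math.tan(math.radians(beam_half_angle_deg)), 5.0)
    dimming = min(1.0, target_lux * height ** 2 / lamp_candela)
    return {"position": (cx, cy), "height_m": height,
            "range_m": radius, "dimming": dimming}
```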
Optionally, the monitoring module is specifically configured to: acquire multi-channel spectral images of the target area through a plurality of preset image acquisition channels to obtain initial spectral image data for each channel; acquire the spectral distribution data of each channel and apply spectral correction to the initial spectral image data accordingly to obtain target spectral image data for each channel; fuse the multiple target spectral image data into fused spectral image data, and analyze its image features to obtain a spectral image feature set; and enhance the monitored region of the fused spectral image data according to the feature set to obtain the target monitoring image data.
Specifically, the monitoring module acquires multi-channel spectral images of the target area through the preset acquisition channels. Each channel is designed for a specific spectral range, so spectral information at different wavelengths is captured through its designated channel; together they yield initial spectral image data covering a broad spectral range. The module then obtains each channel's spectral distribution data and corrects the initial spectral image data against it; the purpose of this spectral correction is to remove spectral bias from the image data and ensure that each channel's target spectral image data accurately reflects the true spectral characteristics of the target area. Multispectral fusion is then applied to the corrected data: an algorithm combines the spectral images of the different channels into fused spectral image data that integrates information from multiple wavelengths and therefore carries richer information than any single spectral image. From the fused data the monitoring module extracts a spectral image feature set via image feature analysis, identifying key features in the image, such as edges, textures, and colors, that aid understanding and analysis of the monitored area. Finally, guided by this feature set, the module enhances the monitored region of the fused spectral image data; the enhancement improves image quality, highlights details of the target area, and increases the image's visibility and analyzability. The resulting target monitoring image data let the UAV system perform self-tracking and gimbal illumination more effectively, securing the safety and efficiency of the electric power infrastructure construction site.
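The acquisition-correction-fusion-enhancement pipeline could be sketched as follows; flat-field division, uniform fusion weights, and a percentile contrast stretch are stand-ins for the correction, fusion, and enhancement algorithms the patent leaves unspecified:

```python
import numpy as np

def fuse_spectral_channels(raw_channels, flat_fields, weights=None):
    """Sketch of spectral correction, multispectral fusion, and enhancement.

    `raw_channels` are the initial spectral images (one H x W array per
    acquisition channel); `flat_fields` are per-channel references playing
    the role of the spectral distribution data.
    """
    corrected = [img / np.clip(ff, 1e-6, None)           # spectral correction
                 for img, ff in zip(raw_channels, flat_fields)]
    if weights is None:
        weights = [1.0 / len(corrected)] * len(corrected)
    fused = sum(w * c for w, c in zip(weights, corrected))  # fusion
    # Percentile contrast stretch as the monitoring-region enhancement step.
    lo, hi = np.percentile(fused, (2, 98))
    return np.clip((fused - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
```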
Optionally, the lamp detection unit is specifically configured to: measure the current and illumination duration of the illumination lamp to obtain initial current data and initial illumination duration data; standardize the initial current data into target current data and the initial illumination duration data into target illumination duration data; perform feature recognition on both to obtain current features and illumination duration features; apply vector mapping and attention-mechanism weighting to these features to obtain an illumination lamp detection vector; and input the detection vector into a preset support vector machine model for lamp damage abnormality detection, obtaining the detection result.
Specifically, the lamp detection unit measures the current and illumination duration of the illumination lamp: sensors and a timing device provide the lamp's initial current data and initial illumination duration data. These data reflect the lamp's basic electrical characteristics and usage under normal operation and are the basis for subsequent analysis. The initial data are then standardized to remove dimensional effects and range differences, making them better suited to machine learning and statistical analysis; standardization yields target current data and target illumination duration data, which feed directly into feature recognition. Analyzing the target current and illumination duration data identifies current features and illumination duration features; algorithms and models, such as deep learning or pattern recognition, extract the valuable information from the data. Current and illumination duration are two important dimensions of the lamp's state, reflecting its working efficiency and potential wear. The current and duration features are then vector-mapped and weighted by an attention mechanism: vector mapping converts the features into a mathematically tractable vector form, while attention weighting assigns different weights according to each feature's importance, since different features contribute differently to recognizing the lamp's working state and judging potential damage. Weighting thus reflects feature importance more accurately and yields a more representative illumination lamp detection vector. Finally, the detection vector is fed into a preset support vector machine model for lamp damage abnormality detection. A support vector machine is a machine learning model that handles high-dimensional data and finds optimal decision boundaries in a dataset; after training, the model accurately distinguishes normal from abnormal lamp states from the detection vector, providing effective early warning of lamp damage. This not only improves the reliability of the lighting system but also markedly reduces maintenance costs and downtime caused by lamp failures, which is essential to the stable operation of the electric power infrastructure tethered UAV self-tracking gimbal lighting system.
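A hedged scikit-learn sketch of the lamp detection unit follows; the two-element attention weights and the RBF kernel are illustrative choices, and `train_X`/`train_y` denote a hypothetical labelled history of (current, duration) samples:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def detect_lamp_fault(current_a, on_hours, train_X, train_y,
                      attention=(0.7, 0.3)):
    """Sketch of the lamp damage abnormality detection step.

    `current_a` and `on_hours` are the measured current and cumulative
    illumination duration of the lamp.
    """
    scaler = StandardScaler().fit(train_X)          # standardization step
    w = np.asarray(attention)                       # attention weighting
    svm = SVC(kernel="rbf").fit(scaler.transform(train_X) * w, train_y)
    x = scaler.transform([[current_a, on_hours]]) * w   # detection vector
    return bool(svm.predict(x)[0])                  # True = damage abnormality
```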
Optionally, the portrait enlargement unit is specifically configured to: encode the first face image data with a pre-trained diffusion autoencoder to obtain a first and a second image coding subspace; extract portrait features from each subspace through the cross-attention inverse linear interpolation branch of a preset dual-branch identity separation network, obtaining a first and a second initial portrait feature; extract hidden features from each subspace through the network's multi-layer perceptron branch, obtaining a first and a second initial hidden feature; fuse the first initial portrait feature with the first initial hidden feature into a first fused portrait feature, and the second initial portrait feature with the second initial hidden feature into a second fused portrait feature; and decode and enlarge the two fused portrait features to obtain the second face image data of the first target object.
Specifically, the first face image data are encoded with a pre-trained diffusion autoencoder, a deep-learning image model that captures an image's key features by learning its intrinsic representation. The first face image data are encoded into two image coding subspaces, each containing different features and attributes of the face image. The cross-attention inverse linear interpolation branch of the preset dual-branch identity separation network then extracts portrait features from the two subspaces; by analyzing the coding subspaces it captures the portrait's key characteristics, such as the face contour and the details of the eyes, nose, and mouth. In parallel, the multi-layer perceptron branch of the network extracts hidden features from the two subspaces; these hidden features carry deeper information about the portrait and help improve the image's clarity and recognition accuracy. The initial portrait features and hidden features are then fused: feature fusion produces a more comprehensive and detailed portrait representation by jointly considering the portrait's various features and attributes. Concretely, the first initial portrait feature is fused with the first initial hidden feature into the first fused portrait feature, and the second initial portrait feature with the second initial hidden feature into the second fused portrait feature. Finally, the first and second fused portrait features undergo feature decoding and portrait enlargement: a deep learning model decodes the fused features to recover a high-definition face image, while the enlargement step further raises the resolution and detail, ensuring that the generated second face image data of the first target object are sufficiently clear and recognizable. The process involves advanced image reconstruction as well as complex deep-learning algorithms and model optimization, aiming to restore and enhance face image quality as far as possible to meet the system's high demands on image clarity and recognition accuracy in practice.
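Purely to illustrate the dual-branch idea, a toy PyTorch module is sketched below; standard multi-head attention stands in for the cross-attention inverse linear interpolation branch, and the pre-trained diffusion autoencoder that would produce the two coding subspaces is omitted:

```python
import torch
import torch.nn as nn

class DualBranchIdentitySeparation(nn.Module):
    """Toy stand-in for the dual-branch identity separation network."""

    def __init__(self, dim=256):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4,
                                                batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                                 nn.Linear(dim, dim))  # MLP branch
        self.fuse = nn.Linear(2 * dim, dim)            # feature fusion

    def forward(self, z1, z2):
        # z1, z2: (batch, tokens, dim) codes of the two image subspaces.
        portrait1, _ = self.cross_attn(z1, z2, z2)  # initial portrait feats
        portrait2, _ = self.cross_attn(z2, z1, z1)
        hidden1, hidden2 = self.mlp(z1), self.mlp(z2)  # initial hidden feats
        fused1 = self.fuse(torch.cat([portrait1, hidden1], dim=-1))
        fused2 = self.fuse(torch.cat([portrait2, hidden2], dim=-1))
        return fused1, fused2  # decoded and upsampled by later stages
```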
Optionally, the positioning module is further configured to: send a flight repositioning instruction to the unmanned aerial vehicle according to the first monitoring instruction or the second monitoring instruction; receive the flight repositioning instruction through the unmanned aerial vehicle and perform a repositioning operation on the unmanned aerial vehicle according to the first unmanned aerial vehicle illumination parameter combination or the second unmanned aerial vehicle illumination parameter combination to obtain repositioning data; and, according to the repositioning data, perform illumination parameter correction on the first unmanned aerial vehicle illumination parameter combination or the second unmanned aerial vehicle illumination parameter combination and carry out self-tracking cradle head illumination.
Specifically, after receiving the first monitoring instruction or the second monitoring instruction, the positioning module sends a flight repositioning instruction to the unmanned aerial vehicle. The monitoring instruction carries the specific requirements of the illumination task, such as the illumination area, illumination intensity and illumination angle; by parsing it, the positioning module generates a repositioning instruction targeted at the unmanned aerial vehicle's current flight position and state, and this instruction directly guides the next flight operation. The unmanned aerial vehicle receives the flight repositioning instruction and performs the repositioning operation according to the first or second unmanned aerial vehicle illumination parameter combination, adjusting its flight path and flight state via the onboard flight control and navigation systems so as to reach the intended illumination position. Executing this operation depends not only on the unmanned aerial vehicle's flight performance and control algorithms but also on its ability to process the repositioning instructions from the positioning module in real time, which places high demands on its response speed and processing capacity. Once repositioning is complete, the system acquires the repositioning data, including the actual flight position, the altitude and the position relative to the predetermined illumination area. By analyzing these data, the system can accurately evaluate the actual illumination effect, such as the illumination coverage, intensity distribution and angle, and this evaluation directly drives the correction of the illumination parameters. Finally, the system corrects the first or second unmanned aerial vehicle illumination parameter combination according to the repositioning data. This correction is a dynamic optimization process: by adjusting parameters such as the illumination angle and intensity, it ensures that the lighting effect satisfies the monitoring instruction as fully as possible. After the correction, the unmanned aerial vehicle executes the self-tracking cradle head illumination operation, automatically adjusting the cradle head angle and the lighting equipment according to the corrected parameters to illuminate the target area precisely.
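By way of a hedged illustration, this closed correction loop might look like the following sketch; the field names, the inverse-square compensation and all numeric values are assumptions made only for demonstration:

```python
# Illustrative repositioning-feedback correction cycle; not the patented logic.
from dataclasses import dataclass

@dataclass
class IlluminationParams:
    angle_deg: float       # cradle head illumination angle
    intensity_lux: float   # target illuminance at the work area
    coverage_m: float      # illumination coverage radius

def correct_parameters(params: IlluminationParams,
                       reposition: dict) -> IlluminationParams:
    """Adjust the parameter combination from repositioning feedback."""
    # Angular offset between where the light points and the target area.
    params.angle_deg += reposition["angle_error_deg"]
    # Inverse-square compensation: if the drone settled higher than planned,
    # raise intensity so ground-level illuminance stays constant.
    ratio = reposition["actual_height_m"] / reposition["planned_height_m"]
    params.intensity_lux *= ratio ** 2
    params.coverage_m *= ratio  # beam footprint grows linearly with height
    return params

reposition_data = {"angle_error_deg": -2.5,
                   "actual_height_m": 42.0, "planned_height_m": 40.0}
params = correct_parameters(
    IlluminationParams(angle_deg=30.0, intensity_lux=800.0, coverage_m=12.0),
    reposition_data)
```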
Optionally, the positioning module includes a Beidou, GPS and GLONASS three-mode positioning system, and the onboard RTK is equipped with dual antennas to provide accurate positioning information and stable heading information.
Optionally, the camera is an ultra-long-focus visible-light fog-penetrating high-definition network camera.
In the embodiment of the application, omnidirectional, dynamically tracked illumination of constructors can be achieved, effectively improving the illumination quality and construction safety of the construction site. By adopting the central processing module, positioning module, monitoring module, unmanned aerial vehicle and lighting module, the system can automatically adjust the position of the unmanned aerial vehicle and the brightness and angle of the lighting lamp according to the specific position and face orientation of constructors through intelligent recognition and automatic adjustment, ensuring the best lighting effect. The monitoring module, comprising the camera, the infrared thermal imaging detector, the intelligent identification unit and other components, accurately captures the position and activity of constructors in real time and provides reliable data support for precise illumination. The positioning module adopts the high-precision positioning technology of an RTK base station and an airborne RTK mobile station, combined with the Beidou, GPS and GLONASS three-mode positioning system and a dual-antenna design, providing the unmanned aerial vehicle with accurate positioning and stable heading information and ensuring that it can be quickly and precisely adjusted to the optimal illumination position. The lamp detection unit monitors the working state of the lighting lamp in real time, such as its current and illumination duration, and discovers and handles lamp faults promptly, ensuring the stable operation and long-term reliability of the lighting system. The portrait amplifying unit suits night-time or poor-visibility environments; through amplification and analysis of the face image, it further improves the accuracy of recognition and the targeting of the illumination, so that the illumination fully follows the constructors' working direction.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working procedures of the system, modules and units described above may refer to the corresponding procedures in the foregoing system embodiments, and are not repeated here.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the system according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (10)
1. An electric power capital construction tethered unmanned aerial vehicle self-tracking pan-tilt lighting system, characterized in that the electric power capital construction tethered unmanned aerial vehicle self-tracking pan-tilt lighting system comprises: a central processing module, a positioning module, a monitoring module, an unmanned aerial vehicle and a lighting module; the central processing module is respectively connected with the positioning module, the monitoring module and the lighting module;
The lighting module comprises a height sensing unit, a lighting lamp, an angle adjusting unit, a brightness adjusting unit and a lamp detecting unit; the lighting module is specifically configured to: acquire height distance data between the unmanned aerial vehicle and a target area under construction through the height sensing unit; perform, with the lighting lamp, an illumination operation on the target area according to the height distance data; adjust the illumination angle of the lighting lamp through the angle adjusting unit; and adjust the illumination brightness of the lighting lamp through the brightness adjusting unit;
the monitoring module comprises a camera, an infrared thermal imaging detector, an intelligent identification unit, an interference device and a portrait amplifying unit, and is used for monitoring the position and the face orientation of constructors;
The positioning module comprises an RTK base station and an airborne RTK mobile station and is used for providing high-precision positioning information for the unmanned aerial vehicle;
The unmanned aerial vehicle comprises a communication unit and a flight control unit; the lighting module and the monitoring module are both arranged at the lower end of the unmanned aerial vehicle, and the positioning module is arranged inside the unmanned aerial vehicle.
2. The electric power capital construction tethered unmanned aerial vehicle self-tracking pan-tilt lighting system of claim 1, wherein the electric power capital construction tethered unmanned aerial vehicle self-tracking pan-tilt lighting system is specifically configured to:
Acquiring target monitoring image data of a target area through the monitoring module, and performing constructor recognition on the target monitoring image data to obtain the number N of constructors in the target monitoring image data, wherein N is a positive integer;
If the number of constructors N = 1, extracting a first target object from the target monitoring image data, performing unmanned aerial vehicle illumination parameter analysis on the first target object by adopting a single-target-object monitoring strategy, generating a first unmanned aerial vehicle illumination parameter combination corresponding to the first target object, and performing unmanned aerial vehicle self-tracking cradle head illumination on the first target object through the first unmanned aerial vehicle illumination parameter combination;
and if the number of constructors N > 1, extracting a plurality of second target objects from the target monitoring image data, performing unmanned aerial vehicle illumination parameter analysis on the plurality of second target objects by adopting a multi-target-object monitoring strategy, generating a second unmanned aerial vehicle illumination parameter combination corresponding to the plurality of second target objects, and performing unmanned aerial vehicle self-tracking cradle head illumination on the plurality of second target objects through the second unmanned aerial vehicle illumination parameter combination.
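By way of illustration only, the branching that claim 2 describes can be sketched as follows; the detector and both strategy functions are hypothetical stand-ins, not the patented implementations:

```python
def detect_constructors(frame):
    """Placeholder detector; a real system would run a person-detection model."""
    return frame.get("targets", [])

def single_target_strategy(target):
    return {"mode": "single", "track": target}    # first parameter combination

def multi_target_strategy(targets):
    return {"mode": "multi", "targets": targets}  # second parameter combination

def plan_illumination(frame):
    targets = detect_constructors(frame)
    if not targets:
        return None                # nothing to illuminate
    if len(targets) == 1:
        return single_target_strategy(targets[0])
    return multi_target_strategy(targets)

print(plan_illumination({"targets": [{"id": 1}, {"id": 2}]}))  # -> multi mode
```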
3. The electric power capital construction tethered unmanned aerial vehicle self-tracking pan-tilt lighting system of claim 2, wherein the electric power capital construction tethered unmanned aerial vehicle self-tracking pan-tilt lighting system is specifically configured to:
If the number of constructors N = 1, performing target object identification on the target monitoring image data to obtain a first target object, and performing face image region segmentation on the first target object to obtain first face image data of the first target object;
Acquiring first object position information of the first target object based on the single target object monitoring strategy and the camera;
performing face amplification processing on the first face image data through the intelligent identification unit and the portrait amplifying unit to obtain second face image data of the first target object;
performing movement track analysis on the first target object according to the first object position information to obtain movement track information of the first target object, and performing face orientation analysis on the second face image data to obtain face orientation information of the first target object;
Calculating an illumination direction parameter of the unmanned aerial vehicle according to the movement track information and the face orientation information of the first target object, monitoring height data of the unmanned aerial vehicle from the ground through the height sensing unit, calculating a first illumination brightness parameter of the unmanned aerial vehicle according to the height data of the unmanned aerial vehicle from the ground, and generating a first unmanned aerial vehicle illumination parameter combination according to the illumination direction parameter and the first illumination brightness parameter;
Transmitting the first unmanned aerial vehicle illumination parameter combination to the flight control unit and the positioning module through the central processing module, and generating a first monitoring instruction for the first target object through the flight control unit and the positioning module;
And executing the first monitoring instruction through the unmanned aerial vehicle, so that the unmanned aerial vehicle performs unmanned aerial vehicle self-tracking cradle head illumination on the first target object.
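As a non-limiting sketch of the parameter calculation in claim 3, the snippet below blends the movement heading with the face orientation into a single illumination direction and derives a brightness level from the flight height via the inverse-square law; the 0.7 face weighting, the 500 lux target and the luminous-intensity constant are invented values, as the patent discloses no such formulas:

```python
import math

def illumination_direction(track_heading_deg: float,
                           face_heading_deg: float,
                           w_face: float = 0.7) -> float:
    """Blend movement heading with face orientation on the unit circle so the
    light leads the constructor's gaze rather than trailing their path."""
    tx, ty = math.cos(math.radians(track_heading_deg)), math.sin(math.radians(track_heading_deg))
    fx, fy = math.cos(math.radians(face_heading_deg)), math.sin(math.radians(face_heading_deg))
    x = (1 - w_face) * tx + w_face * fx
    y = (1 - w_face) * ty + w_face * fy
    return math.degrees(math.atan2(y, x)) % 360.0

def brightness_for_height(height_m: float, target_lux: float = 500.0,
                          candela_per_level: float = 20000.0) -> float:
    """Inverse-square law: required output level for the wanted ground lux."""
    return target_lux * height_m ** 2 / candela_per_level

combo = {"direction_deg": illumination_direction(90.0, 45.0),
         "brightness_level": brightness_for_height(40.0)}
print(combo)  # first unmanned aerial vehicle illumination parameter combination
```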
4. The electric power capital construction tethered unmanned aerial vehicle self-tracking pan-tilt lighting system of claim 2, wherein the electric power capital construction tethered unmanned aerial vehicle self-tracking pan-tilt lighting system is specifically configured to:
If the number of constructors N > 1, performing target object identification on the target monitoring image data to obtain a plurality of second target objects;
respectively monitoring the position information of the plurality of second target objects through the camera and the infrared thermal imaging detector to obtain second object position information of each second target object;
performing range calculation on the plurality of second target objects according to the second object position information to obtain target range information of the plurality of second target objects;
Calculating an unmanned aerial vehicle position parameter, an unmanned aerial vehicle height parameter and an illumination range parameter of the unmanned aerial vehicle according to the target range information of the plurality of second target objects, monitoring height data of the unmanned aerial vehicle from the ground through the height sensing unit, calculating a second illumination brightness parameter of the unmanned aerial vehicle according to the height data of the unmanned aerial vehicle from the ground, and generating a second unmanned aerial vehicle illumination parameter combination according to the unmanned aerial vehicle position parameter, the unmanned aerial vehicle height parameter, the illumination range parameter and the second illumination brightness parameter;
Transmitting the second unmanned aerial vehicle illumination parameter combination to the lighting module and the positioning module through the central processing module, and generating second monitoring instructions for the plurality of second target objects through the lighting module and the positioning module;
and executing the second monitoring instruction through the unmanned aerial vehicle, so that the unmanned aerial vehicle performs unmanned aerial vehicle self-tracking cradle head illumination on the plurality of second target objects.
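The geometry of claim 4 can be illustrated as follows; the smallest-centred-circle range model, the 30 degree beam half-angle and the 2 m margin are assumptions used only to make the calculation concrete:

```python
import math

def multi_target_parameters(positions, beam_half_angle_deg=30.0, margin_m=2.0):
    """positions: list of (x, y) constructor coordinates in metres."""
    cx = sum(p[0] for p in positions) / len(positions)
    cy = sum(p[1] for p in positions) / len(positions)
    # Target range: radius of the smallest centred circle covering everyone.
    radius = max(math.hypot(px - cx, py - cy) for px, py in positions) + margin_m
    # Hover high enough that the light cone's footprint covers that circle.
    height = radius / math.tan(math.radians(beam_half_angle_deg))
    return {"position": (cx, cy), "height_m": height, "range_m": radius}

print(multi_target_parameters([(0, 0), (6, 2), (3, 8)]))
```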
5. The electric power capital construction tethered unmanned aerial vehicle self-tracking pan-tilt lighting system of claim 2, wherein the monitoring module is specifically configured to:
Carrying out multi-channel spectrum image acquisition on the target area based on a plurality of preset image acquisition channels to obtain initial spectrum image data of each image acquisition channel;
Acquiring spectrum distribution data of each image acquisition channel, and respectively carrying out spectrum correction on the initial spectrum image data according to the spectrum distribution data to obtain target spectrum image data of each image acquisition channel;
Carrying out multispectral fusion on the plurality of target spectral image data to generate fused spectral image data, and carrying out image feature analysis on the fused spectral image data to obtain a spectral image feature set;
and carrying out monitoring region enhancement processing on the fused spectrum image data according to the spectrum image feature set to obtain target monitoring image data.
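A minimal sketch of the per-channel spectral correction and fusion steps of claim 5, assuming a conventional dark-frame/flat-field correction model and equal fusion weights (the patent specifies neither):

```python
import numpy as np

def spectral_correct(raw: np.ndarray, dark: np.ndarray, flat: np.ndarray) -> np.ndarray:
    """Classic dark-frame / flat-field correction for one acquisition channel."""
    return (raw - dark) / np.clip(flat - dark, 1e-6, None)

def fuse_channels(corrected: list, weights=None) -> np.ndarray:
    """Weighted multispectral fusion of the corrected channel images."""
    weights = weights or [1.0 / len(corrected)] * len(corrected)
    return sum(w * c for w, c in zip(weights, corrected))

rng = np.random.default_rng(0)
raws = [rng.uniform(0.2, 1.0, (64, 64)) for _ in range(4)]   # 4 channels
dark = np.full((64, 64), 0.05)   # stand-in spectrum distribution data
flat = np.full((64, 64), 0.9)
fused = fuse_channels([spectral_correct(r, dark, flat) for r in raws])
```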
6. The electric power capital construction tethered unmanned aerial vehicle self-tracking pan-tilt lighting system of claim 1, wherein the lamp detecting unit is specifically configured to:
detecting the current and the illumination duration of the lighting lamp to obtain initial current data and initial illumination duration data of the lighting lamp;
Performing standardization processing on the initial current data to obtain target current data, and performing standardization processing on the initial illumination duration data to obtain target illumination duration data;
respectively carrying out feature recognition on the target current data and the target illumination duration data to obtain current features and illumination duration features;
Vector mapping and attention mechanism weighting are carried out on the current characteristics and the illumination duration characteristics, so that an illumination lamp detection vector is obtained;
and inputting the lighting lamp detection vector into a preset support vector machine model to detect lamp damage abnormality, so as to obtain a lamp damage abnormality detection result.
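The lamp-fault pipeline of claim 6 could be prototyped as below; the feature set, the attention weights and the choice of scikit-learn's one-class SVM variant are illustrative assumptions, since the claim names a support vector machine model without further detail:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

def lamp_vector(current_a: np.ndarray, hours: np.ndarray,
                attn=(0.7, 0.3)) -> np.ndarray:
    """Map current/duration features to an attention-weighted detection vector."""
    feats = np.column_stack([current_a, hours])
    return feats * np.asarray(attn)   # attention-style reweighting

rng = np.random.default_rng(1)
# Train only on healthy lamps so deviations register as damage anomalies.
normal = lamp_vector(rng.normal(2.0, 0.05, 200), rng.normal(100, 5, 200))
scaler = StandardScaler().fit(normal)
model = OneClassSVM(nu=0.05).fit(scaler.transform(normal))

probe = lamp_vector(np.array([2.9]), np.array([100.0]))   # over-current lamp
print(model.predict(scaler.transform(probe)))             # -1 => anomaly
```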
7. The electric power capital construction tethered unmanned aerial vehicle self-tracking pan-tilt lighting system of claim 3, wherein the portrait amplifying unit is specifically configured to:
Performing image encoding on the first face image data by adopting a pre-trained diffusion autoencoder to obtain a first image coding subspace and a second image coding subspace;
extracting portrait features of the first image coding subspace and the second image coding subspace respectively through a cross-attention inverse linear interpolation branch in a preset dual-branch identity separation network to obtain a first initial portrait feature of the first image coding subspace and a second initial portrait feature of the second image coding subspace;
extracting hidden features of the first image coding subspace and the second image coding subspace respectively through a multi-layer perceptron branch in the dual-branch identity separation network to obtain a first initial hidden feature of the first image coding subspace and a second initial hidden feature of the second image coding subspace;
performing feature fusion on the first initial portrait feature and the first initial hidden feature to obtain a first fused portrait feature, and performing feature fusion on the second initial portrait feature and the second initial hidden feature to obtain a second fused portrait feature;
And performing feature decoding and image amplification processing on the first fused image feature and the second fused image feature to obtain second face image data of the first target object.
8. The electric power capital construction tethered unmanned aerial vehicle self-tracking pan-tilt lighting system of claim 4, wherein the positioning module is further configured to:
According to the first monitoring instruction or the second monitoring instruction, a flight repositioning instruction is sent to the unmanned aerial vehicle;
Receiving the flight repositioning instruction through the unmanned aerial vehicle, and performing repositioning operation on the unmanned aerial vehicle according to the first unmanned aerial vehicle illumination parameter combination or the second unmanned aerial vehicle illumination parameter combination to obtain repositioning data;
and according to the repositioning data, carrying out illumination parameter correction and self-tracking cradle head illumination on the first unmanned aerial vehicle illumination parameter combination or the second unmanned aerial vehicle illumination parameter combination.
9. The electric power capital construction tethered unmanned aerial vehicle self-tracking pan-tilt lighting system of claim 1, wherein the positioning module comprises a Beidou, GPS and GLONASS three-mode positioning system, and an onboard RTK equipped with dual antennas.
10. The electric power capital construction tethered unmanned aerial vehicle self-tracking pan-tilt lighting system of claim 1, wherein the camera is an ultra-long-focus visible-light fog-penetrating high-definition network camera.