CN113066040A - Unmanned aerial vehicle 3D modeling-based face recognition equipment layout method - Google Patents
- Publication number
- CN113066040A (application CN201911369432.9A; granted publication CN113066040B)
- Authority
- CN
- China
- Prior art keywords
- aerial vehicle
- unmanned aerial
- installation
- face recognition
- recognition equipment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T 7/0004: Industrial image inspection (G Physics; G06 Computing; G06T Image data processing or generation; G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
- G06F 18/217: Validation; performance evaluation; active pattern learning techniques (G06F Electric digital data processing; G06F 18/00 Pattern recognition; G06F 18/21 Design or setup of recognition systems)
- G06T 17/10: Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes (G06T 17/00 Three-dimensional [3D] modelling)
- G06T 7/70: Determining position or orientation of objects or cameras (G06T 7/00 Image analysis)
- G06T 2207/30232: Surveillance (G06T 2207/00 Indexing scheme for image analysis or image enhancement; G06T 2207/30 Subject of image)
- Y02T 10/40: Engine management systems (Y02T Climate change mitigation technologies related to transportation; Y02T 10/10 Internal combustion engine [ICE] based vehicles)
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Geometry (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Quality & Reliability (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a face recognition equipment layout method based on unmanned aerial vehicle 3D modeling, comprising the following steps: inputting the optimal installation parameter standard and error range of the face recognition device into an unmanned aerial vehicle equipped with an onboard camera; scanning the installation site through 360 degrees with the unmanned aerial vehicle; generating pre-installation point data from the scan result, and obtaining the three-dimensional coordinates of the pre-installation point together with a simulated photo of the optimal installation; during installation of the face recognition device, scanning and monitoring the installation in real time with the unmanned aerial vehicle's onboard camera; and rechecking the installation result through a field scan by the unmanned aerial vehicle. The method enables accurate on-site three-dimensional scanning and surveying, so that the device can be installed professionally and self-service directly according to the site environment, saving time and labor cost.
Description
Technical Field
The invention relates to the technical field of face recognition, and in particular to a face recognition equipment layout method based on unmanned aerial vehicle 3D modeling.
Background
At present, installing existing face recognition equipment requires cumbersome advance manual work: surveying the site environment and measuring height, angle, depth-of-field distance and the like. This is labor-intensive and error-prone.

Because installation of face recognition equipment imposes strict requirements on parameters such as height, angle and orientation, a poor installation leads to extremely poor recognition performance and prevents the product from performing at its full capability. Position, height and other conditions therefore have to be repeatedly debugged and verified, making installation and adjustment troublesome.

Moreover, because the installation requirements are high, only a few workers with professional experience can deploy the equipment. Training and certifying installers is costly, and the shortage of professionals affects the overall project schedule.
Disclosure of Invention
The invention aims to provide a face recognition equipment layout method based on unmanned aerial vehicle 3D modeling that enables accurate on-site three-dimensional scanning and surveying, allows self-service professional installation of the device directly according to the site environment, and saves time and labor cost.
The above object of the invention is achieved by the features of the independent claims, the dependent claims developing the features of the independent claims in alternative or advantageous ways.
To achieve this object, the invention provides a face recognition equipment layout method based on unmanned aerial vehicle 3D modeling, comprising the following steps:
step 1: inputting the optimal installation parameter standard and error range of the face recognition device into an unmanned aerial vehicle equipped with an onboard camera;

step 2: scanning the installation site through 360 degrees with the unmanned aerial vehicle;

step 3: generating pre-installation point data from the scan result, and obtaining the three-dimensional coordinates of the pre-installation point and a simulated photo of the optimal installation;

step 4: during installation of the face recognition device, scanning and monitoring the installation in real time with the unmanned aerial vehicle's onboard camera;

step 5: rechecking the installation result through a field scan by the unmanned aerial vehicle.
Further, in step 1, parameters for the installation environment are generated according to the performance of the face recognition device and the quality requirements for the captured pictures, including: height above the ground, horizontal distance to a reference object, camera horizontal angle, and camera orientation.
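As an illustrative sketch (not part of the claimed method), the step-1 installation parameter standard and its error range could be represented as a small data structure with a tolerance check; all field names, units and values here are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class InstallStandard:
    """Optimal installation parameters plus per-parameter tolerances (hypothetical)."""
    ground_height_m: float        # mounting height above the ground
    ref_distance_m: float         # horizontal distance to the reference object
    horizontal_angle_deg: float   # camera horizontal angle
    orientation_deg: float        # camera compass orientation
    tolerance: dict               # allowed absolute error per parameter name


def within_tolerance(standard: InstallStandard, measured: dict) -> dict:
    """Compare measured values against the standard; return per-parameter pass/fail."""
    result = {}
    for name, allowed in standard.tolerance.items():
        result[name] = abs(measured[name] - getattr(standard, name)) <= allowed
    return result


standard = InstallStandard(
    ground_height_m=2.2, ref_distance_m=3.0,
    horizontal_angle_deg=15.0, orientation_deg=180.0,
    tolerance={"ground_height_m": 0.05, "horizontal_angle_deg": 2.0},
)
print(within_tolerance(standard, {"ground_height_m": 2.23, "horizontal_angle_deg": 18.5}))
# prints {'ground_height_m': True, 'horizontal_angle_deg': False}
```

A structure like this is what the drone would carry on board to judge compliance during the later monitoring and recheck steps.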
Further, in step 2, as the unmanned aerial vehicle flies, the onboard camera performs a 360-degree scan within the 10 m x 10 m space around the pre-installation point and across the panoramic height.
Further, in step 3, a reference object at the pre-installation site is specified: taking as reference point the vertical center of the ground horizontal line at the depth-of-field distance from the starting point, the method generates the three-dimensional coordinates of that point as the pre-installation point, together with a simulated photo of the optimal device installation.
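Read loosely (the patent's reference-point rule leaves geometric details open), step 3's coordinate generation might look like the following sketch; the coordinate convention (x east, y north, z up) and every parameter name are assumptions for illustration:

```python
import math


def preinstall_point(ref_xy, orientation_deg, depth_of_field_m, mount_height_m):
    """Place the pre-installation point at the depth-of-field distance from the
    ground reference point, along the intended camera orientation, at the
    mounting height. This interpretation of the patent's rule is an assumption."""
    rad = math.radians(orientation_deg)
    x = ref_xy[0] + depth_of_field_m * math.sin(rad)  # east component
    y = ref_xy[1] + depth_of_field_m * math.cos(rad)  # north component
    return (round(x, 3), round(y, 3), mount_height_m)


print(preinstall_point((0.0, 0.0), 90.0, 3.0, 2.2))  # → (3.0, 0.0, 2.2)
```

The resulting coordinates would then seed the simulated photo of the optimal installation.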
Further, during installation of the face recognition device in step 4, the installation is monitored by real-time scanning with the unmanned aerial vehicle's onboard camera, and a reference of the as-installed data (height, angle, depth-of-field distance and orientation) is output every 5 minutes, together with an adjustment scheme based on the optimal installation parameter standard and error range input in advance.
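The 5-minute monitoring cycle described above can be sketched as a polling loop; `read_pose` stands in for the drone's pose-measurement subsystem, which the patent does not specify, and all names are hypothetical:

```python
import time


def monitor(read_pose, target, tol, interval_s=300, max_polls=10):
    """Poll measured install parameters every `interval_s` seconds (the patent
    uses 5 minutes) and report the corrections still needed, until every
    parameter falls within its tolerance."""
    for _ in range(max_polls):
        measured = read_pose()
        off = {k: measured[k] - target[k] for k in target
               if abs(measured[k] - target[k]) > tol[k]}
        if not off:
            return "within tolerance"
        # the "adjustment scheme": move each off-spec parameter by -delta
        print("adjust by:", {k: round(-d, 2) for k, d in off.items()})
        time.sleep(interval_s)
    return "not converged"
```

In practice the printed deltas would also feed the voice broadcast mentioned below.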
Further, when the adjustment scheme is output, it is also broadcast by voice.
Further, after installation, rechecking the installation result through the unmanned aerial vehicle field scan comprises: rechecking, via the field scan, whether the installed face recognition device matches the simulated photo within the error range; if the error is exceeded, issuing an early warning and outputting adjustment parameters; and if the error range is satisfied, taking site photos with the unmanned aerial vehicle.
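If the recheck is reduced to comparing the as-installed 3D position (from the drone's field scan) against the simulated optimal position, it might look like this sketch; the 0.1 m tolerance and the position-only comparison are illustrative assumptions, not the patent's specification:

```python
import math


def recheck(simulated_xyz, measured_xyz, max_err_m=0.1):
    """Recheck the installed position against the simulated optimum.
    Returns ("warn", adjustment) past tolerance, ("pass", None) otherwise."""
    err = math.dist(simulated_xyz, measured_xyz)   # Euclidean distance, Python 3.8+
    if err > max_err_m:
        # early warning plus the adjustment parameters: move by this delta
        delta = tuple(round(s - m, 3) for s, m in zip(simulated_xyz, measured_xyz))
        return ("warn", delta)
    return ("pass", None)                          # within tolerance: take site photos
```

A "pass" result would trigger the site-photo capture described next.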
Further, after the recheck is completed, the site photos taken by the unmanned aerial vehicle include photos of the front and back of the installed device, taken horizontally at a distance of 3 meters from the device, and these photos are uploaded to the cloud server.
Further, after the rechecking is completed, the method further comprises the following steps:
taking at least 2 photos of different faces on site with the unmanned aerial vehicle;

inputting the captured face photos into a preset face recognition algorithm for recognition, and outputting the corresponding face recognition scores;

in response to the face recognition scores being greater than the set threshold, ending the installation; otherwise, regenerating the pre-installation point and reinstalling.
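This final acceptance test can be sketched as follows, reading the patent as requiring every captured face (at least two different ones) to score above the threshold; the 0.8 threshold value is an illustrative assumption:

```python
def acceptance_check(scores, threshold=0.8):
    """Accept the installation only if every face photo scored above the
    threshold; otherwise signal that the pre-installation point must be
    regenerated and the device reinstalled."""
    if len(scores) < 2:
        raise ValueError("need photos of at least 2 different faces")
    if min(scores) > threshold:
        return "installation complete"
    return "regenerate pre-installation point and reinstall"


print(acceptance_check([0.92, 0.88]))  # → installation complete
```

Whether the patent intends a per-face or aggregate score is not stated; the per-face reading here is the stricter of the two.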
It should be understood that all combinations of the foregoing concepts and additional concepts described in greater detail below can be considered as part of the inventive subject matter of this disclosure unless such concepts are mutually inconsistent. In addition, all combinations of claimed subject matter are considered a part of the presently disclosed subject matter.
The foregoing and other aspects, embodiments and features of the present teachings can be more fully understood from the following description taken in conjunction with the accompanying drawings. Additional aspects of the present invention, such as features and/or advantages of exemplary embodiments, will be apparent from the description which follows, or may be learned by practice of specific embodiments in accordance with the teachings of the present invention.
Drawings
The drawings are not intended to be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures may be represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. Embodiments of various aspects of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
fig. 1 is a flowchart illustrating a method for laying face recognition equipment based on unmanned aerial vehicle 3D modeling according to the present invention.
Fig. 2 is a schematic diagram of the device of the present invention being installed and being reviewed by face recognition of the captured photograph.
Detailed Description
In order to better understand the technical content of the present invention, specific embodiments are described below with reference to the accompanying drawings.
In this disclosure, aspects of the present invention are described with reference to the accompanying drawings, in which a number of illustrative embodiments are shown. Embodiments of the present disclosure are not necessarily intended to include all aspects of the invention. It should be appreciated that the various concepts and embodiments described above, as well as those described in greater detail below, may be implemented in any of numerous ways, as the disclosed concepts and embodiments are not limited to any one implementation. In addition, some aspects of the present disclosure may be used alone, or in any suitable combination with other aspects of the present disclosure.
With reference to fig. 1, the method provided by the invention performs a 360-degree panoramic scan of the installation site environment with the unmanned aerial vehicle to generate a 3D photo and three-dimensional data, locates the planned installation point (three-dimensional coordinates, angle and orientation), and, combining this with the three-dimensional data generated by the unmanned aerial vehicle, produces an accurate three-dimensional coordinate range and a 3D simulated photo of the device to be installed.
Meanwhile, after installation is finished, the unmanned aerial vehicle performs a real-time 3D scan, comparing the live scene of the installed device against the 3D simulated photo to recheck the result. If errors exist, the required adjustment scheme is reported, ensuring the installation quality reaches the optimal range for face recognition.
With reference to the examples shown in fig. 1 and fig. 2, the method for laying the face recognition equipment based on unmanned aerial vehicle 3D modeling comprises the following flow:
step 1: inputting the optimal installation parameter standard and error range of the face recognition device into an unmanned aerial vehicle equipped with an onboard camera;

step 2: scanning the installation site through 360 degrees with the unmanned aerial vehicle;

step 3: generating pre-installation point data from the scan result, and obtaining the three-dimensional coordinates of the pre-installation point and a simulated photo of the optimal installation;

step 4: during installation of the face recognition device, scanning and monitoring the installation in real time with the unmanned aerial vehicle's onboard camera;

step 5: rechecking the installation result through a field scan by the unmanned aerial vehicle.
In this way, on the one hand, accurate three-dimensional scanning and surveying is carried out directly on site: no manual site survey or manual measurement of site data is needed, and the device can be installed professionally and self-service according to the site environment, saving time and labor cost. On the other hand, accurate values for height, angle and orientation are conveniently available during installation, and an accurate installation result is obtained by scanning after installation, guaranteeing that the installation quality meets the standard. This also enables "foolproof" device installation: simply launching the unmanned aerial vehicle to scan the environment generates all the installation data; after installing according to that data, the result is rechecked by real-time unmanned aerial vehicle scanning with real-time guidance, lowering the technical requirements on installers.
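The end-to-end flow of steps 1 to 5 can be sketched as a single orchestration loop; every callback here is a hypothetical stub standing in for drone and vision subsystems the patent does not specify in code:

```python
def deploy(scan, plan, guide, recheck, verify, max_attempts=3):
    """Run the full deployment flow, regenerating the pre-installation point
    and reinstalling on failure, up to a retry budget (the cap of 3 is an
    illustrative assumption, not from the patent)."""
    for _ in range(max_attempts):
        site_model = scan()              # step 2: 360-degree site scan
        point = plan(site_model)         # step 3: pre-install point + simulated photo
        guide(point)                     # step 4: real-time guided installation
        if recheck(point) and verify():  # step 5 recheck + face-score verification
            return "installed"
    return "failed"                      # retry budget exhausted
```

Passing real subsystem callbacks for `scan`, `plan`, `guide`, `recheck` and `verify` would turn this skeleton into the self-service installation workflow the disclosure describes.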
Preferably, the unmanned aerial vehicle used in the present invention may be a current consumer or commercial drone, such as a DJI Inspire-series or Phantom-series drone, equipped with an onboard camera. As the drone flies, the onboard camera performs a 360-degree scan of the 10 m x 10 m space around the pre-installation point and across the panoramic height.
In step 1, parameters for the installation environment are generated according to the performance of the face recognition device and the quality requirements for the captured pictures, including: height above the ground, horizontal distance to a reference object, camera horizontal angle, and camera orientation.

In step 3, a reference object at the pre-installation point is specified: taking as reference point the vertical center of the ground horizontal line at the depth-of-field distance from the starting point, the method generates the three-dimensional coordinates of that point as the pre-installation point, together with a simulated photo of the optimal device installation.

In step 4, during installation of the face recognition device, the installation is monitored by real-time scanning with the unmanned aerial vehicle's onboard camera, and a reference of the as-installed data (height, angle, depth-of-field distance and orientation) is output every 5 minutes, together with an adjustment scheme based on the optimal installation parameter standard and error range input in advance. Preferably, the adjustment scheme is also broadcast by voice as it is output.

Preferably, after installation, rechecking the installation result through the unmanned aerial vehicle field scan includes: rechecking, via the field scan, whether the installed face recognition device matches the simulated photo within the error range; if the error is exceeded, issuing an early warning and outputting adjustment parameters; and if the error range is satisfied, taking site photos with the unmanned aerial vehicle.

Preferably, after the recheck is completed, the site photos taken by the unmanned aerial vehicle include photos of the front and back of the installed device, taken horizontally at a distance of 3 meters from the device, and these photos are uploaded to the cloud server.
In a further aspect, with reference to the example shown in fig. 2, after the recheck is completed, the method further comprises:
taking at least 2 photos of different faces on site with the unmanned aerial vehicle;

inputting the captured face photos into a preset face recognition algorithm for recognition, and outputting the corresponding face recognition scores;

in response to the face recognition scores being greater than the set threshold, ending the installation; otherwise, regenerating the pre-installation point and reinstalling.
In this way, face photos are collected by having a mannequin or an actual person stand in position, and an actual judgment is made on the quality of the captured face photos, ensuring that the installation meets the practical requirements of real recognition scenarios.
Although the present invention has been described with reference to the preferred embodiments, it is not intended to be limited thereto. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the invention. Therefore, the protection scope of the present invention should be determined by the appended claims.
Claims (9)
1. A face recognition equipment layout method based on unmanned aerial vehicle 3D modeling is characterized by comprising the following steps:
step 1: inputting the optimal installation parameter standard and error range of the face recognition device into an unmanned aerial vehicle equipped with an onboard camera;

step 2: scanning the installation site through 360 degrees with the unmanned aerial vehicle;

step 3: generating pre-installation point data from the scan result, and obtaining the three-dimensional coordinates of the pre-installation point and a simulated photo of the optimal installation;

step 4: during installation of the face recognition device, scanning and monitoring the installation in real time with the unmanned aerial vehicle's onboard camera;

step 5: rechecking the installation result through a field scan by the unmanned aerial vehicle.
2. The face recognition equipment layout method based on unmanned aerial vehicle 3D modeling according to claim 1, wherein in step 1, parameters for the installation environment are generated according to the performance of the face recognition device and the quality requirements for the captured pictures, including: height above the ground, horizontal distance to a reference object, camera horizontal angle, and camera orientation.

3. The face recognition equipment layout method based on unmanned aerial vehicle 3D modeling according to claim 1, wherein in step 2, as the unmanned aerial vehicle flies, the onboard camera performs a 360-degree scan within the 10 m x 10 m space around the pre-installation point and across the panoramic height.

4. The face recognition equipment layout method based on unmanned aerial vehicle 3D modeling according to claim 1, wherein in step 3, a reference object at the pre-installation point is specified: taking as reference point the vertical center of the ground horizontal line at the depth-of-field distance from the starting point, the three-dimensional coordinates of that point as the pre-installation point and a simulated photo of the optimal device installation are generated.

5. The face recognition equipment layout method based on unmanned aerial vehicle 3D modeling according to claim 1, wherein during installation of the face recognition device in step 4, the installation is monitored by real-time scanning with the unmanned aerial vehicle's onboard camera, and a reference of the as-installed data (height, angle, depth-of-field distance and orientation) is output every 5 minutes, together with an adjustment scheme based on the optimal installation parameter standard and error range input in advance.
6. The face recognition equipment layout method based on unmanned aerial vehicle 3D modeling according to claim 5, wherein when the adjustment scheme is output, it is broadcast by voice.

7. The face recognition equipment layout method based on unmanned aerial vehicle 3D modeling according to claim 5 or 6, wherein after installation is completed, rechecking the installation result through the unmanned aerial vehicle field scan comprises: rechecking, via the field scan, whether the installed face recognition device matches the simulated photo within the error range; if the error is exceeded, issuing an early warning and outputting adjustment parameters; and if the error range is satisfied, taking site photos with the unmanned aerial vehicle.

8. The face recognition equipment layout method based on unmanned aerial vehicle 3D modeling according to claim 7, wherein after the recheck is completed, the site photos taken by the unmanned aerial vehicle include photos of the front and back of the installed device, taken horizontally at a distance of 3 meters from the device, and the photos are uploaded to a cloud server.
9. The face recognition equipment layout method based on unmanned aerial vehicle 3D modeling according to claim 7, wherein after the recheck is completed, the method further comprises:

taking at least 2 photos of different faces on site with the unmanned aerial vehicle;

inputting the captured face photos into a preset face recognition algorithm for recognition, and outputting the corresponding face recognition scores;

in response to the face recognition scores being greater than the set threshold, ending the installation; otherwise, regenerating the pre-installation point and reinstalling.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911369432.9A CN113066040B (en) | 2019-12-26 | 2019-12-26 | Face recognition equipment arrangement method based on unmanned aerial vehicle 3D modeling |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911369432.9A CN113066040B (en) | 2019-12-26 | 2019-12-26 | Face recognition equipment arrangement method based on unmanned aerial vehicle 3D modeling |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113066040A true CN113066040A (en) | 2021-07-02 |
CN113066040B CN113066040B (en) | 2022-09-09 |
Family
ID=76558254
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911369432.9A Active CN113066040B (en) | 2019-12-26 | 2019-12-26 | Face recognition equipment arrangement method based on unmanned aerial vehicle 3D modeling |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113066040B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106056075A (en) * | 2016-05-27 | 2016-10-26 | 广东亿迅科技有限公司 | Important person identification and tracking system in community meshing based on unmanned aerial vehicle |
CN107218926A (en) * | 2017-05-12 | 2017-09-29 | 西北工业大学 | A kind of data processing method of the remote scanning based on unmanned aerial vehicle platform |
CN110221625A (en) * | 2019-05-27 | 2019-09-10 | 北京交通大学 | The Autonomous landing guidance method of unmanned plane exact position |
- 2019-12-26: application CN201911369432.9A filed in CN; granted as patent CN113066040B (active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106056075A (en) * | 2016-05-27 | 2016-10-26 | 广东亿迅科技有限公司 | Important person identification and tracking system in community meshing based on unmanned aerial vehicle |
CN107218926A (en) * | 2017-05-12 | 2017-09-29 | 西北工业大学 | A kind of data processing method of the remote scanning based on unmanned aerial vehicle platform |
CN110221625A (en) * | 2019-05-27 | 2019-09-10 | 北京交通大学 | The Autonomous landing guidance method of unmanned plane exact position |
Also Published As
Publication number | Publication date |
---|---|
CN113066040B (en) | 2022-09-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109002055B (en) | High-precision automatic inspection method and system based on unmanned aerial vehicle | |
CN106403942B (en) | Personnel indoor inertial positioning method based on substation field depth image identification | |
JP2019515272A (en) | System for analyzing the damage of an aircraft and an object scanning an object | |
CN114020002B (en) | Method, device and equipment for unmanned aerial vehicle to inspect fan blade, unmanned aerial vehicle and medium | |
Kaartinen et al. | Accuracy of 3D city models: EuroSDR comparison | |
KR20200048615A (en) | Realtime inspecting drone for solar photovoltaic power station basen on machine learning | |
CN115240093B (en) | Automatic power transmission channel inspection method based on visible light and laser radar point cloud fusion | |
CN104933223B (en) | A kind of electric transmission line channel digital mapping method | |
US11021246B2 (en) | Method and system for capturing images of asset using unmanned aerial vehicles | |
CN114623049B (en) | Wind turbine tower clearance monitoring method and computer program product | |
CN112381356A (en) | Completion acceptance method, completion acceptance system, server and storage medium for engineering project | |
CN113378754B (en) | Bare soil monitoring method for construction site | |
CN116501091B (en) | Fan inspection control method and device based on unmanned aerial vehicle automatic adjustment route | |
CN116129064A (en) | Electronic map generation method, device, equipment and storage medium | |
CN117036999A (en) | Digital twinning-based power transformation equipment modeling method | |
CN115563732A (en) | Spraying track simulation optimization method and device based on virtual reality | |
Sepasgozar et al. | Scanners and Photography: A combined framework | |
CN113987246A (en) | Automatic picture naming method, device, medium and electronic equipment for unmanned aerial vehicle inspection | |
CN113066040B (en) | Face recognition equipment arrangement method based on unmanned aerial vehicle 3D modeling | |
JP2009032063A (en) | Device and program for generating space information database | |
CN114066981A (en) | Unmanned aerial vehicle ground target positioning method | |
CN114326794A (en) | Curtain wall defect identification method, control terminal, server and readable storage medium | |
CN117557931A (en) | Planning method for meter optimal inspection point based on three-dimensional scene | |
CN112504236A (en) | Unmanned aerial vehicle aerial photography power distribution network area surveying and mapping method | |
CN114659499B (en) | Smart city 3D map model photography establishment method based on unmanned aerial vehicle technology |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CP01 | Change in the name or title of a patent holder | Address: No.568 longmian Avenue, gaoxinyuan, Jiangning District, Nanjing City, Jiangsu Province, 211000. Patentee after: Xiaoshi Technology (Jiangsu) Co.,Ltd. Patentee before: NANJING ZHENSHI INTELLIGENT TECHNOLOGY Co.,Ltd. |