CN113758918B - Unmanned aerial vehicle system-based material determination method and device
- Publication number
- CN113758918B (application number CN202010503045.6A / CN202010503045A)
- Authority
- CN
- China
- Prior art keywords
- unmanned aerial
- scene
- aerial vehicle
- image
- brightness
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64C—AEROPLANES; HELICOPTERS
- B64C39/00—Aircraft not otherwise provided for
- B64C39/02—Aircraft not otherwise provided for characterised by special use
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/30—Measuring arrangements characterised by the use of optical techniques for measuring roughness or irregularity of surfaces
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/17—Systems in which incident light is modified in accordance with the properties of the material investigated
- G01N21/25—Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2101/00—UAVs specially adapted for particular uses or applications
- B64U2101/30—UAVs specially adapted for particular uses or applications for imaging, photography or videography
Abstract
The application provides a material determination method and device based on an unmanned aerial vehicle system, wherein the method comprises the following steps: acquiring at least one group of scene images shot by a photographing unmanned aerial vehicle at at least one preset position, wherein each group of scene images comprises five first images shot at a first preset angle corresponding to the preset position and one second image shot at a second preset angle, the five first images differ in scene brightness, each scene image contains the object to be detected, the first preset angle means that the orientations of the first polarizer and the second polarizers are mutually perpendicular, the second preset angle means that they are mutually parallel, and the second polarizers of the plurality of illuminating unmanned aerial vehicles share a consistent orientation; generating a material information map corresponding to each preset position according to that group's five first images and the corresponding second image; and determining the material information of the object to be detected according to the material information maps corresponding to all the preset positions.
Description
Technical Field
The application relates to the technical field of material reconstruction, and in particular to a material determination method and device based on an unmanned aerial vehicle system.
Background
Conventional methods for reconstructing the material of an object's surface generally involve building a light frame, placing the object inside it, and capturing pictures of the object's surface under different illumination in order to inversely compute the material information of the surface.
However, this approach is generally suitable only for small objects and not for large scenes such as large buildings, which limits the application range of material reconstruction; moreover, constructing a light frame larger than such a scene is difficult and costly.
Disclosure of Invention
An object of the embodiments of the present application is to provide a material determination method and device based on an unmanned aerial vehicle system, so as to solve the problems that existing material reconstruction methods are generally suitable only for small objects rather than large buildings, which limits the application range of material reconstruction, and that constructing a light frame larger than a large scene is difficult and costly.
In a first aspect, an embodiment of the present invention provides a material determination method based on an unmanned aerial vehicle system, where the unmanned aerial vehicle system includes a photographing unmanned aerial vehicle, a plurality of illuminating unmanned aerial vehicles and a server, the photographing unmanned aerial vehicle includes a camera and a first polarizer disposed on a lens of the camera, each illuminating unmanned aerial vehicle includes a light source and a second polarizer disposed on a light-emitting surface of the light source, and the server is communicatively connected with the plurality of illuminating unmanned aerial vehicles and the photographing unmanned aerial vehicle. The method is applied to the server and includes: acquiring at least one group of scene images shot by the photographing unmanned aerial vehicle at at least one preset position, wherein each group of scene images comprises five first images shot at a first preset angle corresponding to the preset position and one second image shot at a second preset angle, the five first images differ in scene brightness, each scene image contains the object to be detected, the first preset angle means that the orientations of the first polarizer and the second polarizers are mutually perpendicular, the second preset angle means that they are mutually parallel, and the second polarizers of the plurality of illuminating unmanned aerial vehicles share a consistent orientation; generating a material information map corresponding to each preset position according to that group's five first images and the corresponding second image; and determining the material information of the object to be detected according to the material information maps corresponding to all the preset positions.
In the above material determination method based on an unmanned aerial vehicle system, the server controls the plurality of illuminating unmanned aerial vehicles to light the object to be detected at different scene brightnesses, forming scenes of different scene brightness; the server controls the photographing unmanned aerial vehicle to shoot these scenes at at least one preset position to obtain at least one group of scene images; a material information map corresponding to each preset position is generated according to the five first images and the corresponding second image in each group of scene images; and the material information of the object to be detected is then determined according to the material information maps corresponding to all the preset positions. In this way, material reconstruction is no longer confined to small objects inside a light frame and can be applied to large scenes such as buildings.
In an optional implementation manner of the first aspect, acquiring at least one group of scene images shot by the photographing unmanned aerial vehicle at at least one preset position includes: controlling the photographing unmanned aerial vehicle to shoot the corresponding scene to be shot at each preset position to obtain a corresponding group of scene images, wherein the scene to be shot is formed by the object to be detected together with the plurality of illuminating unmanned aerial vehicles arranged in a preset shape, and the preset shape is established with the photographing unmanned aerial vehicle as its center.
In an optional implementation manner of the first aspect, controlling the photographing unmanned aerial vehicle to shoot the corresponding scene to be shot at each preset position to obtain a corresponding group of scene images includes: controlling the photographing unmanned aerial vehicle to photograph the scene to be shot under five different scene brightnesses at each preset position through the first preset angle, to obtain the five first images corresponding to each preset position; and controlling the photographing unmanned aerial vehicle to photograph the scene to be shot at each preset position through the second preset angle, to obtain the second image corresponding to the five first images.
In an optional implementation manner of the first aspect, the five first images include a first scene brightness image, a second scene brightness image, a third scene brightness image, a fourth scene brightness image and a fifth scene brightness image, and controlling the photographing unmanned aerial vehicle to photograph the scene to be shot under five different scene brightnesses at each preset position through the first preset angle includes, for each of the five scene brightnesses in turn: adjusting the light source brightness of all the illuminating unmanned aerial vehicles to form that scene brightness, and controlling the photographing unmanned aerial vehicle to photograph the scene to be shot under that scene brightness at each preset position through the first preset angle, to obtain the corresponding scene brightness image for each preset position.
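The adjust-then-photograph cycle above can be sketched as follows. This is a hypothetical orchestration sketch: `set_all_light_brightness` and `capture_at` stand in for the server's drone-control commands, which the patent does not name, and the loop order (scene brightness outer, preset position inner) follows the description above.

```python
# Hypothetical sketch of the five-brightness capture loop described above.
# The callables passed in stand in for unnamed server drone-control commands.

def capture_first_images(preset_positions, brightness_patterns,
                         set_all_light_brightness, capture_at):
    """Collect one first image per (preset position, scene brightness) pair.

    brightness_patterns: five callables, each configuring the lighting
    drones for one of the five scene brightnesses when given the
    brightness-setting command.
    """
    images = {pos: [] for pos in preset_positions}
    for pattern in brightness_patterns:       # five scene brightnesses, in turn
        pattern(set_all_light_brightness)     # reconfigure all light sources
        for pos in preset_positions:          # photograph every preset position
            images[pos].append(capture_at(pos, angle="perpendicular"))
    return images
```

The brightness change sits in the outer loop so that all positions are photographed under one scene brightness before the lighting drones are reconfigured, matching the order of the steps above.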
In an optional implementation manner of the first aspect, adjusting the light source brightness of all the illuminating unmanned aerial vehicles to form the first scene brightness includes: controlling the light sources of all the illuminating unmanned aerial vehicles to be turned off to form the first scene brightness; and adjusting the light source brightness of all the illuminating unmanned aerial vehicles to form the fifth scene brightness includes: adjusting the brightness values of the light sources of all the illuminating unmanned aerial vehicles to their maximum to form the fifth scene brightness.
In an optional implementation manner of the first aspect, adjusting the light source brightness of all the illuminating unmanned aerial vehicles to form the second scene brightness includes:
establishing a coordinate system with the position of the photographing unmanned aerial vehicle as the origin, the orientation of its first polarizer as the y axis, and the direction perpendicular to the y axis as the x axis; calculating a first brightness value for each illuminating unmanned aerial vehicle according to its abscissa, and adjusting the brightness of each illuminating unmanned aerial vehicle according to its first brightness value to form the second scene brightness. Adjusting the light source brightness of all the illuminating unmanned aerial vehicles to form the third scene brightness includes: calculating a second brightness value for each illuminating unmanned aerial vehicle according to its ordinate, and adjusting the brightness of each illuminating unmanned aerial vehicle accordingly to form the third scene brightness. Adjusting the light source brightness of all the illuminating unmanned aerial vehicles to form the fourth scene brightness includes: calculating a third brightness value for each illuminating unmanned aerial vehicle according to its abscissa and ordinate, and adjusting the brightness of each illuminating unmanned aerial vehicle accordingly to form the fourth scene brightness.
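The patent does not give the exact mapping from a lighting drone's coordinates to its brightness value. The sketch below assumes one plausible choice, a linear ramp that normalizes each coordinate over the rig's radius, in the spirit of gradient-illumination capture; the function name and the `radius` parameter are illustrative assumptions.

```python
# Assumed linear-ramp mapping from lighting-drone coordinates to brightness.
# The patent only states that the second/third/fourth scene brightnesses
# depend on the abscissa, the ordinate, and both, respectively.

def gradient_brightness(x, y, radius, max_brightness, axis):
    """Brightness for a lighting drone at (x, y) on a rig of the given radius.

    axis: "x" for the second scene brightness (abscissa ramp),
          "y" for the third (ordinate ramp),
          "xy" for the fourth (combined ramp).
    """
    if axis == "x":
        t = (x / radius + 1.0) / 2.0      # map [-radius, radius] -> [0, 1]
    elif axis == "y":
        t = (y / radius + 1.0) / 2.0
    else:  # "xy": average the two normalized coordinates
        t = (x / radius + y / radius + 2.0) / 4.0
    return max_brightness * t
```

Under this assumption a drone on the far negative x side goes dark and one on the far positive x side runs at full brightness for the second scene brightness, giving the camera a left-to-right lighting gradient.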
In an optional implementation manner of the first aspect, the material information map includes a color map, a roughness map and a metalness map, and generating the material information map corresponding to each preset position according to that group's five first images and the corresponding second image includes: generating a corresponding color map according to each group's fifth scene brightness image and first scene brightness image; generating a corresponding normal map according to each group's first, second, third and fourth scene brightness images; generating a corresponding roughness map according to each group's fifth scene brightness image and the corresponding normal map; and generating a corresponding metalness map according to each group's second image, the first image with the same scene brightness as that second image, and the corresponding normal map.
In an optional implementation manner of the first aspect, the generating a corresponding normal map according to the first scene luminance image, the second scene luminance image, the third scene luminance image, and the fourth scene luminance image of each group includes: performing gray conversion on the first scene brightness image, the second scene brightness image, the third scene brightness image and the fourth scene brightness image of each group to obtain a corresponding first gray image, a corresponding second gray image, a corresponding third gray image and a corresponding fourth gray image; subtracting the R channel value of the corresponding pixel point in the first gray level image from the R channel value of each pixel point in the second gray level image to obtain the R channel value of each pixel point in the normal map, subtracting the G channel value of the corresponding pixel point in the first gray level image from the G channel value of each pixel point in the third gray level image to obtain the G channel value of each pixel point in the normal map, and subtracting the B channel value of the corresponding pixel point in the first gray level image from the B channel value of each pixel point in the fourth gray level image to obtain the B channel value of each pixel point in the normal map; and generating each group of corresponding normal maps according to the R channel value of each pixel point, the G channel value of each pixel point and the B channel value of each pixel point in each group of normal maps.
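The channel-subtraction procedure above can be sketched with NumPy, assuming each scene brightness image is an H x W x 3 float array in [0, 1]; the grayscale weights are a standard luma choice, which the patent does not fix.

```python
import numpy as np

# Sketch of the normal-map construction described above: gray-convert the
# four first images, then take per-channel differences against the first
# (unlit) grayscale image to fill the R, G and B channels of the normal map.

def grayscale(img):
    """Luma-style gray conversion; the exact weights are an assumption."""
    gray = img @ np.array([0.299, 0.587, 0.114])
    return np.stack([gray] * 3, axis=-1)       # keep three equal channels

def normal_map(img1, img2, img3, img4):
    g1, g2, g3, g4 = map(grayscale, (img1, img2, img3, img4))
    n = np.empty_like(g1)
    n[..., 0] = g2[..., 0] - g1[..., 0]        # R: x-gradient image minus unlit image
    n[..., 1] = g3[..., 1] - g1[..., 1]        # G: y-gradient image minus unlit image
    n[..., 2] = g4[..., 2] - g1[..., 2]        # B: combined-gradient image minus unlit image
    return n
```

Because a grayscale image has three equal channels, the three subtractions reduce to per-pixel intensity differences between each gradient-lit image and the unlit baseline.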
In an optional implementation manner of the first aspect, determining the material information of the object to be detected according to the material information maps corresponding to all the preset positions includes: modeling according to all the color maps to generate a model of the object to be detected and calculating the color information of the model; calculating roughness information according to all the roughness maps and the model; and calculating metalness information according to all the metalness maps and the model, so as to determine the material information corresponding to the object to be detected.
In a second aspect, an embodiment of the present invention provides a material reconstruction device based on an unmanned aerial vehicle system, where the unmanned aerial vehicle system includes a photographing unmanned aerial vehicle, a plurality of illuminating unmanned aerial vehicles and a server, the photographing unmanned aerial vehicle includes a camera and a first polarizer disposed on a lens of the camera, each illuminating unmanned aerial vehicle includes a light source and a second polarizer disposed on a light-emitting surface of the light source, and the server is communicatively connected with the plurality of illuminating unmanned aerial vehicles and the photographing unmanned aerial vehicle. The device is applied to the server and includes: an acquisition module, configured to acquire at least one group of scene images shot by the photographing unmanned aerial vehicle at at least one preset position, wherein each group of scene images comprises five first images shot at a first preset angle corresponding to the preset position and one second image shot at a second preset angle, the five first images differ in scene brightness, each scene image contains the object to be detected, the first preset angle means that the orientations of the first polarizer and the second polarizers are mutually perpendicular, the second preset angle means that they are mutually parallel, and the second polarizers of the illuminating unmanned aerial vehicles share a consistent orientation; a generation module, configured to generate a material information map corresponding to each preset position according to that group's five first images and the corresponding second image; and a determination module, configured to determine the material information of the object to be detected according to the material information maps corresponding to all the preset positions.
In the material reconstruction device designed as above, the server controls the plurality of illuminating unmanned aerial vehicles to light the object to be detected at different scene brightnesses, forming scenes of different scene brightness; the server controls the photographing unmanned aerial vehicle to shoot these scenes at at least one preset position to obtain at least one group of scene images; a material information map corresponding to each preset position is generated according to the five first images and the corresponding second image in each group of scene images; and the material information of the object to be detected is then determined according to the material information maps corresponding to all the preset positions.
In a third aspect, an embodiment provides an electronic device comprising a memory storing a computer program and a processor that, when executing the computer program, performs the method of the first aspect or any optional implementation of the first aspect.
In a fourth aspect, embodiments provide a non-transitory readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any of the alternative implementations of the first aspect.
In a fifth aspect, embodiments provide a computer program product which, when run on a computer, causes the computer to perform the method of any of the alternative implementations of the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be considered as limiting the scope; other related drawings may be obtained from these drawings without inventive effort by a person skilled in the art.
Fig. 1 is a schematic structural diagram of an unmanned aerial vehicle system according to an embodiment of the present application;
FIG. 2 is a first flowchart of a material determination method according to an embodiment of the present disclosure;
FIG. 3 is a second flowchart of a material determination method according to an embodiment of the present disclosure;
FIG. 4 is a third flowchart of a material determination method according to an embodiment of the present disclosure;
FIG. 5 is a fourth flowchart of a material determination method according to an embodiment of the present disclosure;
FIG. 6 is a fifth flowchart of a material determination method according to an embodiment of the present disclosure;
FIG. 7 is a sixth flowchart of a material determination method according to an embodiment of the present disclosure;
FIG. 8 is a seventh flowchart of a material determination method according to an embodiment of the present disclosure;
FIG. 9 is an eighth flowchart of a material determination method according to an embodiment of the present disclosure;
FIG. 10 is a block diagram of a texture determining apparatus according to an embodiment of the present disclosure;
fig. 11 is a block diagram of an electronic device according to an embodiment of the present application.
Reference numerals: 10 - photographing unmanned aerial vehicle; 20 - illuminating unmanned aerial vehicle; 30 - server; 40 - object to be detected; 300 - acquisition module; 302 - generation module; 304 - determination module; 306 - control module; 4 - electronic device; 401 - processor; 402 - memory; 403 - communication bus.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
First embodiment
As shown in fig. 1, an embodiment of the present application provides an unmanned aerial vehicle system comprising a photographing unmanned aerial vehicle 10, illuminating unmanned aerial vehicles 20 and a server 30. The photographing unmanned aerial vehicle 10 is an unmanned aerial vehicle fitted with a photographing device such as a camera or video camera, and a first polarizer is disposed in front of its lens. The illuminating unmanned aerial vehicles 20 are multiple in number; each comprises a light source with adjustable brightness and a second polarizer disposed on the light-emitting surface of the light source, the second polarizers of all illuminating unmanned aerial vehicles 20 are oriented consistently, and the illuminating unmanned aerial vehicles 20 surround the building to be measured and light the object 40 to be detected so as to provide different scene brightnesses. The server 30 may be a terminal device such as a computer with communication, computation and storage functions. The server 30 communicates wirelessly with the photographing unmanned aerial vehicle 10 and the illuminating unmanned aerial vehicles 20, for example via 4G, 5G, Bluetooth or WIFI, to transmit control instructions and acquire data returned by the unmanned aerial vehicles; the server 30 can wirelessly control the hover position of the photographing unmanned aerial vehicle 10 and of each illuminating unmanned aerial vehicle 20, the orientation of the first polarizer of the photographing unmanned aerial vehicle 10 and of the second polarizer of each illuminating unmanned aerial vehicle 20, the shooting of the photographing unmanned aerial vehicle 10, the light source brightness of each illuminating unmanned aerial vehicle 20, and the like.
Second embodiment
Based on the unmanned aerial vehicle system described in the first embodiment, this embodiment of the application provides a material determination method that can be applied to the server in that system. As shown in fig. 2, the method specifically includes the following steps:
step S100: at least one group of scene images shot by the unmanned shooting unmanned aerial vehicle at least one preset position is acquired, and each group of scene images comprises five first images shot at a first preset angle corresponding to the preset position and one second image shot at a second preset angle.
Step S102: and generating a material information diagram corresponding to the preset position according to the five first images and the corresponding second images of each group.
Step S104: and determining the material information corresponding to the object to be detected according to the material information graphs corresponding to all the preset positions.
In step S100, the photographing unmanned aerial vehicle shoots six times at a preset position, obtaining one group of scene images, that is, six images: five first images shot at the first preset angle corresponding to the preset position and one second image shot at the second preset angle. On this basis, the photographing unmanned aerial vehicle can shoot at a plurality of preset positions, thereby obtaining a plurality of groups of scene images. The scene brightnesses of the five first images shot at the first preset angle differ from one another; the scene brightness is determined by the light brightness of the plurality of illuminating unmanned aerial vehicles, and the scene brightness of the second image shot at the second preset angle is consistent with that of one of the five first images shot at the current preset position. The first preset angle means that, on the premise that the second polarizers of all the illuminating unmanned aerial vehicles are oriented consistently, the orientation of the first polarizer of the photographing unmanned aerial vehicle is perpendicular to the orientation of each second polarizer; the second preset angle means that, on the same premise, the orientation of the first polarizer of the photographing unmanned aerial vehicle is parallel to the orientation of each second polarizer. The orientation of a polarizer refers to the direction of the polarizer's wafer surface, or the polarizer's normal direction. A scene image is an image obtained by having the plurality of illuminating unmanned aerial vehicles light the object to be detected and then having the photographing unmanned aerial vehicle photograph the lit object.
It should be noted that when the photographing unmanned aerial vehicle shoots at the first preset angle as described above, that is, with the first polarizer and the second polarizers perpendicular, the properties of the polarizers mean that only the diffusely reflected light from the object's surface can enter the camera to form an image; on this basis, the photographing unmanned aerial vehicle shoots under the five scene brightnesses to form the five first images. When the photographing unmanned aerial vehicle shoots at the second preset angle, that is, with the first polarizer and the second polarizers parallel, both the diffusely reflected light and the specularly reflected light from the object's surface enter the camera to form an image; on this basis, the photographing unmanned aerial vehicle shoots one second image at each position.
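The polarization behaviour described above enables a standard cross-polarization separation: the cross-polarized first image records only diffuse reflection, while the parallel-polarized second image records diffuse plus specular reflection, so their per-pixel difference isolates the specular component. A minimal NumPy sketch follows; the patent uses the image pair to derive the metalness map rather than naming this subtraction explicitly.

```python
import numpy as np

# Standard cross-polarization separation, assuming the two images were shot
# under the same scene brightness and are float arrays in [0, 1].

def split_reflection(cross_img, parallel_img):
    """Return (diffuse, specular) from same-brightness cross/parallel shots."""
    diffuse = cross_img                                        # diffuse only
    specular = np.clip(parallel_img - cross_img, 0.0, None)    # no negative light
    return diffuse, specular
```

This is why the method requires the second image's scene brightness to match one of the five first images: only then does the subtraction compare like with like.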
After executing step S100 to obtain at least one group of scene images shot by the photographing unmanned aerial vehicle at at least one preset position, the server may execute step S102.
In step S102, the server may generate the material information map corresponding to each preset position according to the five first images and the corresponding second image in each group of scene images. The material information map includes a roughness map, a color map and a metalness map, which together reflect the material information; that is, the server generates a roughness map, a color map and a metalness map corresponding to each preset position from each group's five first images and corresponding second image. The roughness map represents the diffuse reflection properties of the object's surface, the color map represents its color properties, and the metalness map represents its specular reflection properties. After generating the material information maps in step S102, the server may execute step S104 to determine the material information corresponding to the object to be detected according to the material information maps corresponding to all the preset positions.
In step S104, the server determines the material information corresponding to the object to be measured according to the material information maps corresponding to all the preset positions. When there is only one preset position, the material information is determined directly from the material information map generated at that position; when there are multiple preset positions, the material information maps corresponding to the individual positions are combined to determine the material information corresponding to the object to be measured.
In the unmanned aerial vehicle system-based material determination method, the server controls the plurality of illuminating unmanned aerial vehicles to illuminate the object to be measured at different scene brightnesses, forming scenes of different brightness; the server controls the photographing unmanned aerial vehicle to shoot these scenes at at least one preset position to obtain at least one group of scene images, generates the material information map corresponding to each preset position according to the five first images and the corresponding second image in each group, and then determines the material information corresponding to the object to be measured according to the material information maps corresponding to all the preset positions.
In an optional implementation of this embodiment, before acquiring at least one set of scene images captured by the photographing unmanned aerial vehicle at at least one preset position in step S100, as shown in fig. 3, the method further includes:
step S90: and controlling the photographing unmanned aerial vehicle to reach a corresponding preset position, and establishing a corresponding preset shape by taking the preset position as a center.
Step S91: the plurality of illuminating unmanned aerial vehicles are controlled to be distributed on the preset shape, wherein the preset shape and the object to be detected form a scene to be shot.
Based on the above steps, step S100 of obtaining at least one group of scene images shot by the photographing unmanned aerial vehicle at at least one preset position may specifically be:
step S1000: and controlling the photographing unmanned aerial vehicle to photograph the corresponding scene to be photographed at each preset position to obtain a corresponding group of scene images.
When there is only one preset position, the server executes steps S90 to S91 only once and then executes step S1000 to obtain one group of scene images; when there are multiple preset positions, the server executes steps S90 to S91 for each preset position to form a scene to be shot at that position, and then executes step S1000 to obtain multiple groups of scene images.
In step S90, a worker may store one or more preset positions in the server; the server controls the photographing unmanned aerial vehicle to fly to the corresponding preset position according to the stored position and maintain stable flight, and then establishes the corresponding preset shape centered on the preset position reached by the photographing unmanned aerial vehicle. For example, taking the current preset position of the photographing unmanned aerial vehicle as the center, a circle of radius r can be defined in the plane that passes through this center and is perpendicular to the orientation of the first polarizer of the photographing unmanned aerial vehicle, such that the circle surrounds the object to be measured; this circle of radius r is the preset shape corresponding to the preset position. After the server performs step S90 to establish the corresponding preset shape centered on the preset position, step S91 may be performed.
In step S91, the server controls the plurality of illuminating unmanned aerial vehicles to be distributed on the established preset shape; for example, the server controls them to be distributed on the circumference of the established circle of radius r. Since the circle surrounds the object to be measured, the illuminating unmanned aerial vehicles on the preset shape can effectively illuminate the object to form the scene to be shot. After the server executes step S91 to distribute the plurality of illuminating unmanned aerial vehicles on the preset shape, step S1000 may be executed to control the photographing unmanned aerial vehicle to shoot the corresponding scene at each preset position to obtain a corresponding set of scene images.
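The geometry of steps S90 and S91 (a circle of radius r centered on the photographing unmanned aerial vehicle, lying in the plane perpendicular to its first polarizer) can be sketched as follows. This is an illustrative sketch only: the function name, the even angular spacing of the drones, and the helper-vector construction are assumptions not stated in the text.

```python
import numpy as np

def lighting_positions(center, polarizer_axis, r, n_drones):
    """Place n_drones evenly on a circle of radius r around `center`,
    in the plane whose normal is the camera's polarizer axis."""
    normal = np.asarray(polarizer_axis, dtype=float)
    normal /= np.linalg.norm(normal)
    # Build two orthonormal vectors spanning the plane perpendicular to `normal`.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(normal @ helper) > 0.9:          # avoid a near-parallel helper vector
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(normal, helper)
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    angles = np.linspace(0.0, 2.0 * np.pi, n_drones, endpoint=False)
    return [np.asarray(center) + r * (np.cos(a) * u + np.sin(a) * v)
            for a in angles]
```

Each returned point lies at distance r from the center and in the plane perpendicular to the polarizer axis, matching the circle described above.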
In an optional implementation manner of this embodiment, step S1000 controls the photographing unmanned aerial vehicle to photograph the corresponding scene to be photographed at each preset position to obtain a corresponding set of scene images, as shown in fig. 4, specifically, the following steps may be executed for each preset position:
step S1001: and controlling the orientations of the second polaroids of all the illuminating unmanned aerial vehicles to be consistent, and controlling the orientations of the first polaroids of the photographing unmanned aerial vehicle and the second polaroids of each illuminating unmanned aerial vehicle to be mutually perpendicular to form a first preset angle.
Step S1002: and controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under the five different scene brightnesses to obtain a group of five first images.
Step S1003: and controlling the orientation of the first polaroid of the photographing unmanned aerial vehicle and the second polaroid of each illuminating unmanned aerial vehicle to be parallel to each other so as to form a second preset angle.
Step S1004: and controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed to obtain second images corresponding to the five first images.
When only one preset position exists, a group of scene images can be obtained by executing the steps; when there are a plurality of preset positions, the above steps S1001 to S1004 are performed for each preset position to obtain a plurality of sets of scene images.
In step S1001, at each preset position, the server controls the orientations of the second polarizers of all the illuminating unmanned aerial vehicles to be consistent, and controls the orientation of the first polarizer of the photographing unmanned aerial vehicle to be perpendicular to the orientations of the second polarizers of the illuminating unmanned aerial vehicles so as to form the first preset angle. Step S1002 may be performed after the first preset angle is formed.
In step S1002, the server may control the photographing unmanned aerial vehicle to shoot the scene to be photographed under five different scene brightnesses to obtain the five first images in the set of scene images corresponding to the current preset position, where the different scene brightnesses may be obtained by the server adjusting the light brightness of all the illuminating unmanned aerial vehicles. After the five first images at the current preset position are obtained, step S1003 may be performed.
In step S1003, the server keeps the position of the photographing unmanned aerial vehicle unchanged but rotates its first polarizer so that its orientation becomes consistent with the orientation of the second polarizer of each illuminating unmanned aerial vehicle, that is, the polarizers are parallel to each other, forming the second preset angle. After the second preset angle is formed, the server may execute step S1004 to control the photographing unmanned aerial vehicle to shoot the scene and obtain the second image corresponding to the five first images at the current position, where the scene brightness of the second image may be the same as that of any one of the five first images.
At a given preset position, the server obtains a set of scene images by executing steps S1001 to S1004. When there are multiple preset positions, the server re-executes step S90 to control the photographing unmanned aerial vehicle to reach another preset position, and then obtains a set of scene images at that position through steps S91 and S1001 to S1004, thereby obtaining multiple sets of scene images.
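The control flow of steps S90 through S1004 can be summarized in code. The drone-control API used here (the controller object and its method names) is entirely hypothetical, invented for illustration; a minimal fake controller is included so the sketch is self-contained.

```python
class FakeDroneController:
    """Minimal stand-in for the real server/drone API (hypothetical names)."""
    def __init__(self):
        self.log = []
        self.shot = 0
    def move_camera_to(self, pos): self.log.append(("move", pos))
    def arrange_lights(self, pos): self.log.append(("lights", pos))
    def set_polarizers(self, mode): self.log.append(("polarizer", mode))
    def set_scene_brightness(self, level): self.log.append(("brightness", level))
    def capture(self):
        self.shot += 1
        return f"image_{self.shot}"

def capture_scene_images(ctl, preset_positions, brightness_levels):
    # One group per preset position: five "first" images (crossed polarizers,
    # one per brightness level) plus one "second" image (parallel polarizers).
    groups = []
    for pos in preset_positions:
        ctl.move_camera_to(pos)              # step S90: fly to preset position
        ctl.arrange_lights(pos)              # step S91: lights on the circle
        ctl.set_polarizers("perpendicular")  # step S1001: crossed, diffuse only
        firsts = []
        for level in brightness_levels:      # step S1002: five brightnesses
            ctl.set_scene_brightness(level)
            firsts.append(ctl.capture())
        ctl.set_polarizers("parallel")       # step S1003: diffuse + specular
        second = ctl.capture()               # step S1004: one second image
        groups.append((firsts, second))
    return groups
```

With two preset positions and five brightness levels, this yields two groups of six images each, matching the five-plus-one structure described above.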
In an alternative implementation of this embodiment, the five first images obtained in step S1002 include a first scene brightness image, a second scene brightness image, a third scene brightness image, a fourth scene brightness image and a fifth scene brightness image; on this basis, as shown in fig. 5, step S1002 may specifically comprise the following steps:
step S10020: and adjusting the light source brightness of all the illuminating unmanned aerial vehicles to form first scene brightness.
Step S10021: and controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under the first scene brightness to obtain a first scene brightness image.
Step S10022: and adjusting the light source brightness of all the illuminating unmanned aerial vehicles to form second scene brightness.
Step S10023: and controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under the second scene brightness to obtain a second scene brightness image.
Step S10024: and adjusting the light source brightness of all the illuminating unmanned aerial vehicles to form third scene brightness.
Step S10025: and controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under the third scene brightness to obtain a third scene brightness image.
Step S10026: and adjusting the light source brightness of all the illuminating unmanned aerial vehicles to form fourth scene brightness.
Step S10027: and controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under the fourth scene brightness to obtain a fourth scene brightness image.
Step S10028: and adjusting the light source brightness of all the illuminating unmanned aerial vehicles to form fifth scene brightness.
Step S10029: And controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under the fifth scene brightness to obtain a fifth scene brightness image.
In steps S10020 to S10029, the server adjusts the scene brightness five times, and after each adjustment the photographing unmanned aerial vehicle can be controlled to take one photograph to obtain the corresponding scene brightness image. The first through fifth scene brightnesses may increase sequentially, decrease sequentially, or vary irregularly; it is only required that the five scene brightnesses differ from one another.
As shown in fig. 6, step S10020 of adjusting the light source brightness of all the illuminating unmanned aerial vehicles to form the first scene brightness may specifically be as follows:
step S100200: and controlling the light sources of all the illuminating unmanned aerial vehicles to be extinguished to form first scene brightness.
Step S10022 of adjusting the light source brightness of all the illuminating unmanned aerial vehicles to form the second scene brightness may specifically be:
step S100220: and taking the position of the unmanned aerial vehicle as an origin, taking the orientation of a first polaroid of the unmanned aerial vehicle as a y axis, taking the direction perpendicular to the y axis direction as an x axis, establishing a coordinate system, calculating a first brightness value corresponding to each unmanned aerial vehicle according to the transverse position coordinate of each unmanned aerial vehicle, and regulating the brightness of the corresponding unmanned aerial vehicle according to the first brightness value of each unmanned aerial vehicle to form second scene brightness.
Step S10024 of adjusting the light source brightness of all the illuminating unmanned aerial vehicles to form the third scene brightness may specifically be:
step S100240: and calculating a second brightness value corresponding to each unmanned aerial vehicle according to the ordinate of each unmanned aerial vehicle, and adjusting the brightness of the corresponding unmanned aerial vehicle according to the second brightness value of each unmanned aerial vehicle to form third scene brightness.
Step S10026 of adjusting the light source brightness of all the illuminating unmanned aerial vehicles to form the fourth scene brightness may specifically be:
step S100260: and calculating a third brightness value corresponding to each unmanned aerial vehicle according to the abscissa and the ordinate of each unmanned aerial vehicle, and adjusting the brightness of the corresponding unmanned aerial vehicle according to the third brightness value of each unmanned aerial vehicle to form fourth scene brightness.
Step S10028 of adjusting the light source brightness of all the illuminating unmanned aerial vehicles to form the fifth scene brightness may specifically be:
step S100280: and adjusting the brightness values of the light sources of all the illuminating unmanned aerial vehicles to be maximum to form fifth scene brightness.
In one implementation of the brightness adjustment designed above, the first scene brightness image is an image shot with the lights of all the illuminating unmanned aerial vehicles off, and the fifth scene brightness image is an image shot with the lights of all the illuminating unmanned aerial vehicles at maximum brightness.
In steps S100220, S100240 and S100260, the second, third and fourth scene brightnesses are related to the position of each illuminating unmanned aerial vehicle. In the above steps, a coordinate system is established with the position of the photographing unmanned aerial vehicle as the origin; the y-axis may be taken along the orientation of the first polarizer of the photographing unmanned aerial vehicle, with the x-axis perpendicular to it. Alternatively, any other direction may be taken as the y-axis, with the perpendicular direction as the x-axis, to establish the coordinate system.
When the preset shape is circular, on the basis of the established coordinate system, the second scene brightness is formed from first brightness values calculated from each illuminating unmanned aerial vehicle's horizontal position coordinate using a first preset brightness formula, the third scene brightness from second brightness values calculated from the vertical position coordinate using a second preset brightness formula, and the fourth scene brightness from third brightness values calculated from both coordinates using a third preset brightness formula. In these formulas, x is the abscissa of the illuminating unmanned aerial vehicle in the designed coordinate system, y is its ordinate, r is the radius of the circular preset shape, and maxI is the maximum brightness value of the illuminating unmanned aerial vehicle.
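The three preset brightness formulas are rendered as embedded images in this text and cannot be recovered exactly, so the sketch below substitutes simple linear ramps across the circle as a stated assumption; only the variables x, y, r and maxI are taken from the description above.

```python
def gradient_brightness(x, y, r, max_i, axis):
    """Illustrative brightness values for the second/third/fourth scene
    brightnesses. The exact patent formulas are not reproduced here; a
    linear ramp over the circle of radius r is ASSUMED instead.
    axis: 'x' (second scene), 'y' (third scene), 'xy' (fourth scene)."""
    if axis == "x":          # horizontal ramp: 0 at x = -r, max_i at x = +r
        t = (x + r) / (2 * r)
    elif axis == "y":        # vertical ramp: 0 at y = -r, max_i at y = +r
        t = (y + r) / (2 * r)
    else:                    # diagonal ramp combining both coordinates
        t = (x + y + 2 * r) / (4 * r)
    return max_i * min(max(t, 0.0), 1.0)   # clamp into [0, max_i]
```

Whatever the exact formulas, the point is the same: each illuminating drone's brightness is a function of its coordinates on the circle, so a brightness gradient is cast across the scene.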
In this way, three further scene brightnesses can be produced, yielding the second, third and fourth scene brightness images. In realizing each of these scene brightnesses, each illuminating unmanned aerial vehicle receives a different brightness value because its position is different, so the brightness value of an illuminating unmanned aerial vehicle varies with its distance from the photographing unmanned aerial vehicle; for example, a distant illuminating unmanned aerial vehicle shines more strongly and a nearby one more weakly, so that the illumination cast on the object to be measured by all the illuminating unmanned aerial vehicles remains consistent and the brightness within a single shot is uniform.
In an alternative implementation of this embodiment, as described above, the material information map includes a color map, a roughness map and a metalness map. Step S102 of generating a material information map corresponding to a preset position according to the five first images and the corresponding second image of each group, as shown in fig. 7, may specifically comprise the following steps:
step S1020: and generating a corresponding color map according to the fifth scene brightness image and the first scene brightness image of each group.
Step S1021: and generating a corresponding normal map according to the first scene brightness image, the second scene brightness image, the third scene brightness image and the fourth scene brightness image of each group.
Step S1022: and generating a corresponding roughness map according to the fifth scene brightness image of each group and the corresponding normal map.
Step S1023: and generating a corresponding metalness map according to the second image of each group, the first image with the same brightness as the second image and the corresponding normal map.
In the foregoing step S1020, the corresponding color map is generated from the fifth scene brightness image and the first scene brightness image, where the fifth scene brightness image is shot with the lights of all the illuminating unmanned aerial vehicles at maximum brightness and the first scene brightness image is shot with all lights off. Specifically, for any pixel position shared by the fifth and first scene brightness images of the same group, the pixel value at that position in the first scene brightness image is subtracted from the pixel value at the same position in the fifth scene brightness image to obtain the pixel value at that position in the corresponding color map; performing this operation at every position yields the pixel values of the whole color map for that group.
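The per-pixel subtraction of step S1020 is straightforward with NumPy. The clipping back into the 8-bit range is an implementation assumption not spelled out in the text.

```python
import numpy as np

def color_map(fifth_img, first_img):
    """Color map = (max-brightness image) - (lights-off image), per pixel.
    Subtracting the lights-off shot removes ambient light, leaving the
    object's color response to the drones' illumination."""
    # Widen the dtype first so the subtraction cannot wrap around.
    diff = fifth_img.astype(np.int16) - first_img.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)   # ASSUMED 8-bit clamp
```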
In step S1021, since the first scene brightness image is shot with the lights of all the illuminating unmanned aerial vehicles off, a corresponding normal map is generated from the first, second, third and fourth scene brightness images; as shown in fig. 8, this specifically comprises the following steps:
step S10210: and carrying out gray scale conversion on the first scene brightness image, the second scene brightness image, the third scene brightness image and the fourth scene brightness image of each group to obtain a corresponding first gray scale image, a corresponding second gray scale image, a corresponding third gray scale image and a corresponding fourth gray scale image.
Step S10211: the R channel value of each pixel point in the second gray level image is subtracted from the R channel value of the corresponding pixel point in the first gray level image to obtain the R channel value of each pixel point in the normal map, the G channel value of each pixel point in the third gray level image is subtracted from the G channel value of the corresponding pixel point in the first gray level image to obtain the G channel value of each pixel point in the normal map, and the B channel value of each pixel point in the fourth gray level image is subtracted from the B channel value of the corresponding pixel point in the first gray level image to obtain the B channel value of each pixel point in the normal map.
Step S10212: and generating each group of corresponding normal maps according to the R channel value of each pixel point, the G channel value of each pixel point and the B channel value of each pixel point in each group of normal maps.
In the steps designed above, the server converts the first, second, third and fourth scene brightness images to grayscale and then performs step S10211 to combine them into a new image: the R channel value of each pixel in the normal map is obtained by subtracting the R channel value of the corresponding pixel in the first grayscale image from that in the second grayscale image, the G channel value by subtracting the G channel value in the first grayscale image from that in the third grayscale image, and the B channel value by subtracting the B channel value in the first grayscale image from that in the fourth grayscale image, thereby generating the corresponding normal map.
Specifically, let the first, second, third and fourth grayscale images be m1, m2, m3 and m4, and let the grayscale values at any given position in these images be v1, v2, v3 and v4 respectively. A vector (v2-v1, v3-v1, v4-v1) can then be constructed; this vector is the normal direction at that pixel, and the vectors of all pixels can be stored in a 3-channel picture, that is, a normal map.
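The vector construction above can be sketched directly with NumPy. The final normalization to unit length is an added assumption (the text stores the raw differences), included because the later roughness and metalness steps need per-pixel directions to compute angles against.

```python
import numpy as np

def normal_map(g1, g2, g3, g4):
    """Build the per-pixel normal (v2-v1, v3-v1, v4-v1) as a 3-channel
    array, where g1..g4 are the grayscale first..fourth scene brightness
    images (2-D arrays of equal shape)."""
    v1, v2, v3, v4 = (g.astype(np.float32) for g in (g1, g2, g3, g4))
    n = np.stack([v2 - v1, v3 - v1, v4 - v1], axis=-1)
    # ASSUMED: normalize so each pixel holds a direction, not a magnitude.
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.where(norm == 0, 1, norm)
```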
On the basis of generating the normal map by the above steps, step S1022 generates a corresponding roughness map from the fifth scene brightness image and the corresponding normal map of each group. Specifically, the included angle θ between the normal at each pixel of the normal map and the unit normal may be calculated, the fifth scene brightness image converted into a fifth grayscale image, and each pixel of the fifth grayscale image divided by tan θ to generate the roughness map.
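A sketch of the roughness construction follows. The text does not pin down which direction the "unit normal" refers to, so the viewing axis (0, 0, 1) is assumed here as the reference; the division-by-tan θ step itself is taken from the description above.

```python
import numpy as np

def roughness_map(gray5, normals, view=(0.0, 0.0, 1.0)):
    """Divide the brightest-scene grayscale image by tan(theta) per pixel,
    where theta is the angle between the pixel's unit normal and an
    ASSUMED reference direction `view` (default: the viewing axis)."""
    view = np.asarray(view, dtype=np.float32)
    cos_t = np.clip(normals @ view, 1e-6, 1.0)            # per-pixel cos(theta)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    tan_t = np.where(sin_t < 1e-6, 1e-6, sin_t / cos_t)   # avoid divide-by-zero
    return gray5.astype(np.float32) / tan_t
```

For a pixel whose normal is tilted 45 degrees from the reference, tan θ = 1 and the roughness value equals the grayscale value unchanged.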
On the basis of generating the normal map by the above steps, step S1023 generates a corresponding metalness map from each group's second image, the first image with the same brightness as the second image, and the corresponding normal map. Specifically, the pixel value at each position of the first image with the same brightness is subtracted from the pixel value at the corresponding position of the second image; the resulting difference image is converted to grayscale, and each pixel of this grayscale image is divided by tan θ to generate the metalness map. When the first image is shot, the camera polarizer and the light-source polarizers are perpendicular, so only the diffusely reflected light from the object surface can enter the camera; when the second image is shot, the polarizers are parallel, so the image contains both the diffusely and specularly reflected light. Subtracting the first image of the same brightness from the second image therefore leaves only the specular reflection of the object surface, from which, together with the normal map, the metalness map representing the specular reflection properties of the surface of the object to be measured is generated.
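The metalness construction can be sketched in the same style. Using the channel mean as the grayscale conversion and passing tan θ in precomputed form (as in the roughness sketch) are assumptions; the subtraction isolating the specular component follows the description above.

```python
import numpy as np

def metalness_map(second_img, first_img_same_brightness, tan_theta):
    """(parallel-polarizer shot) - (crossed-polarizer shot at the same
    brightness) isolates the specular reflection; the result is converted
    to grayscale and divided per pixel by tan(theta)."""
    spec = (second_img.astype(np.float32)
            - first_img_same_brightness.astype(np.float32))
    spec = np.clip(spec, 0, None)        # specular component cannot be negative
    if spec.ndim == 3:                   # ASSUMED grayscale: mean over channels
        spec = spec.mean(axis=-1)
    return spec / np.maximum(tan_theta, 1e-6)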
In an alternative implementation manner of the present embodiment, based on the foregoing, step S104 determines material information corresponding to the object to be measured according to a plurality of material information graphs corresponding to all preset positions, as shown in fig. 9, and specifically may be the following steps:
step S1040: modeling is carried out according to all the color maps to generate a model of the object to be tested, and color information of the model is calculated.
Step S1042: and calculating roughness information according to all the roughness maps and models.
Step S1044: and calculating the metalness information according to all the metalness maps and the models to determine the material information corresponding to the object to be measured.
In the above steps, after obtaining the color maps, roughness maps and metalness maps corresponding to all preset positions, the server performs model reconstruction using all the color maps to generate a reconstructed three-dimensional model and its actual color information (color map), then uses the reconstructed model together with all the roughness maps to calculate the actual roughness information (roughness map) of the object to be measured, and uses all the metalness maps together with the reconstructed model to calculate its actual metalness information (metalness map).
Third embodiment
Fig. 10 is a schematic block diagram of a material determining apparatus based on an unmanned aerial vehicle system according to the present application. It should be understood that the unmanned aerial vehicle system is the one described in the first embodiment, and that the apparatus corresponds to the method embodiments performed by the server in figs. 2 to 9; it can perform the steps of the method performed by the server in the second embodiment, and its specific functions may be referred to in the description above, with detailed descriptions omitted here as appropriate to avoid redundancy. The apparatus includes at least one software functional module that can be stored in memory in the form of software or firmware or built into the operating system (OS) of the apparatus. Specifically, the apparatus comprises: an acquiring module 300 configured to acquire at least one set of scene images shot by the photographing unmanned aerial vehicle at at least one preset position, where each set of scene images includes five first images shot at a first preset angle corresponding to the preset position and one second image shot at a second preset angle, the scene brightnesses of the five first images differ, each scene image contains the object to be measured, the first preset angle is formed when the orientations of the first polarizer and the second polarizers are perpendicular to each other, the second preset angle is formed when they are parallel to each other, and the orientations of the second polarizers of the plurality of illuminating unmanned aerial vehicles are consistent; a generating module 302 configured to generate a material information map corresponding to a preset position according to the five first images and the corresponding second image of each group; and a determining module 304 configured to determine the material information corresponding to the object to be measured according to the material information maps corresponding to all the preset positions.
In the unmanned aerial vehicle system-based material determining apparatus, the server controls the plurality of illuminating unmanned aerial vehicles to illuminate the object to be measured at different scene brightnesses, forming scenes of different brightness; the server controls the photographing unmanned aerial vehicle to shoot these scenes at at least one preset position to obtain at least one group of scene images, generates the material information map corresponding to each preset position according to the five first images and the corresponding second image in each group, and then determines the material information corresponding to the object to be measured according to the material information maps corresponding to all the preset positions.
In an optional implementation manner of this embodiment, the apparatus further includes a control module 306, configured to control the photographing unmanned aerial vehicle to reach a corresponding preset position, establish a corresponding preset distribution shape with the preset position as a center, and control the plurality of lighting unmanned aerial vehicles to be distributed on the preset distribution shape, where the preset distribution shape and the object to be detected form a scene to be photographed; and controlling the photographing unmanned aerial vehicle to photograph the corresponding scene to be photographed at least one preset position to obtain at least one group of scene images.
Fourth embodiment
As shown in fig. 11, the present application provides an electronic device 4, comprising a processor 401 and a memory 402, the processor 401 and the memory 402 being interconnected and communicating with each other through a communication bus 403 and/or another form of connection mechanism (not shown). The memory 402 stores a computer program executable by the processor 401; when the electronic device runs, the processor 401 executes the method in any of the alternative implementations of the second embodiment, such as steps S100 to S104: acquiring at least one group of scene images shot by the photographing unmanned aerial vehicle at at least one preset position, where each group of scene images includes five first images shot at a first preset angle corresponding to the preset position and one second image shot at a second preset angle; generating a material information map corresponding to the preset position according to the five first images and the corresponding second image of each group; and determining the material information corresponding to the object to be measured according to the material information maps corresponding to all the preset positions.
The present application provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any of the alternative implementations of the second embodiment.
The storage medium may be implemented by any type of volatile or nonvolatile Memory device or combination thereof, such as static random access Memory (Static Random Access Memory, SRAM), electrically erasable Programmable Read-Only Memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), erasable Programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), programmable Read-Only Memory (PROM), read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk, or optical disk.
The present application provides a computer program product which, when run on a computer, causes the computer to perform the method in any of the alternative implementations of the second embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division into units is merely a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling or communication connection shown or discussed between components may be an indirect coupling or communication connection through some communication interface, apparatus or unit, and may be electrical, mechanical or in another form.
Further, the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, functional modules in various embodiments of the present application may be integrated together to form a single portion, or each module may exist alone, or two or more modules may be integrated to form a single portion.
It should be noted that the functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The foregoing is merely an exemplary embodiment of the present application and is not intended to limit the scope of the present application; various modifications and variations may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included in the protection scope of the present application.
Claims (4)
1. A material determination method based on an unmanned aerial vehicle system, wherein the unmanned aerial vehicle system comprises a photographing unmanned aerial vehicle, a plurality of illuminating unmanned aerial vehicles, and a server; the photographing unmanned aerial vehicle comprises a camera and a first polarizer arranged on a lens of the camera; each illuminating unmanned aerial vehicle comprises a light source and a second polarizer arranged on a light-emitting surface of the light source; the server is in communication connection with the plurality of illuminating unmanned aerial vehicles and the photographing unmanned aerial vehicle; and the method is applied to the server and comprises:
acquiring at least one group of scene images shot by the photographing unmanned aerial vehicle at at least one preset position, wherein each group of scene images comprises five first images shot at a first preset angle corresponding to the preset position and one second image shot at a second preset angle, the scene brightnesses of the five first images are different from one another, and each scene image contains the object to be detected; the first preset angle is an angle at which the orientations of the first polarizer and the second polarizers are mutually perpendicular, the second preset angle is an angle at which the orientations of the first polarizer and the second polarizers are mutually parallel, and the orientations of the second polarizers of the plurality of illuminating unmanned aerial vehicles are consistent;
generating a material information map corresponding to the preset position according to the five first images and the corresponding second image of each group;
determining the material information corresponding to the object to be detected according to the plurality of material information maps corresponding to all the preset positions;
wherein the five first images comprise a first scene brightness image, a second scene brightness image, a third scene brightness image, a fourth scene brightness image, and a fifth scene brightness image; the light sources of all the illuminating unmanned aerial vehicles are controlled to be extinguished to form a first scene brightness, and the photographing unmanned aerial vehicle is controlled to shoot the scene to be shot under the first scene brightness at each preset position at the first preset angle to obtain the first scene brightness image corresponding to each preset position;
a coordinate system is established by taking the position of the photographing unmanned aerial vehicle as the origin, the orientation of the first polarizer of the photographing unmanned aerial vehicle as the y axis, and the direction perpendicular to the y axis as the x axis; a first brightness value corresponding to each illuminating unmanned aerial vehicle is calculated according to the abscissa of that illuminating unmanned aerial vehicle, and the brightness of the corresponding illuminating unmanned aerial vehicle is adjusted according to its first brightness value to form a second scene brightness; the photographing unmanned aerial vehicle is controlled to shoot the scene to be shot under the second scene brightness at each preset position at the first preset angle to obtain the second scene brightness image corresponding to each preset position;
a second brightness value corresponding to each illuminating unmanned aerial vehicle is calculated according to the ordinate of that illuminating unmanned aerial vehicle, and the brightness of the corresponding illuminating unmanned aerial vehicle is adjusted according to its second brightness value to form a third scene brightness; the photographing unmanned aerial vehicle is controlled to shoot the scene to be shot under the third scene brightness at each preset position at the first preset angle to obtain the third scene brightness image corresponding to each preset position;
a third brightness value corresponding to each illuminating unmanned aerial vehicle is calculated according to the abscissa and the ordinate of that illuminating unmanned aerial vehicle, the brightness of the corresponding illuminating unmanned aerial vehicle is adjusted according to its third brightness value to form a fourth scene brightness, and the photographing unmanned aerial vehicle is controlled to shoot the scene to be shot under the fourth scene brightness at each preset position at the first preset angle to obtain the fourth scene brightness image corresponding to each preset position;
the brightness values of the light sources of all the illuminating unmanned aerial vehicles are adjusted to the maximum to form a fifth scene brightness, and the photographing unmanned aerial vehicle is controlled to shoot the scene to be shot under the fifth scene brightness at each preset position at the first preset angle to obtain the fifth scene brightness image corresponding to each preset position;
wherein the generating a material information map corresponding to the preset position according to the five first images and the corresponding second image of each group comprises:
generating a corresponding color map according to the fifth scene brightness image and the first scene brightness image of each group;
generating a corresponding normal map according to the first scene brightness image, the second scene brightness image, the third scene brightness image and the fourth scene brightness image of each group;
generating a corresponding roughness map according to the fifth scene brightness image of each group and the corresponding normal map;
generating a corresponding metalness map according to the second image of each group, the first image with the same scene brightness as the second image, and the corresponding normal map;
the generating a corresponding normal map according to the first scene luminance image, the second scene luminance image, the third scene luminance image and the fourth scene luminance image of each group includes:
performing gray conversion on the first scene brightness image, the second scene brightness image, the third scene brightness image and the fourth scene brightness image of each group to obtain a corresponding first gray image, a corresponding second gray image, a corresponding third gray image and a corresponding fourth gray image;
subtracting the R channel value of the corresponding pixel point in the first gray image from the R channel value of each pixel point in the second gray image to obtain the R channel value of each pixel point in the normal map; subtracting the G channel value of the corresponding pixel point in the first gray image from the G channel value of each pixel point in the third gray image to obtain the G channel value of each pixel point in the normal map; and subtracting the B channel value of the corresponding pixel point in the first gray image from the B channel value of each pixel point in the fourth gray image to obtain the B channel value of each pixel point in the normal map;
generating each group of corresponding normal maps according to the R channel value of each pixel point, the G channel value of each pixel point and the B channel value of each pixel point in each group of normal maps;
wherein the determining the material information corresponding to the object to be detected according to the plurality of material information maps corresponding to all the preset positions comprises:
modeling according to all the color maps to generate a model of the object to be detected and calculating color information of the model;
calculating roughness information according to all the roughness maps and the model;
and calculating metalness information according to all the metalness maps and the model, so as to determine the material information corresponding to the object to be detected.
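The normal-map construction recited in claim 1 (grayscale conversion followed by per-channel subtraction of the unlit image from the three directionally lit images) can be sketched as follows. This is a minimal illustration over float RGB arrays; `to_gray` and `normal_map` are hypothetical helper names, not identifiers from the patent:

```python
import numpy as np

def to_gray(img):
    """Convert an H x W x 3 RGB image to a single-channel grayscale array
    (simple channel mean; the patent does not specify the gray conversion)."""
    return img.mean(axis=2)

def normal_map(first, second, third, fourth):
    """Each channel of the normal map is the per-pixel difference between
    one gradient-lit grayscale image and the unlit (first) grayscale image."""
    g1, g2, g3, g4 = (to_gray(i) for i in (first, second, third, fourth))
    r = g2 - g1  # x-gradient lighting minus ambient -> R channel
    g = g3 - g1  # y-gradient lighting minus ambient -> G channel
    b = g4 - g1  # combined-gradient lighting minus ambient -> B channel
    return np.stack([r, g, b], axis=2)
```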
2. The method of claim 1, wherein the acquiring at least one group of scene images shot by the photographing unmanned aerial vehicle at at least one preset position comprises:
controlling the photographing unmanned aerial vehicle to shoot the corresponding scene to be shot at each preset position to obtain a corresponding group of scene images, wherein the scene to be shot is formed by the plurality of illuminating unmanned aerial vehicles, distributed in a preset shape built with the photographing unmanned aerial vehicle as the center, together with the object to be detected.
3. The method of claim 2, wherein the controlling the photographing unmanned aerial vehicle to shoot the corresponding scene to be shot at each preset position to obtain a corresponding group of scene images comprises:
controlling the photographing unmanned aerial vehicle to shoot the scene to be shot under the five different scene brightnesses at each preset position at the first preset angle to obtain the five first images corresponding to each preset position;
and controlling the photographing unmanned aerial vehicle to shoot the scene to be shot at each preset position at the second preset angle to obtain the second image corresponding to the five first images.
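Claim 1 recites coordinate-based brightness values (the first from each illuminating drone's abscissa, the second from its ordinate, the third from both) without giving a formula. The sketch below assumes, purely as an illustration, a linear ramp over the normalized coordinate; `schedule_brightness` is a hypothetical name:

```python
import numpy as np

def schedule_brightness(xy, axis, max_brightness=255.0):
    """Hypothetical brightness schedule: map each illuminating drone's
    position in the photographing drone's coordinate system to a
    light-source brightness via a linear ramp (an assumed formula)."""
    xy = np.asarray(xy, dtype=float)  # shape (n_drones, 2): (x, y) per drone
    if axis == "x":        # second scene brightness: abscissa only
        coord = xy[:, 0]
    elif axis == "y":      # third scene brightness: ordinate only
        coord = xy[:, 1]
    else:                  # fourth scene brightness: both coordinates
        coord = xy.sum(axis=1)
    lo, hi = coord.min(), coord.max()
    # Normalize to [0, 1]; all drones equal if they share one coordinate.
    t = (coord - lo) / (hi - lo) if hi > lo else np.zeros_like(coord)
    return t * max_brightness
```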
4. A material determination device based on an unmanned aerial vehicle system, wherein the unmanned aerial vehicle system comprises a photographing unmanned aerial vehicle, a plurality of illuminating unmanned aerial vehicles, and a server; the photographing unmanned aerial vehicle comprises a camera and a first polarizer arranged on a lens of the camera; each illuminating unmanned aerial vehicle comprises a light source and a second polarizer arranged on a light-emitting surface of the light source; the server is in communication connection with the plurality of illuminating unmanned aerial vehicles and the photographing unmanned aerial vehicle; and the device is applied to the server and comprises:
an acquisition module, configured to acquire at least one group of scene images shot by the photographing unmanned aerial vehicle at at least one preset position, wherein each group of scene images comprises five first images shot at a first preset angle corresponding to the preset position and one second image shot at a second preset angle, the scene brightnesses of the five first images are different from one another, and each scene image contains the object to be detected; the first preset angle is an angle at which the orientations of the first polarizer and the second polarizers are mutually perpendicular, the second preset angle is an angle at which the orientations of the first polarizer and the second polarizers are mutually parallel, and the orientations of the second polarizers of the plurality of illuminating unmanned aerial vehicles are consistent;
a generation module, configured to generate a material information map corresponding to the preset position according to the five first images and the corresponding second image of each group;
a determination module, configured to determine the material information corresponding to the object to be detected according to the plurality of material information maps corresponding to all the preset positions;
wherein the five first images comprise a first scene brightness image, a second scene brightness image, a third scene brightness image, a fourth scene brightness image, and a fifth scene brightness image, and the acquisition module is specifically configured to control the light sources of all the illuminating unmanned aerial vehicles to be extinguished to form a first scene brightness, and to control the photographing unmanned aerial vehicle to shoot the scene to be shot under the first scene brightness at each preset position at the first preset angle to obtain the first scene brightness image corresponding to each preset position;
to establish a coordinate system by taking the position of the photographing unmanned aerial vehicle as the origin, the orientation of the first polarizer of the photographing unmanned aerial vehicle as the y axis, and the direction perpendicular to the y axis as the x axis; to calculate a first brightness value corresponding to each illuminating unmanned aerial vehicle according to the abscissa of that illuminating unmanned aerial vehicle, and to adjust the brightness of the corresponding illuminating unmanned aerial vehicle according to its first brightness value to form a second scene brightness; and to control the photographing unmanned aerial vehicle to shoot the scene to be shot under the second scene brightness at each preset position at the first preset angle to obtain the second scene brightness image corresponding to each preset position;
to calculate a second brightness value corresponding to each illuminating unmanned aerial vehicle according to the ordinate of that illuminating unmanned aerial vehicle, and to adjust the brightness of the corresponding illuminating unmanned aerial vehicle according to its second brightness value to form a third scene brightness; and to control the photographing unmanned aerial vehicle to shoot the scene to be shot under the third scene brightness at each preset position at the first preset angle to obtain the third scene brightness image corresponding to each preset position;
to calculate a third brightness value corresponding to each illuminating unmanned aerial vehicle according to the abscissa and the ordinate of that illuminating unmanned aerial vehicle, to adjust the brightness of the corresponding illuminating unmanned aerial vehicle according to its third brightness value to form a fourth scene brightness, and to control the photographing unmanned aerial vehicle to shoot the scene to be shot under the fourth scene brightness at each preset position at the first preset angle to obtain the fourth scene brightness image corresponding to each preset position;
and to adjust the brightness values of the light sources of all the illuminating unmanned aerial vehicles to the maximum to form a fifth scene brightness, and to control the photographing unmanned aerial vehicle to shoot the scene to be shot under the fifth scene brightness at each preset position at the first preset angle to obtain the fifth scene brightness image corresponding to each preset position;
wherein the generation module is specifically configured to generate a corresponding color map according to the fifth scene brightness image and the first scene brightness image of each group; perform gray conversion on the first scene brightness image, the second scene brightness image, the third scene brightness image and the fourth scene brightness image of each group to obtain a corresponding first gray image, second gray image, third gray image and fourth gray image; subtract the R channel value of the corresponding pixel point in the first gray image from the R channel value of each pixel point in the second gray image to obtain the R channel value of each pixel point in the normal map, subtract the G channel value of the corresponding pixel point in the first gray image from the G channel value of each pixel point in the third gray image to obtain the G channel value of each pixel point in the normal map, and subtract the B channel value of the corresponding pixel point in the first gray image from the B channel value of each pixel point in the fourth gray image to obtain the B channel value of each pixel point in the normal map; generate each group's corresponding normal map according to the R channel value, the G channel value and the B channel value of each pixel point in that group's normal map; generate a corresponding roughness map according to the fifth scene brightness image of each group and the corresponding normal map; and generate a corresponding metalness map according to the second image of each group, the first image with the same scene brightness as the second image, and the corresponding normal map;
wherein the determination module is specifically configured to model according to all the color maps to generate a model of the object to be detected and calculate color information of the model; calculate roughness information according to all the roughness maps and the model; and calculate metalness information according to all the metalness maps and the model, so as to determine the material information corresponding to the object to be detected.
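The color-map step in claims 1 and 4 names its inputs (the fully lit fifth scene brightness image and the unlit first scene brightness image) but not the operation between them. One plausible reading, shown here purely as an assumption with a hypothetical `color_map` name, subtracts the ambient-only image from the fully lit one so that only the drone-lit appearance of the object remains:

```python
import numpy as np

def color_map(fifth, first):
    """Assumed color (albedo) map: fully lit image minus unlit ambient
    image, clipped to the valid 8-bit range. The patent specifies only
    the two input images, not this exact formula."""
    return np.clip(fifth.astype(float) - first.astype(float), 0.0, 255.0)
```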
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010503045.6A CN113758918B (en) | 2020-06-04 | 2020-06-04 | Unmanned aerial vehicle system-based material determination method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113758918A CN113758918A (en) | 2021-12-07 |
CN113758918B true CN113758918B (en) | 2024-02-27 |
Family
ID=78783866
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010503045.6A Active CN113758918B (en) | 2020-06-04 | 2020-06-04 | Unmanned aerial vehicle system-based material determination method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113758918B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116148265A (en) * | 2023-02-14 | 2023-05-23 | 浙江迈沐智能科技有限公司 | Flaw analysis method and system based on synthetic leather high-quality image acquisition |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2420866A1 (en) * | 2000-08-28 | 2002-03-07 | Cognitens, Ltd. | Accurately aligning images in digital imaging systems by matching points in the images |
GB0616685D0 (en) * | 2006-08-23 | 2006-10-04 | Warwick Warp Ltd | Retrospective shading approximation from 2D and 3D imagery |
DE102012104900A1 (en) * | 2011-06-06 | 2012-12-06 | Seereal Technologies S.A. | Method and apparatus for layering thin volume lattice stacks and beam combiner for a holographic display |
CN107514993A (en) * | 2017-09-25 | 2017-12-26 | 同济大学 | The collecting method and system towards single building modeling based on unmanned plane |
CN107888840A (en) * | 2017-10-30 | 2018-04-06 | 广东欧珀移动通信有限公司 | High-dynamic-range image acquisition method and device |
US9984455B1 (en) * | 2017-06-05 | 2018-05-29 | Hana Resources, Inc. | Organism growth prediction system using drone-captured images |
WO2018214077A1 (en) * | 2017-05-24 | 2018-11-29 | 深圳市大疆创新科技有限公司 | Photographing method and apparatus, and image processing method and apparatus |
WO2019127402A1 (en) * | 2017-12-29 | 2019-07-04 | 深圳市大疆创新科技有限公司 | Synthesizing method of spherical panoramic image, uav system, uav, terminal and control method thereof |
CN110581956A (en) * | 2019-08-26 | 2019-12-17 | Oppo广东移动通信有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
CN110807833A (en) * | 2019-11-04 | 2020-02-18 | 成都数字天空科技有限公司 | Mesh topology obtaining method and device, electronic equipment and storage medium |
EP3611665A1 (en) * | 2018-08-17 | 2020-02-19 | Siemens Aktiengesellschaft | Mapping images to the synthetic domain |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CL2008003661A1 (en) * | 2008-12-10 | 2010-12-10 | Aplik S A | Method and device for quantitatively determining the surface optical characteristics of a reference object composed of a plurality of optically differentiable layers. |
FR3037177B1 (en) * | 2015-06-08 | 2018-06-01 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | IMAGE PROCESSING METHOD WITH SPECULARITIES |
US10114467B2 (en) * | 2015-11-30 | 2018-10-30 | Photopotech LLC | Systems and methods for processing image information |
US10778877B2 (en) * | 2015-11-30 | 2020-09-15 | Photopotech LLC | Image-capture device |
US10706621B2 (en) * | 2015-11-30 | 2020-07-07 | Photopotech LLC | Systems and methods for processing image information |
US10727685B2 (en) * | 2017-01-27 | 2020-07-28 | Otoy, Inc. | Drone-based VR/AR device recharging system |
- 2020-06-04: CN application CN202010503045.6A filed; patent CN113758918B active
Non-Patent Citations (2)
Title |
---|
A Review of Recent Advances in Surface Defect Detection using Texture Analysis Techniques; Xianghua Xie; Electronic Letters on Computer Vision and Image Analysis; Vol. 7, No. 3; pp. 1-22 *
Material Identification and Repair Method for Cable Shielding Wires of Substation Terminal Boxes; Zhang Zhouquan et al.; Industrial Engineering; pp. 59-63 *
Also Published As
Publication number | Publication date |
---|---|
CN113758918A (en) | 2021-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10554896B2 (en) | Stereoscopic imaging using mobile computing devices having front-facing and rear-facing cameras | |
CN110572630B (en) | Three-dimensional image shooting system, method, device, equipment and storage medium | |
CN107370951B (en) | Image processing system and method | |
CN108364292B (en) | Illumination estimation method based on multiple visual angle images | |
CN207766424U (en) | A kind of filming apparatus and imaging device | |
CN114830030A (en) | System and method for capturing and generating panoramic three-dimensional images | |
CN114697623B (en) | Projection plane selection and projection image correction method, device, projector and medium | |
JPWO2013005244A1 (en) | Three-dimensional relative coordinate measuring apparatus and method | |
WO2019029573A1 (en) | Image blurring method, computer-readable storage medium and computer device | |
KR20190095795A (en) | Apparatus and method for estimating optical image stabilization motion | |
CN113965679B (en) | Depth map acquisition method, structured light camera, electronic device, and storage medium | |
WO2023213311A1 (en) | Capsule endoscope, and distance measurement method and device for camera system | |
CN113758918B (en) | Unmanned aerial vehicle system-based material determination method and device | |
JP2017072499A (en) | Processor, processing system, imaging apparatus, processing method, program, and recording medium | |
US10281396B2 (en) | Method and system for simultaneously measuring surface normal vector and surface reflectance function in microscale | |
CN209991983U (en) | Obstacle detection equipment and unmanned aerial vehicle | |
CN112335228A (en) | Image processing method, image acquisition device, movable platform and storage medium | |
JP6575999B2 (en) | Lighting information acquisition device, lighting restoration device, and programs thereof | |
CN113545028A (en) | Gain control for face authentication | |
CN111160233B (en) | Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance | |
CN108010071B (en) | System and method for measuring brightness distribution by using 3D depth measurement | |
CN109427089B (en) | Mixed reality object presentation based on ambient lighting conditions | |
CN111597963B (en) | Light supplementing method, system and medium for face in image and electronic equipment | |
CN111105365B (en) | Color correction method, medium, terminal and device for texture image | |
CN109741384B (en) | Multi-distance detection device and method for depth camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||