CN113758918A - Material determination method and device based on unmanned aerial vehicle system

Material determination method and device based on unmanned aerial vehicle system

Info

Publication number
CN113758918A
Authority
CN
China
Prior art keywords
scene
unmanned aerial vehicle
brightness
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010503045.6A
Other languages
Chinese (zh)
Other versions
CN113758918B (en)
Inventor
刘宁
唐建波
覃小春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Digital Sky Technology Co., Ltd.
Original Assignee
Chengdu Digital Sky Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Digital Sky Technology Co., Ltd.
Priority to CN202010503045.6A
Publication of CN113758918A
Application granted
Publication of CN113758918B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01N — INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 — Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 — Systems specially adapted for particular applications
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B64 — AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C — AEROPLANES; HELICOPTERS
    • B64C 39/00 — Aircraft not otherwise provided for
    • B64C 39/02 — Aircraft not otherwise provided for characterised by special use
    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01B — MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 — Measuring arrangements characterised by the use of optical techniques
    • G01B 11/30 — Measuring arrangements characterised by the use of optical techniques for measuring roughness or irregularity of surfaces
    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01N — INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 — Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/17 — Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N 21/25 — Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B64 — AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U — UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U 2101/00 — UAVs specially adapted for particular uses or applications
    • B64U 2101/30 — UAVs specially adapted for particular uses or applications for imaging, photography or videography

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application provides a material determination method and device based on an unmanned aerial vehicle system. The method comprises: acquiring at least one group of scene images shot by a photographing drone at at least one preset position, where each group of scene images comprises five first images shot at a first preset angle corresponding to the preset position and one second image shot at a second preset angle; the five first images differ in scene brightness, and each scene image contains the target object to be measured. At the first preset angle the orientations of the first polarizer and the second polarizers are mutually perpendicular; at the second preset angle they are mutually parallel; and the orientations of the second polarizers of the plurality of lighting drones are consistent. A material information map corresponding to each preset position is generated according to the five first images and the corresponding second image of each group, and the material information corresponding to the target object to be measured is determined according to the material information maps corresponding to all the preset positions.

Description

Material determination method and device based on unmanned aerial vehicle system
Technical Field
The application relates to the technical field of material reconstruction, in particular to a material determination method and device based on an unmanned aerial vehicle system.
Background
The traditional method for reconstructing the material of an object's surface generally involves building a light frame, placing the object inside it, and capturing pictures of the object's surface under different illumination in order to inversely compute the material information of the surface.
However, this method generally targets only small objects and is not suitable for large scenes such as large buildings, which limits the application range of material reconstruction; moreover, building a light frame larger than a large scene is difficult and costly.
Disclosure of Invention
An object of the embodiments of the application is to provide a material determination method and device based on an unmanned aerial vehicle system, in order to solve two problems of the existing material reconstruction method: it applies only to small objects and not to large buildings, which limits the application range of material reconstruction; and building a light frame larger than a large scene is difficult and costly.
In a first aspect, an embodiment of the present invention provides a material determination method based on an unmanned aerial vehicle system. The unmanned aerial vehicle system includes a photographing drone, a plurality of lighting drones, and a server; the photographing drone includes a camera and a first polarizer arranged on the lens of the camera; each lighting drone includes a light source and a second polarizer arranged on the light-emitting surface of the light source; and the server is in communication connection with the plurality of lighting drones and the photographing drone. The method is applied to the server and includes: acquiring at least one group of scene images shot by the photographing drone at at least one preset position, where each group of scene images includes five first images shot at a first preset angle corresponding to the preset position and one second image shot at a second preset angle, the scene brightness of the five first images differs, each scene image contains the target object to be measured, the first preset angle means that the orientations of the first polarizer and the second polarizers are mutually perpendicular, the second preset angle means that they are mutually parallel, and the orientations of the second polarizers of the plurality of lighting drones are consistent; generating a material information map corresponding to each preset position according to the five first images and the corresponding second image of each group; and determining the material information corresponding to the target object to be measured according to the material information maps corresponding to all the preset positions.
In the material determination method designed above, the server controls the lighting drones to illuminate the target object to be measured at different scene brightnesses, forming scenes of different brightness, and controls the photographing drone to photograph those scenes at at least one preset position to obtain at least one group of scene images. A material information map corresponding to each preset position is generated from the five first images and the corresponding second image of each group, and the material information of the target object is determined from the material information maps of all preset positions. The scheme uses lighting drones to construct the light frame of a large scene and a photographing drone to photograph the large scene under that light frame, thereby extending material determination to large scenes. This solves the problems that the existing material reconstruction method applies only to small objects and not to large buildings, limiting the application range of material reconstruction, and that building a light frame larger than a large scene is difficult and costly; it thus reduces the difficulty and cost of determining the material of a target object in a large scene.
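For illustration only, the overall server-side flow of the first aspect can be summarized in the following Python sketch; the patent prescribes no implementation, and every function name here (determine_material, acquire_scene_images, generate_material_maps, merge_material_info) is hypothetical.

    # Hypothetical sketch of the claimed server-side flow; none of these
    # names come from the patent itself.

    def determine_material(server, preset_positions):
        material_maps = []
        for position in preset_positions:
            # One group per preset position: five first images (crossed
            # polarizers, five scene brightnesses) plus one second image
            # (parallel polarizers).
            first_images, second_image = server.acquire_scene_images(position)
            material_maps.append(
                generate_material_maps(first_images, second_image))
        # Combine the per-position color/roughness/metalness maps into the
        # material information of the target object.
        return merge_material_info(material_maps)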
In an optional implementation manner of the first aspect, acquiring at least one group of scene images shot by the photographing drone at at least one preset position includes: controlling the photographing drone to photograph the corresponding scene to be shot at each preset position to obtain a corresponding group of scene images, where the scene to be shot is formed by a plurality of lighting drones arranged in a preset shape centered on the photographing drone, together with the target object to be measured.
In an optional implementation manner of the first aspect, controlling the photographing drone to photograph the corresponding scene to be shot at each preset position to obtain a corresponding group of scene images includes: controlling the photographing drone to photograph, at each preset position and at the first preset angle, the scene to be shot under five different scene brightnesses to obtain the five first images corresponding to each preset position; and controlling the photographing drone to photograph the scene to be shot at each preset position at the second preset angle to obtain the second image corresponding to the five first images.
In an optional implementation manner of the first aspect, controlling the photographing drone to photograph, at each preset position and at the first preset angle, the scene to be shot under five different scene brightnesses to obtain the five first images corresponding to each preset position includes: adjusting the brightness of the light sources of all lighting drones to form the first scene brightness, and controlling the photographing drone to photograph the scene to be shot under the first scene brightness at each preset position at the first preset angle to obtain a first scene brightness image corresponding to each preset position; and repeating the same adjust-then-photograph procedure for the second, third, fourth and fifth scene brightnesses to obtain, respectively, a second, third, fourth and fifth scene brightness image corresponding to each preset position.
In an optional implementation of the first aspect, adjusting the brightness of the light sources of all lighting drones to form the first scene brightness comprises: controlling the light sources of all lighting drones to be turned off to form the first scene brightness; and adjusting the brightness of the light sources of all lighting drones to form the fifth scene brightness comprises: adjusting the brightness values of the light sources of all lighting drones to their maximum to form the fifth scene brightness.
In an optional implementation of the first aspect, adjusting the brightness of the light sources of all lighting drones to form the second scene brightness comprises: establishing a coordinate system with the position of the photographing drone as the origin, the orientation of its first polarizer as the y-axis, and the direction perpendicular to the y-axis as the x-axis; calculating a first brightness value for each lighting drone according to its abscissa, and adjusting the brightness of each lighting drone according to its first brightness value to form the second scene brightness. Adjusting the brightness of the light sources of all lighting drones to form the third scene brightness comprises: calculating a second brightness value for each lighting drone according to its ordinate and adjusting the brightness of each lighting drone accordingly to form the third scene brightness. Adjusting the brightness of the light sources of all lighting drones to form the fourth scene brightness comprises: calculating a third brightness value for each lighting drone according to both its abscissa and its ordinate and adjusting the brightness of each lighting drone accordingly to form the fourth scene brightness.
In an optional implementation manner of the first aspect, generating the material information map corresponding to each preset position according to the five first images and the corresponding second image of each group includes: generating a corresponding color map according to the fifth scene brightness image and the first scene brightness image of each group; generating a corresponding normal map according to the first, second, third and fourth scene brightness images of each group; generating a corresponding roughness map according to the fifth scene brightness image of each group and the corresponding normal map; and generating a corresponding metalness map according to the second image of each group, the first image whose scene brightness matches the second image, and the corresponding normal map.
In an optional implementation manner of the first aspect, generating a corresponding normal map according to the first, second, third and fourth scene brightness images of each group includes: performing grayscale conversion on the four images to obtain a corresponding first, second, third and fourth grayscale image; subtracting the R channel value of the corresponding pixel in the first grayscale image from the R channel value of each pixel in the second grayscale image to obtain the R channel value of each pixel of the normal map, subtracting the G channel value of the corresponding pixel in the first grayscale image from the G channel value of each pixel in the third grayscale image to obtain the G channel value of each pixel of the normal map, and subtracting the B channel value of the corresponding pixel in the first grayscale image from the B channel value of each pixel in the fourth grayscale image to obtain the B channel value of each pixel of the normal map; and generating each group's corresponding normal map according to the R, G and B channel values of each pixel of that group's normal map.
In an optional implementation manner of the first aspect, determining the material information corresponding to the target object to be measured according to the material information maps corresponding to all the preset positions includes: modeling according to all the color maps to generate a model of the target object to be measured and calculating the color information of the model; calculating roughness information according to all the roughness maps and the model; and calculating metalness information according to all the metalness maps and the model, so as to determine the material information corresponding to the target object to be measured.
In a second aspect, an embodiment of the present invention provides a material reconstruction device based on an unmanned aerial vehicle system. The unmanned aerial vehicle system includes a photographing drone, a plurality of lighting drones, and a server; the photographing drone includes a camera and a first polarizer arranged on the lens of the camera; each lighting drone includes a light source and a second polarizer arranged on the light-emitting surface of the light source; and the server is in communication connection with the plurality of lighting drones and the photographing drone. The device is applied to the server and includes: an acquisition module, configured to acquire at least one group of scene images shot by the photographing drone at at least one preset position, where each group includes five first images shot at a first preset angle corresponding to the preset position and one second image shot at a second preset angle, the scene brightness of the five first images differs, each scene image contains the target object to be measured, the first preset angle means that the orientations of the first polarizer and the second polarizers are mutually perpendicular, the second preset angle means that they are mutually parallel, and the orientations of the second polarizers of the plurality of lighting drones are consistent; a generating module, configured to generate a material information map corresponding to each preset position according to the five first images and the corresponding second image of each group; and a determining module, configured to determine the material information corresponding to the target object to be measured according to the material information maps corresponding to all the preset positions.
In the material reconstruction device designed above, the server controls the lighting drones to illuminate the target object at different scene brightnesses, controls the photographing drone to photograph the resulting scenes at at least one preset position to obtain at least one group of scene images, generates a material information map for each preset position from the five first images and the corresponding second image of each group, and determines the material information of the target object from the material information maps of all preset positions. As with the method of the first aspect, lighting drones construct the light frame of a large scene and a photographing drone photographs the scene under it, solving the same problems and likewise reducing the difficulty and cost of determining the material of a target object in a large scene.
In a third aspect, an embodiment provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor executes the computer program to perform the method in the first aspect or any optional implementation manner of the first aspect.
In a fourth aspect, embodiments provide a non-transitory readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of the first aspect or any optional implementation manner of the first aspect.
In a fifth aspect, embodiments provide a computer program product, which when run on a computer, causes the computer to execute the method of the first aspect or any optional implementation manner of the first aspect.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be considered limiting of the scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic structural diagram of an unmanned aerial vehicle system provided in an embodiment of the present application;
Fig. 2 is a first flowchart of a material determination method according to an embodiment of the present application;
Fig. 3 is a second flowchart of the material determination method according to an embodiment of the present application;
Fig. 4 is a third flowchart of the material determination method according to an embodiment of the present application;
Fig. 5 is a fourth flowchart of the material determination method according to an embodiment of the present application;
Fig. 6 is a fifth flowchart of the material determination method according to an embodiment of the present application;
Fig. 7 is a sixth flowchart of the material determination method according to an embodiment of the present application;
Fig. 8 is a seventh flowchart of the material determination method according to an embodiment of the present application;
Fig. 9 is an eighth flowchart of the material determination method according to an embodiment of the present application;
Fig. 10 is a structural diagram of a material determination device according to an embodiment of the present application;
Fig. 11 is a structural diagram of an electronic device according to an embodiment of the present application.
Reference numerals: 10 - photographing drone; 20 - lighting drone; 30 - server; 40 - target object to be measured; 300 - acquisition module; 302 - generation module; 304 - determination module; 306 - control module; 4 - electronic device; 401 - processor; 402 - memory; 403 - communication bus.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
First embodiment
As shown in Fig. 1, an embodiment of the present application provides an unmanned aerial vehicle system that includes a photographing drone 10, lighting drones 20 and a server 30. The photographing drone 10 is a drone carrying a photographing device such as a camera or video camera, with a first polarizer arranged in front of the lens. There are multiple lighting drones 20; each includes a light source with adjustable brightness and a second polarizer arranged on the light-emitting surface of the light source, and the orientations of the second polarizers of all lighting drones 20 are kept consistent. The lighting drones 20 surround the building to be measured and illuminate the target object 40 to be measured, so as to provide the building with different scene brightnesses. The server 30 may be a terminal device, such as a computer, with communication, computing and storage functions. It communicates wirelessly with the photographing drone 10 and the lighting drones 20, for example over 4G, 5G, Bluetooth or Wi-Fi, to transmit control commands and receive the data returned by the drones: the server 30 can wirelessly control the flight stop positions of the photographing drone 10 and of each lighting drone 20, the orientations of the first polarizer of the photographing drone 10 and of the second polarizer of each lighting drone 20, the shooting of the photographing drone 10, and the light-source brightness of each lighting drone 20.
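Purely for orientation — the patent specifies no software model — the components just described might be represented as follows; every class and field name in this Python sketch is an assumption.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class PhotoDrone:
        position: Tuple[float, float, float]  # flight stop position, set by the server
        polarizer_angle: float                # orientation of the first polarizer

    @dataclass
    class LightingDrone:
        position: Tuple[float, float, float]
        polarizer_angle: float                # orientation of the second polarizer
        brightness: float = 0.0               # adjustable, 0 .. max_brightness
        max_brightness: float = 1.0

    @dataclass
    class DroneSystem:
        # The server talks to all drones wirelessly (4G, 5G, Bluetooth or Wi-Fi).
        photo_drone: PhotoDrone
        lighting_drones: List[LightingDrone] = field(default_factory=list)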
Second embodiment
Based on the description of the first embodiment, this embodiment of the present application provides a material determination method for the unmanned aerial vehicle system, applied to the server in the unmanned aerial vehicle system of the first embodiment. As shown in Fig. 2, the method may specifically include the following steps:
step S100: the method comprises the steps of obtaining at least one group of scene images shot by a shooting unmanned aerial vehicle at least one preset position, wherein each group of scene images comprises five first images shot at a first preset angle corresponding to the preset position and one second image shot at a second preset angle.
Step S102: and generating a material information graph corresponding to a preset position according to the five first images and the corresponding second images of each group.
Step S104: and determining the material information corresponding to the target object to be detected according to the multiple material information graphs corresponding to all the preset positions.
In step S100, the photographing drone takes six shots at a preset position to obtain one group of scene images, i.e., six images: five first images shot at the first preset angle corresponding to the preset position and one second image shot at the second preset angle. On this basis, the photographing drone can shoot at multiple preset positions to obtain multiple groups of scene images. The scene brightnesses of the five first images shot at the first preset angle differ; scene brightness is determined by the light brightness of the lighting drones. The scene brightness of the second image shot at the second preset angle matches that of one of the five first images shot at the current preset position. The first preset angle means that, with the second polarizers of all lighting drones oriented consistently, the orientation of the first polarizer of the photographing drone is perpendicular to the orientation of each second polarizer; the second preset angle means that, under the same premise, the orientation of the first polarizer is parallel to the orientation of each second polarizer. The orientation of a polarizer here refers to the direction of its polarization axis within the plane of the polarizer. Each scene image thus records the target object to be measured as illuminated by the plurality of lighting drones and photographed by the photographing drone.
It should be noted here that when the photographing drone shoots at the first preset angle, i.e., with the first and second polarizers crossed, the properties of the polarizers ensure that only diffusely reflected light from the object's surface can enter the camera to form an image; on this basis, the photographing drone shoots under the five scene brightnesses to obtain the five first images. When the photographing drone shoots at the second preset angle, i.e., with the first and second polarizers parallel, both diffusely reflected light and specularly reflected light from the object's surface enter the camera to form an image; on this basis, the photographing drone shoots once at a position to obtain one second image.
After the server performs step S100 to obtain at least one group of scene images shot by the photographing drone at at least one preset position, step S102 may be performed.
In step S102, the server may generate the material information map corresponding to each preset position according to the five first images and the corresponding second image in each group of scene images. The material information map comprises a roughness map, a color map and a metalness map, each reflecting material information; that is, the server generates the roughness map, color map and metalness map corresponding to each preset position from the five first images and the corresponding second image of each group. The roughness map characterizes the diffuse reflection property of the object's surface, the color map its color property, and the metalness map its specular reflection property. After step S102 has generated the material information map corresponding to each preset position, step S104 may be performed to determine the material information of the target object according to the material information maps corresponding to all the preset positions.
In step S104, the server determines the material information of the target object to be measured from the material information maps corresponding to all the preset positions: when there is only one preset position, the material information is determined from the material information map generated at that position; when there are multiple preset positions, the material information maps corresponding to all positions are combined to determine the material information of the target object.
In the material determination method designed above, the server controls the lighting drones to illuminate the target object at different scene brightnesses and controls the photographing drone to photograph the resulting scenes at at least one preset position, obtaining at least one group of scene images; a material information map is generated for each preset position from the five first images and the corresponding second image of each group, and the material information of the target object is determined from the maps of all positions. Using lighting drones to construct the light frame of a large scene, and a photographing drone to photograph the large scene under it, extends material determination to large scenes, overcoming both the limitation of the existing method to small objects rather than large buildings and the difficulty and cost of building a physical light frame larger than a large scene; the difficulty and cost of determining the material of a target object in a large scene are thereby reduced.
In an optional implementation manner of this embodiment, before acquiring at least one group of scene images shot by the photographing drone at at least one preset position in step S100, as shown in Fig. 3, the method further includes:
Step S90: controlling the photographing drone to reach the corresponding preset position, and establishing a corresponding preset shape centered on that preset position.
Step S91: controlling the plurality of lighting drones to be distributed on the preset shape, where the preset shape and the target object to be measured form the scene to be shot.
On the basis of the above steps, step S100 of acquiring at least one group of scene images shot by the photographing drone at at least one preset position may specifically be:
Step S1000: controlling the photographing drone to photograph the corresponding scene to be shot at each preset position to obtain a corresponding group of scene images.
When there is only one preset position, the server performs steps S90 to S91 only once and then performs step S1000 to obtain one group of scene images; when there are multiple preset positions, the server performs steps S90 to S91 for each preset position to form the scene to be shot at that position, and then performs step S1000 to obtain multiple groups of scene images.
In step S90, a worker may store one or more preset positions in the server; the server controls the photographing drone to fly to the corresponding preset position according to the stored positions and hold a stable hover, and then establishes the corresponding preset shape centered on the preset position reached by the photographing drone. For example, the current preset position of the photographing drone can be taken as the center of a circle of radius r, defined in the plane that passes through the center and is perpendicular to the orientation of the first polarizer of the photographing drone; the circle of radius r is chosen to surround the target object to be measured and constitutes the preset shape corresponding to the preset position. After the server performs step S90 to establish the corresponding preset shape centered on the preset position, step S91 is performed.
In step S91, the server may control the plurality of lighting drones to be arranged on the preset shape; for example, the server may distribute the lighting drones along the circumference of the circle of radius r. Since this circle surrounds the target object to be measured, the lighting drones on the preset shape can effectively illuminate the target object and thereby form the scene to be shot; a placement sketch is given below. After step S91, step S1000 may be performed to control the photographing drone to photograph the corresponding scene to be shot at each preset position, obtaining a corresponding group of scene images.
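As a minimal sketch of steps S90 and S91, the following Python function computes candidate positions for N lighting drones on the circle of radius r around the photographing drone; the even angular spacing is an assumption, since the embodiment only requires the lighting drones to be distributed on the preset shape.

    import math

    def circle_positions(center, r, n_drones):
        # Place n_drones evenly on the circle of radius r centered on the
        # photographing drone's preset position. The circle lies in the
        # plane perpendicular to the first polarizer's orientation; that
        # plane is taken as horizontal (constant z) here for simplicity.
        cx, cy, cz = center
        return [(cx + r * math.cos(2 * math.pi * k / n_drones),
                 cy + r * math.sin(2 * math.pi * k / n_drones),
                 cz)
                for k in range(n_drones)]

    # Example: 12 lighting drones on a 30 m circle around (0, 0, 50).
    positions = circle_positions((0.0, 0.0, 50.0), 30.0, 12)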
In an optional implementation manner of this embodiment, step S1000 of controlling the photographing drone to photograph the corresponding scene to be shot at each preset position to obtain the corresponding group of scene images may, as shown in Fig. 4, specifically comprise the following steps performed for each preset position:
step S1001: the orientation of the second polaroids of all the lighting unmanned aerial vehicles is controlled to be consistent, and the orientation of the first polaroid of the photographing unmanned aerial vehicle and the orientation of the second polaroid of each lighting unmanned aerial vehicle are controlled to be mutually perpendicular so as to form a first preset angle.
Step S1002: and controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under the five different scene brightnesses to obtain a group of five first images.
Step S1003: the first polaroid of the unmanned aerial vehicle that shoots is controlled and the orientation of the second polaroid of each illumination unmanned aerial vehicle are parallel to each other so as to form a second preset angle.
Step S1004: and controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed to obtain five second images corresponding to the first images.
When there is only one preset position, one group of scene images is obtained by executing the above steps; when there are multiple preset positions, steps S1001 to S1004 are performed for each preset position to obtain multiple groups of scene images.
In step S1001, at each preset position, the server controls the orientations of the second polarizers of all lighting drones to be consistent and controls the orientation of the first polarizer of the photographing drone to be perpendicular to them, forming the first preset angle. Step S1002 is performed after the first preset angle is formed.
In step S1002, the server may control the photographing drone to photograph the scene to be shot under five different scene brightnesses to obtain the five first images of the group corresponding to the current preset position, where the different scene brightnesses are obtained by the server adjusting the light brightness of all the lighting drones. After the five first images at the current preset position are obtained, step S1003 can be executed.
In step S1003, the server keeps the position of the photographing drone unchanged but controls the first polarizer of the photographing drone to rotate, so that its orientation becomes consistent with, i.e., parallel to, the orientation of the second polarizer of each lighting drone, forming the aforementioned second preset angle. After the second preset angle is formed, the server may execute step S1004 to control the photographing drone to shoot the scene to be shot and obtain the second image corresponding to the five first images at the current position; the scene brightness of the second image may be the same as that of any one of the five first images.
At a given preset position, the server obtains the group of scene images for that position by performing steps S1001 to S1004. When there are multiple preset positions, the server re-performs step S90 to control the photographing drone to reach the next preset position and then obtains that position's group of scene images by performing step S91 and steps S1001 to S1004, thereby obtaining multiple groups of scene images.
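The per-position capture sequence of steps S1001 to S1004 can be summarized as the following control-flow sketch; every method on the hypothetical server object is an assumption, not taken from the patent.

    def capture_group(server, position, brightness_settings):
        # Steps S1001-S1004 at one preset position.
        server.align_second_polarizers()             # S1001: consistent orientation
        server.set_first_polarizer_perpendicular()   # S1001: first preset angle
        first_images = []
        for setting in brightness_settings:          # five scene brightnesses
            server.apply_scene_brightness(setting)
            first_images.append(server.shoot(position))   # S1002
        server.set_first_polarizer_parallel()        # S1003: second preset angle
        # S1004: one second image; its scene brightness matches one of the five.
        server.apply_scene_brightness(brightness_settings[-1])
        second_image = server.shoot(position)
        return first_images, second_image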
In an optional implementation manner of this embodiment, the five first images obtained in step S1002 comprise a first scene brightness image, a second scene brightness image, a third scene brightness image, a fourth scene brightness image and a fifth scene brightness image. On this basis, as shown in Fig. 5, step S1002 may specifically comprise the following steps:
step S10020: adjusting the brightness of the light sources of all illuminating drones to form a first scene brightness.
Step S10021: and controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under the first scene brightness to obtain a first scene brightness image.
Step S10022: adjusting the brightness of the light sources of all illuminating drones to form a second scene brightness.
Step S10023: and controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under the second scene brightness to obtain a second scene brightness image.
Step S10024: adjusting the brightness of the light sources of all illuminating drones to form a third scene brightness.
Step S10025: and controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under the third scene brightness to obtain a third scene brightness image.
Step S10026: adjusting the brightness of the light sources of all illuminating drones to form a fourth scene brightness.
Step S10027: and controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under the fourth scene brightness to obtain a fourth scene brightness image.
Step S10028: adjusting the brightness of the light sources of all the illuminating drones to form a fifth scene brightness.
Step S10029: and controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under the fifth scene brightness to obtain a fifth scene brightness image.
In steps S10020 to S10029, the server adjusts the scene brightness five times, and after each adjustment controls the photographing drone to take one shot, obtaining the corresponding scene brightness image. The first through fifth scene brightnesses may increase in order, decrease in order, or follow no particular pattern; it is only required that the five scene brightnesses differ from one another.
As shown in fig. 6, the step S10020 of adjusting the brightness of the light sources of all the lighting drones to form the first scene brightness may specifically be:
step S100200: and controlling the light sources of all the lighting unmanned aerial vehicles to be turned off to form first scene brightness.
Step S10022, adjusting the brightness of the light sources of all the lighting drones to form the second scene brightness, may specifically be:
step S100220: the position of the unmanned aerial vehicle for photographing is used as an original point, the orientation of a first polaroid of the unmanned aerial vehicle for photographing is used as a y axis, a coordinate system is established as an x axis in the direction perpendicular to the y axis, a first brightness value corresponding to each illumination unmanned aerial vehicle is calculated according to the transverse position coordinate of each illumination unmanned aerial vehicle, and the brightness of the corresponding illumination unmanned aerial vehicle is adjusted according to the first brightness value of each illumination unmanned aerial vehicle to form second scene brightness.
Step S10024, adjusting the brightness of the light sources of all the lighting drones to form the third scene brightness specifically may be:
step S100240: and calculating a second brightness value corresponding to each lighting unmanned aerial vehicle according to the ordinate of each lighting unmanned aerial vehicle, and adjusting the brightness of the corresponding lighting unmanned aerial vehicle according to the second brightness value of each lighting unmanned aerial vehicle to form third scene brightness.
Step S10026, adjusting the brightness of the light sources of all the lighting drones to form the fourth scene brightness, may specifically be:
step S100260: and calculating a third brightness value corresponding to each lighting unmanned aerial vehicle according to the abscissa and the ordinate of each lighting unmanned aerial vehicle, and adjusting the brightness of the corresponding lighting unmanned aerial vehicle according to the third brightness value of each lighting unmanned aerial vehicle to form a fourth scene brightness.
Step S10028, adjusting the brightness of the light sources of all the lighting drones to form the fifth scene brightness, may specifically be:
step S100280: and adjusting the brightness values of all the light sources of the lighting unmanned aerial vehicle to be maximum to form fifth scene brightness.
In this implementation of brightness adjustment, the first scene brightness image is shot with the lights of all lighting drones off, and the fifth scene brightness image is shot with the light brightness values of all lighting drones at their maximum.
In steps S100220, S100240 and S100260, the second, third and fourth scene brightnesses are tied to the position of each lighting drone. In these steps a coordinate system is established with the position of the photographing drone as the origin; the y-axis may be taken along the orientation of the first polarizer of the photographing drone, with the x-axis perpendicular to it. Alternatively, the y-axis may be taken in an arbitrary direction, with the x-axis perpendicular to it.
When the preset shape is a circle, on the basis of the established coordinate system, the second scene brightness uses a first brightness value calculated from each lighting drone's abscissa by a first preset brightness formula, the third scene brightness uses a second brightness value calculated from each lighting drone's ordinate by a second preset brightness formula, and the fourth scene brightness uses a third brightness value calculated from each lighting drone's abscissa and ordinate by a third preset brightness formula.
[The first, second and third preset brightness formulas appear in the original only as embedded images (BDA0002524692210000141, BDA0002524692210000142 and BDA0002524692210000143).]
In these formulas, x is the abscissa of the corresponding lighting drone in the established coordinate system, y is its ordinate, r is the radius of the preset circular shape, and maxI is the maximum brightness value of the corresponding lighting drone.
In this way three different scene brightnesses are realized, yielding the second, third and fourth scene brightness images. Because each of these scene brightnesses depends on the position of each lighting drone, each lighting drone is assigned a different brightness value, so a lighting drone's brightness varies with its distance from the photographing drone; for example, a lighting drone far from the target object illuminates it more strongly and a nearby one more weakly, keeping the brightness in the captured image consistent across all lighting drones.
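The three preset brightness formulas survive only as images in the source, so any concrete form is an assumption. A reconstruction consistent with the variables defined above (x, y, r, maxI) and with the linear-gradient illumination this family of methods typically uses is:

    def gradient_brightness(x, y, r, max_i):
        # Assumed linear ramps; NOT the patent's exact formulas, which are
        # present only as images. Each value runs from 0 on one side of the
        # circle to max_i on the other, so a drone's brightness varies with
        # its position relative to the photographing drone.
        second_scene = max_i * (x + r) / (2 * r)          # ramp along x
        third_scene = max_i * (y + r) / (2 * r)           # ramp along y
        fourth_scene = max_i * (x + y + 2 * r) / (4 * r)  # combined ramp
        return second_scene, third_scene, fourth_scene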
In an optional implementation manner of this embodiment, as described above, the material information map comprises a color map, a roughness map and a metalness map. Step S102, generating the material information map corresponding to each preset position according to the five first images and the corresponding second image of each group, may, as shown in Fig. 7, specifically comprise the following steps:
step S1020: and generating a corresponding color map according to the fifth scene brightness image and the first scene brightness image of each group.
Step S1021: and generating corresponding normal map according to the first scene brightness image, the second scene brightness image, the third scene brightness image and the fourth scene brightness image of each group.
Step S1022: and generating a corresponding roughness map according to the fifth scene brightness image of each group and the corresponding normal map.
Step S1023: and generating a corresponding metal degree mapping according to the second image of each group, the first image with the same brightness as the second image and the corresponding normal mapping.
Since the fifth scene brightness image is shot with the brightness values of all lighting drones at their maximum, and the first scene brightness image is shot with all lighting drones off, step S1020 generates the corresponding color map from these two images. Specifically, for any pixel position shared by the fifth and first scene brightness images of the same group, the value of the color map at that position is obtained by subtracting the value of the first scene brightness image at that position from the value of the fifth scene brightness image at the same position; performing this operation at every position of the image yields the values of the color map at all positions, and hence the corresponding color map.
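Pixel-wise, step S1020 reduces to a subtraction. A minimal NumPy sketch, assuming aligned 8-bit RGB arrays of equal shape; the clip back to [0, 255] is an implementation choice not stated in the patent:

    import numpy as np

    def color_map(fifth_img, first_img):
        # Step S1020: fully lit image minus unlit (ambient) image, per pixel.
        # Subtracting the first scene brightness image removes ambient light,
        # leaving only the contribution of the lighting drones.
        diff = fifth_img.astype(np.int16) - first_img.astype(np.int16)
        return np.clip(diff, 0, 255).astype(np.uint8)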
In step S1021, since the first scene brightness image is shot with the lights of all lighting drones off, the corresponding normal map is generated from the first, second, third and fourth scene brightness images. As shown in Fig. 8, the following steps may specifically be performed:
step S10210: and performing gray scale conversion on the first scene brightness image, the second scene brightness image, the third scene brightness image and the fourth scene brightness image of each group to obtain corresponding first gray scale image, second gray scale image, third gray scale image and fourth gray scale image.
Step S10211: subtracting the R channel value of the corresponding pixel point in the first gray image from the R channel value of each pixel point in the second gray image to obtain the R channel value of each pixel point in the normal map, subtracting the G channel value of the corresponding pixel point in the first gray image from the G channel value of each pixel point in the third gray image to obtain the G channel value of each pixel point in the normal map, and subtracting the B channel value of the corresponding pixel point in the first gray image from the B channel value of each pixel point in the fourth gray image to obtain the B channel value of each pixel point in the normal map.
Step S10212: and generating each group of corresponding normal map according to the R channel value of each pixel point, the G channel value of each pixel point and the B channel value of each pixel point in each group of normal map.
In the steps designed above, the server converts the first, second, third and fourth scene brightness images into grayscale images and then executes step S10211 to combine them into a new image: the R channel of the normal map is the second grayscale image minus the first, the G channel is the third grayscale image minus the first, and the B channel is the fourth grayscale image minus the first; the corresponding normal map is then generated from these channel values.
Specifically, let the first, second, third and fourth grayscale images be m1, m2, m3 and m4, and let their values at any given pixel position be v1, v2, v3 and v4 respectively. Then the vector (v2-v1, v3-v1, v4-v1) can be constructed; this vector is the normal direction at that pixel, and the vectors of all pixels can be stored in a 3-channel picture, i.e., the normal map.
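A NumPy sketch of steps S10210 to S10212, assuming m1 to m4 are aligned single-channel grayscale arrays:

    import numpy as np

    def normal_map(m1, m2, m3, m4):
        # Per-pixel vector (v2-v1, v3-v1, v4-v1) stored as a 3-channel
        # picture. Signed floats avoid uint8 wrap-around; whether or how
        # to normalize the vectors is not stated in the patent.
        g1, g2, g3, g4 = (m.astype(np.float32) for m in (m1, m2, m3, m4))
        return np.stack([g2 - g1, g3 - g1, g4 - g1], axis=-1)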
On the basis of the normal map generated by the above steps, step S1022 generates the corresponding roughness map from each group's fifth scene brightness image and the corresponding normal map. Specifically, the included angle θ between each pixel's normal on the normal map and the unit normal is calculated, the fifth scene brightness image is converted into a fifth grayscale image, and each pixel of the fifth grayscale image is divided by tan θ to generate the roughness map.
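A sketch of step S1022 under one reading: θ is the included angle between each per-pixel normal and the unit normal, taken here as (0, 0, 1) — an assumption, since the patent does not define the unit normal — and the epsilon guards against division by zero are likewise implementation choices.

    import numpy as np

    def roughness_map(normals, fifth_gray):
        # Step S1022: divide each pixel of the fifth grayscale image by
        # tan(theta), theta being the angle between the pixel's normal
        # (shape H x W x 3) and the assumed unit normal (0, 0, 1).
        length = np.linalg.norm(normals, axis=-1) + 1e-8
        cos_t = normals[..., 2] / length
        sin_t = np.sqrt(np.clip(1.0 - cos_t ** 2, 0.0, 1.0))
        tan_t = sin_t / np.maximum(np.abs(cos_t), 1e-8)
        return fifth_gray.astype(np.float32) / np.maximum(tan_t, 1e-8)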
After the normal map has been generated by the above steps, step S1023 generates a corresponding metal degree map from each group's second image, the first image whose scene brightness is the same as that of the second image, and the corresponding normal map. Specifically, the pixel value at the corresponding position of the first image with the same brightness is subtracted from the pixel value at each position of the second image, the resulting difference image is converted into a grayscale image, and each pixel of that grayscale image is then divided by tan θ to generate the metal degree map.
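On the same assumptions, step S1023 might be sketched as below. The subtraction order shown (second image minus the equally bright first image) follows the physical reading that the parallel-polarizer shot contains the specular component that the crossed-polarizer shot blocks; the BT.601 luma weights used for the grayscale conversion are likewise an assumption of the sketch:

```python
import numpy as np

def build_metalness_map(first_img, second_img, tan_t):
    """Sketch of step S1023: isolate the specular residue, then divide by tan(theta).

    first_img:  crossed-polarizer image whose scene brightness matches second_img.
    second_img: parallel-polarizer image (diffuse + specular reflection).
    tan_t:      per-pixel tan(theta) from the normal map, as computed above.
    """
    diff = second_img.astype(np.float64) - first_img.astype(np.float64)
    # Assumption: standard BT.601 luma weights for the grayscale conversion.
    gray = diff @ np.array([0.299, 0.587, 0.114])
    return gray / np.maximum(tan_t, 1e-6)
```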
In an optional implementation of this embodiment, on the basis of the above, step S104 of determining the material information corresponding to the target object to be detected according to the multiple material information maps corresponding to all the preset positions, as shown in fig. 9, specifically includes the following steps:
Step S1040: modeling according to all the color maps to generate a model of the target object to be detected, and calculating the color information of the model.
Step S1042: calculating roughness information according to all the roughness maps and the model.
Step S1044: calculating metal degree information according to all the metal degree maps and the model, so as to determine the material information corresponding to the target object to be detected.
In the above steps, after obtaining the color maps, roughness maps, and metal degree maps corresponding to all the preset positions, the server performs model reconstruction using all the color maps to generate a reconstructed three-dimensional model together with the actual color information/color map, calculates the actual roughness information/roughness map of the target object to be detected using the reconstructed three-dimensional model and all the roughness maps, and calculates the actual metal degree information/metal degree map of the target object to be detected using all the metal degree maps and the reconstructed three-dimensional model.
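The disclosure does not spell out how the per-view maps are fused onto the reconstructed model. Purely as a hypothetical sketch, with project() and visible() standing in for whatever camera model the reconstruction provides (neither is defined by the patent), the fusion could average each surface point's samples across the views in which it is seen:

```python
import numpy as np

def fuse_maps_onto_model(surface_points, views, material_maps, project, visible):
    """Hypothetical fusion: average per-view material values per surface point.

    surface_points: (N, 3) points sampled on the reconstructed model.
    views:          per-view camera parameters, one per preset position.
    material_maps:  the roughness (or metal degree) map shot at each view.
    project/visible: placeholder helpers, not defined by the patent.
    """
    total = np.zeros(len(surface_points))
    count = np.zeros(len(surface_points))
    for view, mat_map in zip(views, material_maps):
        for i, p in enumerate(surface_points):
            if visible(p, view):
                u, v = project(p, view)      # pixel coordinates in this view
                total[i] += mat_map[int(v), int(u)]
                count[i] += 1
    return total / np.maximum(count, 1)      # unseen points stay zero
```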
Third embodiment
Fig. 10 shows a schematic structural block diagram of a material determination apparatus based on an unmanned aerial vehicle system provided by the present application. It should be understood that the unmanned aerial vehicle system is the one described in the first embodiment, and that the apparatus corresponds to the method embodiment executed by the server in fig. 2 to 9 and can execute the steps of the method executed by the server in the second embodiment; the specific functions of the apparatus can be referred to in the description above, and a detailed description is omitted here where appropriate to avoid redundancy. The apparatus includes at least one software functional module that can be stored in a memory in the form of software or firmware, or solidified in the operating system (OS) of the apparatus. Specifically, the apparatus includes: an acquisition module 300, configured to acquire at least one group of scene images shot by the photographing unmanned aerial vehicle at at least one preset position, where each group of scene images includes five first images shot at a first preset angle corresponding to the preset position and one second image shot at a second preset angle, the five first images differ in scene brightness, each scene image includes the target object to be detected, the first preset angle is that the orientations of the first polarizer and the second polarizer are mutually perpendicular, the second preset angle is that the orientations of the first polarizer and the second polarizer are mutually parallel, and the orientations of the second polarizers of the plurality of lighting unmanned aerial vehicles are consistent; a generating module 302, configured to generate a material information map corresponding to each preset position according to the five first images and the corresponding second image of each group; and a determining module 304, configured to determine the material information corresponding to the target object to be detected according to the multiple material information maps corresponding to all the preset positions.
In the material determination apparatus designed above, the server controls the plurality of lighting unmanned aerial vehicles to illuminate the target object to be detected at different scene brightnesses, forming scenes of different scene brightness; controls the photographing unmanned aerial vehicle to photograph these scenes at at least one preset position to obtain at least one group of scene images; generates the material information map corresponding to each preset position from the five first images and the corresponding second image in each group; and then determines the material information corresponding to the target object to be detected from the multiple material information maps corresponding to all the preset positions. This scheme uses a plurality of lighting unmanned aerial vehicles to construct a lighting frame for a large scene, and the photographing unmanned aerial vehicle photographs the large scene under that frame, thereby enabling material determination for large scenes. It solves the problem that existing material reconstruction methods, being suitable only for small objects and not for large buildings, limit the application range of material reconstruction; at the same time, it avoids the high difficulty and cost of building a physical lighting frame larger than the large scene itself, thereby reducing the difficulty and cost of determining the material of a large-scene target object.
In an optional implementation of this embodiment, the apparatus further includes a control module 306, configured to: control the photographing unmanned aerial vehicle to reach the corresponding preset position, establish a corresponding preset distribution shape centered on the preset position, and control the plurality of lighting unmanned aerial vehicles to be distributed on the preset distribution shape, where the preset distribution shape and the target object to be detected form a scene to be photographed; and control the photographing unmanned aerial vehicle to photograph the corresponding scene to be photographed at the at least one preset position to obtain the at least one group of scene images.
Fourth embodiment
As shown in fig. 11, the present application provides an electronic device 4 including a processor 401 and a memory 402, which are interconnected and communicate with each other via a communication bus 403 and/or another form of connection mechanism (not shown). The memory 402 stores a computer program executable by the processor 401; when the computing device runs, the processor 401 executes the computer program to perform the method of the second embodiment or any optional implementation thereof, for example steps S100 to S104: acquiring at least one group of scene images shot by the photographing unmanned aerial vehicle at at least one preset position, where each group of scene images includes five first images shot at a first preset angle corresponding to the preset position and one second image shot at a second preset angle; generating a material information map corresponding to each preset position according to the five first images and the corresponding second image of each group; and determining the material information corresponding to the target object to be detected according to the multiple material information maps corresponding to all the preset positions.
The present application provides a storage medium having a computer program stored thereon; when executed by a processor, the computer program performs the method of the second embodiment or any optional implementation thereof.
The storage medium may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
The present application provides a computer program product which, when run on a computer, causes the computer to perform the method of the second embodiment or any optional implementation thereof.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
It should be noted that the functions, if implemented in the form of software functional modules and sold or used as independent products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A material determination method based on an unmanned aerial vehicle system, characterized in that the unmanned aerial vehicle system comprises a photographing unmanned aerial vehicle, a plurality of lighting unmanned aerial vehicles, and a server, the photographing unmanned aerial vehicle comprises a camera and a first polarizer arranged on the lens of the camera, each lighting unmanned aerial vehicle comprises a light source and a second polarizer arranged on the light-emitting surface of the light source, and the server is in communication connection with the plurality of lighting unmanned aerial vehicles and the photographing unmanned aerial vehicle; the method is applied to the server and comprises:
acquiring at least one group of scene images shot by the photographing unmanned aerial vehicle at at least one preset position, wherein each group of scene images comprises five first images shot at a first preset angle corresponding to the preset position and one second image shot at a second preset angle, the scene brightnesses of the five first images are different, each scene image comprises a target object to be detected, the first preset angle is that the orientations of the first polarizer and the second polarizer are mutually perpendicular, the second preset angle is that the orientations of the first polarizer and the second polarizer are mutually parallel, and the orientations of the second polarizers of the plurality of lighting unmanned aerial vehicles are consistent;
generating a material information map corresponding to each preset position according to the five first images and the corresponding second image of each group;
and determining the material information corresponding to the target object to be detected according to the multiple material information maps corresponding to all the preset positions.
2. The method of claim 1, wherein the acquiring at least one group of scene images shot by the photographing unmanned aerial vehicle at at least one preset position comprises:
controlling the photographing unmanned aerial vehicle to photograph a corresponding scene to be photographed at each preset position to obtain a corresponding group of scene images, wherein the scene to be photographed is formed by the plurality of lighting unmanned aerial vehicles distributed in a preset shape and the target object to be detected, and the preset shape is established with the photographing unmanned aerial vehicle as its center.
3. The method of claim 2, wherein the controlling the photographing unmanned aerial vehicle to photograph a corresponding scene to be photographed at each preset position to obtain a corresponding group of scene images comprises:
controlling the photographing unmanned aerial vehicle to photograph scenes to be photographed under five different scene brightness at each preset position through a first preset angle to obtain five first images corresponding to each preset position;
and controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed at each preset position through a second preset angle to obtain a second image corresponding to the five first images.
4. The method of claim 3, wherein the five first images include a first scene brightness image, a second scene brightness image, a third scene brightness image, a fourth scene brightness image, and a fifth scene brightness image, and the controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under five different scene brightnesses at each preset position through a first preset angle to obtain five first images corresponding to each preset position comprises:
adjusting the brightness of the light sources of all lighting unmanned aerial vehicles to form the first scene brightness;
controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under the first scene brightness at each preset position through a first preset angle to obtain a first scene brightness image corresponding to each preset position;
adjusting the brightness of the light sources of all lighting unmanned aerial vehicles to form the second scene brightness;
controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under the second scene brightness at each preset position through a first preset angle to obtain a second scene brightness image corresponding to each preset position;
adjusting the brightness of the light sources of all lighting unmanned aerial vehicles to form the third scene brightness;
controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under the third scene brightness at each preset position through a first preset angle to obtain a third scene brightness image corresponding to each preset position;
adjusting the brightness of the light sources of all lighting unmanned aerial vehicles to form the fourth scene brightness;
controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under the fourth scene brightness at each preset position through a first preset angle to obtain a fourth scene brightness image corresponding to each preset position;
adjusting the brightness of the light sources of all lighting unmanned aerial vehicles to form the fifth scene brightness;
and controlling the photographing unmanned aerial vehicle to photograph the scene to be photographed under the fifth scene brightness at each preset position through a first preset angle to obtain a fifth scene brightness image corresponding to each preset position.
5. The method of claim 4, wherein the adjusting the brightness of the light sources of all lighting unmanned aerial vehicles to form the first scene brightness comprises:
controlling the light sources of all lighting unmanned aerial vehicles to be turned off to form the first scene brightness;
and the adjusting the brightness of the light sources of all lighting unmanned aerial vehicles to form the fifth scene brightness comprises:
adjusting the brightness values of the light sources of all lighting unmanned aerial vehicles to the maximum to form the fifth scene brightness.
6. The method of claim 5, wherein the adjusting the brightness of the light sources of all lighting unmanned aerial vehicles to form the second scene brightness comprises:
establishing a coordinate system with the position of the photographing unmanned aerial vehicle as the origin, the orientation of the first polarizer of the photographing unmanned aerial vehicle as the y axis, and the direction perpendicular to the y axis as the x axis;
calculating a first brightness value corresponding to each lighting unmanned aerial vehicle according to the abscissa of that lighting unmanned aerial vehicle, and adjusting the brightness of the corresponding lighting unmanned aerial vehicle according to its first brightness value to form the second scene brightness;
the adjusting the brightness of the light sources of all lighting unmanned aerial vehicles to form the third scene brightness comprises:
calculating a second brightness value corresponding to each lighting unmanned aerial vehicle according to the ordinate of that lighting unmanned aerial vehicle, and adjusting the brightness of the corresponding lighting unmanned aerial vehicle according to its second brightness value to form the third scene brightness;
and the adjusting the brightness of the light sources of all lighting unmanned aerial vehicles to form the fourth scene brightness comprises:
calculating a third brightness value corresponding to each lighting unmanned aerial vehicle according to the abscissa and the ordinate of that lighting unmanned aerial vehicle, and adjusting the brightness of the corresponding lighting unmanned aerial vehicle according to its third brightness value to form the fourth scene brightness.
7. The method according to claim 6, wherein the material information map comprises a color map, a roughness map, and a metal degree map, and the generating the material information map corresponding to each preset position according to the five first images and the corresponding second image of each group comprises:
generating a corresponding color map according to the fifth scene brightness image and the first scene brightness image of each group;
generating corresponding normal map according to the first scene brightness image, the second scene brightness image, the third scene brightness image and the fourth scene brightness image of each group;
generating a corresponding roughness map according to the fifth scene brightness image of each group and the corresponding normal map;
and generating a corresponding metal degree map according to the second image of each group, the first image with the same brightness as the second image, and the corresponding normal map.
8. The method of claim 7, wherein generating corresponding normal maps from the first, second, third, and fourth scene luminance images of each group comprises:
performing gray scale conversion on the first scene brightness image, the second scene brightness image, the third scene brightness image and the fourth scene brightness image of each group to obtain a corresponding first gray scale image, a corresponding second gray scale image, a corresponding third gray scale image and a corresponding fourth gray scale image;
subtracting the R channel value of the corresponding pixel point in the first grayscale image from the R channel value of each pixel point in the second grayscale image to obtain the R channel value of each pixel point in the normal map; subtracting the G channel value of the corresponding pixel point in the first grayscale image from the G channel value of each pixel point in the third grayscale image to obtain the G channel value of each pixel point in the normal map; and subtracting the B channel value of the corresponding pixel point in the first grayscale image from the B channel value of each pixel point in the fourth grayscale image to obtain the B channel value of each pixel point in the normal map;
and generating the normal map corresponding to each group according to the R channel value, the G channel value, and the B channel value of each pixel point in that group's normal map.
9. The method according to claim 7, wherein the determining the material information corresponding to the target object according to the plurality of material information maps corresponding to all the preset positions comprises:
modeling according to all the color maps to generate a model of the target object to be detected and calculating color information of the model;
calculating roughness information according to all roughness maps and the model;
and calculating the metal degree information according to all the metal degree maps and the model so as to determine the material information corresponding to the target object to be detected.
10. A material determination apparatus based on an unmanned aerial vehicle system, characterized in that the unmanned aerial vehicle system comprises a photographing unmanned aerial vehicle, a plurality of lighting unmanned aerial vehicles, and a server, the photographing unmanned aerial vehicle comprises a camera and a first polarizer arranged on the lens of the camera, each lighting unmanned aerial vehicle comprises a light source and a second polarizer arranged on the light-emitting surface of the light source, and the server is in communication connection with the plurality of lighting unmanned aerial vehicles and the photographing unmanned aerial vehicle; the apparatus is applied to the server and comprises:
an acquisition module, configured to acquire at least one group of scene images shot by the photographing unmanned aerial vehicle at at least one preset position, wherein each group of scene images comprises five first images shot at a first preset angle corresponding to the preset position and one second image shot at a second preset angle, the scene brightnesses of the five first images are different, each scene image comprises a target object to be detected, the first preset angle is that the orientations of the first polarizer and the second polarizer are mutually perpendicular, the second preset angle is that the orientations of the first polarizer and the second polarizer are mutually parallel, and the orientations of the second polarizers of the plurality of lighting unmanned aerial vehicles are consistent;
a generating module, configured to generate a material information map corresponding to each preset position according to the five first images and the corresponding second image of each group;
and a determining module, configured to determine the material information corresponding to the target object to be detected according to the multiple material information maps corresponding to all the preset positions.
CN202010503045.6A 2020-06-04 2020-06-04 Unmanned aerial vehicle system-based material determination method and device Active CN113758918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010503045.6A CN113758918B (en) 2020-06-04 2020-06-04 Unmanned aerial vehicle system-based material determination method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010503045.6A CN113758918B (en) 2020-06-04 2020-06-04 Unmanned aerial vehicle system-based material determination method and device

Publications (2)

Publication Number Publication Date
CN113758918A true CN113758918A (en) 2021-12-07
CN113758918B CN113758918B (en) 2024-02-27

Family

ID=78783866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010503045.6A Active CN113758918B (en) 2020-06-04 2020-06-04 Unmanned aerial vehicle system-based material determination method and device

Country Status (1)

Country Link
CN (1) CN113758918B (en)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2420866A1 (en) * 2000-08-28 2002-03-07 Cognitens, Ltd. Accurately aligning images in digital imaging systems by matching points in the images
GB0616685D0 (en) * 2006-08-23 2006-10-04 Warwick Warp Ltd Retrospective shading approximation from 2D and 3D imagery
US20100141931A1 (en) * 2008-12-10 2010-06-10 Aplik, S.A. Method and device for quantitatively determining the surface optical characteristics of a reference object comprised by a plurality of optically differentiable layers
DE102012104900A1 (en) * 2011-06-06 2012-12-06 Seereal Technologies S.A. Method and apparatus for layering thin volume lattice stacks and beam combiner for a holographic display
US20180137635A1 (en) * 2015-06-08 2018-05-17 Commissariat A L'energie Atomique Et Aux Energies Alternatives Method for processing images having specularities and corresponding computer program product
US20170154462A1 (en) * 2015-11-30 2017-06-01 Photopotech LLC Systems and Methods for Processing Image Information
US20170154463A1 (en) * 2015-11-30 2017-06-01 Photopotech LLC Systems and Methods for Processing Image Information
US20190306391A1 (en) * 2015-11-30 2019-10-03 Photopotech LLC Image-Capture Device
US20180219404A1 (en) * 2017-01-27 2018-08-02 Otoy, Inc. Drone-based vr/ar device recharging system
WO2018214077A1 (en) * 2017-05-24 2018-11-29 深圳市大疆创新科技有限公司 Photographing method and apparatus, and image processing method and apparatus
US9984455B1 (en) * 2017-06-05 2018-05-29 Hana Resources, Inc. Organism growth prediction system using drone-captured images
CN107514993A (en) * 2017-09-25 2017-12-26 同济大学 The collecting method and system towards single building modeling based on unmanned plane
CN107888840A (en) * 2017-10-30 2018-04-06 广东欧珀移动通信有限公司 High-dynamic-range image acquisition method and device
WO2019127402A1 (en) * 2017-12-29 2019-07-04 深圳市大疆创新科技有限公司 Synthesizing method of spherical panoramic image, uav system, uav, terminal and control method thereof
EP3611665A1 (en) * 2018-08-17 2020-02-19 Siemens Aktiengesellschaft Mapping images to the synthetic domain
CN110581956A (en) * 2019-08-26 2019-12-17 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN110807833A (en) * 2019-11-04 2020-02-18 成都数字天空科技有限公司 Mesh topology obtaining method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIANGHUA XIE: "A Review of Recent Advances in Surface Defect Detection using Texture analysis Techniques", ELECTRONIC LETTERS ON COMPUTER VISION AND IMAGE ANALYSIS, vol. 7, no. 3, pages 1-22 *
ZHANG Zhouquan et al.: "Material identification and repair method for cable shielding wires of substation terminal boxes", Industrial Engineering, pages 59-63 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116148265A (en) * 2023-02-14 2023-05-23 浙江迈沐智能科技有限公司 Flaw analysis method and system based on synthetic leather high-quality image acquisition

Also Published As

Publication number Publication date
CN113758918B (en) 2024-02-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant