WO2020103110A1 - Method and device for acquiring image boundaries based on a point cloud map, and aircraft - Google Patents

Method and device for acquiring image boundaries based on a point cloud map, and aircraft

Info

Publication number
WO2020103110A1
WO2020103110A1 (PCT/CN2018/117038)
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
data
semantics
image
cloud map
Prior art date
Application number
PCT/CN2018/117038
Other languages
English (en)
Chinese (zh)
Inventor
王涛
马东东
张明磊
刘政哲
李鑫超
闫光
杨志华
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2018/117038
Priority to CN201880038404.6A
Publication of WO2020103110A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/155 Segmentation; Edge detection involving morphological operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20036 Morphological image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30181 Earth observation

Definitions

  • the invention relates to the technical field of control, and in particular to a method, device and aircraft for acquiring image boundaries based on a point cloud map.
  • Embodiments of the present invention provide an image boundary acquisition method, device, and aircraft based on a point cloud map, which can automatically divide image areas to meet the needs of automated and intelligent classification of image areas.
  • an embodiment of the present invention provides a method for acquiring an image boundary based on a point cloud map.
  • the method includes:
  • each image area with different semantics on the point cloud map is determined.
  • an embodiment of the present invention provides a route planning method based on a point cloud map.
  • the method includes:
  • an embodiment of the present invention provides an image boundary acquisition device based on a point cloud map, including a memory and a processor;
  • the memory is used to store program instructions
  • the processor executes the program instructions stored in the memory. When the program instructions are executed, the processor is used to perform the following steps:
  • each image area with different semantics on the point cloud map is determined.
  • an embodiment of the present invention provides a route planning device based on a point cloud map, including a memory and a processor;
  • the memory is used to store program instructions
  • the processor executes the program instructions stored in the memory. When the program instructions are executed, the processor is used to perform the following steps:
  • an embodiment of the present invention provides an aircraft, including:
  • a power system provided on the fuselage for providing flight power
  • the processor is used to obtain a point cloud map containing semantics; according to the semantics on the point cloud map, determine each image area with different semantics on the point cloud map.
  • an embodiment of the present invention provides another aircraft, including:
  • a power system provided on the fuselage for providing flight power
  • an embodiment of the present invention provides a computer-readable storage medium that stores a computer program which, when executed by a processor, implements the image boundary acquisition method based on a point cloud map described in the first aspect above, or the route planning method based on a point cloud map described in the second aspect.
  • an image boundary acquisition device based on a point cloud map can acquire a point cloud map containing semantics; according to the semantics on the point cloud map, each image area with different semantics on the point cloud map is determined.
  • This method can automatically divide image areas, meeting the needs of automated and intelligent classification of image areas.
  • FIG. 1 is a schematic diagram of a working scene of an image boundary acquisition system based on a point cloud map provided by an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of an image boundary acquisition method based on a point cloud map provided by an embodiment of the present invention
  • Figure 3.1 is a schematic diagram of an erosion operation provided by an embodiment of the present invention.
  • Figure 3.2 is a schematic diagram of an expansion operation provided by an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of a route planning method based on a point cloud map provided by an embodiment of the present invention
  • FIG. 5 is a schematic diagram of an interface of a point cloud map provided by an embodiment of the present invention.
  • Figure 6.1 is a schematic diagram of an orthophoto image interface provided by an embodiment of the present invention.
  • FIG. 6.2 is a schematic diagram of another point cloud map interface provided by an embodiment of the present invention.
  • Figure 6.3 is a schematic diagram of an interface of a point cloud map for marking obstacles provided by an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of an image boundary acquisition device based on a point cloud map provided by an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a route planning device based on a point cloud map provided by an embodiment of the present invention.
  • the method for acquiring an image boundary based on a point cloud map may be performed by an image boundary acquisition system based on a point cloud map; the image boundary acquisition system based on a point cloud map includes an image boundary acquisition device based on a point cloud map and an aircraft.
  • a two-way communication connection can be established between the point cloud map-based image boundary acquisition device and the aircraft.
  • the point cloud map-based image boundary acquisition device may be set on an aircraft (such as a drone) equipped with a load (such as a camera, infrared detection device, surveying instrument, etc.).
  • the point cloud map-based image boundary acquisition device may also be provided on other movable devices, such as autonomous devices such as robots, unmanned vehicles, and unmanned boats.
  • the point cloud map-based image boundary acquisition device may be a component of an aircraft, that is, the aircraft includes the point cloud map-based image boundary acquisition device; in other embodiments, the point cloud map-based image boundary acquisition device can also be spatially independent of the aircraft. The following describes embodiments of a method for acquiring an image boundary based on a point cloud map for an aircraft with reference to the drawings.
  • an image boundary acquisition device based on a point cloud map may obtain a point cloud map containing semantics and determine each image area with different semantics on the point cloud map according to the semantics on the point cloud map.
  • the image boundary acquisition device may determine the image areas with continuous and identical semantics on the point cloud map, and perform edge processing operations on the image areas with continuous and identical semantics to obtain the image areas with different semantics on the point cloud map.
  • the edge processing operation includes a forward edge processing operation and / or a reverse edge processing operation.
  • the forward edge processing operation and/or the reverse edge processing operation can eliminate noise, segment independent image elements, connect adjacent elements in the image, locate obvious maximum or minimum regions in the image, and compute the image gradient, so as to achieve segmentation of the image.
  • the forward edge processing operation may erode the highlighted part of the original image, that is, "erode the domain"; the image obtained through the forward edge processing operation has a smaller highlight area than the original image.
  • the reverse edge processing operation may expand the highlighted part of the image, that is, "expand the domain"; the image obtained through the reverse edge processing operation has a larger highlight area than the original image.
  • When performing edge processing operations on the image areas with continuous identical semantics, the image boundary acquisition device based on the point cloud map may perform a global forward edge processing operation on all image areas on the point cloud map to determine the pseudo-adhesion image boundaries and thereby segment the pseudo-adhered image areas; and/or perform a local forward edge processing operation on the connected image areas on the point cloud map to determine the semi-adhesion image boundaries and thereby segment the semi-adhered image areas among the connected image areas.
  • the image boundary acquisition device based on the point cloud map may perform a global forward edge processing operation on all image areas on the point cloud map to determine the pseudo-adhesion image boundaries, so as to segment each pseudo-adhered image area.
  • the image boundary acquisition device based on the point cloud map may also determine the connected image areas on the point cloud map according to the semantics of the point cloud map, and perform a local forward edge processing operation on those connected image areas to determine the semi-adhesion image boundaries, so as to segment the semi-adhered image areas among the connected image areas.
  • the image boundary acquisition device based on the point cloud map may also perform a reverse edge processing operation on the point cloud map, thereby dividing the field into multiple image regions with different semantics.
  • FIG. 1 is a schematic diagram of a working scene of an image boundary acquisition system based on a point cloud map provided by an embodiment of the present invention.
  • the image boundary acquisition system based on a point cloud map shown in FIG. 1 includes: an image boundary acquisition device 11 based on a point cloud map and an aircraft 12. The image boundary acquisition device 11 based on a point cloud map may be a control terminal of the aircraft 12, specifically any one or more of a remote controller, a smartphone, a tablet computer, a laptop computer, a ground station, and a wearable device (watch, bracelet).
  • the aircraft 12 may be a rotor-type aircraft, such as a four-rotor aircraft, a six-rotor aircraft, an eight-rotor aircraft, or a fixed-wing aircraft.
  • the aircraft 12 includes a power system 121 for providing flight power to the aircraft 12, wherein the power system 121 includes any one or more of a propeller, a motor, and an electronic governor.
  • the aircraft 12 may further include a gimbal 122 and an imaging device 123;
  • the imaging device 123 is mounted on the main body of the aircraft 12 via the gimbal 122.
  • the camera device 123 is used for taking images or videos during the flight of the aircraft 12, including but not limited to multi-spectral imagers, hyper-spectral imagers, visible light cameras and infrared cameras, etc.
  • the gimbal 122 is a multi-axis transmission and stabilization system
  • the motor of the gimbal 122 compensates the imaging angle of the imaging device by adjusting the rotation angle of its rotation axes, and prevents or reduces shaking of the imaging device by providing an appropriate damping mechanism.
  • the point cloud map-based image boundary acquisition system may acquire a point cloud map containing semantics through the point cloud map-based image boundary acquisition device 11 and, according to the semantics on the point cloud map, determine each image area with different semantics on the point cloud map.
  • FIG. 2 is a schematic flowchart of a method for acquiring an image boundary based on a point cloud map according to an embodiment of the present invention.
  • the method may be performed by an image boundary acquiring device based on a point cloud map.
  • the specific explanation of the image boundary acquisition device of the point cloud map is as described above.
  • the method in the embodiment of the present invention includes the following steps.
  • an image boundary acquisition device based on a point cloud map can acquire a point cloud map containing semantics.
  • the point cloud map is generated according to the semantics of each pixel on the image captured by the camera.
  • the point cloud map contains a plurality of point data, and each point data includes location data, altitude data, and multiple semantics with different confidence levels.
  • the image boundary acquisition device may collect sample image data through the camera of the aircraft, semantically annotate the sample images corresponding to the sample image data to obtain sample image data including semantic annotation information, and generate an initial semantic recognition model according to a preset semantic recognition algorithm; the sample image data including semantic annotation information is then used as input data and fed into the initial semantic recognition model for training to generate a semantic recognition model.
  • the sample image data may include a color image or an orthophoto; or the sample image may include a color image and depth-of-field data corresponding to the color image; or the sample image may include an orthophoto and depth-of-field data corresponding to the orthophoto.
  • the orthophoto is an aerial image that has been geometrically corrected (for example, to have a uniform scale). Unlike an uncorrected aerial image, an orthophoto can be used to measure actual distances, because it is a true description of the Earth's surface obtained through geometric correction; orthophotos are therefore information-rich, intuitive, and measurable.
  • the color image is an image determined according to RGB values.
  • the depth of field data reflects the distance from the camera to the object.
  • the image boundary acquisition device may acquire the first image data collected by a camera mounted on the aircraft during flight, input the first image data into the semantic recognition model for processing to identify the semantics of each pixel in the first image data, and generate first point cloud data containing semantics according to the position data and height data corresponding to the first image data and the identified semantics of each pixel, thereby generating a point cloud map using the first point cloud data containing semantics.
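As an illustration of how the recognized per-pixel semantics and the georeferenced position and height data could be combined into semantic point cloud data, here is a minimal sketch (the function name, array layout, and 5-value record format are assumptions for illustration, not the patent's actual data format):

```python
import numpy as np

def build_semantic_point_cloud(labels, confidences, xs, ys, zs):
    """Stack per-pixel semantics and georeferenced coordinates into
    point records [x, y, z, label, confidence] (one row per pixel).

    labels      : (H, W) integer semantic class per pixel
    confidences : (H, W) confidence of the chosen class
    xs, ys, zs  : (H, W) position data (e.g. longitude, latitude) and height
    """
    return np.stack(
        [xs.ravel(), ys.ravel(), zs.ravel(),
         labels.ravel().astype(float), confidences.ravel()],
        axis=1,
    )
```

Each row of the result is one semantic point; a point cloud map is then just the collection of such rows accumulated over the flight.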
  • the semantic recognition model used in this solution may be a Convolutional Neural Network (CNN) model.
  • the architecture of the CNN model mainly includes an input layer, a convolutional layer, an excitation layer, and a pooling layer.
  • The CNN model may include a plurality of subnets; the subnets are arranged in a sequence from lowest to highest, and the input image data is processed by each of the subnets in the sequence.
  • the subnets in the sequence include multiple module subnets and optionally one or more other subnets, each composed of one or more conventional neural network layers, such as max-pooling layers, convolutional layers, fully connected layers, and regularization layers.
  • Each subnet receives the output representation generated by the previous subnet in the sequence, processes it through a pass-through convolution to generate a pass-through output, and processes it through one or more groups of neural network layers to generate one or more group outputs; the pass-through output and the group outputs are then concatenated to generate the output representation of the module subnet.
  • the input layer is used to input image data
  • the convolution layer is used to perform operations on the image data
  • the excitation layer is used to perform non-linear mapping on the output of the convolution layer.
  • the pooling layer is used to compress the amount of data and parameters, reduce overfitting, and improve performance.
  • This solution uses the semantically annotated sample image data as input to the input layer of the CNN model; after computation by the convolutional layers, the confidences of different semantics are output through multiple channels, for example a farmland channel (confidence), a fruit-tree channel (confidence), a river channel (confidence), and so on. The CNN output can be expressed as a tensor value.
  • the tensor value represents the three-dimensional point cloud information of the pixel and the semantic information of the n channels, where K1, K2, ..., Kn represent the confidences; the semantic channel with the highest confidence in the tensor data is taken as the semantics of the pixel.
  • For example, if Ki = 0.8 is the highest confidence, the semantics corresponding to the i-th channel are taken as the semantics of the pixel.
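The highest-confidence-channel rule described above can be sketched as follows (the channel names and array shapes are illustrative assumptions; the patent only names farmland, fruit-tree, and river channels as examples and does not fix an ordering):

```python
import numpy as np

# Hypothetical channel order, for illustration only.
CHANNELS = ["farmland", "fruit_tree", "river"]

def pixel_semantics(confidences):
    """confidences: (H, W, n) array of per-channel confidences K1..Kn.
    Returns per-pixel winning channel indices and the winning confidences."""
    idx = np.argmax(confidences, axis=-1)
    best = np.take_along_axis(confidences, idx[..., None], axis=-1)[..., 0]
    return idx, best

# One pixel whose fruit-tree channel has the highest confidence Ki = 0.8:
conf = np.array([[[0.1, 0.8, 0.1]]])
idx, best = pixel_semantics(conf)
# CHANNELS[idx[0, 0]] → "fruit_tree"; best[0, 0] → 0.8
```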
  • S202 Determine each image area with different semantics on the point cloud map according to the semantics on the point cloud map.
  • the image boundary acquisition device based on the point cloud map may determine each image area with different semantics on the point cloud map according to the semantics on the point cloud map.
  • According to the semantics on the point cloud map, the image boundary acquisition device based on the point cloud map may determine the image areas with continuous and identical semantics on the point cloud map, and perform edge processing operations on the image areas with continuous and identical semantics to obtain the image areas with different semantics on the point cloud map.
  • the edge processing operation includes a forward edge processing operation and / or a reverse edge processing operation.
  • the forward edge processing operation may include an erosion operation
  • the reverse edge processing operation may include an expansion operation.
  • The formula of the erosion operation is shown in formula (1):
  • dst(x, y) = min over (x', y') in the kernel of src(x + x', y + y')    (1)
  • where dst(x, y) represents the target pixel value of the erosion operation, (x, y) represents the pixel coordinate position, and src(x + x', y + y') represents taking the value of the source image at the position offset by the kernel element (x', y').
  • The formula of the expansion operation is shown in formula (2):
  • dst(x, y) = max over (x', y') in the kernel of src(x + x', y + y')    (2)
  • where dst(x, y) represents the target pixel value of the expansion operation, (x, y) represents the pixel coordinate position, and src(x + x', y + y') represents taking the value of the source image at the position offset by the kernel element (x', y').
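Formulas (1) and (2) can be sketched directly in code as a sliding-window minimum and maximum (a naive NumPy implementation for illustration; the border handling here is one possible choice, not something the patent specifies):

```python
import numpy as np

def _morph(src, kernel, reduce_fn, pad_value):
    """Slide the kernel over src; at each pixel, reduce the covered
    source values with reduce_fn and write the result to dst."""
    kh, kw = kernel.shape
    ay, ax = kh // 2, kw // 2  # reference point (anchor) at the kernel centre
    padded = np.pad(src, ((ay, kh - 1 - ay), (ax, kw - 1 - ax)),
                    constant_values=pad_value)
    dst = np.empty_like(src)
    for y in range(src.shape[0]):
        for x in range(src.shape[1]):
            window = padded[y:y + kh, x:x + kw]
            dst[y, x] = reduce_fn(window[kernel > 0])
    return dst

def erode(src, kernel):   # formula (1): minimum over the covered area
    return _morph(src, kernel, np.min, pad_value=src.max())

def dilate(src, kernel):  # formula (2): maximum over the covered area
    return _morph(src, kernel, np.max, pad_value=src.min())
```

For example, eroding a 3×3 highlight block in a 5×5 image with a 3×3 all-ones kernel leaves only the block's centre pixel, while dilating it fills the whole image, matching the smaller/larger highlight-area behaviour described above.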
  • the forward edge processing operation includes: performing a global forward edge processing operation on all image areas on the point cloud map to determine the pseudo-adhesion image boundaries, so as to segment each pseudo-adhered image area; and/or performing a local forward edge processing operation on the connected image areas on the point cloud map to determine the semi-adhesion image boundaries, so as to segment the semi-adhered image areas among the connected image areas.
  • the global forward edge processing operation includes: convolving each semantic set image in the point cloud map with a preset computation kernel to obtain the minimum value of the pixels in the area covered by the kernel, and assigning the minimum value to the specified pixel.
  • the local forward edge processing operation includes: convolving the semantic set images with connected domains in the point cloud map with a preset computation kernel to obtain the minimum value of the pixels in the area covered by the kernel, and assigning the minimum value to the specified pixel.
  • the preset calculation kernel is a predetermined figure with reference points.
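To illustrate how a global forward (erosion) operation can separate pseudo-adhered regions, the following sketch erodes a binary semantic mask with a 3×3 kernel and counts 4-connected components before and after (the helper names and the toy mask are illustrative assumptions, not the patent's implementation):

```python
import numpy as np
from collections import deque

def erode3x3(mask):
    """Binary erosion with a 3x3 all-ones kernel (formula (1) as a minimum)."""
    p = np.pad(mask, 1)                    # pad with background (False)
    out = np.ones_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def count_components(mask):
    """Count 4-connected components of a boolean mask by plain BFS."""
    seen = np.zeros(mask.shape, dtype=bool)
    n = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        n += 1
        q = deque([(sy, sx)])
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                v, w = y + dy, x + dx
                if (0 <= v < mask.shape[0] and 0 <= w < mask.shape[1]
                        and mask[v, w] and not seen[v, w]):
                    seen[v, w] = True
                    q.append((v, w))
    return n

# Two 3x3 semantic regions pseudo-adhered by a one-pixel-wide bridge:
m = np.zeros((3, 9), dtype=bool)
m[:, 0:3] = True
m[:, 6:9] = True
m[1, 3:6] = True
# count_components(m) → 1 before erosion; the bridge is removed by erosion,
# so count_components(erode3x3(m)) → 2, i.e. the regions are segmented.
```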
  • FIG. 3.1 can be used as an example for illustration; FIG. 3.1 is a schematic diagram of an erosion operation provided by an embodiment of the present invention.
  • the image boundary acquisition device based on the point cloud map may convolve each semantic set image 311 in the point cloud map with the predetermined figure 312 with reference points serving as the preset computation kernel, obtain the minimum value of the pixels in the area covered by the kernel, and assign the minimum value to the specified pixel, yielding the eroded image 313 shown in Figure 3.1.
  • the reverse edge processing operation includes: convolving each semantic set image in the point cloud map with a preset computation kernel to obtain the maximum value of the pixels in the area covered by the kernel, and assigning the maximum value to the specified pixel.
  • the preset calculation kernel is a predetermined figure with reference points.
  • FIG. 3.2 can be used as an example for illustration; FIG. 3.2 is a schematic diagram of an expansion operation provided by an embodiment of the present invention.
  • the image boundary acquisition device based on the point cloud map may convolve each semantic set image 321 in the point cloud map with the predetermined figure 322 with reference points serving as the preset computation kernel, obtain the maximum value of the pixels in the area covered by the kernel, and assign the maximum value to the specified pixel, yielding the expanded image 323 shown in Figure 3.2.
  • Through the forward edge processing operation, a highlight area smaller than that of the original image can be obtained; through the reverse edge processing operation, a highlight area larger than that of the original image can be obtained.
  • the image effect can be enhanced, and more effective data can be provided for the calculation in the subsequent image processing process, so as to improve the accuracy of the calculation.
  • an image boundary acquisition device based on a point cloud map may acquire a point cloud map containing semantics and determine each image area with different semantics on the point cloud map according to the semantics on the point cloud map. In this way, image areas can be divided automatically, which meets the needs of automated and intelligent classification of image areas and improves the accuracy of image division.
  • FIG. 4 is a schematic flowchart of a route planning method based on a point cloud map provided by an embodiment of the present invention.
  • the method may be executed by a route planning device based on a point cloud map.
  • the route planning device based on the point cloud map can be installed on the aircraft, or on other mobile equipment that establishes a communication connection with the aircraft, such as autonomous equipment like robots, unmanned vehicles, and unmanned boats.
  • the point cloud map-based route planning device may be a component of an aircraft; in other embodiments, the point cloud map-based route planning device may also be spatially independent of the aircraft.
  • the method in the embodiment of the present invention includes the following steps.
  • a route planning device based on a point cloud map can obtain a point cloud map containing semantics.
  • a route planning device based on a point cloud map may acquire first image data captured by a camera device mounted on the aircraft, process the first image data based on a semantic recognition model to obtain the semantics of each pixel in the first image data, and generate first point cloud data containing semantics according to the position data and height data corresponding to the first image data and the semantics of each pixel in the first image data, so as to generate a point cloud map using the first point cloud data containing semantics.
  • the route planning device based on the point cloud map may train and generate the semantic recognition model before processing the first image data based on the semantic recognition model.
  • the point cloud map-based route planning device may collect sample image data through the camera of the aircraft and semantically annotate the sample images corresponding to the sample image data to obtain sample image data including semantic annotation information.
  • the route planning device based on the point cloud map may generate an initial semantic recognition model according to a preset semantic recognition algorithm, use the sample image data including semantic annotation information as input data, and input it into the initial semantic recognition model for training to obtain a training result, where the training result includes the position data corresponding to the sample image data, the height data, and the semantics of each pixel in the sample image.
  • the position data corresponding to the sample image data includes the longitude and latitude of the sample image
  • the height data corresponding to the sample image data is the height of the sample image.
  • the sample image data may include a color image or an orthophoto; or the sample image may include a color image and depth-of-field data corresponding to the color image; or the sample image may include an orthophoto and depth-of-field data corresponding to the orthophoto.
  • the orthophoto is an aerial image that has been geometrically corrected (for example, to have a uniform scale). Unlike an uncorrected aerial image, an orthophoto can be used to measure actual distances, because it is a true description of the Earth's surface obtained through geometric correction; orthophotos are therefore information-rich, intuitive, and measurable.
  • the color image is an image determined according to RGB values.
  • the depth of field data reflects the distance from the camera to the object.
  • the first point cloud data corresponds to each pixel in the first image data
  • the semantics of different point cloud data on the point cloud map can be marked with different display methods, such as marking with different colors.
  • FIG. 5 is a schematic diagram of an interface of a point cloud map provided by an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of tagging point cloud data with different semantics on a point cloud map by using different colors.
  • The different colors shown in FIG. 5 represent different categories.
  • the route planning device based on the point cloud map may semantically annotate the orthophotos (that is, mark the categories of ground features so that feature types can be recognized), obtain orthophotos containing semantic annotation information, input the orthophotos containing semantic annotation information into the trained semantic recognition model for processing, identify the semantics corresponding to each pixel on the orthophotos, and output the semantic confidence, position data, and height data of each pixel on the orthophoto.
  • the position data includes the longitude and latitude of the first image in the first image data
  • the height data includes the height of the first image in the first image data.
  • the point cloud map-based route planning device may use a trained semantic recognition model to process the orthophoto and the depth-of-field data corresponding to the orthophoto, and identify the semantics corresponding to each pixel on the orthophoto.
  • the route planning device based on the point cloud map may generate first point cloud data containing semantics according to the position data, height data, and depth-of-field data corresponding to the orthophoto and the semantics corresponding to each pixel on the orthophoto, so as to generate a point cloud map containing semantics.
  • the depth of field data may be displayed by a depth map.
  • the depth map refers to a frame of data with depth information (that is, depth-of-field data) read from the camera device. It is not suitable for intuitive viewing, so the depth map can be converted into point cloud data according to preset rules; a point cloud map can then be generated from the point cloud data, which is convenient for users to view.
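One common rule for converting a depth map into point cloud data is pinhole back-projection; the patent does not specify its conversion rule, so the following is only an illustrative sketch assuming known camera intrinsics:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map into camera-frame 3-D points (pinhole model).

    depth          : (H, W) array of distances along the optical axis
    fx, fy, cx, cy : camera intrinsics (assumed known from calibration)
    Returns an (H*W, 3) array of [X, Y, Z] points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x.ravel(), y.ravel(), z.ravel()], axis=1)
```

The resulting points would still need to be transformed into the map frame using the aircraft's pose before being merged into the point cloud map.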
  • the first image data includes orthophotos. Since orthophotos obtained at different times may overlap substantially, two orthophotos collected at two different times may contain multiple pixels with the same position data, and the identified semantics of those pixels may be inconsistent between the two orthophotos. Therefore, in order to perform semantic recognition on multiple pixels with the same position data more reliably, the route planning device based on the point cloud map can, according to the semantic confidences output by the semantic recognition model for the pixels with the same position data, determine the semantics with the higher confidence as the semantics of those pixels.
  • the point cloud map-based route planning device may also use manual voting to determine the semantics of multiple pixels with the same position data; in some embodiments, it can also determine the most frequently marked semantics as the semantics of the pixels with the same position data; in other embodiments, the semantics of such pixels can also be determined according to other rules, for example, according to a preset semantic priority, which is not specifically limited in this embodiment of the present invention.
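The confidence, voting, and priority rules above can be sketched as follows; the function names, labels, confidences, and the priority map are hypothetical illustrations, not identifiers from this disclosure.

```python
def resolve_semantics(candidates, priority=None):
    """Pick one label for a pixel observed in several overlapping orthophotos.

    candidates: list of (label, confidence) pairs for the same position data.
    priority:   optional {label: rank} map; a lower rank wins when given.
    """
    if priority is not None:
        return min(candidates, key=lambda lc: priority.get(lc[0], 99))[0]
    # default rule from the text: keep the semantics with the higher confidence
    return max(candidates, key=lambda lc: lc[1])[0]

def resolve_by_votes(labels):
    """Alternative rule: the most frequently marked semantics wins."""
    return max(set(labels), key=labels.count)

obs = [("fruit tree", 0.5), ("rice", 0.8)]
by_conf = resolve_semantics(obs)                     # higher confidence wins
by_prio = resolve_semantics(obs, {"fruit tree": 0})  # preset priority wins
votes = resolve_by_votes(["rice", "rice", "fruit tree"])
```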
  • the semantic recognition model used in this solution may be a CNN model, whose architecture mainly includes an input layer, a convolutional layer, an excitation layer, and a pooling layer.
  • the neural network model may include a plurality of subnets arranged in a sequence from lowest to highest, and the input image data is processed by each of the subnets in the sequence.
  • the subnets in the sequence include multiple module subnets and, optionally, one or more other subnets, all of which are composed of one or more conventional neural network layers, such as a max-pooling layer, a convolutional layer, a fully connected layer, a regularization layer, and so on.
  • Each module subnet receives the preceding output representation generated by the previous subnet in the sequence, processes it through a pass-through convolution to generate a pass-through output, and processes it through one or more groups of neural network layers to generate one or more group outputs; the pass-through output and the group outputs are then concatenated to generate the output representation of the module subnet.
  • the input layer is used to input image data
  • the convolution layer is used to perform operations on the image data
  • the excitation layer is used to perform non-linear mapping on the output of the convolution layer.
  • the pooling layer is used to compress the amount of data and parameters, reduce overfitting, and improve performance.
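As a rough illustration of the layer roles just listed, the toy sketch below chains a convolution, an excitation (ReLU) non-linearity, and a max-pooling step in pure Python. The 4x4 image and the horizontal-difference kernel are invented examples, not part of the disclosed model.

```python
def conv2d(img, kernel):
    """'Valid' 2-D convolution (cross-correlation, as used in CNNs)."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def relu(img):
    """Excitation layer: non-linear mapping of the convolution output."""
    return [[max(0.0, v) for v in row] for row in img]

def max_pool(img, size=2):
    """Pooling layer: compresses the data, helping against overfitting."""
    return [[max(img[i + di][j + dj] for di in range(size) for dj in range(size))
             for j in range(0, len(img[0]) - size + 1, size)]
            for i in range(0, len(img) - size + 1, size)]

img = [[1, 2, 0, 1],
       [0, 1, 3, 1],
       [1, 0, 1, 2],
       [2, 1, 0, 1]]
edge = [[1, -1]]  # toy horizontal-difference kernel
feat = max_pool(relu(conv2d(img, edge)))
```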
  • the position data includes longitude and latitude;
  • the first point cloud data includes a plurality of point data, each point data includes position data, height data, and multiple semantics with different confidence levels, and each point data contained in the first point cloud data corresponds to a pixel in the first image data.
  • the multiple semantics with different confidence levels are obtained from multiple output channels after recognition by the semantic recognition model; in some embodiments, unlike a general neural network output, a piecewise output function is added after each output channel of the neural network: if a channel's confidence result is negative, it is set to zero, ensuring that the confidences output by the neural network are positive floating-point data.
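The piecewise output function described here behaves like a per-channel ReLU clamp; a minimal sketch (the channel scores are invented for illustration):

```python
def clamp_confidences(channels):
    """Piecewise output: negative channel results are set to zero so that
    every confidence the network emits is non-negative floating-point data."""
    return [c if c > 0.0 else 0.0 for c in channels]

raw = [1.7, -0.3, 0.0, 2.4]    # one raw score per semantic channel
conf = clamp_confidences(raw)  # negatives clamped to zero
```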
  • a route planning device based on a point cloud map may acquire second image data captured by a camera mounted on an aircraft, process the second image data with the semantic recognition model to obtain the semantics of each pixel in the second image data, and generate second point cloud data containing semantics according to the position data and height data corresponding to the second image data and the semantics of each pixel in the second image data, thereby updating the point cloud map using the second point cloud data.
  • the first point cloud data, the second point cloud data, and the point cloud map all contain a plurality of point data, and each point data includes position data, altitude data, and multiple semantics with different confidence levels
  • Each point data contained in the first point cloud data corresponds to a pixel in the first image data, and each point data contained in the second point cloud data corresponds to a pixel in the second image data.
  • the confidence level is positive floating point data.
  • the route planning device based on the point cloud map may detect whether there is point data in the point cloud map generated from the first point cloud data that has the same position data as the second point cloud data (i.e., overlapping pixels); if such point data is detected, it can compare the semantic confidences of the two point data with the same position data in the second point cloud data and the point cloud map, and retain the semantics of the point data with the higher confidence.
  • the semantics of the point data with the higher confidence among the two point data may be determined as the semantics of the point data in the point cloud map that has the same position data as the second point cloud data, and the point data in the second point cloud data whose position data differs from the point cloud map is overlaid onto the point cloud map, thereby updating the point cloud map.
  • two point data having the same position data in the first point cloud data and the second point cloud data correspond to two overlapping pixels in the first image data and the second image data.
  • the route planning device based on the point cloud map may perform a subtraction operation on the multiple semantics with different confidence levels in two point data with the same position data in the first point cloud data and the second point cloud data; the subtraction operation removes the semantics with the lower confidence in the two point data and retains the semantics with the higher confidence.
  • for example, before updating the point cloud map, the route planning device based on the point cloud map detects point data in the point cloud map generated from the first point cloud data that has the same position data as the second point cloud data; if the semantics of that point data in the point cloud map is fruit tree with a confidence of 50%, and the semantics of the point data with the same position data in the second point cloud data is rice with a confidence of 80%, the semantic confidences of the two point data with the same position data can be compared; since 80% is greater than 50%, the semantics with the lower confidence in the two point data, namely fruit tree, can be removed, and the semantics in the point cloud map can be updated to rice.
  • the number of times each semantics has appeared in the history records for the two point data with the same position data in the point cloud map generated from the first point cloud data and in the second point cloud data may also be counted, and the semantics with the largest count used as the semantics of the two point data with the same position data in the first point cloud data and the second point cloud data.
  • when the point cloud map-based route planning device uses the second point cloud data to update the point cloud map, it may also determine, according to the priorities corresponding to the semantics of the two point data with the same position data in the second point cloud data and the point cloud map generated from the first point cloud data, the semantics with the highest priority as the semantics of the two point data with the same position data in the second point cloud data and the point cloud map.
  • S402: Determine each image area with different semantics on the point cloud map according to the semantics on the point cloud map.
  • the route planning device based on the point cloud map may determine each image area with different semantics on the point cloud map according to the semantics on the point cloud map.
  • each image area included in the point cloud map is divided according to the semantics of each pixel in the point cloud map, and each image area may be displayed with a different display marking method, for example, marking image areas with different semantics in different colors. Specific embodiments are as described above and will not be repeated here.
  • S403: Plan a flight route according to the semantics of each image area on the point cloud map.
  • the route planning device based on the point cloud map may plan the flight route according to the semantics of each image area on the point cloud map.
  • the flight route can be planned according to the semantics of pixel points corresponding to each image area on the point cloud map.
  • the route planning device based on the point cloud map may determine the obstacle areas on the point cloud map according to the semantics of the pixels corresponding to each image area on the point cloud map, and automatically mark the obstacle areas with a specific marking method, for example, telephone poles in farmland, isolated trees in farmland, and so on.
  • the route planning device based on the point cloud map can generate a flight route that automatically avoids the marked obstacle area according to a preset route generation algorithm.
  • the areas corresponding to the semantics designated as obstacles or obstacle areas can be automatically marked as obstacle areas to be avoided by the route, which greatly reduces manual marking work; because the point cloud map containing semantics merges the recognition results from multiple orthophotos in real time, the probability of misjudging or missing ground features is reduced and the efficiency of identifying features is improved.
  • Figure 6.1 is a schematic diagram of an orthophoto image interface provided by an embodiment of the present invention
  • Figure 6.2 is a schematic diagram of an interface of a point cloud map provided by an embodiment of the present invention
  • FIG. 6.3 is a schematic diagram of an interface of a point cloud map for marking obstacles provided by an embodiment of the present invention.
  • the image boundary acquisition device based on the point cloud map can input the acquired orthophoto shown in FIG. 6.1 into the trained semantic recognition model, and recognize the semantics of the pixels corresponding to the orthophoto shown in FIG. 6.1.
  • the point cloud map-based image boundary acquisition device renders the point cloud map to obtain the point cloud map shown in FIG. 6.2, where the gray dots in area 601 of FIG. 6.2 represent obstacles, such as telephone poles, that need to be marked. Therefore, by marking the gray dots in area 601 of FIG. 6.2, for example with the circle shown in FIG. 6.3, a schematic diagram of the marked obstacle as shown in FIG. 6.3 can be obtained.
  • the marking method for the obstacle may be other marking methods, which is not specifically limited in the embodiment of the present invention.
  • the route planning device based on the point cloud map may divide the categories of aerial photography scenes based on image regions with different semantics.
  • when the route planning device based on the point cloud map divides the categories of the aerial photography scene, it can classify the aerial photography scene based on the semantic confidence, position data, and height data corresponding to each pixel in the point cloud map.
  • the planning device may determine, according to any one or more of the semantic confidence, position data, and height data corresponding to each pixel of the point cloud map: the area corresponding to pixels whose semantics is tree and whose height data is greater than a first preset height threshold is a tree area; the area corresponding to pixels whose semantics is cement and/or asphalt is a road; the area corresponding to pixels whose semantics is pole and whose height data is greater than a second preset height threshold is a telephone pole; the area corresponding to pixels whose semantics is water, river, or other water cover is the water surface; the areas corresponding to pixels whose semantics is building (excluding water surface), factory building, plastic shed, and the like are buildings; the area corresponding to pixels whose semantics is rice is a paddy field; and the area corresponding to blank areas, or to pixels with other semantics whose height data is less than a third preset height threshold, is the ground. According to the identified categories included in the field, the areas corresponding to the field are divided.
  • the point cloud map containing semantics can also be applied to the detection of illegal buildings; the route planning device based on the point cloud map can, based on orthophotos with semantic annotation information (i.e., the first image data), use the semantic recognition model to identify the semantics of the pixels corresponding to two orthophotos collected at different times, and, according to the position data, height data, and semantics of each pixel corresponding to the orthophotos collected at the two different times, generate point cloud data with semantics and use the point cloud data to generate point cloud maps with semantics.
  • the semantic confidences of the pixels with the same position data can be compared to determine the semantics of those pixels, so as to determine, according to the semantics, whether an illegal building exists in the pixel area with the same position data, or whether that pixel area has changed.
  • the point cloud map containing semantics can also be applied to feature classification. Specifically, the features on the point cloud map may be classified according to the semantics of the corresponding pixels on the point cloud map, and/or divided by category according to the position data and height data of the corresponding pixels on the point cloud map.
  • the point cloud map containing semantics can also be applied to agricultural machinery spraying tasks.
  • pesticide spraying can be controlled by judging whether the area over which the agricultural machinery is flying is a crop that needs to be sprayed, and switching the spray on or off accordingly to avoid wasting pesticides.
  • S404: Control the aircraft to fly according to the flight route.
  • a route planning device based on a point cloud map may control the aircraft to fly according to the flight route.
  • when the route planning device based on the point cloud map controls the aircraft to fly according to the flight route, it can determine whether the semantics of the image area corresponding to the aircraft's current flight position in the point cloud map match the semantics of the target mission; if they match, the aircraft can be controlled to execute the target mission; if they do not match, the aircraft can be controlled to stop performing the target mission.
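Applied to the pesticide-spraying example, the match-then-execute logic above might be sketched as follows; the grid positions, crop labels, and function name are hypothetical illustrations.

```python
def spray_switch(map_semantics, position, target_crop):
    """Turn the sprayer on only when the semantics of the area under the
    aircraft's current position match the target mission's crop semantics."""
    return map_semantics.get(position) == target_crop

# Toy semantic map keyed by grid cell: rice should be sprayed, roads not.
semantics = {(0, 0): "rice", (0, 1): "road"}
on = spray_switch(semantics, (0, 0), "rice")   # over rice  -> keep spraying
off = spray_switch(semantics, (0, 1), "rice")  # over road  -> stop spraying
```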
  • the target task may be any one or more of tasks such as a pesticide spraying task, an obstacle detection task, and a scene target classification task.
  • the route planning device based on the point cloud map may, when controlling the aircraft to perform the target task, identify the targets of the aerial photography scene, generate a point cloud map containing semantics according to the recognition result, and classify the aerial photography scene according to the point cloud map containing semantics.
  • a route planning device based on a point cloud map may obtain a point cloud map containing semantics, determine each image area with different semantics on the point cloud map according to the semantics on the point cloud map, and plan a flight route according to the semantics of each image area on the point cloud map, thereby controlling the aircraft to fly according to the flight route.
  • FIG. 7 is a schematic structural diagram of an image boundary acquisition device based on a point cloud map according to an embodiment of the present invention.
  • the image boundary acquisition device based on the point cloud map includes: a memory 701, a processor 702, and a data interface 703.
  • the memory 701 may include a volatile memory (volatile memory); the memory 701 may also include a non-volatile memory (non-volatile memory); the memory 701 may also include a combination of the foregoing types of memories.
  • the processor 702 may be a central processing unit (central processing unit, CPU).
  • the processor 702 may further include a hardware chip.
  • the hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof; the PLD may be, for example, a field-programmable gate array (FPGA).
  • the memory 701 is used to store program instructions.
  • the processor 702 may call the program instructions stored in the memory 701 to perform the following steps:
  • each image area with different semantics on the point cloud map is determined.
  • when the processor 702 determines each image area with different semantics on the point cloud map according to the semantics on the point cloud map, it is specifically used to:
  • the edge processing operation includes: a forward edge processing operation and / or a reverse edge processing operation.
  • the forward edge processing operation includes:
  • the global edge processing operation includes:
  • Each semantic collection image in the point cloud map is convolved with a preset calculation kernel to obtain the minimum value of the pixels in the area covered by the calculation kernel, and the minimum value is assigned to the specified pixel.
  • the local forward edge processing operation includes:
  • the reverse edge processing operation includes:
  • Each semantic set image in the point cloud map is convolved with a preset calculation kernel to obtain the maximum value of the pixels in the area covered by the calculation kernel, and the maximum value is assigned to the specified pixel.
  • the preset calculation kernel is a predefined figure with a reference point.
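The forward (minimum) and reverse (maximum) edge processing operations described above correspond to morphological erosion and dilation. A pure-Python sketch with a square calculation kernel follows; the kernel size and the binary semantic mask are illustrative assumptions.

```python
def kernel_filter(img, ksize, op):
    """Slide a ksize x ksize calculation kernel over the image and assign
    op (min for forward / max for reverse edge processing) of the covered
    pixels to the kernel's reference point."""
    return [[op(img[i + di][j + dj]
                for di in range(ksize) for dj in range(ksize))
             for j in range(len(img[0]) - ksize + 1)]
            for i in range(len(img) - ksize + 1)]

# Binary mask of one semantic set (1 = pixel belongs to the semantics).
mask = [[0, 1, 1],
        [1, 1, 1],
        [1, 1, 0]]
eroded = kernel_filter(mask, 2, min)   # forward: shrinks the semantic region
dilated = kernel_filter(mask, 2, max)  # reverse: grows the semantic region
```

Chaining the two operations (an opening or closing) smooths the boundary of each semantic image area.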
  • an image boundary acquisition device based on a point cloud map may acquire a point cloud map containing semantics and determine each image area with different semantics on the point cloud map according to the semantics on the point cloud map; in this way, the image areas can be divided automatically, which meets the demands of automatic and intelligent classification of image areas and improves the accuracy of image division.
  • FIG. 8 is a schematic structural diagram of a route planning device based on a point cloud map according to an embodiment of the present invention.
  • the route planning device based on the point cloud map includes: a memory 801, a processor 802, and a data interface 803.
  • the memory 801 may include a volatile memory (volatile memory); the memory 801 may also include a non-volatile memory (non-volatile memory); the memory 801 may also include a combination of the foregoing types of memories.
  • the processor 802 may be a central processing unit (central processing unit, CPU).
  • the processor 802 may further include a hardware chip.
  • the hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof; the PLD may be, for example, a field-programmable gate array (FPGA).
  • the memory 801 is used to store program instructions.
  • the processor 802 may call the program instructions stored in the memory 801 to perform the following steps:
  • when the processor 802 obtains a point cloud map containing semantics, it is specifically used to:
  • generate first point cloud data containing semantics according to the position data and height data corresponding to the first image data and the semantics of each pixel in the first image data;
  • a point cloud map is generated using the first point cloud data containing semantics.
  • when the processor 802 obtains a point cloud map containing semantics, it is specifically used to:
  • first point cloud data, the second point cloud data, and the point cloud map all contain a plurality of point data, and each point data includes position data, height data, and multiple semantics with different confidence levels;
  • Each point data included in the first point cloud data corresponds to each pixel in the first image data, and each point data included in the second point cloud data corresponds to the Each pixel corresponds.
  • the confidence level is positive floating point data.
  • when the processor 802 uses the second point cloud data to update the point cloud map, it is specifically used to:
  • when the processor 802 compares the second point cloud data and the two point data with the same position data in the point cloud map, it is specifically used to:
  • Subtraction operations are performed on a plurality of semantics with different confidence levels in two point data with the same position data in the first point cloud data and the second point cloud data.
  • two point data having the same position data in the first point cloud data and the second point cloud data correspond to two overlapping pixel points in the first image data and the second image data.
  • when the processor 802 uses the second point cloud data to update the point cloud map, it is specifically used to:
  • the semantics with the largest number is used as the semantics of the two point data with the same position data in the first point cloud data and the second point cloud data.
  • when the processor 802 uses the second point cloud data to update the point cloud map, it is specifically used to:
  • the semantics with the highest priority is determined as the semantics of the two point data with the same position data in the second point cloud data and the point cloud map.
  • the first image data includes a color image
  • the first image data includes a color image and depth data corresponding to the color image; or,
  • the first image data includes an orthophoto; or,
  • the first image data includes orthophotos and depth data corresponding to the orthophotos.
  • the processor 802 is further used to:
  • the sample database includes sample image data
  • the sample image data includes a sample image and semantic annotation information; or, the sample image data includes a sample image, depth data corresponding to each pixel in the sample image and semantic annotation information.
  • when the processor 802 trains and optimizes the initial semantic recognition model based on each sample image data in the sample database to obtain the semantic recognition model, it is specifically used to:
  • the model parameters of the initial semantic recognition model are optimized to obtain the semantic recognition model.
  • the point cloud map includes a plurality of image areas, the image areas are divided according to the semantics of each pixel in the point cloud map, and each image area is displayed by different display mark methods.
  • when the processor 802 plans a flight route according to the semantics of each image area on the point cloud map, it is specifically used to:
  • when the processor 802 controls the aircraft to fly according to the flight path, it is specifically used to:
  • a route planning device based on a point cloud map may obtain a point cloud map containing semantics, determine each image area with different semantics on the point cloud map according to the semantics on the point cloud map, and plan a flight route according to the semantics of each image area on the point cloud map, thereby controlling the aircraft to fly according to the flight route.
  • An embodiment of the present invention provides an aircraft including: a fuselage; a power system provided on the fuselage for providing flight power; the power system includes: a blade and a motor for driving the blade to rotate;
  • the processor is used to obtain a point cloud map containing semantics; according to the semantics on the point cloud map, determine each image area with different semantics on the point cloud map.
  • when the processor determines each image area with different semantics on the point cloud map according to the semantics on the point cloud map, it is specifically used to:
  • the edge processing operation includes: a forward edge processing operation and / or a reverse edge processing operation.
  • the forward edge processing operation includes:
  • the global edge processing operation includes:
  • Each semantic collection image in the point cloud map is convolved with a preset calculation kernel to obtain the minimum value of the pixels in the area covered by the calculation kernel, and the minimum value is assigned to the specified pixel.
  • the local forward edge processing operation includes:
  • the reverse edge processing operation includes:
  • Each semantic set image in the point cloud map is convolved with a preset calculation kernel to obtain the maximum value of the pixels in the area covered by the calculation kernel, and the maximum value is assigned to the specified pixel.
  • the preset calculation kernel is a predetermined figure with reference points.
  • an image boundary acquisition device based on a point cloud map may acquire a point cloud map containing semantics and determine each image area with different semantics on the point cloud map according to the semantics on the point cloud map; in this way, the image areas can be divided automatically, which meets the demands of automatic and intelligent classification of image areas and improves the accuracy of image division.
  • An embodiment of the present invention also provides an aircraft including: a fuselage; a power system provided on the fuselage for providing flight power; the power system includes: a blade and a motor for driving the blade to rotate
  • a processor for acquiring a point cloud map containing semantics; determining each image area with different semantics on the point cloud map according to the semantics on the point cloud map; planning a flight route according to the semantics of each image area on the point cloud map; and controlling the aircraft to fly according to the flight route.
  • when the processor obtains a point cloud map containing semantics, it is specifically used to:
  • generate first point cloud data containing semantics according to the position data and height data corresponding to the first image data and the semantics of each pixel in the first image data;
  • a point cloud map is generated using the first point cloud data containing semantics.
  • the processor is also used to:
  • first point cloud data, the second point cloud data, and the point cloud map all contain a plurality of point data, and each point data includes position data, height data, and multiple semantics with different confidence levels;
  • Each point data included in the first point cloud data corresponds to each pixel in the first image data, and each point data included in the second point cloud data corresponds to the Each pixel corresponds.
  • the confidence level is positive floating point data.
  • when the processor uses the second point cloud data to update the point cloud map, it is specifically used to:
  • when the processor compares the second point cloud data and the two point data with the same position data in the point cloud map, it is specifically used to:
  • Subtraction operations are performed on a plurality of semantics with different confidence levels in two point data with the same position data in the first point cloud data and the second point cloud data.
  • two point data having the same position data in the first point cloud data and the second point cloud data correspond to two overlapping pixel points in the first image data and the second image data.
  • when the processor uses the second point cloud data to update the point cloud map, it is specifically used to:
  • the semantics with the largest number is used as the semantics of the two point data with the same position data in the first point cloud data and the second point cloud data.
  • when the processor uses the second point cloud data to update the point cloud map, it is specifically used to:
  • the semantics with the highest priority is determined as the semantics of the two point data with the same position data in the second point cloud data and the point cloud map.
  • the first image data includes a color image
  • the first image data includes a color image and depth data corresponding to the color image; or,
  • the first image data includes an orthophoto; or,
  • the first image data includes orthophotos and depth data corresponding to the orthophotos.
  • the processor is further configured to:
  • the sample database includes sample image data
  • the sample image data includes a sample image and semantic annotation information; or, the sample image data includes a sample image, depth data corresponding to each pixel in the sample image and semantic annotation information.
  • when the processor performs training and optimization on the initial semantic recognition model based on each sample image data in the sample database to obtain the semantic recognition model, it is specifically used to:
  • the model parameters of the initial semantic recognition model are optimized to obtain the semantic recognition model.
  • the point cloud map includes a plurality of image areas, the image areas are divided according to the semantics of each pixel in the point cloud map, and each image area is displayed by different display mark methods.
  • when the processor plans a flight route according to the semantics of each image area on the point cloud map, it is specifically used to:
  • when the processor controls the aircraft to fly according to the flight path, it is specifically used to:
  • a route planning device based on a point cloud map may obtain a point cloud map containing semantics, determine each image area with different semantics on the point cloud map according to the semantics on the point cloud map, and plan a flight route according to the semantics of each image area on the point cloud map, thereby controlling the aircraft to fly according to the flight route.
  • An embodiment of the present invention also provides a computer-readable storage medium that stores a computer program. When the computer program is executed by a processor, it implements the point cloud map-based image boundary acquisition method described in the embodiment corresponding to FIG. 2 or the point cloud map-based route planning method described in the embodiment corresponding to FIG. 3, and can also implement the point cloud map-based image boundary acquisition device or the point cloud map-based route planning device according to the embodiments of the present invention described in FIG. 7 and FIG. 8, which will not be repeated here.
  • the computer-readable storage medium may be an internal storage unit of the device according to any one of the foregoing embodiments, such as a hard disk or a memory of the device.
  • the computer-readable storage medium may also be an external storage device of the device, for example, a plug-in hard disk equipped on the device, a smart media card (SMC), a secure digital (SD) card, a flash card, etc.
  • the computer-readable storage medium may also include both an internal storage unit of the device and an external storage device.
  • the computer-readable storage medium is used to store the computer program and other programs and data required by the device.
  • the computer-readable storage medium may also be used to temporarily store data that has been or will be output.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM) or a random access memory (Random Access Memory, RAM), etc.
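The elements above describe planning a flight route according to the semantics of each image area on the point cloud map, but the publication does not disclose a concrete algorithm. The following is only a minimal sketch of one plausible reading, assuming the point cloud map has been rasterized into a grid of per-cell semantic labels; the label values and the `plan_boustrophedon_route` helper are illustrative assumptions, not from the patent:

```python
import numpy as np

# Illustrative semantic labels for a rasterized point cloud map; the patent
# does not fix a label set, so FARMLAND vs OBSTACLE here is an assumption.
FARMLAND, OBSTACLE = 1, 2

def plan_boustrophedon_route(semantic_grid, target_label, cell_size=1.0):
    """Sweep row by row over every grid cell carrying the target semantics,
    alternating sweep direction, and skip cells with other semantics."""
    rows, cols = semantic_grid.shape
    waypoints, sweep = [], 0
    for r in range(rows):
        cells = [c for c in range(cols) if semantic_grid[r, c] == target_label]
        if not cells:
            continue  # no target semantics in this row
        if sweep % 2 == 1:
            cells.reverse()  # reverse every other sweep line
        waypoints.extend((r * cell_size, c * cell_size) for c in cells)
        sweep += 1
    return waypoints

grid = np.array([[1, 1, 2],
                 [1, 1, 1],
                 [2, 1, 1]])
route = plan_boustrophedon_route(grid, FARMLAND)
```

A real planner would additionally handle turning radius, obstacle clearance, and sensor footprint; the sketch only shows how semantics can gate which cells enter the route.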

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are a point cloud map-based image boundary acquisition method and device, an aircraft, and a storage medium. The method comprises: obtaining a point cloud map containing semantics (S201); and determining, according to the semantics on the point cloud map, the image areas of different semantics on the point cloud map (S202). The method enables automatic segmentation of image areas, meets the demand for automatic and intelligent classification of image areas, and improves the accuracy of image segmentation.
PCT/CN2018/117038 2018-11-22 2018-11-22 Image boundary acquisition method and device based on point cloud map, and aircraft WO2020103110A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/117038 WO2020103110A1 (fr) 2018-11-22 2018-11-22 Image boundary acquisition method and device based on point cloud map, and aircraft
CN201880038404.6A CN110770791A (zh) 2018-11-22 2018-11-22 Image boundary acquisition method and device based on point cloud map, and aircraft

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/117038 WO2020103110A1 (fr) 2018-11-22 2018-11-22 Image boundary acquisition method and device based on point cloud map, and aircraft

Publications (1)

Publication Number Publication Date
WO2020103110A1 true WO2020103110A1 (fr) 2020-05-28

Family

ID=69328789

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/117038 WO2020103110A1 (fr) 2018-11-22 2018-11-22 Image boundary acquisition method and device based on point cloud map, and aircraft

Country Status (2)

Country Link
CN (1) CN110770791A (fr)
WO (1) WO2020103110A1 (fr)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258474A (zh) * 2020-10-22 2021-01-22 深圳集智数字科技有限公司 Wall surface anomaly detection method and device
CN112396699A (zh) * 2020-11-30 2021-02-23 常州市星图测绘科技有限公司 Method for automatically delineating land parcels based on UAV images
CN112991487A (zh) * 2021-03-11 2021-06-18 中国兵器装备集团自动化研究所有限公司 Multi-threaded system for real-time construction of orthophoto semantic maps
CN113552879A (zh) * 2021-06-30 2021-10-26 北京百度网讯科技有限公司 Control method and apparatus for self-moving device, electronic device, and storage medium
CN114089787A (zh) * 2021-09-29 2022-02-25 航天时代飞鸿技术有限公司 Ground three-dimensional semantic map based on multi-aircraft cooperative flight and construction method thereof
CN114298581A (zh) * 2021-12-30 2022-04-08 广州极飞科技股份有限公司 Quality assessment model generation method, quality assessment method and apparatus, electronic device, and readable storage medium
CN115330985A (zh) * 2022-07-25 2022-11-11 埃洛克航空科技(北京)有限公司 Data processing method and apparatus for three-dimensional model optimization
CN115661664A (zh) * 2022-12-08 2023-01-31 东莞先知大数据有限公司 Boundary occlusion detection and compensation method, electronic device, and storage medium
CN116051681A (zh) * 2023-03-02 2023-05-02 深圳市光速时代科技有限公司 Processing method and system for generating image data based on smartwatch

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111310765A (zh) * 2020-02-14 2020-06-19 北京经纬恒润科技有限公司 Laser point cloud semantic segmentation method and device
CN113970752A (zh) * 2020-07-22 2022-01-25 商汤集团有限公司 Target detection method and apparatus, electronic device, and storage medium
CN114743062A (zh) * 2020-12-24 2022-07-12 广东博智林机器人有限公司 Building feature recognition method and device
CN116310189B (zh) * 2023-05-22 2023-09-01 浙江大华技术股份有限公司 Map model construction method and terminal

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036506A (zh) * 2014-06-09 2014-09-10 北京拓维思科技有限公司 Terrestrial laser point cloud registration method combining patch features and GPS positions
CN107063258A (zh) * 2017-03-07 2017-08-18 重庆邮电大学 Indoor navigation method for a mobile robot based on semantic information
CN108415032A (zh) * 2018-03-05 2018-08-17 中山大学 Point cloud semantic map construction method based on deep learning and LiDAR

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016033797A1 (fr) * 2014-09-05 2016-03-10 SZ DJI Technology Co., Ltd. Multi-sensor environmental mapping
CN107933921B (zh) * 2017-10-30 2020-11-17 广州极飞科技有限公司 Aircraft and spraying route generation and execution method, apparatus, and control terminal therefor
CN108564874B (zh) * 2018-05-07 2021-04-30 腾讯大地通途(北京)科技有限公司 Ground marking extraction method, model training method, device, and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036506A (zh) * 2014-06-09 2014-09-10 北京拓维思科技有限公司 Terrestrial laser point cloud registration method combining patch features and GPS positions
CN107063258A (zh) * 2017-03-07 2017-08-18 重庆邮电大学 Indoor navigation method for a mobile robot based on semantic information
CN108415032A (zh) * 2018-03-05 2018-08-17 中山大学 Point cloud semantic map construction method based on deep learning and LiDAR

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PU, SHI ET AL.: "Registration of Terrestrial Laser Point Clouds by Fusing Semantic Features and GPS Positions", ACTA GEODAETICA ET CARTOGRAPHICA SINICA, vol. 43, no. 5, 31 May 2014 (2014-05-31), pages 545 - 550, XP055709781, ISSN: 1001-1595 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258474A (zh) * 2020-10-22 2021-01-22 深圳集智数字科技有限公司 Wall surface anomaly detection method and device
CN112396699A (zh) * 2020-11-30 2021-02-23 常州市星图测绘科技有限公司 Method for automatically delineating land parcels based on UAV images
CN112991487B (zh) * 2021-03-11 2023-10-17 中国兵器装备集团自动化研究所有限公司 Multi-threaded system for real-time construction of orthophoto semantic maps
CN112991487A (zh) * 2021-03-11 2021-06-18 中国兵器装备集团自动化研究所有限公司 Multi-threaded system for real-time construction of orthophoto semantic maps
CN113552879A (zh) * 2021-06-30 2021-10-26 北京百度网讯科技有限公司 Control method and apparatus for self-moving device, electronic device, and storage medium
CN113552879B (zh) * 2021-06-30 2024-06-07 北京百度网讯科技有限公司 Control method and apparatus for self-moving device, electronic device, and storage medium
CN114089787A (zh) * 2021-09-29 2022-02-25 航天时代飞鸿技术有限公司 Ground three-dimensional semantic map based on multi-aircraft cooperative flight and construction method thereof
CN114298581A (zh) * 2021-12-30 2022-04-08 广州极飞科技股份有限公司 Quality assessment model generation method, quality assessment method and apparatus, electronic device, and readable storage medium
CN115330985A (zh) * 2022-07-25 2022-11-11 埃洛克航空科技(北京)有限公司 Data processing method and apparatus for three-dimensional model optimization
CN115330985B (zh) * 2022-07-25 2023-09-08 埃洛克航空科技(北京)有限公司 Data processing method and apparatus for three-dimensional model optimization
CN115661664B (zh) * 2022-12-08 2023-04-07 东莞先知大数据有限公司 Boundary occlusion detection and compensation method, electronic device, and storage medium
CN115661664A (zh) * 2022-12-08 2023-01-31 东莞先知大数据有限公司 Boundary occlusion detection and compensation method, electronic device, and storage medium
CN116051681A (zh) * 2023-03-02 2023-05-02 深圳市光速时代科技有限公司 Processing method and system for generating image data based on smartwatch

Also Published As

Publication number Publication date
CN110770791A (zh) 2020-02-07

Similar Documents

Publication Publication Date Title
WO2020103110A1 (fr) Image boundary acquisition method and device based on point cloud map, and aircraft
WO2020103108A1 (fr) Semantic generation method and device, drone, and storage medium
WO2020103109A1 (fr) Map generation method and device, drone, and storage medium
US20210390329A1 (en) Image processing method, device, movable platform, unmanned aerial vehicle, and storage medium
RU2768997C1 (ru) Способ, устройство и оборудование для распознавания препятствий или земли и управления полетом, и носитель данных
EP3805981A1 (fr) Procédé et appareil pour planifier une opération dans une zone cible, support de stockage, et processeur
CN109324337B (zh) 无人飞行器的航线生成及定位方法、装置及无人飞行器
US20200334551A1 (en) Machine learning based target localization for autonomous unmanned vehicles
EP3770810A1 (fr) Procédé et appareil d'acquisition d'une limite d'une zone à exploiter, et procédé de planification d'itinéraire d'exploitation
CN113791641A (zh) 一种基于飞行器的设施检测方法及控制设备
CN112596071A (zh) 无人机自主定位方法、装置及无人机
CN105526916A (zh) 动态图像遮蔽系统和方法
CN112379681A (zh) 无人机避障飞行方法、装置及无人机
CN112378397A (zh) 无人机跟踪目标的方法、装置及无人机
US20230360234A1 (en) Detection of environmental changes to delivery zone
WO2020208641A1 (fr) Classification et enregistrement d'images à motifs récurrents
CN111831010A (zh) 一种基于数字空间切片的无人机避障飞行方法
CN114077249B (zh) 一种作业方法、作业设备、装置、存储介质
CN112380933A (zh) 无人机识别目标的方法、装置及无人机
CN112542800A (zh) 一种输电线路故障识别的方法及其系统
CN115797397B (zh) 一种机器人全天候自主跟随目标人员的方法及系统
CN111339953A (zh) 一种基于聚类分析的薇甘菊监测方法
CN116739739A (zh) 一种贷款额度评估方法、装置、电子设备及存储介质
CN113095109A (zh) 一种农作物叶面识别模型训练方法、识别方法及装置
Prystavka et al. Information technology of realtime optical navigation based on photorealistic orthophoto plan

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18941041

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18941041

Country of ref document: EP

Kind code of ref document: A1