CN113935950A - Material delivery operation method and device, unmanned device, and storage medium


Info

Publication number: CN113935950A
Application number: CN202111062200.6A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: target, area, determining, image data, remote sensing
Inventor: 郑棣元
Original and current assignee: Guangzhou Xaircraft Technology Co Ltd
Legal status: Pending (assumed by Google, not a legal conclusion)

Classifications

    • G06T 7/0004: Image analysis; inspection of images (e.g. flaw detection); industrial image inspection
    • G06N 3/04: Neural networks; architecture, e.g. interconnection topology
    • G06N 3/08: Neural networks; learning methods
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 2207/10032: Image acquisition modality; satellite or aerial image; remote sensing
    • G06T 2207/20081: Special algorithmic details; training; learning

Abstract

Embodiments of the present application disclose a material delivery operation method and device, an unmanned device, and a storage medium. In the provided technical scheme, remote sensing image data of a work area are acquired and the delivery targets contained in them are determined; a target space is determined for each delivery target, and each target's volume information is derived from its target space; the material delivery amount for each delivery target is then determined from the volume information, and the unmanned device is controlled to deliver that amount of material to the target. These technical means solve the prior-art problem of poor material delivery results and improve the quality of the delivery operation.

Description

Material delivery operation method and device, unmanned device, and storage medium
Technical Field
Embodiments of the present application relate to the technical field of unmanned devices, and in particular to a material delivery operation method and device, an unmanned device, and a storage medium.
Background
Because unmanned devices can fly autonomously at high altitude, they are widely used for aerial material delivery, for example pesticide spraying and pollination in agriculture, fire-extinguishing agent delivery in firefighting, and relief supply delivery in rescue work.
In existing aerial delivery operations, the unmanned device flies to a preset height and delivers a fixed, preset amount of material to each target. This approach suits only delivery targets of uniform height, uniform volume, and even distribution; for targets whose height, volume, or distribution is non-uniform, the delivery result is poor and the operation quality is low.
Disclosure of Invention
Embodiments of the present application provide a material delivery operation method and device, an unmanned device, and a storage medium, which solve the technical problem of poor material delivery in the prior art and improve the quality of the material delivery operation.
In a first aspect, an embodiment of the present application provides a material delivery operation method, including:
acquiring remote sensing image data of a work area, and determining the delivery targets contained in the remote sensing image data;
determining a target space corresponding to each delivery target, and determining the volume information of each delivery target based on its target space;
and determining the material delivery amount corresponding to each delivery target according to the volume information, and controlling the unmanned device to deliver that amount of material to the target.
In a second aspect, an embodiment of the present application provides a material delivery operation device, including:
a delivery target identification module, configured to acquire remote sensing image data of a work area and determine the delivery targets contained in the remote sensing image data;
a volume determination module, configured to determine a target space corresponding to each delivery target and to determine the volume information of each delivery target based on its target space;
and a material delivery module, configured to determine the material delivery amount corresponding to each delivery target according to the volume information and to control the unmanned device to deliver that amount of material to the target.
In a third aspect, an embodiment of the present application provides an unmanned device, including:
one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the material delivery operation method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the material delivery operation method of the first aspect.
In the present application, remote sensing image data of the work area are acquired and the delivery targets contained in them are determined; a target space is determined for each delivery target, and each target's volume information is derived from its target space; the material delivery amount for each target is then determined from the volume information, and the unmanned device is controlled to deliver that amount of material to the target. These technical means markedly improve the efficiency and quality of the material delivery operation.
Drawings
Fig. 1 is a flowchart of a material delivery operation method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of remote sensing image data of a work area according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a delivery target according to an embodiment of the present application;
Fig. 4 is a schematic diagram of elevation information of a work area according to an embodiment of the present application;
Fig. 5 is a first schematic diagram of a target space according to an embodiment of the present application;
Fig. 6 is a second schematic diagram of a target space according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a flight path of an unmanned device according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a material delivery operation device according to a second embodiment of the present application;
Fig. 9 is a schematic structural diagram of an unmanned device according to a third embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, specific embodiments of the present application are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here merely illustrate the application and do not limit it. For convenience of description, only the portions relevant to the present application, rather than all of them, are shown in the drawings. Before the exemplary embodiments are discussed in more detail, note that some of them are described as processes or methods depicted as flowcharts. Although a flowchart describes operations (or steps) as sequential, many of the operations can be performed in parallel or concurrently, and the order of the operations may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not shown in the figure; a process may correspond to a method, a function, a procedure, a subroutine, and the like.
The present application provides a material delivery operation method that acquires remote sensing image data of a work area and determines the delivery targets contained in them; determines a target space for each delivery target and derives each target's volume information from its target space; and determines the material delivery amount for each target from the volume information, controlling the unmanned device to deliver that amount of material to the target. At present, an unmanned device performing aerial material delivery mostly flies to a preset height and delivers a fixed, preset amount of material. That approach suits delivery targets of uniform height, uniform volume, and even distribution; for targets of non-uniform height, non-uniform volume, or uneven distribution, the delivery result is poor and the operation quality is low. The material delivery operation method provided by the embodiments of the present application is proposed to solve this technical problem of poor material delivery in the prior art.
Embodiment one:
Fig. 1 is a flowchart of a material delivery operation method according to an embodiment of the present application. The method may be executed by an unmanned device; the unmanned device may be implemented in software and/or hardware, and may consist of a single physical entity or of two or more physical entities.
The following description takes the unmanned device as the body that executes the material delivery operation method. Referring to Fig. 1, the method specifically includes:
S110: acquire remote sensing image data of the work area, and determine the delivery targets contained in the remote sensing image data.
Here, the work area is the area in which the material delivery operation is performed by the unmanned device, and it contains at least one delivery target, i.e. a target to which the unmanned device must deliver material. Unmanned devices include unmanned aerial vehicles, unmanned ships, and other devices that can move autonomously along a preset route, for example a drone spraying pesticide on, or pollinating, plants in farmland or mountain forests; a device delivering fire-extinguishing agent to a fire point in a building or suburb; or a device delivering relief supplies to people in a disaster area. An existing unmanned device delivers the same fixed amount of material to every target regardless of its volume, so some targets receive too little material and others too much, and the delivery result is poor. This embodiment therefore delivers to each target an amount of material corresponding to the target's volume, preventing the under- and over-delivery caused by fixed-quantity delivery and improving the delivery result. The embodiment is described using the example of a drone spraying pesticide on fruit trees in a mountain forest: the forest is the work area, the fruit trees are the delivery targets, and spraying pesticide on a tree from the air is delivering material to a delivery target through an unmanned device. Because the terrain is rugged and the trees grow differently, the trees in the forest are non-uniform in height and volume and unevenly distributed. With the existing fixed-quantity aerial delivery, some trees would be over-sprayed and others under-sprayed; over-spraying wastes pesticide and can harden the soil and reduce its fertility, while under-spraying gives no pesticidal effect. The method of this embodiment determines each tree's volume and controls the drone to spray the tree with the amount of pesticide corresponding to that volume.
The remote sensing image data of the work area are obtained by processing pictures or video collected in advance by a surveying and mapping drone flying over the work area, and may be orthoimage data or spectral image data. Orthoimage data are taken as the example below: an orthoimage is a remote sensing image with orthographic projection properties, obtained by mosaicking, through the collinearity equations, the original images that the mapping drone's monocular camera captured vertically with 60% overlap. Further, point cloud data of the work area are reconstructed from the remote sensing image data, and a digital surface model of the work area is generated from the point cloud data and the remote sensing image data. The remote sensing image data, the point cloud data, and the digital surface model all contain digital information about the delivery targets, so the volume of a delivery target can be determined from any of them.
Before a delivery target's volume can be determined, however, the delivery targets in the work area must themselves be determined. In this embodiment, identifying the delivery targets in the work area through a neural network model can be implemented in two ways:
In the first implementation, the delivery targets contained in the remote sensing image data are identified by a first preset network model, which outputs every delivery target in the work area. The first preset network model is obtained by training a neural network model in advance on sample data annotated with first bounding boxes, each first bounding box marking a delivery target. The remote sensing image data of the work area are input into the first preset network model, which outputs the remote sensing image data annotated with first bounding boxes. Each first bounding box marks one delivery target, and the region it encloses is taken as the target region of that delivery target, i.e. the image inside the first bounding box is the image data of the top view of the target region. The first bounding box may be a circle, a square, etc., chosen to fit the top view of the delivery target. Taking a circular first bounding box as the example, Fig. 2 is a schematic diagram of remote sensing image data of a work area provided by an embodiment of the present application. As shown in Fig. 2, the delivery targets 12 in the remote sensing image data of the work area 11 are circled, and the image inside each circle is the image data of the target region 13 corresponding to the delivery target 12.
In the second implementation, point cloud data of the work area are reconstructed from the remote sensing image data, and the delivery targets contained in the point cloud data are identified by a second preset network model, which outputs every delivery target in the work area. The second preset network model is obtained by training a neural network model in advance on sample data annotated with second bounding boxes, each second bounding box marking a delivery target. The point cloud data of the work area are input into the second preset network model, which outputs the point cloud data annotated with second bounding boxes. Each second bounding box marks one delivery target, and the space it encloses is taken as the target space of that delivery target, i.e. the point cloud inside the second bounding box is the three-dimensional point cloud of the delivery target. The second bounding box may be a cylinder, a cuboid, etc., chosen to fit the shape of the delivery target.
Besides identification by a neural network model, the delivery targets of the work area may also be determined from a user's selection operation on the control device. In this embodiment, determining the delivery targets from the user's selection likewise has two implementations:
In the first implementation, a first selection operation is received, and each delivery target in the remote sensing image data is determined from it. The first selection operation is an operation input by the user on the control device to mark delivery targets in the remote sensing image data; it may be a click operation or a box selection operation. A third bounding box is generated for each delivery target from the user's first selection operation, and the third bounding boxes mark the delivery targets in the remote sensing image data. If the first selection operation is a click, the clicked pixel coordinate in the remote sensing image data is determined from the touch position of the click, and a preset third bounding box is drawn around that pixel coordinate. If the first selection operation is a box selection, the third bounding box is generated from the touch position coordinates of the box selection and marked in the remote sensing image data. In either case, the region enclosed by the third bounding box is the target region of the delivery target, i.e. the image inside the third bounding box is the image data of the top view of the target region.
Further, to mark delivery targets quickly, a worker usually circles a target only roughly with the third bounding box, so the box does not fit the target's top view well, and computing the target's volume from such a box would hurt accuracy. The extent of the third bounding box can therefore be adjusted according to the shape of the delivery target inside it. Adjusting the third bounding box specifically includes S1101-S1102:
s1101, identifying the target to be thrown in each target area through a third preset network model, and determining the target type and the growth stage of the target to be thrown in the target area.
Illustratively, the third preset network model is obtained by training the neural network model in advance through sample data marked with target type information and growth stage information. And inputting the image data corresponding to each target area into a third preset network model to obtain the target type and the growth stage of the target to be thrown in the target area output by the third preset network model.
And S1102, adjusting the bounding box corresponding to the target area according to the target type and the growth stage of the target to be thrown.
Illustratively, the size and the shape of the bounding box corresponding to the target placement target in the target area are determined according to the size and the shape of the bounding box corresponding to each growth stage of various pre-configured placement targets. And adjusting the size and the shape of a third surrounding frame of the target area according to the size and the shape of the surrounding frame. For example, the top views of different types of trees have different shapes, and the growing stages have different shapes, so that the surrounding frame more fitting the shape of the tree is matched according to the type and the growing stage of the tree in the embodiment, and the subsequent volume calculation precision is improved. In addition, the surrounding frames with relative rules can be matched according to types and growth stages, although the fit degree is not high based on the surrounding frame matched with the outer contour of the tree, the subsequent volume calculation is facilitated, and the calculation efficiency is higher.
In the second implementation, a second selection operation is received, and each delivery target in the point cloud data is determined from it. The second selection operation is an operation input by the user on the control device to mark delivery targets in the point cloud data; it may be a click operation or a box selection operation. If the second selection operation is a click, the clicked point cloud coordinate is determined from the touch position of the click, and a preset fourth bounding box is drawn around that coordinate. If the second selection operation is a box selection, the fourth bounding box is generated from the touch position coordinates of the box selection and marked in the point cloud data of the work area. In either case, the space enclosed by the fourth bounding box is the target space of the delivery target, i.e. the point cloud inside the fourth bounding box is the three-dimensional point cloud of the delivery target.
Further, the minimum bounding box of each delivery target in the point cloud data is determined, and the target is marked in the point cloud data by its minimum bounding box. If the manually drawn box is too large, the accuracy of the subsequent volume calculation suffers. Therefore, after the fourth bounding box of a delivery target has been determined from the user's second selection operation, it is shrunk, according to the distribution of the target's three-dimensional point cloud inside it, to the minimum bounding box that just encloses the point cloud; computing the target's volume from this minimum bounding box improves accuracy. However, a minimum bounding box that hugs the target's point cloud is usually irregular, and the volume of an irregular box is complex to compute, which lowers efficiency. To improve efficiency, a regular circumscribed bounding box of the minimum bounding box is determined from the minimum bounding box's shape. Although the circumscribed box fits the target's point cloud less closely than the minimum bounding box, its regular shape makes the volume calculation simple and fast.
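As a sketch only, assuming the target's points have already been cropped out by the user's fourth bounding box, the axis-aligned min/max box below is one simple form of the regular circumscribed bounding box described above (a tighter oriented box would need extra work, e.g. PCA):

```python
import numpy as np

def regular_circumscribed_box(points: np.ndarray):
    """points: (N, 3) array of the x, y, z coordinates of one delivery
    target's three-dimensional point cloud. Shrinking to the per-axis
    min/max extents yields a regular (cuboid) box that just encloses
    the cloud, whose volume is trivial to compute."""
    lower = points.min(axis=0)              # one corner of the cuboid
    upper = points.max(axis=0)              # the opposite corner
    volume = float(np.prod(upper - lower))  # width * depth * height
    return lower, upper, volume
```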
It can be understood that the bounding box in the remote sensing image data is a two-dimensional bounding box, and the bounding box in the point cloud data is a three-dimensional bounding box.
S120: determine the target space corresponding to each delivery target, and determine the volume information of each delivery target based on its target space.
Because the exact shape of most delivery targets is hard to determine directly, and computing a target's exact volume is complex, this embodiment identifies each delivery target in the work area's remote sensing image data and determines a target space for it; the volume of the target space stands in for the volume of the delivery target. The target space has a fixed shape, so its volume is simple to compute. In step S110, the delivery targets were marked in the remote sensing image data and in the point cloud data by their corresponding bounding boxes. Since point cloud data are three-dimensional spatial data, the space enclosed by a bounding box in the point cloud data can be used directly as the target space of the delivery target; for example, the target space of each delivery target is generated from its minimum bounding box, the enclosed space being the target space. The remote sensing image data, however, are two-dimensional planar data: after the region enclosed by a bounding box in the remote sensing image data has been taken as the target region of a delivery target, the target space is formed by extending the target region along the height dimension using its elevation information. Determining the target space from the target region of a delivery target specifically includes steps S1201-S1202:
s1201, acquiring a digital earth surface model corresponding to the working area, and determining elevation information corresponding to each target area based on the digital earth surface model.
The Digital Surface Model (DSM) is a ground elevation model including the heights of surface buildings, bridges, trees, and the like. For example, when the orthophoto data of the working area is generated according to the original image acquired by the surveying and mapping unmanned aerial vehicle, the geographic position coordinates of each pixel point in the orthophoto data under the world coordinate system need to be determined according to the position coordinates of the surveying and mapping unmanned aerial vehicle under the world coordinate system when the surveying and mapping unmanned aerial vehicle flies over the working area to acquire the original image and the internal reference and the external reference of the monocular camera. It can be understood that each pixel point in the ortho-image data corresponds to a location point in the real world, and the geographic location coordinate corresponding to the pixel point is the geographic location coordinate of the location point in the world coordinate system. The ortho-image data of the working area comprises the mapping relation between the pixel coordinates of the pixel points and the geographic position coordinates of the corresponding position points, namely, the position points in the working area can be uniquely determined by a certain pixel point in the ortho-image data of the working area. It should be noted that the geographic position coordinates may be two-dimensional coordinates that do not include height information in a world coordinate system, that is, X-axis coordinates and Y-axis coordinates, or longitude and latitude coordinates.
Further, processing the ortho-image data of the working area to obtain a digital earth surface model corresponding to the working area, and after the digital earth surface model is generated, each pixel point in the ortho-image data corresponds to one elevation information and one geographic position coordinate. The elevation information refers to the elevation height of the earth surface at the position point corresponding to the pixel point, and the digital earth surface model of the working area comprises the elevation information at each position point in the working area. For example, fig. 3 is a schematic diagram of a delivery target provided in an embodiment of the present application. As shown in fig. 3, each position point of the work area can be seen as a two-dimensional coordinate point orthographically projected onto a horizontal plane formed by the X axis and the Y axis, the position point a and the position point B are position points in the work area, and the two-dimensional coordinates (X1, Y1) and (X2, Y2) of the position point a and the position point B in the horizontal plane are geographical position information of the position point a and the position point B. The elevation information of the position point A is the corresponding elevation of the ground, and the elevation information of the position point B is the corresponding elevation of the top end of the throwing target. Because the position points included in the target area are overlapped with the partial position points of the working area, if the position points included in the target area can be determined, the elevation information corresponding to the target area can be determined according to the elevation information at the position points in the working area. For example, the step of determining the elevation information corresponding to the target area specifically includes S12011-S12012:
s12011, according to the digital earth surface model corresponding to the operation area, determining elevation information corresponding to each position point in the operation area.
Illustratively, based on the elevation information corresponding to each pixel point in the digital earth surface model, the elevation information at the position point corresponding to the pixel point is determined.
S12012, according to the image position coordinates of the target area in the remote sensing image data, determining elevation information of each position point in the target area.
For example, the target area is image data enclosed by the corresponding bounding box, and the pixel coordinates of each pixel point in the image data of the target area can be determined according to the pixel coordinates of the bounding box in the ortho image data. And determining the geographical position coordinates of the position points of the target area according to the pixel coordinates of the pixel points of the target area, and determining the elevation information of each position point in the target area based on the corresponding elevation information of each position point in the digital earth surface model of the working area.
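A minimal sketch of this lookup, assuming the DSM is stored as a raster aligned pixel-for-pixel with the orthoimage (the patent establishes the pixel-to-location-point mapping but prescribes no data format):

```python
import numpy as np

def target_region_elevations(dsm: np.ndarray, region_pixels) -> list:
    """dsm[row, col] holds the elevation of the location point that
    corresponds to orthoimage pixel (row, col); region_pixels is an
    iterable of the (row, col) pixels enclosed by the bounding box."""
    return [float(dsm[r, c]) for r, c in region_pixels]
```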
In one embodiment, the digital surface model can also be computed from sensing data collected while the drone flies over the work area, rather than from the orthoimage data. In that case the elevation at each location point of the work area is computed as in S12013-S12015:
S12013: acquire the altitude data of the unmanned device while flying above each location point of the work area, together with the collected sensing data, the sensing data comprising point cloud data or image data.
Fig. 4 is a schematic diagram of elevation information of a work area provided by an embodiment of the present application. As shown in Fig. 4, the surface elevation H at a location point of the work area is obtained by subtracting from the altitude H1 of the flying unmanned device 14 the distance H2 between the device and the surface at that point. In this embodiment, the altitude H1 can be captured by the device's positioning equipment, and the distance H2 can be obtained from image data collected by a binocular camera or from point cloud data collected by a laser sensor.
S12014: determine the distance between each location point and the unmanned device from the sensing data.
That is, the distance from the unmanned device to the surface at each location point can be obtained by processing the point cloud data or the remote sensing image data; it can also be obtained by aerial image processing of image data collected by a monocular camera.
S12015: subtract the distance data from the altitude data at each location point to obtain the elevation at that point.
Each altitude sample of the unmanned device, and each distance sample between the device and the surface, corresponds to a two-dimensional coordinate in the XY coordinate system; subtracting the distance from the altitude at the same two-dimensional coordinate yields the elevation of the location point at that coordinate. Referring to Fig. 4, the distance between the unmanned device and the surface at location point C is H2, the device's flight altitude at position coordinates (x3, y3) is H1 according to the positioning data recorded during the mapping flight, and the elevation H at point C is H1 - H2.
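The relation H = H1 - H2 at a shared (x, y) coordinate can be written directly; a sketch with illustrative units:

```python
def surface_elevation(altitude_h1_m: float, range_h2_m: float) -> float:
    """Elevation of the surface point below the device: the flight
    altitude H1 (from the positioning equipment) minus the measured
    distance H2 to the surface (binocular camera or laser sensor)."""
    return altitude_h1_m - range_h2_m

# e.g. flying at 120.0 m altitude with 95.5 m to the surface -> 24.5 m
assert surface_elevation(120.0, 95.5) == 24.5
```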
S1202: determine the target space of each delivery target based on the target region and its elevation information.
In this embodiment, because the elevation information inside a target region describes the shape of the delivery target's upper surface, a target three-dimensional model can be built from all of the region's elevations; its upper surface fits the target's upper surface, and the target space is then constructed from this model. Fig. 5 is a first schematic diagram of a target space provided by an embodiment of the present application. As shown in Fig. 5, the surface elevation at each location point is determined from the region's elevation information, and the target three-dimensional model 15 of the target region (the entire shaded part in Fig. 5) is constructed. Because the surface elevation of the delivery target differs markedly from that of the ground, the elevation drops sharply around the edge of the target region, so ground and target can be separated by this change in elevation. The elevations belonging to the delivery target are filtered out of the target three-dimensional model 15 to reconstruct the target's three-dimensional model 16 (the triangle plus two rectangles in Fig. 5). The model 16, however, still contains the ground elevation, so the minimum elevation in the target region is taken as the ground elevation, and subtracting it from the elevation at each point of the model 16 yields the relative height of the target space. The target space 13 of the delivery target (the triangle plus the first rectangle below it in Fig. 5) is then constructed from the target region and this relative height.
In one embodiment, since the upper surface of a target space built from the target three-dimensional model 15 is usually an irregular curved surface whose volume is complicated to compute, the target space can instead be built from the maximum and minimum elevations of the target region. The elevation of the delivery target's top is the maximum elevation in its target region, found by comparing the elevations of all location points in the region. The target region contains not only the delivery target but also the ground it stands on, and the ground elevation is the region's minimum elevation, found by the same comparison. The relative height of the delivery target is the difference between the maximum and minimum elevations; it is used as the height of the target space, and the area of the target region as its base, to construct a regularly shaped target space. Fig. 6 is a second schematic diagram of a target space provided by an embodiment of the present application. As shown in Fig. 6, the elevations of the region's location points are sorted in ascending order, location points D and E are found to have the minimum elevation H4 and the maximum elevation H5 respectively, and subtracting H4 from H5 gives the relative height H3 of the delivery target 12. The target space 13 is constructed from the target region and this relative height, H3 being the height of the target space.
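A sketch of this regular construction, assuming the target region's geographic base area is already known (its computation is covered below):

```python
def regular_target_space(elevations, base_area_m2: float):
    """elevations: surface elevations of all location points in one
    target region. The minimum is the ground (H4), the maximum the
    target top (H5); the relative height H3 = H5 - H4 becomes the
    height of a regular prism over the target region's base area."""
    h3 = max(elevations) - min(elevations)
    return h3, base_area_m2 * h3  # (relative height, space volume)
```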
Further, after the target space of a delivery target has been determined, the volume of the target space is computed and taken as the target's volume. The target region can be regarded as the base of its target space, so by the volume formula (base area times height) the volume can be computed from the target space's base area and height. Determining the target space's volume specifically includes S1203-S1204:
and S1203, determining the geographic area of the target area according to the geographic position coordinates of the operation area and the image position coordinates of the target area corresponding to the target space in the remote sensing image data.
For example, although the target area is image data marked by a bounding box in the remote sensing image data, in the real world, the target area may be regarded as a fence enclosing the target to be delivered, and the real area of the target area, that is, the geographic area of the target area in the world coordinate system, needs to be calculated. The volume information of the target space refers to a geographical volume under a world coordinate system.
In this embodiment, determining the geographic area of the target region includes two implementations, one is to convert the image area of the target region into the geographic area according to the imaging scale. For example, the target areas with different altitudes have different distances from the unmanned device, and the imaging ratios thereof are different, so that the imaging ratio of the image data of the target area needs to be determined according to the distance between the target area and the unmanned device, and the image area of the image data of the target area needs to be converted into the geographic area according to the imaging ratio. Another way to achieve this is to determine the geographic area of the target area based on its geographic coordinate position. The image position coordinates of the target area in the remote sensing image data refer to pixel coordinates of each pixel point of the target area in the orthoimage data of the working area. When the drop target in the orthophoto data is identified and the target area is determined, only the pixel coordinates of the target area in the orthophoto data are obtained. If the geographic area of the target area is calculated according to the geographic position coordinates of the target area, the geographic position coordinates of the target area need to be determined from the geographic position coordinates of the working area according to the pixel coordinates of the target area. For example, the ortho-image data includes mapping relationships between each pixel point and each position point in the work area, and the geographic position coordinates of the corresponding position point can be obtained according to the pixel coordinates of the pixel points in the work area. And determining the geographic position coordinates corresponding to the target area according to the pixel coordinates of the target area in the operation area, and calculating the geographic area of the target area according to the geographic position coordinates of the target area. If the target area is in a regular shape such as a square or a circle, the geographic area of the target area can be determined based on a regular calculation formula. If the target area is irregular, the target area can be divided into a plurality of sub-areas based on the principle of calculus, and the total area of the target area is obtained by calculating the areas of the sub-areas and then summing.
In this embodiment, referring to Fig. 2, the bounding box of a target region in the orthoimage data is a circle, so the target region 13 is circular. To obtain the circular region's true base area, its true radius is substituted into the formula for the area of a circle. Computing the true base area from the true radius includes steps S12031-S12033:
S12031: determine the image center coordinate and an image outer-edge coordinate of the target region from the region's image position coordinates in the remote sensing image data.
The image center coordinate is the pixel coordinate of the pixel at the center of the target region's image data, and an image outer-edge coordinate is the pixel coordinate of any pixel on the region's bounding box. When the bounding box is marked in the orthoimage data, the pixel coordinates of every pixel of the target region are obtained, and the image center and outer-edge coordinates are taken from them.
S12032: determine the geographic center coordinate and the geographic radius of the target region from the geographic position coordinates, the image center coordinate, and the image outer-edge coordinate.
The geographic center coordinate is the world-frame position coordinate of the center of the target region. The position coordinate of the location point corresponding to the image center is determined from the pixel-to-location-point correspondence of the image data and taken as the region's geographic center coordinate; likewise, the position coordinate of the location point corresponding to a bounding box pixel is taken as a geographic outer-edge coordinate. The geographic radius is the straight-line distance between the geographic outer-edge coordinate and the geographic center coordinate. Note that for a circle every line from the center to the edge is a radius, so the location point of any pixel of the bounding box can be used to compute the geographic radius.
S12033: compute the geographic area of the target region from the geographic radius.
That is, substituting the geographic radius r into the circle area formula πr² gives the true geographic area of the target region.
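A sketch of S12031-S12033 for the circular case; `pixel_to_geo` is a hypothetical name for the pixel-to-location-point mapping described above:

```python
import math

def circular_region_area(center_px, edge_px, pixel_to_geo) -> float:
    """pixel_to_geo maps an orthoimage pixel coordinate to the
    geographic (x, y) of its location point. The geographic radius
    is the straight-line distance from the geographic centre to any
    outer-edge point, and the true base area is pi * r**2."""
    cx, cy = pixel_to_geo(center_px)  # geographic centre coordinate
    ex, ey = pixel_to_geo(edge_px)    # geographic outer-edge coordinate
    r = math.hypot(ex - cx, ey - cy)  # geographic radius
    return math.pi * r ** 2
```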
S1204: determine the volume information of the delivery target corresponding to the target space from the geographic area and the elevation information of the target region.
For a regularly shaped target space, the volume follows from the usual formula. Referring to Fig. 6, if the target space in Fig. 6 is a cylinder, its volume is obtained by substituting its base area and relative height into the cylinder volume formula; as described above, the relative height was already computed from the target region's elevation information when the space was constructed.
For an irregularly shaped target space, the space is divided, in the manner of calculus, into several subspaces, and the total volume is the sum of the subspace volumes. In this embodiment, computing the volume of an irregularly shaped target space specifically includes S12041-S12044:
s12041, the target region is divided into a plurality of sub-regions according to a preset division rule, and a region area of each sub-region is determined.
Illustratively, the upper surface of the target space constructed by the target three-dimensional model is a curved surface, but the lower surface is a plane, so the lower surface, i.e. the target area, is used as the bottom surface of the target space. The target area is divided into a plurality of sub-areas by evenly distributed grid lines, and the area of the sub-areas is calculated based on the side length of each grid. The area of the sub-region for which the target region edge is less than one grid is also calculated based on the side length of the grid. Note that, the area of the sub-area refers to a geographical area under the world coordinate system.
S12042, determining the area height of each subarea according to the elevation information of each position point in the target area.
Illustratively, the location points in the sub-region are determined based on the location of the sub-region in the target region. And determining the height of the position point with the maximum relative height in the sub-area as the area height of the sub-area according to the relative height of the target space at each position point.
S12043, multiplying the area of each sub-area by the corresponding height of the area, and determining the area volume of each sub-area.
S12044, adding the area volumes of all the sub-areas to determine the volume information of the target.
For example, the area of each sub-area is multiplied by the corresponding area height, and then the area volumes of all the sub-areas are added to estimate the volume information of the target space, and the volume information of the target space is used as the volume information of the delivery target.
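A sketch of S12041-S12044, assuming the target space's relative height has been sampled once per grid cell over the target region (zero outside the target):

```python
import numpy as np

def irregular_space_volume(cell_heights: np.ndarray,
                           grid_side_m: float) -> float:
    """cell_heights: 2-D grid, one entry per sub-region, holding the
    largest relative height of the location points in that sub-region.
    Every cell, including partial edge cells, is priced at the full
    grid cell area; summing area * height estimates the volume."""
    cell_area = grid_side_m ** 2
    return float(np.sum(cell_heights) * cell_area)
```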
S130: determine the material delivery amount corresponding to each delivery target according to its volume information, and control the unmanned device to deliver that amount of material to the target.
Delivery targets of different volumes need different amounts of material, so this embodiment establishes a volume-material mapping and uses it to determine the delivery amount corresponding to a given volume. The amounts needed by targets of different volumes can be tested in advance to build the mapping between a target's volume and the required amount of material; after a target's volume has been computed, the preset mapping is queried to obtain the required amount. Taking fruit trees as the example: if a 15-cubic-metre tree needs 100 millilitres of pesticide, the mapping is 6.667 millilitres of pesticide per cubic metre of tree, so once the tree to be sprayed is found to have a volume of 10 cubic metres, it should receive 66.67 millilitres of pesticide.
In one embodiment, the delivery amount of subsequent targets is calibrated using the first target of each job as an anchor point: the volume-material mapping for the job is determined from the user-selected amount for the first target and that target's volume, after which the unmanned device computes every subsequent target's amount from this mapping and the target's volume. Again taking fruit trees as the example, if the user sprays 100 millilitres of pesticide on the first tree and that tree's volume is 10 cubic metres, the job's mapping is 10 millilitres of pesticide per cubic metre of tree, so a second tree of 15 cubic metres receives 150 millilitres. The delivery amount can also be determined by big-data analysis: for fruit trees, the volume-material mappings for different regions, seasons, growth conditions, and species can be derived by analysing data such as the regional, environmental, growth, and biological characteristics of the various trees and the corresponding dosages.
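A sketch of the anchor-point calibration, using the figures from the fruit tree example above:

```python
def job_dose_rate(first_dose_ml: float, first_volume_m3: float) -> float:
    """Volume-material mapping for the job, calibrated from the first
    delivery target: millilitres of material per cubic metre."""
    return first_dose_ml / first_volume_m3

def delivery_amount(volume_m3: float, rate_ml_per_m3: float) -> float:
    """Material delivery amount for one target, from its volume."""
    return volume_m3 * rate_ml_per_m3

# From the text: 100 ml on a 10 m^3 first tree -> 10 ml/m^3, so a
# second tree of 15 m^3 receives 150 ml.
rate = job_dose_rate(100.0, 10.0)
assert delivery_amount(15.0, rate) == 150.0
```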
Furthermore, different putting modes are provided for different putting targets, and the unmanned equipment can be put in when sailing when putting materials or can be hovered when sailing to the operation position. In this embodiment, the unmanned aerial vehicle is controlled to throw the material of the material throw amount to the throw target when sailing to the working position. Taking the target of putting as a fruit tree as an example, when spraying pesticides to the fruit tree, the pesticides need to be sprayed to the core of the fruit tree to ensure the nutrient of the fruit tree is sufficient. Therefore, the unmanned equipment can be controlled to hover above the tree core of the fruit tree and spray pesticide to the fruit tree, and the operation position is the position where the unmanned equipment hovers and sprays pesticide. Illustratively, the step of the unmanned equipment performing the material putting operation specifically includes steps S1301-S1304:
and S1301, determining a target area center corresponding to each target space, and obtaining an operation position corresponding to each delivery target according to the geographic position coordinates of the position point corresponding to the target area center.
For example, the center of the target area refers to a central pixel point of the image data of the target area. For example, since the image data of the target area is obtained by identifying the top view of the fruit tree, and the position of the tree center of the fruit tree is close to the center of the target area, the geographic position coordinates of the corresponding position point of the central pixel point of the target area are determined as the geographic position coordinates of the working position, so that the tree center of the fruit tree is ensured to be within the spraying range of the unmanned equipment.
S1302, obtaining the elevation information of each position point in the target space, and taking the maximum elevation value as the elevation information of the delivery target.
Illustratively, the geographic position coordinates of the position points are two-dimensional and carry no altitude information, so the flying height of the unmanned device must be determined from the elevation information of the position points in the target area, that is, from the ground-surface elevation. The maximum elevation in the target space can be taken as the elevation of the top of the corresponding delivery target. In particular, when the unmanned device sprays pesticide, wind easily deflects the spray direction, causing the pesticide to land on the ground or on the outer leaves and degrading the spraying quality. According to the treetop elevation, the unmanned device is therefore controlled to spray the tree from as close to the treetop as possible, reducing the degree to which environmental factors deflect the spray, as would happen if the device were far above the tree.
S1303, determining the flying height corresponding to each operation position according to the elevation information of each delivery target.
Illustratively, the treetop elevation of each fruit tree is combined with the corresponding operation position to obtain the three-dimensional geographic coordinates at which the unmanned device sprays that tree.
S1304, controlling the unmanned device to fly to each operation position at the corresponding flying height and deliver the material of the material delivery amount to the corresponding delivery target.
Illustratively, according to the three-dimensional geographic coordinates of each fruit tree in the spraying operation, the unmanned device is controlled to spray the corresponding dose of pesticide onto each tree once it has flown to that position. Because the treetop elevation is the highest point of the tree, controlling the unmanned device to fly at treetop height also prevents the trees from obstructing its flight.
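Illustratively, steps S1301-S1304 can be put together as the following sketch; the per-target input fields and the `fly_to`/`spray` calls are hypothetical placeholders for the unmanned device's control interface:

```python
# A sketch of the full per-target loop of S1301-S1304, under the assumption
# that each target carries its operation position, the elevations of the
# position points in its target space, and its computed dose.
def plan_and_spray(targets, drone, clearance_m=1.0):
    for t in targets:
        lon, lat = t["operation_position"]      # S1301: area-center position
        treetop_m = max(t["elevations"])        # S1302: max elevation = treetop
        flight_alt = treetop_m + clearance_m    # S1303: fly just above the treetop
        drone.fly_to(lon, lat, flight_alt)      # S1304: navigate to the position...
        drone.spray(t["dose_ml"])               # ...and deliver the computed amount
```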
In one embodiment, the unmanned device is controlled to navigate in a straight line between the determined operation positions. Illustratively, the unmanned device must hover at each target's three-dimensional geographic coordinates to deliver material during the spraying operation, so, on the principle that the straight line is the shortest distance between two points, the straight line between the operation positions of consecutively worked delivery targets can be used as the flight route. Controlling the unmanned device to navigate linearly between consecutive operation positions along this route effectively shortens the route length, improves the device's operating efficiency and reduces its battery consumption. FIG. 7 is a schematic view of a flight route of an unmanned device provided by an embodiment of the application. As shown in FIG. 7, the position coordinates of the position point corresponding to the center point of the target area are taken as the coordinates of the operation position 17, and the maximum elevation among the position points in the target area is taken as the flying height at the operation position 17. The straight line between the three-dimensional geographic coordinates of consecutively worked delivery targets is taken as the flight route 18 of the unmanned device. It should be understood that consecutively worked delivery targets are targets that are adjacent in the operation sequence, the sequence being set in advance by an operator according to parameters such as the distribution of the delivery targets within the operation area.
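Illustratively, the straight-line routing can be sketched as below, assuming the operation positions have already been converted to a local metric frame (x, y, z in meters) and ordered into the preset operation sequence:

```python
import math

# A sketch of the flight route of FIG. 7: consecutive operation positions are
# joined by straight segments, and the total route length follows directly.
def route_length(waypoints):
    """Length of the polyline joining consecutive operation positions."""
    return sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))

path = [(0.0, 0.0, 12.0), (8.0, 6.0, 14.0), (20.0, 6.0, 13.0)]
print(route_length(path))  # straight segments between adjacent targets
```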
In conclusion, the remote sensing image data of the operation area is acquired and the delivery targets contained in it are determined; a target space corresponding to each delivery target is determined, and the volume information of each delivery target is determined from its target space; the material delivery amount corresponding to each delivery target is then determined from its volume information, and the unmanned device is controlled to deliver that amount of material to the target. With these technical means, the delivery targets in the remote sensing image data of the operation area are identified, and each one is marked out by a target space in that data. The volume of the target space characterizes the volume of the corresponding delivery target, so the unmanned device delivers to each target the amount of material matching its volume. This satisfies each target's material requirement, avoids over- or under-delivery, and guarantees the delivery effect and the quality of the delivery operation. In addition, the operation position of the unmanned device is determined from the treetop elevation of the delivery target and the center of the target area, so that the device delivers material close to each target, avoiding the influence of environmental factors on the delivery effect.
Example two:
On the basis of the above embodiments, fig. 8 is a schematic structural diagram of a material delivery operation device according to a second embodiment of the present application. Referring to fig. 8, the material delivery operation device provided in this embodiment specifically includes: a delivery target identification module 21, a volume determination module 22 and a material delivery module 23.
The delivery target identification module 21 is configured to acquire remote sensing image data of an operation area and determine the delivery targets contained in the remote sensing image data;
a volume determination module 22 configured to determine a target space corresponding to each delivery target, and determine volume information of the corresponding delivery target based on each target space;
the material putting module 23 is configured to determine a material putting amount corresponding to the putting target according to the volume information, and control the unmanned device to put the material of the material putting amount corresponding to the putting target.
On the basis of the above embodiment, the delivery target identification module includes: a first model identification submodule configured to identify the delivery targets contained in the remote sensing image data through a first preset network model, so as to output each delivery target in the operation area; or a second model identification submodule configured to reconstruct point cloud data of the operation area based on the remote sensing image data, and identify the delivery targets contained in the point cloud data through a second preset network model, so as to output each delivery target in the operation area.
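Illustratively, the model-based identification path might be sketched as follows; the `preset_model` callable, its output format and the score threshold are assumptions, since no concrete network architecture is fixed here:

```python
# A hypothetical sketch of the first identification path: a preset network
# model maps the remote sensing image to scored per-target masks, and the
# confident ones are kept as delivery targets.
def identify_targets(image, preset_model, score_threshold=0.5):
    detections = preset_model(image)  # assumed to yield (mask, score) pairs
    return [mask for mask, score in detections if score >= score_threshold]
```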
On the basis of the above embodiment, the delivery target identification module includes: a first selection submodule configured to receive a first selection operation and determine each delivery target in the remote sensing image data according to the first selection operation; or a second selection submodule configured to receive a second selection operation and determine each delivery target in the point cloud data according to the second selection operation.
On the basis of the above embodiment, the first selection submodule includes: a first framing unit configured to generate a bounding box corresponding to each delivery target based on a first selection operation performed by a user on the control device, and mark the corresponding delivery target in the remote sensing image data with the bounding box. Correspondingly, the volume determination module comprises a target space construction submodule, which includes: a target area determination unit configured to generate a target area corresponding to each delivery target from its bounding box; an elevation information determination unit configured to acquire a digital surface model corresponding to the operation area and determine the elevation information corresponding to each target area based on the digital surface model; and a first target space construction unit configured to determine the target space corresponding to each delivery target based on the target area and the corresponding elevation information.
On the basis of the above embodiment, the first selection submodule further includes: a framing adjustment unit configured to determine the target type and growth stage of the delivery target in the target area, and adjust the bounding box corresponding to the target area according to that target type and growth stage.
On the basis of the above embodiment, the elevation information determination unit includes: an elevation information matching subunit configured to determine the elevation information corresponding to each position point in the operation area according to the digital surface model corresponding to the operation area; and an elevation information determination subunit configured to determine the elevation information of each position point in the target area according to the image position coordinates of the target area in the remote sensing image data.
On the basis of the above embodiment, the second selection submodule includes: a second framing unit configured to determine the minimum bounding box corresponding to each delivery target in the point cloud data, and mark the corresponding delivery target in the point cloud data with the minimum bounding box. Correspondingly, a second target space construction unit is configured to generate the target space corresponding to each delivery target based on its minimum bounding box.
On the basis of the above embodiment, the volume determination module includes: an area determination submodule configured to determine the geographic area of the target area according to the geographic position coordinates of the operation area and the image position coordinates, in the remote sensing image data, of the target area corresponding to the target space; and a volume determination submodule configured to determine the volume information of the delivery target corresponding to the target space according to the geographic area and the elevation information of the target area.
On the basis of the above embodiment, the target area is a circular area, and the area determination submodule includes: an image coordinate determination unit configured to determine the image circle-center coordinates and the image outer-edge coordinates of the target area according to its image position coordinates in the remote sensing image data; a geographic coordinate determination unit configured to determine the geographic circle-center coordinates and geographic radius of the target area according to the geographic position coordinates, the image circle-center coordinates and the image outer-edge coordinates; and a geographic area calculation unit configured to calculate the geographic area of the target area from the geographic radius.
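Illustratively, the circular-area computation reduces to scaling the image radius into meters and applying the disc area formula; the meters-per-pixel parameter is an assumption standing in for the mapping between image and geographic coordinates:

```python
import math

# A sketch of the circular target-area computation: the image radius in
# pixels, scaled by an assumed ground sampling distance, gives the
# geographic radius, from which the geographic area follows.
def geographic_circle_area(center_px, edge_px, meters_per_pixel):
    radius_px = math.dist(center_px, edge_px)   # image radius in pixels
    radius_m = radius_px * meters_per_pixel     # geographic radius in meters
    return math.pi * radius_m ** 2              # geographic area in m^2
```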
On the basis of the above embodiment, the volume determination submodule includes: a sub-area area determination unit configured to divide the target area into a plurality of sub-areas according to a preset division rule and determine the area of each sub-area; a sub-area height determination unit configured to determine the height of each sub-area according to the elevation information of the position points in the target area; a sub-area volume determination unit configured to multiply the area of each sub-area by its corresponding height to determine the volume of each sub-area; and a volume determination unit configured to add the volumes of all the sub-areas to determine the volume information of the delivery target.
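Illustratively, the sub-area volume rule is a discrete sum of area times height; the cell representation below is one assumed concrete choice of the preset division rule:

```python
# A sketch of the sub-area volume computation: each sub-area contributes
# its area multiplied by its height, and the contributions are summed.
def target_volume(cells):
    """cells: iterable of (area_m2, height_m) pairs, one per sub-area."""
    return sum(area * height for area, height in cells)

# e.g. three 2 m^2 cells with canopy heights 3, 4 and 5 m -> 24 m^3
print(target_volume([(2.0, 3.0), (2.0, 4.0), (2.0, 5.0)]))
```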
On the basis of the above embodiment, the material delivery module includes: a delivery amount determination submodule configured to determine a volume-material mapping relationship and determine the material delivery amount corresponding to the volume information according to that mapping; and a delivery operation submodule configured to control the unmanned device to deliver the material of the material delivery amount to the delivery target after navigating to the operation position.
On the basis of the above embodiment, the delivery operation submodule includes: an operation position determination unit configured to determine the target area center corresponding to each target space and obtain the operation position corresponding to each delivery target from the geographic position coordinates of the position point corresponding to that center; a target height determination unit configured to acquire the elevation information of each position point in the target space and take the maximum elevation value as the elevation information of the delivery target; a flying height determination unit configured to determine the flying height corresponding to each operation position according to the elevation information of each delivery target; and a delivery operation unit configured to control the unmanned device to fly to each operation position at the corresponding flying height and deliver the material of the material delivery amount to the corresponding delivery target.
In summary, the device acquires the remote sensing image data of the operation area, determines the delivery targets contained in it, determines a target space and volume information for each delivery target, and controls the unmanned device to deliver the matching material delivery amount to each target, achieving the same effects as described for the method above: each target's material requirement is satisfied, over- and under-delivery are avoided, and the unmanned device delivers material close to each target.
The material delivery operation device provided in the second embodiment of the present application can be used to execute the material delivery operation method provided in the first embodiment, and has the corresponding functions and beneficial effects.
Example three:
Fig. 9 is a schematic structural diagram of an unmanned device provided in a third embodiment of the present application. Referring to fig. 9, the unmanned device includes: a processor 31, a memory 32, a communication module 33, an input device 34 and an output device 35. The number of processors 31 in the unmanned device may be one or more, and the number of memories 32 may likewise be one or more. The processor 31, memory 32, communication module 33, input device 34 and output device 35 of the unmanned device may be connected by a bus or in other ways.
The memory 32, as a computer-readable storage medium, can be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the material delivery operation method of any embodiment of the present application (for example, the delivery target identification module 21, the volume determination module 22 and the material delivery module 23 in the material delivery operation device). The memory 32 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the device, and the like. Further, the memory 32 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some examples, the memory may further include memory located remotely from the processor, and such remote memories may be connected to the device over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The communication module 33 is used for data transmission.
The processor 31 executes various functional applications and data processing of the device by running software programs, instructions and modules stored in the memory 32, so as to implement the above-mentioned material delivery operation method.
The input device 34 may be used to receive entered numeric or character information and to generate key signal inputs relating to user settings and function controls of the apparatus. The output device 35 may include a display device such as a display screen.
The unmanned device can be used to execute the material delivery operation method provided in the first embodiment, and has the corresponding functions and beneficial effects.
Example four:
Embodiments of the present application further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a material delivery operation method, the method including: acquiring remote sensing image data of an operation area, and determining the delivery targets contained in the remote sensing image data; determining a target space corresponding to each delivery target, and determining volume information of the corresponding delivery target based on each target space; and determining the material delivery amount corresponding to the delivery target according to the volume information, and controlling the unmanned device to deliver the material of the corresponding material delivery amount to the delivery target.
A storage medium is any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROM, floppy disk or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory, magnetic media (e.g., a hard disk) or optical storage; and registers or other similar types of memory elements. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the first computer system in which the program is executed, or in a different, second computer system connected to the first computer system through a network (such as the Internet). The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may thus include two or more storage media residing in different locations, for example in different computer systems connected by a network. The storage medium may store program instructions (e.g., embodied as a computer program) executable by one or more processors.
Of course, the storage medium containing computer-executable instructions provided in the embodiments of the present application is not limited to the material delivery operation method described above, and may also perform related operations in the material delivery operation method provided in any embodiment of the present application.
The material delivery operation device, storage medium and unmanned device provided in the above embodiments can execute the material delivery operation method provided in any embodiment of the present application; for technical details not described in the above embodiments, reference may be made to the material delivery operation method provided in any embodiment of the present application.
The foregoing is illustrative of the preferred embodiments of the present application and the technical principles employed. The present application is not limited to the particular embodiments described herein, and various obvious changes, adaptations and substitutions may be made by those skilled in the art without departing from its scope. Therefore, although the present application has been described in some detail through the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from its spirit; the scope of the present application is determined by the scope of the appended claims.

Claims (15)

1. A material delivery operation method, characterized by comprising the following steps:
acquiring remote sensing image data of an operation area, and determining the delivery targets contained in the remote sensing image data;
determining a target space corresponding to each delivery target, and determining volume information of the corresponding delivery target based on each target space;
and determining the material delivery amount corresponding to the delivery target according to the volume information, and controlling the unmanned device to deliver the material of the material delivery amount to the delivery target.
2. The material delivery operation method according to claim 1, wherein the determining of the delivery targets contained in the remote sensing image data comprises:
identifying the delivery targets contained in the remote sensing image data through a first preset network model, so as to output each delivery target in the operation area; or
reconstructing point cloud data of the operation area based on the remote sensing image data, and identifying the delivery targets contained in the point cloud data through a second preset network model, so as to output each delivery target in the operation area.
3. The material delivery operation method according to claim 1, wherein the determining of the delivery targets contained in the remote sensing image data comprises:
receiving a first selection operation, and determining each delivery target in the remote sensing image data according to the first selection operation; or
receiving a second selection operation, and determining each delivery target in the point cloud data according to the second selection operation.
4. The material delivery operation method according to claim 3, wherein the determining each delivery target in the remote sensing image data according to the first selection operation comprises:
generating a bounding box corresponding to each delivery target based on a first selection operation performed by a user on a control device, and marking the corresponding delivery target in the remote sensing image data with the bounding box;
wherein the determining a target space corresponding to each delivery target comprises:
generating a target area corresponding to each delivery target according to its bounding box;
acquiring a digital surface model corresponding to the operation area, and determining elevation information corresponding to each target area based on the digital surface model;
and determining a target space corresponding to each delivery target based on the target area and the corresponding elevation information.
5. The material delivery operation method according to claim 4, wherein after the marking of the corresponding delivery target in the remote sensing image data with the bounding box, the method further comprises:
determining the target type and growth stage of the delivery target in the target area, and adjusting the bounding box corresponding to the target area according to the target type and growth stage of the delivery target.
6. The material delivery operation method according to claim 4, wherein the determining elevation information corresponding to each target area based on the digital surface model comprises:
determining elevation information corresponding to each position point in the operation area according to the digital surface model corresponding to the operation area;
and determining the elevation information of each position point in the target area according to the image position coordinates of the target area in the remote sensing image data.
7. The material delivery operation method according to claim 3, wherein the determining each delivery target in the point cloud data according to the second selection operation comprises:
determining a minimum bounding box corresponding to each delivery target in the point cloud data, and marking the corresponding delivery target in the point cloud data with the minimum bounding box;
wherein the determining a target space corresponding to each delivery target comprises:
generating a target space corresponding to each delivery target based on the minimum bounding box.
8. The material delivery operation method according to claim 1, wherein the determining volume information of the corresponding delivery target based on each target space comprises:
determining the geographic area of the target area according to the geographic position coordinates of the operation area and the image position coordinates, in the remote sensing image data, of the target area corresponding to the target space;
and determining the volume information of the delivery target corresponding to the target space according to the geographic area and the elevation information of the target area.
9. The material delivery operation method according to claim 8, wherein the target area is a circular area, and the determining the geographic area of the target area according to the geographic position coordinates of the operation area and the image position coordinates of the target area in the remote sensing image data comprises:
determining the image circle-center coordinates and the image outer-edge coordinates of the target area according to the image position coordinates of the target area in the remote sensing image data;
determining the geographic circle-center coordinates and the geographic radius of the target area according to the geographic position coordinates, the image circle-center coordinates and the image outer-edge coordinates;
and calculating the geographic area of the target area from the geographic radius.
10. The material delivery operation method according to claim 8, wherein the determining, according to the geographic area and the elevation information of the target area, the volume information of the delivery target corresponding to the target space comprises:
dividing the target area into a plurality of sub-areas according to a preset division rule, and determining the area of each sub-area;
determining the height of each sub-area according to the elevation information of each position point in the target area;
multiplying the area of each sub-area by the corresponding height to determine the volume of each sub-area;
and adding the volumes of all the sub-areas to determine the volume information of the delivery target.
11. The material delivery operation method according to claim 1, wherein the determining the material delivery amount corresponding to the delivery target according to the volume information, and the controlling the unmanned device to deliver the material of the material delivery amount to the delivery target, comprise:
determining a volume-material mapping relationship, and determining the material delivery amount corresponding to the volume information according to the volume-material mapping relationship;
and controlling the unmanned device to deliver the material of the material delivery amount to the delivery target after navigating to the operation position.
12. The material delivery operation method according to claim 11, wherein the controlling the unmanned device to deliver the material of the material delivery amount to the delivery target after navigating to the operation position comprises:
determining the target area center corresponding to each target space, and obtaining the operation position corresponding to each delivery target from the geographic position coordinates of the position point corresponding to the target area center;
acquiring the elevation information of each position point in the target space, and taking the maximum elevation value as the elevation information of the delivery target;
determining the flying height corresponding to each operation position according to the elevation information of each delivery target;
and controlling the unmanned device to fly to each operation position at the corresponding flying height and deliver the material of the material delivery amount to the corresponding delivery target.
13. A material delivery operation device, characterized by comprising:
a delivery target identification module configured to acquire remote sensing image data of an operation area and determine the delivery targets contained in the remote sensing image data;
a volume determination module configured to determine a target space corresponding to each delivery target and determine volume information of the corresponding delivery target based on each target space;
and a material delivery module configured to determine the material delivery amount corresponding to the delivery target according to the volume information and control the unmanned device to deliver the material of the corresponding material delivery amount to the delivery target.
14. An unmanned device, characterized by comprising: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the material delivery operation method according to any one of claims 1-12.
15. A storage medium containing computer-executable instructions which, when executed by a computer processor, perform the material delivery operation method according to any one of claims 1-12.