CN116772803B - Unmanned aerial vehicle detection method and device - Google Patents


Info

Publication number: CN116772803B
Authority: CN (China)
Prior art keywords: picture, detected, preset, coordinates, similarity
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202311072096.8A
Other languages: Chinese (zh)
Other versions: CN116772803A
Inventors: 高文文, 任航, 郝树奇, 叶成海
Current Assignee: Shaanxi Dexin Intelligent Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Shaanxi Dexin Intelligent Technology Co., Ltd.
Application filed by Shaanxi Dexin Intelligent Technology Co., Ltd.
Priority to CN202311072096.8A
Publication of application CN116772803A, followed by grant publication CN116772803B


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01V: GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V 8/00: Prospecting or detecting by optical means
    • G01V 8/10: Detecting, e.g. by using light barriers

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Geophysics (AREA)
  • Image Analysis (AREA)

Abstract

An unmanned aerial vehicle detection method and device relate to the field of unmanned aerial vehicle detection. The method comprises the following steps: acquiring a detection instruction, wherein the detection instruction corresponds to a preset flight route; shooting along the preset flight route to obtain first pictures; acquiring, from the first pictures, a second picture that contains the object to be detected; acquiring the pixel coordinates of the object to be detected in the second picture; and obtaining actual coordinates from the shooting coordinates and the pixel coordinates, wherein the shooting coordinates are the coordinates of the place where the second picture was taken and the actual coordinates are the coordinates of the place where the object to be detected is located, thereby completing detection of the object. Implementing the technical scheme provided by the application addresses the low efficiency of confirming the position of an object to be detected.

Description

Unmanned aerial vehicle detection method and device
Technical Field
The application relates to the field of unmanned aerial vehicle detection, in particular to an unmanned aerial vehicle detection method and device.
Background
In military applications, for example, a missile launch requires knowing the position of the remains after the missile explodes on the ground, so as to analyze whether the missile's impact accuracy, or the location of its explosion, matches expectations.
Since a missile's range may be long, workers may need to travel to a region near the landing point and search for the remains after the explosion. When the region to be searched is large, and more than one piece of remains may exist, the workers must locate each piece one by one and confirm its position. This process is cumbersome and time-consuming; that is, the efficiency of confirming the positions of the objects to be detected is low.
Therefore, an unmanned aerial vehicle detection method and device are needed.
Disclosure of Invention
The application provides an unmanned aerial vehicle detection method and device, which can solve the problem of low efficiency of confirming the position of an object to be detected.
The application provides, in a first aspect, an unmanned aerial vehicle detection method applied to an unmanned aerial vehicle, comprising the following steps: acquiring a detection instruction, wherein the detection instruction corresponds to a preset flight route; shooting along the preset flight route to obtain first pictures; acquiring, from the first pictures, a second picture that contains the object to be detected; acquiring the pixel coordinates of the object to be detected in the second picture; and obtaining actual coordinates from the shooting coordinates and the pixel coordinates, wherein the shooting coordinates are the coordinates of the place where the second picture was taken and the actual coordinates are the coordinates of the place where the object to be detected is located, thereby completing detection of the object.
By adopting this technical scheme, the unmanned aerial vehicle flies and shoots along a preset route, obtains a number of pictures, identifies them, and selects those whose content includes the object to be detected. After identifying the pixel coordinates of the object to be detected in a photo, it obtains the coordinates of the place where the object is actually located from the pixel coordinates and the coordinates at which the photo was shot. The unmanned aerial vehicle thus completes detection and position confirmation of the object automatically. Compared with manual searching, it moves more quickly, and over complex terrain it can reach different places relatively more easily. With sufficient flight-route planning, it can traverse the whole area to be detected without repeatedly covering part of it, which is an advantage over manually recording which places have already been searched. Replacing manual detection also avoids accidents, such as injuries to workers, during detection. In conclusion, the method improves the efficiency of detecting the object to be detected.
Optionally, shooting according to the preset flight route to obtain first pictures specifically includes: shooting according to the preset flight route to obtain a first picture; determining, through a preset similarity algorithm, the similarity between the first picture and any picture in a preset picture library to obtain a first similarity value, wherein the preset picture library is a picture library of the object to be detected; judging whether there exists a first picture whose first similarity value is greater than or equal to a first preset threshold, wherein the first preset threshold is used to judge whether an approximate suspicious object exists in the first picture, and an approximate suspicious object is either the object to be detected or a non-object to be detected; if such a picture exists, confirming that the first picture contains an approximate suspicious object; temporarily leaving the preset flight route to approach the approximate suspicious object, and recording the temporary-departure position, wherein the temporary-departure position is a point on the preset flight route; shooting the approximate suspicious object to obtain a suspicious-object picture; adding the suspicious-object picture to the first pictures; and returning to the temporary-departure position and continuing to shoot along the preset flight route to obtain first pictures.
By adopting this technical scheme, the unmanned aerial vehicle can, while flying the preset route, shoot and identify approximate suspicious objects. When one is confirmed, it temporarily leaves the preset route, approaches the object, and shoots a suspicious-object picture; subsequent steps then judge whether that picture contains the object to be detected, and after the picture is acquired the vehicle returns to the preset route and continues shooting. In this way the recognition accuracy of the unmanned aerial vehicle need not be kept at a high level at all times: thanks to the confirmation of approximate suspicious objects, the vehicle can fly at a greater height with a larger shooting range, which improves its detection efficiency.
Optionally, acquiring a second picture from the first pictures, wherein the second picture contains the object to be detected, specifically includes: determining, through the preset similarity algorithm, the similarity between the suspicious-object picture and any picture in the preset picture library to obtain a second similarity value; judging whether there exists a suspicious-object picture whose second similarity value is greater than or equal to a second preset threshold, wherein the second preset threshold is used to judge whether the object to be detected exists in the first picture, and the second preset threshold is greater than the first preset threshold; if such a picture exists, confirming that the first picture contains the object to be detected; and taking the first picture containing the object to be detected as the second picture.
By adopting this technical scheme, the unmanned aerial vehicle judges, according to the preset similarity algorithm, whether a suspicious-object picture among the first pictures includes the object to be detected, and takes a suspicious-object picture containing the object as the second picture.
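The two-stage screening described above, a looser first threshold for flagging approximate suspicious objects and a stricter second threshold for confirming the object to be detected, can be sketched as follows. The function name, the concrete threshold values, and the representation of similarity scores are illustrative assumptions, not values fixed by the patent:

```python
def classify_picture(library_scores, first_threshold=0.6, second_threshold=0.85):
    """Classify a picture from its similarity scores against the preset
    picture library. The highest score plays the role of the first/second
    similarity value; both thresholds are made-up illustrative values."""
    best = max(library_scores)          # highest similarity vs. the library
    if best >= second_threshold:
        return "object_to_be_detected"  # second threshold met: confirmed
    if best >= first_threshold:
        return "approximate_suspicious" # coarse match: detour for a close-up
    return "no_match"
```

Keeping the second threshold strictly above the first is what lets the drone fly high: a weak coarse match only triggers a close-up, and confirmation happens on the close-up picture.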
Optionally, after judging whether there exists a suspicious-object picture whose second similarity value is greater than or equal to the second preset threshold, the method further includes: if no such picture exists, sending prompt information to the server, wherein the prompt information is used to prompt the user that the preset flight route cannot detect the object to be detected; and acquiring a control instruction from the user, wherein the control instruction includes one or more of a return instruction, a height-adjustment instruction, and a path-adjustment instruction.
By adopting this technical scheme, when the unmanned aerial vehicle judges that none of the shot pictures contains the object to be detected, it sends prompt information to the server, providing a reference for the staff, and waits for further instructions.
Optionally, obtaining the actual coordinates from the shooting coordinates and the pixel coordinates specifically includes: calibrating the camera carried on the unmanned aerial vehicle with a calibration board to obtain the camera's intrinsic and extrinsic parameters, wherein the intrinsic parameters include the focal length and principal-point coordinates, and the extrinsic parameters include the camera's rotation matrix and translation vector; converting the pixel coordinates into coordinates in a camera coordinate system using the intrinsic and extrinsic parameters, wherein the camera coordinate system takes the shooting coordinates as its origin; and converting the coordinates in the camera coordinate system, using a homography matrix, into actual coordinates in a ground coordinate system used to describe geographic position and orientation on the earth's surface, thereby completing detection of the object to be detected.
By adopting this technical scheme, the unmanned aerial vehicle can obtain, from the shooting coordinates and the pixel coordinates, the coordinates of the actual place where the object in the photo is located.
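The final step of this conversion chain, mapping a pixel coordinate onto the ground plane through a homography, can be sketched in a few lines. This is a minimal illustration: in practice the 3x3 matrix comes from the calibration described above, while the matrices used in any example are made up:

```python
def pixel_to_ground(pixel_xy, H):
    """Map a pixel coordinate (u, v) to ground-plane coordinates using a
    3x3 homography matrix H (nested lists), as obtained from camera
    calibration. Minimal sketch of the conversion described above."""
    u, v = pixel_xy
    # multiply H by the homogeneous pixel coordinate [u, v, 1]
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)  # dehomogenize to ground (x, y)
```

With the identity matrix the mapping is a no-op; a real calibration produces a matrix encoding the camera's height, tilt, and intrinsics.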
Optionally, acquiring the pixel coordinates of the object to be detected in the second picture specifically includes: inputting the second picture into a preset target detection model to obtain first picture grids, wherein the first picture grids carry object-category and object-position information, and the object categories include the object to be detected; performing non-maximum suppression on the first picture grids to obtain a second picture grid, which contains the object to be detected; and taking the center-point coordinates of the second picture grid as the pixel coordinates of the object to be detected in the second picture.
By adopting this technical scheme, the pre-trained target detection model divides the picture into many small grid cells; groups of cells form candidate boxes of different sizes, several of which may frame the object to be detected. After non-maximum suppression, the redundant boxes are removed and only the box that best frames the object to be detected remains, which determines the position of the object in the picture and its pixel coordinates.
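The suppression step above can be sketched with a generic non-maximum suppression routine. The (x1, y1, x2, y2) box format, the IoU threshold, and the helper names are assumptions for illustration, since the patent does not fix them:

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box and drop boxes overlapping it too much;
    repeat until no boxes remain. Returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep

def center(box):
    """Center point of the surviving box: the object's pixel coordinates."""
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)
```

Two heavily overlapping candidate boxes collapse to the higher-scoring one, and its center point is taken as the pixel coordinate of the object to be detected.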
Optionally, the detection instruction includes a plurality of sub-detection instructions, the preset flight path includes a plurality of preset sub-flight paths, and one sub-detection instruction corresponds to one preset sub-flight path.
By adopting this technical scheme, the unmanned aerial vehicle can have a plurality of preset flight routes and detect a plurality of different areas. The detection instruction can comprise a plurality of sub-detection instructions, each corresponding to one flight route; that is, with one detection instruction the unmanned aerial vehicle can detect a plurality of areas without requiring a separate instruction from a worker for each area.
The application provides an unmanned aerial vehicle detection device in a second aspect, wherein the unmanned aerial vehicle detection device is an unmanned aerial vehicle, and the unmanned aerial vehicle comprises an acquisition unit and a processing unit;
The acquisition unit is used for acquiring a detection instruction, and the detection instruction corresponds to a preset flight route; acquiring a second picture from the first picture, wherein the second picture contains an object to be detected; acquiring pixel coordinates of an object to be detected in a second picture;
the processing unit is used for shooting according to a preset flight route to obtain a first picture; obtaining an actual coordinate according to the shooting coordinate and the pixel coordinate; the shooting coordinates are coordinates of the place where the second picture is taken, and the actual coordinates are coordinates of the place where the object to be detected is located, so that the object to be detected is detected.
Optionally, the processing unit is used for shooting according to the preset flight route to obtain a first picture; determining, through a preset similarity algorithm, the similarity between the first picture and any picture in a preset picture library to obtain a first similarity value, wherein the preset picture library is a picture library of the object to be detected; judging whether there exists a first picture whose first similarity value is greater than or equal to a first preset threshold, wherein the first preset threshold is used to judge whether an approximate suspicious object exists in the first picture, and an approximate suspicious object is either the object to be detected or a non-object to be detected; if such a picture exists, confirming that the first picture contains an approximate suspicious object; temporarily leaving the preset flight route to approach the approximate suspicious object, and recording the temporary-departure position, wherein the temporary-departure position is a point on the preset flight route; shooting the approximate suspicious object to obtain a suspicious-object picture; adding the suspicious-object picture to the first pictures; and returning to the temporary-departure position and continuing to shoot along the preset flight route to obtain first pictures.
Optionally, the processing unit is configured to determine, through the preset similarity algorithm, the similarity between the suspicious-object picture and any picture in the preset picture library to obtain a second similarity value; judge whether there exists a suspicious-object picture whose second similarity value is greater than or equal to a second preset threshold, wherein the second preset threshold is used to judge whether the object to be detected exists in the first picture, and the second preset threshold is greater than the first preset threshold; if such a picture exists, confirm that the first picture contains the object to be detected; and take the first picture containing the object to be detected as the second picture.
Optionally, the sending subunit of the processing unit is configured to send prompt information to the server if no suspicious-object picture has a second similarity value greater than or equal to the second preset threshold, wherein the prompt information is used to prompt the user that the preset flight route cannot detect the object to be detected; the acquisition unit is used for acquiring a control instruction from the user, wherein the control instruction includes one or more of a return instruction, a height-adjustment instruction, and a path-adjustment instruction.
Optionally, the processing unit is configured to calibrate the camera carried on the unmanned aerial vehicle with a calibration board to obtain the camera's intrinsic and extrinsic parameters, wherein the intrinsic parameters include the focal length and principal-point coordinates, and the extrinsic parameters include the camera's rotation matrix and translation vector; convert the pixel coordinates into coordinates in a camera coordinate system using the intrinsic and extrinsic parameters, wherein the camera coordinate system takes the shooting coordinates as its origin; and convert the coordinates in the camera coordinate system, using a homography matrix, into actual coordinates in a ground coordinate system used to describe geographic position and orientation on the earth's surface, thereby completing detection of the object to be detected.
Optionally, the processing unit is configured to input the second picture into a preset target detection model to obtain first picture grids, wherein the first picture grids carry object-category and object-position information, and the object categories include the object to be detected; perform non-maximum suppression on the first picture grids to obtain a second picture grid, which contains the object to be detected; and take the center-point coordinates of the second picture grid as the pixel coordinates of the object to be detected in the second picture.
Optionally, the acquiring unit is configured to acquire a detection instruction, where the detection instruction includes a plurality of sub-detection instructions, and the preset flight route includes a plurality of preset sub-flight routes; one sub-probe command corresponds to one preset sub-flight path.
The present application provides, in a third aspect, an electronic device comprising a processor, a memory for storing instructions, a user interface, and a network interface for communicating with other devices; the processor is configured to execute the instructions stored in the memory so that the electronic device performs the method of the first aspect or any one of its possible implementations.
The present application provides, in a fourth aspect, a computer-readable storage medium storing a computer program which, when executed by a processor, performs the method of the first aspect or any one of its possible implementations.
In summary, one or more of the technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages.
1. The unmanned aerial vehicle flies and shoots along a preset route, obtains a number of pictures, identifies them, and selects those whose content includes the object to be detected. After identifying the pixel coordinates of the object to be detected in a photo, it obtains the coordinates of the place where the object is actually located from the pixel coordinates and the coordinates at which the photo was shot. The unmanned aerial vehicle thus completes detection and position confirmation of the object automatically. Compared with manual searching, it moves more quickly, and over complex terrain it can reach different places relatively more easily. With sufficient flight-route planning, it can traverse the whole area to be detected without repeatedly covering part of it, which is an advantage over manually recording which places have already been searched. Replacing manual detection also avoids accidents, such as injuries to workers, during detection. In conclusion, the method improves the efficiency of detecting the object to be detected.
2. The unmanned aerial vehicle can, while flying the preset route, shoot and identify approximate suspicious objects. When one is confirmed, it pauses the preset route, approaches the object, and shoots a suspicious-object picture; subsequent steps then judge whether that picture contains the object to be detected, and after the picture is acquired the vehicle returns to the preset route and continues shooting. In this way the recognition accuracy of the unmanned aerial vehicle need not be kept at a high level at all times: thanks to the confirmation of approximate suspicious objects, the vehicle can fly at a greater height with a larger shooting range, which improves its detection efficiency.
3. When the unmanned aerial vehicle judges that none of the shot pictures contains the object to be detected, it sends prompt information to the server, providing a reference for the staff, and waits for further instructions.
Drawings
Fig. 1 is a schematic flow chart of a detection method of an unmanned aerial vehicle according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of an unmanned aerial vehicle detection device disclosed in an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Reference numerals illustrate: 201. an acquisition unit; 202. a processing unit; 300. an electronic device; 301. a processor; 302. a communication bus; 303. a user interface; 304. a network interface; 305. a memory.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present application, but not all embodiments.
In the description of embodiments of the present application, words such as "for example" are used to indicate examples, illustrations, or descriptions. Any embodiment or design described herein as an example should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of such words is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, the term "plurality" means two or more. For example, a plurality of systems means two or more systems, and a plurality of screen terminals means two or more screen terminals. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating an indicated technical feature. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The staff search a plurality of remains one by one and confirm their positions; the process is cumbersome and time-consuming, so the efficiency of confirming the positions of the objects to be detected is low. The unmanned aerial vehicle detection method of the present application can solve this problem.
The present application provides an unmanned aerial vehicle detection method, and fig. 1 is a schematic flow chart of the method provided in an embodiment of the present application, applied to an unmanned aerial vehicle. The embodiments below take as their example a scene in which the unmanned aerial vehicle detects missile remains; that is, the object to be detected in this scene is the missile remains, which will not be repeated below. The method includes steps S101 to S105.
S101, acquiring a detection instruction, wherein the detection instruction corresponds to a preset flight route.
In the above step, the unmanned aerial vehicle acquires a detection instruction from the server, wherein the detection instruction comprises one or more preset flight routes.
In one possible embodiment, the probing instructions include a plurality of sub-probing instructions, and the predetermined flight path includes a plurality of predetermined sub-flight paths, one sub-probing instruction corresponding to each of the predetermined sub-flight paths.
Specifically, the detection instruction can include a plurality of sub-detection instructions, and the preset flight route can include a plurality of sub-flight routes; one sub-detection instruction corresponds to one sub-flight route, and under ideal conditions one sub-flight route allows the unmanned aerial vehicle to cover one complete area. That is, by acquiring a detection instruction once, the unmanned aerial vehicle can detect a plurality of planned detection areas along the preset flight routes.
For example, the unmanned aerial vehicle obtains a detection instruction from the server that includes a sub-detection instruction A and a sub-detection instruction B. Sub-detection instruction A corresponds to sub-flight route a, along which the unmanned aerial vehicle detects area alpha; sub-detection instruction B corresponds to sub-flight route b, along which it flies to detect area beta.
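The structure of the example above, one detection instruction whose sub-instructions each map to one preset sub-flight route covering one area, can be sketched as a simple mapping. The names mirror the example and are purely illustrative:

```python
# One detection instruction: each sub-detection instruction maps to one
# preset sub-flight route covering one area (names are illustrative).
detection_instruction = {
    "sub_instruction_A": {"sub_route": "a", "area": "alpha"},
    "sub_instruction_B": {"sub_route": "b", "area": "beta"},
}

def areas_to_detect(instruction):
    """All areas covered by one acquired detection instruction,
    without any further per-area command from a worker."""
    return [sub["area"] for sub in instruction.values()]
```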
S102, shooting according to a preset flight route, and obtaining a first picture.
In this step, the unmanned aerial vehicle flies along the preset flight route and shoots, obtaining a number of pictures of the areas to be detected to complete the detection process. During flight, the camera carried on the unmanned aerial vehicle faces, by default, the direction in which the vehicle is advancing; when shooting, the vehicle hovers, and the camera shoots toward the area below and at angles deflected from the vertical, so that the vehicle can photograph the surroundings of each hover point.
In one possible implementation, step S102 specifically includes: shooting according to the preset flight route to obtain a first picture; determining, through a preset similarity algorithm, the similarity between the first picture and each picture in a preset picture library, and taking the highest similarity value as the first similarity value, wherein the preset picture library is a picture library of the object to be detected; judging whether there exists a first picture whose first similarity value is greater than or equal to a first preset threshold, wherein the first preset threshold is used to judge whether an approximate suspicious object exists in the first picture, and an approximate suspicious object is either the object to be detected or a non-object to be detected; if such a picture exists, confirming that the first picture contains an approximate suspicious object; temporarily leaving the preset flight route to approach the approximate suspicious object, and recording the temporary-departure position, wherein the temporary-departure position is a point on the preset flight route; shooting the approximate suspicious object to obtain a suspicious-object picture; adding the suspicious-object picture to the first pictures; and returning to the temporary-departure position and continuing to shoot along the preset flight route to obtain first pictures.
Specifically, the unmanned aerial vehicle shoots the detection area while flying the preset flight route. The shooting frequency is generally set to 30 pictures per second, but staff can set it to any other suitable value as required; this is not limited here. A picture shot by the unmanned aerial vehicle on the preset flight route is a first picture. The similarity algorithm computes a first similarity value between the first picture and the pictures of the object to be detected in the preset picture library, and this value determines whether the first picture contains an approximate suspicious object, that is, something resembling but not necessarily being the object to be detected. In this way, the drone need not fly at a relatively low altitude just to maintain high recognition accuracy. When a picture with an approximate suspicious object is shot, the unmanned aerial vehicle temporarily deviates from the preset flight route to approach the object; how closely it approaches for the shot can be set by the staff according to the actual situation. It should be noted that, because the shooting frequency is thirty pictures per second by default, an object will be photographed repeatedly; when several pictures show the same object, the unmanned aerial vehicle judges, by image recognition, whether they depict one object, and if so it does not repeat the close-up operation. After the close-up shooting, it returns to the position where it left the preset flight route and continues to fly and shoot along it.
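The detour behaviour just described, leave the route on a coarse match, take a close-up, then resume from the departure point, can be sketched as a loop over route waypoints. The callables `shoot` and `coarse_score` and the threshold are stand-ins for the drone's camera and similarity pipeline, not APIs named in the patent:

```python
def patrol(route, shoot, coarse_score, first_threshold=0.6):
    """Fly the preset route; on a coarse match, temporarily leave the
    route for a close-up shot, then resume at the departure point.
    `shoot(point, close_up)` and `coarse_score(picture)` are assumed
    stand-ins for the camera and the similarity algorithm."""
    first_pictures = []
    for point in route:                        # the preset flight route
        picture = shoot(point, close_up=False)
        first_pictures.append(picture)
        if coarse_score(picture) >= first_threshold:
            # temporary departure: approach the approximate suspicious
            # object and add the close-up to the first pictures
            first_pictures.append(shoot(point, close_up=True))
            # the loop then continues from `point`, i.e. the departure position
    return first_pictures
```

The close-up pictures accumulated here are exactly the suspicious-object pictures that step S103 screens against the second threshold.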
S103, acquiring a second picture from the first picture, wherein the second picture comprises an object to be detected.
In this step, the unmanned aerial vehicle screens the pictures containing missile remains out of the shot pictures.
In one possible implementation, step S103 specifically includes: determining, through the preset similarity algorithm, the similarity between the suspicious object picture and each picture in the preset picture library to obtain a second similarity value; judging whether any suspicious object picture has a second similarity value greater than or equal to a second preset threshold, where the second preset threshold is used to judge whether the object to be detected exists in the first picture, and the second preset threshold is larger than the first preset threshold; if a suspicious object picture has a second similarity value greater than or equal to the second preset threshold, confirming that the first picture contains the object to be detected; and taking the first picture containing the object to be detected as the second picture.
Specifically, the unmanned aerial vehicle determines the similarity between the suspicious object picture and each picture in the preset picture library through the similarity algorithm, takes the highest such value as the second similarity value, and, if a suspicious object picture has a second similarity value greater than or equal to the second preset threshold, confirms that the first picture contains the object to be detected, i.e. recognizes that the first picture contains missile remains. The preset picture library is collected in advance and contains many pictures of missile remains of different sizes, shot at different distances. Before the similarity calculation, features must be extracted from the pictures in the library, specifically: load a trained CNN model and its weight parameters into memory; load each picture and preprocess it, where preprocessing includes resizing all pictures to the same size, normalizing them, and converting them into the form accepted by the CNN model; input the preprocessed picture into the CNN model for forward propagation to obtain the outputs of the convolution layers; and select the output of one convolution layer as the picture's feature vector. Generally the output of the last convolution layer is chosen, or a method such as global average pooling is used to average (or weighted-average) the convolution outputs into a feature vector of fixed length.
After the feature vector of each picture in the preset picture library is obtained, the same method is used to extract features from the pictures shot by the unmanned aerial vehicle, giving a feature vector for each shot picture. The similarity value between a shot picture and a picture in the preset picture library is then determined from the distance between the feature vectors of the two pictures. Methods for calculating this distance include the Euclidean distance, the Manhattan distance and the cosine similarity. If the cosine similarity of two pictures is calculated, the similarity value lies in [-1, 1]: the closer to 1, the more similar the two pictures; the closer to -1, the more dissimilar. By setting a similarity threshold, pictures with a certain similarity to the picture library can be screened out and taken as pictures containing missile remains.
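A minimal sketch of the distance measures named above, together with the global average pooling step that collapses a convolution layer's C x H x W output into a fixed-length feature vector. This is plain illustrative Python, not the patent's implementation; in practice these operations would run on the CNN framework's tensors:

```python
import math

def global_average_pool(feature_map):
    # Collapse a C x H x W convolution output (list of 2-D channel grids)
    # into a length-C feature vector, independent of the spatial size
    return [sum(sum(row) for row in channel) / (len(channel) * len(channel[0]))
            for channel in feature_map]

def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def cosine_similarity(a, b):
    # Value lies in [-1, 1]; closer to 1 means the vectors are more similar
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Because global average pooling reduces each channel to one number, pictures of different resolutions still yield comparable feature vectors of the same length.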
In one possible implementation manner, after determining whether there is a picture with the second similarity value greater than or equal to the second preset threshold in the suspicious object picture, the method further includes: if no picture with the second similarity value larger than or equal to the second preset threshold value exists in the suspicious object pictures, prompt information is sent to the server, and the prompt information is used for prompting a user that a preset flight route cannot detect an object to be detected; and acquiring a control instruction of a user, wherein the control instruction comprises one or more of a return instruction, a height adjustment instruction and a path adjustment instruction.
Specifically, if after the similarity calculation and judgment none of the suspicious object pictures contains the object to be detected, that is, the detection finds no missile remains, there may be various reasons: the flight route may be set unreasonably, for example set too high, so that the missile remains are imaged too small to identify; or the detection region may have been selected incorrectly. In that case the unmanned aerial vehicle sends prompt information and the shot pictures to the server, to alert the staff and give them on-site pictures for reference. After a staff member makes a decision and sends an instruction according to the specific situation, the unmanned aerial vehicle receives the instruction from the server and executes it. The instructions include: a return instruction for the unmanned aerial vehicle; an altitude adjustment instruction; and a route adjustment instruction, accompanied by a new preset route to guide the unmanned aerial vehicle to shoot correctly. These instructions may be issued singly or in combination.
For example, after confirming that the site terrain allows it, the staff sends, through the server, an instruction for the unmanned aerial vehicle to descend 5 m. The unmanned aerial vehicle is then closer to the ground and shoots more clearly, but if it continues to shoot along the original flight route and hovering points, the shooting range becomes smaller and some areas would go undetected. The staff therefore needs to plan a new flight route with new hovering points, so that the unmanned aerial vehicle can shoot the original area both clearly and with high coverage.
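The control-instruction handling described above might be sketched as below. The instruction names and field layout ('return', 'adjust_altitude'/'delta_m', 'adjust_path'/'route') are hypothetical, since the patent only states that the instructions may be issued singly or in combination and does not specify a message format:

```python
def execute_instructions(instructions):
    """Apply the operator's control instructions in order.

    Each instruction is a dict with a 'type' field; several may be
    combined in one message. Field names are illustrative assumptions.
    """
    actions = []
    for inst in instructions:
        kind = inst["type"]
        if kind == "return":
            actions.append("return_to_base")
        elif kind == "adjust_altitude":
            # signed altitude change in metres, e.g. -5 to descend 5 m
            actions.append(f"altitude{inst['delta_m']:+d}m")
        elif kind == "adjust_path":
            # a route adjustment carries a new preset route identifier
            actions.append(f"new_route:{inst['route']}")
        else:
            raise ValueError(f"unknown instruction type: {kind}")
    return actions
```

For the descend-5-m example above, the operator would combine an altitude adjustment with a route adjustment in a single message.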
S104, acquiring pixel coordinates of the object to be detected in the second picture.
In the above step, after the unmanned aerial vehicle has screened out the pictures containing missile remains, it confirms the position of the remains within each picture. Since a picture consists of many pixels, the missile remains shown in the picture are likewise composed of some of those pixels, so the position of the missile remains in the picture can be described with pixel coordinates.
In one possible implementation, step S104 specifically includes: inputting the second picture into a preset target detection model to obtain a first picture grid, wherein the first picture grid comprises object types and object position information; the object class comprises objects to be detected; performing non-maximum value inhibition processing on the first picture grid to obtain a second picture grid; the second picture grid comprises an object to be detected; and taking the center point coordinates of the second picture grid as pixel coordinates of the object to be detected in the second picture.
Specifically, the unmanned aerial vehicle can confirm the pixel coordinates of the missile remains in the picture through a target detection algorithm. There are many such algorithms, for example YOLO and Fast R-CNN; this embodiment takes the YOLO algorithm as an example. The YOLO algorithm divides the input picture into several unit grids of equal size; the number of grids can be set by the staff and is set to 36 here for convenience. These unit grids are neither the first picture grids nor the second picture grids. Each unit grid predicts whether the center point of an object falls within it, and each unit grid may predict two bounding boxes, so 72 boxes are predicted in total; these are the first picture grids. Each first picture grid includes the predicted center point of the object, the object's category and the object's position in the picture. Since only the picture grids containing missile remains are needed, the useless grids must be removed; a non-maximum suppression algorithm is used for this screening, yielding the grid that best encloses the object, namely the second picture grid. Once the grid containing the missile remains is obtained, the coordinates of its center point are taken as the pixel coordinates of the missile remains in the picture.
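The non-maximum suppression and center-point steps can be illustrated with a standard IoU-based NMS. The box format (x1, y1, x2, y2) and the IoU threshold are illustrative assumptions, not values from the patent:

```python
def iou(a, b):
    # Intersection over union of two boxes given as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    # Keep the best-scoring box, drop boxes overlapping it too much, repeat
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

def center_point(box):
    # Pixel coordinates of the detected object: the centre of its grid/box
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)
```

The surviving box is the "second picture grid" of the text, and `center_point` gives the pixel coordinates used in step S105.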
S105, obtaining actual coordinates according to shooting coordinates and pixel coordinates; the shooting coordinates are coordinates of the place where the second picture is taken, and the actual coordinates are coordinates of the place where the object to be detected is located, so that the object to be detected is detected.
In the step, the unmanned aerial vehicle obtains the actual coordinates of the missile remains according to the pixel coordinates of the missile remains in the picture and the coordinates of the position of the unmanned aerial vehicle when the picture is shot.
In one possible implementation, step S105, obtaining the actual coordinates according to the shooting coordinates and the pixel coordinates, specifically includes: calibrating a camera carried on the unmanned aerial vehicle by using a calibration plate to obtain the internal parameters and external parameters of the camera, where the internal parameters include the focal length and the principal point coordinates, and the external parameters include the rotation matrix and translation vector of the camera; converting the pixel coordinates into coordinates in a camera coordinate system by using the internal and external parameters, where the camera coordinate system takes the shooting coordinates as its origin; and converting the coordinates in the camera coordinate system into actual coordinates in a ground coordinate system by using a homography matrix, where the ground coordinate system describes geographic position and direction on the earth's surface, thereby completing the detection of the object to be detected.
Specifically, the calibration plate is placed within the shooting range of the camera, several calibration plate images are shot, the corner points of the calibration plate are detected, and the internal and external parameters of the camera are calculated to obtain the calibration result. The pixel coordinates are expressed as homogeneous coordinates [u, v, 1], where u and v are the pixel coordinates and the third component is set to 1 for convenience in later calculation. The pixel coordinates are then converted into coordinates on the normalized plane according to the camera's internal parameters, and the coordinates on the normalized plane are converted into coordinates in the camera coordinate system according to the external parameters. Finally, the coordinates in the camera coordinate system are converted into coordinates in the ground coordinate system, i.e. the actual coordinates of the missile remains.
For example, after camera calibration, the internal parameters are a focal length f = 500 pixels and a principal point (cx, cy) = (320, 240); the external rotation matrix R is the identity matrix and the translation vector t is (0, 0, 0). Using this calibration result, the pixel coordinates (500, 800) are converted into camera coordinates (xc, yc, zc): xc = (500 - 320)/500 = 0.36; yc = (800 - 240)/500 = 1.12; zc = f = 500. Solving gives the homography matrix H:

H = [0.8, -0.2, 100; 0.2, 0.8, 200; 0.0, 0.0, 1]

where the last row of H is the constant row. H maps the planar camera coordinates (xc, yc), taken as the homogeneous point (xc, yc, 1), to ground coordinates (xw, yw):

[xw, yw, 1] = H * [xc, yc, 1]
            = [0.8 x 0.36 - 0.2 x 1.12 + 100, 0.2 x 0.36 + 0.8 x 1.12 + 200, 1]
            = [100.064, 200.968, 1]

Together with the depth zc = 500, the actual coordinates of the missile remains in the ground coordinate system are (100.064, 200.968, 500).
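A sketch of the pixel-to-camera and homography steps, using the example's calibration values (f = 500, principal point (320, 240)) and its matrix H, with the multiplications carried out explicitly. The helper names are illustrative; a real pipeline would use a calibration library rather than hand-written matrix math:

```python
def pixel_to_camera(u, v, f, cx, cy):
    # Normalize the pixel against the principal point and focal length;
    # following the example, zc = f is kept as the depth component
    return (u - cx) / f, (v - cy) / f, f

def apply_homography(H, x, y):
    # Map planar camera coordinates (x, y) to ground coordinates via the
    # homogeneous point (x, y, 1); H is a 3x3 list-of-lists matrix
    xw = H[0][0] * x + H[0][1] * y + H[0][2]
    yw = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xw / w, yw / w
```

With the example's numbers, `pixel_to_camera(500, 800, 500, 320, 240)` gives (0.36, 1.12, 500), and applying H to (0.36, 1.12) yields the ground coordinates.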
The application also provides an unmanned aerial vehicle detection device, which comprises an acquisition unit 201 and a processing unit 202, and is described with reference to fig. 2.
An acquisition unit 201, configured to acquire a detection instruction, where the detection instruction corresponds to a preset flight path; acquiring a second picture from the first picture, wherein the second picture contains an object to be detected; acquiring pixel coordinates of an object to be detected in a second picture;
the processing unit 202 is configured to take a first picture according to a preset flight path; obtaining an actual coordinate according to the shooting coordinate and the pixel coordinate; the shooting coordinates are coordinates of the place where the second picture is taken, and the actual coordinates are coordinates of the place where the object to be detected is located, so that the object to be detected is detected.
In a possible implementation manner, the processing unit 202 is configured to take a first picture according to a preset flight path; determining the similarity of any picture in a first picture and a preset picture library through a preset similarity algorithm to obtain a first similarity value, wherein the preset picture library is a picture library of an object to be detected; judging whether a picture with a first similarity value larger than or equal to a first preset threshold value exists in the first picture or not, wherein the first preset threshold value is used for judging whether an approximate suspicious object exists in the first picture, and the approximate suspicious object is one of an object to be detected and a non-object to be detected; if the first picture has a picture with the first similarity value larger than or equal to a first preset threshold value, confirming that the first picture contains an approximate suspicious object; temporarily leaving a preset flight path to approach to the approximate suspicious object, and acquiring a temporary leaving position, wherein the temporary leaving position is a point on the preset flight path; shooting the approximate suspicious object to obtain a suspicious object picture; adding the suspicious object picture into the first picture; and returning to the temporary-departure position, and continuously shooting according to the preset flight route to obtain a first picture.
In a possible implementation manner, the processing unit 202 is configured to determine, through a preset similarity algorithm, a similarity between the suspicious object picture and any picture in the preset picture library, so as to obtain a second similarity value; judging whether a picture with a second similarity value larger than or equal to a second preset threshold value exists in the suspicious object picture, wherein the second preset threshold value is used for judging whether an object to be detected exists in the first picture, and the value of the second preset threshold value is larger than that of the first preset threshold value; if the suspicious object picture has a picture with the second similarity value larger than or equal to a second preset threshold value, confirming that the first picture contains an object to be detected; and taking the first picture containing the object to be detected as a second picture.
In a possible implementation manner, the sending subunit of the processing unit 202 is configured to send, if no picture with the second similarity value greater than or equal to the second preset threshold exists in the suspicious object pictures, a prompt message to the server, where the prompt message is used to prompt the user that the preset flight route cannot detect the object to be detected; the acquiring unit 201 is configured to acquire a control instruction of the user, where the control instruction includes one or more of a return instruction, an altitude adjustment instruction, and a path adjustment instruction.
In a possible implementation manner, the processing unit 202 is configured to calibrate a camera mounted on the unmanned aerial vehicle by using a calibration board, so as to obtain internal parameters and external parameters of the camera, where the internal parameters include a focal length and principal point coordinates, and the external parameters include a rotation matrix and a translation vector of the camera; converting the pixel coordinates into coordinates in a camera coordinate system by using the internal parameters and the external parameters, wherein the camera coordinate system takes shooting coordinates as an origin; the homography matrix is used to convert the coordinates in the camera coordinate system into actual coordinates in the ground coordinate system used to describe the geographic position and orientation in the earth's surface to complete the detection of the object to be detected.
In a possible implementation manner, the processing unit 202 is configured to input the second picture into the preset target detection model to obtain a first picture grid, where the first picture grid includes object type and object position information; the object class comprises objects to be detected; performing non-maximum value inhibition processing on the first picture grid to obtain a second picture grid; the second picture grid comprises an object to be detected; and taking the center point coordinates of the second picture grid as pixel coordinates of the object to be detected in the second picture.
In a possible embodiment, the obtaining unit 201 is configured to obtain a detection instruction, where the detection instruction includes a plurality of sub-detection instructions, and the preset flight route includes a plurality of preset sub-flight routes; one sub-detection instruction corresponds to one preset sub-flight route.
It should be noted that: in the device provided in the above embodiment, when implementing the functions thereof, only the division of the above functional modules is used as an example, in practical application, the above functional allocation may be implemented by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to implement all or part of the functions described above. In addition, the embodiments of the apparatus and the method provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the embodiments of the method are detailed in the method embodiments, which are not repeated herein.
The application also discloses a computer readable storage medium storing a computer program for executing the unmanned aerial vehicle detection method disclosed in the above specification by a processor.
The application also discloses electronic equipment. Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 300 may include: at least one processor 301, at least one communication bus 302, at least one user interface 303, a network interface 304, a memory 305.
Wherein the communication bus 302 is used to enable connected communication between these components.
The user interface 303 may include a Display screen (Display), a Camera (Camera), and the optional user interface 303 may further include a standard wired interface, and a wireless interface.
The network interface 304 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Wherein the processor 301 may include one or more processing cores. The processor 301 connects various parts of the overall server through various interfaces and lines, and performs the various functions of the server and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 305 and by invoking the data stored in the memory 305. Alternatively, the processor 301 may be implemented in hardware as at least one of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 301 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processor (Graphics Processing Unit, GPU), a modem, etc. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU is used for rendering and drawing the content to be displayed on the display screen; the modem is used to handle wireless communication. It will be appreciated that the modem may also not be integrated into the processor 301 and may instead be implemented by a separate chip.
The Memory 305 may include a random access Memory (Random Access Memory, RAM) or a Read-Only Memory (Read-Only Memory). Optionally, the memory 305 includes a non-transitory computer readable medium (non-transitory computer-readable storage medium). Memory 305 may be used to store instructions, programs, code, sets of codes, or sets of instructions. The memory 305 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above-described respective method embodiments, etc.; the storage data area may store data or the like involved in the above respective method embodiments. Memory 305 may also optionally be at least one storage device located remotely from the aforementioned processor 301. Referring to fig. 3, an operating system, a network communication module, a user interface module, and a drone probing application may be included in the memory 305 as one type of computer storage medium.
In the electronic device 300 shown in fig. 3, the user interface 303 is mainly used for providing an input interface for a user, and acquiring data input by the user; and the processor 301 may be configured to invoke the drone probing application stored in the memory 305, which when executed by the one or more processors 301, causes the electronic device 300 to perform the method as in one or more of the embodiments described above. It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided herein, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some service interfaces, devices or units, and may be electrical or in other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned memory includes: various media capable of storing program codes, such as a U disk, a mobile hard disk, a magnetic disk or an optical disk.
The above are merely exemplary embodiments of the present disclosure and are not intended to limit the scope of the present disclosure. That is, equivalent changes and modifications are contemplated by the teachings of this disclosure, which fall within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure.
This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the claims.

Claims (7)

1. A method of unmanned aerial vehicle detection, characterized by being applied to an unmanned aerial vehicle, the method comprising:
acquiring a detection instruction, wherein the detection instruction corresponds to a preset flight route;
shooting according to the preset flight route to obtain a first picture, wherein the shooting according to the preset flight route to obtain a first picture specifically comprises the following steps: determining the similarity of any picture in the first picture and a preset picture library through a preset similarity algorithm to obtain a first similarity value, wherein the preset picture library is a picture library of an object to be detected; judging whether a picture with the first similarity value larger than or equal to a first preset threshold value exists in the first picture or not, wherein the first preset threshold value is used for judging whether an approximate suspicious object exists in the first picture or not, and the approximate suspicious object is one of the object to be detected and the object not to be detected; if the first picture has a picture with the first similarity value larger than or equal to the first preset threshold value, confirming that the first picture contains the approximate suspicious object; temporarily leaving the preset flight route to approach the approximate suspicious object to obtain a temporary departure position, wherein the temporary departure position is a point on the preset flight route; shooting the approximate suspicious object to obtain a suspicious object picture; adding the suspicious object picture into the first picture; returning to the temporary departure position, and continuously shooting according to the preset flight route to obtain a plurality of first pictures;
acquiring a second picture from the first picture, wherein the second picture comprises the object to be detected; the acquiring a second picture from the first picture specifically comprises: determining the similarity of the suspicious object picture and any picture in the preset picture library through the preset similarity algorithm to obtain a second similarity value; judging whether a picture with the second similarity value larger than or equal to a second preset threshold value exists in the suspicious object picture, wherein the second preset threshold value is used for judging whether the object to be detected exists in the first picture, and the value of the second preset threshold value is larger than the value of the first preset threshold value; if the suspicious object picture has a picture with the second similarity value larger than or equal to the second preset threshold value, confirming that the first picture contains the object to be detected; taking a first picture containing the object to be detected as the second picture;
acquiring pixel coordinates of the object to be detected in the second picture; the obtaining the pixel coordinates of the object to be detected in the second picture specifically includes: inputting the second picture into a preset target detection model to obtain a first picture grid, wherein the first picture grid comprises object types and object position information; the object class comprises the object to be detected; performing non-maximum value inhibition processing on the first picture grid to obtain a second picture grid; the second picture grid comprises the object to be detected; taking the center point coordinate of the second picture grid as the pixel coordinate of the object to be detected in the second picture;
obtaining an actual coordinate according to the shooting coordinate and the pixel coordinate; the shooting coordinates are coordinates of a place where the second picture is shot, and the actual coordinates are coordinates of the place where the object to be detected is located, so that the object to be detected is detected.
2. The method according to claim 1, wherein after the determining whether the picture with the second similarity value greater than or equal to a second preset threshold exists in the suspicious object picture, the method further includes:
if the suspicious object picture does not contain the picture with the second similarity value larger than or equal to the second preset threshold value, sending prompt information to a server, wherein the prompt information is used for prompting a user that the preset flight route cannot detect the object to be detected;
and acquiring a control instruction of the user, wherein the control instruction comprises one or more of a return instruction, a height adjustment instruction and a path adjustment instruction.
3. The method according to claim 1, wherein the obtaining the actual coordinates according to the shooting coordinates and the pixel coordinates specifically includes:
calibrating a camera carried on the unmanned aerial vehicle by using a calibration plate to obtain internal parameters and external parameters of the camera, wherein the internal parameters comprise focal length and principal point coordinates, and the external parameters comprise a rotation matrix and a translation vector of the camera;
converting the pixel coordinates into coordinates in a camera coordinate system by using the internal parameters and the external parameters, wherein the camera coordinate system takes the shooting coordinates as its origin;
and converting the coordinates in the camera coordinate system into the actual coordinates in a ground coordinate system by using a homography matrix, wherein the ground coordinate system describes geographic position and orientation on the earth's surface, thereby completing detection of the object to be detected.
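Claim 3's coordinate chain (pixel → camera coordinate system via the calibrated intrinsics, then camera → ground coordinate system via a homography) can be sketched with a pinhole-camera model. The intrinsic values and the homography below are hypothetical; in the claimed method they would come from the calibration-board procedure:

```python
def mat_vec(m, v):
    # Multiply a 3x3 matrix by a 3-vector.
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def pixel_to_camera(u, v, fx, fy, cx, cy):
    # Back-project a pixel through the intrinsics (focal lengths fx, fy,
    # principal point cx, cy) to a normalized direction in the camera
    # coordinate system, whose origin is the shooting position.
    return [(u - cx) / fx, (v - cy) / fy, 1.0]

def pixel_to_ground(u, v, H):
    # Map the pixel into the ground coordinate system with a 3x3
    # homography H (valid for a planar ground), then dehomogenize to
    # obtain the actual (X, Y) position of the object to be detected.
    x, y, w = mat_vec(H, [u, v, 1.0])
    return x / w, y / w
```

The homography shortcut assumes the detected object lies on a flat ground plane; for uneven terrain the full extrinsics (rotation matrix and translation vector) of claim 3 would be needed instead.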
4. The method of claim 1, wherein the detection instruction comprises a plurality of sub-detection instructions, the preset flight route comprises a plurality of preset sub-flight routes, and one sub-detection instruction corresponds to one preset sub-flight route.
5. An unmanned aerial vehicle detection device, characterized in that the device is an unmanned aerial vehicle comprising an acquisition unit (201) and a processing unit (202), wherein,
the acquisition unit (201) is used for acquiring a detection instruction, and the detection instruction corresponds to a preset flight route;
the processing unit (202) is configured to shoot according to the preset flight route to obtain a first picture; the shooting according to the preset flight route to obtain a first picture specifically comprises: determining, through a preset similarity algorithm, the similarity between the first picture and any picture in a preset picture library to obtain a first similarity value, wherein the preset picture library is a picture library of an object to be detected; judging whether a picture whose first similarity value is greater than or equal to a first preset threshold exists among the first pictures, wherein the first preset threshold is used for judging whether an approximate suspicious object exists in the first picture, and the approximate suspicious object is either the object to be detected or a non-target object; if a picture whose first similarity value is greater than or equal to the first preset threshold exists among the first pictures, confirming that the first picture contains the approximate suspicious object; departing temporarily from the preset flight route to approach the approximate suspicious object, and recording the departure position, wherein the departure position is a point on the preset flight route; shooting the approximate suspicious object to obtain a suspicious object picture; adding the suspicious object picture to the first picture; and returning to the departure position and continuing to shoot according to the preset flight route to obtain a plurality of first pictures;
the acquiring unit (201) is further configured to acquire a second picture from the first picture, wherein the second picture comprises the object to be detected; the acquiring a second picture from the first picture specifically comprises: determining, through the preset similarity algorithm, the similarity between the suspicious object picture and any picture in the preset picture library to obtain a second similarity value, and judging whether a picture whose second similarity value is greater than or equal to a second preset threshold exists among the suspicious object pictures, wherein the second preset threshold is used for judging whether the object to be detected exists in the first picture, and the second preset threshold is greater than the first preset threshold; if a picture whose second similarity value is greater than or equal to the second preset threshold exists among the suspicious object pictures, confirming that the first picture contains the object to be detected; and taking the first picture containing the object to be detected as the second picture;
the acquiring unit (201) is further configured to acquire pixel coordinates of the object to be detected in the second picture; the acquiring pixel coordinates of the object to be detected in the second picture specifically comprises: inputting the second picture into a preset target detection model to obtain first picture grids, wherein each first picture grid comprises an object class and object position information, and the object class comprises the object to be detected; performing non-maximum suppression on the first picture grids to obtain a second picture grid, wherein the second picture grid contains the object to be detected; and taking the coordinates of the center point of the second picture grid as the pixel coordinates of the object to be detected in the second picture;
the processing unit (202) is further configured to obtain actual coordinates according to shooting coordinates and the pixel coordinates, wherein the shooting coordinates are the coordinates of the place from which the second picture was shot, and the actual coordinates are the coordinates of the place where the object to be detected is located, thereby completing detection of the object to be detected.
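The non-maximum suppression step that reduces the detector's candidate grids to a single second picture grid, and the center-point read-out that follows it, can be sketched generically. The patent does not name the detection model, so plain (x1, y1, x2, y2) boxes with confidence scores and an IoU threshold of 0.5 are assumptions:

```python
def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Non-maximum suppression: keep the highest-scoring box, discard
    boxes that overlap it beyond the threshold, and repeat. Returns the
    indices of the surviving boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep

def center(box):
    # Per the claims, the kept grid's center point serves as the pixel
    # coordinates of the object to be detected.
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)
```

Two near-duplicate detections of the same object collapse to the higher-scoring one, whose center then feeds the coordinate conversion of claim 3.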
6. An electronic device comprising a processor (301), a memory (305), a user interface (303) and a network interface (304), wherein the memory (305) is adapted to store instructions, the user interface (303) and the network interface (304) are adapted to communicate with other devices, and the processor (301) is adapted to execute the instructions stored in the memory (305) to cause the electronic device (300) to perform the method according to any one of claims 1 to 4.
7. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 4.
CN202311072096.8A 2023-08-24 2023-08-24 Unmanned aerial vehicle detection method and device Active CN116772803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311072096.8A CN116772803B (en) 2023-08-24 2023-08-24 Unmanned aerial vehicle detection method and device


Publications (2)

Publication Number Publication Date
CN116772803A CN116772803A (en) 2023-09-19
CN116772803B (en) 2024-02-09

Family

ID=87993511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311072096.8A Active CN116772803B (en) 2023-08-24 2023-08-24 Unmanned aerial vehicle detection method and device

Country Status (1)

Country Link
CN (1) CN116772803B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102175226A (en) * 2010-12-31 2011-09-07 北京控制工程研究所 Target identification method based on significant characteristics
JP2012027000A (en) * 2010-06-22 2012-02-09 Itt:Kk Image measurement processor, image measurement processing method and image measurement processing program by single camera
JP2013061204A (en) * 2011-09-13 2013-04-04 Asia Air Survey Co Ltd Method for setting corresponding point of aerial photographic image data, corresponding point setting apparatus, and corresponding point setting program
CN110971824A (en) * 2019-12-04 2020-04-07 深圳市凯达尔科技实业有限公司 Unmanned aerial vehicle shooting control method
CN111626212A (en) * 2020-05-27 2020-09-04 腾讯科技(深圳)有限公司 Method and device for identifying object in picture, storage medium and electronic device
CN114138014A (en) * 2021-11-19 2022-03-04 浙江远望土地勘测规划设计有限公司 Unmanned aerial vehicle control method, device and equipment for land surveying and storage medium
CN116164711A (en) * 2023-03-09 2023-05-26 广东精益空间信息技术股份有限公司 Unmanned aerial vehicle mapping method, unmanned aerial vehicle mapping system, unmanned aerial vehicle mapping medium and unmanned aerial vehicle mapping computer
CN116627179A (en) * 2023-07-19 2023-08-22 陕西德鑫智能科技有限公司 Unmanned aerial vehicle formation control method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL228735B (en) * 2013-10-06 2018-10-31 Israel Aerospace Ind Ltd Target direction determination method and system
CN105243119B (en) * 2015-09-29 2019-05-24 百度在线网络技术(北京)有限公司 Determine region to be superimposed, superimposed image, image presentation method and the device of image
CN109240572B (en) * 2018-07-20 2021-01-05 华为技术有限公司 Method for obtaining picture, method and device for processing picture




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant