CN113467450A - Unmanned aerial vehicle control method and device, computer equipment and storage medium - Google Patents

Unmanned aerial vehicle control method and device, computer equipment and storage medium

Info

Publication number
CN113467450A
CN113467450A (application CN202110746003.XA)
Authority
CN
China
Prior art keywords
path
image
aerial vehicle
unmanned aerial
front image
Prior art date
Legal status
Pending
Application number
CN202110746003.XA
Other languages
Chinese (zh)
Inventor
檀冲 (Tan Chong)
王颖 (Wang Ying)
Current Assignee
Beijing Puppy Vacuum Cleaner Group Co Ltd
Original Assignee
Beijing Puppy Vacuum Cleaner Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Puppy Vacuum Cleaner Group Co Ltd
Priority to CN202110746003.XA
Publication of CN113467450A
Pending

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The disclosure relates to the technical field of intelligent robots, and provides an unmanned aerial vehicle control method and apparatus, computer equipment, and a storage medium. The method comprises the following steps: acquiring a front image and lidar data of the unmanned aerial vehicle on a travel path; classifying and identifying the front image by using an image recognition model; determining the specific position information of a glass obstacle when a glass obstacle is recognized in the front image; transmitting the specific position information to a path planning module to obtain path information; and controlling the travel path of the unmanned aerial vehicle based on the path information and the lidar data. By using image recognition to specially handle glass, which produces large errors under laser measurement, the disclosure avoids robot collisions, effectively improves traversal efficiency, and makes the robot more intelligent.

Description

Unmanned aerial vehicle control method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of intelligent robot technologies, and in particular, to a method and an apparatus for controlling an unmanned aerial vehicle, a computer device, and a storage medium.
Background
With the development of science and technology, mobile robots are widely applied in many fields and can take over a large amount of work from people: for example, a medicine delivery robot can deliver medicine to patients, a patrol robot can help security personnel patrol, and a cleaning robot can clean floors for people. The wide use of mobile robots of all kinds provides great convenience in daily life.
In the prior art, a sweeping robot generally carries a lidar as the sensor for capturing environmental information, enabling relatively accurate environment measurement and sensing. However, because lidar is based on optical detection, its beam passes through obstacles such as glass encountered while the robot is moving, so the measurement of glass obstacles is inaccurate. When the robot approaches glass, it cannot recognize the glass and collides with it directly, causing unnecessary collisions and avoidance maneuvers, which in turn degrade motion planning and cleaning efficiency.
Disclosure of Invention
In view of this, the embodiments of the present disclosure provide an unmanned aerial vehicle control method and apparatus, a computer device, and a storage medium, to solve the prior-art problems that lidar measures glass obstacles inaccurately and is prone to causing unnecessary collisions and avoidance maneuvers, which affect movement planning and cleaning efficiency.
In a first aspect of the embodiments of the present disclosure, an unmanned aerial vehicle control method is provided, including:
acquiring a front image and lidar data of the unmanned aerial vehicle on a travel path;
classifying and identifying the front image by using an image recognition model;
determining specific position information of the glass obstacle when a glass obstacle is recognized in the front image;
transmitting the specific position information to a path planning module to obtain path information;
and controlling the travel path of the unmanned aerial vehicle based on the path information and the lidar data.
In a second aspect of the embodiments of the present disclosure, an unmanned aerial vehicle control apparatus is provided, including:
the acquisition module is used for acquiring a front image and lidar data of the unmanned aerial vehicle on a travel path;
the classification module is used for classifying and identifying the front image by using the image recognition model;
the determining module is used for determining the specific position information of the glass obstacle when a glass obstacle is recognized in the front image;
the transmission module is used for transmitting the specific position information to the path planning module to obtain path information;
and the control module is used for controlling the travel path of the unmanned aerial vehicle based on the path information and the lidar data.
In a third aspect of the embodiments of the present disclosure, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above method when executing the computer program.
In a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, which stores a computer program, which when executed by a processor, implements the steps of the above-mentioned method.
Compared with the prior art, the embodiments of the disclosure have the following beneficial effects: a front image and lidar data of the unmanned aerial vehicle on the travel path are acquired; the front image is classified and identified with an image recognition model; when a glass obstacle is recognized in the front image, its specific position information is determined and transmitted to a path planning module to obtain path information; and the travel path of the unmanned aerial vehicle is controlled based on the path information and the lidar data. By using image recognition to specially handle glass, which produces large errors under laser measurement, the disclosure avoids robot collisions, effectively improves traversal efficiency, and makes the robot more intelligent.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present disclosure; other drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of an application scenario of an embodiment of the present disclosure;
FIG. 2 is a flowchart of an unmanned aerial vehicle control method provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart of another unmanned aerial vehicle control method provided by an embodiment of the present disclosure;
FIG. 4 is a block diagram of an unmanned aerial vehicle control apparatus provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
An unmanned aerial vehicle control method and an unmanned aerial vehicle control device according to the embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a scene schematic diagram of an application scenario of an embodiment of the present disclosure. The application scenario may include an unmanned robot 1, an image acquisition device 2 and a lidar 3, a server 4, and a network 5.
The unmanned robot 1 may be a mobile robot such as a medicine delivery robot, a patrol robot, and a cleaning robot, which is not limited by the embodiment of the present disclosure.
The image acquisition device 2 may be any of various devices used to capture an image of the path ahead of the unmanned robot 1 in the direction of travel, including but not limited to a wide-angle camera, a binocular camera, a charge-coupled device (CCD) camera, a wireless camera, a zoom camera, a bullet camera, a dome camera, a wide-dynamic-range camera, and the like. The image acquisition device 2 may be installed at any position on the unmanned robot 1, for example at the front, middle, or rear, which is not limited by the embodiment of the present disclosure. Further, a wireless communication module is provided inside the image acquisition device 2 to transmit the captured image information via a network to a processor provided in the unmanned robot 1 or to a server.
The lidar 3 is an optical sensor that uses an infrared laser beam to determine the distance between the sensor and nearby objects. In the disclosed embodiment, the lidar 3 is used to collect lidar data for the path ahead of the unmanned robot 1 in the direction of travel. The lidar 3 may be mounted anywhere on the unmanned robot 1, e.g., front, middle, rear, etc., and embodiments of the present disclosure are not limited thereto.
The server 4 may be a server providing various services, for example a backend server that receives requests from a terminal device with which it has established a communication connection; the backend server can receive and analyze such a request and generate a processing result. The server 4 may be a single server, a server cluster composed of several servers, or a cloud computing service center, which is not limited by the embodiment of the present disclosure.
The server 4 may be hardware or software. When the server 4 is hardware, it may be any of various electronic devices that provide services to the unmanned robot 1, the image acquisition device 2, and the lidar 3. When the server 4 is software, it may be implemented as multiple pieces of software or software modules that provide services to the unmanned robot 1, the image acquisition device 2, and the lidar 3, or as a single piece of software or software module providing those services, which is not limited by the embodiment of the present disclosure.
The network 5 may be a wired network using coaxial cable, twisted pair, or optical fiber, or a wireless network that interconnects communication devices without wiring, for example Bluetooth, Near Field Communication (NFC), or infrared, which is not limited by the embodiment of the present disclosure.
Taking the case where the processing is performed by the processor of the unmanned robot 1 as an example, the image acquisition device 2 and the lidar 3 may establish communication connections with the server 4 via the network 5 to receive or send information. Specifically, after the image acquisition device 2 captures an image of the path ahead of the unmanned robot 1 in the traveling direction and the lidar 3 collects lidar data for that path, they send the captured image and the lidar data to the processor via the network 5. The processor then classifies and identifies the acquired image, determines the specific position information of a glass obstacle when one is recognized in the front image, and controls the travel path of the unmanned robot 1 based on the path information and the lidar data.
It should be noted that the specific types, numbers, and combinations of the unmanned robot 1, the image acquisition device 2, the lidar 3, the server 4, and the network 5 may be adjusted according to the actual requirements of the application scenario, which is not limited by the embodiment of the present disclosure.
Fig. 2 is a flowchart of an unmanned aerial vehicle control method provided in an embodiment of the present disclosure. The method of fig. 2 may be performed by the server of fig. 1. As shown in fig. 2, the unmanned aerial vehicle control method includes:
s201, acquiring a front image and laser radar data of the unmanned aerial vehicle on a driving path;
s202, classifying and identifying the front image by using an image identification model;
s203, determining specific position information of the glass barrier under the condition that the glass barrier is recognized for the front image;
s204, transmitting the specific position information to a path planning module to obtain path information;
and S205, controlling the running path of the unmanned robot based on the path information and the laser radar data.
Specifically, the server can acquire the front image and lidar data of the unmanned aerial vehicle on the travel path in a wired or wireless manner. After obtaining them, the server classifies and identifies the front image with an image recognition model, and determines the specific position information of the glass obstacle when one is recognized in the front image. The server then transmits the specific position information to a path planning module to obtain path information, and controls the travel path of the unmanned aerial vehicle based on the path information and the lidar data.
Here, the lidar data may be data obtained by a lidar mounted on the unmanned aerial vehicle. Lidars fall into various categories; a sweeping robot generally uses a laser ranging radar, which emits a laser beam, receives the reflection after the beam hits the measured object, and computes the flight distance from the round-trip time difference to obtain the distance to the test point, as sketched below. The sweeping robot basically uses a single-line lidar, whose main characteristics are high scanning speed, high resolution, and high reliability, so its distance measurements are accurate, but it can only scan in a plane and cannot perform three-dimensional measurement. The data may also be point cloud data, which is a set of vectors in a three-dimensional coordinate system. Besides geometric positions, some point cloud data carries color information: a camera captures a color image, and the color (RGB) of the pixel at the corresponding position is assigned to the corresponding point in the cloud.
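As a minimal sketch of the time-of-flight principle described above, the snippet below computes a range from a round-trip time and converts a planar scan into 2D points; the scan layout and all names are illustrative assumptions, not the patent's actual data format.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """The beam travels out and back, so halve the round-trip distance."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def scan_to_points(ranges, angle_min, angle_increment):
    """Convert a single-line scan (one range per beam angle) to (x, y)
    points in the sensor frame."""
    return [
        (r * math.cos(angle_min + i * angle_increment),
         r * math.sin(angle_min + i * angle_increment))
        for i, r in enumerate(ranges)
    ]

# A round trip of about 6.67 nanoseconds corresponds to roughly one metre.
print(range_from_time_of_flight(6.67e-9))  # ~1.0
```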
The front image may be a picture or video of the travel path ahead, taken by a camera mounted on the unmanned aerial vehicle.
The image recognition model is obtained by training a network such as AlexNet, VGG19, ResNet-152, Inception-v4, or DenseNet. The server can input the front image into the image recognition model to obtain the category corresponding to the front image; a category may be animal, human, wall, glass obstacle, and so on. Glass obstacles include, but are not limited to, floor-to-ceiling windows, glass doors, and glass tea tables. A minimal inference sketch follows.
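The patent only states that a trained model assigns a category to the front image, so the class list, model choice, and preprocessing below are illustrative assumptions rather than the actual implementation.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Assumed category labels; the patent names these kinds of classes.
CLASSES = ["animal", "human", "wall", "glass_obstacle", "other"]

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Weights would come from the training procedure described later.
model = models.resnet18(num_classes=len(CLASSES))
model.eval()

def classify_front_image(path: str) -> str:
    """Return the predicted category of the front image."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return CLASSES[int(logits.argmax(dim=1))]
```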
The specific position information may be a three-dimensional coordinate representing the position of the identified glass obstacle in the space where the unmanned aerial vehicle and the glass obstacle are located.
The path planning module may implement a path planning algorithm. Such algorithms fall roughly into four types: traditional algorithms, graphical methods, intelligent biomimetic algorithms, and others. Traditional path planning algorithms include simulated annealing (SA), artificial potential field methods, fuzzy logic algorithms, and tabu search (TS). Graphical methods include C-space, grid, free-space, and Voronoi diagram approaches. Intelligent biomimetic algorithms include the ant colony algorithm (ACA), neural network algorithms, particle swarm optimization, and genetic algorithms (GA). A minimal grid-based example follows.
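The patent does not specify which planner the module uses; as an illustrative stand-in, here is a minimal A* search on a 2D occupancy grid where cells marked as glass are treated as blocked. The grid layout, 4-connectivity, and unit step costs are assumptions.

```python
import heapq

def astar(grid, start, goal):
    """grid[r][c] == 1 means blocked (e.g. a cell marked as glass);
    returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        r, c = current
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                tentative = g[current] + 1
                if tentative < g.get(nxt, float("inf")):
                    came_from[nxt] = current
                    g[nxt] = tentative
                    h = abs(nr - goal[0]) + abs(nc - goal[1])  # Manhattan
                    heapq.heappush(open_set, (tentative + h, nxt))
    return None  # no route around the obstacle

# Plan around a glass pane occupying the middle column.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (0, 2)))
```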
The path information may be a picture or text describing a route from the current position of the unmanned aerial vehicle past the glass obstacle.
The server can establish an initial three-dimensional map of the space where the unmanned aerial vehicle is located based on the lidar data, update that map based on the path information, and control the travel path of the unmanned aerial vehicle based on the updated map.
According to the technical scheme provided by the embodiment of the disclosure, a front image and lidar data of the unmanned aerial vehicle on the travel path are acquired; the front image is classified and identified with an image recognition model; when a glass obstacle is recognized in the front image, its specific position information is determined and transmitted to a path planning module to obtain path information; and the travel path of the unmanned aerial vehicle is controlled based on the path information and the lidar data. By using image recognition to specially handle glass, which produces large errors under laser measurement, the disclosure avoids robot collisions, effectively improves traversal efficiency, and makes the robot more intelligent.
In some embodiments, when the glass obstacle is recognized in the front image, the specific position information of the glass obstacle is acquired by converting the position information of the image capture device into the coordinate system of the unmanned robot.
Specifically, the image capture device may be any of various devices used to capture an image of the path ahead of the unmanned robot in the direction of travel, including but not limited to a wide-angle camera, a binocular camera, a charge-coupled device camera, a wireless camera, a zoom camera, a bullet camera, a dome camera, or a wide-dynamic-range camera. The coordinate system of the unmanned robot may be the three-dimensional coordinate system of the space in which it is located. The image capture device may be mounted on the unmanned robot. The server can register the three-dimensional coordinates of the image capture device in this coordinate system and, from the device's own attributes (such as the camera focal length), obtain the distance between the glass obstacle in the front image and the device, as well as the obstacle's size information, i.e. the length, width, and height of the glass obstacle.
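A minimal sketch of such a frame conversion, assuming a known camera mounting pose on the robot; the offset, the axis alignment, and all names below are illustrative assumptions rather than the patent's actual calibration.

```python
import numpy as np

def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Assumed mounting: camera 0.10 m ahead of and 0.05 m above the robot
# origin, with its axes aligned to the robot frame.
T_robot_camera = make_transform(np.eye(3), np.array([0.10, 0.0, 0.05]))

def camera_to_robot(point_camera: np.ndarray) -> np.ndarray:
    """Map a 3D point (x, y, z) from the camera frame to the robot frame."""
    homogeneous = np.append(point_camera, 1.0)
    return (T_robot_camera @ homogeneous)[:3]

# A glass pane detected 2 m in front of the camera:
print(camera_to_robot(np.array([2.0, 0.0, 0.0])))  # [2.1, 0.0, 0.05]
```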
According to the technical scheme provided by the embodiment of the disclosure, arranging the image capture device on the unmanned robot makes it possible to acquire vision-related information, partition the indoor space, identify the categories of specific objects, and perform three-dimensional measurement.
In some embodiments, the glass obstacle is marked on the space map of the unmanned aerial vehicle based on the specific position information.
Specifically, the space map of the unmanned aerial vehicle may be a map of the space in which it is located, obtained with the aid of GPS positioning. The server can mark the position represented by the specific position information on this map, taking the unmanned aerial vehicle's own position as the reference.
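As an illustrative sketch, the marking could amount to stamping a dedicated cell value into an occupancy grid; the resolution, the origin, and the distinct "glass" code are assumptions, since the patent does not fix a map representation.

```python
import numpy as np

GLASS = 2           # assumed cell code, distinct from free (0) and occupied (1)
RESOLUTION = 0.05   # assumed metres per cell

def mark_glass(space_map: np.ndarray, x: float, y: float,
               origin=(0.0, 0.0)) -> None:
    """Mark the map cell containing world position (x, y) as glass."""
    col = int((x - origin[0]) / RESOLUTION)
    row = int((y - origin[1]) / RESOLUTION)
    if 0 <= row < space_map.shape[0] and 0 <= col < space_map.shape[1]:
        space_map[row, col] = GLASS

space_map = np.zeros((200, 200), dtype=np.uint8)  # a 10 m x 10 m map
mark_glass(space_map, x=2.1, y=0.0)  # the pane located in the sketch above
```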
According to the technical scheme provided by the disclosure, marking the position represented by the specific position information on the space map of the unmanned aerial vehicle allows the position of the glass obstacle to be determined more clearly and accurately, enabling more precise avoidance.
In some embodiments, the image recognition model is trained by: acquiring a training sample set, wherein training samples comprise sample pictures and sample categories corresponding to the sample pictures; and taking a sample picture of a training sample in the training sample set as an input, taking a sample category corresponding to the input sample picture as an expected output, and training to obtain the image recognition model.
Specifically, the image recognition model represents the correspondence between images and categories, and the electronic device may train a model representing this correspondence in various ways. For instance, the electronic device may build a correspondence table from statistics over a large number of recorded images and their categories, and use that table as the image recognition model. The electronic device can then compare the front image with the entries in the table one by one; if an image in the table is the same as or similar to the front image, the category recorded for that entry is taken as the category of the front image.
Alternatively, the image recognition model may be obtained by performing the following training steps on the training sample set: input the sample images of at least one training sample into an initial machine learning model to obtain a category for each sample image; compare the category predicted for each sample image with the corresponding sample category; determine the prediction accuracy of the initial machine learning model from the comparison; and check whether the accuracy exceeds a preset accuracy threshold. If it does, take the initial machine learning model as the trained image recognition model; if it does not, adjust the parameters of the initial machine learning model, form a new training sample set from unused training samples, treat the adjusted model as the initial machine learning model, and perform the training steps again.
It is understood that after the training, the image recognition model can be used to characterize the correspondence between the sample images and the sample classes. The above-mentioned image recognition model may be a convolutional neural network model.
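The iterate-until-accurate procedure above could look like the following PyTorch loop; the data loader, the optimizer, and the accuracy threshold are illustrative assumptions, not settings from the patent.

```python
import torch
from torch import nn

def train_image_recognizer(model: nn.Module, loader,
                           accuracy_threshold: float = 0.95,
                           max_rounds: int = 50, lr: float = 1e-3) -> nn.Module:
    """Train until prediction accuracy exceeds the preset threshold."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(max_rounds):
        correct = total = 0
        for images, labels in loader:        # sample pictures and categories
            logits = model(images)
            loss = loss_fn(logits, labels)   # predicted vs. sample category
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                 # adjust model parameters
            correct += (logits.argmax(dim=1) == labels).sum().item()
            total += labels.numel()
        if correct / total > accuracy_threshold:  # accurate enough: stop
            break
    return model
```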
According to the technical scheme provided by the embodiment of the disclosure, the front image is identified through the model, and the accuracy and the speed of image identification can be improved.
In some embodiments, the lidar data of the unmanned aerial vehicle is acquired from a lidar mounted on the unmanned aerial vehicle, and the front image of the unmanned aerial vehicle on the travel path is acquired from an image capture device installed on the unmanned aerial vehicle.
In particular, radar, also known as radiolocation, is an electronic device that detects objects with electromagnetic waves. A radar emits electromagnetic waves toward a target and receives the echo, from which it obtains information such as the distance from the target to the emission point, the range rate (radial speed), the azimuth, and the altitude. Radars come in many types: by signal form, pulse radar, continuous-wave radar, pulse-compression radar, frequency-agile radar, and so on; by angle-tracking mode, monopulse radar, conical-scan radar, hidden-conical-scan radar, and so on; by measured target parameters, height-finding radar, two-coordinate radar, multistatic radar, and so on; and by frequency band, over-the-horizon radar, microwave radar, millimeter-wave radar, lidar, and so on. Preferably, in the embodiment of the present disclosure, the radar is a laser ranging radar, which may be installed at any position on the unmanned aerial vehicle; the embodiment of the present disclosure does not limit this.
According to the technical scheme provided by the embodiment of the disclosure, because the laser ranging radar is a single-line lidar, it offers high scanning speed, high resolution, and strong reliability, yielding more accurate distance measurements and precision.
In some embodiments, the path information is obtained using specific location information and a path planning algorithm.
Specifically, the server may compute the path information by feeding the specific position information as a parameter into a path planning algorithm.
According to the technical scheme provided by the embodiment of the disclosure, path information calculated by the algorithm yields a more accurate planned route.
In some embodiments, the lidar includes: single-beam lidar and multi-beam lidar.
In particular, a single-beam lidar is characterized by high scanning speed, high resolution, and high reliability, so its distance measurements are accurate, but it can only scan in a plane and cannot perform three-dimensional measurement. A multi-beam lidar can recover the height of objects and acquire a 3D scan of the surrounding environment, and is mainly applied in the field of autonomous driving.
All of the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, which are not described again here.
Fig. 3 is a flowchart of another unmanned aerial vehicle control method provided in an embodiment of the present disclosure. The method of fig. 3 may be performed by a server. As shown in fig. 3, the unmanned aerial vehicle control method includes:
s301, acquiring laser radar data of the unmanned aerial vehicle detected by a laser radar installed on the unmanned aerial vehicle;
s302, acquiring a front image of the unmanned aerial vehicle on a driving path captured by image acquisition equipment arranged on the unmanned aerial vehicle;
s303, classifying and identifying the front image by using an image identification model;
s304, determining the specific position information of the glass barrier under the condition that the glass barrier is identified for the front image;
s305, transmitting the specific position information to a path planning module to obtain path information;
s306, controlling the running path of the unmanned robot based on the path information and the laser radar data;
and S307, marking the glass barrier on a space map of the unmanned aerial vehicle according to the specific position information.
Specifically, the lidar installed on the unmanned aerial vehicle sends the lidar data it collects in the direction of travel to a server, and the image capture device installed on the unmanned aerial vehicle captures a front image of the travel path and sends it to the server. After receiving the front image and the lidar data, the server classifies and identifies the front image with an image recognition model and, when a glass obstacle is recognized, determines its specific position information. The server then transmits the specific position information to a path planning module to obtain path information, controls the travel path of the unmanned aerial vehicle based on the path information and the lidar data, and finally marks the glass obstacle on the space map of the unmanned aerial vehicle according to the specific position information.
According to the technical scheme provided by the embodiment of the disclosure, a front image and lidar data of the unmanned aerial vehicle on the travel path are acquired; the front image is classified and identified with an image recognition model; when a glass obstacle is recognized in the front image, its specific position information is determined and transmitted to a path planning module to obtain path information; and the travel path of the unmanned aerial vehicle is controlled based on the path information and the lidar data. By using image recognition to specially handle glass, which produces large errors under laser measurement, the disclosure avoids robot collisions, effectively improves traversal efficiency, and makes the robot more intelligent.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 4 is a schematic diagram of an unmanned aerial vehicle control apparatus provided by an embodiment of the present disclosure. As shown in fig. 4, the unmanned aerial vehicle control apparatus includes:
the acquiring module 401 is configured to acquire a front image and laser radar data of the unmanned aerial vehicle on a driving path;
a classification module 402, configured to classify and identify the front image by using an image recognition model;
a determination module 403, configured to determine the specific position information of the glass obstacle when a glass obstacle is recognized in the front image;
a transmission module 404, configured to transmit the specific position information to the path planning module to obtain path information;
and a control module 405, configured to control the travel path of the unmanned aerial vehicle based on the path information and the lidar data.
According to the technical scheme provided by the embodiment of the disclosure, a front image and lidar data of the unmanned aerial vehicle on the travel path are acquired; the front image is classified and identified with an image recognition model; when a glass obstacle is recognized in the front image, its specific position information is determined and transmitted to a path planning module to obtain path information; and the travel path of the unmanned aerial vehicle is controlled based on the path information and the lidar data. By using image recognition to specially handle glass, which produces large errors under laser measurement, the disclosure avoids robot collisions, effectively improves traversal efficiency, and makes the robot more intelligent.
In some embodiments, when the glass obstacle is recognized in the front image, the determination module 403 of fig. 4 acquires the specific position information of the glass obstacle by converting the position information of the image capture device into the coordinate system of the unmanned robot.
In some embodiments, the unmanned aerial vehicle control apparatus further comprises: a marking module 406, configured to mark the glass obstacle on the space map of the unmanned aerial vehicle based on the specific position information.
In some embodiments, the image recognition model is trained by: acquiring a training sample set, wherein training samples comprise sample pictures and sample categories corresponding to the sample pictures; and taking a sample picture of a training sample in the training sample set as an input, taking a sample category corresponding to the input sample picture as an expected output, and training to obtain the image recognition model.
In some embodiments, the acquisition module 401 of fig. 4 acquires the lidar data of the unmanned aerial vehicle detected by a lidar mounted on the unmanned aerial vehicle, and obtains the front image of the unmanned aerial vehicle on the travel path captured by an image capture device installed on the unmanned aerial vehicle.
In some embodiments, the transmission module 404 of fig. 4 obtains the path information using specific location information and a path planning algorithm.
In some embodiments, the lidar includes: single-beam lidar and multi-beam lidar.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure.
Fig. 5 is a schematic diagram of a computer device 5 provided by an embodiment of the present disclosure. As shown in fig. 5, the computer device 5 of this embodiment includes: a processor 501, a memory 502 and a computer program 503 stored in the memory 502 and operable on the processor 501. The steps in the various method embodiments described above are implemented when the processor 501 executes the computer program 503. Alternatively, the processor 501 implements the functions of the respective modules/units in the above-described respective apparatus embodiments when executing the computer program 503.
Illustratively, the computer program 503 may be partitioned into one or more modules/units, which are stored in the memory 502 and executed by the processor 501 to accomplish the present disclosure. One or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 503 in the computer device 5.
The computer device 5 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computer devices. Computer device 5 may include, but is not limited to, a processor 501 and a memory 502. Those skilled in the art will appreciate that fig. 5 is merely an example of a computer device 5 and is not intended to limit the computer device 5 and may include more or fewer components than shown, or some of the components may be combined, or different components, e.g., the computer device may also include input output devices, network access devices, buses, etc.
The processor 501 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 502 may be an internal storage unit of the computer device 5, for example, a hard disk or memory of the computer device 5. The memory 502 may also be an external storage device of the computer device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the computer device 5. Further, the memory 502 may include both an internal storage unit of the computer device 5 and an external storage device. The memory 502 is used to store the computer program and the other programs and data required by the computer device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus/computer device and method may be implemented in other ways. For example, the apparatus/computer device embodiments described above are merely illustrative: the division into modules or units is only a division of logical functions, and in actual implementation there may be other ways of division; multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, the present disclosure may implement all or part of the flow of the methods in the above embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program may comprise computer program code in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying computer program code: a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice within the jurisdiction; for example, in some jurisdictions, computer-readable media may not include electrical carrier signals or telecommunications signals.
The above examples are only intended to illustrate the technical solutions of the present disclosure, not to limit them; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present disclosure, and are intended to be included within the scope of the present disclosure.

Claims (10)

1. An unmanned aerial vehicle control method is characterized by comprising the following steps:
acquiring a front image and lidar data of the unmanned aerial vehicle on a travel path;
classifying and identifying the front image by using an image recognition model;
determining specific position information of the glass obstacle when a glass obstacle is recognized in the front image;
transmitting the specific position information to a path planning module to obtain path information;
controlling the travel path of the unmanned aerial vehicle based on the path information and the lidar data.
2. The method according to claim 1, wherein the determining of the specific position information of the glass obstacle when the glass obstacle is recognized in the front image comprises:
when the glass obstacle is recognized in the front image, acquiring the specific position information of the glass obstacle by converting the position information of the image acquisition device into the coordinate system of the unmanned aerial vehicle.
3. The method of claim 1, further comprising:
marking the glass obstacle on a space map of the unmanned aerial vehicle based on the specific position information.
4. The method of claim 1, wherein the image recognition model is trained by:
acquiring a training sample set, wherein the training sample comprises a sample picture and a sample category corresponding to the sample picture;
and taking the sample picture of the training sample in the training sample set as input, taking the sample category corresponding to the input sample picture as expected output, and training to obtain the image recognition model.
5. The method of claim 1, wherein the acquiring of the front image and the lidar data of the unmanned aerial vehicle on the travel path comprises:
acquiring the lidar data of the unmanned aerial vehicle detected by a lidar installed on the unmanned aerial vehicle;
and acquiring the front image of the unmanned aerial vehicle on the travel path captured by an image acquisition device installed on the unmanned aerial vehicle.
6. The method of claim 1, wherein the transmitting of the specific position information to a path planning module to obtain path information comprises:
obtaining the path information by using the specific position information and a path planning algorithm.
7. The method of claim 5, wherein the lidar comprises: single-beam lidar and multi-beam lidar.
8. An unmanned aerial vehicle control device, comprising:
the acquisition module is used for acquiring a front image and lidar data of the unmanned aerial vehicle on a travel path;
the classification module is used for classifying and identifying the front image by using an image recognition model;
the determining module is used for determining the specific position information of the glass obstacle when the glass obstacle is recognized in the front image;
the transmission module is used for transmitting the specific position information to the path planning module to obtain path information;
and the control module is used for controlling the travel path of the unmanned aerial vehicle based on the path information and the lidar data.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202110746003.XA 2021-07-01 2021-07-01 Unmanned aerial vehicle control method and device, computer equipment and storage medium Pending CN113467450A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110746003.XA CN113467450A (en) 2021-07-01 2021-07-01 Unmanned aerial vehicle control method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110746003.XA CN113467450A (en) 2021-07-01 2021-07-01 Unmanned aerial vehicle control method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113467450A true CN113467450A (en) 2021-10-01

Family

ID=77877456

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110746003.XA Pending CN113467450A (en) 2021-07-01 2021-07-01 Unmanned aerial vehicle control method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113467450A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130064636A (en) * 2011-12-08 2013-06-18 전자부품연구원 Gondola robot and method for controlling thereof
CN107472135A (en) * 2016-06-07 2017-12-15 松下知识产权经营株式会社 Video generation device, image generating method and program
WO2019119350A1 (en) * 2017-12-19 2019-06-27 深圳市海梁科技有限公司 Obstacle recognition method and apparatus for unmanned vehicle, and terminal device
CN108665541A (en) * 2018-04-09 2018-10-16 北京三快在线科技有限公司 A kind of ground drawing generating method and device and robot based on laser sensor
CN109062224A (en) * 2018-09-06 2018-12-21 深圳市三宝创新智能有限公司 Robot food delivery control method, device, meal delivery robot and automatic food delivery system
CN109602345A (en) * 2019-01-10 2019-04-12 轻客小觅智能科技(北京)有限公司 A kind of vision sweeping robot and its barrier-avoiding method
CN111880532A (en) * 2020-07-13 2020-11-03 珠海格力电器股份有限公司 Autonomous mobile device, method, apparatus, device, and storage medium thereof
CN111982124A (en) * 2020-08-27 2020-11-24 华中科技大学 Deep learning-based three-dimensional laser radar navigation method and device in glass scene
CN112051588A (en) * 2020-09-03 2020-12-08 重庆大学 Glass identification system with multi-sensor fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Zhiqiang et al.: "Comprehensive Experimental Course for the Mechanical Engineering Major" (机械专业综合实验教程), Wuhan: Wuhan University Press, 31 May 2021, pages 83-85 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114077252A (en) * 2021-11-16 2022-02-22 中国人民解放军陆军工程大学 Robot collision obstacle distinguishing device and method
CN114077252B (en) * 2021-11-16 2023-09-12 中国人民解放军陆军工程大学 Robot collision obstacle distinguishing device and method

Similar Documents

Publication Publication Date Title
KR102032070B1 (en) System and Method for Depth Map Sampling
US10490079B2 (en) Method and device for selecting and transmitting sensor data from a first motor vehicle to a second motor vehicle
Mahlisch et al. Sensorfusion using spatio-temporal aligned video and lidar for improved vehicle detection
US11734935B2 (en) Transferring synthetic lidar system data to real world domain for autonomous vehicle training applications
CN110782465B (en) Ground segmentation method and device based on laser radar and storage medium
CN111814752B (en) Indoor positioning realization method, server, intelligent mobile device and storage medium
CN111045000A (en) Monitoring system and method
CN112232139B (en) Obstacle avoidance method based on combination of Yolo v4 and Tof algorithm
CN110136186B (en) Detection target matching method for mobile robot target ranging
CN113160327A (en) Method and system for realizing point cloud completion
CN111652067A (en) Unmanned aerial vehicle identification method based on image detection
CN110471086A (en) A kind of radar survey barrier system and method
EP4024974A1 (en) Data processing method and apparatus, chip system, and medium
CN115147333A (en) Target detection method and device
CN113467450A (en) Unmanned aerial vehicle control method and device, computer equipment and storage medium
CN113160292B (en) Laser radar point cloud data three-dimensional modeling device and method based on intelligent mobile terminal
CN109708659B (en) Distributed intelligent photoelectric low-altitude protection system
WO2021087751A1 (en) Distance measurement method, distance measurement device, autonomous moving platform, and storage medium
Rana et al. Comparative study of Automotive Sensor technologies used for Unmanned Driving
CN113071498B (en) Vehicle control method, device, system, computer device and storage medium
CN116013067A (en) Vehicle data processing method, processor and server
CN116700228A (en) Robot path planning method, electronic device and readable storage medium
CN113792645A (en) AI eyeball fusing image and laser radar
TWI843116B (en) Moving object detection method, device, electronic device and storage medium
CN117423271B (en) Unmanned aerial vehicle detection and countering method and detection and countering system thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination