WO2023108578A1 - An Expert Flying Robot System for Autonomous Field Operations (一种野外自主作业的专家型飞行机器人系统)


Info

Publication number: WO2023108578A1
Authority: WIPO (PCT)
Application number: PCT/CN2021/138988
Other languages: English (en), French (fr)
Prior art keywords: expert, camera, robot system, field, flying robot
Inventor: 陈仕东 (Chen Shidong)
Original Assignee / Applicant: 赛真达国际有限公司
Priority: PCT/CN2021/138988 (published as WO2023108578A1); CN202180007514.8A (published as CN116829460A)

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B64: AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C: AEROPLANES; HELICOPTERS
    • B64C39/00: Aircraft not otherwise provided for
    • B64C39/02: Aircraft not otherwise provided for, characterised by special use

Definitions

  • The invention relates to the field of flying robots, in particular to an expert flying robot system for autonomous field operations.
  • Schistosomiasis is a major natural-focus infectious disease that seriously endangers the health of humans and animals. It can infect humans and more than 40 species of mammals and has been prevalent throughout recorded history. Schistosomiasis occurs in tropical and subtropical regions between 34° north and 34° south latitude, and is distributed across 76 countries in six major regions of the world.
  • In China, Schistosoma is mainly distributed in the Yangtze River Basin, across 409 counties (cities, districts) in 12 provinces (municipalities, autonomous regions) south of the Yangtze River.
  • Schistosomiasis eggs were found in the liver and intestines of a Western Han dynasty corpse unearthed from the Mawangdui Han Tomb in Changsha, Hunan, proving that schistosomiasis existed in China at least 2,100 years ago, during the Han Dynasty.
  • Schistosomiasis is highly infectious: human or animal skin can be infected within 10 seconds of contact with even half a drop of infested water (for example while wading, washing vegetables, or drawing drinking water). To date there is no effective vaccine anywhere in the world, reinfection can occur even after a cure, and epidemics have recurred in many places.
  • Oncomelania is the only intermediate host of the Schistosoma japonicum endemic to China, so the effective control strategy for schistosomiasis is to eliminate Oncomelania snails and thereby block the transmission route. Oncomelania is a mollusc with separate male and female sexes, amphibious between land and water, consisting of a shell and a soft body.
  • The front part of the soft body comprises the head, neck, foot, and mantle; the rear part contains the viscera.
  • Snails with longitudinal ribs on the shell surface are called "ribbed-shell snails"; their shells are about 10 mm long and 4 mm wide, and they live in lake, marsh, or water-network regions. Snails without such ribs, about 6 mm long and 3 mm wide, are more common in hilly areas.
  • Communities in epidemic areas must invest substantial financial, human, and material resources in snail prevention and control.
  • A necessary palliative measure is large-scale spraying of the WHO-approved molluscicide niclosamide to control the distribution range and density of Oncomelania.
  • However, niclosamide is moderately toxic to aquatic organisms. Released in large quantities, it often kills fish and shrimp over large areas downstream of the treated zone, severely harming the ecological environment and agricultural production. The resulting compensation payments are often enormous, leaving the prevention and control work in a dilemma.
  • Pest control is one of the three key links, alongside breeding and fertilization, by which modern high-efficiency agriculture achieves high and stable yields. Today the simplest and most widely used pest-control method is for agricultural operators to apply various insecticides regularly and at full coverage, without expert diagnosis.
  • The advantage is that no agricultural expert diagnosis is required; the disadvantage is that blind, carpet-style, large-scale indiscriminate pesticide application has caused serious food-safety problems, ecological pollution, water pollution, embryo teratogenicity, and other harms that modern agriculture must solve.
  • The solution lies in efficient pest-control methods, whose key points are the same as those for schistosomiasis prevention and control described above.
  • The first is precise diagnosis, and the second is precise pesticide application. Agricultural experts must be sent to inspect, diagnose, and take samples in the fields and orchards; if the diagnosis is confirmed, agricultural workers are organized to release pesticides according to the pests' distribution and density. The key is to find small diseased areas as early as possible and eliminate them in time.
  • The purpose of the present invention is to address the problems in the prior art by providing an expert flying robot system for autonomous field operations, used in professional operation domains to replace human experts: going deep into the field and automatically performing patrol, diagnosis, pick-up, release, and other operations.
  • An expert flying robot system for autonomous field operations includes an unmanned aerial vehicle (UAV) platform, a computer vision system, a manipulator system, a high-precision close-range automatic flight driving system, and an automatic professional operation system;
  • The computer vision system includes a camera and a vision processor connected to it;
  • The manipulator system includes one or more manipulator arms installed on the UAV platform;
  • The high-precision close-range automatic flight driving system includes a high-precision positioning subsystem, an obstacle detection subsystem, and a three-dimensional path planning subsystem;
  • The automatic professional operation system includes a computer expert system supporting specific on-site operations, together with professional operation attachments, and realizes on-site operations in cooperation with the computer vision system and the manipulator system.
  • The UAV platform is an existing general-purpose UAV, including a flight machinery subsystem, a camera, a flight control subsystem, a wireless communication subsystem, a positioning subsystem, and a power supply subsystem.
  • The camera includes one or more main cameras arranged on the drone body and an operation camera on the manipulator arm.
  • The camera is a monocular camera, a binocular stereo camera, or a multi-lens surround-view camera.
  • The vision processor is used for specific-object recognition and ground and water surface recognition in professional operations, as well as for calculating object distance, measuring object size, and calculating the distance to the ground or water surface.
  • The vision processor includes an artificial neural network, which is trained through deep learning on specific objects in the relevant application fields and then recognizes those objects in pictures taken by the on-site camera.
  • The artificial neural network is a convolutional neural network or a network derived from a convolutional neural network.
  • The convolutional neural network or derived network is trained using the backpropagation supervised learning method.
  • A pair of identical cameras installed in parallel, namely a left camera and a right camera, form binocular stereo vision. The same scene point is incident on the optical centers of the left and right cameras at two different angles and is mapped to a pair of pixels at different positions on the left and right image planes. The positional deviation of this pixel pair on the image planes, or equivalently the deviation of the incident angles, is called the disparity, and the distance of the scene point is calculated from its disparity, completing the distance measurement.
  • Because the incident-angle information of the scene point is preserved in the two-dimensional plane images, once its distance is obtained the scene point can be located in the three-dimensional real world.
  • The manipulator arm is provided with a manipulator device, and both the arm and the device are provided with pressure sensors for detecting the rigidity or flexibility of objects on site.
  • The high-precision positioning subsystem includes satellite navigation positioning, inertial navigation positioning, visual positioning, and local wireless positioning, used to determine the operation area and the patrol route within it, determine the latitude and longitude of sampling points, and produce maps of epidemic-area survey points and statistical maps of distribution extent and density.
  • The obstacle detection subsystem includes the visual depth perception of the main camera and the operation camera, used to measure the distance of surrounding obstacles and determine the safe flying space.
  • The three-dimensional path planning subsystem includes the following steps:
  • Step (2): according to the operation surface and the operation range, for each point on the two-dimensional horizontal patrol route, determine the three-dimensional point above it at the same relative height from the ground or water surface, obtaining a three-dimensional path; then judge whether there is an obstacle on this path. If there is none, perform contour cruise scanning along this route; if there is, go to step (3);
  • The computer expert system adopts a classical structure, including a knowledge base and an inference engine.
  • In one embodiment, each knowledge base has its own independent inference engine: a patrol inference engine, a diagnosis inference engine, a pick-up inference engine, and a release inference engine are established.
  • Separate knowledge bases are established for the patrol, diagnosis, pick-up, and release operations: a patrol knowledge base, a diagnosis knowledge base, a pick-up knowledge base, and a release knowledge base.
  • In another embodiment, the knowledge bases share a common inference engine.
  • In a further embodiment, the computer expert system adopts an artificial neural network structure: the patrol, diagnosis, pick-up, and release operations each establish their own artificial neural network expert system, taking each operation's reasoning and decision-making conclusions and actions as outputs, which are then comprehensively integrated across the networks into a complete multi-task artificial neural network expert system.
  • The professional operation attachments include one or more sample storage containers for sampling and item storage containers for release.
  • The present invention combines artificial intelligence with unmanned equipment to form automatic unmanned equipment capable of expert-level, independent AI decision-making. It replaces human experts, goes deep into the field on a large scale, and automatically performs patrol, diagnosis, pick-up, release, and other major operations safely, precisely, and efficiently.
  • Fig. 1 is a structural diagram of the system;
  • Fig. 2 is an architecture diagram of artificial neural network nodes;
  • Fig. 3 shows the structure of the first expert system;
  • Fig. 4 shows the structure of the second expert system.
  • The present invention provides an expert flying robot system 100 for autonomous field operations, which replaces human experts in professional operation domains: it goes deep into the field and automatically patrols, diagnoses, picks up, and releases.
  • The system of the present invention includes a UAV platform 110, a computer vision system 120, a robotic arm system 130, a high-precision close-range automatic flight driving system 140, an automatic professional operation system 150, and so on, as shown in Fig. 1.
  • The UAV platform 110 is an existing general-purpose unmanned aerial vehicle, widely used in applications such as aerial photography, and generally includes a flight machinery subsystem, cameras, a flight control subsystem, a wireless communication subsystem, a positioning subsystem, a power supply subsystem, and so on.
  • Its main function is to accept flight driving instructions from the automatic flight driving system 140, control its own flight attitude, maintain stability, fly or hover along the specified route, and meet the operation requirements.
  • The computer vision system 120 includes a camera and a vision processor connected to it. It is the basic functional system of the invention: the visual recognition and measurement information it generates is used throughout, for the high-precision positioning and obstacle detection of the automatic flight driving system 140, the expert-system reasoning of the automatic professional operation system 150, the visual servoing of the robotic arm system 130, and so on.
  • The cameras include one or more main cameras on the drone body, operation cameras on the robotic arm, and so on. They may be monocular cameras, binocular stereo cameras, or multi-lens surround-view cameras, and may be visible-light cameras, infrared cameras, or the like.
  • The vision processor is used for specific-object recognition and ground and water surface recognition in professional operations, as well as for calculating object distance, measuring object size, calculating the distance to the ground or water surface, and so on.
  • The vision processor includes an artificial neural network trained by deep learning on specific objects in the relevant application fields, which can then recognize those objects in the pictures taken by the on-site camera.
  • In the schistosomiasis embodiment, the specific objects include snails and human and livestock feces; in an embodiment for the control of agricultural diseases and insect pests, the specific objects include the pests to be diagnosed, crops, and the characteristics of injured crops (such as semicircular notches, perforations, curling, and discoloration on leaf edges and surfaces).
  • An artificial neural network is a computing model: a nonlinear, adaptive information-processing system composed of a large number of interconnected processing units, as shown in Fig. 2.
  • The processing units in the network are also called nodes or neurons.
  • Each neuron applies a specific output function, called an activation function.
  • Each connection between two neurons carries a weighted value for the signal passing through it, called a weight.
  • The connection weights between neurons reflect the connection strength between units; the representation and processing of information is embodied in the connection relationships of the network's processing units.
  • The output of a neuron is produced by passing the weighted sum of all its input connections through the activation function.
  • Processing units fall into three categories: input units, output units, and hidden units.
  • The input units accept signals and data from the outside world; the output units deliver the system's processing results; the hidden units lie between the input and output units and cannot be observed from outside the system.
  • The artificial neural network is divided into layers from input to output: the layer composed of input units is called the input layer, the layers composed of hidden units are called hidden layers or intermediate layers, and the layer composed of output units is called the output layer.
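The weighted-sum-through-activation behaviour of a single unit, and layer-by-layer evaluation, can be sketched as follows (a minimal illustration; the function names and the choice of sigmoid activation are assumptions, not taken from the patent):

```python
import math

def neuron_output(inputs, weights, bias):
    """One processing unit: the weighted sum of its inputs plus a bias,
    passed through an activation function (here, the sigmoid)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, hidden_layer, output_unit):
    """A tiny input -> hidden -> output network, evaluated layer by layer.
    hidden_layer is a list of (weights, bias) pairs, one per hidden unit;
    output_unit is the (weights, bias) of the single output unit."""
    hidden = [neuron_output(x, w, b) for w, b in hidden_layer]
    w_out, b_out = output_unit
    return neuron_output(hidden, w_out, b_out)
```

With a zero weighted sum the sigmoid returns 0.5; any real-valued weighted sum is squashed into (0, 1).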
  • The present invention adopts a convolutional neural network (CNN) or a derived network to recognize images.
  • Such an artificial neural network for image processing requires a large number of nodes and intermediate layers.
  • An artificial neural network with many layers is called a deep artificial neural network and contains a large number of parameters.
  • Deep learning is the machine learning method used to train deep artificial neural networks.
  • The present invention employs the supervised learning method of backpropagation. First, sample images are manually annotated to identify the specific objects: in the snail-identification embodiment, experts manually mark the snails in the sample images; in the pest-identification embodiment, experts manually mark the pests. The labeled sample images are then fed into the network, the network output is obtained, and the error between the network output and the labeled standard output is calculated. Starting from the output layer, the network parameters are adjusted to reduce the error; the penultimate layer (the last intermediate layer) is then adjusted, and so on backward layer by layer until all parameters of the network have been optimized. After training on a large number of sample images the network parameters converge, and a fully trained deep artificial neural network can achieve a recognition accuracy comparable to, or even higher than, that of human experts.
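As a concrete illustration of this layer-by-layer error correction, here is a minimal NumPy sketch of backpropagation on a toy XOR dataset (a stand-in for the expert-labeled images; the network size, learning rate, and iteration count are illustrative assumptions, not values from the patent):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy labeled samples (stand-in for expert-labeled images): the XOR problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units, one output unit.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(5000):
    # Forward pass: compute the network output for all samples.
    H = sigmoid(X @ W1 + b1)
    P = sigmoid(H @ W2 + b2)
    # Backward pass: error signal at the output layer first ...
    dP = P - Y                        # cross-entropy gradient at the output
    # ... then pushed back to the last intermediate layer.
    dH = (dP @ W2.T) * H * (1 - H)
    # Adjust parameters layer by layer to reduce the error.
    W2 -= lr * H.T @ dP;  b2 -= lr * dP.sum(axis=0)
    W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(axis=0)

predictions = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

After training, `predictions` reproduces the XOR labels, the miniature analogue of the network matching the experts' annotations.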
  • When a convolutional neural network recognizes a specific object in an image, the image must have a sufficiently high spatial resolution: the size of the object in the image (the number of horizontal or vertical pixels it spans) must be large enough.
  • The object's size in the image is generally required to be on the order of 100 pixels.
  • The size of a snail in the real world does not exceed 1 cm. Taking 100 pixels for the snail in the image as an example, the spatial resolution of the camera image must be no coarser than 0.1 mm/pixel.
  • For a mainstream camera with 2K pixels and a 90-degree field of view, the field of view (i.e., the image width) then covers only 20 cm of ground or water surface, and the camera must be no more than 10 cm from the ground or water surface.
  • For a 4K camera, the field of view covers 40 cm of ground or water surface, and the camera must be no more than 20 cm away.
  • For a camera with a 45-degree field of view, the coverage is the same but the permissible camera distance roughly doubles: about 20 cm for the 2K camera and about 40 cm for the 4K camera.
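The resolution arithmetic above follows directly from pinhole geometry; a small sketch (the function name is illustrative):

```python
import math

def swath_and_max_height(pixels_across, fov_degrees, mm_per_pixel):
    """Ground swath width implied by a required spatial resolution, and the
    maximum camera distance at which a pinhole camera still achieves it.

    swath = pixels * mm/pixel; also swath = 2 * d * tan(fov/2),
    so d = swath / (2 * tan(fov/2)). Returns (swath_mm, max_distance_mm).
    """
    swath_mm = pixels_across * mm_per_pixel
    d_mm = swath_mm / (2.0 * math.tan(math.radians(fov_degrees) / 2.0))
    return swath_mm, d_mm
```

For a 2K sensor (about 2000 pixels) at 0.1 mm/pixel and a 90-degree field of view this gives a 20 cm swath and a 10 cm maximum height, matching the figures above; narrowing to a 45-degree field of view permits somewhat more than double the distance for the same swath.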
  • Thus the distance from the camera to the specific object, or its height above the ground, is tens of centimeters, a regime of "macro staring". This confirms that existing drones designed for unobstructed high-altitude, high-speed flight cannot be used for Oncomelania patrol and diagnosis, let alone sampling and pick-up.
  • The UAV of the present invention therefore adopts high-precision close-range automatic flight driving to avoid obstacles (automatically avoiding obstacles in all kinds of complex natural terrain, such as plants on the ground or water surface, raised earth and stones, poles, and wires) and to approach the ground or water surface.
  • Oncomelania patrol, diagnosis, and sampling can only be completed by flying low and close to the ground, then using the robotic arm to get still closer to the ground or water surface, and finally scanning and observing at a height of tens of centimeters.
  • The vision processor of the present invention is also used for ranging, positioning, and three-dimensional measurement.
  • The basic imaging model of existing cameras is the pinhole camera model, which maps each scene point in the real world along a straight line through the optical center to the corresponding pixel of the image plane, mapping the three-dimensional real world into a two-dimensional plane image. After imaging, the incident-angle information of each scene point is preserved but its depth information is lost, so a single image cannot be used for ranging, positioning, or three-dimensional measurement.
  • In one embodiment, the system adopts a pair of identical cameras installed in parallel, namely a left camera and a right camera, to form binocular stereo vision.
  • A scene point in the real world is mapped, along straight lines through the optical centers of the left and right cameras, to corresponding pixels on the two image planes, so the three-dimensional real world is mapped into a pair of two-dimensional plane images.
  • When the distance of the scene point is finite (that is, less than infinity), the same scene point is incident on the two optical centers at two different angles and is mapped to a pair of pixels at different positions on the left and right image planes.
  • The positional deviation of this pixel pair on the image planes, or equivalently the deviation of the incident angles, is called the disparity.
  • From the disparity, the distance of the scene point can be computed by inverse calculation; that is, the distance information of the scene point is obtained and the distance measurement is completed. Because the incident-angle information of the scene point is preserved in the two-dimensional images, once its distance is obtained the scene point can be located in the three-dimensional real world. Furthermore, combined with visual recognition, by locating each scene point of a specific object in the three-dimensional real world, three-dimensional measurement of the object can be completed, such as measuring the length and width of a snail.
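The inverse relation between disparity and distance can be sketched as follows for a rectified camera pair (the focal length in pixels, baseline, and principal point are illustrative parameters, not values from the patent):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Binocular stereo ranging: Z = f * B / d for a rectified pinhole pair.
    Zero (or negative) disparity means the scene point is at infinity."""
    if disparity_px <= 0:
        return float("inf")
    return focal_px * baseline_m / disparity_px

def locate_scene_point(u, v, disparity_px, focal_px, baseline_m, cx, cy):
    """Back-project a left-image pixel (u, v) into 3-D camera coordinates,
    combining the preserved incident angle with the recovered depth."""
    Z = depth_from_disparity(focal_px, baseline_m, disparity_px)
    X = (u - cx) * Z / focal_px
    Y = (v - cy) * Z / focal_px
    return X, Y, Z
```

Given two 3-D positions recovered this way, e.g. the two ends of a recognized snail, the Euclidean distance between them gives its real-world length.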
  • In another embodiment, the system uses a single monocular camera with the well-known SLAM (Simultaneous Localization and Mapping) algorithm: provided the external scene is stationary, the monocular camera is moved to different positions to image the same scene point, obtaining a pair of two-dimensional plane images equivalent to binocular imaging, and completing ranging, positioning, and three-dimensional measurement.
  • The robotic arm system 130 includes one or more robotic arms installed on the UAV platform.
  • Each robotic arm includes a manipulator device.
  • The arm's manipulator device can be exchanged for a variety of devices for operations such as poking, grabbing, scooping, and releasing.
  • The robotic arm and its manipulator devices carry pressure sensors, giving the arm a sense of touch for detecting the rigidity or flexibility of objects on site. In the expert system described later, this tactile information is fused with the visual recognition of the computer vision system to diagnose targets more accurately; the arm can also use it to push aside flexible obstacles (such as grass blades and leaves).
  • In addition to the main camera on the UAV platform body, the robotic arm is equipped with an operation camera and operation lights for specific professional work.
  • The main camera on the drone body is often blocked, leaving large blind areas in which it can neither recognize objects nor provide servo control.
  • As the arm moves, the operation camera on it can reach deep into or under crops, grasses, and bushes together with the arm; it is used for the arm's visual servo control, professional identification, and grasping. Hovering, downward-looking drones and human experts cannot reach into many such small spaces because of their large size.
  • The robotic arm of the present invention, the camera on it, and the manipulator are analogous to a bird's neck, eyes, and beak, cooperating to complete operations such as patrol, identification, and pick-up.
  • The spectra of the operation camera and the operation lighting are matched; for example, both are visible light or both are infrared.
  • The manipulator devices of the robotic arm system are interchangeable, and multiple devices can also be installed at the same time; for example, the manipulator for picking up snails can be exchanged with the manipulator for picking up feces, or both can be installed simultaneously.
  • The high-precision close-range automatic flight driving system 140 includes a high-precision positioning subsystem, an obstacle detection subsystem, a three-dimensional path planning subsystem, and so on.
  • The high-precision positioning subsystem includes satellite navigation positioning, inertial navigation positioning, visual positioning, local wireless positioning, and so on. It is used to determine the operation area and the patrol route within it, determine the latitude and longitude of sampling points, and produce maps of epidemic-area survey points and statistical maps of distribution extent and density, facilitating before-and-after comparison and evaluation of the snail-killing effect.
  • Combined with aerial photography from the main camera on the UAV platform, this can form a micro-plus-macro visual geographic information system of the epidemic area within the region, serving as a "combat system" for expert decision-making.
  • The obstacle detection subsystem includes the visual depth perception of the main camera and the operation camera, used to measure the distance of surrounding obstacles and determine the safe flying space.
  • The obstacle detection subsystem may also include other ranging devices, including lidar three-dimensional ranging and modeling, ultrasonic ranging, and microwave radar ranging and modeling. These ranging and modeling devices and methods are already widely used in environment perception for autonomous vehicles and are not described in detail here.
  • The first step of the 3D path planning subsystem is to determine a two-dimensional horizontal patrol route within the area.
  • In one embodiment, the system of the present invention travels back and forth along a zigzag route in the working area, with the spacing between adjacent scan lines no greater than the width of ground or water surface covered by the camera's field of view during a single-line scan, achieving gapless full-coverage scanning of the whole area.
  • Alternatively, the scan may start from the boundary of the designated scanning area and spiral inward, or start from a center point within the area and spiral outward until reaching the area boundary.
  • The two-dimensional horizontal patrol routes can vary widely and are not enumerated here.
  • In the second step, according to the operation surface (ground or water) and the operation range, for each point on the two-dimensional horizontal patrol route the three-dimensional point above it at the same relative height from the ground or water surface is determined, yielding a three-dimensional path, that is, a contour cruise route; if the path is unobstructed, contour cruise scanning is performed along it.
  • This three-dimensional path keeps the relative height (rather than the absolute height, i.e., altitude) constant, automatically adjusting the absolute flight height as the hillside undulates, similar to a person walking an inspection route, and avoids many obstacles with high probability.
  • In the third step, if an obstacle remains, the safe flight space (such as gaps between trees that can be safely traversed) is determined from obstacle detection, and the local horizontal route (detouring horizontally) and height (detouring vertically) are adjusted to fly around the obstacle; the actual coverage of the patrol route is then recalculated.
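The first two planning steps above can be sketched as follows (a minimal illustration; the rectangular area, the terrain-height function, and all parameter names are assumptions):

```python
def zigzag_route(x_min, x_max, y_min, y_max, swath):
    """Step 1: a back-and-forth 2-D patrol route whose line spacing does not
    exceed the camera swath, giving gapless coverage of the rectangle."""
    waypoints, y, forward = [], y_min, True
    while y <= y_max:
        x_a, x_b = (x_min, x_max) if forward else (x_max, x_min)
        waypoints += [(x_a, y), (x_b, y)]
        y += swath
        forward = not forward
    return waypoints

def contour_path(route_2d, terrain_height, relative_alt):
    """Step 2: lift each 2-D point to a constant height ABOVE the terrain, so
    the absolute altitude follows the hillside (equal relative, not absolute,
    height). Step 3 (obstacle detours) would locally perturb these points."""
    return [(x, y, terrain_height(x, y) + relative_alt) for x, y in route_2d]
```

Over a rising slope the planned absolute altitude rises with the terrain, exactly the contour-cruise behaviour described above.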
  • The automatic professional operation system 150 mainly includes a computer expert system supporting specific on-site operations, together with professional operation attachments, and realizes on-site operations in cooperation with the other systems, such as the computer vision system and the robotic arm system.
  • An expert system is a computing system holding a large amount of specialized knowledge and experience. Using artificial intelligence and computer technology, it reasons and judges on the basis of the knowledge and experience provided by experts in a given field, simulating the decision-making process of human experts in order to solve complex problems that would otherwise require human experts.
  • In one embodiment, the expert system of the present invention adopts a classical structure, including a knowledge base, an inference engine, and other parts.
  • Knowledge representation forms in artificial intelligence include production rules, frames, semantic networks, and so on; the form most commonly used in expert systems is production rules.
  • A production rule takes the form IF ... THEN ..., and its interpretation is simple: if the preconditions are satisfied, the corresponding action or conclusion is produced.
  • The inference engine repeatedly matches the rules in the knowledge base against the conditions or known information of the current problem, deriving new conclusions until the problem-solving result is obtained.
  • The inference engine has two reasoning methods: forward chaining and backward chaining.
  • The strategy of forward chaining is to find the rules whose premises match the facts or assertions in the database, use a conflict-resolution strategy to select one of these satisfied rules to execute, and thereby update the database. This search is repeated until the facts in the database satisfy the goal, that is, the answer is found, or until no rule matches, at which point it stops.
  • The strategy of backward chaining is to start from the selected goal and find a rule whose consequence achieves that goal. If the rule's premises match the facts in the database, the problem is solved; otherwise each premise is taken as a new subgoal and applicable rules are sought for it, recursing until the premises of the last applied rule match the facts in the database. If no further rules can be applied, the expert system asks other assisting systems to provide the necessary facts.
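As an illustration, here is a minimal forward-chaining engine over IF...THEN production rules (the snail-diagnosis rule names are hypothetical examples, not taken from the patent's knowledge base):

```python
def forward_chain(rules, facts, goal):
    """Repeatedly fire the first rule whose premises all match known facts
    (a simple conflict-resolution strategy), adding its conclusion to the
    database, until the goal is derived or no rule can fire."""
    facts = set(facts)
    changed = True
    while changed and goal not in facts:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
                break  # re-scan the rule list after every firing
    return goal in facts, facts

# Hypothetical diagnosis rules:
# IF ribbed shell AND about 10 mm long THEN oncomelania;
# IF oncomelania AND inside the survey area THEN decide to pick up a sample.
RULES = [
    ({"ribbed_shell", "length_about_10mm"}, "is_oncomelania"),
    ({"is_oncomelania", "inside_survey_area"}, "decide_pick_up_sample"),
]
```

Feeding in observed facts (for example from the vision system) lets the engine chain through intermediate conclusions to an action decision; with insufficient facts, no rule fires and the goal is not derived.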
  • The inference engine embodies the way experts think when solving problems.
  • The inference engine is general-purpose, while the knowledge base determines the value of a specific application: each specific application needs its own specific knowledge base.
  • In one embodiment, the main operations of the present invention, such as patrol, diagnosis, pick-up, and release, each establish a separate knowledge base: a patrol knowledge base 311, a diagnosis knowledge base 321, a pick-up knowledge base 331, and a release knowledge base 341. Each knowledge base has its own independent inference engine: a patrol inference engine 315, a diagnosis inference engine 325, a pick-up inference engine 335, and a release inference engine 345. Each main operation thus has its own independent expert system 310 to 340. The expert systems for the major operations are integrated, in the order of the operation steps, by an external coordinating program outside the expert systems, and the expert systems 310 to 340 finally output conclusions and decisions 350.
  • Alternatively, the knowledge bases are used jointly by one common inference engine, forming a comprehensive-business expert system that supports all main operations and can automatically integrate and execute the whole required series of operations in one pass: a patrolling knowledge base 411, diagnosis knowledge base 421, picking knowledge base 431 and dispensing knowledge base 441 share a common inference engine, forming an inference engine for comprehensive integrated operation, as shown in Figure 4, finally outputting conclusions and decisions 450.
  • The testing and validation of AI systems quickly becomes an extremely complex and huge problem as their scale grows.
  • When the inference engine is restricted to a single class of knowledge base, the comprehensive-business expert system degrades to an expert system for that single operation, so each operation part of the comprehensive-business expert system of the present invention can be tested independently, greatly reducing the verification workload. This embodiment therefore combines the advantage of automatic integration of multiple operations with a testing workload similar to that of the previous embodiment.
  • The intelligence of a classical expert system depends on a manually built knowledge base, which requires detailed manual analysis and decomposition of the expert's reasoning and decision-making process to extract knowledge items; it is not self-adaptive, and it is difficult to build a complete large-scale knowledge base, and thus a large-scale expert system, to solve large artificial-intelligence problems.
  • the expert system of the present invention adopts an artificial neural network structure.
  • The present invention establishes separate artificial-neural-network expert systems for the main operations of inspection, diagnosis, picking and dispensing; the conclusions and actions of each main operation's reasoning and decision-making are taken as outputs, which are then integrated across networks into a complete multi-task artificial-neural-network expert system.
  • a direct mapping relationship can be established between the classical structural expert system and the artificial neural network expert system.
  • Each knowledge entry in the classical-structure knowledge base can be mapped to an observable node in the artificial-neural-network expert system (either an output node or an intermediate node); the node is observable because its output is an interpretable decision or conclusion.
  • The present invention can instead adopt one common artificial-neural-network expert system as a comprehensive-business expert system supporting all main operations, which automatically integrates and executes the required series of operations in one pass, with no external multi-service integration needed.
  • Supervised learning can be adopted, training on a large number of cases with manually labeled output conclusions and actions, so that the network automatically and implicitly builds its knowledge base; machine learning thus makes large-scale expert systems possible, completely removing the classical structure's reliance on manually built knowledge bases.
  • Such a micro expert system need not use a dedicated expert-system structure; all combinations can be enumerated, assigned corresponding actions or conclusions, and hard-coded directly in software code or hardware logic.
  • the automatic professional operation system sends local inspection instructions to the robotic arm system.
  • Under visual-servo control, the robotic arm reaches into or under crops, grass and bushes, and uses a professional camera for close-range microscopic photography.
  • The artificial neural network in the vision processor, fully trained on varied images of the specific objects, recognizes the on-site images; if the recognition probability exceeds a preset threshold, it reports to the automatic professional operation system.
  • The expert system in the automatic professional operation system verifies the report against the professional knowledge of the diagnosis knowledge base (including characteristic data of the specific objects, such as the length and width of the oncomelania snail, the number of whorls, and the coiling direction); if it matches, it concludes that a snail has been found, completing the diagnosis operation. The automatic professional operation system then consults the picking knowledge base to decide whether a sampling operation is required and, if so, sends a picking instruction to the manipulator, which grabs or scoops the sample and places it in a storage container, completing the picking operation.
  • Additional devices for professional operations include one or more sample storage containers for sampling and item storage containers for dispensing; their shapes include boxes, mesh bags and bottles, and they are replaceable.
  • This system can also monitor wild feces from people and livestock in severely affected areas: the robotic arm picks up about 50 grams of wild feces, seals it in a plastic bag, and delivers it to on-site staff, who send it to a laboratory for confirmation.
  • The item storage container used for dispensing holds molluscicides, specific pest and disease killers, or other items to be dispensed, and is connected to a nozzle through a pipe.
  • the nozzle can be installed on the body of the drone, or installed on the mechanical arm to become a spray-type manipulator.
  • The nozzle on this spray-type manipulator can move freely with the arm, changing spray position and direction (for example, spraying upward onto the undersides of crop leaves where insects gather). This surpasses the fixed, downward-only nozzles on existing high-altitude spraying drones: like an expert spraying by hand, it can reach into or under the vegetation and spray efficiently and selectively, achieving precise pesticide application.
  • The spray-type manipulator can also inject the chemical into snail-infested soil for slow release, preventing the liquid from polluting the air or being washed away by water in a short time, which would endanger downstream people and aquatic life.
  • Existing high-altitude broadcast-spraying drones are entirely unable to perform this operation.
  • The system can complete multiple operations in one pass, for example combining the diagnosis of snails or pests and diseases with the dispensing of molluscicides and insecticides, progressing within the same inspection run from source control to systematic control, step by step, and consolidating prevention and control results.
  • The system of the present invention also includes several intermediate modes: a remote manual mode in which the system is fully operated remotely by a person; a semi-automatic mode with manual remote decision-making (such as the diagnosis operation) and automatic execution by the system (such as the picking operation); and a near-automatic mode with automatic decision-making by the system (such as diagnosis), manual remote review, and automatic execution by the system (such as picking).
  • As needed, the system can step down from the automatic operation mode and add manual operations of different levels and workloads, completing professional operations through human-machine collaboration.
  • The UAV platform of the present invention can be replaced by a ground-moving robot platform (including wheeled or tracked unmanned vehicle platforms and legged walking robot platforms), becoming an expert ground-walking robot system for autonomous field operations on land.
  • The UAV platform of the present invention can be replaced with an unmanned surface vessel platform, becoming an expert surface-navigating robot system for autonomous field operations on rivers, lakes and water networks.
  • The UAV platform of the present invention can also be omitted or removed, with the system hand-held or placed in a snail-infested environment; that is, the UAV platform is replaced by a person walking on foot, and the robotic arm operates automatically to complete diagnosis, picking and dispensing.
  • This not only greatly improves operating efficiency, but also effectively reduces the occupational hazards of personnel squatting for long periods to identify, pick or kill snails, and avoids the harm to prevention-and-control planning caused by work left undone for subjective or objective reasons.
  • the system of the present invention has application embodiment variants in various fields.
  • In a forest fire-prevention application, after fire and smoke recognition training and the establishment of a fire-extinguishing expert system, the system of the present invention can penetrate deep into forests, detect small fire sources early, and release extinguishing materials (including water and dry ice), becoming an expert flying robot system for forest fire prevention and extinguishing that patrols vast forests.

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

An expert flying robot system (100) for autonomous field operations, comprising: a UAV platform (110); a computer vision system (120) including a camera and a vision processor connected to it; a robotic arm system (130) including one or more robotic arms mounted on the UAV platform; a high-precision close-range automatic flight piloting system (140) including a high-precision positioning subsystem, an obstacle detection subsystem and a three-dimensional path planning subsystem; and an automatic professional operation system (150) including a computer expert system supporting specific field operations and additional professional operation devices, which carries out field operations in cooperation with the computer vision system and the robotic arm system. The system uses high-precision automatic flight with obstacle avoidance to fly low along the ground between, under or among ground or water-surface obstacles, then uses the robotic arm to approach the ground or water surface further, and finally scans and observes the ground or water surface at a height of tens of centimeters, completing inspection and diagnosis of specific targets.

Description

Expert Flying Robot System for Autonomous Field Operations

Technical Field

The present invention relates to the field of flying robots, and in particular to an expert flying robot system for autonomous field operations.

Background Art
Modern society is highly knowledge-intensive and needs large numbers of experts to go deep into the field and apply professional knowledge in on-site operations. In the field of human disease prevention and control, history and experience show that large-scale epidemics can cause severe loss of life and property and inflict enormous damage on society. Among them, schistosomiasis is a major natural-focus infectious disease that seriously endangers human and animal health; it can infect humans and more than 40 species of mammals and has been prevalent throughout history. Schistosomiasis is distributed in tropical and subtropical regions between 34° north and 34° south latitude, across 76 countries in six major regions of the world; the population of endemic areas far exceeds 3.1 billion, with more than 600 million people directly threatened. It is prevalent in Asia, the Americas and Africa, and once severely afflicted China, Japan and other countries. In 1986 the World Health Organization (WHO) listed schistosomiasis first among water-borne diseases, having sickened or killed 160 million people; weighted by population, the infection rate at that time far exceeded today's COVID-19 infection rate. At present, 600 million people in 74 countries and regions live in endemic areas, of whom 200 million are patients [1]. The climate and natural environment of these areas favor the breeding of the snails that serve as the intermediate hosts of schistosomes, and people have frequent contact with infested water.
Schistosomes in China are found mainly in the Yangtze River basin and in 409 counties (cities, districts) of 12 provinces (municipalities, autonomous regions) south of the Yangtze. Schistosome eggs were found in the liver and intestines of a Western Han dynasty corpse unearthed from the Mawangdui Han tomb in Changsha, Hunan, confirming that schistosomiasis existed in China at least 2,100 years ago, in the Han dynasty. Only Schistosoma japonicum occurs in China (so named because the disease was first identified in Japan and the parasite's life cycle was first elucidated in detail by Japanese scholars). Before the 1950s it was prevalent in most grain-producing regions south of the Yangtze, with more than 10 million patients. After large-scale mass prevention and control, the number of cases had fallen to 2.5 million by the 1970s. In recent years the epidemic situation has again become serious. According to data provided by China's Ministry of Health in its 2003 report to the Standing Committee of the National People's Congress, 427 counties (cities, districts) had endemic transmission, 65 million people were threatened, and oncomelania snails covered 22,200 hectares; the lake and beach areas of Dongting Lake and Poyang Lake and some mountainous areas of Sichuan and Yunnan were heavily endemic. Statistics released by health authorities show 840,000 schistosomiasis patients nationwide in 2003, an increase of 150,000 over 2000; acute infections rose markedly, with more than 30 acute outbreaks in 2003, and the seven endemic provinces reported 1,114 acute infections in 2003, up 22% from the same period in 2002.
Schistosomiasis is highly infectious. Human or animal skin can be infected by contact with half a drop to a drop of infested water for as little as 10 seconds (including while handling or washing vegetables, drinking water, etc.). To date there is no effective vaccine worldwide; reinfection after cure is possible, and outbreaks can recur repeatedly in many places. The only intermediate host of the Schistosoma japonicum prevalent in China is the oncomelania snail, so the effective control strategy is to eliminate the snails and block the transmission route. The oncomelania is a mollusc with male and female forms, amphibious between water and land, consisting of a shell and a soft body; the front of the soft body comprises the head, neck, foot and mantle, and the rear contains the viscera. Snails with longitudinal ribs on the surface are called "ribbed-shell oncomelania", about 10 mm long and 4 mm wide, living in lake marshes and water-network regions; smooth-shelled ones are "smooth-shell oncomelania", slightly smaller at about 6 mm long and 3 mm wide, found mostly in hilly regions. Endemic areas must invest enormous financial, human and material resources in snail control. An essential symptomatic measure is large-area spraying of niclosamide, the molluscicide approved for use by the WHO, to control the range and density of snail distribution. This requires large numbers of schistosomiasis-control experts to go deep into water networks and hills, into fields, and under aquatic plants, crops, grass and bushes, searching tens of thousands of hectares of endemic area for millimeter-scale snails; it demands microscope-like large-scale field scanning and diagnosis, sampling for confirmation, and measurement of snail distribution range and density, followed by systematic control strategies (such as chemical dosage and application frequency) and organized large-scale implementation (such as chemical application). The snail-elimination strategy has two keys: first, precise diagnosis; second, precise chemical application. Because the endemic areas are vast and the snails tiny, and although many front-line experts work long and hard all year in dangerous, harsh, disease-ridden field environments, their numbers are far from sufficient; moreover, the human body is large and heavy, and many water-network and thicket locations cannot be reached by people, so full-coverage precise diagnosis by human experts is plainly infeasible. Precise application likewise requires expert-level personnel to spray according to the actual positions and densities of the snails. Since precise diagnosis is currently impossible and expert-level applicators are extremely scarce, extensive carpet-style indiscriminate spraying is the main method used in endemic areas. Niclosamide is moderately toxic to aquatic organisms, and massive releases often kill fish and shrimp over large areas downstream, gravely harming the ecological environment and agricultural production; the resulting compensation costs are often enormous, leaving prevention-and-control work in a dilemma.
In the field of crop pest and disease control, large numbers of experts are likewise needed in the field to apply professional knowledge on site. In modern society, population growth and rising dietary standards demand high and stable agricultural yields, while urbanization and environmental protection tend to shrink the cultivated area, so developing efficient modern agriculture is inevitable. Pest and disease control is one of the three key links, besides breeding and fertilization, for achieving high and stable yields in modern efficient agriculture. The simplest and most widely used current method dispenses with expert diagnosis: agricultural workers periodically apply various insecticides with blanket coverage. Its advantage is that no expert diagnosis is needed; its drawback is blind, carpet-style, large-scale indiscriminate application of pesticides, causing serious food-safety problems, ecological pollution, water-source pollution, embryonic teratogenesis and more, which modern agriculture must solve. The solution lies in efficient pest-control methods, with the same two keys as schistosomiasis control: first, precise diagnosis; second, precise application. Agricultural experts would be dispatched to fields and orchards to patrol, diagnose and sample; upon confirmation, workers would apply insecticide according to the distribution and density, the point being to detect small infested areas early and eliminate them promptly, preventing large-area outbreaks. However, agricultural experts are far too few to meet the needs of modern agriculture. For example, a fruit-production base may have hundreds of thousands of mu of orchards; if one expert patrols a few dozen mu per day, a thousand pest experts would need ten days to complete a single patrol. Pest control relying on manual work by human experts is clearly unrealistic. Drones are already used to apply insecticides in agriculture, but these drones are large and fly high, and merely replace non-expert agricultural workers in blind carpet-style mass spraying. Agricultural and human-disease pests are very small and must be viewed at close range, and through evolution many pests attach to the undersides of leaves to evade predators and human killing, so one must go beneath the crops to find them and kill them efficiently. Existing high-altitude agricultural drones flying above the crops therefore cannot be used for pest inspection and diagnosis, still less for sampling and picking. Furthermore, such drones lack the artificial intelligence of an agricultural expert for diagnosing pests and diseases, and cannot replace experts in automatically patrolling crops, diagnosing pests and diseases, and applying targeted, efficient chemical control.
The development of production in modern society urgently lacks large numbers of experts of many kinds, and neither the scale at which human experts can be trained nor the efficiency of their manual work can keep pace with the development of modern society. With the rapid progress of artificial intelligence, combining AI with unmanned equipment to develop automated unmanned systems with expert-level autonomous decision-making, which replace human experts, deploy at scale into the field, and automatically perform the main operations of patrolling, diagnosis, picking and dispensing, is the fundamental way to resolve the various "professional knowledge" bottlenecks of social development.
Summary of the Invention

The object of the present invention is to address the problems of the prior art by providing an expert flying robot system for autonomous field operations, used to replace human experts in professional fields, go deep into the field, and automatically perform patrolling, diagnosis, picking, dispensing and other operations.

The object of the invention is achieved by the following technical solutions:

An expert flying robot system for autonomous field operations, comprising a UAV platform, a computer vision system, a robotic arm system, a high-precision close-range automatic flight piloting system and an automatic professional operation system. The computer vision system comprises a camera and a vision processor connected to it; the robotic arm system comprises one or more robotic arms mounted on the UAV platform; the high-precision close-range automatic flight piloting system comprises a high-precision positioning subsystem, an obstacle detection subsystem and a three-dimensional path planning subsystem; the automatic professional operation system comprises a computer expert system supporting specific field operations and additional professional operation devices, and carries out field operations in cooperation with the computer vision system and the robotic arm system.
As a further technical solution, the UAV platform is an existing general-purpose drone, comprising a flight mechanics subsystem, a camera, a flight control subsystem, a wireless communication subsystem, a positioning subsystem and a power supply subsystem.

As a further technical solution, the cameras comprise one or more main cameras mounted on the drone body and operation cameras on the robotic arms.

As a further technical solution, a camera is a monocular camera, a binocular stereo camera or a multi-camera surround-view camera.

As a further technical solution, the vision processor is used for recognizing specific objects and ground or water surfaces in professional operations, and for computing object distance, measuring object size, and computing the distance to the ground or water surface.

As a further technical solution, the vision processor comprises an artificial neural network which, after deep-learning training on specific objects of the relevant application field, recognizes the specific objects in images captured by the on-site cameras.

As a further technical solution, the artificial neural network is a convolutional neural network or a network derived from a convolutional neural network.

As a further technical solution, the convolutional neural network or derived network is trained by supervised learning with back-propagation.

As a further technical solution, a pair of identical cameras mounted in parallel, a left camera and a right camera, forms binocular stereo vision. The same scene point enters the optical centers of the left and right cameras at two different angles and maps to a pair of pixels at different positions on the two image planes; the positional deviation of this pixel pair on the image planes, or the deviation of their incidence angles, is called the disparity. From the disparity of a scene point, its distance is back-calculated, obtaining the point's distance information and completing ranging; since the incidence-angle information of the scene point is preserved in the two-dimensional images, once its distance is obtained, the scene point is localized in the three-dimensional real world.

As a further technical solution, the robotic arm carries a manipulator device, and the robotic arm and manipulator device carry pressure sensors for sensing the rigidity or softness of on-site objects.

As a further technical solution, the high-precision positioning subsystem comprises satellite navigation positioning, inertial navigation positioning, visual positioning and local wireless positioning, and is used to determine the operation area and the patrol routes within it, determine the latitude and longitude of sampling points, mark maps of survey points in the endemic area, and compile distribution-range and density maps.

As a further technical solution, the obstacle detection subsystem comprises visual depth perception using the main camera and the operation cameras, for measuring distances to surrounding obstacles and determining safe flyable space.
As a further technical solution, the three-dimensional path planning subsystem comprises the steps of:

(1) determining a two-dimensional horizontal patrol route within the area;

(2) according to the operating surface and operating range, determining for each point on the two-dimensional horizontal patrol route the three-dimensional point above it at the same relative height from the ground or water surface, obtaining a three-dimensional path; then judging whether there are obstacles on this path; if not, performing constant-height cruise scanning along this route; if so, proceeding to step (3);

(3) determining safe flying space from obstacle detection, adjusting the local horizontal route and height for obstacle-avoiding flight, and recomputing the true coverage of the patrol route.
As a further technical solution, the computer expert system adopts a classical structure, comprising a knowledge base and an inference engine.

As a further technical solution, separate knowledge bases are established for the patrolling, diagnosis, picking and dispensing operations, namely a patrolling knowledge base, a diagnosis knowledge base, a picking knowledge base and a dispensing knowledge base, and each knowledge base has its own independent inference engine, namely a patrolling inference engine, a diagnosis inference engine, a picking inference engine and a dispensing inference engine.

As a further technical solution, separate knowledge bases are established for the patrolling, diagnosis, picking and dispensing operations, namely a patrolling knowledge base, a diagnosis knowledge base, a picking knowledge base and a dispensing knowledge base, and all knowledge bases share one common inference engine.

As a further technical solution, the computer expert system adopts an artificial neural network structure; separate artificial-neural-network expert systems are established for the patrolling, diagnosis, picking and dispensing operations; the conclusions and actions of each operation's reasoning and decision-making are taken as outputs, and cross-network integration of the operations forms a complete multi-task artificial-neural-network expert system.

As a further technical solution, the additional professional operation devices comprise one or more sample storage containers for sampling and item storage containers for dispensing.

Compared with the prior art, the present invention combines artificial intelligence with unmanned equipment to form automated unmanned equipment with expert-level autonomous decision-making, which replaces human experts, deploys at scale into the field, and automatically performs the main operations of patrolling, diagnosis, picking and dispensing, safely, precisely and efficiently.
Brief Description of the Drawings

Figure 1 is a schematic structural diagram of the system;

Figure 2 is a node architecture diagram of an artificial neural network;

Figure 3 shows the first expert system structure;

Figure 4 shows the second expert system structure.

Detailed Description of the Embodiments

The present invention is described in detail below with reference to the drawings and specific embodiments.

Embodiments
To realize expert-level operations such as precise field diagnosis and precise chemical application efficiently and at scale, the present invention provides an expert flying robot system 100 for autonomous field operations, used to replace human experts in professional fields, go deep into the field, and automatically patrol, diagnose, pick and dispense. The system of the invention comprises a UAV platform 110, a computer vision system 120, a robotic arm system 130, a high-precision close-range automatic flight piloting system 140, an automatic professional operation system 150, etc., as shown in Figure 1.
The UAV platform 110 is an existing general-purpose drone, widely used in applications such as aerial photography, generally comprising a flight mechanics subsystem, camera, flight control subsystem, wireless communication subsystem, positioning subsystem, power supply subsystem, etc. Its main functions are to accept flight commands from the automatic flight piloting system 140, control its own flight attitude, maintain stability, fly or hover along a designated route, and meet operational requirements.
The computer vision system 120 comprises cameras and a vision processor connected to them, and is the basic functional system of the invention; the visual recognition and measurement information it produces is used throughout: for high-precision positioning and obstacle detection in the automatic flight piloting system 140, for expert-system reasoning in the automatic professional operation system 150, and for visual servoing of the robotic arm system 130. The cameras include one or more main cameras on the drone body and operation cameras on the robotic arms; they may be monocular, binocular stereo or multi-camera surround-view, and visible-light or infrared. The vision processor is used for recognizing specific objects and ground or water surfaces in professional operations, computing object distance, measuring object size, and computing the distance to the ground or water surface. In one embodiment, the vision processor comprises an artificial neural network which, after deep-learning training on specific objects of the relevant application field, recognizes those objects in images captured by the on-site cameras. In an embodiment for schistosomiasis prevention and control, the specific objects include oncomelania snails and wild feces of humans and livestock; in an embodiment for agricultural pest control, they include the pests to be diagnosed, the crops, and features of afflicted crops (such as semicircular notches at leaf edges, perforation, curling and discoloration).
An artificial neural network is a computational model: a nonlinear, adaptive information-processing system composed of a large number of interconnected processing units, as shown in Figure 2. The processing units in the network are also called nodes or neurons. A neuron represents a particular output function, called the activation function. Each connection between two neurons carries a weighting value for the signal passing through it, called the weight; the connection weights between neurons reflect the connection strengths between units, and the representation and processing of information is embodied in the connection relations of the network's processing units. A neuron's output is produced by passing the weighted sum of all its input connections through the activation function. Processing units fall into three classes: input units, output units and hidden units. Input units receive signals and data from the outside world; output units deliver the system's processing results; hidden units lie between the input and output units and cannot be observed from outside the system. From input to output, the network is divided into layers: the layer of input units is called the input layer, layers of hidden units are called hidden or intermediate layers, and the layer of output units is called the output layer.
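The neuron model just described (a weighted sum of inputs passed through an activation function, arranged in input, hidden and output layers) can be sketched in a few lines. The layer sizes, weights and the choice of a logistic sigmoid here are illustrative assumptions, not values from the patent.

```python
import math

def forward(x, layers):
    """Propagate input x through fully connected layers.

    Each layer is a (weights, biases) pair; every neuron outputs the
    logistic sigmoid of the weighted sum of its inputs plus its bias.
    """
    for weights, biases in layers:
        x = [
            1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
            for row, b in zip(weights, biases)
        ]
    return x

# A tiny 2-3-1 network: one hidden layer of 3 neurons, one output neuron.
hidden = ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, -0.1])
output = ([[0.7, -0.5, 0.2]], [0.0])
y = forward([1.0, 2.0], [hidden, output])
print(y)  # a single activation value between 0 and 1
```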
Artificial neural networks have many architectures; in one embodiment, the present invention uses a convolutional neural network (CNN) or a derivative thereof to recognize images. To achieve information-processing ability comparable to the human brain, such image-processing networks need large numbers of nodes and intermediate layers; a network with many layers is called a deep neural network and contains a massive number of parameters that must be optimized (i.e., trained). Deep learning is a machine-learning method for training deep neural networks. In one embodiment, the invention uses supervised learning with back-propagation. Sample images are first processed by human recognition of the specific objects: in the snail-recognition embodiment, experts manually identify snails in the sample images; in the pest-recognition embodiment, experts manually identify the pests and diseases; and the manual recognition results are annotated onto the sample images. The annotated samples are then fed to the network, the network output is obtained, and the error between the network output and the annotated standard output is computed. Starting from the output layer, the network parameters are adjusted to reduce the error; then the penultimate layer (the last intermediate layer) is adjusted, and so on layer by layer toward the input, until all network parameters have been optimized. After training on a large number of sample images, the parameters reach an optimum, and a fully trained deep neural network can attain recognition accuracy close to, or even higher than, that of human experts.
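The supervised training loop described above can be illustrated in its smallest form: a single sigmoid neuron trained by gradient descent on expert-labeled samples (the one-layer case of error back-propagation). The toy feature vectors, labels and learning rate are assumptions for illustration only.

```python
import math
import random

def train(samples, labels, lr=0.5, epochs=2000):
    """Fit one sigmoid neuron to annotated samples by gradient descent.

    For each sample, the error between the network output and the
    annotated label is computed and used to adjust the parameters,
    mirroring the back-propagation procedure in its simplest case.
    """
    random.seed(0)
    w = [random.uniform(-0.5, 0.5) for _ in samples[0]]
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, labels):
            y = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = y - t                                  # error vs. the annotation
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Toy "annotated samples": two feature vectors per class (target vs. background).
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
T = [1, 1, 0, 0]
w, b = train(X, T)
print(predict(w, b, [0.85, 0.85]) > 0.5)  # classified as the target class
```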
In the above embodiment, for the convolutional neural network to recognize a specific object in an image, the image must have sufficiently high spatial resolution: the size of the object to be recognized in the image (its horizontal or vertical pixel count) must be large enough. For reliable recognition of naturally complex objects, a size on the order of a hundred pixels is generally required. In the embodiment recognizing snails on the ground or water surface, as noted above, the snail is at most 1 cm in the real world; taking 100 pixels as its required size in the image, the camera image's spatial resolution must be no coarser than 0.1 mm/pixel. At this resolution, a mainstream 2K-pixel camera with a 90-degree field of view images a swath (image width) covering only 20 cm of ground or water surface, and the camera must be no more than 10 cm from the surface. A 4K camera covers 40 cm and must be within 20 cm of the surface. With a 45-degree field of view, the coverage is the same but the camera distance can be roughly doubled: about 20 cm for a 2K camera and 40 cm for a 4K camera. Broadly speaking, the camera must be within tens of centimeters of the specific object or the ground, i.e. a "macro stare" is needed. This confirms that existing obstacle-free, high-flying drones cannot be used for snail inspection and diagnosis, let alone sampling and picking. The drone of the present invention uses high-precision close-range automatic piloting with obstacle avoidance (automatically avoiding flight obstacles of complex natural terrain such as plants on the ground or water, raised earth and rocks, and utility poles and wires), so that it flies low, hugging the ground, between and even beneath and among ground or water-surface obstacles; it then uses the robotic arm to approach to the point of touching the ground or water surface, and finally scans and observes the surface at a height of tens of centimeters, completing snail inspection, diagnosis and sampling.
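The swath-width and camera-height figures above follow from simple pinhole geometry; the sketch below reproduces them, assuming "2K" means about 2000 pixels across. Note that exact geometry gives about 24 cm rather than an exact doubling for the 45-degree case, so the patent's "doubled" figure is an approximation.

```python
import math

def swath_width_m(pixels_across, resolution_m_per_px):
    """Ground width covered by one image at the required resolution."""
    return pixels_across * resolution_m_per_px

def camera_height_m(swath_m, fov_deg):
    """Pinhole geometry: swath = 2 * height * tan(fov / 2)."""
    return swath_m / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

res = 0.0001                      # 0.1 mm/pixel: ~100 px across a 1 cm snail
w2k = swath_width_m(2000, res)    # 0.2 m swath for a 2K camera
w4k = swath_width_m(4000, res)    # 0.4 m swath for a 4K camera
print(w2k, camera_height_m(w2k, 90))  # 0.2 m swath at ~0.1 m height
print(w4k, camera_height_m(w4k, 90))  # 0.4 m swath at ~0.2 m height
print(camera_height_m(w2k, 45))       # ~0.24 m: narrower FOV roughly doubles height
```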
In one embodiment, besides specific-object recognition, the vision processor of the invention is also used for ranging, localization and three-dimensional measurement. The basic imaging model of existing cameras is the pinhole camera model: each scene point in the real world is mapped, along a straight line through the optical center, to a corresponding pixel on the image plane, mapping the three-dimensional real world to a two-dimensional image. After imaging, the incidence-angle information of scene points is preserved but depth information is lost, so a single image cannot be used for ranging, localization or 3D measurement. In one embodiment, the system uses a pair of identical cameras mounted in parallel, a left and a right camera, forming binocular stereo vision. A scene point in the real world is mapped, along lines through the left and right optical centers, to corresponding pixels on the two image planes, mapping the 3D world to a pair of 2D images. When the scene point is at a finite distance (i.e., less than infinity), it enters the two optical centers at two different angles and maps to a pair of pixels at different positions in the two images. The positional deviation of this pixel pair on the image planes, or the deviation of their incidence angles, is called the disparity; from a point's disparity, its distance can be back-calculated, obtaining the point's distance information and completing ranging. Since the incidence-angle information is preserved in the 2D images, once the distance is obtained, the point can be localized in the 3D real world. Further, combined with visual recognition, by localizing each scene point of a specific object in the 3D world, one can make 3D measurements of the object, for example measuring the length and width of a snail. In another embodiment, the system uses a single monocular camera and, provided the external scene is static, applies the well-known SLAM (simultaneous localization and mapping) algorithm: the camera is moved to different positions to image the same scene point, obtaining a pair of 2D images equivalent to binocular imaging, and completing ranging, localization and 3D measurement.
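The back-calculation from disparity to distance described above can be sketched for rectified parallel cameras; under the pinhole model the standard relation is Z = f * B / d. The focal length, baseline and pixel values below are illustrative assumptions.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Back-calculate scene-point distance from stereo disparity:
    Z = f * B / d (rectified parallel cameras, pinhole model)."""
    if disparity_px <= 0:
        raise ValueError("point at infinity or not matched")
    return focal_px * baseline_m / disparity_px

def locate_3d(focal_px, baseline_m, u_px, v_px, disparity_px):
    """Localize the point in 3D camera coordinates: the preserved
    incidence angles (pixel offsets u, v from the principal point)
    scale with the recovered depth."""
    z = depth_from_disparity(focal_px, baseline_m, disparity_px)
    return (u_px * z / focal_px, v_px * z / focal_px, z)

# Illustrative numbers: 1000 px focal length, 10 cm baseline, 250 px disparity.
print(depth_from_disparity(1000, 0.10, 250))  # 0.4 m away
print(locate_3d(1000, 0.10, 50, -25, 250))    # (0.02, -0.01, 0.4)
```

Measuring a snail's length then reduces to localizing two scene points at its ends and taking the distance between the two 3D coordinates.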
The robotic arm system 130 comprises one or more robotic arms mounted on the UAV platform. Each arm carries a manipulator device and, according to the task, can be fitted with various manipulators for parting foliage, grasping, scooping, dispensing and other operations. In one embodiment, the arm and its manipulator carry pressure sensors, giving the arm a sense of touch to probe the rigidity or softness of on-site objects; the expert system described later can fuse this with the visual recognition of the computer vision system to diagnose targets more accurately, and the touch sense can also be used to push aside flexible obstacles (such as grass blades and leaves). In one embodiment, besides the main camera on the drone body, the arm carries an operation camera and operation lights for specific professional tasks. When the invention needs to inspect or grasp specific objects in or under crops, grass or bushes, the main camera on the drone body is often occluded, leaving large blind zones where it can neither recognize objects nor servo the arm's movement, whereas the operation camera on the arm can reach with the arm into or under the crops, grass and bushes for visual-servo control of the arm, professional recognition, grasping and other operations. This effectively overcomes the inability of existing high-flying, downward-looking drones, and of human experts, to reach into numerous narrow spaces because of their large body size. Functionally, the arm of the invention, with its camera and manipulator, resembles a bird's neck, eyes and beak, cooperating to patrol, recognize and pick. The spectra of the operation camera and the operation lights are matched, e.g. both visible light or both infrared. The manipulators of this arm system are interchangeable, and several kinds can be mounted at once; for example, in the schistosomiasis-control embodiment, the snail-picking manipulator and the wild-feces-picking manipulator may be interchanged or mounted together on the arm.
The high-precision close-range automatic flight piloting system 140 comprises a high-precision positioning subsystem, an obstacle detection subsystem, a three-dimensional path planning subsystem, etc. The high-precision positioning subsystem includes satellite navigation positioning, inertial navigation positioning, visual positioning and local wireless positioning, used to determine the operation area and the patrol routes within it, fix the latitude and longitude of sampling points, mark maps of survey points in the endemic area, and compile distribution-range and density maps, facilitating before-and-after evaluation of eradication results; combined with aerial photography from the drone's main camera, this can form a local micro-plus-macro visual epidemic geographic information system, a "battle system" for expert decision-making. The obstacle detection subsystem includes visual depth perception using the main and operation cameras, for measuring distances to surrounding obstacles and determining safe flyable space. In other embodiments, the obstacle detection subsystem includes other ranging devices: lidar 3D ranging and modeling, ultrasonic ranging, and microwave radar ranging and modeling; these ranging and modeling devices and methods are widely used in the environment perception of self-driving cars and are not detailed here. In one embodiment, the first step of the 3D path planning subsystem is to determine a two-dimensional horizontal patrol route within the area. In one embodiment, the system flies back and forth over the operation area in a zigzag pattern, with adjacent scan lines spaced no wider than the ground or water swath covered by the camera in a single pass, achieving gap-free full-coverage scanning of the whole area. In another embodiment, the system scans spirally inward from the boundary of the designated area, or spirally outward from a central point within it until the boundary is reached. Many 2D horizontal patrol routes are possible and are not enumerated here. The second step, according to the operating surface (ground or water) and operating range, determines for each point on the 2D route the 3D point above it at a constant relative height from the surface, obtaining a 3D path, i.e. a constant-height cruise route; if this path is unobstructed, constant-height cruise scanning proceeds along it. Such a path of constant relative height, rather than constant absolute (sea-level) height, automatically adjusts the absolute flight altitude as slopes undulate, like a person patrolling on foot, and avoids most obstacles with high probability. The third step: if constant-height cruising meets obstacles, especially rigid ones (including tree trunks and boulders), safe flying space is determined from obstacle detection (such as passable gaps between trees), the local horizontal route (horizontal detours) and height (vertical detours) are adjusted for obstacle-avoiding flight, and the true coverage of the patrol route is recomputed.
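The first two planning steps above (a zigzag route spaced by the camera swath, then lifting each point to a constant height above the terrain) can be sketched as follows. The area dimensions, swath width and linear terrain model are illustrative assumptions, not parameters from the patent.

```python
def zigzag_route(width_m, length_m, swath_m):
    """Step 1: a boustrophedon (zigzag) 2D patrol route. Scan lines run
    along the length; adjacent lines are spaced at most one camera swath
    apart so the coverage has no gaps."""
    waypoints = []
    x, line = 0.0, 0
    while x <= width_m + 1e-9:
        ys = (0.0, length_m) if line % 2 == 0 else (length_m, 0.0)
        waypoints += [(x, ys[0]), (x, ys[1])]
        x += swath_m
        line += 1
    return waypoints

def constant_relative_height(route_2d, terrain_height, agl_m):
    """Step 2: lift each 2D point to a constant height above ground
    level; terrain_height is a callable, e.g. backed by an elevation map."""
    return [(x, y, terrain_height(x, y) + agl_m) for x, y in route_2d]

route = zigzag_route(width_m=1.0, length_m=2.0, swath_m=0.4)
path3d = constant_relative_height(route, lambda x, y: 0.1 * x, agl_m=0.3)
print(len(route), path3d[0], path3d[-1])
```

Step 3 (local detours around detected rigid obstacles) would perturb individual waypoints of `path3d` within the detected safe space and recompute the covered area.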
The automatic professional operation system 150 mainly comprises a computer expert system supporting the specific field operation and additional professional operation devices, and carries out the field operation in cooperation with the computer vision system, the robotic arm system and the other systems.
An expert system is a computing system possessing a large body of specialized knowledge and experience. Applying artificial-intelligence and computer technology, it reasons and judges on the basis of the knowledge and experience provided by experts in a field, simulating the decision-making process of human experts, in order to solve the complex problems of a specific operation that would otherwise require human experts.
In one embodiment, the expert system of the invention adopts the classical structure, comprising a knowledge base, an inference engine and other parts. Knowledge representations in artificial intelligence include production rules, frames and semantic networks; the representation most commonly used in expert systems is the production rule. Production rules take the form IF ... THEN ..., and are very simple to understand: if the premise is satisfied, the corresponding action or conclusion is produced.
The inference engine repeatedly matches the rules of the knowledge base against the conditions or known information of the current problem, deriving new conclusions until a solution is obtained. Inference may proceed by forward chaining or backward chaining. Forward chaining finds the rules whose premises match the facts or assertions in the database, uses a conflict-resolution strategy to pick one of the satisfied rules to execute, and thereby updates the contents of the database; the search repeats until the facts in the database match the goal, i.e. an answer is found, or until no rule matches. Backward chaining starts from the selected goal and looks for a rule whose execution consequence achieves it; if the rule's premise matches the facts in the database, the problem is solved; otherwise the premise becomes a new subgoal, for which applicable rules are sought in turn, executing premises in reverse order, until the premise of the last applied rule matches the facts in the database, or until no more rules can be applied, at which point the expert system asks the other assisting systems to supply the necessary facts. The inference engine is thus like the expert's way of thinking when solving problems, and is general-purpose; the knowledge base determines the value of the specific application, and each specific application needs its own specific knowledge base.
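The forward-chaining strategy just described can be shown as a minimal engine over IF...THEN production rules; here conflict resolution is simply "first applicable rule in listing order", and the rule names are illustrative, not the patent's actual knowledge base.

```python
def forward_chain(facts, rules, goal):
    """Minimal forward-chain inference over production rules.

    Each rule is (premises, conclusion). A rule fires when all its
    premises are in the fact database; firing adds the conclusion
    (updating the database), and matching restarts until the goal is
    reached or no rule can fire.
    """
    facts = set(facts)
    fired = True
    while goal not in facts and fired:
        fired = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)  # executing the rule updates the database
                fired = True
                break                  # conflict resolution: re-scan from the top
    return goal in facts, facts

# Illustrative diagnosis-style rules.
rules = [
    (["shell_length_ok", "shell_width_ok"], "size_matches"),
    (["size_matches", "whorls_ok"], "oncomelania_confirmed"),
]
ok, db = forward_chain(
    ["shell_length_ok", "shell_width_ok", "whorls_ok"],
    rules, "oncomelania_confirmed")
print(ok)  # True
```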
In one embodiment, as shown in Figure 3, each of the main operations of the invention (patrolling, diagnosis, picking and dispensing) can have its own separate knowledge base: a patrolling knowledge base 311, a diagnosis knowledge base 321, a picking knowledge base 331 and a dispensing knowledge base 341. Each knowledge base has its own independent inference engine: a patrolling inference engine 315, a diagnosis inference engine 325, a picking inference engine 335 and a dispensing inference engine 345. Each main operation thus has its own independent expert system 310 to 340; the expert systems of the main operations are integrated, in the order of the operation steps, by an external program outside the expert systems, and the expert systems 310 to 340 finally output conclusions and decisions 350.
In another embodiment, the knowledge bases are used jointly by one common inference engine, forming a comprehensive-business expert system that supports all main operations and can automatically integrate and execute the whole required series of operations in one pass: a patrolling knowledge base 411, diagnosis knowledge base 421, picking knowledge base 431 and dispensing knowledge base 441, together with the common inference engine, form an inference engine for comprehensive integrated operation, as shown in Figure 4, finally outputting conclusions and decisions 450. In general, the testing and validation of artificial-intelligence systems quickly becomes an extremely complex and huge problem as their scale grows. In this embodiment, when the inference engine is restricted to a single class of knowledge base, the comprehensive-business expert system degrades to an expert system for that single operation, so each operation part of the comprehensive-business expert system of the invention can be tested independently, greatly reducing the validation workload; this embodiment thus has both the advantage of automatic integration of multiple operations and a testing workload similar to that of the previous embodiment.
In the classical-structure embodiments above, the expert system's intelligence depends on a manually built knowledge base, requiring detailed manual analysis and decomposition of the expert's reasoning and decision-making process to extract knowledge items; it is not self-adaptive, and it is difficult to build a complete large-scale knowledge base, and thus a large-scale expert system, to solve large artificial-intelligence problems. In another embodiment, the expert system of the invention adopts an artificial neural network structure. Corresponding to the classical-structure embodiments, in one embodiment the invention builds a separate artificial-neural-network expert system for each main operation (patrolling, diagnosis, picking and dispensing), takes the conclusions and actions of each main operation's reasoning and decisions as outputs, and integrates the operations across networks into a complete multi-task artificial-neural-network expert system. A direct mapping can be established between the classical-structure expert system and the neural-network expert system: in a direct embodiment, each knowledge item of the classical knowledge base maps to an observable node of the neural network (either an output node or an intermediate node), the node being observable because its output is an interpretable decision or conclusion. In supervised learning with back-propagation, unlike in a plain convolutional network, an observable node, whether or not it is an output node, can be compared with annotated results just like an output node, directly yielding an error for layer-by-layer backward adjustment of the network parameters. In another neural-network embodiment, the invention uses one common artificial-neural-network expert system as a comprehensive-business expert system supporting all main operations, automatically integrating and executing the required series of operations in one pass, with no external multi-service integration needed. In all these neural-network embodiments, supervised learning can be used, training on large numbers of cases whose output conclusions and actions are manually annotated, so that the network builds its knowledge base automatically and implicitly; machine learning thus yields large-scale expert systems and completely removes the classical structure's dependence on manually built knowledge bases.
In a simplified embodiment, when the expert system's input and state variables are few, such a micro expert system need not use a dedicated expert-system structure: all combinations can be enumerated and assigned corresponding actions or conclusions, hard-coded directly in software code or hardware logic.
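The hard-coded micro expert system reduces to a lookup table over the enumerated variable combinations. The two boolean variables and the action names below are illustrative assumptions.

```python
# Micro expert system: with few input/state variables, every combination
# is enumerated up front and bound to an action, hard-coded as a table.
ACTIONS = {
    # (target_recognized, sampling_required): action
    (False, False): "continue_patrol",
    (False, True):  "continue_patrol",
    (True,  False): "record_position",
    (True,  True):  "pick_sample",
}

def decide(target_recognized: bool, sampling_required: bool) -> str:
    """Total function over all enumerated combinations; no inference
    engine or knowledge base is needed at this scale."""
    return ACTIONS[(target_recognized, sampling_required)]

print(decide(True, True))   # pick_sample
print(decide(False, True))  # continue_patrol
```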
In practical applications, whichever expert-system structure is used, the system's work generally requires the integrated execution of several operations, with the expert system scheduling and cooperatively using the other systems of the invention. In one embodiment, the automatic professional operation system sends a local-inspection command to the robotic arm system; under visual-servo control, the arm reaches into or under crops, grass and bushes and takes close-range microscopic images with the professional camera. The artificial neural network in the vision processor, fully trained on varied images of the specific object, recognizes the scene images and, if the recognition probability exceeds a preset threshold, reports to the automatic professional operation system. The expert system there verifies the report against the professional knowledge of the diagnosis knowledge base (including characteristic data of the specific object, such as the snail's length and width, number of whorls, and coiling direction); on a match, it concludes that a snail has been found, completing the diagnosis operation. The automatic professional operation system then consults the picking knowledge base to decide whether a sampling operation is needed and, if so, sends a picking command to the manipulator, which grabs or scoops the sample into a storage container, completing the picking operation.
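The verification step in this workflow (recognition probability over a threshold, then a check against characteristic data from the diagnosis knowledge base) can be sketched as below. The field names, threshold and whorl counts are illustrative assumptions; only the ~10 mm by ~4 mm shell dimensions come from the background section.

```python
def verify_diagnosis(report, knowledge, prob_threshold=0.9):
    """Verify a vision-network report against a diagnosis knowledge base.

    The report is accepted only if the recognition probability clears
    the preset threshold AND the measured features fall within the
    knowledge base's characteristic ranges.
    """
    if report["probability"] < prob_threshold:
        return False
    lo_len, hi_len = knowledge["length_mm"]
    lo_wid, hi_wid = knowledge["width_mm"]
    return (lo_len <= report["length_mm"] <= hi_len
            and lo_wid <= report["width_mm"] <= hi_wid
            and report["whorls"] in knowledge["whorls"])

# Ranges loosely around the ribbed-shell oncomelania figures (~10 mm x 4 mm);
# the whorl-count set is a placeholder.
kb = {"length_mm": (8, 12), "width_mm": (3, 5), "whorls": {6, 7, 8}}
report = {"probability": 0.97, "length_mm": 10.2, "width_mm": 4.1, "whorls": 7}
print(verify_diagnosis(report, kb))  # True: diagnosis confirmed, proceed to picking
```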
The additional professional operation devices include one or more sample storage containers for sampling and item storage containers for dispensing, shaped as boxes, mesh bags, bottles, etc., and replaceable. Besides storing snail or pest samples, in one embodiment the system monitors the wild feces of humans and livestock in heavily endemic areas: the robotic arm picks up about 50 grams of wild feces, seals it in a plastic bag, and delivers it to on-site staff, who send it to a laboratory for confirmation. In another embodiment, the item storage container used for dispensing holds molluscicide, a specific pesticide or other items to be dispensed, and is connected to a nozzle through a pipe. The nozzle may be mounted on the drone body, or on the robotic arm as a spray-type manipulator. The nozzle on such a spray-type manipulator moves freely with the arm, changing spray position and direction (for example spraying upward onto the undersides of crop leaves where pests gather), surpassing the fixed, downward-only nozzles of existing high-altitude broadcast-spraying drones; supported by the system's vision processor and expert system, it can spray as a human expert would, reaching into and under the vegetation and spraying selectively and efficiently, achieving precise chemical application. In another embodiment, after inspection and diagnosis, the spray-type manipulator can inject the chemical into snail-infested soil for slow release, preventing the liquid from polluting the air or being washed away by water in a short time and endangering downstream people and aquatic life. Existing high-altitude broadcast-spraying drones cannot perform this operation at all.
In one embodiment, after integration of the main operations, the system can complete several tasks in one pass, for example combining the diagnosis of snails or pests and diseases with the dispensing of molluscicides and insecticides, progressing within the same patrol from source control to systematic control, step by step, and consolidating the prevention-and-control results achieved.
Besides the fully automatic mode described above, the system of the invention also includes several intermediate modes: a remote manual mode in which the system is fully operated remotely by a person; a semi-automatic mode with remote human decision-making (such as diagnosis) plus automatic execution by the system (such as picking); and a near-automatic mode with automatic decision-making by the system (such as diagnosis) plus remote human review plus automatic execution (such as picking). In extremely complex terrain where grass and bushes crisscross densely and unevenly, the system can step down from full automation as needed, adding manual operations of different levels and workloads, completing professional operations through human-machine collaboration.
The system of the invention admits many variants and combined uses. In one variant, the UAV platform is replaced by a ground-moving robot platform (including wheeled or tracked unmanned vehicle platforms and legged walking robot platforms), yielding an expert ground-walking robot system for autonomous field operations on land. In another variant, the UAV platform is replaced by an unmanned surface vessel platform, yielding an expert surface-navigating robot system for autonomous field operations on rivers, lakes and water networks. In yet another variant, the UAV platform is omitted or removed and the system is hand-held or placed in a snail-infested environment; that is, the UAV platform is replaced by a person walking on foot, while the robotic arm operates automatically to complete diagnosis, picking and dispensing. This greatly improves operating efficiency, effectively reduces the occupational hazards of personnel squatting for long periods to identify, pick or kill snails, and avoids the harm to prevention-and-control planning caused by work left undone for subjective or objective reasons.
The system of the invention has application variants in many fields. In a forest fire-prevention embodiment, after fire and smoke recognition training and the establishment of a fire-extinguishing expert system, the system can penetrate deep into forests, detect small fire sources early and promptly release extinguishing materials (including water and dry ice), becoming an expert flying robot system for forest fire prevention and extinguishing that patrols vast forests.
References

[1] Wikipedia, Schistosomiasis,
https://zh.wikipedia.org/zh-cn/%E8%A1%80%E5%90%B8%E8%99%AB
The above are only preferred embodiments of the present invention and are not intended to limit it. It should be pointed out that any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within its scope of protection.

Claims (18)

  1. An expert flying robot system for autonomous field operations, characterized in that the system comprises a UAV platform, a computer vision system, a robotic arm system, a high-precision close-range automatic flight piloting system and an automatic professional operation system; the computer vision system comprises a camera and a vision processor connected to it; the robotic arm system comprises one or more robotic arms mounted on the UAV platform; the high-precision close-range automatic flight piloting system comprises a high-precision positioning subsystem, an obstacle detection subsystem and a three-dimensional path planning subsystem; the automatic professional operation system comprises a computer expert system supporting specific field operations and additional professional operation devices, and carries out field operations in cooperation with the computer vision system and the robotic arm system.
  2. The expert flying robot system for autonomous field operations according to claim 1, characterized in that the UAV platform is an existing general-purpose drone, comprising a flight mechanics subsystem, a camera, a flight control subsystem, a wireless communication subsystem, a positioning subsystem and a power supply subsystem.
  3. The expert flying robot system for autonomous field operations according to claim 1, characterized in that the cameras comprise one or more main cameras mounted on the drone body and operation cameras on the robotic arms.
  4. The expert flying robot system for autonomous field operations according to claim 3, characterized in that a camera is a monocular camera, a binocular stereo camera or a multi-camera surround-view camera.
  5. The expert flying robot system for autonomous field operations according to claim 1, characterized in that the vision processor is used for recognizing specific objects and ground or water surfaces in professional operations, and for computing object distance, measuring object size and computing the distance to the ground or water surface.
  6. The expert flying robot system for autonomous field operations according to claim 5, characterized in that the vision processor comprises an artificial neural network which, after deep-learning training on specific objects of the relevant application field, recognizes the specific objects in images captured by the on-site cameras.
  7. The expert flying robot system for autonomous field operations according to claim 6, characterized in that the artificial neural network is a convolutional neural network or a network derived from a convolutional neural network.
  8. The expert flying robot system for autonomous field operations according to claim 7, characterized in that the convolutional neural network or derived network is trained by supervised learning with back-propagation.
  9. The expert flying robot system for autonomous field operations according to claim 5, characterized in that a pair of identical cameras mounted in parallel, a left camera and a right camera, forms binocular stereo vision; the same scene point enters the optical centers of the left and right cameras at two different angles and is mapped to a pair of pixels at different positions on the image planes of the left and right cameras; the positional deviation of this pixel pair on the image planes, or the deviation of their incidence angles, is called the disparity; the distance of the scene point is back-calculated from its disparity, obtaining the point's distance information and completing ranging; since the incidence-angle information of the scene point is preserved in the two-dimensional images, after its distance information is obtained, the scene point is localized in the three-dimensional real world.
  10. The expert flying robot system for autonomous field operations according to claim 1, characterized in that the robotic arm carries a manipulator device, and the robotic arm and manipulator device carry pressure sensors for sensing the rigidity or softness of on-site objects.
  11. The expert flying robot system for autonomous field operations according to claim 1, characterized in that the high-precision positioning subsystem comprises satellite navigation positioning, inertial navigation positioning, visual positioning and local wireless positioning, and is used to determine the operation area and the patrol routes within it, determine the latitude and longitude of sampling points, mark maps of survey points in the endemic area, and compile distribution-range and density maps.
  12. The expert flying robot system for autonomous field operations according to claim 1, characterized in that the obstacle detection subsystem comprises visual depth perception using the main camera and the operation cameras, for measuring distances to surrounding obstacles and determining safe flyable space.
  13. The expert flying robot system for autonomous field operations according to claim 1, characterized in that the three-dimensional path planning subsystem comprises the steps of:
    (1) determining a two-dimensional horizontal patrol route within the area;
    (2) according to the operating surface and operating range, determining for each point on the two-dimensional horizontal patrol route the three-dimensional point above it at the same relative height from the ground or water surface, obtaining a three-dimensional path; then judging whether there are obstacles on this path; if not, performing constant-height cruise scanning along this route; if so, proceeding to step (3);
    (3) determining safe flying space from obstacle detection, adjusting the local horizontal route and height for obstacle-avoiding flight, and recomputing the true coverage of the patrol route.
  14. The expert flying robot system for autonomous field operations according to claim 1, characterized in that the computer expert system adopts a classical structure, comprising a knowledge base and an inference engine.
  15. The expert flying robot system for autonomous field operations according to claim 14, characterized in that separate knowledge bases are established for the patrolling, diagnosis, picking and dispensing operations, namely a patrolling knowledge base, a diagnosis knowledge base, a picking knowledge base and a dispensing knowledge base, and each knowledge base has its own independent inference engine, namely a patrolling inference engine, a diagnosis inference engine, a picking inference engine and a dispensing inference engine.
  16. The expert flying robot system for autonomous field operations according to claim 14, characterized in that separate knowledge bases are established for the patrolling, diagnosis, picking and dispensing operations, namely a patrolling knowledge base, a diagnosis knowledge base, a picking knowledge base and a dispensing knowledge base, and all knowledge bases share one common inference engine.
  17. The expert flying robot system for autonomous field operations according to claim 1, characterized in that the computer expert system adopts an artificial neural network structure; separate artificial-neural-network expert systems are established for the patrolling, diagnosis, picking and dispensing operations; the conclusions and actions of each operation's reasoning and decision-making are taken as outputs, and cross-network integration of the operations forms a complete multi-task artificial-neural-network expert system.
  18. The expert flying robot system for autonomous field operations according to claim 1, characterized in that the additional professional operation devices comprise one or more sample storage containers for sampling and item storage containers for dispensing.
PCT/CN2021/138988 2021-12-17 2021-12-17 一种野外自主作业的专家型飞行机器人系统 WO2023108578A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/138988 WO2023108578A1 (zh) 2021-12-17 2021-12-17 一种野外自主作业的专家型飞行机器人系统
CN202180007514.8A CN116829460A (zh) 2021-12-17 2021-12-17 一种野外自主作业的专家型飞行机器人系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/138988 WO2023108578A1 (zh) 2021-12-17 2021-12-17 一种野外自主作业的专家型飞行机器人系统

Publications (1)

Publication Number Publication Date
WO2023108578A1 true WO2023108578A1 (zh) 2023-06-22

Family

ID=86775287

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/138988 WO2023108578A1 (zh) 2021-12-17 2021-12-17 一种野外自主作业的专家型飞行机器人系统

Country Status (2)

Country Link
CN (1) CN116829460A (zh)
WO (1) WO2023108578A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106771041A (zh) * 2016-12-28 2017-05-31 中国计量大学 一种基于无人机水质在线检测装置的投放和回收方法
JP2017193331A (ja) * 2016-04-19 2017-10-26 インダストリーネットワーク株式会社 ドローン飛行体
CN108633482A (zh) * 2018-07-06 2018-10-12 华南理工大学 一种水果采摘飞行器
CN110187714A (zh) * 2019-04-22 2019-08-30 西安电子科技大学 一种基于无人机的水上垃圾打捞控制方法及系统、无人机
CN110253581A (zh) * 2019-06-25 2019-09-20 华北水利水电大学 一种基于视觉识别的辅助抓取方法
CN111418349A (zh) * 2020-03-19 2020-07-17 南京赫曼机器人自动化有限公司 一种水果采摘智能机器人及其实现水果采摘的方法


Also Published As

Publication number Publication date
CN116829460A (zh) 2023-09-29

Similar Documents

Publication Publication Date Title
Roldán et al. Robots in agriculture: State of art and practical experiences
Abbas et al. Different sensor based intelligent spraying systems in Agriculture
Bergerman et al. Robotics in agriculture and forestry
Oberti et al. Selective spraying of grapevines for disease control using a modular agricultural robot
Aravind et al. Task-based agricultural mobile robots in arable farming: A review
Xie et al. Actuators and sensors for application in agricultural robots: A review
KR101801746B1 (ko) Smart drone for pest control, and smart pest-control system and method using the same
CN109997116A (zh) Apparatus and method for monitoring a site
Niu et al. Intelligent bugs mapping and wiping (iBMW): An affordable robot-driven robot for farmers
US20230029636A1 (en) Unmanned aerial vehicle
Partel et al. Smart Sprayer for Precision Weed Control Using Artificial Intelligence: Comparison of Deep Learning Frameworks.
Singh et al. Usage of internet of things based devices in smart agriculture for monitoring the field and pest control
Mousavi et al. A novel enhanced vgg16 model to tackle grapevine leaves diseases with automatic method
WO2023108578A1 (zh) Expert flying robot system for autonomous field operations
EP3454650B1 (de) Method for operating a pest-control robot
Amarasinghe et al. A path planning algorithm for an autonomous drone against the overuse of pesticides
Zlatov et al. Research of the Present and Emerging Applications of Smart Robots and Unmanned Aerial Vehicles in the Agriculture Domain
Lippi et al. An autonomous spraying robot architecture for sucker management in large‐scale hazelnut orchards
Amarasinghe et al. A Path Planning Drone Solution to Safe Pesticide Usage in Arable Lands
Raikov et al. Artificial intelligence and robots in agriculture
Gül et al. Eye of the farmer in the sky: Drones.
Kaya et al. The Use of Drones in Agricultural Production
Patt et al. An optical system to detect, surveil, and kill flying insect vectors of human and crop pathogens
Ochieng Assessment of the Efficiency of Drones in Surveillance and Control of Desert Locust, Schistocerca Gregaria
Karouta et al. Autonomous platforms

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 202180007514.8

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21967713

Country of ref document: EP

Kind code of ref document: A1