WO2021232359A1 - Control method, control device, movable platform, and computer-readable storage medium - Google Patents
Control method, control device, movable platform, and computer-readable storage medium
- Publication number: WO2021232359A1 (application PCT/CN2020/091587)
- Authority: WIPO (PCT)
- Prior art keywords: parameter condition, condition set, position information, parameter, echo energy
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64C—AEROPLANES; HELICOPTERS
- B64C1/00—Fuselages; Constructional features common to fuselages, wings, stabilising surfaces or the like
- B64C1/36—Fuselages; Constructional features common to fuselages, wings, stabilising surfaces or the like adapted to receive antennas or radomes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/93—Radar or analogous systems specially adapted for specific applications for anti-collision purposes
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/10—Simultaneous control of position or course in three dimensions
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/12—Target-seeking control
Definitions
- This application relates to the field of motion control technology, and in particular to a control method, a control device, a movable platform, and a computer-readable storage medium.
- Obstacle avoidance is an important function of UAVs.
- The obstacle avoidance function is usually realized based on radar obstacle avoidance technology.
- The sensitivity of the radar is usually fixed at a high level.
- The point cloud obtained when the radar scans the environment is therefore comprehensive, containing scan points corresponding to various objects.
- Point cloud clusters corresponding to different objects can be obtained by clustering.
- The drone can identify obstacles among them and perform obstacle avoidance motions with respect to those obstacles.
- To this end, embodiments of the present application provide a control method, a control device, a movable platform, and a computer-readable storage medium.
- A first aspect of the embodiments of the present application provides a control method, including:
- the parameter condition set includes an echo energy parameter condition;
- A second aspect of the embodiments of the present application provides a control device, including a processor and a memory storing a computer program;
- the processor implements the following steps when executing the computer program:
- the parameter condition set includes an echo energy parameter condition;
- A third aspect of the embodiments of the present application provides a movable platform, including: a body, a power device connected to the body, a radar mounted on the body, a processor, and a memory storing a computer program;
- the processor implements the following steps when executing the computer program:
- the parameter condition set includes an echo energy parameter condition;
- A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, it implements any control method provided in the first aspect above.
- According to the echo energy parameter condition, the control method provided by the embodiments of the present application can filter out, from the multiple scan points obtained by radar scanning, the scan points whose echo energy does not meet the condition.
- The filtered-out scan points correspond to negligible weak obstacles; that is, among the remaining target scan points, the scan points corresponding to obstacles are those of non-negligible obstacles. Therefore, the obstacle avoidance function realized based on the target scan points disregards weak obstacles, so that weak obstacles are ignored during obstacle avoidance.
- Fig. 1 is a schematic diagram of a drone operation scenario provided by an embodiment of the present application.
- Fig. 2 is a flowchart of a control method provided by an embodiment of the present application.
- Fig. 3 is a schematic diagram of another drone operation scenario provided by an embodiment of the present application.
- Fig. 4 is a user interaction interface provided by an embodiment of the present application.
- Fig. 5 is a schematic structural diagram of a control device provided by an embodiment of the present application.
- Fig. 6 is a schematic structural diagram of a movable platform provided by an embodiment of the present application.
- Obstacle avoidance is an important function of drones. When the obstacle avoidance function is turned on, a drone can automatically detect obstacles on its flight path and dodge them. There are many ways to avoid obstacles, such as visual obstacle avoidance and radar obstacle avoidance. Agricultural UAVs rely more on radar when implementing obstacle avoidance, for two reasons. First, many tasks of agricultural drones involve spraying agricultural materials such as pesticides and fertilizers, and the water mist formed during spraying greatly interferes with vision and greatly reduces the effectiveness of visual obstacle avoidance. Second, agricultural drones do not necessarily operate during the day; they may start operating before dawn or continue after dark, and the lack of light likewise greatly reduces the effectiveness of visual obstacle avoidance.
- The sensitivity of the radar is usually fixed at a high level.
- Take agricultural drones as an example. When an agricultural drone operates, there are various obstacles in the operating environment, but in the eyes of users not all of them need to be dodged.
- Fig. 1 is a schematic diagram of a drone operation scenario provided by an embodiment of the present application.
- An existing high-sensitivity radar can accurately detect the reed and perform obstacle avoidance on it as an obstacle.
- If the reed is located on the flight path of the drone, these users believe the drone can completely ignore the reed and fly over it; a weak reed cannot affect the normal flight of the drone.
- The risk of a crash is extremely low, and performing obstacle avoidance reduces operating efficiency. These users are therefore likely to turn off the UAV's obstacle avoidance function during operation, and when a non-ignorable obstacle (such as the telegraph pole in the figure) is encountered, a crash occurs.
- The applicant found through research that, in the point cloud data obtained by radar scanning, negligible and non-negligible obstacles differ greatly in the echo energy of their corresponding scan points. Specifically, the echo energy corresponding to non-negligible obstacles is consistently higher than that corresponding to negligible obstacles. For example, for a reed and a telephone pole that are both obstacles, the echo energy of the scan points corresponding to the telephone pole is significantly higher than that of the scan points corresponding to the reed. Based on this, the applicant proposes a solution: by setting a condition on echo energy, the multiple scan points obtained by radar scanning can be filtered through this condition, the scan points corresponding to weak obstacles can be filtered out, and these weak obstacles can thereby be ignored during obstacle avoidance.
- The movable platform can be any object capable of movement: a small electronic device such as a drone, a small robot, or a remote-control car, or a large machine such as a car, an airplane, or a boat.
- The movable platform can be equipped with a radar.
- The radar can be any of a variety of radars with environment scanning capability, such as a lidar or an electromagnetic-wave (millimeter-wave) radar, and it can be fixedly or detachably connected to the movable platform.
- Step S201: Obtain the echo energy and position information corresponding to each of the multiple scan points obtained by the radar of the movable platform scanning the environment.
- Step S202: Acquire a set parameter condition set.
- The parameter condition set includes an echo energy parameter condition.
- Step S203: According to the parameter condition set, determine the target scan points, among the multiple scan points, whose echo energy meets the echo energy parameter condition.
- Step S204: Control the movable platform to perform obstacle avoidance motion according to the position information of the target scan points.
- Each scan point in the point cloud can correspond to a variety of data, including echo energy and position information, and of course may include other information or data as well.
- The parameter condition set may be a set of parameter conditions that includes the echo energy parameter condition.
- The echo energy parameter condition can be used to determine the target scan points; specifically, it can be used to determine, from the multiple scan points collected by the radar, the target scan points whose echo energy meets the condition. It is not difficult to see that this process actually filters out, or discards, part of the scan points collected by the radar. When the echo energy parameter condition is set appropriately, the filtered-out scan points correspond to negligible weak obstacles.
- The echo energy parameter condition may include that the echo energy corresponding to the scan point is greater than an echo energy threshold.
- E may be used to represent the echo energy corresponding to a scan point.
- The echo energy parameter condition may then be E > E_set, where E_set represents the echo energy threshold.
- The echo energy parameter condition may also include that the echo energy corresponding to the scan point falls within an echo energy range.
- The echo energy parameter condition may then be E_min ≤ E ≤ E_max, where E_min represents the minimum value of the echo energy and E_max represents the maximum value.
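As an illustrative sketch (the function names and sample energy values are assumptions, not part of the embodiments), the two forms of the echo energy parameter condition can be expressed as predicates used to select target scan points:

```python
def meets_threshold_condition(e: float, e_set: float) -> bool:
    """Echo energy parameter condition of the form E > E_set."""
    return e > e_set

def meets_range_condition(e: float, e_min: float, e_max: float) -> bool:
    """Echo energy parameter condition of the form E_min <= E <= E_max."""
    return e_min <= e <= e_max

# Filtering multiple scan points down to target scan points with E > 10000.
energies = [4800, 5600, 10500, 12000]
targets = [e for e in energies if meets_threshold_condition(e, 10000)]
# targets -> [10500, 12000]
```
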
- The obstacle can be determined according to the position information of the target scan points, and the movable platform can be controlled to perform obstacle avoidance motion with respect to the obstacle.
- According to the echo energy parameter condition, the control method provided by the embodiments of the present application can filter out, from the multiple scan points obtained by radar scanning, the scan points whose echo energy does not meet the condition.
- The filtered-out scan points correspond to negligible weak obstacles; that is, among the remaining target scan points, the scan points corresponding to obstacles are those of non-negligible obstacles. Therefore, the obstacle avoidance function realized based on the target scan points disregards weak obstacles, so that weak obstacles are ignored during obstacle avoidance.
- In addition to echo energy, determining obstacles also requires attention to the position of the scan point. Consider, for example, a scan point that corresponds to a stone pillar.
- The scan point meets the echo energy parameter condition in terms of echo energy, but the stone pillar is very low. If the movable platform is a drone that flies at a level higher than the stone pillar during operation, the stone pillar should not be considered an obstacle, or in other words, it is an obstacle but not one the drone needs to pay attention to.
- The parameter condition set may also include position information parameter conditions.
- The form of a position information parameter condition corresponds to the position information associated with the scan points.
- The position information corresponding to a scan point can include one or more of the following: distance, azimuth angle, pitch angle, coordinates in the body coordinate system, coordinates in the coordinate system corresponding to the horizontal plane of the body, height relative to the ground plane, and height relative to the plants.
- FIG. 3 is a schematic diagram of another drone operation scenario provided by an embodiment of the present application. As shown in Figure 3, the body coordinate system is a coordinate system composed of three coordinate axes X0, Y0, and Z0 (Y0 axis is not shown).
- The body coordinate system is established based on the body of the drone.
- It remains fixed relative to the body of the drone and changes as the attitude of the drone changes.
- The coordinate system corresponding to the horizontal plane of the body can also be called the horizontal plane coordinate system; as shown in Figure 3, it includes three coordinate axes X1, Y1, and Z1 (the Y1 axis is not shown).
- The plane in which the X1 and Y1 axes lie is the horizontal plane where the drone is located.
- The height of a scan point relative to the ground plane and its height relative to the plants can be calculated by a terrain detection algorithm; this is prior art and is not further described here.
- The position information parameter conditions may include conditions on different data in the position information.
- The coordinates of a scan point in the horizontal plane coordinate system can be expressed as (x1, y1, z1).
- The position information parameter condition can include z1 > -0.5, which indicates that a target scan point should fall within the space no more than 0.5 m below the horizontal plane of the body.
- The height of a scan point relative to the ground plane can be represented by h0.
- The height of a scan point relative to the plants can be represented by h1.
- The position information parameter conditions can also include conditions such as h0 > 1 and h1 > 0.5.
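The position conditions above (z1 > -0.5, h0 > 1, h1 > 0.5) can be checked per scan point as in the following sketch; the field names and the particular combination of conditions are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ScanPoint:
    echo_energy: float
    z1: float  # height relative to the horizontal plane of the body, in m
    h0: float  # height relative to the ground plane, in m
    h1: float  # height relative to the plants, in m

def meets_position_conditions(p: ScanPoint) -> bool:
    # Example conditions from the description: z1 > -0.5, h0 > 1, h1 > 0.5.
    return p.z1 > -0.5 and p.h0 > 1 and p.h1 > 0.5

# A point 0.2 m below the body's horizontal plane, 1.5 m above the ground,
# and 0.8 m above the plants satisfies all three conditions.
point = ScanPoint(echo_energy=12000, z1=-0.2, h0=1.5, h1=0.8)
```
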
- The position information parameter condition is described here as an element of the parameter condition set.
- However, the position information parameter condition does not necessarily need to belong to the parameter condition set; it can be independent of the parameter condition set and obtained in a separate step.
- The target scan points can first be clustered, so that the scan points corresponding to different obstacles are grouped into corresponding point cloud clusters; further, the position information of each point cloud cluster can be determined from the position information of the scan points in that cluster.
- The position information of a point cloud cluster can include the coordinates of the cluster's center point and the cluster's three-dimensional size.
- Finally, the movable platform can be controlled, according to the position information of the point cloud cluster, to dodge the point cloud cluster, thereby achieving obstacle avoidance.
- In this way, the scan points corresponding to non-ignorable obstacles, that is, the target scan points, can be determined.
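Once a point cloud cluster has been formed (the clustering method itself is not specified in the text; Euclidean clustering would be one option), its position information — center point coordinates and three-dimensional size — can be computed from its scan points as in this sketch:

```python
def cluster_position_info(points):
    """Given one point cloud cluster as a list of (x, y, z) tuples, return
    the cluster's center point coordinates and its three-dimensional size
    (the extent along each axis)."""
    xs, ys, zs = zip(*points)
    n = len(points)
    center = (sum(xs) / n, sum(ys) / n, sum(zs) / n)
    size = (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
    return center, size

cluster = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 3.0, 6.0)]
center, size = cluster_position_info(cluster)
# center -> (1.0, 1.0, 2.0); size -> (2.0, 3.0, 6.0)
```

The drone's avoidance maneuver can then be planned against the cluster's center and extent rather than against individual scan points.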
- The parameter condition set can be determined according to the scene category of the environment in which the movable platform is located.
- The parameter condition sets corresponding to different scene categories may be predetermined in preliminary work. For example, the obstacle avoidance function of the movable platform can be debugged in advance in environments corresponding to different scene categories, so as to tune the parameter condition set most suitable for each scene category.
- Here, most suitable means that when the parameter condition set is applied, the movable platform can automatically ignore the weak and small obstacles in the environment corresponding to the scene category while maintaining its obstacle avoidance ability against strong obstacles (for convenience of description, this may be called a smarter obstacle avoidance function).
- The scene categories can be cornfields, paddy fields, sorghum fields, and so on.
- The movable platform can be a drone, and trial operations can be carried out in cornfields, paddy fields, and sorghum fields; combined with pilots' and experts' analysis of the environment, the negligible and non-negligible obstacles in each environment can be determined, and the parameter condition set can be adjusted toward a smarter obstacle avoidance function, thereby determining the parameter condition set most suitable for the current scene category. After the most suitable parameter condition set for each scene category is determined, the correspondence between scene categories and their most suitable parameter condition sets can be established, so that during application the matching parameter condition set can be retrieved directly from the scene category.
- The cornfield can be scanned by the radar mounted on the drone to obtain the point cloud (multiple scan points) corresponding to the cornfield.
- The point cloud can be clustered, and after the point cloud clusters corresponding to different objects are obtained, the point cloud clusters corresponding to the obstacles can be analyzed.
- The obstacles in the cornfield include telephone poles, reeds, and corn leaves; among them, the reeds and tall corn leaves can be considered negligible obstacles, and the telephone poles non-negligible obstacles.
- The echo energy of the scan points in the point cloud clusters corresponding to the telephone poles, reeds, and corn leaves can be analyzed.
- The result of the analysis may be that the echo energy of the scan points corresponding to the telephone poles is greater than 10000, the echo energy of the scan points corresponding to the reeds is in the range of 5000-6000, and the echo energy of the scan points corresponding to the corn leaves is in the range of 5500-6500.
- The echo energy parameter condition in the parameter condition set can then be adjusted to E > 10000 (E represents the echo energy of a scan point); when this parameter condition set is applied, only the telephone poles will be regarded as obstacles to avoid, and the reeds and corn leaves will be ignored.
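The threshold choice in this example can be sketched as follows; the helper function and its overlap check are illustrative assumptions, using the energy figures given above (reeds 5000-6000, corn leaves 5500-6500, poles above 10000):

```python
def choose_echo_energy_threshold(negligible_ranges, non_negligible_min):
    """Choose E_set so that the condition E > E_set rejects all negligible
    obstacles and keeps the non-negligible ones. Any value between the
    highest negligible energy and the non-negligible minimum would work;
    here, as in the cornfield example, the non-negligible minimum is used."""
    highest_negligible = max(high for _low, high in negligible_ranges)
    if highest_negligible >= non_negligible_min:
        raise ValueError("energy ranges overlap; one threshold cannot separate them")
    return non_negligible_min

# Reeds 5000-6000, corn leaves 5500-6500; telephone poles above 10000.
e_set = choose_echo_energy_threshold([(5000, 6000), (5500, 6500)], 10000)
# e_set -> 10000, i.e. the adjusted condition E > 10000
```
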
- The position information parameter conditions can be determined flexibly according to actual needs and the pilot's flying experience. Take as an example the condition on the height z1 of a scan point relative to the horizontal plane where the UAV is located: if the UAV needs to fly more safely, z1 > -1 can be set, that is, the space within 1 m below the horizontal plane where the UAV is located is included in the observation range, so that scan points falling into this range may be confirmed as target scan points.
- The drone may not need to pay attention to such a large range. For example, if the drone operates at a fixed height of 0.5 m relative to the plants (the spraying effect at 0.5 m above the plants is better than at 1 m), an observation range of z1 > -1 is obviously too large; z1 > -0.5 can be set instead, so that only the space within 0.5 m below the horizontal plane where the drone is located is included in the observation range.
- Determining the parameter condition set corresponding to a scene category in the preliminary stage can also be implemented through a neural network model.
- A training sample set can be established from multiple scene categories whose parameter condition sets have already been determined, with the environment data corresponding to a scene category as the input of a training sample and the corresponding parameter condition set as its output; the neural network model can then be trained on this training sample set.
- For a new scene category, the corresponding environment data can be collected and input into the trained neural network model, so as to obtain the parameter condition set it outputs.
- In this way, the parameter condition set corresponding to a new scene category can also be determined through the neural network model.
- At least two different parameter condition sets can be determined for each scene category.
- For example, two parameter condition sets can be set for the cornfield scene category.
- The first parameter condition set can include the echo energy parameter condition E > 5000 and the position information parameter conditions z1 > -1, h0 > 1, h1 > 0.5.
- The second parameter condition set can include the echo energy parameter condition E > 10000 and the position information parameter conditions z1 > -0.5, h0 > 1, h1 > 0.5.
- The first parameter condition set corresponds to a high-sensitivity obstacle avoidance function.
- When it is applied, the drone will perform obstacle avoidance on obstacles with low echo energy such as reeds and corn leaves, and scan points in the space within 1 m below the drone may be regarded as scan points corresponding to obstacles.
- The second parameter condition set corresponds to a low-sensitivity obstacle avoidance function. When it is applied, the drone will ignore obstacles with low echo energy such as reeds and corn leaves, and only scan points in the space within 0.5 m below the drone may be regarded as scan points corresponding to obstacles.
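The two parameter condition sets described for the cornfield can be stored in a lookup keyed by scene category and sensitivity level, as in this sketch (the dictionary layout and key names are assumptions; the numeric thresholds are the ones given above):

```python
# Parameter condition sets for the cornfield scene category.
PARAMETER_CONDITION_SETS = {
    ("cornfield", "high"): {"E": 5000, "z1": -1.0, "h0": 1.0, "h1": 0.5},
    ("cornfield", "low"): {"E": 10000, "z1": -0.5, "h0": 1.0, "h1": 0.5},
}

def select_parameter_condition_set(scene_category, sensitivity):
    """Return the parameter condition set chosen by the user's selection
    instruction for the recognized scene category."""
    return PARAMETER_CONDITION_SETS[(scene_category, sensitivity)]

chosen = select_parameter_condition_set("cornfield", "low")
# chosen["E"] -> 10000: reeds and corn leaves are ignored at low sensitivity
```
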
- If at least two different parameter condition sets are determined for each scene category, then in application, after the scene category of the current environment is determined, the at least two parameter condition sets corresponding to that scene category can be provided to the user for selection, and one of them can be determined according to the user's selection instruction as the parameter condition set used by the drone during operation.
- Refer to Fig. 4, which shows a possible user interaction interface in which the user can switch the sensitivity level of the obstacle avoidance function to high, medium, or low by touching the sensitivity control.
- For example, if the parameter condition set includes the echo energy parameter condition E > 10000 and the position information parameter conditions z1 > -0.5, h0 > 1, h1 > 0.5, it can be set so that a scan point satisfying both E > 10000 and z1 > -0.5 is a target scan point, and a scan point satisfying E > 10000, h0 > 1, and h1 > 0.5 is also a target scan point. That is, which of the echo energy parameter condition and the position information parameter conditions a target scan point must specifically satisfy can be set flexibly according to need.
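The flexible grouping described here — a scan point qualifying if it satisfies either (E > 10000 and z1 > -0.5) or (E > 10000 and h0 > 1 and h1 > 0.5) — amounts to OR-ing groups of AND-ed conditions. A sketch (the function and the dictionary representation of a scan point are assumptions):

```python
def is_target_scan_point(point, condition_groups):
    """A scan point is a target scan point if every condition in at least
    one group holds; which conditions are grouped can be set flexibly."""
    return any(all(cond(point) for cond in group) for group in condition_groups)

groups = [
    # E > 10000 and z1 > -0.5
    [lambda p: p["E"] > 10000, lambda p: p["z1"] > -0.5],
    # E > 10000 and h0 > 1 and h1 > 0.5
    [lambda p: p["E"] > 10000, lambda p: p["h0"] > 1, lambda p: p["h1"] > 0.5],
]

# This point fails the first group (z1 too low) but passes the second.
point = {"E": 12000, "z1": -0.8, "h0": 1.5, "h1": 0.8}
```
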
- The scene category corresponding to the current working environment may be determined according to an instruction input by the user through interaction with the user.
- The environment can also be automatically recognized to determine the corresponding scene category.
- The point cloud data obtained by the radar scanning the environment may be used for the recognition.
- For example, a scene recognition model with point cloud data as input and scene category as output can be pre-trained, and the scene category corresponding to the environment can be determined through this scene recognition model.
- The scene category can also be identified through image recognition technology.
- The movable platform may be equipped with a camera through which the current working environment is photographed to obtain a scene image corresponding to the environment; by recognizing the scene image, the scene category corresponding to the environment can be determined.
- Specifically, the scene image can be input into a pre-trained scene recognition model, which outputs the scene category corresponding to the scene image, thereby determining the scene category corresponding to the environment.
- The scene recognition model can be a convolutional neural network model; other models are of course possible and are not enumerated here.
- Although the parameter condition set applied by the obstacle avoidance function can be determined according to the scene category of the environment, in some scenarios the determined parameter condition set may still not meet the user's needs. Because operating environments are complex and changeable, the parameter condition set determined for a scene category in preliminary work is often only most suitable for the most common environment of that scene category, and the actual operating environment is likely to differ from that general environment. One possible situation is that the actual environment contains special obstacles that the corresponding scene category does not usually have; for example, the cornfield cultivated by a certain user may contain special obstacles such as bamboo poles.
- The parameter condition set determined for cornfields in preliminary work does not take into account the special presence of bamboo poles, so when the drone operates with this parameter condition set it is likely to perform obstacle avoidance on the bamboo poles, which is not intelligent enough to meet the user's needs.
- To this end, the embodiments of the present application provide an implementation in which the obstacle the user wants to ignore is determined by interacting with the user, and the current parameter condition set is revised according to the negligible obstacle selected by the user.
- It is understandable that the current parameter condition set can also be considered the initial parameter condition set, which can be the default parameter condition set provided by the system when the drone is started, or the parameter condition set determined according to the scene category of the environment as described above.
- The object category of the negligible obstacle selected by the user can be determined.
- In one example, the object category can be bamboo poles, reeds, and so on.
- The characteristic parameters corresponding to the object category can then be determined, and the initial parameter condition set can be corrected according to those characteristic parameters.
- The matching may be performed based on a pre-configured correspondence relationship.
- The pre-configured correspondence relationship may include the characteristic parameters corresponding to various object categories, such as the characteristic parameters corresponding to bamboo poles, the characteristic parameters corresponding to reeds, and so on; the correspondence can be determined in advance in preliminary work.
- Specifically, point cloud data can be obtained for each object category; the point cloud data corresponding to an object category can be obtained by scanning obstacles of that category with the radar. After the point cloud data corresponding to the object category is obtained, feature extraction can be performed on it to obtain the corresponding characteristic parameters.
- The characteristic parameters corresponding to the object category may also be extracted in real time. For example, after the ignorable obstacle selected by the user is determined, the currently collected point cloud data corresponding to that obstacle can be analyzed, its characteristic parameters extracted in real time, and the current parameter condition set corrected according to those characteristic parameters.
- As an example of how the parameter condition set is modified according to the characteristic parameters: suppose the echo energy parameter condition in the parameter condition set determined for the cornfield in the preliminary work includes E>6000, and the characteristic parameter of the user-selected ignorable obstacle, a bamboo pole, indicates an echo energy of about 6500. The echo energy parameter condition can then be raised to E>6600, so that the obstacle avoidance function ignores the bamboo pole and does not avoid it.
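The threshold correction in this example can be sketched as follows; the helper name and the safety margin of 100 are illustrative assumptions, not details taken from the embodiments.

```python
def correct_echo_threshold(current_threshold: float,
                           ignorable_echo_energy: float,
                           margin: float = 100.0) -> float:
    """Raise the echo-energy threshold just above the echo energy of an
    obstacle the user marked as ignorable (hypothetical helper; the margin
    value is an assumption)."""
    if ignorable_echo_energy < current_threshold:
        # Already filtered out by the current condition; nothing to change.
        return current_threshold
    return ignorable_echo_energy + margin

# Cornfield example from the text: initial condition E > 6000, bamboo pole
# echoes at about 6500, corrected condition becomes E > 6600.
print(correct_echo_threshold(6000.0, 6500.0))  # 6600.0
```

With the corrected threshold, scan points from the bamboo pole (echo energy around 6500) no longer qualify as target scan points, so the obstacle avoidance function flies past it.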
- There are several ways to determine the object category of the user-selected ignorable obstacle. In one implementation, the object category can be determined through interaction with the user: the user can be provided with a selection page that includes multiple object categories, so that the object category of the ignorable obstacle can be determined according to the user's selection.
- In another implementation, the object category corresponding to the ignorable obstacle can also be determined through image recognition technology. For example, the work scene can be photographed through the camera mounted on the movable platform, and the user can box-select the obstacle to ignore in the captured frame; according to the box-selection instruction input by the user, the image corresponding to the ignorable obstacle can be cropped out, so that the object category of the ignorable obstacle can be determined by recognizing that image.
- For recognizing the image of the ignorable obstacle, a corresponding obstacle recognition model can also be pre-trained, so that in application it is only necessary to input the box-selected image of the ignorable obstacle into the obstacle recognition model to obtain the object category it outputs.
- The obstacle recognition model can also be a convolutional neural network model.
- The foregoing is a detailed description of the control method provided by the embodiments of the present application.
- The method provided in the embodiments of the present application can, according to the echo energy parameter condition, filter out from the multiple scan points obtained by radar scanning those scan points whose echo energy does not meet the condition. The filtered-out scan points correspond to ignorable weak and small obstacles; in other words, among the remaining target scan points, the scan points corresponding to obstacles all belong to non-ignorable obstacles. The obstacle avoidance function realized on the basis of the target scan points therefore does not consider weak obstacles, so that such obstacles are ignored during obstacle avoidance.
- The method provided in the embodiments of the present application can also provide users with the parameter condition set most suitable for the specific operating environment; multiple parameter condition sets can be provided, each corresponding to a different sensitivity level. In addition, the parameter condition set can be adjusted according to the obstacles the user wants to ignore, so that the obstacle avoidance function meets the user's needs to the greatest extent, ignores the obstacles selected by the user, and achieves the intelligence the user expects.
- FIG. 5 is a schematic structural diagram of a control device provided by an embodiment of the present application.
- The control device provided in the embodiments of the present application can take multiple implementation forms as a product: it can be an independent product itself, or part of other electronic equipment such as a radar or a drone controller.
- The control device includes: a processor 510 and a memory 520 storing a computer program;
- the processor 510 implements the following steps when executing the computer program: acquiring the echo energy and position information corresponding to each of multiple scan points obtained by a radar of a movable platform scanning an environment; acquiring a set parameter condition set, wherein the parameter condition set includes an echo energy parameter condition; determining target scan points among the multiple scan points according to the parameter condition set, wherein the echo energy of the target scan points meets the echo energy parameter condition; and controlling the movable platform to perform obstacle avoidance movement according to the position information of the target scan points.
- the parameter condition set further includes a position information parameter condition; the determined position information of the target scanning point meets the position information parameter condition.
- When performing the step of controlling the movable platform to perform obstacle avoidance movement according to the position information of the target scan points, the processor is specifically configured to: cluster the target scan points to obtain point cloud clusters; determine the position information of each point cloud cluster according to the position information of the target scan points in the cluster; and control the movable platform to perform obstacle avoidance movement according to the position information of the point cloud clusters.
- the position information of the point cloud cluster includes the coordinates of the center point of the point cloud cluster and the three-dimensional size of the point cloud cluster.
- the echo energy parameter condition includes that the echo energy corresponding to the scanning point is greater than the echo energy threshold.
- the parameter condition set is determined according to the scene category corresponding to the environment.
- When performing the step of determining the parameter condition set according to the scene category, the processor is specifically configured to determine the parameter condition set corresponding to the scene category according to a pre-configured correspondence between scene categories and parameter condition sets.
- one scene category corresponds to at least two parameter condition sets.
- the parameter condition set is determined from the at least two parameter condition sets according to a user's selection instruction.
- the movable platform is equipped with a camera, and the scene category is determined by identifying and determining scene images taken by the camera.
- When performing the step of recognizing the scene image, the processor is specifically configured to input the scene image into a pre-trained scene recognition model to obtain the scene category, corresponding to the scene image, output by the scene recognition model; wherein the scene recognition model is a convolutional neural network model.
- the parameter condition set is obtained by modifying the initial parameter condition set according to the negligible obstacle selected by the user.
- When performing the step of correcting the initial parameter condition set according to the negligible obstacle, the processor is specifically configured to: determine the object category corresponding to the negligible obstacle; determine the feature parameter corresponding to the object category; and modify the initial parameter condition set according to the feature parameter; wherein the feature parameter is obtained in advance by performing feature extraction on the radar scan data corresponding to the object category.
- the object category corresponding to the negligible obstacle is determined by identifying the image corresponding to the negligible obstacle.
- the movable platform is equipped with a camera, and the image corresponding to the negligible obstacle is intercepted from the image taken by the camera according to a frame selection instruction input by the user.
- the position information corresponding to a scan point includes one or more of the following: distance, azimuth, pitch angle, coordinates in the body coordinate system, coordinates in the coordinate system corresponding to the horizontal plane where the body is located, height relative to the ground plane, and height relative to the plants.
- the radar is a millimeter wave radar.
- For the specific implementation of the control device in the various embodiments described above, reference may be made to the corresponding description of the control method in the foregoing, which will not be repeated here.
- The control device provided by the embodiment of the present application can, according to the echo energy parameter condition, filter out from the multiple scan points obtained by radar scanning those scan points whose echo energy does not meet the condition. The filtered-out scan points correspond to ignorable weak and small obstacles; that is to say, among the remaining target scan points, the scan points corresponding to obstacles are all scan points of non-ignorable obstacles. The obstacle avoidance function realized on the basis of the target scan points therefore does not consider weak obstacles, so that such obstacles are ignored during obstacle avoidance.
- The control device can also provide users with the parameter condition set most suitable for the specific operating environment; multiple parameter condition sets can be provided, each corresponding to a different sensitivity level. In addition, the parameter condition set can be adjusted according to the obstacles the user wants to ignore, so that the obstacle avoidance function meets the user's needs to the greatest extent, ignores the obstacles selected by the user, and achieves the intelligence the user expects.
- FIG. 6 is a schematic structural diagram of a movable platform provided by an embodiment of the present application.
- The movable platform may include: a body 610, a power device 620 connected to the body, a radar 630 mounted on the body 610, a processor 611, and a memory 612 storing a computer program;
- the processor 611 implements the following steps when executing the computer program: acquiring the echo energy and position information corresponding to each of multiple scan points obtained by the radar 630 scanning an environment; acquiring a set parameter condition set, wherein the parameter condition set includes an echo energy parameter condition; determining target scan points among the multiple scan points according to the parameter condition set, wherein the echo energy of the target scan points meets the echo energy parameter condition; and controlling the movable platform to perform obstacle avoidance movement according to the position information of the target scan points.
- the parameter condition set further includes a position information parameter condition; the determined position information of the target scanning point meets the position information parameter condition.
- When performing the step of controlling the movable platform to perform obstacle avoidance movement according to the position information of the target scan points, the processor is specifically configured to: cluster the target scan points to obtain point cloud clusters; determine the position information of each point cloud cluster according to the position information of the target scan points in the cluster; and control the movable platform to perform obstacle avoidance movement according to the position information of the point cloud clusters.
- the position information of the point cloud cluster includes the coordinates of the center point of the point cloud cluster and the three-dimensional size of the point cloud cluster.
- the echo energy parameter condition includes an echo energy threshold, and the echo energy of the target scan point is higher than the echo energy threshold.
- the parameter condition set is determined according to the scene category corresponding to the environment.
- When performing the step of determining the parameter condition set according to the scene category, the processor is specifically configured to determine the parameter condition set corresponding to the scene category according to a pre-configured correspondence between scene categories and parameter condition sets.
- one scene category corresponds to at least two parameter condition sets.
- the parameter condition set is determined from the at least two parameter condition sets according to a user's selection instruction.
- the movable platform is equipped with a camera, and the scene category is determined by identifying and determining scene images taken by the camera.
- When performing the step of recognizing the scene image, the processor is specifically configured to input the scene image into a pre-trained scene recognition model to obtain the scene category, corresponding to the scene image, output by the scene recognition model; wherein the scene recognition model is a convolutional neural network model.
- the parameter condition set is obtained by modifying the initial parameter condition set according to the negligible obstacle selected by the user.
- When performing the step of correcting the initial parameter condition set according to the negligible obstacle, the processor is specifically configured to: determine the object category corresponding to the negligible obstacle; determine the feature parameter corresponding to the object category; and modify the initial parameter condition set according to the feature parameter; wherein the feature parameter is obtained in advance by performing feature extraction on the radar scan data corresponding to the object category.
- the object category corresponding to the negligible obstacle is determined by identifying the image corresponding to the negligible obstacle.
- the movable platform is equipped with a camera, and the image corresponding to the negligible obstacle is intercepted from the image taken by the camera according to a frame selection instruction input by the user.
- the position information corresponding to a scan point includes one or more of the following: distance, azimuth, pitch angle, coordinates in the body coordinate system, coordinates in the coordinate system corresponding to the horizontal plane where the body is located, height relative to the ground plane, and height relative to the plants.
- the radar is a millimeter wave radar.
- the movable platform includes a drone.
- The movable platform provided by the embodiments of the present application can, according to the echo energy parameter condition, filter out from the multiple scan points obtained by radar scanning those scan points whose echo energy does not meet the condition. The filtered-out scan points correspond to ignorable weak and small obstacles; that is to say, among the remaining target scan points, the scan points corresponding to obstacles are all scan points of non-ignorable obstacles. The obstacle avoidance function realized on the basis of the target scan points therefore does not consider weak obstacles, so that such obstacles are ignored during obstacle avoidance.
- The movable platform can also provide users with the parameter condition set most suitable for the specific operating environment; multiple parameter condition sets can be provided, each corresponding to a different sensitivity level. In addition, the parameter condition set can be adjusted according to the obstacles the user wants to ignore, so that the obstacle avoidance function meets the user's needs to the greatest extent, ignores the obstacles selected by the user, and achieves the intelligence the user expects.
- the embodiments of the present application also provide a computer-readable storage medium that stores a computer program, and when the computer program is executed by a processor, any one of the control methods provided in the embodiments of the present application is implemented.
- the embodiments of the present application may adopt the form of a computer program product implemented on one or more storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing program codes.
- Computer usable storage media include permanent and non-permanent, removable and non-removable media, and information storage can be achieved by any method or technology.
- the information can be computer-readable instructions, data structures, program modules, or other data.
- Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- Aviation & Aerospace Engineering (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Mechanical Engineering (AREA)
- Electromagnetism (AREA)
- Computer Networks & Wireless Communication (AREA)
- Traffic Control Systems (AREA)
Abstract
A control method is disclosed, including: acquiring the echo energy and position information corresponding to each of multiple scan points obtained by a radar (630) of a movable platform scanning an environment; acquiring a set parameter condition set, wherein the parameter condition set includes an echo energy parameter condition; determining target scan points among the multiple scan points according to the parameter condition set, wherein the echo energy of the target scan points meets the echo energy parameter condition; and controlling the movable platform to perform obstacle avoidance movement according to the position information of the target scan points. The disclosed method solves the technical problem that a drone performs obstacle avoidance even against weak and small obstacles, which reduces operating efficiency.
Description
This application relates to the technical field of motion control, and in particular to a control method, a control device, a movable platform, and a computer-readable storage medium.
Obstacle avoidance is an important function of a drone. For agricultural drones, the obstacle avoidance function is usually implemented based on radar obstacle avoidance technology.
To ensure the flight safety of agricultural drones, the sensitivity of the radar is usually fixed at a relatively high level. At this sensitivity, the point cloud obtained by the radar scanning the environment is fairly comprehensive and includes scan points corresponding to all kinds of objects. By clustering the point cloud, point cloud clusters corresponding to different objects can be obtained; according to the position information of each point cloud cluster, the drone can identify the obstacles among them and perform obstacle avoidance movement against those obstacles.
Summary of the Invention
To solve the above technical problem that a drone performs obstacle avoidance even against weak and small obstacles, thereby reducing operating efficiency, embodiments of the present application provide a control method, a control device, a movable platform, and a computer-readable storage medium.
A first aspect of the embodiments of the present application provides a control method, including:
acquiring the echo energy and position information corresponding to each of multiple scan points obtained by a radar of a movable platform scanning an environment;
acquiring a set parameter condition set, wherein the parameter condition set includes an echo energy parameter condition;
determining target scan points among the multiple scan points according to the parameter condition set, wherein the echo energy of the target scan points meets the echo energy parameter condition; and
controlling the movable platform to perform obstacle avoidance movement according to the position information of the target scan points.
A second aspect of the embodiments of the present application provides a control device, including a processor and a memory storing a computer program;
the processor implements the following steps when executing the computer program:
acquiring the echo energy and position information corresponding to each of multiple scan points obtained by a radar of a movable platform scanning an environment;
acquiring a set parameter condition set, wherein the parameter condition set includes an echo energy parameter condition;
determining target scan points among the multiple scan points according to the parameter condition set, wherein the echo energy of the target scan points meets the echo energy parameter condition; and
controlling the movable platform to perform obstacle avoidance movement according to the position information of the target scan points.
A third aspect of the embodiments of the present application provides a movable platform, including: a body, a power device connected to the body, a radar mounted on the body, a processor, and a memory storing a computer program;
the processor implements the following steps when executing the computer program:
acquiring the echo energy and position information corresponding to each of multiple scan points obtained by the radar scanning an environment;
acquiring a set parameter condition set, wherein the parameter condition set includes an echo energy parameter condition;
determining target scan points among the multiple scan points according to the parameter condition set, wherein the echo energy of the target scan points meets the echo energy parameter condition; and
controlling the movable platform to perform obstacle avoidance movement according to the position information of the target scan points.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements any control method provided in the first aspect above.
The control method provided by the embodiments of the present application can, according to the echo energy parameter condition, filter out from the multiple scan points obtained by radar scanning those scan points whose echo energy does not meet the echo energy parameter condition. The filtered-out scan points correspond to ignorable weak and small obstacles; in other words, among the remaining target scan points, the scan points corresponding to obstacles all belong to non-ignorable obstacles. Therefore, the obstacle avoidance function implemented on the basis of the target scan points does not consider weak and small obstacles, so that such obstacles are ignored during obstacle avoidance.
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of a drone operating scene provided by an embodiment of the present application.
FIG. 2 is a flowchart of a control method provided by an embodiment of the present application.
FIG. 3 is a schematic diagram of another drone operating scene provided by an embodiment of the present application.
FIG. 4 is a user interaction interface provided by an embodiment of the present application.
FIG. 5 is a schematic structural diagram of a control device provided by an embodiment of the present application.
FIG. 6 is a schematic structural diagram of a movable platform provided by an embodiment of the present application.
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present application.
Obstacle avoidance is an important function of a drone. When the obstacle avoidance function is enabled, the drone can automatically detect obstacles on its flight path and dodge them. There are several ways to implement obstacle avoidance, such as visual obstacle avoidance and radar obstacle avoidance. Agricultural drones rely mostly on radar obstacle avoidance, because many of their tasks involve spraying pesticides, fertilizers, and other agricultural materials, and the water mist formed during spraying greatly interferes with vision, severely degrading the effect of visual obstacle avoidance. Moreover, agricultural drones do not necessarily operate in daytime: they may start before dawn or continue after dark, and insufficient light also greatly affects visual obstacle avoidance.
When obstacle avoidance is implemented based on radar, the sensitivity of the radar is usually fixed at a relatively high level to ensure that the drone does not collide with obstacles. In some scenarios, however, the applicant found that users do not want the radar to be so sensitive. Taking agricultural drones as an example, there are all kinds of obstacles in the operating environment, but from the user's point of view not every obstacle needs to be dodged.
Referring to FIG. 1, a schematic diagram of a drone operating scene provided by an embodiment of the present application: a tall reed stands in a paddy field and falls on the drone's flight path. An existing high-sensitivity radar can accurately detect the reed, treat it as an obstacle, and perform obstacle avoidance against it. For some users, however, the drone could simply ignore the reed and fly straight through: a weak reed cannot affect the drone's normal flight, the risk of a crash is extremely low, and performing obstacle avoidance only reduces operating efficiency. Such users are therefore likely to turn off the obstacle avoidance function during operation, and a crash then occurs when a non-ignorable obstacle (such as the utility pole in the figure) is encountered.
In response to the above problem, the applicant found through research that, in the point cloud data obtained by radar scanning, ignorable obstacles and non-ignorable obstacles differ considerably in the echo energy of their scan points; specifically, the echo energy corresponding to a non-ignorable obstacle is always higher than that corresponding to an ignorable obstacle. For example, between a reed and a utility pole, both obstacles, the echo energy of the scan points of the utility pole is significantly higher than that of the scan points of the reed. On this basis, the applicant proposes the following approach: set a condition on echo energy, use this condition to filter the multiple scan points obtained by radar scanning, and filter out the scan points corresponding to weak and small obstacles, so that these obstacles can be ignored during obstacle avoidance.
Referring to FIG. 2, a flowchart of a control method provided by an embodiment of the present application: the method can be applied to the obstacle avoidance function of a movable platform. A movable platform can be any object capable of moving; it may be a small electronic device such as a drone, a small robot, or a remote-controlled car, or a larger machine such as a car, an airplane, or a ship. The movable platform can carry a radar, which may be any radar capable of scanning the environment, such as a lidar or an electromagnetic wave (millimeter wave) radar, and which may be fixedly or detachably connected to the movable platform.
The control method provided by an embodiment of the present application may include the following steps:
Step S201: acquiring the echo energy and position information corresponding to each of multiple scan points obtained by the radar of the movable platform scanning the environment.
Step S202: acquiring a set parameter condition set,
wherein the parameter condition set includes an echo energy parameter condition.
Step S203: determining, among the multiple scan points and according to the parameter condition set, target scan points whose echo energy meets the echo energy parameter condition.
Step S204: controlling the movable platform to perform obstacle avoidance movement according to the position information of the target scan points.
By rotationally scanning the environment with the radar carried on the movable platform, multiple scan points corresponding to the environment can be obtained. The set of scan points collected by the radar may also be called a point cloud; each scan point in the point cloud can be associated with multiple kinds of data, including echo energy and position information, and possibly other information or data as well.
The parameter condition set may be a collection of parameter conditions, including an echo energy parameter condition. The echo energy parameter condition can be used to determine the target scan points, specifically, to determine, from the multiple scan points collected by the radar, those scan points whose echo energy meets the condition. It is easy to see that this process actually filters out, or discards, a portion of the scan points collected by the radar. When the echo energy parameter condition is set appropriately, the filtered-out scan points correspond to ignorable weak and small obstacles.
The echo energy parameter condition can also take multiple forms. In one implementation, the echo energy parameter condition may include that the echo energy corresponding to a scan point is greater than an echo energy threshold; for example, if E denotes the echo energy corresponding to a scan point, the echo energy parameter condition may be E > E_set, where E_set denotes the echo energy threshold. In another implementation, the echo energy parameter condition may include that the echo energy corresponding to a scan point falls within an echo energy range; for example, the echo energy parameter condition may be E_min < E < E_max, where E_min denotes the minimum echo energy and E_max denotes the maximum echo energy.
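The two forms of the echo energy parameter condition described above can be evaluated as in the following sketch; the dictionary layout of a condition is an assumption made for illustration.

```python
def meets_echo_condition(energy: float, condition: dict) -> bool:
    """Check a scan point's echo energy E against either condition form:
    a lower threshold (E > E_set) or a range (E_min < E < E_max)."""
    if "threshold" in condition:
        return energy > condition["threshold"]        # form: E > E_set
    lo, hi = condition["range"]
    return lo < energy < hi                           # form: E_min < E < E_max

# Threshold form and range form evaluated on the same scan point:
print(meets_echo_condition(6500.0, {"threshold": 6000.0}))        # True
print(meets_echo_condition(6500.0, {"range": (7000.0, 9000.0)}))  # False
```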
After the target scan points are determined according to the echo energy parameter condition, the obstacles can be determined according to the position information of the target scan points, and the movable platform can be controlled to perform obstacle avoidance movement against those obstacles.
The control method provided by the embodiments of the present application can therefore, according to the echo energy parameter condition, filter out from the multiple scan points obtained by radar scanning those scan points whose echo energy does not meet the condition. The filtered-out scan points correspond to ignorable weak and small obstacles; among the remaining target scan points, the scan points corresponding to obstacles all belong to non-ignorable obstacles. The obstacle avoidance function implemented on the basis of the target scan points thus does not consider weak and small obstacles, so that such obstacles are ignored during obstacle avoidance.
The echo energy parameter condition only removes, in terms of echo energy, the scan points corresponding to ignorable obstacles; determining obstacles also requires attention to where the scan points are located. For example, among the obtained scan points, those corresponding to a stone pillar may meet the echo energy parameter condition, yet the pillar may be very low; if the movable platform is a drone that always flies above the level of the pillar during operation, the pillar should not be regarded as an obstacle, or at least not as an obstacle the drone needs to pay attention to.
In view of this, besides having sufficient strength (which the echo energy parameter condition can distinguish), an obstacle also needs to fall, in terms of distance, within the safety distance of the movable platform, on the movable platform's movement path or operating plane. Therefore, in an implementation, the parameter condition set may further include a position information parameter condition. When the target scan points are determined according to the parameter condition set, it is necessary on the one hand to check whether a scan point's echo energy meets the echo energy parameter condition and, on the other hand, whether its position information meets the position information parameter condition; only a scan point whose echo energy meets the echo energy parameter condition and whose position information meets the position information parameter condition can be confirmed as a target scan point.
The form of the position information parameter condition corresponds to the position information of the scan points. In the point cloud collected by the radar, the position information of a scan point may include one or more of the following: distance, azimuth, pitch angle, coordinates in the body coordinate system, coordinates in the coordinate system corresponding to the horizontal plane where the body is located, height relative to the ground plane, and height relative to the plants. Regarding the body coordinate system and the coordinate system of the body's horizontal plane, reference can be made to FIG. 3, a schematic diagram of another drone operating scene provided by an embodiment of the present application. As shown in FIG. 3, the body coordinate system consists of the three axes X0, Y0, and Z0 (the Y0 axis is not shown); it is established on the drone's body, stays fixed to the body, and changes with the drone's attitude. The coordinate system corresponding to the horizontal plane where the body is located may also be called the horizontal-plane coordinate system; as shown in FIG. 3, it consists of the three axes X1, Y1, and Z1 (the Y1 axis is not shown), where the plane spanned by the X1 and Y1 axes is the horizontal plane where the drone is located. The height of a scan point relative to the ground plane and its height relative to the plants can be calculated by a terrain detection algorithm; this is existing technology and is not elaborated here.
Corresponding to the multiple kinds of data that the position information may include, the position information parameter condition may also include conditions on different items of the position information. For example, the coordinates of a scan point in the horizontal-plane coordinate system can be written as (x1, y1, z1); in one example, the position information parameter condition may include z1 > -0.5, meaning that a target scan point should fall within the space no more than 0.5 m below the horizontal plane where the body is located. As another example, if h0 denotes a scan point's height relative to the ground plane and h1 its height relative to the plants, the position information parameter condition may further include conditions such as h0 > 1 and h1 > 0.5.
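Screening scan points by both the echo energy parameter condition and the position information parameter conditions can be sketched as follows; the per-point field names (E, z1, h0, h1) follow the notation in the text, while the function name and dictionary layout are assumptions.

```python
def select_target_points(points, conditions):
    """Keep only scan points whose echo energy and position information
    meet every condition in the set (all conditions are lower bounds here)."""
    targets = []
    for p in points:
        if all(p[key] > lower for key, lower in conditions.items()):
            targets.append(p)
    return targets

# Example conditions from the text: E > 6000, z1 > -0.5, h0 > 1, h1 > 0.5.
conditions = {"E": 6000.0, "z1": -0.5, "h0": 1.0, "h1": 0.5}
scans = [
    {"E": 10500.0, "z1": 0.2, "h0": 2.0, "h1": 1.0},   # utility pole: kept
    {"E": 5500.0,  "z1": 0.2, "h0": 2.0, "h1": 1.0},   # reed: weak echo, dropped
    {"E": 10500.0, "z1": -1.0, "h0": 2.0, "h1": 1.0},  # too far below the body, dropped
]
print(len(select_target_points(scans, conditions)))  # 1
```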
It should be noted that, in the above implementation, the position information parameter condition is an element of the parameter condition set; in actual implementation, however, it does not necessarily have to belong to the parameter condition set and may instead be obtained in a separate step, independently of the parameter condition set.
When the movable platform is controlled to perform obstacle avoidance movement according to the position information of the target scan points, specifically, the target scan points may first be clustered so that the scan points corresponding to different obstacles form corresponding point cloud clusters; then, the position information of each point cloud cluster can be determined from the position information of the scan points in the cluster, and may include the coordinates of the cluster's center point and the cluster's three-dimensional size; finally, the movable platform can be controlled, according to the position information of the point cloud cluster, to dodge the cluster, thereby achieving obstacle avoidance.
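The clustering step can be sketched with a simple single-linkage grouping; production systems would more likely use a DBSCAN-style algorithm, and the linkage distance used here is an illustrative assumption.

```python
def cluster_target_points(points, link_dist=1.0):
    """Group target scan points (x, y, z) so that points within link_dist of
    some member join the same cluster, then report each cluster's center
    coordinates and three-dimensional size (bounding-box extents)."""
    unvisited = list(range(len(points)))
    clusters = []
    while unvisited:
        queue = [unvisited.pop(0)]
        members = []
        while queue:
            i = queue.pop()
            members.append(points[i])
            near = [j for j in unvisited
                    if sum((points[i][k] - points[j][k]) ** 2
                           for k in range(3)) <= link_dist ** 2]
            for j in near:
                unvisited.remove(j)
            queue.extend(near)
        xs, ys, zs = zip(*members)
        clusters.append({
            "center": (sum(xs) / len(xs), sum(ys) / len(ys), sum(zs) / len(zs)),
            "size": (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs)),
        })
    return clusters

# Two well-separated obstacles produce two clusters:
print(len(cluster_target_points([(0, 0, 0), (0.5, 0, 0), (5, 5, 5)])))  # 2
```

Each returned cluster carries exactly the position information the text names: the center-point coordinates and the three-dimensional size, which the platform then uses to dodge the obstacle.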
By screening the multiple scan points collected by the radar with the parameter condition set (including the echo energy parameter condition and the position information parameter condition), the scan points corresponding to non-ignorable obstacles, that is, the target scan points, can be determined. Considering that ignorable weak obstacles and non-ignorable strong obstacles differ between environments, when the specific conditions in the parameter condition set are determined, in an implementation, different parameter condition sets can be configured for different environments; specifically, the parameter condition set can be determined according to the scene category of the environment where the movable platform is located.
The parameter condition sets corresponding to different scene categories can be determined in advance in preliminary work. For example, the obstacle avoidance function of the movable platform can be tuned beforehand in environments of different scene categories, so as to obtain the parameter condition set most suitable for each scene category. "Most suitable" means that, when the parameter condition set is applied, the movable platform can automatically ignore the weak and small obstacles in the environment of that scene category while retaining the ability to avoid strong obstacles (for convenience of description, this may be called a more intelligent obstacle avoidance function). For instance, in agricultural drone applications the scene categories may be cornfield, paddy field, sorghum field, and so on, and the movable platform may be a drone; trial operations can be carried out in a cornfield, a paddy field, and a sorghum field respectively, the environment can be analyzed with the experience of pilots, experts, and other personnel to determine the ignorable and non-ignorable obstacles in that environment, and the parameter condition set can be adjusted with the more intelligent obstacle avoidance function as the goal, thereby determining the parameter condition set most suitable for the current scene category. After the most suitable parameter condition set is determined for each scene category, a correspondence between scene categories and their most suitable parameter condition sets can be established, so that in application the corresponding parameter condition set can be matched directly from the scene category.
A specific example of the above tuning process may help understanding. For the cornfield scene category, when its parameter condition set is determined in the preliminary stage, the cornfield can be scanned with the radar carried on the drone to obtain the point cloud (multiple scan points) of the cornfield. The point cloud can be clustered, and after the point cloud clusters of different objects are obtained, the clusters corresponding to obstacles can be analyzed. Suppose the obstacles in the cornfield include utility poles, reeds, and corn leaves, where the reeds and the tall corn leaves can be considered ignorable obstacles and the utility poles non-ignorable obstacles. To determine the echo energy parameter condition, the echo energy of the scan points in the clusters of the utility poles, reeds, and corn leaves can be analyzed. The analysis may show, for example, that the echo energy of the scan points of the utility poles is always greater than 10000, that of the reeds lies in the range 5000-6000, and that of the corn leaves in the range 5500-6500. For a more intelligent obstacle avoidance function, the echo energy parameter condition in the parameter condition set can be set to E > 10000 (E denotes the echo energy of a scan point); when this parameter condition set is applied, only the utility poles are treated as obstacles to be avoided, and both the reeds and the corn leaves can be ignored.
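The separation observed in this analysis can be turned into a threshold automatically, as in the following sketch; the function and its input layout are assumptions for illustration, and the returned value is the tightest workable threshold rather than the value 10000 chosen in the text (any threshold between the two groups works).

```python
def derive_echo_threshold(labelled_clusters):
    """labelled_clusters: list of (is_ignorable, energies) pairs from the
    analysis of each obstacle's point cloud cluster. Returns a threshold T
    such that the condition E > T keeps every non-ignorable obstacle and
    drops every ignorable one, if the energy ranges do not overlap."""
    ignorable_hi = max(max(e) for ign, e in labelled_clusters if ign)
    keep_lo = min(min(e) for ign, e in labelled_clusters if not ign)
    if keep_lo <= ignorable_hi:
        raise ValueError("no single threshold separates the two groups")
    return ignorable_hi

# Cornfield analysis from the text: reeds 5000-6000, corn leaves 5500-6500,
# utility poles above 10000.
print(derive_echo_threshold([(True, [5000, 6000]),
                             (True, [5500, 6500]),
                             (False, [10001, 12000])]))  # 6500
```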
The position information parameter condition can be determined flexibly according to actual needs, the pilot's flight experience, and so on. Take the condition on z1, the height of a scan point relative to the horizontal plane where the drone is located, as an example. If safer flight is desired, z1 > -1 can be set, bringing the space within 1 m below the drone's horizontal plane into the observation range, so that scan points falling in that range can possibly be confirmed as target scan points. If such a large range is unnecessary — for instance, when the drone operates at a fixed height of 0.5 m above the plants (0.5 m above the plants gives a better spraying effect than 1 m above them) — then z1 > -1 clearly observes too large a range, and z1 > -0.5 can be set instead, so that only the space within 0.5 m below the drone's horizontal plane is observed.
When the parameter condition set for a scene category is determined in the preliminary stage, in an implementation, a neural network model can also be used. Specifically, a training sample set can be built from multiple already-determined groups of samples, each taking the environment data of a scene category as input and the parameter condition set of that scene category as output, and the neural network model can be trained with this sample set. To determine the parameter condition set of a new scene category, the environment data of that scene category can be collected and input into the trained neural network model, which then outputs the parameter condition set corresponding to the new scene category.
Considering that some users may want the obstacle avoidance function of a drone (taking a drone as an example, though the movable platform is not limited to drones) to ignore weak and small obstacles, while other users favor flight safety and want the drone to collide with no obstacle whatsoever, at least two different parameter condition sets can be determined for each scene category during tuning. Continuing the cornfield example, two parameter condition sets can be configured for the cornfield scene category: the first may include the echo energy parameter condition E > 5000 and the position information parameter conditions z1 > -1, h0 > 1, and h1 > 0.5; the second may include the echo energy parameter condition E > 10000 and the position information parameter conditions z1 > -0.5, h0 > 1, and h1 > 0.5.
It can be understood that the first parameter condition set corresponds to a high-sensitivity obstacle avoidance function: when it is applied, the drone performs obstacle avoidance against low-echo-energy obstacles such as reeds and corn leaves, and scan points anywhere within 1 m below the drone may be regarded as obstacle scan points. The second parameter condition set corresponds to a low-sensitivity obstacle avoidance function: when it is applied, the drone ignores low-echo-energy obstacles such as reeds and corn leaves, and only scan points within 0.5 m below the drone may be regarded as obstacle scan points.
With at least two different parameter condition sets determined for each scene category, in application, after the scene category of the current environment is determined, the at least two parameter condition sets corresponding to that scene category can be presented to the user for selection, and one of them can be chosen, according to the user's selection instruction, as the parameter condition set the drone uses during operation. Referring to FIG. 4, which shows a possible user interaction interface, the user can touch a sensitivity control to switch the sensitivity level of the obstacle avoidance function to any of high, medium, and low.
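The per-scene parameter condition sets and the selection by the user's instruction can be sketched as follows; the dictionary layout and level names mirror the high/low cornfield example above, while the "medium" level shown in FIG. 4 is omitted for brevity.

```python
# The two parameter condition sets configured for the "cornfield" scene
# category, following the example in the text (dictionary layout assumed).
CORNFIELD_SETS = {
    "high_sensitivity": {"E": 5000, "z1": -1.0, "h0": 1.0, "h1": 0.5},
    "low_sensitivity":  {"E": 10000, "z1": -0.5, "h0": 1.0, "h1": 0.5},
}

def select_parameter_set(scene_sets, user_choice):
    """Pick the parameter condition set named by the user's selection
    instruction, defaulting to the safest (most sensitive) set."""
    return scene_sets.get(user_choice, scene_sets["high_sensitivity"])

print(select_parameter_set(CORNFIELD_SETS, "low_sensitivity")["E"])  # 10000
```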
It should be noted that, when the target scan points are determined, although a target scan point meets both the echo energy parameter condition and the position information parameter condition, meeting these two kinds of conditions does not necessarily mean meeting every condition of both kinds. For example, the parameter condition set may include the echo energy parameter condition E > 10000 and the position information parameter conditions z1 > -0.5, h0 > 1, and h1 > 0.5; it may then be specified that a scan point satisfying the two conditions E > 10000 and z1 > -0.5 is a target scan point, and that a scan point satisfying E > 10000, h0 > 1, and h1 > 0.5 is also a target scan point. In other words, exactly which of the echo energy and position information conditions a target scan point must meet can be set flexibly as needed.
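The flexible combination described here amounts to a disjunction of condition groups: a scan point is a target point if it satisfies all conditions of at least one group. A minimal sketch, with the group layout assumed:

```python
def is_target_point(point, condition_groups):
    """True if the scan point satisfies every lower-bound condition in at
    least one of the configured condition groups."""
    return any(all(point[key] > lower for key, lower in group.items())
               for group in condition_groups)

# The example combination from the text: either {E > 10000, z1 > -0.5}
# or {E > 10000, h0 > 1, h1 > 0.5} qualifies a point.
groups = [{"E": 10000.0, "z1": -0.5},
          {"E": 10000.0, "h0": 1.0, "h1": 0.5}]
print(is_target_point({"E": 10500.0, "z1": -0.8, "h0": 1.2, "h1": 0.6}, groups))  # True
```

In the printed example the first group fails (z1 = -0.8 is below -0.5) but the second group passes, so the point still counts as a target scan point.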
The scene category of the environment can be determined in several ways. In one implementation, it can be determined through interaction with the user, according to an instruction input by the user. In another implementation, the environment can be recognized automatically to determine its scene category. Specifically, in one implementation, recognition can be performed using the point cloud data obtained by the radar scanning the environment; for example, a scene recognition model taking point cloud data as input and the scene category as output can be trained in advance, and the scene category of the environment can be determined through this model.
In yet another implementation, the scene category can be recognized through image recognition technology. Specifically, the movable platform can carry a camera; the current operating environment is photographed by the camera to obtain a scene image of the environment, and the scene category of the environment can be determined by recognizing this scene image. For recognition, the scene image can be input into a pre-trained scene recognition model, which outputs the scene category of the scene image through computation, thereby determining the scene category of the environment. The scene recognition model may be a convolutional neural network model; other models are of course possible and are not elaborated here.
Although the parameter condition set applied by the obstacle avoidance function can be determined according to the scene category of the environment, in some scenarios the determined parameter condition set may still fail to meet the user's needs. Since operating environments are complex and variable, the parameter condition set determined for a scene category in preliminary work often only best fits the majority of typical environments of that category, while the actual operating environment may well differ from the typical one. One possible situation is that the actual environment contains special obstacles that the corresponding scene category usually lacks. For example, the obstacles in a cornfield are usually tall corn leaves and perhaps some utility poles, but a particular user's cornfield may be special and also contain bamboo poles. If the user considers the bamboo poles weak obstacles that need not be dodged, yet the parameter condition set determined for cornfields in preliminary work did not take this special case into account, the drone using that parameter condition set is likely to perform obstacle avoidance against the bamboo poles, appearing insufficiently intelligent and failing to meet the user's needs.
In view of the above situation, an embodiment of the present application provides an implementation in which the obstacles the user wants to ignore are determined through interaction with the user, and the current parameter condition set is corrected according to the ignorable obstacles the user selects. It can be understood that the current parameter condition set can also be regarded as the initial parameter condition set, which may be the default parameter condition set provided by the system when the drone is started, or the parameter condition set determined according to the scene category of the environment as mentioned above.
Specifically, during correction, the object category of the user-selected ignorable obstacle can first be determined; in one example, the object category may be bamboo pole, reed, and so on. After the object category of the ignorable obstacle is determined, the characteristic parameter corresponding to that object category can be further determined, and the initial parameter condition set can be corrected according to the characteristic parameter. When determining the characteristic parameter of an object category, in one implementation, matching can be performed based on a pre-configured correspondence; specifically, the pre-configured correspondence may include the characteristic parameters of various object categories, for example the characteristic parameters corresponding to bamboo poles, the characteristic parameters corresponding to reeds, and so on. This correspondence can be determined in advance in preliminary work: for each object category, the corresponding point cloud data can be acquired by scanning obstacles of that category with the radar, and after the point cloud data of the object category is obtained, feature extraction can be performed on it to obtain the corresponding characteristic parameter.
In another implementation, the characteristic parameter of the object category can also be extracted in real time. For example, after the user-selected ignorable obstacle is determined, the currently collected point cloud data of that ignorable obstacle can be analyzed, the characteristic parameter of the ignorable obstacle can be extracted in real time, and the current parameter condition set can be corrected according to the characteristic parameter.
For how the parameter condition set is corrected according to the characteristic parameter, the following example may help understanding. Suppose the echo energy parameter condition in the parameter condition set determined for cornfields in preliminary work includes E > 6000, and the characteristic parameter of the user-selected ignorable obstacle, a bamboo pole, indicates that its echo energy is, say, 6500. When the parameter condition set is corrected according to this characteristic parameter, the echo energy parameter condition can be raised to E > 6600, so that the obstacle avoidance function can ignore the bamboo pole and not avoid it.
There are also several optional ways to determine the object category of the user-selected ignorable obstacle. In one implementation, the object category can be determined through interaction with the user; for example, a selection page containing multiple object categories can be provided for the user to choose from, so that the object category of the ignorable obstacle can be determined according to the user's selection. In another implementation, the object category of the ignorable obstacle can be determined through image recognition technology. For example, the operating scene can be photographed by the camera carried on the movable platform; the user can then box-select, in the captured frame, the obstacle the user wants to ignore, and according to the box-selection instruction input by the user, the image corresponding to the ignorable obstacle can be cropped out, so that the object category of the ignorable obstacle can be determined by recognizing that image.
For recognizing the image of the ignorable obstacle, similarly to the scene image recognition described above, an obstacle recognition model can be trained in advance; in application, the box-selected image of the ignorable obstacle is simply input into the obstacle recognition model, and the object category of the ignorable obstacle output by the model is obtained. Likewise, the obstacle recognition model may be a convolutional neural network model.
The above is a detailed description of the control method provided by the embodiments of the present application. The method can, according to the echo energy parameter condition, filter out from the multiple scan points obtained by radar scanning those scan points whose echo energy does not meet the condition; the filtered-out scan points correspond to ignorable weak and small obstacles, which means that among the remaining target scan points the scan points corresponding to obstacles all belong to non-ignorable obstacles. The obstacle avoidance function implemented on the basis of the target scan points therefore does not consider weak and small obstacles, so that such obstacles are ignored during obstacle avoidance.
Moreover, the method provided by the embodiments of the present application can provide the user, for a specific operating environment, with the parameter condition set most suitable for that environment; multiple parameter condition sets can be provided, each corresponding to a different sensitivity level, so as to meet different users' needs for obstacle avoidance. In addition, the parameter condition set can be adjusted, in a personalized way, for the obstacles the user wants to ignore, so that the obstacle avoidance function conforms to the user's needs as far as possible, ignores the obstacles the user selects, and achieves the intelligence the user expects.
Referring now to FIG. 5, a schematic structural diagram of a control device provided by an embodiment of the present application: in its specific product form, the control device can be implemented in multiple ways; for example, it may be an independent product itself, or part of other electronic equipment such as a radar or a drone controller. The control device includes a processor 510 and a memory 520 storing a computer program;
the processor 510 implements the following steps when executing the computer program:
acquiring the echo energy and position information corresponding to each of multiple scan points obtained by a radar of a movable platform scanning an environment;
acquiring a set parameter condition set, wherein the parameter condition set includes an echo energy parameter condition;
determining target scan points among the multiple scan points according to the parameter condition set, wherein the echo energy of the target scan points meets the echo energy parameter condition; and
controlling the movable platform to perform obstacle avoidance movement according to the position information of the target scan points.
Optionally, the parameter condition set further includes a position information parameter condition, and the position information of the determined target scan points meets the position information parameter condition.
Optionally, when performing the step of controlling the movable platform to perform obstacle avoidance movement according to the position information of the target scan points, the processor is specifically configured to: cluster the target scan points into point cloud clusters; determine the position information of each point cloud cluster according to the position information of the target scan points in the cluster; and control the movable platform to perform obstacle avoidance movement according to the position information of the point cloud clusters.
Optionally, the position information of a point cloud cluster includes the coordinates of the cluster's center point and the cluster's three-dimensional size.
Optionally, the echo energy parameter condition includes that the echo energy corresponding to a scan point is greater than an echo energy threshold.
Optionally, the parameter condition set is determined according to the scene category of the environment.
Optionally, when performing the step of determining the parameter condition set according to the scene category, the processor is specifically configured to determine the parameter condition set corresponding to the scene category according to a pre-configured correspondence between scene categories and parameter condition sets.
Optionally, in the correspondence, one scene category corresponds to at least two parameter condition sets.
Optionally, the parameter condition set is determined from the at least two parameter condition sets according to a selection instruction of the user.
Optionally, the movable platform carries a camera, and the scene category is determined by recognizing a scene image captured by the camera.
Optionally, when performing the step of recognizing the scene image, the processor is specifically configured to input the scene image into a pre-trained scene recognition model to obtain the scene category of the scene image output by the model, wherein the scene recognition model is a convolutional neural network model.
Optionally, the parameter condition set is obtained by correcting an initial parameter condition set according to an ignorable obstacle selected by the user.
Optionally, when performing the step of correcting the initial parameter condition set according to the ignorable obstacle, the processor is specifically configured to: determine the object category of the ignorable obstacle; determine the characteristic parameter of the object category; and correct the initial parameter condition set according to the characteristic parameter, wherein the characteristic parameter is obtained in advance by performing feature extraction on the radar scan data of the object category.
Optionally, the object category of the ignorable obstacle is determined by recognizing an image corresponding to the ignorable obstacle.
Optionally, the movable platform carries a camera, and the image corresponding to the ignorable obstacle is cropped from an image captured by the camera according to a box-selection instruction input by the user.
Optionally, the position information of a scan point includes one or more of the following: distance, azimuth, pitch angle, coordinates in the body coordinate system, coordinates in the coordinate system corresponding to the horizontal plane where the body is located, height relative to the ground plane, and height relative to the plants.
Optionally, the radar is a millimeter wave radar.
For the specific implementation of the control device in the above embodiments, reference may be made to the corresponding description of the control method above, which is not repeated here.
The control device provided by the embodiments of the present application can, according to the echo energy parameter condition, filter out from the multiple scan points obtained by radar scanning those scan points whose echo energy does not meet the condition; the filtered-out scan points correspond to ignorable weak and small obstacles, which means that among the remaining target scan points the scan points corresponding to obstacles all belong to non-ignorable obstacles. The obstacle avoidance function implemented on the basis of the target scan points therefore does not consider weak and small obstacles, so that such obstacles are ignored during obstacle avoidance.
Moreover, the control device can provide the user, for a specific operating environment, with the parameter condition set most suitable for that environment; multiple parameter condition sets can be provided, each corresponding to a different sensitivity level, so as to meet different users' needs for obstacle avoidance. In addition, the parameter condition set can be adjusted, in a personalized way, for the obstacles the user wants to ignore, so that the obstacle avoidance function conforms to the user's needs as far as possible, ignores the obstacles the user selects, and achieves the intelligence the user expects.
Referring now to FIG. 6, a schematic structural diagram of a movable platform provided by an embodiment of the present application: the movable platform may include a body 610, a power device 620 connected to the body, a radar 630 mounted on the body 610, a processor 611, and a memory 612 storing a computer program;
the processor 611 implements the following steps when executing the computer program:
acquiring the echo energy and position information corresponding to each of multiple scan points obtained by the radar 630 scanning an environment;
acquiring a set parameter condition set, wherein the parameter condition set includes an echo energy parameter condition;
determining target scan points among the multiple scan points according to the parameter condition set, wherein the echo energy of the target scan points meets the echo energy parameter condition; and
controlling the movable platform to perform obstacle avoidance movement according to the position information of the target scan points.
Optionally, the parameter condition set further includes a position information parameter condition, and the position information of the determined target scan points meets the position information parameter condition.
Optionally, when performing the step of controlling the movable platform to perform obstacle avoidance movement according to the position information of the target scan points, the processor is specifically configured to: cluster the target scan points into point cloud clusters; determine the position information of each point cloud cluster according to the position information of the target scan points in the cluster; and control the movable platform to perform obstacle avoidance movement according to the position information of the point cloud clusters.
Optionally, the position information of a point cloud cluster includes the coordinates of the cluster's center point and the cluster's three-dimensional size.
Optionally, the echo energy parameter condition includes an echo energy threshold, and the echo energy of the target scan points is higher than the echo energy threshold.
Optionally, the parameter condition set is determined according to the scene category of the environment.
Optionally, when performing the step of determining the parameter condition set according to the scene category, the processor is specifically configured to determine the parameter condition set corresponding to the scene category according to a pre-configured correspondence between scene categories and parameter condition sets.
Optionally, in the correspondence, one scene category corresponds to at least two parameter condition sets.
Optionally, the parameter condition set is determined from the at least two parameter condition sets according to a selection instruction of the user.
Optionally, the movable platform carries a camera, and the scene category is determined by recognizing a scene image captured by the camera.
Optionally, when performing the step of recognizing the scene image, the processor is specifically configured to input the scene image into a pre-trained scene recognition model to obtain the scene category of the scene image output by the model, wherein the scene recognition model is a convolutional neural network model.
Optionally, the parameter condition set is obtained by correcting an initial parameter condition set according to an ignorable obstacle selected by the user.
Optionally, when performing the step of correcting the initial parameter condition set according to the ignorable obstacle, the processor is specifically configured to: determine the object category of the ignorable obstacle; determine the characteristic parameter of the object category; and correct the initial parameter condition set according to the characteristic parameter, wherein the characteristic parameter is obtained in advance by performing feature extraction on the radar scan data of the object category.
Optionally, the object category of the ignorable obstacle is determined by recognizing an image corresponding to the ignorable obstacle.
Optionally, the movable platform carries a camera, and the image corresponding to the ignorable obstacle is cropped from an image captured by the camera according to a box-selection instruction input by the user.
Optionally, the position information of a scan point includes one or more of the following: distance, azimuth, pitch angle, coordinates in the body coordinate system, coordinates in the coordinate system corresponding to the horizontal plane where the body is located, height relative to the ground plane, and height relative to the plants.
Optionally, the radar is a millimeter wave radar.
Optionally, the movable platform includes a drone.
For the specific implementation of the movable platform in the above embodiments, reference may be made to the corresponding description of the control method above, which is not repeated here.
The movable platform provided by the embodiments of the present application can, according to the echo energy parameter condition, filter out from the multiple scan points obtained by radar scanning those scan points whose echo energy does not meet the condition; the filtered-out scan points correspond to ignorable weak and small obstacles, which means that among the remaining target scan points the scan points corresponding to obstacles all belong to non-ignorable obstacles. The obstacle avoidance function implemented on the basis of the target scan points therefore does not consider weak and small obstacles, so that such obstacles are ignored during obstacle avoidance.
Moreover, the movable platform can provide the user, for a specific operating environment, with the parameter condition set most suitable for that environment; multiple parameter condition sets can be provided, each corresponding to a different sensitivity level, so as to meet different users' needs for obstacle avoidance. In addition, the parameter condition set can be adjusted, in a personalized way, for the obstacles the user wants to ignore, so that the obstacle avoidance function conforms to the user's needs as far as possible, ignores the obstacles the user selects, and achieves the intelligence the user expects.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements any control method provided by the embodiments of the present application.
As long as there is no conflict or contradiction, those skilled in the art can combine the technical features provided in the above embodiments according to the actual situation, thereby forming various different embodiments. For reasons of length, this document does not describe all these different embodiments, but it can be understood that they also fall within the scope disclosed by the embodiments of the present application.
The embodiments of the present application may take the form of a computer program product implemented on one or more storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing program code. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage can be achieved by any method or technology. The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between these entities or operations. The terms "include", "comprise", or any variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes the element.
The method provided by the embodiments of the present application has been introduced in detail above. Specific examples have been used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and application scope based on the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.
Claims (36)
- A control method, characterized by comprising: acquiring echo energy and position information corresponding to each of multiple scan points obtained by a radar of a movable platform scanning an environment; acquiring a set parameter condition set, wherein the parameter condition set includes an echo energy parameter condition; determining target scan points among the multiple scan points according to the parameter condition set, wherein the echo energy of the target scan points meets the echo energy parameter condition; and controlling the movable platform to perform an obstacle avoidance motion according to the position information of the target scan points.
- The control method according to claim 1, characterized in that the parameter condition set further includes a position information parameter condition, and the position information of the determined target scan points meets the position information parameter condition.
- The control method according to claim 1, characterized in that controlling the movable platform to perform the obstacle avoidance motion according to the position information of the target scan points comprises: clustering the target scan points to obtain a point cloud cluster; determining position information of the point cloud cluster according to the position information of the target scan points in the point cloud cluster; and controlling the movable platform to perform the obstacle avoidance motion according to the position information of the point cloud cluster.
- The control method according to claim 3, characterized in that the position information of the point cloud cluster includes the center point coordinates of the point cloud cluster and the three-dimensional size of the point cloud cluster.
- The control method according to claim 1, characterized in that the echo energy parameter condition includes that the echo energy corresponding to a scan point is greater than an echo energy threshold.
- The control method according to claim 1, characterized in that the parameter condition set is determined according to the scene category corresponding to the environment.
- The control method according to claim 6, characterized in that determining the parameter condition set according to the scene category comprises: determining the parameter condition set corresponding to the scene category according to a pre-configured correspondence between scene categories and parameter condition sets.
- The control method according to claim 7, characterized in that, in the correspondence, one scene category corresponds to at least two parameter condition sets.
- The control method according to claim 8, characterized in that the parameter condition set is determined from the at least two parameter condition sets according to a selection instruction from the user.
- The control method according to claim 6, characterized in that the movable platform carries a camera, and the scene category is determined by recognizing a scene image captured by the camera.
- The control method according to claim 10, characterized in that recognizing the scene image comprises: inputting the scene image into a pre-trained scene recognition model to obtain the scene category, output by the scene recognition model, corresponding to the scene image, wherein the scene recognition model is a convolutional neural network model.
- The control method according to claim 1, characterized in that the parameter condition set is obtained by correcting an initial parameter condition set according to an ignorable obstacle selected by the user.
- The control method according to claim 12, characterized in that correcting the initial parameter condition set according to the ignorable obstacle comprises: determining the object category corresponding to the ignorable obstacle, determining the feature parameter corresponding to the object category, and correcting the initial parameter condition set according to the feature parameter, wherein the feature parameter is obtained in advance by performing feature extraction on radar scan data corresponding to the object category.
- The control method according to claim 13, characterized in that the object category corresponding to the ignorable obstacle is determined by recognizing an image corresponding to the ignorable obstacle.
- The control method according to claim 14, characterized in that the movable platform carries a camera, and the image corresponding to the ignorable obstacle is cropped from an image captured by the camera according to a box-selection instruction input by the user.
- The control method according to claim 1, characterized in that the position information corresponding to a scan point includes one or more of the following: distance, azimuth angle, pitch angle, coordinates in the body coordinate system, coordinates in a coordinate system corresponding to the horizontal plane where the body is located, height relative to the ground plane, and height relative to plants.
- The control method according to claim 1, characterized in that the radar is a millimeter-wave radar.
- A control device, characterized by comprising: a processor and a memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps: acquiring echo energy and position information corresponding to each of multiple scan points obtained by a radar of a movable platform scanning an environment; acquiring a set parameter condition set, wherein the parameter condition set includes an echo energy parameter condition; determining target scan points among the multiple scan points according to the parameter condition set, wherein the echo energy of the target scan points meets the echo energy parameter condition; and controlling the movable platform to perform an obstacle avoidance motion according to the position information of the target scan points.
- The control device according to claim 18, characterized in that the parameter condition set further includes a position information parameter condition, and the position information of the determined target scan points meets the position information parameter condition.
- The control device according to claim 18, characterized in that, when executing the step of controlling the movable platform to perform the obstacle avoidance motion according to the position information of the target scan points, the processor is specifically configured to: cluster the target scan points to obtain a point cloud cluster; determine position information of the point cloud cluster according to the position information of the target scan points in the point cloud cluster; and control the movable platform to perform the obstacle avoidance motion according to the position information of the point cloud cluster.
- The control device according to claim 20, characterized in that the position information of the point cloud cluster includes the center point coordinates of the point cloud cluster and the three-dimensional size of the point cloud cluster.
- The control device according to claim 18, characterized in that the echo energy parameter condition includes that the echo energy corresponding to a scan point is greater than an echo energy threshold.
- The control device according to claim 18, characterized in that the parameter condition set is determined according to the scene category corresponding to the environment.
- The control device according to claim 23, characterized in that, when executing the step of determining the parameter condition set according to the scene category, the processor is specifically configured to determine the parameter condition set corresponding to the scene category according to a pre-configured correspondence between scene categories and parameter condition sets.
- The control device according to claim 24, characterized in that, in the correspondence, one scene category corresponds to at least two parameter condition sets.
- The control device according to claim 25, characterized in that the parameter condition set is determined from the at least two parameter condition sets according to a selection instruction from the user.
- The control device according to claim 23, characterized in that the movable platform carries a camera, and the scene category is determined by recognizing a scene image captured by the camera.
- The control device according to claim 27, characterized in that, when executing the step of recognizing the scene image, the processor is specifically configured to input the scene image into a pre-trained scene recognition model to obtain the scene category, output by the scene recognition model, corresponding to the scene image, wherein the scene recognition model is a convolutional neural network model.
- The control device according to claim 18, characterized in that the parameter condition set is obtained by correcting an initial parameter condition set according to an ignorable obstacle selected by the user.
- The control device according to claim 29, characterized in that, when executing the step of correcting the initial parameter condition set according to the ignorable obstacle, the processor is specifically configured to determine the object category corresponding to the ignorable obstacle, determine the feature parameter corresponding to the object category, and correct the initial parameter condition set according to the feature parameter, wherein the feature parameter is obtained in advance by performing feature extraction on radar scan data corresponding to the object category.
- The control device according to claim 30, characterized in that the object category corresponding to the ignorable obstacle is determined by recognizing an image corresponding to the ignorable obstacle.
- The control device according to claim 31, characterized in that the movable platform carries a camera, and the image corresponding to the ignorable obstacle is cropped from an image captured by the camera according to a box-selection instruction input by the user.
- The control device according to claim 18, characterized in that the position information corresponding to a scan point includes one or more of the following: distance, azimuth angle, pitch angle, coordinates in the body coordinate system, coordinates in a coordinate system corresponding to the horizontal plane where the body is located, height relative to the ground plane, and height relative to plants.
- The control device according to claim 18, characterized in that the radar is a millimeter-wave radar.
- A movable platform, characterized by comprising: a body, a power unit connected to the body, a radar carried on the body, a processor, and a memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps: acquiring echo energy and position information corresponding to each of multiple scan points obtained by the radar scanning an environment; acquiring a set parameter condition set, wherein the parameter condition set includes an echo energy parameter condition; determining target scan points among the multiple scan points according to the parameter condition set, wherein the echo energy of the target scan points meets the echo energy parameter condition; and controlling the movable platform to perform an obstacle avoidance motion according to the position information of the target scan points.
- A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the control method according to any one of claims 1 to 17.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/091587 WO2021232359A1 (zh) | 2020-05-21 | 2020-05-21 | Control method, control device, movable platform and computer-readable storage medium |
CN202080039444.XA CN113966496A (zh) | 2020-05-21 | 2020-05-21 | Control method, control device, movable platform and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021232359A1 true WO2021232359A1 (zh) | 2021-11-25 |
Family
ID=78708968
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113966496A (zh) |
WO (1) | WO2021232359A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114693187A (zh) * | 2022-05-31 | 2022-07-01 | Hangzhou Weiming Xinke Technology Co., Ltd. | Operation analysis method and device for tower crane cluster, storage medium and terminal |
CN117472069A (zh) * | 2023-12-28 | 2024-01-30 | Yantai Yukong Software Co., Ltd. | Robot control method and system for power transmission line detection |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105892489A (zh) * | 2016-05-24 | 2016-08-24 | Electric Power Research Institute of State Grid Shandong Electric Power Company | Multi-sensor fusion-based autonomous obstacle avoidance unmanned aerial vehicle system and control method |
CN107121677A (zh) * | 2017-06-02 | 2017-09-01 | Taiyuan University of Technology | Obstacle avoidance radar method and device based on ultra-wideband cognitive CPPM signal |
US20180356840A1 (en) * | 2016-02-29 | 2018-12-13 | Thinkware Corporation | Method and system for controlling unmanned air vehicle |
WO2019142181A1 (en) * | 2018-01-18 | 2019-07-25 | Israel Aerospace Industries Ltd. | Automatic camera driven aircraft control for radar activation |
US20200026310A1 (en) * | 2017-12-27 | 2020-01-23 | Topcon Corporation | Three-Dimensional Information Processing Unit, Apparatus Having Three-Dimensional Information Processing Unit, Unmanned Aerial Vehicle, Informing Device, Method and Program for Controlling Mobile Body Using Three-Dimensional Information Processing Unit |
CN110892285A (zh) * | 2018-11-26 | 2020-03-17 | SZ DJI Technology Co., Ltd. | Microwave radar and unmanned aerial vehicle |
CN212135234U (zh) * | 2020-01-19 | 2020-12-11 | State Grid Jiangsu Electric Power Co., Ltd. | Flight assisting device for power transmission line inspection unmanned aerial vehicle |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102508246B (zh) * | 2011-10-13 | 2013-04-17 | Jilin University | Method for detecting and tracking obstacles in front of a vehicle |
CN109344715A (zh) * | 2018-08-31 | 2019-02-15 | Beijing Dajia Internet Information Technology Co., Ltd. | Intelligent composition control method and device, electronic equipment and storage medium |
CN110390814A (zh) * | 2019-06-04 | 2019-10-29 | Shenzhen Suteng Innovation Technology Co., Ltd. | Monitoring system and method |
CN110865365B (zh) * | 2019-11-27 | 2022-05-24 | Jiangsu Jicui Intelligent Sensing Technology Research Institute Co., Ltd. | Parking lot noise elimination method based on millimeter-wave radar |
Also Published As
Publication number | Publication date |
---|---|
CN113966496A (zh) | 2022-01-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108122553B (zh) | Unmanned aerial vehicle control method and device, remote control equipment and unmanned aerial vehicle system | |
Wang et al. | UAV environmental perception and autonomous obstacle avoidance: A deep learning and depth camera combined solution | |
CN111932588B (zh) | Tracking method of an airborne unmanned aerial vehicle multi-target tracking system based on deep learning | |
AU2019321145B2 (en) | Method, device, and equipment for obstacle or ground recognition and flight control, and storage medium | |
WO2021232359A1 (zh) | Control method, control device, movable platform and computer-readable storage medium | |
WO2020103109A1 (zh) | Map generation method, device, aircraft and storage medium | |
WO2020103108A1 (zh) | Semantic generation method, device, aircraft and storage medium | |
US20200409394A1 (en) | Unmanned aerial vehicle control method and device, unmanned aerial vehicle, system, and storage medium | |
US20190278303A1 (en) | Method of controlling obstacle avoidance for unmanned aerial vehicle and unmanned aerial vehicle | |
CN111741897A (zh) | Control method and device for unmanned aerial vehicle, spraying system, unmanned aerial vehicle and storage medium | |
CN109669475A (zh) | Multi-UAV three-dimensional formation reconfiguration method based on artificial bee colony algorithm | |
CN113093772B (zh) | Precise landing method for unmanned aerial vehicle hangar | |
WO2020107248A1 (zh) | Safe landing method and device for unmanned aerial vehicle, unmanned aerial vehicle and medium | |
WO2010129907A2 (en) | Method and system for visual collision detection and estimation | |
CN105679322A (zh) | Unmanned aerial vehicle system based on airborne voice control and control method | |
Fu et al. | Vision-based obstacle avoidance for flapping-wing aerial vehicles | |
CN111831010A (zh) | Unmanned aerial vehicle obstacle avoidance flight method based on digital space slicing | |
Chen et al. | A review of autonomous obstacle avoidance technology for multi-rotor UAVs | |
Hu et al. | Research on uav balance control based on expert-fuzzy adaptive pid | |
GB2567921A (en) | Unmanned aerial vehicles | |
Lee | Research on multi-functional logistics intelligent Unmanned Aerial Vehicle | |
CN113454558A (zh) | Obstacle detection method and device, unmanned aerial vehicle and storage medium | |
CN116661498A (zh) | Obstacle target tracking method based on dynamic vision sensing and neural network | |
Tang et al. | sUAS and Machine Learning Integration in Waterfowl Population Surveys | |
Laurito et al. | Airborne localisation of small UAS using visual detection: A field experiment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20936646 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20936646 Country of ref document: EP Kind code of ref document: A1 |