WO2023056789A1 - Obstacle recognition method, system, device and storage medium for automatic driving of agricultural machinery - Google Patents

Obstacle recognition method, system, device and storage medium for automatic driving of agricultural machinery

Info

Publication number
WO2023056789A1
WO2023056789A1 (PCT/CN2022/114025)
Authority
WO
WIPO (PCT)
Prior art keywords
obstacle
agricultural machinery
picture
automatic driving
frame
Prior art date
Application number
PCT/CN2022/114025
Other languages
English (en)
French (fr)
Inventor
梅军辉
李奕成
李晓宇
具大源
Original Assignee
上海联适导航技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海联适导航技术股份有限公司
Publication of WO2023056789A1

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0219Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory ensuring the processing of the whole working surface
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision

Definitions

  • The invention relates to the technical field of visual recognition, and in particular to a method and system for recognizing obstacles during automatic driving of agricultural machinery.
  • Automatic driving of agricultural machinery is an interdisciplinary technology involving computer science, pattern recognition, electronics, communication, and control. By sensing the environmental information around the vehicle and positioning the vehicle itself, the technology plans the most suitable path for travel and operation, then controls the direction and speed of the vehicle, and finally enables agricultural vehicles to travel and operate autonomously.
  • Automatic driving of agricultural machinery is of great significance for reducing agricultural labor intensity, improving work efficiency, and increasing agricultural production capacity.
  • For agricultural vehicles to travel autonomously, they must sense their surroundings and accurately identify and locate surrounding obstacles.
  • Computer vision and deep learning are the key technologies used in the field of automatic driving to perceive the environment around the vehicle; they play a vital role and are important factors in realizing automatic driving successfully.
  • The present invention provides a method, system, device and storage medium for identifying obstacles during automatic driving of agricultural machinery; the specific technical scheme is as follows.
  • The present invention provides a method for identifying obstacles during automatic driving of agricultural machinery, comprising the steps described below.
  • The obstacle recognition method calculates the relative distance between the agricultural machine and an obstacle through a visual recognition model and a structured light camera. It solves the problem that, during automatic driving, the machine cannot accurately identify obstacle positions and therefore fails to avoid obstacles in time, which degrades the automatic driving performance; it improves the accuracy of locating obstacles during automatic driving and achieves accurate perception of the machine's surroundings.
  • The visual recognition model is trained on the original image corresponding to each obstacle, the vertex information of the prior frame, and the category.
  • When training the visual recognition model, the method uses the obstacle categories and frame vertex information from the agricultural machine's working scene, so that the model can accurately recognize the border information of low-speed or static obstacles in that scene and identify the relative position of obstacles and the agricultural machine.
  • Obtaining the distance between the obstacle and the structured light camera installed on the agricultural machine specifically includes:
  • projecting structured light onto the obstacle in three-dimensional space according to the projection angle, and identifying the depth values of several points on the obstacle in three-dimensional space.
  • The method discloses how to calculate the distance between the structured light camera and the obstacle by combining the structured light camera with the environment picture. It is suited to accurately identifying the distance between low-speed or static obstacles and the agricultural machine during operation, and improves the accuracy of locating obstacles during automatic driving.
  • Calculating, from the environment picture, the angle between the center point of the obstacle in three-dimensional space and the center point of the agricultural machine specifically includes the steps described below.
  • The method discloses how to calculate the angle between the obstacle and the structured light camera; from this angle the distance between the structured light camera and the obstacle can be further calculated, improving the accuracy of locating obstacles during automatic driving.
  • Calculating the relative position of the obstacle and the structured light camera as the relative position of the agricultural machine and the obstacle specifically includes the steps described below.
  • The method discloses how to calculate the relative position of the obstacle and the structured light camera, which yields the relative distance between the agricultural machine and the obstacle, so that the machine can later adjust its heading directly according to that relative distance to avoid the obstacle, achieving accurate perception of its surroundings.
  • Calibrating the obstacle categories and frame vertex information in the working scene of the agricultural machine specifically includes the steps described below.
  • Training the visual recognition model from the original image corresponding to each obstacle, the vertex information of the prior frame, and the category specifically includes:
  • training the visual recognition model in advance from the original image corresponding to each obstacle, the vertex information of the prior frame, and the category.
  • The method also provides a way to train an offline visual recognition model. With an offline model and the computed relative position between the agricultural machine and the obstacle, unmanned operation of the machine requires no network connection, enabling offline unmanned driving of agricultural machinery and expanding its range of application.
  • The present invention also provides an obstacle recognition system for automatic driving of agricultural machinery, including:
  • a collection module, used to collect the environment picture in the traveling direction of the agricultural machine;
  • a visual recognition model processing module, connected with the collection module and storing a visual recognition model, which recognizes from the environment picture, through the visual recognition model, the first coordinates of several vertices on the frame of an obstacle in the environment picture;
  • a first recognition module, connected with the visual recognition model processing module and used to combine the several first coordinates to identify the depth values of several points on the obstacle in three-dimensional space;
  • a first calculation module, connected to the first recognition module and used to calculate the distance between the agricultural machine and the obstacle from the depth values;
  • a second calculation module, connected to the collection module and used to calculate, from the environment picture, the angle between the center point of the obstacle in three-dimensional space and the center point of the agricultural machine; and
  • a third calculation module, connected with the collection module and the second calculation module and used to combine the distance and the angle between the agricultural machine and the obstacle to calculate the relative position between the obstacle and the agricultural machine.
  • The obstacle recognition system for automatic driving of agricultural machinery further includes:
  • an acquisition module, configured to acquire multiple frames of the environment picture in the working scene of the agricultural machine and to identify the original pictures of several obstacles in each frame;
  • a calibration module, connected to the acquisition module and used to adjust the scale of a prior frame of preset shape and use the prior frame to calibrate each obstacle in each frame of the environment picture;
  • an identification module, connected to the calibration module and used to identify the vertex information of the frame corresponding to each obstacle and the category corresponding to each obstacle; and
  • a visual recognition model training module, connected with the acquisition module and the identification module and used to train the visual recognition model from the original image corresponding to each obstacle, the vertex information of the frame, and the category.
  • The present invention also provides an obstacle recognition device for automatic driving of agricultural machinery, including a processor, a memory, and a computer program stored in the memory and runnable on the processor; the processor is used to execute the computer program stored in the memory to carry out the operations of the obstacle recognition method described above.
  • The present invention also provides a storage medium in which at least one instruction is stored; the instruction is loaded and executed by a processor to carry out the operations of the obstacle recognition method described above.
  • The present invention provides a method, system, device and storage medium for identifying obstacles during automatic driving of agricultural machinery, with at least one of the technical effects listed in the summary of the description.
  • Fig. 1 is a flowchart of an obstacle recognition method for automatic driving of agricultural machinery according to the present invention.
  • Fig. 2 is a flowchart of establishing the visual recognition model in an obstacle recognition method for automatic driving of agricultural machinery according to the present invention.
  • Fig. 3 is another flowchart of establishing the visual recognition model in an obstacle recognition method for automatic driving of agricultural machinery according to the present invention.
  • Fig. 4 is a flowchart of calculating the distance between the structured light camera and the obstacle in an obstacle recognition method for automatic driving of agricultural machinery according to the present invention.
  • Fig. 5 is a flowchart of calculating the angle between an obstacle and the structured light camera in an obstacle recognition method for automatic driving of agricultural machinery according to the present invention.
  • Fig. 6 is a flowchart of calculating the relative position of an obstacle and the structured light camera in an obstacle recognition method for automatic driving of agricultural machinery according to the present invention.
  • Fig. 7 is an example diagram of an obstacle recognition system for automatic driving of agricultural machinery according to the present invention.
  • Fig. 8 is another example diagram of an obstacle recognition system for automatic driving of agricultural machinery according to the present invention.
  • Fig. 9 is an example diagram of an obstacle recognition device for automatic driving of agricultural machinery according to the present invention.
  • Reference numerals in the figures include, among others, the visual recognition model training module 74, the processor 110, the memory 120, and the computer program 121.
  • The present invention provides a method for identifying obstacles during automatic driving of agricultural machinery, comprising the following steps.
  • S200: collect the environment picture in the traveling direction of the agricultural machine.
  • The camera captures the picture of the machine's forward direction in real time; the captured picture can be a video or a still image.
  • S300: recognize, from the environment picture and through the visual recognition model, the first coordinates of several vertices on the frame of the obstacle in the environment picture.
  • One frame of the forward-direction picture is selected every preset time interval. Since agricultural machinery travels relatively slowly, and most obstacles in its working scenes are low-speed or static, the preset interval can be set to 0.5 s, 1 s, 2 s, and so on.
  • That frame is transmitted to the processor, which carries the visual recognition model; the model recognizes the frame outline of the obstacle in the picture and marks several vertices of that outline.
  • A plane coordinate system is established with the center point of the environment picture as the origin, and the pixel coordinates of the several vertices of the obstacle's frame outline are computed in this coordinate system as the first coordinates.
  • S400: combine the several first coordinates to identify the depth values of several points on the obstacle in three-dimensional space.
  • A structured light camera can be used to identify the depth values of several points on the obstacle in three-dimensional space.
  • Based on the coordinates of the vertices of the obstacle's frame outline in the captured environment picture, the structured light depth camera projects light with specific structural characteristics, through a near-infrared laser, onto the region inside the frame outline of the photographed object; a dedicated infrared camera then collects the pattern to obtain the position and depth information of the object.
  • S500: calculate the distance between the agricultural machine and the obstacle from the depth values.
  • The distance d from the obstacle to the camera is obtained by filtering out the null values and outliers among the depth values of the points inside the reduced frame and averaging the rest; taking multiple points around the center point improves accuracy.
  • S600: calculate, from the environment picture, the angle between the center point of the obstacle in three-dimensional space and the center point of the agricultural machine.
  • Using the inverse matrix of the structured light camera installed at the center point of the agricultural machine, the ray between the center point of the obstacle's frame outline in the captured environment picture and the center point of the structured light camera is calculated, together with the ray projected vertically from the camera's center point onto the environment picture; the angle between the two rays is taken as the angle between the obstacle and the structured light camera.
  • S700: combine the distance and the angle between the agricultural machine and the obstacle to calculate the relative position between the obstacle and the agricultural machine.
  • The relative coordinates between the obstacle and the agricultural machine can then be obtained through trigonometric functions and similar triangles.
  • The method calculates the relative distance between the agricultural machine and the obstacle through the visual recognition model and the structured light camera. It solves the problem that, during automatic driving, the machine cannot accurately identify obstacle positions and therefore fails to avoid obstacles in time, which degrades the automatic driving performance; it improves the accuracy of locating obstacles during automatic driving and achieves accurate perception of the machine's surroundings.
  • The method for identifying obstacles during automatic driving of agricultural machinery further includes:
  • S110: acquire multiple frames of the environment picture in the working scene of the agricultural machine, and recognize the original pictures of several obstacles in each frame.
  • A prior frame matching the size of obstacles in the agricultural working scene is preset.
  • When an obstacle is detected in the environment picture, the prior frame is used to calibrate its position.
  • The visual recognition model used in this method is built on the highly real-time YOLO-v3 model and is optimized and trained for agricultural scenarios.
  • The model uses convolutional and residual neural networks to extract features from image data, uses multi-scale features for object detection, and uses logistic regression for frame prediction.
  • The model is optimized by gradient descent against a loss function in which S is the number of grid cells and B denotes the boxes (the equation itself is given in the description).
  • This model is trained specifically for agricultural machinery operation scenarios.
  • It uses images of common farmland obstacles as training data, and the obstacle categories and frame information are accurately calibrated in these training images.
  • During training, the hyperparameters are optimized to further improve recognition accuracy.
  • S140: train the visual recognition model from the original image corresponding to each obstacle, the vertex information of the prior frame, and the category.
  • The trained model and the weight matrix obtained from training are deployed to the processor, and the model is optimized for that processor.
  • The model recognizes the pictures captured by the camera and outputs obstacle information: class (obstacle category), confidence (confidence of the recognized obstacle), and bounding box (the vertex coordinates of the obstacle's frame).
  • Before step S200 of collecting the environment picture in the traveling direction of the agricultural machine, the method may also include:
  • S150: acquire in advance multiple frames of the environment picture in the working scene of the agricultural machine, and recognize the original pictures of several obstacles in each frame.
  • Because the visual recognition model is built offline, unmanned operation of the agricultural machine needs no network connection; offline unmanned driving is realized and its range of application is expanded.
  • Step S400 of combining the several first coordinates to identify the depth values of several points on the obstacle in three-dimensional space specifically includes:
  • S410: combine the several first coordinates to calculate the center point of the obstacle's frame in the environment picture;
  • S420: using the center point as a reference, shrink the obstacle's frame in the environment picture by a preset ratio.
  • A reduced frame is obtained around the center point according to the preset ratio; the preset ratio may be, for example, 1:10.
  • S440: project structured light onto the obstacle in three-dimensional space according to the projection angle, and identify the depth values of several points on the obstacle in three-dimensional space.
  • The depth values of the points inside the reduced frame are read by the depth camera.
  • The distance d from the obstacle to the camera is obtained by filtering out the null values and outliers among these depth values and averaging the rest; taking multiple points around the center point improves accuracy.
  • When training the visual recognition model, the method trains it on the obstacle categories and frame vertex information from the agricultural machine's working scene, so the model can accurately recognize the frame information of low-speed or static obstacles.
  • The method also discloses how to calculate the distance between the structured light camera and the obstacle by combining the structured light camera with the environment picture, which is suited to accurately identifying the distance between low-speed or static obstacles and the agricultural machine during operation and improves the accuracy of locating obstacles during automatic driving.
  • In addition, building an offline visual recognition model and computing the relative position between the machine and the obstacle means that unmanned operation needs no network connection, realizing offline unmanned driving of agricultural machinery and expanding its range of application.
  • Step S500 of calculating the distance between the agricultural machine and the obstacle from the depth values specifically includes the following.
  • S520: calculate, through the projection relationship, a first vector projected vertically from the center point of the agricultural machine onto the environment picture.
  • When a structured light camera is used, the inverse of the camera matrix serves as the projection relationship: the ray r1 that back-projects a 2D point into 3D space is obtained from the inverse of the camera matrix, where x and y are the pixel coordinates in the two-dimensional image.
  • S530: calculate, through the projection relationship, a second vector projected from the center point of the agricultural machine toward the center point of the obstacle in the environment picture.
  • The second vector r2 is the vector projected from the center point of the structured light camera installed at the center point of the agricultural machine toward the center point of the obstacle in the environment picture.
  • S540: calculate the angle between the first vector and the second vector as the angle between the obstacle and the agricultural machine.
  • The cosine of the angle α between the camera and the object is the dot product of r1 and r2 divided by the product of their norms; α is then obtained through the arc cosine, as given in the description.
  • Step S600 of calculating, from the environment picture, the angle between the center point of the obstacle and the center point of the agricultural machine specifically includes:
  • S610: obtain the second coordinates of the projection of the agricultural machine's center point in the environment picture, and the third coordinates of the obstacle's center point in the environment picture;
  • S620: calculate the first pixel distance in the horizontal direction and the second pixel distance in the vertical direction between the second coordinates and the third coordinates (the corresponding formulas, together with the pixel focal length, are given in the description);
  • S630: calculate the relative position between the obstacle and the agricultural machine from the first pixel distance, the second pixel distance, the included angle, and the distance between the agricultural machine and the obstacle.
  • The method thus discloses how to calculate the angle between the obstacle and the structured light camera, from which the distance between the structured light camera and the obstacle can be further calculated, and how to calculate the relative position of the obstacle and the structured light camera; this yields the relative distance between the agricultural machine and the obstacle, so the machine can directly adjust its heading according to that relative distance to avoid the obstacle, achieving accurate perception of its surroundings.
  • The present invention provides an obstacle recognition system for automatic driving of agricultural machinery, including a collection module 10, a visual recognition model processing module 20, a first recognition module 30, a first calculation module 40, a second calculation module 50, and a third calculation module 60.
  • The collection module 10 is used to collect the environment picture in the traveling direction of the agricultural machine.
  • The collection module 10 captures the picture of the machine's forward direction in real time; the captured picture can be a video or a still image.
  • The visual recognition model processing module 20 is connected with the collection module 10, stores a visual recognition model, and recognizes from the environment picture, through the visual recognition model, the first coordinates of several vertices on the frame of an obstacle in the environment picture.
  • The visual recognition model processing module 20 takes a picture of the machine's forward direction every preset time interval. Since agricultural machinery travels relatively slowly, and most obstacles in its working scenes are low-speed or static, the preset interval can be set to 0.5 s, 1 s, 2 s, and so on.
  • The picture of the machine's forward direction is transmitted to the visual recognition model processing module 20, which carries the visual recognition model; the model recognizes the frame outline of the obstacle in the picture and marks several vertices of that outline.
  • A plane coordinate system is established with the center point of the environment picture as the origin, and the pixel coordinates of the several vertices of the obstacle's frame outline are computed in this coordinate system as the first coordinates.
  • The first recognition module 30 is connected with the visual recognition model processing module 20 and is used to combine the several first coordinates to identify the depth values of several points on the obstacle in three-dimensional space.
  • The first recognition module 30 identifies these depth values and can implement this function with a structured light camera.
  • Based on the coordinates of the vertices of the obstacle's frame outline in the environment picture, the structured light depth camera projects light with specific structural characteristics, through a near-infrared laser, onto the region inside the frame outline of the photographed object; a dedicated infrared camera then collects the pattern to obtain the position and depth information of the object.
  • The first calculation module 40 is connected with the first recognition module 30 and is used to calculate the distance between the agricultural machine and the obstacle from the depth values.
  • The distance d from the obstacle to the camera is obtained by filtering out the null values and outliers among the depth values of the points inside the reduced frame and averaging the rest; taking multiple points around the center point improves accuracy.
  • The second calculation module 50 is connected with the collection module 10 and is used to calculate, from the environment picture, the angle between the center point of the obstacle in three-dimensional space and the center point of the agricultural machine.
  • The second calculation module 50 calculates the ray between the center point of the obstacle's frame outline in the captured environment picture and the center point of the structured light camera, together with the ray projected vertically from the camera's center point onto the environment picture, and takes the angle between the two rays as the angle between the obstacle and the structured light camera.
  • The function of the second calculation module 50 can be implemented through the inverse matrix of the structured light camera installed at the center point of the agricultural machine.
  • The third calculation module 60 is connected with the collection module 10 and the second calculation module 50 and is used to combine the distance and the angle between the agricultural machine and the obstacle to calculate the relative position between them.
  • The relative coordinates between the obstacle and the agricultural machine can then be obtained through trigonometric functions and similar triangles.
  • The system calculates the relative distance between the agricultural machine and the obstacle through the visual recognition model and the structured light camera. It solves the problem that, during automatic driving, the machine cannot accurately identify obstacle positions and therefore fails to avoid obstacles in time, which degrades the automatic driving performance; it improves the accuracy of locating obstacles during automatic driving and achieves accurate perception of the machine's surroundings.
  • The obstacle recognition system provided by the present invention also includes an acquisition module 71, a calibration module 72, a recognition module 73, and a visual recognition model training module 74.
  • The acquisition module 71 is used to acquire multiple frames of the environment picture in the working scene of the agricultural machine and to identify the original pictures of several obstacles in each frame.
  • The calibration module 72 is connected with the acquisition module 71 and is used to adjust the scale of a prior frame of preset shape and use the prior frame to calibrate each obstacle in each frame of the environment picture.
  • A prior frame matching the size of obstacles in the agricultural working scene is preset; when an obstacle is detected in the environment picture, the prior frame is used to calibrate its position.
  • The second recognition module 73 is connected with the calibration module 72 and is used to identify the vertex information of the frame corresponding to each obstacle and the category corresponding to each obstacle.
  • The visual recognition model used here is built on the highly real-time YOLO-v3 model and is optimized and trained for agricultural scenarios.
  • The model uses convolutional and residual neural networks to extract features from image data, uses multi-scale features for object detection, and uses logistic regression for frame prediction.
  • The model is optimized by gradient descent against a loss function in which S is the number of grid cells and B denotes the boxes (the equation itself is given in the description).
  • This model is trained specifically for agricultural machinery operation scenarios, uses images of common farmland obstacles as training data with accurately calibrated category and frame information, and has its hyperparameters optimized during training to further improve recognition accuracy.
  • The visual recognition model training module 74 is connected with the acquisition module 71, the second recognition module 73, and the visual recognition model processing module 20, and is used to train the visual recognition model from the original image corresponding to each obstacle, the vertex information of the frame, and the category.
  • The trained model and its weight matrix are deployed to the processor, the model is optimized for that processor, and the model recognizes the pictures captured by the camera and outputs obstacle information: class (obstacle category), confidence (confidence of the recognized obstacle), and bounding box (the vertex coordinates of the obstacle's frame).
  • When training the visual recognition model, the system trains it on the obstacle categories and frame vertex information from the agricultural machine's working scene, so the model can accurately recognize the frame information of low-speed or static obstacles.
  • The present invention provides an obstacle recognition device 100 for automatic driving of agricultural machinery, including a processor 110 and a memory 120.
  • The memory 120 is used to store a computer program 121.
  • The processor 110 is used to execute the computer program 121 stored in the memory 120 to implement the obstacle recognition method provided by any one of method embodiments 1 to 3.
  • The device 100 mentioned in this embodiment may include, but is not limited to, the processor 110 and the memory 120.
  • Fig. 9 is only an example of the obstacle recognition device 100 and does not limit it; the device may include more or fewer components than shown, or combine certain components, or use different components. For example, the device 100 may also include input/output interfaces, display devices, network access devices, communication buses, communication interfaces, and the like.
  • The processor 110, the memory 120, the input/output interface, and the communication interface communicate with one another through the communication bus.
  • The memory 120 stores the computer program 121, and the processor 110 executes it to implement the obstacle recognition method of the corresponding method embodiment above.
  • An embodiment of the present invention is a storage medium in which at least one instruction is stored; the instruction is loaded and executed by the processor to carry out the operations of the obstacle recognition method provided by any one of Embodiments 1 to 3.
  • The storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and so on.
  • The disclosed method, system, device, and storage medium for identifying obstacles during automatic driving of agricultural machinery may be implemented in other ways.
  • The embodiments described above are only illustrative.
  • The division into modules or units is only a division by logical function; in actual implementation there may be other divisions. For example, multiple units or modules may be combined or integrated into another system, or some features may be omitted or not implemented.
  • The mutual communication connections shown or discussed may be through interfaces, communication connections of devices or units, or integrated circuits, and may be electrical, mechanical, or in other forms.
  • The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • The functional units in the embodiments of the present application may be integrated into one processing unit, may exist separately, or two or more units may be integrated into one unit.
  • The integrated units can be implemented in the form of hardware or in the form of software functional units.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

An obstacle recognition method, system, device (100) and storage medium for automatic driving of agricultural machinery. The method includes the steps of: collecting an environment picture in the traveling direction of the agricultural machine (S200); recognizing, from the environment picture and through a visual recognition model, the first coordinates of several vertices on the frame of an obstacle in the environment picture (S300); combining the several first coordinates to identify the depth values of several points on the obstacle in three-dimensional space (S400); calculating the distance between the agricultural machine and the obstacle from the depth values (S500); calculating, from the environment picture, the angle between the center point of the obstacle in three-dimensional space and the center point of the agricultural machine (S600); and combining the distance and the angle between the agricultural machine and the obstacle to calculate the relative position between the obstacle and the agricultural machine (S700). The method improves the accuracy of locating obstacles during automatic driving of agricultural machinery and achieves accurate perception of the machine's surroundings.

Description

Obstacle Recognition Method, System, Device and Storage Medium for Automatic Driving of Agricultural Machinery

Technical Field

The present invention relates to the technical field of visual recognition, and in particular to an obstacle recognition method and system for automatic driving of agricultural machinery.

Background Art

Automatic driving of agricultural machinery is an interdisciplinary technology involving computer science, pattern recognition, electronics, communication, control, and other fields. By sensing the environmental information around the vehicle and positioning the vehicle itself, the technology plans the most suitable path for travel and operation, then controls the direction and speed of the vehicle, and finally enables agricultural vehicles to travel and operate autonomously. Automatic driving of agricultural machinery is of great significance for reducing agricultural labor intensity, improving work efficiency, and increasing agricultural production capacity. For an agricultural vehicle to travel autonomously, it must perceive its surroundings and accurately identify and locate surrounding obstacles. Computer vision and deep learning are the key technologies used in the field of automatic driving to perceive the environment around the vehicle; they play a vital role and are important factors in realizing automatic driving successfully.

In conventional obstacle perception for autonomous vehicles, machine vision is mostly combined with lidar and millimeter-wave radar. In the field of agricultural machinery, however, although radar systems recognize dynamic objects accurately, their perception of the low-speed or static obstacles common during agricultural operation is poor and cannot reach the environmental perception accuracy required for automatic driving of agricultural machinery. As a result, during automatic driving the machine cannot accurately identify obstacle positions, fails to avoid obstacles in time, and the automatic driving performance suffers.

Therefore, a method is needed for locating obstacles during automatic driving of agricultural machinery, so as to improve the accuracy of obstacle localization and achieve accurate perception of the environment in the machine's direction of travel.
Summary of the Invention

To solve the technical problem that, during automatic driving, an agricultural machine cannot accurately identify obstacle positions and therefore fails to avoid obstacles in time, degrading the automatic driving performance, the present invention provides an obstacle recognition method, system, device and storage medium for automatic driving of agricultural machinery. The specific technical scheme is as follows.

The present invention provides an obstacle recognition method for automatic driving of agricultural machinery, comprising the steps of:

collecting an environment picture in the traveling direction of the agricultural machine;

recognizing, from the environment picture and through a visual recognition model, the first coordinates of several vertices on the frame of an obstacle in the environment picture;

combining the several first coordinates to identify the depth values of several points on the obstacle in three-dimensional space;

calculating the distance between the agricultural machine and the obstacle from the depth values;

calculating, from the environment picture, the angle between the center point of the obstacle in three-dimensional space and the center point of the agricultural machine; and

combining the distance and the angle between the agricultural machine and the obstacle to calculate the relative position between the obstacle and the agricultural machine.

The obstacle recognition method provided by the present invention calculates the relative distance between the agricultural machine and the obstacle through the visual recognition model and a structured light camera. It solves the problem that, during automatic driving, the machine cannot accurately identify obstacle positions and therefore fails to avoid obstacles in time, degrading the automatic driving performance; it improves the accuracy of locating obstacles during automatic driving and achieves accurate perception of the machine's surroundings.
Further, before collecting the environment picture in the traveling direction of the agricultural machine, the method also includes:

acquiring multiple frames of the environment picture in the working scene of the agricultural machine, and recognizing the original pictures of several obstacles in each frame of the environment picture;

adjusting the scale of a prior frame of preset shape, and using the prior frame to calibrate each obstacle in each frame of the environment picture;

identifying the vertex information of the prior frame corresponding to each obstacle and the category corresponding to each obstacle; and

training the visual recognition model according to the original image corresponding to each obstacle, the vertex information of the prior frame, and the category.

When training the visual recognition model, the method provided by the present invention trains it on the obstacle categories and frame vertex information from the agricultural machine's working scene; with this model, the frame information of low-speed or static obstacles in the working scene can be accurately recognized, and the relative position of obstacles and the agricultural machine can be identified.

Further, combining the several first coordinates and obtaining the distance between the obstacle and the structured light camera installed on the agricultural machine specifically includes:

combining the several first coordinates to calculate the center point of the obstacle's frame in the environment picture;

using the center point as a reference, shrinking the obstacle's frame in the environment picture by a preset ratio;

obtaining, from the shrunken frame of the obstacle in the environment picture, the projection angle at which structured light is projected onto the obstacle in three-dimensional space; and

projecting structured light onto the obstacle in three-dimensional space according to the projection angle, and identifying the depth values of several points on the obstacle in three-dimensional space.

The method provided by the present invention discloses how to calculate the distance between the structured light camera and the obstacle by combining the structured light camera with the environment picture; it is suited to accurately identifying the distance between low-speed or static obstacles and the agricultural machine during operation and improves the accuracy of locating obstacles during automatic driving.
Further, calculating, from the environment picture, the angle between the center point of the obstacle in three-dimensional space and the center point of the agricultural machine specifically includes:

obtaining the projection relationship between the three-dimensional space coordinates of the obstacle and the two-dimensional image coordinates of the obstacle in the environment picture;

calculating, through the projection relationship, a first vector projected vertically from the center point of the agricultural machine onto the environment picture;

calculating, through the projection relationship, a second vector projected from the center point of the agricultural machine toward the center point of the obstacle in the environment picture; and

calculating the angle between the first vector and the second vector as the angle between the obstacle and the agricultural machine.

The method provided by the present invention discloses how to calculate the angle between the obstacle and the structured light camera; from this angle the distance between the structured light camera and the obstacle can be further calculated, improving the accuracy of locating obstacles during automatic driving.

Further, calculating the relative position of the obstacle and the structured light camera as the relative position of the agricultural machine and the obstacle specifically includes:

obtaining the second coordinates of the projection of the agricultural machine's center point in the environment picture, and the third coordinates of the obstacle's center point in the environment picture;

calculating the first pixel distance in the horizontal direction and the second pixel distance in the vertical direction between the second coordinates and the third coordinates; and

calculating the relative position between the obstacle and the agricultural machine from the first pixel distance, the second pixel distance, the included angle, and the distance between the agricultural machine and the obstacle.

The method provided by the present invention discloses how to calculate the relative position of the obstacle and the structured light camera; the relative distance between the agricultural machine and the obstacle can thus be calculated, so the machine can later adjust its heading directly according to that relative distance to avoid the obstacle, achieving accurate perception of its surroundings.

Further, calibrating the obstacle categories and frame vertex information in the working scene of the agricultural machine specifically includes:

acquiring in advance multiple frames of the environment picture in the working scene of the agricultural machine, and recognizing the original pictures of several obstacles in each frame;

and training the visual recognition model according to the original image corresponding to each obstacle, the vertex information of the prior frame, and the category specifically includes:

training the visual recognition model in advance according to the original image corresponding to each obstacle, the vertex information of the prior frame, and the category.

The method provided by the present invention also provides a way to train an offline visual recognition model. With an offline visual recognition model and the computed relative position between the agricultural machine and the obstacle, unmanned operation of the machine requires no network connection, realizing offline unmanned driving of agricultural machinery and expanding its range of application.
In addition, the present invention also provides an obstacle recognition system for automatic driving of agricultural machinery, including:

a collection module, used to collect an environment picture in the traveling direction of the agricultural machine;

a visual recognition model processing module, connected with the collection module and storing a visual recognition model, which recognizes from the environment picture, through the visual recognition model, the first coordinates of several vertices on the frame of an obstacle in the environment picture;

a first recognition module, connected with the visual recognition model processing module and used to combine the several first coordinates to identify the depth values of several points on the obstacle in three-dimensional space;

a first calculation module, connected with the first recognition module and used to calculate the distance between the agricultural machine and the obstacle from the depth values;

a second calculation module, connected with the collection module and used to calculate, from the environment picture, the angle between the center point of the obstacle in three-dimensional space and the center point of the agricultural machine; and

a third calculation module, connected with the collection module and the second calculation module and used to combine the distance and the angle between the agricultural machine and the obstacle to calculate the relative position between the obstacle and the agricultural machine.

Further, the obstacle recognition system for automatic driving of agricultural machinery provided by the present invention also includes:

an acquisition module, used to acquire multiple frames of the environment picture in the working scene of the agricultural machine and to recognize the original pictures of several obstacles in each frame;

a calibration module, connected with the acquisition module and used to adjust the scale of a prior frame of preset shape and use the prior frame to calibrate each obstacle in each frame of the environment picture;

a recognition module, connected with the calibration module and used to identify the vertex information of the frame corresponding to each obstacle and the category corresponding to each obstacle; and

a visual recognition model training module, connected with the acquisition module and the recognition module and used to train the visual recognition model according to the original image corresponding to each obstacle, the vertex information of the frame, and the category.

In addition, the present invention also provides an obstacle recognition device for automatic driving of agricultural machinery, including a processor, a memory, and a computer program stored in the memory and runnable on the processor; the processor is used to execute the computer program stored in the memory to carry out the operations of the obstacle recognition method described above.

In addition, the present invention also provides a storage medium in which at least one instruction is stored; the instruction is loaded and executed by a processor to carry out the operations of the obstacle recognition method described above.

The method, system, device and storage medium for identifying obstacles during automatic driving of agricultural machinery provided by the present invention include at least one of the following technical effects:

(1) they solve the problem that, during automatic driving, the machine cannot accurately identify obstacle positions and therefore fails to avoid obstacles in time, degrading the automatic driving performance; they improve the accuracy of locating obstacles during automatic driving and achieve accurate perception of the machine's surroundings;

(2) the visual recognition model is trained on the obstacle categories and frame vertex information from the agricultural machine's working scene, so the frame information of low-speed or static obstacles in the working scene can be accurately recognized and the relative position of obstacles and the agricultural machine identified;

(3) an offline visual recognition model is built; with the offline model and the relative position between the machine and the obstacle, unmanned operation needs no network connection, realizing offline unmanned driving of agricultural machinery and expanding its range of application.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.

Fig. 1 is a flowchart of an obstacle recognition method for automatic driving of agricultural machinery according to the present invention;

Fig. 2 is a flowchart of establishing the visual recognition model in an obstacle recognition method for automatic driving of agricultural machinery according to the present invention;

Fig. 3 is another flowchart of establishing the visual recognition model in an obstacle recognition method for automatic driving of agricultural machinery according to the present invention;

Fig. 4 is a flowchart of calculating the distance between the structured light camera and the obstacle in an obstacle recognition method for automatic driving of agricultural machinery according to the present invention;

Fig. 5 is a flowchart of calculating the angle between an obstacle and the structured light camera in an obstacle recognition method for automatic driving of agricultural machinery according to the present invention;

Fig. 6 is a flowchart of calculating the relative position of an obstacle and the structured light camera in an obstacle recognition method for automatic driving of agricultural machinery according to the present invention;

Fig. 7 is an example diagram of an obstacle recognition system for automatic driving of agricultural machinery according to the present invention;

Fig. 8 is another example diagram of an obstacle recognition system for automatic driving of agricultural machinery according to the present invention;

Fig. 9 is an example diagram of an obstacle recognition device for automatic driving of agricultural machinery according to the present invention.

Reference numerals in the figures: collection module 10, visual recognition model processing module 20, first recognition module 30, first calculation module 40, second calculation module 50, third calculation module 60, acquisition module 71, calibration module 72, recognition module 73, visual recognition model training module 74, processor 110, memory 120, computer program 121.
Detailed Description of the Embodiments

In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits and methods are omitted so that unnecessary detail does not obscure the description of the present application.

It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, wholes, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or collections thereof.

To keep the drawings concise, only the parts relevant to the present invention are shown schematically in each figure; they do not represent the actual structure of a product. In addition, to keep the drawings concise and easy to understand, only one of the components having the same structure or function is schematically drawn or labeled in some figures. Herein, "one" does not only mean "only this one"; it can also mean "more than one".

It should be further understood that the term "and/or" used in the specification and the appended claims of the present application refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.

In addition, in the description of the present application, the terms "first", "second" and the like are used only to distinguish the descriptions and cannot be understood as indicating or implying relative importance.

To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the specific embodiments of the present invention are described below with reference to the drawings. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings and other embodiments can be obtained from them without creative effort.
Embodiment 1

In one embodiment of the present invention, as shown in Fig. 1, the present invention provides an obstacle recognition method for automatic driving of agricultural machinery, comprising the following steps.

S200: collect an environment picture in the traveling direction of the agricultural machine.

Specifically, the camera captures the picture of the machine's forward direction in real time; the captured picture can be a video or a still image.

S300: recognize, from the environment picture and through the visual recognition model, the first coordinates of several vertices on the frame of the obstacle in the environment picture.

Specifically, one frame of the forward-direction picture is selected every preset time interval. Since agricultural machinery travels relatively slowly, and most obstacles in its working scenes are low-speed or static, the preset interval can be set to 0.5 s, 1 s, 2 s, and so on.

That frame of the forward-direction picture is transmitted to the processor, which carries the visual recognition model; the model recognizes the frame outline of the obstacle in the picture and marks several vertices of that outline.

For example, in the environment picture captured by the camera, a plane coordinate system is established with the center point of the environment picture as the origin, and the pixel coordinates of the several vertices of the obstacle's frame outline are computed in this coordinate system as the first coordinates.
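The following minimal Python sketch illustrates that conversion only; it shifts detector box vertices from image pixel coordinates (origin at the top-left corner) into the center-origin plane coordinate system described above. The function and variable names are not from the patent, and the upward-positive vertical axis is an assumed convention.

def to_first_coordinates(vertices, frame_w, frame_h):
    """vertices: list of (u, v) pixel coordinates of the box corners."""
    cx, cy = frame_w / 2.0, frame_h / 2.0
    # Shift the origin to the image center; flip v so that "up" is positive
    # (one common convention; the patent only specifies the center origin).
    return [(u - cx, cy - v) for (u, v) in vertices]

# Example: a 1280x720 frame and an axis-aligned box given by its four corners.
box = [(500, 300), (700, 300), (700, 450), (500, 450)]
first_coords = to_first_coordinates(box, 1280, 720)
print(first_coords)  # [(-140.0, 60.0), (60.0, 60.0), (60.0, -90.0), (-140.0, -90.0)]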
S400: combine the several first coordinates to identify the depth values of several points on the obstacle in three-dimensional space.

For example, a structured light camera can be used to identify the depth values of several points on the obstacle in three-dimensional space.

Specifically, based on the coordinates of the vertices of the obstacle's frame outline in the captured environment picture, the structured light depth camera projects light with specific structural characteristics, through a near-infrared laser, onto the region inside the frame outline of the photographed object; a dedicated infrared camera then collects the pattern to obtain the position and depth information of the object.

S500: calculate the distance between the agricultural machine and the obstacle from the depth values.

Specifically, the distance d from the obstacle to the camera is obtained by filtering out the null values and outliers among the depth values of the points inside the reduced frame and averaging the rest; taking multiple points around the center point improves accuracy.

S600: calculate, from the environment picture, the angle between the center point of the obstacle in three-dimensional space and the center point of the agricultural machine.

For example, using the inverse matrix of the structured light camera installed at the center point of the agricultural machine, the ray between the center point of the obstacle's frame outline in the captured environment picture and the center point of the structured light camera is calculated, together with the ray projected vertically from the camera's center point onto the captured environment picture; the angle between the two rays is taken as the angle between the obstacle and the structured light camera.

S700: combine the distance and the angle between the agricultural machine and the obstacle to calculate the relative position between the obstacle and the agricultural machine.

Specifically, once the angle between the obstacle and the agricultural machine is obtained, it is combined with the distance between them, and the relative coordinates between the obstacle and the agricultural machine are obtained through trigonometric functions and similar triangles.

The obstacle recognition method provided by this embodiment calculates the relative distance between the agricultural machine and the obstacle through the visual recognition model and the structured light camera. It solves the problem that, during automatic driving, the machine cannot accurately identify obstacle positions and therefore fails to avoid obstacles in time, degrading the automatic driving performance; it improves the accuracy of locating obstacles during automatic driving and achieves accurate perception of the machine's surroundings.
Embodiment 2

Based on Embodiment 1, as shown in Figs. 2 and 3, the obstacle recognition method for automatic driving of agricultural machinery provided by the present invention further includes, before step S200 of collecting the environment picture in the traveling direction of the agricultural machine:

S110: acquire multiple frames of the environment picture in the working scene of the agricultural machine, and recognize the original pictures of several obstacles in each frame of the environment picture.

S120: adjust the scale of a prior frame of preset shape, and use the prior frame to calibrate each obstacle in each frame of the environment picture.

A prior frame matching the size of obstacles in the agricultural working scene is preset; when an obstacle is detected in the environment picture, the prior frame is used to calibrate its position.
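The patent only states that prior frames are preset to match obstacle sizes in the working scene and that their scale is adjusted. One common way to choose such scales in YOLO-style pipelines, shown below purely as an illustrative Python sketch, is k-means clustering over the widths and heights of the labelled training boxes; the function name is hypothetical and plain Euclidean distance is used instead of the IoU distance often preferred in practice.

import random

def kmeans_anchor_scales(box_sizes, k=3, iters=50, seed=0):
    """box_sizes: list of (w, h) in pixels; returns k (w, h) prior-frame scales."""
    random.seed(seed)
    centres = random.sample(box_sizes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in box_sizes:
            # Assign each labelled box to the nearest centre.
            i = min(range(k), key=lambda j: (w - centres[j][0]) ** 2 + (h - centres[j][1]) ** 2)
            clusters[i].append((w, h))
        # Recompute centres; keep the old centre if a cluster is empty.
        centres = [
            (sum(w for w, _ in c) / len(c), sum(h for _, h in c) / len(c)) if c else centres[i]
            for i, c in enumerate(clusters)
        ]
    return centres

sizes = [(40, 90), (45, 100), (120, 60), (130, 70), (300, 250), (280, 260)]
print(kmeans_anchor_scales(sizes, k=3))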
S130: identify the vertex information of the prior frame corresponding to each obstacle, and the category corresponding to each obstacle.

Specifically, the visual recognition model used in this method is built on the highly real-time YOLO-v3 model and is optimized and trained for agricultural scenarios. The model uses convolutional and residual neural networks to extract features from the image data, uses multi-scale features for object detection, and uses logistic regression for frame prediction.

The model is optimized by gradient descent against the following loss function:

[equation image PCTCN2022114025-appb-000001: the loss function, not reproduced in the text]

where S is the number of grid cells and B denotes the boxes.

[equation image PCTCN2022114025-appb-000002: not reproduced in the text]
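The loss-function images above are not recoverable from the text. For reference, a standard YOLO-style loss over S x S grid cells with B boxes per cell has the form below (written in LaTeX); this is the published YOLO formulation and is given only as an assumed approximation of the loss the patent refers to, not as its exact equation.

\mathcal{L} = \lambda_{\mathrm{coord}} \sum_{i=1}^{S^2} \sum_{j=1}^{B} \mathbb{1}_{ij}^{\mathrm{obj}} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 + (\sqrt{w_i} - \sqrt{\hat{w}_i})^2 + (\sqrt{h_i} - \sqrt{\hat{h}_i})^2 \right]
  + \sum_{i=1}^{S^2} \sum_{j=1}^{B} \mathbb{1}_{ij}^{\mathrm{obj}} (C_i - \hat{C}_i)^2
  + \lambda_{\mathrm{noobj}} \sum_{i=1}^{S^2} \sum_{j=1}^{B} \mathbb{1}_{ij}^{\mathrm{noobj}} (C_i - \hat{C}_i)^2
  + \sum_{i=1}^{S^2} \mathbb{1}_{i}^{\mathrm{obj}} \sum_{c \in \mathrm{classes}} (p_i(c) - \hat{p}_i(c))^2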
This model is trained specifically for agricultural machinery operation scenarios. It uses images of common farmland obstacles as training data, and the obstacle categories and frame information are accurately calibrated in these training images. During training, the hyperparameters are optimized to further improve recognition accuracy.

S140: train the visual recognition model according to the original image corresponding to each obstacle, the vertex information of the prior frame, and the category.

Specifically, this model is trained specifically for agricultural machinery operation scenarios. It uses images of common farmland obstacles as training data, the obstacle categories and frame information are accurately calibrated in these training images, and the hyperparameters are optimized during training to further improve recognition accuracy.

The trained visual recognition model and the weight matrix obtained from training are deployed to the processor, and the model is optimized for that processor. The model recognizes the pictures captured by the camera and outputs obstacle information: class (obstacle category), confidence (confidence of the recognized obstacle), and bounding box (the vertex coordinates of the obstacle's frame).
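A minimal Python sketch of the obstacle record described above (class, confidence, bounding-box vertex coordinates), and of how a downstream module might filter it before ranging, is given below; the type name, fields, and the 0.5 confidence threshold are illustrative assumptions, not part of the patent.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ObstacleDetection:
    cls: str                      # obstacle category, e.g. "person", "tree"
    confidence: float             # detection confidence in [0, 1]
    box: List[Tuple[int, int]]    # pixel coordinates of the box vertices

def keep_confident(dets, thr=0.5):
    """Drop low-confidence detections before ranging (threshold is an assumption)."""
    return [d for d in dets if d.confidence >= thr]

dets = [ObstacleDetection("person", 0.91, [(500, 300), (700, 300), (700, 450), (500, 450)]),
        ObstacleDetection("tree", 0.32, [(80, 120), (160, 120), (160, 400), (80, 400)])]
print([d.cls for d in keep_confident(dets)])   # ['person']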
Optionally, as shown in Fig. 3, before step S200 of collecting the environment picture in the traveling direction of the agricultural machine, the method also includes:

S150: acquire in advance multiple frames of the environment picture in the working scene of the agricultural machine, and recognize the original pictures of several obstacles in each frame.

S160: train the visual recognition model in advance according to the original image corresponding to each obstacle, the vertex information of the prior frame, and the category.

Specifically, with an offline visual recognition model and the computed relative position between the agricultural machine and the obstacle, unmanned operation of the machine requires no network connection, realizing offline unmanned driving of agricultural machinery and expanding its range of application.

Optionally, as shown in Fig. 4, step S400 of combining the several first coordinates to identify the depth values of several points on the obstacle in three-dimensional space specifically includes:

S410: combine the several first coordinates to calculate the center point of the obstacle's frame in the environment picture.

Specifically, the formulas are as follows:

x_centre = (x_min + x_max) / 2;

y_centre = (y_min + y_max) / 2.

S420: using the center point as a reference, shrink the obstacle's frame in the environment picture by a preset ratio.

Specifically, a reduced frame is obtained around the center point according to the preset ratio; for example, the preset ratio may be 1:10.

S430: obtain, from the shrunken frame of the obstacle in the environment picture, the projection angle at which structured light is projected onto the obstacle in three-dimensional space.

S440: project structured light onto the obstacle in three-dimensional space according to the projection angle, and identify the depth values of several points on the obstacle in three-dimensional space.

Specifically, the depth values of the points inside the reduced frame can be read by the depth camera. The distance d from the obstacle to the camera is obtained by filtering out the null values and outliers among these depth values and averaging the rest; taking multiple points around the center point improves accuracy.
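A minimal Python sketch of steps S410 to S440 as described above is given below: compute the box center, shrink the box by the preset ratio (1:10 in the example), read the depth values inside the shrunken box, drop null values and outliers, and average the rest to obtain the obstacle-to-camera distance d. The depth map is assumed to be a two-dimensional array in meters with zero or NaN marking invalid pixels, and the 10th/90th-percentile outlier filter is one simple choice, not the patent's.

import numpy as np

def obstacle_distance(depth_map, box, shrink=0.1):
    xs = [p[0] for p in box]; ys = [p[1] for p in box]
    x_min, x_max, y_min, y_max = min(xs), max(xs), min(ys), max(ys)
    x_c, y_c = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0          # S410: box center
    half_w = (x_max - x_min) * shrink / 2.0                          # S420: shrink by preset ratio
    half_h = (y_max - y_min) * shrink / 2.0
    patch = depth_map[int(y_c - half_h):int(y_c + half_h) + 1,
                      int(x_c - half_w):int(x_c + half_w) + 1]       # S430/S440: read depths
    vals = patch[np.isfinite(patch) & (patch > 0)]                   # drop null/invalid readings
    if vals.size == 0:
        return None
    lo, hi = np.percentile(vals, [10, 90])                           # drop outliers (one simple choice)
    vals = vals[(vals >= lo) & (vals <= hi)]
    return float(vals.mean()) if vals.size else None                 # distance d in meters

depth = np.full((720, 1280), 3.2)                                    # toy depth map, 3.2 m everywhere
print(obstacle_distance(depth, [(500, 300), (700, 300), (700, 450), (500, 450)]))  # ~3.2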
When training the visual recognition model, the method provided by this embodiment trains it on the obstacle categories and frame vertex information from the agricultural machine's working scene, so the frame information of low-speed or static obstacles in the working scene can be accurately recognized. It also discloses how to calculate the distance between the structured light camera and the obstacle by combining the structured light camera with the environment picture, which is suited to accurately identifying the distance between low-speed or static obstacles and the agricultural machine during operation and improves the accuracy of locating obstacles during automatic driving. Further, with an offline visual recognition model and the relative position between the machine and the obstacle, unmanned operation needs no network connection, realizing offline unmanned driving of agricultural machinery and expanding its range of application.
Embodiment 3

Based on Embodiment 1 or Embodiment 2, as shown in Figs. 5 and 6, in the obstacle recognition method provided by the present invention, step S500 of calculating the distance between the agricultural machine and the obstacle from the depth values specifically includes:

S510: obtain the projection relationship between the three-dimensional space coordinates of the obstacle and the two-dimensional image coordinates of the obstacle in the environment picture.

Through camera calibration, the camera parameters are obtained: the frame width w and the frame height h, together with the camera matrix

K = [[f, 0, w/2], [0, f, h/2], [0, 0, 1]] (the standard pinhole intrinsic matrix; the original equation image is not reproduced in the text),

where f is the camera's pixel focal length.

S520: calculate, through the projection relationship, a first vector projected vertically from the center point of the agricultural machine onto the environment picture.

For example, when a structured light camera is used, the inverse of the camera matrix serves as the projection relationship: the ray r1 that back-projects a 2D point into 3D space is obtained from the inverse of the camera matrix, with the formula

r1 = K^(-1) [x, y, 1]^T,

where x and y are the coordinates of the pixel in the two-dimensional image.

S530: calculate, through the projection relationship, a second vector projected from the center point of the agricultural machine toward the center point of the obstacle in the environment picture.

For example, the second vector r2, projected from the center point of the structured light camera installed at the center point of the agricultural machine toward the center point of the obstacle in the environment picture, is calculated.

S540: calculate the angle between the first vector and the second vector as the angle between the obstacle and the agricultural machine.

Specifically, the cosine of the angle α between the camera and the object is the dot product of r1 and r2 divided by the product of their norms; α is then obtained through the arc cosine, with the formulas

cos(α) = (r1 · r2) / (||r1|| · ||r2||),

α = arccos((r1 · r2) / (||r1|| · ||r2||)).
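A minimal Python sketch of steps S510 to S540, using the relations above, is given below: build the intrinsic matrix K from the calibrated pixel focal length and frame size, back-project the image center and the obstacle center through the inverse of K, and take the arc cosine of the normalized dot product. The form of K follows the standard pinhole model and all numeric values are illustrative.

import numpy as np

def camera_angle(f, w, h, obstacle_px):
    K = np.array([[f, 0.0, w / 2.0],
                  [0.0, f, h / 2.0],
                  [0.0, 0.0, 1.0]])
    K_inv = np.linalg.inv(K)
    r1 = K_inv @ np.array([w / 2.0, h / 2.0, 1.0])                 # ray through the image center (optical axis)
    r2 = K_inv @ np.array([obstacle_px[0], obstacle_px[1], 1.0])   # ray through the obstacle center
    cos_a = np.dot(r1, r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return float(np.arccos(np.clip(cos_a, -1.0, 1.0)))            # angle alpha in radians

print(np.degrees(camera_angle(f=900.0, w=1280, h=720, obstacle_px=(600, 375))))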
Optionally, as shown in Fig. 6, step S600 of calculating, from the environment picture, the angle between the center point of the obstacle and the center point of the agricultural machine specifically includes:

S610: obtain the second coordinates of the projection of the agricultural machine's center point in the environment picture, and the third coordinates of the obstacle's center point in the environment picture.

S620: calculate the first pixel distance in the horizontal direction and the second pixel distance in the vertical direction between the second coordinates and the third coordinates.

Specifically, the first pixel distance is [equation image PCTCN2022114025-appb-000005], the second pixel distance is [equation image PCTCN2022114025-appb-000006], and the focal length is [equation image PCTCN2022114025-appb-000007]; from the formulas below, the two pixel distances correspond to x_r and y_r and the focal length to the pixel focal length f.

S630: calculate the relative position between the obstacle and the agricultural machine from the first pixel distance, the second pixel distance, the included angle, and the distance between the agricultural machine and the obstacle.

Specifically, the formulas are as follows:

[equation image PCTCN2022114025-appb-000008: z_w, presumably z_w = d · cos(α)];

y_w = z_w · y_r / f;

x_w = z_w · x_r / f.
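A minimal Python sketch of steps S610 to S630 is given below, using the relations quoted above; z_w = d*cos(alpha) is a reconstruction of the missing equation image, while the x_w and y_w relations are taken from the text. All numeric values are illustrative.

import math

def relative_position(d, alpha, x_r, y_r, f):
    z_w = d * math.cos(alpha)       # depth along the optical axis (reconstructed relation)
    x_w = z_w * x_r / f             # lateral offset via similar triangles
    y_w = z_w * y_r / f             # vertical offset via similar triangles
    return x_w, y_w, z_w

d = 3.2                              # meters, from the depth-averaging step
alpha = math.radians(2.7)            # from the angle step
x_r, y_r = 600 - 640, 375 - 360      # pixel offsets of the obstacle center
print(relative_position(d, alpha, x_r, y_r, f=900.0))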
The obstacle recognition method provided by this embodiment discloses how to calculate the angle between the obstacle and the structured light camera, from which the distance between the structured light camera and the obstacle can be further calculated, and discloses how to calculate the relative position of the obstacle and the structured light camera; the relative distance between the agricultural machine and the obstacle can thus be calculated, so the machine can directly adjust its heading according to that relative distance to avoid the obstacle, achieving accurate perception of its surroundings.
Embodiment 4

In one embodiment of the present invention, as shown in Fig. 7, the present invention provides an obstacle recognition system for automatic driving of agricultural machinery, including a collection module 10, a visual recognition model processing module 20, a first recognition module 30, a first calculation module 40, a second calculation module 50, and a third calculation module 60.

The collection module 10 is used to collect the environment picture in the traveling direction of the agricultural machine.

Specifically, the collection module 10 captures the picture of the machine's forward direction in real time; the captured picture can be a video or a still image.

The visual recognition model processing module 20 is connected with the collection module 10, stores a visual recognition model, and is used to recognize from the environment picture, through the visual recognition model, the first coordinates of several vertices on the frame of an obstacle in the environment picture.

Specifically, the visual recognition model processing module 20 captures a picture of the machine's forward direction every preset time interval. Since agricultural machinery travels relatively slowly, and most obstacles in its working scenes are low-speed or static, the preset interval can be set to 0.5 s, 1 s, 2 s, and so on.

Specifically, the picture of the machine's forward direction is transmitted to the visual recognition model processing module 20, which carries the visual recognition model; the model recognizes the frame outline of the obstacle in the picture and marks several vertices of that outline.

For example, in the environment picture captured by the collection module 10, a plane coordinate system is established with the center point of the environment picture as the origin, and the pixel coordinates of the several vertices of the obstacle's frame outline are computed in this coordinate system as the first coordinates.

The first recognition module 30 is connected with the visual recognition model processing module 20 and is used to combine the several first coordinates to identify the depth values of several points on the obstacle in three-dimensional space.

For example, the first recognition module 30 identifies the depth values of several points on the obstacle in three-dimensional space; it can implement this function with a structured light camera.

Specifically, based on the coordinates of the vertices of the obstacle's frame outline in the captured environment picture, the structured light depth camera projects light with specific structural characteristics, through a near-infrared laser, onto the region inside the frame outline of the photographed object; a dedicated infrared camera then collects the pattern to obtain the position and depth information of the object.

The first calculation module 40 is connected with the first recognition module 30 and is used to calculate the distance between the agricultural machine and the obstacle from the depth values.

Specifically, the distance d from the obstacle to the camera is obtained by filtering out the null values and outliers among the depth values of the points inside the reduced frame and averaging the rest; taking multiple points around the center point improves accuracy.

The second calculation module 50 is connected with the collection module 10 and is used to calculate, from the environment picture, the angle between the center point of the obstacle in three-dimensional space and the center point of the agricultural machine.

For example, the second calculation module 50 calculates the ray between the center point of the obstacle's frame outline in the captured environment picture and the center point of the structured light camera, together with the ray projected vertically from the camera's center point onto the captured environment picture, and takes the angle between the two rays as the angle between the obstacle and the structured light camera.

The function of the second calculation module 50 can be implemented through the inverse matrix of the structured light camera installed at the center point of the agricultural machine.

The third calculation module 60 is connected with the collection module 10 and the second calculation module 50 and is used to combine the distance and the angle between the agricultural machine and the obstacle to calculate the relative position between them.

Specifically, once the angle between the obstacle and the agricultural machine is obtained, it is combined with the distance between them, and the relative coordinates between the obstacle and the agricultural machine are obtained through trigonometric functions and similar triangles.

The obstacle recognition system provided by this embodiment calculates the relative distance between the agricultural machine and the obstacle through the visual recognition model and the structured light camera. It solves the problem that, during automatic driving, the machine cannot accurately identify obstacle positions and therefore fails to avoid obstacles in time, degrading the automatic driving performance; it improves the accuracy of locating obstacles during automatic driving and achieves accurate perception of the machine's surroundings.
Embodiment 5

Based on Embodiment 4, as shown in Fig. 8, the obstacle recognition system provided by the present invention also includes an acquisition module 71, a calibration module 72, a recognition module 73, and a visual recognition model training module 74.

The acquisition module 71 is used to acquire multiple frames of the environment picture in the working scene of the agricultural machine and to recognize the original pictures of several obstacles in each frame.

The calibration module 72 is connected with the acquisition module 71 and is used to adjust the scale of a prior frame of preset shape and use the prior frame to calibrate each obstacle in each frame of the environment picture.

Specifically, a prior frame matching the size of obstacles in the agricultural working scene is preset; when an obstacle is detected in the environment picture, the prior frame is used to calibrate its position.

The second recognition module 73 is connected with the calibration module 72 and is used to identify the vertex information of the frame corresponding to each obstacle and the category corresponding to each obstacle.

Specifically, the visual recognition model used here is built on the highly real-time YOLO-v3 model and is optimized and trained for agricultural scenarios. The model uses convolutional and residual neural networks to extract features from the image data, uses multi-scale features for object detection, and uses logistic regression for frame prediction.

The model is optimized by gradient descent against the following loss function:

[equation image PCTCN2022114025-appb-000009: the loss function, not reproduced in the text]

where S is the number of grid cells and B denotes the boxes.

[equation image PCTCN2022114025-appb-000010: not reproduced in the text]

This model is trained specifically for agricultural machinery operation scenarios. It uses images of common farmland obstacles as training data, and the obstacle categories and frame information are accurately calibrated in these training images. During training, the hyperparameters are optimized to further improve recognition accuracy.

The visual recognition model training module 74 is connected with the acquisition module 71, the second recognition module 73, and the visual recognition model processing module 20, and is used to train the visual recognition model according to the original image corresponding to each obstacle, the vertex information of the frame, and the category.

Specifically, this model is trained specifically for agricultural machinery operation scenarios. It uses images of common farmland obstacles as training data, the obstacle categories and frame information are accurately calibrated in these training images, and the hyperparameters are optimized during training to further improve recognition accuracy.

The trained visual recognition model and the weight matrix obtained from training are deployed to the processor, and the model is optimized for that processor. The model recognizes the pictures captured by the camera and outputs obstacle information: class (obstacle category), confidence (confidence of the recognized obstacle), and bounding box (the vertex coordinates of the obstacle's frame).

When training the visual recognition model, the system provided by this embodiment trains it on the obstacle categories and frame vertex information from the agricultural machine's working scene, so the frame information of low-speed or static obstacles in the working scene can be accurately recognized.
Embodiment 6

In one embodiment of the present invention, as shown in Fig. 9, the present invention provides an obstacle recognition device 100 for automatic driving of agricultural machinery, including a processor 110 and a memory 120. The memory 120 is used to store a computer program 121, and the processor 110 is used to execute the computer program 121 stored in the memory 120, so as to implement the obstacle recognition method provided by any one of method embodiments 1 to 3.

The device 100 mentioned in this embodiment may include, but is not limited to, the processor 110 and the memory 120. Those skilled in the art will understand that Fig. 9 is only an example of the obstacle recognition device 100 and does not limit it; the device may include more or fewer components than shown, or combine certain components, or use different components. For example, the device 100 may also include input/output interfaces, display devices, network access devices, communication buses, communication interfaces, and the like; the processor 110, the memory 120, the input/output interface and the communication interface communicate with one another through the communication bus. The memory 120 stores the computer program 121, and the processor 110 executes the computer program 121 stored in the memory 120 to implement the obstacle recognition method of the corresponding method embodiment above.

Embodiment 7

In one embodiment of the present invention, a storage medium stores at least one instruction, and the instruction is loaded and executed by a processor to carry out the operations of the obstacle recognition method provided by any one of Embodiments 1 to 3. For example, the storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and so on.

They may be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device, or they can each be made into individual integrated circuit modules, or several of their modules or steps can be made into a single integrated circuit module. Thus, the present invention is not limited to any particular combination of hardware and software.

In the above embodiments, the description of each embodiment has its own emphasis; for parts not described or recorded in detail in one embodiment, reference may be made to the relevant descriptions of the other embodiments.

Those of ordinary skill in the art will appreciate that the units and steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of the present application.

In the embodiments provided by the present application, it should be understood that the disclosed method, system, device and storage medium for identifying obstacles during automatic driving of agricultural machinery may be implemented in other ways. For example, the embodiments described above are only illustrative; the division into modules or units is only a division by logical function, and in actual implementation there may be other divisions. For example, multiple units or modules may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual communication connections shown or discussed may be through interfaces, communication connections of devices or units, or integrated circuits, and may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, may exist separately, or two or more units may be integrated into one unit. The integrated units can be implemented in the form of hardware or in the form of software functional units.

It should be noted that the above are only preferred embodiments of the present invention. Those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present invention, and these improvements and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

  1. An obstacle recognition method for automatic driving of agricultural machinery, characterized by comprising the steps of:
    collecting an environment picture in the traveling direction of the agricultural machine;
    recognizing, from the environment picture and through a visual recognition model, first coordinates of several vertices on a frame of an obstacle in the environment picture;
    combining the several first coordinates to identify depth values of several points on the obstacle in three-dimensional space;
    calculating a distance between the agricultural machine and the obstacle from the depth values;
    calculating, from the environment picture, an angle between a center point of the obstacle in three-dimensional space and a center point of the agricultural machine; and
    combining the distance and the angle between the agricultural machine and the obstacle to calculate a relative position between the obstacle and the agricultural machine.
  2. The obstacle recognition method for automatic driving of agricultural machinery according to claim 1, characterized in that, before collecting the environment picture in the traveling direction of the agricultural machine, the method further comprises:
    acquiring multiple frames of the environment picture in a working scene of the agricultural machine, and recognizing original pictures of several obstacles in each frame of the environment picture;
    adjusting a scale of a prior frame of preset shape, and using the prior frame to calibrate each obstacle in each frame of the environment picture;
    identifying vertex information of the prior frame corresponding to each obstacle, and a category corresponding to each obstacle; and
    training the visual recognition model according to the original image corresponding to each obstacle, the vertex information of the prior frame, and the category.
  3. The obstacle recognition method for automatic driving of agricultural machinery according to claim 1, characterized in that combining the several first coordinates to identify the depth values of several points on the obstacle in three-dimensional space specifically further comprises:
    combining the several first coordinates to calculate a center point of the frame of the obstacle in the environment picture;
    using the center point as a reference, shrinking the frame of the obstacle in the environment picture by a preset ratio;
    obtaining, from the shrunken frame of the obstacle in the environment picture, a projection angle at which structured light is projected onto the obstacle in three-dimensional space; and
    projecting structured light onto the obstacle in three-dimensional space according to the projection angle, and identifying the depth values of several points on the obstacle in three-dimensional space.
  4. The obstacle recognition method for automatic driving of agricultural machinery according to claim 1, characterized in that calculating, from the environment picture, the angle between the center point of the obstacle in three-dimensional space and the center point of the agricultural machine specifically comprises:
    obtaining a projection relationship between three-dimensional space coordinates of the obstacle and two-dimensional image coordinates of the obstacle in the environment picture;
    calculating, through the projection relationship, a first vector projected vertically from the center point of the agricultural machine onto the environment picture;
    calculating, through the projection relationship, a second vector projected from the center point of the agricultural machine toward the center point of the obstacle in the environment picture; and
    calculating an angle between the first vector and the second vector as the angle between the obstacle and the agricultural machine.
  5. The obstacle recognition method for automatic driving of agricultural machinery according to claim 1, characterized in that calculating the relative position between the obstacle and the agricultural machine specifically comprises:
    obtaining second coordinates of a projection of the center point of the agricultural machine in the environment picture, and third coordinates of the center point of the obstacle in the environment picture;
    calculating a first pixel distance in the horizontal direction and a second pixel distance in the vertical direction between the second coordinates and the third coordinates; and
    calculating the relative position between the obstacle and the agricultural machine from the first pixel distance, the second pixel distance, the angle, and the distance between the agricultural machine and the obstacle.
  6. The obstacle recognition method for automatic driving of agricultural machinery according to claim 2, characterized in that acquiring multiple frames of the environment picture in the working scene of the agricultural machine and recognizing the original pictures of several obstacles in each frame specifically comprises:
    acquiring in advance multiple frames of the environment picture in the working scene of the agricultural machine, and recognizing the original pictures of several obstacles in each frame;
    and training the visual recognition model according to the original image corresponding to each obstacle, the vertex information of the prior frame, and the category specifically comprises:
    training the visual recognition model in advance according to the original image corresponding to each obstacle, the vertex information of the prior frame, and the category.
  7. An obstacle recognition system for automatic driving of agricultural machinery, characterized by comprising:
    a collection module, used to collect an environment picture in the traveling direction of the agricultural machine;
    a visual recognition model processing module, connected with the collection module and storing a visual recognition model, which recognizes, from the environment picture and through the visual recognition model, first coordinates of several vertices on a frame of an obstacle in the environment picture;
    a first recognition module, connected with the visual recognition model processing module and used to combine the several first coordinates to identify depth values of several points on the obstacle in three-dimensional space;
    a first calculation module, connected with the first recognition module and used to calculate a distance between the agricultural machine and the obstacle from the depth values;
    a second calculation module, connected with the collection module and used to calculate, from the environment picture, an angle between a center point of the obstacle in three-dimensional space and a center point of the agricultural machine; and
    a third calculation module, connected with the collection module and the second calculation module and used to combine the distance and the angle between the agricultural machine and the obstacle to calculate a relative position between the obstacle and the agricultural machine.
  8. The obstacle recognition system for automatic driving of agricultural machinery according to claim 7, characterized by further comprising:
    an acquisition module, used to acquire multiple frames of the environment picture in a working scene of the agricultural machine and to recognize original pictures of several obstacles in each frame of the environment picture;
    a calibration module, connected with the acquisition module and used to adjust a scale of a prior frame of preset shape and use the prior frame to calibrate each obstacle in each frame of the environment picture;
    a second recognition module, connected with the calibration module and used to identify vertex information of the frame corresponding to each obstacle and a category corresponding to each obstacle; and
    a visual recognition model training module, connected with the acquisition module, the second recognition module and the visual recognition model processing module, and used to train the visual recognition model according to the original image corresponding to each obstacle, the vertex information of the frame, and the category.
  9. An obstacle recognition device for automatic driving of agricultural machinery, characterized by comprising a processor, a memory, and a computer program stored in the memory and runnable on the processor, the processor being used to execute the computer program stored in the memory to carry out the operations of the obstacle recognition method for automatic driving of agricultural machinery according to any one of claims 1 to 6.
  10. A storage medium, characterized in that at least one instruction is stored in the storage medium, and the instruction is loaded and executed by a processor to carry out the operations of the obstacle recognition method for automatic driving of agricultural machinery according to any one of claims 1 to 6.
PCT/CN2022/114025 2021-10-09 2022-08-22 Obstacle recognition method, system, device and storage medium for automatic driving of agricultural machinery WO2023056789A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111177280.XA CN113848931B (zh) 2021-10-09 2021-10-09 Obstacle recognition method, system, device and storage medium for automatic driving of agricultural machinery
CN202111177280.X 2021-10-09

Publications (1)

Publication Number Publication Date
WO2023056789A1 true WO2023056789A1 (zh) 2023-04-13

Family

ID=78977895

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/114025 WO2023056789A1 (zh) 2021-10-09 2022-08-22 Obstacle recognition method, system, device and storage medium for automatic driving of agricultural machinery

Country Status (2)

Country Link
CN (1) CN113848931B (zh)
WO (1) WO2023056789A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116243716A (zh) * 2023-05-08 2023-06-09 中铁第四勘察设计院集团有限公司 Intelligent container lifting control method and system incorporating machine vision

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113848931B (zh) * 2021-10-09 2022-09-27 上海联适导航技术股份有限公司 农机自动驾驶障碍物识别方法、系统、设备和存储介质
CN115390572A (zh) * 2022-10-28 2022-11-25 潍柴雷沃智慧农业科技股份有限公司 一种无人收获机的避障控制方法和系统

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013254474A (ja) * 2012-02-10 2013-12-19 Konan Gakuen 障害物検出装置
CN109084724A (zh) * 2018-07-06 2018-12-25 西安理工大学 一种基于双目视觉的深度学习障碍物测距方法
CN109116374A (zh) * 2017-06-23 2019-01-01 百度在线网络技术(北京)有限公司 确定障碍物距离的方法、装置、设备及存储介质
CN109657638A (zh) * 2018-12-28 2019-04-19 百度在线网络技术(北京)有限公司 障碍物定位方法、装置和终端
CN111160302A (zh) * 2019-12-31 2020-05-15 深圳一清创新科技有限公司 基于自动驾驶环境的障碍物信息识别方法和装置
CN111443704A (zh) * 2019-12-19 2020-07-24 苏州智加科技有限公司 用于自动驾驶系统的障碍物定位方法及装置
CN112083730A (zh) * 2020-09-28 2020-12-15 双擎科技(杭州)有限公司 一种融合多组传感器数据在复杂环境中避障的方法
CN112101092A (zh) * 2020-07-31 2020-12-18 北京智行者科技有限公司 自动驾驶环境感知方法及系统
CN113031597A (zh) * 2021-03-02 2021-06-25 南京理工大学 一种基于深度学习和立体视觉的自主避障方法
CN113848931A (zh) * 2021-10-09 2021-12-28 上海联适导航技术股份有限公司 农机自动驾驶障碍物识别方法、系统、设备和存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106855411A (zh) * 2017-01-10 2017-06-16 深圳市极思维智能科技有限公司 一种机器人及其以深度摄像头和避障系统构建地图的方法



Also Published As

Publication number Publication date
CN113848931B (zh) 2022-09-27
CN113848931A (zh) 2021-12-28

Similar Documents

Publication Publication Date Title
WO2021004312A1 (zh) 一种基于双目立体视觉系统的车辆智能测轨迹方法
WO2023056789A1 (zh) 农机自动驾驶障碍物识别方法、系统、设备和存储介质
US10776939B2 (en) Obstacle avoidance system based on embedded stereo vision for unmanned aerial vehicles
WO2020135446A1 (zh) 一种目标定位方法和装置、无人机
CN111062873B (zh) 一种基于多对双目相机的视差图像拼接与可视化方法
WO2020215194A1 (zh) 用于移动目标物体检测的方法、系统以及可移动平台
CN110782524A (zh) 基于全景图的室内三维重建方法
CN111028155B (zh) 一种基于多对双目相机的视差图像拼接方法
CN112567201A (zh) 距离测量方法以及设备
US11783443B2 (en) Extraction of standardized images from a single view or multi-view capture
CN113359782B (zh) 一种融合lidar点云与图像数据的无人机自主选址降落方法
US20220319146A1 (en) Object detection method, object detection device, terminal device, and medium
WO2021114773A1 (en) Target detection method, device, terminal device, and medium
US20220301277A1 (en) Target detection method, terminal device, and medium
EP4213128A1 (en) Obstacle detection device, obstacle detection system, and obstacle detection method
US11842440B2 (en) Landmark location reconstruction in autonomous machine applications
CN110992424A (zh) 基于双目视觉的定位方法和系统
CN110197104B (zh) 基于车辆的测距方法及装置
CN110864670B (zh) 目标障碍物位置的获取方法和系统
CN117115271A (zh) 无人机飞行过程中的双目相机外参数自标定方法及系统
CN114648639B (zh) 一种目标车辆的检测方法、系统及装置
WO2023283929A1 (zh) 双目相机外参标定的方法及装置
Xiong et al. A 3d estimation of structural road surface based on lane-line information
WO2021114775A1 (en) Object detection method, object detection device, terminal device, and medium
CN114463832A (zh) 一种基于点云的交通场景视线追踪方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22877812

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE