WO2023142865A1 - Autonomous mobile robot and method for lane line detection using the same - Google Patents

Autonomous mobile robot and method for lane line detection using the same

Info

Publication number
WO2023142865A1
Authority
WO
WIPO (PCT)
Prior art keywords
lane
candidate
coordinate system
robot
frame image
Prior art date
Application number
PCT/CN2022/143259
Other languages
English (en)
French (fr)
Inventor
李乐
潘金凤
程今
暴印行
Original Assignee
灵动科技(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 灵动科技(北京)有限公司
Publication of WO2023142865A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Definitions

  • the present application relates to the field of robot technology, in particular to an autonomous mobile robot, a method for detecting lane lines using an autonomous mobile robot, a method for laterally positioning a robot within a lane line based on that method, an electronic device, and a non-transitory computer-readable storage medium.
  • autonomous mobile robots (Autonomous Mobile Robot, AMR) have been widely used in scenarios such as automated warehousing to perform handling, picking and other tasks.
  • the robot can use a variety of methods for positioning and navigation, such as two-dimensional code methods, laser methods, visual methods, etc.
  • Robots also typically use lane markings attached to the ground for assisted localization and navigation.
  • the lane lines first need to be recognized and detected by the camera mounted on the robot, and the robot is then positioned and navigated according to the detected lane lines.
  • the detection of lane lines in the existing technology is not accurate, thus affecting the positioning and navigation results.
  • autonomous mobile robots used in automated warehousing scenarios are usually short (e.g., below 20 or 30 cm), so the cameras mounted on them are close to the ground. Due to the perspective effect, the lane lines on the ground are strongly foreshortened in the image (near parts appear large, far parts small): the ranging error for distant points is large, yet if only nearby points are used for detection there are too few points and the fitting is unstable, which degrades the lane line detection results.
  • the present application provides an autonomous mobile robot, a method for detecting lane lines using an autonomous mobile robot, a method for laterally positioning the robot within the lane line based on that method, an electronic device, and a non-transitory computer-readable storage medium.
  • a method for lane line detection using an autonomous mobile robot wherein the robot is equipped with a camera, and the method includes:
  • the input images include current frame images and historical frame images;
  • the endpoints of the successfully matched candidate lanes are merged, and the merged endpoints are fitted as the current lane segment.
  • determining the coordinate value of the end point of the candidate lane in the robot coordinate system includes:
  • determining the coordinate values of the candidate lane fragments in the camera coordinate system according to the current frame image among the continuously acquired input images; determining the coordinate values of the endpoints of the candidate lane fragments in the robot coordinate system according to their coordinate values in the camera coordinate system; and merging candidate lane fragments of the same lane into candidate lanes and determining the coordinate values of the endpoints of the candidate lanes in the robot coordinate system.
  • determining the coordinate values of the candidate lane fragments in the camera coordinate system includes:
  • using a preset neural network model to obtain a mask binary image of the lane in the current frame image; extracting an edge image of the mask binary image with the Canny algorithm; and processing the extracted edge image with the Hough line detection algorithm to obtain the coordinate values of the candidate lane fragments.
  • determining the coordinate values of the endpoints of the candidate lane fragments in the robot coordinate system includes: obtaining the coordinate values of the endpoints of the candidate lane fragments in the camera coordinate system from the coordinate values of the candidate lane fragments in the camera coordinate system; and determining the coordinate values of those endpoints in the robot coordinate system based on the transformation relationship between the camera coordinate system and the robot coordinate system.
  • determining the coordinate values of the endpoints of the candidate lanes in the robot coordinate system further includes: before merging, removing candidate lane fragments whose distance from the origin in the robot coordinate system exceeds a preset first threshold; and/or, after merging, removing candidate lanes whose length is smaller than a preset second threshold.
  • merging candidate lane fragments of the same lane into candidate lanes includes:
  • Fitting is performed using the orientation of each candidate lane fragment and/or the distance between every two candidate lane fragments.
  • matching the candidate lanes in the current frame image with the candidate lanes in the historical frame image includes:
  • calculating the distance between each candidate lane in the current frame image and the candidate lanes in the historical frame images; determining, for each candidate lane in the current frame image, the minimum of those distances; if the minimum is less than or equal to a preset third threshold, the matching succeeds; and if the minimum is greater than the third threshold, the matching fails.
  • calculating the distance between each candidate lane in the current frame image and each candidate lane in the historical frame images includes: calculating the slope difference between the two candidate lanes, and/or calculating the distance between the endpoints of the two candidate lanes.
  • matching the candidate lanes in the current frame image with the candidate lanes in the historical frame image further includes:
  • for a candidate lane in the current frame image, if it fails to match all candidate lanes in the historical frame images, it is treated as a newly added candidate lane and added to the candidate lane set of the historical frame images.
  • matching the candidate lanes in the current frame image with the candidate lanes in the historical frame image further includes:
  • for a candidate lane in the historical frame images, if it fails to match all candidate lanes in the current frame image, it is regarded as a candidate lane that has reached its end and is deleted from the candidate lane set of the historical frame images.
  • the method also includes:
  • the merged endpoints are added to the set of candidate lanes of the historical frame image.
  • merging the endpoints of successfully matched candidate lanes and fitting the merged endpoints into the current lane segment includes: filtering out abnormal points among the endpoints of the candidate lanes using the RANSAC algorithm.
  • the method also includes: transforming the current lane segment from the odom coordinate system to the robot coordinate system.
  • a method for laterally positioning a robot within a lane line based on the above-mentioned lane line detection method, including: determining, based on the current lane segment, the lateral position of the robot within the lane line.
  • determining the lateral position of the robot within the lane line based on the current lane segment includes: matching, according to the position information of the robot, the current lane segment with the lane line at the corresponding position in a preset map; and determining the lateral position of the robot within the lane line in the map according to the matching result.
  • determining the lateral position of the robot within the lane line in the map according to the matching result includes: determining a plurality of candidate positions based on a plurality of matched lane lines in the map; and determining the lateral position of the robot based on the plurality of candidate positions.
  • the lateral position comprises a coordinate position and orientation of the robot.
  • an autonomous mobile robot comprising:
  • a driving device for driving the vehicle body to move
  • a camera mounted on the vehicle body and having a lateral field of view relative to the vehicle body
  • the memory stores a computer program, and when the computer program is executed by the processor, causes the processor to perform the method as described above.
  • the camera includes a first camera, which is installed at the front of the vehicle body and forms a first included angle with the front direction of the vehicle body.
  • the camera further includes a second camera, which is installed at the front of the vehicle body and forms a second included angle with the direction of the vehicle head; the first camera and the second camera are separated by a predetermined distance, and the first included angle and the second included angle are symmetrical with respect to the direction of the vehicle head.
  • an electronic device including:
  • the memory stores a computer program, and when the computer program is executed by the processor, causes the processor to perform the method as described above.
  • a non-transitory computer-readable storage medium on which computer-readable instructions are stored, and when the instructions are executed by a processor, the processor performs the above-mentioned method.
  • Fig. 1 shows a schematic side view of an autonomous mobile robot according to an embodiment of the present application.
  • Fig. 2 shows a schematic view of a field of view of a camera according to an embodiment of the present application.
  • Fig. 3 shows a flow chart of a method for lane line detection using an autonomous mobile robot according to an embodiment of the present application.
  • Fig. 4 shows a flow chart of determining the coordinate values of the endpoints of the candidate lanes in the robot coordinate system according to the current frame image according to one embodiment of the present application.
  • Fig. 5 shows a schematic diagram of merging candidate lane fragments of the same lane into candidate lanes according to an embodiment of the present application.
  • Fig. 6 shows a flow chart of determining the coordinate values of candidate lane fragments in the camera coordinate system according to the current frame image according to an embodiment of the present application.
  • Fig. 7 shows a flow chart of determining the coordinate values of the endpoints of the candidate lane fragments in the robot coordinate system according to the coordinate values of the candidate lane fragments in the camera coordinate system according to an embodiment of the present application.
  • Fig. 8 shows a flow chart of determining the coordinate values of the endpoints of the candidate lanes in the robot coordinate system according to another embodiment of the present application according to the current frame image.
  • FIG. 9 shows a flow chart of matching the candidate lanes in the current frame image with the candidate lanes in the historical frame images based on the coordinate values of the odom coordinate system according to an embodiment of the present application.
  • Fig. 10 shows a flowchart of a method for lane line detection using an autonomous mobile robot according to another embodiment of the present application.
  • Fig. 11 shows a flowchart of a method for lane line detection using an autonomous mobile robot according to yet another embodiment of the present application.
  • Fig. 12 shows a flow chart of determining the lateral position of the robot within the lane line based on the current lane line segment according to an embodiment of the present application.
  • Fig. 13 shows a flow chart of determining the lateral position of the robot within the lane line in the map according to the matching result according to an embodiment of the present application.
  • Fig. 1 shows a schematic side view of an autonomous mobile robot according to an embodiment of the present application.
  • an autonomous mobile robot 100 may include a vehicle body 110 , a driving device 120 , and a camera 130 .
  • the driving device 120 may be installed on the lower part of the vehicle body 110 to drive the vehicle body 110 to move.
  • the camera 130 is mounted on the vehicle body 110 and has a side view relative to the vehicle body 110 .
  • the camera 130 may be installed facing the side of the vehicle body 110 , and the angle between the camera 130 and the front of the vehicle body 110 may be set to yaw.
  • Fig. 2 shows a schematic view of a field of view of a camera according to an embodiment of the present application.
  • the coordinate origin O in Figure 2 is the position of the camera
  • the direction of the X-axis is the front of the vehicle body
  • the direction of the dotted line OA is the orientation of the camera. This arrangement of the camera makes it convenient to detect the lane line on the side of the vehicle body while still covering the area in front of the vehicle. For example, if the field of view of the camera is 120 degrees, then ∠C11OC12 is the camera's FOV. A camera arranged in this way brings the nearest visible point P of the lane line closer to the camera.
  • cameras can be respectively arranged on both sides of the front of the vehicle body 110.
  • the origin O in Fig. 2 is the right-side camera and Cm is the midpoint between the two cameras; with OCm equal to 6 cm, the blind zone directly in front of the vehicle body 110 is OCm·tan(π/2-(FOV/2-yaw)) < 12 cm, so this blind zone can be ignored.
  • although in this example the autonomous mobile robot is equipped with two cameras, the application is not limited thereto; since each camera can independently detect lane lines, the application can also be implemented with a monocular camera.
  • the ability to detect and recognize lane lines on the side of the vehicle body can be improved, while taking into account the front of the vehicle body.
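  • to make the geometry above concrete, a minimal sketch follows; it assumes the near-point range OP = lDist/sin(yaw + FOV/2), equivalently lDist/cos(π/2 - FOV/2 - yaw), and the blind-zone depth OCm·tan(π/2 - (FOV/2 - yaw)), with the function names and the 0.5 m lane distance chosen only for illustration.

```python
import math

def nearest_visible_point(l_dist_m, fov_deg, yaw_deg):
    # Range OP from the camera to the nearest visible lane-line point P,
    # for a camera yawed by yaw_deg toward a lane line at lateral distance
    # l_dist_m; valid while 0 < FOV/2 + yaw <= 90 degrees.
    return l_dist_m / math.sin(math.radians(fov_deg / 2 + yaw_deg))

def front_blind_zone_depth(cam_offset_m, fov_deg, yaw_deg):
    # Depth of the strip directly ahead of the midpoint Cm of two symmetric
    # front cameras (each cam_offset_m from Cm) that neither FOV covers,
    # i.e. OCm * tan(pi/2 - (FOV/2 - yaw)).
    return cam_offset_m * math.tan(math.pi / 2 - math.radians(fov_deg / 2 - yaw_deg))

# Values quoted above: 120-degree FOV, yaw = 30 degrees, OCm = 6 cm;
# the 0.5 m lane distance is an illustrative assumption.
print(nearest_visible_point(0.5, 120, 30))      # 0.5 m
print(front_blind_zone_depth(0.06, 120, 30))    # ~0.104 m, i.e. below 12 cm
```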
  • the autonomous mobile robot 100 may also include a processor and a memory (not shown in the figure), on which a computer program is stored, and when the computer program is executed by the processor, the processor is made to perform the method described below.
  • Fig. 3 shows a flow chart of a method for lane line detection using an autonomous mobile robot according to an embodiment of the present application.
  • the method 200 may include steps S210, S220, S230, S240, and S250.
  • step S210 the input images captured by the camera are acquired continuously.
  • the camera mounted on it performs continuous shooting, so multiple frames of input images that are continuously shot can be obtained, and the input images may include current frame images and historical frame images.
  • step S220 the coordinate values of the endpoints of the candidate lanes in the robot coordinate system are determined according to the current frame image in the continuously acquired input images.
  • the coordinate value of the end point of the candidate lane in the robot coordinate system can be determined in the current frame, and the candidate lane is the part identified in the image that may be the lane line. The specific determination method will be described in detail below.
  • step S230 the coordinate values of the endpoints of the candidate lanes in the robot coordinate system are converted into coordinate values in the odom coordinate system.
  • the odom coordinate system is the odometer coordinate system. It is a world-fixed coordinate system. When the autonomous mobile robot starts to move, the odom coordinate system and the map coordinate system coincide, but as time goes by a deviation appears; this deviation is the cumulative error of the odometer. If the odometer were perfectly accurate, with no calculation error, the map coordinate system and the odom coordinate system would always coincide. In practice, however, the odometer is biased, and the pose of the mobile platform in the odom coordinate system can drift without bound, which makes the odom frame unsuitable as a long-term global reference.
  • the pose of the robot in the odom coordinate system can be guaranteed to be continuous, which means that the pose of the mobile platform in the odom coordinate system always changes smoothly without jumps.
  • the odom coordinate system is computed based on an odometry source, such as wheel odometry, visual odometry or an inertial measurement unit.
  • the odom coordinate system is useful as a precise, short-term local reference, but the offset makes it useless as a long-term reference.
  • in this application, only several adjacent frames among the continuously acquired input images are considered, so computing in the odom coordinate system ensures that the result is accurate.
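  • as an illustrative sketch of step S230 (not a prescribed implementation), the frame change can be written as a planar rigid transform, assuming the endpoints have already been projected onto the ground plane so that only the robot's (x, y, θ) pose from the odometry source is needed:

```python
import numpy as np

def robot_to_odom(points_robot_xy, odom_pose):
    # Step S230 as a planar rigid transform: express endpoint coordinates,
    # given in the robot frame, in the odom frame. odom_pose = (x, y, theta)
    # is the robot pose in odom, e.g. integrated from wheel odometry.
    x, y, theta = odom_pose
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return np.asarray(points_robot_xy, dtype=float) @ rot.T + np.array([x, y])
```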
  • step S240 based on the coordinate values of the odom coordinate system, the candidate lanes in the current frame image are matched with the candidate lanes in the historical frame images.
  • when the autonomous mobile robot has just started running, the camera has also just started to acquire input images.
  • when the first frame is processed, there is no historical frame image yet.
  • when the second, third, and subsequent frames are processed, the historical frame images already contain candidate lanes, and at this point the candidate lanes in the current frame image can be matched against the candidate lanes in the historical frame images in the odom coordinate system.
  • step S250 the endpoints of the successfully matched candidate lanes are merged, and the merged endpoints are fitted as the current lane segment. If the matching is successful, the endpoints of the candidate lanes can be merged and fitted to form the current lane segment, thereby completing the detection of the lane line.
  • a ransac algorithm may be used to filter out abnormal points in endpoints of candidate lanes.
  • Fig. 4 shows a flow chart of determining the coordinate values of the endpoints of the candidate lanes in the robot coordinate system according to the current frame image according to an embodiment of the present application.
  • the above step S220 may include sub-steps S221, S222, and S223.
  • sub-step S221 the coordinate values of the candidate lane fragments in the camera coordinate system are determined according to the current frame image in the continuously acquired input images.
  • the first obtained coordinate value is the coordinate value in the camera coordinate system. How to determine the coordinate values of the candidate lane fragments in the camera coordinate system will be described in detail below.
  • sub-step S222 according to the coordinate values of the candidate lane fragments in the camera coordinate system, the coordinate values of the end points of the candidate lane fragments in the robot coordinate system are determined.
  • after the coordinate values of the candidate lane fragments in the camera coordinate system have been determined in sub-step S221, they can be used in sub-step S222 to determine the coordinate values of the endpoints of the candidate lane fragments in the robot coordinate system, which may be stored, for example, as laneSeg3dList. How the transformation from the camera coordinate system to the robot coordinate system is carried out is further detailed below.
  • sub-step S223 the candidate lane fragments of the same lane are merged into candidate lanes, and the coordinate values of the endpoints of the candidate lanes in the robot coordinate system are determined.
  • the coordinates can be used to determine which candidate lane fragments belong to the same lane for merging to obtain lane3dList, as shown in FIG. 5 .
  • Fig. 5 shows a schematic diagram of merging candidate lane fragments of the same lane into candidate lanes according to an embodiment of the present application.
  • the coordinate values of the endpoints of the candidate lanes in the robot coordinate system can be determined.
  • step S230 lane3dList is transformed into the odom coordinate system to obtain lane3dListInOdom.
  • in sub-step S223, when merging candidate lane fragments of the same lane into candidate lanes, the direction of each candidate lane fragment and/or the distance between every two candidate lane fragments can be used to perform the fitting.
  • the coordinate values of the endpoints of the candidate lanes in the robot coordinate system can be determined in the current frame image through coordinate transformation and merging of lane fragments.
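  • the disclosure leaves the exact fusion rule open beyond using fragment orientation and inter-fragment distance, so the following is only one plausible sketch of such a test; the angle and gap thresholds and the name same_lane are assumptions, and fragments for which the test returns True would be merged into one candidate lane.

```python
import numpy as np

def same_lane(frag_a, frag_b, max_angle_deg=10.0, max_gap_m=0.15):
    # Heuristic test for whether two fragments (each a pair of endpoints in
    # the robot frame) belong to the same lane: nearly parallel directions
    # and a small perpendicular offset. Both thresholds are illustrative.
    a0, a1 = np.asarray(frag_a, dtype=float)[:, :2]
    b0, b1 = np.asarray(frag_b, dtype=float)[:, :2]
    da, db = a1 - a0, b1 - b0
    cos_angle = abs(da @ db) / (np.linalg.norm(da) * np.linalg.norm(db))
    if cos_angle < np.cos(np.radians(max_angle_deg)):
        return False                                  # directions differ too much
    normal = np.array([-da[1], da[0]]) / np.linalg.norm(da)
    return abs((b0 - a0) @ normal) <= max_gap_m       # lateral offset from frag_a's line
```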
  • Fig. 6 shows a flow chart of determining the coordinate values of candidate lane fragments in the camera coordinate system according to the current frame image according to an embodiment of the present application.
  • the above sub-step S221 may include sub-steps S221A, S221B, and S221C.
  • in sub-step S221A, a preset neural network model is used to obtain the mask binary image of the lane in the current frame image.
  • the neural network model can be a trained neural network model, so that it can be directly used to obtain the binary image of the mask.
  • in sub-step S221B, the Canny algorithm is used to extract the edge image of the mask binary image.
  • sub-step S221C the extracted edge image is processed by the hough line detection algorithm, so that the coordinate values of the candidate lane fragments in the camera coordinate system can be obtained.
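  • a minimal OpenCV sketch of the S221A-S221C pipeline may look as follows, assuming the segmentation model has already produced the binary lane mask; the Canny and Hough parameter values are illustrative rather than taken from this disclosure.

```python
import cv2
import numpy as np

def extract_lane_fragments(lane_mask):
    # lane_mask: HxW uint8 binary image (0/255) produced by the segmentation
    # network in sub-step S221A; the Canny and Hough parameters are illustrative.
    edges = cv2.Canny(lane_mask, 50, 150)                        # sub-step S221B
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,  # sub-step S221C
                               threshold=30, minLineLength=20, maxLineGap=10)
    if segments is None:
        return []
    # Each row (x1, y1, x2, y2), in image pixels, is one candidate lane fragment.
    return [tuple(int(v) for v in seg) for seg in segments[:, 0]]
```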
  • sub-step S222 may include sub-steps S222A and S222B.
  • sub-step S222A according to the coordinate values of the candidate lane fragments in the camera coordinate system, the coordinate values of the end points of the candidate lane fragments in the camera coordinate system are obtained.
  • sub-step S222B based on the transformation relationship between the camera coordinate system and the robot coordinate system, determine the coordinate values laneSeg3dList of the end points of the candidate lane fragments in the robot coordinate system.
  • since the camera is mounted on the vehicle body of the autonomous mobile robot in advance and its parameters are known, the transformation relationship between the camera coordinate system and the robot coordinate system can be preset or otherwise known. Therefore, the coordinate values of the endpoints of the lane fragments in the camera coordinate system obtained in sub-step S222A can be transformed into coordinate values in the robot coordinate system.
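  • sub-step S222B can be pictured as a single homogeneous transform, as in the hedged sketch below; the way pixel endpoints are lifted to 3D camera coordinates is an assumption left open here.

```python
import numpy as np

def fragment_endpoints_to_robot_frame(endpoints_cam, T_robot_cam):
    # Sub-step S222B as a homogeneous transform. endpoints_cam: (N, 3) endpoint
    # coordinates in the camera frame (how pixels are lifted to 3D, e.g. via a
    # ground-plane assumption, is not fixed here). T_robot_cam: known 4x4
    # camera-to-robot extrinsic from the camera mounting.
    pts_h = np.hstack([np.asarray(endpoints_cam, dtype=float),
                       np.ones((len(endpoints_cam), 1))])
    return (T_robot_cam @ pts_h.T).T[:, :3]           # laneSeg3dList-style output
```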
  • Fig. 8 shows a flow chart of determining the coordinate values of the endpoints of the candidate lanes in the robot coordinate system according to another embodiment of the present application according to the current frame image.
  • the above step S220 may also include substeps S224 and/or S225 .
  • sub-step S224 before the merging in sub-step S223, the candidate lane fragments whose distance from the origin in the robot coordinate system exceeds a preset first threshold are removed.
  • according to this sub-step, if a detected candidate lane fragment is too far away (i.e., farther than the preset first threshold), it can be deleted to avoid the large ranging error of distant points and thereby reduce noise. In other words, the points in laneSeg3dList that are too far away are removed, yielding laneSeg3dListFiltered.
  • sub-step S225 after merging in sub-step S223, candidate lanes whose length is smaller than the preset second threshold are removed. According to this sub-step, if the length of a certain candidate lane is too short (that is, less than the preset second threshold) after merging, it can also be considered as noise and deleted.
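  • the two pruning rules of sub-steps S224 and S225 might be sketched as follows; the concrete threshold values are illustrative assumptions, since only preset first and second thresholds are required.

```python
import numpy as np

def filter_far_fragments(fragments, first_threshold_m=2.5):
    # Sub-step S224: drop fragments having an endpoint farther from the robot
    # origin than the first threshold (value illustrative) -> laneSeg3dListFiltered.
    return [f for f in fragments
            if all(np.linalg.norm(np.asarray(p, dtype=float)[:2]) <= first_threshold_m
                   for p in f)]

def filter_short_lanes(lanes, second_threshold_m=0.3):
    # Sub-step S225: drop merged candidate lanes shorter than the second
    # threshold (value illustrative).
    return [l for l in lanes
            if np.linalg.norm(np.asarray(l[1], dtype=float)[:2]
                              - np.asarray(l[0], dtype=float)[:2]) >= second_threshold_m]
```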
  • FIG. 9 shows a flow chart of matching the candidate lanes in the current frame image with the candidate lanes in the historical frame images based on the coordinate values of the odom coordinate system according to an embodiment of the present application.
  • the above step S240 may include sub-steps S241 , S242 , S243 , and S244 .
  • sub-step S241 the distance between each candidate lane in the current frame image and multiple candidate lanes in the historical frame image is calculated.
  • when calculating the distance between a candidate lane in the current frame and a candidate lane in a historical frame, the slope difference between the two can be used as the distance, and/or the distance between their endpoints can be used as the distance.
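  • one hedged way to realize such a distance, combining both options into a weighted sum (the combination itself is an assumption), is:

```python
import numpy as np

def lane_distance(cur, hist, w_slope=1.0, w_endpoint=1.0):
    # Distance between two candidate lanes in the odom frame, combining the
    # slope (heading) difference with the closest endpoint distance; the
    # weighted sum is an assumption, as the text allows either or both terms.
    def heading(lane):
        (x1, y1), (x2, y2) = np.asarray(lane, dtype=float)[:, :2]
        return np.arctan2(y2 - y1, x2 - x1)
    d = heading(cur) - heading(hist)
    d_slope = abs(np.arctan2(np.sin(d), np.cos(d)))   # wrapped angle difference
    d_endpoint = min(float(np.linalg.norm(np.asarray(pc, dtype=float)[:2] -
                                          np.asarray(ph, dtype=float)[:2]))
                     for pc in cur for ph in hist)
    return w_slope * d_slope + w_endpoint * d_endpoint
```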
  • sub-step S242 for each candidate lane in the current frame image, determine the minimum value matchDistmin among the distances between it and all candidate lanes in the historical frame image. That is to say, the candidate lanes in the historical frame images are traversed to determine the one with the closest distance.
  • a third threshold is preset in advance: if the minimum matchDistmin is less than or equal to the third threshold, the matching succeeds (sub-step S243); if matchDistmin is greater than the third threshold, the matching fails (sub-step S244).
  • for a candidate lane in the current frame image, if it fails to match all candidate lanes in the historical frame images, this indicates that it is a candidate lane newly added in the current frame image, so it can be added to the candidate lane set of the historical frame images (because the current frame becomes a historical frame when the next frame is processed). For example, histList can be traversed to obtain an updated histList.
  • the newly appearing lanes captured by the camera can be added to the set of candidate lanes in the historical frame images for calculation and processing of subsequent frames.
  • for a candidate lane in the historical frame images, if it fails to match all candidate lanes in the current frame image, it means that the candidate lane has disappeared in the current frame (for example, the lane already came to its end in the previous frame), and it can then be deleted from the candidate lane set of the historical frame images.
  • for such a candidate lane in the historical frames, if no matching candidate lane can be found by traversing lane3dListInOdom, the lane can be considered to have come to an end and can be deleted from histList. Therefore, during the traveling of the autonomous mobile robot, when a certain lane disappears from the camera's view, it can be deleted from the set of candidate lanes of the historical frames, so as not to affect subsequent calculation and processing.
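  • putting these rules together, a simplified sketch of the matching step could read as follows (it omits the matchDist bookkeeping and tie-breaking described above); dist_fn can be, for example, the lane_distance sketch given earlier.

```python
def match_lanes(lane3dListInOdom, histList, dist_fn, third_threshold=0.2):
    # Greedy sketch of step S240: for each current-frame lane find the closest
    # historical lane and accept the match if the distance is within the third
    # threshold (value illustrative). Returns matched index pairs, new lanes to
    # append to histList, and indices of historical lanes with no match, which
    # are treated as ended and removed from histList.
    matches, new_lanes = {}, []
    for j, cur in enumerate(lane3dListInOdom):
        dists = [dist_fn(cur, hist) for hist in histList]
        if dists and min(dists) <= third_threshold:
            matches[j] = min(range(len(dists)), key=dists.__getitem__)   # S243
        else:
            new_lanes.append(cur)                                        # S244
    ended = [i for i in range(len(histList)) if i not in matches.values()]
    return matches, new_lanes, ended
```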
  • Fig. 10 shows a flowchart of a method for lane line detection using an autonomous mobile robot according to another embodiment of the present application.
  • the method 200 may further include step S260 .
  • in step S260, the merged endpoints are added to the set of candidate lanes of the historical frame images. As the autonomous mobile robot keeps moving and the camera keeps capturing images, the current frame will have become a historical frame by the time the next frame is processed, so the endpoints merged for the current frame image can be added to the candidate lane set of the historical frame images for use in the calculation and processing of subsequent frames.
  • Fig. 11 shows a flowchart of a method for lane line detection using an autonomous mobile robot according to yet another embodiment of the present application.
  • the method 200 may further include step S270 .
  • step S270 the current lane segment is transformed from the odom coordinate system to the robot coordinate system.
  • this makes it convenient for the subsequent positioning program to laterally position the autonomous mobile robot.
  • as described above, for histList[i], the RANSAC algorithm can be used to filter out the abnormal points among the endpoints of the candidate lane, which are then fitted to obtain histList[i].fitline; through step S270, histList[i].fitline in the odom coordinate system is transformed into histList[i].fitlineInBase in the robot coordinate system. Finally, the fitlineInBase of all candidate lanes is taken for use by the subsequent localization program to laterally position the autonomous mobile robot.
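  • a bare-bones version of such a RANSAC line fit is sketched below as an illustration only; the subsequent change to fitlineInBase is simply the odom-to-robot pose applied to the returned point and direction.

```python
import numpy as np

def ransac_fit_line(endpoints, iters=100, inlier_tol_m=0.05, seed=0):
    # Bare-bones RANSAC line fit over the merged endpoints: returns a point on
    # the line and a unit direction (a fitline-style result). Iteration count,
    # tolerance and seed are illustrative.
    pts = np.asarray(endpoints, dtype=float)[:, :2]
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        if np.linalg.norm(d) < 1e-9:
            continue
        normal = np.array([-d[1], d[0]]) / np.linalg.norm(d)
        inliers = pts[np.abs((pts - pts[i]) @ normal) < inlier_tol_m]
        if best is None or len(inliers) > len(best):
            best = inliers
    best = pts if best is None or len(best) < 2 else best
    center = best.mean(axis=0)
    _, _, vt = np.linalg.svd(best - center)           # principal direction of inliers
    return center, vt[0]
```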
  • the robot can be positioned laterally within the lane line.
  • the method for laterally positioning the robot within the lane may include step S310.
  • step S310 using the current lane segment obtained in method 200, the lateral position of the robot within the lane may be determined. For example, some lane line pictures taken at different angles and/or different distances from the lane line can be preset in the robot system. Subsequently, the current lane line segment obtained in method 200 is matched with the preset lane line picture, and the lateral coordinates and direction of the robot are obtained according to the matching result.
  • lane lines may include different types such as single solid line, single dashed line, and double solid line. Then, different pictures can be preset according to these different types of lane lines, so as to facilitate matching.
  • Fig. 12 shows a flow chart of determining the lateral position of the robot within the lane line based on the current lane line segment according to an embodiment of the present application.
  • the above step S310 may include sub-steps S311 and S312.
  • sub-step S311 according to the location information of the robot, the current lane line segment is matched with the lane line at the corresponding position in the preset map.
  • sub-step S312 the lateral position of the robot within the lane line in the map is determined according to the matching result.
  • the detection result of the lane line in the method 200 can be used to laterally position the robot within the lane line.
  • Fig. 13 shows a flow chart of determining the lateral position of the robot within the lane line in the map according to the matching result according to an embodiment of the present application.
  • the above sub-step S312 may include sub-steps S312A and S312B.
  • sub-step S312A multiple candidate positions are determined based on multiple lane lines matched in the map.
  • sub-step S312B the lateral position of the robot is determined according to a plurality of candidate positions.
  • two or more cameras can be installed on the body of the autonomous mobile robot, and each camera can capture input images independently, and can use the input images captured by it to detect lane lines.
  • multiple current lane lines can be detected (for example, with one camera on the left and one on the right, the lane lines on the left and right sides of the vehicle body can be detected respectively), and each of these lane lines can be used to laterally position the robot. Therefore, there may be multiple candidate positions; that is, the lateral positions of the robot obtained from the different lane lines detected in the input images captured by each camera may deviate from one another.
  • the lateral position of the robot may be determined from a plurality of candidate positions using any known technique, for example, the plurality of candidate positions may be averaged.
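  • for example, plain averaging of the candidate poses might be sketched as follows (the circular mean for the heading is an implementation choice, not something required by the text):

```python
import numpy as np

def fuse_candidate_poses(candidates):
    # Average several candidate lateral poses (x, y, theta), e.g. one per
    # camera / matched lane line; the heading is averaged on the circle so
    # headings near +/-pi do not cancel. Plain averaging is just one of the
    # possible fusion rules.
    xs, ys, thetas = zip(*candidates)
    theta = float(np.arctan2(np.mean(np.sin(thetas)), np.mean(np.cos(thetas))))
    return float(np.mean(xs)), float(np.mean(ys)), theta
```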
  • the above-mentioned lateral position may include the coordinate position (x, y) and orientation θ of the robot.
  • an electronic device, which includes a processor and a memory; the memory stores a computer program, and when the computer program is executed by the processor, the processor is caused to perform the method described in any one of the above embodiments.
  • a non-transitory computer-readable storage medium on which computer-readable instructions are stored, and when the instructions are executed by a processor, the processor can perform any one of the above implementations. method described in the method.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, where the computer program includes program codes for executing the methods shown in the flowcharts.
  • the computer program can be downloaded and installed from a network via its communication means, and/or installed from a removable medium.
  • when the computer program is executed by a central processing unit (CPU), the above-mentioned functions defined in the method of the present application are performed.
  • the computer-readable medium in the present application may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • a computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable Programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program codes are carried. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out the operations of this application may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages—such as Java, Smalltalk, C++, and conventional procedural programming language—such as "C" or a similar programming language.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present application may be implemented by means of software or by means of hardware.
  • the described units may also be provided in the processor.
  • the present application also provides a computer-readable medium.
  • the computer-readable medium may be included in the device described in the above embodiments, or it may exist independently without being assembled into the device.
  • the above-mentioned computer-readable medium bears one or more programs, and when the above-mentioned one or more programs are executed by the device, the device is made to execute the above-mentioned method.

Abstract

An autonomous mobile robot (100), a method (200) for lane line detection using the autonomous mobile robot (100), a method (310) for laterally positioning the robot within a lane line based on that method, an electronic device, and a non-transitory computer-readable storage medium. The method (200) for lane line detection using the autonomous mobile robot (100) includes: continuously acquiring input images captured by a camera (130) (S210), the input images including a current frame image and historical frame images; determining, from the current frame image among the continuously acquired input images, the coordinate values of the endpoints of candidate lanes in the robot coordinate system (S220); converting the coordinate values of the endpoints of the candidate lanes in the robot coordinate system into coordinate values in the odom coordinate system (S230); matching, based on the coordinate values in the odom coordinate system, the candidate lanes in the current frame image with the candidate lanes in the historical frame images (S240); and merging the endpoints of successfully matched candidate lanes and fitting the merged endpoints into a current lane segment (S250).

Description

自主移动机器人及利用其进行车道线检测的方法 技术领域
本申请涉及机器人技术领域,具体涉及一种自主移动机器人、利用自主移动机器人进行车道线检测的方法、基于该方法对机器人在车道线内进行横向定位的方法、电子设备以及非瞬时性计算机可读存储介质。
背景技术
目前,自主移动机器人(Autonomous Mobile Robot,AMR)已经大量地应用于自动化仓储等场景中,以执行搬运、拣选等工作。为完成自身任务,机器人可使用多种方式进行定位和导航,例如二维码方式、激光方式、视觉方式等。机器人通常还使用附着于地面的车道线进行辅助定位和导航。
如果利用附着于地面的车道线进行辅助定位和导航,那么首先需要通过机器人上搭载的摄像头对车道线进行识别和检测,然后再根据检测到的车道线对机器人进行定位和导航。
然而,现有技术对车道线的检测并不准确,从而影响了定位和导航结果。例如,在自动化仓储场景中使用的自主移动机器人通常高度较矮(例如低于20或30厘米),所以机器人上搭载的摄像头距离地面较近。由于透视效应,地面上的车道线在图像里的近大远小效果很严重,远处点测距的误差较大,但是如果只用近处点进行检测,则点数较少,拟合不稳,从而影响了车道线的检测结果。
发明内容
为了解决现有技术中出现的上述问题,本申请提供了一种自主移动 机器人、利用自主移动机器人进行车道线检测的方法、基于该方法对机器人在车道线内进行横向定位的方法、电子设备以及非瞬时性计算机可读存储介质。
根据本申请的一个方面,提供了一种利用自主移动机器人进行车道线检测的方法,其中所述机器人搭载有摄像头,所述方法包括:
连续获取所述摄像头拍摄的输入图像,所述输入图像包括当前帧图像和历史帧图像;
根据连续获取的输入图像中的当前帧图像,确定候选车道的端点在机器人坐标系的坐标值;
将所述候选车道的端点在所述机器人坐标系的坐标值转换为odom坐标系的坐标值;
基于odom坐标系的坐标值,将所述当前帧图像中的候选车道与历史帧图像中的候选车道进行匹配;以及
将匹配成功的候选车道的端点进行合并,并将合并后的端点拟合为当前车道线段。
根据一个实施方式,根据连续获取的输入图像中的当前帧图像,确定候选车道的端点在机器人坐标系的坐标值包括:
根据连续获取的输入图像中的当前帧图像,确定候选车道碎片在摄像头坐标系的坐标值;
根据所述候选车道碎片在所述摄像头坐标系的坐标值,确定所述候选车道碎片的端点在所述机器人坐标系的坐标值;以及
将同一车道的候选车道碎片合并为候选车道,并确定所述候选车道的端点在所述机器人坐标系的坐标值。
根据一个实施方式,根据连续获取的输入图像中的当前帧图像,确定候选车道碎片在摄像头坐标系的坐标值包括:
利用预设的神经网络模型获取所述当前帧图像中车道的mask二值 图;
利用canny算法提取所述mask二值图的边缘图像;以及
通过hough直线检测算法处理所提取的边缘图像,以得到候选车道碎片的坐标值。
根据一个实施方式,根据所述候选车道碎片在所述摄像头坐标系的坐标值,确定所述候选车道碎片的端点在所述机器人坐标系的坐标值包括:
根据所述候选车道碎片在所述摄像头坐标系的坐标值,获取所述候选车道碎片的端点在所述摄像头坐标系的坐标值;以及
基于所述摄像头坐标系和所述机器人坐标系之间的变换关系,确定所述候选车道碎片的端点在所述机器人坐标系的坐标值。
根据一个实施方式,根据连续获取的输入图像中的当前帧图像,确定候选车道的端点在机器人坐标系的坐标值还包括:
在合并前,去除在所述机器人坐标系中距离原点超过预设的第一阈值的候选车道碎片;和/或
在合并后,去除长度小于预设的第二阈值的候选车道。
根据一个实施方式,将同一车道的候选车道碎片合并为候选车道包括:
利用每个候选车道碎片的方向和/或每两个候选车道碎片之间的距离进行拟合。
根据一个实施方式,基于odom坐标系的坐标值,将所述当前帧图像中的候选车道与历史帧图像中的候选车道进行匹配包括:
计算所述当前帧图像中的每条候选车道与所述历史帧图像中的多条候选车道的距离;
对于所述当前帧图像中的每条候选车道,确定所述距离中的最小值;
若所述最小值小于或等于预定的第三阈值,则匹配成功;以及
若所述最小值大于所述第三阈值,则匹配失败。
根据一个实施方式,计算所述当前帧图像中的每条候选车道与所述历史帧图像中的每条候选车道的距离包括:
计算所述当前帧图像中的每条候选车道与所述历史帧图像中的每条候选车道的斜率差值;和/或
计算所述当前帧图像中的每条候选车道与所述历史帧图像中的每条候选车道的端点的距离。
根据一个实施方式,基于odom坐标系的坐标值,将所述当前帧图像中的候选车道与历史帧图像中的候选车道进行匹配还包括:
对于所述当前帧图像中的候选车道,若与所述历史帧图像中所有候选车道均匹配失败,则作为新增候选车道,并加入所述历史帧图像的候选车道集合中。
根据一个实施方式,基于odom坐标系的坐标值,将所述当前帧图像中的候选车道与历史帧图像中的候选车道进行匹配还包括:
对于所述历史帧图像中的候选车道,若与所述当前帧图像中所有候选车道均匹配失败,则作为已到尽头的候选车道,并从所述历史帧图像的候选车道集合中删除。
根据一个实施方式,该方法还包括:
将合并后的端点加入所述历史帧图像的候选车道集合中。
根据一个实施方式,将匹配成功的候选车道的端点进行合并,并将合并后的端点拟合为当前车道线段包括:
利用ransac算法过滤掉所述候选车道的端点中的异常点。
根据一个实施方式,该方法还包括:
将所述当前车道线段从所述odom坐标系变换至所述机器人坐标系。
根据本申请的另一方面,提供了一种基于如上所述的车道线检测的方法对机器人在车道线内进行横向定位的方法,包括:
基于所述当前车道线段,确定机器人在车道线内的横向位置。
根据一个实施方式,基于所述当前车道线段,确定机器人在车道线内的横向位置包括:
根据所述机器人的位置信息,对所述当前车道线段和预设的地图中对应位置的车道线进行匹配;以及
根据匹配结果确定所述机器人在所述地图中的车道线内的横向位置。
根据一个实施方式,根据匹配结果确定所述机器人在所述地图中的车道线内的横向位置包括:
基于所述地图中匹配的多条车道线,确定多个候选位置;以及
根据所述多个候选位置,确定所述机器人的横向位置。
根据一个实施方式,所述横向位置包括所述机器人的坐标位置和朝向。
根据本申请的另一方面,提供了一种自主移动机器人,包括:
车身;
处理器;
驱动装置,驱动所述车身进行移动;
摄像头,安装在所述车身上,并具有相对于所述车身的侧向视野;以及
存储器,存储有计算机程序,当所述计算机程序被所述处理器执行时,使得所述处理器执行如上所述的方法。
根据一个实施方式,所述摄像头包括第一摄像头,安装在所述车身的车头位置,并与所述车身的车头方向呈第一夹角。
根据一个实施方式,所述摄像头还包括第二摄像头,安装在所述车身的车头位置,并与所述车身的车头方向呈第二夹角,所述第一摄像头与所述第二摄像头间隔预定距离,并且所述第一夹角与所述第二夹角相对于所述车头方向对称。
根据本申请的另一方面,提供了一种电子设备,包括:
处理器;以及
存储器,存储有计算机程序,当所述计算机程序被所述处理器执行时,使得所述处理器执行如上所述的方法。
根据本申请的另一方面,提供了一种非瞬时性计算机可读存储介质,其上存储有计算机可读指令,当所述指令被处理器执行时,使得所述处理器执行如上所述的方法。
由此,由于本申请对自主移动机器人摄像头的设置,便于在机器人的移动过程中通过单目相机对车道线进行检测,而利用odom坐标系并通过历史帧匹配的方式更有助于提高检测准确度和精度。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1示出了根据本申请一个实施方式的自主移动机器人的侧视示意图。
图2示出了根据本申请一个实施例的摄像头的视野示意图。
图3示出了根据本申请一个实施方式利用自主移动机器人进行车道线检测的方法的流程图。
图4示出了根据本申请一个实施方式根据当前帧图像确定候选车 道的端点在机器人坐标系的坐标值的流程图。
图5示出了根据本申请一个实施例将同一车道的候选车道碎片合并为候选车道的示意图。
图6示出了根据本申请一个实施方式根据当前帧图像确定候选车道碎片在摄像头坐标系的坐标值的流程图。
图7示出了根据本申请一个实施方式根据候选车道碎片在摄像头坐标系的坐标值确定候选车道碎片的端点在机器人坐标系的坐标值的流程图。
图8示出了根据本申请另一实施方式根据当前帧图像确定候选车道的端点在机器人坐标系的坐标值的流程图。
图9示出了根据本申请一个实施方式基于odom坐标系的坐标值将当前帧图像中的候选车道与历史帧图像中的候选车道进行匹配的流程图。
图10示出了根据本申请另一实施方式利用自主移动机器人进行车道线检测的方法的流程图。
图11示出了根据本申请又一实施方式利用自主移动机器人进行车道线检测的方法的流程图。
图12示出了根据本申请一个实施方式基于当前车道线段确定机器人在车道线内的横向位置的流程图。
图13示出了根据本申请一个实施方式根据匹配结果确定机器人在地图中的车道线内的横向位置的流程图。
具体实施方式
为了更好地理解本申请的技术方案及优点,下面结合附图和具体实施例对本申请的内容做进一步详细说明。但此处所描述的具体实施例仅用以解释本申请,并不用于限定本申请。此外,以下所描述的本申请的 各实施方式中所涉及到的技术特征除彼此构成冲突的情况外均可以组合使用,从而构成本申请范围内的其他实施方式。
下文中描述的内容提供了许多不同的实施方式或例子用来实现本申请的不同结构。为了简化本申请的公开内容,下文中对特定例子的部件和设置进行描述。当然,它们仅仅为示例,并且目的不在于限制本申请。此外,本申请可以在不同例子中重复参考数字和/或参考字母,这种重复是为了简化和清楚的目的,其本身不指示所讨论各种实施方式和/或设置之间的关系。
图1示出了根据本申请一个实施方式的自主移动机器人的侧视示意图。如图1所示,自主移动机器人100可包括车身110、驱动装置120、摄像头130。驱动装置120可安装在车身110的下部,从而驱动车身110进行移动。摄像头130安装在车身110上,并具有相对于车身110的侧向视野。例如,摄像头130可设置为朝向车身110的侧面的位姿安装,与车身110的正前方夹角可设为yaw。图2示出了根据本申请一个实施例的摄像头的视野示意图。如图2所示,以yaw设为30度为例,图2中的坐标原点O为摄像头所在位置,X轴方向为车身的正前方,虚线OA方向为摄像头的朝向。由此,该摄像头的设置便于检测到位于车身侧面的车道线,同时兼顾车身正前方。例如,摄像头的视场角度为120度,则C 11OC 12即为摄像头的视场FOV。如此设置的摄像头可使车道线最近可视点P距离摄像头更近。如图2所示,lDist为摄像头到车道线的真实距离,FOV范围内为摄像头的可见车道线,则车道线最近可视点P距离摄像头OP=lDist/acos(π/2-FOV/2-yaw),其中yaw越大,则OP越小,且满足0°<FOV/2+yaw<90°。
再例如,可在车身110的车头两侧分别设置摄像头,图2中原点O处为右侧摄像头,Cm为两摄像头的中点,OCm为6厘米,则车身110正前方的盲区为OCm*tan(π/2-(FOV/2-yaw))<12厘米,因此该盲区可忽略。虽然在本例中,自主移动机器人搭载有两个摄像头,但本申请不限于此,由于每个摄像头均可独立地完成车道线的检测,因此本申请可实现通过单目相机实现。
由此,根据本实施方式的自主移动机器人上的摄像头的设置,可以提高车身侧面的车道线的检测识别能力,同时兼顾了车身正前方。
此外,自主移动机器人100还可包括处理器和存储器(图中未示出),该存储器上存储有计算机程序,当该计算机程序被处理器执行时,使得处理器执行如下所述的方法。
图3示出了根据本申请一个实施方式利用自主移动机器人进行车道线检测的方法的流程图。如图3所示,该方法200可包括步骤S210、S220、S230、S240、S250。在步骤S210中,连续获取摄像头拍摄的输入图像。在自主移动机器人的行进过程中,其上搭载的摄像头进行连续拍摄,因此可获得连续拍摄的多帧输入图像,输入图像可包括当前帧图像和历史帧图像。
在步骤S220中,根据连续获取的输入图像中的当前帧图像,确定候选车道的端点在机器人坐标系的坐标值。在输入图像中,可以在当前帧中确定候选车道的端点在机器人坐标系的坐标值,候选车道即为在图像中识别出的有可能是车道线的部分。具体确定方式将在下文中进行详细描述。
在步骤S230中,将候选车道的端点在机器人坐标系的坐标值转换为odom坐标系的坐标值。odom坐标系即里程计坐标系,它是一个世界固定坐标系,在自主移动机器人刚开始运动时,odom坐标系与map坐标系是重合的,但是随着时间的推移,出现了偏差,出现的偏差也就是里程计的累计误差,如果里程计精确度很高,没有计算误差,那么map坐标系与odom坐标系会一直是重合的。然而实际情况中,里程计是有偏差的。在odom坐标系中移动平台的位姿可以任意移动,没有任何界限。这种移动使得odom坐标系不能作为长期的全局参考。然而,在odom坐标系中的机器人的姿态能够保证是连续的,这意味着在odom坐标系中的移动平台的姿态总是平滑变化,没有跳变。在一个典型设置中,odom坐标系是基于测距源来计算的,如车轮里程计,视觉里程计或惯性测量单元。odom坐标系作为一种精确,作为短期的本地 参考是很有用的,但偏移使得它不能作为长期参考。而在本申请的应用中,考虑的是连续获取的输入图像中相邻的数帧图像,因此采用odom坐标系进行计算可以确保结果是精确的。
在步骤S240中,基于odom坐标系的坐标值,将当前帧图像中的候选车道与历史帧图像中的候选车道进行匹配。在自主移动机器人刚刚开始运行时,摄像头也刚刚开始获取输入图像,在处理第一帧时,还没有历史帧图像,在处理到第二帧、第三帧等等后续帧时,历史帧图像中即已有候选车道,此时可在odom坐标系下将当前帧图像中的候选车道与历史帧图像中的候选车道进行匹配。
在步骤S250中,将匹配成功的候选车道的端点进行合并,并将合并后的端点拟合为当前车道线段。如果匹配成功,则可将该候选车道的端点进行合并、拟合,形成当前车道线段,从而完成对车道线的检测。根据一个实施例,在进行合并、拟合前,可利用ransac算法过滤掉候选车道的端点中的异常点。
由此,由于本申请对自主移动机器人摄像头的设置,便于在机器人的移动过程中通过单目相机对车道线进行检测,而利用odom坐标系并通过历史帧匹配的方式更有助于提高检测准确度和精度。
图4示出了根据本申请一个实施方式根据当前帧图像确定候选车道的端点在机器人坐标系的坐标值的流程图。如图4所示,上述步骤S220可包括子步骤S221、S222、S223。在子步骤S221中,根据连续获取的输入图像中的当前帧图像,确定候选车道碎片在摄像头坐标系的坐标值。在获取到每一帧图像时,首先得到的坐标值是在摄像头坐标系下的坐标值。具体如何确定候选车道碎片在摄像头坐标系的坐标值,将在下文中进行详细描述。
在子步骤S222中,根据候选车道碎片在摄像头坐标系的坐标值,确定候选车道碎片的端点在机器人坐标系的坐标值。在子步骤S221确定了候选车道碎片在摄像头坐标系的坐标值后,在子步骤S222可利用 该坐标值确定候选车道碎片的端点在机器人坐标系的坐标值,例如可定义为laneSeg3dList。具体如何实现从摄像头坐标系到机器人坐标系的变换,将在下文中进一步详述。
在子步骤S223中,将同一车道的候选车道碎片合并为候选车道,并确定候选车道的端点在机器人坐标系的坐标值。对于在子步骤S222中确定了端点坐标值的多个候选车道碎片,可以通过该坐标值确定哪些候选车道碎片属于同一车道以进行合并,得到lane3dList,如图5所示。图5示出了根据本申请一个实施例将同一车道的候选车道碎片合并为候选车道的示意图。合并后,可确定候选车道的端点在机器人坐标系的坐标值。随后,在步骤S230中将lane3dList变换到odom坐标系,可得到lane3dListInOdom。
根据本申请一个实施例,在子步骤S223中,在将同一车道的候选车道碎片合并为候选车道时,可利用每个候选车道碎片的方向和/或每两个候选车道碎片之间的距离进行拟合。
由此,可通过坐标变换和车道碎片的合并,实现在当前帧图像中确定候选车道的端点在机器人坐标系的坐标值。
图6示出了根据本申请一个实施方式根据当前帧图像确定候选车道碎片在摄像头坐标系的坐标值的流程图。如图6所示,上述子步骤S221可包括子步骤S221A、S221B、S221C。在子步骤S221A中,利用预设的神经网络模型获取当前帧图像中车道的mask二值图。该神经网络模型可以是已经训练好的神经网络模型,从而可以直接用于获取mask的二值图。
在子步骤S221B中,利用canny算法提取mask二值图的边缘图像。随后,在子步骤S221C中,通过hough直线检测算法处理所提取的边缘图像,从而可以得到候选车道碎片在摄像头坐标系的坐标值。
图7示出了根据本申请一个实施方式根据候选车道碎片在摄像头 坐标系的坐标值确定候选车道碎片的端点在机器人坐标系的坐标值的流程图。如图7所示,上述子步骤S222可包括子步骤S222A和S222B。在子步骤S222A中,根据候选车道碎片在摄像头坐标系的坐标值,获取候选车道碎片的端点在摄像头坐标系的坐标值。随后,在子步骤S222B中,基于摄像头坐标系和机器人坐标系之间的变换关系,确定候选车道碎片的端点在机器人坐标系的坐标值laneSeg3dList。根据本申请,由于摄像头是事先安装在自主移动机器人的车身上的,并且其参数已知,因此摄像头坐标系和机器人坐标系之间的变换关系可以是预设的或者已知的。因此,可以将子步骤S222A中得到的车道碎片端点在摄像头坐标系的坐标值变换为机器人坐标系的坐标值。
图8示出了根据本申请另一实施方式根据当前帧图像确定候选车道的端点在机器人坐标系的坐标值的流程图。如图8所示,除了子步骤S221、S222、S223之外,上述步骤S220还可包括子步骤S224和/或S225。
在子步骤S224中,在子步骤S223进行合并前,去除在机器人坐标系中距离原点超过预设的第一阈值的候选车道碎片。根据该子步骤,如果检测到的某个候选车道碎片距离过远(即,超出了预设的第一阈值),则为了避免远处点测距误差大的问题,可将该候选车道碎片删除,以实现降噪。也就是说,去除了laneSeg3dList中距离过远的点,得到laneSeg3dListFiltered。
在子步骤S225中,在子步骤S223进行合并后,去除长度小于预设的第二阈值的候选车道。根据该子步骤,如果合并后,某个候选车道的长度过短(即,不及预设的第二阈值),则也可将其认为是噪声,从而删除掉。
通过上述子步骤S224和S225,可去除距离过远和长度过短的噪声,从而提高检测准确度和精度。
图9示出了根据本申请一个实施方式基于odom坐标系的坐标值将 当前帧图像中的候选车道与历史帧图像中的候选车道进行匹配的流程图。如图9所示,上述步骤S240可包括子步骤S241、S242、S243、S244。在子步骤S241中,计算当前帧图像中的每条候选车道与历史帧图像中的多条候选车道的距离。
在计算当前帧的候选车道与历史帧的候选车道的距离时,可以计算当前帧图像中的每条候选车道与历史帧图像中的每条候选车道的斜率差值作为二者的距离,还可以计算当前帧图像中的每条候选车道与历史帧图像中的每条候选车道的端点的距离作为二者的距离。
在子步骤S242中,对于当前帧图像中的每条候选车道,确定其与历史帧图像中所有候选车道距离中的最小值matchDistmin。也就是说,遍历历史帧图像中的候选车道,确定出距离最近的那一条。并且,事先预设第三阈值,如果该最小值matchDistmin小于或等于第三阈值,则匹配成功(子步骤S243);如果该最小值matchDistmin大于第三阈值,则匹配失败(子步骤S244)。以当前帧lane3dListInOdom中的第j个候选车道和历史帧histList中的第i个候选车道为例,匹配histList[i]和lane3dListInOdom[j]的距离,如果小于或等于预设的第三阈值,则认为匹配,记为histList[i].matchDist。如果多个j都与i相匹配,则可取matchDist最小的lane3dListInOdom[j],拼接histList[i]得到更新的histList[i]。
根据一个实施例,对于当前帧图像中的候选车道,如果其与历史帧图像中所有候选车道均匹配失败,则说明该候选车道是在当前帧图像中新增的候选车道,那么可将其加入到历史帧图像的候选车道集合中(由于在处理下一帧时,目前的当前帧将成为历史帧)。例如,可遍历histList,从而得到更新的histList。由此,在自主移动机器人的行进过程中,摄像头拍摄到的新出现的车道,将能够添加至历史帧图像的候选车道集合中,以用于后续帧的计算处理。
根据另一实施例,对于历史帧图像中的候选车道,如果其与当前帧图像中所有候选车道均匹配失败,则说明该候选车道已在当前帧中消失(例如,该车道在上一帧已走到尽头),那么可从历史帧图像的候选车 道集合中将其删除。例如,对于历史帧中的某个候选车道,遍历lane3dListInOdom,都无法找到与其匹配的候选车道,则可认为该车道已到尽头,可从histList中删掉。由此,在自主移动机器人的行进过程中,当某条车道从摄像头的视野中消失时,可将该车道从历史帧的候选车道集合中删除,从而避免影响后续的计算处理。
图10示出了根据本申请另一实施方式利用自主移动机器人进行车道线检测的方法的流程图。如图10所示,除了步骤S210至S250,该方法200还可包括步骤S260。在步骤S260中,将合并后的端点加入历史帧图像的候选车道集合中。由此,当自主移动机器人继续行进并由摄像头继续拍摄图像时,在处理下一帧图像的过程中,当前帧即已成为历史帧,所以在处理当前帧图像是合并的端点可加入历史帧图像的候选车道集合中,以用于后续帧的计算处理。
图11示出了根据本申请又一实施方式利用自主移动机器人进行车道线检测的方法的流程图。如图11所示,除了步骤S210至S250,该方法200还可包括步骤S270。在步骤S270中,将当前车道线段从odom坐标系变换至机器人坐标系。由此,可便于后续的定位程序对自主移动机器人进行横向定位。
如上所述,对于histList[i],可利用ransac算法过滤掉候选车道的端点中的异常点,再拟合得到histList[i].fitline,然后再经由步骤S270,把odom坐标系下的histList[i].fitline变换为机器人坐标系下的histList[i].fitlineInBase。最终取所有候选车道的fitlineInBase,以用于后续的定位程序对自主移动机器人进行横向定位。
根据本申请,基于上述方法200的车道线检测结果,可对机器人在车道线内进行横向定位。对机器人在车道线内进行横向定位的方法可包括步骤S310,在步骤S310中,利用方法200中得到的当前车道线段, 可确定机器人在车道线内的横向位置。例如,可在机器人系统中预设一些与车道线呈不同角度和/或不同距离拍摄的车道线图片。随后,将方法200中得到的当前车道线段与预设的车道线图片进行匹配,并根据匹配结果获得机器人的横向坐标和方向。
如果考虑车道线的种类不同,还可根据不同种类的车道线预设不同的图片。例如,车道线可能包括单实线、单虚线、双实线等不同种类。那么,可根据这些不同种类的车道线预设不同的图片,以便于匹配。
图12示出了根据本申请一个实施方式基于当前车道线段确定机器人在车道线内的横向位置的流程图。如图12所示,上述步骤S310可包括子步骤S311和S312。在子步骤S311中,根据机器人的位置信息,对当前车道线段和预设的地图中对应位置的车道线进行匹配。具体的匹配方式可参照上述子步骤S241至S244的方式,在此不再赘述。
随后在子步骤S312中,根据匹配结果确定机器人在地图中的车道线内的横向位置。
由此,可利用上述方法200的车道线检测结果,对机器人在车道线内进行横向定位。
图13示出了根据本申请一个实施方式根据匹配结果确定机器人在地图中的车道线内的横向位置的流程图。如图13所示,上述子步骤S312可包括子步骤S312A和S312B。在子步骤S312A中,基于地图中匹配的多条车道线,确定多个候选位置。随后,在子步骤S312B中,根据多个候选位置,确定机器人的横向位置。如上所述,在自主移动机器人的车身上可设置有两个或者更多个摄像头,每个摄像头均可独立地拍摄输入图像,并可利用其拍摄的输入图像进行车道线的检测。那么也就是说,借助两个或更多个摄像头,可以检测出多条当前车道线(例如,一左一右两个摄像头,可分别检测出车身左侧和右侧的车道线),并可利用这些车道线分别对机器人进行横向定位。因此,将有可能出现多个候选位置的情况,也就是说,利用各摄像头拍摄的输入图像所检测出的不 同车道线,所得到的机器人的横向位置可能有所偏差。可利用任何已知的技术,根据多个候选位置来确定机器人的横向位置,例如,可对多个候选位置求平均。
根据一个实施例,上述横向位置可包括机器人的坐标位置(x,y)和朝向θ。
根据本申请的另一方面,提供了一种电子设备,其包括处理器和存储器,该存储器存储有计算机程序,当该计算机程序被该处理器执行时,使得该处理器执行如以上任一个实施方式所述的方法。
根据本申请的另一方面,提供了一种非瞬时性计算机可读存储介质,其上存储有计算机可读指令,当该指令被处理器执行时,能够使得该处理器执行如以上任一个实施方式所述的方法。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过其通信部件从网络上被下载和安装,和/或从可拆卸介质被安装。在该计算机程序被中央处理单元(CPU)执行时,执行本申请的方法中限定的上述功能。需要说明的是,本申请的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本申请中,计算机可读存 储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本申请中,计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:无线、电线、光缆、RF等等,或者上述的任意合适的组合。
可以以一种或多种程序设计语言或其组合来编写用于执行本申请的操作的计算机程序代码,程序设计语言包括面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如”C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本申请各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬 件与计算机指令的组合来实现。
描述于本申请实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。所描述的单元也可以设置在处理器中。
作为另一方面,本申请还提供了一种计算机可读介质,该计算机可读介质可以是上述实施例中描述的装置中所包含的;也可以是单独存在,而未装配入该装置中。上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该装置执行时,使得该装置执行如上所述的方法。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。上述实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上对本申请实施例进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明仅用于帮助理解本申请的方法及其核心思想。同时,本领域技术人员依据本申请的思想,基于本申请的具体实施方式及应用范围上做出的改变或变形之处,都属于本申请保护的范围。综上所述,本说明书内容不应理解为对本申请的限制。

Claims (22)

  1. 一种利用自主移动机器人进行车道线检测的方法,其中所述机器人搭载有摄像头,所述方法包括:
    连续获取所述摄像头拍摄的输入图像,所述输入图像包括当前帧图像和历史帧图像;
    根据连续获取的输入图像中的当前帧图像,确定候选车道的端点在机器人坐标系的坐标值;
    将所述候选车道的端点在所述机器人坐标系的坐标值转换为odom坐标系的坐标值;
    基于odom坐标系的坐标值,将所述当前帧图像中的候选车道与历史帧图像中的候选车道进行匹配;以及
    将匹配成功的候选车道的端点进行合并,并将合并后的端点拟合为当前车道线段。
  2. 如权利要求1所述的方法,其中,根据连续获取的输入图像中的当前帧图像,确定候选车道的端点在机器人坐标系的坐标值包括:
    根据连续获取的输入图像中的当前帧图像,确定候选车道碎片在摄像头坐标系的坐标值;
    根据所述候选车道碎片在所述摄像头坐标系的坐标值,确定所述候选车道碎片的端点在所述机器人坐标系的坐标值;以及
    将同一车道的候选车道碎片合并为候选车道,并确定所述候选车道的端点在所述机器人坐标系的坐标值。
  3. 如权利要求2所述的方法,其中根据连续获取的输入图像中的当前帧图像,确定候选车道碎片在摄像头坐标系的坐标值包括:
    利用预设的神经网络模型获取所述当前帧图像中车道的mask二值 图;
    利用canny算法提取所述mask二值图的边缘图像;以及
    通过hough直线检测算法处理所提取的边缘图像,以得到候选车道碎片的坐标值。
  4. 如权利要求2所述的方法,其中根据所述候选车道碎片在所述摄像头坐标系的坐标值,确定所述候选车道碎片的端点在所述机器人坐标系的坐标值包括:
    根据所述候选车道碎片在所述摄像头坐标系的坐标值,获取所述候选车道碎片的端点在所述摄像头坐标系的坐标值;以及
    基于所述摄像头坐标系和所述机器人坐标系之间的变换关系,确定所述候选车道碎片的端点在所述机器人坐标系的坐标值。
  5. 如权利要求2所述的方法,其中,根据连续获取的输入图像中的当前帧图像,确定候选车道的端点在机器人坐标系的坐标值还包括:
    在合并前,去除在所述机器人坐标系中距离原点超过预设的第一阈值的候选车道碎片;和/或
    在合并后,去除长度小于预设的第二阈值的候选车道。
  6. 如权利要求2所述的方法,其中,将同一车道的候选车道碎片合并为候选车道包括:
    利用每个候选车道碎片的方向和/或每两个候选车道碎片之间的距离进行拟合。
  7. 如权利要求1所述的方法,其中,基于odom坐标系的坐标值, 将所述当前帧图像中的候选车道与历史帧图像中的候选车道进行匹配包括:
    计算所述当前帧图像中的每条候选车道与所述历史帧图像中的多条候选车道的距离;
    对于所述当前帧图像中的每条候选车道,确定所述距离中的最小值;
    若所述最小值小于或等于预定的第三阈值,则匹配成功;以及
    若所述最小值大于所述第三阈值,则匹配失败。
  8. 如权利要求7所述的方法,其中,计算所述当前帧图像中的每条候选车道与所述历史帧图像中的每条候选车道的距离包括:
    计算所述当前帧图像中的每条候选车道与所述历史帧图像中的每条候选车道的斜率差值;和/或
    计算所述当前帧图像中的每条候选车道与所述历史帧图像中的每条候选车道的端点的距离。
  9. 如权利要求7所述的方法,基于odom坐标系的坐标值,将所述当前帧图像中的候选车道与历史帧图像中的候选车道进行匹配还包括:
    对于所述当前帧图像中的候选车道,若与所述历史帧图像中所有候选车道均匹配失败,则作为新增候选车道,并加入所述历史帧图像的候选车道集合中。
  10. 如权利要求7所述的方法,基于odom坐标系的坐标值,将所述当前帧图像中的候选车道与历史帧图像中的候选车道进行匹配还包括:
    对于所述历史帧图像中的候选车道,若与所述当前帧图像中所有候选车道均匹配失败,则作为已到尽头的候选车道,并从所述历史帧图像的候选车道集合中删除。
  11. 如权利要求1所述的方法,还包括:
    将合并后的端点加入所述历史帧图像的候选车道集合中。
  12. 如权利要求1所述的方法,其中,将匹配成功的候选车道的端点进行合并,并将合并后的端点拟合为当前车道线段包括:
    利用ransac算法过滤掉所述候选车道的端点中的异常点。
  13. 如权利要求1所述的方法,还包括:
    将所述当前车道线段从所述odom坐标系变换至所述机器人坐标系。
  14. 一种基于如权利要求1-13中任一项所述的车道线检测的方法对机器人在车道线内进行横向定位的方法,包括:
    基于所述当前车道线段,确定机器人在车道线内的横向位置。
  15. 如权利要求14所述的横向定位的方法,其中基于所述当前车道线段,确定机器人在车道线内的横向位置包括:
    根据所述机器人的位置信息,对所述当前车道线段和预设的地图中对应位置的车道线进行匹配;以及
    根据匹配结果确定所述机器人在所述地图中的车道线内的横向位置。
  16. 如权利要求15所述的横向定位的方法,其中,根据匹配结果确定所述机器人在所述地图中的车道线内的横向位置包括:
    基于所述地图中匹配的多条车道线,确定多个候选位置;以及
    根据所述多个候选位置,确定所述机器人的横向位置。
  17. 如权利要求15或16所述的横向定位的方法,其中,所述横向位置包括所述机器人的坐标位置和朝向。
  18. 一种自主移动机器人,包括:
    车身;
    处理器;
    驱动装置,驱动所述车身进行移动;
    摄像头,安装在所述车身上,并具有相对于所述车身的侧向视野;以及
    存储器,存储有计算机程序,当所述计算机程序被所述处理器执行时,使得所述处理器执行如权利要求1-17中任一项所述的方法。
  19. 如权利要求18所述的自主移动机器人,其中所述摄像头包括第一摄像头,安装在所述车身的车头位置,并与所述车身的车头方向呈第一夹角。
  20. 如权利要求19所述的自主移动机器人,其中所述摄像头还包括第二摄像头,安装在所述车身的车头位置,并与所述车身的车头方向呈第二夹角,所述第一摄像头与所述第二摄像头间隔预定距离,并且所 述第一夹角与所述第二夹角相对于所述车头方向对称。
  21. 一种电子设备,包括:
    处理器;以及
    存储器,存储有计算机程序,当所述计算机程序被所述处理器执行时,使得所述处理器执行如权利要求1-17中任一项所述的方法。
  22. 一种非瞬时性计算机可读存储介质,其上存储有计算机可读指令,当所述指令被处理器执行时,使得所述处理器执行如权利要求1-17中任一项所述的方法。
PCT/CN2022/143259 2022-01-28 2022-12-29 自主移动机器人及利用其进行车道线检测的方法 WO2023142865A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210105343.9 2022-01-28
CN202210105343.9A CN116563814A (zh) 2022-01-28 2022-01-28 自主移动机器人及利用其进行车道线检测的方法

Publications (1)

Publication Number Publication Date
WO2023142865A1 true WO2023142865A1 (zh) 2023-08-03

Family

ID=87470416

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/143259 WO2023142865A1 (zh) 2022-01-28 2022-12-29 自主移动机器人及利用其进行车道线检测的方法

Country Status (2)

Country Link
CN (1) CN116563814A (zh)
WO (1) WO2023142865A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090201370A1 (en) * 2008-02-05 2009-08-13 Hitachi, Ltd. Traveling Lane Detector
CN109583280A (zh) * 2017-09-29 2019-04-05 比亚迪股份有限公司 车道线识别方法、装置、设备及存储介质
WO2020107326A1 (zh) * 2018-11-29 2020-06-04 深圳市大疆创新科技有限公司 车道线检测方法、设备、计算机可读存储介质
CN111291603A (zh) * 2018-12-07 2020-06-16 长沙智能驾驶研究院有限公司 车道线检测方法、装置、系统及存储介质
CN112528859A (zh) * 2020-12-11 2021-03-19 中国第一汽车股份有限公司 车道线检测方法、装置、设备及存储介质
CN113486796A (zh) * 2018-09-07 2021-10-08 百度在线网络技术(北京)有限公司 无人车位置检测方法、装置、设备、存储介质及车辆
CN113657282A (zh) * 2021-08-18 2021-11-16 苏州挚途科技有限公司 自车道的提取方法、装置和电子设备

Also Published As

Publication number Publication date
CN116563814A (zh) 2023-08-08

Similar Documents

Publication Publication Date Title
Zhang et al. Road-segmentation-based curb detection method for self-driving via a 3D-LiDAR sensor
US10133278B2 (en) Apparatus of controlling movement of mobile robot mounted with wide angle camera and method thereof
US20200208970A1 (en) Method and device for movable object distance detection, and aerial vehicle
US10399228B2 (en) Apparatus for recognizing position of mobile robot using edge based refinement and method thereof
US20200353914A1 (en) In-vehicle processing device and movement support system
KR101784183B1 (ko) ADoG 기반 특징점을 이용한 이동 로봇의 위치를 인식하기 위한 장치 및 그 방법
Kim et al. Lane-level localization using an AVM camera for an automated driving vehicle in urban environments
WO2021056841A1 (zh) 定位方法、路径确定方法、装置、机器人及存储介质
CN110587597B (zh) 一种基于激光雷达的slam闭环检测方法及检测系统
WO2020253010A1 (zh) 一种泊车定位中的停车场入口定位方法、装置及车载终端
EP3939863A1 (en) Overhead-view image generation device, overhead-view image generation system, and automatic parking device
JP2014034251A (ja) 車両走行制御装置及びその方法
JP2020083140A (ja) 駐車支援装置
US11482007B2 (en) Event-based vehicle pose estimation using monochromatic imaging
Cudrano et al. Advances in centerline estimation for autonomous lateral control
JP2018048949A (ja) 物体識別装置
CN112686951A (zh) 用于确定机器人位置的方法、装置、终端及存储介质
US20200193184A1 (en) Image processing device and image processing method
CN112665506B (zh) 定位装置的安装偏差检测方法、装置、设备及存储介质
WO2023142865A1 (zh) 自主移动机器人及利用其进行车道线检测的方法
Park et al. Vehicle localization using an AVM camera for an automated urban driving
CN113469045B (zh) 无人集卡的视觉定位方法、系统、电子设备和存储介质
CN114037977B (zh) 道路灭点的检测方法、装置、设备及存储介质
CN116189150A (zh) 基于融合输出的单目3d目标检测方法、装置、设备和介质
KR20200102108A (ko) 차량의 객체 검출 장치 및 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22923650

Country of ref document: EP

Kind code of ref document: A1