WO2021056841A1 - Positioning method, path determination method, device, robot and storage medium - Google Patents
Positioning method, path determination method, device, robot and storage medium
- Publication number
- WO2021056841A1 (PCT/CN2019/124412)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- robot
- information
- positioning
- road
- route
- Prior art date
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0248—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means in combination with a laser
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0268—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
- G05D1/0274—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/254—Fusion techniques of classification results, e.g. of results related to same input data
- G06F18/256—Fusion techniques of classification results, e.g. of results related to same input data of results relating to different input data, e.g. multimodal recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/97—Determining parameters from multiple pictures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/809—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
- G06V10/811—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data the classifiers operating on different input data, e.g. multi-modal recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30256—Lane; Road marking
Definitions
- This application relates to the field of robotics, in particular to positioning methods, path determination methods, devices, robots, and storage media.
- With the development of unmanned control robots such as unmanned vehicles, in order for an unmanned robot to move on a road, it first needs to be able to accurately locate its own position, so that the next path of action can be determined based on that positioning.
- At present, the commonly used positioning methods are positioning through positioning components such as single-line lidar and the Global Positioning System (GPS).
- the embodiments of the present application provide a positioning method, a path determination method, a device, a robot, and a storage medium.
- the first aspect provides a positioning method, including:
- the positioning of the robot is obtained by fusing the position determined by the positioning component with the position determined from the image collected by the camera. Combining the positioning of the positioning component with the positioning of the perception result allows the positioning-component result to be corrected, which can improve positioning accuracy.
- the positioning component includes a lidar
- the determining the first position information of the robot by the positioning component includes:
- the influence of illumination changes, occlusion, etc. on positioning can be reduced, thereby improving positioning accuracy.
- the determining the second position information of the robot according to the image includes:
- the second position information of the robot is determined according to the landmark object and the relative position.
- the influence of inaccurate positioning through the map can be reduced, and the positioning accuracy can be improved.
- the determining the relative position between the robot and the landmark object in the image includes:
- the relative position between the robot and the landmark object is determined.
- the determining the second position information of the robot according to the landmark object and the relative position includes:
- the second position information of the robot is determined according to the first position information, the map, the landmark object, and the relative position.
- the second position information of the robot can be determined by the relative position of the landmark object to the robot and the first position information obtained by the positioning component, which can improve the positioning accuracy of the robot.
- the determining the second position information of the robot according to the first position information, the map, the landmark object, and the relative position includes:
- wherein the longitudinal information is the position information of the initial position information in the direction of the road edge, and the lateral information is the position information of the initial position information in the direction perpendicular to the road edge.
- Correcting the position of the robot through the relative position of the landmark object and the robot and the road information recognized by the image can improve the positioning accuracy of the robot.
- the fusing the first position information and the second position information to obtain the positioning information of the robot includes:
- the fusion positioning information is the positioning information of the robot.
- the confidence level indicates the credibility of the fused positioning information obtained through fusion.
- When the confidence level is greater than the threshold, the credibility of the fused positioning information is high, and the fused positioning information can be determined as the positioning information of the robot, which can improve positioning accuracy.
- the method further includes:
- the travel path of the robot is determined according to the first route and the second route.
- the route determined from the map and the route determined from the image collected by the camera are combined to obtain the travel path of the robot. Combining the map-determined route with the route determined by the perception result can improve the accuracy of the determined travel path.
- the determining the first route of the robot according to the image includes:
- the curve smoothing process is performed on the center line to obtain the first route of the robot.
- the first route of the robot is determined by recognizing the edge of the road in the image, which can reduce the influence of inaccurate route determination through the map, thereby improving the accuracy of determining the driving route.
- the determining the second route of the robot according to the map and the positioning information of the robot includes:
- Determining the second route of the robot through the map can reduce the influence of changes in illumination, occlusion, etc. on the determination of the route, thereby improving the accuracy of determining the driving route.
- the determining the first route of the robot according to the image includes:
- a turning curve is calculated to obtain the first route of the robot.
- the route of the robot in the turning direction can be determined, which can reduce the influence of occlusion and other factors on the determination of the route, thereby improving the accuracy of determining the driving route.
- the determining the second route of the robot according to the map and the positioning information of the robot includes:
- the positioning information of the robot corresponding to the center line of the turning road is queried from the map, and the second route of the robot is obtained.
- the time to determine the route can be reduced, so that the speed of determining the driving route can be improved.
- the determining the travel path of the robot according to the first route and the second route includes:
- the first route and the second route are aligned to obtain the travel path of the robot.
- the route of the robot can be optimized, so that the accuracy of determining the driving path can be improved.
- the method further includes:
- the second aspect provides a path determination method, including:
- the travel path of the robot is determined according to the first route and the second route.
- the route determined from the map and the route determined from the image collected by the camera are combined to obtain the travel path of the robot. Combining the map-determined route with the route determined by the perception result can improve the accuracy of the determined travel path.
- the determining the first route of the robot according to the image includes:
- the curve smoothing process is performed on the center line to obtain the first route of the robot.
- the first route of the robot is determined by recognizing the edge of the road in the image, which can reduce the influence of inaccurate route determination through the map, thereby improving the accuracy of determining the driving route.
- the determining the second route of the robot according to the map and the positioning information of the robot includes:
- Determining the second route of the robot through the map can reduce the influence of changes in illumination, occlusion, etc. on the determination of the route, thereby improving the accuracy of determining the driving route.
- the determining the first route of the robot according to the image includes:
- a turning curve is calculated to obtain the first route of the robot.
- the route of the robot in the turning direction can be determined, which can reduce the influence of occlusion and other factors on the determination of the route, thereby improving the accuracy of determining the driving route.
- the determining the second route of the robot according to the map and the positioning information of the robot includes:
- the positioning information of the robot corresponding to the center line of the turning road is queried from the map to obtain the second route of the robot.
- the time to determine the route can be reduced, so that the speed of determining the driving route can be improved.
- the determining the travel path of the robot according to the first route and the second route includes:
- the first route and the second route are aligned to obtain the travel path of the robot.
- the route of the robot can be optimized, so that the accuracy of determining the driving path can be improved.
- the method further includes:
- a third aspect provides a positioning device, including:
- the first determining unit is configured to determine the first position information of the robot through the positioning component
- the collection unit is used to collect images through the camera
- a second determining unit configured to determine second position information of the robot according to the image
- the fusion unit is used to fuse the first position information and the second position information to obtain the positioning information of the robot.
- the positioning component includes a lidar
- the first determining unit is specifically configured to:
- the second determining unit is specifically configured to:
- the second position information of the robot is determined according to the landmark object and the relative position.
- the second determining unit determining the relative position between the robot and the landmark object in the image includes:
- the relative position between the robot and the landmark object is determined.
- the second determining unit determining the second position information of the robot according to the landmark object and the relative position includes:
- the second position information of the robot is determined according to the first position information, the map, the landmark object, and the relative position.
- the second determining unit determining the second position information of the robot according to the first position information, the map, the landmark object, and the relative position includes:
- the longitudinal information is the position information of the initial position information in the direction of the road edge
- the lateral information is the position information of the initial position information in the direction perpendicular to the road edge.
- the fusion unit is specifically used for:
- the fusion positioning information is the positioning information of the robot.
- the device further includes:
- a third determining unit configured to determine the first route of the robot according to the image
- a fourth determining unit configured to determine the second route of the robot according to the map and the positioning information of the robot
- the fifth determining unit is configured to determine the travel path of the robot according to the first route and the second route.
- the third determining unit is specifically configured to:
- the fourth determining unit is specifically configured to query the center line of the road corresponding to the positioning information of the robot from the map to obtain the second route of the robot.
- the third determining unit is specifically configured to:
- a turning curve is calculated to obtain the first route of the robot.
- the fourth determining unit is specifically configured to query the center line of the turning road corresponding to the positioning information of the robot from a map to obtain the second route of the robot.
- the fifth determining unit is specifically configured to align the first route and the second route to obtain the travel path of the robot.
- the device further includes:
- a generating unit configured to generate a driving instruction for driving according to the driving path
- the execution unit is used to execute the driving instruction.
- a fourth aspect provides a path determination device, including:
- the collection unit is used to collect images through the camera
- a first determining unit configured to determine a first route of the robot according to the image
- a second determining unit configured to determine a second route of the robot according to the map and the positioning information of the robot
- the third determining unit is configured to determine the travel path of the robot according to the first route and the second route.
- the first determining unit is specifically configured to:
- the curve smoothing process is performed on the center line to obtain the first route of the robot.
- the second determining unit is specifically configured to query the center line of the road corresponding to the positioning information of the robot from a map to obtain the second route of the robot.
- the first determining unit is specifically configured to:
- a turning curve is calculated to obtain the first route of the robot.
- the second determining unit is specifically configured to query the center line of the turning road corresponding to the positioning information of the robot from a map to obtain the second route of the robot.
- the third determining unit is specifically configured to align the first route and the second route to obtain the travel path of the robot.
- the device further includes:
- a generating unit configured to generate a driving instruction for driving according to the driving path
- the execution unit is used to execute the driving instruction.
- a fifth aspect provides a robot, which includes a processor, a memory, a positioning component, and a camera.
- the memory is used to store computer program codes
- the positioning component is used for positioning
- the camera is used to collect images
- the processor is used to perform the method provided in the first aspect or any possible implementation manner of the first aspect.
- a sixth aspect provides a robot, including a processor, a memory, and a camera.
- the memory is used to store computer program codes.
- the camera is used to collect images, and the processor is used to execute the method provided in the second aspect or any possible implementation manner of the second aspect.
- a seventh aspect provides a readable storage medium storing a computer program, the computer program including program code that, when executed by a processor, causes the processor to execute the method provided in the first aspect or any possible implementation manner of the first aspect, or the method provided in the second aspect or any possible implementation manner of the second aspect.
- An eighth aspect provides a computer program, including computer-readable code. When the computer-readable code runs in an electronic device, a processor in the electronic device executes the method provided in the first aspect or any possible implementation manner of the first aspect.
- FIG. 1 is a schematic flowchart of a positioning method provided by an embodiment of the present application
- FIG. 2 is a schematic flowchart of another positioning method provided by an embodiment of the present application.
- FIG. 3 is a schematic flowchart of a path determination method provided by an embodiment of the present application.
- FIG. 4 is a schematic structural diagram of a positioning device provided by an embodiment of the present application.
- FIG. 5 is a schematic structural diagram of a path determination device provided by an embodiment of the present application.
- Fig. 6 is a schematic structural diagram of a robot provided by an embodiment of the present application.
- the embodiments of the present application provide a positioning method, a path determination method, a robot, and a storage medium, which are used to improve the accuracy of positioning. Detailed descriptions are given below.
- FIG. 1 is a schematic flowchart of a positioning method provided by an embodiment of the present application.
- the positioning method is applied to robots.
- the robot can be a small car used for teaching, playing, etc., a large car used for carrying passengers, carrying objects, etc., or a robot used for teaching, playing, etc., which is not limited here.
- the system used by the robot may be an embedded system or other systems, which is not limited here.
- the steps of the positioning method may be executed by hardware such as a robot, or executed by a processor running computer executable code. As shown in Figure 1, the positioning method may include the following steps.
- the first position information is the position information of the robot itself determined by the positioning component. After the robot is powered on or started, the first position information of the robot can be determined by the positioning component in real time or periodically.
- the positioning component may be a lidar, the Global Positioning System (GPS), the Assisted Global Positioning System (AGPS), BeiDou positioning, etc.
- the lidar can be a single-line lidar or a multi-line lidar. Among them, the period can be 1s, 2s, 5s, and so on.
- the positioning data can be collected through the lidar first, and the first position information of the robot is then determined according to the point cloud positioning map and the positioning data; that is, by matching the points in the positioning data with the points in the point cloud positioning map, the position of the collected positioning data within the point cloud map can be determined, and hence the first position information of the robot.
- the point cloud positioning map is a map stitched together from point clouds for positioning.
- the point cloud positioning map can be stored in the robot in advance; when it is used, the stored point cloud positioning map is obtained locally.
- the point cloud positioning map can also be stored in the cloud or other devices, and the robot can be obtained from the cloud or other devices when it needs to be used.
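- As an illustration of this matching step, the following is a minimal sketch of a 2D point-to-point ICP-style alignment between a lidar scan and the point cloud positioning map. The patent does not prescribe a specific matching algorithm; the function name, the NumPy/SciPy usage, and the pose parameterization are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_scan_to_map(scan, map_points, pose=np.zeros(3), iterations=20):
    """Estimate the robot pose (x, y, heading) that aligns a 2D lidar scan
    with a point cloud positioning map (point-to-point ICP sketch)."""
    scan = np.asarray(scan, float)            # (N, 2) scan points, robot frame
    map_points = np.asarray(map_points, float)  # (M, 2) map points, world frame
    x, y, theta = pose
    tree = cKDTree(map_points)                # nearest-neighbour index over the map
    for _ in range(iterations):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        moved = scan @ R.T + np.array([x, y])  # scan expressed in the map frame
        _, idx = tree.query(moved)             # closest map point per scan point
        matched = map_points[idx]
        # closed-form rigid alignment of the matched pairs (SVD)
        mu_s, mu_m = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:              # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_m - dR @ mu_s
        theta += np.arctan2(dR[1, 0], dR[0, 0])  # compose the pose increment
        x, y = dR @ np.array([x, y]) + dt
    return np.array([x, y, theta])             # first position information
```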
- After the robot is powered on or started, it can collect images through the camera in real time or periodically.
- the period here and the period in step 101 may be the same or different.
- the number of cameras can be one, or two or more.
- the second position information of the robot can be determined according to the collected image.
- the relative position between the robot and the landmark object in the image may be determined first, and then the second position information of the robot may be determined according to the landmark object and the relative position. It is also possible to first determine the coordinates of the landmark object in the image, then determine the relative position between the robot and the landmark object according to the camera's shooting angle relative to the landmark object and the scale of the image, and then determine the second position information of the robot from that relative position. After the landmark object is recognized using target recognition technology, the position in the camera coordinate system can be converted to the world coordinate system according to the preset coordinate conversion matrix, so as to obtain the second position information of the robot.
- the landmark objects may be landmark objects such as traffic lights, road signs and signs.
- a distance sensor can also be used to measure the relative distance between the robot and the landmark object.
- For example, if the coordinates of the robot in the coordinate system with the camera as the origin are (0, 0, 0), and the coordinates of the landmark object in that coordinate system are (x1, y1, z1), then the relative position between the robot and the landmark object is (x1, y1, z1).
- If the camera is not at the center of the robot, the coordinates of the robot in the coordinate system with the camera as the origin can be obtained by further combining the above extrinsic parameters, and the above process then gives the relative position of the landmark object relative to the center of the robot (that is, the robot).
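- A minimal sketch of the coordinate conversions described above, using a 4x4 homogeneous transformation matrix; the matrix names and numeric values below are hypothetical placeholders, not values from the patent.

```python
import numpy as np

def to_frame(point, T):
    """Apply a 4x4 homogeneous transform T to a 3D point."""
    return (T @ np.append(point, 1.0))[:3]

# hypothetical extrinsics: camera mounted 0.5 m ahead of the robot center
T_robot_cam = np.eye(4)
T_robot_cam[0, 3] = 0.5

landmark_cam = np.array([2.0, 1.0, 0.0])            # (x1, y1, z1) in the camera frame
relative_pos = to_frame(landmark_cam, T_robot_cam)  # landmark relative to robot center

# with a preset robot-to-world transform, the same operation yields the
# landmark position in the world coordinate system
T_world_robot = np.eye(4)                            # placeholder value
landmark_world = to_frame(relative_pos, T_world_robot)
```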
- the second position information of the robot can be determined according to the first position information, the map, the landmark object, and the relative position.
- the first position information can be converted to the position on the map to obtain the initial position information of the robot.
- the road edge of the road where the robot is located can be identified from the image, for example, the lane where the robot car is located.
- the lateral information in the initial position information can be corrected according to the identified road edge, and the longitudinal information in the initial position information can be corrected according to the relative position between the robot and the landmark object in the image, to obtain the second position information of the robot.
- Here, the direction along the road edge is the longitudinal direction
- the direction perpendicular to the road edge is the lateral direction
- the longitudinal information is the position information of the initial position information in the direction of the road edge
- the lateral information is the position information of the initial position information in the direction perpendicular to the road edge.
- For example, if the initial position information is expressed as the horizontal and vertical coordinates of the robot, the lateral information is the horizontal coordinate and the longitudinal information is the vertical coordinate.
- Correcting the longitudinal information in the initial position information according to the relative position between the robot and the landmark object in the image can be done as follows: first map the coordinates (x1, y1, z1) of the landmark object in the camera coordinate system onto the map to obtain a mapped lateral position and a mapped longitudinal position, and directly query the position of the landmark object from the map to obtain a queried lateral position and a queried longitudinal position. Then the longitudinal position of the landmark object can be obtained from the mapped longitudinal position and the queried longitudinal position, for example by taking their average or weighted average.
- The longitudinal information in the initial position information is then corrected according to the relative position between the robot and the landmark object in the image and the longitudinal position of the landmark object. For example, suppose the coordinates of the initial position information are (x2, y2), the determined longitudinal position of the landmark object is y3, and the relative position between the robot and the landmark object is (x1, y1, z1), so that the longitudinal coordinate difference between the landmark object and the robot corresponding to the relative position is y1; the corrected longitudinal coordinate of the robot can then be obtained from y3 and y1, for example as y3 - y1.
- Correcting the lateral information in the initial position information according to the recognized road edges can be done by first determining the center line of the road where the robot is located according to the recognized road edges, then determining the point in the center line corresponding to the initial position information, and correcting the lateral information in the initial position information according to the lateral information of that point.
- the corrected lateral information may be the average or weighted average of the lateral information of the point and the lateral information in the initial position information.
- the point in the center line corresponding to the initial position information may be the point whose longitudinal coordinate is the same as that of the initial position.
- the point corresponding to the initial position information in the center line may be the point closest to the initial position.
- the center line of the road can be determined according to the sideline of the road where the robot is located.
- For example, if the abscissa of the corresponding center line point is x3, the average or weighted average of x2 and x3 can be used as the lateral coordinate of the robot.
- x3 may be the abscissa of the point closest to (x2, y2) in the center line.
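- Putting the two corrections together, the following is a minimal sketch using the symbols from the text (x2, y2, y3, y1, x3). The weighting scheme is an assumption, since the text allows either a plain or a weighted average.

```python
def correct_initial_position(x2, y2, y3, y1, x3, w_lon=0.5, w_lat=0.5):
    """Correct the initial map position (x2, y2) of the robot.

    y3: longitudinal position of the landmark object (from mapping + map query)
    y1: longitudinal offset of the landmark object relative to the robot
    x3: lateral coordinate of the corresponding center line point
    """
    y_from_landmark = y3 - y1                           # robot longitude implied by the landmark
    y = w_lon * y2 + (1.0 - w_lon) * y_from_landmark    # corrected longitudinal information
    x = w_lat * x2 + (1.0 - w_lat) * x3                 # corrected lateral information
    return x, y
```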
- the map can be a high-precision map or a common physical location map.
- High-precision maps are electronic maps with higher accuracy and more data dimensions: the higher accuracy means centimeter-level precision, and the richer data dimensions mean that, in addition to road information, they include surrounding static information related to driving.
- High-precision maps store a large amount of robot driving assistance information as structured data, which can be divided into two categories.
- the first category is road data, such as lane information such as the location, type, width, slope, and curvature of the road sideline.
- the second category is information about fixed objects around the road, such as traffic signs, traffic lights and other information, road height limits, sewer crossings, obstacles and other road details.
- the road may be a lane, or a road where robots can move, such as a sidewalk.
- the road edge is the boundary line of the road; it can be a lane line, a curb, an isolation object, or anything else that can serve as a road edge.
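- As an illustration of the two categories of structured data a high-precision map can hold, the following is a hypothetical schema; the type and field names are assumptions, not from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RoadEdge:
    points: List[Tuple[float, float]]   # centimeter-level polyline of the edge
    kind: str                           # e.g. "lane_line", "curb", "isolation_object"

@dataclass
class Road:                             # first category: road data
    edges: List[RoadEdge]
    width: float
    slope: float
    curvature: float

@dataclass
class FixedObject:                      # second category: fixed objects around the road
    position: Tuple[float, float]
    kind: str                           # e.g. "traffic_sign", "traffic_light", "height_limit"

@dataclass
class HDMap:
    roads: List[Road] = field(default_factory=list)
    objects: List[FixedObject] = field(default_factory=list)
```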
- the map is stored in the robot in advance, and the stored map can be obtained locally before use.
- the map can also be stored in the cloud or other devices, and the robot can be obtained from the cloud or other devices when needed.
- the first position information of the robot is determined by the positioning component, and the second position information of the robot is determined according to the image, the first position information and the second position information can be merged to obtain the positioning information of the robot.
- Specifically, the first location information and the second location information can be input into a fusion algorithm to obtain fused positioning information and the confidence of the fused positioning information, and it is then judged whether the confidence is greater than a threshold. If the confidence is greater than the threshold, the accuracy of the fused positioning information is high, and the fused positioning information can be determined as the positioning information of the robot. If the confidence is less than or equal to the threshold, the accuracy of the fused positioning information is low; the fused positioning information can be discarded and positioning performed again.
- the positioning information of the robot may be an average or a weighted average of the first location information and the second location information.
- the fusion algorithm can be a comprehensive average method, Kalman filter method, Bayesian estimation method, etc.
- the first position information and the second position information may also be directly subjected to weighted or averaged fusion processing to obtain the positioning information of the robot.
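- A minimal sketch of the confidence-gated fusion: a weighted average plus an agreement-based confidence score. The text names comprehensive averaging, Kalman filtering, and Bayesian estimation as options; the specific confidence formula below is an illustrative assumption.

```python
import numpy as np

def fuse_positions(p1, p2, w1=0.5, w2=0.5, threshold=0.5, scale=1.0):
    """Fuse first and second position information; return the fused
    positioning information, or None if its confidence is too low."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    fused = (w1 * p1 + w2 * p2) / (w1 + w2)
    # confidence decays as the two estimates disagree (assumed heuristic)
    confidence = float(np.exp(-np.linalg.norm(p1 - p2) / scale))
    if confidence > threshold:
        return fused, confidence        # accepted as the robot's positioning
    return None, confidence             # discard and re-position
```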
- FIG. 2 is a schematic flowchart of another positioning method provided by an embodiment of the present application.
- the positioning method is applied to robots.
- the robot can be a small car used for teaching, playing, etc., a large car used for carrying passengers, carrying objects, etc., or a robot used for teaching, playing, etc., which is not limited here.
- the system used by the robot may be an embedded system or other systems, which is not limited here.
- the steps of the positioning method may be executed by hardware such as a robot, or executed by a processor running computer executable code. As shown in Figure 2, the positioning method may include the following steps.
- Steps 201 to 204 are the same as steps 101 to 104, respectively; for details, please refer to steps 101 to 104, which will not be repeated here.
- the first route is a movement path planned for the robot based on the collected image information. After the image is collected by the camera, the first route of the robot can be determined based on the image.
- Taking a vehicle-type robot as an example, the robot can first identify, in the image, the two road edges corresponding to the road where it is located, for example using a pre-trained road edge recognition model, and then calculate the center line of the two road edges.
- the center line of the two road edges can be directly determined as the first route of the robot, or the center line can be smoothed to obtain the first route of the robot, as in the sketch below.
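- A minimal sketch of the straight-road case: midpoints of the two recognized road edges, followed by a simple moving-average smoothing. The edge representation (equal-length sampled polylines) and the smoothing choice are assumptions.

```python
import numpy as np

def first_route_straight(edge_a, edge_b, window=5):
    """Center line of two road edges sampled as equal-length (N, 2) polylines,
    then curve smoothing by a moving average."""
    edge_a, edge_b = np.asarray(edge_a, float), np.asarray(edge_b, float)
    center = 0.5 * (edge_a + edge_b)
    kernel = np.ones(window) / window
    smoothed = np.column_stack(
        [np.convolve(center[:, i], kernel, mode="same") for i in range(2)]
    )  # note: "same" mode slightly biases the endpoints
    return smoothed
```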
- In some cases, the road where the robot is located may have only one road edge; a curb detected in the image can then be determined as the other road edge, or a detected isolation object can be determined as the other road edge.
- the first road edge corresponding to the road where the robot is located in the image can be identified.
- the second road edge of the road after the robot turns can be determined according to the map and the positioning information of the robot; that is, the information of the post-turn road on the robot's route is queried from the map according to the robot's positioning information. The road information can include the width of the road, its road edges, and so on. Then, according to the identified first road edge and the determined second road edge, the entrance position and entrance direction of the post-turn road are determined.
- the determined road edge can be completed according to the recognized road edge, and the road edge after the robot turns can be determined based on the completed road edge.
- the turning curve can be calculated according to the entry position and entry direction of the road after the robot turns, and the positioning information and direction of the robot, to obtain the first route of the robot.
- methods such as B-spline fitting and polynomial fitting can be used to calculate the turning curve, as in the sketch below.
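- A minimal sketch of one polynomial option: a cubic Hermite segment from the robot's pose to the entrance of the post-turn road. The text mentions B-splines and polynomial fitting in general; the tangent scaling below is an assumption.

```python
import numpy as np

def turning_curve(robot_pos, robot_dir, entry_pos, entry_dir, n=50):
    """Cubic Hermite curve from (robot_pos, robot_dir) to (entry_pos, entry_dir).
    Positions are 2D points; directions are unit heading vectors."""
    p0, p1 = np.asarray(robot_pos, float), np.asarray(entry_pos, float)
    chord = np.linalg.norm(p1 - p0)          # scale tangents by the chord length
    m0 = np.asarray(robot_dir, float) * chord
    m1 = np.asarray(entry_dir, float) * chord
    t = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1   # first route, (n, 2)
```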
- the second route is a reference path for the robot to travel planned according to the map and the positioning information of the robot.
- the second route of the robot can be determined according to the map and the positioning information of the robot.
- For going straight, the center line of the road where the robot is currently located, corresponding to the robot's positioning information, can be queried from the map and used as the second route of the robot.
- For turning, the center line of the road onto which the robot will turn, corresponding to the robot's positioning information, can be queried from the map and used as the second route of the robot.
- Routes along other positions on the road, for example a route along the 2/3 position from the left side of the road, can also be used as the second route of the robot.
- the driving path of the robot can be determined according to the first route and the second route.
- the first route and the second route are aligned to obtain the travel path of the robot, wherein the first route and the second route can be aligned using methods such as weighted average and curve fitting.
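- A minimal sketch of aligning the two routes by arc-length resampling followed by a pointwise weighted average, one of the methods the text names; the resampling step is an assumption.

```python
import numpy as np

def align_routes(route1, route2, w=0.5, n=100):
    """Blend two polyline routes into a single travel path."""
    def resample(route):
        r = np.asarray(route, float)
        seg = np.linalg.norm(np.diff(r, axis=0), axis=1)
        d = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative arc length
        s = np.linspace(0.0, d[-1], n)
        return np.column_stack(
            [np.interp(s, d, r[:, i]) for i in range(r.shape[1])]
        )
    return w * resample(route1) + (1.0 - w) * resample(route2)
```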
- After step 207, the method further includes:
- a driving instruction for driving according to the driving path can be generated according to the driving path.
- a travel instruction for traveling 100 meters straight on the current road may be generated.
- After generating the driving instruction for driving in accordance with the driving path, the robot can execute the driving instruction so as to travel in accordance with the driving path.
- In this embodiment, the positioning of the positioning component and the positioning performed via the image collected by the camera are merged to obtain the positioning of the robot. Combining the positioning of the positioning component with the positioning of the perception result allows the positioning-component result to be corrected, which can improve the accuracy of positioning.
- The travel path of the robot is determined from the route of the robot determined by the positioning information and the route of the robot determined by the image collected by the camera. Combining the two allows the route determined by the positioning information to be corrected, which can improve the accuracy of determining the travel path.
- FIG. 3 is a schematic flowchart of a path determination method provided by an embodiment of the present application.
- the path determination method can be applied to robots.
- the robot can be a small car used for teaching, playing, etc., a large car used for carrying passengers, carrying objects, etc., or a robot used for teaching, playing, etc., which is not limited here.
- the system used by the robot may be an embedded system or other systems, which is not limited here.
- the steps of the path determination method can be executed by hardware such as a robot, or can be executed by a processor running computer executable code. As shown in Fig. 3, the path determination method may include the following steps.
- Step 301 is the same as step 102, step 302 is the same as step 205, and step 303 is similar to step 206; for details, please refer to steps 102, 205, and 206, which will not be repeated here.
- the robot may also use one of the positioning components or the map to directly obtain the robot positioning information, and then determine the first route and the second route of the robot.
- Steps 304 to 306 are the same as steps 207 to 209, respectively; for details, please refer to steps 207 to 209, which will not be repeated here.
- In this embodiment, the travel path of the robot is determined based on the route of the robot determined by the positioning information and the route of the robot determined by the image collected by the camera. Combining the two allows the route determined by the positioning information to be corrected, which can improve the accuracy of determining the travel path.
- FIG. 4 is a schematic structural diagram of a positioning device provided by an embodiment of the present application.
- the positioning device can be applied to robots.
- the robot can be a small car used for teaching, playing, etc., a large car used for carrying passengers, carrying objects, etc., or a robot used for teaching, playing, etc., which is not limited here.
- the system used by the robot may be an embedded system or other systems, which is not limited here.
- the positioning device may include:
- the first determining unit 401 is configured to determine the first position information of the robot through the positioning component
- the collection unit 402 is used to collect images through a camera
- the second determining unit 403 is configured to determine the second position information of the robot according to the image
- the fusion unit 404 is used for fusing the first position information and the second position information to obtain the positioning information of the robot.
- the positioning component may include lidar, and the first determining unit 401 is specifically configured to:
- the first position information of the robot is determined.
- the second determining unit 403 is specifically configured to:
- the second position information of the robot is determined according to the landmark object and the relative position.
- the second determining unit 403 determining the relative position between the robot and the landmark object in the image includes:
- the relative position between the robot and the landmark object is determined.
- the second determining unit 403 determining the second position information of the robot according to the landmark object and the relative position includes:
- the second position information of the robot is determined according to the first position information, the map, the landmark object and the relative position.
- the second determining unit 403 determining the second position information of the robot according to the first position information, the map, the landmark object, and the relative position includes:
- the longitudinal information is the position information of the initial position information in the direction of the road edge
- the lateral information is the position information of the initial position information in the direction perpendicular to the road edge.
- the fusion unit 404 is specifically configured to:
- the fusion positioning information is the positioning information of the robot.
- the positioning device may further include:
- the third determining unit 405 is configured to determine the first route of the robot according to the image
- the fourth determining unit 406 is configured to determine the second route of the robot according to the map and the positioning information of the robot;
- the fifth determining unit 407 is configured to determine the travel path of the robot according to the first route and the second route.
- the third determining unit 405 is specifically configured to:
- the curve smoothing process is performed on the center line to obtain the first route of the robot.
- the fourth determining unit 406 is specifically configured to query the center line of the road corresponding to the positioning information of the robot from the map to obtain the second route of the robot.
- the third determining unit 405 is specifically configured to:
- the turning curve is calculated to obtain the first route of the robot.
- the fourth determining unit 406 is specifically configured to query the center line of the turning road corresponding to the positioning information of the robot from the map, and obtain the second route of the robot.
- the fifth determining unit 407 is specifically configured to align the first route and the second route to obtain the travel path of the robot.
- the positioning device may further include:
- the generating unit 408 is configured to generate a driving instruction for driving according to the driving path;
- the execution unit 409 is used to execute driving instructions.
- This embodiment may correspond to the description of the method embodiments of the present application; the above and other operations and/or functions of each unit implement the corresponding processes of the methods in FIG. 1 and FIG. 2, respectively. For the sake of brevity, details are not repeated here.
- FIG. 5 is a schematic structural diagram of a path determination device provided by an embodiment of the present application.
- the path determination device can be applied to robots.
- the robot can be a small car used for teaching, playing, etc., a large car used for carrying passengers, carrying objects, etc., or a robot used for teaching, playing, etc., which is not limited here.
- the system used by the robot may be an embedded system or other systems, which is not limited here.
- the path determination device may include:
- the collection unit 501 is used to collect images through a camera
- the first determining unit 502 is configured to determine the first route of the robot according to the image
- the second determining unit 503 is configured to determine the second route of the robot according to the map and the positioning information of the robot;
- the third determining unit 504 is configured to determine the travel path of the robot according to the first route and the second route.
- the first determining unit 502 is specifically configured to:
- the curve smoothing process is performed on the center line to obtain the first route of the robot.
- the second determining unit 503 is specifically configured to query the center line of the road corresponding to the positioning information of the robot from the map to obtain the second route of the robot.
- the first determining unit 502 is specifically configured to:
- the turning curve is calculated to obtain the first route of the robot.
- the second determining unit 503 is specifically configured to query the center line of the turning road corresponding to the positioning information of the robot from the map to obtain the second route of the robot.
- the third determining unit 504 is specifically configured to align the first route and the second route to obtain the travel path of the robot.
- the path determination device may further include:
- the generating unit 505 is configured to generate a driving instruction for driving according to the driving path;
- the execution unit 506 is used to execute driving instructions.
- This embodiment may correspond to the description of the method embodiments of the present application; the above and other operations and/or functions of each unit implement the corresponding processes of the methods in FIG. 2 and FIG. 3, respectively. For the sake of brevity, details are not repeated here.
- FIG. 6 is a schematic structural diagram of a robot provided by an embodiment of the present application.
- the robot may be a small car used for teaching, playing, etc., a large car used for carrying passengers, carrying objects, etc., or a robot used for teaching, playing, etc., which is not limited here.
- the system used by the robot may be an embedded system or other systems, which is not limited here.
- the robot may include at least one processor 601, a memory 602, a positioning component 603, a camera 604, and a communication line 605.
- the memory 602 may exist independently, and may be connected to the processor 601 through a communication line 605.
- the memory 602 may also be integrated with the processor 601.
- the communication line 605 is used to realize the connection between these components.
- When the computer program instructions stored in the memory 602 are executed, the processor 601 is configured to perform the operations of at least some of the second determining unit 403, the fusion unit 404, the third determining unit 405, the fourth determining unit 406, the fifth determining unit 407, the generating unit 408, and the execution unit 409 in the foregoing embodiment.
- The positioning component 603 is used to perform the operations performed by the first determining unit 401 in the foregoing embodiment, and the camera 604 is used to perform the operations performed by the collection unit 402 in the foregoing embodiment.
- the above-mentioned robot can also be used to execute various methods executed by the terminal device in the foregoing method embodiments, and details are not described herein again.
- When the computer program instructions stored in the memory 602 are executed, the processor 601 is configured to perform the operations of at least some of the first determining unit 502, the second determining unit 503, the third determining unit 504, the generating unit 505, and the execution unit 506 in the foregoing embodiment, and the camera 604 is used to perform the operations performed by the collection unit 501 in the foregoing embodiment.
- the above-mentioned robot can also be used to execute various methods executed in the foregoing method embodiments, and details are not described herein again.
- the embodiment of the present application also discloses a computer-readable storage medium with an instruction stored thereon, and the method in the foregoing method embodiment is executed when the instruction is executed.
- the readable storage medium may be a volatile storage medium or a non-volatile storage medium.
- the embodiment of the present application also discloses a computer program product containing instructions, which execute the method in the foregoing method embodiment when the instruction is executed.
- the program can be stored in a computer-readable memory, and the memory can include: flash disk, Read-Only Memory (ROM), Random-Access Memory (RAM), magnetic disk or optical disk, etc.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Automation & Control Theory (AREA)
- Mathematical Physics (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Aviation & Aerospace Engineering (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Mathematical Analysis (AREA)
- Computational Mathematics (AREA)
- Mathematical Optimization (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Pure & Applied Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Optics & Photonics (AREA)
- Electromagnetism (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Algebra (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Navigation (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Traffic Control Systems (AREA)
- Manipulator (AREA)
Abstract
Description
Claims (31)
- A positioning method, comprising: determining first position information of a robot by means of a positioning component; acquiring an image by means of a camera; determining second position information of the robot according to the image; and fusing the first position information and the second position information to obtain positioning information of the robot (see the fusion sketch after the claims).
- The method according to claim 1, wherein the positioning component comprises a lidar, and determining the first position information of the robot by means of the positioning component comprises: acquiring positioning data by means of the lidar; and determining the first position information of the robot according to a point cloud positioning map and the positioning data (see the scan-matching sketch after the claims).
- The method according to claim 1 or 2, wherein determining the second position information of the robot according to the image comprises: determining a relative position between the robot and a landmark object in the image; and determining the second position information of the robot according to the landmark object and the relative position.
- The method according to claim 3, wherein determining the relative position between the robot and the landmark object in the image comprises: detecting the landmark object in the image; and determining the relative position between the robot and the landmark object according to an affine transformation matrix of the camera (see the camera-projection sketch after the claims).
- The method according to claim 3 or 4, wherein determining the second position information of the robot according to the landmark object and the relative position comprises: determining the second position information of the robot according to the first position information, a map, the landmark object and the relative position.
- The method according to claim 5, wherein determining the second position information of the robot according to the first position information, the map, the landmark object and the relative position comprises: converting the first position information into a position in the map to obtain initial position information of the robot; identifying a road edge line of the road on which the robot is located in the image; and correcting lateral information of the initial position information according to the identified road edge line, and correcting longitudinal information in the initial position information according to the relative position, to obtain the second position information of the robot; wherein the longitudinal information is the position information of the initial position information in the direction along the road edge line, and the lateral information is the position information of the initial position information in the direction perpendicular to the road edge line (see the road-frame correction sketch after the claims).
- The method according to any one of claims 1-6, wherein fusing the first position information and the second position information to obtain the positioning information of the robot comprises: fusing the first position information and the second position information to obtain fused positioning information and a confidence of the fused positioning information; and determining the fused positioning information as the positioning information of the robot when the confidence is greater than a threshold.
- The method according to any one of claims 1-7, further comprising: determining a first route of the robot according to the image; determining a second route of the robot according to a map and the positioning information of the robot; and determining a travel path of the robot according to the first route and the second route.
- The method according to claim 8, wherein determining the first route of the robot according to the image comprises: identifying the two road edge lines corresponding to the road on which the robot is located in the image; computing the center line of the two road edge lines; and performing curve smoothing on the center line to obtain the first route of the robot (see the center-line sketch after the claims).
- The method according to claim 9, wherein determining the second route of the robot according to the map and the positioning information of the robot comprises: querying the map for the center line of the road corresponding to the positioning information of the robot to obtain the second route of the robot.
- The method according to claim 8, wherein determining the first route of the robot according to the image comprises: identifying a first road edge line corresponding to the road on which the robot is located in the image; determining, according to a map and the positioning information of the robot, a second road edge line of the road the robot will be on after turning; determining an entrance position and an entrance direction of the post-turn road according to the first road edge line and the second road edge line; and computing a turning curve according to the entrance position, the entrance direction, and the positioning information and direction of the robot, to obtain the first route of the robot (see the turning-curve sketch after the claims).
- The method according to claim 11, wherein determining the second route of the robot according to the map and the positioning information of the robot comprises: querying the map for the center line of the turning road corresponding to the positioning information of the robot to obtain the second route of the robot.
- The method according to any one of claims 8-12, wherein determining the travel path of the robot according to the first route and the second route comprises: aligning the first route and the second route to obtain the travel path of the robot (see the route-alignment sketch after the claims).
- The method according to any one of claims 8-13, further comprising: generating a travel instruction for traveling along the travel path; and executing the travel instruction.
- A positioning device, comprising: a first determining unit configured to determine first position information of a robot by means of a positioning component; an acquisition unit configured to acquire an image by means of a camera; a second determining unit configured to determine second position information of the robot according to the image; and a fusion unit configured to fuse the first position information and the second position information to obtain positioning information of the robot.
- The device according to claim 15, wherein the positioning component comprises a lidar, and the first determining unit is specifically configured to: acquire positioning data by means of the lidar; and determine the first position information of the robot according to a point cloud positioning map and the positioning data.
- The device according to claim 15 or 16, wherein the second determining unit is specifically configured to: determine a relative position between the robot and a landmark object in the image; and determine the second position information of the robot according to the landmark object and the relative position.
- The device according to claim 17, wherein the second determining unit determining the relative position between the robot and the landmark object in the image comprises: detecting the landmark object in the image; and determining the relative position between the robot and the landmark object according to an affine transformation matrix of the camera.
- The device according to claim 17 or 18, wherein the second determining unit determining the second position information of the robot according to the landmark object and the relative position comprises: determining the second position information of the robot according to the first position information, a map, the landmark object and the relative position.
- The device according to claim 19, wherein the second determining unit determining the second position information of the robot according to the first position information, the map, the landmark object and the relative position comprises: converting the first position information into a position in the map to obtain initial position information of the robot; identifying a road edge line of the road on which the robot is located in the image; and correcting lateral information of the initial position information according to the identified road edge line, and correcting longitudinal information in the initial position information according to the relative position, to obtain the second position information of the robot; wherein the longitudinal information is the position information of the initial position information in the direction along the road edge line, and the lateral information is the position information of the initial position information in the direction perpendicular to the road edge line.
- The device according to any one of claims 15-20, wherein the fusion unit is specifically configured to: fuse the first position information and the second position information to obtain fused positioning information and a confidence of the fused positioning information; and determine the fused positioning information as the positioning information of the robot when the confidence is greater than a threshold.
- The device according to any one of claims 15-21, further comprising: a third determining unit configured to determine a first route of the robot according to the image; a fourth determining unit configured to determine a second route of the robot according to a map and the positioning information of the robot; and a fifth determining unit configured to determine a travel path of the robot according to the first route and the second route.
- The device according to claim 22, wherein the third determining unit is specifically configured to: identify the two road edge lines corresponding to the road on which the robot is located in the image; compute the center line of the two road edge lines; and perform curve smoothing on the center line to obtain the first route of the robot.
- The device according to claim 23, wherein the fourth determining unit is specifically configured to query the map for the center line of the road corresponding to the positioning information of the robot to obtain the second route of the robot.
- The device according to claim 22, wherein the third determining unit is specifically configured to: identify a first road edge line corresponding to the road on which the robot is located in the image; determine, according to a map and the positioning information of the robot, a second road edge line of the road the robot will be on after turning; determine an entrance position and an entrance direction of the post-turn road according to the first road edge line and the second road edge line; and compute a turning curve according to the entrance position, the entrance direction, and the positioning information and direction of the robot, to obtain the first route of the robot.
- The device according to claim 25, wherein the fourth determining unit is specifically configured to query the map for the center line of the turning road corresponding to the positioning information of the robot to obtain the second route of the robot.
- The device according to any one of claims 22-26, wherein the fifth determining unit is specifically configured to align the first route and the second route to obtain the travel path of the robot.
- The device according to any one of claims 22-27, further comprising: a generating unit configured to generate a travel instruction for traveling along the travel path; and an execution unit configured to execute the travel instruction.
- A robot, comprising a processor, a memory, a positioning component and a camera, wherein the memory is configured to store computer program code, the positioning component is configured for positioning, the camera is configured to acquire images, and the processor is configured to invoke the computer program code to execute the method according to any one of claims 1-14.
- A readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1-14.
- A computer program comprising computer-readable code which, when run on an electronic device, causes a processor in the electronic device to execute the method according to any one of claims 1-14.
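The fusion recited in claims 1, 7 and 21 is left open by the claims themselves. The sketch below shows one plausible reading, assuming Gaussian position estimates combined by inverse-covariance weighting; the confidence proxy and the `FUSION_THRESHOLD` value are illustrative assumptions, not the patented implementation.

```python
import numpy as np

FUSION_THRESHOLD = 0.7  # hypothetical acceptance threshold, not from the patent

def fuse_positions(p1, cov1, p2, cov2):
    """Fuse two 2D position estimates by inverse-covariance weighting.

    p1, p2: (2,) lidar-based and camera-based positions.
    cov1, cov2: (2, 2) covariances of the two estimates.
    """
    info1, info2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
    fused_cov = np.linalg.inv(info1 + info2)
    fused_pos = fused_cov @ (info1 @ p1 + info2 @ p2)
    # Simple confidence proxy: more fused uncertainty -> less confidence.
    confidence = 1.0 / (1.0 + np.sqrt(np.trace(fused_cov)))
    return fused_pos, fused_cov, confidence

def positioning_info(p1, cov1, p2, cov2):
    """Accept the fused estimate only when its confidence clears the threshold."""
    pos, _, conf = fuse_positions(p1, cov1, p2, cov2)
    return (pos, conf) if conf > FUSION_THRESHOLD else (None, conf)
```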
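Claims 2 and 16 derive the first position from lidar data and a point cloud positioning map but do not name a matcher. A deliberately small 2D point-to-point ICP is one common choice; the sketch assumes the scan is already roughly initialized in the map frame and brute-forces nearest neighbors, which a production system would replace with a robust, indexed matcher.

```python
import numpy as np

def icp_2d(scan, map_pts, iters=20):
    """Tiny 2D point-to-point ICP.

    scan: (2, N) lidar points, roughly pre-aligned in the map frame.
    map_pts: (2, M) point cloud positioning map.
    Returns (R, t) with R @ scan + t approximately lying on the map.
    """
    R, t = np.eye(2), np.zeros((2, 1))
    for _ in range(iters):
        moved = R @ scan + t
        # Brute-force nearest neighbor in the map for every scan point.
        d2 = ((moved[:, :, None] - map_pts[:, None, :]) ** 2).sum(axis=0)
        nn = map_pts[:, d2.argmin(axis=1)]
        # Closed-form rigid alignment of the matched pairs (Kabsch).
        mu_s = moved.mean(axis=1, keepdims=True)
        mu_m = nn.mean(axis=1, keepdims=True)
        U, _, Vt = np.linalg.svd((moved - mu_s) @ (nn - mu_m).T)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:   # guard against a reflection
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_m - dR @ mu_s
        R, t = dR @ R, dR @ t + dt
    return R, t
```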
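Claims 4 and 18 obtain the robot-to-landmark relative position from an affine transformation matrix of the camera. Below is a sketch of one such ground-plane projection, assuming a pre-calibrated 3x3 homogeneous matrix `AFFINE` that maps the pixel at the landmark's ground contact point to metric robot-frame coordinates; the matrix values are placeholders, not calibration data from the patent.

```python
import numpy as np

# Hypothetical calibration: homogeneous pixel (u, v, 1) -> ground-plane
# (x, y, 1) in meters in the robot frame. Real values come from calibration.
AFFINE = np.array([[0.01,  0.00, -3.2],
                   [0.00, -0.02,  9.5],
                   [0.00,  0.00,  1.0]])

def landmark_relative_position(u, v):
    """Map the landmark's ground contact pixel to a robot-relative position."""
    x, y, w = AFFINE @ np.array([u, v, 1.0])
    return x / w, y / w

# A detector that reports the landmark's base at pixel (640, 620) would give:
dx, dy = landmark_relative_position(640, 620)
```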
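Claims 6 and 20 correct the initial map position componentwise: laterally from the detected road edge line and longitudinally from the relative position to the landmark. The sketch assumes a locally straight road edge and a landmark whose map position is known; replacing both components outright, rather than blending with the prior, is a simplifying assumption.

```python
import numpy as np

def corrected_position(initial_pos, edge_pt, edge_dir, lateral_offset,
                       landmark_map_pos, landmark_distance):
    """Correct a 2D map position in a road-aligned frame.

    initial_pos: prior position converted into the map (the claim's
        "initial position information").
    edge_pt, edge_dir: a point on the detected road edge line and its direction.
    lateral_offset: signed robot-to-edge distance observed in the image.
    landmark_map_pos: map position of the matched landmark.
    landmark_distance: along-road distance to the landmark from the image.
    """
    d = edge_dir / np.linalg.norm(edge_dir)
    n = np.array([-d[1], d[0]])                 # unit normal to the road edge
    rel = initial_pos - edge_pt
    prior_lon, prior_lat = d @ rel, n @ rel     # road-frame split of the prior
    # Here both components are replaced outright by the measurements;
    # a real system might blend them with prior_lon / prior_lat instead.
    lon = d @ (landmark_map_pos - edge_pt) - landmark_distance
    lat = lateral_offset
    return edge_pt + lon * d + lat * n
```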
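Claims 9 and 23 form the first route as the smoothed center line of the two detected road edge lines. The sketch assumes both edges arrive as equal-length polylines sampled at matching stations, and stands in a moving-average filter for whatever curve smoothing is actually used.

```python
import numpy as np

def first_route(left_edge, right_edge, window=5):
    """left_edge, right_edge: (N, 2) polylines sampled at matching stations.

    Returns the (N, 2) smoothed center line used as the first route.
    """
    center = 0.5 * (left_edge + right_edge)
    kernel = np.ones(window) / window
    # Smooth x and y independently; mode="same" keeps N points.
    smooth = np.stack([np.convolve(center[:, 0], kernel, mode="same"),
                       np.convolve(center[:, 1], kernel, mode="same")], axis=1)
    smooth[0], smooth[-1] = center[0], center[-1]   # pin the endpoints
    return smooth
```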
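Claims 11 and 25 compute a turning curve from the robot's pose to the entrance position and direction of the post-turn road. A cubic Hermite segment whose endpoint tangents encode the two headings is one standard construction for this; the tangent magnitude `k` below is a tuning assumption.

```python
import numpy as np

def turning_curve(robot_pos, robot_heading, entry_pos, entry_heading, n=50):
    """Cubic Hermite curve from the robot pose to the road entrance pose.

    robot_pos, entry_pos: (2,) positions; headings in radians.
    Returns an (n, 2) polyline usable as the first route through the turn.
    """
    k = np.linalg.norm(entry_pos - robot_pos)   # tangent magnitude heuristic
    m0 = k * np.array([np.cos(robot_heading), np.sin(robot_heading)])
    m1 = k * np.array([np.cos(entry_heading), np.sin(entry_heading)])
    t = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2 * t**3 - 3 * t**2 + 1               # Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * robot_pos + h10 * m0 + h01 * entry_pos + h11 * m1
```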
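Claims 13 and 27 obtain the travel path by aligning the first (image-derived) and second (map-derived) routes, without fixing the alignment method. The sketch resamples both polylines to a common arc-length parameterization and blends them with a fixed weight `w`, which is an assumed, simple reconciliation rather than the patented one.

```python
import numpy as np

def resample(poly, n):
    """Resample an (M, 2) polyline to n points, uniform in arc length."""
    seg = np.linalg.norm(np.diff(poly, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    u = np.linspace(0.0, s[-1], n)
    return np.stack([np.interp(u, s, poly[:, 0]),
                     np.interp(u, s, poly[:, 1])], axis=1)

def travel_path(first_route, second_route, n=100, w=0.5):
    """Blend the image-derived and map-derived routes after resampling."""
    a, b = resample(first_route, n), resample(second_route, n)
    return w * a + (1.0 - w) * b
```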
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021519865A JP2022504728A (ja) | 2019-09-26 | 2019-12-10 | ポジショニング方法、経路決定方法、装置、ロボットおよび記憶媒体 |
SG11202103843YA SG11202103843YA (en) | 2019-09-26 | 2019-12-10 | Positioning method and device, path determination method and device, robot and storage medium |
US17/227,915 US20210229280A1 (en) | 2019-09-26 | 2021-04-12 | Positioning method and device, path determination method and device, robot and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910915168.8 | 2019-09-26 | ||
CN201910915168.8A CN110530372B (zh) | 2019-09-26 | 2019-09-26 | 定位方法、路径确定方法、装置、机器人及存储介质 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/227,915 Continuation US20210229280A1 (en) | 2019-09-26 | 2021-04-12 | Positioning method and device, path determination method and device, robot and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021056841A1 true WO2021056841A1 (zh) | 2021-04-01 |
Family
ID=68670274
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/124412 WO2021056841A1 (zh) | 2019-09-26 | 2019-12-10 | 定位方法、路径确定方法、装置、机器人及存储介质 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20210229280A1 (zh) |
JP (1) | JP2022504728A (zh) |
CN (1) | CN110530372B (zh) |
SG (1) | SG11202103843YA (zh) |
TW (2) | TWI742554B (zh) |
WO (1) | WO2021056841A1 (zh) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110530372B (zh) * | 2019-09-26 | 2021-06-22 | 上海商汤智能科技有限公司 | 定位方法、路径确定方法、装置、机器人及存储介质 |
CN111121805A (zh) * | 2019-12-11 | 2020-05-08 | 广州赛特智能科技有限公司 | 基于道路交通标线标志的局部定位修正方法、设备及介质 |
CN111524185A (zh) * | 2020-04-21 | 2020-08-11 | 上海商汤临港智能科技有限公司 | 定位方法及装置、电子设备和存储介质 |
CN113884093A (zh) * | 2020-07-02 | 2022-01-04 | 苏州艾吉威机器人有限公司 | Agv建图和定位的方法、系统、装置及计算机可读存储介质 |
CN114076602B (zh) * | 2020-08-20 | 2024-07-16 | 北京四维图新科技股份有限公司 | 定位方法及定位设备 |
CN112405526A (zh) * | 2020-10-26 | 2021-02-26 | 北京市商汤科技开发有限公司 | 一种机器人的定位方法及装置、设备、存储介质 |
CN112867977A (zh) * | 2021-01-13 | 2021-05-28 | 华为技术有限公司 | 一种定位方法、装置和车辆 |
CN112800159B (zh) * | 2021-01-25 | 2023-10-31 | 北京百度网讯科技有限公司 | 地图数据处理方法及装置 |
CN113706621B (zh) * | 2021-10-29 | 2022-02-22 | 上海景吾智能科技有限公司 | 基于带标记图像的标志点定位及姿态获取方法和系统 |
CN116008991B (zh) * | 2022-12-12 | 2024-08-06 | 北京斯年智驾科技有限公司 | 一种岸桥下车辆定位方法、装置、电子设备及存储介质 |
TWI832686B (zh) * | 2023-01-23 | 2024-02-11 | 國立陽明交通大學 | 路徑規劃系統及其路徑規劃方法 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108010360A (zh) * | 2017-12-27 | 2018-05-08 | 中电海康集团有限公司 | 一种基于车路协同的自动驾驶环境感知系统 |
CN108776474A (zh) * | 2018-05-24 | 2018-11-09 | 中山赛伯坦智能科技有限公司 | 集成高精度导航定位与深度学习的机器人嵌入式计算终端 |
US20190187241A1 (en) * | 2018-12-27 | 2019-06-20 | Intel Corporation | Localization system, vehicle control system, and methods thereof |
CN109920011A (zh) * | 2019-05-16 | 2019-06-21 | 长沙智能驾驶研究院有限公司 | 激光雷达与双目摄像头的外参标定方法、装置及设备 |
CN109931939A (zh) * | 2019-02-27 | 2019-06-25 | 杭州飞步科技有限公司 | 车辆的定位方法、装置、设备及计算机可读存储介质 |
CN110530372A (zh) * | 2019-09-26 | 2019-12-03 | 上海商汤智能科技有限公司 | 定位方法、路径确定方法、装置、机器人及存储介质 |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105674992A (zh) * | 2014-11-20 | 2016-06-15 | 高德软件有限公司 | 一种导航方法及装置 |
CN105448184B (zh) * | 2015-11-13 | 2019-02-12 | 北京百度网讯科技有限公司 | 地图道路的绘制方法及装置 |
CN105783936B (zh) * | 2016-03-08 | 2019-09-24 | 武汉中海庭数据技术有限公司 | 用于自动驾驶中的道路标识制图及车辆定位方法及系统 |
CN107398899A (zh) * | 2016-05-20 | 2017-11-28 | 富泰华工业(深圳)有限公司 | 无线信号强度定位引导系统及方法 |
JP6535634B2 (ja) * | 2016-05-26 | 2019-06-26 | 本田技研工業株式会社 | 経路案内装置及び経路案内方法 |
CN108073167A (zh) * | 2016-11-10 | 2018-05-25 | 深圳灵喵机器人技术有限公司 | 一种基于深度相机与激光雷达的定位与导航方法 |
JP6834401B2 (ja) * | 2016-11-24 | 2021-02-24 | 日産自動車株式会社 | 自己位置推定方法及び自己位置推定装置 |
JP7016214B2 (ja) * | 2016-11-29 | 2022-02-04 | アルパイン株式会社 | 走行可能領域設定装置および走行可能領域設定方法 |
JP6891753B2 (ja) * | 2017-09-28 | 2021-06-18 | ソニーグループ株式会社 | 情報処理装置、移動装置、および方法、並びにプログラム |
CN111492403A (zh) * | 2017-10-19 | 2020-08-04 | 迪普迈普有限公司 | 用于生成高清晰度地图的激光雷达到相机校准 |
JP6859927B2 (ja) * | 2017-11-06 | 2021-04-14 | トヨタ自動車株式会社 | 自車位置推定装置 |
JP2019152924A (ja) * | 2018-02-28 | 2019-09-12 | 学校法人立命館 | 自己位置同定システム、車両、及び処理装置 |
CN109241835A (zh) * | 2018-07-27 | 2019-01-18 | 上海商汤智能科技有限公司 | 图像处理方法及装置、电子设备和存储介质 |
CN109141437B (zh) * | 2018-09-30 | 2021-11-26 | 中国科学院合肥物质科学研究院 | 一种机器人全局重定位方法 |
- 2019
- 2019-09-26 CN CN201910915168.8A patent/CN110530372B/zh active Active
- 2019-12-10 WO PCT/CN2019/124412 patent/WO2021056841A1/zh active Application Filing
- 2019-12-10 JP JP2021519865A patent/JP2022504728A/ja active Pending
- 2019-12-10 SG SG11202103843YA patent/SG11202103843YA/en unknown
- 2020
- 2020-03-05 TW TW109107316A patent/TWI742554B/zh active
- 2020-03-05 TW TW110131343A patent/TW202144150A/zh unknown
- 2021
- 2021-04-12 US US17/227,915 patent/US20210229280A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
SG11202103843YA (en) | 2021-05-28 |
TW202144150A (zh) | 2021-12-01 |
TWI742554B (zh) | 2021-10-11 |
CN110530372A (zh) | 2019-12-03 |
US20210229280A1 (en) | 2021-07-29 |
JP2022504728A (ja) | 2022-01-13 |
TW202112513A (zh) | 2021-04-01 |
CN110530372B (zh) | 2021-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021056841A1 (zh) | 定位方法、路径确定方法、装置、机器人及存储介质 | |
KR102483649B1 (ko) | 차량 위치 결정 방법 및 차량 위치 결정 장치 | |
RU2645388C2 (ru) | Устройство определения неправильного распознавания | |
JP5157067B2 (ja) | 自動走行用マップ作成装置、及び自動走行装置。 | |
RU2668459C1 (ru) | Устройство оценки положения и способ оценки положения | |
Schreiber et al. | Laneloc: Lane marking based localization using highly accurate maps | |
JP5966747B2 (ja) | 車両走行制御装置及びその方法 | |
KR101241651B1 (ko) | 영상 인식 장치 및 그 방법, 영상 기록 장치 또는 그방법을 이용한 위치 판단 장치, 차량 제어 장치 및네비게이션 장치 | |
JP5747787B2 (ja) | 車線認識装置 | |
KR102091580B1 (ko) | 이동식 도면화 시스템을 이용한 도로 표지 정보 수집 방법 | |
KR20220033477A (ko) | 자동 발렛 파킹 시스템의 위치 추정 장치 및 방법 | |
Shunsuke et al. | GNSS/INS/on-board camera integration for vehicle self-localization in urban canyon | |
US10942519B2 (en) | System and method for navigating an autonomous driving vehicle | |
US10963708B2 (en) | Method, device and computer-readable storage medium with instructions for determining the lateral position of a vehicle relative to the lanes of a road | |
US20170103275A1 (en) | Traffic Signal Recognition Apparatus and Traffic Signal Recognition Method | |
JP2018048949A (ja) | 物体識別装置 | |
JP7461399B2 (ja) | 原動機付き車両の走行動作を補助する方法及びアシスト装置並びに原動機付き車両 | |
JP7321035B2 (ja) | 物体位置検出方法及び物体位置検出装置 | |
JP6790951B2 (ja) | 地図情報学習方法及び地図情報学習装置 | |
KR20160125803A (ko) | 영역 추출 장치, 물체 탐지 장치 및 영역 추출 방법 | |
US20240221390A1 (en) | Lane line labeling method, electronic device and storage medium | |
JP2010176592A (ja) | 車両用運転支援装置 | |
TWI805077B (zh) | 路徑規劃方法及其系統 | |
CN116242375A (zh) | 一种基于多传感器的高精度电子地图生成方法和系统 | |
CN115050203A (zh) | 地图生成装置以及车辆位置识别装置 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| ENP | Entry into the national phase | Ref document number: 2021519865; Country of ref document: JP; Kind code of ref document: A
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19947309; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19947309; Country of ref document: EP; Kind code of ref document: A1
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.05.2023)
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19947309; Country of ref document: EP; Kind code of ref document: A1