WO2021093420A1 - Vehicle navigation method and system, and computer-readable storage medium - Google Patents

Vehicle navigation method and system, and computer-readable storage medium

Info

Publication number
WO2021093420A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
target
straight line
ground image
ground
Prior art date
Application number
PCT/CN2020/112216
Other languages
English (en)
Chinese (zh)
Inventor
赵健章
刘瑞超
Original Assignee
深圳创维数字技术有限公司
Priority date
Filing date
Publication date
Application filed by 深圳创维数字技术有限公司
Publication of WO2021093420A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G06V 10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • This application relates to the field of intelligent driving technology, and in particular to a vehicle navigation method, device, and computer-readable storage medium.
  • SLAM (Simultaneous Localization and Mapping): real-time positioning and map construction
  • SLAM includes two major functions: positioning and mapping.
  • The main function of mapping is to understand the surrounding environment and establish the correspondence between the surrounding environment and space; the main function of positioning is to judge the position of the vehicle body on the built map, so as to obtain information about the environment.
  • Lidar is an active detection sensor that does not depend on external light conditions and provides high-precision ranging information. Therefore, lidar-based SLAM is still the most widely used approach among robot SLAM methods, and SLAM applications in ROS (Robot Operating System) are also very extensive.
  • the distance between pallet positions should be reduced as much as possible.
  • the aisle between the two deep pile positions is only a little longer than the length of the vehicle.
  • It is difficult for existing navigation methods to make the vehicle enter the target storage location accurately and quickly, which in turn affects the navigation efficiency of the vehicle.
  • the main purpose of this application is to provide a vehicle navigation method, device, and computer-readable storage medium, aiming to solve the technical problem that the existing navigation method is difficult to make the vehicle enter the target location accurately and quickly.
  • the present application provides a vehicle navigation method, which includes the following steps:
  • the vehicle is controlled to stop rotating.
  • The present application also provides a vehicle navigation device, comprising: a memory, a processor, and computer-readable instructions stored in the memory and runnable on the processor; when the computer-readable instructions are executed by the processor, the steps of the vehicle navigation method described in any one of the above are implemented.
  • The present application also provides a computer-readable storage medium having computer-readable instructions stored thereon; when the computer-readable instructions are executed by a processor, the steps of the vehicle navigation method described in any one of the above are implemented.
  • This application obtains the coordinate origin corresponding to the vehicle when it is determined, based on the current first position information of the vehicle, that the vehicle is located at the first designated position corresponding to the target storage location; it then controls the rotation of the vehicle based on the coordinate origin and determines, based on the first ground image currently captured by the camera device installed on the vehicle, whether the edge identification line corresponding to the target storage location is perpendicular to the vehicle; when the edge identification line is perpendicular to the vehicle, the vehicle is controlled to stop rotating. When moving in the narrow lane corresponding to the storage location, rotating the vehicle until it is perpendicular to the edge marking line of the target storage location accurately aligns the vehicle with the target storage location, so that the vehicle can enter the target storage location accurately and quickly, improving the efficiency of vehicle navigation.
  • FIG. 1 is a schematic structural diagram of a path navigation device in a hardware operating environment involved in a solution of an embodiment of the application;
  • FIG. 2 is a schematic flowchart of a first embodiment of a vehicle navigation method according to this application;
  • Fig. 3 is a detailed flowchart of the step, in the vehicle navigation method of this application, of determining, based on the first ground image currently captured by the camera device installed on the vehicle, whether the edge marking line of the target storage location corresponding to the first designated position is perpendicular to the vehicle;
  • FIG. 4 is a schematic diagram of a scene in an embodiment of the route navigation method of this application.
  • Fig. 5 is a schematic flowchart of a second embodiment of a vehicle navigation method according to this application.
  • FIG. 1 is a schematic structural diagram of a path navigation device in a hardware operating environment involved in a solution of an embodiment of the present application.
  • the route navigation apparatus may include a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002.
  • the communication bus 1002 is used to implement connection and communication between these components.
  • the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • The memory 1005 may be a high-speed RAM memory, or a non-volatile memory, such as a magnetic disk memory.
  • the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
  • The route navigation device may also include a camera, RF (Radio Frequency) circuits, sensors, audio circuits, WiFi modules, etc.
  • sensors such as light sensors, motion sensors and other sensors.
  • The light sensor may include a proximity sensor, and the proximity sensor may control the vehicle to stop when the route navigation device moves close to an obstacle.
  • The structure of the route navigation device shown in FIG. 1 does not constitute a limitation on the route navigation device, which may include more or fewer components than those shown in the figure, a combination of certain components, or a different arrangement of components.
  • the memory 1005 which is a computer storage medium, may include an operating system, a network communication module, a user interface module, and a route navigation program.
  • the network interface 1004 is mainly used to connect to the back-end server and communicate with the back-end server; the user interface 1003 is mainly used to connect to the client (user side) and communicate with the client; and
  • the processor 1001 may be used to call a route navigation program stored in the memory 1005.
  • The path navigation device includes: a memory 1005, a processor 1001, and a path navigation program stored on the memory 1005 and runnable on the processor 1001; when the processor 1001 calls the path navigation program stored in the memory 1005, it performs the operations in the various embodiments of the vehicle navigation method below.
  • FIG. 2 is a schematic flowchart of the first embodiment of the vehicle navigation method of this application.
  • The vehicle navigation method of this embodiment can be applied to intelligent automatic driving, which is applicable both to warehouse freight in a closed environment and to road transportation in an open environment.
  • This embodiment takes warehouse freight as an example for illustration;
  • The vehicle used for warehouse freight may be a forklift, a truck, an AGV (Automated Guided Vehicle) trolley, or other equipment capable of transporting goods. In warehouse freight, goods are stacked on pallets, and the vehicle transports the goods by transporting the pallets.
  • AGV (Automated Guided Vehicle): automatic guided transport vehicle
  • a preset identification line is preset on the floor of the warehouse, and the driving route of the vehicle is formed between two adjacent and parallel preset identification lines.
  • the vehicle navigation method includes:
  • Step S100 when it is determined that the vehicle is located at the first designated location corresponding to the target storage location based on the current first location information of the vehicle, obtain the origin of the coordinates corresponding to the vehicle;
  • the vehicle is equipped with a lidar, and the position information of the vehicle can be obtained in real time through the lidar.
  • The target storage location corresponding to the vehicle is obtained, that is, the location where goods are to be stored or picked up.
  • The first designated position corresponding to the target storage location is determined according to preset rules.
  • The vehicle can rotate at the first designated position to align with the target storage location; the first designated position is determined based on the target storage location and the length and width of the vehicle.
  • It is determined whether the vehicle is currently located at the first designated position; when it is, the coordinate origin corresponding to the vehicle is obtained. For example, when the vehicle is a forklift that carries pallets, the center (or thereabouts) of the forklift's rear axle is taken as the coordinate origin (center of rotation) corresponding to the vehicle.
  • Step S200 controlling the rotation of the vehicle based on the coordinate origin, and determining whether the edge marking line corresponding to the target storage location is perpendicular to the vehicle based on the first ground image currently captured by the camera installed on the vehicle;
  • The vehicle is controlled to rotate about the coordinate origin. Specifically, when the camera device is installed on the left side of the vehicle, the vehicle is controlled to rotate clockwise; when the camera device is installed on the right side of the vehicle, the vehicle is controlled to rotate counterclockwise.
  • an image is taken in real time by the camera installed on the vehicle to obtain the corresponding first ground image, and the first ground image is used to determine whether the edge marking line corresponding to the target location is perpendicular to the vehicle.
  • The camera device is a depth camera.
  • The camera device includes two camera devices, which are respectively arranged at the front and on the side of the vehicle.
  • step S200 includes:
  • Step S210 Acquire a first ground image currently captured by the camera device, and identify the initial position of each identification element in the first ground image;
  • Step S220 Separate the ground feature area from the first ground image, and determine the centroid position of the target element in each of the identification elements according to the ground feature area and each of the initial positions;
  • Step S230 Determine the depth data coordinates of each target element according to each of the centroid positions, and determine the first target straight line equation corresponding to the edge identification line according to each of the depth data coordinates;
  • Step S240 Determine whether the edge identification line is perpendicular to the vehicle based on the first target straight line equation.
  • The image is captured in real time by the camera installed on the vehicle to obtain the corresponding first ground image. Understandably, the edge markings of the storage locations in the warehouse are tape pasted on the ground, usually composed of diamond-shaped blocks in two alternating colors, such as black diamond blocks with yellow diamond blocks, or black diamond blocks with white diamond blocks.
  • Each identification element is extracted from the background-processed first ground image; each identification element is processed in turn by edge extraction, contour search, and polyline fitting to obtain its initial contour; each initial contour is passed to a preset function to determine the initial coordinates of each identification element in the ground image; a circular area of a set radius is drawn with each initial coordinate as its center, and each circular area is taken as the initial position of the corresponding identification element in the ground image.
  • Specifically, the first ground image is stripped of its background by flood filling: 8 to 10 multi-color seed points are preset to fill the first ground image and remove other content from it. The seed points are set according to actual requirements; for example, they may be placed at the four corner vertices of the first ground image and at points along its edges, which is not limited here. Then, a preset HSV (hue, saturation, value) color-recognition function implemented with OpenCV (Open Source Computer Vision Library) is used to extract the black diamond blocks as the identification elements.
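The color extraction above can be illustrated with a stdlib-only sketch (the patent uses OpenCV's HSV functions; the threshold value and helper name here are illustrative assumptions): pixels whose HSV value component is low are masked as candidate black diamond pixels.

```python
import colorsys

def black_mask(rgb_pixels, v_thresh=0.2):
    """Mask pixels whose HSV value is low (candidate black diamond pixels)."""
    mask = []
    for (r, g, b) in rgb_pixels:
        # colorsys expects channels in [0, 1]; v is the brightness component
        _, _, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        mask.append(v < v_thresh)
    return mask

# one near-black pixel, one yellow pixel from the warning tape
print(black_mask([(20, 20, 20), (230, 200, 30)]))  # [True, False]
```

In practice the same thresholding would be done per-pixel over the whole image (e.g. with `cv2.inRange` on an HSV image) rather than per pixel list.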
  • OpenCV Open Source Computer Vision Library, an open source computer vision library
  • The preset function used for edge extraction in OpenCV is called: the edge range parameters are set and passed to the preset function, and edge extraction is performed on the extracted identification elements. When the edge size of an identification element lies within the edge range parameters, the edge extraction operation is performed on it to obtain the edge pixels formed by each black diamond block in the first ground image; when the edge size of an identification element is not within the edge range parameters, no edge extraction is performed on it and it is removed as interference.
  • The preset function for contour search in OpenCV is called: the contour range parameters are set and passed to the preset function, and a contour search is performed on each identification element on the basis of the edge extraction. When the contour size of an identification element lies within the contour range parameters, the contour of that identification element is retained to obtain its contour points; when the contour size of an identification element is not within the contour range parameters, its contour is removed, thereby removing interfering contours from the first ground image.
  • the contour points of each identification element are processed by polyline fitting to obtain the initial contour of each identification element.
  • this embodiment has a pre-established three-dimensional space coordinate system.
  • The three-dimensional space coordinate system takes the position of the stereo camera as the coordinate origin; the plane where the AGV trolley is located is the XY plane, and the space above and perpendicular to the XY plane is where the positive direction of the Z axis lies. For the XY plane, the direction directly in front of the vehicle is the X-axis direction, and the direction perpendicular to the X axis on the right side of the vehicle is the Y-axis direction.
  • the preset function used to calculate the centroid position in OpenCV is called, the initial contour of each identification element is transferred to the preset function, and the coordinate value is output through the processing of the preset function.
  • the coordinate value is the initial coordinate of each identification element in the first ground image.
  • The preset radius value is retrieved, and a circular area is set for each identification element with its initial coordinate as the center; this circular area is the initial position of the identification element in the first ground image.
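The circular initial-position regions can be sketched as follows (the radius value and coordinates are illustrative, not from the patent):

```python
import math

def make_regions(initial_coords, radius):
    """Pair each identification element's initial coordinate with the preset radius."""
    return [(cx, cy, radius) for (cx, cy) in initial_coords]

def in_region(point, region):
    """True if a point lies inside a circular region (center x, center y, radius)."""
    cx, cy, r = region
    return math.hypot(point[0] - cx, point[1] - cy) <= r

regions = make_regions([(10.0, 20.0), (35.0, 20.0)], radius=5.0)
print(in_region((12.0, 22.0), regions[0]))  # True: near the first center
print(in_region((30.0, 20.0), regions[0]))  # False: outside the radius
```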
  • When obstacles are present, they are imaged in the first ground image, producing interference signals that may be misidentified as identification elements; to avoid such interference, a ground feature region extraction mechanism is provided.
  • The ground feature area is the area occupied by the ground in the first ground image. Because an obstacle has a certain projection height on the Z axis of the three-dimensional space coordinate system, a projection threshold can be set to identify obstacles in the first ground image, and the identified obstacles are removed from the first ground image to obtain the ground feature area.
  • the coordinate origin in the three-dimensional space coordinate system is used as the preset coordinate origin, and the ground feature area and each initial position are generated according to the three-dimensional space coordinate system.
  • The image representing the ground feature area and the images representing each initial position are processed to determine the target elements among the identification elements and the centroid coordinates of each target element.
  • The ground feature area and each of the initial positions are combined, and the overlapping feature area between the ground feature area and each initial position is extracted; each overlapping feature area is transmitted to a preset model, the target elements among the identification elements are screened out, and the element coordinates of each target element are calculated as its centroid position.
  • The preset model is called, and each overlapping feature area is transferred to it; the model classifies the features of each overlapping feature area and filters out the areas that match the ground features and contain black blocks. These areas are the target elements among the identification elements, that is, those matching the characteristics of the black diamond blocks.
  • the preset model has the function of calculating the coordinates of the filtered area, and the element coordinates of each target element are obtained through the calculation function; the element coordinates are essentially the centroid coordinates of the target element, which is taken as the centroid position of the target element.
  • The edge identification line on the ground is restored from the first target straight line equation.
  • The hole data in the ground feature area are detected one by one, and the peripheral depth data corresponding to each hole datum is read; the hole data are filled according to the peripheral depth data until all hole data in the ground feature area are filled, so that the depth data coordinates can be determined based on the filled ground feature area.
  • The hole data are expanded within their neighborhood to realize the filling. After all hole data in the ground feature area are filled, the polar coordinate conversion of each centroid coordinate can be performed on the basis of the filled ground feature area to obtain the depth data coordinates of each target element.
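The neighborhood-expansion filling can be sketched as an iterative fill of missing depth values from valid 4-neighbors (a minimal illustration; the patent does not specify the exact expansion rule, so averaging the neighbors is an assumption):

```python
def fill_holes(depth):
    """Iteratively fill missing depth values (None) from valid 4-neighbours."""
    h, w = len(depth), len(depth[0])
    grid = [row[:] for row in depth]  # work on a copy
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                if grid[y][x] is None:
                    nb = [grid[ny][nx]
                          for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                          if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] is not None]
                    if nb:
                        # expand the hole's neighbourhood: average the valid depths
                        grid[y][x] = sum(nb) / len(nb)
                        changed = True
    return grid

depth = [[1.0, 1.0, 1.0],
         [1.0, None, 1.0],
         [1.0, 1.0, 1.0]]
print(fill_holes(depth)[1][1])  # 1.0 — hole filled from surrounding depth values
```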
  • the preset algorithm is called to calculate the converted depth data coordinates to identify the edge identification line in the first ground image; specifically, the target of each depth data coordinate is determined according to the preset range interval Data coordinates, and generate a first target straight line equation according to each of the target data coordinates.
  • the preset algorithm in this embodiment is preferably the least squares method.
  • The circular area serving as the initial position of each target element is taken as the preset range interval; within this interval, for the depth data coordinates of each target element, adjacent points to the front, back, left, and right are searched. Whenever front-and-back or left-and-right adjacent points are found, the three points are taken out and saved in an array as the target data coordinates of that depth data coordinate.
  • The least squares method is then used to generate the first target straight line equation from the target coordinate data; the straight line corresponding to the first target straight line equation is the position of the edge identification line in the first ground image.
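The least-squares step can be sketched as an ordinary least-squares fit of a line y = k·x + b through the target data coordinates (the sample points below are illustrative):

```python
def fit_line(points):
    """Ordinary least squares fit y = k*x + b through the target data coordinates."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    # closed-form OLS solution for slope and intercept
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - k * sx) / n
    return k, b

# collinear sample points on y = 2x + 1: the fit recovers the edge line equation
k, b = fit_line([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])
print(round(k, 6), round(b, 6))  # 2.0 1.0
```

A vertical edge line (infinite slope) would need the x = c form instead; in practice one fits in whichever axis gives the smaller residual.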
  • The depth data coordinates determined from the centroid positions accurately reflect the position of each marking element, which improves the accuracy of edge identification line recognition.
  • The straight line equation of the vehicle can be determined, and whether the edge identification line is perpendicular to the vehicle is then determined from the first target straight line equation of the edge identification line and the straight line equation of the vehicle.
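The perpendicularity test can be sketched by comparing the angle between the two fitted lines with 90° (the tolerance is an illustrative assumption; the patent does not give one):

```python
import math

def is_perpendicular(k1, k2, tol_deg=1.0):
    """Check whether two lines with slopes k1, k2 are perpendicular within a tolerance."""
    a1 = math.atan(k1)
    a2 = math.atan(k2)
    diff = abs(a1 - a2) % math.pi          # undirected angle between the lines
    return abs(diff - math.pi / 2) <= math.radians(tol_deg)

print(is_perpendicular(2.0, -0.5))  # True: slopes whose product is -1
print(is_perpendicular(2.0, 0.5))   # False
```

Working with angles rather than the slope product k1·k2 = -1 avoids numerical blow-up when one line is nearly vertical.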
  • Step S300 when the edge marking line is perpendicular to the vehicle, control the vehicle to stop rotating.
  • When the edge marking line is perpendicular to the vehicle, the vehicle has rotated to the designated storage point; at this time the vehicle is controlled to stop rotating so that it is aligned with the target storage location, which facilitates subsequently moving the vehicle straight backward so that the goods can be stored accurately.
  • In the figure, 1.1-1.3 are positions of the origin of the vehicle's coordinates, 2.1 is the direction of movement of the vehicle, and 2.2 is the trajectory of the vehicle.
  • the dotted line is the ground marking line of the storage location, including parallel yellow and black warning lines and yellow and black edge markings at the entrance (exit) of the storage location.
  • the camera includes two camera devices, and the two camera devices are respectively arranged in front of and on the side of the vehicle.
  • Step S230 includes:
  • Step a According to each of the depth data coordinates, determine a first straight line equation of the edge identification line corresponding to a front camera device, and a second straight line equation of the edge identification line corresponding to a side camera device;
  • Step b Perform fusion based on the first straight line equation and the second straight line equation to obtain the first target straight line equation.
  • The first ground image includes a front ground image and a side ground image: the first straight line equation of the edge identification line corresponding to the front camera device is obtained from the front ground image, and the second straight line equation of the edge identification line corresponding to the side camera device is obtained from the side ground image; the coordinate system of the first straight line equation and the coordinate system of the second straight line equation are then merged.
  • The fusion is specifically performed according to a fusion filtering algorithm, yielding the fused coordinate system and the first target straight line equation in the fused coordinate system.
  • If there is a single straight line equation in the fused coordinate system, it is the required first target straight line equation. If there are two straight line equations in the fused coordinate system and they are perpendicular to each other, the two lines are the yellow-and-black warning line of the target storage location and the edge marking line at the storage location entrance (exit); in this case, the straight line equation forming the largest angle with the vehicle's straight line equation is taken as the first target straight line equation.
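The selection rule can be sketched as picking, among the fused line slopes, the one at the largest angle to the vehicle's axis (slopes and the zero vehicle slope below are illustrative assumptions):

```python
import math

def pick_entrance_line(slopes, vehicle_slope=0.0):
    """Among fused line equations, pick the one at the largest angle to the vehicle axis."""
    def angle_to(k):
        d = abs(math.atan(k) - math.atan(vehicle_slope)) % math.pi
        return min(d, math.pi - d)  # undirected angle between the two lines
    return max(slopes, key=angle_to)

# warning line nearly parallel to travel, entrance edge line nearly perpendicular
print(pick_entrance_line([0.05, -18.0]))  # -18.0
```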
  • The vehicle navigation method proposed in this embodiment obtains the coordinate origin corresponding to the vehicle when it is determined, based on the current first position information of the vehicle, that the vehicle is located at the first designated position corresponding to the target storage location; it then controls the rotation of the vehicle based on the coordinate origin and determines, based on the first ground image currently captured by the camera installed on the vehicle, whether the edge identification line corresponding to the target storage location is perpendicular to the vehicle. When the edge identification line is perpendicular to the vehicle, the vehicle is controlled to stop rotating. When moving in the narrow lane corresponding to the storage location, rotating the vehicle until it is perpendicular to the edge marking line of the target storage location allows the vehicle and the target storage location to be accurately aligned.
  • the vehicle navigation method further includes:
  • Step S400 acquiring a second ground image based on the camera device, and determining a relative position parameter between the vehicle and a preset identification line according to the second ground image;
  • Step S500 reading the historical ground image acquired based on the camera device, and determining the displacement parameter of the vehicle according to the second ground image and the historical ground image;
  • Step S600 Determine a position adjustment parameter according to the relative position parameter, and use the position adjustment parameter and the displacement parameter as posture adjustment parameters to adjust the posture of the vehicle.
  • The stereo camera photographs the ground beside the driving direction in real time and generates a second ground image that characterizes the relative position between the vehicle and the preset marking line. If the driving path of the vehicle deviates, the relative position between the vehicle and the preset identification line also deviates, so the preset identification line in the second ground image deviates accordingly.
  • the preset identification line is a straight line corresponding to the edge identification line of the target storage location.
  • A three-dimensional space coordinate system is established based on the current position of the vehicle: the position of the stereo camera is taken as the coordinate origin, the plane where the vehicle is located is the XY plane, and the space above and perpendicular to the XY plane is where the positive direction of the Z axis lies. For the XY plane, the direction directly in front of the vehicle is the X-axis direction, and the direction perpendicular to the X-axis direction on the right side of the vehicle is the Y-axis direction.
  • the linear equation of the preset identification line on the XY plane is determined, and the relative position parameter of the vehicle relative to the preset identification line is determined.
  • The relative position parameters include the included angle and the distance of the vehicle relative to the preset identification line: the included angle indicates whether the vehicle is parallel to the preset identification line, and the distance indicates whether the distances between the vehicle and the preset identification lines on its left and right sides are equal.
  • The straight line equation corresponding to the preset identification line is obtained, and the slope of the straight line equation is calculated; taking the driving direction of the vehicle as the reference direction, the included angle between the straight line and the reference direction is calculated from the slope; the distance between the vehicle and the preset identification line is calculated from the straight line equation, and the included angle and the distance are determined as the relative position parameters.
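With the vehicle at the coordinate origin heading along the X axis (the convention from the coordinate system above), the included angle and distance for a line y = k·x + b follow from elementary geometry; this sketch assumes that frame:

```python
import math

def relative_position(k, b):
    """Included angle with the X axis (driving direction) and perpendicular
    distance from the vehicle origin to the line y = k*x + b."""
    angle = math.degrees(math.atan(k))          # 0 when parallel to travel
    distance = abs(b) / math.sqrt(1.0 + k * k)  # point-to-line distance from (0, 0)
    return angle, distance

angle, dist = relative_position(0.0, 1.5)  # line parallel to travel, 1.5 m to the side
print(round(angle, 3), round(dist, 3))  # 0.0 1.5
```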
  • The preset marking line is essentially a pattern composed of rhombuses in alternating yellow and black or black and white. After the second ground image is obtained, image processing is performed on it: the black rhombuses are extracted and the centroid position of each black rhombus is determined; the centroid positions are fitted to generate a straight line equation, which is the straight line where the preset marking line is located. The slope of this straight line equation is then calculated.
  • the historical ground image acquired by the camera device at the previous moment is read, and the change in the vehicle's position is reflected by comparing the historical ground image with the second ground image, thereby determining the displacement parameter of the vehicle.
  • a preset included angle and a preset distance are configured in advance. The included angle in the relative position parameter is compared with the preset included angle to obtain the angle difference between the two;
  • the angle difference characterizes the deviation between the vehicle's actual angle and its theoretical angle: the smaller the difference, the better the vehicle's parallelism with the preset marking line;
  • likewise, the distance in the relative position parameter is compared with the preset distance to obtain the distance difference between the two;
  • the distance difference characterizes the deviation between the vehicle's actual distance and its theoretical distance: the smaller the difference, the more likely it is that the vehicle is equidistant from the preset marking lines on both sides;
  • the angle difference and distance difference obtained by the comparison are determined as the position adjustment parameters; the position adjustment parameters and the displacement parameter together serve as the attitude adjustment parameters, which adjust the vehicle's angle and its distances to the marking lines on both sides, while the displacement between the two moments is calculated so that, besides avoiding collisions with the goods stacked on both sides, the remaining driving distance to the destination can be determined.
  • the vehicle's posture can be adjusted either by the vehicle's own control center or by an upper computer communicatively connected to the vehicle;
  • when the upper computer performs the adjustment, the vehicle sends the position adjustment parameters and the displacement parameter, as attitude adjustment parameters, to the upper computer; the upper computer adjusts the driving angle of the vehicle according to the angle difference and adjusts the left-right position of the vehicle according to the distance difference;
  • the displacement distance of the vehicle from the previous moment to the current moment is calculated according to the displacement parameter, and this displacement distance is used to update the driving distance of the vehicle, which represents the remaining distance to the destination;
  • the upper computer determines whether an adjustment is needed and sends the adjusted parameters to the vehicle, controlling its driving state and realizing accurate transportation.
  • when the vehicle's own control center adjusts the attitude, the control center directly adjusts the driving angle of the vehicle according to the angle difference and the left-right position according to the distance difference, and calculates the displacement distance of the vehicle from the previous moment to the current moment according to the displacement parameter;
  • the displacement distance is used to update the travel distance of the vehicle, which represents the remaining distance to the destination; by controlling the driving state of the vehicle in this way, accurate transportation of the vehicle is realized.
  • step S500 includes: identifying the first data points in the second ground image and the second data points in the historical ground image, and filtering out the first coordinate points from the first data points and the second coordinate points from the second data points; then determining the first center coordinate from the first coordinate points and the second center coordinate from the second coordinate points, and determining the displacement parameters of the vehicle from the first center coordinate and the second center coordinate;
  • the displacement parameters used to calculate the displacement distance include a displacement value and a displacement angle, where the displacement value is the distance between the vehicle's position at the previous moment and its position at the current moment, and the displacement angle is the included angle between the vehicle's displacement and the driving direction, that is, the x-axis direction.
  • specifically, the black diamond blocks of the preset marking line in the second ground image are first extracted, the centroid of each black diamond block is identified, and each centroid is determined as a first data point in the second ground image;
  • likewise, the black diamond blocks of the preset marking line in the historical ground image are extracted, the centroid of each block is identified, and each centroid is determined as a second data point in the historical ground image;
  • the first data points are then filtered according to the straight-line equation of the preset identification line in the second ground image, and the points belonging to that equation are determined as the first coordinate points; similarly, the second data points are filtered according to the straight-line equation of the marking line in the historical ground image, and the points belonging to that equation are determined as the second coordinate points.
  • it is also verified that the slopes of the two straight-line equations differ within a preset range. If the difference exceeds the preset range, the vehicle has undergone a large displacement between the two moments and an abnormal situation has occurred; in that case the vehicle's displacement is monitored on the one hand, and on the other hand the straight-line equations are regenerated to ensure the correctness of the calculation.
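This sanity check might look like the following sketch, where the threshold `max_delta` is an assumed tuning parameter; the patent only states that a preset range exists, not its value:

```python
def slopes_consistent(k_current, k_history, max_delta=0.2):
    """Check whether the slopes of the line equations fitted in the
    current and historical ground images differ within a preset range.

    A difference beyond the range suggests an abnormally large
    displacement between the two moments, in which case the line
    equations should be regenerated before computing the displacement.
    """
    return abs(k_current - k_history) <= max_delta

# Small slope change between consecutive frames passes the check:
ok = slopes_consistent(0.10, 0.05)
# ok == True
```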
  • the first coordinate points and the second coordinate points are then screened to determine the valid points in the historical ground image and the second ground image, and the first center coordinate of the first coordinate points and the second center coordinate of the second coordinate points are determined from the respective valid points. Specifically, according to the second coordinate points, a first effective point corresponding to each first coordinate point is determined, and the first effective points are averaged to determine the first center coordinate; according to the first coordinate points, a second effective point corresponding to each second coordinate point is determined, and the second effective points are averaged to generate the second center coordinate;
  • that is, when determining the first center coordinate, for each first coordinate point the closest point among the second coordinate points is selected; these closest points serve as the first effective points, their coordinate values are averaged, and the resulting average is the first center coordinate.
  • for example, if the first coordinate points comprise n points a1(x1, y1), a2(x2, y2), a3(x3, y3), a4(x4, y4), ..., an(xn, yn), then among the second coordinate points the point closest to a1 is found to be b1, the point closest to a2 is b2, the point closest to a3 is b3, the point closest to a4 is b4, ..., and the point closest to an is bn;
  • the coordinate values of b1, b2, b3, b4, ..., bn are averaged, and the resulting average (x, y) is the first center coordinate.
  • symmetrically, each first coordinate point serves as the basis: for each second coordinate point, the closest point among the first coordinate points is selected; these closest points serve as the second effective points, their coordinate values are averaged, and the resulting average is the second center coordinate.
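The nearest-point matching and averaging above can be sketched as follows (the function names are illustrative; `math.dist` requires Python 3.8+):

```python
import math

def center_coordinates(first_pts, second_pts):
    """Determine the first and second center coordinates: for each point
    of one set, take the closest point of the other set as an
    'effective point', then average the effective points' coordinates.

    Returns (first_center, second_center) as (x, y) tuples.
    """
    def closest(p, pts):
        return min(pts, key=lambda q: math.dist(p, q))

    # first effective points: nearest second coordinate point
    # to each first coordinate point (b1..bn in the example above)
    first_eff = [closest(p, second_pts) for p in first_pts]
    # second effective points: nearest first coordinate point
    # to each second coordinate point
    second_eff = [closest(p, first_pts) for p in second_pts]

    def mean(pts):
        return (sum(x for x, _ in pts) / len(pts),
                sum(y for _, y in pts) / len(pts))

    return mean(first_eff), mean(second_eff)
```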
  • the displacement value and the displacement angle in the displacement parameters can then be calculated. Specifically, from the first center coordinate and the second center coordinate, the slope of the straight line they form is calculated and the relative displacement value of the vehicle is computed; the displacement angle of the vehicle is calculated from the slope of that straight line, and the relative displacement value and the displacement angle are determined as the displacement parameters of the vehicle.
  • if (x1, y1) is the first center coordinate and (x0, y0) is the second center coordinate, a straight line passes through the two center coordinates; the slope of this line is calculated, and the angle corresponding to the slope reflects the change in the vehicle's angle relative to the preset marking line between the two moments;
  • the first center coordinate and the second center coordinate also reflect the vehicle's displacement between the two moments, so the relative displacement value of the vehicle can be calculated from them;
  • the calculated relative displacement value and displacement angle are determined as the displacement parameters of the vehicle, from which the displacement distance travelled by the vehicle between the two moments is calculated.
  • the displacement is projected onto the direction of travel, and the projected value is the displacement distance the vehicle travels along that direction; this displacement distance is used to update the driving distance between the vehicle and the destination, so as to achieve accurate transportation.
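The displacement parameters and their projection onto the driving direction can be sketched as follows, taking the x axis as the driving direction per the coordinate system defined above (function names are illustrative):

```python
import math

def displacement_parameters(first_center, second_center):
    """Displacement value and displacement angle from the two center
    coordinates: first_center (x1, y1) from the current image,
    second_center (x0, y0) from the historical image.
    """
    x1, y1 = first_center
    x0, y0 = second_center
    value = math.hypot(x1 - x0, y1 - y0)   # relative displacement value
    angle = math.atan2(y1 - y0, x1 - x0)   # angle to the x axis (driving direction)
    return value, angle

def travelled_distance(value, angle):
    """Projection of the displacement onto the driving direction,
    used to update the remaining distance to the destination."""
    return value * math.cos(angle)

v, a = displacement_parameters((3.0, 4.0), (0.0, 0.0))
# v == 5.0; the projection onto the x axis is ≈ 3.0
```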
  • in summary, a second ground image is acquired by the camera device, and the relative position parameter between the vehicle and the preset identification line is determined from the second ground image; the historical ground image acquired by the camera device is read, and the displacement parameter of the vehicle is determined from the second ground image and the historical ground image; the position adjustment parameters are then determined from the relative position parameter, and the position adjustment parameters and the displacement parameter are used as attitude adjustment parameters to adjust the attitude of the vehicle. Because the attitude adjustment parameters are generated from the vehicle's relative position parameter and displacement parameter, they accurately characterize the change in the vehicle's position, and the resulting accurate adjustment of the vehicle's posture is conducive to accurate transportation.
  • the vehicle navigation method further includes:
  • Step S700: determining the marking lines on both sides of the target storage location based on the third ground image currently captured by the camera device;
  • Step S800: controlling the vehicle to move in reverse based on the marking lines on both sides, and controlling the vehicle to stop moving when the rear stop line is detected or it is determined that the anti-collision sensor currently detects cargo, wherein the anti-collision sensor is installed at the end of the vehicle.
  • in this embodiment, the vehicle is a forklift equipped with two anti-collision sensors, one installed at the end of each fork. When the forks are lifted, the sensors can detect palletized cargo at a specified distance behind the forklift; when the forks are lowered, they provide anti-collision detection as the forks are inserted into a pallet.
  • based on the third ground image currently captured by the camera device, the identification lines on both sides of the target storage location are determined, in a manner similar to the determination of the edge identification line: the straight-line equations of the identification lines on both sides are determined first, and the identification lines are determined according to those equations. The vehicle is then controlled to move in reverse based on the identification lines on both sides, so that it moves toward the target storage location;
  • during this movement, the ground image captured by the camera device and the detection result of the anti-collision sensor are collected in real time, and the vehicle is controlled to stop moving when the rear stop line is determined from the ground image, or when the detection result shows that the anti-collision sensor currently detects cargo.
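The stop condition monitored in this loop can be sketched as a simple predicate, assuming one boolean per anti-collision sensor (the interface is illustrative, not from the patent):

```python
def should_stop(rear_stop_line_detected, sensor_detects_cargo):
    """Reverse-movement stop condition: stop when the rear stop line is
    detected in the ground image, or when any anti-collision sensor at
    the end of the forks currently detects cargo.

    rear_stop_line_detected: bool from the ground-image check
    sensor_detects_cargo: iterable of bools, one per sensor
    """
    return rear_stop_line_detected or any(sensor_detects_cargo)

# Either condition alone is enough to stop the vehicle:
stop = should_stop(False, [False, True])
# stop == True
```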
  • the steps in the second embodiment can be executed first to realize the posture adjustment of the vehicle.
  • the vehicle navigation method proposed in this embodiment determines the identification lines on both sides of the target storage location based on the third ground image currently captured by the camera device, controls the vehicle to move in reverse based on those identification lines, and controls the vehicle to stop moving when the rear stop line is detected or the anti-collision sensor installed at the end of the vehicle detects cargo, thereby realizing accurate movement of the vehicle in the warehouse and further improving navigation efficiency.
  • the vehicle navigation method further includes:
  • Step c: determining whether the vehicle satisfies a U-turn condition based on the first location information and the target storage location;
  • Step d: if the condition is not satisfied, executing the step of acquiring the coordinate origin corresponding to the vehicle when it is determined, based on the first location information, that the vehicle is located at the first designated position corresponding to the target storage location.
  • that is, based on the vehicle's moving direction and the target storage location, it is determined whether the vehicle currently needs to move to the other side of the warehouse; if not, step S200 is executed.
  • after step c, the method further includes:
  • Step e: if the condition is satisfied, determining a second designated position corresponding to the vehicle based on the current location information;
  • Step f: controlling the vehicle to move based on the second designated position;
  • Step g: when it is determined, based on the current second position information of the vehicle, that the vehicle is located at the second designated position, controlling the rotation of the vehicle based on the second coordinate origin corresponding to the vehicle;
  • Step h: when the vehicle rotates to a third designated position corresponding to the target storage location, controlling the vehicle to stop rotating.
  • that is, when the U-turn condition is satisfied, the second designated position corresponding to the vehicle is determined based on the current location information, and the vehicle is controlled to move toward the second designated position. The second designated position can be set according to the target storage location, so that the vehicle can quickly reach the target storage location after turning around at the second designated position;
  • when it is determined, based on the current second position information, that the vehicle is located at the second designated position, the vehicle is controlled to rotate about the second coordinate origin corresponding to the vehicle, that is, the vehicle rotates 180 degrees to make a U-turn; and when the vehicle rotates to the third designated position corresponding to the target storage location, that is, when the U-turn is completed, the vehicle is controlled to stop rotating.
  • the steps in the third embodiment can be executed to adjust the vehicle's posture.
  • the vehicle navigation method proposed in this embodiment determines whether the vehicle satisfies the U-turn condition based on the first location information and the target storage location; if the condition is not satisfied, it executes the step of acquiring the coordinate origin corresponding to the vehicle when it is determined, based on the first location information, that the vehicle is located at the first designated position corresponding to the target storage location, and then executes the subsequent steps, so that when the vehicle does not need to turn around, accurate navigation is still realized and navigation efficiency is further improved.
  • the vehicle navigation method further includes:
  • Step i: when it is determined, based on the current third position information of the vehicle and the navigation path corresponding to the vehicle, that the vehicle is in a narrow-lane linear movement state, acquiring the fourth ground image currently captured by the camera device and identifying the initial position of each identification element in the fourth ground image;
  • Step j: separating the ground feature area from the fourth ground image, and determining the centroid position of the target element among the identification elements according to the ground feature area and each initial position;
  • Step k: determining the depth data coordinates of each target element according to each centroid position;
  • Step l: identifying the second target straight-line equation corresponding to the edge identification line in the fourth ground image according to each depth data coordinate, and determining the calibration position of the vehicle based on the second target straight-line equation;
  • Step m: determining the target position and attitude information of the vehicle based on the calibration position, and controlling the vehicle based on the target position and the attitude information.
  • in this embodiment, the fourth ground image currently captured by the camera device is acquired, and the second target straight-line equation corresponding to the edge identification line is determined from the fourth ground image. The narrow-lane linear movement state refers to the driving state in which the vehicle moves in a straight line in the narrow lane between two ground-stack warehouses; specifically, the vehicle is determined to be in a narrow lane according to the third location information and the navigation path corresponding to the vehicle, and it is determined according to the third location information and the target storage location that the vehicle needs to move in a straight line, so the vehicle is in the state of moving straight in the narrow lane;
  • the method for determining the second target straight-line equation corresponding to the edge identification line in the fourth ground image is similar to the method for determining the first target straight-line equation in the foregoing embodiment, and is not repeated here;
  • the calibration position of the vehicle is determined based on the second target straight-line equation; the target position and attitude information of the vehicle are then determined based on the calibration position, and the vehicle is controlled based on the target position and the attitude information, so as to realize the straight-line movement of the vehicle in the narrow lane.
  • in an embodiment in which there are two camera devices, arranged respectively at the front and the side of the vehicle, the step of identifying, according to each depth data coordinate, the second target straight-line equation corresponding to the edge identification line in the ground image includes:
  • determining, according to each depth data coordinate, the third straight-line equation of the edge identification line corresponding to the front camera device and the fourth straight-line equation of the edge identification line corresponding to the side camera device, and fusing the third straight-line equation and the fourth straight-line equation to obtain the second target straight-line equation.
  • that is, the ground image includes the front ground image and the side ground image; the first straight-line equation of the edge identification line corresponding to the front camera device can be obtained from the front ground image, and the second straight-line equation of the edge identification line corresponding to the side camera device can be obtained from the side ground image. The coordinate system of the first straight-line equation and that of the second straight-line equation are then merged, the fusion being performed according to a fusion filtering algorithm, yielding the fused coordinate system and the second target straight-line equation in that system;
  • the coordinate origin of the fused coordinate system is the calibration position; if the fused coordinate system contains two straight-line equations and the two lines are perpendicular, the intersection of the two straight lines is the calibration position.
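The intersection used as the calibration position can be sketched as follows, assuming both lines are expressed in slope-intercept form in the fused coordinate system (an illustrative choice; the fusion filtering step itself is not reproduced here):

```python
def calibration_position(k1, b1, k2, b2):
    """Calibration position as the intersection of two straight lines
    y = k1*x + b1 and y = k2*x + b2 in the fused coordinate system.

    Valid whenever the lines are not parallel -- in particular when
    they are perpendicular (k1 * k2 == -1), as in the embodiment above.
    """
    x = (b2 - b1) / (k1 - k2)  # solve k1*x + b1 == k2*x + b2
    y = k1 * x + b1
    return x, y

# Two perpendicular lines, y = x and y = -x + 2, intersect at (1, 1):
pos = calibration_position(1.0, 0.0, -1.0, 2.0)
# pos == (1.0, 1.0)
```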
  • the vehicle navigation method proposed in this embodiment acquires the fourth ground image currently captured by the camera device when it is determined, based on the current third position information of the vehicle and the navigation path corresponding to the vehicle, that the vehicle is in a narrow-lane linear movement state, and identifies the initial position of each identification element in the fourth ground image; it then separates the ground feature area from the fourth ground image and determines the centroid position of the target element among the identification elements according to the ground feature area and each initial position; the depth data coordinates of each target element are determined from the centroid positions, the second target straight-line equation corresponding to the edge identification line in the ground image is identified from the depth data coordinates, and the calibration position of the vehicle is determined based on that equation; finally, the target position and attitude information of the vehicle are determined based on the calibration position, and the vehicle is controlled based on the target position and the attitude information, thereby realizing accurate control of the vehicle as it moves in a straight line in a narrow lane.
  • in addition, an embodiment of the present application also proposes a computer-readable storage medium on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor, the steps of any of the foregoing vehicle navigation methods are implemented.
  • the technical solution of this application, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk, or an optical disk), and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to execute the method described in each embodiment of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present application relates to a vehicle navigation method. The vehicle navigation method comprises the following steps: when it is determined, based on current first position information of a vehicle, that the vehicle is located at a first designated position corresponding to a target storage location, obtaining a coordinate origin corresponding to the vehicle; controlling the vehicle to rotate based on the coordinate origin, and determining, based on a first ground image currently captured by a camera device installed on the vehicle, whether an edge marking line corresponding to the target storage location is perpendicular to the vehicle; and when the edge marking line is perpendicular to the vehicle, controlling the vehicle to stop rotating.
PCT/CN2020/112216 2019-11-12 2020-08-28 Procédé et système de navigation de véhicule et support d'informations lisible par ordinateur WO2021093420A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911117651.8A CN110837814B (zh) 2019-11-12 2019-11-12 车辆导航方法、装置及计算机可读存储介质
CN201911117651.8 2019-11-12

Publications (1)

Publication Number Publication Date
WO2021093420A1 true WO2021093420A1 (fr) 2021-05-20

Family

ID=69575095

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/112216 WO2021093420A1 (fr) 2019-11-12 2020-08-28 Procédé et système de navigation de véhicule et support d'informations lisible par ordinateur

Country Status (2)

Country Link
CN (1) CN110837814B (fr)
WO (1) WO2021093420A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113378735A (zh) * 2021-06-18 2021-09-10 北京东土科技股份有限公司 一种道路标识线识别方法、装置、电子设备及存储介质
CN114004881A (zh) * 2021-12-30 2022-02-01 山东捷瑞数字科技股份有限公司 在井喷口架设引火筒的远程控制方法
CN114038191A (zh) * 2021-11-05 2022-02-11 青岛海信网络科技股份有限公司 一种采集交通数据的方法及装置
CN114415677A (zh) * 2021-12-31 2022-04-29 科大智能机器人技术有限公司 一种自动导航车的控制方法及装置
CN115601271A (zh) * 2022-11-29 2023-01-13 上海仙工智能科技有限公司(Cn) 一种视觉信息防抖方法、仓储库位状态管理方法及系统

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110837814B (zh) * 2019-11-12 2022-08-19 深圳创维数字技术有限公司 车辆导航方法、装置及计算机可读存储介质
CN111856537B (zh) * 2020-06-18 2023-03-07 北京九曜智能科技有限公司 一种自动驾驶车辆的导航方法及装置
CN113341443A (zh) * 2021-05-26 2021-09-03 和芯星通科技(北京)有限公司 一种定位轨迹信息的处理方法和车载导航装置
CN114265414A (zh) * 2021-12-30 2022-04-01 深圳创维数字技术有限公司 车辆控制方法、装置、设备及计算机可读存储介质
CN115218918B (zh) * 2022-09-20 2022-12-27 上海仙工智能科技有限公司 一种智能导盲方法及导盲设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5410346A (en) * 1992-03-23 1995-04-25 Fuji Jukogyo Kabushiki Kaisha System for monitoring condition outside vehicle using imaged picture by a plurality of television cameras
CN101631695A (zh) * 2007-05-30 2010-01-20 爱信精机株式会社 驻车辅助装置
CN109934140A (zh) * 2019-03-01 2019-06-25 武汉光庭科技有限公司 基于检测地面横向标线的自动倒车辅助停车方法及系统
CN110837814A (zh) * 2019-11-12 2020-02-25 深圳创维数字技术有限公司 车辆导航方法、装置及计算机可读存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2005108171A1 (ja) * 2004-05-06 2008-03-21 松下電器産業株式会社 駐車支援装置
JP5397321B2 (ja) * 2009-06-09 2014-01-22 株式会社デンソー 駐車支援システム
CN102152763A (zh) * 2011-03-19 2011-08-17 重庆长安汽车股份有限公司 一种泊车辅助装置
CN103234542B (zh) * 2013-04-12 2015-11-04 东南大学 基于视觉的汽车列车弯道行驶轨迹测量方法
CN105094134B (zh) * 2015-08-25 2017-10-31 杭州金人自动控制设备有限公司 一种基于图像巡线的agv定点停车方法
CN105128746A (zh) * 2015-09-25 2015-12-09 武汉华安科技股份有限公司 一种车辆泊车方法及其泊车系统
CN109508021B (zh) * 2018-12-29 2022-04-26 歌尔股份有限公司 一种自动导引车的导引方法、装置和系统



Also Published As

Publication number Publication date
CN110837814B (zh) 2022-08-19
CN110837814A (zh) 2020-02-25

Similar Documents

Publication Publication Date Title
WO2021093420A1 (fr) Procédé et système de navigation de véhicule et support d'informations lisible par ordinateur
KR102194426B1 (ko) 실내 이동 로봇이 엘리베이터에서 환경을 인식하기 위한 장치 및 방법, 이를 구현하기 위한 프로그램이 저장된 기록매체 및 이를 구현하기 위해 매체에 저장된 컴퓨터프로그램
US11320833B2 (en) Data processing method, apparatus and terminal
CN110969655B (zh) 用于检测车位的方法、装置、设备、存储介质以及车辆
US10859684B1 (en) Method and system for camera-lidar calibration
CN110796063B (zh) 用于检测车位的方法、装置、设备、存储介质以及车辆
CN111856491B (zh) 用于确定车辆的地理位置和朝向的方法和设备
US20190172215A1 (en) System and method for obstacle avoidance
WO2021046716A1 (fr) Procédé, système et dispositif pour détecter un objet cible et support de stockage
WO2021037086A1 (fr) Procédé et appareil de positionnement
US20220012509A1 (en) Overhead-view image generation device, overhead-view image generation system, and automatic parking device
WO2019187816A1 (fr) Corps mobile et système de corps mobile
JP2020057307A (ja) 自己位置推定のための地図データを加工する装置および方法、ならびに移動体およびその制御システム
CN110764110B (zh) 路径导航方法、装置及计算机可读存储介质
WO2022000197A1 (fr) Procédé d'opération de vol, véhicule aérien sans pilote et support de stockage
TW202020734A (zh) 載具、載具定位系統及載具定位方法
CN114179788A (zh) 自动泊车方法、系统、计算机可读存储介质及车机端
Flade et al. Lane detection based camera to map alignment using open-source map data
JP7482453B2 (ja) 測位装置及び移動体
CN112987748A (zh) 机器人狭窄空间的控制方法、装置、终端及存储介质
CN114078247A (zh) 目标检测方法及装置
WO2023036212A1 (fr) Procédé de localisation d'étagère, procédé et appareil d'amarrage d'étagère, dispositif et support
CN113605766B (zh) 一种汽车搬运机器人的探测系统及位置调整方法
US11631197B2 (en) Traffic camera calibration
Nowicki et al. Laser-based localization and terrain mapping for driver assistance in a city bus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20887661

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20887661

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 02.11.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20887661

Country of ref document: EP

Kind code of ref document: A1