CN110837814B - Vehicle navigation method, device and computer readable storage medium



Publication number
CN110837814B
Authority
CN
China
Prior art keywords
vehicle
target
determining
ground image
linear equation
Prior art date
Legal status
Active
Application number
CN201911117651.8A
Other languages
Chinese (zh)
Other versions
CN110837814A (en)
Inventor
赵健章 (Zhao Jianzhang)
刘瑞超 (Liu Ruichao)
Current Assignee
Shenzhen Skyworth Digital Technology Co Ltd
Original Assignee
Shenzhen Skyworth Digital Technology Co Ltd
Application filed by Shenzhen Skyworth Digital Technology Co Ltd filed Critical Shenzhen Skyworth Digital Technology Co Ltd
Priority to CN201911117651.8A
Publication of CN110837814A
Priority to PCT/CN2020/112216 (published as WO2021093420A1)
Application granted
Publication of CN110837814B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Abstract

The invention discloses a vehicle navigation method, which comprises the following steps: when the vehicle is determined to be located at a first designated position corresponding to a target storage location based on current first position information of the vehicle, acquiring a coordinate origin corresponding to the vehicle; controlling the vehicle to rotate based on the coordinate origin, and determining whether an edge identification line corresponding to the target storage location is perpendicular to the vehicle based on a first ground image currently captured by a camera device mounted on the vehicle; and when the edge identification line is perpendicular to the vehicle, controlling the vehicle to stop rotating. The invention also discloses a vehicle navigation device and a computer readable storage medium. By rotating the vehicle until it is perpendicular to the edge identification line of the target storage location, the vehicle is accurately aligned with the target storage location, can enter it accurately and quickly, and vehicle navigation efficiency is improved.

Description

Vehicle navigation method, device and computer readable storage medium
Technical Field
The invention relates to the technical field of intelligent driving, and in particular to a vehicle navigation method, a vehicle navigation device, and a computer readable storage medium.
Background
SLAM (Simultaneous Localization and Mapping) based on the natural environment comprises two major functions: localization and mapping. The main function of mapping is to understand the surrounding environment and establish the correspondence between the environment and space; the main function of localization is to determine the position of the vehicle body within the established map, thereby obtaining information about the environment. In addition, lidar is an active detection sensor: it does not depend on external illumination conditions and provides high-precision ranging information. Lidar-based SLAM therefore remains the most widely applied approach among robot SLAM methods, and SLAM applications in ROS (Robot Operating System) are also very widely used.
In practical applications, in order to save warehouse floor area, the spacing between pallet storage locations is reduced as much as possible; typically, the aisle between the deep-stacking storage locations on either side is only slightly longer than the vehicle itself. Existing navigation methods therefore struggle to guide the vehicle accurately and quickly into the target storage location, which affects vehicle navigation efficiency.
The above is provided only to assist in understanding the technical solution of the present invention and does not constitute an admission that it is prior art.
Disclosure of Invention
The main object of the present invention is to provide a vehicle navigation method, a vehicle navigation device, and a computer readable storage medium, aiming to solve the technical problem that existing navigation methods have difficulty guiding a vehicle accurately and quickly into a target storage location.
In order to achieve the above object, the present invention provides a vehicle navigation method, including the steps of:
when the vehicle is determined to be located at a first designated position corresponding to a target storage location based on current first position information of the vehicle, acquiring a coordinate origin corresponding to the vehicle;
controlling the vehicle to rotate based on the coordinate origin, and determining whether an edge identification line corresponding to the target storage location is perpendicular to the vehicle based on a first ground image currently captured by a camera device mounted on the vehicle;
and when the edge identification line is perpendicular to the vehicle, controlling the vehicle to stop rotating.
In an embodiment, the step of determining whether the edge identification line of the target storage location corresponding to the first designated position is perpendicular to the vehicle based on a first ground image currently captured by a camera device mounted on the vehicle includes:
acquiring a first ground image currently captured by the camera device, and identifying the initial position of each identification element in the first ground image;
separating a ground feature region from the first ground image, and determining the centroid position of a target element among the identification elements according to the ground feature region and each initial position;
determining the depth data coordinates of each target element according to each centroid position, and determining a first target linear equation corresponding to the edge identification line according to each depth data coordinate;
determining whether the edge identification line is perpendicular to the vehicle based on the first target linear equation.
In one embodiment, two camera devices are respectively arranged at the front and at the side of the vehicle, and the step of determining the first target linear equation corresponding to the edge identification line according to each depth data coordinate includes:
according to the depth data coordinates, determining a first linear equation of the edge identification line corresponding to the front camera device and a second linear equation of the edge identification line corresponding to the side camera device;
and fusing based on the first linear equation and the second linear equation to obtain the first target linear equation.
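The fusion of the front-camera and side-camera line estimates is not detailed in the text; below is a minimal sketch assuming each line is expressed in normal form (rho, theta) and fused by a fixed weighted average. The representation and weights are assumptions for illustration, not the patent's actual scheme.

```python
import math

def fuse_lines(line_a, line_b, w_a=0.5, w_b=0.5):
    """Fuse two (rho, theta) line estimates by weighted averaging.

    Each line is in normal form: x*cos(theta) + y*sin(theta) = rho.
    Directions are averaged as unit vectors so that angle wrap-around
    is handled correctly.
    """
    rho_a, th_a = line_a
    rho_b, th_b = line_b
    cx = w_a * math.cos(th_a) + w_b * math.cos(th_b)
    cy = w_a * math.sin(th_a) + w_b * math.sin(th_b)
    theta = math.atan2(cy, cx)
    rho = w_a * rho_a + w_b * rho_b
    return rho, theta
```

In practice the weights could reflect each camera's calibration quality or the number of inlier points behind each fit.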
In one embodiment, before the step of controlling the rotation of the vehicle based on the origin of coordinates, the method further comprises:
acquiring a second ground image based on the camera device, and determining a relative position parameter between the vehicle and a preset identification line according to the second ground image;
reading a historical ground image acquired based on the camera device, and determining a displacement parameter of the vehicle according to the second ground image and the historical ground image;
and determining a position adjustment parameter according to the relative position parameter, and taking the position adjustment parameter and the displacement parameter as attitude adjustment parameters to adjust the attitude of the vehicle.
In one embodiment, after the step of controlling the vehicle to stop rotating, the vehicle navigation method further includes:
determining the identification lines on the two sides of the target storage location based on a third ground image currently captured by the camera device;
and controlling the vehicle to move in reverse based on the identification lines on the two sides, and controlling the vehicle to stop moving when a rear stop line is detected or when it is determined that the collision avoidance sensor currently detects goods, wherein the collision avoidance sensor is mounted at the tail end of the vehicle.
In one embodiment, after the step of obtaining the current position information of the vehicle in real time, the vehicle navigation method further includes:
determining whether the vehicle meets a U-turn condition based on the first position information and a target storage location;
and if not, acquiring a coordinate origin corresponding to the vehicle when the vehicle is determined to be located at a first designated position corresponding to the target storage location based on the first position information.
In one embodiment, after the step of determining whether the vehicle satisfies a U-turn condition based on the first position information and a target storage location, the vehicle navigation method further includes:
if so, determining a second designated position corresponding to the vehicle based on the current position information;
controlling the vehicle based on the second designated position;
when the vehicle is determined to be located at the second designated position according to the current second position information of the vehicle, controlling the vehicle to rotate on the basis of a second coordinate origin corresponding to the vehicle;
and when the vehicle rotates to a third designated position corresponding to the target storage location, controlling the vehicle to stop rotating.
In one embodiment, the vehicle navigation method further comprises:
when the vehicle is determined to be in a narrow-aisle straight-line movement state based on the current third position information of the vehicle and a navigation path corresponding to the vehicle, acquiring a fourth ground image currently captured by the camera device, and identifying the initial position of each identification element in the fourth ground image;
separating a ground feature region from the fourth ground image, and determining the centroid position of a target element among the identification elements according to the ground feature region and each initial position;
determining the depth data coordinates of each target element according to each centroid position;
according to each depth data coordinate, identifying a second target linear equation corresponding to the edge identification line in the fourth ground image, and determining the calibration position of the vehicle based on the second target linear equation;
and determining a target position and attitude information of the vehicle based on the calibration position, and controlling the vehicle based on the target position and the attitude information.
In an embodiment, two camera devices are respectively arranged at the front and at the side of the vehicle, and the step of identifying, according to each depth data coordinate, the second target linear equation corresponding to the edge identification line in the fourth ground image includes:
according to the depth data coordinates, determining a third linear equation of the edge identification line corresponding to the front camera device and a fourth linear equation of the edge identification line corresponding to the side camera device;
and fusing based on the third linear equation and the fourth linear equation to obtain the second target linear equation.
Further, to achieve the above object, the present invention also provides a vehicle navigation device, including: a memory, a processor, and a vehicle navigation program stored in the memory and executable on the processor, the vehicle navigation program, when executed by the processor, implementing the steps of the vehicle navigation method according to any of the above.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a vehicle navigation program that, when executed by a processor, implements the steps of the vehicle navigation method described in any one of the above.
According to the present invention, when the vehicle is determined to be located at a first designated position corresponding to a target storage location based on current first position information of the vehicle, a coordinate origin corresponding to the vehicle is obtained. The vehicle is then controlled to rotate based on the coordinate origin, and whether an edge identification line corresponding to the target storage location is perpendicular to the vehicle is determined based on a first ground image currently captured by a camera device mounted on the vehicle; when the edge identification line is perpendicular to the vehicle, the vehicle is controlled to stop rotating. In this way, when the vehicle moves in the narrow aisle corresponding to the storage locations, it is rotated until it is perpendicular to the edge identification line of the target storage location, so that the vehicle is accurately aligned with the target storage location, can enter it accurately and quickly, and vehicle navigation efficiency is improved.
Drawings
FIG. 1 is a schematic structural diagram of a path navigation device in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart diagram of a first embodiment of a vehicle navigation method of the present invention;
fig. 3 is a detailed flowchart of the step of determining whether the edge identification line of the target storage location corresponding to the first designated position is perpendicular to the vehicle based on a first ground image currently captured by a camera device mounted on the vehicle, in the vehicle navigation method of the present invention;
FIG. 4 is a schematic view of a scenario in an embodiment of the vehicle navigation method according to the present invention;
fig. 5 is a flowchart illustrating a second embodiment of a vehicle navigation method according to the present invention.
The implementation, functional features and advantages of the present invention will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a path navigation device in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the path navigation device may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002, wherein the communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the aforementioned processor 1001.
Optionally, the path navigation device may further include a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WiFi module, and the like. Sensors may include light sensors, motion sensors, and others. Specifically, the light sensors may include a proximity sensor, which may control the vehicle to stop when the moving device approaches an obstacle.
Those skilled in the art will appreciate that the configuration of the path navigation device shown in FIG. 1 does not constitute a limitation thereof, and it may include more or fewer components than shown, combine some components, or arrange the components differently.
As shown in fig. 1, the memory 1005, as a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a path navigation program.
In the path navigation device shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and performing data communication with it; the user interface 1003 is mainly used for connecting to a client (user side) and performing data communication with it; and the processor 1001 may be used to invoke the path navigation program stored in the memory 1005.
In this embodiment, the path navigation device includes: a memory 1005, a processor 1001, and a path navigation program stored in the memory 1005 and executable on the processor 1001, wherein the processor 1001 calls the path navigation program stored in the memory 1005 and performs the operations described in the respective embodiments of the vehicle navigation method below.
The invention also provides a vehicle navigation method, and referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of the vehicle navigation method.
The vehicle navigation method of this embodiment can be applied to intelligent automatic driving, which is suitable both for warehouse freight in a closed environment and for road transportation in an open environment; this embodiment takes warehouse freight as an example. The vehicle used for warehouse freight can be a forklift, a carrier vehicle, an Automated Guided Vehicle (AGV), or other equipment capable of transporting goods. In warehouse freight, goods are stacked and placed on pallets, and the transportation of goods is realized through the transportation of the pallets by the vehicles. It can be understood that preset identification lines are laid on the warehouse floor in advance, and a driving route for the vehicle is formed between two adjacent, parallel preset identification lines.
The vehicle navigation method includes:
step S100, when the vehicle is determined to be located at a first designated position corresponding to a target storage position based on current first position information of the vehicle, a coordinate origin corresponding to the vehicle is obtained;
in this embodiment, the vehicle is provided with the laser radar, the position information of the vehicle can be obtained in real time through the laser radar, meanwhile, if the vehicle is on the way of storing and taking goods, a target storage location corresponding to the vehicle, namely a storage location where goods are to be stored or goods are to be taken, a first designated position corresponding to the target storage location is determined according to a preset rule, the vehicle can rotate and move to the position matched with the target storage location at the first designated position, and the first designated position is determined according to the target storage location, the length of the vehicle and the width of the vehicle.
Then, whether the vehicle is located at the first designated position is determined based on the first position information, i.e., whether the vehicle has currently reached the first designated position. When the vehicle is at the first designated position, the coordinate origin corresponding to the vehicle is obtained; for example, if the vehicle is a forklift for pallet handling, the center (or vicinity of the center) of its rear wheel axle is used as the coordinate origin (the center of rotation) corresponding to the vehicle.
Step S200, controlling the vehicle to rotate based on the coordinate origin, and determining whether an edge identification line corresponding to the target storage location is perpendicular to the vehicle based on a first ground image currently captured by a camera device mounted on the vehicle;
in the present embodiment, after the origin of coordinates is determined, the vehicle is controlled to rotate in accordance with the origin of coordinates, specifically, the vehicle is controlled to rotate clockwise when the imaging device is mounted on the left side of the vehicle, and the vehicle is controlled to rotate counterclockwise when the imaging device is mounted on the right side of the vehicle.
During the rotation of the vehicle, a camera device mounted on the vehicle captures images in real time to obtain the corresponding first ground image, through which it is determined whether the edge identification line corresponding to the target storage location is perpendicular to the vehicle; here, the camera device is a depth camera.
Specifically, in this embodiment there are two camera devices, arranged at the front and at the side of the vehicle respectively.
Referring to fig. 3, in an embodiment, step S200 includes:
step S210, acquiring a first ground image currently shot by the camera device, and identifying the initial position of each identification element in the first ground image;
step S220, a ground characteristic region is separated from the first ground image, and the centroid position of a target element in each identification element is determined according to the ground characteristic region and each initial position;
step S230, determining the depth data coordinates of each target element according to each centroid position, and determining a first target linear equation corresponding to the edge identification line according to each depth data coordinate;
step S240, determining whether the edge marking line is perpendicular to the vehicle based on the first target straight-line equation.
In this embodiment, during the rotation of the vehicle, the camera device mounted on the vehicle captures images in real time to obtain the corresponding first ground image. It can be understood that the edge identification line of a storage location in the warehouse is essentially a tape adhered to the ground, generally composed of diamond blocks in two alternating colors, such as black diamond blocks paired with yellow diamond blocks, or black diamond blocks paired with white diamond blocks.
Then, background processing is performed on the first ground image, and each identification element is extracted from the background-processed first ground image. Edge extraction, contour searching, and polyline fitting are performed on each identification element in turn to obtain the initial contour of each identification element. Each initial contour is passed to a preset function, and the initial coordinates of each identification element in the ground image are determined. Circular areas are then set with the initial coordinates as centers, and these circular areas are identified as the initial positions of the identification elements in the ground image.
Further, the first ground image is subjected to background stripping through flood filling: 8 to 10 seeds of various colors are preset to fill the first ground image so as to remove other material from it. The seeds are set according to actual requirements; for example, the vertices of the four corners of the first ground image or the trisection points of its edges may be used, which is not limited here. After that, in combination with a preset function in OpenCV (Open Source Computer Vision Library) for HSV (hue, saturation, value) color recognition, the black diamond blocks are extracted as identification elements.
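The HSV color recognition step corresponds to a range test such as OpenCV's cv2.inRange. A dependency-free stand-in is shown below; the "black" threshold values are illustrative assumptions, not the patent's actual parameters.

```python
def hsv_mask(pixels, lower, upper):
    """Boolean mask over a grid of (h, s, v) pixels: True where every
    channel lies inside [lower, upper], analogous to the 255 entries
    of a cv2.inRange mask."""
    def in_range(p):
        return all(lo <= c <= hi for c, lo, hi in zip(p, lower, upper))
    return [[in_range(p) for p in row] for row in pixels]

# Assumed range for "black": any hue, any saturation, low value.
BLACK_LOWER, BLACK_UPPER = (0, 0, 0), (180, 255, 50)
```

Applying the mask to the background-stripped image leaves only candidate pixels of the black diamond blocks for the later edge and contour steps.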
Furthermore, a preset function in OpenCV for edge extraction is called: an edge range parameter is set and passed to the preset function, and edge extraction is performed on each extracted identification element. When the edge size of an identification element is within the edge range parameter, the edge extraction operation is performed on it, obtaining the edge pixel points formed by the black diamond blocks in the first ground image; when the edge size of an identification element is not within the edge range parameter, no edge extraction is performed on it and it is removed as interference. Then, a preset function in OpenCV for contour searching is called: a contour range parameter is set and passed to the preset function, and contour searching is performed on each identification element on the basis of the edge extraction. When the contour size of an identification element is within the contour range parameter, its contour is retained to obtain contour points; when the contour size is not within the contour range parameter, its contour is removed so as to eliminate interfering contours in the first ground image. After edge extraction and contour searching have produced the contour points of each qualifying identification element, polyline fitting is performed on those contour points to obtain the initial contour of each identification element.
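The range-based screening above can be sketched without OpenCV by filtering polygonal contours on area and perimeter; the range values used in the test are placeholders for the preset edge/contour range parameters, which the text does not specify.

```python
import math

def filter_contours(contours, area_range, perim_range):
    """Keep contours whose shoelace area and closed-polygon perimeter
    both fall inside the configured ranges; all others are dropped as
    interference, mirroring the screening described above.

    Each contour is an ordered list of (x, y) vertices."""
    kept = []
    for pts in contours:
        n = len(pts)
        # Shoelace formula for the enclosed area.
        area = abs(sum(pts[i][0] * pts[(i + 1) % n][1]
                       - pts[(i + 1) % n][0] * pts[i][1]
                       for i in range(n))) / 2.0
        # Perimeter of the closed polygon.
        perim = sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))
        if area_range[0] <= area <= area_range[1] \
                and perim_range[0] <= perim <= perim_range[1]:
            kept.append(pts)
    return kept
```

With ranges tuned to the known size of a diamond block, oversized or undersized blobs never reach the polyline-fitting stage.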
In addition, a three-dimensional spatial coordinate system is pre-established in this embodiment. It takes the position of the stereo camera as the coordinate origin, the plane on which the AGV is located as the XY plane, and the space above and perpendicular to the XY plane as the positive Z-axis direction; in the XY plane, the direction straight ahead of the vehicle is the X-axis direction, and the direction perpendicular to it on the right side of the vehicle is the Y-axis direction. After the initial contour of each identification element is obtained, a preset function in OpenCV for calculating the centroid position is called, the initial contour of each identification element is passed to it, and a coordinate value is output; this coordinate value is the initial coordinate of that identification element in the first ground image. Then a preset radius value is used to set a circular area corresponding to each identification element, with the initial coordinate as the center; this circular area is the initial position of the identification element in the first ground image.
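The centroid computation delegated above to OpenCV's moments function can be reproduced for a simple polygon with the shoelace-based area centroid; this is a minimal stand-in for illustration, not the patent's exact routine.

```python
def polygon_centroid(pts):
    """Area centroid (cx, cy) of a simple polygon given as ordered
    (x, y) vertices, equivalent to the m10/m00 and m01/m00 centroid
    that cv2.moments yields for a filled contour."""
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5  # signed area
    return cx / (6.0 * a), cy / (6.0 * a)
```

The returned point would serve as the initial coordinate around which the circular area of the preset radius is drawn.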
Understandably, there may be obstacles in the path along which the vehicle travels; such obstacles are imaged in the first ground image and form interference signals that may be misidentified as identification elements. To avoid this interference, a ground feature region extraction mechanism is provided. The ground feature region is the region occupied by the imaging of the ground in the first ground image. Because an obstacle has a certain projected height on the Z axis of the three-dimensional coordinate system, it can be identified by setting a projection threshold; the identified obstacles are removed from the first ground image to obtain the ground feature region.
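The projection-threshold idea reduces to a simple height test over the depth camera's 3-D points; the threshold value below is an assumption for illustration.

```python
def split_ground(points, z_threshold=0.02):
    """Partition (x, y, z) points into ground points and obstacle
    points: anything higher than z_threshold (metres, assumed) above
    the XY plane is treated as an obstacle and excluded from the
    ground feature region."""
    ground = [p for p in points if p[2] <= z_threshold]
    obstacles = [p for p in points if p[2] > z_threshold]
    return ground, obstacles
```

Only the ground points are then intersected with the circular initial positions in the next step.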
Then, with the coordinate origin of the three-dimensional coordinate system as the preset coordinate origin, the ground feature region and each initial position are generated in that coordinate system, and the images representing the ground feature region and each initial position can be processed on the basis of the preset coordinate origin to determine the target elements among the identification elements and the centroid coordinates of each target element. Specifically, the ground feature region and each initial position are combined according to the preset coordinate origin, and the overlapping feature regions between the ground feature region and each initial position are extracted; each overlapping feature region is passed to a preset model, the target elements among the identification elements are screened out, and the element coordinates of each target element are calculated and used as its centroid position.
The preset model is called and each coincident characteristic region is transmitted into it; the model performs characteristic classification on each coincident characteristic region and screens out the regions that conform to the ground characteristics and contain a black block, such a region being a target element, i.e. an identification element that meets the characteristics of the black rhombus block. Meanwhile, the preset model has a function for calculating the coordinates of each screened region, and the element coordinates of each target element are obtained through this calculation function; the element coordinates are substantially the coordinates of the centroid of the target element and are taken as its centroid position.
Then, after the centroid position of each target element is obtained, polar coordinate conversion is performed on the centroid coordinates representing each centroid position in combination with the installation parameters of the stereo camera, so as to obtain the depth data coordinate of each target element, and a first target linear equation is fitted from the depth data coordinates; the straight line generated by the first target linear equation restores the ground identification line. Considering that factors such as the characteristics of the stereo camera and reflection, absorption and refraction during imaging may cause hole data to exist in the first ground image, and that hole data cannot yield depth data coordinates during conversion, causing the polar coordinate conversion to fail, preprocessing is required before the conversion to fill the hole data. Specifically, the hole data in the ground characteristic region are detected one by one, and the peripheral depth data corresponding to each piece of hole data are read; the hole data are filled according to the peripheral depth data until all the hole data in the ground characteristic region are filled, so that the depth data coordinates are determined on the basis of the filled ground characteristic region.
The ground characteristic region is scanned and the hole data therein are detected one by one; when a piece of hole data is detected, the other data around it are read as the peripheral depth data corresponding to the detected hole data, and neighborhood expansion is performed on the hole data with the peripheral depth data through a dilation algorithm, so that the hole data are filled and leveled. After all the hole data in the ground characteristic region are filled, polar coordinate conversion is performed on each centroid coordinate on the basis of the filled ground characteristic region to obtain the depth data coordinate of each target element. Then a preset algorithm is called to operate on the converted depth data coordinates so as to identify the edge identification line in the first ground image; specifically, the target data coordinates among the depth data coordinates are determined according to a preset range interval, and the first target linear equation is generated according to the target data coordinates.
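The hole-filling step can be sketched as a repeated neighbourhood pass: each zero-valued "hole" pixel takes the mean of its valid 4-neighbours until no holes remain. This is a simplified stand-in for the dilation-based neighbourhood expansion the embodiment describes (the function name and iteration scheme are assumptions):

```python
import numpy as np

def fill_holes(depth: np.ndarray, max_iters: int = 100) -> np.ndarray:
    """Fill zero-valued hole pixels from their non-zero 4-neighbours.

    Repeated passes grow valid depth values into the holes, analogous
    to neighbourhood dilation over the ground characteristic region.
    """
    d = depth.astype(float).copy()
    for _ in range(max_iters):
        holes = np.argwhere(d == 0)
        if holes.size == 0:
            break  # all hole data filled
        filled_any = False
        for y, x in holes:
            vals = []
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < d.shape[0] and 0 <= nx < d.shape[1] and d[ny, nx] != 0:
                    vals.append(d[ny, nx])
            if vals:  # peripheral depth data found: fill and level
                d[y, x] = sum(vals) / len(vals)
                filled_any = True
        if not filled_any:
            break
    return d

depth = np.array([[2.0, 2.0, 2.0],
                  [2.0, 0.0, 2.0],
                  [2.0, 2.0, 2.0]])
print(fill_holes(depth)[1, 1])  # centre hole filled from its four neighbours
```
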
Further, in this embodiment the preset algorithm is preferably the least square method. The circular area serving as the initial position of a target element is used as the preset range interval, and for the depth data coordinates of each target element, points adjacent to the preset range interval are searched in front, behind, left and right. When adjacent points are found in front and behind, or on the left and right, these three points are taken out and saved into an array as the target data coordinates of each depth data coordinate. After the target data coordinates are found among the depth data coordinates, the first target linear equation is generated from them by the least square method; the straight line corresponding to the first target linear equation is the position of the edge identification line in the first ground image. The initial positions of the identification elements on the edge identification line are first preliminarily identified, and then the accurate centroid positions of the identification elements are identified on the basis of eliminating interference, so that the depth data coordinates determined from the centroid positions accurately reflect the position of each identification element, and the identification accuracy of the edge identification line is improved.
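The least-squares fit of the first target linear equation from the target data coordinates can be sketched with `numpy.polyfit` (the sample points below are hypothetical):

```python
import numpy as np

def fit_identification_line(points: np.ndarray):
    """Least-squares line y = k*x + b through the target data coordinates.

    Returns (k, b) of the first target linear equation, i.e. the straight
    line on which the edge identification line lies.
    """
    k, b = np.polyfit(points[:, 0], points[:, 1], deg=1)
    return float(k), float(b)

# hypothetical centroid-derived target coordinates lying on y = 0.5*x + 1
pts = np.array([[0.0, 1.0], [2.0, 2.0], [4.0, 3.0], [6.0, 4.0]])
k, b = fit_identification_line(pts)
print(k, b)
```
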
Then, whether the edge identification line is perpendicular to the vehicle is determined based on the first target linear equation: a linear equation of the vehicle can be determined, and whether the edge identification line is perpendicular to the vehicle is determined from the first target linear equation of the edge identification line together with the linear equation of the vehicle.
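A minimal sketch of the perpendicularity test between the edge identification line and the vehicle's own line, expressed through their angles; the one-degree tolerance is an assumed parameter, not taken from the embodiment:

```python
def is_perpendicular(line_angle_deg: float, vehicle_angle_deg: float,
                     tol_deg: float = 1.0) -> bool:
    """True when the edge identification line and the vehicle's line
    differ by 90 degrees, within an assumed angular tolerance."""
    diff = abs(line_angle_deg - vehicle_angle_deg) % 180.0
    return abs(diff - 90.0) <= tol_deg

print(is_perpendicular(90.0, 0.0))  # line lies across the vehicle heading
print(is_perpendicular(45.0, 0.0))  # vehicle has not finished rotating
```
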
And step S300, when the edge identification line is perpendicular to the vehicle, controlling the vehicle to stop rotating.
In this embodiment, when the edge identification line is perpendicular to the vehicle, the vehicle has rotated to the specified warehousing point; the vehicle is controlled to stop rotating at this time so that the vehicle is aligned with the target storage location, which facilitates the subsequent direct reverse linear movement of the vehicle and allows it to be warehoused accurately.
It should be noted that this embodiment is applied to a scene in which the vehicle moves in a narrow lane among a plurality of storage locations, so the vehicle first moves quickly; the width L of the narrow lane is usually greater than or equal to the diagonal length of the forklift plus 0.2 meters, and the width l of a storage location is greater than or equal to the width of the pallet plus 0.1 meters. For example, in this embodiment the length of the forklift is 1.8 meters and the width of the pallet is 1 meter, so the stacking space is arranged as: aisle width L of 2 meters and storage location width l of 1.1 meters.
Referring to fig. 4, in fig. 4, 1.1-1.3 are positions of the coordinate origin of the vehicle, 2.1 is the moving direction of the vehicle, and 2.2 is the motion track of the vehicle. The dotted lines are the storage location ground identification lines, comprising a yellow-black warning line and a yellow-black edge identification line at the storage location entrance (exit). When the coordinate origin of the vehicle is located at the first designated position 1.1, the vehicle starts to rotate in the movement direction shown by 2.1, and when the edge identification line is perpendicular to the vehicle, namely the vehicle has rotated 90 degrees, the rotation is stopped.
In one embodiment, the camera device comprises two camera devices, respectively disposed at the front and the side of the vehicle, and step S230 includes:
step a, according to each depth data coordinate, determining a first linear equation of the edge identification line corresponding to the front camera device and a second linear equation of the edge identification line corresponding to the side camera device;
and b, fusing based on the first linear equation and the second linear equation to obtain the first target linear equation.
The camera device comprises two depth cameras, respectively arranged at the front and the side of the vehicle, and the first ground image comprises a front ground image and a side ground image. A first linear equation of the edge identification line corresponding to the front camera device can be obtained from the front ground image, and a second linear equation of the edge identification line corresponding to the side camera device can be obtained from the side ground image; the coordinate system in which the first linear equation lies and the coordinate system in which the second linear equation lies are then fused, specifically according to a fusion filtering algorithm, so as to obtain a fused coordinate system and the first target linear equation in that fused coordinate system.
If there is only one linear equation in the fused coordinate system, that linear equation is the required first target linear equation. If there are two linear equations in the fused coordinate system and the two are perpendicular, the two perpendicular lines are respectively the yellow-black warning line of the target storage location and the edge identification line at the storage location entrance; in that case, if the rotation angle of the vehicle is larger than a preset angle, such as 60 degrees, the linear equation with the largest included angle to the linear equation of the vehicle is taken as the first target linear equation.
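The selection rule above — once the rotation exceeds the preset angle, keep the fused line with the largest included angle to the vehicle's own line — can be sketched as follows (the slope values and helper names are illustrative):

```python
import math

def included_angle_deg(k1: float, k2: float) -> float:
    """Acute angle between two lines given by their slopes."""
    a = abs(math.degrees(math.atan(k1) - math.atan(k2)))
    return min(a, 180.0 - a)

def pick_first_target_line(candidate_slopes, vehicle_slope: float,
                           rotation_deg: float, threshold_deg: float = 60.0):
    """When the fused frame holds two perpendicular lines and the vehicle
    has rotated past the preset angle, keep the line with the largest
    included angle to the vehicle's line; otherwise keep the only/first one."""
    if len(candidate_slopes) == 1 or rotation_deg <= threshold_deg:
        return candidate_slopes[0]
    return max(candidate_slopes,
               key=lambda k: included_angle_deg(k, vehicle_slope))

# two fused, near-perpendicular lines: slope 0 (warning line) and a steep
# slope standing in for the edge identification line
print(pick_first_target_line([0.0, 50.0], vehicle_slope=0.0, rotation_deg=75.0))
```
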
In the vehicle navigation method provided by this embodiment, when the vehicle is determined to be located at the first designated position corresponding to the target storage location based on the current first position information of the vehicle, the coordinate origin corresponding to the vehicle is obtained; the vehicle is then controlled to rotate about the coordinate origin, and whether the edge identification line corresponding to the target storage location is perpendicular to the vehicle is determined based on the first ground image currently shot by the camera device installed on the vehicle; when the edge identification line is perpendicular to the vehicle, the vehicle is controlled to stop rotating. Thus, when the vehicle moves in the narrow lane corresponding to the storage locations, it rotates until it is perpendicular to the edge identification line of the target storage location, so that the vehicle is accurately aligned with the target storage location, can enter it accurately and quickly, and the vehicle navigation efficiency is improved.
A second embodiment of the vehicle navigation method of the present invention is proposed based on the first embodiment, and referring to fig. 5, in the present embodiment, before step S200, the vehicle navigation method further includes:
step S400, acquiring a second ground image based on the camera device, and determining a relative position parameter between the vehicle and a preset identification line according to the second ground image;
step S500, reading a historical ground image obtained based on the camera device, and determining a displacement parameter of the vehicle according to the second ground image and the historical ground image;
step S600, determining a position adjustment parameter according to the relative position parameter, and taking the position adjustment parameter and the displacement parameter as attitude adjustment parameters to adjust the attitude of the vehicle.
During the driving of the vehicle, the stereo camera shoots and images, in real time, the ground to the side of the driving direction, generating a second ground image representing the relative position between the vehicle and a preset identification line. If the driving path of the vehicle deviates, the relative position between the vehicle and the preset identification line also deviates, so the preset identification line in the second ground image is offset. The preset identification line is the straight line corresponding to the edge identification line of the target storage location.
In order to determine whether the preset identification line in the second ground image has deviated, a three-dimensional space coordinate system is established based on the current position of the vehicle, taking the position of the stereo camera as the coordinate origin, the plane of the vehicle as the XY plane, and the upper space perpendicular to the XY plane as the space in which the positive Z-axis direction lies; within the XY plane, the direction directly in front of the vehicle is the X-axis direction, and the direction perpendicular to it on the right side of the vehicle is the Y-axis direction.
The linear equation of the preset identification line on the XY plane is determined from the position of the preset identification line in the second ground image, and the relative position parameters of the vehicle with respect to the preset identification line are then determined. The relative position parameters comprise the angle and the distance of the vehicle relative to the preset identification line: the angle represents whether the vehicle is parallel to the preset identification line, and the distance represents whether the distances from the vehicle to the preset identification lines on the left and right sides are equal. Specifically, the linear equation corresponding to the preset identification line is obtained and its slope is calculated; the driving direction of the vehicle is taken as the reference direction, and the included angle between the straight line and the reference direction is calculated from the slope; the distance between the vehicle and the preset identification line is calculated from the linear equation, and the included angle and the distance are determined as the relative position parameters.
Understandably, the preset identification line is substantially a pattern formed by diamond blocks with yellow-black or black-white intervals. After the second ground image is obtained, it is subjected to image processing: the black diamond blocks in the second ground image are extracted and the centroid position of each black diamond block is determined; the centroid positions are fitted to generate a linear equation, which is the straight line on which the preset identification line lies. Thereafter, the slope of the linear equation is calculated.
Further, the driving direction of the vehicle is taken as the reference direction, and the included angle between the straight line of the linear equation and the reference direction is calculated from the slope. The driving direction in the three-dimensional space coordinate system is the positive direction of the x axis, so the reference direction is substantially the x-axis direction, and the calculated included angle is the included angle of the preset identification line relative to the x axis, namely the included angle between the driving direction of the vehicle and the preset identification line. Meanwhile, the distance between the vehicle and the preset identification line is calculated from the parameters of the linear equation Ax + By + C = 0 with slope k: the included angle Δα is calculated by the formula Δα = tan⁻¹(k), and the distance ΔL is calculated by the formula ΔL = |C|/(A² + B²)^(1/2). Thereafter, the calculated included angle and distance are determined as the relative position parameters, so as to determine from them whether the posture of the vehicle needs to be adjusted.
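With the preset identification line written as Ax + By + C = 0 and the vehicle at the coordinate origin heading along x, the two formulas can be sketched directly (a hypothetical horizontal line 1.5 m to one side is used as input):

```python
import math

def relative_position(A: float, B: float, C: float):
    """Angle and distance of the vehicle relative to A*x + B*y + C = 0.

    delta_alpha = arctan(k) with slope k = -A/B, in degrees;
    delta_L     = |C| / sqrt(A^2 + B^2), the point-to-line distance
    from the origin (the vehicle's position).
    """
    k = -A / B
    delta_alpha = math.degrees(math.atan(k))
    delta_l = abs(C) / math.hypot(A, B)
    return delta_alpha, delta_l

# line y = 1.5, i.e. 0*x + 1*y - 1.5 = 0: parallel to the heading,
# 1.5 m to one side of the vehicle
angle, dist = relative_position(0.0, 1.0, -1.5)
print(angle, dist)
```
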
Furthermore, the historical ground image acquired by the camera device at the previous moment is read, and the change in the relative position of the vehicle is reflected by comparing that historical ground image with the second ground image, so as to determine the displacement parameter of the vehicle.
Understandably, in order to determine whether the vehicle is parallel to the preset identification lines and whether the distances to the preset identification lines on both sides are equal, a preset included angle and a preset distance are set in advance. The included angle in the relative position parameters is compared with the preset included angle to obtain an included angle difference, which represents the difference between the actual angle and the theoretical angle of the vehicle; the smaller the difference, the better the parallelism between the vehicle and the preset identification line. Meanwhile, the distance in the relative position parameters is compared with the preset distance to obtain a distance difference, which represents the difference between the actual distance and the theoretical distance of the vehicle; the smaller the difference, the greater the likelihood that the distances from the vehicle to the preset identification lines on both sides are equal.
The included angle difference and the distance difference obtained through comparison are determined as the position adjustment parameters, and the position adjustment parameters and the displacement parameter are taken as the attitude adjustment parameters, so as to adjust the angle of the vehicle's position and its distances to the preset identification lines on both sides, and to calculate the displacement of the vehicle between the two moments; the displacement distance is calculated while collisions with the goods stacked on both sides are avoided, and the driving distance between the vehicle and the destination is determined.
Further, the posture of the vehicle can be adjusted by the control center of the vehicle, or by an upper computer in communication connection with the vehicle. Specifically, when the vehicle is adjusted by the upper computer, the vehicle sends the position adjustment parameters and the displacement parameter, serving as the attitude adjustment parameters, to the upper computer; the upper computer adjusts the driving angle of the vehicle according to the included angle difference, adjusts the positions of the left and right sides of the vehicle according to the distance difference, and calculates the displacement distance traveled by the vehicle from the previous moment to the current moment according to the displacement parameter; the driving distance of the vehicle, representing the distance between the vehicle and the destination, is updated according to the displacement distance. The upper computer issues the determined adjustments and the adjusted parameters to the vehicle and controls the running state of the vehicle, realizing accurate transportation.
When the control center of the vehicle adjusts the posture of the vehicle, the control center directly adjusts the driving angle of the vehicle according to the included angle difference, adjusts the positions of the left and right sides of the vehicle according to the distance difference, calculates the displacement distance traveled by the vehicle from the previous moment to the current moment according to the displacement parameter, and updates the driving distance of the vehicle, representing the distance between the vehicle and the destination, according to the displacement distance; the running state of the vehicle is thereby controlled and accurate transportation is realized.
Further, step S500 includes: identifying first data points in the second ground image and second data points in the historical ground image, and screening out first coordinate points in the first data points and second coordinate points in the second data points; and respectively determining a first central coordinate in each first coordinate point and a second central coordinate in each second coordinate point, and determining the displacement parameter of the vehicle according to the first central coordinate and the second central coordinate.
Furthermore, the displacement parameters used for calculating the displacement distance comprise a displacement value and a displacement angle, wherein the displacement value is the distance between the position of the vehicle at the previous moment and its position at the current moment, and the displacement angle is the included angle between the vehicle and the driving direction, i.e. the x-axis direction. In this embodiment, when the displacement parameters are determined, the black diamond blocks of the preset identification line in the second ground image are extracted, the centroid of each black diamond block is identified, and each centroid is determined as a first data point in the second ground image; meanwhile, the black diamond blocks of the preset identification line in the historical ground image are extracted, the centroid of each block is identified, and each centroid is determined as a second data point in the historical ground image. Then, the first data points are screened according to the linear equation of the preset identification line in the second ground image, and the points belonging to that linear equation are determined as first coordinate points; meanwhile, the second data points are screened according to the linear equation of the preset identification line in the historical ground image, and the points belonging to that linear equation are determined as second coordinate points. It should be noted that the slopes of the two linear equations should be within a preset range; if a slope exceeds the preset range, it indicates that the displacement of the vehicle between the two moments is large and an abnormal condition has occurred. This on the one hand monitors the displacement of the vehicle and on the other hand triggers regeneration of the linear equation, ensuring the correctness of the calculation.
Further, the first coordinate points and the second coordinate points are screened to determine the effective points belonging to both the historical ground image and the second ground image, and from the respective effective points the first center coordinate among the first coordinate points and the second center coordinate among the second coordinate points are determined. Specifically, the first effective point corresponding to each first coordinate point is determined according to the second coordinate points, and the first effective points are averaged to determine the first center coordinate; the second effective point corresponding to each second coordinate point is determined according to the first coordinate points, and the second effective points are averaged to generate the second center coordinate.
When determining the first center coordinate among the first coordinate points, the second coordinate points are used as the basis, and for each first coordinate point the point closest to it is screened out from the second coordinate points; each of these nearest points is then taken as a first effective point, and the mean of their coordinate values is computed, the resulting mean being the first center coordinate corresponding to the first coordinate points. For example, if the first coordinate points comprise N points a1(x1, y1), a2(x2, y2), a3(x3, y3), a4(x4, y4), ..., an(xn, yn), the point closest to a1 among the second coordinate points is determined to be b1, the point closest to a2 is b2, the point closest to a3 is b3, the point closest to a4 is b4, and the point closest to an is bn; the coordinate values of b1, b2, b3, b4, ..., bn are averaged to obtain the mean (x, y), which is the first center coordinate of the first coordinate points.
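The nearest-point matching and averaging described above can be sketched as follows (the toy point sets are hypothetical; the distant outlier in the second set is never matched and so never contributes):

```python
import numpy as np

def first_center_coordinate(first_pts: np.ndarray, second_pts: np.ndarray):
    """For each first coordinate point a_i, find the nearest point b_i
    among the second coordinate points; the mean of those nearest points
    is the first center coordinate (mirroring the a1..an / b1..bn example)."""
    nearest = []
    for a in first_pts:
        d = np.linalg.norm(second_pts - a, axis=1)  # Euclidean distances
        nearest.append(second_pts[np.argmin(d)])    # closest b_i
    return np.mean(np.array(nearest), axis=0)

first = np.array([[0.0, 0.0], [2.0, 0.0]])
second = np.array([[0.1, 0.0], [2.1, 0.0], [9.0, 9.0]])
print(first_center_coordinate(first, second))  # mean of (0.1, 0) and (2.1, 0)
```

The second center coordinate is obtained symmetrically, swapping the roles of the two point sets.
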
Similarly, when determining the second center coordinate among the second coordinate points, the first coordinate points are used as the basis, and for each second coordinate point the point closest to it is screened out from the first coordinate points; each of these nearest points is then taken as a second effective point, and the mean of their coordinate values is computed, the resulting mean being the second center coordinate corresponding to the second coordinate points.
Further, the displacement value and the displacement angle in the displacement parameters can be calculated from the first center coordinate and the second center coordinate. Specifically, the slope of the straight line formed by the first center coordinate and the second center coordinate is calculated, and the relative displacement value of the vehicle is calculated from the two coordinates; the displacement angle of the vehicle is calculated from the slope of the straight line, and the relative displacement value and the displacement angle are determined as the displacement parameters of the vehicle.
Further, with (x1, y1) as the first center coordinate and (x0, y0) as the second center coordinate, a straight line is formed between the two; the slope of this straight line is calculated from the two coordinates, and the angle corresponding to the slope reflects the angle change of the vehicle relative to the preset identification line between the two moments. Meanwhile, since the first center coordinate and the second center coordinate reflect the displacement of the vehicle between the two moments, the relative displacement value of the vehicle can be calculated from them. The slope k is calculated by the formula k = (y1 − y0)/(x1 − x0), and the relative displacement value ΔS is calculated by the formula ΔS = ((x1 − x0)² + (y1 − y0)²)^(1/2).
Further, the displacement angle of the vehicle is calculated from the slope k; the displacement angle is the angle change of the vehicle relative to the preset identification line between the two moments, and its formula is θ = tan⁻¹(k). The calculated relative displacement value and displacement angle are determined as the displacement parameters of the vehicle, so that the displacement distance of the vehicle between the two moments can be calculated from the displacement parameters. The projection of the relative displacement value in the x-axis direction is calculated from the relative displacement value and the displacement angle in the displacement parameters; the value of this projection is the displacement distance that the vehicle has traveled along the driving direction. The driving distance between the vehicle and the destination is then updated by the displacement distance, realizing accurate transportation.
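The displacement computation — slope k, relative displacement ΔS, displacement angle θ, and the projection of ΔS onto the driving (x) direction — can be sketched as follows (the sample coordinates are hypothetical):

```python
import math

def displacement_parameters(c1, c0):
    """From the first center coordinate (x1, y1) and the second center
    coordinate (x0, y0):
      k       = (y1 - y0) / (x1 - x0)
      dS      = sqrt((x1 - x0)^2 + (y1 - y0)^2)
      theta   = arctan(k)
      along_x = dS * cos(theta)   # travel distance along the driving direction
    """
    (x1, y1), (x0, y0) = c1, c0
    k = (y1 - y0) / (x1 - x0)
    ds = math.hypot(x1 - x0, y1 - y0)
    theta = math.atan(k)
    return k, ds, theta, ds * math.cos(theta)

# a 3-4-5 displacement: dx = 3, dy = 4
k, ds, theta, along_x = displacement_parameters((4.0, 3.0), (1.0, -1.0))
print(k, ds, along_x)
```
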
In the vehicle navigation method provided by this embodiment, a second ground image is acquired based on the camera device, and a relative position parameter between the vehicle and a preset identification line is determined according to the second ground image; reading a historical ground image acquired based on the camera device, and determining a displacement parameter of the vehicle according to the second ground image and the historical ground image; and then determining a position adjustment parameter according to the relative position parameter, and taking the position adjustment parameter and the displacement parameter as attitude adjustment parameters to adjust the attitude of the vehicle.
Based on the first embodiment, a third embodiment of the vehicle navigation method of the present invention is proposed, in this embodiment, before step S200, the vehicle navigation method further includes:
step S700, determining identification lines on two sides of the target storage position based on a third ground image currently shot by the camera device;
and S800, controlling the vehicle to move reversely based on the two side identification lines, and controlling the vehicle to stop moving when a rear stop line is monitored or when the current goods detection of the anti-collision sensor is determined, wherein the anti-collision sensor is installed at the tail end of the vehicle.
The vehicle is a forklift provided with two anti-collision sensors, which are respectively installed at the tail ends of the forks of the forklift; when the forks are lifted, pallet goods at a specified distance behind can be detected, and when the forks are put down, anti-collision detection for pallet insertion is realized.
In this embodiment, when the vehicle stops rotating, the two side identification lines of the target storage location are determined based on the third ground image currently captured by the camera device; the two side identification lines are determined in a manner similar to the edge identification line, that is, the linear equations of the two side identification lines are determined first and the lines are determined from those equations. The vehicle is then controlled to move in reverse based on the two side identification lines, so that the vehicle enters the target storage location. During the reverse movement of the vehicle, the ground images shot by the camera device and the detection results of the anti-collision sensor are acquired in real time; the vehicle is controlled to stop moving when it is determined from the ground images that the rear stop line is monitored, or when it is determined from the detection results that the anti-collision sensor currently detects goods.
It is understood that the steps in the second embodiment may be performed first when the vehicle stops rotating, to achieve the attitude adjustment of the vehicle.
In the vehicle navigation method provided by this embodiment, the identification lines on the two sides of the target storage location are determined based on the third ground image currently shot by the camera device; the vehicle is then controlled to move in reverse based on the two side identification lines, and controlled to stop moving when a rear stop line is monitored or when it is determined that the anti-collision sensor, installed at the tail end of the vehicle, currently detects goods, so that the vehicle can move accurately in the storage space and the navigation efficiency of the vehicle is further improved.
Based on the first embodiment, a fourth embodiment of the vehicle navigation method of the present invention is proposed, in this embodiment, after step S100, the vehicle navigation method further includes:
c, determining whether the vehicle meets a turning condition or not based on the first position information and a target position;
and d, if not, acquiring a coordinate origin corresponding to the vehicle when the vehicle is determined to be located at a first designated position corresponding to the target library position based on the first position information.
In this embodiment, when the first position information is acquired, whether the vehicle meets the u-turn condition is determined based on the first position information and the target storage location. Specifically, the moving direction of the vehicle is determined according to the first position information, and whether the vehicle currently needs to move to a ground-pile storage area on the other side is determined based on the moving direction and the target storage location; if not, step S200 is executed.
Further, in an embodiment, after the step c, the method further includes:
step e, if yes, determining a second designated position corresponding to the vehicle based on the current position information;
step f, controlling the vehicle based on the second designated position;
step g, when the vehicle is determined to be located at the second designated position based on the current second position information of the vehicle, controlling the vehicle to rotate based on a second coordinate origin corresponding to the vehicle;
step h, when the vehicle rotates to a third designated position corresponding to the target storage location, controlling the vehicle to stop rotating.
In this embodiment, if the vehicle meets the u-turn condition, the second designated position corresponding to the vehicle is determined based on the current position information, and the vehicle is controlled based on the second designated position so that it moves to that position. The second designated position may be set according to the target storage location, so that the vehicle can reach the target storage location quickly after turning around at the second designated position.
Then, when the vehicle is determined to be located at the second designated position based on its current second position information, the vehicle is controlled to rotate about the second coordinate origin corresponding to the vehicle, that is, the vehicle rotates 180 degrees so as to turn around. When the vehicle rotates to the third designated position corresponding to the target storage location, it is controlled to stop rotating, at which point the u-turn operation is complete.
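The rotate-until-180-degrees step can be sketched as below. This is an illustration under stated assumptions: `get_heading` (degrees), `rotate_step`, and `stop` are hypothetical hooks into the vehicle controller, and the angular tolerance is an invented parameter standing in for the "third designated position" check.

```python
# Hedged sketch of the turn-around control: rotate the vehicle in place
# about its coordinate origin until its heading has changed by 180 degrees.
# get_heading/rotate_step/stop are assumed controller hooks.

def turn_around(get_heading, rotate_step, stop, tol_deg=2.0):
    """Rotate in place until the heading differs from the start by 180
    degrees (within tol_deg), then stop."""
    target = (get_heading() + 180.0) % 360.0
    while True:
        # Smallest signed angular difference between current heading and target.
        diff = abs((get_heading() - target + 180.0) % 360.0 - 180.0)
        if diff <= tol_deg:
            stop()  # third designated position reached; end the u-turn
            return
        rotate_step()
```

A real controller would also bound the loop and filter the heading signal; the sketch shows only the stopping rule.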
It is to be understood that, when the vehicle is determined to be located at the second designated position based on its current second position information, the steps in the third embodiment may be performed to adjust the pose of the vehicle.
In the vehicle navigation method provided by this embodiment, whether the vehicle meets the u-turn condition is determined based on the first position information and the target storage location; if not, the step of acquiring the coordinate origin corresponding to the vehicle is performed when the vehicle is determined to be located at the first designated position corresponding to the target storage location based on the first position information. The subsequent steps are thus performed only when the vehicle does not need to turn around, achieving accurate navigation of the vehicle and further improving its navigation efficiency.
Based on the above embodiments, a fifth embodiment of the vehicle navigation method of the present invention is proposed. In this embodiment, the vehicle navigation method further includes:
step i, when the vehicle is determined to be in a narrow-road linear movement state based on current third position information of the vehicle and a navigation path corresponding to the vehicle, acquiring a fourth ground image currently shot by the camera device, and identifying initial positions of all identification elements in the fourth ground image;
step j, separating a ground characteristic region from the fourth ground image, and determining the centroid position of a target element in each identification element according to the ground characteristic region and each initial position;
step k, determining the depth data coordinates of each target element according to each centroid position;
step l, according to each depth data coordinate, identifying a second target linear equation corresponding to an edge identification line in the fourth ground image, and determining a calibration position of the vehicle based on the second target linear equation;
step m, determining the target position and the attitude information of the vehicle based on the calibration position, and controlling the vehicle based on the target position and the attitude information.
In this embodiment, when the vehicle is determined to be in the narrow-road linear movement state based on the current third position information of the vehicle and the navigation path corresponding to the vehicle, the fourth ground image currently shot by the camera device is acquired, and the second target linear equation corresponding to the edge identification line in the fourth ground image is determined according to the fourth ground image. The narrow-road linear movement state is the driving state in which the vehicle moves in a straight line in the narrow road between two ground pile storages. Specifically, the vehicle is determined to be in the narrow road according to the third position information and the navigation path corresponding to the vehicle, and when it is determined according to the third position information and the target storage location that the vehicle needs to move in a straight line, the vehicle is in the narrow-road linear movement state. The manner of determining the second target linear equation corresponding to the edge identification line in the fourth ground image is similar to the manner of determining the first target linear equation in the above embodiment, and will not be described in detail herein.
Then, the calibration position of the vehicle is determined based on the second target linear equation; the target position and the attitude information of the vehicle are determined based on the calibration position, and the vehicle is controlled based on the target position and the attitude information, thereby achieving the linear movement of the vehicle in the narrow road.
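One common way to recover a line equation from the depth-data coordinates of the identified target elements is a least-squares fit; the patent does not name its fitting method, so the following is an illustrative sketch rather than the claimed algorithm.

```python
# Illustrative sketch (not the patent's exact algorithm): fit a line
# y = k*x + b to the (x, y) depth-data coordinates of the target elements
# by ordinary least squares, yielding a candidate edge-line equation.

def fit_line(points):
    """Fit y = k*x + b to a list of (x, y) coordinates; return (k, b)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    # Closed-form least-squares solution for slope and intercept.
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - k * sx) / n
    return k, b
```

The resulting `(k, b)` pair plays the role of a "linear equation" such as the second target linear equation; a robust estimator (e.g. RANSAC) would be preferable when outlier centroids are present.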
Further, in an embodiment, the two camera devices are respectively disposed in front of and at the side of the vehicle, and the step of identifying the second target linear equation corresponding to the edge identification line in the fourth ground image according to each depth data coordinate includes:
according to the depth data coordinates, determining a third linear equation of the edge identification line corresponding to the front camera device and a fourth linear equation of the edge identification line corresponding to the side camera device; and fusing based on the third linear equation and the fourth linear equation to obtain the second target linear equation.
The camera device comprises two depth cameras which are respectively disposed in front of and at the side of the vehicle, and the fourth ground image comprises a front ground image and a side ground image. The third linear equation of the edge identification line corresponding to the front camera device can be obtained according to the front ground image, and the fourth linear equation of the edge identification line corresponding to the side camera device can be obtained according to the side ground image. The coordinate system in which the third linear equation is located and the coordinate system in which the fourth linear equation is located are then fused, specifically according to a fusion filtering algorithm, to obtain a fused coordinate system and the second target linear equation in the fused coordinate system.
If there is only one linear equation in the fused coordinate system, the coordinate origin of the fused coordinate system is the calibration position; if there are two linear equations in the fused coordinate system and the two linear equations are perpendicular, the intersection point of the two linear equations is the calibration position.
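The two-line case above reduces to intersecting two lines in the fused coordinate system. A minimal sketch, assuming both lines are expressed in slope-intercept form `y = k*x + b` (an assumption; the patent does not fix a representation, and a vertical line would need a different form):

```python
# Sketch of the calibration-position rule: given two non-parallel lines
# y = k1*x + b1 and y = k2*x + b2 in the fused coordinate system, the
# calibration position is their intersection point.

def intersection(k1, b1, k2, b2):
    """Return the (x, y) intersection of two non-parallel lines."""
    x = (b2 - b1) / (k1 - k2)  # solve k1*x + b1 = k2*x + b2
    return x, k1 * x + b1
```

For perpendicular lines `k1 * k2 == -1`, so the denominator is never zero and the intersection always exists.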
In the vehicle navigation method provided by this embodiment, when the vehicle is determined to be in the narrow-road linear movement state based on the current third position information of the vehicle and the navigation path corresponding to the vehicle, the fourth ground image currently shot by the camera device is acquired, and the initial positions of the identification elements in the fourth ground image are identified; the ground characteristic region is then separated from the fourth ground image, and the centroid position of the target element in each identification element is determined according to the ground characteristic region and each initial position; the depth data coordinates of each target element are then determined according to each centroid position; the second target linear equation corresponding to the edge identification line in the fourth ground image is then identified according to each depth data coordinate, and the calibration position of the vehicle is determined based on the second target linear equation; finally, the target position and the attitude information of the vehicle are determined based on the calibration position, and the vehicle is controlled based on the target position and the attitude information. By adjusting the attitude of the vehicle and controlling it according to the target position when it is in the narrow-road linear movement state, the rapid linear movement of the vehicle in the narrow road is achieved, further improving the efficiency of vehicle navigation.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, where a vehicle navigation program is stored on the computer-readable storage medium, and the vehicle navigation program, when executed by a processor, implements the steps of the vehicle navigation method described in any one of the above.
The specific embodiment of the computer-readable storage medium of the present invention is substantially the same as the embodiments of the vehicle navigation method described above, and will not be described in detail herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or system in which the element is included.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solution of the present invention or the portions contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) as described above and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (11)

1. A vehicle navigation method, characterized by comprising the steps of:
when the vehicle is determined to be located at a first designated position corresponding to a target storage position based on current first position information of the vehicle, acquiring a coordinate origin corresponding to the vehicle;
controlling the vehicle to rotate based on the origin of coordinates, and determining whether an edge identification line corresponding to the target storage location is perpendicular to the vehicle based on a first ground image currently shot by a camera mounted on the vehicle;
when the edge identification line is perpendicular to the vehicle, controlling the vehicle to stop rotating;
the step of determining whether the edge identification line corresponding to the target storage location is perpendicular to the vehicle based on a first ground image currently shot by a camera mounted on the vehicle comprises:
determining the centroid position of a target element in each identification element of the first ground image;
determining the depth data coordinates of each target element according to each centroid position, and determining a first target linear equation corresponding to the edge identification line according to each depth data coordinate;
determining whether the edge identification line is perpendicular to the vehicle based on the first target linear equation.
2. The vehicle navigation method according to claim 1, wherein the step of determining the centroid position of the target element among the identification elements of the first ground image comprises:
identifying initial positions of the identification elements in the first ground image;
and separating a ground characteristic region from the first ground image, and determining the centroid position of a target element in each identification element according to the ground characteristic region and each initial position.
3. The vehicle navigation method according to claim 2, wherein the two image capturing devices are respectively disposed in front of and at the side of the vehicle, and the step of determining the first target linear equation corresponding to the edge marking line based on each of the depth data coordinates includes:
determining a first linear equation of the edge identification line corresponding to the front camera device and a second linear equation of the edge identification line corresponding to the side camera device according to the depth data coordinates;
and fusing based on the first linear equation and the second linear equation to obtain the first target linear equation.
4. The vehicle navigation method of claim 1, wherein the step of controlling the rotation of the vehicle based on the origin of coordinates is preceded by:
acquiring a second ground image based on the camera device, and determining a relative position parameter between the vehicle and a preset identification line according to the second ground image;
reading a historical ground image acquired based on the camera device, and determining a displacement parameter of the vehicle according to the second ground image and the historical ground image;
and determining a position adjustment parameter according to the relative position parameter, and taking the position adjustment parameter and the displacement parameter as attitude adjustment parameters to adjust the attitude of the vehicle.
5. The vehicle navigation method of claim 1, wherein after the step of controlling the vehicle to stop rotating, the vehicle navigation method further comprises:
determining identification lines on two sides of the target storage position based on a third ground image currently shot by the camera device;
and controlling the vehicle to move reversely based on the two side identification lines, and controlling the vehicle to stop moving when a rear stop line is monitored or when a collision avoidance sensor is determined to detect goods currently, wherein the collision avoidance sensor is arranged at the tail end of the vehicle.
6. The vehicle navigation method of claim 1, wherein the step of obtaining the origin of coordinates corresponding to the vehicle upon determining that the vehicle is located at the first designated location corresponding to the target depot based on the current first location information of the vehicle comprises:
determining whether the vehicle meets a u-turn condition based on the first location information and the target storage location;
and if not, acquiring a coordinate origin corresponding to the vehicle when the vehicle is determined to be located at a first designated position corresponding to the target storage location based on the first position information.
7. The vehicle navigation method according to claim 6, wherein after the step of determining whether the vehicle satisfies a u-turn condition based on the first location information and the target storage location, the vehicle navigation method further comprises:
if yes, determining a second appointed position corresponding to the vehicle based on the first position information;
controlling the vehicle based on the second designated position;
when the vehicle is determined to be located at the second designated position according to the current second position information of the vehicle, controlling the vehicle to rotate on the basis of a second coordinate origin corresponding to the vehicle;
and when the vehicle rotates to a third designated position corresponding to the target storage position, controlling the vehicle to stop rotating.
8. The vehicle navigation method according to any one of claims 1 to 7, characterized in that the vehicle navigation method further includes:
when the vehicle is determined to be in a narrow road linear movement state based on the current third position information of the vehicle and a navigation path corresponding to the vehicle, acquiring a fourth ground image currently shot by the camera device, and identifying the initial position of each identification element in the fourth ground image;
separating a ground characteristic region from the fourth ground image, and determining the centroid position of a target element in each identification element according to the ground characteristic region and each initial position;
determining the depth data coordinates of each target element according to each centroid position;
according to each depth data coordinate, identifying a second target linear equation corresponding to an edge identification line in the fourth ground image, and determining the calibration position of the vehicle based on the second target linear equation;
and determining a target position and attitude information of the vehicle based on the calibration position, and controlling the vehicle based on the target position and the attitude information.
9. The vehicle navigation method according to claim 8, wherein the two camera devices are respectively disposed in front of and at the side of the vehicle, and the step of identifying the second target linear equation corresponding to the edge identification line in the fourth ground image according to each of the depth data coordinates includes:
according to the depth data coordinates, determining a third linear equation of the edge identification line corresponding to the front camera device and a fourth linear equation of the edge identification line corresponding to the side camera device;
and fusing based on the third linear equation and the fourth linear equation to obtain the second target linear equation.
10. A vehicular navigation apparatus, characterized by comprising: memory, a processor and a vehicle navigation program stored on the memory and executable on the processor, the vehicle navigation program when executed by the processor implementing the steps of the vehicle navigation method as claimed in any one of claims 1 to 9.
11. A computer-readable storage medium, characterized in that a vehicle navigation program is stored thereon, which when executed by a processor implements the steps of the vehicle navigation method according to any one of claims 1 to 9.
CN201911117651.8A 2019-11-12 2019-11-12 Vehicle navigation method, device and computer readable storage medium Active CN110837814B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911117651.8A CN110837814B (en) 2019-11-12 2019-11-12 Vehicle navigation method, device and computer readable storage medium
PCT/CN2020/112216 WO2021093420A1 (en) 2019-11-12 2020-08-28 Vehicle navigation method and apparatus, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911117651.8A CN110837814B (en) 2019-11-12 2019-11-12 Vehicle navigation method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110837814A CN110837814A (en) 2020-02-25
CN110837814B true CN110837814B (en) 2022-08-19

Family

ID=69575095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911117651.8A Active CN110837814B (en) 2019-11-12 2019-11-12 Vehicle navigation method, device and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN110837814B (en)
WO (1) WO2021093420A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110837814B (en) * 2019-11-12 2022-08-19 深圳创维数字技术有限公司 Vehicle navigation method, device and computer readable storage medium
CN111856537B (en) * 2020-06-18 2023-03-07 北京九曜智能科技有限公司 Navigation method and device for automatically driving vehicle
CN113341443A (en) * 2021-05-26 2021-09-03 和芯星通科技(北京)有限公司 Processing method of positioning track information and vehicle-mounted navigation device
CN113378735B (en) * 2021-06-18 2023-04-07 北京东土科技股份有限公司 Road marking line identification method and device, electronic equipment and storage medium
CN114038191B (en) * 2021-11-05 2023-10-27 青岛海信网络科技股份有限公司 Method and device for collecting traffic data
CN114265414A (en) * 2021-12-30 2022-04-01 深圳创维数字技术有限公司 Vehicle control method, device, equipment and computer readable storage medium
CN114004881B (en) * 2021-12-30 2022-04-05 山东捷瑞数字科技股份有限公司 Remote control method for erecting ignition tube on well nozzle
CN115218918B (en) * 2022-09-20 2022-12-27 上海仙工智能科技有限公司 Intelligent blind guiding method and blind guiding equipment
CN115601271B (en) * 2022-11-29 2023-03-24 上海仙工智能科技有限公司 Visual information anti-shake method, storage warehouse location state management method and system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1906063A (en) * 2004-05-06 2007-01-31 松下电器产业株式会社 Parking assisting apparatus
CN101920679A (en) * 2009-06-09 2010-12-22 株式会社电装 Parking assistance system
CN102152763A (en) * 2011-03-19 2011-08-17 重庆长安汽车股份有限公司 Parking auxiliary device
CN103234542A (en) * 2013-04-12 2013-08-07 东南大学 Combination vehicle curve driving track measurement method base on visual sense
CN105094134A (en) * 2015-08-25 2015-11-25 杭州金人自动控制设备有限公司 Image-patrolling-line based method for AGV (Automated Guided Vehicle) parking at designated point
CN105128746A (en) * 2015-09-25 2015-12-09 武汉华安科技股份有限公司 Vehicle parking method and parking system adopting vehicle parking method
CN109508021A (en) * 2018-12-29 2019-03-22 歌尔股份有限公司 A kind of guidance method of automatic guided vehicle, device and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05265547A (en) * 1992-03-23 1993-10-15 Fuji Heavy Ind Ltd On-vehicle outside monitoring device
JP5003946B2 (en) * 2007-05-30 2012-08-22 アイシン精機株式会社 Parking assistance device
CN109934140B (en) * 2019-03-01 2022-12-02 武汉光庭科技有限公司 Automatic reversing auxiliary parking method and system based on detection of ground transverse marking
CN110837814B (en) * 2019-11-12 2022-08-19 深圳创维数字技术有限公司 Vehicle navigation method, device and computer readable storage medium


Also Published As

Publication number Publication date
CN110837814A (en) 2020-02-25
WO2021093420A1 (en) 2021-05-20

Similar Documents

Publication Publication Date Title
CN110837814B (en) Vehicle navigation method, device and computer readable storage medium
CN109160452B (en) Unmanned transfer forklift based on laser positioning and stereoscopic vision and navigation method
US11320833B2 (en) Data processing method, apparatus and terminal
US10290115B2 (en) Device and method for determining the volume of an object moved by an industrial truck
KR102194426B1 (en) Apparatus and method for environment recognition of indoor moving robot in a elevator and recording medium storing program for executing the same, and computer program stored in recording medium for executing the same
CN111856491B (en) Method and apparatus for determining geographic position and orientation of a vehicle
US20200074192A1 (en) Vehicle-Mounted Image Processing Device
WO2021046716A1 (en) Method, system and device for detecting target object and storage medium
CN110082775B (en) Vehicle positioning method and system based on laser device
CN111797734A (en) Vehicle point cloud data processing method, device, equipment and storage medium
CN105431370A (en) Method and system for automatically landing containers on a landing target using a container crane
JPWO2016199366A1 (en) Dimension measuring apparatus and dimension measuring method
CN110789529B (en) Vehicle control method, device and computer-readable storage medium
CN111638530B (en) Fork truck positioning method, fork truck and computer readable storage medium
JPWO2019187816A1 (en) Mobiles and mobile systems
CN110764110B (en) Path navigation method, device and computer readable storage medium
WO2022000197A1 (en) Flight operation method, unmanned aerial vehicle, and storage medium
CN110796118B (en) Method for obtaining attitude adjustment parameters of transportation equipment, transportation equipment and storage medium
JP2023507675A (en) Automated guided vehicle control method and control system configured to execute the method
CN110816522B (en) Vehicle attitude control method, apparatus, and computer-readable storage medium
CN113841101A (en) Method for creating an environment map for use in autonomous navigation of a mobile robot
EP4116941A2 (en) Detection system, processing apparatus, movement object, detection method, and program
US20230264938A1 (en) Obstacle detector and obstacle detection method
WO2023151603A1 (en) Cargo box storage method and robot
US20240104768A1 (en) Article detection device, calibration method, and article detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant