WO2022156630A1 - Hitching method and hitching system for docking a vehicle with a trailer - Google Patents

Hitching method and hitching system for docking a vehicle with a trailer

Info

Publication number
WO2022156630A1
WO2022156630A1 (PCT/CN2022/072334)
Authority
WO
WIPO (PCT)
Prior art keywords
point
trailer
vehicle
target
data
Prior art date
Application number
PCT/CN2022/072334
Other languages
English (en)
French (fr)
Inventor
史亮 (Shi Liang)
Original Assignee
北京九曜智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京九曜智能科技有限公司
Publication of WO2022156630A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Definitions

  • Embodiments of the present disclosure relate to an automatic docking method, and more particularly, to a hooking method and a hooking system for a vehicle to be docked with a trailer.
  • Self-driving vehicles are also known as driverless vehicles, computer-driven vehicles, or wheeled mobile robots.
  • Google's self-driving car obtained the first self-driving vehicle license in the United States in May 2012, and was expected to enter the market between 2015 and 2017.
  • Self-driving cars rely on artificial intelligence, visual computing, radar, surveillance devices, and global positioning systems to work together to allow computers to operate motor vehicles autonomously and safely without any human intervention.
  • Trailers are mainly used for towing broken-down vehicles on roads, illegally parked vehicles in cities, and for emergency rescue.
  • the trailer is attached to the power vehicle, and the power vehicle can drive the trailer to move.
  • At least one embodiment of the present disclosure provides a method for hitching a vehicle to a trailer, wherein a data acquisition device, an industrial control device, and a limit guide device are installed on the vehicle, at least two target points are arranged at the front end of the trailer, and a hook is provided at the front end of the trailer.
  • The hitching method includes: the vehicle obtains data containing target point information of the trailer through the data acquisition device, where the target point information includes the point cloud data corresponding to the target points; according to the point cloud data corresponding to the target points, the coordinate data of the target points are calculated and the position of the hook of the trailer is determined; the pose of the trailer is calculated according to the coordinates of the towing point of the vehicle; the industrial control device plans a path for the vehicle according to the pose of the trailer and drives the vehicle to the hook position of the trailer; and the vehicle is guided by its limit guide device to complete the hitching of the vehicle and the trailer.
  • the data acquisition device includes a lidar
  • the vehicle acquiring the data containing the target point information of the trailer through the data acquisition device includes: the industrial control device of the vehicle acquires the scan data from the lidar scanning the trailer; the industrial control device cyclically reads the scan data and compares the reflectivity of the point corresponding to each scan datum with a reflectivity threshold; in response to the reflectivity being greater than or equal to the reflectivity threshold, the industrial control device saves the data of the corresponding point and generates point cloud data; in response to the reflectivity being less than the reflectivity threshold, the industrial control device continues to read subsequent scan data.
  • the reflectivity of the target is different from the reflectivity of the surrounding environment.
  • the vehicle obtains the data including the target point information of the trailer through the data acquisition device, and further includes: after the industrial control device saves the data of the corresponding point and generates the point cloud data, Perform point cloud filtering on point cloud data.
  • the reflectivity threshold is stored in the memory of the industrial control device.
  • performing point cloud filtering on the point cloud data includes: the industrial control device traverses the point cloud data; for each point of the point cloud, it judges whether the point is the first point of the point cloud; in response to the point being the first point, it establishes a point set and stores the point in the point set; in response to the point not being the first point, it determines whether the distance between the point and the previous point exceeds a preset threshold; in response to the distance exceeding the preset threshold, it establishes a new point set and stores the point in that point set; in response to the distance not exceeding the preset threshold, it stores the point in the current point set; finally, each established point set is processed to obtain and store its center coordinates.
  • the data acquisition device includes a camera
  • the vehicle acquires data including target point information of the trailer through the data acquisition device, including: the camera acquires an image and transmits the image to the industrial control device;
  • the industrial control device performs visual filtering on the image to obtain the data of the trailer including the target point information.
  • the industrial control device performing visual filtering on the image includes: converting the image into HSV or HSL format; determining the coordinates of the target point according to the color space; and calculating the coordinates of the pixel points according to the coordinates of the target point and the distance between each pixel point and the target point.
  • the color of the target is different from the color of the surrounding environment.
  • the data acquisition device includes an ultra-wideband receiver
  • the vehicle obtaining the data containing the target point information of the trailer through the data acquisition device includes: the vehicle uses the ultra-wideband receiver to receive the data emitted by the target points; the distance between the UWB receiver and each target point is calculated from the time of flight; and the coordinates of the target points are calculated from the distances.
  • the data acquisition device includes a camera and a lidar
  • the vehicle acquiring the data containing the target point information of the trailer through the data acquisition device includes: calibrating the lidar and the camera; acquiring an image of the target points with the camera; acquiring the point cloud data of the target points with the lidar; fusing the image data and the point cloud data to form fused point cloud data; recognizing the front baffle of the trailer in the image with a deep learning algorithm; and obtaining the point cloud data of the front baffle from the fused data, where the point cloud data of the front baffle is the point cloud data of the target points.
  • determining the position of the hook of the trailer according to the coordinate data corresponding to the target points includes: calculating the distance between the target points from their coordinate data; comparing the distance between the target points with the preset target point information to determine the model of the trailer, where different trailer models have different distances between target points; and determining the position of the hook of the trailer according to the trailer model and the preset target point information.
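The model lookup described above can be sketched as follows; the model names, preset spacings, hook offsets, and matching tolerance are hypothetical values for illustration only.

```python
import math

# Hypothetical preset target information per trailer model: the spacing
# between the two targets and the hook offset behind their midpoint.
PRESET_TARGETS = {
    "A": {"target_spacing": 2.0, "hook_offset": 0.0},
    "B": {"target_spacing": 1.5, "hook_offset": 0.1},
}

def identify_trailer(left, right, tolerance=0.05):
    """Compare the measured target spacing against each preset to pick the model."""
    spacing = math.dist(left, right)
    for model, preset in PRESET_TARGETS.items():
        if abs(spacing - preset["target_spacing"]) <= tolerance:
            return model, preset
    return None, None  # no matching model: the hitching attempt is abandoned

model, preset = identify_trailer((0.0, 1.0), (0.0, -1.0))
print(model)  # spacing 2.0 matches model "A"
```

Because the spacings of known trailer models differ, a simple nearest-match against the stored table is enough to recover both the model and the hook position.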
  • the pose of the trailer includes the coordinates of the hook, the slope of the trailer, and the distance from the hook to the vehicle.
  • the industrial control device planning a path for the vehicle according to the pose of the trailer and driving the vehicle to the hook position of the trailer includes: in response to the vehicle traveling to the hitching start position, the industrial control device calculates the pose of the trailer; it judges, according to the pose, whether a connection with the trailer can be established; in response to no connection being established, the industrial control device reports an error to the dispatching system and ends the hitching; in response to a connection being established, it plans a path for the vehicle according to the pose of the trailer; it judges whether the path is reachable; in response to the path being unreachable, the hitching is ended; in response to the path being reachable, the orientation of the vehicle body is adjusted according to the pose of the trailer and the vehicle is driven to the hook along the path.
  • guiding the vehicle through its limit guide device to complete the hitching of the vehicle and the trailer includes: the hook approaches the towing point under the guidance of the limit guide device; a limit sensor is provided, and when the hook reaches the towing point, the limit sensor is triggered and sends out a signal; in response to the vehicle receiving the signal, the towing point and the hook are coupled.
  • the limit guide device includes a figure-eight limit guide and a tongue guide.
  • At least one embodiment of the present disclosure provides a hitch system, including a vehicle and a trailer, the vehicle is equipped with a data acquisition device, an industrial control device, and a limit guide device, and at least two target points are provided at the front end of the trailer,
  • the vehicle is configured to drive towards the trailer under the driving of the industrial control device and complete the coupling with the trailer
  • the data acquisition device is configured to acquire data of the trailer including target point information, wherein the target point information includes the point corresponding to the target point cloud data
  • the industrial control device is configured to calculate the coordinate data corresponding to the target points according to their point cloud data and determine the position of the hook of the trailer, calculate the pose of the trailer according to the coordinates of the towing point of the vehicle, and plan a path for the vehicle according to the pose of the trailer and drive the vehicle to the hook position of the trailer
  • the limit guide device is configured to guide the vehicle and complete the coupling between the vehicle and the trailer.
  • FIG. 1A is a flowchart of a method for hitching a vehicle to a trailer provided by at least one embodiment of the present disclosure;
  • FIG. 1B is a schematic diagram of the installation of a lidar and targets according to at least one embodiment of the present disclosure;
  • FIG. 2 is a schematic flowchart of a method for hitching a vehicle to a trailer provided by at least one embodiment of the present disclosure;
  • FIG. 3A is a flowchart of a vehicle acquiring data containing target point information of a trailer through a data acquisition device provided by at least one embodiment of the present disclosure;
  • FIG. 3B is a schematic diagram of a data screening process provided by at least one embodiment of the present disclosure;
  • FIG. 4A is a flowchart of performing point cloud filtering on point cloud data provided by at least one embodiment of the present disclosure;
  • FIG. 4B is a schematic diagram of laser target point aggregation provided by at least one embodiment of the present disclosure;
  • FIG. 5A is a flowchart of a vehicle acquiring data containing target point information of a trailer through a data acquisition device according to at least one embodiment of the present disclosure;
  • FIG. 5B is a flowchart of visual filtering of an image by an industrial control device provided by at least one embodiment of the present disclosure;
  • FIG. 5C is a data processing flowchart for a visual target provided by at least one embodiment of the present disclosure;
  • FIG. 6A is a flowchart of a vehicle acquiring data containing target point information of a trailer through a data acquisition device according to at least one embodiment of the present disclosure;
  • FIG. 6B is a data processing flowchart for a radio target provided by at least one embodiment of the present disclosure;
  • FIG. 7A is a flowchart of a vehicle acquiring data containing target point information of a trailer through a data acquisition device according to at least one embodiment of the present disclosure;
  • FIG. 7B is a flowchart of a vision-fused-laser deep learning process provided by at least one embodiment of the present disclosure;
  • FIG. 8A is a flowchart of determining the position of the hook of the trailer according to the coordinate data corresponding to the target points provided by at least one embodiment of the present disclosure;
  • FIG. 8B is a schematic diagram of noise filtering provided by at least one embodiment of the present disclosure;
  • FIG. 9 is a schematic diagram of a calculation output provided by at least one embodiment of the present disclosure;
  • FIG. 10A is a flowchart of the industrial control device planning a path for the vehicle according to the pose of the trailer and driving the vehicle to the hook position of the trailer, provided by at least one embodiment of the present disclosure;
  • FIG. 10B is a schematic diagram of a vehicle traveling process provided by at least one embodiment of the present disclosure;
  • FIG. 11A is a flowchart of guiding and completing the coupling of the vehicle and the trailer through the limit guide device of the vehicle, provided by at least one embodiment of the present disclosure;
  • FIG. 11B is a schematic structural diagram of a limit guide device provided by at least one embodiment of the present disclosure;
  • FIG. 12 is a schematic block diagram of a hitching system provided by at least one embodiment of the present disclosure.
  • the trailer or the pallet is used to carry goods, and the front end is provided with a hitch device.
  • At present, the operation of connecting the powered vehicle to the trailer or pallet is mainly performed manually.
  • For self-driving vehicles, there is no driver or operator to manually connect the vehicle to the trailer or pallet; if a dedicated person is assigned to the connection, it increases labor costs and defeats the purpose of automatic driving.
  • Therefore, the method of fixed-position hitching is usually adopted: when the self-driving vehicle needs to be hitched to a trailer or pallet, the trailer or pallet is parked at a calibrated fixed position, and the self-driving vehicle hitches based on the preset position information.
  • At least one embodiment of the present disclosure provides a method for attaching a vehicle to a trailer, wherein a data acquisition device, an industrial control device, and a limit guide device are installed on the vehicle, and at least two target points are arranged at the front end of the trailer.
  • The hitching method includes: the vehicle obtains data containing target point information of the trailer through a data acquisition device, where the target point information includes the point cloud data corresponding to the target points; the coordinate data of the target points are calculated from the point cloud data and the position of the hook of the trailer is determined; the pose of the trailer is calculated; the industrial control device plans a path for the vehicle according to the pose of the trailer and drives the vehicle to the hook position of the trailer; and the vehicle is guided by its limit guide device to complete the hitching of the vehicle and the trailer.
  • The hitching method provided by the present disclosure solves the problem of a self-driving vehicle hitching a trailer or pallet, which existing self-driving and manually driven vehicles handle poorly. It can greatly improve the operating efficiency of a self-driving vehicle docking with a trailer, reduce manual intervention, make the operation process smoother, realize automatic hitching, and complete tasks that the original solutions could not, broadening the application range of automatic driving.
  • FIG. 1A is a flowchart of a method for attaching a vehicle to a trailer provided by at least one embodiment of the present disclosure.
  • a data acquisition device, an industrial control device and a limit guide device are installed on the vehicle, and at least two target points are arranged at the front end of the trailer.
  • the hooking method includes the following steps S001-S005.
  • Step S001 the vehicle acquires data of the trailer including target point information through the data acquisition device, wherein the target point information includes point cloud data corresponding to the target point.
  • the data acquisition device includes any one or more of a lidar, a camera, a UWB (ultra-wide band) receiver, and the like, which is not limited in the embodiments of the present disclosure.
  • the data containing target point information includes point cloud data, images, radio signals, etc. of the trailer, and may also be any other type of data, as long as the target point information can be reflected, which is not limited in the embodiments of the present disclosure.
  • Step S002 Calculate the coordinate data corresponding to the target point according to the point cloud data corresponding to the target point and determine the position of the hook of the trailer.
  • the center coordinate of the point cloud data corresponding to each target point is the coordinate data corresponding to the target point.
  • the coordinate average value of all point cloud data in each point cloud set can be calculated, and the coordinate average value can be used as the center coordinate.
  • the distance between the target points is calculated from the coordinate data corresponding to the target points, and the distance between the target points is compared with the preset target point information stored in the industrial control device to determine the model of the trailer.
  • different models of trailers have different preset target information.
  • For example, for trailer model A the preset distance between the target points is x, and for trailer model B the preset distance between the target points is y.
  • Obtain the actual distance between the two target points by any of the methods described in steps S1 to S5 (to be described later), and compare the actual distance with the preset distances. If the actual distance is close to the preset distance x, the model of the trailer is A; if the actual distance is close to the preset distance y, the model of the trailer is B.
  • the preset distance between the target points may be stored in the memory of the industrial computer, or may be stored in other storage devices, which is not limited in the embodiment of the present disclosure. The above is an example of determining the model of the trailer, and the embodiments of the present disclosure are not limited to this, and any other suitable method may also be used to determine the model of the trailer.
  • two targets are set symmetrically around the hook of the trailer, and the positions of the targets and the hook of the trailer are distributed in a triangle.
  • the lengths of the sides of the triangle are different and are stored as preset target information in the industrial control system.
  • the position of the hook can be determined through the determined trailer model and preset target information.
  • Step S003 Calculate the pose of the trailer according to the coordinates of the towing point of the vehicle.
  • the pose data of the trailer includes the coordinates of the hitch, the slope of the trailer, and the distance from the hitch to the vehicle.
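Assuming the two targets are mounted symmetrically about the trailer hitch and coordinates are expressed in a vehicle-centred frame with the towing point at the origin (illustrative assumptions, not the patent's exact frame), the pose described above can be sketched as:

```python
import math

def trailer_pose(left_target, right_target, towing_point=(0.0, 0.0)):
    """Sketch of step S003: derive the trailer pose from the two
    target centres. The hook lies at the midpoint of the symmetric
    targets, the trailer slope is the heading of the line joining
    them, and the distance runs from the towing point to the hook."""
    (lx, ly), (rx, ry) = left_target, right_target
    hook = ((lx + rx) / 2, (ly + ry) / 2)      # midpoint of the two targets
    slope = math.atan2(ry - ly, rx - lx)       # orientation of the front baffle
    distance = math.dist(towing_point, hook)   # hook-to-vehicle distance
    return hook, slope, distance

hook, slope, dist_to_hook = trailer_pose((3.0, 1.0), (3.0, -1.0))
print(hook, slope, dist_to_hook)  # hook at (3.0, 0.0), 3.0 m away
```

These three quantities are exactly what the path planner in step S004 consumes.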
  • Step S004 The industrial control device plans a path for the vehicle according to the position and posture of the trailer, and drives the vehicle to travel to the hook position of the trailer.
  • path planning can be implemented not only by calculating the pose of the trailer, but also by calculating the pose of the vehicle, which is not limited in the present disclosure.
  • The industrial control device first determines whether the trailer has been found according to the trailer's pose data; if it has, the device plans a path for the autonomous vehicle according to the pose data and determines whether the path is reachable, then aligns the vehicle body according to the pose data and drives the vehicle to the hook position.
  • Step S005 Guide and complete the coupling of the vehicle and the trailer through the vehicle's limit guiding device.
  • the limit guide device includes a figure-eight limit guide and a tongue guide.
  • FIG. 1B is a schematic diagram of the installation of a laser radar and a target according to at least one embodiment of the present disclosure.
  • At least one embodiment of the present disclosure uses a lidar to identify a trailer.
  • a tractor vehicle 1 capable of automatic driving is adopted as the self-driving vehicle that provides power.
  • the middle of the rear end of the tractor vehicle 1 is provided with a traction point 2, and the tractor vehicle 1 is provided with a lidar 3.
  • Recognition is performed by the lidar 3.
  • the lidar 3 can be replaced with other data acquisition devices, such as cameras, ultra-wideband receivers, and the like.
  • The middle of the front end of the trailer 4 that needs to be docked with the tractor 1 is provided with a trailer hitch 5 (also called a hook) corresponding to the traction point 2, and a left target point 6 and a right target point 7 are respectively provided on the two sides of the front end of the trailer 4.
  • two targets are symmetrically installed at positions one meter to either side of the trailer hitch 5 of the trailer 4.
  • However, the present disclosure is not limited to the symmetrical installation of two targets and does not limit the installation positions or the shape of the targets; the size of a target only needs to ensure that the data acquisition equipment can identify it.
  • For example, the target is a rectangle of 50 mm × 250 mm, which ensures that the lidar 3 can hit the target, thereby realizing target recognition.
  • The height of an existing trailer body is usually 500 mm, and the target is installed on the front baffle of the trailer 4, for example on the lower side of the front baffle, so that loading and unloading of goods are not affected.
  • target recognition is achieved through reflectivity (also known as reflection intensity) combined with the target installation mode.
  • The present disclosure uses a material with a specific reflectivity to make the targets (also called laser targets), and filters the radar data (the data reflected by the targets) according to that specific reflectivity to find the targets. For example, a point whose reflectivity is greater than or equal to the reflectivity threshold is a target point.
  • Because the target points are installed according to a mathematical model, the preset target points can be screened out of the target point information, removing interference points.
  • the data screened by the lidar 3 is transmitted to the industrial control device (for example, an industrial computer) of the tractor 1, and the identification and position calculation of the trailer 4 are performed in the industrial computer.
  • the vehicle control unit in the industrial computer calculates the type and pose of the trailer 4, plans the hitching path according to the pose of the trailer 4, and controls the tractor 1 to perform the hitching.
  • FIG. 2 is a flowchart of a method for attaching a vehicle to a trailer provided by at least one embodiment of the present disclosure.
  • a method for hooking a vehicle to a trailer includes the following steps S1 to S10:
  • Step S1: The tractor 1 scans the target points with the lidar 3; because the laser reflectivity of the target points is clearly distinct from the surrounding environment, the point cloud data with the reflectivity of the target points is obtained.
  • Point cloud data refers to a set of vectors in a three-dimensional coordinate system. Scan data are recorded in the form of points, and each point contains three-dimensional coordinates; some points may also contain reflection intensity (reflectivity) or color information. The point cloud data obtained by the lidar 3 scanning the targets contains reflectivity information.
  • In step S1, the data acquisition device is a lidar. Because the reflectivity of the targets used here differs clearly from the environment, the targets are easier to identify and the measurement accuracy is high.
  • a method for the vehicle to acquire the data including the target point information of the trailer through the data acquisition device is shown in FIG. 3A .
  • the method includes steps S301-S304.
  • Step S301 the industrial control device of the vehicle acquires the scanning data of the lidar scanning the trailer.
  • Step S302 The industrial control device reads the scan data cyclically and compares the reflectivity of the point corresponding to the scan data with the reflectivity threshold.
  • Step S303 In response to the reflectivity being greater than or equal to the reflectivity threshold, the industrial control device saves the data of the corresponding point and generates point cloud data.
  • Step S304 In response to the reflectivity being less than the reflectivity threshold, the industrial control device continues to read subsequent scan data.
  • step S303 generating point cloud data is implemented through data filtering.
  • FIG. 3B is a schematic diagram of a data screening process provided by at least one embodiment of the present disclosure. Referring to FIG. 3B, scanning for target points means judging each scanned point and recording the points that meet the reflectivity threshold, thereby forming the point cloud data of the target points. The two target points, the left target point 6 and the right target point 7, form two pieces of point cloud data.
  • the screening of point cloud data includes the following steps:
  • S15: According to the reflectivity threshold, determine whether the reflectivity of the point qualifies, and the industrial computer records the qualifying points; if a point does not qualify, jump back to step S12. S16: The industrial computer saves the data of the qualifying points and generates the point cloud data.
  • That is, points whose reflectivity is greater than or equal to the reflectivity threshold are qualifying points, and these qualifying points form the point cloud data.
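The reflectivity screening described above can be sketched as follows; the point layout and the threshold value are illustrative assumptions, not values from the patent.

```python
# Sketch of the reflectivity screening loop (hypothetical data layout).
# Each scanned point is (x, y, z, reflectivity); points at or above the
# threshold are kept as target-point cloud data, the rest are skipped.

REFLECTIVITY_THRESHOLD = 200  # assumed value; the patent stores it in the industrial computer's memory

def screen_scan(scan_points, threshold=REFLECTIVITY_THRESHOLD):
    """Return only the points whose reflectivity meets the threshold."""
    point_cloud = []
    for x, y, z, refl in scan_points:            # cyclically read scan data
        if refl >= threshold:                    # S15: reflectivity qualifies
            point_cloud.append((x, y, z, refl))  # S16: save the point
        # otherwise continue with the next scan point
    return point_cloud

scan = [(1.0, 0.2, 0.5, 250), (1.1, 0.2, 0.5, 30), (3.0, -0.8, 0.5, 240)]
print(screen_scan(scan))  # only the two high-reflectivity points remain
```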
  • lidar scanning can refer to the conventional design, which will not be repeated here.
  • the following is the input data involved in the scanning of lidar 3 in an embodiment:
  • the point cloud input monitors the topic published by the lidar.
  • the following is an example of the topic format in the standard PointCloud2 format:
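As a reference, below is a minimal sketch of the standard ROS sensor_msgs/PointCloud2 layout (x, y, z, and intensity as little-endian float32). The field set and values follow common lidar conventions and are not the patent's actual topic definition.

```python
import struct

# Minimal sketch of a PointCloud2-style message as a plain dict.
fields = [
    {"name": "x",         "offset": 0,  "datatype": "FLOAT32"},
    {"name": "y",         "offset": 4,  "datatype": "FLOAT32"},
    {"name": "z",         "offset": 8,  "datatype": "FLOAT32"},
    {"name": "intensity", "offset": 12, "datatype": "FLOAT32"},
]
point_step = 16  # bytes per point: four float32 values

points = [(1.0, 0.2, 0.5, 250.0), (3.0, -0.8, 0.5, 240.0)]
data = b"".join(struct.pack("<ffff", *p) for p in points)

msg = {
    "height": 1,                # unordered cloud: a single row
    "width": len(points),
    "fields": fields,
    "is_bigendian": False,
    "point_step": point_step,
    "row_step": point_step * len(points),
    "data": data,
    "is_dense": True,
}
print(msg["width"], len(msg["data"]))  # 2 points, 32 bytes
```

The intensity field is what the screening step compares against the reflectivity threshold.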
  • After the industrial computer obtains the point cloud data of the laser target points, it filters the point cloud data according to reflectivity to optimize it. Since the reflectivity of the target points is clearly different from the environment, the point cloud coordinates can be determined by filtering.
  • FIG. 4A is a flowchart of performing point cloud filtering on point cloud data provided by at least one embodiment of the present disclosure.
  • the flowchart includes steps S401 to S407.
  • Step S401 The industrial control device traverses the point cloud data.
  • Step S402 For each point of the point cloud, determine whether the point is the first point of the point cloud.
  • Step S403 In response to the point being the first point, create a point set and store the point to the point set.
  • Step S404 In response to the point not being the first point, determine whether the distance between the point and the previous point exceeds a preset threshold.
  • Step S405 In response to the distance exceeding the preset threshold, establish a point set and store the point in the point set.
  • Step S406 In response to the distance not exceeding the preset threshold, store the point in the current point set.
  • Step S407 Calculate the established point sets respectively to obtain the center coordinates of each point set and store the center coordinates.
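Steps S401 to S407 can be sketched as follows; the 0.1 m gap threshold is an illustrative assumption.

```python
import math

def cluster_points(points, gap_threshold=0.1):
    """Steps S401-S407: split an ordered point cloud into point sets
    wherever the gap to the previous point exceeds the threshold,
    then return the center (coordinate mean) of each set."""
    point_sets = []
    for p in points:                  # S401: traverse the point cloud
        if not point_sets:            # S402/S403: first point -> new point set
            point_sets.append([p])
            continue
        prev = point_sets[-1][-1]
        gap = math.dist(p, prev)      # S404: distance to the previous point
        if gap > gap_threshold:       # S405: gap too large -> new point set
            point_sets.append([p])
        else:                         # S406: same point set
            point_sets[-1].append(p)
    # S407: center coordinates of each set as the per-axis mean
    return [tuple(sum(c) / len(s) for c in zip(*s)) for s in point_sets]

# Two well-separated targets yield two centers:
left = [(1.0, 1.0), (1.02, 1.0), (1.04, 1.0)]
right = [(1.0, -1.0), (1.02, -1.0)]
print(cluster_points(left + right))
```

Each resulting center is the coordinate datum of one target point, which feeds the model determination and pose calculation.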
  • 4B is a schematic diagram of the processing of target aggregation provided by at least one embodiment of the present disclosure.
  • point cloud filtering includes the following steps:
  • S27: Calculate the center coordinates of each point set and store them, thereby realizing the clustering of the filtered point cloud data.
  • the filtering of point cloud coordinates does not require complex modeling or deep learning.
  • step S3 the data acquisition device is a camera. Images are acquired with a camera, and since the color of the targets used here is clearly distinct from the environment, it is easy to filter out the targets. Data acquisition using visual targets has the advantage of low cost.
  • the method for the vehicle to acquire the data including the target point information of the trailer through the data acquisition device is shown in FIG. 5A .
  • the method includes steps S501-S502.
  • Step S501 the camera acquires the image and transmits the image to the industrial control device.
  • Step S502 The industrial control device performs visual filtering on the image to obtain data of the trailer including target point information.
  • the visual filtering of the image by the industrial control device in step S502 may include steps S511 to S513, as shown in FIG. 5B .
  • Step S511 Convert the image to HSV or HSL format.
  • RGB represents an image with three channels: red (R), green (G), and blue (B); different combinations of these three colors can form almost all other colors.
  • The HSV and HSL color spaces are closer to human perception of color than RGB. They express hue, saturation, and lightness intuitively, which makes color comparison convenient. In the HSV and HSL color spaces it is easier to track an object of a particular color than in RGB.
  • Step S512 Determine the coordinates of the target point according to the color space.
  • Step S513 Calculate the coordinates of the pixel point according to the coordinates of the target point and the distance between the pixel point and the target point.
  • FIG. 5C is a flowchart of data processing of a visual target provided by at least one embodiment of the present disclosure.
  • image processing includes the following steps:
  • S33 Calculate the coordinates of each pixel according to the known coordinates of the target and the distance between the pixel and the target.
  • S4 When using targets that can transmit radio (also called radio targets), use the UWB receiver to measure the distance to each target, and proceed to the calculation in S6 after obtaining the distances.
  • step S4 the data acquisition device is a UWB receiver.
  • the method for the vehicle to acquire the data containing the target point information of the trailer through the data acquisition device is shown in FIG. 6A .
  • the method includes steps S601-S603.
  • Step S601 The vehicle uses an ultra-wideband receiver to receive data transmitted by the target.
  • Step S602 Calculate the distance between the UWB receiver and the target point according to the flight time.
  • If the data is transmitted at time T1 and arrives at the UWB receiver at time T2, the time of flight between the transmitter and the UWB receiver is T2 minus T1.
  • Since radio waves propagate at the speed of light, the distance between the transmitter and the UWB receiver is the speed of light multiplied by the time of flight.
  • Step S603 Calculate the coordinates of the target point according to the distance.
  • Given the distances between the targets and between each target and the UWB receiver, the three-dimensional coordinates of the targets can be calculated.
  • FIG. 6B is a data processing flow chart of a radio target provided by at least one embodiment of the present disclosure.
  • the UWB receiver processing flow is as follows:
  • S41 Receive transmitter data through the UWB receiver.
  • the transmitter is a radio transmitter mounted on the target.
  • S42 Calculate the distance according to the flight time.
  • S43 Calculate the three-dimensional coordinates of the target point.
  • S44 Enter into S6 for calculation.
  • In step S5, the above methods using a lidar and a camera are combined.
  • The lidar and camera are installed on the vehicle and calibrated before use, that is, parameters are adjusted so that the point cloud data acquired by the lidar and the image acquired by the camera satisfy a specific mapping relationship.
  • When the data acquisition device includes a camera and a lidar, the method by which the vehicle acquires data of the trailer containing target information through the data acquisition device is shown in FIG. 7A.
  • the method includes steps S701-S706.
  • Step S701 Calibrate the lidar and the camera.
  • Step S702 acquiring an image of the target point through a camera.
  • Step S703 Acquire point cloud data of the target point through the laser radar.
  • Step S704 Fusion of image data and point cloud data to form fused point cloud data.
  • Step S705 Identify the front baffle of the trailer in the image through a deep learning algorithm.
  • Step S706 Obtain the point cloud data of the front baffle in the fused data, wherein the point cloud data of the front baffle is the point cloud data of the targets.
  • Since the targets are mounted on the front baffle, the point cloud data of the front baffle is the point cloud data of the targets.
  • FIG. 7B is a flowchart of a vision fusion laser deep learning process provided by at least one embodiment of the present disclosure.
  • For example, use a camera to capture images of the trailer.
  • For example, use the lidar to scan the trailer and obtain point cloud data based on reflectivity.
  • The baffle point cloud data is consistent with the target point cloud data and is passed to S6 for calculation.
  • Combining the two methods can further improve measurement accuracy.
  • S6 Through the above processing, point cloud data corresponding to at least two targets is obtained. Using the distances between the targets and the target information preset in the memory of the industrial computer, the trailer model is identified; the targets are then further filtered to achieve point cloud aggregation.
  • FIG. 8A shows a flowchart of determining the position of the hook of the trailer according to the coordinate data corresponding to the target point provided by at least one embodiment of the present disclosure.
  • the flowchart includes steps S801 to S803.
  • Step S801 Calculate the distance between the target points according to the coordinate data corresponding to the target points.
  • Step S802 Compare the distance between the target points with the preset target point information to determine the model of the trailer, wherein the distance between the target points is different for different trailer models.
  • Step S803 Determine the position of the hook of the trailer according to the model of the trailer and the preset target information.
  • FIG. 8B is a schematic diagram of processing of filtering targets provided by at least one embodiment of the present disclosure.
  • For different trailer types, the preset distances between the targets differ, and so do the target distribution patterns.
  • S65 Retain the points that conform to the distribution pattern, and filter the points that do not conform.
  • For example, if noise points are mixed in and the center points of three point sets are calculated, only two of the points conform to the distribution pattern, so the points that do not conform can be filtered out.
  • The pose of the trailer is calculated from the preset data in memory using the least squares method.
  • k is the slope of the line
  • x is the x-coordinate of the obtained point
  • y is the y-coordinate of the obtained point
  • c1 and c2 can be acquired in advance and stored in the memory of the industrial control device.
  • Through a coordinate transformation, the hitch point is taken as the origin of the radar coordinate system and the tractor's lidar as a coordinate point in that system; the following path planning is then performed.
  • The origin of the radar coordinate system is normally the data acquisition device (for example, the lidar or camera); through the coordinate transformation, the position of the trailer's hook becomes the origin of the radar coordinate system and the tractor's data acquisition device becomes another point in that system, so path planning can be performed from the coordinates of these points.
  • S9 The industrial computer drives the tractor to reverse toward the hitch point along the path planned from the pose.
  • FIG. 10A shows a flowchart, provided by at least one embodiment of the present disclosure, of the industrial control device planning a path for the vehicle according to the pose of the trailer and driving the vehicle to the hook position of the trailer.
  • the method includes steps S1001 to S1007.
  • Step S1001 In response to the vehicle traveling to the start position of the hitch, the industrial control device calculates the pose of the trailer.
  • Step S1002 Determine whether to establish communication with the trailer according to the pose of the trailer.
  • Step S1003 In response to not establishing communication with the trailer, the industrial control device reports an error to the dispatching system, and ends the current connection.
  • Step S1004 In response to establishing communication with the trailer, plan a path for the vehicle according to the pose of the trailer.
  • Step S1005 Determine whether the path is reachable.
  • Step S1006 In response to the path being unreachable, end this attachment.
  • Step S1007 In response to the path being reachable, adjust the orientation of the vehicle body according to the pose of the trailer, and drive the vehicle toward the hook along the path.
  • FIG. 10B shows a schematic diagram of a vehicle traveling process provided by at least one embodiment of the present disclosure.
  • The starting position for automatic hitching is a manually defined area; when the tractor reaches this area, the industrial computer can begin calculating the pose data of the trailer.
  • S92 Determine whether communication is established with the trailer according to the pose of the trailer.
  • In response to communication not being established, the industrial control device reports an error to the dispatching system and ends this hitching attempt; in response to communication being established, a path is planned for the vehicle according to the pose of the trailer.
  • For example, whether the trailer is found is judged from the calculated pose. If found, step S93 is performed; if not found, an error is reported to the dispatching system and this hitching attempt ends.
  • In step S94, determine whether the path is reachable; if reachable, hitch along the planned path and end the current operation; if not reachable, execute step S95.
  • For example, if the body of the vehicle and the body of the trailer are not on the same line, the vehicle body is straightened so that the vehicle and the trailer are aligned.
  • FIG. 11A shows a flowchart, provided by at least one embodiment of the present disclosure, of guiding and completing the coupling of the vehicle and the trailer by the vehicle's limit guide device.
  • the flowchart includes steps S1101 to S1103.
  • Step S1101 the hook approaches the traction point under the guidance of the limit guide device.
  • Step S1102 In response to the hook reaching the limit sensor set at the towing point, the limit sensor is triggered and a signal is sent.
  • Step S1103 In response to the vehicle receiving the signal, the tow point and the hitch are hooked up.
  • A limit guide device is installed at the hitch point of the vehicle; left and right splayed (figure-8) limit guides and a downward tongue guide are used to tolerate error and guide the hitch point of the trailer.
  • A limit sensor is set at the hitch point (towing point) of the vehicle. When the hitch point (hook) of the trailer is in place, the sensor is triggered and the pin at the towing point drops, completing the automatic hitching of the towing point and the hook.
  • the structural diagram of the limit guide device is shown in FIG. 11B .
  • FIG. 12 shows a schematic block diagram of an attachment system 1200 provided by at least one embodiment of the present disclosure.
  • the hitch system 1200 includes a vehicle 1201 and a trailer 1202.
  • the vehicle 1201 is equipped with a data acquisition device 1203, an industrial control device 1204, and a limit guide device 1205.
  • At the front end of the trailer 1202, at least two targets are provided.
  • the vehicle 1201 is configured to drive toward the trailer 1202 and complete the coupling with the trailer 1202 under the driving of the industrial control device 1204 .
  • the data acquisition device 1203 is configured to acquire data of the trailer 1202 including target point information, wherein the target point information includes point cloud data corresponding to the target points.
  • The industrial control device 1204 is configured to calculate the coordinate data corresponding to the targets from the point cloud data corresponding to the targets and determine the position of the hook of the trailer 1202, to calculate the pose of the trailer 1202 according to the coordinates of the towing point of the vehicle 1201, and to plan a path for the vehicle 1201 according to the pose and drive the vehicle 1201 to the hook position of the trailer 1202.
  • the limit guide device 1205 is configured to guide the vehicle 1201 and complete the coupling of the vehicle 1201 and the trailer 1202 .
  • For a detailed description of the hitching system 1200, reference may be made to the description of FIG. 1B above, which is not repeated here.
  • The present disclosure realizes automatic recognition and measurement of the trailer's parked pose without manual intervention, and can genuinely adjust the hitching path and hitch automatically, thereby improving process efficiency.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A hitching method for a vehicle to dock with a trailer. A data acquisition device, an industrial control device, and a limit guide device are installed on the vehicle, and at least two targets are provided at the front end of the trailer. The hitching method comprises: the vehicle acquiring, through the data acquisition device, data of the trailer containing target information, where the target information includes point cloud data corresponding to the targets (S001); calculating coordinate data corresponding to the targets from the point cloud data corresponding to the targets and determining the position of the hook of the trailer (S002); calculating the pose of the trailer according to the coordinates of the towing point of the vehicle (S003); the industrial control device planning a path for the vehicle according to the pose of the trailer and driving the vehicle toward the hook position of the trailer (S004); and guiding and completing the hitching of the vehicle and the trailer through the limit guide device of the vehicle (S005). The hitching method can improve the operating efficiency of a vehicle docking with a trailer, reduces manual intervention, realizes automatic hitching, and can improve the accuracy of docking.

Description

Hitching method and hitching system for a vehicle to dock with a trailer

This application claims priority to Chinese Patent Application No. 202110068304.1 filed on January 19, 2021, the entire disclosure of which is incorporated herein by reference as part of this application.

Technical Field

Embodiments of the present disclosure relate to an automatic docking method, and in particular to a hitching method and hitching system for a vehicle to dock with a trailer.

Background

Autonomous vehicles (self-piloting automobiles), also known as driverless cars, computer-driven cars, or wheeled mobile robots, are intelligent vehicles that achieve driverless operation through a computer system. They already have a history of several decades in the 20th century and showed a trend toward practical use at the beginning of the 21st century; for example, Google's self-driving car obtained the first autonomous vehicle license in the United States in May 2012 and was expected to enter the market between 2015 and 2017.

Autonomous vehicles rely on artificial intelligence, visual computing, radar, monitoring devices, and global positioning systems working in concert, so that a computer can operate a motor vehicle automatically and safely without any active human operation.

Trailers are mainly used for road breakdown vehicles, urban illegally parked vehicles, rescue operations, and the like; a trailer is hitched to a powered vehicle, and the powered vehicle drives the trailer forward as it travels.
Summary

At least one embodiment of the present disclosure provides a hitching method for a vehicle to dock with a trailer, wherein a data acquisition device, an industrial control device, and a limit guide device are installed on the vehicle, and at least two targets are provided at the front end of the trailer. The hitching method includes: the vehicle acquiring, through the data acquisition device, data of the trailer containing target information, where the target information includes point cloud data corresponding to the targets; calculating coordinate data corresponding to the targets from the point cloud data corresponding to the targets and determining the position of the hook of the trailer; calculating the pose of the trailer according to the coordinates of the towing point of the vehicle; the industrial control device planning a path for the vehicle according to the pose of the trailer and driving the vehicle toward the hook position of the trailer; and guiding and completing the hitching of the vehicle and the trailer through the limit guide device of the vehicle.

For example, in the hitching method provided by at least one embodiment of the present disclosure, the data acquisition device includes a lidar, and the vehicle acquiring data of the trailer containing target information through the data acquisition device includes: the industrial control device of the vehicle acquiring scan data of the lidar scanning the trailer; the industrial control device cyclically reading the scan data and comparing the reflectivity of the points corresponding to the scan data with a reflectivity threshold; in response to the reflectivity being greater than or equal to the reflectivity threshold, the industrial control device saving the data of the corresponding points and generating point cloud data; and in response to the reflectivity being less than the reflectivity threshold, the industrial control device continuing to read subsequent scan data.

For example, in the hitching method provided by at least one embodiment of the present disclosure, the reflectivity of the targets differs from the reflectivity of the surrounding environment.

For example, in the hitching method provided by at least one embodiment of the present disclosure, the vehicle acquiring data of the trailer containing target information through the data acquisition device further includes: after the industrial control device saves the data of the corresponding points and generates the point cloud data, performing point cloud filtering on the point cloud data.

For example, in the hitching method provided by at least one embodiment of the present disclosure, the reflectivity threshold is stored in the memory of the industrial control device.

For example, in the hitching method provided by at least one embodiment of the present disclosure, performing point cloud filtering on the point cloud data includes: the industrial control device traversing the point cloud data; for each point of the point cloud, judging whether the point is the first point of the point cloud; in response to the point being the first point, creating a point set and storing the point in the point set; in response to the point not being the first point, judging whether the distance between the point and the previous point exceeds a preset threshold; in response to the distance exceeding the preset threshold, creating a point set and storing the point in the point set; in response to the distance not exceeding the preset threshold, storing the point in the current point set; and computing each established point set to obtain the center coordinates of each point set and storing the center coordinates.

For example, in the hitching method provided by at least one embodiment of the present disclosure, the data acquisition device includes a camera, and the vehicle acquiring data of the trailer containing target information through the data acquisition device includes: the camera acquiring an image and transmitting the image to the industrial control device; and the industrial control device performing visual filtering on the image to obtain the data of the trailer containing target information.

For example, in the hitching method provided by at least one embodiment of the present disclosure, the industrial control device performing visual filtering on the image includes: converting the image to HSV or HSL format; determining the coordinates of the targets according to the color space; and calculating the coordinates of each pixel according to the coordinates of the targets and the distance between the pixel and the target.

For example, in the hitching method provided by at least one embodiment of the present disclosure, the color of the targets differs from the color of the surrounding environment.

For example, in the hitching method provided by at least one embodiment of the present disclosure, the data acquisition device includes an ultra-wideband receiver, and the vehicle acquiring data of the trailer containing target information through the data acquisition device includes: the vehicle receiving, with the ultra-wideband receiver, data transmitted by the targets; calculating the distance between the ultra-wideband receiver and each target from the time of flight; and calculating the coordinates of the targets from the distances.

For example, in the hitching method provided by at least one embodiment of the present disclosure, the data acquisition device includes a camera and a lidar, and the vehicle acquiring data of the trailer containing target information through the data acquisition device includes: calibrating the lidar and the camera; acquiring an image of the targets with the camera; acquiring point cloud data of the targets with the lidar; fusing the image data and the point cloud data to form fused point cloud data; identifying the front baffle of the trailer in the image with a deep learning algorithm; and obtaining the point cloud data of the front baffle in the fused data, where the point cloud data of the front baffle is the point cloud data of the targets.

For example, in the hitching method provided by at least one embodiment of the present disclosure, determining the position of the hook of the trailer from the coordinate data corresponding to the targets includes: calculating the distances between the targets from the coordinate data corresponding to the targets; comparing the distances between the targets with preset target information to determine the model of the trailer, where the distances between the targets differ for different trailer models; and determining the position of the hook of the trailer according to the model of the trailer and the preset target information.

For example, in the hitching method provided by at least one embodiment of the present disclosure, the coordinates of the position of the hook are (Xc, Yc), the coordinates of the first target are (Xl, Yl), and the coordinates of the second target are (Xr, Yr); the formulas for calculating the hook coordinates are Xc=(Xl+Xr)/2+c1 and Yc=(Yl+Yr)/2+c2, where c1 is the distance in the x direction between the midpoint of the line connecting the targets and the hook, and c2 is the distance in the y direction between the midpoint of the line connecting the targets and the hook.

For example, in the hitching method provided by at least one embodiment of the present disclosure, the pose of the trailer includes the coordinates of the hook, the slope of the trailer, and the distance from the hook to the vehicle.

For example, in the hitching method provided by at least one embodiment of the present disclosure, the industrial control device planning a path for the vehicle according to the pose of the trailer and driving the vehicle toward the hook position of the trailer includes: in response to the vehicle traveling to the hitching start position, the industrial control device calculating the pose of the trailer; judging, according to the pose of the trailer, whether a connection is established with the trailer; in response to no connection being established with the trailer, the industrial control device reporting an error to the dispatching system and ending this hitching attempt; in response to a connection being established with the trailer, planning a path for the vehicle according to the pose of the trailer; judging whether the path is reachable; in response to the path being unreachable, ending this hitching attempt; and in response to the path being reachable, adjusting the orientation of the vehicle body according to the pose of the trailer and driving the vehicle toward the hook along the path.

For example, in the hitching method provided by at least one embodiment of the present disclosure, guiding and completing the hitching of the vehicle and the trailer through the limit guide device of the vehicle includes: the hook approaching the towing point under the guidance of the limit guide device; in response to the hook reaching a limit sensor provided at the towing point, the limit sensor being triggered and emitting a signal; and in response to the vehicle receiving the signal, hitching the towing point and the hook.

For example, in the hitching method provided by at least one embodiment of the present disclosure, the limit guide device includes splayed (figure-8) limit guides and a tongue guide.

At least one embodiment of the present disclosure provides a hitching system including a vehicle and a trailer, wherein a data acquisition device, an industrial control device, and a limit guide device are installed on the vehicle, and at least two targets are provided at the front end of the trailer, wherein: the vehicle is configured to travel toward the trailer under the drive of the industrial control device and complete the hitching with the trailer; the data acquisition device is configured to acquire data of the trailer containing target information, where the target information includes point cloud data corresponding to the targets; the industrial control device is configured to calculate coordinate data corresponding to the targets from the point cloud data corresponding to the targets and determine the position of the hook of the trailer, calculate the pose of the trailer according to the coordinates of the towing point of the vehicle, plan a path for the vehicle according to the pose of the trailer, and drive the vehicle toward the hook position of the trailer; and the limit guide device is configured to guide the vehicle and complete the hitching of the vehicle and the trailer.
Brief Description of the Drawings

To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings of the embodiments are briefly introduced below; obviously, the drawings described below relate only to some embodiments of the present disclosure and are not a limitation of it.

FIG. 1A shows a flowchart of a hitching method for a vehicle to dock with a trailer provided by at least one embodiment of the present disclosure;

FIG. 1B is a schematic diagram of the installation of a lidar and targets provided by at least one embodiment of the present disclosure;

FIG. 2 is a schematic flowchart of a hitching method for a vehicle to dock with a trailer provided by at least one embodiment of the present disclosure;

FIG. 3A is a flowchart of a vehicle acquiring, through a data acquisition device, data of a trailer containing target information, provided by at least one embodiment of the present disclosure;

FIG. 3B is a schematic diagram of data screening provided by at least one embodiment of the present disclosure;

FIG. 4A is a flowchart of performing point cloud filtering on point cloud data provided by at least one embodiment of the present disclosure;

FIG. 4B is a schematic diagram of the aggregation of the laser targets provided by at least one embodiment of the present disclosure;

FIG. 5A is a flowchart of a vehicle acquiring, through a data acquisition device, data of a trailer containing target information, provided by at least one embodiment of the present disclosure;

FIG. 5B is a flowchart of an industrial control device performing visual filtering on an image provided by at least one embodiment of the present disclosure;

FIG. 5C is a data processing flowchart of visual targets provided by at least one embodiment of the present disclosure;

FIG. 6A is a flowchart of a vehicle acquiring, through a data acquisition device, data of a trailer containing target information, provided by at least one embodiment of the present disclosure;

FIG. 6B is a data processing flowchart of radio targets provided by at least one embodiment of the present disclosure;

FIG. 7A is a flowchart of a vehicle acquiring, through a data acquisition device, data of a trailer containing target information, provided by at least one embodiment of the present disclosure;

FIG. 7B is a flowchart of vision-fused laser deep learning processing provided by at least one embodiment of the present disclosure;

FIG. 8A shows a flowchart of determining the position of the hook of a trailer according to the coordinate data corresponding to the targets, provided by at least one embodiment of the present disclosure;

FIG. 8B is a schematic diagram of filtering noise points provided by at least one embodiment of the present disclosure;

FIG. 9 is a schematic diagram of the calculation output processing provided by at least one embodiment of the present disclosure;

FIG. 10A shows a flowchart of an industrial control device planning a path for a vehicle according to the pose of a trailer and driving the vehicle toward the hook position of the trailer, provided by at least one embodiment of the present disclosure;

FIG. 10B is a schematic diagram of vehicle travel processing provided by at least one embodiment of the present disclosure;

FIG. 11A is a flowchart of guiding and completing the hitching of a vehicle and a trailer through the limit guide device of the vehicle, provided by at least one embodiment of the present disclosure;

FIG. 11B is a schematic structural diagram of a limit guide device provided by at least one embodiment of the present disclosure;

FIG. 12 shows a schematic block diagram of a hitching system provided by at least one embodiment of the present disclosure.
Detailed Description

To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments of the present disclosure are described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are some rather than all of the embodiments of the present disclosure. Based on the described embodiments of the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.

Unless otherwise defined, the technical or scientific terms used in the present disclosure shall have the ordinary meanings understood by a person of ordinary skill in the art to which the present disclosure belongs. "First", "second", and similar words used in the present disclosure do not denote any order, quantity, or importance but are merely used to distinguish different components. Likewise, words such as "a", "an", or "the" do not denote a limitation of quantity but the presence of at least one. Words such as "comprise" or "include" mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Words such as "connected" or "coupled" are not limited to physical or mechanical connections and may include electrical connections, whether direct or indirect. "Up", "down", "left", "right", and the like are only used to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationship may change accordingly.

A trailer or pallet is used to carry cargo and has a hitching device at its front end. At present, the operation of hitching a trailer or pallet to a powered vehicle is mainly performed manually. When autonomous vehicles are involved, there is no driver or operator to perform the manual hitching of the autonomous vehicle and the trailer or pallet; setting up a dedicated hitching worker would increase labor costs and defeat the purpose of autonomous driving.

To solve the above problem, a fixed-position hitching method is usually adopted: when the autonomous vehicle needs to hitch a trailer or pallet, the trailer or pallet is parked at a calibrated fixed position, and the autonomous vehicle hitches according to preset position information.

However, it is difficult for a manually driven vehicle to park at the precise position in one attempt, and autonomous vehicles also suffer from insufficient precision. With the fixed-position hitching solution, when the trailer's parked position is offset, misaligned, tilted, or insufficiently precise, hitching may fail. Repeatedly adjusting the position or adjusting through manual intervention increases labor costs, reduces work efficiency, and defeats the purpose of autonomous driving.

At least one embodiment of the present disclosure provides a hitching method for a vehicle to dock with a trailer, wherein a data acquisition device, an industrial control device, and a limit guide device are installed on the vehicle, and at least two targets are provided at the front end of the trailer. The hitching method includes: the vehicle acquiring, through the data acquisition device, data of the trailer containing target information, where the target information includes point cloud data corresponding to the targets; calculating coordinate data corresponding to the targets from the point cloud data corresponding to the targets and determining the position of the hook of the trailer; calculating the pose of the trailer according to the coordinates of the towing point of the vehicle; the industrial control device planning a path for the vehicle according to the pose of the trailer and driving the vehicle toward the hook position of the trailer; and guiding and completing the hitching of the vehicle and the trailer through the limit guide device of the vehicle.

The hitching method provided by the present disclosure solves the problem of autonomous vehicles hitching trailers or pallets, a difficulty for existing autonomous and manually driven vehicles. It can greatly improve the operating efficiency of autonomous vehicles docking with trailers, reduces manual intervention, makes the workflow smoother, realizes automatic hitching, and can complete tasks that previous solutions could not, making the application range of autonomous driving broader.
FIG. 1A is a flowchart of a hitching method for a vehicle to dock with a trailer provided by at least one embodiment of the present disclosure. A data acquisition device, an industrial control device, and a limit guide device are installed on the vehicle, and at least two targets are provided at the front end of the trailer.

As shown in FIG. 1A, the hitching method includes the following steps S001-S005.

Step S001: the vehicle acquires, through the data acquisition device, data of the trailer containing target information, where the target information includes point cloud data corresponding to the targets.

For example, the data acquisition device includes any one or more of a lidar, a camera, a UWB (Ultra Wide Band) receiver, and the like; the embodiments of the present disclosure impose no limitation on this.

For example, the data containing target information includes point cloud data of the trailer, images, radio signals, and the like, and may be any other type of data as long as it can reflect the target information; the embodiments of the present disclosure impose no limitation on this.

Step S002: coordinate data corresponding to the targets is calculated from the point cloud data corresponding to the targets, and the position of the hook of the trailer is determined.

For example, the center coordinates of the point cloud data corresponding to each target are the coordinate data corresponding to that target.

For the center coordinates, the average of the coordinates of all the point cloud data in each point cloud set can be calculated and taken as the center coordinates.

For example, the distances between the targets are calculated from the coordinate data corresponding to the targets, and the distances are compared with the preset target information stored in the industrial control device to determine the model of the trailer.

For example, different trailer models have different preset target information: for model A the preset inter-target distance is x, and for model B the preset inter-target distance is y. The actual distance between the two targets is obtained by any of the methods of steps S1-S5 (described later), and the actual distance is compared with the preset inter-target distances; if the actual distance is close to the preset distance x, the trailer model is A, and if the actual distance is close to the preset distance y, the trailer model is B. For example, the preset inter-target distances may be stored in the memory of the industrial computer or in another storage device; the embodiments of the present disclosure impose no limitation on this. The above is an example of determining the trailer model; the embodiments of the present disclosure are not limited to it, and any other applicable method may also be used to determine the trailer model.

For example, the two targets are arranged symmetrically about the hook of the trailer, and the positions of the targets and the position of the hook of the trailer are distributed as a triangle. For different trailer models, the side lengths of the triangle differ and are stored as preset target information in the memory of the industrial control device, so the position of the hook can be determined from the identified trailer model and the preset target information.

Step S003: the pose of the trailer is calculated according to the coordinates of the towing point of the vehicle.

For example, the pose data of the trailer includes the coordinates of the hook, the slope of the trailer, and the distance from the hook to the vehicle.

Step S004: the industrial control device plans a path for the vehicle according to the pose of the trailer and drives the vehicle toward the hook position of the trailer.

It should be noted that path planning can be implemented not only by calculating the pose of the trailer but also by calculating the pose of the vehicle; the present disclosure imposes no limitation on this.

For example, the industrial control device first judges from the pose data of the trailer whether the trailer is found; if the trailer can be found, it plans a path for the autonomous vehicle from the pose data of the trailer and judges whether the path is reachable; if the path is reachable, it straightens the body of the vehicle according to the pose data and drives the vehicle to the hook position.

Step S005: the limit guide device of the vehicle guides and completes the hitching of the vehicle and the trailer.

For example, the limit guide device includes splayed (figure-8) limit guides and a tongue guide.

Since the vehicle cannot always reach the target position precisely, there is some error when the towing point of the vehicle and the hook of the trailer are hitched, so the limit guide device is needed to further improve the accuracy of the hitching. For the method of guiding the hitching with the limit guide device, reference may be made to the related description below, which is not repeated here.

FIG. 1B is a schematic diagram of the installation of a lidar and targets provided by at least one embodiment of the present disclosure.

As shown in FIG. 1B, at least one embodiment of the present disclosure uses a lidar for trailer recognition. For convenience of description, the powered autonomous vehicle is a tractor 1 capable of autonomous driving; a towing point 2 is provided at the middle of the rear end of the tractor 1, and a lidar 3 is provided on the tractor 1 to perform the recognition. It should be noted that the lidar 3 may be replaced by other data acquisition devices, such as a camera or an ultra-wideband receiver.

At the middle of the front end of the trailer 4 to be docked with the tractor 1, a trailer hitch point 5 (which may also be called the hook) corresponding to the towing point 2 is provided, and a left target 6 and a right target 7 are provided on the two sides of the front end of the trailer 4, respectively. In this embodiment, for example, two targets are mounted symmetrically about the trailer hitch point 5 of the trailer 4, one meter to each side of it. It should be noted that the present disclosure is not limited to mounting two targets symmetrically and imposes no limitation on the mounting positions of the targets or on their shape; the size of the targets must ensure that the data acquisition device can recognize them. For example, the targets are rectangular with a size of 50 mm x 250 mm, ensuring that the lidar 3 can hit the targets and thus achieve target recognition. The height of an existing trailer body is usually 500 mm, and the targets are mounted on the front baffle of the trailer 4, for example on the lower side of the front baffle, so as not to interfere with the loading and unloading of cargo.

In the process of trailer recognition with the lidar, target recognition is achieved by reflectivity (also called reflectance) combined with the target mounting pattern. The understanding of reflectivity is that objects of different materials have different reflectivity to the lidar; based on this principle, the present disclosure makes the targets (which may also be called laser targets) of a material with a specific reflectivity and screens the radar data (the data reflected by the targets) by that specific reflectivity to find the targets. For example, the points whose data has a reflectivity greater than or equal to the reflectivity threshold are the targets. The targets are mounted according to a mathematical pattern, so the preset targets can be selected from the target information and interference points removed.

In this way, the data screened out by the lidar 3 is transmitted to the industrial control device (e.g., an industrial computer) of the tractor 1, where the recognition and position calculation of the trailer 4 are performed. The vehicle control unit in the industrial computer calculates the type and pose of the trailer 4, plans the hitching path according to the pose of the trailer 4, and controls the tractor 1 to hitch.
FIG. 2 is a flowchart of a hitching method for a vehicle to dock with a trailer provided by at least one embodiment of the present disclosure.

As shown in FIG. 2, the hitching method for a vehicle to dock with a trailer provided by at least one embodiment of the present disclosure includes the following steps S1-S10:

Step S1: the tractor 1 scans the targets with the lidar 3; the laser reflectivity of the targets is clearly distinguishable from the surrounding environment, so point cloud data of the targets' reflectivity is obtained.

Point cloud data is a set of vectors in a three-dimensional coordinate system. The scan data is recorded in the form of points, each containing three-dimensional coordinates, and some may contain reflection intensity information (reflectivity) or color information; the point cloud data obtained by scanning the targets with the lidar 3 contains reflectivity information.

In step S1, the data acquisition device is a lidar. The reflectivity of the targets used here is clearly different from the environment, so the targets are relatively easy to recognize and the measurement precision is high.

When the data acquisition device is a lidar, the method by which the vehicle acquires, through the data acquisition device, the data of the trailer containing target information is shown in FIG. 3A.

As shown in FIG. 3A, the method includes steps S301-S304.

Step S301: the industrial control device of the vehicle acquires the scan data of the lidar scanning the trailer.

Step S302: the industrial control device cyclically reads the scan data and compares the reflectivity of the points corresponding to the scan data with the reflectivity threshold.

Step S303: in response to the reflectivity being greater than or equal to the reflectivity threshold, the industrial control device saves the data of the corresponding points and generates point cloud data.

Step S304: in response to the reflectivity being less than the reflectivity threshold, the industrial control device continues to read the subsequent scan data.

In step S303, generating the point cloud data is implemented through data screening. FIG. 3B is a schematic diagram of data screening provided by at least one embodiment of the present disclosure. As shown in FIG. 3B, scanning the targets judges every scanned point and records the points that meet the reflectivity threshold, thus forming the point cloud data of the targets. The two targets, the left target 6 and the right target 7, form two patches of point cloud data.

The screening of the point cloud data includes the following steps:

S11: the industrial computer of the tractor 1 acquires the scan data of the lidar 3;

S12: the industrial computer cyclically reads each point corresponding to the scan data;

S13: the industrial computer reads the reflectivity of each point;

S14: the industrial computer looks up the reflectivity threshold in its memory;

S15: according to the reflectivity threshold, judge whether the point reflectivity qualifies; the industrial computer records the qualifying points and jumps to step S12 for non-qualifying ones; S16: the industrial computer saves the data of the qualifying points and generates the point cloud data.

For example, the points whose reflectivity is greater than or equal to the reflectivity threshold are the qualifying points, and these qualifying points form the point cloud data.
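The reflectivity screening of steps S11-S16 can be sketched as follows (a minimal illustration: the point format of `(x, y, z, reflectivity)` tuples and the threshold value are assumptions for the example, not values fixed by the disclosure):

```python
# Screen scan points by reflectivity: points at or above the threshold
# qualify and are kept as target point cloud data (cf. steps S301-S304).
def filter_by_reflectivity(points, threshold):
    """points: iterable of (x, y, z, reflectivity) tuples."""
    cloud = []
    for x, y, z, reflectivity in points:
        if reflectivity >= threshold:
            cloud.append((x, y, z))   # qualifying point: record it
        # otherwise: skip and continue reading subsequent scan data
    return cloud

scan = [(1.0, 0.2, 0.5, 0.9), (1.1, 0.2, 0.5, 0.3), (3.0, -0.8, 0.5, 0.95)]
targets = filter_by_reflectivity(scan, threshold=0.8)
print(targets)  # keeps only the two high-reflectivity points
```

In practice the threshold would be read from the industrial control device's memory, as described above.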
For a detailed description of lidar scanning, reference may be made to conventional designs, which is not repeated here. The following is the input data involved in the scanning of the lidar 3 in an embodiment:

The point cloud input listens to the topic published by the lidar; the following is an example of a topic in the standard PointCloud2 format:

Header header  # message header

uint32 height  # height of the point cloud
uint32 width
PointField[]fields
bool is_bigendian
uint32 point_step
uint32 row_step
uint8[]data
bool is_dense
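As a minimal sketch of how points might be unpacked from the `data` field of a PointCloud2-style message (the 16-byte point layout of four little-endian floats is an assumption for illustration; a real reader must derive the field offsets from the message's `fields` array and use its `point_step`):

```python
import struct

POINT_STEP = 16  # assumed bytes per point: x, y, z, intensity as float32

def unpack_points(data, point_step=POINT_STEP):
    """Unpack (x, y, z, intensity) tuples from a raw point buffer."""
    points = []
    for off in range(0, len(data), point_step):
        points.append(struct.unpack_from("<ffff", data, off))
    return points

# Two synthetic points packed in the assumed layout.
raw = struct.pack("<ffff", 1.0, 2.0, 0.5, 0.75) + struct.pack("<ffff", 3.0, 1.0, 0.5, 0.25)
print(unpack_points(raw))  # [(1.0, 2.0, 0.5, 0.75), (3.0, 1.0, 0.5, 0.25)]
```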
S2: after the industrial computer obtains the point cloud data of the laser targets, it filters the point cloud data by reflectivity to optimize it; since the reflectivity of the targets is clearly different from the environment, the point cloud coordinates can be determined by filtering.

FIG. 4A is a flowchart of performing point cloud filtering on point cloud data provided by at least one embodiment of the present disclosure.

As shown in FIG. 4A, the flowchart includes steps S401-S407.

Step S401: the industrial control device traverses the point cloud data.

Step S402: for each point of the point cloud, judge whether the point is the first point of the point cloud.

Step S403: in response to the point being the first point, create a point set and store the point in the point set.

Step S404: in response to the point not being the first point, judge whether the distance between the point and the previous point exceeds a preset threshold.

Step S405: in response to the distance exceeding the preset threshold, create a point set and store the point in the point set.

Step S406: in response to the distance not exceeding the preset threshold, store the point in the current point set.

Step S407: compute each established point set to obtain the center coordinates of each point set and store the center coordinates.

FIG. 4B is a schematic diagram of target aggregation provided by at least one embodiment of the present disclosure.

As shown in FIG. 4B, point cloud filtering includes the following steps:

S21: the industrial computer traverses the radar point cloud;

S22: for each point of the point cloud, judge whether the point is a target point; if not, return to step S21;

S23: when the point is a target point, judge whether it is the first point of the point cloud; if it is the first point, store it in a new point set and return to step S21;

S24: when the point is not the first point of the point cloud, judge whether the point preceding it is a target point; if not, store the point in a new point set and return to step S21;

S25: when the preceding point is a target point, judge whether the distance between this point and the preceding point exceeds the set threshold; if so, this point is also a target point belonging to a different target, so store it in a new point set and return to step S21;

S26: if the distance between the two points does not exceed the threshold, store the point in the current point set;

S27: compute each point set to obtain the center coordinates of each point set and store the center coordinates, thereby clustering the filtered point cloud data.
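The clustering loop of S21-S27 can be sketched as follows (simplified to 2D points that have already passed the reflectivity screen; the distance threshold is illustrative):

```python
import math

def cluster_points(points, dist_threshold):
    """Group consecutive points into point sets: a gap larger than the
    threshold starts a new set (cf. steps S401-S407), then return the
    center coordinates (coordinate means) of each set."""
    point_sets = []
    for p in points:
        if not point_sets:                 # first point of the point cloud
            point_sets.append([p])
            continue
        prev = point_sets[-1][-1]
        if math.dist(p, prev) > dist_threshold:
            point_sets.append([p])         # new target: start a new set
        else:
            point_sets[-1].append(p)       # same target: current set
    return [tuple(sum(c) / len(s) for c in zip(*s)) for s in point_sets]

pts = [(0.0, 0.0), (0.1, 0.0), (2.0, 0.0), (2.1, 0.0)]
print(cluster_points(pts, dist_threshold=0.5))  # two centers, one per target
```

Two targets thus yield two point sets, and the center coordinate of each set is the target's coordinate used in the later calculations.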
In this method, filtering the point cloud coordinates requires neither complex modeling nor deep learning.
S3: when targets whose color is clearly distinct from the environment are used (which may also be called visual targets), the image data is filtered by color to optimize it; since the color of the targets is clearly distinct from the environment, the point cloud coordinates can be determined by filtering.

In step S3, the data acquisition device is a camera. Images are acquired with the camera; since the color of the targets used here is clearly distinct from the environment, the targets are easy to isolate by filtering. Using visual targets for data acquisition has the advantage of low cost.

When the data acquisition device is a camera, the method by which the vehicle acquires, through the data acquisition device, the data of the trailer containing target information is shown in FIG. 5A.

As shown in FIG. 5A, the method includes steps S501-S502.

Step S501: the camera acquires an image and transmits the image to the industrial control device.

Step S502: the industrial control device performs visual filtering on the image to obtain the data of the trailer containing target information.

In some embodiments of the present disclosure, the visual filtering of the image by the industrial control device in step S502 may include steps S511-S513, as shown in FIG. 5B.

Step S511: convert the image to HSV or HSL format.

The common color model is the RGB format; there are also other color models such as HSL and HSV, both of which can be converted from the RGB format by mathematical formulas. RGB is the most commonly used color space and represents an image with three channels: red (R), green (G), and blue (B); different combinations of these three colors can form almost all other colors. The HSV and HSL color spaces are closer to human perception of color than RGB. They express the hue, saturation, and lightness of a color very intuitively, which makes color comparison convenient. In the HSV and HSL color spaces, it is easier to track an object of a particular color than in RGB.

Step S512: determine the coordinates of the targets according to the color space.

Step S513: calculate the coordinates of each pixel according to the coordinates of the targets and the distance between the pixel and the target.

FIG. 5C is a data processing flowchart of visual targets provided by at least one embodiment of the present disclosure.

As shown in FIG. 5C, the image processing includes the following steps:

S31: convert the image to HSV or HSL format.

S32: determine the positions of the targets according to the color space.

S33: calculate the coordinates of each pixel according to the known coordinates of the targets and the distance between the pixel and the target.

S34: proceed to S6 for calculation.
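The color-space screening of S31-S32 can be sketched as follows (an illustration using the standard-library `colorsys` module rather than an image-processing library; the hue band chosen for the target color and the tiny dict-based "image" are assumptions):

```python
import colorsys

def is_target_pixel(r, g, b, hue_band=(0.55, 0.70), min_sat=0.5):
    """Convert an RGB pixel to HSV and test whether its hue falls in the
    band assumed for the target color (here a blue-ish band)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return hue_band[0] <= h <= hue_band[1] and s >= min_sat

def target_pixels(image):
    """image: dict mapping (row, col) -> (r, g, b); returns the
    coordinates of pixels matching the target color (cf. S511-S512)."""
    return [rc for rc, rgb in image.items() if is_target_pixel(*rgb)]

img = {(0, 0): (200, 200, 200),   # gray background pixel
       (0, 1): (20, 40, 220),     # blue target pixel
       (1, 1): (30, 60, 230)}     # blue target pixel
print(target_pixels(img))  # [(0, 1), (1, 1)]
```

Tracking by a hue interval is exactly why the HSV/HSL spaces are described above as more convenient than RGB for this task.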
S4: when targets that can transmit radio are used (which may also be called radio targets), a UWB receiver is used to measure the distance to the targets, and after the distances are obtained, the calculation proceeds to S6.

In step S4, the data acquisition device is a UWB receiver.

When the data acquisition device is a UWB receiver, the method by which the vehicle acquires, through the data acquisition device, the data of the trailer containing target information is shown in FIG. 6A.

As shown in FIG. 6A, the method includes steps S601-S603.

Step S601: the vehicle receives, with the ultra-wideband receiver, the data transmitted by the targets.

Step S602: the distance between the ultra-wideband receiver and the target is calculated from the time of flight.

For example, if data is transmitted from the transmitter to the UWB receiver at time T1 and arrives at the UWB receiver at time T2, the time of flight of the data between the transmitter and the UWB receiver is T2 minus T1. Since radio is known to propagate at the speed of light, the distance between the transmitter and the UWB receiver is the speed of light multiplied by the time of flight.

Step S603: the coordinates of the targets are calculated from the distances.

For example, given the distances between the targets and the distances between the targets and the UWB receiver, the three-dimensional coordinates of the targets can be calculated.
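The time-of-flight distance and a coordinate solution from distances can be sketched as follows (a 2D illustration; the two-antenna geometry with antennas at assumed positions stands in for the disclosure's general statement that the coordinates follow from the known distances):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(t_transmit, t_receive):
    """Distance = speed of light x time of flight (T2 - T1)."""
    return C * (t_receive - t_transmit)

def trilaterate_2d(d1, d2, baseline):
    """Position of a target given its distances d1, d2 to two receiver
    antennas assumed at (0, 0) and (baseline, 0)."""
    x = (d1**2 - d2**2 + baseline**2) / (2 * baseline)
    y = (d1**2 - x**2) ** 0.5
    return (x, y)

print(round(tof_distance(0.0, 20e-9), 3))  # 20 ns of flight is about 6 m
print(trilaterate_2d(5.0, 5.0, 6.0))       # 3-4-5 triangle -> (3.0, 4.0)
```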
FIG. 6B is a data processing flowchart of radio targets provided by at least one embodiment of the present disclosure.

As shown in FIG. 6B, the UWB receiver processing flow is as follows:

S41: receive the transmitter data through the UWB receiver.

For example, the transmitter is a radio transmitter mounted on the target.

S42: calculate the distance from the time of flight. S43: calculate the three-dimensional coordinates of the targets. S44: proceed to S6 for calculation.
S5: when a vision-laser fusion algorithm is used, the camera and the lidar are first calibrated and the image and the point cloud data are fused; the measurement exploits the maturity of visual deep learning recognition algorithms and the high precision of laser measurement data.

In step S5, the above methods using a lidar and a camera are combined. A lidar and a camera are installed on the vehicle and calibrated before use, that is, parameters are adjusted so that the point cloud data acquired by the lidar and the image acquired by the camera satisfy a specific mapping relationship.

When the data acquisition device includes a camera and a lidar, the method by which the vehicle acquires, through the data acquisition device, the data of the trailer containing target information is shown in FIG. 7A.

As shown in FIG. 7A, the method includes steps S701-S706.

Step S701: calibrate the lidar and the camera.

Step S702: acquire an image of the targets with the camera.

Step S703: acquire point cloud data of the targets with the lidar.

Step S704: fuse the data of the image and the point cloud data to form fused point cloud data.

Step S705: identify the front baffle of the trailer in the image through a deep learning algorithm.

Step S706: obtain the point cloud data of the front baffle in the fused data, where the point cloud data of the front baffle is the point cloud data of the targets.

Since the targets are mounted on the front baffle, the point cloud data of the front baffle is the point cloud data of the targets.

FIG. 7B is a flowchart of vision-fused laser deep learning processing provided by at least one embodiment of the present disclosure.

As shown in FIG. 7B, the fusion processing flow is as follows:

S41: acquire the visual image.

For example, use a camera to capture images of the trailer.

S42: acquire the laser point cloud.

For example, use the lidar to scan the trailer and obtain point cloud data based on reflectivity.

S43: fuse the image and the point cloud data to form unified data.

For example, the unified data contains both image information and point cloud information.

S44: identify the front baffle of the trailer in the image through deep learning.

S45: find the laser point cloud data of the front baffle in the fused data.

S46: the baffle point cloud data is consistent with the target point cloud data and is passed to S6 for calculation.

Since the visual deep learning recognition algorithm calculates the distances between targets relatively precisely and the lidar measures the distance between the targets and the radar relatively precisely, combining the two methods can further improve the accuracy of the measurement. S6: through the above processing, point cloud data corresponding to at least two targets is obtained; using the distances between the targets and the target information preset in the memory of the industrial computer, the trailer model is identified, and the targets are further filtered to achieve point cloud aggregation.
How the targets are further filtered is described below.

FIG. 8A shows a flowchart of determining the position of the hook of the trailer according to the coordinate data corresponding to the targets, provided by at least one embodiment of the present disclosure.

As shown in FIG. 8A, the flowchart includes steps S801-S803.

Step S801: calculate the distances between the targets from the coordinate data corresponding to the targets.

Step S802: compare the distances between the targets with the preset target information to determine the model of the trailer, where the distances between the targets differ for different trailer models.

Step S803: determine the position of the hook of the trailer according to the model of the trailer and the preset target information.

FIG. 8B is a schematic diagram of the processing of filtering targets provided by at least one embodiment of the present disclosure.

As shown in FIG. 8B, the following steps are included:

S61: traverse the point cloud data.

S62: for the point set of each point cloud, average the (x, y) coordinates to obtain the center point coordinates.

S63: compute the distribution pattern of the center points.

S64: match against the distribution patterns of known trailers to determine the trailer type.

For example, for different trailer types, the preset distances between the targets differ, and the distribution patterns of the targets differ.

S65: keep the points that conform to the distribution pattern and filter out the points that do not.

For example, if noise points were mixed in during the acquisition of the point cloud data and the center points of three point sets are computed, only two of the points conform to the distribution pattern, so the points that do not conform to the distribution pattern can be filtered out.
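Matching the measured inter-target distance against the preset target information (steps S801-S802) can be sketched as follows (the model names, preset distances, and tolerance are hypothetical values standing in for the data stored in the industrial computer's memory):

```python
import math

# Hypothetical preset target information: model name -> preset
# inter-target distance in meters.
PRESET_TARGET_DISTANCES = {"A": 2.0, "B": 2.6}

def identify_trailer(center_points, tolerance=0.1):
    """Compare the measured distance between two target centers with the
    preset distances to decide the trailer model."""
    (x1, y1), (x2, y2) = center_points
    measured = math.hypot(x2 - x1, y2 - y1)
    for model, preset in PRESET_TARGET_DISTANCES.items():
        if abs(measured - preset) <= tolerance:
            return model
    return None  # no model matches: treat as noise or report an error

print(identify_trailer([(-1.0, 5.0), (1.02, 5.0)]))  # close to 2.0 -> "A"
```

A center pair whose spacing matches no preset pattern is the kind of nonconforming point that S65 filters out.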
S7: from the point cloud data conforming to the distribution pattern, obtain the hook position and thus the pose of the trailer.

The pose of the trailer is calculated according to the preset data in the memory, using the least squares method.
When the pose is determined by least squares:

In the specific measurement, according to the least squares formulas:

The fitted line is y = k*x + b, where x is the x coordinate of a point on the line, y is the y coordinate of a point on the line, k is the slope of the line, and b is the intercept of the line.

The slope formula is:

k = (mean(x*y) - mean(x)*mean(y)) / (mean(x^2) - mean(x)^2)

where k is the slope of the line, x is the x coordinate of an obtained point, y is the y coordinate of an obtained point, mean(x*y) is the average of x times y over all points, mean(x)*mean(y) is the average of the x values of all points times the average of the y values of all points, mean(x^2) is the average of the squared x values of all points, and mean(x)^2 is the square of the average of the x values of all points.

After the slope k is calculated, from the line equation evaluated at the mean point, mean(y) = k*mean(x) + b, together with the slope k calculated above, the intercept b is obtained by the method of undetermined coefficients.
For the coordinates of the midpoint of the line connecting the two targets, the points must first be aggregated by point spacing into a left point group and a right point group, and the average coordinates of Pl and Pr computed respectively; the midpoint coordinates are then Xc=(Xl+Xr)/2 and Yc=(Yl+Yr)/2, where Pl is the left target and Pr is the right target, (Xc, Yc) are the coordinates of the midpoint of the line connecting the two targets, (Xl, Yl) are the coordinates of the left target, and (Xr, Yr) are the coordinates of the right target. The hook coordinates (Xg, Yg) are then calculated as Xg=Xc+c1 and Yg=Yc+c2, where c1 is the distance in the x direction between the midpoint of the line connecting the two targets and the hook, and c2 is the distance in the y direction between that midpoint and the hook. For example, c1 and c2 can be obtained in advance and stored in the memory of the industrial control device.
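The least-squares fit and the hook-coordinate formulas above can be sketched as follows (the sample points and the offsets c1, c2 are illustrative values, not data from the disclosure):

```python
def fit_line(points):
    """Least-squares slope k and intercept b for y = k*x + b,
    using the mean-based slope formula given above."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    mxy = sum(x * y for x, y in points) / n
    mx2 = sum(x * x for x, _ in points) / n
    k = (mxy - mx * my) / (mx2 - mx * mx)
    b = my - k * mx  # intercept from mean(y) = k*mean(x) + b
    return k, b

def hook_position(left, right, c1, c2):
    """Hook coordinates from the midpoint of the two targets plus the
    stored offsets: Xg = Xc + c1, Yg = Yc + c2."""
    xc = (left[0] + right[0]) / 2
    yc = (left[1] + right[1]) / 2
    return (xc + c1, yc + c2)

k, b = fit_line([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])  # points on y = 2x + 1
print(k, b)  # slope close to 2.0, intercept close to 1.0
print(hook_position((-1.0, 4.0), (1.0, 4.0), c1=0.0, c2=0.5))
```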
S8: then, according to the coordinate transformation formulas and the pose of the towing point in the radar coordinate system, the pose of the lidar in the radar coordinate system is obtained. The output pose data includes the coordinates of the hook, the slope of the trailer, and the distance from the hook to the vehicle.

At this point, through the coordinate transformation, the hitch point is taken as the origin of the radar coordinate system and the lidar of the tractor as a coordinate point in that coordinate system, and the following path planning is then performed.

For example, the origin of the radar coordinate system is the data acquisition device (for example, the lidar or camera); through the coordinate transformation, the position of the hook of the trailer becomes the origin of the radar coordinate system and the data acquisition device of the tractor becomes another point in that coordinate system, and path planning can be performed from the coordinates of these points.
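The coordinate transformation can be sketched as a translation plus rotation (the axis convention and the heading parameter are assumptions for illustration; the disclosure only states that the hook becomes the origin and the sensor becomes a point in the new frame):

```python
import math

def to_hook_frame(point, hook, trailer_heading):
    """Express a point given in the radar (sensor) frame in a frame whose
    origin is the trailer hook and whose x-axis follows the trailer
    heading (heading in radians)."""
    dx = point[0] - hook[0]
    dy = point[1] - hook[1]
    c, s = math.cos(-trailer_heading), math.sin(-trailer_heading)
    return (c * dx - s * dy, s * dx + c * dy)

# Sensor at the radar-frame origin, hook 4 m ahead, trailer aligned with x:
print(to_hook_frame((0.0, 0.0), hook=(4.0, 0.0), trailer_heading=0.0))
# the tractor's sensor becomes the point (-4.0, 0.0) in the hook frame
```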
S9: the industrial computer drives the tractor to reverse toward the hitch point along the path planned from the pose.

FIG. 10A shows a flowchart of the industrial control device planning a path for the vehicle according to the pose of the trailer and driving the vehicle toward the hook position of the trailer, provided by at least one embodiment of the present disclosure.

As shown in FIG. 10A, the method includes steps S1001-S1007.

Step S1001: in response to the vehicle traveling to the hitching start position, the industrial control device calculates the pose of the trailer.

Step S1002: judge, according to the pose of the trailer, whether communication is established with the trailer.

Step S1003: in response to communication not being established with the trailer, the industrial control device reports an error to the dispatching system and ends this hitching attempt.

Step S1004: in response to communication being established with the trailer, plan a path for the vehicle according to the pose of the trailer.

Step S1005: judge whether the path is reachable.

Step S1006: in response to the path being unreachable, end this hitching attempt.

Step S1007: in response to the path being reachable, adjust the orientation of the vehicle body according to the pose of the trailer, and drive the vehicle toward the hook along the path.

FIG. 10B shows a schematic diagram of the vehicle travel processing provided by at least one embodiment of the present disclosure.

As shown in FIG. 10B, the following steps are included:

S91: in response to the tractor reaching the automatic hitching start position, the industrial computer begins to calculate the trailer pose.

For example, the automatic hitching start position is a manually defined area; when the tractor reaches this area, the industrial computer can begin calculating the pose data of the trailer.

S92: judge, according to the pose of the trailer, whether communication is established with the trailer; in response to communication not being established, the industrial control device reports an error to the dispatching system and ends this hitching attempt; in response to communication being established, plan a path for the vehicle according to the pose of the trailer.

For example, whether the trailer is found is judged from the calculated pose; if found, step S93 is executed; if not found, an error is reported to the dispatching system and this hitching attempt ends.

S93: plan a path according to the pose of the trailer (pallet).

S94: judge whether the path is reachable; if reachable, hitch along the planned path and end this operation; if not reachable, execute step S95.

S95: straighten the vehicle body forward along the tangent of the slope.

For example, if the body of the vehicle and the body of the trailer are not on the same line, the vehicle body is straightened so that the vehicle and the trailer are aligned.

S96: plan the hitching path and hitch.
S10: guide and complete the hitching of the vehicle and the trailer through the limit guide device of the vehicle.

FIG. 11A shows a flowchart of guiding and completing the hitching of the vehicle and the trailer through the limit guide device of the vehicle, provided by at least one embodiment of the present disclosure.

As shown in FIG. 11A, the flowchart includes steps S1101-S1103.

Step S1101: the hook approaches the towing point under the guidance of the limit guide device.

Step S1102: in response to the hook reaching a limit sensor provided at the towing point, the limit sensor is triggered and emits a signal.

Step S1103: in response to the vehicle receiving the signal, the towing point and the hook are hitched.

For example, there is some error when the trailer hitch point (e.g., the hook) and the vehicle hitch point (e.g., the towing point) are hitched. A limit guide device is installed at the hitching point of the vehicle; left and right splayed (figure-8) limit guides and a downward tongue guide tolerate the error and guide the hitch point of the trailer. A limit sensor is provided at the hitch point (towing point) of the vehicle; when the hitch point (hook) of the trailer is in place, the sensor is triggered and the pin at the towing point drops, completing the automatic hitching of the towing point and the hook. The structure of the limit guide device is shown in FIG. 11B.
FIG. 12 shows a schematic block diagram of a hitching system 1200 provided by at least one embodiment of the present disclosure.

As shown in FIG. 12, the hitching system 1200 includes a vehicle 1201 and a trailer 1202; a data acquisition device 1203, an industrial control device 1204, and a limit guide device 1205 are installed on the vehicle 1201, and at least two targets are provided at the front end of the trailer 1202.

The vehicle 1201 is configured to travel toward the trailer 1202 under the drive of the industrial control device 1204 and complete the hitching with the trailer 1202.

The data acquisition device 1203 is configured to acquire data of the trailer 1202 containing target information, where the target information includes point cloud data corresponding to the targets.

The industrial control device 1204 is configured to calculate coordinate data corresponding to the targets according to the point cloud data corresponding to the targets and determine the position of the hook of the trailer 1202, to calculate the pose of the trailer 1202 according to the coordinates of the towing point of the vehicle 1201, and to plan a path for the vehicle 1201 according to the pose of the trailer 1202 and drive the vehicle 1201 toward the hook position of the trailer 1202.

The limit guide device 1205 is configured to guide the vehicle 1201 and complete the hitching of the vehicle 1201 and the trailer 1202.

For a detailed description of the hitching system 1200, reference may be made to the description of FIG. 1B above, which is not repeated here.

The present disclosure realizes automatic recognition and measurement of the trailer's parked pose without manual intervention, and can genuinely adjust the hitching path and hitch automatically, improving process efficiency.

The following points should also be noted for the present disclosure:

(1) The drawings of the embodiments of the present disclosure involve only the structures involved in the embodiments; other structures may refer to conventional designs.

(2) Without conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with one another to obtain new embodiments.

The above are only specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto; the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (18)

  1. A hitching method for a vehicle to dock with a trailer, wherein a data acquisition device, an industrial control device, and a limit guide device are installed on the vehicle, and at least two targets are provided at the front end of the trailer, the hitching method comprising:
    the vehicle acquiring, through the data acquisition device, data of the trailer containing target information, wherein the target information comprises point cloud data corresponding to the targets;
    calculating coordinate data corresponding to the targets according to the point cloud data corresponding to the targets and determining the position of a hook of the trailer;
    calculating the pose of the trailer according to the coordinates of a towing point of the vehicle;
    the industrial control device planning a path for the vehicle according to the pose of the trailer and driving the vehicle toward the hook position of the trailer; and
    guiding and completing the hitching of the vehicle and the trailer through the limit guide device of the vehicle.
  2. The hitching method according to claim 1, wherein the data acquisition device comprises a lidar, and
    the vehicle acquiring, through the data acquisition device, the data of the trailer containing target information comprises:
    the industrial control device of the vehicle acquiring scan data of the lidar scanning the trailer;
    the industrial control device cyclically reading the scan data and comparing the reflectivity of points corresponding to the scan data with a reflectivity threshold;
    in response to the reflectivity being greater than or equal to the reflectivity threshold, the industrial control device saving the data of the corresponding points and generating point cloud data; and
    in response to the reflectivity being less than the reflectivity threshold, the industrial control device continuing to read subsequent scan data.
  3. The hitching method according to claim 2, wherein the reflectivity of the targets differs from the reflectivity of the surrounding environment.
  4. The hitching method according to claim 2, wherein the vehicle acquiring, through the data acquisition device, the data of the trailer containing target information further comprises:
    after the industrial control device saves the data of the corresponding points and generates the point cloud data, performing point cloud filtering on the point cloud data.
  5. The hitching method according to claim 2, wherein the reflectivity threshold is stored in a memory of the industrial control device.
  6. The hitching method according to claim 4, wherein performing point cloud filtering on the point cloud data comprises:
    the industrial control device traversing the point cloud data;
    for each point of the point cloud, judging whether the point is the first point of the point cloud;
    in response to the point being the first point, creating a point set and storing the point in the point set;
    in response to the point not being the first point, judging whether the distance between the point and the previous point exceeds a preset threshold;
    in response to the distance exceeding the preset threshold, creating a point set and storing the point in the point set;
    in response to the distance not exceeding the preset threshold, storing the point in the current point set; and
    computing each established point set to obtain the center coordinates of each point set and storing the center coordinates.
  7. The hitching method according to claim 1, wherein the data acquisition device comprises a camera, and
    the vehicle acquiring, through the data acquisition device, the data of the trailer containing target information comprises:
    the camera acquiring an image and transmitting the image to the industrial control device; and
    the industrial control device performing visual filtering on the image to obtain the data of the trailer containing target information.
  8. The hitching method according to claim 7, wherein the industrial control device performing visual filtering on the image comprises:
    converting the image to HSV or HSL format;
    determining the coordinates of the targets according to the color space; and
    calculating the coordinates of a pixel according to the coordinates of the targets and the distance between the pixel and the target.
  9. The hitching method according to claim 8, wherein the color of the targets differs from the color of the surrounding environment.
  10. The hitching method according to claim 1, wherein the data acquisition device comprises an ultra-wideband receiver, and
    the vehicle acquiring, through the data acquisition device, the data of the trailer containing target information comprises:
    the vehicle receiving, with the ultra-wideband receiver, data transmitted by the targets;
    calculating the distance between the ultra-wideband receiver and the targets according to the time of flight; and
    calculating the coordinates of the targets according to the distances.
  11. The hitching method according to claim 1, wherein the data acquisition device comprises a camera and a lidar, and
    the vehicle acquiring, through the data acquisition device, the data of the trailer containing target information comprises:
    calibrating the lidar and the camera;
    acquiring an image of the targets with the camera;
    acquiring point cloud data of the targets with the lidar;
    fusing the data of the image and the point cloud data to form fused point cloud data;
    identifying the front baffle of the trailer in the image through a deep learning algorithm; and
    obtaining the point cloud data of the front baffle in the fused data, wherein the point cloud data of the front baffle is the point cloud data of the targets.
  12. The hitching method according to claim 1, wherein determining the position of the hook of the trailer according to the coordinate data corresponding to the targets comprises:
    calculating the distances between the targets from the coordinate data corresponding to the targets;
    comparing the distances between the targets with preset target information to determine the model of the trailer, wherein the distances between the targets differ for different trailer models; and
    determining the position of the hook of the trailer according to the model of the trailer and the preset target information.
  13. The hitching method according to claim 12, wherein the coordinates of the position of the hook are (Xc, Yc), the coordinates of a first target are (Xl, Yl), and the coordinates of a second target are (Xr, Yr); the formulas for calculating the coordinates of the hook are Xc=(Xl+Xr)/2+c1 and Yc=(Yl+Yr)/2+c2, wherein c1 is the distance in the x direction between the midpoint of the line connecting the targets and the hook, and c2 is the distance in the y direction between the midpoint of the line connecting the targets and the hook.
  14. The hitching method according to claim 1, wherein the pose of the trailer comprises the coordinates of the hook, the slope of the trailer, and the distance from the hook to the vehicle.
  15. The hitching method according to claim 1, wherein the industrial control device planning a path for the vehicle according to the pose of the trailer and driving the vehicle toward the hook position of the trailer comprises:
    in response to the vehicle traveling to a hitching start position, the industrial control device calculating the pose of the trailer;
    judging, according to the pose of the trailer, whether a connection is established with the trailer;
    in response to no connection being established with the trailer, the industrial control device reporting an error to a dispatching system and ending this hitching attempt;
    in response to a connection being established with the trailer, planning a path for the vehicle according to the pose of the trailer;
    judging whether the path is reachable;
    in response to the path being unreachable, ending this hitching attempt; and
    in response to the path being reachable, adjusting the orientation of the vehicle body according to the pose of the trailer and driving the vehicle toward the hook along the path.
  16. The hitching method according to claim 1, wherein guiding and completing the hitching of the vehicle and the trailer through the limit guide device of the vehicle comprises:
    the hook approaching the towing point under the guidance of the limit guide device;
    in response to the hook reaching a limit sensor provided at the towing point, the limit sensor being triggered and emitting a signal; and
    in response to the vehicle receiving the signal, hitching the towing point and the hook.
  17. The hitching method according to claim 16, wherein the limit guide device comprises splayed (figure-8) limit guides and a tongue guide.
  18. A hitching system comprising a vehicle and a trailer, wherein a data acquisition device, an industrial control device, and a limit guide device are installed on the vehicle, and at least two targets are provided at the front end of the trailer, wherein:
    the vehicle is configured to travel toward the trailer under the drive of the industrial control device and complete the hitching with the trailer;
    the data acquisition device is configured to acquire data of the trailer containing target information, wherein the target information comprises point cloud data corresponding to the targets;
    the industrial control device is configured to calculate coordinate data corresponding to the targets according to the point cloud data corresponding to the targets and determine the position of a hook of the trailer, to calculate the pose of the trailer according to the coordinates of a towing point of the vehicle, and to plan a path for the vehicle according to the pose of the trailer and drive the vehicle toward the hook position of the trailer; and
    the limit guide device is configured to guide the vehicle and complete the hitching of the vehicle and the trailer.
PCT/CN2022/072334 2021-01-19 2022-01-17 Hitching method and hitching system for docking a vehicle with a trailer WO2022156630A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110068304.1A CN112904363B (zh) 2021-01-19 2021-01-19 Method for an autonomous driving vehicle to automatically dock with and hook a trailer
CN202110068304.1 2021-01-19

Publications (1)

Publication Number Publication Date
WO2022156630A1 true WO2022156630A1 (zh) 2022-07-28

Family

ID=76115500

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/072334 WO2022156630A1 (zh) 2021-01-19 2022-01-17 一种用于车辆对接拖车的挂接方法和挂接系统

Country Status (2)

Country Link
CN (1) CN112904363B (zh)
WO (1) WO2022156630A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116413735A (zh) * 2023-06-12 2023-07-11 九曜智能科技(浙江)有限公司 Tracking and docking method for a tractor and a towed target, and electronic device
CN116428996A (zh) * 2023-06-06 2023-07-14 北京斯年智驾科技有限公司 Method and device for detecting the height of a spreader
CN116945826A (zh) * 2023-08-17 2023-10-27 合肥马格勒斯汽车科技发展有限公司 Power-supply trailer control method and system, storage medium and intelligent terminal
CN118470688A (zh) * 2024-07-03 2024-08-09 比亚迪股份有限公司 Vehicle control method, system, device and storage medium

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112904363B (zh) * 2021-01-19 2023-04-25 北京九曜智能科技有限公司 Method for an autonomous driving vehicle to automatically dock with and hook a trailer
CN113296107B (zh) * 2021-06-23 2024-07-23 上海西井科技股份有限公司 Method, system, device and storage medium for cooperative sensor detection of the trailer hitch angle
CN113805194B (zh) * 2021-07-30 2024-03-29 上海西井科技股份有限公司 Composite navigation system, method, device and storage medium for unmanned-vehicle functional components
CN113686331B (zh) * 2021-08-24 2024-07-26 苏州星空位智科技有限公司 System and method for dynamic positioning using automatic pairing technology
CN114115236A (zh) * 2021-10-29 2022-03-01 中国航空工业集团公司洛阳电光设备研究所 Lidar-based automatic docking and navigation device and method for an aircraft tow tractor
CN114966737A (zh) * 2022-04-19 2022-08-30 河北易沃克机器人科技有限公司 Autonomous alignment and docking system and method for two spatial targets
CN116443012B (zh) * 2023-06-13 2023-09-22 九曜智能科技(浙江)有限公司 Docking method for a tractor and side-by-side towed targets, and electronic device
CN116424331B (zh) * 2023-06-13 2023-09-22 九曜智能科技(浙江)有限公司 Docking method for a tractor and a towed target, and electronic device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008064892A1 (de) * 2006-11-29 2008-06-05 Universität Koblenz-Landau Method for determining a position, device and computer program product
US20180147900A1 (en) * 2012-07-05 2018-05-31 Uusi, Llc Vehicle trailer connect system
CN110733302A (zh) * 2018-07-18 2020-01-31 福特全球技术公司 Compensation for trailer coupler height in automated hitching operations
CN111051087A (zh) * 2017-08-31 2020-04-21 塞夫霍兰德有限公司 System for identifying a trailer and assisting the hitching process of a tractor
CN111366947A (zh) * 2018-12-26 2020-07-03 武汉万集信息技术有限公司 Scene recognition method, device and system for navigation lidar
CN111372795A (zh) * 2017-09-25 2020-07-03 大陆汽车系统公司 Automated trailer hitching using image coordinates
CN112904363A (zh) * 2021-01-19 2021-06-04 北京九曜智能科技有限公司 Method for an autonomous driving vehicle to automatically dock with and hook a trailer

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6687609B2 (en) * 2002-06-13 2004-02-03 Navcom Technology, Inc. Mobile-trailer tracking system and method
GB201312038D0 (en) * 2013-07-04 2013-08-21 Jaguar Land Rover Ltd Trailer parameter identification system
CN206106841U (zh) * 2016-09-22 2017-04-19 苏州安井自动化设备有限公司 Automatic hitching mechanism for AGV material-cart towing
US10332002B2 (en) * 2017-03-27 2019-06-25 GM Global Technology Operations LLC Method and apparatus for providing trailer information
IT201700054083A1 (it) * 2017-05-18 2018-11-18 Cnh Ind Italia Spa System and method for automatic coupling between tractor and implement
DE102017112786A1 (de) * 2017-06-09 2018-12-13 Valeo Schalter Und Sensoren Gmbh Method for characterizing a trailer attached to a towing vehicle, driver assistance system, and vehicle-trailer combination
US20200276989A1 (en) * 2017-12-20 2020-09-03 Intel Corporation Computer assisted or autonomous driving (ca/ad) towing vehicles and trailers
CN108278981A (zh) * 2018-02-11 2018-07-13 北京主线科技有限公司 Device and method for detecting the axle deflection angle of a driverless trailer
CN112004696B (zh) * 2018-05-01 2024-04-12 大陆汽车系统公司 Alignment of a towing vehicle and a trailer
US10628690B2 (en) * 2018-05-09 2020-04-21 Ford Global Technologies, Llc Systems and methods for automated detection of trailer properties
US10810445B1 (en) * 2018-06-29 2020-10-20 Zoox, Inc. Pipeline with point cloud filtering
CN112141890A (zh) * 2020-08-19 2020-12-29 太原重工股份有限公司 Automatic hook release and hitching method and system for a crane


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116428996A (zh) * 2023-06-06 2023-07-14 北京斯年智驾科技有限公司 Method and device for detecting the height of a spreader
CN116428996B (zh) * 2023-06-06 2023-09-01 北京斯年智驾科技有限公司 Method and device for detecting the height of a spreader
CN116413735A (zh) * 2023-06-12 2023-07-11 九曜智能科技(浙江)有限公司 Tracking and docking method for a tractor and a towed target, and electronic device
CN116413735B (zh) * 2023-06-12 2023-09-22 九曜智能科技(浙江)有限公司 Tracking and docking method for a tractor and a towed target, and electronic device
CN116945826A (zh) * 2023-08-17 2023-10-27 合肥马格勒斯汽车科技发展有限公司 Power-supply trailer control method and system, storage medium and intelligent terminal
CN116945826B (zh) * 2023-08-17 2024-01-30 合肥马格勒斯汽车科技发展有限公司 Power-supply trailer control method and system, storage medium and intelligent terminal
CN118470688A (zh) * 2024-07-03 2024-08-09 比亚迪股份有限公司 Vehicle control method, system, device and storage medium
CN118470688B (zh) * 2024-07-03 2024-12-10 比亚迪股份有限公司 Vehicle control method, system, device and storage medium

Also Published As

Publication number Publication date
CN112904363B (zh) 2023-04-25
CN112904363A (zh) 2021-06-04

Similar Documents

Publication Publication Date Title
WO2022156630A1 (zh) Hitching method and hitching system for docking a vehicle with a trailer
US20190166338A1 (en) Projection apparatus
EP3380392B1 (en) Auto docking method for application in heavy trucks
CN112004696B (zh) Alignment of a towing vehicle and a trailer
US20190161084A1 (en) Vehicle control apparatus and method
CN109631896A (zh) Autonomous parking localization method for a parking lot based on vehicle vision and motion information
WO2019105665A1 (en) Parking assist method and apparatus
US20190161118A1 (en) Parking assist method and apparatus
CN113228135B (zh) Blind-zone image acquisition method and related terminal device
US20190162545A1 (en) Imaging apparatus and method
WO2021093420A1 (zh) Vehicle navigation method and device, and computer-readable storage medium
US20190161121A1 (en) Parking assist method and apparatus
JP7552101B2 (ja) Industrial vehicle
US20190161119A1 (en) Vehicle parking apparatus
US11766947B2 (en) DC fast charger wireless-charging adapter
US20190164427A1 (en) Docking apparatus
JP2024533966A (ja) Parking assistance for electric vehicles at charging stations
GB2568751A (en) Terrain analysis apparatus and method
GB2568750A (en) Terrain analysis apparatus and method
CN116736335A (zh) Obstacle avoidance method and device for a sweeper vehicle, terminal equipment and storage medium
CN116022129A (zh) Parking method and device, and intelligent driving equipment
CN111650965B (zh) Autonomous return control system for an unmanned aerial vehicle and control method thereof
CN114299466A (zh) Monocular-camera-based vehicle pose determination method and device, and electronic device
CN112550277B (zh) Vehicle and automatic parking system
KR102482613B1 (ko) Dynamically localized sensors for a vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22742109

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21/11/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 22742109

Country of ref document: EP

Kind code of ref document: A1