CN112859826A - Method and apparatus for controlling an automated guided vehicle - Google Patents


Publication number
CN112859826A
Authority
CN
China
Prior art keywords: target, guided vehicle, automated guided, coordinate, actual position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911173759.9A
Other languages
Chinese (zh)
Inventor
任修孟
宋国库
姜雪原
Current Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Priority date
Application filed by Beijing Jingdong Qianshi Technology Co Ltd filed Critical Beijing Jingdong Qianshi Technology Co Ltd
Priority to CN201911173759.9A
Publication of CN112859826A
Legal status: Pending

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The embodiment of the application discloses a method and a device for controlling an automated guided vehicle. One embodiment of the method comprises: acquiring the actual position coordinate and deflection angle of a target automated guided vehicle at the current moment in a preset coordinate system; predicting the position coordinate of the target automated guided vehicle at the next moment as a predicted position coordinate by using the Runge-Kutta algorithm, based on the actual position coordinate and the deflection angle; acquiring the pre-planned position coordinate of the target automated guided vehicle at the next moment, and determining a position error based on the planned position coordinate and the predicted position coordinate; determining a target position coordinate based on the position error and the actual position coordinate; and transmitting control information to the target automated guided vehicle based on the target position coordinate. This embodiment improves the trajectory precision of the automated guided vehicle while it is driving and reduces the probability that it derails from its planned path.

Description

Method and apparatus for controlling an automated guided vehicle
Technical Field
The embodiment of the application relates to the technical field of computers, and in particular to a method and a device for controlling an automated guided vehicle.
Background
With the rapid development of electronic commerce and new retail, Automated Guided Vehicles (AGVs) are widely used. When an automated guided vehicle runs in a warehouse, its two wheels may run asynchronously for reasons such as uneven warehouse flooring, so that the vehicle deviates from its pre-planned running path. If the running path is not corrected in time, the automated guided vehicle may derail.
Disclosure of Invention
The embodiment of the application provides a method and a device for controlling an automated guided vehicle.
In a first aspect, embodiments of the present application provide a method for controlling an automated guided vehicle, including: acquiring the actual position coordinate and deflection angle of a target automated guided vehicle at the current moment in a preset coordinate system; predicting the position coordinate of the target automated guided vehicle at the next moment as a predicted position coordinate by using the Runge-Kutta algorithm, based on the actual position coordinate and the deflection angle; acquiring the pre-planned position coordinate of the target automated guided vehicle at the next moment, and determining a position error based on the planned position coordinate and the predicted position coordinate; determining a target position coordinate based on the position error and the actual position coordinate; and transmitting control information to the target automated guided vehicle based on the target position coordinate.
In some embodiments, the predetermined coordinate system has a start point of the target automated guided vehicle as an origin of coordinates and a start traveling direction of the target automated guided vehicle as a positive direction of the abscissa axis.
In some embodiments, predicting the position coordinate of the target automated guided vehicle at the next moment as the predicted position coordinate using the Runge-Kutta algorithm, based on the actual position coordinate and the deflection angle, includes: acquiring the rotation angle of the target automated guided vehicle and its running speed in a target period, where the target period is the communication period with the target automated guided vehicle; determining, for each target deflection angle, the lengths of the projections onto the abscissa and ordinate axes of the coordinate system of the path travelled at the running speed, where the target deflection angles are determined based on the deflection angle and the rotation angle; determining the weighted average of the preset weights corresponding to the target deflection angles and the corresponding projection lengths on the abscissa axis as a first average value, and determining the sum of the abscissa value in the actual position coordinate and the first average value as the abscissa value of the target automated guided vehicle at the next moment; and determining the weighted average of the weights corresponding to the target deflection angles and the corresponding projection lengths on the ordinate axis as a second average value, and determining the sum of the ordinate value in the actual position coordinate and the second average value as the ordinate value of the target automated guided vehicle at the next moment.
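The projection-and-weighted-average update described above can be sketched as follows. The text leaves the preset weights and the set of target deflection angles unspecified; the values below follow the classical fourth-order Runge-Kutta weighting and are an assumption for illustration, as are all function and parameter names.

```python
import math

def predict_next_position(x, y, theta, v, delta_theta, period,
                          weights=(1/6, 2/6, 2/6, 1/6)):
    """Predict the AGV's next position by weighting the projections of
    the path v * period onto the coordinate axes at several target
    deflection angles (a sketch; angles/weights assume the RK4 scheme)."""
    # Target deflection angles derived from the current deflection angle
    # theta and the rotation angle delta_theta over one target period.
    angles = (theta,
              theta + delta_theta / 2,
              theta + delta_theta / 2,
              theta + delta_theta)
    path = v * period  # path length travelled in one communication period
    # Weighted averages of the projections on the abscissa/ordinate axes.
    dx = sum(w * path * math.cos(a) for w, a in zip(weights, angles))
    dy = sum(w * path * math.sin(a) for w, a in zip(weights, angles))
    return x + dx, y + dy
```

With a zero rotation angle the vehicle travels straight, so the update reduces to moving v * period along the current heading.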
In some embodiments, the rotation angle is determined based on the wheel diameter of the target automated guided vehicle, the change in the wheel motor encoder readings while the target automated guided vehicle travels during a target period, the resolution of the wheel motor encoder of the target automated guided vehicle, the mechanical reduction ratio of the target automated guided vehicle, and the spacing between the two wheels of the target automated guided vehicle.
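One plausible reading of the quantities listed above is the standard differential-drive odometry relation. The patent only names the inputs, not the formula, so the exact combination shown here is an assumption, as are the function and parameter names.

```python
import math

def rotation_angle(delta_left, delta_right, wheel_diameter,
                   encoder_resolution, reduction_ratio, wheel_spacing):
    """Estimate the AGV's rotation angle (radians) over one target period
    from the per-wheel encoder changes (standard differential-drive
    odometry; a sketch, not the patent's exact formula)."""
    # Encoder ticks -> wheel revolutions -> arc length along the floor.
    left = (math.pi * wheel_diameter * delta_left
            / (encoder_resolution * reduction_ratio))
    right = (math.pi * wheel_diameter * delta_right
             / (encoder_resolution * reduction_ratio))
    # The difference between the two wheel paths, divided by the wheel
    # spacing, gives the change of heading.
    return (right - left) / wheel_spacing
```

Equal encoder changes on both wheels yield zero rotation; a faster right wheel yields a positive (counter-clockwise) rotation under this convention.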
In some embodiments, acquiring the actual position coordinate and the deflection angle of the target automated guided vehicle at the current moment in the preset coordinate system includes: acquiring an image presenting a target identifier; identifying the actual position information of the target automated guided vehicle at the current moment from the target identifier, and determining the driving direction of the target automated guided vehicle at the current moment from the image; determining the actual position coordinate of the target automated guided vehicle in the preset coordinate system at the current moment based on the actual position information; and determining the deflection angle of the target automated guided vehicle in the coordinate system at the current moment based on the driving direction.
In a second aspect, an embodiment of the present application provides an apparatus for controlling an automated guided vehicle, including: an acquisition unit configured to acquire the actual position coordinate and deflection angle of a target automated guided vehicle at the current time in a preset coordinate system; a prediction unit configured to predict the position coordinate of the target automated guided vehicle at the next time as a predicted position coordinate using the Runge-Kutta algorithm, based on the actual position coordinate and the deflection angle; a first determination unit configured to acquire the pre-planned position coordinate of the target automated guided vehicle at the next time, and determine a position error based on the planned position coordinate and the predicted position coordinate; a second determination unit configured to determine a target position coordinate based on the position error and the actual position coordinate; and a transmitting unit configured to transmit control information to the target automated guided vehicle based on the target position coordinate.
In some embodiments, the predetermined coordinate system has a start point of the target automated guided vehicle as an origin of coordinates and a start traveling direction of the target automated guided vehicle as a positive direction of the abscissa axis.
In some embodiments, the prediction unit is further configured to predict the position coordinate of the target automated guided vehicle at the next time as the predicted position coordinate using the Runge-Kutta algorithm, based on the actual position coordinate and the deflection angle, as follows: acquiring the rotation angle of the target automated guided vehicle and its running speed in a target period, where the target period is the communication period with the target automated guided vehicle; determining, for each target deflection angle, the lengths of the projections onto the abscissa and ordinate axes of the coordinate system of the path travelled at the running speed, where the target deflection angles are determined based on the deflection angle and the rotation angle; determining the weighted average of the preset weights corresponding to the target deflection angles and the corresponding projection lengths on the abscissa axis as a first average value, and determining the sum of the abscissa value in the actual position coordinate and the first average value as the abscissa value of the target automated guided vehicle at the next moment; and determining the weighted average of the weights corresponding to the target deflection angles and the corresponding projection lengths on the ordinate axis as a second average value, and determining the sum of the ordinate value in the actual position coordinate and the second average value as the ordinate value of the target automated guided vehicle at the next moment.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device, on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any implementation manner of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
According to the method and the device for controlling the automated guided vehicle, the actual position coordinate and deflection angle of the target automated guided vehicle at the current moment in the preset coordinate system are obtained first; then the position coordinate of the target automated guided vehicle at the next moment is predicted as a predicted position coordinate by using the Runge-Kutta algorithm, based on the actual position coordinate and the deflection angle; then the pre-planned position coordinate of the target automated guided vehicle at the next moment is acquired, and a position error is determined based on the planned position coordinate and the predicted position coordinate; then a target position coordinate is determined based on the position error and the actual position coordinate; and finally, control information is transmitted to the target automated guided vehicle based on the target position coordinate. By predicting the position coordinate of the automated guided vehicle at the next moment, estimating the position error it may produce at the next moment in combination with the pre-planned position coordinate, and correcting that error at the current moment, the trajectory precision of the automated guided vehicle while driving is improved and the probability that it derails is reduced.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which various embodiments of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for controlling an automated guided vehicle according to the present application;
FIG. 3 is a schematic illustration of one application scenario of a method for controlling an automated guided vehicle according to the present application;
FIG. 4 is a flow chart of predicting a position coordinate of a target automated guided vehicle at a next time in a method for controlling an automated guided vehicle according to the present application;
FIG. 5 is a schematic illustration of a travel trajectory of a target automated guided vehicle in a method for controlling an automated guided vehicle according to the present application, and of determining the rotation angle of the target automated guided vehicle based on that travel trajectory;
FIG. 6 is a schematic structural diagram of one embodiment of an apparatus for controlling an automated guided vehicle according to the present application;
FIG. 7 is a block diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the present method for controlling an automated guided vehicle may be applied.
As shown in fig. 1, the system architecture 100 may include an automated guided vehicle 101, a network 102, and a server 103 that provides support for the automated guided vehicle 101. An in-vehicle smart device 104 and a driver 105 may be provided in the automated guided vehicle 101. Network 102 is the medium used to provide a communication link between in-vehicle smart device 104 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, global positioning systems, or fiber optic cables, to name a few.
The onboard intelligent device 104 is provided with a control system of the automated guided vehicle 101. The control system may send control information to the driver 105 to control the automated guided vehicle 101 to travel. In-vehicle smart device 104 may interact with server 103 via network 102 to receive information such as control information.
Various sensors such as an obstacle sensor, an imaging device, a gyroscope, and an accelerometer may be mounted on the automated guided vehicle 101. It should be noted that the automated guided vehicle 101 may be further equipped with various types and functions of sensors other than those listed above, and the details are not described here.
The in-vehicle smart device 104 may be hardware or software. When the in-vehicle smart device 104 is hardware, it may be an electronic device supporting information interaction. When the in-vehicle smart device 104 is software, it may be installed in the electronic device. It may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
The server 103 may be a server that provides various services, such as a server that transmits control information to the in-vehicle smart device 104 mounted on the automated guided vehicle 101. The server 103 may first acquire the actual position coordinate and deflection angle of the automated guided vehicle 101 at the current time in a preset coordinate system; then predict the position coordinate of the automated guided vehicle 101 at the next time as a predicted position coordinate by using the Runge-Kutta algorithm, based on the actual position coordinate and the deflection angle; then acquire the pre-planned position coordinate of the automated guided vehicle 101 at the next time, and determine a position error based on the planned position coordinate and the predicted position coordinate; then determine a target position coordinate based on the position error and the actual position coordinate; and finally transmit control information to the in-vehicle smart device 104 of the automated guided vehicle 101 based on the target position coordinate, so that the in-vehicle smart device 104 drives the driver 105 using the control information.
The server 103 may be hardware or software. When the server 103 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 103 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be noted that the method for controlling the automated guided vehicle provided in the embodiment of the present application is generally performed by the server 103.
It should be understood that the number of automated guided vehicles, onboard intelligent devices, drives, networks, and servers in FIG. 1 are merely illustrative. There may be any number of automated guided vehicles, onboard intelligent devices, drives, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for controlling an automated guided vehicle according to the present application is shown. The method for controlling an automated guided vehicle includes the steps of:
step 201, acquiring an actual position coordinate and a deflection angle of the target automated guided vehicle at the current moment in a preset coordinate system.
In the present embodiment, an executing agent (e.g., the server 103 shown in fig. 1) of the method for controlling the automated guided vehicle may acquire the actual position coordinate and deflection angle of the target automated guided vehicle at the current time in a preset coordinate system. An unmanned transport vehicle, also referred to as an automated guided vehicle (AGV), is a vehicle equipped with an electromagnetic or optical automatic guidance device that can travel along a predetermined guide path and has safety protection and various transfer functions. The target automated guided vehicle may be an automated guided vehicle that is traveling in a warehouse at the present time.
In this embodiment, the preset coordinate system may be a two-dimensional coordinate system having an abscissa axis and an ordinate axis that are parallel to two adjacent sides of the warehouse where the target automated guided vehicle is located. The execution body may acquire an actual position coordinate and a deflection angle of the target automated guided vehicle at a current time through a Global Positioning System (GPS). The deflection angle is generally an angle between the direction of the target automated guided vehicle and the initial traveling direction of the target automated guided vehicle.
Step 202, predicting the position coordinate of the target automated guided vehicle at the next moment as a predicted position coordinate by using the Runge-Kutta algorithm, based on the actual position coordinate and the deflection angle.
In this embodiment, the executing body may predict the position coordinate of the target automated guided vehicle at the next time as a predicted position coordinate by using the Runge-Kutta algorithm, based on the actual position coordinate and the deflection angle acquired in step 201. The Runge-Kutta algorithm is a high-precision single-step algorithm widely applied in engineering, and it suppresses accumulated error. Given the derivative function and the initial-value information, it can be applied in computer simulation, avoiding the complex process of solving the differential equation analytically. The most classical variant is the fourth-order Runge-Kutta algorithm (RK4).
Here, the initial value problem may be expressed as the following formula (1):
y' = f(t, y),  y(t_n) = y_n    (1)
Then, RK4 for the problem of predicting the position coordinate of the target automated guided vehicle at the next time is given by the following equation (2):
y_{n+1} = y_n + (t/6)(k_1 + 2k_2 + 2k_3 + k_4)    (2)
wherein k_1, k_2, k_3 and k_4 in equation (2) can be determined by the following equations (3), (4), (5) and (6), respectively:
k_1 = f(t_n, y_n)    (3)
k_2 = f(t_n + t/2, y_n + (t/2)k_1)    (4)
k_3 = f(t_n + t/2, y_n + (t/2)k_2)    (5)
k_4 = f(t_n + t, y_n + t·k_3)    (6)
wherein y_n is a coordinate value (abscissa or ordinate) in the actual position coordinate at the current time, y_{n+1} is the corresponding coordinate value in the predicted position coordinate at the next time, t_n is the time of the current moment, and t is the communication period. k_1 is the slope at the start of a communication period; k_2 is the slope at the midpoint of the period, obtained by the Euler method using slope k_1 to estimate the value of y at the point t_n + t/2; k_3 is again the slope at the midpoint, obtained using slope k_2 to estimate the value of y at t_n + t/2; k_4 is the slope at the end of the period, obtained using slope k_3 to estimate the value of y at t_n + t; n is a natural number.
Thus, the value y_{n+1} at the next moment equals the value y_n at the current moment plus the product of the time interval t and an estimated (averaged) slope.
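Equations (2)-(6) correspond to one step of the classical fourth-order Runge-Kutta method, which can be sketched as follows (function names are illustrative):

```python
def rk4_step(f, t_n, y_n, t):
    """One step of the classical fourth-order Runge-Kutta method for
    y' = f(t, y), matching equations (2)-(6); t is the communication
    period used as the step size."""
    k1 = f(t_n, y_n)                       # slope at the start of the period
    k2 = f(t_n + t / 2, y_n + t / 2 * k1)  # slope at the midpoint, via k1
    k3 = f(t_n + t / 2, y_n + t / 2 * k2)  # slope at the midpoint, via k2
    k4 = f(t_n + t, y_n + t * k3)          # slope at the end of the period
    return y_n + t / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

For example, integrating y' = y from y(0) = 1 with step 0.1 reproduces e^0.1 to within about 1e-7, illustrating the method's fourth-order accuracy.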
The execution agent may calculate the slope at each of several predetermined times — for example, the slope at the start of a time period, the slope at one third of the period, the slope at two thirds of the period, and the slope at the end of the period — and determine a weighted average of these slopes to predict the position coordinate of the target automated guided vehicle at the next moment.
And step 203, acquiring planned position coordinates of the target unmanned transport vehicle planned in advance at the next moment, and determining a position error based on the planned position coordinates and the predicted position coordinates.
In this embodiment, the executing body may plan the traveling route of the target automated guided vehicle after acquiring its start and end positions. Then, a curve representing the correspondence between position and time can be fitted according to the configured acceleration and maximum speed. The curve is divided according to the communication cycle, and when the communication time is reached, the planned position coordinate is sent to the target automated guided vehicle. Here, the execution body generally transmits the planned position coordinate to the driver (servo driver) of the target automated guided vehicle. It should be noted that the next moment is usually separated from the current moment by one communication cycle.
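Sampling the fitted position-time curve at each communication instant might be sketched as follows. The fitting procedure itself is not specified in the text, so the curve is left abstract here and the function names are assumptions.

```python
def planned_positions(position_of_time, total_time, period):
    """Sample a fitted position-vs-time curve at each communication
    instant, yielding the planned position coordinate to send to the AGV
    at that moment (a sketch; the curve's trapezoidal shape from the
    acceleration and maximum-speed settings is left abstract)."""
    t, samples = 0.0, []
    while t <= total_time:
        samples.append(position_of_time(t))
        t += period
    return samples
```

For instance, a constant-speed curve position_of_time = 2t sampled every 0.5 s over 1 s yields the planned positions 0.0, 1.0 and 2.0.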
In this embodiment, the executing agent may obtain the planned position coordinates of the target automated guided vehicle at the next time using the curve representing the correspondence between the position and the time. Thereafter, the execution body may determine a position error based on the planned position coordinates and the predicted position coordinates predicted in step 202. Specifically, the execution body may determine a difference between an abscissa value in the planned position coordinates and an abscissa value in the predicted position coordinates as a position error of the abscissa. The execution body may determine a difference between a vertical coordinate value in the planned position coordinates and a vertical coordinate value in the predicted position coordinates as a position error of the vertical coordinate.
Here, the above-described execution body may determine the position error of the abscissa by the following formula (7), and the position error of the ordinate by the following formula (8):
e_x = x'_{i+1} - x_{i+1}    (7)
e_y = y'_{i+1} - y_{i+1}    (8)
wherein e_x is the position error of the abscissa, x'_{i+1} is the abscissa value in the planned position coordinate, x_{i+1} is the abscissa value in the predicted position coordinate, e_y is the position error of the ordinate, y'_{i+1} is the ordinate value in the planned position coordinate, and y_{i+1} is the ordinate value in the predicted position coordinate.
And step 204, determining target position coordinates based on the position error and the actual position coordinates.
In this embodiment, the execution subject may determine the target position coordinates based on the position error determined in step 203 and the actual position coordinates. Specifically, the execution body may determine a sum of the position error of the abscissa and the abscissa value in the actual position coordinate as the abscissa value of the target position coordinate. The execution body may determine a sum of the position error of the ordinate and the ordinate value in the actual position coordinate as the ordinate value of the target position coordinate.
Here, the execution body described above may determine the target position coordinate by the following formulas (9) and (10):
x''_{i+1} = x_i + e_x    (9)
y''_{i+1} = y_i + e_y    (10)
wherein x''_{i+1} is the abscissa value of the target position coordinate, x_i is the abscissa value in the actual position coordinate, e_x is the position error of the abscissa, y''_{i+1} is the ordinate value of the target position coordinate, y_i is the ordinate value in the actual position coordinate, and e_y is the position error of the ordinate.
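Equations (7)-(10) can be combined into one small correction step. The function name and the (x, y)-tuple convention below are illustrative assumptions:

```python
def correction_target(actual, predicted, planned):
    """Combine equations (7)-(10): the position error is the planned
    coordinate minus the predicted coordinate, and the target coordinate
    is the actual coordinate shifted by that error."""
    (x_i, y_i), (x_p, y_p), (x_pl, y_pl) = actual, predicted, planned
    e_x = x_pl - x_p              # equation (7)
    e_y = y_pl - y_p              # equation (8)
    return x_i + e_x, y_i + e_y   # equations (9) and (10)
```

For example, with actual (20, 35), predicted (24, 38) and planned (23, 40), the target coordinate is (19, 37).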
Step 205, based on the target position coordinates, sends control information to the target automated guided vehicle.
In this embodiment, the executing agent may transmit control information to the target automated guided vehicle based on the target position coordinates determined in step 204. After the target automated guided vehicle receives the control information, the driver of the target automated guided vehicle can be driven by the control information, so that the target automated guided vehicle reaches the target position coordinate, and the position of the target automated guided vehicle is corrected at the current moment. Specifically, the executing agent may plan the route to be traveled of the target automated guided vehicle with the actual position coordinates as a starting point and the target position coordinates as an ending point. The execution agent may transmit control information to the target automated guided vehicle according to the route to be traveled. The control information may include, but is not limited to: direction of travel and speed of travel.
In some optional implementations of this embodiment, identifiers indicating the position information of their respective locations — for example, two-dimensional codes, QR codes or checkerboards — may be attached to the floor of the warehouse where the target automated guided vehicle is located. The target identifier is such an identifier indicating the position information of the corresponding location. Target identifiers may be attached at designated locations on the warehouse floor, spaced apart by a predetermined distance. The execution body may acquire the actual position coordinate and deflection angle of the target automated guided vehicle at the current time in the preset coordinate system as follows. The execution body may acquire an image in which the target identifier is present after a camera mounted on the target automated guided vehicle captures an image containing the target identifier. The target identifier may then be read; since it stores position information indicating the corresponding location, the execution body can recognize the actual position information of the target automated guided vehicle at the current time from the target identifier. The execution body may further determine the driving direction of the target automated guided vehicle at the current time from the image, for example from the rotation of the target identifier in the image, fused with electronic compass data.
Then, the execution agent may determine an actual position coordinate of the target automated guided vehicle in a preset coordinate system at a current time based on the actual position information. The actual position information is generally position information of a position where the target automated guided vehicle is located in the warehouse. The actual position indicated by the actual position information may be characterized as a coordinate point. For example, if the preset coordinate system is a warehouse coordinate system, for example, a designated point in the warehouse is taken as an origin of coordinates, and the abscissa axis and the ordinate axis are respectively parallel to two adjacent sides of the warehouse where the target automated guided vehicle is located, the executing entity may determine coordinates of a coordinate point representing an actual position indicated by the actual position information as actual position coordinates of the target automated guided vehicle in the coordinate system at the current time. If the predetermined coordinate system is a first coordinate system in which the start point of the target automated guided vehicle is an origin of coordinates and the start traveling direction of the target automated guided vehicle is a positive direction of an abscissa axis, the executing body may determine a relative position of a coordinate point of the actual position indicated by the actual position information with respect to a coordinate point of the actual position indicated by the actual position information of the target automated guided vehicle at the initial time, and determine an actual position coordinate of the target automated guided vehicle at the current time in the first coordinate system.
Finally, the execution body may determine the deflection angle of the target automated guided vehicle in the coordinate system at the current time based on the traveling direction. The deflection angle is generally the angle between the traveling direction of the target automated guided vehicle and a reference direction. For example, if the reference direction is the starting traveling direction of the target automated guided vehicle, the execution body may take the angle between the traveling direction at the current time and the starting traveling direction as the deflection angle of the target automated guided vehicle in the coordinate system at the current time.
With continued reference to fig. 3, fig. 3 is a schematic view of an application scenario of the method for controlling an automated guided vehicle according to the present embodiment. In the application scenario of fig. 3, the server 301 may first acquire the actual position coordinates 303 and the deflection angle 304 of the automated guided vehicle 302 in the preset warehouse coordinate system at the current moment. Here, the actual position coordinates 303 may be (20, 35), and the deflection angle 304 may be 30 degrees. It should be noted that the deflection angle 304 may be measured from the positive direction of the abscissa axis of the warehouse coordinate system. Thereafter, the server 301 may predict the position coordinates of the automated guided vehicle 302 at the next time as the predicted position coordinates 305 using the Runge-Kutta algorithm based on the actual position coordinates 303 and the deflection angle 304. Here, the predicted position coordinates 305 obtained by the Runge-Kutta algorithm are (24, 38). Then, the server 301 may acquire the pre-planned position coordinates 306 of the automated guided vehicle 302 at the next time, namely (23, 40). The server 301 may determine the difference −1 between the abscissa value 23 in the planned position coordinates 306 and the abscissa value 24 in the predicted position coordinates 305 as the abscissa position error 307, and the difference 2 between the ordinate value 40 in the planned position coordinates 306 and the ordinate value 38 in the predicted position coordinates 305 as the ordinate position error. The server 301 may then determine the target position coordinates 308 based on the position error 307 and the actual position coordinates 303.
Specifically, the server 301 may determine the sum 19 of the abscissa position error −1 and the abscissa value 20 of the actual position coordinates 303 as the abscissa value of the target position coordinates 308, and the sum 37 of the ordinate position error 2 and the ordinate value 35 of the actual position coordinates 303 as the ordinate value of the target position coordinates 308, so that the target position coordinates 308 are (19, 37). Finally, the server 301 may send control information 309 to the automated guided vehicle 302 based on the target position coordinates 308. Specifically, the server 301 may plan a path for the automated guided vehicle 302 with the actual position coordinates (20, 35) as the start point and the target position coordinates (19, 37) as the end point, and may transmit control information 309 such as a travel direction and a travel speed to the automated guided vehicle 302 in accordance with the planned path.
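The correction step in this scenario is simple coordinate arithmetic. A minimal Python sketch (the function name and tuple layout are illustrative, not from the patent) reproduces the numbers above:

```python
def correct_position(actual, predicted, planned):
    """Corrected target coordinate for the next control step.

    The position error is (planned - predicted); adding it to the
    actual position pre-compensates the deviation the vehicle is
    predicted to accumulate by the next moment.
    """
    error_x = planned[0] - predicted[0]
    error_y = planned[1] - predicted[1]
    return (actual[0] + error_x, actual[1] + error_y)

# Values from the fig. 3 scenario: actual (20, 35), predicted (24, 38),
# planned (23, 40) -> target (19, 37).
target = correct_position((20, 35), (24, 38), (23, 40))
```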
According to the method provided by this embodiment of the application, the position coordinates of the automated guided vehicle at the next moment are predicted, the position error that the automated guided vehicle may produce at the next moment is estimated by comparison with the pre-planned position coordinates, and the error is corrected at the current moment, thereby improving the trajectory accuracy of the automated guided vehicle while traveling and reducing the probability that it deviates from its planned path.
With further reference to fig. 4, fig. 4 is a flow chart 400 for predicting a position coordinate of a target automated guided vehicle at a next time in a method for controlling an automated guided vehicle according to the present application. As shown in fig. 4, in the present embodiment, the step of predicting the position coordinates of the target automated guided vehicle at the next time includes:
step 401, obtaining a self-rotation angle of the target automated guided vehicle in the target period and a running speed of the target automated guided vehicle.
In the present embodiment, an executing body (e.g., the server 103 shown in fig. 1) of the method for controlling the automated guided vehicle may acquire the self-rotation angle of the target automated guided vehicle within the target period and the traveling speed of the target automated guided vehicle. The target automated guided vehicle may be an automated guided vehicle currently traveling in the warehouse. The target period may be the communication period between the executing body and the target automated guided vehicle. The self-rotation angle may be the product of the target period and the angular velocity of rotation caused by the two driving wheels running out of sync.
Step 402, for each target deflection angle, determining the projection lengths, on the abscissa axis and the ordinate axis of the coordinate system respectively, of the travel path covered by the target automated guided vehicle traveling at the traveling speed for the target period.

In this embodiment, for each target deflection angle, the executing body may determine the projection lengths, on the abscissa axis and the ordinate axis of the coordinate system respectively, of the travel path covered by the target automated guided vehicle traveling at the traveling speed for the target period.
Here, each target deflection angle may be determined based on the deflection angle and the self-rotation angle. The target deflection angles may include the deflection angle of the target automated guided vehicle at the start of one communication period (the deflection angle itself), at the midpoint of one communication period (the deflection angle plus half of the self-rotation angle), and at the end of one communication period (the deflection angle plus the self-rotation angle).
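The midpoint angle is evaluated twice in the fourth-order scheme of equations (11) to (18), so the three angles above yield four evaluation points. A small sketch (the function name is illustrative):

```python
def target_deflection_angles(theta, spin_angle):
    """Evaluation angles over one communication period: start, midpoint
    (counted twice, matching the two midpoint terms of the fourth-order
    Runge-Kutta scheme), and end of the period."""
    return [theta,
            theta + spin_angle / 2,
            theta + spin_angle / 2,
            theta + spin_angle]
```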
In this embodiment, the executing body may determine the projection lengths, on the abscissa axis of the coordinate system, of the travel path covered by the target automated guided vehicle traveling at the traveling speed in the target period at the respective target deflection angles by the following equations (11), (12), (13), and (14):

Δx₁ = V_L·t·cos(θ)　　(11)

Δx₂ = V_L·t·cos(θ + V_ω·t/2)　　(12)

Δx₃ = V_L·t·cos(θ + V_ω·t/2)　　(13)

Δx₄ = V_L·t·cos(θ + V_ω·t)　　(14)

wherein Δx₁ is the projection length on the abscissa axis of the coordinate system of the travel path covered by the target automated guided vehicle traveling at the traveling speed in the target period when its deflection angle is θ; Δx₂ and Δx₃ are the corresponding projection lengths when its deflection angle is θ + V_ω·t/2; Δx₄ is the corresponding projection length when its deflection angle is θ + V_ω·t; V_L is the traveling speed of the target automated guided vehicle; θ is the deflection angle of the target automated guided vehicle in the preset coordinate system at the current moment; V_ω is the angular velocity of rotation of the target automated guided vehicle caused by the two driving wheels running out of sync; and t is the communication period between the executing body and the target automated guided vehicle.
In this embodiment, the executing body may determine the projection lengths, on the ordinate axis of the coordinate system, of the travel path covered by the target automated guided vehicle traveling at the traveling speed in the target period at the respective target deflection angles by the following equations (15), (16), (17), and (18):

Δy₁ = V_L·t·sin(θ)　　(15)

Δy₂ = V_L·t·sin(θ + V_ω·t/2)　　(16)

Δy₃ = V_L·t·sin(θ + V_ω·t/2)　　(17)

Δy₄ = V_L·t·sin(θ + V_ω·t)　　(18)

wherein Δy₁ is the projection length on the ordinate axis of the coordinate system of the travel path covered by the target automated guided vehicle traveling at the traveling speed in the target period when its deflection angle is θ; Δy₂ and Δy₃ are the corresponding projection lengths when its deflection angle is θ + V_ω·t/2; Δy₄ is the corresponding projection length when its deflection angle is θ + V_ω·t; V_L is the traveling speed of the target automated guided vehicle; θ is the deflection angle of the target automated guided vehicle in the preset coordinate system at the current moment; V_ω is the angular velocity of rotation of the target automated guided vehicle caused by the two driving wheels running out of sync; and t is the communication period between the executing body and the target automated guided vehicle.
Step 403, determining, as a first average, the weighted average of the projection lengths on the abscissa axis using the preset weight corresponding to each target deflection angle, and determining the sum of the abscissa value in the actual position coordinates and the first average as the abscissa value of the target automated guided vehicle at the next moment.

In this embodiment, the executing body may determine, as a first average, the weighted average of the projection lengths of the projections on the abscissa axis using the preset weight corresponding to each target deflection angle, and may determine the sum of the first average and the abscissa value in the actual position coordinates as the abscissa value of the target automated guided vehicle at the next moment. The executing body acquires the preset weights corresponding to the target deflection angles in advance; for the fourth-order Runge-Kutta scheme, these weights are 1/6, 1/3, 1/3, and 1/6.
Here, the executing body may determine the abscissa value of the target automated guided vehicle at the next moment by the following equation (19):

x_{i+1} = x_i + (Δx₁ + 2·Δx₂ + 2·Δx₃ + Δx₄)/6　　(19)

wherein x_{i+1} is the abscissa value of the target automated guided vehicle at the next moment, and x_i is the abscissa value in the actual position coordinates. The parameters Δx₁ through Δx₄ in equation (19) have the same meanings as the corresponding parameters in equations (11) to (14), and are not described again here.
Step 404, determining, as a second average, the weighted average of the projection lengths on the ordinate axis using the preset weight corresponding to each target deflection angle, and determining the sum of the ordinate value in the actual position coordinates and the second average as the ordinate value of the target automated guided vehicle at the next moment.

In this embodiment, the executing body may determine, as a second average, the weighted average of the projection lengths of the projections on the ordinate axis using the preset weight corresponding to each target deflection angle, and may determine the sum of the ordinate value in the actual position coordinates and the second average as the ordinate value of the target automated guided vehicle at the next moment.
Here, the executing body may determine the ordinate value of the target automated guided vehicle at the next moment by the following equation (20):

y_{i+1} = y_i + (Δy₁ + 2·Δy₂ + 2·Δy₃ + Δy₄)/6　　(20)

wherein y_{i+1} is the ordinate value of the target automated guided vehicle at the next moment, and y_i is the ordinate value in the actual position coordinates. The parameters Δy₁ through Δy₄ in equation (20) have the same meanings as the corresponding parameters in equations (15) to (18), and are not described again here.
In some optional implementations of this embodiment, the preset coordinate system may use a start point of the target automated guided vehicle (a start point of the current transportation) as an origin of coordinates, and use a start traveling direction of the target automated guided vehicle as a positive direction of the abscissa axis.
In some optional implementations of the embodiment, the self-rotation angle may be determined by the executing body based on the wheel diameter of the target automated guided vehicle, the change values of the wheel-motor encoders while the target automated guided vehicle travels for the target period, the resolution of the wheel-motor encoder, the mechanical reduction ratio, and the distance between the two wheels of the target automated guided vehicle.
Specifically, the executing body may determine the self-rotation angle of the target automated guided vehicle as follows.

Suppose that, in one communication period, the change values of the two wheel-motor encoders are S1 and S2, the resolution of the motor encoder (the encoder change value for one revolution of the motor) is S, the mechanical reduction ratio is R, the distance between the two wheels is L, and the wheel diameter is D. The mechanical reduction ratio may be the transmission ratio of the speed reducer, that is, the ratio of the instantaneous input speed to the output speed of the reducer mechanism.
As shown in fig. 5, fig. 5 is a schematic view of the travel trajectory of the target automated guided vehicle in the method for controlling an automated guided vehicle according to the present application, and of determining the self-rotation angle of the target automated guided vehicle based on that trajectory.
In fig. 5, arc I is the travel track of the wheels on one side of the target automated guided vehicle, arc P is the travel track of the wheels on the other side, and arc I is parallel to arc P. Extending the lines through the corresponding end points of arc I and arc P until they meet gives the common center of the two arcs, where ω is the self-rotation angle of the target automated guided vehicle and r is the length of the extension line (the distance between the end point of arc I and the center).
From the arc length formula, the following equations (21) and (22) can be derived:

(S1/(S·R))·π·D = ω·r　　(21)

(S2/(S·R))·π·D = ω·(r + L)　　(22)

where the left-hand sides are the distances traveled by the two wheels in one communication period. Equations (21) and (22) form a pair of equations in the two unknowns r and ω; dividing one by the other eliminates ω and yields equation (23):

r = S1·L/(S2 − S1)　　(23)

Substituting r back into equation (21), the following equation (24) is derived:

ω = π·D·(S2 − S1)/(S·R·L)　　(24)

Accordingly, the executing body may determine the self-rotation angle of the automated guided vehicle by the following equation (25):

h = π·D·(S2 − S1)/(S·R·L)　　(25)

wherein h is the self-rotation angle of the target automated guided vehicle in the target period.
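A sketch of equation (25), computing the self-rotation angle from the wheel-encoder increments (the parameter names are illustrative, and the sign of the encoder difference only encodes the turning direction):

```python
import math

def self_rotation_angle(s1, s2, s, reduction, wheel_gap, diameter):
    """Self-rotation angle (radians) over one communication period.

    s1, s2: encoder change values of the two wheel motors in the period;
    s: encoder counts per motor revolution; reduction: mechanical
    reduction ratio; wheel_gap: distance between the two wheels;
    diameter: wheel diameter. Each wheel travels pi * diameter * delta /
    (s * reduction); the difference in travel divided by the wheel gap
    is the body's rotation angle.
    """
    return math.pi * diameter * (s2 - s1) / (s * reduction * wheel_gap)
```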
According to the method provided by this embodiment of the application, the position coordinates of the target automated guided vehicle at the next moment are predicted using the fourth-order Runge-Kutta algorithm. This prediction method further improves the trajectory accuracy of the automated guided vehicle while traveling and further reduces the probability that it deviates from its planned path.
With further reference to fig. 6, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for controlling an automated guided vehicle, which corresponds to the embodiment of the method shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 6, the apparatus 600 for controlling an automated guided vehicle of the present embodiment includes: an acquisition unit 601, a prediction unit 602, a first determination unit 603, a second determination unit 604, and a transmission unit 605. The obtaining unit 601 is configured to obtain the actual position coordinates and the deflection angle of the target automated guided vehicle at the current moment in a preset coordinate system; the prediction unit 602 is configured to predict, as predicted position coordinates, the position coordinates of the target automated guided vehicle at the next time using the Runge-Kutta algorithm based on the actual position coordinates and the deflection angle; the first determination unit 603 is configured to acquire the pre-planned position coordinates of the target automated guided vehicle at the next time and determine a position error based on the planned position coordinates and the predicted position coordinates; the second determining unit 604 is configured to determine target position coordinates based on the position error and the actual position coordinates; the transmission unit 605 is configured to transmit control information to the target automated guided vehicle based on the target position coordinates.
In the present embodiment, specific processes of the acquisition unit 601, the prediction unit 602, the first determination unit 603, the second determination unit 604, and the transmission unit 605 of the apparatus 600 for controlling an automated guided vehicle may refer to step 201, step 202, step 203, step 204, and step 205 in the corresponding embodiment of fig. 2.
In some optional implementations of this embodiment, the preset coordinate system may use a start point of the target automated guided vehicle (a start point of the current transportation) as an origin of coordinates, and use a start traveling direction of the target automated guided vehicle as a positive direction of the abscissa axis.
In some optional implementations of the embodiment, the prediction unit 602 may predict the position coordinates of the target automated guided vehicle at the next time as the predicted position coordinates by using the Runge-Kutta algorithm based on the actual position coordinates and the deflection angle as follows: the prediction unit 602 may acquire the self-rotation angle of the target automated guided vehicle within the target period and the traveling speed of the target automated guided vehicle, where the target period is the communication period with the target automated guided vehicle; then, for each target deflection angle determined based on the deflection angle and the self-rotation angle, the prediction unit 602 may determine the projection lengths, on the abscissa axis and the ordinate axis of the coordinate system respectively, of the travel path covered by the target automated guided vehicle traveling at the traveling speed for the target period; then, the prediction unit 602 may determine, as a first average, the weighted average of the projection lengths on the abscissa axis using the preset weight corresponding to each target deflection angle, and determine the sum of the abscissa value in the actual position coordinates and the first average as the abscissa value of the target automated guided vehicle at the next time; finally, the prediction unit 602 may determine, as a second average, the weighted average of the projection lengths on the ordinate axis using the preset weight corresponding to each target deflection angle, and determine the sum of the ordinate value in the actual position coordinates and the second average as the ordinate value of the target automated guided vehicle at the next time.
In some optional implementations of the embodiment, the rotation angle may be determined by the executing agent based on a wheel diameter of the target automated guided vehicle, a variation value of a wheel motor encoder when the target automated guided vehicle travels the target period, a resolution of the wheel motor encoder of the target automated guided vehicle, a mechanical reduction ratio of the target automated guided vehicle, and a distance between two wheels of the target automated guided vehicle.
In some optional implementations of this embodiment, identifiers indicating the position information of their respective locations, for example two-dimensional barcodes such as QR codes, or checkerboard patterns, may be attached to the floor of the warehouse where the target automated guided vehicle is located. The target identifier is such an identifier, indicating the position information of its corresponding location. The target identifiers may be attached at designated locations on the warehouse floor, and the designated locations may be spaced apart by a predetermined distance. The acquiring unit 601 may acquire the actual position coordinates and the deflection angle of the target automated guided vehicle in the preset coordinate system at the current time as follows: after the camera mounted on the target automated guided vehicle captures an image containing the target identifier, the acquiring unit 601 may acquire that image. The target identifier may then be read; since it stores position information indicating its corresponding location, the acquiring unit 601 can recognize the actual position information of the target automated guided vehicle at the current time from the target identifier. The acquiring unit 601 may further determine the traveling direction of the target automated guided vehicle at the current time from the image, for example by combining the rotation of the target identifier in the image with electronic compass data.

Then, the acquiring unit 601 may determine the actual position coordinates of the target automated guided vehicle in the preset coordinate system at the current time based on the actual position information. The actual position information is generally the position information of the location of the target automated guided vehicle in the warehouse, and the actual position it indicates may be represented as a coordinate point. As an example, if the preset coordinate system is a warehouse coordinate system (for example, one with a designated point in the warehouse as the origin of coordinates and with the abscissa and ordinate axes respectively parallel to two adjacent sides of the warehouse where the target automated guided vehicle is located), the acquiring unit 601 may take the coordinates of the coordinate point representing the actual position indicated by the actual position information as the actual position coordinates of the target automated guided vehicle in that coordinate system at the current time. If the preset coordinate system is a first coordinate system whose origin of coordinates is the starting point of the target automated guided vehicle and whose positive abscissa axis points along its starting traveling direction, the acquiring unit 601 may determine the position of the coordinate point of the current actual position relative to the coordinate point of the actual position at the initial time, and thereby determine the actual position coordinates of the target automated guided vehicle in the first coordinate system at the current time.

Finally, the acquiring unit 601 may determine the deflection angle of the target automated guided vehicle in the coordinate system at the current time based on the traveling direction. The deflection angle is generally the angle between the traveling direction of the target automated guided vehicle and a reference direction. For example, if the reference direction is the starting traveling direction of the target automated guided vehicle, the acquiring unit 601 may take the angle between the traveling direction at the current time and the starting traveling direction as the deflection angle of the target automated guided vehicle in the coordinate system at the current time.
Referring now to FIG. 7, a block diagram of an electronic device (e.g., the server of FIG. 1) 700 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, electronic device 700 may include a processing means (e.g., central processing unit, graphics processor, etc.) 701 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic device 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 7 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of embodiments of the present disclosure. It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire the actual position coordinates and the deflection angle of the target automated guided vehicle at the current moment in a preset coordinate system; predict, as predicted position coordinates, the position coordinates of the target automated guided vehicle at the next moment using the Runge-Kutta algorithm based on the actual position coordinates and the deflection angle; acquire the pre-planned position coordinates of the target automated guided vehicle at the next moment and determine a position error based on the planned position coordinates and the predicted position coordinates; determine target position coordinates based on the position error and the actual position coordinates; and transmit control information to the target automated guided vehicle based on the target position coordinates.
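The program steps listed above compose naturally into a single cycle. A hedged end-to-end sketch (all names illustrative, combining the Runge-Kutta prediction and the error correction described earlier):

```python
import math

def control_cycle(actual, theta, v_l, v_w, t, planned):
    """One control cycle: predict the next position with a fourth-order
    Runge-Kutta step, compare with the pre-planned position, and return
    the corrected target coordinate to steer toward.
    """
    weights = (1, 2, 2, 1)
    angles = (theta, theta + v_w * t / 2, theta + v_w * t / 2, theta + v_w * t)
    pred_x = actual[0] + sum(w * v_l * t * math.cos(a)
                             for w, a in zip(weights, angles)) / 6
    pred_y = actual[1] + sum(w * v_l * t * math.sin(a)
                             for w, a in zip(weights, angles)) / 6
    # Position error = planned - predicted; target = actual + error.
    return (actual[0] + planned[0] - pred_x,
            actual[1] + planned[1] - pred_y)
```

When the vehicle is stationary (v_l = 0), the prediction equals the actual position and the returned target is simply the planned coordinate, which serves as a sanity check.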
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a prediction unit, a first determination unit, a second determination unit, and a transmission unit. The names of these units do not in some cases constitute a limitation on the unit itself, and for example, the transmission unit may also be described as a "unit that transmits control information to the target automated guided vehicle based on the target position coordinates".
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other technical solutions formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept defined above, for example, technical solutions formed by substituting the above features with (but not limited to) technical features of similar function disclosed in the embodiments of the present disclosure.

Claims (10)

1. A method for controlling an automated guided vehicle, comprising:
acquiring actual position coordinates and a deflection angle of a target automated guided vehicle at the current moment in a preset coordinate system;
predicting the position coordinates of the target automated guided vehicle at the next moment as predicted position coordinates, using a Runge-Kutta algorithm, based on the actual position coordinates and the deflection angle;
acquiring pre-planned position coordinates of the target automated guided vehicle at the next moment, and determining a position error based on the planned position coordinates and the predicted position coordinates;
determining target position coordinates based on the position error and the actual position coordinates;
transmitting control information to the target automated guided vehicle based on the target position coordinates.
2. The method of claim 1, wherein the preset coordinate system has a starting point of the target automated guided vehicle as an origin of coordinates and a starting traveling direction of the target automated guided vehicle as a positive direction of an abscissa axis.
3. The method of claim 2, wherein the predicting, using a Runge-Kutta algorithm, the position coordinates of the target automated guided vehicle at the next moment as the predicted position coordinates based on the actual position coordinates and the deflection angle comprises:
acquiring a swivel angle of the target automated guided vehicle and a traveling speed of the target automated guided vehicle in a target period, wherein the target period is a communication period with the target automated guided vehicle;
determining, for each target deflection angle, the projection lengths on the abscissa axis and the ordinate axis of the coordinate system, respectively, of the path traveled by the target automated guided vehicle at the traveling speed during the target period, wherein the target deflection angles are determined based on the deflection angle and the swivel angle;
determining, as a first average value, a weighted average of the projection lengths on the abscissa axis using preset weights corresponding to the respective target deflection angles, and determining the sum of the abscissa value in the actual position coordinates and the first average value as the abscissa value of the target automated guided vehicle at the next moment; and
determining, as a second average value, a weighted average of the projection lengths on the ordinate axis using the weights corresponding to the respective target deflection angles, and determining the sum of the ordinate value in the actual position coordinates and the second average value as the ordinate value of the target automated guided vehicle at the next moment.
4. The method of claim 3, wherein the swivel angle is determined based on a wheel diameter of the target automated guided vehicle, a change in the value of a wheel motor encoder while the target automated guided vehicle travels the target period, a resolution of the wheel motor encoder, a mechanical reduction ratio of the target automated guided vehicle, and a spacing between the two wheels of the target automated guided vehicle.
5. The method according to any one of claims 1-4, wherein the acquiring actual position coordinates and a deflection angle of the target automated guided vehicle at the current moment in a preset coordinate system comprises:
acquiring an image presenting a target identifier;
identifying actual position information of the target automated guided vehicle at the current moment from the target identifier, and determining a driving direction of the target automated guided vehicle at the current moment from the image;
determining the actual position coordinates of the target automated guided vehicle in the preset coordinate system at the current moment based on the actual position information;
determining a deflection angle of the target automated guided vehicle at the current moment in the coordinate system based on the traveling direction.
6. An apparatus for controlling an automated guided vehicle, comprising:
an acquisition unit configured to acquire an actual position coordinate and a deflection angle of a target automated guided vehicle at a current time in a preset coordinate system;
a prediction unit configured to predict the position coordinates of the target automated guided vehicle at the next moment as predicted position coordinates, using a Runge-Kutta algorithm, based on the actual position coordinates and the deflection angle;
a first determination unit configured to acquire planned position coordinates of the target automated guided vehicle planned in advance at a next time, and determine a position error based on the planned position coordinates and the predicted position coordinates;
a second determination unit configured to determine target position coordinates based on the position error and the actual position coordinates;
a transmitting unit configured to transmit control information to the target automated guided vehicle based on the target position coordinates.
7. The apparatus of claim 6, wherein the preset coordinate system has a start point of the target automated guided vehicle as an origin of coordinates and a start traveling direction of the target automated guided vehicle as a positive direction of an abscissa axis.
8. The apparatus of claim 7, wherein the prediction unit is further configured to predict the position coordinates of the target automated guided vehicle at the next moment as the predicted position coordinates, using a Runge-Kutta algorithm, based on the actual position coordinates and the deflection angle, as follows:
acquiring a swivel angle of the target automated guided vehicle and a traveling speed of the target automated guided vehicle in a target period, wherein the target period is a communication period with the target automated guided vehicle;
determining, for each target deflection angle, the projection lengths on the abscissa axis and the ordinate axis of the coordinate system, respectively, of the path traveled by the target automated guided vehicle at the traveling speed during the target period, wherein the target deflection angles are determined based on the deflection angle and the swivel angle;
determining, as a first average value, a weighted average of the projection lengths on the abscissa axis using preset weights corresponding to the respective target deflection angles, and determining the sum of the abscissa value in the actual position coordinates and the first average value as the abscissa value of the target automated guided vehicle at the next moment; and
determining, as a second average value, a weighted average of the projection lengths on the ordinate axis using the weights corresponding to the respective target deflection angles, and determining the sum of the ordinate value in the actual position coordinates and the second average value as the ordinate value of the target automated guided vehicle at the next moment.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-5.
10. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
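Claim 4 lists the quantities from which the swivel (self-rotation) angle is determined but does not spell out the formula. A standard differential-drive odometry relation over those same quantities can be sketched as follows; the exact expression and the function name are assumptions introduced here for illustration.

```python
import math


def swivel_angle(wheel_diameter, d_ticks_left, d_ticks_right,
                 encoder_resolution, reduction_ratio, wheel_spacing):
    """Vehicle rotation over one target period from wheel encoder deltas.

    A conventional differential-drive relation using the quantities named
    in claim 4: wheel diameter, encoder change values, encoder resolution,
    mechanical reduction ratio, and wheel spacing.
    """
    # Encoder ticks per full wheel revolution, after the gearbox.
    ticks_per_wheel_rev = encoder_resolution * reduction_ratio
    # Distance rolled by each wheel: circumference * revolutions.
    dist_left = math.pi * wheel_diameter * d_ticks_left / ticks_per_wheel_rev
    dist_right = math.pi * wheel_diameter * d_ticks_right / ticks_per_wheel_rev
    # Differential travel divided by the track width gives the heading change.
    return (dist_right - dist_left) / wheel_spacing
```

For example, with a 0.1 m wheel, a 1000-tick encoder, unit reduction ratio, and 0.5 m wheel spacing, one full revolution of only the right wheel rotates the vehicle by 0.1π / 0.5 ≈ 0.628 rad.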
CN201911173759.9A 2019-11-26 2019-11-26 Method and apparatus for controlling an automated guided vehicle Pending CN112859826A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911173759.9A CN112859826A (en) 2019-11-26 2019-11-26 Method and apparatus for controlling an automated guided vehicle

Publications (1)

Publication Number Publication Date
CN112859826A true CN112859826A (en) 2021-05-28

Family

ID=75984767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911173759.9A Pending CN112859826A (en) 2019-11-26 2019-11-26 Method and apparatus for controlling an automated guided vehicle

Country Status (1)

Country Link
CN (1) CN112859826A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106909153A (en) * 2017-03-21 2017-06-30 北京京东尚科信息技术有限公司 Unmanned vehicle crosswise joint method and apparatus
CN106970629A (en) * 2017-05-22 2017-07-21 北京京东尚科信息技术有限公司 The control method and device of automatic guided vehicle
CN108140316A (en) * 2015-09-30 2018-06-08 日产自动车株式会社 Travel control method and travel controlling system
US20180251092A1 (en) * 2017-03-06 2018-09-06 GM Global Technology Operations LLC Vehicle collision prediction algorithm using radar sensor and upa sensor
US20180307233A1 (en) * 2017-04-20 2018-10-25 Baidu Usa Llc System and method for trajectory re-planning of autonomous driving vehicles
KR20190022173A (en) * 2017-08-25 2019-03-06 한국항공우주연구원 Method and system for controlling movement of a UAV by predicting the trajectory of a spherical target through a camera
CN110162046A (en) * 2019-05-21 2019-08-23 同济人工智能研究院(苏州)有限公司 Unmanned vehicle path following method based on event trigger type model predictive control
CN110194158A (en) * 2018-02-27 2019-09-03 现代自动车株式会社 The driving conditions prediction technique and forecasting system of vehicle

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113837332A (en) * 2021-09-23 2021-12-24 北京京东乾石科技有限公司 Shelf angle adjusting method and device, electronic equipment and computer readable medium
CN115057190A (en) * 2022-04-26 2022-09-16 浙江华睿科技股份有限公司 Object moving method, system, electronic device, and computer-readable storage medium
CN115057190B (en) * 2022-04-26 2024-04-26 浙江华睿科技股份有限公司 Target moving method, system, electronic device and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN106970629B (en) Control method and device for automated guided vehicle
US20200241564A1 (en) Proactive generation of tuning data for autonomous vehicle dispatch
CN110126825B (en) System and method for low-level feed-forward vehicle control strategy
KR102399019B1 (en) Method and apparatus for controlling unmanned vehicle to perform route verification
JP2019532292A (en) Autonomous vehicle with vehicle location
CN107132843B (en) Control method and device for automated guided vehicle
US20190283766A1 (en) Drivetrain compensation for autonomous vehicles
JP6634100B2 (en) Trajectory determination device and automatic driving device
US11859990B2 (en) Routing autonomous vehicles using temporal data
US10452065B2 (en) Human-machine interface (HMI) architecture
CN111137298B (en) Vehicle automatic driving method, device, system and storage medium
CN113033925B (en) Apparatus, electronic device, and medium for controlling travel of autonomous vehicle
US11829135B2 (en) Tuning autonomous vehicle dispatch using vehicle performance
CN110654381A (en) Method and device for controlling a vehicle
CN111785062B (en) Method and device for realizing vehicle-road cooperation at signal lamp-free intersection
CN112859826A (en) Method and apparatus for controlling an automated guided vehicle
US20220242444A1 (en) Roadmodel Manifold for 2D Trajectory Planner
US20190283760A1 (en) Determining vehicle slope and uses thereof
US20200240800A1 (en) Routing multiple autonomous vehicles using local and general route planning
CN115617051A (en) Vehicle control method, device, equipment and computer readable medium
CN113306570B (en) Method and device for controlling an autonomous vehicle and autonomous dispensing vehicle
CN116279596B (en) Vehicle control method, apparatus, electronic device, and computer-readable medium
CN116088538B (en) Vehicle track information generation method, device, equipment and computer readable medium
CN109987097B (en) Method and equipment for adjusting motion state of target vehicle
CN111380556A (en) Information processing method and device for automatic driving vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination