WO2024114682A1 - Positioning method and apparatus, computing device, and storage medium - Google Patents

Positioning method and apparatus, computing device, and storage medium

Info

Publication number
WO2024114682A1
Authority
WO
WIPO (PCT)
Prior art keywords
self
vehicle
moving device
information
identification
Prior art date
Application number
PCT/CN2023/135013
Other languages
English (en)
French (fr)
Inventor
李泽伟
Original Assignee
北京极智嘉科技股份有限公司
Priority date
Filing date
Publication date
Priority claimed from CN202211536121.9A (published as CN118129749A)
Priority claimed from CN202310383208.5A (published as CN116520829A)
Application filed by 北京极智嘉科技股份有限公司 filed Critical 北京极智嘉科技股份有限公司
Publication of WO2024114682A1

Definitions

  • the present disclosure relates to the field of intelligent warehousing technology, and in particular to a positioning method.
  • the present disclosure also relates to a positioning device, a computing device, and a computer-readable storage medium.
  • Intelligent warehousing systems are widely used in logistics warehouses, e-commerce warehouses, pharmaceutical warehouses, catering warehouses, and other fields.
  • In such systems, goods, or the carriers on which goods are placed, can be moved by self-moving devices to accomplish tasks such as warehousing, picking, sorting, and outbound delivery.
  • the embodiment of the present disclosure provides a positioning method.
  • the present disclosure also relates to a positioning device, a computing device, and a computer-readable storage medium.
  • a positioning method including: identifying a position to be identified on a vehicle corresponding to a self-moving device to obtain an identification result; the identification result includes ranging attribute information between the self-moving device and the position to be identified on the vehicle, and/or position information of the self-moving device; determining target information based on the identification result; the target information includes the center position information of the vehicle and/or the boundary line of the lane where the self-moving device is located; determining control parameters of the self-moving device according to the target information; and controlling the movement of the self-moving device based on the control parameters.
  • a positioning device including: an identification module, configured to identify a position to be identified on a vehicle corresponding to a self-moving device, and obtain an identification result; the identification result includes ranging attribute information between the self-moving device and the position to be identified on the vehicle, and/or position information of the self-moving device; a first determination module, configured to determine target information based on the identification result; the target information includes the center position information of the vehicle and/or the boundary line of the lane where the self-moving device is located; a second determination module, configured to determine control parameters of the self-moving device according to the target information; and a control module, configured to control the movement of the self-moving device based on the control parameters.
  • a computing device including a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the steps of the positioning method are implemented when the processor executes the computer instructions.
  • a computer-readable storage medium which stores computer instructions, and when the computer instructions are executed by a processor, the steps of the positioning method are implemented.
  • the positioning method provided by the present disclosure determines the center position information of the vehicle and/or the boundary line of the lane where the self-moving device is located by identifying the position to be identified on the vehicle, and then determines the control parameters of the self-moving device through the center position information of the vehicle and/or the boundary line of the lane where the self-moving device is located, thereby controlling the movement of the self-moving device according to the control parameters.
  • This method can accurately locate the center position of the vehicle without affixing an identification code, reducing the up-front cost of affixing identification codes.
  • the robustness and accuracy of the self-moving device's posture estimation can be improved, effectively reducing the probability of collision with vehicles on both sides of the lane, avoiding damage to the self-moving device, and improving the safety and stability of the self-moving device.
  • FIG1A is a schematic diagram of a scenario of a positioning method provided according to some embodiments of the present disclosure.
  • FIG1B is a flow chart of a positioning method provided according to some embodiments of the present disclosure.
  • FIG1C is a flowchart of another positioning method provided according to some embodiments of the present disclosure.
  • FIG2 is a schematic diagram of a first carrier identification area provided according to some embodiments of the present disclosure.
  • FIG3 is a schematic diagram of a second carrier identification area provided according to some embodiments of the present disclosure.
  • FIG4A is a schematic diagram of identifying vehicle legs according to some embodiments of the present disclosure.
  • FIG4B is another schematic diagram of identifying vehicle legs according to some embodiments of the present disclosure.
  • FIG5A is a flow chart of a method for determining central position information of a vehicle according to some embodiments of the present disclosure.
  • FIG5B is a processing flow chart of a positioning method applied to a smart warehousing scenario according to some embodiments of the present disclosure.
  • FIG6A is a flowchart of another positioning method provided according to some embodiments of the present disclosure.
  • FIG6B is a schematic diagram of a positioning method according to some embodiments of the present disclosure.
  • FIG6C is a positioning flow chart of a positioning method provided according to some embodiments of the present disclosure.
  • FIG6D is a schematic diagram of another positioning method provided according to some embodiments of the present disclosure.
  • FIG7 is a schematic diagram of the structure of a positioning device provided according to some embodiments of the present disclosure.
  • FIG8 is a structural block diagram of a computing device provided according to some embodiments of the present disclosure.
  • Although the terms "first", "second", etc. may be used to describe various information in one or more embodiments of the present disclosure, such information should not be limited to these terms; these terms are only used to distinguish information of the same type from each other.
  • the first may also be referred to as the second, and similarly, the second may also be referred to as the first.
  • The word "if" as used herein may be interpreted as "at the time of", "when", or "in response to determining".
  • the location information of the self-moving device can be determined by identifying the identification code of the mounting lane, thereby controlling the movement of the self-moving device.
  • a handling robot usually needs to carry a movable carrier (shelf or pallet) from one location to another. Before lifting the movable carrier, the robot needs to accurately know the center position of the movable carrier and the center position of the robot, and then the control system controls the robot to move to directly below the center position of the movable carrier, so that the lifting center of the robot and the center position of the movable carrier coincide, thereby ensuring the stability of the movable carrier during transportation.
  • Before lifting, the robot needs to accurately know the center position and angle of the movable carrier relative to its own center position. Usually this is done by sticking a QR code or other marker at the bottom center of the movable carrier and installing a sensor such as a camera on top of the robot; the camera identifies the position deviation and angle deviation of the marker, from which the exact coordinates and angle of the movable carrier's center position are calculated.
  • the above method of determining the center position of the movable carrier requires modification of the movable carrier itself, such as pasting a specific marker at the center position of the movable carrier, and requiring the pasting position and angle to be precise enough, which will introduce large material costs and implementation costs.
  • the self-moving device corrects the current posture of the self-moving device by identifying the identification code mounted on the motion channel, so that the self-moving device moves along the correct path.
  • When posture deviation of the self-moving device occurs, the self-moving device is likely to collide with the corresponding vehicles, causing damage to itself, which is a great test of its safety and stability.
  • an embodiment of the present disclosure provides a positioning method.
  • the present disclosure also relates to a positioning device, a computing device, and a computer-readable storage medium, which are described in detail one by one in the following embodiments.
  • Figure 1A shows a scene schematic diagram of a positioning method provided by an embodiment of the present disclosure.
  • A plurality of vehicles 130 are placed, each vehicle 130 having a corresponding identification point; the vehicles 130 are distributed in rows and columns, with an alley left between adjacent vehicles 130 in which a plurality of self-moving devices 110 can move.
  • When the self-moving device 110 is to transport a carrier 130, it can identify the position to be identified on the carrier 130 (for example, at least three carrier legs on the carrier), determine the center position information of the carrier 130, move to the carrier's center position based on that information, and lift and transport the carrier 130. Therefore, there is no need to affix a label to the carrier; the method is applicable to various types of carriers, achieves precise positioning, and reduces up-front implementation and carrier-modification costs.
  • During movement, the self-moving device 110 can identify the position to be identified on a corresponding carrier (e.g., a carrier within its vicinity) and obtain its own position information; determine, from that position information, whether the self-moving device 110 is located in an alley; when it is, determine the boundary line of the alley; then determine the control parameters of the self-moving device based on its motion information and the alley's boundary line, and control the movement of the self-moving device 110 based on those control parameters.
  • Since the control parameters of the self-moving device are determined, during its movement, from the boundary line of the alley in which it is located, and its movement is then controlled accordingly, the robustness and accuracy of the self-moving device's posture estimation can be improved and the probability of collision with the corresponding carriers is effectively reduced, thereby avoiding damage to the self-moving device and improving the safety and stability of the self-moving device 110.
  • FIG1B shows a flow chart of a positioning method provided according to some embodiments of the present disclosure. As shown in FIG1B , the positioning method includes steps 202 to 208 .
  • Step 202: Identify the position to be identified on the carrier corresponding to the self-moving device to obtain an identification result.
  • The recognition result includes ranging attribute information between the self-moving device and the position to be recognized on the vehicle, and/or position information of the self-moving device.
  • the self-moving device can be a device that moves by a self-driving device, and the self-moving device moves and operates according to the control instructions issued by the control system.
  • the self-moving device can be a robot that carries a carrier, or it can be a robot that carries a container.
  • the self-moving device can also be an intelligent device that moves adaptively based on the surrounding environment, such as a carrier handling device, a container handling device, a carrier traction device, etc.
  • the self-moving device is equipped with a collection device to collect environmental information within the vicinity.
  • the self-moving device moves in the inventory area and can carry carriers or containers in the inventory area.
  • the carrier is a cargo carrier pre-placed in the inventory area, and the carrier can be a fixed carrier or a movable carrier.
  • the carrier can be a shelf, a pallet, etc.
  • the carrier includes at least one layer of partitions, and the at least one layer of partitions divides the carrier into at least two layers.
  • At least one storage position is set on the partition of the carrier, and each storage position can accommodate at least one container.
  • the container can be a cargo box, a material box, an original box, a pallet, etc.
  • the carrier corresponding to the self-moving device may be a carrier to be transported by the self-moving device, or may be a carrier within the vicinity of the self-moving device.
  • the position to be identified is the vehicle leg on the vehicle to be transported
  • the recognition result obtained in the above step 202 is the distance measurement attribute information between the self-moving device and the position to be identified on the vehicle (e.g., each vehicle leg of the vehicle to be transported).
  • the vehicle corresponding to the self-moving device is a vehicle within the vicinity of the self-moving device (e.g., vehicles on both sides of the lane where the self-moving device is located)
  • the position to be identified is an identification point on the vehicle within the vicinity of the self-moving device
  • the recognition result obtained in the above step 202 is the position information of the self-moving device.
  • Step 204: Determine target information based on the recognition result.
  • the target information includes the center position information of the vehicle and/or the boundary line of the lane where the self-moving device is located.
  • the lane where the self-moving device is located is a passable passage between the vehicles corresponding to the self-moving device, and the self-moving device can move in the lane.
  • When the self-moving device moves in the lane, it can run empty (i.e., travel in the lane without carrying a vehicle) or loaded (i.e., travel in the lane while carrying a vehicle).
  • When the recognition result is the ranging attribute information between the self-moving device and the position to be recognized on the vehicle (e.g., each vehicle leg on the vehicle), the target information includes the center position information of the vehicle.
  • the recognition result is the position information of the self-moving device
  • the target information includes the boundary line of the lane where the self-moving device is located.
  • Step 206: Determine control parameters of the self-moving device according to the target information.
  • The control parameters of the self-moving device are physical parameters for controlling its movement, including at least one of: the target movement speed of the self-moving device, the target movement direction of the self-moving device, the target turning angle of the self-moving device, the output power of the self-moving device, the target movement height of the self-moving device, and the target movement time of the self-moving device.
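For illustration only, the enumerated parameters might be grouped into a structure such as the following; all field names and units are assumptions here, since the disclosure lists the quantities but not a concrete data layout:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ControlParams:
    # Illustrative grouping of the control parameters named in the text;
    # every field name and unit is a hypothetical choice, not from the patent.
    speed_m_s: Optional[float] = None        # target movement speed
    heading_rad: Optional[float] = None      # target movement direction
    turn_angle_rad: Optional[float] = None   # target turning angle
    output_power_w: Optional[float] = None   # output power
    lift_height_m: Optional[float] = None    # target movement (lift) height
    move_time_s: Optional[float] = None      # target movement time

# "At least one of" the parameters may be set for a given motion command:
params = ControlParams(speed_m_s=1.0, move_time_s=0.2)
```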
  • the control parameters of the self-moving device can be determined according to the position information of the self-moving device and the center position information of the vehicle.
  • the control parameters of the self-moving device include the movement information of the self-moving device.
  • the control parameters of the self-moving device can be determined according to the motion information of the self-moving device and the boundary line of the alley where the self-moving device is located.
  • Step 208: Control the movement of the self-moving device based on the control parameters.
  • the target information is the center position information of the vehicle
  • For example, suppose the self-moving device is located to the left of the vehicle's center position and the distance between them is 0.5 meters; the self-moving device is then controlled to rotate so that its movement direction points to the right, after which it is controlled to move 0.5 meters.
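The 0.5-meter example above can be reproduced by a small helper that computes the required turn and travel distance from the device pose and the vehicle center. The function name and frame conventions (x to the right, y forward, heading measured counterclockwise from +x) are assumptions for illustration, not from the disclosure:

```python
import math

def control_to_center(device_xy, device_heading, center_xy):
    """Turn angle and travel distance that bring the device to the center.

    Illustrative helper: positive turn = counterclockwise, heading in
    radians measured from the +x axis.
    """
    dx = center_xy[0] - device_xy[0]
    dy = center_xy[1] - device_xy[1]
    target_heading = math.atan2(dy, dx)  # direction toward the center
    # turn needed, normalized to (-pi, pi]
    turn = (target_heading - device_heading + math.pi) % (2 * math.pi) - math.pi
    distance = math.hypot(dx, dy)
    return turn, distance

# Device 0.5 m to the left of the center, initially facing "forward" (+y):
turn, dist = control_to_center((0.0, 0.0), math.pi / 2, (0.5, 0.0))
# The device must turn 90 degrees clockwise (to point right) and move 0.5 m.
```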
  • When the target information is a boundary line of the lane in which the self-moving device is located, the self-moving device must be controlled according to the positional relationship between itself and the lane.
  • For example, when the self-moving device is located on the left side of the lane, it is controlled to rotate clockwise so that its movement direction points to the right.
  • The self-moving device then moves in the direction pointing to the right; when it reaches the center line of the alley, it is controlled to rotate counterclockwise so that its movement direction is adjusted to be parallel to the center line.
  • the movement of the self-moving device can be controlled by directly determining the target movement direction, and after the self-moving device moves to the center line, adjusting the movement direction to be parallel to the center line.
  • the movement direction can also be dynamically adjusted so that the self-moving device continuously approaches the center line, and when it reaches the center line, the movement direction has been adjusted to be parallel to the center line.
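The dynamic adjustment described above can be sketched, for illustration, as a simple proportional heading rule: the correction angle shrinks as the device approaches the center line, so the heading is already parallel when the line is reached. The gain `k`, the clamp, and the sign convention are assumptions; the disclosure describes the behavior but does not specify a control law:

```python
def lane_heading(offset_from_centerline, k=1.0, max_angle=0.5):
    """Heading correction (radians, relative to the lane direction).

    offset > 0 means the device is left of the center line, so it should
    head right (negative angle); the correction goes to zero at the line,
    leaving the device parallel to it. Hypothetical proportional rule.
    """
    angle = -k * offset_from_centerline
    # clamp so the device never turns more than max_angle off the lane axis
    return max(-max_angle, min(max_angle, angle))
```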
  • For example, based on the control parameters (target turning angle: 3 degrees clockwise; target movement time: 0.2 seconds; target movement speed: 1 meter per second), the self-moving device is controlled to rotate 3 degrees clockwise and then move at a speed of 1 meter per second for 0.2 seconds.
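Applying these example parameters, the distance traveled follows directly from speed times time:

```python
# Worked arithmetic for the example control parameters above.
speed_m_s = 1.0     # target movement speed
move_time_s = 0.2   # target movement time
distance_m = speed_m_s * move_time_s  # distance covered after the 3-degree turn
```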
  • the positioning method provided by the embodiment of the present disclosure identifies the position to be identified on the vehicle, determines the center position information of the vehicle and/or the boundary line of the lane where the self-moving device is located, and then determines the control parameters of the self-moving device through the center position information of the vehicle and/or the boundary line of the lane where the self-moving device is located, thereby controlling the movement of the self-moving device according to the control parameters. Therefore, the center position of the vehicle can be accurately located without pasting the identification code, reducing the cost of pasting the identification code in the early stage.
  • the robustness and accuracy of the self-moving device's posture estimation can be improved, effectively reducing the probability of collision with vehicles on both sides of the lane, avoiding damage to the self-moving device, and improving the safety and stability of the self-moving device.
  • an exemplary description is given of how the self-moving device determines the central position information of the carrier to be transported, and controls the movement of the self-moving device according to the central position information of the carrier.
  • an exemplary description is given of how the self-moving device determines the boundary line of the lane where the self-moving device is located, and controls the movement of the self-moving device according to the boundary line of the lane where the self-moving device is located.
  • FIG1C is a flowchart of another positioning method provided in some embodiments of the present disclosure. As shown in FIG1C , the method includes steps 302 to 310 .
  • Step 302: Control the self-moving device to move to a vehicle identification area corresponding to the vehicle.
  • The vehicle can be moved from an initial position to a target position by the self-moving device.
  • the vehicle identification area refers to the area used to identify the vehicle. Since the center position of the vehicle needs to be identified based on the vehicle legs of the vehicle, the vehicle identification area can be the area of the vehicle legs in the vehicle.
  • Since a vehicle is usually rectangular and supported by four vehicle legs, at least three legs must be identified in order to accurately determine the vehicle's center position; therefore, when identifying the vehicle in the vehicle identification area, it must be ensured that at least three vehicle legs can be identified from that area.
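As a geometric illustration of why three legs suffice for a rectangular vehicle: of any three legs, the two farthest apart lie on a diagonal, and the rectangle's center is the midpoint of that diagonal. A minimal sketch, with a hypothetical helper name and example coordinates (the disclosure does not fix a formula):

```python
import math

def rectangle_center(legs):
    """Center of a rectangular vehicle from at least three leg coordinates.

    Picks the pair of legs with the greatest separation (a diagonal of the
    rectangle) and returns that diagonal's midpoint. Illustrative geometry.
    """
    if len(legs) < 3:
        raise ValueError("need at least three vehicle legs")
    (ax, ay), (bx, by) = max(
        ((p, q) for i, p in enumerate(legs) for q in legs[i + 1:]),
        key=lambda pq: math.dist(pq[0], pq[1]),
    )
    return ((ax + bx) / 2, (ay + by) / 2)

# Three legs of a hypothetical 1.0 m x 0.6 m vehicle:
center = rectangle_center([(0.0, 0.0), (1.0, 0.0), (1.0, 0.6)])
# -> (0.5, 0.3), the same center the fourth leg at (0.0, 0.6) would give
```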
  • Controlling the self-moving device to move to a vehicle identification area corresponding to the vehicle includes: controlling the self-moving device to move to a first vehicle identification area corresponding to the vehicle, the first vehicle identification area being located outside the vehicle; or controlling the self-moving device to move to a second vehicle identification area corresponding to the vehicle, the second vehicle identification area being located at the bottom of the vehicle.
  • When controlling the self-moving device to move to the vehicle identification area corresponding to the vehicle, the self-moving device may be controlled to move either to the outside of the vehicle or to the bottom of the vehicle.
  • the setting of the vehicle identification area is related to the number of sensors set on the self-moving device.
  • the self-moving device may be provided with at least one sensor, which is a range-finding sensor, such as a radar sensor, a 3D vision sensor, etc.
  • the sensor is set on the front of the robot and has a certain scanning visible range.
  • the scanning visible range of the sensor is usually 180° or 270°.
  • the embodiments of the present disclosure do not limit the number of sensors set on the self-moving device.
  • Taking the case where one sensor is provided on the self-moving device: since a sensor provided on the self-moving device can usually only identify objects within a certain range in front of it, objects behind it cannot be seen. In that case the self-moving device can only detect 1-2 vehicle legs at the same time, while the vehicle's center position requires information from at least three vehicle legs; therefore, the vehicle identification area corresponding to the self-moving device can be determined as the outside of the vehicle.
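The two cases can be summarized, for rough illustration, as a rule keyed on the number of sensors; this is a hypothetical helper distilled from the description, not code from the disclosure:

```python
def identification_area(num_sensors: int) -> str:
    """Choose where the self-moving device identifies the vehicle.

    With a single front-facing sensor only 1-2 legs are visible at once,
    so identification happens from outside the vehicle; with two or more
    sensors (e.g. covering front and rear) it can happen underneath.
    Illustrative rule only.
    """
    return "outside" if num_sensors < 2 else "bottom"

area = identification_area(1)  # single sensor -> identify from outside
```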
  • Figure 2 is a schematic diagram of the first vehicle identification area provided according to some embodiments of the present disclosure.
  • the self-moving device when only one sensor is provided on the self-moving device, the self-moving device is controlled to move to the outside of the vehicle.
  • the four vehicle legs of the vehicle (vehicle leg L1, vehicle leg L2, vehicle leg L3 and vehicle leg L4) are all within the visible range of the sensor, thereby realizing the identification of the vehicle.
  • When the self-moving device is provided with more than one sensor, the vehicle identification area corresponding to the self-moving device can be determined as the second vehicle identification area, that is, the bottom area of the vehicle. In this case, the self-moving device is controlled to identify the vehicle legs of the vehicle from the bottom of the vehicle.
  • Fig. 3 is a schematic diagram of a second vehicle identification area provided according to an embodiment of the present disclosure.
  • the self-moving device can be controlled to move to the bottom of the vehicle, at which time the two vehicle legs (vehicle leg L1 and vehicle leg L2) of the vehicle are within the visible range of the first sensor, and the other two vehicle legs (vehicle leg L3 and vehicle leg L4) are within the visible range of the second sensor, thereby realizing the identification of the vehicle.
  • the robot may first move from the outside of the vehicle to the bottom of the vehicle, and then identify at least three vehicle legs on the vehicle at the bottom of the vehicle to determine the center position of the vehicle; or the robot may directly identify at least three vehicle legs on the outside of the vehicle to determine the center position of the vehicle.
  • Taking the case where the robot first moves from the outside of the vehicle to the bottom of the vehicle and then identifies at least three vehicle legs there to determine the vehicle's center position: before entering the bottom of the vehicle, if there is no QR code on the ground, the robot needs to photograph the vehicle legs first to determine their position relative to itself, so that it avoids colliding with the vehicle when entering the bottom; if there is a QR code on the ground, the robot does not need to photograph the vehicle legs and can enter the bottom of the vehicle directly by identifying the QR code.
  • Step 304: Identify at least three vehicle legs on the vehicle, and obtain ranging attribute information between the self-moving device and each vehicle leg.
  • the self-moving device needs to identify the vehicle legs of the vehicle in the vehicle identification area, that is, it needs to identify at least three vehicle legs corresponding to the vehicle.
  • the ranging attribute information between the self-moving device and each vehicle leg can be obtained.
  • the ranging attribute information may include ranging distance, ranging angle and other information.
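Given a ranging distance and a ranging angle, a leg's position in the device frame can be recovered by a polar-to-Cartesian conversion. A minimal sketch, assuming a planar sensor frame and a sensor pose expressed as (x, y, yaw) in the device frame; the frame conventions are assumptions, since the disclosure only names the measured quantities:

```python
import math

def leg_position(range_m, angle_rad, sensor_pose=(0.0, 0.0, 0.0)):
    """Leg coordinates in the device frame from one ranging measurement.

    range_m: measured ranging distance; angle_rad: ranging angle relative
    to the sensor axis. Illustrative conversion only.
    """
    sx, sy, syaw = sensor_pose
    a = syaw + angle_rad  # bearing of the leg in the device frame
    return (sx + range_m * math.cos(a), sy + range_m * math.sin(a))

# A leg seen 2.0 m away at 30 degrees to the sensor axis:
x, y = leg_position(2.0, math.radians(30.0))
```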
  • identifying at least three vehicle legs on a vehicle, and ranging attribute information between the self-moving device and each vehicle leg includes: identifying the vehicle based on at least one ranging sensor and a first identification angle; when at least three vehicle legs are identified, obtaining ranging attribute information between the self-moving device and each vehicle leg; when less than three vehicle legs are identified, obtaining first ranging attribute information between the self-moving device and each vehicle leg at the first identification angle, adjusting the first identification angle to a second identification angle, and identifying the vehicle according to the second identification angle, obtaining second ranging attribute information between the self-moving device and each vehicle leg at the second identification angle; the ranging attribute information includes first ranging attribute information and second ranging attribute information.
  • At least one ranging sensor is provided on the self-moving device, and the vehicle legs and the ranging attribute information of the vehicle can be identified according to the ranging sensor.
  • the number of ranging sensors, viewing angles and other information vary in different application scenarios.
  • If the self-moving device can identify at least three vehicle legs in the vehicle identification area at the first identification angle, the ranging attribute information of those legs can be used directly; if only one or two vehicle legs can be obtained at the first identification angle, then after obtaining the first ranging attribute information of each visible leg at the first identification angle, the identification angle is further adjusted from the first identification angle to a second identification angle, the vehicle is identified again at the second identification angle so as to identify the remaining legs, and the second ranging attribute information of those legs at the second identification angle is obtained.
  • a distance measuring sensor is provided on the self-moving device, and the vehicle identification area is the first vehicle identification area.
  • the distance measuring sensor can simultaneously identify the four vehicle legs, and can also obtain the distance measuring attribute information of each vehicle leg and the self-moving device (for vehicle leg L1, there is corresponding distance measuring attribute information I1, for vehicle leg L2, there is corresponding distance measuring attribute information I2, for vehicle leg L3, there is corresponding distance measuring attribute information I3, and for vehicle leg L4, there is corresponding distance measuring attribute information I4).
  • the vehicle identification area is the second vehicle identification area.
  • the first ranging sensor can identify the vehicle legs L1 and L2
  • the second ranging sensor can identify the vehicle legs L3 and L4, and simultaneously obtain the ranging attribute information of each vehicle leg and the self-moving device (ranging attribute information I1, ranging attribute information I2, ranging attribute information I3, ranging attribute information I4).
  • a distance measuring sensor is provided on the self-moving device, and the vehicle identification area is the first vehicle identification area.
  • FIG. 4A shows a schematic diagram of identifying vehicle legs provided by an embodiment of the present disclosure.
  • the visual range of the distance measuring sensor is relatively small, and the self-moving device can only detect two vehicle legs. Based on this, the self-moving device can identify the vehicle according to the first identification angle, identify the vehicle leg L1 and the vehicle leg L2, thereby obtaining the first distance measuring attribute information (first distance measuring attribute information I1, first distance measuring attribute information I2) between each vehicle leg and the self-moving device; and then adjust the angle of the self-moving device to adjust the first identification angle to the second identification angle.
  • the vehicle is identified, and the vehicle leg L3 and the vehicle leg L4 are identified, thereby obtaining the second distance measuring attribute information (second distance measuring attribute information I3, second distance measuring attribute information I4) between each vehicle leg and the self-moving device.
  • the adjustment angle of the self-moving device is less than 90 degrees.
  • a distance measuring sensor is provided on the self-moving device, and the vehicle identification area is the second vehicle identification area.
  • Figure 4B is a schematic diagram of another identification of vehicle legs provided according to some embodiments of the present disclosure. As shown in Figure 4B, the self-moving device moves to the bottom of the vehicle, and the vehicle can be identified according to the first identification angle (such as 0 degrees in Figure 4B), and the vehicle leg L1 and the vehicle leg L2 can be identified, and the first distance measurement attribute information (first distance measurement attribute information I1, first distance measurement attribute information I2) between the vehicle leg and the self-moving device can be obtained.
  • the self-moving device can have two processing methods, one is to rotate 90 degrees, identify the vehicle leg L2 and the vehicle leg L3, and obtain the second distance measurement attribute information between the vehicle leg and the self-moving device; the other is to rotate 180 degrees, identify the vehicle leg L3 and the vehicle leg L4, and obtain the second distance measurement attribute information between the vehicle leg and the self-moving device.
  • when there is an overlapping area between the first identification angle and the second identification angle, the self-moving device will obtain the ranging attribute information between the same vehicle leg and the self-moving device twice. In this case, the average of the two ranging attribute values obtained for that vehicle leg can be calculated, and the average is used as the ranging attribute information between that vehicle leg and the self-moving device when determining the center position information of the vehicle.
  • the self-moving device identifies the vehicle legs L1 and L2 at the first identification angle, and obtains the first ranging attribute information (first ranging attribute information I1, first ranging attribute information I2) between these vehicle legs and the self-moving device.
  • the self-moving device rotates 90 degrees, identifies the vehicle legs L2 and L3, and obtains the second ranging attribute information (second ranging attribute information I2, second ranging attribute information I3) between these vehicle legs and the self-moving device; at this time, the average of the first ranging attribute information I2 and the second ranging attribute information I2 can be calculated and determined as the ranging attribute information between the vehicle leg L2 and the self-moving device.
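The averaging of overlapped measurements just described can be sketched as follows (a minimal illustration; the per-leg dictionary representation and function name are assumptions, not from the disclosure):

```python
def fuse_ranging(first_scan: dict, second_scan: dict) -> dict:
    """Merge per-leg ranging distances from two scans; a leg that appears
    in both (the overlap between the two identification angles) gets the
    average of its two measurements."""
    fused = dict(first_scan)
    for leg, dist in second_scan.items():
        fused[leg] = (fused[leg] + dist) / 2 if leg in fused else dist
    return fused

# Leg "L2" is seen at both identification angles, so its readings are averaged.
info = fuse_ranging({"L1": 1.20, "L2": 0.95}, {"L2": 0.91, "L3": 1.10})
```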
  • Step 306 Determine the center position information of the vehicle based on the distance measurement attribute information between the self-moving device and each leg of the vehicle.
  • the ranging attribute information corresponding to each vehicle leg can be used to calculate the center position information of the vehicle.
  • FIG5A is a flow chart of a method for determining the central position information of a vehicle according to some embodiments of the present disclosure. As shown in FIG5A , the method includes steps 3062 to 3066 .
  • Step 3062 Calculate the device position coordinates of each vehicle leg based on the ranging attribute information between the self-moving device and each vehicle leg.
  • the device position coordinates of each vehicle leg refer to the vehicle leg coordinates of each vehicle leg in the coordinate system of the self-moving device.
  • the ranging attribute information of each vehicle leg is obtained based on the ranging sensor, and the ranging sensor is set on the self-moving device.
  • the sensor coordinate system corresponding to the ranging sensor is therefore defined relative to the self-moving device.
  • the device position coordinates of each vehicle leg are calculated according to the ranging attribute information between the self-moving device and each vehicle leg, including: determining the sensor position coordinates of each vehicle leg according to the ranging attribute information between the self-moving device and each vehicle leg; converting the sensor position coordinates of each vehicle leg into the device position coordinates of each vehicle leg.
  • the ranging attribute information includes a ranging distance and a ranging angle based on the sensor position coordinates.
  • the above-mentioned determining the sensor position coordinates of each vehicle leg based on the ranging attribute information between the self-moving device and each vehicle leg includes: determining the sensor position coordinates corresponding to each vehicle leg based on the ranging distance and the ranging angle between the self-moving device and each vehicle leg.
  • the ranging attribute information between the self-moving device and each vehicle leg is obtained by the ranging sensor on the self-moving device, from which the sensor position coordinates p(x, y) of each vehicle leg in the ranging sensor coordinate system can be obtained.
  • the distance measurement sensor coordinate system can refer to a coordinate system with the distance measurement sensor as the origin, where p represents the vehicle leg, and x and y represent the horizontal and vertical coordinates in the distance measurement sensor coordinate system.
  • that is, x = d·cos(θ) and y = d·sin(θ), where d represents the ranging distance of the sensor position coordinates and θ represents the ranging angle of the sensor position coordinates.
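The polar-to-Cartesian conversion just described can be sketched in a few lines (the function name is an assumption for illustration):

```python
import math

def sensor_position(d: float, theta: float) -> tuple:
    """Convert a ranging distance d and ranging angle theta (radians) into
    the sensor position coordinates p(x, y) in the ranging sensor
    coordinate system."""
    return (d * math.cos(theta), d * math.sin(theta))

# A vehicle leg detected 2 m away at a 45-degree ranging angle:
x, y = sensor_position(2.0, math.radians(45.0))
```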
  • the first rotation matrix of the ranging sensor coordinate system relative to the self-moving device coordinate system can be calculated based on the installation angle of the sensor on the self-moving device.
  • the self-moving device coordinate system refers to a coordinate system with the self-moving device as the origin.
  • Formula 2 can be used to convert the sensor position coordinates of the leg in the ranging sensor coordinate system to the device position coordinates of the leg in the self-moving device coordinate system based on the first rotation matrix.
  • Formula 2 is as follows: P = R_bs · p, where P represents the device position coordinates of the vehicle leg in the self-moving device coordinate system, p represents the sensor position coordinates of the vehicle leg in the ranging sensor coordinate system, and the subscript bs denotes the ranging sensor coordinate system, so that R_bs is the first rotation matrix. In this way, the device position coordinates of each vehicle leg in the self-moving device coordinate system can be obtained.
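In two dimensions the first rotation matrix reduces to a rotation by the sensor's installation angle; a sketch under that assumption (the mounting-offset term is an added assumption for sensors not located at the device center):

```python
import math

def to_device_frame(p, install_angle, mount_offset=(0.0, 0.0)):
    """Apply the first rotation matrix (ranging sensor frame -> self-moving
    device frame) to a leg's sensor position coordinates p = (x, y)."""
    x, y = p
    c, s = math.cos(install_angle), math.sin(install_angle)
    return (c * x - s * y + mount_offset[0],
            s * x + c * y + mount_offset[1])

# A sensor mounted at 90 degrees maps sensor-frame (1, 0) to device-frame (0, 1).
P = to_device_frame((1.0, 0.0), math.radians(90.0))
```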
  • Step 3064 Convert the device position coordinates of each vehicle leg into the global position coordinates of each vehicle leg.
  • the device position coordinates of each vehicle leg are determined for the self-moving device coordinate system. Therefore, it is also necessary to convert the device position coordinates of the vehicle legs into global position coordinates in the global coordinate system.
  • the angle of the vehicle relative to the self-moving device can be calculated based on the side length of the rectangle formed by the vehicle legs and the side length of the vehicle; then, the angle of the vehicle can be calculated based on the direction of the line segment composed of the coordinates of the vehicle legs.
  • the second rotation matrix of the self-moving device coordinate system relative to the global coordinate system can be determined based on the angle of the vehicle relative to the self-moving device.
  • Formula 3 is as follows: P′ = R_br · P, where P′ represents the global position coordinates of the vehicle leg in the global coordinate system, P represents the device position coordinates of the vehicle leg in the self-moving device coordinate system, and the subscript br denotes the global coordinate system, so that R_br is the second rotation matrix.
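The second transform has the same shape, with the rotation built from the device's pose in the global frame (the pose tuple (gx, gy, heading) is an assumed representation of the device's known global position and orientation):

```python
import math

def to_global_frame(P, device_pose):
    """Apply the second rotation matrix (self-moving device frame -> global
    frame) to a leg's device position coordinates P = (x, y).
    device_pose = (gx, gy, heading) is the device's global pose."""
    gx, gy, heading = device_pose
    x, y = P
    c, s = math.cos(heading), math.sin(heading)
    return (c * x - s * y + gx, s * x + c * y + gy)

# A device at (5, 5) with heading 0 sees a leg 1 m ahead: global (6, 5).
P_global = to_global_frame((1.0, 0.0), (5.0, 5.0, 0.0))
```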
  • Step 3066 Determine the center position information of the vehicle based on the global position coordinates of each vehicle leg.
  • the center position information of the vehicle can be calculated through the global position coordinates of each vehicle leg according to the uniform distribution characteristics of the vehicle legs.
  • a vehicle usually includes four vehicle legs, when the number of vehicle legs in step 3066 is different, the method of calculating the center position information of the vehicle is also different.
  • determining the center position information of the vehicle based on the global position coordinates of each vehicle leg includes: when the number of vehicle legs is three, determining the diagonal vehicle legs, and determining the center position information of the vehicle based on the global position coordinates of the diagonal vehicle legs; when the number of vehicle legs is four, determining the center position information of the vehicle based on the global position coordinates of the four vehicle legs.
  • when the number of vehicle legs is three, P_load = (P_leg1 + P_leg3) / 2, where P_load is the center position information of the vehicle and P_leg1 and P_leg3 are the global position coordinates of the diagonal vehicle legs.
  • when the number of vehicle legs is four, P_load = (1/N)·ΣP_leg, where P_load is the center position information of the vehicle, P_leg is the global position coordinate of each vehicle leg, and N is the number of vehicle legs.
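The two cases can be sketched as follows (choosing the diagonal pair as the farthest-apart pair of legs is an assumption about how the diagonal is found):

```python
import itertools
import math

def vehicle_center(legs):
    """Center position from global leg coordinates: with three legs, take the
    midpoint of the diagonal pair; with four legs, average all of them."""
    if len(legs) == 3:
        # The diagonal pair is the pair with the largest separation.
        a, b = max(itertools.combinations(legs, 2),
                   key=lambda pair: math.dist(*pair))
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    n = len(legs)
    return (sum(p[0] for p in legs) / n, sum(p[1] for p in legs) / n)

# Four legs of a 2 m x 1 m vehicle:
center4 = vehicle_center([(0, 0), (2, 0), (2, 1), (0, 1)])  # (1.0, 0.5)
# The same vehicle with one leg occluded:
center3 = vehicle_center([(0, 0), (2, 0), (2, 1)])          # (1.0, 0.5)
```

Both calls recover the same center, which is why three legs suffice when the diagonal can be identified.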
  • if the distance between every two vehicle legs on the vehicle is a known value, the self-moving device identifies at least two vehicle legs corresponding to the vehicle in the vehicle identification area. Thereafter, the midpoint of the two vehicle legs can be determined based on the distance between them, thereby determining the center position information of the vehicle. That is, when the distance between every two vehicle legs on the vehicle is a known value, the self-moving device identifies the ranging attribute information between at least two vehicle legs and the self-moving device to determine the center position information of the vehicle. When the distance between every two vehicle legs on the vehicle is an unknown value, the self-moving device identifies the ranging attribute information between at least three vehicle legs and the self-moving device to determine the center position information of the vehicle.
  • Step 308 Determine the movement information of the self-moving device according to the position information of the self-moving device and the central position information of the vehicle.
  • the movement information includes a movement distance and a movement angle. Determining the movement information of the self-moving device according to the position information of the self-moving device and the center position information of the vehicle includes:
  • the angle of the vehicle can be calculated based on the direction of the line segment formed by the coordinates of the vehicle legs; the angle of the vehicle relative to the self-moving device can be calculated based on the side length of the rectangle formed by the vehicle legs and the side length of the vehicle.
  • the position information of the vehicle is known, and the device angle of the self-moving device can be determined based on the vehicle angle of the vehicle and the angle of the vehicle relative to the self-moving device.
  • the adjustment angle can be determined according to the vehicle angle and the device angle; the adjustment angle is used to correct the device angle of the self-moving device. Then, the device movement distance to be moved by the self-moving device is determined according to the position information of the self-moving device and the center position information of the vehicle. The device movement distance and the adjustment angle are used as the movement information of the self-moving device.
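A minimal sketch of the movement-information step (the pose representation and the sign convention of the adjustment angle are assumptions, not the disclosure's exact conventions):

```python
import math

def movement_info(device_pose, vehicle_center, vehicle_angle):
    """Return (device movement distance, adjustment angle): the distance from
    the self-moving device to the vehicle center, and the heading correction
    that aligns the device angle with the vehicle angle."""
    gx, gy, device_angle = device_pose
    cx, cy = vehicle_center
    distance = math.hypot(cx - gx, cy - gy)
    adjustment = vehicle_angle - device_angle
    return distance, adjustment

# Device at the origin, vehicle center at (3, 4), vehicle rotated 10 degrees:
dist, adj = movement_info((0.0, 0.0, 0.0), (3.0, 4.0), math.radians(10))
```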
  • Step 310 Control the self-moving device to move to the bottom center position of the vehicle based on the movement information.
  • the self-moving device can be controlled to move based on the movement information until it moves to the bottom center position of the vehicle.
  • controlling the self-moving device to move to the bottom center of the vehicle based on the movement information includes: controlling the self-moving device to move to the bottom center of the vehicle according to the adjustment angle and the device movement distance.
  • the self-moving device can be controlled to move to the bottom center position of the vehicle (directly below the vehicle) according to the device movement distance and the adjustment angle. At this point, the operation of identifying the center position of the vehicle is completed.
  • the above method may further include: controlling a lifting mechanism of the self-moving device to lift the vehicle and transport the vehicle to a destination.
  • the vehicle can be lifted up by the lifting mechanism on the self-moving device according to business needs, and the vehicle can be moved to the destination.
  • the positioning method provided by the embodiment of the present disclosure includes controlling the self-moving device to move to the vehicle identification area corresponding to the vehicle; identifying at least three vehicle legs corresponding to the vehicle, and the distance measurement attribute information of the self-moving device and each vehicle leg; determining the center position information of the vehicle according to the distance measurement attribute information of the self-moving device and each vehicle leg; determining the movement information of the self-moving device according to the position information of the self-moving device and the center position information of the vehicle; and controlling the self-moving device to move to the bottom center position of the vehicle based on the movement information.
  • a distance measurement sensor commonly used in self-moving devices is used to identify the vehicle legs of the vehicle and the position information of the vehicle legs, calculate the center position information of the vehicle, and then determine the movement information of the self-moving device through the position relationship between the center position information and the self-moving device, so as to control the self-moving device to the center position of the vehicle according to the movement information.
  • This method does not require the operation of sticking a label on the vehicle, is applicable to various types of vehicles, can achieve the effect of accurate positioning, and reduces the cost of early implementation and vehicle modification.
  • FIG. 5B is a processing flow chart of a positioning method applied to a smart warehousing scenario provided by some embodiments of the present disclosure, and the method includes the following steps:
  • Step 402 Control the shelf transport robot to move to the shelf bottom identification area corresponding to the movable shelf, wherein two ranging sensors are provided in the shelf transport robot.
  • Step 404 Identify shelf leg 1 and shelf leg 2 of the movable shelf based on the first ranging sensor, and identify shelf leg 3 and shelf leg 4 of the movable shelf based on the second ranging sensor.
  • Step 406 obtain ranging attribute information 1 corresponding to shelf leg 1, ranging attribute information 2 corresponding to shelf leg 2, ranging attribute information 3 corresponding to shelf leg 3, and ranging attribute information 4 corresponding to shelf leg 4.
  • Step 408 calculate the global position coordinates P1, P2, P3, P4 corresponding to each shelf leg according to ranging attribute information 1, ranging attribute information 2, ranging attribute information 3, and ranging attribute information 4.
  • Step 410 determine the center position coordinate P0 of the movable shelf according to the global position coordinates P1, P2, P3, and P4 corresponding to each shelf leg.
  • Step 412 Control the shelf transport robot to move to P0 of the movable shelf, lift the movable shelf through the lifting mechanism, and transport the movable shelf to the workstation.
  • the positioning method applied to the intelligent warehousing scenario uses the distance measuring sensor commonly used in self-moving equipment, calculates the center position information of the vehicle by identifying the vehicle legs and the position information of the vehicle legs, and then determines the movement information of the self-moving equipment through the relationship between the center position information and the position of the self-moving equipment, thereby controlling the self-moving equipment to the center position of the vehicle according to the movement information.
  • This method does not require the operation of sticking labels on the vehicle, is applicable to various types of vehicles, can achieve the effect of accurate positioning, and reduces the cost of early implementation and vehicle modification.
  • FIG6A is a flowchart of another positioning method provided in some embodiments of the present disclosure. As shown in FIG6A , the method includes steps 502 to 510.
  • Step 502 Obtain location information of an identification point on a vehicle corresponding to the self-moving device.
  • the self-moving device can carry a vehicle and travel in the lane, or it can carry a container and travel in the lane; the embodiments of the present disclosure are not limited in this respect.
  • the vehicle has at least one identification point
  • the identification point is a position identification point used to determine the position information of the vehicle; for example, the wheels of the vehicle, the support legs (vehicle legs) of the vehicle, the crossbeams of the vehicle, and the boundary points of the vehicle.
  • the position information is quantifiable position data information
  • the position information of the identification point can be the distance between the identification point and the self-moving device.
  • the distance between the identification point and the self-moving device is determined by radar, infrared or depth map.
  • the position information of the identification point can also be the coordinate position of the identification point in the coordinate system, wherein the coordinate system can be a global coordinate system (establishing a coordinate system for the inventory area) or a local coordinate system (establishing a coordinate system with the self-moving device as the origin). For example, point cloud data information and/or image data information are collected through a point cloud acquisition device and/or an image acquisition device. After the coordinate system is established, the identification point is projected into the coordinate system to determine the position information of each identification point.
  • the coordinate system can be a two-dimensional coordinate system or a three-dimensional coordinate system.
  • the position information of the identification point on the vehicle corresponding to the self-mobile device can be the position information of the identification point stored in advance. For example, a global coordinate system is established, the coordinates of the identification points of each vehicle in the inventory area are pre-determined and stored, and the information is directly obtained; or the information can be obtained based on the environmental information collected by the self-mobile device. For example, the self-mobile device obtains point cloud data information, and based on the point cloud data information, the position information of the identification point on the vehicle corresponding to the self-mobile device is determined to be obtained, which is not limited here.
  • the self-moving device is a self-moving robot in the inventory area
  • the carrier is a shelf in the inventory area.
  • the center point of the self-moving device is used as the origin to establish a local coordinate system.
  • the coordinates in this coordinate system of the 16 shelf legs on the 4 shelves within the vicinity of the self-moving robot (a radius of 10 meters) are determined: P1 (X1, Y1, Z1), P2 (X2, Y2, Z2) ... P16 (X16, Y16, Z16).
  • the vehicles corresponding to the self-moving device include vehicles on both sides of the lane where the self-moving device is located. Obtaining the location information of the identification point on the vehicle corresponding to the self-moving device provides information reference for subsequent determination of whether the self-moving device is located in the lane and determining the boundary line of the lane.
  • obtaining the location information of the identification point on the vehicle corresponding to the self-moving device includes: obtaining initial environment information collected by the self-moving device; and determining the location information of the identification point on the vehicle corresponding to the self-moving device based on the initial environment information.
  • the initial environment information is environment data information collected by the mobile device within the vicinity of the mobile device, such as point cloud data information, image data information, illumination data information, etc.
  • the location information of the identification point on the vehicle corresponding to the self-mobile device is determined.
  • the identification point and the position information of the identification point can be confirmed separately, and then the identification point and the position information of the identification point can be compared to ensure the accuracy of determining the identification point and the position information of the identification point.
  • it cannot be determined in advance whether the initial environment information contains the identification point on the vehicle corresponding to the self-moving device; whether the location information of the identification point is obtained must be determined on the basis of the collected initial environment information. For example, suppose a shelf (vehicle) is set at a distance of 20 meters from the self-moving device, and the shelf has shelf legs (vehicle legs).
  • the effective range of the collection device on the self-moving device is a circle with a radius of 10 meters; the collected initial environment information therefore does not contain the vehicle legs, and the vehicle legs and their location information cannot be obtained.
  • as the self-moving device moves and comes within 10 meters of the vehicle, the collected initial environment information contains the vehicle legs, and the vehicle legs and their location information can be obtained. That is, obtaining the initial environment information is a dynamic process, and determining whether the location information of the identification point is obtained based on the initial environment information is also a dynamic process.
  • the point cloud data information of the vicinity (a radius of 10 meters) collected by the self-moving device (self-moving robot) is obtained.
  • based on the point cloud data information, the 13 first vehicle legs on the 4 vehicles and their coordinates are determined; based on the image data information, the 14 second vehicle legs on the 4 vehicles and their coordinates are determined; taking the union of the two, the 16 vehicle legs and their coordinates are obtained: P1 (X1, Y1, Z1), P2 (X2, Y2, Z2) ... P16 (X16, Y16, Z16).
  • the initial environment information is collected from the mobile device according to a preset sampling frequency.
  • the preset sampling frequency is a sampling frequency preset by the collection device on the mobile device, for example, the point cloud collection device collects point cloud data information within the vicinity once every 2 seconds.
  • initial environmental information is collected according to a preset sampling frequency, and it is continuously determined whether the location information of the identification point is obtained, and then the positioning method provided by the embodiment of the present disclosure is cycled to achieve dynamic adjustment of the self-moving device, thereby improving the robustness and accuracy of the posture estimation of the self-moving device.
  • determining the position information of the identification point on the vehicle corresponding to the self-mobile device based on the initial environment information includes:
  • the point cloud data information is clustered to obtain a cluster set; based on the cluster set, the position information of the identification point on the vehicle corresponding to the self-mobile device is determined.
  • clustering is a method of clustering each point in the point cloud data information to obtain a cluster set of at least one category.
  • the cluster set is a point set of at least one category, and any cluster set contains at least one point of the same category.
  • the point cloud data information is a point data set on the surface of the vehicle within the vicinity, including but not limited to: laser point cloud data information, radar data point cloud information, and lidar point cloud data information.
  • the point cloud data information contains the location information and intensity information (color, laser reflection intensity, etc.) of each point.
  • the point cloud data information includes 1,000 points, and the 1,000 points have corresponding location information and laser reflection intensity.
  • it cannot be determined in advance whether the point cloud data information contains the identification point on the vehicle corresponding to the self-moving device; whether the location information of the identification point is obtained must be determined on the basis of the point cloud data information. For example, suppose a vehicle is set 20 meters away from the self-moving device, and the vehicle has vehicle legs. The effective range of the acquisition device on the self-moving device is a circle with a radius of 10 meters; the collected point cloud data information does not include the vehicle legs, so the vehicle legs and their location information cannot be obtained. As the self-moving device moves and comes within 10 meters of the vehicle, the collected point cloud data information includes the vehicle legs, and the vehicle legs and their location information can be obtained. That is, obtaining point cloud data information is a dynamic process, and determining the location information of the identification point based on point cloud data information is also a dynamic process.
  • the position information of the identification point can be determined by clustering the point cloud data information based on the position information and/or intensity information of each point in the point cloud data information; the position information of the identification point can also be determined by filtering the point cloud data information based on the position information and/or intensity information of each point in the point cloud data information using the vehicle size information and the vehicle distribution information; the position information of the identification point can also be determined by first clustering the point cloud data information based on the position information and/or intensity information of each point in the point cloud data information, and then filtering the clustering results using the vehicle size information and the vehicle distribution information.
  • the disclosed embodiments do not limit the method for determining the position information of the identification point.
  • the following embodiments are illustrative examples of clustering the point cloud data information, and then filtering the clustering results using the vehicle size information and the vehicle distribution information.
  • the 1000 points are clustered to obtain 16 cluster points, and the 16 cluster points are determined to be vehicle legs, and the coordinates of the 16 vehicle legs are obtained: P1 (X1, Y1, Z1), P2 (X2, Y2, Z2)...P16 (X16, Y16, Z16).
  • the point cloud data information is clustered to obtain a cluster set.
  • any two points whose distance is within a preset distance threshold are determined to be points of the same category, and a corresponding set of clusters is obtained.
  • two points of the same color are determined to be points of the same category, and a corresponding set of clusters is obtained.
  • two points whose laser reflection intensities are within the same preset interval are determined to be points of the same category, and a corresponding set of clusters is obtained.
  • the above methods are combined.
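The distance-threshold variant in the list above can be sketched as a greedy single-linkage pass (a simplification for illustration; production systems typically accelerate the neighbor search with a k-d tree, and the color or reflection-intensity criteria would slot in the same way):

```python
import math

def cluster_points(points, dist_threshold):
    """Group 2-D points so that any two points within dist_threshold of each
    other end up in the same cluster set (single-linkage clustering)."""
    clusters = []
    for p in points:
        # indices of existing clusters that p is close enough to join
        hit_ids = [i for i, c in enumerate(clusters)
                   if any(math.dist(p, q) <= dist_threshold for q in c)]
        if hit_ids:
            merged = [p]
            for i in hit_ids:  # p may bridge several clusters: merge them all
                merged += clusters[i]
            clusters = [c for i, c in enumerate(clusters) if i not in hit_ids]
            clusters.append(merged)
        else:
            clusters.append([p])
    return clusters

# Two tight leg-sized clusters and one stray point:
groups = cluster_points([(0, 0), (0.03, 0), (1.8, 0), (1.83, 0), (9, 9)], 0.1)
```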
  • based on the cluster set, it is determined whether the identification point on the vehicle corresponding to the self-moving device is obtained, and the position information of the identification point is determined based on the position information of the key point in the cluster set corresponding to the identification point.
  • the key point can be the center point in the cluster set, the centroid point in the cluster set, or the outermost circle point in the cluster set, which is not limited here.
  • the point cloud data information is clustered to obtain 16 cluster sets, and the 16 cluster sets are determined to be vehicle legs. Based on the coordinates of the center of mass points in the cluster sets corresponding to each vehicle leg, the coordinates of the 16 vehicle legs are determined: P1 (X1, Y1, Z1), P2 (X2, Y2, Z2)...P16 (X16, Y16, Z16).
  • the use of high-precision point cloud data information improves the accuracy of the position information of the determined identification points; and by clustering the point cloud data information, a clustering cluster set is obtained, and based on the clustering cluster set, it is determined whether the identification points and the position information of the identification points are obtained, which can improve the accuracy of the determined identification points and the accuracy of the position information of the determined identification points.
  • the clustering results may still contain interference. Therefore, in order to improve the accuracy of the determined identification points and the position information of the identification points, it is necessary to screen and identify the clusters based on the vehicle's own characteristics.
  • determining the location information of the identification point on the vehicle corresponding to the self-moving device based on the cluster set includes:
  • according to preset vehicle size information, the cluster set is screened to obtain the target cluster set; according to preset vehicle distribution information, the target cluster set is identified to determine the position information of the identification point on the vehicle corresponding to the self-moving device.
  • the vehicle size information is the spatial size information of the identification points on the vehicle
  • the target clustering cluster set is the clustering cluster set that satisfies the spatial size information of the identification points on the vehicle.
  • the vehicle legs of the vehicle have a radius of 5 cm; if the average radius of a cluster set is 20 cm, then that cluster set is not a target cluster set.
  • the vehicle distribution information is the spatial distribution information of the identification points on the vehicle. For example, if the vehicle is a 200 cm ⁇ 150 cm ⁇ 90 cm vehicle, and the coordinate difference vector between the key points in the two target clustering cluster sets is (10 cm, 0 cm, 0 cm), then the two target clustering cluster sets are determined to be the vehicle legs of two adjacent vehicles.
  • If instead the coordinate difference vector between the key points of the two target clusters is (60 cm, 40 cm, 20 cm), the corresponding legs are neither legs of two adjacent vehicles nor two legs of the same vehicle.
  • 20 cluster sets are screened to obtain 16 target cluster sets, and according to the preset vehicle distribution information (the distance between the vehicle legs is 180 cm or 130 cm), the 16 target cluster sets are identified to determine the 16 vehicle legs on the vehicle corresponding to the self-mobile device and the coordinates of the 16 vehicle legs: P1 (X1, Y1, Z1), P2 (X2, Y2, Z2)...P16 (X16, Y16, Z16).
  • By identifying the target cluster set, the identification points on the vehicle corresponding to the self-moving device and their position information are determined, which improves the accuracy of the determined identification points and, in turn, of their position information.
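As a rough illustration of the screening and identification above, the sketch below keeps clusters whose radius matches the expected leg size and checks pairwise spacing against the preset distribution. The leg radius, tolerances, and 180 cm / 130 cm spacings are example values taken from or assumed for this sketch, not parameters prescribed by the method.

```python
import math

LEG_RADIUS_CM = 5.0                 # assumed leg radius from the example
RADIUS_TOL_CM = 2.0                 # assumed tolerance
LEG_SPACINGS_CM = (180.0, 130.0)    # preset vehicle distribution (example)
SPACING_TOL_CM = 5.0                # assumed tolerance

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def cluster_radius(points):
    """Mean distance of the cluster's points from their centroid."""
    cx, cy = centroid(points)
    return sum(math.hypot(p[0] - cx, p[1] - cy) for p in points) / len(points)

def screen_clusters(clusters):
    """Keep only clusters whose radius matches the expected leg size."""
    return [c for c in clusters
            if abs(cluster_radius(c) - LEG_RADIUS_CM) <= RADIUS_TOL_CM]

def matches_distribution(cluster_a, cluster_b):
    """True if two candidate legs are spaced like legs of the same vehicle."""
    (ax, ay), (bx, by) = centroid(cluster_a), centroid(cluster_b)
    spacing = math.hypot(bx - ax, by - ay)
    return any(abs(spacing - s) <= SPACING_TOL_CM for s in LEG_SPACINGS_CM)
```

A 20 cm radius cluster (like the counter-example in the text) is rejected by `screen_clusters`, while two 5 cm clusters 180 cm apart pass `matches_distribution`.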
  • determining the position information of the identification point on the vehicle corresponding to the self-mobile device based on the initial environment information includes: determining the position information of the identification point on the vehicle corresponding to the self-mobile device based on the image environment information.
  • The image data information is visual image data of the vehicle surfaces in the vicinity of the self-moving device, such as photos or videos.
  • the image data information contains the position information and color of each point.
  • the image data information includes 1000 points, and the 1000 points have corresponding position information and colors.
  • the image data information may include a visual identifier pre-loaded on the carrier.
  • the position information of the visual identifier can be determined by identifying the visual identifier; then, based on the position information of the visual identifier, the position information of the identification point is determined.
  • the visual identifier may be a reflector and a light-absorbing plate.
  • The reflector and its position information are identified from the image data information, and based on them the 16 carrier legs and their coordinates are determined: P1 (X1, Y1, Z1), P2 (X2, Y2, Z2) ... P16 (X16, Y16, Z16).
  • A neural network model may be used to identify identification points in the image data information collected by the self-moving device, thereby obtaining the position information of the identification points on the carrier corresponding to the self-moving device.
  • the embodiments of the present disclosure do not limit the implementation method of determining the position information of the identification point on the carrier corresponding to the self-mobile device based on the image data information.
  • Step 504 Determine the location information of the self-moving device according to the location information of the identification point.
  • the location information of the self-moving device is data information of the current location of the self-moving device, including but not limited to: whether the self-moving device is located in the lane, the distance between the self-moving device and the carrier, the distance between the self-moving device and the identification point, and the coordinate position of the self-moving device in the coordinate system.
  • the coordinate system can be a global coordinate system (establishing a coordinate system for the inventory area) or a local coordinate system (establishing a coordinate system with the carrier as the origin), and the coordinate system can be a two-dimensional coordinate system or a three-dimensional coordinate system. The embodiments of the present disclosure are not limited to this.
  • determining the location information of the self-moving device based on the location information of the identification point includes: based on the location information of the identification point, determining that the self-moving device is located in the lane when there are identification points on both sides of the movement direction of the self-moving device.
  • When the position information of the identification points on the vehicle corresponding to the self-moving device is obtained, the position information of the self-moving device is determined based on the position information of those identification points; that is, based on the position information of the self-moving device and of the identification points, it is determined whether there are identification points on both sides of the self-moving device's movement direction.
  • The boundary line of the lane is a fitted straight line along the lane boundary, used to guide the movement of the self-moving device; for example, when the device housing of the self-moving device crosses the boundary line, it is determined that the self-moving device has collided with the vehicle.
  • Based on the location information of the identification points and the movement direction of the self-moving device, when identification points are determined to exist on both sides of the movement direction, the self-moving device is determined to be in the lane; the boundary line of the lane is then determined based on the location information of the identification points.
  • For example, based on the coordinates (X0, Y0, Z0) of the self-moving device and the coordinates P1 (X1, Y1, Z1), P2 (X2, Y2, Z2) ... P16 (X16, Y16, Z16) of the 16 identification points, it is determined whether there are identification points on both sides of the moving direction of the self-moving device; if identification points exist on both the left and right sides, the self-moving device is determined to be located in the lane. The boundary line of the lane is then determined based on the position information of the identification points located on both sides of the moving direction.
  • the boundary line of the lane is determined based on the position information of the identification point, which lays a foundation for the subsequent determination of the control parameters.
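The "identification points on both sides of the movement direction" test above reduces to checking the sign of a 2-D cross product; the vector representation of the movement direction and the function names below are assumptions of this sketch, not taken from the text.

```python
def side_of(move_dir, device_xy, point_xy):
    """> 0: point lies left of the movement direction; < 0: right; 0: on it."""
    rel_x = point_xy[0] - device_xy[0]
    rel_y = point_xy[1] - device_xy[1]
    return move_dir[0] * rel_y - move_dir[1] * rel_x

def in_lane(move_dir, device_xy, id_points):
    """The device is in the lane when identification points lie on both sides."""
    sides = [side_of(move_dir, device_xy, p) for p in id_points]
    return any(s > 0 for s in sides) and any(s < 0 for s in sides)
```

With the device at the origin moving along +x, a point at (1, 1) is on the left and a point at (2, -1) is on the right, so `in_lane` returns True.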
  • determining the boundary line of the lane based on the position information of the identification point includes: performing straight line fitting on the identification point based on the position information of the identification point to obtain a first boundary line and a second boundary line corresponding to the lane.
  • the first boundary line may be the left boundary line of the lane
  • the second boundary line may be the right boundary line of the lane.
  • the disclosed embodiments do not limit whether the first boundary line is the left boundary line or the right boundary line of the lane, and whether the second boundary line is the right boundary line or the left boundary line of the lane.
  • In the following, the case where the first boundary line is the left boundary line of the lane and the second boundary line is the right boundary line of the lane is taken as an example for description.
  • the left boundary line and the right boundary line are based on the moving direction of the self-moving device, that is, the left boundary line is the boundary line located on the left side of the moving direction; the right boundary line is the boundary line located on the right side of the moving direction.
  • the moving direction of the self-moving device is the direction that the head of the self-moving device is currently pointing to, which can be represented by a direction vector in the coordinate system.
  • The straight-line fitting yields at least one straight line obtained by fitting through the identification points.
  • the identification points located on both sides of the self-moving device are sequentially connected to obtain multiple line segments, and the multiple line segments are determined as the left boundary line and the right boundary line corresponding to the lane.
  • the identification points located on both sides of the self-moving device are sequentially connected to obtain multiple line segments located on both sides of the self-moving device, and the line segments located on both sides of the self-moving device and with the largest angle among the multiple line segments are determined as the left boundary line and the right boundary line corresponding to the lane.
  • For example, based on the coordinates (X0, Y0, Z0) of the self-moving device and the coordinates P1 (X1, Y1, Z1), P2 (X2, Y2, Z2) ... P16 (X16, Y16, Z16) of the 16 vehicle legs, it is determined whether there are vehicle legs on both sides of the moving direction of the self-moving device. If so, the self-moving device is determined to be located in the lane; a straight-line fit is then performed, based on the leg coordinates, on the vehicle legs on each side of the moving direction to obtain the left and right boundary lines corresponding to the lane.
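A minimal least-squares line fit for the boundary lines can be sketched as below, assuming the legs on each side have already been separated and the lane runs roughly along the x-axis so each boundary can be written as y = slope·x + intercept; the coordinate values are illustrative only.

```python
def fit_boundary(leg_xy):
    """Least-squares fit of y = slope * x + intercept through one side's legs."""
    n = len(leg_xy)
    mean_x = sum(p[0] for p in leg_xy) / n
    mean_y = sum(p[1] for p in leg_xy) / n
    sxx = sum((p[0] - mean_x) ** 2 for p in leg_xy)
    sxy = sum((p[0] - mean_x) * (p[1] - mean_y) for p in leg_xy)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# three legs per side at 1.8 m spacing, lane half-width 1 m (assumed numbers)
left_line = fit_boundary([(0.0, 1.0), (1.8, 1.0), (3.6, 1.0)])
right_line = fit_boundary([(0.0, -1.0), (1.8, -1.0), (3.6, -1.0)])
```

Any robust fitting routine (e.g. RANSAC) could stand in for the plain least squares used here; the method does not prescribe one.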
  • Step 508 Determine control parameters of the self-moving device according to the motion information of the self-moving device and the boundary line of the lane.
  • the motion information of the mobile device is information about the current motion state of the mobile device, including but not limited to: location information of the mobile device, motion direction of the mobile device, motion speed of the mobile device, steering angle of the mobile device, and motion height of the mobile device.
  • determining the control parameters of the self-moving device based on the motion information of the self-moving device and the boundary line of the lane includes: determining the control parameters of the self-moving device based on the deviation information between the motion information of the self-moving device and the boundary line, wherein the deviation information includes at least one of a deviation angle, a deviation distance, and a deviation height.
  • control parameters of the self-moving device can be determined as target turning angle: 3 degrees clockwise; target movement time: 0.2 seconds; target movement speed: 1 meter per second.
  • In some embodiments, determining the control parameters of the self-moving device based on its motion information and the boundary line of the lane includes: determining the control angle of the self-moving device according to the angle between its moving direction and the boundary line of the lane; and/or determining the control distance of the self-moving device according to the distance between its position information and the boundary line of the lane.
  • the position information of the self-moving device is the current position information of the self-moving device, which can be the coordinates of a certain point in the coordinate system.
  • the position information of the self-moving device can be represented by the position information of key points on the self-moving device; for example, the position information of the center of mass of the self-moving device, the position information of the center point of the self-moving device, and the position information of the top point of the head of the self-moving device.
  • the control angle is a control parameter for controlling the movement angle of the self-moving device
  • the control distance is a control parameter for controlling the movement distance of the self-moving device.
  • control angle of the self-moving device is determined based on the angle between the movement direction of the self-moving device and the boundary line of the lane, including: determining the control angle of the self-moving device based on the deviation angle between the movement direction of the self-moving device and the boundary line of the lane.
  • the deviation angle is the angle between the moving direction and the boundary line of the lane.
  • For example, when the deviation angle is 3 degrees, the control angle of the self-moving robot is determined as a 3-degree clockwise rotation.
  • determining the control distance of the mobile device based on the distance between the location information of the mobile device and the boundary line of the lane includes: determining the control distance of the mobile device based on the vertical distance between the location information of the mobile device and the boundary line of the lane.
  • The vertical distance is the perpendicular distance from the position of the self-moving device to the boundary line of the lane.
  • For example, the control distance of the self-moving robot is determined to be 0.2 meters based on the vertical distance of 0.2·sin 3° meters between the coordinates (0, 0) of the self-moving device (self-moving robot) and the boundary line of the lane.
  • control angle of the self-moving device is determined according to the angle between the moving direction of the self-moving device and the boundary line of the lane; and/or the control distance of the self-moving device is determined according to the distance between the position information of the self-moving device and the boundary line of the lane. This improves the accuracy of the determined control angle and/or control distance, improves the robustness and accuracy of the self-moving device position estimation, and thus improves the accuracy of the subsequent control of the movement of the self-moving device.
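The two quantities above, the deviation angle and the vertical distance, reduce to standard plane geometry. In the sketch below each boundary line is represented as a direction vector (for the angle) or a point plus a direction (for the distance); these representations and names are assumptions of the sketch, not of the method.

```python
import math

def deviation_angle_deg(move_dir, line_dir):
    """Unsigned angle in degrees between the movement direction and the line."""
    cross = move_dir[0] * line_dir[1] - move_dir[1] * line_dir[0]
    dot = move_dir[0] * line_dir[0] + move_dir[1] * line_dir[1]
    return abs(math.degrees(math.atan2(cross, dot)))

def perpendicular_distance(point, line_point, line_dir):
    """Perpendicular (vertical) distance from `point` to the boundary line."""
    rel_x, rel_y = point[0] - line_point[0], point[1] - line_point[1]
    cross = line_dir[0] * rel_y - line_dir[1] * rel_x
    return abs(cross) / math.hypot(line_dir[0], line_dir[1])
```

For instance, a movement direction of (1, 0) against a boundary direction of (1, 1) gives a 45° deviation angle, and a device at (0, 2) is 2 m from a boundary through the origin along the x-axis.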
  • In some embodiments, the control angle of the self-moving device is determined based on the angle between its movement direction and the boundary lines of the lane, including: obtaining a first angle between the movement direction of the self-moving device and the first boundary line of the lane, and a second angle between the movement direction and the second boundary line of the lane; determining a reference angle based on the first angle and the second angle; and determining the difference between the reference angle and the theoretical angle as the control angle of the self-moving device.
  • The theoretical angle is the angle between the direction of movement and the center line of the lane.
  • The center line of the lane is the straight line formed by the midpoints of segments drawn perpendicular to the lane between the first boundary line and the second boundary line.
  • The reference angle characterizes the offset angle between the moving direction of the self-moving device and the boundary lines of the lane.
  • the reference angle is determined based on the first angle and the second angle, and the reference angle can be obtained by calculating the average of the first angle and the second angle, or by calculating the weighted average of the first angle and the second angle. This disclosure does not limit this.
  • the first angle (3.8 degrees) between the moving direction (-1.8 degrees) of the self-mobile device (self-mobile robot) and the first boundary line (+2 degrees) of the lane, and the second angle (3.2 degrees) between the moving direction (-1.8 degrees) of the self-mobile robot and the second boundary line (-5 degrees) of the lane are obtained.
  • the average of the first angle (3.8 degrees) and the second angle (3.2 degrees) is calculated to obtain a reference angle (3.5 degrees); the difference between the reference angle (3.5 degrees) and the theoretical angle (0.5 degrees) is determined as the control angle of the self-mobile robot: 3 degrees.
  • the theoretical angle is the angle between the moving direction and the center line of the lane.
  • the first angle between the moving direction of the self-moving device and the first boundary line of the lane, and the second angle between the moving direction and the second boundary line of the lane are obtained;
  • the reference angle is determined according to the first angle and the second angle;
  • the difference between the reference angle and the theoretical angle is determined as the control angle of the self-moving device, wherein the theoretical angle is the angle between the moving direction and the center line of the lane.
  • the accuracy of the determined control angle is improved, the robustness and accuracy of the pose estimation of the self-moving device are improved, and the accuracy of the subsequent control of the movement of the self-moving device is improved.
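Restating the worked example above as arithmetic (all angles in degrees; the values are taken from the text, with the sign convention left implicit as it is there):

```python
first_angle = 3.8    # movement direction (-1.8°) vs. first boundary line (+2°)
second_angle = 3.2   # movement direction (-1.8°) vs. second boundary line (-5°)

reference_angle = (first_angle + second_angle) / 2       # 3.5°
theoretical_angle = 0.5                                  # direction vs. lane center line
control_angle = reference_angle - theoretical_angle      # 3.0° (rotate clockwise)
```

A weighted average could replace the plain mean in the `reference_angle` step, as the text notes.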
  • determining the control distance of the self-moving device according to the distance between the position information of the self-moving device and the boundary line of the lane includes:
  • a first distance between the position information of the self-mobile device and the first boundary line, and a second distance between the position information of the self-mobile device and the second boundary line are obtained; when the first distance is less than the second distance, the difference between the first distance and the preset theoretical width is determined as the control distance of the self-mobile device; when the first distance is greater than the second distance, the difference between the second distance and the theoretical width is determined as the control distance of the self-mobile device.
  • the first distance is the vertical distance between the position information of the self-moving device and the first boundary line
  • the second distance is the vertical distance between the position information of the self-moving device and the second boundary line.
  • the theoretical width is a preset width of the lane, which can be measured and set in advance for the lane, or measured and set in advance for the size of the self-moving device, or a combination of the two, which is not limited here. In the embodiment of the present disclosure, the theoretical width is half the width of the lane.
  • the first boundary line is the left boundary line of the lane
  • the second boundary line is the right boundary line of the lane.
  • the first distance (1.8 meters) between the coordinates (0,0) of the self-mobile device (self-mobile robot) and the first boundary line, and the second distance (2.2 meters) between the coordinates (0,0) of the self-mobile robot and the right boundary line of the lane are obtained. Since the first distance is smaller than the second distance, the difference between the first distance (1.8 meters) and the preset theoretical width (2 meters) is determined as the control distance of the self-mobile robot: 0.2 meters.
  • the first distance between the position information of the self-moving device and the first boundary line, and the second distance between the position information of the self-moving device and the second boundary line are obtained; when the first distance is less than the second distance, the difference between the first distance and the preset theoretical width is determined as the control distance of the self-moving device; when the first distance is greater than the second distance, the difference between the second distance and the theoretical width is determined as the control distance of the self-moving device.
  • the accuracy of the determined control distance is improved, the robustness and accuracy of the self-moving device posture estimation are improved, and the accuracy of the subsequent control of the movement of the self-moving device is improved.
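The branch above can be sketched as follows. The sign of the returned value encoding which side the device has drifted toward is an interpretation of this sketch, not stated in the text; its magnitude matches the 0.2 m control distance of the example.

```python
def control_distance(first_distance, second_distance, theoretical_width):
    """Offset of the device from the lane center: nearer boundary distance
    minus the preset theoretical width (half the lane width in the example)."""
    nearer = first_distance if first_distance < second_distance else second_distance
    return nearer - theoretical_width

# Example from the text: distances 1.8 m and 2.2 m, theoretical width 2 m.
offset = control_distance(1.8, 2.2, 2.0)   # magnitude 0.2 m
```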
  • Step 510 Based on the control parameters, control the driving component in the self-moving device to move the self-moving device.
  • the driving component is a mechanical component in the self-moving device that drives the self-moving device to move, such as a steering wheel, a track, a mechanical foot, a propeller, etc.
  • controlling a driving component in a self-moving device based on control parameters includes: generating motion control instructions based on the control parameters, sending the motion control instructions to the driving component in the self-moving device, and controlling the driving component to move according to the control parameters.
  • a motion control instruction is generated; then, the motion control instruction is sent to the left wheel and the right wheel of the self-moving device (self-moving robot), controlling the left wheel to rotate at 300 revolutions per minute for 0.2 seconds, and controlling the right wheel to rotate at 100 revolutions per minute for 0.2 seconds, so that the self-moving robot rotates 3 degrees clockwise and moves at a speed of 1 meter per second for 0.2 seconds.
  • the driving component in the self-moving device is controlled to move the self-moving device.
  • the driving component accurate control of the movement of the self-moving device is achieved, the probability of collision with the corresponding carrier is reduced, damage to the self-moving device is avoided, and the safety and stability of the self-moving device are improved.
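The wheel-level example above (left 300 rpm, right 100 rpm for 0.2 s, yielding a 3° clockwise turn at 1 m/s) depends on the robot's geometry. The sketch below shows the general differential-drive conversion; the wheel radius and track width are assumed illustrative values, not taken from the text.

```python
import math

WHEEL_RADIUS_M = 0.05   # assumed
TRACK_WIDTH_M = 0.4     # assumed distance between left and right wheels

def wheel_speeds(linear_mps, clockwise_turn_deg, duration_s):
    """Return (left, right) wheel angular speeds in rad/s for the command."""
    yaw_rate = math.radians(clockwise_turn_deg) / duration_s
    v_left = linear_mps + yaw_rate * TRACK_WIDTH_M / 2   # clockwise: left faster
    v_right = linear_mps - yaw_rate * TRACK_WIDTH_M / 2
    return v_left / WHEEL_RADIUS_M, v_right / WHEEL_RADIUS_M

# 3 degrees clockwise over 0.2 s while moving at 1 m/s, as in the example
left_rads, right_rads = wheel_speeds(1.0, 3.0, 0.2)
```

The mean of the two wheel rim speeds recovers the commanded linear speed, and their difference produces the commanded turn.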
  • FIG6B shows a schematic diagram of positioning of a positioning method provided by an embodiment of the present disclosure, as shown in FIG6B :
  • the self-moving device will move along the center line of the lane.
  • the first boundary line (left boundary line) and the second boundary line (right boundary line) determined by the self-moving device through identifying the position information of the identification points on both sides should be parallel to the movement direction of the self-moving device.
  • the first distance between the self-moving device and the first boundary line and the second distance between the self-moving device and the second boundary line are equal.
  • In practice, the first boundary line and the second boundary line are not parallel to each other but form a certain angle: there is a first angle between the movement direction of the self-moving device and the first boundary line, and a second angle between the movement direction and the second boundary line.
  • the actual movement adjustment angle of the self-moving device is the difference between the reference angle and the theoretical angle.
  • the theoretical angle is the angle between the movement direction of the self-moving device and the center line of the lane.
  • the lane has a theoretical width.
  • the left–right position deviation of the self-moving device (the difference between the first distance and the theoretical width, or the difference between the second distance and the theoretical width) is obtained to control the movement of the self-moving robot.
  • FIG6C shows a positioning flow chart of the positioning method provided by an embodiment of the present disclosure, as shown in FIG6C :
  • FIG6D shows a positioning method according to an embodiment of the present disclosure, applied to a self-moving robot (a storage robot) in a carrier area.
  • the positioning method of a self-moving robot applied to a carrier area is described by taking a self-moving robot in a carrier area as an example, and includes steps 602 to 630:
  • Step 602 Acquire point cloud data information and image data information collected by the mobile robot.
  • Step 604 cluster the point cloud data information to obtain a cluster set.
  • Step 606 Filter the cluster set according to the preset vehicle size information to obtain the target cluster set.
  • Step 608 Identify the target cluster set according to the preset vehicle distribution information, and determine and obtain the first vehicle leg and the coordinates of the first vehicle leg on the neighboring vehicle of the self-moving robot.
  • Step 610 Based on the image data information, determine and obtain the coordinates of the second vehicle leg and the second vehicle leg on the adjacent vehicle of the self-mobile robot.
  • Step 612 Determine the intersection of the first vehicle leg and the second vehicle leg as the vehicle leg, and obtain the coordinates of the vehicle leg.
  • Step 614 Based on the coordinates of the vehicle legs, determine whether there are vehicle legs on both sides of the moving direction of the self-moving robot.
  • Step 616 If yes, it is determined that the self-propelled robot is located in the lane, and based on the coordinates of the vehicle legs, a straight line fitting is performed on the vehicle legs to obtain a left boundary line (first boundary line) and a right boundary line (second boundary line) corresponding to the lane.
  • Step 618 Obtain a first angle between the moving direction of the mobile robot and the left boundary line, and a second angle between the moving direction and the right boundary line.
  • Step 620 Calculate the average of the first angle and the second angle to obtain a reference angle.
  • Step 622 Determine the difference between the reference angle and the theoretical angle as the control angle of the self-moving robot.
  • Step 624 Obtain a first distance between the coordinates of the self-moving robot and the left boundary line, and a second distance between the coordinates of the self-moving robot and the right boundary line.
  • Step 626 When the first distance is smaller than the second distance, the difference between the first distance and the preset theoretical width is determined as the control distance of the self-moving robot.
  • Step 628 When the first distance is greater than the second distance, the difference between the second distance and the theoretical width is determined as the control distance of the self-moving robot.
  • Step 630 Based on the control angle and the control distance, control the driving component in the self-moving robot to move the self-moving robot.
  • the position information of the self-moving device is determined according to the position information of the identification point on the vehicle; when it is determined that the self-moving device is located in the lane, the boundary line of the lane is determined based on the position information of the identification point; the control parameters of the self-moving device are determined according to the motion information and boundary line of the self-moving device; and the movement of the self-moving device is controlled based on the control parameters.
  • the boundary line of the lane is determined by the position information of the identification point on the corresponding vehicle, and the control parameters of the self-moving device are determined according to the motion information and boundary line of the self-moving device, and then the movement of the self-moving device is controlled, which improves the robustness and accuracy of the self-moving device's pose estimation, effectively reduces the probability of collision with the corresponding vehicle, avoids damage to the self-moving device, and improves the safety and stability of the self-moving device.
  • With the positioning method provided in the above embodiments of the present disclosure, the center position of the bottom of the vehicle can be determined accurately while reducing the up-front cost of pasting identification codes; the boundary lines of the lane where the self-moving device is located can also be determined accurately, effectively reducing the probability of collision between the self-moving device and the vehicles on both sides of the lane, avoiding damage to the self-moving device, and improving its safety and stability.
  • FIG7 is a schematic diagram of the structure of a positioning device provided according to some embodiments of the present disclosure. As shown in FIG7 , the device includes:
  • the identification module 702 is configured to identify the position to be identified on the vehicle corresponding to the self-mobile device and obtain an identification result; the identification result includes the ranging attribute information between the self-mobile device and the position to be identified on the vehicle, and/or the position information of the self-mobile device.
  • the first determination module 704 is configured to determine target information based on the recognition result; the target information includes the center position information of the vehicle and/or the boundary line of the lane where the self-moving device is located.
  • the second determination module 706 is configured to determine the control parameters of the mobile device according to the target information.
  • the control module 708 is configured to control the movement of the mobile device based on the control parameters.
  • the carrier corresponding to the self-moving device includes the carrier to be transported by the self-moving device, and the position to be identified on the carrier includes at least three carrier legs on the carrier;
  • the control module 708 is also configured to control the self-moving device to move to the vehicle identification area corresponding to the carrier;
  • the identification module 702 is also configured to identify at least three vehicle legs on the vehicle to obtain the ranging attribute information between the self-moving device and each vehicle leg.
  • the target information includes the center position information of the vehicle; the first determination module 704 is further configured to determine the center position information of the vehicle based on the ranging attribute information between the self-moving device and each leg of the vehicle.
  • the second determination module 706 is further configured to determine control parameters of the self-moving device based on the position information and center position information of the self-moving device; the control parameters include movement information; the control module 708 is further configured to control the self-moving device to move to the bottom center position of the vehicle based on the movement information.
  • control module 708 is further configured to control the mobile device to move to a first vehicle identification area corresponding to the vehicle, wherein the first vehicle identification area is located outside the vehicle;
  • the self-moving device is controlled to move to a second vehicle identification area corresponding to the vehicle, wherein the second vehicle identification area is located at the bottom of the vehicle.
  • At least one ranging sensor is provided on the self-moving device; the identification module 702 is further configured to identify the vehicle based on the at least one ranging sensor and the first identification angle; when at least three vehicle legs are identified, ranging attribute information between the self-moving device and each vehicle leg is obtained; when less than three vehicle legs are identified, the first ranging attribute information between the self-moving device and each vehicle leg at the first identification angle is obtained, the first identification angle is adjusted to a second identification angle, and the vehicle is identified according to the second identification angle, and the second ranging attribute information between the self-moving device and each vehicle leg at the second identification angle is obtained, and the ranging attribute information includes the first ranging attribute information and the second ranging attribute information.
  • the first determination module 704 is further configured to calculate the device position coordinates of each vehicle leg based on the ranging attribute information between the self-moving device and each vehicle leg; convert the device position coordinates of each vehicle leg into the global position coordinates of each vehicle leg; and determine the center position information of the vehicle based on the global position coordinates of each vehicle leg.
  • the first determination module 704 is further configured to determine the sensor position coordinates of each of the vehicle legs based on the ranging attribute information between the self-moving device and each of the vehicle legs; and convert the sensor position coordinates of each of the vehicle legs into the device position coordinates of each of the vehicle legs.
  • the ranging attribute information includes a ranging distance and a ranging angle based on the sensor position coordinates; the first determination module 704 is also configured to determine the sensor position coordinates corresponding to each vehicle leg based on the ranging distance and ranging angle between the self-moving device and each vehicle leg.
  • the first determination module 704 is further configured to, when the number of vehicle legs is three, determine the diagonally distributed vehicle legs, and determine the center position information of the vehicle based on the global position coordinates of the diagonally distributed vehicle legs; and when the number of vehicle legs is four, determine the center position information of the vehicle based on the global position coordinates of the four vehicle legs.
  • the second determination module 706 is further configured to determine the vehicle angle of the vehicle based on the global position coordinates of the vehicle legs, obtain the device angle of the self-moving device; determine the adjustment angle based on the vehicle angle and the device angle; determine the device movement distance based on the position information of the self-moving device and the center position information of the vehicle; determine the adjustment angle and the device movement distance as the movement information of the self-moving device; and control the self-moving device to move to the bottom center position of the vehicle based on the control parameters, including: controlling the self-moving device to move to the bottom center position of the vehicle based on the adjustment angle and the device movement distance.
  • the position to be identified on the vehicle includes an identification point on the vehicle; the identification module 702 is further configured to obtain the position information of the identification point on the vehicle corresponding to the self-moving device, and determine the position information of the self-moving device based on the position information of the identification point.
  • the target information is the boundary line of the lane where the self-moving device is located; the first determination module 704 is further configured to determine the boundary line of the lane based on the position information of the identification point when it is determined, according to the position information of the self-moving device, that the self-moving device is located in the lane.
  • the second determination module 706 is further configured to determine the control parameters of the self-moving device according to the motion information of the self-moving device and the boundary line of the lane.
  • the identification module 702 is further configured to obtain initial environment information collected by the self-moving device, and determine, based on the initial environment information, the position information of the identification point on the vehicle corresponding to the self-moving device.
  • the initial environment information includes point cloud data information; the identification module 702 is further configured to cluster the point cloud data information to obtain a cluster set, and determine, based on the cluster set, the position information of the identification point on the vehicle corresponding to the self-moving device.
  • the identification module 702 is further configured to screen the cluster set according to preset vehicle size information to obtain a target cluster set, and identify the target cluster set according to preset vehicle distribution information to determine the position information of the identification point on the vehicle corresponding to the self-moving device.
  • the initial environment information includes image data information; the identification module 702 is further configured to determine the position information of the identification point on the vehicle corresponding to the self-moving device based on the image data information.
  • the first determination module 704 is further configured to determine the control angle of the self-moving device based on the angle between the movement direction of the self-moving device and the boundary line of the lane; and/or determine the control distance of the self-moving device based on the distance between the position information of the self-moving device and the boundary line of the lane.
  • the first determination module 704 is further configured to determine that the self-moving device is located in the lane based on the position information of the identification point when there are identification points on both sides of the moving direction of the self-moving device.
  • the first determination module 704 is further configured to perform straight line fitting on the identification points based on the position information of the identification points to obtain the first boundary line and the second boundary line corresponding to the lane.
  • the first determination module 704 is further configured to obtain a first angle between the movement direction of the self-moving device and the first boundary line, and a second angle between the movement direction of the self-moving device and the second boundary line; determine a reference angle based on the first angle and the second angle; and determine the difference between the reference angle and the theoretical angle as the control angle of the self-moving device, wherein the theoretical angle is the angle between the movement direction and the center line of the lane.
  • the first determination module 704 is further configured to obtain a first distance between the position information of the self-moving device and the first boundary line, and a second distance between the position information of the self-moving device and the second boundary line; when the first distance is less than the second distance, the difference between the first distance and a preset theoretical width is determined as the control distance of the self-moving device; when the first distance is greater than the second distance, the difference between the second distance and the theoretical width is determined as the control distance of the self-moving device.
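The control-distance rule of the module above can be sketched as follows. This is a minimal illustration, not part of the disclosure: the function name is an assumption, and the case of exactly equal distances (which the text leaves open) is resolved here by taking the first distance.

```python
def control_distance(first_distance: float, second_distance: float,
                     theoretical_width: float) -> float:
    """Control distance of the self-moving device.

    Whichever boundary line is nearer, the difference between that
    distance and the preset theoretical width is the control distance.
    Equal distances fall back to the first distance (an assumption).
    """
    nearer = min(first_distance, second_distance)
    return nearer - theoretical_width
```

For instance, with a first distance of 0.4 m, a second distance of 0.8 m, and a theoretical width of 0.5 m, the device is 0.1 m closer to the first boundary line than it should be.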
  • the above is a schematic scheme of a positioning device of this embodiment. It should be noted that the technical scheme of the positioning device and the technical scheme of the positioning method described above are of the same concept, and the details of the technical scheme of the positioning device not described in detail can be found in the description of the technical scheme of the positioning method described above.
  • FIG. 8 shows a block diagram of a computing device 800 according to an embodiment of the present disclosure.
  • the components of the computing device 800 include but are not limited to a memory 810 and a processor 820.
  • the processor 820 is connected to the memory 810 via a bus 830, and a database 850 is used to store data.
  • the computing device 800 also includes an access device 840 that enables the computing device 800 to communicate via one or more networks 860.
  • the networks 860 include the public switched telephone network (PSTN), local area networks (LAN), wide area networks (WAN), personal area networks (PAN), or combinations of communication networks such as the Internet.
  • the access device 840 may include one or more of any type of network interface, wired or wireless (e.g., a network interface card (NIC)), such as an IEEE 802.11 wireless local area network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a near field communication (NFC) interface, and the like.
  • the above components of the computing device 800 and other components not shown in FIG. 8 may also be connected to each other, for example, via a bus. It should be understood that the computing device structure block diagram shown in FIG. 8 is only for illustrative purposes and is not intended to limit the scope of the present disclosure. Those skilled in the art may add or replace other components as needed.
  • the computing device 800 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., a tablet computer, a personal digital assistant, a laptop computer, a notebook computer, a netbook, etc.), a mobile phone (e.g., a smart phone), a wearable computing device (e.g., a smart watch, smart glasses, etc.), or other types of mobile devices, or a stationary computing device such as a desktop computer or a personal computer (PC).
  • the computing device 800 may also be a mobile or stationary server.
  • the embodiment of the present disclosure also provides a computer-readable storage medium storing computer instructions, which implement the steps of the positioning method as described above when executed by a processor.
  • the computer instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, USB flash drive, mobile hard disk, magnetic disk, optical disk, computer memory, read-only memory (ROM), random access memory (RAM), electric carrier signal, telecommunication signal and software distribution medium, etc. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction. For example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electric carrier signals and telecommunication signals.

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A positioning method, comprising: identifying a position to be identified on a vehicle (130) to obtain an identification result (202); determining target information based on the identification result (204), the target information including center position information of the vehicle (130) and/or a boundary line of the lane in which a self-moving device (110) is located; determining control parameters of the self-moving device (110) according to the target information (206); and finally controlling the self-moving device (110) to move based on the control parameters (208).

Description

Positioning method, apparatus, computing device and storage medium

This application claims priority to Chinese patent application No. 202211536121.9 filed on December 2, 2022, and to Chinese patent application No. 202310383208.5 filed on April 11, 2023, the entire contents of which are incorporated into the present disclosure by reference.

Technical Field

The present disclosure relates to the field of intelligent warehousing technology, and in particular to a positioning method. The present disclosure also relates to a positioning apparatus, a computing device, and a computer-readable storage medium.

Background

With the rapid development of e-commerce, intelligent warehousing systems have emerged. At present, intelligent warehousing systems are widely used in fields such as logistics warehouses, e-commerce warehouses, pharmaceutical warehouses and catering warehouses. In an intelligent warehousing scenario, goods, or the vehicles on which goods are placed, can be transported by self-moving devices to accomplish tasks such as warehousing, picking, sorting and outbound delivery.

Summary

Embodiments of the present disclosure provide a positioning method. The present disclosure also relates to a positioning apparatus, a computing device, and a computer-readable storage medium.

According to some embodiments of the present disclosure, a positioning method is provided, including: identifying a position to be identified on a vehicle corresponding to a self-moving device to obtain an identification result, the identification result including ranging attribute information between the self-moving device and the position to be identified on the vehicle, and/or position information of the self-moving device; determining target information based on the identification result, the target information including center position information of the vehicle and/or a boundary line of the lane in which the self-moving device is located; determining control parameters of the self-moving device according to the target information; and controlling the self-moving device to move based on the control parameters.

According to some embodiments of the present disclosure, a positioning apparatus is provided, including: an identification module configured to identify a position to be identified on a vehicle corresponding to a self-moving device to obtain an identification result, the identification result including ranging attribute information between the self-moving device and the position to be identified on the vehicle, and/or position information of the self-moving device; a first determination module configured to determine target information based on the identification result, the target information including center position information of the vehicle and/or a boundary line of the lane in which the self-moving device is located; a second determination module configured to determine control parameters of the self-moving device according to the target information; and a control module configured to control the self-moving device to move based on the control parameters.

According to some embodiments of the present disclosure, a computing device is provided, including a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the processor implements the steps of the positioning method when executing the computer instructions.

According to some embodiments of the present disclosure, a computer-readable storage medium is provided, which stores computer instructions that, when executed by a processor, implement the steps of the positioning method.

In the positioning method provided by the present disclosure, the position to be identified on the vehicle is identified to determine the center position information of the vehicle and/or the boundary line of the lane in which the self-moving device is located, and the control parameters of the self-moving device are then determined from the center position information and/or the boundary line, so that the self-moving device is controlled to move according to the control parameters. The method can accurately locate the center position of the vehicle without pasting identification codes, reducing the up-front cost of pasting such codes. Moreover, by controlling the movement of the self-moving device according to the boundary line of the lane in which it is located, the robustness and accuracy of the pose estimation of the self-moving device can be improved, the probability of collision with the vehicles on both sides of the lane is effectively reduced, damage to the self-moving device is avoided, and the safety and stability of the self-moving device are improved.
Brief Description of the Drawings

FIG. 1A is a schematic diagram of a scenario of a positioning method provided according to some embodiments of the present disclosure;

FIG. 1B is a flowchart of a positioning method provided according to some embodiments of the present disclosure;

FIG. 1C is a flowchart of another positioning method provided according to some embodiments of the present disclosure;

FIG. 2 is a schematic diagram of a first vehicle identification area provided according to some embodiments of the present disclosure;

FIG. 3 is a schematic diagram of a second vehicle identification area provided according to some embodiments of the present disclosure;

FIG. 4A is a schematic diagram of identifying vehicle legs provided according to some embodiments of the present disclosure;

FIG. 4B is another schematic diagram of identifying vehicle legs provided according to some embodiments of the present disclosure;

FIG. 5A is a flowchart of a method for determining center position information of a vehicle provided according to some embodiments of the present disclosure;

FIG. 5B is a processing flowchart of a positioning method applied to an intelligent warehousing scenario provided according to some embodiments of the present disclosure;

FIG. 6A is a flowchart of yet another positioning method provided according to some embodiments of the present disclosure;

FIG. 6B is a positioning schematic diagram of a positioning method provided according to some embodiments of the present disclosure;

FIG. 6C is a positioning flowchart of a positioning method provided according to some embodiments of the present disclosure;

FIG. 6D is a schematic diagram of yet another positioning method provided according to some embodiments of the present disclosure;

FIG. 7 is a schematic structural diagram of a positioning apparatus provided according to some embodiments of the present disclosure;

FIG. 8 is a structural block diagram of a computing device provided according to some embodiments of the present disclosure.
Detailed Description

Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure. However, the present disclosure can be implemented in many ways other than those described herein, and those skilled in the art can make similar extensions without departing from the substance of the present disclosure; the present disclosure is therefore not limited by the specific implementations disclosed below.

The terms used in one or more embodiments of the present disclosure are for the purpose of describing particular embodiments only and are not intended to limit the one or more embodiments of the present disclosure. The singular forms "a", "said" and "the" used in one or more embodiments of the present disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used in one or more embodiments of the present disclosure refers to and includes any or all possible combinations of one or more of the associated listed items.

It should be understood that although the terms first, second, etc. may be used in one or more embodiments of the present disclosure to describe various pieces of information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other. For example, without departing from the scope of one or more embodiments of the present disclosure, "first" may also be referred to as "second", and similarly, "second" may also be referred to as "first". Depending on the context, the word "if" as used herein may be interpreted as "when", "while" or "in response to determining".
Generally, in a warehousing system, when a self-moving device (for example, a transport robot) moves, it can determine its position information by identifying identification codes pasted along the lane, and its movement is then controlled accordingly.

For example, in an intelligent warehousing scenario, a transport robot usually needs to carry a movable vehicle (a rack or a pallet) from one position to another. Before lifting the movable vehicle, the robot needs to accurately know the center position of the movable vehicle and its own center position, so that the control system can control the robot to move directly below the center position of the movable vehicle, making the lifting center of the robot coincide with the center position of the movable vehicle, thereby ensuring the stability of the movable vehicle during transport.

Before lifting, the robot needs to accurately know the center position and angle of the movable vehicle relative to the center position of the robot. Usually, a marker such as a two-dimensional code is pasted at the bottom center position of the movable vehicle, and a sensor such as a camera is mounted on the top of the robot; the camera identifies the position deviation and angle deviation of the marker, from which the accurate coordinates and angle of the center position of the movable vehicle are calculated.

However, the above way of determining the center position of the movable vehicle requires modifying the movable vehicle itself, such as pasting a specific marker at its center position, and requires the pasting position and angle to be sufficiently precise, which introduces considerable material and implementation costs.

As another example, in a scenario where the self-moving device travels in a movement passage (e.g., a lane) within an inventory area, the self-moving device corrects its current pose by identifying the identification codes pasted in the passage, so that it moves along the correct path. However, when code identification becomes abnormal or the wheels slip, a pose deviation of the self-moving device occurs. In this case, the self-moving device is very likely to collide with a corresponding vehicle, causing damage to the device, which is a great test of its safety and stability.

On this basis, embodiments of the present disclosure provide a positioning method. The present disclosure also relates to a positioning apparatus, a computing device, and a computer-readable storage medium, which are described in detail one by one in the following embodiments.

FIG. 1A shows a schematic diagram of a scenario of a positioning method provided by an embodiment of the present disclosure. As shown in FIG. 1A, a plurality of vehicles 130 are placed in an inventory area 120, each vehicle 130 has corresponding identification points, the vehicles 130 are distributed in rows and columns, and a lane is left between two vehicles 130 for a plurality of self-moving devices 110 to move in.

For example, as shown in FIG. 1A, when carrying a vehicle 130, the self-moving device 110 can determine the center position information of the vehicle 130 by identifying positions to be identified on the vehicle 130 (for example, at least three vehicle legs), move to the center position of the vehicle 130 based on that information, and lift and carry the vehicle 130. Therefore, there is no need to paste markers on the vehicle; the method is applicable to various types of vehicles, achieves accurate positioning, and reduces the cost of preliminary implementation and vehicle modification.

As shown in FIG. 1A, in the process of the self-moving device 110 moving to the bottom center position of the vehicle 130, and/or in the process of the self-moving device 110 carrying the vehicle 130 (or a container), the self-moving device 110 can identify positions to be identified on corresponding vehicles (e.g., vehicles within its vicinity) to obtain its own position information; determine, based on this position information, whether it is located in a lane; when it is located in a lane, determine the boundary lines of the lane; and then determine its control parameters according to its motion information and the boundary lines of the lane, and control the self-moving device 110 to move based on the control parameters. Since the control parameters during movement are determined from the boundary lines of the lane in which the self-moving device is located, the robustness and accuracy of the pose estimation of the self-moving device can be improved, the probability of collision with corresponding vehicles is effectively reduced, damage to the self-moving device is avoided, and the safety and stability of the self-moving device 110 are improved.
FIG. 1B shows a flowchart of a positioning method provided according to some embodiments of the present disclosure. As shown in FIG. 1B, the positioning method includes steps 202 to 208.

Step 202: identify a position to be identified on a vehicle corresponding to a self-moving device, to obtain an identification result.

The identification result includes ranging attribute information between the self-moving device and the position to be identified on the vehicle, and/or position information of the self-moving device.

In some embodiments, the self-moving device may be a device that moves by means of its own drive apparatus and moves and operates according to control instructions issued by a control system. For example, the self-moving device may be a robot that carries vehicles, or a robot that carries containers. The self-moving device may also be an intelligent device that moves adaptively based on the surrounding environment, such as a vehicle transport device, a container transport device, or a vehicle towing device. A collection device is mounted on the self-moving device to collect environment information within its vicinity. The self-moving device moves in the inventory area and can carry vehicles or containers in the inventory area.

In some embodiments, the vehicle is a goods carrier placed in advance in the inventory area; it may be a fixed vehicle or a movable vehicle. For example, the vehicle may be a rack, a pallet, etc. The vehicle includes at least one partition board, the at least one partition board divides the vehicle into at least two layers, at least one storage position is provided on each partition board, and each storage position can accommodate at least one container; the container may be a goods box, a tote, an original case, a pallet, etc.

In some embodiments, the vehicle corresponding to the self-moving device may be the vehicle to be carried by the self-moving device, or a vehicle within the vicinity of the self-moving device.

For example, when the vehicle corresponding to the self-moving device is the vehicle to be carried, the positions to be identified are the vehicle legs on that vehicle, and the identification result obtained in step 202 is the ranging attribute information between the self-moving device and the positions to be identified on the vehicle (e.g., the vehicle legs of the vehicle to be carried). When the vehicle corresponding to the self-moving device is a vehicle within the vicinity of the self-moving device (for example, the vehicles on both sides of the lane in which the self-moving device is located), the positions to be identified are identification points on those vehicles, and the identification result obtained in step 202 is the position information of the self-moving device.

Step 204: determine target information based on the identification result.

The target information includes the center position information of the vehicle and/or a boundary line of the lane in which the self-moving device is located.

In some embodiments, the lane in which the self-moving device is located is a passable passage between vehicles corresponding to the self-moving device, in which the self-moving device can move. When moving in the lane, the self-moving device may run empty (e.g., traveling in the lane without carrying a vehicle) or loaded (e.g., traveling in the lane while carrying a vehicle).

In some embodiments, when the identification result is the ranging attribute information between the self-moving device and the positions to be identified on the vehicle (e.g., the vehicle legs), the target information includes the center position information of the vehicle. When the identification result is the position information of the self-moving device, the target information includes the boundary line of the lane in which the self-moving device is located.

Step 206: determine control parameters of the self-moving device according to the target information.

The control parameters of the self-moving device are physical parameters for controlling the movement of the self-moving device, including at least one of: a target movement speed, a target movement direction, a target steering angle, an output power, a target movement height, and a target movement time of the self-moving device.

In some embodiments, when the target information is the center position information of the vehicle, after the center position information of the vehicle is determined, the control parameters of the self-moving device may be determined according to the position information of the self-moving device and the center position information of the vehicle. These control parameters include the movement information of the self-moving device.

In some embodiments, when the target information is the boundary line of the lane in which the self-moving device is located, after the boundary line is determined, the control parameters of the self-moving device may be determined according to the motion information of the self-moving device and the boundary line of the lane.

Step 208: control the self-moving device to move based on the control parameters.

In some embodiments, when the target information is the center position information of the vehicle, the self-moving device needs to be controlled according to the relationship between the position information of the self-moving device and the center position information of the vehicle.

For example, the self-moving device is located to the left of the center position of the vehicle, at a distance of 0.5 m from the center position; the self-moving device is then controlled to rotate so that its movement direction points to the right, after which it is controlled to move 0.5 m.

In some embodiments, when the target information is the boundary line of the lane in which the self-moving device is located, the self-moving device needs to be controlled according to the positional relationship between the self-moving device and the lane.

For example, taking the case where the self-moving device is located on the left side of the lane with its movement direction pointing left: the self-moving device is controlled to rotate clockwise so that its movement direction points right, and it then moves in that direction; as it moves, when it reaches the center line of the lane, it is controlled to rotate counterclockwise so that its movement direction becomes parallel to the center line.

For example, taking the case where the self-moving device is located on the left side of the lane with its movement direction pointing right: when the self-moving device moves to the center line of the lane along that direction, its movement direction only needs to be adjusted to be parallel to the center line.

It should be noted that controlling the movement of the self-moving device may mean directly determining the target movement direction and, once the device reaches the center line, adjusting the movement direction to be parallel to the center line; or dynamically adjusting the movement direction so that the self-moving device continuously approaches the center line and, upon reaching it, its movement direction has already been adjusted to be parallel to the center line.

For example, based on the control parameters (target steering angle: rotate 3 degrees clockwise; target movement time: move for 0.2 seconds; target movement speed: 1 meter per second), the self-moving device is controlled to rotate 3 degrees clockwise and move at 1 meter per second for 0.2 seconds.

In the positioning method provided by the embodiments of the present disclosure, the position to be identified on the vehicle is identified to determine the center position information of the vehicle and/or the boundary line of the lane in which the self-moving device is located, and the control parameters of the self-moving device are then determined from the center position information and/or the boundary line, so that the self-moving device is controlled to move according to the control parameters. The center position of the vehicle can therefore be accurately located without pasting identification codes, reducing the up-front cost of pasting such codes. Moreover, by controlling the movement of the self-moving device according to the boundary line of the lane in which it is located, the robustness and accuracy of the pose estimation of the self-moving device can be improved, the probability of collision with the vehicles on both sides of the lane is effectively reduced, damage to the self-moving device is avoided, and the safety and stability of the self-moving device are improved.
The following describes, with reference to FIG. 1C to FIG. 5B, how the self-moving device determines the center position information of the vehicle to be carried and how the self-moving device is controlled to move according to the center position information of the vehicle; and, with reference to FIG. 6A to FIG. 6D, how the self-moving device determines the boundary lines of the lane in which it is located and how the self-moving device is controlled to move according to those boundary lines.

FIG. 1C is a flowchart of another positioning method provided by some embodiments of the present disclosure. As shown in FIG. 1C, the method includes steps 302 to 310.

Step 302: control the self-moving device to move to the vehicle identification area corresponding to the vehicle.

In some embodiments, the vehicle can be carried by the self-moving device from an initial position point to a target position point. The vehicle identification area is the area used for identifying the vehicle. Since the center position of the vehicle needs to be identified from the vehicle legs of the vehicle, the vehicle identification area may be an area relating to the vehicle legs of the vehicle.

Since a vehicle is usually rectangular and supported by four vehicle legs, in order to accurately identify the center position of the vehicle, the identification needs to be based on at least three vehicle legs; therefore, in the process of identifying the vehicle in the vehicle identification area, it must be ensured that at least three vehicle legs can be identified from the vehicle identification area.

In some embodiments, controlling the self-moving device to move to the vehicle identification area corresponding to the vehicle includes: controlling the self-moving device to move to a first vehicle identification area corresponding to the vehicle, the first vehicle identification area being located outside the vehicle; or controlling the self-moving device to move to a second vehicle identification area corresponding to the vehicle, the second vehicle identification area being located at the bottom of the vehicle.

For example, when controlling the self-moving device to move to the vehicle identification area corresponding to the vehicle, it may be controlled to move to the outside of the vehicle or to the bottom of the vehicle.

In some embodiments, the setting of the vehicle identification area is related to the number of sensors provided on the self-moving device. At least one sensor may be provided on the self-moving device; the sensor is a ranging sensor, such as a radar sensor or a 3D vision sensor, is arranged on the front of the robot, and has a certain scanning visual range, usually 180° or 270°. The embodiments of the present disclosure do not limit the number of sensors provided on the self-moving device.

In some embodiments, taking a self-moving device provided with one sensor as an example: since such a sensor can usually only perceive objects within a certain range in front of the device and cannot see objects behind it, the self-moving device can detect only one or two vehicle legs at a time, whereas the center position of the vehicle requires information from at least three vehicle legs; the vehicle identification area corresponding to this self-moving device may therefore be determined as the outside of the vehicle.

For example, FIG. 2 is a schematic diagram of the first vehicle identification area provided according to some embodiments of the present disclosure. Referring to FIG. 2, when only one sensor is provided on the self-moving device, the self-moving device is controlled to move to the outside of the vehicle; at this time, the four vehicle legs of the vehicle (vehicle legs L1, L2, L3 and L4) are all within the visual range of the sensor, so that the vehicle can be identified.

In some embodiments, taking the case where at least two sensors are provided on the self-moving device and the self-moving device is a self-moving robot: the vehicle identification area corresponding to the self-moving device may be determined as the second vehicle identification area, i.e., the bottom area of the vehicle; in this case, the self-moving robot needs to be controlled to identify the vehicle legs at the bottom of the vehicle.

For example, FIG. 3 is a schematic diagram of the second vehicle identification area provided according to an embodiment of the present disclosure. Referring to FIG. 3, when two sensors are provided on the self-moving device, the self-moving device can be controlled to move to the bottom of the vehicle; at this time, two vehicle legs (L1 and L2) are within the visual range of the first sensor and the other two vehicle legs (L3 and L4) are within the visual range of the second sensor, so that the vehicle can be identified.

In some embodiments, the robot may first move from outside the vehicle to the bottom of the vehicle and then identify at least three vehicle legs at the bottom of the vehicle to determine the center position of the vehicle; it may also identify at least three vehicle legs directly from outside the vehicle to determine the center position of the vehicle.

For example, taking the case where the robot first moves from outside the vehicle to the bottom of the vehicle and then identifies at least three vehicle legs at the bottom: before entering the bottom of the vehicle, if no two-dimensional code is pasted on the ground, the robot needs to photograph the vehicle legs in order to determine the relative position of the vehicle legs and the robot, so that the robot avoids colliding with the vehicle when entering the bottom of the vehicle. If a two-dimensional code is pasted on the ground, the robot does not need to photograph the vehicle legs and can enter the bottom of the vehicle directly by identifying the code on the ground.

Step 304: identify at least three vehicle legs on the vehicle, to obtain ranging attribute information between the self-moving device and each vehicle leg.

For example, since the vehicle legs of the vehicle are used in the process of determining the center position information of the vehicle, the self-moving device needs to identify the vehicle legs in the vehicle identification area, i.e., at least three vehicle legs corresponding to the vehicle need to be identified. In the process of identifying each vehicle leg, the ranging attribute information between the self-moving device and each vehicle leg can be obtained; in practical applications, the ranging attribute information may include information such as a ranging distance and a ranging angle.

In some embodiments, identifying at least three vehicle legs on the vehicle and the ranging attribute information between the self-moving device and each vehicle leg includes: identifying the vehicle based on at least one ranging sensor and a first identification angle; when at least three vehicle legs are identified, obtaining the ranging attribute information between the self-moving device and each vehicle leg; when fewer than three vehicle legs are identified, obtaining first ranging attribute information between the self-moving device and each vehicle leg at the first identification angle, adjusting the first identification angle to a second identification angle, identifying the vehicle according to the second identification angle, and obtaining second ranging attribute information between the self-moving device and each vehicle leg at the second identification angle; the ranging attribute information includes the first ranging attribute information and the second ranging attribute information.

For example, at least one ranging sensor is provided on the self-moving device, and the vehicle legs of the vehicle and the ranging attribute information can be identified with this ranging sensor. The number of ranging sensors, their visual angles and other information differ in different application scenarios.

In some embodiments, if the self-moving device, in the vehicle identification area, can identify at least three vehicle legs at once through the first identification angle, the ranging attribute information of those at least three vehicle legs can be used directly; if only one or two vehicle legs can be obtained through the first identification angle, then after the first ranging attribute information of each vehicle leg at the first identification angle is obtained, the identification angle can be further adjusted from the first identification angle to a second identification angle, and the vehicle is identified again at the second identification angle, so that the other vehicle legs are identified and the second ranging attribute information of those vehicle legs at the second identification angle is obtained.

Several implementations are used below to explain several different application scenarios.

For example, take the case where one ranging sensor is provided on the self-moving device and the vehicle identification area is the first vehicle identification area. Referring to FIG. 2, when the self-moving device moves into the first vehicle identification area corresponding to the vehicle, the four vehicle legs of the vehicle are all within the visual range of the ranging sensor; the ranging sensor can identify the four vehicle legs simultaneously and also obtain the ranging attribute information of each vehicle leg relative to the self-moving device (ranging attribute information I1 for vehicle leg L1, I2 for vehicle leg L2, I3 for vehicle leg L3, and I4 for vehicle leg L4).

For example, take the case where two ranging sensors are provided on the self-moving device and the vehicle identification area is the second vehicle identification area. Referring to FIG. 3, when the self-moving device moves into the second vehicle identification area of the vehicle, the first ranging sensor can identify vehicle legs L1 and L2, the second ranging sensor can identify vehicle legs L3 and L4, and the ranging attribute information of each vehicle leg relative to the self-moving device is obtained (ranging attribute information I1, I2, I3 and I4).

For example, take the case where one ranging sensor is provided on the self-moving device and the vehicle identification area is the first vehicle identification area. Referring to FIG. 4A, which shows a schematic diagram of identifying vehicle legs provided by an embodiment of the present disclosure: in this implementation, the visual range of the ranging sensor is relatively small and the self-moving device can detect only two vehicle legs. On this basis, the self-moving device can identify the vehicle at the first identification angle, identifying vehicle legs L1 and L2 and obtaining the first ranging attribute information between each vehicle leg and the self-moving device (first ranging attribute information I1 and I2); the angle of the self-moving device is then adjusted, from the first identification angle to a second identification angle. The vehicle is identified at the second identification angle, identifying vehicle legs L3 and L4 and obtaining the second ranging attribute information between each vehicle leg and the self-moving device (second ranging attribute information I3 and I4). In practical applications, when the vehicle identification area is the first vehicle identification area, the adjustment angle of the self-moving device should be less than 90 degrees.

For example, take the case where one ranging sensor is provided on the self-moving device and the vehicle identification area is the second vehicle identification area. FIG. 4B is another schematic diagram of identifying vehicle legs provided according to some embodiments of the present disclosure. As shown in FIG. 4B, the self-moving device moves to the bottom of the vehicle and can identify the vehicle at the first identification angle (0 degrees in FIG. 4B), identifying vehicle legs L1 and L2 and obtaining the first ranging attribute information between the vehicle legs and the self-moving device (first ranging attribute information I1 and I2). At this point, the self-moving device has two processing options: one is to rotate 90 degrees, identify vehicle legs L2 and L3, and obtain the second ranging attribute information between the vehicle legs and the self-moving device; the other is to rotate 180 degrees, identify vehicle legs L3 and L4, and obtain the second ranging attribute information between the vehicle legs and the self-moving device.

It should be noted that when there is an overlapping area between the first identification angle and the second identification angle, the self-moving device will obtain the ranging attribute information between the same vehicle leg and the self-moving device twice. In this case, the average of the twice-acquired ranging attribute information between that vehicle leg and the self-moving device can be calculated and used as the ranging attribute information between that vehicle leg and the self-moving device, for determining the center position information of the vehicle.

For example, referring to FIG. 4B, the self-moving device identifies vehicle legs L1 and L2 at the first identification angle and obtains the first ranging attribute information between the vehicle legs and the self-moving device (first ranging attribute information I1 and I2). The self-moving device then rotates 90 degrees, identifies vehicle legs L2 and L3, and obtains the second ranging attribute information between the vehicle legs and the self-moving device (second ranging attribute information I2 and I3); at this point, the average of first ranging attribute information I2 and second ranging attribute information I2 can be determined and taken as the ranging attribute information between vehicle leg L2 and the self-moving device.
Step 306: determine the center position information of the vehicle according to the ranging attribute information between the self-moving device and each vehicle leg.

For example, after the ranging attribute information between the self-moving device and each vehicle leg is obtained, the center position information of the vehicle can be calculated from the ranging attribute information corresponding to each vehicle leg.

FIG. 5A is a flowchart of a method for determining the center position information of the vehicle provided according to some embodiments of the present disclosure. As shown in FIG. 5A, the method includes steps 3062 to 3066.

Step 3062: calculate the device position coordinates of each vehicle leg according to the ranging attribute information between the self-moving device and each vehicle leg.

The device position coordinates of each vehicle leg are the coordinates of that vehicle leg in the self-moving-device coordinate system. In practical applications, the ranging attribute information of each vehicle leg is obtained by the ranging sensor, and the ranging sensor is mounted on the self-moving device, so there is a corresponding transformation relationship between the sensor coordinate system of the ranging sensor and the device coordinate system of the self-moving device.

On this basis, calculating the device position coordinates of each vehicle leg according to the ranging attribute information between the self-moving device and each vehicle leg includes: determining the sensor position coordinates of each vehicle leg according to the ranging attribute information between the self-moving device and each vehicle leg; and converting the sensor position coordinates of each vehicle leg into the device position coordinates of each vehicle leg.

In some embodiments, the ranging attribute information includes a ranging distance and a ranging angle based on the sensor position coordinates. Determining the sensor position coordinates of each vehicle leg according to the ranging attribute information between the self-moving device and each vehicle leg includes: determining the sensor position coordinates corresponding to each vehicle leg according to the ranging distance and ranging angle between the self-moving device and each vehicle leg.

For example, the ranging attribute information between the self-moving device and each vehicle leg is obtained by the ranging sensor on the self-moving device, i.e., the sensor position coordinates p(x, y) of each vehicle leg in the ranging-sensor coordinate system can be obtained, where the ranging-sensor coordinate system is a coordinate system with the ranging sensor as the origin, p represents a vehicle leg, and x and y represent its horizontal and vertical coordinates in the ranging-sensor coordinate system. The calculation of x and y is given by Formula 1:

x = d·cos(θ)
y = d·sin(θ)     (Formula 1)

where d denotes the ranging distance and θ denotes the ranging angle based on the sensor position coordinates. Based on Formula 1, the sensor position coordinates of each vehicle leg in the ranging-sensor coordinate system can be obtained.

In some embodiments, since the relative position of the sensor and the self-moving device is fixed, the first rotation matrix of the ranging-sensor coordinate system relative to the self-moving-device coordinate system can be calculated from the mounting angle of the sensor on the self-moving device. The self-moving-device coordinate system is a coordinate system with the self-moving device as the origin. Through Formula 2, the sensor position coordinates of a vehicle leg in the ranging-sensor coordinate system can be converted, based on the first rotation matrix, into the device position coordinates of the vehicle leg in the self-moving-device coordinate system. Formula 2 is as follows:

P = R_bs · p     (Formula 2)

where P denotes the device position coordinates of the vehicle leg in the self-moving-device coordinate system, R_bs denotes the first rotation matrix, p denotes the sensor position coordinates of the vehicle leg in the ranging-sensor coordinate system, and bs denotes the ranging-sensor coordinate system.

At this point, the device position coordinates of each vehicle leg in the self-moving-device coordinate system have been obtained.
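The conversion from a ranging measurement to sensor-frame coordinates (Formula 1) and the rotation into the device frame can be sketched as follows. This is an illustrative sketch only: the function names, the mounting angle as the sole parameter of the first rotation matrix, and the optional mounting offset (for a sensor not at the device origin) are assumptions not fixed by the disclosure.

```python
import math

def sensor_coords(d, theta):
    """Formula 1: ranging distance d and ranging angle theta (radians)
    -> Cartesian coordinates (x, y) in the ranging-sensor frame."""
    return d * math.cos(theta), d * math.sin(theta)

def sensor_to_device(point, mount_angle, mount_offset=(0.0, 0.0)):
    """Apply the first rotation matrix, built from the sensor's mounting
    angle on the device, to map a sensor-frame point into the device
    frame. The translation offset is an assumed extension for a sensor
    mounted away from the device center."""
    x, y = point
    c, s = math.cos(mount_angle), math.sin(mount_angle)
    return c * x - s * y + mount_offset[0], s * x + c * y + mount_offset[1]
```

A leg measured at 2 m straight ahead of the sensor maps to (2, 0) in the sensor frame; if the sensor is mounted rotated 90° on the device, that point lands on the device's y-axis.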
Step 3064: convert the device position coordinates of each vehicle leg into the global position coordinates of each vehicle leg.

Since the device position coordinates of each vehicle leg, once obtained, are determined with respect to the self-moving-device coordinate system, the device position coordinates of the vehicle legs also need to be converted into global position coordinates in the global coordinate system. In some embodiments, the angle of the vehicle relative to the self-moving device can be calculated from the side lengths of the rectangle formed by the vehicle legs and the side lengths of the vehicle; then, the angle of the vehicle can be calculated from the direction of the line segments formed by the coordinates of the vehicle legs. After that, the second rotation matrix of the self-moving-device coordinate system relative to the global coordinate system can be determined from the angle of the vehicle relative to the self-moving device. Finally, through Formula 3, the device position coordinates of a vehicle leg in the self-moving-device coordinate system are converted, based on the second rotation matrix, into the global position coordinates of the vehicle leg in the global coordinate system. Formula 3 is as follows:

P′ = R_br · P     (Formula 3)

where P′ denotes the global position coordinates of the vehicle leg in the global coordinate system, R_br denotes the second rotation matrix, P denotes the device position coordinates of the vehicle leg in the self-moving-device coordinate system, and br denotes the global coordinate system.

At this point, the global position coordinates of each vehicle leg in the global coordinate system have been obtained.

Step 3066: determine the center position information of the vehicle according to the global position coordinates of each vehicle leg.

For example, after the global position coordinates of each vehicle leg are determined, the center position information of the vehicle can be calculated from the global position coordinates of the vehicle legs, based on the fact that the vehicle legs are evenly distributed.

Since a vehicle usually includes four vehicle legs, the way of calculating the center position information of the vehicle differs depending on the number of vehicle legs available in step 3066.

In some embodiments, determining the center position information of the vehicle from the global position coordinates of each vehicle leg includes: when the number of vehicle legs is three, determining the diagonally opposite vehicle legs, and determining the center position information of the vehicle according to the global position coordinates of the diagonally opposite vehicle legs; when the number of vehicle legs is four, determining the center position information of the vehicle according to the global position coordinates of the four vehicle legs.

For example, when the number of vehicle legs is three, given that a vehicle has four legs, there must be one diagonally opposite pair among the three legs, such as vehicle legs 1 and 3 in FIG. 4B. Since the vehicle legs are evenly distributed, the midpoint of the line connecting the diagonally opposite legs is the center position of the vehicle, as in Formula 4:

Pload = (Pleg1 + Pleg3) ÷ 2     (Formula 4)

where Pload is the center position information of the vehicle, and Pleg1 and Pleg3 are the global position coordinates of the diagonally opposite vehicle legs.

When the number of vehicle legs is four, the center position information of the vehicle can be calculated from the position coordinates of the four vehicle legs, for example as the average of the position coordinates of the four legs, as in Formula 5:

Pload = ΣPleg ÷ N     (Formula 5)

where Pload is the center position information of the vehicle, Pleg is the position coordinate of a vehicle leg, and N is the number of vehicle legs.
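The center computation above (Formula 4 for three legs, Formula 5 for four legs) can be sketched as follows. The function name is illustrative; finding the diagonal pair as the farthest-apart pair of the three legs is an assumption consistent with a rectangular leg layout, not a rule stated in the disclosure.

```python
def vehicle_center(legs):
    """Vehicle center from leg coordinates (x, y) in the global frame.

    Four legs: mean of the four coordinates (Formula 5).
    Three legs: midpoint of the diagonally opposite pair (Formula 4),
    taken here as the two legs with the largest mutual distance.
    """
    if len(legs) == 4:
        n = len(legs)
        return (sum(x for x, _ in legs) / n, sum(y for _, y in legs) / n)
    if len(legs) == 3:
        pairs = [(legs[i], legs[j]) for i in range(3) for j in range(i + 1, 3)]
        (x1, y1), (x2, y2) = max(
            pairs,
            key=lambda p: (p[0][0] - p[1][0]) ** 2 + (p[0][1] - p[1][1]) ** 2)
        return ((x1 + x2) / 2, (y1 + y2) / 2)
    raise ValueError("at least three vehicle legs are required")
```

For a 2 m × 2 m leg rectangle, three visible legs and four visible legs yield the same center, as the evenly-distributed-legs assumption implies.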
In some embodiments, if the distance between every two vehicle legs on the vehicle is a known value, the self-moving device identifies at least two vehicle legs of the vehicle in the vehicle identification area; the midpoint of the two vehicle legs can then be determined from the distance between them, thereby determining the center position information of the vehicle. That is, when the distance between every two vehicle legs on the vehicle is known, the self-moving device can determine the center position information of the vehicle from the ranging attribute information between itself and at least two vehicle legs; when that distance is unknown, the self-moving device determines the center position information of the vehicle from the ranging attribute information between itself and at least three vehicle legs.

Step 308: determine the movement information of the self-moving device according to the position information of the self-moving device and the center position information of the vehicle.

In some embodiments, the movement information includes a movement distance and a movement angle. Determining the movement information of the self-moving device according to the position information of the self-moving device and the center position information of the vehicle includes:

determining the vehicle angle of the vehicle according to the coordinates of the vehicle legs; obtaining the device angle of the self-moving device; determining an adjustment angle according to the vehicle angle and the device angle; determining a device movement distance according to the position information of the self-moving device and the center position information of the vehicle; and determining the adjustment angle and the device movement distance as the movement information of the self-moving device.

For example, the angle of the vehicle can be calculated from the direction of the line segments formed by the coordinates of the vehicle legs, and the angle of the vehicle relative to the self-moving device can be calculated from the side lengths of the rectangle formed by the vehicle legs and the side lengths of the vehicle. In the global coordinate system, the position information of the vehicle is known, so the device angle of the self-moving device can be determined from the vehicle angle and the angle of the vehicle relative to the self-moving device.

So that the self-moving device can lift the vehicle from directly below it, the adjustment angle can be determined from the vehicle angle and the device angle; the adjustment angle is used to correct the device angle of the self-moving device. The device movement distance that the self-moving device needs to travel is then determined from the position information of the self-moving device and the center position information of the vehicle, and the device movement distance and the adjustment angle are taken as the movement information of the self-moving device.
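The movement-information computation above can be sketched as follows. This is an illustrative sketch: the function name is an assumption, the vehicle angle and device angle are taken as already-known inputs (the disclosure derives them from the leg geometry), and the device movement distance is taken as the straight-line distance to the vehicle center.

```python
import math

def movement_info(device_xy, device_angle, center_xy, vehicle_angle):
    """Movement information for Step 308.

    adjustment angle: difference between the vehicle angle and the
    device angle, used to correct the device's heading;
    device movement distance: straight-line distance from the device
    position to the vehicle's center position.
    """
    adjustment_angle = vehicle_angle - device_angle
    device_movement_distance = math.hypot(center_xy[0] - device_xy[0],
                                          center_xy[1] - device_xy[1])
    return adjustment_angle, device_movement_distance
```

For a device at the origin with heading 0.1 rad and a vehicle center at (3, 4) with vehicle angle 0.4 rad, the device must turn 0.3 rad and travel 5 m.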
Step 310: control the self-moving device to move to the bottom center position of the vehicle based on the movement information.

For example, after the movement information is obtained, the self-moving device can be controlled to move based on the movement information until it moves to the bottom center position of the vehicle.

In some embodiments, controlling the self-moving device to move to the bottom center position of the vehicle based on the movement information includes: controlling the self-moving device to move to the bottom center position of the vehicle according to the adjustment angle and the device movement distance.

For example, since the movement information includes the device movement distance and the adjustment angle, the self-moving device can be controlled, according to the device movement distance and the adjustment angle, to move to the bottom center position of the vehicle (directly below the vehicle). At this point, the operation of identifying the exact center position of the vehicle is complete.

In some embodiments, the above method may further include: controlling the lifting mechanism of the self-moving device to lift the vehicle and transport the vehicle to a destination.

For example, after the self-moving device moves directly below the vehicle, the lifting mechanism on the self-moving device may, according to business requirements, lift the vehicle and move the vehicle to the destination.

The positioning method provided by the embodiments of the present disclosure includes: controlling the self-moving device to move to the vehicle identification area corresponding to the vehicle; identifying at least three vehicle legs corresponding to the vehicle, and the ranging attribute information between the self-moving device and each vehicle leg; determining the center position information of the vehicle according to the ranging attribute information between the self-moving device and each vehicle leg; determining the movement information of the self-moving device according to the position information of the self-moving device and the center position information of the vehicle; and controlling the self-moving device to move to the bottom center position of the vehicle based on the movement information. With this method, the common ranging sensors of a self-moving device are used to identify the vehicle legs and their position information, the center position information of the vehicle is calculated, and the movement information of the self-moving device is determined from the positional relationship between the center position information and the self-moving device, so that the self-moving device is controlled to move to the center position of the vehicle according to the movement information. The method requires no marker-pasting on the vehicle, is applicable to various types of vehicles, achieves accurate positioning, and reduces the cost of preliminary implementation and vehicle modification.
The above positioning method is described below with reference to FIG. 5B, taking the case where the positioning method provided by the present disclosure is applied to an intelligent warehousing scenario, the self-moving device is a rack-transport robot, and the vehicle is a rack. FIG. 5B is a processing flowchart of a positioning method applied to an intelligent warehousing scenario provided by some embodiments of the present disclosure; the method includes the following steps:

Step 402: control the rack-transport robot to move to the rack-bottom identification area corresponding to a movable rack, where two ranging sensors are provided on the rack-transport robot.

Step 404: identify rack legs 1 and 2 of the movable rack based on the first ranging sensor, and identify rack legs 3 and 4 of the movable rack based on the second ranging sensor.

Step 406: obtain ranging attribute information 1 corresponding to rack leg 1, ranging attribute information 2 corresponding to rack leg 2, ranging attribute information 3 corresponding to rack leg 3, and ranging attribute information 4 corresponding to rack leg 4.

Step 408: calculate the global position coordinates P1, P2, P3 and P4 of each rack leg from ranging attribute information 1, ranging attribute information 2, ranging attribute information 3 and ranging attribute information 4.

Step 410: determine the center position coordinate P0 of the movable rack from the global position coordinates P1, P2, P3 and P4 of each rack leg.

Step 412: control the rack-transport robot to move to P0 of the movable rack, lift the movable rack with the lifting mechanism, and carry the movable rack to a workstation.

The above positioning method applied to an intelligent warehousing scenario uses the common ranging sensors of a self-moving device to identify the vehicle legs and their position information, calculates the center position information of the vehicle, and determines the movement information of the self-moving device from the positional relationship between the center position information and the self-moving device, so that the self-moving device is controlled to move to the center position of the vehicle according to the movement information. The method requires no marker-pasting on the vehicle, is applicable to various types of vehicles, achieves accurate positioning, and reduces the cost of preliminary implementation and vehicle modification.
FIG. 6A is a flowchart of another positioning method provided by some embodiments of the present disclosure. As shown in FIG. 6A, the method includes steps 502 to 510.

Step 502: obtain the position information of the identification points on the vehicles corresponding to the self-moving device.

For example, the self-moving device may travel in the lane carrying a vehicle or carrying a container; the embodiments of the present disclosure do not limit this.

In some embodiments, a vehicle has at least one identification point, which is a position identification point used to determine the position information of the vehicle; for example, the wheels of the vehicle, the support legs (vehicle legs) of the vehicle, the cross beams of the vehicle, and the boundary points of the vehicle. It should be noted that position information is quantifiable position data; the position information of an identification point may be the distance between the identification point and the self-moving device, determined for example by radar, infrared, or a depth map. It may also be the coordinate position of the identification point in a coordinate system, where the coordinate system may be a global coordinate system (established for the inventory area) or a local coordinate system (established with the self-moving device as the origin). For example, point cloud data information and/or image data information are collected by a point-cloud collection device and/or an image collection device; after the coordinate system is established, the identification points are projected into it and the position information of each identification point is determined. The coordinate system may be two-dimensional or three-dimensional.

The position information of the identification points on the vehicles corresponding to the self-moving device may be pre-stored identification-point position information. For example, a global coordinate system is established, and the coordinates of the identification points of the vehicles in the inventory area are determined in advance and stored, to be obtained directly. It may also be determined and obtained based on environment information after the self-moving device collects that environment information. For example, the self-moving device obtains point cloud data information and, based on the point cloud data information, determines and obtains the position information of the identification points on the vehicles corresponding to the self-moving device; this is not limited here.

For example, the self-moving device is a self-moving robot in the inventory area, the vehicles are racks in the inventory area, and a local coordinate system is established with the center point of the self-moving device as the origin. From the point cloud data information and the image data information, the coordinates of the 16 rack legs of the 4 racks within the vicinity of the self-moving robot (a radius of 10 meters) are determined in the coordinate system: P1(X1, Y1, Z1), P2(X2, Y2, Z2) ... P16(X16, Y16, Z16).

In some embodiments, the vehicles corresponding to the self-moving device include the vehicles on both sides of the lane in which the self-moving device is located. Obtaining the position information of the identification points on the vehicles corresponding to the self-moving device provides an information reference for subsequently determining whether the self-moving device is located in a lane and determining the boundary lines of the lane.

In some embodiments, obtaining the position information of the identification points on the vehicles corresponding to the self-moving device includes: obtaining initial environment information collected by the self-moving device; and determining, based on the initial environment information, the position information of the identification points on the vehicles corresponding to the self-moving device.

The initial environment information is environment data information within the vicinity of the self-moving device collected by the self-moving device, for example point cloud data information, image data information, and illumination data information.

In some embodiments, based on the initial environment information, it is first determined whether the identification points on the vehicles corresponding to the self-moving device, and their position information, have been obtained; after the identification points and their position information are obtained, the position information of the identification points on the vehicles corresponding to the self-moving device is determined.

For example, when the initial environment information includes multiple kinds of information (e.g., at least two of point cloud data information, image data information, and illumination data information), the identification points and their position information can be confirmed separately from each kind and then compared, thereby ensuring the accuracy of the determined identification points and their position information.

It should be noted that it is not certain whether the initial environment information contains identification points on the vehicles corresponding to the self-moving device; whether the position information of identification points has been obtained must be determined on the basis of the acquired initial environment information. For example, suppose a rack (vehicle) with rack legs (vehicle legs) is currently located 20 meters from the self-moving device, and the effective range of the collection device on the self-moving device is a circle with a radius of 10 meters; the collected initial environment information then does not contain the vehicle legs, and the vehicle legs and their position information cannot be obtained. As the self-moving device moves and comes within 10 meters of the vehicle, the collected initial environment information contains the vehicle legs, and the vehicle legs and their position information can be obtained. That is, obtaining the initial environment information is a dynamic process, and determining whether the position information of identification points has been obtained on the basis of the initial environment information is also a dynamic process.

For example, the point cloud data information and image data information of the vicinity (a radius of 10 meters) collected by the self-moving device (a self-moving robot) are obtained. Based on the point cloud data information, 13 first vehicle legs on the 4 vehicles and the coordinates of those 13 first vehicle legs are determined; based on the image data information, 14 second vehicle legs on the 4 vehicles and the coordinates of those 14 second vehicle legs are determined; the union of the two is taken, yielding 16 vehicle legs and their coordinates P1(X1, Y1, Z1), P2(X2, Y2, Z2) ... P16(X16, Y16, Z16).

In this way, determining whether the position information of identification points has been obtained based on the initial environment information collected by the self-moving device ensures the accuracy of the determined position information of the identification points, and thus the accuracy of the subsequently determined boundary lines.

In some embodiments, the initial environment information is collected by the self-moving device at a preset sampling frequency.

For example, the preset sampling frequency is the preset sampling frequency of the collection device on the self-moving device; e.g., the point-cloud collection device collects the point cloud data information of the vicinity once every 2 seconds.

While the self-moving device is moving in the lane, the initial environment information is collected at the preset sampling frequency and it is continuously determined whether the position information of identification points has been obtained, so that the positioning method provided by the embodiments of the present disclosure is run in a loop, achieving dynamic adjustment of the self-moving device and improving the robustness and accuracy of its pose estimation.

In some embodiments, when the initial environment information includes point cloud data information, determining the position information of the identification points on the vehicles corresponding to the self-moving device based on the initial environment information includes:

clustering the point cloud data information to obtain cluster sets; and determining, based on the cluster sets, the position information of the identification points on the vehicles corresponding to the self-moving device.

Clustering is a way of partitioning the points in the point cloud data information into clusters to obtain cluster sets of at least one category. A cluster set is a set of point clusters of at least one category; any cluster set contains at least one point of the same category. The point cloud data information is a set of point data on the vehicle surfaces within the vicinity, including but not limited to laser point cloud data information, radar point cloud data information, and lidar point cloud data information. The point cloud data information contains the position information and intensity information (color, laser reflection intensity, etc.) of each point. For example, the point cloud data information includes 1000 points, and the 1000 points have corresponding position information and laser reflection intensities.

It should be noted that it is not certain whether the point cloud data information contains identification points on the vehicles corresponding to the self-moving device; whether the position information of identification points has been obtained must be determined on the basis of the acquired point cloud data information. For example, suppose a vehicle with vehicle legs is currently located 20 meters from the self-moving device, and the effective range of the collection device on the self-moving device is a circle with a radius of 10 meters; the collected point cloud data information then does not contain the vehicle legs, and the vehicle legs and their position information cannot be obtained. As the self-moving device moves and comes within 10 meters of the vehicle, the collected point cloud data information contains the vehicle legs, and the vehicle legs and their position information can be obtained. That is, obtaining the point cloud data information is a dynamic process, and determining the position information of identification points on the basis of the point cloud data information is also a dynamic process.

In some embodiments, the position information of the identification points may be determined by clustering the point cloud data information based on the position information and/or intensity information of each point in the point cloud data information; or by screening the point cloud data information using vehicle size information and vehicle distribution information, based on the position information and/or intensity information of each point; or by first clustering the point cloud data information based on the position information and/or intensity information of each point and then screening the clustering result using the vehicle size information and vehicle distribution information. The embodiments of the present disclosure do not limit the way in which the position information of the identification points is determined. The following embodiments are described taking as an example the approach of clustering the point cloud data information and then screening the clustering result using the vehicle size information and vehicle distribution information.

For example, based on the position information and laser reflection intensities of the 1000 points in the point cloud data information, the 1000 points are clustered to obtain 16 cluster points; the 16 cluster points are determined to be vehicle legs, yielding the coordinates of 16 vehicle legs: P1(X1, Y1, Z1), P2(X2, Y2, Z2) ... P16(X16, Y16, Z16).

In some embodiments, the point cloud data information is clustered based on the position information and/or intensity information of each point in the point cloud data, to obtain the cluster sets.

For example, points whose pairwise distance is within a preset distance threshold are determined to be points of the same category, yielding a corresponding cluster set; as another example, two points of the same color are determined to be points of the same category, yielding a corresponding cluster set; as yet another example, two points whose laser reflection intensities fall within the same preset interval are determined to be points of the same category, yielding a corresponding cluster set; the above approaches may also be combined.
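The first clustering criterion above (points within a preset distance threshold belong to the same category) can be sketched as follows. This is an illustrative sketch in 2-D: the function name and the breadth-first traversal over the implied neighbor graph are implementation assumptions; the disclosure only specifies the distance criterion itself.

```python
from collections import deque

def cluster_points(points, threshold):
    """Single-link clustering: any two points whose distance is within
    the preset threshold end up in the same cluster set."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, members = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            # materialize neighbors first, then remove them from the pool
            near = [j for j in unvisited
                    if (points[i][0] - points[j][0]) ** 2
                       + (points[i][1] - points[j][1]) ** 2 <= threshold ** 2]
            for j in near:
                unvisited.remove(j)
                queue.append(j)
                members.append(j)
        clusters.append([points[i] for i in members])
    return clusters
```

Two tight pairs of points far from each other yield two cluster sets of two points each; the centroid of each cluster set would then serve as a candidate identification point.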
在一些实施例中,基于聚类簇集合,确定是否获取到自移动设备对应载具上的识别点,基于识别点对应的聚类簇集合中关键点的位置信息,确定识别点的位置信息。例如,关键点可以为聚类簇集合中的中心点,也可以为聚类簇集合中的质心点,还可以为聚类簇集合中的最外圈点,在此不作限定。
示例性地,基于点云数据中各点的坐标和颜色,对点云数据信息进行聚类,得到16个聚类簇集合,确定16个聚类簇集合为载具腿,基于各载具腿对应的聚类簇集合中质心点的坐标,确定16个载具腿的坐标:P1(X1,Y1,Z1),P2(X2,Y2,Z2)……P16(X16,Y16,Z16)。
可以理解是,利用高精准度的点云数据信息,提升了确定的识别点的位置信息的精确度;而通过对点云数据信息聚类的方式,得到聚类簇集合,并基于聚类簇集合,确定是否获取到识别点和识别点的位置信息,能够提升确定的识别点的精确度,提升了确定的识别点的位置信息的精确度。
由于点云数据信息中存在噪声点或者部分点的位置信息、强度信息存在误差,导致聚类得到 的聚类簇集合会出现误差,因此,为了提升确定的识别点和识别点的位置信息的精确度,需要结合载具本身的特点对其进行筛选和识别处理。
在一些实施例中,基于聚类簇集合,确定自移动设备对应的载具上的识别点的位置信息,包括:
根据预设的载具尺寸信息,对聚类簇集合进行筛选,获得目标聚类簇集合;根据预设的载具分布信息,对目标聚类簇集合进行识别,确定自移动设备对应的载具上的识别点的位置信息。
在一些实施例中,载具尺寸信息为载具上识别点的空间尺寸信息,目标聚类簇集合为满足载具上识别点的空间尺寸信息的聚类簇集合。以载具的载具腿尺寸为半径5厘米为例;若聚类簇集合的平均半径为20厘米,则该聚类簇集合不为目标聚类簇集合。载具分布信息为载具上识别点的空间分布信息。例如,若载具为一个200厘米×150厘米×90厘米的载具,两个目标聚类簇集合中关键点之间的坐标差值向量为(10厘米,0厘米,0厘米),则确定这两个目标聚类簇集合为相邻两个载具的载具腿。又例如,若两个目标聚类簇集合中关键点之间的坐标差值向量为(60厘米,40厘米,20厘米),则确定这两个目标聚类簇集合对应的载具腿不为相邻两个载具的载具腿,也不为同一载具上的两个载具腿。
示例性地,根据预设的载具尺寸信息(载具腿半径5厘米),对20个聚类簇集合进行筛选,获得16个目标聚类簇集合,根据预设的载具分布信息(载具腿之间的距离为180厘米或者130厘米),对16个目标聚类簇集合进行识别,确定自移动设备对应载具上的16个载具腿和16个载具腿的坐标:P1(X1,Y1,Z1),P2(X2,Y2,Z2)……P16(X16,Y16,Z16)。
可以理解的是,根据预设的载具分布信息,对目标聚类簇集合进行识别,确定自移动设备对应载具上的识别点和识别点的位置信息。提升了确定的识别点的精确度,进而提升了确定的识别点的位置信息的精确度。
在一些实施例中,在初始环境识别信息包括图像数据信息的情况下,基于初始环境信息,确定自移动设备对应的载具上的识别点的位置信息,包括:基于图像环境信息,确定自移动设备对应的载具上的识别点的位置信息。
图像数据信息为自移动设备邻近范围内的载具的表面的视觉图像数据。例如,照片、视频等。其中,图像数据信息含有各点的位置信息和颜色。例如,图像数据信息中包括1000个点,1000个点具有对应的位置信息和颜色。
示例性地,图像数据信息可以包括预先装载在载具上的视觉标识。如此,可以通过识别视觉标识,确定视觉标识的位置信息;之后,基于视觉标识的位置信息,确定识别点的位置信息。示例性地,视觉标识可以是反光板和吸光板。例如,利用预先装载在载具腿上的反光板,从图像数据信息中识别反光板和反光板的位置信息,基于反光板和反光板的位置信息,确定16个载具腿和16个载具腿的坐标:P1(X1,Y1,Z1),P2(X2,Y2,Z2)……P16(X16,Y16,Z16)。
示例性地,可以利用神经网络模型,对自移动设备采集的图像数据信息进行识别点的识别,从而得到自移动设备对应载具上的识别点的位置信息。
本公开实施例对基于图像数据信息确定自移动设备对应的载具上的识别点的位置信息的实现方式不作限定。
需要说明的是,图像数据信息不确定是否包含自移动设备对应载具上识别点,需要在获取到图像数据信息的基础上,确定是否获取到识别点的位置信息。例如,当前,距离自移动设备20米位置设置有一个载具,载具上具有载具腿,自移动设备上采集设备的有效范围为半径为10的圆,则采集到的图像数据信息中不包含载具腿,无法获取到载具腿和载具腿的位置信息。随着自移动设备的移动,自移动设备距离该载具10米,采集到的图像数据信息包含载具腿,可以获取到载具腿和载具腿的位置信息。即获取图像数据信息是一个动态的过程,基于图像数据信息确定是否获取到识别点的位置信息也是一个动态的过程。
可以理解是,从图像数据信息中,确定是否获取到自移动设备对应载具上的识别点和识别点的位置信息,利用高精准度的图像数据信息,提升了确定的识别点的位置信息的精确度。
步骤504、根据识别点的位置信息确定自移动设备的位置信息。
其中,自移动设备的位置信息为自移动设备的当前位置的数据信息,包括但不限于:自移动设备是否位于巷道中,自移动设备与载具之间的距离,自移动设备与识别点之间的距离,自移动设备在坐标系中的坐标位置。其中,坐标系可以为全局坐标系(对库存区域建立坐标系),也可以为局部坐标系(以载具为原点建立坐标系),坐标系可以为二维坐标系,也可以为三维坐标系。本公开实施例对此不作限定。
在一些实施例中,根据识别点的位置信息确定自移动设备的位置信息,包括:基于识别点的位置信息,在自移动设备的运动方向两侧有识别点的情况下,确定自移动设备位于巷道。
在一些实施例中,在获取到自移动设备对应载具上识别点的位置信息的情况下,根据载具上识别点的位置信息确定自移动设备的位置信息;即基于自移动设备的位置信息和识别点的位置信息,判断自移动设备的运动方向两侧是否有识别点。
示例性地,基于自移动设备的坐标(X0,Y0,Z0)和16个识别点的位置信息的坐标P1(X1,Y1,Z1),P2(X2,Y2,Z2)……P16(X16,Y16,Z16),判断自移动设备的运动方 向两侧是否有识别点;若自移动设备的运动方向的左侧有识别点P1(X1,Y1,Z1),P2(X2,Y2,Z2),P3(X3,Y3,Z3)和P4(X4,Y4,Z4);自移动设备的运动方向的右侧有识别点P11(X11,Y11,Z11),P12(X12,Y12,Z12),P13(X13,Y13,Z13)和P14(X14,Y14,Z14);则确定自移动设备位于巷道;若否,终止获取识别点的位置信息。
506、在根据自移动设备的位置信息确定自移动设备位于巷道的情况下,基于识别点的位置信息确定巷道的边界线。
其中,巷道的边界线为巷道边界的拟合直线,用于确定自移动设备的移动方向。在自移动设备的设备外壳越过边界线的情况下,认定自移动设备与载具发生碰撞。
在一些实施例中,巷道的边界线为巷道边界的拟合直线,通过巷道的边界线能够确定自移动设备的移动情况;例如,在自移动设备的设备外壳越过边界线的情况下,认定自移动设备与载具发生碰撞。
在一些实施例中,先基于识别点的位置信息和自移动设备的位置信息,确定自移动设备是否位于巷道中;若是,则基于识别点的位置信息确定巷道的边界线。例如,基于识别点的位置信息和自移动设备的位置信息,确定自移动设备两侧一定距离内都有识别点,则确定自移动设备位于巷道内;之后,基于识别点的位置信息确定巷道的边界线。又例如,基于识别点的位置信息和自移动设备的运动方向,确定自移动设备的运动方向两侧都有识别点,则确定自移动设备位于巷道内;之后,基于识别点的位置信息确定巷道的边界线。
示例性地,基于自移动设备的坐标(X0,Y0,Z0)和16个识别点的位置信息的坐标P1(X1,Y1,Z1),P2(X2,Y2,Z2)……P16(X16,Y16,Z16),判断自移动设备的运动方向两侧是否有识别点;若自移动设备的运动方向的左右两侧均有识别点;则确定自移动设备位于巷道。之后,基于位于自移动设备的运动方向的两侧的识别点的位置信息确定巷道的边界线。
可以理解的是,在确定自移动设备位于巷道的情况下,基于识别点的位置信息确定巷道的边界线,为后续确定控制参数奠定了基础。
In some embodiments, determining the boundary lines of the lane based on the position information of the recognition points includes: fitting straight lines to the recognition points based on their position information to obtain a first boundary line and a second boundary line of the lane.
In some embodiments, the first boundary line may be the left boundary line of the lane and the second boundary line the right boundary line. The embodiments of the present disclosure do not limit which of the two is the left boundary line and which the right. In the following embodiments, the first boundary line is taken as the left boundary line and the second boundary line as the right boundary line for illustration.
Exemplarily, both the left and right boundary lines are defined relative to the motion direction of the self-moving device: the left boundary line lies on the left of the motion direction and the right boundary line on the right. The motion direction of the self-moving device is the direction its head currently points in and can be represented by a direction vector in the coordinate system.
In some embodiments, the straight-line fit yields at least one line determined by connecting recognition points. For example, the recognition points on each side of the self-moving device are connected in sequence into multiple line segments, and these segments are taken as the left and right boundary lines of the lane. As another example, the recognition points on each side are connected in sequence into multiple segments, and the segments on the two sides of the self-moving device with the largest included angle are taken as the left and right boundary lines of the lane.
Exemplarily, based on the coordinates (X0, Y0, Z0) of the self-moving device and the coordinates of 16 carrier legs P1(X1, Y1, Z1), P2(X2, Y2, Z2) ... P16(X16, Y16, Z16), it is judged whether there are carrier legs on both sides of the motion direction. If so, the self-moving device is determined to be in a lane, and straight lines are fitted, based on the carrier-leg coordinates, to the carrier legs on the two sides of the motion direction, yielding the left and right boundary lines of the lane.
It can be understood that judging from the recognition-point positions whether there are recognition points on both sides of the motion direction, determining that the self-moving device is in a lane if so, and fitting straight lines to the recognition points to obtain the left and right boundary lines improves the accuracy of the in-lane judgment and, through the line fitting, the accuracy of the determined left and right boundary lines.
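One way to realize the straight-line fit described above is an ordinary least-squares fit over each side's recognition points. The sketch below fits x as a function of y, assuming the lane runs roughly along the y axis; this choice and the function name are illustrative assumptions, not the disclosed method:

```python
def fit_boundary_line(leg_points):
    """Least-squares fit of the line x = a*y + b through one side's legs.

    Fitting x as a function of y keeps the fit well-conditioned when the
    lane (and hence the boundary line) is nearly parallel to the y axis.
    """
    n = len(leg_points)
    sy = sum(y for _, y in leg_points)
    sx = sum(x for x, _ in leg_points)
    syy = sum(y * y for _, y in leg_points)
    sxy = sum(x * y for x, y in leg_points)
    a = (n * sxy - sx * sy) / (n * syy - sy * sy)  # slope dx/dy
    b = (sx - a * sy) / n                          # x-intercept at y = 0
    return a, b  # boundary line: x = a*y + b
```

Running the fit once on the left-side legs and once on the right-side legs yields the first and second boundary lines; three perfectly aligned legs at x = 1 give a = 0, b = 1.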
Step 508: determine the control parameters of the self-moving device according to its motion information and the boundary lines of the lane.
The motion information of the self-moving device describes its current motion state, including but not limited to: its position information, motion direction, motion speed, steering angle and motion height.
In some embodiments, determining the control parameters of the self-moving device according to its motion information and the lane boundary lines includes: determining the control parameters according to the deviation information between the motion information and the boundary lines, where the deviation information includes at least one of a deviation angle, a deviation distance and a deviation height.
Exemplarily, given deviation information between the motion information of the self-moving device and the boundary lines of deviation angle +1.3 degrees and deviation distance 0.06 meters to the left, the control parameters of the self-moving device may be determined as: target steering angle, rotate 3 degrees clockwise; target motion time, move for 0.2 seconds; target motion speed, 1 meter per second.
It can be understood that determining the control parameters of the self-moving device from its motion information and the boundary lines improves the robustness and accuracy of its pose estimation.
In some embodiments, determining the control parameters of the self-moving device according to its motion information and the lane boundary lines includes: determining the control angle of the self-moving device according to the angle between its motion direction and a lane boundary line; and/or determining the control distance of the self-moving device according to the distance between its position information and a lane boundary line.
Exemplarily, the position information of the self-moving device is its current position, for example a point in the coordinate system. It can be represented by the position of a key point on the self-moving device, such as its center of mass, its geometric center, or the apex of its head. The control angle is the control parameter governing the motion angle of the self-moving device, and the control distance is the control parameter governing its motion distance.
In some embodiments, determining the control angle of the self-moving device according to the angle between its motion direction and a lane boundary line includes: determining the control angle according to the deviation angle between the motion direction and the lane boundary line.
The deviation angle is the included angle between the motion direction and the lane boundary line.
Exemplarily, from the deviation angle of 3 degrees between the motion direction of the self-moving device (a self-moving robot, at -1.8 degrees) and the lane boundary line (at +1.2 degrees), the control angle of the robot is determined as: rotate 3 degrees clockwise.
In some embodiments, determining the control distance of the self-moving device according to the distance between its position information and a lane boundary line includes: determining the control distance according to the perpendicular distance from the position of the self-moving device to the boundary line.
Exemplarily, from the perpendicular distance of 0.2·sin 3° meters between the coordinates (0, 0) of the self-moving device (a self-moving robot) and the lane boundary line, the control distance of the robot is determined as 0.2 meters.
It can be understood that determining the control angle from the angle between the motion direction and a lane boundary line, and/or the control distance from the distance between the device position and a lane boundary line, improves the accuracy of the determined control angle and/or control distance, improves the robustness and accuracy of the pose estimation of the self-moving device, and in turn improves the accuracy of its subsequent movement control.
In some embodiments, determining the control angle of the self-moving device according to the angle between its motion direction and the lane boundary lines includes: obtaining a first angle between the motion direction and the first boundary line of the lane and a second angle between the motion direction and the second boundary line; determining a reference angle from the first angle and the second angle; and determining the difference between the reference angle and a theoretical angle as the control angle of the self-moving device.
The theoretical angle is the angle between the motion direction and the center line of the lane.
Exemplarily, the center line of the lane is the straight line that perpendicularly bisects a segment joining a point on the first boundary line and a point on the second boundary line. The reference angle characterizes the offset angle between the motion direction of the self-moving device and the lane boundary lines.
In some embodiments, the reference angle may be obtained by computing the average of the first and second angles, or by computing their weighted average; the embodiments of the present disclosure do not limit this.
Exemplarily, a first angle (3.8 degrees) between the motion direction of the self-moving device (a self-moving robot, at -1.8 degrees) and the first boundary line (at +2 degrees) is obtained, together with a second angle (3.2 degrees) between the motion direction (-1.8 degrees) and the second boundary line (at -5 degrees). The average of the first angle (3.8 degrees) and the second angle (3.2 degrees) is computed, giving a reference angle of 3.5 degrees, and the difference between the reference angle (3.5 degrees) and the theoretical angle (0.5 degrees) is determined as the control angle of the robot: 3 degrees. Here the theoretical angle is the angle between the motion direction and the center line of the lane.
It can be understood that obtaining the first angle between the motion direction and the first boundary line and the second angle between the motion direction and the second boundary line, determining the reference angle from them, and taking the difference between the reference angle and the theoretical angle (the angle between the motion direction and the lane center line) as the control angle improves the accuracy of the determined control angle, improves the robustness and accuracy of the pose estimation of the self-moving device, and in turn improves the accuracy of its subsequent movement control.
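The angle arithmetic of this example can be sketched as follows. The center-line angle of -1.3 degrees used in the usage check is an assumption chosen so that the theoretical angle comes out to the 0.5 degrees stated in the example; it is not given in the disclosure:

```python
def control_angle(heading_deg, first_line_deg, second_line_deg, center_line_deg):
    """Reference angle = mean of the two boundary-line deviations;
    control angle = reference angle minus the theoretical angle
    (heading vs. lane center line)."""
    first = abs(heading_deg - first_line_deg)     # e.g. |-1.8 - 2.0|  = 3.8
    second = abs(heading_deg - second_line_deg)   # e.g. |-1.8 - (-5)| = 3.2
    reference = (first + second) / 2.0            # e.g. 3.5
    theoretical = abs(heading_deg - center_line_deg)
    return reference - theoretical                # e.g. 3.5 - 0.5 = 3.0
```

With the example's values, `control_angle(-1.8, 2.0, -5.0, -1.3)` reproduces the stated control angle of 3 degrees.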
In some embodiments, determining the control distance of the self-moving device according to the distance between its position information and the lane boundary lines includes:
obtaining a first distance between the position information of the self-moving device and the first boundary line and a second distance between its position information and the second boundary line; when the first distance is smaller than the second distance, determining the difference between the first distance and a preset theoretical width as the control distance of the self-moving device; and when the first distance is larger than the second distance, determining the difference between the second distance and the theoretical width as the control distance of the self-moving device.
Exemplarily, the first distance is the perpendicular distance between the position of the self-moving device and the first boundary line, and the second distance the perpendicular distance to the second boundary line. The theoretical width is a preset width of the lane; it may be set by measuring the lane in advance, by measuring the dimensions of the self-moving device in advance, or by combining the two, which is not limited here. In the embodiments of the present disclosure, the theoretical width is half the width of the lane.
In some embodiments, taking the first boundary line as the left boundary line and the second as the right for illustration: when the first distance is smaller than the second, the self-moving device lies in the left part of the lane, and the difference between the first distance and the preset theoretical width is determined as the control distance; when the first distance is larger than the second, the self-moving device lies in the right part of the lane, and the difference between the second distance and the preset theoretical width is determined as the control distance.
Exemplarily, a first distance (1.8 meters) between the coordinates (0, 0) of the self-moving device (a self-moving robot) and the first boundary line is obtained, together with a second distance (2.2 meters) between the coordinates (0, 0) and the right boundary line. Since the first distance is smaller than the second, the difference between the first distance (1.8 meters) and the preset theoretical width (2 meters) is determined as the control distance of the robot: 0.2 meters.
It can be understood that obtaining the first and second distances and determining the control distance as the difference between the smaller of the two and the theoretical width improves the accuracy of the determined control distance, improves the robustness and accuracy of the pose estimation of the self-moving device, and in turn improves the accuracy of its subsequent movement control.
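The distance rule above can be sketched in a few lines; the returned side label is an illustrative addition, not part of the disclosed method:

```python
def control_distance(first_distance, second_distance, theoretical_width):
    """Return (offset, side): the difference between the smaller boundary
    distance and the preset theoretical width (half the lane width in the
    embodiment), plus which boundary the device has drifted toward."""
    if first_distance < second_distance:
        return abs(first_distance - theoretical_width), "first"
    return abs(second_distance - theoretical_width), "second"
```

With the example's values (1.8 m, 2.2 m, theoretical width 2 m), the device is nearer the first boundary and the control distance is 0.2 meters.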
Step 510: based on the control parameters, control the drive assembly in the self-moving device so that the self-moving device moves.
The drive assembly is the mechanical assembly in the self-moving device that propels it, for example steering wheels, tracks, mechanical legs or propellers.
In some embodiments, controlling the drive assembly in the self-moving device based on the control parameters includes: generating a motion control instruction based on the control parameters, sending the motion control instruction to the drive assembly in the self-moving device, and controlling the drive assembly to move according to the control parameters.
Exemplarily, a motion control instruction is generated based on the control parameters (target steering angle: rotate 3 degrees clockwise; target motion time: move for 0.2 seconds; target motion speed: 1 meter per second) and sent to the left and right wheels of the self-moving device (a self-moving robot); the left wheel is controlled to turn at 300 revolutions per minute for 0.2 seconds and the right wheel at 100 revolutions per minute for 0.2 seconds, so that the robot rotates 3 degrees clockwise and moves at 1 meter per second for 0.2 seconds.
It can be understood that controlling the drive assembly in the self-moving device based on the control parameters, so that the self-moving device moves, achieves accurate control of the movement, reduces the probability of collision with the corresponding carrier, avoids damage to the self-moving device, and improves its safety and stability.
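The wheel-level control of the example (left wheel faster than the right so the robot yaws clockwise while advancing) can be sketched for a differential drive. The kinematic model and all dimensions below are assumptions for illustration, not values from the disclosure:

```python
import math

def differential_wheel_speeds(speed, turn_deg, duration,
                              wheel_radius, track_width):
    """Turn the control parameters into left/right wheel speeds (rev/min)
    for a differential drive. speed is in m/s, duration in seconds;
    wheel_radius and track_width are in meters and purely illustrative.
    A positive turn_deg is a clockwise yaw, so the left wheel runs faster."""
    yaw_rate = math.radians(turn_deg) / duration      # rad/s
    v_left = speed + yaw_rate * track_width / 2.0     # m/s at the left wheel
    v_right = speed - yaw_rate * track_width / 2.0    # m/s at the right wheel
    to_rpm = 60.0 / (2.0 * math.pi * wheel_radius)
    return v_left * to_rpm, v_right * to_rpm
```

For the example's parameters (1 m/s, 3 degrees clockwise over 0.2 s) and assumed dimensions, the left wheel speed comes out higher than the right, matching the qualitative behavior described above.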
Fig. 6B is a positioning schematic of the positioning method provided by an embodiment of the present disclosure. As shown in Fig. 6B:
Ideally, the self-moving device moves along the center line of the lane; the first boundary line (left) and second boundary line (right), determined by the self-moving device from the positions of the recognition points on its two sides, should be parallel to its motion direction, and the first distance between the self-moving device and the first boundary line should equal the second distance between the self-moving device and the second boundary line.
In practice, the first and second boundary lines are not parallel but form a certain angle; the motion direction of the self-moving device forms a first angle with the first boundary line and a second angle with the second boundary line. After the reference angle is determined, the actual motion angle of the self-moving device is the difference between the theoretical angle and the reference angle, where the theoretical angle is the angle between the motion direction of the self-moving device and the lane center line. The lane has a theoretical width; by computing the first distance between the position (center point) of the self-moving device and the first boundary line and the second distance between that position and the second boundary line, the left-right position deviation of the device is obtained (the difference between the first distance and half the theoretical distance, or between the second distance and half the theoretical distance), and the self-moving robot is controlled to move accordingly.
Fig. 6C is a positioning flowchart of the positioning method provided by an embodiment of the present disclosure. As shown in Fig. 6C:
First, it is detected whether there are recognition points on both sides of the motion direction of the self-moving device, and whether the device is located in a lane is determined from the positions of those points. If not, the procedure ends directly; if so, straight lines are first fitted to the recognition points on the two sides to obtain the boundary lines of the lane in which the device is located. Then the deviation angle of the device is computed from the average of its angles to the lane boundary lines; after that, the distance from the device to the boundary line on one side is computed to determine whether it has deviated from the lane center line; finally the self-moving device is controlled to move, its current pose is corrected, and the procedure ends.
Fig. 6D shows a positioning method, according to an embodiment of the present disclosure, for a self-moving device that is a warehousing robot operating in a carrier area; the method is described taking a self-moving robot in a carrier area as an example and includes steps 602 to 630:
Step 602: obtain point cloud data and image data collected by the self-moving robot.
Step 604: cluster the point cloud data to obtain a set of clusters.
Step 606: filter the set of clusters according to preset carrier size information to obtain a set of target clusters.
Step 608: recognize the set of target clusters according to preset carrier distribution information, and determine that first carrier legs on carriers adjacent to the self-moving robot, and their coordinates, have been obtained.
Step 610: based on the image data, determine that second carrier legs on carriers adjacent to the self-moving robot, and their coordinates, have been obtained.
Step 612: determine the intersection of the first carrier legs and the second carrier legs as the carrier legs, obtaining the carrier-leg coordinates.
Step 614: based on the carrier-leg coordinates, judge whether there are carrier legs on both sides of the motion direction of the self-moving robot.
Step 616: if so, determine that the self-moving robot is located in a lane, and fit straight lines to the carrier legs based on their coordinates to obtain the left boundary line (first boundary line) and right boundary line (second boundary line) of the lane.
Step 618: obtain a first angle between the motion direction of the self-moving robot and the left boundary line, and a second angle between the motion direction and the right boundary line.
Step 620: compute the average of the first angle and the second angle to obtain the reference angle.
Step 622: determine the difference between the reference angle and the theoretical angle as the control angle of the self-moving robot.
Step 624: obtain a first distance between the coordinates of the self-moving robot and the left boundary line, and a second distance between its coordinates and the right boundary line.
Step 626: when the first distance is smaller than the second distance, determine the difference between the first distance and the preset theoretical width as the control distance of the self-moving robot.
Step 628: when the first distance is larger than the second distance, determine the difference between the second distance and the theoretical width as the control distance of the self-moving robot.
Step 630: based on the control angle and the control distance, control the drive assembly in the self-moving robot so that the self-moving robot moves.
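The cluster filtering of steps 604 and 606 above might, for example, compare each cluster's extent against the preset carrier-leg size. This sketch is an assumption about one plausible size filter, not the disclosed implementation:

```python
def filter_clusters_by_size(clusters, leg_size, tolerance):
    """Keep only clusters whose horizontal extent matches the preset
    carrier-leg size within a tolerance. A cluster is a list of (x, y)
    points; leg_size and tolerance are illustrative preset values."""
    kept = []
    for points in clusters:
        xs = [x for x, _ in points]
        ys = [y for _, y in points]
        extent = max(max(xs) - min(xs), max(ys) - min(ys))
        if abs(extent - leg_size) <= tolerance:
            kept.append(points)
    return kept
```

A compact cluster the size of a carrier leg passes the filter, while a long wall-like cluster is discarded before the distribution-based recognition of step 608.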
It can be understood that, when the position information of recognition points on the carrier corresponding to the self-moving device has been obtained, the position information of the self-moving device is determined from it; when the self-moving device is determined to be located in a lane, the boundary lines of the lane are determined based on the recognition-point positions; the control parameters of the self-moving device are determined from its motion information and the boundary lines; and the self-moving device is controlled to move based on the control parameters. Determining the lane boundary lines through the recognition points on the corresponding carrier, deriving the control parameters from the motion information and the boundary lines, and then controlling the movement improves the robustness and accuracy of the pose estimation of the self-moving device, effectively reduces the probability of collision with the corresponding carriers, avoids damage to the self-moving device, and improves its safety and stability.
Through the positioning method provided by the above embodiments of the present disclosure, the bottom center position of a carrier can be determined accurately, reducing the up-front cost of affixing identification codes; and the boundary lines of the lane in which the self-moving device is located can be determined accurately, effectively reducing the probability of collision between the self-moving device and the carriers on either side of the lane, avoiding damage to the self-moving device, and improving its safety and stability.
Corresponding to the above method embodiments, the present disclosure also provides positioning device embodiments. Fig. 7 is a schematic structural diagram of a positioning device according to some embodiments of the present disclosure. As shown in Fig. 7, the device includes:
a recognition module 702, configured to recognize a position to be recognized on a carrier corresponding to a self-moving device to obtain a recognition result, the recognition result including ranging attribute information between the self-moving device and the position to be recognized on the carrier, and/or position information of the self-moving device;
a first determination module 704, configured to determine target information based on the recognition result, the target information including center position information of the carrier and/or boundary lines of the lane in which the self-moving device is located;
a second determination module 706, configured to determine control parameters of the self-moving device according to the target information; and
a control module 708, configured to control the self-moving device to move based on the control parameters.
In some embodiments, the carrier corresponding to the self-moving device includes a carrier to be transported by the self-moving device, and the position to be recognized on the carrier includes at least three carrier legs; the control module 708 is further configured to control the self-moving device to move to a carrier recognition area corresponding to the carrier; the recognition module 702 is further configured to recognize the at least three carrier legs on the carrier to obtain ranging attribute information between the self-moving device and each carrier leg.
In some embodiments, the target information includes the center position information of the carrier; the first determination module 704 is further configured to determine the center position information of the carrier according to the ranging attribute information between the self-moving device and each carrier leg.
In some embodiments, the second determination module 706 is further configured to determine the control parameters of the self-moving device according to the position information of the self-moving device and the center position information, the control parameters including movement information; the control module 708 is further configured to control, based on the movement information, the self-moving device to move to the bottom center position of the carrier.
In some embodiments, the control module 708 is further configured to control the self-moving device to move to a first carrier recognition area corresponding to the carrier, where the first carrier recognition area is located outside the carrier; or to control the self-moving device to move to a second carrier recognition area corresponding to the carrier, where the second carrier recognition area is located at the bottom of the carrier.
In some embodiments, at least one ranging sensor is provided on the self-moving device; the recognition module 702 is further configured to recognize the carrier based on the at least one ranging sensor and a first recognition angle; when at least three carrier legs are recognized, obtain the ranging attribute information between the self-moving device and each carrier leg; and when fewer than three carrier legs are recognized, obtain first ranging attribute information between the self-moving device and each carrier leg at the first recognition angle, adjust the first recognition angle to a second recognition angle, recognize the carrier according to the second recognition angle, and obtain second ranging attribute information between the self-moving device and each carrier leg at the second recognition angle, the ranging attribute information including the first ranging attribute information and the second ranging attribute information.
In some embodiments, the first determination module 704 is further configured to compute the device position coordinates of each carrier leg according to the ranging attribute information between the self-moving device and each carrier leg; convert the device position coordinates of each carrier leg into global position coordinates of each carrier leg; and determine the center position information of the carrier according to the global position coordinates of the carrier legs.
In some embodiments, the first determination module 704 is further configured to determine the sensor position coordinates of each carrier leg according to the ranging attribute information between the self-moving device and that carrier leg, and convert the sensor position coordinates of each carrier leg into its device position coordinates.
In some embodiments, the ranging attribute information includes a ranging distance and a ranging angle based on the sensor position coordinates; the first determination module 704 is further configured to determine the sensor position coordinates corresponding to each carrier leg according to the ranging distance and ranging angle between the self-moving device and that carrier leg.
In some embodiments, the first determination module 704 is further configured to, when there are three carrier legs, determine the diagonally distributed carrier legs and determine the center position information of the carrier according to their global position coordinates; and, when there are four carrier legs, determine the center position information of the carrier according to the global position coordinates of the four carrier legs.
In some embodiments, the second determination module 706 is further configured to determine the carrier angle of the carrier according to the global position coordinates of the carrier legs and obtain the device angle of the self-moving device; determine an adjustment angle according to the carrier angle and the device angle; determine a device movement distance according to the position information of the self-moving device and the center position information of the carrier; and determine the adjustment angle and the device movement distance as the movement information of the self-moving device. Controlling, based on the control parameters, the self-moving device to move to the bottom center position of the carrier includes: controlling, according to the adjustment angle and the device movement distance, the self-moving device to move to the bottom center position of the carrier.
In some embodiments, the position to be recognized on the carrier includes recognition points on the carrier; the recognition module 702 is further configured to obtain the position information of the recognition points on the carrier corresponding to the self-moving device, and determine the position information of the self-moving device according to the position information of the recognition points.
In some embodiments, the target information is the boundary lines of the lane in which the self-moving device is located; the first determination module 704 is further configured to determine the boundary lines of the lane based on the position information of the recognition points when the self-moving device is determined, from its position information, to be located in a lane.
In some embodiments, the second determination module 706 is further configured to determine the control parameters of the self-moving device according to its motion information and the boundary lines of the lane.
In some embodiments, the recognition module 702 is further configured to obtain initial environment information collected by the self-moving device, and determine, based on the initial environment information, the position information of the recognition points on the carrier corresponding to the self-moving device.
In some embodiments, the initial environment information includes point cloud data; the recognition module 702 is further configured to cluster the point cloud data to obtain a set of clusters, and determine, based on the set of clusters, the position information of the recognition points on the carrier corresponding to the self-moving device.
In some embodiments, the recognition module 702 is further configured to filter the set of clusters according to preset carrier size information to obtain a set of target clusters, and recognize the set of target clusters according to preset carrier distribution information to determine the position information of the recognition points on the carrier corresponding to the self-moving device.
In some embodiments, the initial environment information includes image data; the recognition module 702 is further configured to determine, based on the image data, the position information of the recognition points on the carrier corresponding to the self-moving device.
In some embodiments, the first determination module 704 is further configured to determine the control angle of the self-moving device according to the angle between its motion direction and the lane boundary lines, and/or determine the control distance of the self-moving device according to the distance between its position information and the lane boundary lines.
In some embodiments, the first determination module 704 is further configured to determine, based on the position information of the recognition points, that the self-moving device is located in a lane when there are recognition points on both sides of its motion direction.
In some embodiments, the first determination module 704 is further configured to fit straight lines to the recognition points based on their position information to obtain the first boundary line and the second boundary line of the lane.
In some embodiments, the first determination module 704 is further configured to obtain a first angle between the motion direction of the self-moving device and the first boundary line and a second angle between the motion direction and the second boundary line; determine a reference angle according to the first angle and the second angle; and determine the difference between the reference angle and a theoretical angle as the control angle of the self-moving device, where the theoretical angle is the angle between the motion direction and the center line of the lane.
In some embodiments, the first determination module 704 is further configured to obtain a first distance between the position information of the self-moving device and the first boundary line and a second distance between the position information and the second boundary line; when the first distance is smaller than the second distance, determine the difference between the first distance and a preset theoretical width as the control distance of the self-moving device; and when the first distance is larger than the second distance, determine the difference between the second distance and the theoretical width as the control distance of the self-moving device.
The above is an illustrative scheme of the positioning device of this embodiment. It should be noted that the technical scheme of the positioning device belongs to the same concept as that of the positioning method described above; for details not described in detail in the scheme of the positioning device, refer to the description of the technical scheme of the positioning method.
Fig. 8 is a structural block diagram of a computing device 800 provided according to an embodiment of the present disclosure. The components of the computing device 800 include, but are not limited to, a memory 810 and a processor 820. The processor 820 is connected to the memory 810 via a bus 830, and a database 850 is used to store data.
The computing device 800 also includes an access device 840 that enables the computing device 800 to communicate via one or more networks 860. Examples of these networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. The access device 840 may include one or more of any type of wired or wireless network interface (for example, a network interface controller (NIC)), such as an IEEE 802.11 Wireless Local Area Network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (Wi-MAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so on.
In an embodiment of the present disclosure, the above components of the computing device 800, as well as other components not shown in Fig. 8, may also be connected to each other, for example via a bus. It should be understood that the structural block diagram of the computing device shown in Fig. 8 is for purposes of example only and does not limit the scope of the present disclosure. Those skilled in the art may add or replace other components as needed.
The computing device 800 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (for example, a tablet computer, a personal digital assistant, a laptop computer, a notebook computer, a netbook and the like), a mobile phone (for example, a smartphone), a wearable computing device (for example, a smart watch, smart glasses and the like) or other type of mobile device, or a stationary computing device such as a desktop computer or personal computer (PC). The computing device 800 may also be a mobile or stationary server.
When executing the computer instructions, the processor 820 implements the steps of the positioning method.
The above is an illustrative scheme of the computing device of this embodiment. It should be noted that the technical scheme of the computing device belongs to the same concept as that of the positioning method described above; for details not described in detail in the scheme of the computing device, refer to the description of the technical scheme of the positioning method.
An embodiment of the present disclosure also provides a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the positioning method described above.
The above is an illustrative scheme of the computer-readable storage medium of this embodiment. It should be noted that the technical scheme of the storage medium belongs to the same concept as that of the positioning method described above; for details not described in detail in the scheme of the storage medium, refer to the description of the technical scheme of the positioning method.
Specific embodiments of the present disclosure have been described above. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In certain implementations, multitasking and parallel processing are also possible or may be advantageous.
The computer instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
It should be noted that, for brevity of description, the foregoing method embodiments are expressed as a series of action combinations; however, those skilled in the art should appreciate that the present disclosure is not limited by the described order of actions, since according to the present disclosure some steps may be performed in other orders or simultaneously. Further, those skilled in the art should also appreciate that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily all required by the present disclosure.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, refer to the relevant description of other embodiments.
The preferred embodiments of the present disclosure disclosed above are merely intended to help illustrate the present disclosure. The optional embodiments do not recite every detail, nor do they limit the invention to the specific implementations described. Obviously, many modifications and variations are possible in light of the content of the present disclosure. These embodiments were selected and specifically described in order to better explain the principles and practical applications of the present disclosure, so that those skilled in the art can understand and use the present disclosure well. The present disclosure is limited only by the claims and their full scope and equivalents.

Claims (26)

  1. A positioning method, comprising:
    recognizing a position to be recognized on a carrier corresponding to a self-moving device to obtain a recognition result, the recognition result comprising ranging attribute information between the self-moving device and the position to be recognized on the carrier, and/or position information of the self-moving device;
    determining target information based on the recognition result, the target information comprising center position information of the carrier and/or a boundary line of a lane in which the self-moving device is located;
    determining control parameters of the self-moving device according to the target information; and
    controlling the self-moving device to move based on the control parameters.
  2. The method according to claim 1, wherein the carrier corresponding to the self-moving device comprises a carrier to be transported by the self-moving device, and the position to be recognized on the carrier comprises at least three carrier legs on the carrier;
    the recognizing a position to be recognized on the carrier corresponding to the self-moving device to obtain a recognition result comprises:
    controlling the self-moving device to move to a carrier recognition area corresponding to the carrier; and
    recognizing the at least three carrier legs on the carrier to obtain the ranging attribute information between the self-moving device and each of the carrier legs.
  3. The method according to claim 2, wherein the target information comprises the center position information of the carrier, and the determining target information based on the recognition result comprises:
    determining the center position information of the carrier according to the ranging attribute information between the self-moving device and each of the carrier legs.
  4. The method according to claim 3, wherein the determining control parameters of the self-moving device according to the target information comprises:
    determining the control parameters of the self-moving device according to the position information of the self-moving device and the center position information of the carrier, the control parameters comprising movement information; and
    the controlling the self-moving device to move based on the control parameters comprises: controlling, based on the movement information, the self-moving device to move to a bottom center position of the carrier.
  5. The method according to claim 4, wherein the controlling the self-moving device to move to a carrier recognition area corresponding to the carrier comprises:
    controlling the self-moving device to move to a first carrier recognition area corresponding to the carrier, wherein the first carrier recognition area is located outside the carrier; or
    controlling the self-moving device to move to a second carrier recognition area corresponding to the carrier, wherein the second carrier recognition area is located at a bottom of the carrier.
  6. The method according to claim 4, wherein at least one ranging sensor is provided on the self-moving device;
    the recognizing the at least three carrier legs on the carrier to obtain the ranging attribute information between the self-moving device and each of the carrier legs comprises:
    recognizing the carrier based on the at least one ranging sensor and a first recognition angle;
    when at least three carrier legs are recognized, obtaining the ranging attribute information between the self-moving device and each of the carrier legs; and
    when fewer than three carrier legs are recognized, obtaining first ranging attribute information between the self-moving device and each of the carrier legs at the first recognition angle, adjusting the first recognition angle to a second recognition angle, recognizing the carrier according to the second recognition angle, and obtaining second ranging attribute information between the self-moving device and each of the carrier legs at the second recognition angle, the ranging attribute information comprising the first ranging attribute information and the second ranging attribute information.
  7. The method according to claim 6, wherein the determining the center position information of the carrier according to the ranging attribute information between the self-moving device and each of the carrier legs comprises:
    computing device position coordinates of each of the carrier legs according to the ranging attribute information between the self-moving device and each of the carrier legs;
    converting the device position coordinates of each of the carrier legs into global position coordinates of each of the carrier legs; and
    determining the center position information of the carrier according to the global position coordinates of each of the carrier legs.
  8. The method according to claim 7, wherein the computing device position coordinates of each of the carrier legs according to the ranging attribute information between the self-moving device and each of the carrier legs comprises:
    determining sensor position coordinates of each of the carrier legs according to the ranging attribute information between the self-moving device and each of the carrier legs; and
    converting the sensor position coordinates of each of the carrier legs into the device position coordinates of each of the carrier legs.
  9. The method according to claim 8, wherein the ranging attribute information comprises a ranging distance and a ranging angle based on the sensor position coordinates;
    the determining sensor position coordinates of each of the carrier legs according to the ranging attribute information between the self-moving device and each of the carrier legs comprises:
    determining the sensor position coordinates corresponding to each of the carrier legs according to the ranging distance and the ranging angle between the self-moving device and each of the carrier legs.
  10. The method according to claim 7, wherein the determining the center position information of the carrier according to the global position coordinates of each of the carrier legs comprises:
    when the number of the carrier legs is three, determining the carrier legs that are distributed diagonally, and determining the center position information of the carrier according to the global position coordinates of the diagonally distributed carrier legs; and
    when the number of the carrier legs is four, determining the center position information of the carrier according to the global position coordinates of the four carrier legs.
  11. The method according to claim 4, wherein the determining the control parameters of the self-moving device according to the position information of the self-moving device and the center position information of the carrier comprises:
    determining a carrier angle of the carrier according to the global position coordinates of the carrier legs, and obtaining a device angle of the self-moving device;
    determining an adjustment angle according to the carrier angle and the device angle;
    determining a device movement distance according to the position information of the self-moving device and the center position information of the carrier; and
    determining the adjustment angle and the device movement distance as the movement information of the self-moving device;
    the controlling, based on the control parameters, the self-moving device to move to the bottom center position of the carrier comprises: controlling, according to the adjustment angle and the device movement distance, the self-moving device to move to the bottom center position of the carrier.
  12. The method according to claim 1, wherein the position to be recognized on the carrier comprises recognition points on the carrier;
    the recognizing a position to be recognized on the carrier corresponding to the self-moving device to obtain a recognition result comprises:
    obtaining position information of the recognition points on the carrier corresponding to the self-moving device; and
    determining the position information of the self-moving device according to the position information of the recognition points.
  13. The method according to claim 12, wherein the target information is a boundary line of a lane in which the self-moving device is located, and the determining target information based on the recognition result comprises:
    when the self-moving device is determined, according to the position information of the self-moving device, to be located in a lane, determining the boundary line of the lane based on the position information of the recognition points.
  14. The method according to claim 13, wherein the determining control parameters of the self-moving device according to the target information comprises:
    determining the control parameters of the self-moving device according to motion information of the self-moving device and the boundary line of the lane.
  15. The method according to claim 14, wherein the obtaining position information of the recognition points on the carrier corresponding to the self-moving device comprises:
    obtaining initial environment information collected by the self-moving device; and
    determining, based on the initial environment information, the position information of the recognition points on the carrier corresponding to the self-moving device.
  16. The method according to claim 15, wherein the initial environment information comprises point cloud data;
    the determining, based on the initial environment information, the position information of the recognition points on the carrier corresponding to the self-moving device comprises:
    clustering the point cloud data to obtain a set of clusters; and
    determining, based on the set of clusters, the position information of the recognition points on the carrier corresponding to the self-moving device.
  17. The method according to claim 16, wherein the determining, based on the set of clusters, the position information of the recognition points on the carrier corresponding to the self-moving device comprises:
    filtering the set of clusters according to preset carrier size information to obtain a set of target clusters; and
    recognizing the set of target clusters according to preset carrier distribution information, and determining the position information of the recognition points on the carrier corresponding to the self-moving device.
  18. The method according to claim 17, wherein the initial environment information comprises image data;
    the determining, based on the initial environment information, the position information of the recognition points on the carrier corresponding to the self-moving device comprises:
    determining, based on the image data, the position information of the recognition points on the carrier corresponding to the self-moving device.
  19. The method according to claim 14, wherein the determining the control parameters of the self-moving device according to the motion information of the self-moving device and the boundary line of the lane comprises:
    determining a control angle of the self-moving device according to an angle between a motion direction of the self-moving device and the boundary line of the lane; and/or
    determining a control distance of the self-moving device according to a distance between the position information of the self-moving device and the boundary line of the lane.
  20. The method according to claim 14, wherein the determining the position information of the self-moving device according to the position information of the recognition points comprises:
    determining, based on the position information of the recognition points, that the self-moving device is located in a lane when there are recognition points on both sides of the motion direction of the self-moving device.
  21. The method according to claim 14, wherein the determining the boundary line of the lane based on the position information of the recognition points comprises:
    fitting straight lines to the recognition points based on the position information of the recognition points to obtain a first boundary line and a second boundary line corresponding to the lane.
  22. The method according to claim 21, wherein the determining a control angle of the self-moving device according to the angle between the motion direction of the self-moving device and the boundary line of the lane comprises:
    obtaining a first angle between the motion direction of the self-moving device and the first boundary line, and a second angle between the motion direction of the self-moving device and the second boundary line;
    determining a reference angle according to the first angle and the second angle; and
    determining a difference between the reference angle and a theoretical angle as the control angle of the self-moving device, wherein the theoretical angle is an angle between the motion direction and a center line of the lane.
  23. The method according to claim 21, wherein the determining a control distance of the self-moving device according to the distance between the position information of the self-moving device and the boundary line of the lane comprises:
    obtaining a first distance between the position information of the self-moving device and the first boundary line, and a second distance between the position information of the self-moving device and the second boundary line;
    when the first distance is smaller than the second distance, determining a difference between the first distance and a preset theoretical width as the control distance of the self-moving device; and
    when the first distance is larger than the second distance, determining a difference between the second distance and the theoretical width as the control distance of the self-moving device.
  24. A positioning device, comprising:
    a recognition module, configured to recognize a position to be recognized on a carrier corresponding to a self-moving device to obtain a recognition result, the recognition result comprising ranging attribute information between the self-moving device and the position to be recognized on the carrier, and/or position information of the self-moving device;
    a first determination module, configured to determine target information based on the recognition result, the target information comprising center position information of the carrier and/or a boundary line of a lane in which the self-moving device is located;
    a second determination module, configured to determine control parameters of the self-moving device according to the target information; and
    a control module, configured to control the self-moving device to move based on the control parameters.
  25. A computing device, comprising a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer instructions, implements the steps of the method according to any one of claims 1 to 23.
  26. A computer-readable storage medium storing computer instructions, wherein the computer instructions, when executed by a processor, implement the steps of the method according to any one of claims 1 to 23.
PCT/CN2023/135013 2022-12-02 2023-11-29 Positioning method, device, computing device and storage medium WO2024114682A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202211536121.9A CN118129749A (zh) 2022-12-02 2022-12-02 Positioning method and device
CN202211536121.9 2022-12-02
CN202310383208.5 2023-04-11
CN202310383208.5A CN116520829A (zh) 2023-04-11 2023-04-11 Device control method and device for self-moving device

Publications (1)

Publication Number Publication Date
WO2024114682A1 true WO2024114682A1 (zh) 2024-06-06

Family

ID=91323014

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/135013 WO2024114682A1 (zh) 2022-12-02 2023-11-29 定位方法、装置、计算设备和存储介质

Country Status (1)

Country Link
WO (1) WO2024114682A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017004230A (ja) * 2015-06-09 2017-01-05 シャープ株式会社 Autonomous traveling body, narrow-path determination method for an autonomous traveling body, narrow-path determination program, and computer-readable recording medium
CN110271804A (zh) * 2018-03-15 2019-09-24 深圳志合天成科技有限公司 Motion control method, device, apparatus and storage medium for an automatic storage and retrieval device
CN111977244A (zh) * 2020-09-08 2020-11-24 北京极智嘉科技有限公司 Warehouse transport scheduling system and method
CN113311844A (zh) * 2021-07-28 2021-08-27 福勤智能科技(昆山)有限公司 Servo control method, device, computer equipment and storage medium
CN114326740A (zh) * 2021-12-30 2022-04-12 杭州海康机器人技术有限公司 Collaborative transport processing method, device, electronic device and system
CN114995416A (zh) * 2022-05-30 2022-09-02 深圳市优必选科技股份有限公司 Global path navigation method, device, terminal device and storage medium
CN116520829A (zh) * 2023-04-11 2023-08-01 北京极智嘉科技股份有限公司 Device control method and device for self-moving device

Similar Documents

Publication Publication Date Title
EP3698270B1 (en) Systems and methods for tracking goods carriers
CN110837814B (zh) Vehicle navigation method, device and computer-readable storage medium
US11835967B2 (en) System and method for assisting collaborative sensor calibration
KR20180044279A (ko) 깊이 맵 샘플링을 위한 시스템 및 방법
US20230213939A1 (en) System and method for collaborative sensor calibration
US11875682B2 (en) System and method for coordinating collaborative sensor calibration
US11852730B2 (en) System and method for collaborative calibration via landmark
US11630454B2 (en) System and method for coordinating landmark based collaborative sensor calibration
WO2023005384A1 (zh) Relocation method and device for a movable device
US11687086B2 (en) Autonomous robotic navigation in storage site
WO2021077614A1 (zh) AGV positioning method, device and system based on a real-time updated map
WO2021004483A1 (zh) Navigation method, mobile carrier and navigation system
CN114545426A (zh) Positioning method, device, mobile robot and computer-readable medium
CN116520829A (zh) Device control method and device for self-moving device
WO2024114682A1 (zh) Positioning method, device, computing device and storage medium
WO2024001596A1 (zh) Robot motion control method and device
CN106248058B (zh) Positioning method, device and system for a warehousing transport vehicle
WO2020137311A1 (ja) Positioning device and moving body
WO2023061501A1 (zh) Navigation system and method based on shelf identification
Zhao et al. The construction method of the digital operation environment for bridge cranes
US20230133480A1 (en) Thin object detection and avoidance in aerial robots
WO2019130932A1 (ja) Vehicle monitoring device, vehicle, and vehicle monitoring system
US20240210177A1 (en) Topometric map based autonomous navigation for inventory drone
CN115166686B (zh) Multi-UAV distributed collaborative positioning and mapping method in satellite-denied environments
CN118129749A (zh) Positioning method and device