CN111071963B - Luggage cart collecting method and luggage cart collecting equipment - Google Patents


Info

Publication number
CN111071963B
CN111071963B (application CN201911274337.0A)
Authority
CN
China
Prior art keywords: luggage, luggage cart, image
Prior art date
Legal status: Active
Application number
CN201911274337.0A
Other languages
Chinese (zh)
Other versions
CN111071963A
Inventor
孟李艾俐
王超群
刘剑邦
周彤
延廷芳
周越
Current Assignee: Shenzhen Wenyuan laboratory Co.,Ltd.
Original Assignee
Lianbo Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Lianbo Intelligent Technology Co ltd
Priority to CN201911274337.0A
Publication of CN111071963A
Application granted
Publication of CN111071963B
Legal status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
      • B64: AIRCRAFT; AVIATION; COSMONAUTICS
        • B64F: GROUND OR AIRCRAFT-CARRIER-DECK INSTALLATIONS SPECIALLY ADAPTED FOR USE IN CONNECTION WITH AIRCRAFT; DESIGNING, MANUFACTURING, ASSEMBLING, CLEANING, MAINTAINING OR REPAIRING AIRCRAFT, NOT OTHERWISE PROVIDED FOR; HANDLING, TRANSPORTING, TESTING OR INSPECTING AIRCRAFT COMPONENTS, NOT OTHERWISE PROVIDED FOR
          • B64F1/00: Ground or aircraft-carrier-deck installations
            • B64F1/32: Ground or aircraft-carrier-deck installations for handling freight
      • B66: HOISTING; LIFTING; HAULING
        • B66F: HOISTING, LIFTING, HAULING OR PUSHING, NOT OTHERWISE PROVIDED FOR, e.g. DEVICES WHICH APPLY A LIFTING OR PUSHING FORCE DIRECTLY TO THE SURFACE OF A LOAD
          • B66F9/00: Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes
            • B66F9/06: Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks
              • B66F9/075: Constructional features or details
                • B66F9/07504: Accessories, e.g. for towing, charging, locking
                • B66F9/07513: Details concerning the chassis
                • B66F9/0755: Position control; Position detectors
    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V20/00: Scenes; Scene-specific elements
            • G06V20/10: Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Transportation (AREA)
  • Structural Engineering (AREA)
  • Mechanical Engineering (AREA)
  • Geology (AREA)
  • Civil Engineering (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Manipulator (AREA)

Abstract

The embodiments of this application fall within the technical field of robotics and provide a luggage cart collection method and a luggage cart collection device. The method comprises the following steps: acquiring an image of the surrounding environment and detecting the surroundings from that image; when a luggage cart to be collected is detected in the surroundings, estimating the cart's pose; moving to a preset orientation relative to the cart according to that pose; docking and fixing the cart to the collection device from that orientation; and transporting the docked cart to a luggage cart collection point. The method enables automatic collection of luggage carts, significantly reduces the manual labor required in the collection process, and lowers the operating cost of the cart-operating organization.

Description

Luggage cart collecting method and luggage cart collecting equipment
Technical Field
This application belongs to the technical field of robotics, and in particular relates to a luggage cart collection method and a luggage cart collection device.
Background
In recent years, robot perception, localization, and planning techniques have advanced rapidly. As these technologies mature, unmanned vehicles are increasingly applied in daily life, for example autonomous warehouse robots, food delivery robots, and medical supply transport robots. Using robots to perform heavy tasks in place of humans is an important topic of current robotics research.
Taking Hong Kong International Airport as an example: more than 100 airlines there operate over 1100 flights per day, connecting more than 220 destinations and serving over 70.5 million passengers per year. To cope with this enormous traffic, about 13,000 luggage carts are distributed throughout the airport. Used carts end up scattered around its corners, and the airport must collect them and redeploy them to the places where passengers need them.
To address this problem, airports currently hire large numbers of workers to collect the scattered carts and return them to where other passengers can use them. How to collect scattered luggage carts conveniently is therefore a pressing problem for those skilled in the art.
Disclosure of Invention
In view of this, the embodiments of the present application provide a luggage cart collection method and a luggage cart collection device that enable automatic collection of luggage carts, significantly reduce the manual labor required in the collection process, and lower the operating cost of the cart-operating organization.
A first aspect of the embodiments of the present application provides a luggage cart collection method applied to a luggage cart collection device, the method comprising:
acquiring an image of the surrounding environment and detecting the surroundings from the image;
when a luggage cart to be collected is detected in the surroundings, estimating the pose of the cart to obtain the cart's pose;
moving to a preset orientation relative to the cart according to its pose;
docking and fixing the cart to the collection device from that orientation;
and transporting the docked cart to a luggage cart collection point.
A second aspect of the embodiments of the present application provides a luggage cart collection device comprising at least two driving wheels, at least two omnidirectional wheels, a chassis supported by the driving and omnidirectional wheels, a lifting mechanism and a docking mechanism fixed on the chassis, an onboard computer, a clamping mechanism movably connected to the lifting mechanism, and a depth camera communicatively connected to the onboard computer, wherein:
the depth camera acquires an image of the surrounding environment;
the onboard computer detects the surroundings from the image, estimates the pose of any luggage cart to be collected, and, according to that pose, drives the driving wheels to move the device to a preset orientation relative to the cart;
the clamping mechanism clamps the cart from the preset orientation;
the lifting mechanism lifts the clamped cart and fixes it on the docking mechanism;
and the driving wheels, under instruction of the onboard computer, propel the device so that the docked cart is transported to a luggage cart collection point.
A third aspect of the embodiments of the present application provides a luggage cart collection apparatus, applied to a luggage cart collection device, comprising:
an image acquisition module for acquiring an image of the surrounding environment and detecting the surroundings from the image;
a pose estimation module for estimating the pose of a luggage cart when a cart to be collected is detected in the surroundings;
a moving module for moving to a preset orientation relative to the cart according to its pose;
a docking module for docking and fixing the cart to the collection device from the preset orientation;
and a transport module for transporting the docked cart to a luggage cart collection point.
A fourth aspect of the embodiments of the present application provides a luggage cart collection device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the program, the processor implements the luggage cart collection method of the first aspect.
A fifth aspect of the embodiments provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the luggage cart collection method of the first aspect.
A sixth aspect of the embodiments provides a computer program product that, when run on a luggage cart collection device, causes the device to perform the luggage cart collection method of the first aspect.
Compared with the prior art, the embodiment of the application has the following advantages:
(1) A complete robot hardware and software system is designed around the robot's application scenario, endurance, and related requirements. A two-wheel differential-drive chassis is proposed for uneven terrain such as the slopes found in airports, and it incorporates a suspension arrangement so the robot can operate stably across different terrains. Because several types of luggage carts coexist in an airport, the robot's actuator must generalize across them; based on the features common to all carts to be collected, an actuator with magnetic adsorption capability is proposed for forking and docking carts. Finally, an efficient multithreaded software system, designed from a comprehensive analysis of the hardware, lets the robot's modules communicate efficiently and ensures that cart-collection tasks are completed stably and efficiently in environments such as airports.
(2) To address inaccurate or failed localization in dynamic, crowded environments, a localization method based on camera detection and multi-sensor fusion is adopted. To improve the robustness of the localization algorithm in crowd-dense areas such as airports and shopping malls, this embodiment detects dynamic objects with a 3D camera and, using the camera-lidar calibration result, removes the corresponding points from the laser scan. On this basis, a multi-sensor fusion method combines the camera-lidar localization result with vision-based natural-marker localization and odometry-based localization, giving the robot accurate localization in large-scale environments.
(3) For the heavy background clutter of environments such as airports, a stable, high-precision deep-learning method for recognizing luggage carts is proposed. So that the robot can stably and accurately recognize cart states in a complex, dynamic airport and track the idle carts to be collected in preparation for subsequent motion planning, a deep-learning cart-state detector is designed that copes with large scale changes on the pixel plane across distances, easily confused cart states, and occlusion by pedestrians, and accurately distinguishes idle from occupied carts. To keep the robot's workflow fluent, a multithreaded multitasking scheme is also proposed that raises the speed of cart-state recognition.
(4) Given the limited range and capability of the onboard sensors, a stable, high-precision cart pose estimation method based on distance information is proposed. The pose estimate of an identified idle cart gives the motion-control module the cart's position relative to the robot while the robot approaches and forks it. Depending on the cart-robot distance, two different pose estimation methods are used so the pose can be estimated quickly, stably, and accurately: when the robot is far from the cart, a position judgment strategy based on pixel information is used; when the distance falls below a certain threshold, a pose detection strategy based on the point cloud is used.
(5) For the nonholonomic constraints of a differential-drive robot, docking with the cart is achieved with a segmented, multi-stage visual servoing scheme. Because the robot designed here is a nonholonomic system, its lateral motion is constrained, and applying a visual servoing algorithm directly to such a system can cause the target to leave the field of view. By analyzing the working range and error of the different cart localization methods, this embodiment proposes a segmented multi-stage visual servoing algorithm that adopts different servo strategies according to the relative position of robot and cart, thereby achieving docking.
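As a rough illustration of the segmented servoing idea in (5), a differential-drive base might switch gains by range as sketched below. All gains and the threshold are invented for illustration and are not values from the patent.

```python
import math

def servo_command(dx, dy, far_threshold_m=1.5):
    """Segmented visual-servo sketch for a differential-drive (nonholonomic) base.
    (dx, dy) is the cart's position in the robot frame, x pointing forward.
    Far segment: moderate speed with a strong heading gain so the target stays
    in the field of view; near segment: slow, gentle corrections for docking."""
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx)
    if distance > far_threshold_m:
        v, w = 0.5, 1.2 * heading_error    # approach segment
    else:
        v, w = 0.15, 0.6 * heading_error   # final docking segment
    return v, w                            # linear (m/s), angular (rad/s)
```

A real controller would add velocity limits and a stop condition once the fork is aligned.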
Drawings
To illustrate the technical solutions in the embodiments more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the application; a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart illustrating the steps of a luggage cart collection method according to an embodiment of the present application;
FIG. 2 is an isometric view of a luggage cart collection device according to an embodiment of the present application;
FIG. 3 is a bottom view of the luggage cart collection device of an embodiment of the present application;
FIG. 4 is a schematic flow chart illustrating the steps of another luggage cart collection method according to an embodiment of the present application;
FIG. 5 is a schematic illustration of a luggage cart detection and identification process according to an embodiment of the present application;
FIG. 6 is a schematic flow chart illustrating the steps of another luggage cart collection method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of the architecture of a system to which the luggage cart collection method of an embodiment of the present application is applied;
FIG. 8 is a schematic diagram of the operation process of the luggage cart collection method according to an embodiment of the present application;
FIG. 9 is a schematic view of a luggage cart collection device according to an embodiment of the present application;
FIG. 10 is a schematic view of a luggage cart collection device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
At present, to reduce labor costs, many airports use electric tow vehicles to haul long queues of luggage carts. This reduces the labor burden to some extent and speeds up cart recovery. However, the tow vehicle is only responsible for transporting the carts, easing the workers' load over long hauls; the collection step itself still requires human participation, so this approach does not significantly reduce the use of labor.
Meanwhile, luggage carts with autonomous navigation have appeared at several airports around the world: they carry a passenger's luggage to a designated position and then return to a fixed position to continue serving other passengers. This avoids the heavy cart-collection work, but the high price of a single automated cart limits large-scale deployment.
Therefore, to address these problems, this application implements a fully automatic airport luggage cart collection method that can detect the distribution of carts in real time, automatically collect carts scattered everywhere, and transfer them in batches to a designated area for passengers to use.
The technical solution of the present application will be described below by way of specific examples.
Referring to fig. 1, a schematic flow chart illustrating steps of a baggage car collection method according to an embodiment of the present application is shown, which may specifically include the following steps:
s101, collecting an ambient environment image, and detecting the ambient environment according to the ambient environment image;
it should be noted that the method may be applied to a baggage car collecting apparatus, which may be a transportation apparatus for collecting baggage cars, such as a baggage car collecting vehicle, a baggage car collecting robot, etc., and the present embodiment does not limit the specific form of the baggage car collecting apparatus.
In addition, the baggage car collection device in this embodiment may be applied to an environment such as a station waiting hall, an airport terminal, and the like, and may also be applied to other scenes where baggage cars or similar devices need to be recovered.
Generally, in order to realize automatic collection of a luggage cart, a luggage cart collecting apparatus needs to have a certain structure to support operations such as adsorption, lifting, connection, and fixation of the luggage cart. Therefore, for the sake of understanding, the present embodiment first describes the hardware structure of the baggage car collecting apparatus.
Fig. 2 is an isometric view of the luggage cart collection device of this embodiment. In this view the device can be regarded as a cart collection robot whose hardware mainly comprises a chassis that moves stably over various terrains and a clamping mechanism that stably grips a cart. The robot is driven by two differential driving wheels, giving good terrain adaptability and stability. In addition to the two driving wheels, the rear of the robot carries two omnidirectional (caster) wheels, and the chassis is supported jointly by the two omnidirectional wheels and the two driving wheels. Of course, these wheel counts are only an example; other numbers of omnidirectional and differential driving wheels may be configured as needed.
The front of the robot body carries a lifting mechanism and a docking mechanism fixed on the chassis, the lifting mechanism being movably connected to a clamping mechanism. After the fork tips of the clamping mechanism grip a cart to be collected, the lifting mechanism lifts the gripped cart and fixes it on the docking mechanism, establishing a fixed connection between robot and cart.
An ultrasonic ranging sensor may be mounted in front of the lifting mechanism to measure the distance and angle between the fork and the cart. A magnetic adsorption device may also be added to the fork to attract the cart when the robot is close, preventing the cart from being pushed away by external forces during operation.
As shown in fig. 2, the robot is also provided with a depth camera, which captures images of the surroundings while the robot travels and, once a cart to be collected has been detected, captures point-cloud images of that cart; the images are transmitted to the onboard computer for further processing.
Fig. 3 is a bottom view of the luggage cart collection device of this embodiment. As it shows, the robot is further equipped with servo motors that execute the onboard computer's instructions, providing automatic control of the system while the robot travels and collects carts.
A robot with this hardware structure can collect luggage carts automatically. The collection process is described in detail below; for ease of understanding, the collection device is exemplified by the cart collection robot shown in fig. 3.
In this embodiment, initially, the baggage car collection robot may cruise along a certain route. For example, for a robot placed in an airport terminal, a corresponding cruising route can be planned for the robot, and the robot is controlled to cruise in the terminal according to the route.
While cruising, the robot photographs its surroundings with the configured depth camera, detects the environment from the captured images, and determines whether a cart to be collected is present. Alternatively, the robot may automatically detect carts in the surroundings with a lidar.
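So that detection does not block motion control, the recognition step can run in a background thread, echoing the multithreaded scheme mentioned in the summary. A minimal sketch follows, with a stubbed classifier standing in for the deep-learning model; all names are illustrative.

```python
import queue
import threading

def classify_cart_state(frame):
    """Stand-in for the deep-learning cart-state classifier: a dict flag replaces
    the network's output purely to make the pipeline runnable."""
    return "occupied" if frame.get("has_person") else "idle"

def detection_worker(frames_q, results_q):
    """Consume camera frames in a background thread so perception does not block
    the robot's control loop; None is used as a shutdown sentinel."""
    while True:
        frame = frames_q.get()
        if frame is None:
            break
        results_q.put(classify_cart_state(frame))
```

The control loop pushes frames into `frames_q` and polls `results_q` without waiting on inference.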
S102, when a luggage cart to be collected is detected in the surrounding environment, estimating the pose of the cart to obtain the cart's pose;
after the robot detects the luggage van to be collected, different pose estimation models can be used for estimating the pose of the luggage van according to the distance between the luggage van and the robot.
It should be noted that the luggage cart to be collected in this embodiment may refer to an unused luggage cart. For example, a baggage car that is not loaded with any baggage and is not held or pushed by other users may be considered to be a baggage car to be collected that needs to be transported to a designated collection point.
The pose of the luggage van comprises the detected position and the detected pose of the luggage van to be collected, and a specific scheme for how the robot collects the luggage van can be designed by estimating the position and the pose of the luggage van. For example, what orientation of the luggage cart needs to be moved, how to attach the luggage cart, etc.
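The summary describes two range-dependent pose estimation strategies: pixel-based when the cart is far away, point-cloud-based when it is near. A minimal sketch of the switch is below; the threshold value and the far-range bearing helper are assumptions for illustration.

```python
import math

NEAR_RANGE_M = 2.0  # assumed value; the text only says "a certain threshold"

def choose_pose_estimator(distance_m):
    """Pick the strategy described above: pixel-based when the cart is far,
    point-cloud-based once it is within NEAR_RANGE_M of the robot."""
    return "point_cloud" if distance_m < NEAR_RANGE_M else "pixel"

def bearing_from_pixel(u, cx, fx):
    """Hypothetical far-range helper: coarse bearing (radians) to the cart from
    the horizontal pixel coordinate u of its bounding-box centre, given the
    principal point cx and focal length fx of a pinhole camera."""
    return math.atan2(u - cx, fx)
```

Near range, the full point cloud of the cart would replace this coarse bearing with a complete pose.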
S103, moving to a preset orientation relative to the cart according to the cart's pose;
In the present embodiment, the preset orientation of the cart may be the position directly behind it, i.e., the side facing the user when the user pushes the cart.
Once the cart's pose has been estimated, its position and current attitude are known, so the robot can be steered to the position directly behind the cart in preparation for collecting it.
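Computing the goal pose directly behind the cart from its estimated pose can be sketched as follows; the standoff distance and function name are assumptions, not values from the patent.

```python
import math

def goal_behind_cart(cart_x, cart_y, cart_heading, standoff_m=1.0):
    """Goal pose `standoff_m` metres directly behind the cart, with the robot's
    heading aligned with the cart's so the fork faces the cart's rear.
    cart_heading is the cart's forward direction in the map frame (radians)."""
    gx = cart_x - standoff_m * math.cos(cart_heading)
    gy = cart_y - standoff_m * math.sin(cart_heading)
    return gx, gy, cart_heading
```

The planner would then drive the robot to `(gx, gy)` and rotate it to `cart_heading` before the final approach.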
S104, docking and fixing the cart to the collection device from the preset orientation;
In this embodiment, after moving directly behind the cart, the robot continues forward; once within a certain distance, its magnetic adsorption device attracts the cart, the fork of the clamping mechanism is then inserted into the cart's gap and lifted, and robot and cart are thereby docked and fixed.
It should be noted that the docking and fixing can be controlled by a corresponding algorithm that decides, from the real-time distance between cart and robot, which operation should be performed at that distance.
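Such a distance-driven controller might be sketched as a simple stage selector. The thresholds below are assumptions, since the text only says the operation depends on the real-time robot-cart distance.

```python
def docking_action(distance_m):
    """Stage selection for the final docking controller (thresholds assumed)."""
    if distance_m > 0.50:
        return "approach"         # keep closing in under servo control
    if distance_m > 0.10:
        return "magnetic_attach"  # energize the magnetic adsorption device
    return "fork_and_lift"        # insert the fork into the cart's gap and lift
```

A production system would add hysteresis so sensor noise near a threshold cannot flip stages back and forth.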
S105, transporting the docked cart to a luggage cart collection point.
The collection point is an area where carts are placed in a concentrated way. In an airport terminal, for example, it may be at the passenger entrance, so that passengers can conveniently take a cart when entering. This embodiment does not limit the specific location of the collection point.
In a specific implementation, after docking and fixing a cart, the robot can first determine its position relative to the collection point, then plan a corresponding route and travel along it, delivering the cart to the collection point.
In the embodiments of this application, when a cart to be collected is detected in the surroundings, its position and attitude are obtained by pose estimation; the robot then moves to a given orientation of the cart according to that information, docks with and fixes the cart from that orientation, and transports it to the collection point. Carts scattered everywhere are thus collected automatically, the manual labor required in the collection process is significantly reduced, and the operating cost of the cart-operating organization falls.
Referring to fig. 4, a schematic flow chart illustrating steps of another baggage car collection method according to an embodiment of the present application may specifically include the following steps:
s401, acquiring positioning information, and determining the position of the luggage van collecting device according to the positioning information;
the method can be applied to luggage cart collecting equipment, such as a luggage cart collecting robot for recovering idle luggage carts and the like. The main execution body of the embodiment is a baggage car collecting device, and the baggage car collecting device is controlled to cruise at a station waiting hall, an airport terminal and the like, so that an idle baggage car needing to be recovered is detected, and the idle baggage car can be conveyed to a designated baggage car collecting point. The following description will be given by taking a baggage car collection robot as an example.
In this embodiment, during operation the robot must acquire localization information in real time to determine its current position, so that after collecting an idle cart it can determine how to transport the cart to the collection point from that position.
Generally, in a large-area indoor environment with dense crowds, such as a market, a hospital or an airport, a common positioning approach is prone to failure because of heavy environmental interference. The robot positioning system therefore needs to be redesigned so that it can operate stably under all such conditions.
As an example of the present application, the positioning system of the baggage car collecting robot in the embodiment may include an inertial navigation sensor, a multiline laser radar, a depth camera, a wheel type odometer, an ultrasonic ranging sensor, and the like, and a multi-sensor integrated positioning system is implemented through cooperation of a plurality of devices or modules. The function of the depth camera in the robot positioning system can be to detect characteristic markers in the surrounding environment and assist in positioning; the multiline laser radar can be used for drawing and updating a map; accurate robot position information is obtained through cooperation of the radar, the characteristic markers and other sensors.
In a specific implementation, in order to realize stable positioning of the robot in a dense crowd, the wheel mileage and the visual mileage during the robot's travel can be counted first. The wheel mileage may refer to the distance traveled by a driving wheel of the baggage car collecting robot; the visual mileage can be obtained by collecting road surface texture information during travel with a depth camera mounted on the robot and then computing the mileage from that texture information. That is, while the robot travels, the depth camera can be controlled to look down at the road surface, and the visual mileage is then derived from the road surface texture.
Then, the actual mileage during traveling can be calculated from the wheeled mileage and the visual mileage.
Because both the wheel odometer and the visual odometer are subject to a certain error, this embodiment can fuse the two odometers to reduce the odometry error. In a specific implementation, an extended Kalman filter can be adopted to fuse the wheel mileage and the visual mileage and obtain the actual mileage during travel. Of course, data from the inertial sensor or other sources can also be combined on this basis to further improve odometry accuracy.
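As a minimal illustration of the fusion idea above, the following Python sketch blends the two mileage estimates with a variance-weighted (Kalman-style) update. The noise variances and the one-dimensional state are simplifying assumptions for illustration, not the full extended Kalman filter of a real positioning system.

```python
def fuse_mileage(wheel_mileage, visual_mileage, wheel_var=0.04, visual_var=0.01):
    """Variance-weighted fusion of two odometry estimates.

    A one-dimensional stand-in for the extended Kalman filter update
    described above; the variances are illustrative assumptions.
    """
    k = wheel_var / (wheel_var + visual_var)       # Kalman-style gain
    fused = wheel_mileage + k * (visual_mileage - wheel_mileage)
    fused_var = (1.0 - k) * wheel_var              # fused estimate is less uncertain
    return fused, fused_var

# The fused mileage sits between the two inputs, weighted toward the
# lower-variance (here: visual) measurement.
mileage, variance = fuse_mileage(10.2, 10.0)
```

Note that the fused variance is smaller than either input variance, which is the motivation for fusing the two odometers in the first place.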
After the actual mileage of the robot is obtained, the current position of the robot can be determined based on the actual mileage, the preset environmental marker information and the starting point position information. The environmental marker information may be information of a fixed object on a robot traveling route collected in advance, for example, some fixed inquiry stations, check-in islands, and the like in an airport terminal.
According to the embodiment, on the basis of calculating the actual travelling mileage of the robot, the existing characteristic markers in the application environment are fused with the positioning algorithm, so that the robustness of the positioning algorithm can be enhanced, and the robot can be ensured to stably determine the position of the robot under dense crowds.
S402, collecting surrounding environment images of the luggage van collecting device in the advancing process, and detecting the surrounding environment images by adopting a preset luggage van detection model;
in this embodiment, whether the baggage car to be collected is detected may be determined by acquiring an image of the surroundings during the travel of the robot and then detecting whether the image includes an image of an empty baggage car to be collected.
In a specific implementation, the surrounding environment image can be detected through a baggage car detection model obtained through pre-training so as to identify the baggage car to be collected.
In this embodiment, the baggage car detection model may be obtained as follows: acquire images of a plurality of baggage cars; label each baggage car in those images to obtain the image position information and pixel information of each baggage car; scan an unused baggage car with a depth sensor to obtain its three-dimensional image; and then train a deep learning model with the three-dimensional image of the unused baggage car together with the image position information and pixel information of the plurality of baggage cars. That is, the baggage car detection model in this embodiment is an improvement on a generic deep learning detection model: the classification branch is changed into a network that recognizes both the baggage car and its state, so that the detection network outputs the position of the baggage car and its state category.
Taking the collection of idle baggage cars in an airport terminal as an example, the baggage car detection model can be obtained through the following steps:
(1) an algorithm training and testing database is constructed by shooting a depth image and a color image of the luggage van in the airport;
(2) marking the position of the luggage van and the pixels of the luggage van in the color image;
(3) scanning a three-dimensional image of the idle luggage van by using a depth sensor;
(4) and training the deep learning model by using the information obtained in the previous step to obtain a luggage van detection model.
By running a baggage car image through the trained baggage car detection model, the position of an empty baggage car may be output.
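To illustrate how such a detector's output might be consumed downstream, the sketch below filters hypothetical detections for idle carts. The field names `box` and `state` and the state labels are assumptions about the network's output format, not the actual interface of the trained model.

```python
def find_idle_carts(detections):
    """Filter detector outputs for empty carts to collect.

    `detections` mimics the assumed output of the detection network:
    one dict per cart with a bounding box (x, y, w, h) and a state
    label ('idle', 'loaded', or 'in_use'). Names are illustrative.
    """
    return [d["box"] for d in detections if d["state"] == "idle"]

dets = [
    {"box": (120, 80, 60, 90), "state": "idle"},
    {"box": (300, 60, 55, 85), "state": "in_use"},
]
idle_boxes = find_idle_carts(dets)
```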
S403, when the surrounding environment image is detected to contain the luggage van image, identifying whether the luggage van image contains an object image or a human body limb image;
in this embodiment, the baggage car to be collected is an unused idle baggage car, that is, a baggage car in which no baggage has been placed and which is not being held or pushed by a person.
Therefore, when it is detected that the baggage car image is included in the image, it is possible to further recognize whether the baggage car image includes an object image or a human body limb image.
If the luggage van image does not contain any object image or human body limb image, the luggage van corresponding to the image can be judged to be the idle luggage van to be collected. At this time, the step S404 may be continuously executed to perform pose estimation on the baggage car.
S404, judging that the luggage van corresponding to the luggage van image is a luggage van to be collected, and estimating the position and the attitude of the luggage van to obtain the position and the attitude of the luggage van;
fig. 5 is a schematic diagram illustrating a baggage car detection and identification process according to this embodiment. According to the process shown in fig. 5, the robot can detect the idle baggage car to be collected by using the baggage car detection model, separate the baggage car based on the semantic segmentation algorithm, and then complete the pose estimation of the baggage car by using the corresponding pose estimation model.
S405, moving to a preset position of the luggage van according to the pose of the luggage van;
s406, the luggage van and the luggage van collecting device are fixedly connected from the preset direction;
s407, conveying the baggage car which is docked and fixed to a baggage car collecting point.
After the estimation of the pose of the luggage van is completed, the robot can move to the position right behind the luggage van according to the estimated pose, then clamp the luggage van from the position right behind, and connect and fix the luggage van and the robot, so that the luggage van is conveyed to a luggage van collecting point.
Since steps S405 to S407 of this embodiment are similar to steps S103 to S105 of the previous embodiment, it can be referred to, and this embodiment is not described again.
In the embodiment of the application, the wheel-type mileage and the visual mileage of the luggage van collecting device in the travelling process are fused, and the luggage van collecting device is positioned by combining the characteristic markers in the application environment, so that the robustness of a positioning algorithm can be enhanced, and the positioning stability of the luggage van collecting device under dense crowds is improved.
Referring to fig. 6, a schematic flow chart illustrating steps of another baggage car collection method according to an embodiment of the present application is shown, which may specifically include the following steps:
s601, collecting an ambient environment image, and detecting the ambient environment according to the ambient environment image;
since step S601 in this embodiment is similar to step S101 and step S402 in the previous embodiment, this embodiment is not described again.
S602, when detecting that the to-be-collected luggage van exists in the surrounding environment, counting the number of pixels of the outer frame of the luggage van in the image of the luggage van, and calculating the perimeter of the outer frame of the luggage van according to the number of pixels of the outer frame of the luggage van;
in this embodiment, after detecting an idle baggage car to be collected, different pose estimation models may be used according to the distance between the baggage car and the robot.
In a specific implementation, the perimeter of the outer frame can be obtained by calculating the number of pixels of the outer frame, and the distance between the free luggage van and the robot is represented by the size of the perimeter of the outer frame. When the perimeter is less than or equal to a preset threshold value, the idle luggage van is far away from the robot; otherwise, the luggage van is relatively close to the robot.
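The perimeter heuristic above can be sketched as follows; the 600-pixel threshold is an illustrative assumption, since no specific value is given here.

```python
def bbox_perimeter(w, h):
    """Perimeter of the detected outer frame, in pixels."""
    return 2 * (w + h)

def choose_pose_model(w, h, threshold=600):
    """Small perimeter -> cart appears small -> cart is far -> coarse model;
    large perimeter -> cart is close -> fine model.

    The 600-pixel threshold is an assumption for illustration.
    """
    return "coarse" if bbox_perimeter(w, h) <= threshold else "fine"
```

The perimeter thus serves as a cheap proxy for distance, avoiding a depth lookup before a pose model has even been chosen.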
S603, if the perimeter of the outer frame of the luggage van is smaller than or equal to a preset threshold value, performing pose estimation on the luggage van by adopting a preset first pose estimation model to obtain the pose of the luggage van;
in this embodiment, when the perimeter of the outer frame of the baggage car is less than or equal to the preset threshold, it indicates that the baggage car is far away from the robot, and at this time, the pose estimation may be performed by using the first pose estimation model with relatively low accuracy, that is, the rough pose estimation strategy.
The idea of the rough pose estimation strategy is to set the z, roll and pitch values in the 6-dimensional pose parameters [ x, y, z, roll, pitch, yaw ] to zero, consider the rotation angle yaw as a direction classification problem, then estimate the displacement x and y of the idle luggage van on the ground in combination with the depth information, and approach the luggage van to be collected through motion control.
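A minimal sketch of this coarse strategy, assuming a pinhole camera with illustrative intrinsics and the 8-way yaw classes of this embodiment: z, roll and pitch are fixed to zero, yaw is looked up from the class index, and (x, y) are back-projected from the depth value at the cart's pixel. The intrinsics and the ground-plane simplification are assumptions, not values from this application.

```python
import math

def coarse_pose(yaw_class, u, v, depth, fx=525.0, cx=320.0):
    """Coarse 6-DoF pose: z, roll, pitch fixed to zero; yaw from an
    8-way classifier (0, 45R, 90R, 135R, 180, 135L, 90L, 45L degrees);
    (x, y) back-projected from depth with a pinhole model.

    Camera intrinsics fx and cx are illustrative assumptions.
    """
    yaw_deg = [0, 45, 90, 135, 180, -135, -90, -45][yaw_class]
    x = (u - cx) * depth / fx   # lateral offset on the ground plane
    y = depth                   # forward distance from the camera
    return {"x": x, "y": y, "z": 0.0, "roll": 0.0, "pitch": 0.0,
            "yaw": math.radians(yaw_deg)}

# A cart centred in the image, 2 m away, facing 90 degrees right.
pose = coarse_pose(2, 320.0, 240.0, 2.0)
```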
In the embodiment of the application, an unused luggage van image can be obtained first and divided into blocks to construct a luggage van image data set. Each image block in the data set is then given a rotation angle label from one of several categories, so that each category corresponds to one rotation angle value. Finally, a deep convolution classification network model is trained with the labeled image blocks as training data to obtain the first pose estimation model. The specific steps are as follows:
(1) intercepting to obtain an image block of the idle luggage van and a corresponding depth image block of the idle luggage van by utilizing an outer frame of the idle luggage van output by the luggage van detection model, and constructing an image data set of the luggage van;
(2) and labeling the rotation angle in the yaw direction of the luggage van image data set, and dividing the rotation angle in the yaw direction into 8 categories, wherein the categories 0-7 respectively represent 0 degree, 45 degrees to the right, 90 degrees to the right, 135 degrees to the right, 180 degrees to the right, 135 degrees to the left, 90 degrees to the left and 45 degrees to the left. The data set may be divided into a training set and a test set;
(3) and training a deep convolution classification network by using the training set, and finely adjusting model parameters to obtain a model with optimal classification precision to obtain a first pose estimation model.
On this basis, the trained model is used to classify the yaw rotation angle of a distant idle luggage van. After the translation x and y are calculated from the depth information corresponding to the luggage van segmentation, a movement instruction can be sent to the robot motion control system to instruct the robot to move toward the luggage van according to the roughly estimated pose.
S604, if the perimeter of the outer frame of the luggage van is larger than the preset threshold value, performing pose estimation on the luggage van by adopting a preset second pose estimation model to obtain the pose of the luggage van;
in this embodiment, when the perimeter of the outer frame of the baggage car is greater than the preset threshold, the baggage car is close to the robot; at this time, pose estimation may be performed with a second pose estimation model of relatively higher accuracy, i.e., a fine pose estimation strategy. That is, the estimation accuracy of the first pose estimation model is lower than that of the second pose estimation model. The fine pose estimation strategy learns and estimates the pose of the luggage van with a deep learning method.
In this embodiment of the application, a depth camera may be used to respectively acquire a first baggage car image attached with a tag map and a second baggage car image not attached with the tag map, where the first baggage car image and the second baggage car image are both images of an unused baggage car, and then train a pose estimation network model with the first baggage car image and the second baggage car image as training data to obtain a second pose estimation model. The method comprises the following specific steps:
(1) construct a luggage van pose data set. A tag marker is pasted on the luggage van, and the depth camera is used to shoot images of the idle luggage van both with and without the tag marker. The tag marker is attached so that the depth camera can extract its features and thereby obtain the pose of the luggage van; this pose serves as the ground-truth annotation for pose training;
(2) input the images of the idle luggage carts, the corresponding pose information and the three-dimensional images of the luggage carts into the pose estimation network PoseCNN, and train PoseCNN to obtain the second pose estimation model of the luggage carts.
S605, driving a driving wheel of the luggage trolley collecting device to move to the right back of the luggage trolley according to the pose of the luggage trolley;
in this embodiment, after estimating the pose of the baggage car to be collected, a movement instruction may be sent to the robot motion control system to instruct the robot to move to the baggage car according to the estimated pose.
S606, acquiring a point cloud image of the luggage van by using a depth camera configured on the luggage van collecting device, and calculating the distance between the luggage van collecting device and the luggage van in real time according to the point cloud image;
after the robot reaches the rear of the luggage van, the servo motor can guide the luggage van collecting robot to be connected with the luggage van from the rear of the luggage van.
In order to compensate for errors, the present embodiment may adjust the control strategy according to the pose of the baggage car when docking with it, which requires the robot to obtain the pose of the baggage car in real time at a high frequency. Directly applying a visual servoing algorithm to such a system can cause problems such as the target leaving the field of view, because the non-holonomic constraints of the robot limit the direction of its velocity. In view of these limitations and requirements, the present embodiment may design a hierarchical, multi-stage visual servo algorithm for the baggage car collection robot, and implement docking between the robot and the baggage car by fusing multiple baggage car positioning modes and robot control modes.
In this embodiment, the multi-stage visual servo algorithm may be obtained by performing supervised learning on a preset visual servo algorithm, where a supervision signal of the supervised learning is a distance between the baggage car collection robot and the baggage car. The different distance intervals respectively correspond to one stage in the multi-stage visual servo algorithm.
Therefore, in specific implementation, when the driving robot runs to a position close to the luggage van, the depth camera can acquire a point cloud image of the luggage van, and the distance between the robot and the luggage van can be calculated in real time according to the point cloud image.
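As a simple stand-in for this real-time distance computation, the sketch below takes the centroid of the cart's point cloud (in the camera frame) and returns its Euclidean norm. A production system would first segment the cart's points out of the scene, which is assumed to have been done here.

```python
import math

def cart_distance(points):
    """Distance to the cart: Euclidean norm of the centroid of the
    cart's point cloud, given as (x, y, z) tuples in the camera frame.

    A simplified stand-in for the real-time computation described above.
    """
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    return math.sqrt(cx * cx + cy * cy + cz * cz)
```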
S607, determining a target stage visual servo algorithm corresponding to the distance interval to which the distance belongs, and adopting the target stage visual servo algorithm to connect and fix the luggage van and the luggage van collecting device;
according to the distance between the robot and the luggage van, a specific target stage visual servo algorithm can be determined, and then the luggage van and the robot are connected by adopting the stage visual servo algorithm.
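Stage selection by distance interval might look like the following sketch; the interval boundaries and stage names are illustrative assumptions, since the stages are not enumerated here.

```python
def servo_stage(distance):
    """Map the measured distance (metres) to one stage of the
    hierarchical multi-stage visual servo algorithm.

    Boundaries and stage names are assumptions for illustration.
    """
    if distance > 2.0:
        return "approach"   # coarse position-based servoing toward the cart
    if distance > 0.5:
        return "align"      # refine heading relative to the cart
    return "dock"           # fine image-based servoing for final contact
```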
In specific implementation, a target stage visual servo algorithm can be adopted to control a clamping mechanism of the luggage cart collecting robot to clamp the luggage cart, the clamped luggage cart is lifted, and the luggage cart is fixed on a connection mechanism of the luggage cart collecting robot.
By introducing an acceleration constraint, this embodiment makes the speed transitions of the stage-switching process smooth, reduces sudden stops and sharp accelerations of the robot, improves docking efficiency, and makes the robot's motion more predictable.
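The acceleration constraint can be sketched as a per-control-step clamp on the change in commanded speed; the control period and acceleration limit below are illustrative assumptions.

```python
def limit_acceleration(v_cmd, v_prev, dt=0.05, a_max=0.5):
    """Clamp the commanded speed so its change per control step never
    exceeds a_max (m/s^2), smoothing sudden stops and accelerations.

    dt (control period, s) and a_max are illustrative assumptions.
    """
    dv = max(-a_max * dt, min(a_max * dt, v_cmd - v_prev))
    return v_prev + dv
```

Applied every control cycle, a step command from 0 to 1 m/s becomes a ramp instead of a jump, which is exactly the smoothing behaviour described above.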
And S608, conveying the baggage car which is docked and fixed to a baggage car collecting point.
The docked stationary baggage car will be transported by the baggage car collection robot to a baggage car collection point.
In the embodiment of the application, the distance between the luggage van to be collected and the luggage van collecting device is calculated so that either rough or fine pose estimation can be selected, which speeds up pose estimation. In addition, this embodiment designs a hierarchical, multi-stage visual servo algorithm that controls the docking process according to different distances, so that the speed of the docking process can be adjusted flexibly, sudden stops and sharp accelerations of the luggage cart collecting equipment are reduced, and docking efficiency is improved.
It should be noted that, the sequence numbers of the steps in the foregoing embodiments do not mean the execution sequence, and the execution sequence of each process should be determined by the function and the inherent logic of the process, and should not constitute any limitation on the implementation process of the embodiments of the present application.
For ease of understanding, the baggage car collection method of the present embodiment will be described below with reference to a complete example.
Fig. 7 is a schematic diagram of a system to which the baggage car collection method of the present embodiment is applied. According to the architecture shown in fig. 7, the whole system includes two parts, namely a hardware architecture and a software architecture. Namely, the mechanical structure and configuration of the baggage car collecting device itself, and a software control process when the baggage car collecting device is used to collect baggage cars.
Referring to fig. 8, there is shown an operation process diagram of the baggage car collection method of the present embodiment. Taking the baggage car collection robot as an example, the robot may first be controlled to cruise along a certain route and continuously look for baggage cars, completing functions such as positioning and obstacle avoidance along the way. When a luggage van is detected, the robot enters the luggage van state estimation state. If it is determined that the luggage van needs to be collected, the robot enters the luggage van pose estimation state and the visual servo state so as to move behind the luggage van. When the relative distance between the robot and the luggage van reaches a preset value, the robot enters the luggage van lifting process to lift the luggage van. After lifting, the robot enters a navigation state and conveys the luggage van to a luggage van collection point. The robot puts down the luggage van after arriving at the collection point, completing the task of recovering one idle luggage van. After the task is completed, the robot returns to the cruising state to actively search for idle luggage vans again. These states form a cycle, so that the robot can continuously and repeatedly execute the task of recovering idle luggage vans to the collection point.
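The working cycle of this embodiment can be sketched as a simple state machine. The state names below paraphrase the states in this embodiment, and the linear transition table is a simplification: the real robot may branch, e.g. back to cruising when a detected cart turns out not to need collection.

```python
# Transition table for the assumed working cycle (a simplification of
# the state diagram of this embodiment).
NEXT = {
    "cruise": "state_estimate",        # cart detected -> estimate its state
    "state_estimate": "pose_estimate", # cart needs collecting -> estimate pose
    "pose_estimate": "visual_servo",   # move behind the cart
    "visual_servo": "lift",            # close enough -> lift the cart
    "lift": "navigate",                # carry it to the collection point
    "navigate": "drop",                # arrived -> put the cart down
    "drop": "cruise",                  # task done -> resume cruising
}

def run_cycle(start="cruise", steps=7):
    """Step through the cycle and return the visited states."""
    state = start
    trace = [state]
    for _ in range(steps):
        state = NEXT[state]
        trace.append(state)
    return trace
```

Running one full cycle returns the robot to the cruising state, matching the loop described above.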
Based on the robot software and hardware system provided by the embodiment, the luggage carts scattered in the environment such as an airport and the like can be automatically collected, and the cost for manually collecting the luggage carts and using the automatic luggage carts is reduced; meanwhile, aiming at the software and hardware systems, the positioning method, the luggage van identification method, the pose estimation method and the like provided by the embodiment can be suitable for robot positioning in the environments of airports, hospitals, markets and the like, and can accurately determine the state and the pose of the luggage van in the dense crowd environment; in addition, the grading multistage vision servo system of the embodiment can drive the robot to move to the correct operation position to operate the luggage van, and convey the luggage van to the designated collection point.
The embodiment of the application also discloses luggage cart collecting equipment, which comprises at least two driving wheels, at least two omnidirectional moving wheels, an equipment chassis supported by the driving wheels and the omnidirectional moving wheels, a lifting mechanism, a connection mechanism, an airborne computer, a clamping mechanism movably connected with the lifting mechanism and a depth camera in communication connection with the airborne computer, wherein the lifting mechanism, the connection mechanism and the airborne computer are fixed on the equipment chassis; wherein:
the depth camera is used for acquiring an image of the surrounding environment;
the onboard computer is used for detecting the surrounding environment according to the surrounding environment image, estimating the pose of the luggage van when the luggage van to be collected is detected to exist in the surrounding environment, obtaining the pose of the luggage van, and driving the driving wheel of the luggage van collecting device to move to the preset direction of the luggage van according to the pose of the luggage van;
the clamping mechanism is used for clamping the luggage van from the preset direction;
the lifting mechanism is used for lifting the clamped luggage van and fixing the luggage van on the connection mechanism of the luggage van collecting device;
the driving wheel is used for driving the luggage van collecting equipment to convey the docked and fixed luggage van to a luggage van collecting point under the instruction of the onboard computer.
The specific structure of the baggage car collecting device in this embodiment may refer to the axonometric view of the baggage car collecting device shown in fig. 2, which is not described again in this embodiment.
Referring to fig. 9, a schematic diagram of a baggage car collecting device according to an embodiment of the present application is shown, and the baggage car collecting device may be applied to a baggage car collecting apparatus, and specifically may include the following modules:
an image acquisition module 901, configured to acquire an ambient image and detect an ambient environment according to the ambient image;
a pose estimation module 902, configured to perform pose estimation on the baggage car when it is detected that the baggage car to be collected exists in the surrounding environment, so as to obtain a pose of the baggage car;
the moving module 903 is used for moving the luggage van to a preset position according to the pose of the luggage van;
a docking module 904, configured to dock and fix the baggage car and the baggage car collecting device from the preset orientation;
a transporting module 905, configured to transport the baggage car docked and fixed to a baggage car collection point.
In an embodiment of the present application, the baggage car collecting device may further include the following modules:
and the positioning module is used for acquiring positioning information and determining the position of the luggage van collecting equipment according to the positioning information.
In this embodiment, the positioning module may specifically include the following sub-modules:
the wheel type mileage counting submodule is used for counting wheel type mileage in the travelling process of the luggage van collecting equipment, and the wheel type mileage is the travelling distance of a driving wheel of the luggage van collecting equipment;
the visual mileage statistics submodule is used for collecting road surface texture information in the advancing process by adopting a depth camera configured on the luggage van collecting equipment and carrying out statistics on visual mileage according to the road surface texture information;
the actual mileage calculation submodule is used for calculating the actual mileage in the advancing process according to the wheel-type mileage and the visual mileage;
and the position determining submodule is used for determining the position of the luggage van collecting equipment based on the actual mileage, preset environment marker information and starting point position information, wherein the environment marker information is information of a fixed object on the travelling route of the luggage van collecting equipment, and the information is acquired in advance.
In this embodiment of the present application, the actual mileage calculating sub-module may specifically include the following units:
and the actual mileage calculation unit is used for performing data fusion on the wheeled mileage and the visual mileage by adopting an extended Kalman filter to obtain the actual mileage in the advancing process.
In this embodiment, the image capturing module 901 may specifically include the following sub-modules:
the image acquisition sub-module is used for acquiring surrounding environment images of the luggage van collection equipment in the advancing process;
the image detection submodule is used for detecting the surrounding environment image by adopting a preset luggage van detection model;
the object identification submodule is used for identifying whether the luggage van image contains an object image or a human body limb image when the surrounding environment image is detected to contain the luggage van image;
and the luggage car judging submodule is used for judging that the luggage car corresponding to the luggage car image is the luggage car to be collected if the luggage car image does not contain the object image or the human body limb image.
In the embodiment of the application, the luggage van detection model is generated by calling the following modules:
the image labeling module is used for acquiring images of a plurality of luggage carts, labeling each luggage cart in the images of the plurality of luggage carts and obtaining image position information and pixel information of each luggage cart in the corresponding image;
the three-dimensional image scanning module is used for scanning the unused luggage van by adopting the depth sensor to obtain a three-dimensional image of the unused luggage van;
and the luggage van detection model training module is used for carrying out model training on the deep learning model by adopting the three-dimensional image of the unused luggage van, the image position information and the pixel information of the plurality of luggage vans to obtain the luggage van detection model.
In this embodiment of the application, the surrounding environment image includes a baggage car image, and the pose estimation module 902 may specifically include the following sub-modules:
the outer frame circumference calculation submodule is used for counting the number of pixels of the outer frame of the luggage van in the image of the luggage van and calculating the circumference of the outer frame of the luggage van according to the number of the pixels of the outer frame of the luggage van;
the first position estimation submodule is used for estimating the position of the luggage van by adopting a preset first position estimation model if the perimeter of the outer frame of the luggage van is smaller than or equal to a preset threshold value, so as to obtain the position of the luggage van;
and the second position and posture estimation submodule is used for estimating the position and posture of the luggage van by adopting a preset second position and posture estimation model if the perimeter of the outer frame of the luggage van is larger than the preset threshold value, so as to obtain the position and posture of the luggage van, wherein the estimation precision of the first position and posture estimation model is smaller than that of the second position and posture estimation model.
In an embodiment of the present application, the first pose estimation model is generated by invoking the following modules:
the image blocking module is used for acquiring an unused luggage van image, blocking the unused luggage van image and constructing a luggage van image data set;
the image rotation module is used for carrying out rotation angle labeling of multiple categories on each image block in the luggage van image data set, wherein any category corresponds to a rotation angle value;
and the first pose estimation model training module is used for training a deep convolution classification network model by taking each marked image block as training data to obtain the first pose estimation model.
In an embodiment of the present application, the second pose estimation model is generated by invoking the following modules:
the luggage van image acquisition module is used for respectively acquiring a first luggage van image attached with a mark map and a second luggage van image not attached with the mark map by adopting a depth camera, wherein the first luggage van image and the second luggage van image are both images of unused luggage vans;
and the second position and posture estimation model training module is used for training a position and posture estimation network model by taking the first luggage van image and the second luggage van image as training data to obtain the second position and posture estimation model.
In this embodiment of the application, the preset orientation is directly behind the luggage cart, and the moving module 903 may specifically include the following submodule:
the moving submodule is used for driving the driving wheels of the luggage cart collecting device to move the device to the position directly behind the luggage cart according to the pose of the luggage cart.
In this embodiment, the docking module 904 may specifically include the following sub-modules:
the point cloud image acquisition submodule is used for acquiring a point cloud image of the luggage cart with a depth camera mounted on the luggage cart collecting device, and calculating the distance between the device and the luggage cart in real time from the point cloud image;
the target-stage visual servo algorithm determining submodule is used for determining the target-stage visual servo algorithm corresponding to the distance interval to which that distance belongs, where each distance interval corresponds to one stage of a multi-stage visual servo algorithm; the multi-stage visual servo algorithm is obtained by supervised learning on a preset visual servo algorithm, and the supervision signal of the supervised learning is the distance between the luggage cart collecting device and the luggage cart;
and the docking and fixing submodule is used for docking and fixing the luggage cart to the luggage cart collecting device with the target-stage visual servo algorithm.
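The distance-gated stage selection performed by these submodules might look like the following sketch. The stage names, the interval boundaries, and the nearest-point distance measure are all hypothetical; the patent only states that each distance interval maps to one servo stage:

```python
import numpy as np

# Hypothetical stage boundaries in metres: far approach, mid alignment,
# and the final docking stage closest to the cart.
STAGES = [
    (1.5, float("inf"), "far_approach"),
    (0.5, 1.5, "mid_alignment"),
    (0.0, 0.5, "final_docking"),
]

def cart_distance(point_cloud):
    """Distance to the cart, taken as the nearest point in the depth
    camera's point cloud (an N x 3 array of XYZ points)."""
    return float(np.min(np.linalg.norm(point_cloud, axis=1)))

def select_servo_stage(point_cloud):
    """Map the real-time cart distance to its visual-servo stage."""
    d = cart_distance(point_cloud)
    for lo, hi, stage in STAGES:
        if lo <= d < hi:
            return stage, d
    raise ValueError("distance outside all stage intervals")
```

Each returned stage label would index the corresponding learned controller in the multi-stage servo algorithm.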
In this embodiment of the present application, the docking and fixing submodule may specifically include the following unit:
the docking and fixing unit is used for controlling, with the target-stage visual servo algorithm, a clamping mechanism of the luggage cart collecting device to clamp the luggage cart, lift the clamped luggage cart, and fix it on the docking mechanism of the luggage cart collecting device.
As the apparatus embodiment is substantially similar to the method embodiment, its description is kept brief; for relevant details, refer to the description of the method embodiment.
Referring to fig. 10, a schematic view of a baggage car collection apparatus according to an embodiment of the present application is shown. As shown in fig. 10, the baggage car collecting apparatus 1000 of the present embodiment includes: a processor 1010, a memory 1020, and a computer program 1021 stored in the memory 1020 and operable on the processor 1010. The processor 1010, when executing the computer program 1021, implements the steps of the baggage car collection method described above in various embodiments, such as the steps S101 to S105 shown in fig. 1. Alternatively, the processor 1010, when executing the computer program 1021, implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the modules 901 to 905 shown in fig. 9.
Illustratively, the computer program 1021 may be partitioned into one or more modules/units that are stored in the memory 1020 and executed by the processor 1010 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing certain functions, which may be used to describe the execution of the computer program 1021 in the baggage car collection device 1000. For example, the computer program 1021 may be segmented into an image acquisition module, a pose estimation module, a movement module, a docking module, and a transport module, each of which functions specifically as follows:
the image acquisition module is used for acquiring an image of the surrounding environment and detecting the surroundings from that image;
the pose estimation module is used for estimating the pose of a luggage cart when a luggage cart to be collected is detected in the surroundings, to obtain the pose of the luggage cart;
the movement module is used for moving to a preset orientation relative to the luggage cart according to the pose of the luggage cart;
the docking module is used for docking and fixing the luggage cart to the luggage cart collecting device from the preset orientation;
and the transport module is used for transporting the docked and fixed luggage cart to a luggage cart collection point.
The baggage car collection device 1000 may include, but is not limited to, the processor 1010 and the memory 1020. Those skilled in the art will appreciate that fig. 10 is merely an example of the baggage car collection device 1000 and does not limit it; the device may include more or fewer components than shown, combine certain components, or use different components. For example, the baggage car collection device 1000 may also include input and output devices, network access devices, a bus, and the like.
The processor 1010 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 1020 may be an internal storage unit of the baggage car collection device 1000, such as a hard disk or a memory of the baggage car collection device 1000. The memory 1020 may also be an external storage device of the baggage car collection device 1000, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the baggage car collection device 1000. Further, the memory 1020 may also include both an internal storage unit and an external storage device of the baggage car collection device 1000. The memory 1020 is used to store the computer program 1021 and other programs and data required by the baggage car collection device 1000. The memory 1020 may also be used to temporarily store data that has been output or is to be output.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same. Although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (14)

1. A luggage cart collecting method is characterized by being applied to a luggage cart collecting device and comprises the following steps:
acquiring an ambient environment image, and detecting the ambient environment according to the ambient environment image, wherein the ambient environment image comprises a luggage van image;
when the luggage van to be collected is detected to exist in the surrounding environment, carrying out pose estimation on the luggage van to obtain the pose of the luggage van;
moving to a preset orientation of the luggage van according to the pose of the luggage van;
docking and fixing the luggage van to the luggage van collecting device from the preset orientation;
transporting the docked and fixed luggage van to a luggage van collection point;
wherein the performing pose estimation on the luggage van to obtain the pose of the luggage van comprises:
counting the number of pixels of the outer frame of the luggage van in the image of the luggage van, and calculating the perimeter of the outer frame of the luggage van according to the number of pixels of the outer frame of the luggage van;
if the perimeter of the outer frame of the luggage van is smaller than or equal to a preset threshold value, performing pose estimation on the luggage van by adopting a preset first pose estimation model to obtain the pose of the luggage van;
if the perimeter of the outer frame of the luggage van is larger than the preset threshold value, performing pose estimation on the luggage van by adopting a preset second pose estimation model to obtain the pose of the luggage van, wherein the estimation accuracy of the first pose estimation model is lower than that of the second pose estimation model.
2. The method of claim 1, further comprising:
and acquiring positioning information, and determining the position of the luggage van collecting equipment according to the positioning information.
3. The method of claim 2, wherein the acquiring positioning information and determining the position of the luggage van collecting device according to the positioning information comprises:
counting wheel type mileage in the travelling process of the luggage van collecting device, wherein the wheel type mileage is the travelling distance of a driving wheel of the luggage van collecting device;
collecting pavement texture information in the advancing process by adopting a depth camera configured on the luggage van collecting equipment, and counting visual mileage according to the pavement texture information;
calculating the actual mileage in the traveling process according to the wheeled mileage and the visual mileage;
and determining the position of the luggage van collecting device based on the actual mileage, preset environmental marker information and starting point position information, wherein the environmental marker information is information of a fixed object on the travelling route of the luggage van collecting device, which is acquired in advance.
4. The method of claim 3, wherein the calculating the actual mileage in the traveling process according to the wheeled mileage and the visual mileage comprises:
and performing data fusion on the wheel-type mileage and the visual mileage by adopting an extended Kalman filter to obtain the actual mileage in the advancing process.
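A one-dimensional sketch of the extended Kalman filter fusion named above, treating each wheel-odometry increment as the prediction and the visual mileage as the measurement. The noise variances are made-up values, and a real implementation would estimate the full planar pose rather than a scalar mileage (for a scalar state the EKF reduces to a linear Kalman filter):

```python
class OdometryEKF:
    """Scalar Kalman fusion of wheel odometry and visual odometry."""

    def __init__(self, q=0.05, r=0.02):
        self.x = 0.0   # fused mileage estimate
        self.p = 1.0   # estimate variance
        self.q = q     # wheel odometry (process) noise variance, assumed
        self.r = r     # visual odometry (measurement) noise variance, assumed

    def predict(self, wheel_delta):
        """Propagate the estimate with a wheel-odometry increment."""
        self.x += wheel_delta
        self.p += self.q

    def update(self, visual_mileage):
        """Correct the estimate with the visual-odometry measurement."""
        k = self.p / (self.p + self.r)          # Kalman gain
        self.x += k * (visual_mileage - self.x)
        self.p *= (1.0 - k)
        return self.x
```

The fused mileage, combined with the preset landmark and start-point information of claim 3, would yield the device position.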
5. The method of claim 1, wherein the acquiring of the ambient image from which the ambient is detected comprises:
collecting surrounding environment images of the luggage van collecting device in the advancing process, and detecting the surrounding environment images by adopting a preset luggage van detection model;
when the surrounding environment image is detected to contain a luggage van image, identifying whether the luggage van image contains an object image or a human body limb image;
and if the luggage van image does not contain the object image or the human body limb image, judging that the luggage van corresponding to the luggage van image is the luggage van to be collected.
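The detection flow of claim 5 (detect cart images, then keep only carts that carry no object and no human limb) reduces to a filter like the one below. Both callables stand in for the preset detection and recognition models and are hypothetical:

```python
def find_collectible_carts(frame, detect_carts, contains_obstruction):
    """Return detected cart regions that carry no luggage and no human limb,
    i.e. carts judged free to be collected.

    detect_carts: stand-in for the preset cart detection model.
    contains_obstruction: stand-in for the object / limb recognizer.
    """
    collectible = []
    for cart_region in detect_carts(frame):
        if not contains_obstruction(cart_region):
            collectible.append(cart_region)
    return collectible
```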
6. The method of claim 5, wherein the baggage car detection model is generated by:
acquiring images of a plurality of luggage carts, labeling each luggage cart in the images of the plurality of luggage carts, and obtaining image position information and pixel information of each luggage cart in the corresponding image;
scanning an unused luggage van by adopting a depth sensor to obtain a three-dimensional image of the unused luggage van;
and performing model training on the deep learning model by adopting the three-dimensional image of the unused luggage van, the image position information and the pixel information of the plurality of luggage vans to obtain the luggage van detection model.
7. The method of claim 1, wherein the first pose estimation model is generated by:
obtaining an unused luggage van image, partitioning the unused luggage van image, and constructing a luggage van image data set;
carrying out rotation angle labeling of multiple categories on each image block in the luggage van image data set, wherein any category corresponds to a rotation angle value;
and training a deep convolution classification network model by using the marked image blocks as training data to obtain the first pose estimation model.
8. The method of claim 1, wherein the second pose estimation model is generated by:
respectively acquiring a first luggage vehicle image attached with a mark map and a second luggage vehicle image not attached with the mark map by adopting a depth camera, wherein the first luggage vehicle image and the second luggage vehicle image are both images of unused luggage vehicles;
and training a pose estimation network model by taking the first luggage van image and the second luggage van image as training data to obtain the second pose estimation model.
9. The method according to any one of claims 1 to 8, wherein the preset orientation is directly behind the luggage cart, and the moving to the preset orientation of the luggage cart according to the pose of the luggage cart comprises:
and driving a driving wheel of the luggage trolley collecting device to move to the position right behind the luggage trolley according to the pose of the luggage trolley.
10. The method of claim 9, wherein the docking and fixing the luggage van to the luggage van collecting device from the preset orientation comprises:
acquiring a point cloud image of the luggage van by using a depth camera arranged on the luggage van collecting device, and calculating the distance between the luggage van collecting device and the luggage van in real time according to the point cloud image;
determining a target stage visual servo algorithm corresponding to the distance interval to which the distance belongs; the different distance intervals respectively correspond to one stage in a multi-stage visual servo algorithm, the multi-stage visual servo algorithm is obtained by performing supervised learning on a preset visual servo algorithm, and a supervision signal of the supervised learning is the distance between the luggage van collecting device and the luggage van;
and adopting the target stage visual servo algorithm to fixedly connect the luggage van and the luggage van collecting equipment.
11. The method of claim 10, wherein the docking and fixing the luggage van to the luggage van collecting device using the target-stage visual servo algorithm comprises:
and controlling a clamping mechanism of the luggage cart collecting device to clamp the luggage cart by adopting the target stage visual servo algorithm, lifting the clamped luggage cart, and fixing the luggage cart on a connecting mechanism of the luggage cart collecting device.
12. The luggage trolley collecting device is characterized by comprising at least two driving wheels, at least two omnidirectional moving wheels, a device chassis supported by the driving wheels and the omnidirectional moving wheels, a lifting mechanism fixed on the device chassis, a connecting mechanism, an airborne computer, a clamping mechanism movably connected with the lifting mechanism, and a depth camera in communication connection with the airborne computer; wherein:
the depth camera is used for acquiring surrounding environment images, and the surrounding environment images comprise luggage van images;
the onboard computer is used for detecting the surrounding environment from the surrounding environment image; when a luggage van to be collected is detected in the surrounding environment, counting the number of pixels on the outer frame of the luggage van in the luggage van image and calculating the perimeter of the outer frame from that pixel count; if the perimeter of the outer frame is smaller than or equal to a preset threshold, performing pose estimation on the luggage van with a preset first pose estimation model to obtain the pose of the luggage van; if the perimeter of the outer frame is larger than the preset threshold, performing pose estimation with a preset second pose estimation model; and driving a driving wheel of the luggage van collecting device to move to the preset orientation of the luggage van according to the pose of the luggage van; wherein the estimation accuracy of the first pose estimation model is lower than that of the second pose estimation model;
the clamping mechanism is used for clamping the luggage van from the preset direction;
the lifting mechanism is used for lifting the clamped luggage van and fixing the luggage van on the connection mechanism of the luggage van collecting device;
the driving wheel is used for driving the luggage van collecting equipment to convey the docked and fixed luggage van to a luggage van collecting point under the instruction of the onboard computer.
13. A baggage car collection device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor when executing the computer program implements a baggage car collection method according to any one of claims 1 to 11.
14. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out a baggage car collection method according to any one of claims 1 to 11.
CN201911274337.0A 2019-12-12 2019-12-12 Luggage cart collecting method and luggage cart collecting equipment Active CN111071963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911274337.0A CN111071963B (en) 2019-12-12 2019-12-12 Luggage cart collecting method and luggage cart collecting equipment

Publications (2)

Publication Number Publication Date
CN111071963A CN111071963A (en) 2020-04-28
CN111071963B true CN111071963B (en) 2020-09-18

Family

ID=70314040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911274337.0A Active CN111071963B (en) 2019-12-12 2019-12-12 Luggage cart collecting method and luggage cart collecting equipment

Country Status (1)

Country Link
CN (1) CN111071963B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114954991B (en) * 2021-12-10 2023-03-14 昆明理工大学 Automatic transfer system of airport luggage handcart

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6168367B1 (en) * 1997-07-31 2001-01-02 Coy J. Robinson Shopping cart collection vehicle and method
CN106741028A (en) * 2016-12-05 2017-05-31 四川西部动力机器人科技有限公司 A kind of airport Intelligent baggage car
CN106927395A (en) * 2017-05-12 2017-07-07 谜米机器人自动化(上海)有限公司 Automatic guided vehicle and unmanned handling system
CN206561754U (en) * 2017-01-18 2017-10-17 上海卓仕物流科技股份有限公司 A kind of luggage truck embraces folder
CN108236777A (en) * 2018-01-08 2018-07-03 深圳市易成自动驾驶技术有限公司 It picks up ball method, pick up ball vehicle and computer readable storage medium
CN108466268A (en) * 2018-03-27 2018-08-31 苏州大学 A kind of freight classification method for carrying, system and mobile robot and storage medium

Similar Documents

Publication Publication Date Title
US11548403B2 (en) Autonomous vehicle paletization system
CN109945858B (en) Multi-sensing fusion positioning method for low-speed parking driving scene
CN108051002B (en) Transport vehicle space positioning method and system based on inertial measurement auxiliary vision
Xue et al. A vision-centered multi-sensor fusing approach to self-localization and obstacle perception for robotic cars
US20210209543A1 (en) Directing secondary delivery vehicles using primary delivery vehicles
CN110278405A (en) A kind of lateral image processing method of automatic driving vehicle, device and system
CN105300403A (en) Vehicle mileage calculation method based on double-eye vision
US20210042542A1 (en) Using captured video data to identify active turn signals on a vehicle
CN112082567B (en) Map path planning method based on combination of improved Astar and gray wolf algorithm
CN109964149B (en) Self-calibrating sensor system for wheeled vehicles
CN102608998A (en) Vision guiding AGV (Automatic Guided Vehicle) system and method of embedded system
Liu et al. Deep learning-based localization and perception systems: approaches for autonomous cargo transportation vehicles in large-scale, semiclosed environments
CN110268417A (en) Mesh calibration method is identified in camera review
CN110160528B (en) Mobile device pose positioning method based on angle feature recognition
CN111071963B (en) Luggage cart collecting method and luggage cart collecting equipment
CN111708010A (en) Mobile equipment positioning method, device and system and mobile equipment
CN115511228B (en) Intelligent dispatching system and method for unmanned logistics vehicle passing in park
CN115244585A (en) Method for controlling a vehicle on a cargo yard, travel control unit and vehicle
Zhao et al. An ISVD and SFFSD-based vehicle ego-positioning method and its application on indoor parking guidance
JP7227849B2 (en) Trajectory generator
Butdee et al. Automatic guided vehicle control by vision system
CN113298044B (en) Obstacle detection method, system, device and storage medium based on positioning compensation
Nowak et al. Vision-based positioning of electric buses for assisted docking to charging stations
EP4239616A1 (en) Object tracking device and object tracking method
CN117635721A (en) Target positioning method, related system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right (effective date of registration: 20210311)
Patentee before: LianBo Intelligent Technology Co.,Ltd.; Address before: Room 402, Jardine Plaza, 1 Connaught Plaza, central, Hong Kong, China
Patentee after: Yuanhua Intelligent Technology (Shenzhen) Co.,Ltd.; Address after: No 802 Shenzhen Research Institute Chinese University of Hong Kong No 10 Yuexing 2nd Road Gaoxin community Yuehai street Nanshan District Shenzhen City Guangdong Province
TR01 Transfer of patent right (effective date of registration: 20210318)
Patentee before: Yuanhua Intelligent Technology (Shenzhen) Co.,Ltd.; Address before: No.802, Shenzhen Research Institute, Chinese University of Hong Kong, 10 Yuexing 2nd Road, high tech Zone community, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000
Patentee after: Shenzhen Wenyuan laboratory Co.,Ltd.; Address after: 803, Shenzhen Research Institute, Chinese University of Hong Kong, 10 Yuexing 2nd Road, high tech Zone community, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000