CN110716559A - Comprehensive control method for shopping mall and supermarket goods picking robot - Google Patents


Info

Publication number
CN110716559A
CN110716559A (application CN201911153026.9A; granted as CN110716559B)
Authority
CN
China
Prior art keywords
robot
obstacle
goods
target
shelf
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911153026.9A
Other languages
Chinese (zh)
Other versions
CN110716559B (en)
Inventor
张建华
何伟
赵爱迪
冯琦
姜旭
周有杰
张垚楠
张霖
Current Assignee
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date
Filing date
Publication date
Application filed by Hebei University of Technology
Priority to CN201911153026.9A
Publication of CN110716559A
Application granted
Publication of CN110716559B
Legal status: Active

Classifications

    • G — PHYSICS
      • G05 — CONTROLLING; REGULATING
        • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
          • G05D1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
            • G05D1/02 — Control of position or course in two dimensions
              • G05D1/021 — specially adapted to land vehicles
                • G05D1/0212 — with means for defining a desired trajectory
                  • G05D1/0214 — in accordance with safety or protection criteria, e.g. avoiding hazardous areas
                  • G05D1/0221 — involving a learning process
                  • G05D1/0223 — involving speed control of the vehicle
                • G05D1/0231 — using optical position detecting means
                  • G05D1/0238 — using obstacle or wall sensors
                    • G05D1/024 — in combination with a laser
                  • G05D1/0246 — using a video camera in combination with image processing means
                    • G05D1/0253 — extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
                • G05D1/0255 — using acoustic signals, e.g. ultra-sonic signals
                • G05D1/0257 — using a radar
                • G05D1/0276 — using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Acoustics & Sound (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a comprehensive control method for a shopping mall and supermarket picking robot, comprising the following steps: 1) constructing an environment map by combining a laser radar and a monocular camera: the environment map is built incrementally with the gmapping algorithm, loop detection is performed on the environment image of the current frame, and the position coordinates of the shelves are calculated with a Bayesian filter algorithm during map construction; 2) planning paths by combining global optimal path planning with local uniform-speed path planning: global optimal path planning uses a fused A* and DWA algorithm, while local uniform-speed path planning uses uniform straight-line travel; 3) real-time dynamic obstacle avoidance based on ultrasonic sensors, comprising non-picking obstacle avoidance and picking obstacle avoidance; 4) target recognition based on the monocular camera, including image annotation and training of the data set. The method enables the robot to pick and deliver goods in large scenes, greatly reduces the demand for manpower, and improves adaptability to the scene.

Description

Comprehensive control method for shopping mall and supermarket goods picking robot
Technical Field
The invention relates to the field of automatic control of robots, in particular to a comprehensive control method for a shopping mall and supermarket goods picking robot.
Background
With service robots gradually entering people's lives, and particularly with the rapid development of e-commerce in recent years, the picking robot has emerged. The picking robot can automatically locate, identify and pick goods according to a picking order and convey the picked goods to a designated position, thereby replacing heavy manual labor and realizing unmanned picking operation. To ensure that the picking robot completes these actions smoothly, a control method that coordinates the work of all parts of the picking robot is essential.
The Chinese patent application No. 201811371601.8 discloses a method, apparatus, terminal, system and storage medium for robot-based regional goods sorting, in which the robot acquires the location information of the goods, maps it to a corresponding robot location, and then plans the robot's travel path.
The Chinese patent application No. 201710295472.8 discloses a robot control system comprising an obstacle detection module, an obstacle processing module and a navigation control module; it detects whether an obstacle is moving by background modeling or the frame-difference method. A static obstacle affects path planning, and the robot walks around it; a dynamic obstacle does not affect path planning, and the robot decelerates when approaching it and attempts to pass once it has moved. Because the robot must wait for obstacles to clear on their own before proceeding, picking efficiency suffers, so the method is unsuitable for obstacle-rich environments such as shopping malls and supermarkets.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a comprehensive control method for a shopping mall and supermarket picking robot that integrates map construction, path planning, dynamic obstacle avoidance, target identification and information communication; it enables the robot to pick and deliver goods in large scenes, greatly reduces the demand for manpower, and improves adaptability to the scene.
The technical scheme adopted by the invention for solving the technical problems is to provide a comprehensive control method for a shopping mall and supermarket picking robot, which comprises the following contents:
firstly, constructing an environment map by combining a laser radar and a monocular camera; incrementally constructing an environment map by using a gmapping algorithm, performing loop detection on an environment image of a current frame, and calculating the position coordinates of the goods shelf by using a Bayesian filter algorithm in the process of constructing the environment map;
secondly, planning paths by combining global optimal path planning with local uniform-speed path planning; global optimal path planning uses a fused A* and DWA algorithm, and local uniform-speed path planning uses uniform straight-line travel;
thirdly, real-time dynamic obstacle avoidance based on ultrasonic sensors, comprising non-picking obstacle avoidance and picking obstacle avoidance, which prevents the robot from mis-picking or missing picks because of obstacle-avoidance detours;
and fourthly, target recognition based on the monocular camera, including image annotation and training of a data set.
The process of calculating the position coordinates of the shelf in the first step is as follows: the upper computer detects the goods shelf by adopting an SSD algorithm; after the goods shelf is detected, the position coordinates of the goods shelf are calculated by combining the relative installation positions of the monocular camera and the laser radar and utilizing a Bayes filter algorithm, wherein the Bayes filter algorithm meets the following formula 1):
x_t \sim p(x_t \mid w_{1:t}, v_{1:t})    1)

In formula 1), x_t is the position coordinate of the shelf at time t; w_{1:t} is the position of the shelf relative to the laser radar from the initial state to time t; v_{1:t} is the position of the shelf relative to the monocular camera from the initial state to time t; p(x_t | w_{1:t}, v_{1:t}) expresses the shelf position in terms of its positions relative to the laser radar and the monocular camera.
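As a minimal sketch of how such a fusion could work (assuming independent Gaussian measurement noise; the function name and scalar-variance model are hypothetical, not from the patent), the shelf position measured relative to the lidar and relative to the camera can be combined by a precision-weighted Bayes update:

```python
import numpy as np

def fuse_shelf_position(w_est, w_var, v_est, v_var):
    """Fuse two noisy shelf-position estimates: w_est (relative to the
    laser radar) and v_est (relative to the monocular camera), each with
    a scalar variance. The Gaussian product rule weights each sensor by
    its precision, so the more certain sensor dominates the result."""
    w_est = np.asarray(w_est, dtype=float)
    v_est = np.asarray(v_est, dtype=float)
    k = v_var / (w_var + v_var)            # gain toward the lidar estimate
    fused = k * w_est + (1.0 - k) * v_est  # precision-weighted average
    fused_var = (w_var * v_var) / (w_var + v_var)
    return fused, fused_var
```

With equal variances this reduces to the midpoint of the two estimates; the patent's Bayes filter additionally conditions on the whole histories w_{1:t} and v_{1:t} over time.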
Global optimal path planning is applied to the paths from the robot's starting position to the starting point of the first target shelf, from the end point of one target shelf to the starting point of the next, and from the picking-completion position to the transfer box; local uniform-speed path planning is applied to the path from the starting point of a target shelf to its end point.
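A minimal grid-based A* search illustrates the global-planning half of this scheme (a sketch only: the patent fuses A* with DWA, whose local-dynamics component is omitted here, and the grid representation and function name are hypothetical):

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected occupancy grid (0 = free, 1 = occupied)
    with a Manhattan-distance heuristic. Returns the cell path from start
    to goal, or None when the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]          # (f-score, cell) priority queue
    came, g = {}, {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:              # reconstruct path by walking back
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    came[nxt] = cur
                    h = abs(nxt[0] - goal[0]) + abs(nxt[1] - goal[1])
                    heapq.heappush(open_set, (ng + h, nxt))
    return None
```

The returned cell sequence would be handed to the local planner; along a shelf segment the robot instead follows a straight line at constant speed.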
The image labeling process in the fourth step is as follows: first, an equal number of images of each commodity is selected for manual labeling, providing an initial training data set for the system;
then the remaining images of all commodities are labeled automatically by fluid annotation: a Q-learning thread computes the pixel values on both sides of each fluid-annotation boundary, evaluates the difference between them, calculates a division gain, and, with a preset gain threshold as the evaluation standard, selects the images whose labels qualify; the automatically labeled images are then manually re-screened to complete the labeling of all images; all images with qualified labels form the data set;
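The gain evaluation can be sketched as follows (a simplification under assumed conventions: grayscale image, boundary given as (row, col) pixels, horizontal neighbors compared; the function names and threshold value are illustrative, not the patent's Q-learning implementation):

```python
import numpy as np

def boundary_gain(image, boundary_pixels):
    """Score an automatically drawn label boundary: for each boundary
    pixel, compare the pixel values immediately left and right of it.
    A large mean difference means the boundary separates two distinct
    regions, i.e. a high division gain."""
    h, w = image.shape
    diffs = []
    for row, col in boundary_pixels:
        if 0 < col < w - 1:
            diffs.append(abs(float(image[row, col + 1]) - float(image[row, col - 1])))
    return float(np.mean(diffs)) if diffs else 0.0

def accept_label(image, boundary_pixels, gain_threshold=30.0):
    """Keep the automatic label only when its gain clears the threshold."""
    return boundary_gain(image, boundary_pixels) >= gain_threshold
```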
training the data set on the distributed machine-learning system Spark through the darknet scripts bundled with YOLO; target detection is performed with a YOLOv3 thread, and the YOLOv3 recognition results are optimized with ResNet-101.
The non-picking obstacle avoidance process is as follows: (1) while the robot moves from its starting position to the starting point of the target shelf, or from the end point of the previous target shelf to the starting point of the next, if the ultrasonic sensor at the front end detects an obstacle, the robot immediately stops and waits for the obstacle to clear; (2) after a period of time, the front-end ultrasonic sensor checks again whether the obstacle has cleared; if it has, the robot continues forward along the global optimal path; if it remains, the ultrasonic sensor at the rear end checks for an obstacle behind the robot; (3) if an obstacle also exists at the rear end, the robot reports the obstacle; if not, the robot backs up a certain distance, and the ultrasonic sensor on the left side of the front end checks for an obstacle; if there is none, the robot detours to the left to avoid the obstacle, then continues along the global optimal path and resumes step (1); if the left side is blocked, the ultrasonic sensor on the right side of the front end checks; if the right side is clear, the robot detours to the right, then continues along the global optimal path and resumes step (1); if the right side is also blocked, the robot repeats step (3) until an obstacle appears at its rear end, whereupon it reports the obstacle;
Picking obstacle avoidance: (1) while the robot moves from the starting point of the target shelf to its end point, if the ultrasonic sensor at the front end detects an obstacle, the robot immediately stops and waits for the obstacle to clear; (2) after a period of time, the front-end ultrasonic sensor checks again whether the obstacle has cleared; if it has, the robot continues forward along the preset path; if it remains, the ultrasonic sensor at the rear end checks for an obstacle behind the robot; (3) if an obstacle also exists at the rear end, the robot reports the obstacle; if not, the robot backs up a certain distance, and the ultrasonic sensor on the left side of the front end checks for an obstacle; if there is none, the robot detours to the left, moves on to pick at the next target shelf, and resumes step (1), the detour meaning that picking at the current target shelf has failed; if the left side is blocked, the ultrasonic sensor on the right side of the front end checks; if the right side is clear, the robot detours to the right, moves on to pick at the next target shelf, and resumes step (1), again meaning that picking at the current target shelf has failed; if the right side is also blocked, the robot repeats step (3) until an obstacle appears at its rear end, whereupon it reports the obstacle. After the remaining target shelves have been visited, the robot checks the goods catalog; if any target goods remain undetected, the robot returns to the target shelves where picking failed and picks again.
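One decision step shared by both obstacle-avoidance procedures can be condensed into a single function (a sketch: the sensor flags are booleans taken after the wait period, True meaning that sensor still sees an obstacle, and the action names are illustrative, not from the patent):

```python
def avoidance_action(front, rear, front_left, front_right, picking):
    """One decision step of the obstacle-avoidance flow described above.
    Returns the robot's next action given the four ultrasonic readings
    and whether the robot is currently on a picking pass."""
    if not front:
        return "resume_path"          # obstacle cleared: continue on path
    if rear:
        return "report_obstacle"      # blocked front and rear: report it
    # back up, then try to detour: left side first, then right side
    if not front_left:
        return "fail_pick_and_detour_left" if picking else "detour_left"
    if not front_right:
        return "fail_pick_and_detour_right" if picking else "detour_right"
    return "retry"                    # both sides blocked: repeat step (3)
```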
The method also comprises real-time communication based on the ROS robot operating system, namely a communication mechanism of the robot control system;
the robot is started, an environment map is constructed by using a laser radar and a monocular camera, the upper computer calculates the coordinates of all goods shelves, and the robot acquires a commodity catalog to begin picking up goods; estimating the position of a target shelf according to the coordinates of all shelves, reading the starting point and the end point of the target shelf, calculating the starting point and the end point of the robot according to the starting point and the end point of the target shelf, and performing local uniform path planning to enable the robot to reach the starting point of an initial target shelf; the robot moves at a constant speed along the target shelf, and the monocular camera performs target identification;
When the target goods in the catalog are not present on the target shelf, the robot continues moving; when a target commodity is present, the robot checks whether it is at the center of the field of view: if not, the robot continues moving; if it is, the robot stops and its mechanical arm begins picking. After a successful pick, the item is removed from the goods catalog; if the pick fails, the robot backs up a certain distance and picks again;
the robot detects obstacles with the ultrasonic sensors while moving: if no obstacle lies in the direction of travel, it continues; if an obstacle does, the robot stops and waits for it to clear, then checks the direction of travel again after a period of time; if the obstacle is gone, the robot continues moving; if not, the robot judges whether it is currently picking and starts the corresponding obstacle avoidance measures. This repeats until the robot has picked every item in the goods catalog.
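The overall flow above can be mimicked with a toy loop (hypothetical data structures: the catalog is a list of item names and each shelf is the list of goods the camera would recognize on it; obstacle handling and re-picking are omitted):

```python
def run_picking(catalog, shelves):
    """Toy simulation of the control loop: the robot visits each shelf
    in turn, 'recognizes' the goods on it, removes picked items from
    the catalog, and returns the items still unpicked at the end."""
    remaining = set(catalog)
    for shelf_goods in shelves:          # one uniform-speed pass per shelf
        for item in shelf_goods:         # items entering the field of view
            if item in remaining:
                remaining.discard(item)  # arm picks; drop from the catalog
    return sorted(remaining)
```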
Compared with the prior art, the invention has the beneficial effects that:
1) aiming at the large scene environment of a market and a supermarket, the environment map is constructed in a mode of combining a laser radar and a monocular camera, loop detection is carried out on environment information, and mismatching is avoided; and calculating the position coordinates of the goods shelf in the environment map by adopting a Bayesian filtering algorithm to realize automatic marking of the goods shelf.
2) The traditional path planning method only considers time or path optimization and neglects the path requirement needed when the robot executes tasks; the invention provides a mode of combining global optimal path planning and local uniform path planning, so that the robot is ensured to walk at a uniform speed in the goods picking process in front of a goods shelf, the running stability of the robot is improved, and a monocular camera can more accurately identify target goods.
3) Aiming at the crowded shopping mall and supermarket environment, the robot does not merely avoid obstacles but combines obstacle avoidance with target recognition during picking, providing different obstacle avoidance modes for the picking and non-picking scenarios and ensuring that the robot neither mis-picks nor misses picks because of obstacle-avoidance detours.
4) The traditional YOLO algorithm is improved: ResNet-101 is used to optimize the recognition results of YOLOv3 target recognition, improving the accuracy of recognizing small-volume commodities under varied illumination intensities and backgrounds; target identification thus better suits diverse commodities and irregular placement.
5) A mechanism for cooperative work of various systems is provided based on an ROS robot operating system, so that the stable operation of the whole system is ensured; the invention integrates map construction, path planning, dynamic obstacle avoidance, target identification and information communication, can realize picking and delivery of the robot in a large scene, greatly reduces the requirement on manpower, and improves the adaptability of the scene.
6) The invention is suitable for unmanned operation in a supermarket or storage environment.
Drawings
FIG. 1 is a communication mechanism of a robot of the present invention;
fig. 2 is a flow chart of obstacle avoidance of the robot of the present invention.
Detailed Description
In order to make the present invention more comprehensible, the present invention will be further described with reference to the following embodiments and accompanying drawings.
The invention provides a comprehensive control method for a shopping mall and supermarket picking robot, which comprises the following contents:
the picking system of the picking robot comprises a laser radar, a monocular camera, four ultrasonic sensors, an Arduino development board and an upper computer; the laser radar and the monocular camera are both arranged at the front end of the robot, and the scanning plane of the laser radar is as high as the plane where the shelf laminate is located; two ultrasonic sensors are respectively arranged at the front end and the rear end of the robot, and the other two ultrasonic sensors are arranged at the left side and the right side of the front end of the robot and respectively form an included angle of 30 degrees with the left axis and the right axis of the robot; the Arduino development board and the upper computer are arranged on the robot; the upper computer is in communication connection with the laser radar, the monocular camera, the ultrasonic sensor and the Arduino development board respectively, and a control program of the robot is stored in the upper computer;
firstly, constructing an environment map by combining a laser radar and a monocular camera; the laser radar is used for scanning environment information, and the monocular camera is used for extracting an environment image;
s1, the upper computer builds the environment map in an incremental manner by adopting a gmapping algorithm, and the method comprises the following specific steps:
the method comprises the following steps: abstracting environment information into particle state information, wherein the particle state information is described as follows by formula (1):
p(x_{1:t}, m \mid z_{1:t}, u_{1:t-1}) = p(m \mid x_{1:t}, z_{1:t}) \cdot p(x_{1:t} \mid z_{1:t}, u_{1:t-1})    (1)

In formula (1), x_{1:t} is the pose of the robot from the initial state to time t; m is the state data of the environment map; z_{1:t} is the observation data of the laser radar from the initial state to time t; u_{1:t-1} is the control data sent to the robot by the control system from the initial state to time t-1; p(m | x_{1:t}, z_{1:t}) is the state distribution of the environment map at time t; p(x_{1:t} | z_{1:t}, u_{1:t-1}) is the pose of the robot from the initial state to time t; p(x_{1:t}, m | z_{1:t}, u_{1:t-1}) is the joint distribution of the robot's pose and the environment map, given the control data from the initial state to time t-1 and the laser-radar observations.
step two: calculating the weight of each particle for the resampling in the step three;
Because the particles cover the whole motion state space of the robot and only a small part of them match the true state, the weight of each particle is calculated with formula (2):

w_t^{(i)} = w_{t-1}^{(i)} \cdot \sum_{j=1}^{K} p(z_t \mid m_{t-1}^{(i)}, x_j) \, p(x_j \mid x_{t-1}^{(i)}, u_{t-1})    (2)

In formula (2), w_t^{(i)} is the weight of any particle i at time t; w_{t-1}^{(i)} is its weight at time t-1; x_j is a pose of the robot near the peak of the motion state space (close to the true state); z_t is the observation data of the laser radar; m_{t-1}^{(i)} is the state data of the environment map at time t-1; x_{t-1}^{(i)} is the pose of the robot at time t-1; u_{t-1} is the control data sent to the robot at time t-1; K is the number of sampled poses near the peak of the motion state space. The term p(z_t | m_{t-1}^{(i)}, x_j) expresses the laser-radar observation near the peak in terms of the map state at time t-1 and the candidate pose; the term p(x_j | x_{t-1}^{(i)}, u_{t-1}) expresses the candidate pose near the peak in terms of the robot's pose at time t-1 and the control data.
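Numerically, the weight update of formula (2) is a sum of products; a minimal sketch (the likelihood values are assumed to be precomputed for the K candidate poses, and the function name is illustrative):

```python
def update_weight(w_prev, obs_lik, motion_lik):
    """Formula (2): the particle's new weight is its previous weight
    times the sum, over the K candidate poses x_j near the motion-state
    peak, of p(z_t | m_{t-1}, x_j) * p(x_j | x_{t-1}, u_{t-1})."""
    assert len(obs_lik) == len(motion_lik)  # one pair per candidate pose
    return w_prev * sum(o * m for o, m in zip(obs_lik, motion_lik))
```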
step three: resampling the particles, and screening the particles which accord with the actual scene environment; when the particles are adopted to represent the state distribution of the environment map, the particles with small weight need to be discarded, and the particles with large weight need to be kept and copied, so that the particles are converged near the real state; in order to screen out particles similar to the real state, resampling the particles with weight change exceeding a threshold value, wherein the resampling standard meets a formula (3); the threshold value is N/2, and N is the number of the collected particles;
N_{eff} = 1 / \sum_{i=1}^{N} \left( \tilde{w}^{(i)} \right)^2    (3)

In formula (3), \tilde{w}^{(i)} is the normalized weight of any particle i, and N is the number of collected particles. When N_eff is larger than the threshold, the particle weights differ little from the true value (when all particle weights are equal, the particles exactly represent the true distribution); when N_eff is smaller than the threshold, the weights deviate greatly from the true value and the particles must be resampled.
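The resampling criterion of formula (3) can be computed directly (a sketch; the weights are normalized inside the function, and the N/2 threshold follows the text above):

```python
def neff(weights):
    """Effective particle count N_eff = 1 / sum(w_i^2) over normalized
    weights. Equal weights give N_eff = N; a degenerate set gives ~1."""
    s = sum(weights)
    norm = [w / s for w in weights]
    return 1.0 / sum(w * w for w in norm)

def needs_resampling(weights):
    """Resample when N_eff falls below the N/2 threshold."""
    return neff(weights) < len(weights) / 2.0
```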
obtaining an environment map at the time t through the first step, the second step and the third step, and repeatedly executing the first step, the second step and the third step to finish incremental construction of the indoor environment map; in the process, the robot updates the self pose in real time, and the purpose of self-positioning is achieved;
s2, performing loop detection on the environment image of the current frame by using a LOOPCLOSING thread of an ORB-SLAM2 algorithm;
The monocular camera acquires an environment image every frame; the upper computer extracts the feature points of each frame with the ORB feature description algorithm, and frames with a sufficient number of matched feature points (at least 30) are taken as key frames and added to the bag-of-words model;
the upper computer matches the current frame acquired by the monocular camera against the key frames in the bag-of-words model using the LoopClosing thread of the ORB-SLAM2 algorithm, i.e., it performs loop detection on the current frame by matching it with the key frames and matching the map generated from the current frame with the map generated from previous frames. If loop detection succeeds, the upper computer guides the mapping algorithm to align with the robot position corresponding to the matched key frame, closing the robot's path; this eliminates the accumulated error in constructing the environment map, corrects the map, and provides a basis for more accurate obstacle avoidance and navigation of the picking robot. If loop detection fails, the upper computer continues performing loop detection on subsequent frames;
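The keyframe matching at the heart of loop detection can be approximated by comparing bag-of-words histograms (a simplified stand-in for ORB-SLAM2's LoopClosing thread; the word lists, cosine similarity measure, and threshold are illustrative assumptions):

```python
from collections import Counter
import math

def bow_similarity(words_a, words_b):
    """Cosine similarity between two bag-of-words histograms built from
    lists of quantized ORB feature words."""
    a, b = Counter(words_a), Counter(words_b)
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_loop(words_current, keyframes, threshold=0.8):
    """Loop detection succeeds when the current frame's word histogram
    matches any stored key frame above the similarity threshold."""
    return any(bow_similarity(words_current, kf) >= threshold for kf in keyframes)
```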
S3, calculating the position coordinates of the shelves using a Bayes filter algorithm; during construction of the environment map, the upper computer detects shelves using the SSD algorithm; once a shelf is detected, its position coordinates are calculated from the relative mounting positions of the monocular camera and the lidar using a Bayes filter algorithm, which satisfies formula (4):
x_t ~ p(x_t | w_1:t, v_1:t)    (4)
in formula (4), x_t is the position coordinate of the shelf at time t; w_1:t represents the position coordinates of the shelf relative to the lidar from the initial state to time t; v_1:t represents the position coordinates of the shelf relative to the monocular camera from the initial state to time t; p(x_t | w_1:t, v_1:t) expresses the shelf position coordinates in terms of the shelf's positions relative to the lidar and the monocular camera;
the gmapping algorithm reflects the relative position of the lidar and the shelf, and the SSD algorithm reflects the relative position of the monocular camera and the shelf; combining these with the relative mounting positions of the monocular camera and the lidar, the position coordinates of the shelf in the environment map can be calculated in real time, realizing automatic marking of shelf positions;
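Formula (4) leaves the measurement models unspecified. Under the common assumption that both the lidar-derived and camera-derived shelf coordinates are Gaussian, the Bayes filter update reduces to inverse-variance weighting of the two estimates, sketched below; the function name and the variance parameters are illustrative assumptions, and both inputs are assumed already transformed into the map frame via the sensors' relative mounting positions:

```python
import numpy as np

def fuse_shelf_position(w_t, var_lidar, v_t, var_cam):
    """Fuse lidar- and camera-derived shelf coordinates.

    For Gaussian measurement models, the posterior of formula (4)
    reduces to inverse-variance weighting of the two estimates.
    Returns the fused position and its (reduced) variance.
    """
    w_t, v_t = np.asarray(w_t, float), np.asarray(v_t, float)
    k = var_cam / (var_lidar + var_cam)      # weight on the lidar estimate
    fused = k * w_t + (1.0 - k) * v_t
    fused_var = (var_lidar * var_cam) / (var_lidar + var_cam)
    return fused, fused_var
```

Equal variances average the two measurements; a more precise lidar (smaller variance) pulls the fused shelf coordinate toward the lidar estimate.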
because the lidar scans a plane at a fixed height, and the shelf legs in shopping malls and supermarkets are small and difficult for the lidar to scan, the algorithm sets the lidar scanning plane at the height of a shelf board, making shelves easier to identify; since shelf position information is scanned onto the environment map, the upper computer calculates the shelf position coordinates in the map, so the shelf coordinates do not need to be marked on the map manually;
when constructing an environment map over a large scene, using only the lidar and the gmapping algorithm lacks loop detection, and since the shelves of shopping malls and supermarkets look alike, mismatches between shelves occur easily; using only the monocular camera, map construction is easily affected by environmental factors such as illumination, and a large-scene map is difficult to build; the environment map is therefore constructed by combining the lidar and the monocular camera;
the main innovations of this part are: the LoopClosing thread of ORB-SLAM2 performs loop detection on the environment map constructed by the gmapping algorithm, improving mapping accuracy; and a Bayes filter algorithm calculates the shelf position coordinates, automatically marking them in the environment map in place of manual marking;
secondly, planning a path by adopting a mode of combining global optimal path planning and local uniform-speed path planning;
1) global optimal path planning: fused A* and DWA algorithms perform global optimal path planning for the paths from the robot's start position to the start point of the target shelf, from the end point of the previous target shelf to the start point of the next target shelf, and from the picking completion position to the transfer box;
2) local uniform-speed path planning: the path from the start point of a target shelf to its end point is planned as uniform-speed straight-line travel, and the robot moves in a straight line at constant speed along this local path;
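The global planner pairs A* (for the shortest grid path) with DWA (for dynamically feasible velocities along that path). A minimal A* sketch on an occupancy grid, with an illustrative 4-connected neighborhood and Manhattan heuristic, might look like:

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = occupied).

    Returns the list of cells from start to goal, or None if no path
    exists. In the described system this global path would then be
    tracked by DWA, which picks velocities respecting the robot's
    dynamics; DWA itself is omitted here.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()            # tie-breaker so the heap never compares cells
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:           # already expanded with an equal or better cost
            continue
        came_from[cur] = parent
        if cur == goal:                # walk parents back to the start
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None
```

The local uniform-speed segment needs no such search: the robot simply drives the straight line between the shelf's stored start and end coordinates at constant velocity.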
the main innovation of this part is: in traditional warehouse logistics, magnetic strips must be laid or QR codes pasted along the robot's path in advance so that the robot can follow a set path; here the robot only needs to read the start and end coordinates of the shelf to travel in a straight line at uniform speed, without laying auxiliary equipment along its travel path;
traditional path planning methods consider only time or path optimality and neglect the path requirements of the robot's task; the invention combines global optimal path planning with local uniform-speed path planning, ensuring that the robot travels at constant speed in front of the shelf during picking, which improves running stability and lets the monocular camera identify target goods more accurately;
thirdly, real-time dynamic obstacle avoidance based on ultrasonic sensors, comprising non-picking obstacle avoidance and picking obstacle avoidance;
Non-picking obstacle avoidance: (1) while the robot moves from the start position to the start point of the target shelf, or from the end point of the previous target shelf to the start point of the next target shelf, if the ultrasonic sensor at the front of the robot detects an obstacle, the robot stops immediately and waits for the obstacle to clear; (2) after a period of time (about 5 s), the front ultrasonic sensor checks again whether the obstacle has cleared; if it has, the robot continues along the global optimal path; if the obstacle remains, the ultrasonic sensor directly behind the robot checks for an obstacle; (3) if there is an obstacle directly behind, the robot reports the obstacle; if not, the robot backs up a certain distance (generally twice the length of its chassis) to ensure it will not collide with the obstacle while avoiding it; the front-left ultrasonic sensor then checks for an obstacle; if none, the robot detours to the left around the obstacle, continues along the global optimal path, and returns to step (1); if there is an obstacle on the front left, the front-right ultrasonic sensor checks; if the front right is clear, the robot detours to the right around the obstacle, continues along the global optimal path, and returns to step (1); if the front right is also blocked, the robot repeats step (3) until an obstacle appears directly behind it, at which point it reports the obstacle;
Picking obstacle avoidance: (1) while the robot moves from the start point of the target shelf to its end point, if the ultrasonic sensor at the front of the robot detects an obstacle, the robot stops immediately and waits for the obstacle to clear; (2) after a period of time (about 5 s), the front ultrasonic sensor checks again whether the obstacle has cleared; if it has, the robot continues along the preset path; if the obstacle remains, the ultrasonic sensor directly behind the robot checks for an obstacle; (3) if there is an obstacle directly behind, the robot reports the obstacle; if not, the robot backs up a certain distance (generally twice the length of its chassis) to ensure it will not collide with the obstacle while avoiding it; the front-left ultrasonic sensor then checks for an obstacle; if none, the robot detours to the left around the obstacle and moves on to pick at the next target shelf, returning to step (1), with the pick at the current target shelf marked as failed; if there is an obstacle on the front left, the front-right ultrasonic sensor checks; if the front right is clear, the robot detours to the right around the obstacle and moves on to pick at the next target shelf, returning to step (1), with the pick at the current target shelf marked as failed; if the front right is also blocked, the robot repeats step (3) until an obstacle appears directly behind it, at which point it reports the obstacle; after all remaining target shelves have been visited, the robot checks whether the goods catalog has been completed; if not all target goods have been picked, the robot returns to the target shelves whose picks failed and picks them again;
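One pass of the avoidance decision in steps (1)–(3) can be sketched as a simple rule function. The sensor keys, the return labels, and the single-pass structure are illustrative assumptions (the text actually loops step (3) until the rear is also blocked, and the ~5 s wait and back-up maneuver happen between readings):

```python
def avoid_obstacle(sensors, picking):
    """One decision pass of the ultrasonic avoidance logic.

    `sensors` maps 'front', 'rear', 'front_left', 'front_right' to True
    when that ultrasonic sensor still sees an obstacle after the wait.
    `picking` distinguishes the two scenarios: a detour during picking
    also marks the current shelf's pick as failed.
    """
    if not sensors.get("front"):
        return "resume"                       # obstacle cleared: follow the path again
    if sensors.get("rear"):
        return "report"                       # blocked behind as well: report it
    # rear is clear: back up ~two chassis lengths, then try to detour
    if not sensors.get("front_left"):
        return "detour_left_fail_pick" if picking else "detour_left"
    if not sensors.get("front_right"):
        return "detour_right_fail_pick" if picking else "detour_right"
    return "report"                           # blocked on every checked side
```

A supervising loop would call this after each ~5 s wait and translate the returned label into motion commands.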
the obstacles in this scenario are generally other picking robots;
in summary, the main innovations of this part are: for the shopping mall and supermarket environment, the obstacle avoidance method not only avoids obstacles but also combines avoidance with target identification during picking, providing different avoidance modes for picking and non-picking scenes and ensuring that detouring around obstacles causes neither wrong picks nor missed picks;
fourthly, target recognition based on a monocular camera;
a monocular camera acquires image data of the goods, capturing multiple pictures of the same item under different angles, illumination and backgrounds, which enriches the image data and raises the target identification success rate; Gaussian blur and sharpening are applied to all acquired images, so that targets can be identified accurately and quickly even when images shot while the monocular camera is moving are not sharp;
image labeling: first, the same number of images of each item are selected for manual labeling, providing the system with an initial training data set;
then the remaining images of all items are labeled automatically by fluid labeling, improving labeling efficiency; to address inaccurate boundaries in fluid labeling, a Q-learning algorithm computes the pixel values on both sides of the boundary of a fluid-labeled image, evaluates the difference between the two sides, and calculates a division yield; a yield threshold is set and used as the evaluation standard to screen the labeling results and select the images whose labels are qualified; if the division yield is greater than or equal to the threshold, the label is qualified; if it is below the threshold, the label is unqualified, and the image re-enters the automatic labeling cycle until each item has enough qualified labeled images, the required number depending on the desired recognition accuracy; if, after cyclic labeling, an item still lacks enough qualified images, additional images of that item must be captured, automatically labeled and screened until the quantity requirement is met, completing automatic image labeling;
the automatically labeled images are then manually re-screened, completing the labeling of all images; all qualified labeled images form the data set;
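The division-yield screening can be illustrated with a simple contrast measure across the labeled boundary. The exact yield formula is not given in the text, so the normalized inside-vs-outside intensity difference below, the function names, and the 0.2 threshold are all assumptions; boxes are assumed to lie at least `pad` pixels inside the image:

```python
import numpy as np

def division_yield(image, box, pad=2):
    """Contrast-based score for one auto-labeled bounding box.

    Compares the mean intensity inside the box against a thin ring
    just outside it; a crisp object/background boundary yields a
    high score, a box that cuts through uniform background a low one.
    """
    x0, y0, x1, y1 = box          # box assumed at least `pad` px from the edge
    img = np.asarray(image, dtype=float)
    inner = img[y0:y1, x0:x1]
    ring = img[y0 - pad:y1 + pad, x0 - pad:x1 + pad].copy()
    ring[pad:-pad, pad:-pad] = np.nan          # mask the interior, keep the ring
    return abs(inner.mean() - np.nanmean(ring)) / 255.0

def screen_labels(image, boxes, yield_threshold=0.2):
    """Keep only the boxes whose division yield meets the threshold."""
    return [b for b in boxes if division_yield(image, b) >= yield_threshold]
```

Boxes rejected here would be fed back into the automatic labeling cycle, as the text describes.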
training of the data set: the data set is trained on the distributed machine learning system Spark through the Darknet scripts provided with YOLO, so that multiple threads train on the data together and the training speed increases; a monocular camera scans the goods on the shelves in real time, and the upper computer performs target detection with the YOLOv3 algorithm; because YOLOv3 is not sufficiently robust for small items and perceives them inaccurately, its recognition result is refined with ResNet-101 after target recognition, preventing small items from being missed, improving the accuracy of recognizing small items under varied illumination and backgrounds, and mitigating the growth of matching errors and inaccurate recognition when the background of the environment darkens, while ResNet-101 keeps the target recognition speed near optimal;
the main innovations of this part are: first, images are labeled automatically to reduce the workload of manual labeling, and a Q-learning algorithm is introduced to evaluate the automatic labeling results and screen out qualified labels, addressing inaccurate boundaries; second, a distributed machine learning system trains on the images to increase training speed; finally, on the basis of YOLOv3 target recognition, ResNet-101 refines the result, reducing recognition error and improving the accuracy of small-item recognition;
fifthly, real-time communication based on ROS (Robot Operating System), i.e. the communication mechanism of the robot control system;
the robot starts up and constructs the environment map using the lidar and the monocular camera; the upper computer calculates the coordinates of all shelves, and the robot acquires the goods catalog and begins picking; the position of a target shelf is estimated from the coordinates of all shelves, its start and end points are read, the robot's own start and end points are calculated from them, and a path is planned so that the robot reaches the start point of the first target shelf; the robot then moves along the target shelf at constant speed while the monocular camera performs target identification;
when a target item from the goods catalog is not present on the target shelf, the robot keeps moving; when the target item is present, the robot checks whether it is in the center of the field of view; if not, the robot keeps moving; if it is, the robot stops and its manipulator begins picking; after a successful pick, the item is removed from the goods catalog; if the pick fails, the robot has coasted too far forward on inertia and passed the target item, so it backs up a certain distance and picks again;
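The "target in the center of the field of view" check can be sketched from the detector's bounding box. The tolerance value and the horizontal-only criterion below are assumptions, since the robot creeps along the shelf horizontally:

```python
def target_centered(bbox, frame_width, tolerance=0.05):
    """True when the detected item's bounding box is centered horizontally.

    The robot keeps moving along the shelf until the box center is
    within `tolerance` (as a fraction of frame width) of the image
    center, then stops and triggers the manipulator. If the center
    has already been overshot, the caller backs the robot up and
    retries, as the text describes.
    """
    x0, _, x1, _ = bbox
    box_cx = 0.5 * (x0 + x1)
    return abs(box_cx - 0.5 * frame_width) <= tolerance * frame_width
```

Each YOLOv3 detection per frame would be passed through this check until it fires, at which point the robot halts and picking begins.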
during movement the robot uses the ultrasonic sensors to detect obstacles; if there is no obstacle in the direction of travel, it keeps moving; if there is, it stops and waits for the obstacle to clear, checking the direction of travel again after a period of time (about 5 s); if the obstacle is gone, the robot continues; if it remains, the robot determines whether it is currently picking;
if it is picking, the robot checks the other directions (rear, front left and front right) for obstacles; if the rear and either the front left or the front right are clear, the robot detours around the obstacle and moves to the next target shelf;
if it is not picking (i.e. while moving from the start position to the start point of the target shelf, or from the end point of the previous target shelf to the start point of the next), the robot checks for an obstacle behind it; if one exists, it reports the obstacle; if not, it checks the front left and front right; if either side is clear, the robot detours around the obstacle and continues forward; if both are blocked, it reports the obstacle;
the main innovation of this part is: for the shopping mall and supermarket environment, a communication mechanism for the robot is proposed on the basis of the ROS robot operating system, ensuring coordination among all parts so that the robot runs safely and stably.
Matters not detailed in this specification are prior art known to those skilled in the art.

Claims (6)

1. A comprehensive control method for a shopping mall and supermarket picking robot is characterized by comprising the following steps:
firstly, constructing an environment map by combining a laser radar and a monocular camera; incrementally constructing an environment map by using a gmapping algorithm, performing loop detection on an environment image of a current frame, and calculating the position coordinates of the goods shelf by using a Bayesian filter algorithm in the process of constructing the environment map;
secondly, planning the path by combining global optimal path planning with local uniform-speed path planning; global optimal path planning uses fused A* and DWA algorithms, and local uniform-speed path planning uses uniform-speed straight-line travel;
thirdly, real-time dynamic obstacle avoidance based on the ultrasonic sensor comprises a non-picking obstacle avoidance and a picking obstacle avoidance, so that the robot is prevented from being mistakenly picked and missed to pick due to obstacle avoidance bypassing;
and fourthly, target recognition based on the monocular camera, including image annotation and training of a data set.
2. The comprehensive control method for a shopping mall and supermarket picking robot according to claim 1, wherein the position coordinates of the shelves in the first step are calculated as follows: the upper computer detects shelves using the SSD algorithm; once a shelf is detected, its position coordinates are calculated from the relative mounting positions of the monocular camera and the lidar using a Bayes filter algorithm, which satisfies formula 1):
x_t ~ p(x_t | w_1:t, v_1:t)    1)
in formula 1), x_t is the position coordinate of the shelf at time t; w_1:t represents the position coordinates of the shelf relative to the lidar from the initial state to time t; v_1:t represents the position coordinates of the shelf relative to the monocular camera from the initial state to time t; p(x_t | w_1:t, v_1:t) expresses the shelf position coordinates in terms of the shelf's positions relative to the lidar and the monocular camera.
3. The integrated control method for a supermarket picking robot according to claim 1, wherein the global optimal path planning is performed on the path between the robot moving from the start position to the start point of the target shelf, the path between the end point of the last target shelf and the start point of the next target shelf, and the path between the robot moving from the picking completion position to the transfer box, and the local uniform path planning is performed on the path between the start point of the target shelf and the end point of the target shelf.
4. The integrated control method for a supermarket picking robot according to claim 1, wherein the image labeling process in the fourth step is as follows: firstly, selecting the same number of images for manual labeling of each commodity, and providing an initial training data set for a system;
then, automatically labeling the remaining images of all goods with fluid labeling; a Q-learning algorithm computes the pixel values on both sides of the boundary of a fluid-labeled image, evaluates the difference between the two sides, and calculates the division yield; a yield threshold is set and used as the evaluation standard to select the qualified labeled images; the automatically labeled images are then manually re-screened to complete labeling of all images; all qualified labeled images form the data set;
training the data set on the distributed machine learning system Spark through the Darknet scripts provided with YOLO; performing target detection with the YOLOv3 algorithm and refining the YOLOv3 recognition result with ResNet-101.
5. The comprehensive control method for a shopping mall and supermarket picking robot according to claim 1, wherein the specific process of non-picking obstacle avoidance is as follows: (1) while the robot moves from the start position to the start point of the target shelf, or from the end point of the previous target shelf to the start point of the next target shelf, if the ultrasonic sensor at the front of the robot detects an obstacle, the robot stops immediately and waits for the obstacle to clear; (2) after a period of time, the front ultrasonic sensor checks again whether the obstacle has cleared; if it has, the robot continues along the global optimal path; if the obstacle remains, the ultrasonic sensor directly behind the robot checks for an obstacle; (3) if there is an obstacle directly behind, the robot reports the obstacle; if not, the robot backs up a certain distance; the front-left ultrasonic sensor then checks for an obstacle; if none, the robot detours to the left around the obstacle, continues along the global optimal path, and returns to step (1); if there is an obstacle on the front left, the front-right ultrasonic sensor checks; if the front right is clear, the robot detours to the right around the obstacle, continues along the global optimal path, and returns to step (1); if the front right is also blocked, the robot repeats step (3) until an obstacle appears directly behind it, at which point it reports the obstacle;
picking obstacle avoidance: (1) while the robot moves from the start point of the target shelf to its end point, if the ultrasonic sensor at the front of the robot detects an obstacle, the robot stops immediately and waits for the obstacle to clear; (2) after a period of time, the front ultrasonic sensor checks again whether the obstacle has cleared; if it has, the robot continues along the preset path; if the obstacle remains, the ultrasonic sensor directly behind the robot checks for an obstacle; (3) if there is an obstacle directly behind, the robot reports the obstacle; if not, the robot backs up a certain distance; the front-left ultrasonic sensor then checks for an obstacle; if none, the robot detours to the left around the obstacle and moves on to pick at the next target shelf, returning to step (1), with the pick at the current target shelf marked as failed; if there is an obstacle on the front left, the front-right ultrasonic sensor checks; if the front right is clear, the robot detours to the right around the obstacle and moves on to pick at the next target shelf, returning to step (1), with the pick at the current target shelf marked as failed; if the front right is also blocked, the robot repeats step (3) until an obstacle appears directly behind it, at which point it reports the obstacle; after all remaining target shelves have been visited, the robot checks whether the goods catalog has been completed; if not all target goods have been picked, the robot returns to the target shelves whose picks failed and picks them again.
6. The integrated control method for supermarket picking robot according to claim 1, characterized in that the method further comprises real-time communication based on ROS robot operating system, i.e. communication mechanism of robot control system;
the robot is started, an environment map is constructed by using a laser radar and a monocular camera, the upper computer calculates the coordinates of all goods shelves, and the robot acquires a commodity catalog to begin picking up goods; estimating the position of a target shelf according to the coordinates of all shelves, reading the starting point and the end point of the target shelf, calculating the starting point and the end point of the robot according to the starting point and the end point of the target shelf, and performing local uniform path planning to enable the robot to reach the starting point of an initial target shelf; the robot moves at a constant speed along the target shelf, and the monocular camera performs target identification;
when a target item from the goods catalog is not present on the target shelf, the robot keeps moving; when the target item is present, the robot checks whether it is in the center of the field of view; if not, the robot keeps moving; if it is, the robot stops and its manipulator begins picking; after a successful pick, the item is removed from the goods catalog; if the pick fails, the robot backs up a certain distance and picks again;
the robot detects whether an obstacle exists or not by means of the ultrasonic sensor in the moving process, and continues to move if no obstacle exists in the advancing direction; if the obstacle exists in the advancing direction, the robot stops moving to wait for the obstacle to be eliminated, and whether the obstacle exists in the advancing direction is detected again after a period of time; if not, the robot continues to move; if yes, judging whether the robot is picking the goods currently or not, and starting corresponding obstacle avoidance measures; until the robot finishes picking all the goods in the goods catalog.
CN201911153026.9A 2019-11-22 2019-11-22 Comprehensive control method for shopping mall and supermarket goods picking robot Active CN110716559B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911153026.9A CN110716559B (en) 2019-11-22 2019-11-22 Comprehensive control method for shopping mall and supermarket goods picking robot


Publications (2)

Publication Number Publication Date
CN110716559A true CN110716559A (en) 2020-01-21
CN110716559B CN110716559B (en) 2022-07-29

Family

ID=69215498

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911153026.9A Active CN110716559B (en) 2019-11-22 2019-11-22 Comprehensive control method for shopping mall and supermarket goods picking robot

Country Status (1)

Country Link
CN (1) CN110716559B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109634267A (en) * 2017-10-09 2019-04-16 北京瑞悟科技有限公司 One kind being used for shopping mall supermarket intelligent picking distributed robot
CN110465928A (en) * 2019-08-23 2019-11-19 河北工业大学 A kind of paths planning method of storage commodity pick-and-place mobile platform and the mobile platform


Non-Patent Citations (4)

Title
JAKUB HVĚZDA等: "Context-Aware Route Planning for Automated Warehouses", 《2018 21ST INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC)》 *
TANG Shuhua et al.: "Path Planning and Implementation of E-commerce Warehouse Picking Robots", 《物流技术》 (Logistics Technology) *
WANG Xiaotong et al.: "Design and Implementation of a Localization Algorithm for Sweeping Robots Based on Monocular Ceiling Vision", 《微电子学与计算机》 (Microelectronics & Computer) *
CHEN Zhikun et al.: "Simulation Research on Target Path Planning of Mobile Robots", 《计算机仿真》 (Computer Simulation) *

Cited By (9)

Publication number Priority date Publication date Assignee Title
CN112161624A (en) * 2020-09-11 2021-01-01 上海高仙自动化科技发展有限公司 Marking method, marking device, intelligent robot and readable storage medium
CN112241171A (en) * 2020-10-09 2021-01-19 国网江西省电力有限公司检修分公司 Wheeled robot linear track tracking method capable of winding obstacles
CN112904855A (en) * 2021-01-19 2021-06-04 四川阿泰因机器人智能装备有限公司 Follow-up robot local path planning method based on improved dynamic window
CN112904855B (en) * 2021-01-19 2022-08-16 四川阿泰因机器人智能装备有限公司 Follow-up robot local path planning method based on improved dynamic window
CN114227683A (en) * 2021-12-23 2022-03-25 江苏木盟智能科技有限公司 Robot control method, system, terminal device and storage medium
CN114227683B (en) * 2021-12-23 2024-02-09 江苏木盟智能科技有限公司 Robot control method, system, terminal device and storage medium
WO2023124621A1 (en) * 2021-12-31 2023-07-06 追觅创新科技(苏州)有限公司 Path planning method and system based on obstacle marker, and self-moving robot
CN114723154A (en) * 2022-04-18 2022-07-08 淮阴工学院 Smart supermarket
CN114723154B (en) * 2022-04-18 2024-05-28 淮阴工学院 Smart supermarket

Also Published As

Publication number Publication date
CN110716559B (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN110716559B (en) Comprehensive control method for shopping mall and supermarket goods picking robot
CN112014857B (en) Three-dimensional laser radar positioning and navigation method for intelligent inspection and inspection robot
CN112476434B (en) Visual 3D pick-and-place method and system based on cooperative robot
CN111462135B (en) Semantic mapping method based on visual SLAM and two-dimensional semantic segmentation
CN109931939B (en) Vehicle positioning method, device, equipment and computer readable storage medium
CN113537208B (en) Visual positioning method and system based on semantic ORB-SLAM technology
CN111664843A (en) Intelligent warehouse inventory checking method based on SLAM
CN111496770A (en) Intelligent carrying mechanical arm system based on 3D vision and deep learning and use method
US9990726B2 (en) Method of determining a position and orientation of a device associated with a capturing device for capturing at least one image
CN109186606B (en) Robot composition and navigation method based on SLAM and image information
KR20200041355A (en) Simultaneous positioning and mapping navigation method, device and system combining markers
CN107741234A (en) The offline map structuring and localization method of a kind of view-based access control model
CN106379684A (en) Submersible AGV abut-joint method and system and submersible AGV
CN112363158B (en) Pose estimation method for robot, robot and computer storage medium
US11995852B2 (en) Point cloud annotation for a warehouse environment
CN112605993B (en) Automatic file grabbing robot control system and method based on binocular vision guidance
CN114047750A (en) Express delivery warehousing method based on mobile robot
LeVoir et al. High-accuracy adaptive low-cost location sensing subsystems for autonomous rover in precision agriculture
CN114998276A (en) Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
Maier et al. Appearance-based traversability classification in monocular images using iterative ground plane estimation
CN111780744A (en) Mobile robot hybrid navigation method, equipment and storage device
CN111380535A (en) Navigation method and device based on visual label, mobile machine and readable medium
Giordano et al. 3D structure identification from image moments
CN110531774A (en) Obstacle Avoidance, device, robot and computer readable storage medium
CN113469045B (en) Visual positioning method and system for unmanned integrated card, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant