CN113093729A - Intelligent shopping trolley based on vision and laser radar and control method


Info

Publication number: CN113093729A
Application number: CN202110258589.5A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 张诚毅, 党淑雯, 李陆君, 陈勇, 凌晨飞
Original/current assignee: Shanghai University of Engineering Science
Priority/filing date: 2021-03-10
Publication date: 2021-07-09
Legal status: Pending
Prior art keywords: module, shopper, intelligent shopping, laser radar, shopping trolley

Classifications

    • All classifications fall under G PHYSICS > G05 CONTROLLING; REGULATING > G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES > G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots > G05D1/02 Control of position or course in two dimensions > G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles:
    • G05D1/0234 using optical position detecting means, using optical markers or beacons
    • G05D1/0236 using optical markers or beacons in combination with a laser
    • G05D1/0221 with means for defining a desired trajectory, involving a learning process
    • G05D1/0223 with means for defining a desired trajectory, involving speed control of the vehicle
    • G05D1/024 using obstacle or wall sensors in combination with a laser
    • G05D1/0242 using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 using a video camera in combination with image processing means
    • G05D1/0255 using acoustic signals, e.g. ultrasonic signals
    • G05D1/0276 using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to an intelligent shopping trolley based on vision and laser radar and a control method. The intelligent shopping trolley comprises a shopping trolley body; a detection module, a navigation module and a motion module, each in communication connection with a control module; and a power supply module that supplies working power to the intelligent shopping trolley. The detection module comprises a depth camera and a monocular camera, the navigation module comprises a laser radar sensor, a gyroscope and an accelerometer, and the motion module comprises two driving wheels and a universal wheel. Compared with the prior art, the invention performs pedestrian detection with a CNN convolutional neural network, extracts characteristic data of the shopper through semantic segmentation, and combines this characteristic data to realize automatic following with higher precision and good robustness. A global grid map is constructed from the depth camera and the laser radar sensor, meeting the intelligent shopping trolley's requirements for light weight, precision and accuracy and providing a basis for automatic following and autonomous navigation.

Description

Intelligent shopping trolley based on vision and laser radar and control method
Technical Field
The invention relates to the field of intelligent trolleys, in particular to an intelligent shopping trolley based on vision and laser radar and a control method.
Background
With the improvement of people's living standards and the quickening pace of life, supermarket shopping has become more and more common. Most existing supermarket shopping carts are hand-pushed: the customer must expend effort to push the cart and must manually return it to a designated position after shopping, which degrades the user experience. To solve these problems, intelligent shopping carts that transport goods autonomously have emerged. Applied mainly in public places such as large supermarkets, shopping malls and shopping centers, they can greatly improve the shopping experience, and their market prospects are huge.
Chinese patent application CN201811365223.2 discloses an intelligent shopping cart shopping guide system in which a shopper locates, navigates to, and settles accounts for desired goods through data interaction among an application, a cloud data processing system, and an intelligent shopping cart, reducing both the shopper's burden and the management difficulty of shopping places.
Chinese patent application CN201810742833.3 discloses an intelligent shopping cart, an intelligent shopping cart management system, and a method of using the same. The cart can automatically follow the user and complete interactive functions according to the user's gesture information, which greatly facilitates shopping, especially for the elderly and the infirm. However, because the cart moves based only on target tracking feature information and gesture feature information, its robustness in complex environments is poor.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art by providing an intelligent shopping trolley based on vision and laser radar and a control method. A CNN convolutional neural network is adopted for pedestrian detection, characteristic data of the shopper is extracted through semantic segmentation, and automatic following is realized by combining this characteristic data, with higher precision and good robustness. A global grid map is constructed from the depth camera and the laser radar sensor, which meets the intelligent shopping trolley's requirements for light weight, precision and accuracy and provides a basis for automatic following and autonomous navigation.
The purpose of the invention can be realized by the following technical scheme:
an intelligent shopping trolley based on vision and laser radar comprises a shopping trolley body, and a control module, a detection module, a navigation module, a motion module and a power supply module which are arranged on the shopping trolley body, wherein the control module is respectively in communication connection with the detection module, the navigation module and the motion module, and the power supply module is used for providing a working power supply for the intelligent shopping trolley;
the detection module comprises a depth camera and a monocular camera, the navigation module comprises a laser radar sensor, a gyroscope and an accelerometer, and the motion module comprises two driving wheels and a universal wheel.
Further, the control module determines the position of the shopper according to the characteristic data of the shopper, the trained pedestrian detection model and the image information acquired by the detection module.
Further, the navigation module further comprises a distance detection sensor.
Still further, the characteristic data includes facial feature data and body feature data.
Further, the pedestrian detection model is a CNN convolutional neural network.
Furthermore, the intelligent shopping trolley further comprises a cloud database, and the control module is in communication connection with the cloud database.
An intelligent shopping trolley control method based on vision and laser radar comprises the following steps:
S1: constructing a global grid map based on a depth camera and a laser radar sensor;
S2: selecting a working mode of the intelligent shopping trolley, the working modes comprising an automatic following mode and an autonomous navigation mode; if the automatic following mode is selected, the detection module acquires the characteristic data of the shopper and step S3 is executed; otherwise step S4 is executed;
S3: the control module determines the position of the shopper according to the characteristic data of the shopper, a preset pedestrian detection model and the image information acquired by the detection module; the navigation module generates a moving route based on the global grid map and the position of the shopper; the intelligent shopping trolley is moved through the motion module; step S3 is repeated until the automatic following mode ends, and then step S2 is executed;
S4: acquiring destination information; the navigation module generates a moving route based on the global grid map and the destination information; the intelligent shopping trolley is moved through the motion module until it reaches the destination; the autonomous navigation mode ends and step S2 is executed.
Further, step S1 specifically comprises:
acquiring the image data collected by the depth camera, preprocessing it, performing a filtering operation on the preprocessed image data to generate point cloud data, performing a down-sampling operation on the point cloud data to reduce the point cloud density, and obtaining a projection environment map based on a Bayesian rule (illustrated below);
obtaining distance data collected by a laser radar sensor to obtain a laser radar map;
and fusing the projection environment map and the laser radar map to obtain a global grid map.
Still further, the filtering operation is Kalman filtering.
Furthermore, the preprocessing specifically comprises filtering out obstacle images in the image data that are not within the moving range of the intelligent shopping trolley.
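The Bayesian rule invoked in step S1 is not written out in the patent; as a hedged sketch, occupancy-grid mapping commonly applies it in log-odds form, updating each cell m_i from the latest measurement z_t as follows:

```latex
% Standard log-odds form of the Bayesian occupancy update (an assumed
% formulation; the patent does not state its exact rule).
l_{t,i} = l_{t-1,i}
        + \log\frac{p(m_i \mid z_t)}{1 - p(m_i \mid z_t)}
        - \log\frac{p(m_i)}{1 - p(m_i)},
\qquad
p(m_i \mid z_{1:t}) = 1 - \frac{1}{1 + \exp(l_{t,i})}
```

Cells repeatedly observed as occupied accumulate positive log-odds and free cells negative, while the prior term keeps unobserved cells near probability 0.5; fusing two independent sensor maps then amounts to summing their log-odds evidence cell by cell.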
Further, in step S2, the step of acquiring the characteristic data of the shopper by the detection module specifically includes:
the monocular camera collects images for protecting the shopper, obtains image information of the shopper through semantic segmentation extraction, and obtains characteristic data of the shopper through extraction based on the image information of the shopper.
Further, in step S3, after the automatic following mode ends, the method further comprises: uploading the characteristic data of the shopper to a cloud database.
Compared with the prior art, the invention has the following beneficial effects:
(1) the CNN convolutional neural network is adopted for pedestrian detection, the characteristic data of the shopper is extracted through semantic segmentation, automatic following is realized by combining the characteristic data of the shopper, the precision is higher, and the robustness is good.
(2) The global grid map is constructed based on the depth camera and the laser radar sensor, the requirements of the intelligent shopping trolley on light weight, precision and accuracy can be met, and a basis is provided for automatic following and autonomous navigation.
(3) When the projection environment map is constructed, only the obstacles in the moving range of the intelligent shopping trolley are considered, and the point cloud data is generated and then subjected to down-sampling processing, so that the number of point clouds is reduced, and computer resources are saved.
Drawings
FIG. 1 is a schematic structural diagram of an intelligent shopping cart in an embodiment;
FIG. 2 is a flow chart of the construction of the global grid map in the embodiment;
FIG. 3 is a flow chart of an intelligent shopping cart control method in an embodiment;
reference numerals: 1. depth camera 2, monocular camera, 3, laser radar sensor, 4, power module, 5, action wheel, 6, universal wheel.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
In the drawings, structurally identical elements are represented by like reference numerals, and structurally or functionally similar elements are represented by like reference numerals throughout the several views. The size and thickness of each component shown in the drawings are arbitrarily illustrated, and the present invention is not limited to the size and thickness of each component. Parts are exaggerated in the drawing where appropriate for clarity of illustration.
Example 1:
An intelligent shopping trolley based on vision and laser radar comprises a shopping trolley body, and a control module, a detection module, a navigation module, a motion module and a power supply module 4 mounted on the shopping trolley body; the control module is respectively in communication connection with the detection module, the navigation module and the motion module, and the power supply module 4 provides working power for the intelligent shopping trolley. The overall structure is shown in FIG. 1.
The detection module comprises a depth camera 1 and a monocular camera 2, the navigation module comprises a laser radar sensor 3, a gyroscope and an accelerometer, and the motion module comprises two driving wheels 5 and a universal wheel 6. In this embodiment, the intelligent shopping trolley further comprises a cloud database, and the control module is in communication connection with the cloud database; the navigation module further comprises a distance detection sensor.
The control module determines the position of the shopper according to the characteristic data of the shopper (the characteristic data comprises facial characteristic data and body characteristic data), a trained pedestrian detection model (in the embodiment, the pedestrian detection model is a CNN convolutional neural network) and the image information acquired by the detection module.
Compared with the traditional grayscale recognition method, performing pedestrian detection with a CNN convolutional neural network yields higher recognition precision, faster speed, and better robustness.
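The patent names no specific network, so the following Python sketch uses a pretrained torchvision detector as a stand-in for the trained CNN pedestrian detection model, and assumes an embed function (see the segmentation sketch further below) that maps an image crop to a feature vector; all names and thresholds here are illustrative assumptions, not the patent's implementation.

```python
# Hedged sketch only: the patent says "a CNN convolutional neural network"
# without naming one, so a pretrained torchvision detector stands in for the
# trained model; thresholds and the embed() extractor are assumptions.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

PERSON_LABEL = 1  # COCO class id for "person" in torchvision detection models

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
detector.eval()

def detect_pedestrians(frame, score_thresh=0.8):
    """Return bounding boxes of pedestrians in an RGB frame (H x W x 3 uint8)."""
    with torch.no_grad():
        out = detector([to_tensor(frame)])[0]
    keep = (out["labels"] == PERSON_LABEL) & (out["scores"] > score_thresh)
    return out["boxes"][keep]

def locate_shopper(frame, shopper_feature, embed, sim_thresh=0.7):
    """Compare each detected pedestrian against the stored shopper feature.

    embed() is an assumed extractor mapping an image crop to a unit-length
    1-D tensor (e.g. the segmentation-based sketch further below).
    """
    best_box, best_sim = None, sim_thresh
    for box in detect_pedestrians(frame):
        x0, y0, x1, y1 = box.int().tolist()
        sim = torch.cosine_similarity(embed(frame[y0:y1, x0:x1]),
                                      shopper_feature, dim=0).item()
        if sim > best_sim:
            best_box, best_sim = box, sim
    return best_box  # None when the shopper is not visible in this frame
```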
A control method for the intelligent shopping trolley based on vision and laser radar is shown in FIG. 3 and comprises the following steps:
S1: constructing a global grid map based on the depth camera 1 and the laser radar sensor 3.
Step S1 is shown in FIG. 2 and specifically includes:
acquiring the image data collected by the depth camera 1, preprocessing it (specifically, filtering out obstacle images that are not within the moving range of the intelligent shopping trolley), applying the filtering operation (Kalman filtering) to generate point cloud data, down-sampling the point cloud data to reduce the point cloud density, and obtaining a projection environment map based on the Bayesian rule;
obtaining the distance data collected by the laser radar sensor 3 to obtain a laser radar map;
and fusing the projection environment map and the laser radar map to obtain the global grid map.
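As a minimal sketch of this pipeline, assuming illustrative camera intrinsics and grid constants that the patent does not provide, the depth-to-grid path can be written as:

```python
# Hedged numpy sketch of step S1. Camera intrinsics, grid geometry and the
# inverse-sensor-model constant are illustrative assumptions (the patent gives
# no numbers), and smooth_depth() is a minimal stand-in for Kalman filtering.
import numpy as np

FX = FY = 525.0        # assumed depth-camera focal lengths (pixels)
CX, CY = 319.5, 239.5  # assumed principal point
CELL = 0.05            # assumed grid resolution: 5 cm per cell
L_OCC = 0.85           # assumed log-odds increment for an occupied cell

def smooth_depth(prev, meas, gain=0.3):
    """Blend a new depth frame into the running estimate with a fixed gain
    (a steady-state simplification of a per-pixel Kalman update)."""
    return prev + gain * (meas - prev)

def depth_to_points(depth):
    """Back-project an (H, W) depth image in metres into an N x 3 cloud."""
    v, u = np.nonzero(depth > 0)
    z = depth[v, u]
    return np.column_stack(((u - CX) * z / FX, (v - CY) * z / FY, z))

def voxel_downsample(points, voxel=0.05):
    """Keep one point per occupied voxel to reduce point-cloud density."""
    _, keep = np.unique(np.floor(points / voxel).astype(np.int64),
                        axis=0, return_index=True)
    return points[keep]

def project_to_grid(log_odds, points, origin):
    """Apply the log-odds update (see the formula in the summary above) to
    cells under points inside the trolley's vertical moving range; points are
    assumed already transformed into the trolley frame with z pointing up.
    Free-space ray casting is omitted for brevity."""
    pts = points[(points[:, 2] > 0.05) & (points[:, 2] < 1.2)]
    cells = ((pts[:, :2] - origin) / CELL).astype(int)
    h, w = log_odds.shape
    ok = (cells[:, 0] >= 0) & (cells[:, 0] < h) & \
         (cells[:, 1] >= 0) & (cells[:, 1] < w)
    cells = cells[ok]
    log_odds[cells[:, 0], cells[:, 1]] += L_OCC
    return log_odds

def fuse(camera_grid, lidar_grid):
    """Fuse the projected environment map and the laser radar map by summing
    their independent log-odds evidence cell by cell."""
    return camera_grid + lidar_grid
```

A cell is finally treated as an obstacle when its fused log-odds exceeds a chosen threshold.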
S2: selecting a working mode of the intelligent shopping trolley, the working modes comprising an automatic following mode and an autonomous navigation mode; if the automatic following mode is selected, the detection module acquires the characteristic data of the shopper and step S3 is executed; otherwise step S4 is executed;
in step S2, the specific steps of the detection module obtaining the characteristic data of the shopper are:
the monocular camera 2 collects images for protecting the shopper, extracts image information of the shopper through semantic segmentation, and extracts feature data of the shopper based on the image information of the shopper.
S3: the control module determines the position of the shopper according to the characteristic data of the shopper, a preset pedestrian detection model and the image information acquired by the detection module; the navigation module generates a moving route based on the global grid map and the position of the shopper and moves the intelligent shopping trolley through the motion module; step S3 is repeated until the automatic following mode ends, and then step S2 is executed;
S4: acquiring destination information; the navigation module generates a moving route based on the global grid map and the destination information; the intelligent shopping trolley is moved through the motion module until it reaches the destination; the autonomous navigation mode ends and step S2 is executed.
In step S3, after the automatic following mode ends, the method further comprises: uploading the characteristic data of the shopper to a cloud database.
In this embodiment, the control module is arranged at the bottom of the shopping cart body and comprises an industrial computer and a driver circuit board connected to each other through a serial port; the industrial computer runs a ROS system on a Linux kernel, and all processing is performed on the ROS system. The power supply module 4 adopts a rechargeable polymer battery which, once charged, supplies power to the control module, the detection module, the navigation module, the motion module and so on through a power switching module.
The gyroscope is an MPU9250 nine-axis sensor, the distance detection sensor may be an infrared or ultrasonic sensor, and the monocular camera 2 is a 1080p USB camera. The two driving wheels 5 of the motion module are directly driven by stepper hub motors, so the intelligent shopping trolley can move forward and backward.
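With two independently driven wheels and a trailing universal wheel 6, the trolley is kinematically a differential-drive platform. A minimal sketch of the wheel-speed computation, with wheel radius and track width as assumed values not taken from the patent:

```python
# Differential-drive wheel speeds for the two hub motors; wheel radius and
# track width are assumed values, not taken from the patent.
WHEEL_RADIUS = 0.08  # metres (assumption)
TRACK_WIDTH = 0.45   # distance between the two driving wheels (assumption)

def wheel_speeds(v, omega):
    """Map a linear velocity v (m/s) and yaw rate omega (rad/s) to left and
    right wheel angular velocities (rad/s)."""
    v_left = v - omega * TRACK_WIDTH / 2.0
    v_right = v + omega * TRACK_WIDTH / 2.0
    return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS

# e.g. reversing in a straight line at 0.3 m/s:
left, right = wheel_speeds(-0.3, 0.0)
```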
When constructing the global grid map, as shown in FIG. 2, the map is created from the data collected by the depth camera 1 and the laser radar sensor 3. The image data collected by the depth camera 1 is acquired and preprocessed (specifically, images above or below the moving range of the intelligent shopping trolley are filtered out) so that the images meet the requirements of map construction; the filtering operation then generates point cloud data, which is down-sampled to reduce the point cloud density, and the projection environment map is obtained based on the Bayesian rule. The distance data collected by the laser radar sensor 3 yields the laser radar map, and the projection environment map and the laser radar map are fused to obtain the global grid map.
Considering the limits of computer resources, obstacles outside the moving range of the intelligent shopping trolley do not affect its passage, so the installation position and angle of the depth camera 1 can be adjusted to filter out obstacles that do not affect the trolley's motion, reducing resource consumption. After the point cloud data is generated by filtering, the down-sampling operation greatly reduces the number of points, saving further computer resources.
The working modes of the intelligent shopping trolley comprise an automatic following mode and an autonomous navigation mode. In the automatic following mode, the detection module first acquires the characteristic data of the shopper and then continuously captures images; the pedestrians found in each image by the pedestrian detection model are compared with the characteristic data of the shopper to determine the shopper's position, and the navigation module and motion module then move the intelligent shopping trolley forward or backward accordingly. After the automatic following mode ends, the characteristic data of the shopper is uploaded to the cloud database, so that the next time this customer goes shopping, automatic following can be performed more accurately, improving the shopping experience.
In the autonomous navigation mode, after a destination (such as a cash register) is input, a moving route is automatically generated and the trolley moves to the destination.
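Putting steps S2 to S4 together, the top-level behaviour can be sketched as a two-mode loop; every helper called below is an assumed stand-in for a module the patent describes only functionally:

```python
# Hedged sketch of the S2-S4 loop; every helper called on `cart`
# (select_mode, acquire_shopper_feature, locate_shopper, plan_route,
# drive_along, upload_feature, ...) is an assumed stand-in for a module
# the patent describes only functionally.
def control_loop(cart):
    while cart.powered_on():
        mode = cart.select_mode()                  # S2: customer picks a mode
        if mode == "follow":
            feat = cart.acquire_shopper_feature()  # monocular camera + segmentation
            while not cart.follow_cancelled():     # S3: detect, match, move
                pos = cart.locate_shopper(feat)
                if pos is not None:
                    cart.drive_along(cart.plan_route(cart.grid_map, pos))
            cart.upload_feature(feat)              # stored for the next visit
        elif mode == "navigate":                   # S4: e.g. to the cash register
            dest = cart.read_destination()
            route = cart.plan_route(cart.grid_map, dest)
            while not cart.at(dest):
                cart.drive_along(route)
```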
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. An intelligent shopping trolley based on vision and laser radar, comprising a shopping trolley body and a control module, a detection module, a navigation module, a motion module and a power supply module (4) mounted on the shopping trolley body, wherein the control module is respectively in communication connection with the detection module, the navigation module and the motion module, and the power supply module (4) is used for providing working power for the intelligent shopping trolley, characterized in that:
the detection module comprises a depth camera (1) and a monocular camera (2), the navigation module comprises a laser radar sensor (3), a gyroscope and an accelerometer, and the motion module comprises two driving wheels (5) and a universal wheel (6).
2. The intelligent shopping trolley based on vision and laser radar as claimed in claim 1, wherein the control module determines the position of the shopper according to the characteristic data of the shopper, the trained pedestrian detection model, and the image information acquired by the detection module.
3. The intelligent shopping trolley based on vision and laser radar as claimed in claim 2, wherein the characteristic data comprises facial feature data and body feature data.
4. The intelligent shopping trolley based on vision and laser radar as claimed in claim 2, wherein the pedestrian detection model is a CNN convolutional neural network.
5. The intelligent shopping trolley based on vision and laser radar as claimed in claim 1, further comprising a cloud database, wherein the control module is in communication connection with the cloud database.
6. A method for controlling an intelligent shopping trolley based on vision and laser radar, characterized in that the method is based on the intelligent shopping trolley as claimed in any one of claims 1 to 5 and comprises the following steps:
S1: constructing a global grid map based on the depth camera (1) and the laser radar sensor (3);
S2: selecting a working mode of the intelligent shopping trolley, the working modes comprising an automatic following mode and an autonomous navigation mode; if the automatic following mode is selected, the detection module acquires the characteristic data of the shopper and step S3 is executed; otherwise step S4 is executed;
S3: the control module determines the position of the shopper according to the characteristic data of the shopper, a preset pedestrian detection model and the image information acquired by the detection module; the navigation module generates a moving route based on the global grid map and the position of the shopper; the intelligent shopping trolley is moved through the motion module; step S3 is repeated until the automatic following mode ends, and then step S2 is executed;
S4: acquiring destination information; the navigation module generates a moving route based on the global grid map and the destination information; the intelligent shopping trolley is moved through the motion module until it reaches the destination; the autonomous navigation mode ends and step S2 is executed.
7. The method for controlling the intelligent shopping trolley based on vision and laser radar as claimed in claim 6, wherein step S1 specifically comprises:
acquiring the image data collected by the depth camera (1), preprocessing it, performing a filtering operation on the preprocessed image data to generate point cloud data, performing a down-sampling operation on the point cloud data, and obtaining a projection environment map based on a Bayesian rule;
obtaining the distance data collected by the laser radar sensor (3) to obtain a laser radar map;
and fusing the projection environment map and the laser radar map to obtain the global grid map.
8. The method as claimed in claim 7, wherein the preprocessing specifically comprises filtering out, from the image data, obstacle images that are not within the moving range of the intelligent shopping trolley.
9. The method as claimed in claim 6, wherein in step S2, the detection module obtains the characteristic data of the shopper specifically as follows:
the monocular camera (2) captures images containing the shopper, extracts image information of the shopper through semantic segmentation, and extracts characteristic data of the shopper from the image information of the shopper.
10. The method as claimed in claim 6, wherein in step S3, after the automatic following mode ends, the method further comprises: uploading the characteristic data of the shopper to a cloud database.
CN202110258589.5A 2021-03-10 Intelligent shopping trolley based on vision and laser radar and control method (Pending)

Priority Applications (1)

Application Number: CN202110258589.5A; Priority Date: 2021-03-10; Filing Date: 2021-03-10; Title: Intelligent shopping trolley based on vision and laser radar and control method

Publications (1)

Publication Number: CN113093729A; Publication Date: 2021-07-09

Family

ID=76666703

Country Status (1)

Country: CN; Publication: CN113093729A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170270579A1 (en) * 2016-03-15 2017-09-21 Tier1 Technology, S.L. Robotic equipment for the location of items in a shop and operating process thereof
CN109703607A (en) * 2017-10-25 2019-05-03 北京眸视科技有限公司 A kind of Intelligent baggage car
CN108536145A (en) * 2018-04-10 2018-09-14 深圳市开心橙子科技有限公司 A kind of robot system intelligently followed using machine vision and operation method
CN108646761A (en) * 2018-07-12 2018-10-12 郑州大学 Robot indoor environment exploration, avoidance and method for tracking target based on ROS
CN109160452A (en) * 2018-10-23 2019-01-08 西安中科光电精密工程有限公司 Unmanned transhipment fork truck and air navigation aid based on laser positioning and stereoscopic vision

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112859859A (en) * 2021-01-13 2021-05-28 中南大学 Dynamic grid map updating method based on three-dimensional obstacle object pixel object mapping

Similar Documents

Publication Publication Date Title
CN109703607B (en) Intelligent luggage van
CN111399505B (en) Mobile robot obstacle avoidance method based on neural network
Bauer et al. The autonomous city explorer: Towards natural human-robot interaction in urban environments
Ran et al. Scene perception based visual navigation of mobile robot in indoor environment
CN111360780A (en) Garbage picking robot based on visual semantic SLAM
Pfeiffer et al. Modeling dynamic 3D environments by means of the stixel world
CN102393739B (en) Intelligent trolley and application method thereof
CN110874100A (en) System and method for autonomous navigation using visual sparse maps
CN109634267B (en) Be used for market supermarket intelligence to choose goods delivery robot
Sales et al. Adaptive finite state machine based visual autonomous navigation system
CN113126632B (en) Virtual wall defining and operating method, equipment and storage medium
CN111290403B (en) Transport method for carrying automatic guided transport vehicle and carrying automatic guided transport vehicle
KR20190096874A (en) Artificial intelligence robot cleaner
Yan et al. Robot perception of static and dynamic objects with an autonomous floor scrubber
CN111459172A (en) Autonomous navigation system of boundary security unmanned patrol car
CN113093729A (en) Intelligent shopping trolley based on vision and laser radar and control method
Niijima et al. Real-time autonomous navigation of an electric wheelchair in large-scale urban area with 3D map
Lei et al. Automated Lane Change Behavior Prediction and Environmental Perception Based on SLAM Technology
Xiao et al. Robotic autonomous trolley collection with progressive perception and nonlinear model predictive control
Miyagusuku et al. Toward autonomous garbage collection robots in terrains with different elevations
CN112233170A (en) Visual positioning and image processing method, device and storage medium
Prabakaran et al. sTetro-D: A deep learning based autonomous descending-stair cleaning robot
WO2023155903A1 (en) Systems and methods for generating road surface semantic segmentation map from sequence of point clouds
Li et al. Obstacle information detection method based on multiframe three-dimensional lidar point cloud fusion
Uzawa et al. Dataset Generation for Deep Visual Navigation in Unstructured Environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2021-07-09