CN106873566A - A kind of unmanned logistic car based on deep learning - Google Patents
A kind of unmanned logistic car based on deep learning
- Publication number
- CN106873566A CN106873566A CN201710146233.6A CN201710146233A CN106873566A CN 106873566 A CN106873566 A CN 106873566A CN 201710146233 A CN201710146233 A CN 201710146233A CN 106873566 A CN106873566 A CN 106873566A
- Authority
- CN
- China
- Prior art keywords
- logistic car
- module
- layer
- deep learning
- convolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000013135 deep learning Methods 0.000 title claims abstract description 11
- 238000013136 deep learning model Methods 0.000 claims abstract description 25
- 238000012545 processing Methods 0.000 claims abstract description 15
- 230000004888 barrier function Effects 0.000 claims abstract description 11
- 230000006870 function Effects 0.000 claims abstract description 7
- 238000012360 testing method Methods 0.000 claims description 15
- 238000012549 training Methods 0.000 claims description 13
- 230000004913 activation Effects 0.000 claims description 5
- 230000005611 electricity Effects 0.000 claims description 2
- 210000002569 neuron Anatomy 0.000 claims description 2
- 238000010606 normalization Methods 0.000 claims description 2
- 238000011017 operating method Methods 0.000 claims description 2
- 238000013527 convolutional neural network Methods 0.000 claims 1
- 238000013519 translation Methods 0.000 claims 1
- 238000010586 diagram Methods 0.000 description 4
- 230000033001 locomotion Effects 0.000 description 3
- 238000010276 construction Methods 0.000 description 2
- 238000001514 detection method Methods 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000000034 method Methods 0.000 description 2
- 230000008901 benefit Effects 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 230000001537 neural effect Effects 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
- 238000002054 transplantation Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/42—Recording and playback systems, i.e. in which the programme is recorded from a cycle of operations, e.g. the cycle of operations being manually controlled, after which this record is played back on the same machine
- G05B19/425—Teaching successive positions by numerical control, i.e. commands being entered to control the positioning servo of the tool head or end effector
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0251—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0255—Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultra-sonic signals
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Aviation & Aerospace Engineering (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Electromagnetism (AREA)
- Acoustics & Sound (AREA)
- Robotics (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Navigation (AREA)
Abstract
The present invention relates to an unmanned logistics vehicle based on deep learning, comprising a vehicle body, an ultrasonic obstacle-avoidance module, a binocular stereo vision obstacle-avoidance module, a motor drive module, an embedded system, a power module, and a vision navigation processing system. The binocular stereo vision module detects distant obstacles in the road scene, while the ultrasonic module detects nearby obstacles; the distance information the two modules acquire about obstacles is collectively called the avoidance information. The vision navigation processing system processes the collected road image data with a deep learning model trained on a sample set and outputs control command information. Finally, a decision module weighs the control command information against the avoidance information and controls the motor drive module accordingly, realizing unmanned operation of the logistics vehicle. The invention requires no auxiliary guidance equipment: once the deep learning model has learned from the sample set, it can perceive and understand the road environment and drive the vehicle without human intervention.
Description
Technical field
The invention belongs to the field of unmanned driving and relates to an unmanned logistics vehicle based on deep learning, applicable to public places such as large campuses, warehouses, railway stations, airports, and harbors.
Background art
With the rapid development of logistics, and in particular the steadily rising freight volumes of warehouse handling, express delivery, and food delivery, unmanned logistics vehicles face great development potential and a huge market.
At present, however, warehouse handling mostly relies on AGVs (automated guided vehicles) guided by electromagnetic induction, magnetic tape, or laser navigation, while express and food delivery still depend largely on human labor. The former achieves unmanned transport but is generally limited to flat, clean indoor environments; the latter consumes considerable manpower and material resources, making delivery costly. Although AGVs can transport goods without a driver, they place strict demands on the site: auxiliary guidance equipment (magnetic strips, color bands, reflectors, etc.) must be installed, construction periods are long, and investment costs are high.
Most current driverless cars use lidar as the primary navigation sensor, but lidar is expensive and the features it extracts are sparse, which hinders scene understanding and perception. Moreover, a logistics vehicle working in a campus or warehouse typically only needs to run smoothly at low speed. Vision-based navigation requires low investment and a short construction period, extracts rich image features, and its processing speed meets the system's real-time requirements, making it well suited to an unmanned logistics vehicle.
The unmanned logistics vehicle of the invention takes a deep learning model as its core: it collects image data of the road environment in real time, perceives the surroundings through the deep learning model, and issues control command information; a decision module then weighs this information against the avoidance information provided by the binocular stereo vision obstacle-avoidance module and the ultrasonic obstacle-avoidance module, and controls the motor drive module to realize unmanned operation of the vehicle.
Summary of the invention
The present invention makes full use of the respective strengths of visual detection and deep learning and proposes an unmanned logistics vehicle based on deep learning. Compared with existing AGVs (automated guided vehicles) that rely on auxiliary guidance, the unmanned logistics vehicle of the invention is easier to install and debug, cheaper, more flexible, and has smaller navigation control deviation; it is also suited to more complex indoor or outdoor working road scenes.
To achieve the purpose of the invention, the technical scheme adopted is as follows:
An unmanned logistics vehicle based on deep learning, comprising an external structure and an internal structure.
The external structure mainly consists of the vehicle body, the ultrasonic obstacle-avoidance module, and the binocular stereo vision obstacle-avoidance module. The vehicle body is provided with five drawer doors in total for storing goods: two on each side and one at the rear. The bottom of the vehicle body is equipped with four Mecanum wheels, enabling omnidirectional rotation in place. Each side of the vehicle head carries a set of ultrasonic obstacle-avoidance modules for protective close-range ranging. Cameras A and B are mounted in the middle of the vehicle head and together form the binocular stereo vision obstacle-avoidance module, which detects distant obstacles in the road scene, while the ultrasonic module detects nearby obstacles; the distance information the two modules acquire about obstacles is collectively called the avoidance information of the vehicle. Camera A, analogous to a person's dominant eye, provides image data for the vision navigation processing system.
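The division of labor between the two sensing modules follows from the binocular geometry: a distant obstacle produces a small disparity between cameras A and B, while a near obstacle produces a large one. A minimal sketch of depth from disparity, where the focal length and baseline are illustrative assumptions rather than values from the patent:

```python
def stereo_depth(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Depth from binocular disparity: Z = f * B / d.

    focal_px (camera focal length in pixels) and baseline_m (distance
    between cameras A and B) are assumed illustrative values.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A distant obstacle yields a small disparity, a near one a large disparity:
far_m = stereo_depth(4.0)    # -> 21.0 m
near_m = stereo_depth(60.0)  # -> 1.4 m
```

Because depth resolution degrades at very short range (tiny disparity errors dominate), a close-range ultrasonic module is a natural complement to the stereo pair.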
The internal structure mainly consists of the motor drive module, the embedded system, and the power module. The motor drive module drives the DC brushless motors, which in turn drive the Mecanum wheels through belts. The embedded system collects image data, hosts the vision navigation processing system, and controls the motor drive module. The power module uses a battery pack to supply power to the whole system.
The unmanned logistics vehicle accomplishes driverless operation in the road working environment by means of the vision navigation processing system, which mainly consists of the deep learning model, the decision module, and the sample set; the sample set is further divided into a training set and a test set.
The deep learning model is established as follows:
Step 1: The road environment video images collected in advance by the vehicle, together with the remote-control command information issued by the operator, are copied to a computer, and the sample set is divided into a training set and a test set at a ratio of 9:1.
Step 2: The training set is used to train the deep learning model and the test set is used to evaluate it; the model's parameters are adjusted and tested repeatedly while observing the test error, until the control accuracy satisfies the system's requirements. The required deep learning model is thereby obtained.
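The 9:1 split of Step 1 can be sketched as follows; the file names and the pairing of frames with command labels are illustrative assumptions:

```python
import random

def split_samples(samples, train_ratio=0.9, seed=0):
    """Shuffle (image, command) pairs and split them 9:1 into a
    training set and a test set, as described in Step 1."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# e.g. 1000 video frames paired with the operator's remote-control
# command classes 0-5 (hypothetical file names)
samples = [(f"frame_{i:04d}.png", i % 6) for i in range(1000)]
train_set, test_set = split_samples(samples)
print(len(train_set), len(test_set))  # 900 100
```

Shuffling before the split keeps both sets representative of the whole recorded drive rather than of one stretch of road.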
Based on deep learning theory, the invention learns the driving experience of skilled operators and, through repeated parameter adjustment and optimization, trains a deep learning model that meets the system's requirements. Through this model the logistics vehicle can perceive the road surroundings and obtain the system's control command information; the decision module then combines this with the vehicle's avoidance information, issues decision command information to control the motor drive module, and realizes unmanned operation.
Brief description of the drawings
Fig. 1 is the external structure diagram of the vehicle body of the invention.
Fig. 2 is the internal structure diagram of the vehicle body of the invention.
Fig. 3 is the structure diagram of the deep learning model of the invention.
Fig. 4 is the system architecture function diagram of the invention.
Fig. 5 is the system function implementation flowchart of the invention.
In the figures: 100 is the vehicle body, 200 a drawer door, 300 a Mecanum wheel, 400 the ultrasonic obstacle-avoidance module, 500 the binocular stereo vision obstacle-avoidance module, 501 camera A, 502 camera B, 600 a DC brushless motor, 701 the embedded system, 702 the motor drive module, 800 the power module, 900 the sample set, 901 the training set, 902 the test set, 903 the deep learning model, 904 the decision module, and 905 the vision navigation processing system.
Specific embodiments
The specific embodiments of the invention are described in detail below in conjunction with the technical scheme and the accompanying drawings.
Fig. 1 is a schematic diagram of the external structure of the vehicle body. As shown in Fig. 1, two drawer doors 200 are mounted on each side of the vehicle body 100 and one at the rear, giving five drawer doors 200 in total for storing goods. The bottom of the vehicle body 100 is equipped with four Mecanum wheels 300, enabling four-wheel omnidirectional driving. Each side of the head of the vehicle body 100 carries a set of ultrasonic obstacle-avoidance modules 400 for protective close-range ranging. Camera A 501 and camera B 502 are mounted in the middle of the head of the vehicle body 100 and together form the binocular stereo vision obstacle-avoidance module 500 (shown in Fig. 4), which acquires three-dimensional spatial information of the road surroundings to realize the vision avoidance function. Camera A 501, analogous to a person's dominant eye, provides image data for the vision navigation processing system 905 (shown in Fig. 4).
Fig. 2 is a schematic diagram of the internal structure of the vehicle body. As can be seen from Fig. 2, the embedded system 701 collects image data through camera A 501 and, after processing by the vision navigation processing system 905 (shown in Fig. 4), controls the motor drive module 702. The motor drive module 702 drives the DC brushless motors 600, which drive the Mecanum wheels 300 (shown in Fig. 1) through belts. The power module 800 uses a battery pack to supply power to the system.
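The four Mecanum wheels are what let the six remote-control commands include sideways translation and turning in place: each body velocity maps to a set of wheel speeds. A minimal inverse-kinematics sketch for a standard X-configuration Mecanum base, where the wheel ordering and geometry constants are assumptions rather than values from the patent:

```python
def mecanum_wheel_speeds(vx, vy, wz, lx=0.25, ly=0.20):
    """Wheel speeds (front-left, front-right, rear-left, rear-right)
    for a body velocity (vx forward, vy left, wz counter-clockwise).

    Standard X-configuration Mecanum kinematics; lx/ly are assumed
    half-distances between the wheel axles, and the wheel radius is
    folded into the units.
    """
    k = lx + ly
    fl = vx - vy - k * wz
    fr = vx + vy + k * wz
    rl = vx + vy - k * wz
    rr = vx - vy + k * wz
    return fl, fr, rl, rr

print(mecanum_wheel_speeds(1.0, 0.0, 0.0))  # forward: all wheels equal
print(mecanum_wheel_speeds(0.0, 1.0, 0.0))  # translate left: alternating signs
print(mecanum_wheel_speeds(0.0, 0.0, 1.0))  # rotate in place
```

The sign pattern is what distinguishes, e.g., "translate left (2)" from "turn left (1)" at the motor drive module.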
In the specific implementation of the deep learning model, as shown in Fig. 3, a deep convolutional neural network is used. Its layers are connected in the following order: input layer; first convolution plus activation layer; first max-pooling layer; second convolution plus activation layer; second max-pooling layer; third convolution plus activation layer; third max-pooling layer; first fully connected layer plus activation plus Dropout layer; second fully connected layer; output layer. As shown in Fig. 4, the image collected by camera A 501 serves as the input layer, with an input size of 200 × 160 × 1. According to Table 1, the input layer is convolved with a 5 × 5 kernel, 20 filters, a stride of 1, and a padding of 1, giving a first convolutional layer of size 196 × 156 × 20. After the first convolutional layer is processed by the ReLU activation function, it serves as the input of the first max-pooling layer, which uses a 2 × 2 pooling kernel, 20 filters, and a stride of 2 for down-sampling, giving a first max-pooling layer of size 98 × 78 × 20. This completes the first convolution and pooling operation.
Next, the second and third convolution and pooling operations follow the procedure of the first. Looking up their parameters in Table 1 in turn, the second convolutional layer has size 94 × 74 × 40, the second max-pooling layer 47 × 37 × 40, the third convolutional layer 44 × 34 × 60, and the third max-pooling layer 22 × 17 × 60. After the three rounds of convolution and pooling, the first fully connected layer, a ReLU layer, a Dropout layer, and the second fully connected layer are connected in sequence; the Dropout layer randomly suppresses some neurons to prevent overfitting during training. The input of the first fully connected layer is 22 × 17 × 60 and its output is 400; the ReLU and Dropout layers keep the size unchanged at 400; the input of the second fully connected layer is 400 and its output is 6.
Finally, after data normalization by the Softmax layer, the output layer produces 6 categories. The control commands fall into 6 classes, corresponding in turn to the commands of the operator's remote control: forward (0), turn left (1), translate left (2), turn right (3), translate right (4), and U-turn (5) — six commands in all.
Table 1 (layer parameters as stated in the description; the 4 × 4 kernel of the third convolution is inferred from the stated output sizes)

Layer | Kernel | Filters | Stride | Output size
---|---|---|---|---
Input | — | — | — | 200 × 160 × 1
Conv 1 + ReLU | 5 × 5 | 20 | 1 | 196 × 156 × 20
Max pool 1 | 2 × 2 | 20 | 2 | 98 × 78 × 20
Conv 2 + ReLU | 5 × 5 | 40 | 1 | 94 × 74 × 40
Max pool 2 | 2 × 2 | 40 | 2 | 47 × 37 × 40
Conv 3 + ReLU | 4 × 4 | 60 | 1 | 44 × 34 × 60
Max pool 3 | 2 × 2 | 60 | 2 | 22 × 17 × 60
FC 1 + ReLU + Dropout | — | — | — | 400
FC 2 | — | — | — | 6
Softmax output | — | — | — | 6
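The layer sizes quoted above can be checked mechanically. A pure-Python sketch of the output-size arithmetic; note that although the text mentions a padding of 1, the stated sizes (200 → 196 with a 5 × 5 kernel, etc.) are only consistent with no zero-padding, so the sketch assumes padding 0, and the 4 × 4 third kernel is likewise inferred from the sizes:

```python
def conv_out(h, w, k, s=1, p=0):
    """Output height/width of a convolution or pooling layer:
    out = (in + 2p - k) // s + 1."""
    return (h + 2 * p - k) // s + 1, (w + 2 * p - k) // s + 1

h, w = 200, 160                  # input image, 200 x 160 x 1
h, w = conv_out(h, w, k=5)       # conv 1: 5x5, 20 filters, stride 1
assert (h, w) == (196, 156)
h, w = conv_out(h, w, k=2, s=2)  # max pool 1: 2x2, stride 2
assert (h, w) == (98, 78)
h, w = conv_out(h, w, k=5)       # conv 2: 5x5, 40 filters
assert (h, w) == (94, 74)
h, w = conv_out(h, w, k=2, s=2)  # max pool 2
assert (h, w) == (47, 37)
h, w = conv_out(h, w, k=4)       # conv 3: 4x4 (inferred), 60 filters
assert (h, w) == (44, 34)
h, w = conv_out(h, w, k=2, s=2)  # max pool 3
assert (h, w) == (22, 17)
print(h * w * 60)                # flattened input to the first FC layer
```

The flattened 22 × 17 × 60 tensor (22 440 values) feeds the first fully connected layer with output 400, which the second fully connected layer reduces to the 6 command classes.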
Fig. 4 is the system architecture function diagram. As can be seen from Fig. 4, the vision navigation processing system 905 comprises the deep learning model 903 and the decision module 904. The deep learning model 903 is obtained by training on the sample set 900, and its input images are provided by camera A 501. The sample set 900 consists of the road environment video images collected by camera A 501 while the vehicle is moving, together with the operator's remote-control commands for the vehicle; the sample set 900 is divided at a ratio of 9:1 into the training set 901 and the test set 902. The input data of the decision module 904 are provided by the deep learning model 903, the binocular stereo vision obstacle-avoidance module 500, and the ultrasonic obstacle-avoidance module 400; its output, the decision command information, is transmitted to the motor drive module 702, realizing autonomous motion control of the logistics vehicle.
As shown in Fig. 5, the working process of the unmanned logistics vehicle of this embodiment is as follows:
Step 1: The road environment video images collected by the vehicle, together with the operator's remote-control command information, are first copied to a computer, and the sample set is divided into a training set and a test set at a ratio of 9:1.
Step 2: On the computer, the deep learning model is trained with the collected training set and then evaluated with the test set; its parameters are adjusted and the model retested repeatedly until the output control accuracy satisfies the system's control requirements. The resulting deep learning model is finally ported to the embedded system.
Step 3: The binocular stereo vision obstacle-avoidance module detects distant obstacles in the road scene and the ultrasonic obstacle-avoidance module detects nearby obstacles; the obstacle information the two acquire is collectively called the vehicle's avoidance information.
Step 4: The road environment images collected in real time through camera A are fed to the deep learning model, which outputs control command information after processing. The decision module then weighs the control command information against the vehicle's avoidance information, issues decision command information to control the motor drive module, and realizes unmanned operation of the logistics vehicle.
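The decision logic of Step 4 — letting near-range avoidance information override the model's command — might be sketched as follows. The distance thresholds, the speed scaling, and the stop behavior are illustrative assumptions; the patent does not specify how the decision module arbitrates:

```python
# Command classes from the deep learning model's Softmax output layer
COMMANDS = ["forward", "turn_left", "translate_left",
            "turn_right", "translate_right", "u_turn"]

def decide(model_class, ultrasonic_m, stereo_m,
           stop_dist=0.5, slow_dist=2.0):
    """Combine the CNN's command with the avoidance information.

    model_class: class index 0-5 from the model's output layer.
    ultrasonic_m / stereo_m: nearest obstacle distances in meters from
    the ultrasonic and binocular stereo vision modules.
    Returns (command, speed_scale) for the motor drive module.
    """
    nearest = min(ultrasonic_m, stereo_m)
    if nearest < stop_dist:   # imminent collision: override and stop
        return "stop", 0.0
    if nearest < slow_dist:   # obstacle ahead: keep command, slow down
        return COMMANDS[model_class], 0.4
    return COMMANDS[model_class], 1.0

print(decide(0, ultrasonic_m=3.0, stereo_m=8.0))  # ('forward', 1.0)
print(decide(0, ultrasonic_m=0.3, stereo_m=8.0))  # ('stop', 0.0)
```

Taking the minimum of the two sensor distances means whichever module sees the closer obstacle — ultrasonic at short range, stereo vision at long range — drives the override.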
Claims (2)
1. An unmanned logistics vehicle based on deep learning, comprising an external structure and an internal structure, characterized in that:
the external structure mainly consists of the vehicle body, an ultrasonic obstacle-avoidance module, and a binocular stereo vision obstacle-avoidance module; the vehicle body is provided with five drawer doors in total for storing goods, two on each side and one at the rear; the bottom of the vehicle body is equipped with four Mecanum wheels, enabling omnidirectional rotation in place; each side of the vehicle head carries a set of ultrasonic obstacle-avoidance modules for protective close-range ranging; cameras A and B are mounted in the middle of the vehicle head and together form the binocular stereo vision obstacle-avoidance module, which detects distant obstacles in the road scene, while the ultrasonic obstacle-avoidance module detects nearby obstacles; the distance information the two modules acquire about obstacles is called the avoidance information of the vehicle; camera A, analogous to a person's dominant eye, provides image data for the vision navigation processing system;
the internal structure mainly consists of a motor drive module, an embedded system, and a power module; the motor drive module drives DC brushless motors, which in turn drive the Mecanum wheels through belts; the embedded system collects image data, hosts the vision navigation processing system, and controls the motor drive module; the power module uses a battery pack to supply power to the system;
the unmanned logistics vehicle accomplishes driverless operation in the road working environment by means of the vision navigation processing system, which mainly consists of a deep learning model, a decision module, and a sample set, the sample set being further divided into a training set and a test set;
the deep learning model is established as follows:
Step 1: the road environment video images collected in advance by the vehicle, together with the operator's remote-control command information, are copied to a computer, and the sample set is divided into a training set and a test set at a ratio of 9:1;
Step 2: the training set is used to train the deep learning model and the test set is used to evaluate it; the model's parameters are adjusted and tested repeatedly while observing the test error, until the control accuracy satisfies the system's requirements; the required deep learning model is thereby obtained.
2. The unmanned logistics vehicle according to claim 1, characterized in that:
the deep learning model uses a deep convolutional neural network whose layers are connected in the following order: input layer; first convolution plus activation layer; first max-pooling layer; second convolution plus activation layer; second max-pooling layer; third convolution plus activation layer; third max-pooling layer; first fully connected layer plus activation plus Dropout layer; second fully connected layer; output layer;
the image collected by camera A serves as the input layer, with an input size of 200 × 160 × 1; the input layer is convolved with a 5 × 5 kernel, 20 filters, a stride of 1, and a padding of 1, giving a first convolutional layer of size 196 × 156 × 20; after the first convolutional layer is processed by the ReLU activation function, it serves as the input of the first max-pooling layer, which uses a 2 × 2 pooling kernel, 20 filters, and a stride of 2 for down-sampling, giving a first max-pooling layer of size 98 × 78 × 20 and completing the first convolution and pooling operation; the second and third convolution and pooling operations follow the procedure of the first, their parameters being looked up in Table 1 in turn: the second convolutional layer has size 94 × 74 × 40, the second max-pooling layer 47 × 37 × 40, the third convolutional layer 44 × 34 × 60, and the third max-pooling layer 22 × 17 × 60; after the three rounds of convolution and pooling, the first fully connected layer, a ReLU layer, a Dropout layer, and the second fully connected layer are connected in sequence, the Dropout layer randomly suppressing some neurons to prevent overfitting during training; the input of the first fully connected layer is 22 × 17 × 60 and its output is 400; the ReLU and Dropout layers keep the size unchanged at 400; the input of the second fully connected layer is 400 and its output is 6; finally, after data normalization by the Softmax layer, the output layer produces 6 categories, the control commands falling into 6 classes corresponding in turn to the commands of the operator's remote control: forward (0), turn left (1), translate left (2), turn right (3), translate right (4), and U-turn (5) — six commands in all.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710146233.6A CN106873566B (en) | 2017-03-14 | 2017-03-14 | A kind of unmanned logistic car based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710146233.6A CN106873566B (en) | 2017-03-14 | 2017-03-14 | A kind of unmanned logistic car based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106873566A true CN106873566A (en) | 2017-06-20 |
CN106873566B CN106873566B (en) | 2019-01-22 |
Family
ID=59170774
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710146233.6A Active CN106873566B (en) | 2017-03-14 | 2017-03-14 | A kind of unmanned logistic car based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106873566B (en) |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107450593A (en) * | 2017-08-30 | 2017-12-08 | 清华大学 | A kind of unmanned plane autonomous navigation method and system |
CN107491072A (en) * | 2017-09-05 | 2017-12-19 | 百度在线网络技术(北京)有限公司 | Vehicle obstacle-avoidance method and apparatus |
CN107515607A (en) * | 2017-09-05 | 2017-12-26 | 百度在线网络技术(北京)有限公司 | Control method and device for unmanned vehicle |
CN107544518A (en) * | 2017-10-17 | 2018-01-05 | 芜湖伯特利汽车安全系统股份有限公司 | The ACC/AEB systems and vehicle driven based on personification |
CN107767487A (en) * | 2017-09-05 | 2018-03-06 | 百度在线网络技术(北京)有限公司 | A kind of method and apparatus for determining data acquisition route |
CN107826105A (en) * | 2017-10-31 | 2018-03-23 | 清华大学 | Translucent automatic Pilot artificial intelligence system and vehicle |
CN107967468A (en) * | 2018-01-19 | 2018-04-27 | 刘至键 | A kind of supplementary controlled system based on pilotless automobile |
CN108897313A (en) * | 2018-05-23 | 2018-11-27 | 清华大学 | A kind of end-to-end Vehicular automatic driving system construction method of layer-stepping |
CN109116374A (en) * | 2017-06-23 | 2019-01-01 | 百度在线网络技术(北京)有限公司 | Determine the method, apparatus, equipment and storage medium of obstacle distance |
CN109240123A (en) * | 2018-10-09 | 2019-01-18 | 合肥学院 | On-loop simulation method and system for intelligent logistics vehicle |
CN109358614A (en) * | 2018-08-30 | 2019-02-19 | 深圳市易成自动驾驶技术有限公司 | Automatic Pilot method, system, device and readable storage medium storing program for executing |
CN109388135A (en) * | 2017-08-14 | 2019-02-26 | 通用汽车环球科技运作有限责任公司 | The autonomous operation learnt using depth space-time |
CN109407679A (en) * | 2018-12-28 | 2019-03-01 | 百度在线网络技术(北京)有限公司 | Method and apparatus for controlling pilotless automobile |
WO2019047649A1 (en) * | 2017-09-05 | 2019-03-14 | 百度在线网络技术(北京)有限公司 | Method and device for determining driving behavior of unmanned vehicle |
CN109754625A (en) * | 2017-11-07 | 2019-05-14 | 天津工业大学 | Take out the drive manner of vehicle in a kind of unpiloted campus |
CN109933081A (en) * | 2017-12-15 | 2019-06-25 | 北京京东尚科信息技术有限公司 | Unmanned plane barrier-avoiding method, avoidance unmanned plane and unmanned plane obstacle avoidance apparatus |
WO2019179094A1 (en) * | 2018-03-23 | 2019-09-26 | 广州汽车集团股份有限公司 | Method and apparatus for maintaining driverless driveway, computer device, and storage medium |
CN110473806A (en) * | 2019-07-13 | 2019-11-19 | 河北工业大学 | The deep learning identification of photovoltaic cell sorting and control method and device |
CN110618678A (en) * | 2018-06-19 | 2019-12-27 | 辉达公司 | Behavioral guided path planning in autonomous machine applications |
CN110646574A (en) * | 2019-10-08 | 2020-01-03 | 张家港江苏科技大学产业技术研究院 | Unmanned ship-based water quality conductivity autonomous detection system and method |
CN110673602A (en) * | 2019-10-24 | 2020-01-10 | 驭势科技(北京)有限公司 | Reinforced learning model, vehicle automatic driving decision method and vehicle-mounted equipment |
CN111142519A (en) * | 2019-12-17 | 2020-05-12 | 西安工业大学 | Automatic driving system based on computer vision and ultrasonic radar redundancy and control method thereof |
WO2020177417A1 (en) * | 2019-03-01 | 2020-09-10 | 北京三快在线科技有限公司 | Unmanned device control and model training |
CN111898732A (en) * | 2020-06-30 | 2020-11-06 | 江苏省特种设备安全监督检验研究院 | Ultrasonic ranging compensation method based on deep convolutional neural network |
CN111975769A (en) * | 2020-07-16 | 2020-11-24 | 华南理工大学 | Mobile robot obstacle avoidance method based on meta-learning |
CN112835333A (en) * | 2020-12-31 | 2021-05-25 | 北京工商大学 | Multi-AGV obstacle avoidance and path planning method and system based on deep reinforcement learning |
CN113470416A (en) * | 2020-03-31 | 2021-10-01 | 上汽通用汽车有限公司 | System, method and storage medium for realizing parking space detection by using embedded system |
CN114509077A (en) * | 2020-11-16 | 2022-05-17 | 阿里巴巴集团控股有限公司 | Method, device, system and computer program product for generating navigation guide line |
WO2023124572A1 (en) * | 2021-12-31 | 2023-07-06 | 上海邦邦机器人有限公司 | Driving assistance system and method applied to scooter for elderly person, and storage medium |
2017
- 2017-03-14 CN CN201710146233.6A patent/CN106873566B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103129468A (en) * | 2013-02-19 | 2013-06-05 | 河海大学常州校区 | Vehicle-mounted roadblock recognition system and method based on laser imaging technique |
CN103514456A (en) * | 2013-06-30 | 2014-01-15 | 安科智慧城市技术(中国)有限公司 | Image classification method and device based on compressed sensing multi-core learning |
CN105629221A (en) * | 2014-10-26 | 2016-06-01 | 江西理工大学 | Logistics vehicle wireless-infrared-ultrasonic distance-measuring and positioning system |
CN104715021A (en) * | 2015-02-27 | 2015-06-17 | 南京邮电大学 | Multi-label learning design method based on hashing method |
CN104793620A (en) * | 2015-04-17 | 2015-07-22 | 中国矿业大学 | Obstacle avoidance robot based on visual feature binding and reinforcement learning theory |
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109116374A (en) * | 2017-06-23 | 2019-01-01 | 百度在线网络技术(北京)有限公司 | Method, apparatus, device and storage medium for determining obstacle distance |
CN109388135A (en) * | 2017-08-14 | 2019-02-26 | 通用汽车环球科技运作有限责任公司 | Autonomous operation using deep spatio-temporal learning |
CN107450593A (en) * | 2017-08-30 | 2017-12-08 | 清华大学 | Unmanned aerial vehicle autonomous navigation method and system |
CN107450593B (en) * | 2017-08-30 | 2020-06-12 | 清华大学 | Unmanned aerial vehicle autonomous navigation method and system |
CN107767487A (en) * | 2017-09-05 | 2018-03-06 | 百度在线网络技术(北京)有限公司 | Method and apparatus for determining a data acquisition route |
CN107491072B (en) * | 2017-09-05 | 2021-03-30 | 百度在线网络技术(北京)有限公司 | Vehicle obstacle avoidance method and device |
CN107491072A (en) * | 2017-09-05 | 2017-12-19 | 百度在线网络技术(北京)有限公司 | Vehicle obstacle avoidance method and device |
WO2019047649A1 (en) * | 2017-09-05 | 2019-03-14 | 百度在线网络技术(北京)有限公司 | Method and device for determining driving behavior of unmanned vehicle |
CN107515607A (en) * | 2017-09-05 | 2017-12-26 | 百度在线网络技术(北京)有限公司 | Control method and device for unmanned vehicle |
CN107544518A (en) * | 2017-10-17 | 2018-01-05 | 芜湖伯特利汽车安全系统股份有限公司 | ACC/AEB system based on anthropomorphic driving and vehicle |
CN107544518B (en) * | 2017-10-17 | 2020-12-01 | 芜湖伯特利汽车安全系统股份有限公司 | ACC/AEB system based on anthropomorphic driving and vehicle |
CN107826105A (en) * | 2017-10-31 | 2018-03-23 | 清华大学 | Translucent autonomous driving artificial intelligence system and vehicle |
CN109754625A (en) * | 2017-11-07 | 2019-05-14 | 天津工业大学 | Driving method for an unmanned campus food-delivery vehicle |
CN109933081A (en) * | 2017-12-15 | 2019-06-25 | 北京京东尚科信息技术有限公司 | Unmanned aerial vehicle obstacle avoidance method, obstacle-avoidance unmanned aerial vehicle, and unmanned aerial vehicle obstacle avoidance apparatus |
CN107967468A (en) * | 2018-01-19 | 2018-04-27 | 刘至键 | Auxiliary control system based on a driverless car |
US11505187B2 (en) | 2018-03-23 | 2022-11-22 | Guangzhou Automobile Group Co., Ltd. | Unmanned lane keeping method and device, computer device, and storage medium |
WO2019179094A1 (en) * | 2018-03-23 | 2019-09-26 | 广州汽车集团股份有限公司 | Method and apparatus for driverless lane keeping, computer device, and storage medium |
CN108897313A (en) * | 2018-05-23 | 2018-11-27 | 清华大学 | Hierarchical end-to-end vehicle autonomous driving system construction method |
CN110618678A (en) * | 2018-06-19 | 2019-12-27 | 辉达公司 | Behavioral guided path planning in autonomous machine applications |
US11966838B2 (en) | 2018-06-19 | 2024-04-23 | Nvidia Corporation | Behavior-guided path planning in autonomous machine applications |
CN109358614A (en) * | 2018-08-30 | 2019-02-19 | 深圳市易成自动驾驶技术有限公司 | Autonomous driving method, system, device and readable storage medium |
CN109240123A (en) * | 2018-10-09 | 2019-01-18 | 合肥学院 | On-loop simulation method and system for intelligent logistics vehicle |
CN109407679A (en) * | 2018-12-28 | 2019-03-01 | 百度在线网络技术(北京)有限公司 | Method and apparatus for controlling a driverless car |
WO2020177417A1 (en) * | 2019-03-01 | 2020-09-10 | 北京三快在线科技有限公司 | Unmanned device control and model training |
CN110473806A (en) * | 2019-07-13 | 2019-11-19 | 河北工业大学 | Deep learning recognition and control method and device for photovoltaic cell sorting |
CN110646574B (en) * | 2019-10-08 | 2022-02-08 | 张家港江苏科技大学产业技术研究院 | Unmanned ship-based water quality conductivity autonomous detection system and method |
CN110646574A (en) * | 2019-10-08 | 2020-01-03 | 张家港江苏科技大学产业技术研究院 | Unmanned ship-based water quality conductivity autonomous detection system and method |
CN110673602A (en) * | 2019-10-24 | 2020-01-10 | 驭势科技(北京)有限公司 | Reinforcement learning model, vehicle autonomous driving decision method and vehicle-mounted device |
CN110673602B (en) * | 2019-10-24 | 2022-11-25 | 驭势科技(北京)有限公司 | Reinforcement learning model, vehicle autonomous driving decision method and vehicle-mounted device |
CN111142519A (en) * | 2019-12-17 | 2020-05-12 | 西安工业大学 | Automatic driving system based on computer vision and ultrasonic radar redundancy and control method thereof |
CN113470416A (en) * | 2020-03-31 | 2021-10-01 | 上汽通用汽车有限公司 | System, method and storage medium for realizing parking space detection by using embedded system |
CN113470416B (en) * | 2020-03-31 | 2023-02-17 | 上汽通用汽车有限公司 | System, method and storage medium for realizing parking space detection by using embedded system |
CN111898732B (en) * | 2020-06-30 | 2023-06-20 | 江苏省特种设备安全监督检验研究院 | Ultrasonic ranging compensation method based on deep convolutional neural network |
CN111898732A (en) * | 2020-06-30 | 2020-11-06 | 江苏省特种设备安全监督检验研究院 | Ultrasonic ranging compensation method based on deep convolutional neural network |
CN111975769A (en) * | 2020-07-16 | 2020-11-24 | 华南理工大学 | Mobile robot obstacle avoidance method based on meta-learning |
CN114509077A (en) * | 2020-11-16 | 2022-05-17 | 阿里巴巴集团控股有限公司 | Method, device, system and computer program product for generating navigation guide line |
CN112835333B (en) * | 2020-12-31 | 2022-03-15 | 北京工商大学 | Multi-AGV obstacle avoidance and path planning method and system based on deep reinforcement learning |
CN112835333A (en) * | 2020-12-31 | 2021-05-25 | 北京工商大学 | Multi-AGV obstacle avoidance and path planning method and system based on deep reinforcement learning |
WO2023124572A1 (en) * | 2021-12-31 | 2023-07-06 | 上海邦邦机器人有限公司 | Driving assistance system and method applied to scooter for elderly person, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN106873566B (en) | 2019-01-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106873566A (en) | A kind of unmanned logistic car based on deep learning | |
US11363929B2 (en) | Apparatus and methods for programming and training of robotic household appliances | |
Gandhi et al. | Learning to fly by crashing | |
US20190009408A1 (en) | Apparatus and methods for programming and training of robotic devices | |
CN115100622B (en) | Method for detecting driving area of unmanned transportation equipment in deep limited space and automatically avoiding obstacle | |
CN111399505B (en) | Mobile robot obstacle avoidance method based on neural network | |
CN106054896A (en) | Intelligent navigation robot cart system |
CN110362083A (en) | Autonomous navigation method based on spatio-temporal maps from multi-object tracking prediction |
CN114474061A (en) | Robot multi-sensor fusion positioning navigation system and method based on cloud service | |
US20220039313A1 (en) | Autonomous lawn mower | |
US20220153310A1 (en) | Automatic Annotation of Object Trajectories in Multiple Dimensions | |
DE102022114201A1 (en) | Neural network for object detection and tracking | |
Tang et al. | An overview of path planning algorithms | |
Chen et al. | A review of autonomous obstacle avoidance technology for multi-rotor UAVs | |
Gajjar et al. | A comprehensive study on lane detecting autonomous car using computer vision | |
Hayajneh et al. | Automatic UAV wireless charging over solar vehicle to enable frequent flight missions | |
US12008762B2 (en) | Systems and methods for generating a road surface semantic segmentation map from a sequence of point clouds | |
Dong et al. | A vision-based method for improving the safety of self-driving | |
CN108490935A (en) | Four-wheel-drive low-speed unmanned cruiser system and working method |
Soto et al. | Cyber-ATVs: Dynamic and Distributed Reconnaissance and Surveillance Using All-Terrain UGVs | |
CN114326821B (en) | Unmanned aerial vehicle autonomous obstacle avoidance system and method based on deep reinforcement learning | |
Helble et al. | OATS: Oxford aerial tracking system | |
Shi et al. | Path Planning of Unmanned Aerial Vehicle Based on Supervised Learning | |
Gao et al. | Design and Implementation of an Autonomous Driving Delivery Robot | |
CN209769526U (en) | Wheeled unmanned food-delivery vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||