CN107553490A - A monocular vision obstacle-avoidance method based on deep learning - Google Patents
A monocular vision obstacle-avoidance method based on deep learning
- Publication number
- CN107553490A CN107553490A CN201710805759.0A CN201710805759A CN107553490A CN 107553490 A CN107553490 A CN 107553490A CN 201710805759 A CN201710805759 A CN 201710805759A CN 107553490 A CN107553490 A CN 107553490A
- Authority
- CN
- China
- Prior art keywords
- network
- double
- avoidance
- dueling
- monocular vision
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Landscapes
- Image Analysis (AREA)
- Manipulator (AREA)
Abstract
The present invention proposes obstacle avoidance based on a deep dueling double-Q-network framework. From monocular RGB images it obtains corresponding depth images and, using the dueling architecture and the double-Q mechanism, trains the model in a simulator; the knowledge acquired in simulation transfers seamlessly to new scenes in the real world. The machine learns to avoid obstacles in the simulator and can still predict depth information from very noisy RGB images. The invention combines a two-stream dueling network for monocular visual obstacle avoidance: the dueling architecture with double Q-networks achieves end-to-end, high-speed learning of the avoidance task with limited computing resources and can be transferred directly to a real robot, avoiding the complex modeling and parameter tuning of traditional path planners, so that performance and training speed are greatly improved. In addition, a monocular camera provides rich information about the robot's operating environment, is low-cost and lightweight, and suits many platforms.
Description
Technical field
The present invention relates to the field of visual obstacle avoidance, and in particular to a monocular visual obstacle-avoidance method based on deep learning.
Background technology
Deep learning has shown enormous power in robotics and computer vision, and path planning based on deep learning, which teaches a robot how to avoid collisions, is becoming increasingly popular. When mobile robots work in the real world, conditions constantly change, so one of the basic capabilities they need is the ability to avoid obstacles. Obstacle avoidance is widely used in unmanned aerial vehicles, aerospace, military reconnaissance, robotics, and navigation planning. In particular, it lets flying robots navigate complex forest environments, and it guides robots as they move into environments such as factories, warehouses, hotels, shopping malls, and restaurants.
Obstacle avoidance typically relies on ranging sensors such as laser scanners and sonar. However, ranging sensors capture only limited information and are expensive. Moreover, when distance must be perceived through monocular vision (i.e., RGB images), the lack of three-dimensional information makes the avoidance problem extremely difficult: projecting the three-dimensional world onto a two-dimensional image plane removes the direct correspondence between pixels and distance. Traditional obstacle-avoidance path planning also requires tuning many parameters, with low efficiency and high cost.
The present invention proposes obstacle avoidance based on a deep dueling double-Q-network framework. From monocular RGB images it obtains corresponding depth images and, using the dueling architecture and the double-Q mechanism, trains the model in a simulator; the knowledge acquired in simulation transfers seamlessly to new scenes in the real world. The machine learns to avoid obstacles in the simulator and can still predict depth information even from very noisy RGB images. The invention combines a two-stream dueling network for monocular visual obstacle avoidance: the dueling architecture with double Q-networks achieves end-to-end, high-speed learning of the avoidance task with limited computing resources and can be transferred directly to a real robot, avoiding the complex modeling and parameter tuning of traditional path planners, so that performance and training speed are greatly improved. In addition, a monocular camera provides rich information about the robot's operating environment, is low-cost and lightweight, and suits many platforms.
Summary of the invention
To address the problems of high cost and low efficiency, the object of the present invention is to provide a monocular visual obstacle-avoidance method based on deep learning. Using monocular RGB images, it obtains corresponding depth images and, based on the dueling network and double-Q mechanisms, trains the model in a simulator; the knowledge acquired in simulation transfers seamlessly to new scenes in the real world. The machine learns to avoid obstacles in the simulator and can still predict depth information from very noisy RGB images.
To solve the above problems, the present invention provides a monocular visual obstacle-avoidance method based on deep learning, whose main content includes:
(1) definition of the monocular visual obstacle-avoidance problem;
(2) a two-stage deep neural network;
(3) conversion from appearance to geometry;
(4) model setup.
In the monocular visual obstacle-avoidance method based on deep learning, the dueling network architecture with double Q-networks achieves end-to-end, high-speed learning of the avoidance task with limited computing resources. The model is trained in a simulator, and the knowledge acquired in simulation transfers seamlessly to new scenes in the real world.
In the definition of the monocular visual obstacle-avoidance problem, monocular visual obstacle avoidance can be regarded as the decision process of the robot's monocular camera interacting with the environment. At each time t ∈ [0, T], the robot selects an action a_t according to the camera image x_t, observes the reward signal r_t produced by the reward function, and transitions to the next state x_{t+1}. The algorithm maximizes the accumulated future feedback

R_t = Σ_{k=0}^{T−t} γ^k · r_{t+k},

where γ is the discount factor. Since a_t = π(x_t), the action value (Q value) of a state-action pair (x_t, a_t) is defined as:

Q(x_t, a_t) = E[R_t | x_t, a_t]

The Q-value function can be computed with the Bellman equation:

Q(x_t, a_t) = E[r_t + γ · Q(x_{t+1}, a_{t+1}) | x_t, a_t]

By selecting the optimal action a* = argmax_a Q(x_{t+1}, a) at every step, the optimal Q-value function is obtained:

Q*(x_t, a_t) = E[r_t + γ · max_a Q*(x_{t+1}, a) | x_t, a_t]

That is, the optimal Q value at time t is the current reward r_t plus the discounted optimal Q value at time t+1. Rather than computing the Q-value function directly over the entire state space, it is solved with a deep neural network that approximates the optimal value function.
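The discounted return and the Bellman optimality backup above can be sketched in a few lines of Python. This is a tabular illustration only — the patent approximates the Q function with a deep network; the function names are ours:

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """Accumulated future feedback R_t = sum_k gamma^k * r_{t+k}."""
    total = 0.0
    for r in reversed(rewards):
        total = r + gamma * total
    return total

def bellman_optimal_target(r_t, q_next, gamma=0.99):
    """Optimal Q value at time t: current reward plus the discounted
    maximum Q value over actions at time t+1."""
    return r_t + gamma * np.max(q_next)

# Toy illustration with gamma = 0.5: 1 + 0.5 + 0.25 = 1.75
print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))
# Target: 0.5 + 0.9 * max(0.2, 1.0, -0.3) = 0.5 + 0.9 * 1.0
print(bellman_optimal_target(0.5, np.array([0.2, 1.0, -0.3]), gamma=0.9))
```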
The two-stage deep neural network mainly consists of a dueling network and double Q-networks. In a traditional double Q-network, a single stream of fully connected layers follows the convolutional layers and directly estimates the Q value of each action in the current state. In the dueling double Q-network, by contrast, two separate streams of fully connected layers estimate the state value and the advantage function respectively, and the two are combined to compute the Q values.
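The combination of the two streams can be illustrated with a small numpy sketch. The aggregation Q = V + (A − mean(A)) is the standard identifiable form of the dueling architecture; the shapes and example values are illustrative assumptions:

```python
import numpy as np

def dueling_q(value, advantages):
    """Combine the value stream V(x) and the advantage stream A(x, a) into
    Q(x, a) = V(x) + (A(x, a) - mean_a A(x, a)); subtracting the mean of the
    advantages keeps the value/advantage decomposition identifiable."""
    value = np.asarray(value, dtype=float)            # shape (batch, 1)
    advantages = np.asarray(advantages, dtype=float)  # shape (batch, n_actions)
    return value + advantages - advantages.mean(axis=1, keepdims=True)

# Toy example: one state (V = 2) and three discrete actions
q = dueling_q([[2.0]], [[1.0, 0.0, -1.0]])  # Q per action: 3.0, 2.0, 1.0
```

In practice both streams share the same convolutional feature extractor; only the fully connected heads differ.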
Further, the model drives the avoidance learning: for the resulting state x_{t+1}, the target network Q'* of the optimal value at time t+1 is computed using the online network and the target network. With the discount factor γ and the current reward r_t, the target value y at time t is obtained. Finally, the error is computed by subtracting the target value from the optimal value Q* predicted by the online network for the current state x, and the weights are updated by backpropagation.
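A minimal sketch of this double-Q target computation, assuming the two networks are represented simply by their per-action Q-value outputs (the helper name is ours):

```python
import numpy as np

def double_q_target(r_t, q_online_next, q_target_next, gamma=0.99, done=False):
    """Double-Q target: the online network selects the best action at t+1,
    the target network evaluates it:
        y = r_t + gamma * Q_target(x_{t+1}, argmax_a Q_online(x_{t+1}, a))
    If the episode terminated, the target is just the reward."""
    if done:
        return r_t
    a_star = int(np.argmax(q_online_next))
    return r_t + gamma * q_target_next[a_star]

# Toy example: the online net prefers action 1, the target net evaluates it
y = double_q_target(0.2, np.array([0.1, 0.9, 0.4]),
                    np.array([0.3, 0.5, 0.7]), gamma=0.9)
# y = 0.2 + 0.9 * 0.5 = 0.65; the error y - Q_online(x_t, a_t) is then
# backpropagated to update the online network's weights
```

Decoupling action selection (online network) from action evaluation (target network) is what reduces the overestimation bias of plain Q-learning.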
As for the conversion from appearance to geometry: because training requires large amounts of data and time, performance is usually demonstrated in simulated environments. To apply the method on a robot, the model is trained in a simulator and then transferred to the real robot. For vision-based techniques, however, the significant differences in appearance, illumination, and other factors between virtual and real environments make this transfer very challenging.
Further, the training model obtains a geometric representation from the RGB image. The first part of the model is a fully convolutional residual network that predicts the depth information of a single RGB image; this depth neural network ensures that the well-trained model can go from simulation to reality and generalizes to the real world.
The model setup builds the model on the dueling and double-Q-network techniques. Specifically, it has three convolutional layers with specified sizes (height, width, channels) and a dueling framework of three two-stream fully connected layers, and training the network yields a feasible control policy.
Further, training the network requires a suitable definition of the robot's actions rather than simple commands such as "move forward" or "turn left": the network's actions are defined as a discrete scheme that separately controls linear velocity and angular velocity.
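A hypothetical mapping from discrete action indices to velocity commands might look as follows; the specific velocity values and helper names are illustrative assumptions, not taken from the patent:

```python
# Hypothetical discretization of the action space: each network output index
# maps to a (linear velocity m/s, angular velocity rad/s) command.
ACTIONS = {
    0: (0.4,  0.0),   # full speed ahead
    1: (0.2,  0.5),   # slow, turn left
    2: (0.2, -0.5),   # slow, turn right
    3: (0.1,  1.0),   # sharp left
    4: (0.1, -1.0),   # sharp right
}

def command_from_action(index):
    """Translate a discrete action index into a velocity command."""
    v, omega = ACTIONS[index]
    return {"linear": v, "angular": omega}

print(command_from_action(1))  # {'linear': 0.2, 'angular': 0.5}
```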
Further, for the commands of claim 9, the instantaneous reward function is defined as r = v · cos(ω) · δt, where v and ω are the local linear and angular velocities respectively and δt, the duration of each training loop, is set to 0.2 seconds. The reward function is designed to make the robot move as fast as possible and to punish rotating in place; the total reward is the accumulation of the rewards of all steps in an episode. If a collision is detected, the episode terminates immediately with an additional penalty of −10; otherwise the episode runs until the maximum number of steps (500 in our experiments) and ends without penalty.
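The reward scheme above can be sketched directly. δt = 0.2 s, the −10 collision penalty, and the 500-step cap follow the text; the function names are ours:

```python
import math

def instant_reward(v, omega, dt=0.2):
    """Instantaneous reward r = v * cos(omega) * dt: fast forward motion is
    rewarded, while rotation shrinks the reward through the cosine term."""
    return v * math.cos(omega) * dt

def episode_reward(steps, collided, collision_penalty=-10.0, max_steps=500):
    """Total reward: sum of per-step rewards; a collision ends the episode
    immediately and adds the penalty, otherwise the episode simply runs
    until max_steps with no penalty."""
    total = sum(instant_reward(v, w) for v, w in steps[:max_steps])
    if collided:
        total += collision_penalty
    return total

# Moving straight at 0.5 m/s for one 0.2 s step earns 0.5 * cos(0) * 0.2 = 0.1
print(round(instant_reward(0.5, 0.0), 3))
# Pure rotation in place earns nothing
print(instant_reward(0.0, 1.0))
```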
Brief description of the drawings
Fig. 1 is the system framework diagram of the monocular visual obstacle-avoidance method based on deep learning of the present invention.
Fig. 2 shows the structure of the monocular-image-based deep reinforcement learning obstacle-avoidance network of the monocular visual obstacle-avoidance method of the present invention.
Fig. 3 is a schematic diagram of the two-stage deep neural network of the monocular visual obstacle-avoidance method of the present invention.
Embodiment
It should be noted that, where no conflict arises, the embodiments in this application and the features in those embodiments can be combined with one another. The present invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is the system framework diagram of the monocular visual obstacle-avoidance method based on deep learning of the present invention. It mainly includes: definition of the monocular visual obstacle-avoidance problem; a two-stage deep neural network; conversion from appearance to geometry; and model setup.
In the definition of the monocular visual obstacle-avoidance problem of claim 1, monocular visual obstacle avoidance can be regarded as the decision process of the robot's monocular camera interacting with the environment. At each time t ∈ [0, T], the robot selects an action a_t according to the camera image x_t, observes the reward signal r_t produced by the reward function, and transitions to the next state x_{t+1}. The algorithm maximizes the accumulated future feedback R_t = Σ_{k=0}^{T−t} γ^k · r_{t+k}, where γ is the discount factor. Since a_t = π(x_t), the action value (Q value) of a state-action pair (x_t, a_t) is defined as:

Q(x_t, a_t) = E[R_t | x_t, a_t]

The Q-value function can be computed with the Bellman equation:

Q(x_t, a_t) = E[r_t + γ · Q(x_{t+1}, a_{t+1}) | x_t, a_t]

By selecting the optimal action a* = argmax_a Q(x_{t+1}, a) at every step, the optimal Q-value function is obtained:

Q*(x_t, a_t) = E[r_t + γ · max_a Q*(x_{t+1}, a) | x_t, a_t]

That is, the optimal Q value at time t is the current reward r_t plus the discounted optimal Q value at time t+1. Rather than computing the Q-value function directly over the entire state space, it is solved with a deep neural network that approximates the optimal value function.
The model drives the avoidance learning: for the resulting state x_{t+1}, the target network Q'* of the optimal value at time t+1 is computed using the online network and the target network. With the discount factor γ and the current reward r_t, the target value y at time t is obtained. Finally, the error is computed by subtracting the target value from the optimal value Q* predicted by the online network for the current state x, and the weights are updated by backpropagation.
Conversion from appearance to geometry: because training requires large amounts of data and time, performance is usually demonstrated in simulated environments. To apply the method on a robot, the model is trained in a simulator and then transferred to the real robot. For vision-based techniques, however, the significant differences in appearance, illumination, and other factors between virtual and real environments make this transfer very challenging.
The training model obtains a geometric representation from the RGB image. The first part of the model is a fully convolutional residual network that predicts the depth information of a single RGB image; this depth neural network ensures that the well-trained model can go from simulation to reality and generalizes to the real world.
Model setup: the model is built on the dueling and double-Q-network techniques. Specifically, it has three convolutional layers with specified sizes (height, width, channels) and a dueling framework of three two-stream fully connected layers, and training the network yields a feasible control policy. The robot's actions must be defined suitably rather than as simple commands such as "move forward" or "turn left": the network's actions are defined as a discrete scheme that separately controls linear velocity and angular velocity. The instantaneous reward function is defined as r = v · cos(ω) · δt, where v and ω are the local linear and angular velocities respectively and δt, the duration of each training loop, is set to 0.2 seconds. The reward function is designed to make the robot move as fast as possible and to punish rotating in place; the total reward is the accumulation of the rewards of all steps in an episode. If a collision is detected, the episode terminates immediately with an additional penalty of −10; otherwise the episode runs until the maximum number of steps (500 in our experiments) and ends without penalty.
Fig. 2 shows the structure of the monocular-image-based deep reinforcement learning obstacle-avoidance network of the present invention's method. The dueling network architecture with double Q-networks achieves end-to-end, high-speed learning of the avoidance task with limited computing resources. The model is trained in a simulator, and the knowledge acquired in simulation transfers seamlessly to new scenes in the real world.
Fig. 3 is a schematic diagram of the two-stage deep neural network of the present invention's method. The two-stage deep neural network mainly consists of a dueling network and double Q-networks. In a traditional double Q-network, a single stream of fully connected layers follows the convolutional layers and directly estimates the Q value of each action in the current state. In the dueling double Q-network, by contrast, two separate streams of fully connected layers estimate the state value and the advantage function respectively, and the two are combined to compute the Q values.
It will be appreciated by those skilled in the art that the present invention is not restricted to the details of the above embodiments and can be realized in other concrete forms without departing from its spirit or scope. Moreover, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope, and such improvements and modifications shall also be regarded as within the protection scope of the present invention. The appended claims are therefore intended to be construed as including the preferred embodiments and all changes and variations falling within the scope of the invention.
Claims (10)
1. A monocular visual obstacle-avoidance method based on deep learning, characterized by mainly comprising: definition of the monocular visual obstacle-avoidance problem (1); a two-stage deep neural network (2); conversion from appearance to geometry (3); and model setup (4).
2. The monocular visual obstacle-avoidance method based on deep learning of claim 1, characterized in that the dueling network architecture with double Q-networks achieves end-to-end, high-speed learning of the avoidance task with limited computing resources; the model is trained in a simulator, and the knowledge acquired in simulation transfers seamlessly to new scenes in the real world.
3. The definition of the monocular visual obstacle-avoidance problem (1) of claim 1, characterized in that monocular visual obstacle avoidance can be regarded as the decision process of the robot's monocular camera interacting with the environment: at each time t ∈ [0, T], the robot selects an action a_t according to the camera image x_t, observes the reward signal r_t produced by the reward function, and transitions to the next state x_{t+1}; the algorithm maximizes the accumulated future feedback R_t = Σ_{k=0}^{T−t} γ^k · r_{t+k}, where γ is the discount factor; since a_t = π(x_t), the action value (Q value) of a state-action pair (x_t, a_t) is defined as:

Q(x_t, a_t) = E[R_t | x_t, a_t]

the Q-value function can be computed with the Bellman equation:

Q(x_t, a_t) = E[r_t + γ · Q(x_{t+1}, a_{t+1}) | x_t, a_t]

and by selecting the optimal action a* = argmax_a Q(x_{t+1}, a) at every step, the optimal Q-value function is obtained:

Q*(x_t, a_t) = E[r_t + γ · max_a Q*(x_{t+1}, a) | x_t, a_t]

that is, the optimal Q value at time t is the current reward r_t plus the discounted optimal Q value at time t+1; rather than computing the Q-value function directly over the entire state space, it is solved with a deep neural network that approximates the optimal value function.
4. The two-stage deep neural network (2) of claim 1, characterized in that it mainly consists of a dueling network and double Q-networks; in a traditional double Q-network, a single stream of fully connected layers follows the convolutional layers and directly estimates the Q value of each action in the current state, whereas in the dueling double Q-network two separate streams of fully connected layers estimate the state value and the advantage function respectively, and the two are combined to compute the Q values.
5. The model of claim 4, characterized in that the model drives the avoidance learning: for the resulting state x_{t+1}, the target network Q'* of the optimal value at time t+1 is computed using the online network and the target network; with the discount factor γ and the current reward r_t, the target value y at time t is obtained; finally, the error is computed by subtracting the target value from the optimal value Q* predicted by the online network for the current state x, and the weights are updated by backpropagation.
6. The conversion from appearance to geometry (3) of claim 1, characterized in that, because training requires large amounts of data and time, performance is usually demonstrated in simulated environments; to apply the method on a robot, the model is trained in a simulator and then transferred to the real robot; for vision-based techniques, however, the significant differences in appearance, illumination, and other factors between virtual and real environments make this transfer very challenging.
7. The training model of claim 6, characterized in that a geometric representation is obtained from the RGB image; the first part of the model is a fully convolutional residual network that predicts the depth information of a single RGB image, and this depth neural network ensures that the well-trained model can go from simulation to reality and generalizes to the real world.
8. The model setup (4) of claim 1, characterized in that the model is built on the dueling and double-Q-network techniques; specifically, it has three convolutional layers with specified sizes (height, width, channels) and a dueling framework of three two-stream fully connected layers, and training the network yields a feasible control policy.
9. The training network of claim 8, characterized in that the robot's actions must be defined suitably rather than as simple commands such as "move forward" or "turn left": the network's actions are defined as a discrete scheme that separately controls linear velocity and angular velocity.
10. The commands of claim 9, characterized in that the instantaneous reward function is defined as r = v · cos(ω) · δt, where v and ω are the local linear and angular velocities respectively and δt, the duration of each training loop, is set to 0.2 seconds; the reward function is designed to make the robot move as fast as possible and to punish rotating in place; the total reward is the accumulation of the rewards of all steps in an episode; if a collision is detected, the episode terminates immediately with an additional penalty of −10; otherwise the episode runs until the maximum number of steps (500 in our experiments) and ends without penalty.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710805759.0A CN107553490A (en) | 2017-09-08 | 2017-09-08 | A kind of monocular vision barrier-avoiding method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710805759.0A CN107553490A (en) | 2017-09-08 | 2017-09-08 | A kind of monocular vision barrier-avoiding method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107553490A true CN107553490A (en) | 2018-01-09 |
Family
ID=60980348
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710805759.0A Withdrawn CN107553490A (en) | 2017-09-08 | 2017-09-08 | A kind of monocular vision barrier-avoiding method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107553490A (en) |
- 2017-09-08: application CN201710805759.0A filed; patent CN107553490A (en), status: not active (Withdrawn)
Non-Patent Citations (1)
Title |
---|
Linhai Xie et al.: "Towards Monocular Vision based Obstacle Avoidance through Deep Reinforcement Learning", published online at https://arxiv.org/abs/1706.09829v1 * |
Cited By (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108255182A (en) * | 2018-01-30 | 2018-07-06 | 上海交通大学 | A kind of service robot pedestrian based on deeply study perceives barrier-avoiding method |
CN112313044A (en) * | 2018-06-15 | 2021-02-02 | 谷歌有限责任公司 | Deep reinforcement learning for robotic manipulation |
CN109035319A (en) * | 2018-07-27 | 2018-12-18 | 深圳市商汤科技有限公司 | Monocular image depth estimation method and device, equipment, program and storage medium |
JP2021500689A (en) * | 2018-07-27 | 2021-01-07 | 深▲せん▼市商▲湯▼科技有限公司Shenzhen Sensetime Technology Co., Ltd. | Monocular image depth estimation method and equipment, equipment, programs and storage media |
CN109035319B (en) * | 2018-07-27 | 2021-04-30 | 深圳市商汤科技有限公司 | Monocular image depth estimation method, monocular image depth estimation device, monocular image depth estimation apparatus, monocular image depth estimation program, and storage medium |
US11443445B2 (en) | 2018-07-27 | 2022-09-13 | Shenzhen Sensetime Technology Co., Ltd. | Method and apparatus for depth estimation of monocular image, and storage medium |
CN109213147A (en) * | 2018-08-01 | 2019-01-15 | 上海交通大学 | A kind of robot obstacle-avoiding method for planning track and system based on deep learning |
CN109514553A (en) * | 2018-11-21 | 2019-03-26 | 苏州大学 | A kind of method, system and the equipment of the mobile control of robot |
CN109514553B (en) * | 2018-11-21 | 2021-09-21 | 苏州大学 | Method, system and equipment for robot movement control |
CN109407676A (en) * | 2018-12-20 | 2019-03-01 | 哈尔滨工业大学 | The moving robot obstacle avoiding method learnt based on DoubleDQN network and deeply |
CN109800864A (en) * | 2019-01-18 | 2019-05-24 | 中山大学 | A kind of robot Active Learning Method based on image input |
CN109814565A (en) * | 2019-01-30 | 2019-05-28 | 上海海事大学 | The unmanned boat intelligence navigation control method of space-time double fluid data-driven depth Q study |
CN109976909B (en) * | 2019-03-18 | 2022-11-08 | 中南大学 | Learning-based low-delay task scheduling method in edge computing network |
CN109976909A (en) * | 2019-03-18 | 2019-07-05 | 中南大学 | Low delay method for scheduling task in edge calculations network based on study |
CN109993106A (en) * | 2019-03-29 | 2019-07-09 | 北京易达图灵科技有限公司 | Barrier-avoiding method and device |
CN110053034A (en) * | 2019-05-23 | 2019-07-26 | 哈尔滨工业大学 | A kind of multi purpose space cellular machineries people's device of view-based access control model |
CN110362085A (en) * | 2019-07-22 | 2019-10-22 | 合肥小步智能科技有限公司 | A kind of class brain platform for extraordinary crusing robot |
CN110302539A (en) * | 2019-08-05 | 2019-10-08 | 苏州大学 | A kind of tactics of the game calculation method, device, system and readable storage medium storing program for executing |
CN110471444B (en) * | 2019-08-19 | 2022-07-12 | 西安微电子技术研究所 | Unmanned aerial vehicle intelligent obstacle avoidance method based on autonomous learning |
CN110471444A (en) * | 2019-08-19 | 2019-11-19 | 西安微电子技术研究所 | UAV Intelligent barrier-avoiding method based on autonomous learning |
CN110488835A (en) * | 2019-08-28 | 2019-11-22 | 北京航空航天大学 | A kind of unmanned systems intelligence local paths planning method based on double reverse transmittance nerve networks |
CN111126218B (en) * | 2019-12-12 | 2023-09-26 | 北京工业大学 | Human behavior recognition method based on zero sample learning |
CN111126218A (en) * | 2019-12-12 | 2020-05-08 | 北京工业大学 | Human behavior recognition method based on zero sample learning |
CN111975769A (en) * | 2020-07-16 | 2020-11-24 | 华南理工大学 | Mobile robot obstacle avoidance method based on meta-learning |
CN112057858B (en) * | 2020-09-11 | 2022-04-08 | 腾讯科技(深圳)有限公司 | Virtual object control method, device, equipment and storage medium |
CN112057858A (en) * | 2020-09-11 | 2020-12-11 | 腾讯科技(深圳)有限公司 | Virtual object control method, device, equipment and storage medium |
CN112799401A (en) * | 2020-12-28 | 2021-05-14 | 华南理工大学 | End-to-end robot vision-motion navigation method |
CN112767373A (en) * | 2021-01-27 | 2021-05-07 | 大连理工大学 | Robot indoor complex scene obstacle avoidance method based on monocular camera |
WO2022160430A1 (en) * | 2021-01-27 | 2022-08-04 | Dalian University Of Technology | Method for obstacle avoidance of robot in the complex indoor scene based on monocular camera |
CN112767373B (en) * | 2021-01-27 | 2022-09-02 | 大连理工大学 | Robot indoor complex scene obstacle avoidance method based on monocular camera |
US11989628B2 (en) | 2021-03-05 | 2024-05-21 | International Business Machines Corporation | Machine teaching complex concepts assisted by computer vision and knowledge reasoning |
CN113419555A (en) * | 2021-05-20 | 2021-09-21 | 北京航空航天大学 | Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle |
CN113419555B (en) * | 2021-05-20 | 2022-07-19 | 北京航空航天大学 | Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle |
CN114326821A (en) * | 2022-03-02 | 2022-04-12 | 中国人民解放军陆军指挥学院 | Unmanned aerial vehicle autonomous obstacle avoidance system and method based on deep reinforcement learning |
CN115574816A (en) * | 2022-11-24 | 2023-01-06 | 东南大学 | Unmanned platform with bionic-vision multi-source information intelligent perception |
CN115574816B (en) * | 2022-11-24 | 2023-03-14 | 东南大学 | Unmanned platform with bionic-vision multi-source information intelligent perception |
CN115826628A (en) * | 2023-02-22 | 2023-03-21 | 成都航空职业技术学院 | NeRF neural network-based heterogeneous unmanned aerial vehicle visual obstacle avoidance system and method |
CN115826628B (en) * | 2023-02-22 | 2023-05-09 | 成都航空职业技术学院 | Heterogeneous unmanned aerial vehicle vision obstacle avoidance system and method based on NeRF neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107553490A (en) | | A monocular vision obstacle avoidance method based on deep learning |
Liu et al. | | The angle guidance path planning algorithms for unmanned surface vehicle formations by using the fast marching method |
CN113495578B (en) | | Digital twin training-based cluster track planning reinforcement learning method |
CN104407619B (en) | | Method for multiple unmanned aerial vehicles to simultaneously reach multiple targets under uncertain environments |
CN110333739A (en) | | An AUV behavior planning and action control method based on reinforcement learning |
CN108692734A (en) | | A path planning method and device |
WO2019076044A1 (en) | | Mobile robot local motion planning method and apparatus and computer storage medium |
CN109324636A (en) | | Master-slave cooperative formation control method for multiple quadrotors based on second-order consistency and active disturbance rejection |
CN106595671A (en) | | Method and apparatus for planning route of unmanned aerial vehicle based on reinforcement learning |
Bipin et al. | | Autonomous navigation of generic monocular quadcopter in natural environment |
CN109739218A (en) | | A lane-change modeling method imitating expert drivers based on GRU networks |
CN105589459A (en) | | Unmanned vehicle semi-autonomous remote control method |
CN108415460A (en) | | A centralized-distributed control method for a mobile manipulation robot combining split rotors and legged locomotion |
CN112506210B (en) | | Unmanned aerial vehicle control method for autonomous target tracking |
CN111506063A (en) | | Mapless navigation method for mobile robots based on a layered reinforcement learning framework |
Frew et al. | | Obstacle avoidance with sensor uncertainty for small unmanned aircraft |
CN109814565A (en) | | Intelligent navigation control method for unmanned surface vehicles based on spatio-temporal dual-stream data-driven deep Q-learning |
CN113759901A (en) | | Mobile robot autonomous obstacle avoidance method based on deep reinforcement learning |
Lam et al. | | Haptic interface for UAV collision avoidance |
CN113900449B (en) | | Multi-UAV trajectory planning method and device, unmanned aerial vehicle, and storage medium |
Nahavandi et al. | | Autonomous convoying: A survey on current research and development |
CN110879607A (en) | | Offshore wind power blade detection method based on multi-unmanned aerial vehicle formation cooperative detection |
Helble et al. | | 3-d path planning and target trajectory prediction for the oxford aerial tracking system |
Lin et al. | | Tracking strategy of unmanned aerial vehicle for tracking moving target |
Wang et al. | | An integrated teleoperation assistance system for collision avoidance of high-speed uavs in complex environments |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 2018-01-09 |