CN107127773A - A method for a robot to grasp an object - Google Patents

A method for a robot to grasp an object (Download PDF)

Info

Publication number
CN107127773A
CN107127773A (application CN201710218154.1A)
Authority
CN
China
Prior art keywords
robot
object
movement trajectory
target position
place
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710218154.1A
Other languages
Chinese (zh)
Inventor
王欣
伍世虔
韩浩
邹谜
王建勋
张俊勇
陈鹏
杨超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Science and Engineering WUSE
Wuhan University of Science and Technology WHUST
Original Assignee
Wuhan University of Science and Engineering WUSE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Science and Engineering WUSE
Priority to CN201710218154.1A
Publication of CN107127773A
Legal status: Pending

Links

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/088Controls for manipulators by means of sensing devices, e.g. viewing or touching devices with position, velocity or acceleration sensors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/008Manipulators for service tasks

Abstract

The invention discloses a method for a robot to grasp an object. The method includes: obtaining a first position of the robot and a second position of the object, where the first position and the second position come from a Kinect infrared device; obtaining, from the first position and the second position, a first movement trajectory from the robot to the object; and obtaining a target position from the first movement trajectory, so that the robot moves to the target position to grasp the object. The grasping method provided by the present invention achieves the technical effect of improving the accuracy of object localization, so that objects can be grasped more precisely, and it extends the scenes and range in which robots can be used. It solves the prior-art technical problem that methods which localize using the robot's own vision have low localization accuracy and are unsuitable for dark environments.

Description

A method for a robot to grasp an object
Technical field
The present invention relates to the technical field of intelligent service robots, and in particular to a method for a robot to grasp an object.
Background art
With the development of artificial-intelligence technology, intelligent service robots are widely used in home services. In a smart-home environment, a robot can receive a person's instruction and then grasp and deliver the required object.

In the prior art, methods in which a robot grasps an object typically locate the object with a camera on the robot body, or with an external camera, and perform the grasp on that basis.

In implementing the technical solution of the present application, the inventors found at least the following problems in the prior art:

Existing methods that locate an object with the robot's on-board camera place high demands on lighting: they are suitable only for daytime or well-lit places and cannot localize at night or in the dark. For a disabled person who cannot care for themselves, for example, this makes it inconvenient to fetch water or food at night. Moreover, the robot's own camera usually localizes by the robot's own vision, whose precision is comparatively low, so the localization of the object is inaccurate and the error is relatively large.

It can be seen that prior-art methods that localize using the robot's own vision suffer from low localization accuracy and are unsuitable for dark environments. How to improve localization accuracy, grasp objects more precisely, and truly realize the smart home is therefore an important problem.
Summary of the invention
Embodiments of the present invention provide a method for a robot to grasp an object, to solve the prior-art technical problem that methods which localize using the robot's own vision have low localization accuracy and are unsuitable for dark environments.
In a first aspect, the invention discloses a method for a robot to grasp an object, the method including:

obtaining a first position of the robot and a second position of the object, where the first position and the second position come from a Kinect infrared device;

obtaining, from the first position and the second position, a first movement trajectory from the robot to the object;

obtaining a target position from the first movement trajectory, so that the robot moves to the target position to grasp the object.
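The three steps of the first aspect can be sketched as a few lines of plane geometry. This is a minimal illustration under stated assumptions, not the patented implementation: `Point3D`, `plan_trajectory`, `target_position` and the 0.25 m standoff are all invented names and values.

```python
import math
from dataclasses import dataclass


@dataclass
class Point3D:
    x: float
    y: float
    z: float


def plan_trajectory(robot_pos: Point3D, object_pos: Point3D):
    """Step 2: derive a straight-line trajectory as (heading, distance)
    in the ground plane from the two positions."""
    dx = object_pos.x - robot_pos.x
    dy = object_pos.y - robot_pos.y
    return math.atan2(dy, dx), math.hypot(dx, dy)


def target_position(robot_pos: Point3D, object_pos: Point3D, standoff=0.25):
    """Step 3: pick a target position a short standoff before the object,
    so the arm (rather than the base) covers the last stretch."""
    heading, dist = plan_trajectory(robot_pos, object_pos)
    travel = max(dist - standoff, 0.0)
    return Point3D(robot_pos.x + travel * math.cos(heading),
                   robot_pos.y + travel * math.sin(heading),
                   robot_pos.z)
```

The patent leaves the exact choice of target position open; stopping a fixed standoff short of the object is only one plausible reading.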
Optionally, before obtaining the target position from the first movement trajectory, the method further includes:

obtaining a third position from the first movement trajectory, where the third position may differ from the target position.
Optionally, before obtaining the target position from the first movement trajectory, the method further includes:

judging whether an obstacle exists on the first movement trajectory;

if an obstacle exists, re-planning to obtain a second movement trajectory, so that the robot avoids the obstacle;

if no obstacle exists, moving along the first movement trajectory.
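The obstacle test and re-planning step can be illustrated with a straight-segment intersection check against circular obstacles. The sidestep waypoint below is a naive stand-in for a real planner, and every name and the 0.3 m radius are assumptions for illustration, not the patent's method.

```python
import math


def segment_blocked(start, goal, obstacle, radius):
    """Check whether a circular obstacle of `radius` intersects the straight
    segment from start to goal (all 2D (x, y) tuples, metres)."""
    sx, sy = start
    gx, gy = goal
    ox, oy = obstacle
    dx, dy = gx - sx, gy - sy
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return math.hypot(ox - sx, oy - sy) <= radius
    # Project the obstacle centre onto the segment, clamped to [0, 1].
    t = max(0.0, min(1.0, ((ox - sx) * dx + (oy - sy) * dy) / seg_len2))
    cx, cy = sx + t * dx, sy + t * dy
    return math.hypot(ox - cx, oy - cy) <= radius


def choose_trajectory(start, goal, obstacles, radius=0.3):
    """Return the direct path if clear, otherwise a two-leg detour via a
    sidestep waypoint (a toy stand-in for real re-planning)."""
    blocked = [o for o in obstacles if segment_blocked(start, goal, o, radius)]
    if not blocked:
        return [start, goal]
    ox, oy = blocked[0]
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    n = math.hypot(dx, dy) or 1.0
    # Sidestep perpendicular to the path, two radii past the obstacle.
    waypoint = (ox - dy / n * 2 * radius, oy + dx / n * 2 * radius)
    return [start, waypoint, goal]
```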
Optionally, after obtaining the target position from the first movement trajectory, the method further includes:

judging whether the target position is within an error range;

if it is, grasping the object with the robot's left or right hand;

if it is not, re-planning to obtain a third movement trajectory, so that the robot reaches the corresponding target position.
Optionally, after obtaining the target position from the first movement trajectory, the method further includes:

judging, from the target position and the second position, whether the robot's position and posture are optimal for grasping.
Optionally, the first position includes the three-dimensional coordinates of the robot, and the second position includes the three-dimensional coordinates of the object.

Optionally, the first movement trajectory includes the robot's direction of travel and distance.

While the robot moves to the target position to grasp the object, the pose of the end of the manipulator is determined according to the principle of inverse kinematics.
Based on the same inventive concept, the present invention also provides a method for a robot to grasp an object, the method including:

a Kinect infrared device obtaining a first position of the robot and a second position of the object;

sending the first position and the second position to the robot, so that the robot obtains a target position from the first position and the second position in order to grasp the object.
Optionally, the Kinect infrared device obtaining the first position of the robot and the second position of the object includes:

obtaining a spatial depth image;

obtaining pixel coordinates and depth coordinates from the spatial depth image;

obtaining the first position of the robot and the second position of the object from the pixel coordinates and depth coordinates.
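As one illustrative way to get a pixel coordinate and depth coordinate for the object from the depth image (the patent does not specify how the object's pixels are segmented), the centroid of pixels whose depth falls in a known band could be used. `object_pixel` and the depth-band approach are assumptions for illustration.

```python
def object_pixel(depth_image, near, far):
    """Centroid (u, v) of pixels whose depth lies in [near, far), plus their
    mean depth. `depth_image` is a list of rows of depth values in metres.
    Returns None if no pixel falls in the band."""
    us, vs, ds = [], [], []
    for v, row in enumerate(depth_image):
        for u, d in enumerate(row):
            if near <= d < far:
                us.append(u)
                vs.append(v)
                ds.append(d)
    if not ds:
        return None
    n = len(ds)
    return sum(us) / n, sum(vs) / n, sum(ds) / n
```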
The one or more technical solutions provided in the embodiments of the present invention have at least the following technical effects or advantages:

In the method for a robot to grasp an object provided by the embodiments of the present application, the robot obtains from a Kinect infrared device the first position of the robot and the second position of the object; from the first position and the second position it obtains a first movement trajectory to the object; it then obtains a target position from the first movement trajectory and moves to that target position to grasp the object. Because the Kinect infrared device can obtain accurate position information and pass it to the robot, the accuracy of object localization is improved, so objects can be grasped more precisely. Moreover, the Kinect infrared device places no requirement on lighting and can still be used in a dark environment, which extends the scenes and range in which the robot can be used, so that the robot can truly serve people 24 hours a day. This solves the prior-art technical problem that methods which localize using the robot's own vision have low accuracy and are unsuitable for dark environments.

The above is only an overview of the technical solution of the present invention. To make the technical means of the present invention easier to understand and practicable according to the contents of the specification, and to make the above and other objects, features and advantages of the present invention more apparent, specific embodiments of the present invention are set forth below.
Brief description of the drawings
To explain the embodiments of the present invention or the prior-art technical solutions more clearly, the drawings needed in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.

Fig. 1 is a flow chart of a method for a robot to grasp an object in an embodiment of the present invention;

Fig. 2 is a flow chart of another method for a robot to grasp an object in an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention provide a method for a robot to grasp an object, to solve the prior-art technical problem that methods which localize using the robot's own vision have low accuracy and are unsuitable for dark environments, and to achieve the technical effects of improving localization accuracy, grasping objects precisely, and being applicable in a variety of lighting environments.

The general idea of the technical solution in the embodiments of the present application is as follows:

A method for a robot to grasp an object, the method including: obtaining a first position of the robot and a second position of the object, where the first position and the second position come from a Kinect infrared device; obtaining, from the first position and the second position, a first movement trajectory from the robot to the object; and obtaining a target position from the first movement trajectory, so that the robot moves to the target position to grasp the object.

In the above method, the robot obtains from the Kinect infrared device the first position of the robot and the second position of the object, and from these it obtains a first movement trajectory to the object; it then obtains a target position from the first movement trajectory and moves to that position to grasp the object. Because the Kinect infrared device can obtain accurate position information and pass it to the robot, the accuracy of object localization is improved, so objects can be grasped more precisely. Moreover, the Kinect infrared device places no requirement on lighting and can still be used in a dark environment, which extends the scenes and range in which the robot can be used, so that the robot can truly serve people 24 hours a day. This solves the prior-art technical problem that methods which localize using the robot's own vision have low accuracy and are unsuitable for dark environments.

To make the purpose, technical solution and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art without creative effort, based on the embodiments of the present invention, fall within the scope of protection of the present invention.
Embodiment one
This embodiment provides a method for a robot to grasp an object, the method including:

Step S101: obtain a first position of the robot and a second position of the object, where the first position and the second position come from a Kinect infrared device;

Step S102: obtain, from the first position and the second position, a first movement trajectory from the robot to the object;

Step S103: obtain a target position from the first movement trajectory, so that the robot moves to the target position to grasp the object.

In the above method, the Kinect infrared device obtains the first position of the robot and the second position of the object and sends both to the robot; a data connection is established between the Kinect infrared device and the robot so that the two can interact. Because the Kinect infrared device can obtain accurate position information, the accuracy of object localization is improved, so objects can be grasped more precisely. Moreover, the Kinect infrared device places no requirement on lighting and can still be used in a dark environment, which extends the scenes and range in which the robot can be used, so that the robot can truly serve people 24 hours a day. This solves the prior-art technical problem that methods which localize using the robot's own vision have low accuracy and are unsuitable for dark environments.

It should be noted that in this application the first position of the robot and the second position of the object in step S101 are obtained in no particular order and may be obtained simultaneously; no limitation is made here.
The object-grasping method provided by this application is described in detail below with reference to Fig. 1.

First, step S101 is performed: obtain the first position of the robot and the second position of the object, where the first position and the second position come from a Kinect infrared device.

In the embodiments of this application, the robot may be a NAO robot or any other suitable robot, and the Kinect infrared device may be a Kinect V2 device. The robot establishes a data connection with the Kinect infrared device so that it can receive the position information the device sends. The first position of the robot and the second position of the object may include the three-dimensional coordinates of the robot and of the object, or any other data that can localize them; no specific limitation is made here.
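The data connection between the Kinect host and the robot is not specified in the patent; as one sketch of such a link, the Kinect side could stream both positions to the robot as a single JSON message over TCP. The message format, loopback host, ephemeral port and function names here are all illustrative assumptions.

```python
import json
import socket
import threading


def serve_positions(host="127.0.0.1", port=0):
    """Toy position server standing in for the Kinect host: accepts one
    connection, sends one JSON message with both positions, then closes.
    Returns the port it is listening on (an OS-chosen ephemeral port)."""
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen(1)
    port = srv.getsockname()[1]

    def _serve():
        conn, _ = srv.accept()
        msg = {"robot": [0.0, 0.0, 0.0], "object": [1.2, 0.4, 0.5]}
        conn.sendall((json.dumps(msg) + "\n").encode())
        conn.close()
        srv.close()

    threading.Thread(target=_serve, daemon=True).start()
    return port


def receive_positions(port, host="127.0.0.1"):
    """Robot side: read one newline-terminated JSON message and decode it."""
    with socket.create_connection((host, port)) as sock:
        line = sock.makefile().readline()
    return json.loads(line)
```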
Next, step S102 is performed: obtain, from the first position and the second position, a first movement trajectory from the robot to the object.

Specifically, the first movement trajectory includes the robot's direction of travel and distance.

In the embodiments of this application, the robot can plan a route from the first position and the second position and thereby obtain the first movement trajectory to the object.

Finally, step S103 is performed: obtain a target position from the first movement trajectory, so that the robot moves to the target position to grasp the object.
Optionally, in the object-grasping method provided by embodiments of the present invention, before obtaining the target position from the first movement trajectory, the method further includes:

obtaining a third position from the first movement trajectory, where the third position may differ from the target position.

Optionally, in the object-grasping method provided by embodiments of the present invention, before obtaining the target position from the first movement trajectory, the method further includes:

judging whether an obstacle exists on the first movement trajectory;

if an obstacle exists, re-planning to obtain a second movement trajectory, so that the robot avoids the obstacle;

if no obstacle exists, moving along the first movement trajectory.

Specifically, judging whether an obstacle exists on the first movement trajectory means judging whether other objects, pillars or similar obstacles lie on it. If an obstacle exists, the robot's route is re-planned and a collision-free route is chosen, so that the robot reaches the corresponding target position and grasps the object.

In a concrete implementation, following the first movement trajectory, the robot combines its own position and attitude with the position information of the object and first moves to a position at a certain distance from the object, i.e. the third position, which may differ from the target position. While moving to the third position the robot must avoid colliding with objects, so it is necessary to judge whether an obstacle exists on the first movement trajectory.
Optionally, in the object-grasping method provided by embodiments of the present invention, after obtaining the target position from the first movement trajectory, the method further includes:

judging whether the target position is within an error range;

if it is, grasping the object with the robot's left or right hand;

if it is not, re-planning to obtain a third movement trajectory, so that the robot reaches the corresponding target position.

Because the robot's position has changed by this point, its position is re-acquired by the Kinect infrared device and sent to the robot. From its own current position and the current position of the object, the robot judges whether its position is within the allowed error range; if so, it proceeds to grasp the object; if out of range, it adjusts according to its own position. When the robot reaches the target position, it judges whether its left hand or its right hand is closer to the object: if the left hand is closer it grasps with the left hand, otherwise with the right. After grasping the object, the robot carries it to a fixed position and puts it down.
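The error-range check and the left-hand/right-hand choice just described can be sketched as follows. The 5 cm tolerance and the robot-frame convention (a positive lateral offset means the object is to the robot's left) are assumptions for illustration; the patent does not give concrete values.

```python
import math


def within_error(target, actual, tolerance=0.05):
    """Is the reached position within the allowed error of the target?
    Positions are (x, y) tuples in metres; tolerance is an assumed 5 cm."""
    return math.dist(target, actual) <= tolerance  # math.dist: Python 3.8+


def pick_hand(robot_pos, robot_heading, object_pos):
    """Choose the hand closer to the object by rotating the object offset
    into the robot frame; positive lateral offset = robot's left side."""
    dx = object_pos[0] - robot_pos[0]
    dy = object_pos[1] - robot_pos[1]
    lateral = -math.sin(robot_heading) * dx + math.cos(robot_heading) * dy
    return "left" if lateral >= 0 else "right"
```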
Optionally, in the object-grasping method provided by embodiments of the present invention, after obtaining the target position from the first movement trajectory, the method further includes:

judging, from the target position and the second position, whether the robot's position and posture are optimal for grasping.

Optionally, the first position includes the three-dimensional coordinates of the robot, and the second position includes the three-dimensional coordinates of the object.

Optionally, while the robot moves to the target position to grasp the object, the pose of the end of the manipulator arm is determined according to the principle of inverse kinematics.
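The patent only names the inverse-kinematics principle. As an illustration of the idea, here is the closed-form solution for a planar two-link arm, a standard textbook case and not the actual kinematics of the NAO arm; link lengths and the elbow-down convention are assumptions.

```python
import math


def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar two-link arm: return the
    joint angles (shoulder, elbow) in radians that place the end effector at
    (x, y), or None if the point is out of reach. Elbow-down solution."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target outside the annular workspace
    theta2 = math.acos(c2)
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2


def forward(theta1, theta2, l1, l2):
    """Forward kinematics, used here to verify the IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```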
Embodiment two
Based on the same inventive concept as embodiment one, embodiment two of the present invention also provides, from the perspective of the Kinect infrared device, a method for a robot to grasp an object, the method including:

Step S201: the Kinect infrared device obtains the first position of the robot and the second position of the object;

Step S202: the first position and the second position are sent to the robot, so that the robot obtains a target position from the first position and the second position in order to grasp the object.
Specifically, in the object-grasping method provided by embodiments of the present invention, the Kinect infrared device obtaining the first position of the robot and the second position of the object includes:

obtaining a spatial depth image;

obtaining pixel coordinates and depth coordinates from the spatial depth image;

obtaining the first position of the robot and the second position of the object from the pixel coordinates and depth coordinates.

Specifically, the Kinect infrared device used in the embodiments of this application may be a Kinect V2 device. In a concrete implementation the device is placed horizontally. The Kinect V2 is equipped with a Time-of-Flight (ToF) depth sensor that projects infrared pulses, from which a depth image of the space can be obtained; pixel coordinates and depth coordinates are then obtained from the spatial depth image, giving the first position of the robot and the second position of the object. In a concrete implementation, the Kinect device can be calibrated to obtain the camera's internal and external parameters, the depth value of a pixel can be obtained by bit operations, and the three-dimensional coordinates of the pixel are then derived from the camera's imaging model by combining the camera parameters with the pixel depth value. The depth information obtained by the depth sensor can also be displayed as an image, with different depth values represented by different RGB colors and transitions between depths shown with transition colors.
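The back-projection from a pixel coordinate plus depth value to a 3D camera-frame coordinate follows the pinhole imaging model mentioned above. The intrinsics below are only roughly in the range of a Kinect v2 depth camera (512 x 424 pixels); real values come from the calibration the text describes, so treat these constants as placeholders.

```python
def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth into camera coordinates
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m


# Illustrative intrinsics, assumed rather than calibrated: focal lengths and
# principal point roughly matching a 512 x 424 Kinect v2 depth camera.
FX, FY, CX, CY = 365.0, 365.0, 256.0, 212.0
```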
Since embodiment two shares the same inventive concept as embodiment one, the various modifications of embodiment one also apply to embodiment two.

In this application, the robot and the Kinect realize a simple interaction: the Kinect passes the measured robot and object data to the robot, and the robot moves according to the transmitted data.
The one or more technical solutions provided in the embodiments of the present invention have at least the following technical effects or advantages:

In the method for a robot to grasp an object provided by the embodiments of the present application, the robot obtains from a Kinect infrared device the first position of the robot and the second position of the object; from these it obtains a first movement trajectory to the object, then a target position from that trajectory, and it moves to the target position to grasp the object. Because the Kinect infrared device can obtain accurate position information and pass it to the robot, the accuracy of object localization is improved, so objects can be grasped more precisely. Moreover, the Kinect infrared device places no requirement on lighting and can still be used in a dark environment, which extends the scenes and range in which the robot can be used, so that the robot can truly serve people 24 hours a day. This solves the prior-art technical problem that methods which localize using the robot's own vision have low accuracy and are unsuitable for dark environments.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, can make further changes and modifications to these embodiments. The appended claims are therefore intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the present invention.

Obviously, those skilled in the art can make various changes and modifications to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention. If such modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to include them as well.

Claims (10)

1. A method for a robot to grasp an object, characterized in that the method includes:

obtaining a first position of the robot and a second position of the object, where the first position and the second position come from a Kinect infrared device;

obtaining, from the first position and the second position, a first movement trajectory from the robot to the object;

obtaining a target position from the first movement trajectory, so that the robot moves to the target position to grasp the object.

2. The method of claim 1, characterized in that before obtaining the target position from the first movement trajectory, the method further includes:

obtaining a third position from the first movement trajectory, where the third position may differ from the target position.

3. The method of claim 1, characterized in that before obtaining the target position from the first movement trajectory, the method further includes:

judging whether an obstacle exists on the first movement trajectory;

if an obstacle exists, re-planning to obtain a second movement trajectory, so that the robot avoids the obstacle;

if no obstacle exists, moving along the first movement trajectory.
4. The method of claim 1, characterized in that after obtaining the target position from the first movement trajectory, the method further includes:

judging whether the target position is within an error range;

if it is, grasping the object with the robot's left or right hand;

if it is not, re-planning to obtain a third movement trajectory, so that the robot reaches the corresponding target position.

5. The method of claim 1, characterized in that after obtaining the target position from the first movement trajectory, the method further includes:

judging, from the target position and the second position, whether the robot's position and posture are optimal for grasping.

6. The method of claim 1, characterized in that the first position includes the three-dimensional coordinates of the robot, and the second position includes the three-dimensional coordinates of the object.

7. The method of claim 1, characterized in that the first movement trajectory includes the robot's direction of travel and distance.

8. The method of claim 1, characterized in that while the robot moves to the target position to grasp the object, the pose of the end of the manipulator arm is determined according to the principle of inverse kinematics.
9. A method for a robot to grasp an object, characterized in that the method includes:

a Kinect infrared device obtaining a first position of the robot and a second position of the object;

sending the first position and the second position to the robot, so that the robot obtains a target position from the first position and the second position in order to grasp the object.

10. The method of claim 9, characterized in that the Kinect infrared device obtaining the first position of the robot and the second position of the object includes:

obtaining a spatial depth image;

obtaining pixel coordinates and depth coordinates from the spatial depth image;

obtaining the first position of the robot and the second position of the object from the pixel coordinates and depth coordinates.
CN201710218154.1A 2017-04-05 2017-04-05 A method for a robot to grasp an object Pending CN107127773A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710218154.1A CN107127773A (en) 2017-04-05 2017-04-05 A method for a robot to grasp an object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710218154.1A CN107127773A (en) 2017-04-05 2017-04-05 A method for a robot to grasp an object

Publications (1)

Publication Number Publication Date
CN107127773A true CN107127773A (en) 2017-09-05

Family

ID=59715586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710218154.1A Pending CN107127773A (en) 2017-04-05 2017-04-05 A kind of method that robot captures article

Country Status (1)

Country Link
CN (1) CN107127773A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108115688A (en) * 2017-12-29 2018-06-05 深圳市越疆科技有限公司 Grasp control method and system for a mechanical arm, and mechanical arm
CN112025714A (en) * 2020-09-17 2020-12-04 珠海格力智能装备有限公司 Workpiece grabbing method and device and robot equipment
CN112171664A (en) * 2020-09-10 2021-01-05 敬科(深圳)机器人科技有限公司 Production line robot track compensation method, device and system based on visual identification

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010096850A1 (en) * 2009-02-26 2010-09-02 Ih Tech Sondermaschinenbau U. Instandhaltung Gmbh Method and apparatus for the robot-controlled gripping and moving of objects
US20120059517A1 (en) * 2010-09-07 2012-03-08 Canon Kabushiki Kaisha Object gripping system, object gripping method, storage medium and robot system
CN105652873A (en) * 2016-03-04 2016-06-08 中山大学 Mobile robot obstacle avoidance method based on Kinect
CN106272427A (en) * 2016-09-12 2017-01-04 安徽理工大学 A kind of industrial robot intelligence picking up system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010096850A1 (en) * 2009-02-26 2010-09-02 Ih Tech Sondermaschinenbau U. Instandhaltung Gmbh Method and apparatus for the robot-controlled gripping and moving of objects
US20120059517A1 (en) * 2010-09-07 2012-03-08 Canon Kabushiki Kaisha Object gripping system, object gripping method, storage medium and robot system
CN105652873A (en) * 2016-03-04 2016-06-08 中山大学 Mobile robot obstacle avoidance method based on Kinect
CN106272427A (en) * 2016-09-12 2017-01-04 安徽理工大学 A kind of industrial robot intelligence picking up system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HAN Zheng et al., "Kinect-based target grasping with a robotic arm", CAAI Transactions on Intelligent Systems (《智能系统学报》) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108115688A (en) * 2017-12-29 2018-06-05 深圳市越疆科技有限公司 Grasp control method and system for a mechanical arm, and mechanical arm
CN112171664A (en) * 2020-09-10 2021-01-05 敬科(深圳)机器人科技有限公司 Production line robot track compensation method, device and system based on visual identification
CN112171664B (en) * 2020-09-10 2021-10-08 敬科(深圳)机器人科技有限公司 Production line robot track compensation method, device and system based on visual identification
CN112025714A (en) * 2020-09-17 2020-12-04 珠海格力智能装备有限公司 Workpiece grabbing method and device and robot equipment

Similar Documents

Publication Publication Date Title
CN105411490B (en) The real-time location method and mobile robot of mobile robot
CN104602869B (en) Robot control method, system and the equipment of visual pursuit based on the remote mobile device with video camera
CN110446159A (en) A kind of system and method for interior unmanned plane accurate positioning and independent navigation
CN111199560B (en) Video monitoring positioning method and video monitoring system
CN106153050A (en) A kind of indoor locating system based on beacon and method
CN106600627A (en) Rigid body motion capturing method and system based on mark point
CN108846867A (en) A kind of SLAM system based on more mesh panorama inertial navigations
CN106020201A (en) Mobile robot 3D navigation and positioning system and navigation and positioning method
CN109313417A (en) Help robot localization
CN105488457A (en) Virtual simulation method and system of camera motion control system in film shooting
CN107127773A (en) A method for a robot to grasp an object
CN109813319A (en) A kind of open loop optimization method and system for building figure based on SLAM
CN104864807A (en) Manipulator hand-eye calibration method based on active binocular vision
CN102368810A (en) Semi-automatic aligning video fusion system and method thereof
CN108170166A (en) The follow-up control method and its intelligent apparatus of robot
CN109062407A (en) Remote mobile terminal three-dimensional display & control system and method based on VR technology
CN106054924A (en) Unmanned aerial vehicle accompanying method, unmanned aerial vehicle accompanying device and unmanned aerial vehicle accompanying system
CN110728739A (en) Virtual human control and interaction method based on video stream
WO2019221800A1 (en) System and method for spatially registering multiple augmented reality devices
CN105698784A (en) Indoor robot positioning system and method
CN110275179A (en) A kind of building merged based on laser radar and vision ground drawing method
CN109202958A (en) A kind of composite machine people visual grasping platform
CN106325278B (en) A kind of robot localization air navigation aid based on Aleuroglyphus ovatus
CN105303518A (en) Region feature based video inter-frame splicing method
CN116052046A (en) Dynamic scene binocular vision SLAM method based on target tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20170905)