CN106384353A - Target positioning method based on RGBD - Google Patents
- Publication number
- CN106384353A (application CN201610817684.3A)
- Authority
- CN
- China
- Prior art keywords
- depth
- image
- trackwindow
- target
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0014—Image feed-back for automatic industrial control, e.g. robot with camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Manipulator (AREA)
Abstract
The invention provides an RGBD-based target positioning method whose main steps are: setting up the hardware environment and initializing the equipment; acquiring the depth data and color image of the mobile robot in the current frame using an RGBD sensor, segmenting the depth image based on the depth data to obtain the target region in the depth image, denoting it SearchRect_depth, and mapping it into the color image, where the corresponding color image region is denoted SearchRectROI; computing the position (x, y) of the target in the color image within SearchRectROI using the CAMSHIFT method; computing the world coordinates (x_w, y_w, z_w) of the current target by combining the depth data; and transmitting the world coordinates (x_w, y_w, z_w) of the current target to the mobile robot through a communication system, so that the robot controls its own motion. The method of the invention computes quickly, positions accurately, adapts well to environmental change, and extends readily to different targets.
Description
Technical field
The invention belongs to the field of vision-based positioning, and in particular relates to an RGBD-based target positioning method.
Background technology
The self-positioning ability of a mobile robot is the basis and key to achieving truly high-performance, high-accuracy navigation: for the robot to move autonomously and complete a given task, it must obtain its own exact position and attitude in real time. Mobile robots are currently equipped with GPS and inertial navigation systems; the technology is relatively mature, and many excellent products can be readily purchased. Although GPS and inertial navigation systems have many advantages, they also have several shortcomings that merit attention:
(1) GPS data is typically updated only once per second, which cannot meet the requirements of fast robot motion.
(2) When the robot is in certain particular poses, such as backward flight, diving, and reduced-gravity conditions, it often cannot receive GPS signals.
(3) GPS signals cannot be received in indoor environments.
Research on autonomous mobile robot control algorithms, environment perception, and similar methods usually requires indoor testing, and in indoor environments and other settings where GPS cannot be used, the robot often cannot know its own position; weather conditions and cloud thickness may also affect GPS signal reception. In addition, changes in the robot's own payload alter its mass distribution and balance, increasing the control difficulty of the mobile robot. Therefore, machine-vision-based localization methods, which are suitable for indoor use and feature accuracy, speed, autonomy, low cost, and good adaptability, have become a research hotspot in current mobile robotics.
Content of the invention
To overcome the deficiencies of the prior art, the present invention provides an RGBD-based target positioning method that computes quickly, positions accurately, and adapts well to environmental change.
To realize the above technical scheme, the invention provides an RGBD-based target positioning method comprising the following steps:
Setting up the hardware environment and initializing the equipment;
Acquiring the depth data and color image of the mobile robot in the current frame using an RGBD sensor, segmenting the depth image based on the depth data to obtain the target region in the depth image, denoted SearchRect_depth, and mapping it into the color image, where the corresponding color image region is denoted SearchRectROI;
Computing the position (x, y) of the target in the color image within SearchRectROI using the CAMSHIFT method;
Computing the world coordinates (x_w, y_w, z_w) of the current target by combining the depth data;
Transmitting the world coordinates (x_w, y_w, z_w) of the current target to the mobile robot through a communication system, so that the robot controls its own motion.
Preferably, setting up the hardware environment comprises the following steps:
Fixing the RGBD sensor on the ceiling so that its front panel is parallel to the ground, at a distance between 0.8 and 4 meters from the ground;
Fixing a color marker at the center of the mobile robot.
Preferably, segmenting the depth image based on the depth data comprises the following steps:
Denote the target depth value z_w in the previous frame's positioning result (x_w, y_w, z_w) as depth_pre, and the depth of an arbitrary point (i, j) in the current frame's depth image as depth(i, j); acquire the current frame's depth data.
Denote the depth threshold as T, the depth image width as depth_cols, the depth image height as depth_rows, and the gray value of a pixel in the segmented depth image as depth_seg(i, j). Traverse the current frame's depth data and segment the depth image as follows:
depth_seg(i, j) = 255 if |depth(i, j) − depth_pre| ≤ T, and depth_seg(i, j) = 0 otherwise,
where i = 0, 1, 2, ..., depth_cols and j = 0, 1, 2, ..., depth_rows.
Perform connected-component detection on the segmented depth image depth_seg(i, j), select the connected component of largest area, and take its bounding rectangle as the target region SearchRect_depth in the depth image.
Through the coordinate correspondence between the depth image and the color image, obtain from the color image the search rectangle SearchRectROI corresponding to SearchRect_depth.
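The patent publishes no reference code; a minimal OpenCV/Python sketch of this segmentation step, under the assumption that the segmentation rule is the thresholding |depth(i, j) − depth_pre| ≤ T stated above (segment_depth is an illustrative name, and depth is assumed to be a registered depth array in millimeters):

```python
import cv2
import numpy as np

def segment_depth(depth, depth_pre, T):
    # Foreground = pixels whose depth lies within T of the previous frame's
    # target depth depth_pre; this yields the binary image depth_seg(i, j).
    depth_seg = np.where(np.abs(depth.astype(np.int32) - depth_pre) <= T,
                         255, 0).astype(np.uint8)
    # Connected-component detection; keep the component of largest area.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(depth_seg)
    if n < 2:
        return None  # no foreground component found
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # label 0 is background
    x, y = stats[largest, cv2.CC_STAT_LEFT], stats[largest, cv2.CC_STAT_TOP]
    w, h = stats[largest, cv2.CC_STAT_WIDTH], stats[largest, cv2.CC_STAT_HEIGHT]
    # Bounding rectangle of the largest component: SearchRect_depth.
    return (x, y, w, h)
```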
Preferably, the coordinate correspondence between the depth image and the color image is the mapping between the depth data and the color image pixels obtained through lens calibration of the RGBD sensor: given a point (i, j) in the color image, the depth information depth(i, j) of that point can be obtained through this mapping.
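Assuming the sensor pipeline already supplies a depth frame registered (remapped through that lens calibration) to color-image coordinates, as Kinect-style SDKs can provide, the lookup reduces to an array index; depth_reg is an illustrative name for that registered depth map:

```python
import numpy as np

def depth_at(depth_reg: np.ndarray, i: int, j: int) -> float:
    # depth_reg is the depth image already remapped to color-image coordinates,
    # so the depth of color pixel (i, j) is a direct lookup
    # (rows index j, columns index i).
    return float(depth_reg[j, i])
```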
Preferably, computing the tracked target center position O, i.e. (x, y), in the color image SearchRectROI using the CAMSHIFT method comprises the following steps:
On the first run, target initialization is required: manually select the target tracking rectangle trackWindow_init. The tracked target should have a distinctive color feature; in the present invention the tracked target is the color marker fixed at the center of the robot.
Perform a CAMSHIFT search within SearchRectROI to obtain the new target location trackWindow, and compute its area trackWindow.area. If trackWindow.area is greater than the threshold T_area, tracking has succeeded, and the centroid coordinates (x, y) of trackWindow are obtained. If trackWindow.area is less than the threshold T_area, tracking has failed: reinitialize trackWindow_init to the previous frame's tracking result trackWindow_pre, and repeat the CAMSHIFT search to obtain a new target location trackWindow.
Preferably, the threshold is 30% of the initial trackWindow_init area, i.e.
T_area = 0.3 * trackWindow_init.area.
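A minimal OpenCV/Python sketch of this tracking step, assuming a hue-histogram back projection as the CAMSHIFT color model (the patent does not fix the histogram details; init_histogram and camshift_step are illustrative names):

```python
import cv2
import numpy as np

TERM_CRIT = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

def init_histogram(bgr, init_window):
    # Hue histogram of the manually selected trackWindow_init,
    # used as the color model for CAMSHIFT back projection.
    x, y, w, h = init_window
    hsv = cv2.cvtColor(bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
    hist = cv2.calcHist([hsv], [0], mask, [16], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def camshift_step(bgr_roi, hist, track_window, t_area):
    # One CAMSHIFT search inside SearchRectROI, with the T_area check:
    # success returns the window centroid; failure keeps the previous window.
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    rot_rect, new_window = cv2.CamShift(backproj, track_window, TERM_CRIT)
    x, y, w, h = new_window
    if w * h > t_area:                       # trackWindow.area > T_area: success
        cx, cy = rot_rect[0]                 # centroid (x, y) of trackWindow
        return (int(cx), int(cy)), new_window
    return None, track_window                # failure: fall back to trackWindow_pre
```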
Preferably, computing the world coordinates (x_w, y_w, z_w) of the current target by combining the depth data comprises the following steps:
Let W and H be the width and height of the RGBD sensor's color image, and convert the image coordinates (x, y) into coordinates (x1, y1) with the image center as origin, calculated as:
x1 = x − W/2, y1 = y − H/2.
Let the field of view of the RGBD color camera be the horizontal view angle αh and the vertical view angle αv; the field-of-view angle corresponding to each pixel is then:
horizontal: αh/W; vertical: αv/H.
The horizontal angle θh and vertical angle θv corresponding to the current pixel (x1, y1) are then:
θh = x1·αh/W, θv = y1·αv/H.
From the depth information z_w, the world coordinates (x_w, y_w, z_w) of the current target are obtained as:
x_w = z_w·tan θh, y_w = z_w·tan θv.
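These formulas translate directly into a small conversion routine; a sketch with view angles in degrees (pixel_to_world is an illustrative name, and z_w is the depth read from the registered depth data at the tracked pixel):

```python
import math

def pixel_to_world(x, y, z_w, W, H, alpha_h, alpha_v):
    # Shift image coordinates so the origin is the image center.
    x1 = x - W / 2.0
    y1 = y - H / 2.0
    # Ray angles of the pixel: per-pixel angle (alpha_h/W, alpha_v/H) times offset.
    theta_h = x1 * alpha_h / W
    theta_v = y1 * alpha_v / H
    # Project the measured depth z_w along those angles.
    x_w = z_w * math.tan(math.radians(theta_h))
    y_w = z_w * math.tan(math.radians(theta_v))
    return x_w, y_w, z_w
```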
Compared with the prior art, the technical solution adopted by the present invention has the following beneficial effects:
The hardware used by the method of the invention is simple and low-cost, involving only an RGBD sensor and a computer equipped with a Windows operating system and the corresponding RGBD sensor interface. In terms of target tracking, fusing depth data into the CAMSHIFT algorithm greatly reduces the algorithm's sensitivity to illumination changes, enhances the robustness and real-time performance of the tracking algorithm, and significantly attenuates interference from background objects whose color is close to that of the target. Compared with other contact-based robot positioning methods, this method uses non-contact measurement and computes at up to 25 frames per second, giving it the advantages of fast computation and accurate positioning; it adapts well to environmental change and extends readily to different targets.
Brief description of the drawings
Fig. 1 is the overall flow chart of the method of the invention.
Fig. 2 is the flow chart of the image segmentation fusing depth data in the method of the invention.
Fig. 3 is the flow chart of the improved CAMSHIFT tracking in the method of the invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Embodiment: an RGBD-based target positioning method.
In this embodiment, a second-generation Kinect sensor is used, comprising an RGB camera, an infrared receiver, an infrared transmitter, and a microphone array. The RGBD sensor world coordinate system is denoted OnXnYnZn, abbreviated the N coordinate system. The origin of the N coordinate system is the center point of the sensor's color camera; the Z axis of the N coordinate system is perpendicular to the Kinect front panel, the X axis points left along the front panel, and the Y axis points down along the front panel.
With reference to Fig. 1, an RGBD-based target positioning method comprises the following steps:
Setting up the hardware environment and initializing the equipment;
Acquiring the depth image and color image of the aircraft in the current frame during flight using the RGBD sensor, segmenting the depth image based on the depth data to obtain the target region in the depth image, denoted SearchRect_depth, and mapping it into the color image, where the corresponding color image region is denoted SearchRectROI;
Computing the position (x, y) of the target in the color image within SearchRectROI using the CAMSHIFT method;
Computing the world coordinates (x_w, y_w, z_w) of the current target by combining the depth data;
Transmitting the world coordinates of the current target to the aircraft through a ZigBee communication system, so that the aircraft controls its own motion.
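The patent does not specify the ZigBee payload; as a purely illustrative sketch, the world coordinates could be serialized and written to a serial-attached ZigBee module (pyserial assumed; the header byte and float layout are hypothetical):

```python
import struct

import serial  # pyserial; a serial-attached ZigBee module is assumed

def send_target(ser: serial.Serial, x_w: float, y_w: float, z_w: float) -> None:
    # Hypothetical wire format: one 'T' header byte followed by the three
    # world coordinates as little-endian 32-bit floats (millimeters).
    ser.write(b"T" + struct.pack("<3f", x_w, y_w, z_w))

# Example: send_target(serial.Serial("/dev/ttyUSB0", 115200), 308.0, 313.0, 2000.0)
```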
In this embodiment, setting up the hardware environment comprises the following steps:
Fixing the RGBD sensor on the ceiling so that its front panel is parallel to the ground, at a distance between 0.8 and 4 meters from the ground;
Fixing a color marker at the center of the mobile robot.
With reference to Fig. 2, segmenting the depth image based on the depth data comprises the following steps:
Denote the target depth value z_w in the previous frame's positioning result (x_w, y_w, z_w) as depth_pre, and the depth of an arbitrary point (i, j) in the current frame's depth image as depth(i, j); acquire the current frame's depth data.
Denote the depth threshold as T, the depth image width as depth_cols, the depth image height as depth_rows, and the gray value of a pixel in the segmented depth image as depth_seg(i, j). Traverse the current frame's depth data and segment the depth image as follows:
depth_seg(i, j) = 255 if |depth(i, j) − depth_pre| ≤ T, and depth_seg(i, j) = 0 otherwise,
where i = 0, 1, 2, ..., depth_cols and j = 0, 1, 2, ..., depth_rows.
Perform connected-component detection on the segmented depth image depth_seg(i, j), select the connected component of largest area, and take its bounding rectangle as the target region SearchRect_depth in the depth image.
Through the coordinate correspondence between the depth image and the color image, obtain from the color image the search rectangle SearchRectROI corresponding to SearchRect_depth.
In this embodiment, the coordinate correspondence between the depth image and the color image is the mapping between the depth data and the color image pixels obtained through lens calibration of the RGBD sensor: given a point (i, j) in the color image, the depth information depth(i, j) of that point can be obtained through this mapping.
With reference to Fig. 3, computing the tracked target center position O in the color image SearchRectROI using the CAMSHIFT method, with image coordinates (x, y), comprises the following steps:
Target initialization: manually select the target tracking rectangle trackWindow_init; in this embodiment the tracked target is the green panel at the center of the quadrotor aircraft.
Perform a CAMSHIFT search within SearchRectROI to obtain the new target location trackWindow, and compute its area trackWindow.area. If trackWindow.area is greater than the threshold T_area, tracking has succeeded, and the target center position in the color image, i.e. (x, y), is the center point of the trackWindow rectangle. If trackWindow.area is less than the threshold T_area, tracking has failed: reinitialize trackWindow_init to the previous frame's tracking result trackWindow_pre, and repeat the CAMSHIFT search to obtain a new target location trackWindow.
In this embodiment, the threshold is 30% of the initial trackWindow_init area, i.e.
T_area = 0.3 * trackWindow_init.area.
In this embodiment, computing the coordinates (x_w, y_w, z_w) of the current target in the N coordinate system by combining the depth data comprises the following steps:
Let W and H be the width and height of the RGBD sensor's color image; for the Kinect, W = 1920 and H = 1080. Convert the image coordinates (x, y) into coordinates (x1, y1) with the image center as origin, calculated as:
x1 = x − W/2, y1 = y − H/2.
Let the field of view of the RGBD color camera be the horizontal view angle αh and the vertical view angle αv; for the Kinect, αh = 70° and αv = 60°. The field-of-view angle corresponding to each pixel is then:
horizontal: αh/W; vertical: αv/H.
The horizontal angle θh and vertical angle θv corresponding to the current pixel (x1, y1) are then:
θh = x1·αh/W, θv = y1·αv/H.
From the Kinect depth information z_w, the world coordinates (x_w, y_w, z_w) of the current target are obtained as:
x_w = z_w·tan θh, y_w = z_w·tan θv.
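As a worked check with these Kinect values, for a hypothetical tracked pixel (x, y) = (1200, 700) at depth z_w = 2000 mm: x1 = 1200 − 960 = 240 and y1 = 700 − 540 = 160; θh = 240·70°/1920 = 8.75° and θv = 160·60°/1080 ≈ 8.89°; hence x_w = 2000·tan 8.75° ≈ 308 mm and y_w = 2000·tan 8.89° ≈ 313 mm.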
The above is a preferred embodiment of the present invention, but the present invention should not be limited to the content disclosed by this embodiment and the accompanying drawings. Any equivalent modification or variation that does not depart from the spirit disclosed in the present invention falls within the protection scope of the present invention.
Claims (7)
1. An RGBD-based target positioning method, characterized in that it comprises the following steps:
setting up the hardware environment and initializing the equipment;
acquiring the depth data and color image of the mobile robot in the current frame using an RGBD sensor, segmenting the depth image based on the depth data to obtain the target region in the depth image, denoted SearchRect_depth, and mapping it into the color image, where the corresponding color image region is denoted SearchRectROI;
computing the position (x, y) of the target in the color image within SearchRectROI using the CAMSHIFT method;
computing the world coordinates (x_w, y_w, z_w) of the current target by combining the depth data;
transmitting the world coordinates (x_w, y_w, z_w) of the current target to the mobile robot through a communication system, so that the robot controls its own motion.
2. The RGBD-based target positioning method of claim 1, characterized in that setting up the hardware environment comprises the following steps:
fixing the RGBD sensor on the ceiling so that its front panel is parallel to the ground, at a distance between 0.8 and 4 meters from the ground;
fixing a color marker at the center of the mobile robot.
3. The RGBD-based target positioning method of claim 1, characterized in that segmenting the depth image based on the depth data comprises the following steps:
denoting the target depth value z_w in the previous frame's positioning result (x_w, y_w, z_w) as depth_pre, and the depth of an arbitrary point (i, j) in the current frame's depth image as depth(i, j), and acquiring the current frame's depth data;
denoting the depth threshold as T, the depth image width as depth_cols, the depth image height as depth_rows, and the gray value of a pixel in the segmented depth image as depth_seg(i, j), traversing the current frame's depth data, and segmenting the depth image as follows:
depth_seg(i, j) = 255 if |depth(i, j) − depth_pre| ≤ T, and depth_seg(i, j) = 0 otherwise,
where i = 0, 1, 2, ..., depth_cols and j = 0, 1, 2, ..., depth_rows;
performing connected-component detection on the segmented depth image depth_seg(i, j), selecting the connected component of largest area, and taking its bounding rectangle as the target region SearchRect_depth in the depth image;
obtaining from the color image, through the coordinate correspondence between the depth image and the color image, the search rectangle SearchRectROI corresponding to SearchRect_depth.
4. The RGBD-based target positioning method of claim 3, characterized in that the coordinate correspondence between the depth image and the color image is the mapping between the depth data and the color image pixels obtained through lens calibration of the RGBD sensor: given a point (i, j) in the color image, the depth information depth(i, j) of that point can be obtained through this mapping.
5. The RGBD-based target positioning method of claim 1, characterized in that computing the tracked target center position O, i.e. (x, y), in the color image SearchRectROI using the CAMSHIFT method comprises the following steps:
on the first run, performing target initialization: manually selecting the target tracking rectangle trackWindow_init, the tracked target having a distinctive color feature, the tracked target in the present invention being the color marker fixed at the center of the robot;
performing a CAMSHIFT search within SearchRectROI to obtain the new target location trackWindow, and computing its area trackWindow.area; if trackWindow.area is greater than the threshold T_area, tracking has succeeded and the centroid coordinates (x, y) of trackWindow are obtained; if trackWindow.area is less than the threshold T_area, tracking has failed, trackWindow_init is reinitialized to the previous frame's tracking result trackWindow_pre, and the CAMSHIFT search is repeated to obtain a new target location trackWindow.
6. The RGBD-based target positioning method of claim 5, characterized in that the threshold is 30% of the initial trackWindow_init area, i.e.
T_area = 0.3 * trackWindow_init.area.
7. The RGBD-based target positioning method of claim 1, characterized in that computing the world coordinates (x_w, y_w, z_w) of the current target by combining the depth data comprises the following steps:
letting W and H be the width and height of the RGBD sensor's color image, and converting the image coordinates (x, y) into coordinates (x1, y1) with the image center as origin:
x1 = x − W/2, y1 = y − H/2;
denoting the field of view of the RGBD color camera as the horizontal view angle αh and the vertical view angle αv, so that the field-of-view angle corresponding to each pixel is:
horizontal: αh/W; vertical: αv/H;
obtaining the horizontal angle θh and vertical angle θv corresponding to the current pixel (x1, y1):
θh = x1·αh/W, θv = y1·αv/H;
obtaining, from the depth information z_w, the world coordinates (x_w, y_w, z_w) of the current target:
x_w = z_w·tan θh, y_w = z_w·tan θv.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610817684.3A CN106384353A (en) | 2016-09-12 | 2016-09-12 | Target positioning method based on RGBD |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610817684.3A CN106384353A (en) | 2016-09-12 | 2016-09-12 | Target positioning method based on RGBD |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106384353A true CN106384353A (en) | 2017-02-08 |
Family
ID=57936426
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610817684.3A Pending CN106384353A (en) | 2016-09-12 | 2016-09-12 | Target positioning method based on RGBD |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106384353A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102867311A (en) * | 2011-07-07 | 2013-01-09 | 株式会社理光 | Target tracking method and target tracking device |
CN103971380A (en) * | 2014-05-05 | 2014-08-06 | 中国民航大学 | Pedestrian trailing detection method based on RGB-D |
KR20160097512A (en) * | 2015-02-09 | 2016-08-18 | 선문대학교 산학협력단 | Paired-edge based hand detection method using depth image |
CN104880154A (en) * | 2015-06-03 | 2015-09-02 | 西安交通大学 | Internet-of-things binocular vision zoom dynamic target tracking test system platform and Internet-of-things binocular vision zoom dynamic target tracking ranging method |
Non-Patent Citations (2)
Title |
---|
Liu Shirong (刘士荣), "Moving target tracking of mobile robots based on an improved Camshift algorithm", Journal of Huazhong University of Science and Technology (Natural Science Edition) * |
Zhang Hui (张慧), "Moving target tracking technology for mobile robots based on an RGBD camera", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108871310A (en) * | 2017-05-12 | 2018-11-23 | 中华映管股份有限公司 | Thermal image positioning system and localization method |
CN107403430B (en) * | 2017-06-15 | 2020-08-07 | 中山大学 | RGBD image semantic segmentation method |
CN107403430A (en) * | 2017-06-15 | 2017-11-28 | 中山大学 | A kind of RGBD image, semantics dividing method |
CN107563388A (en) * | 2017-09-18 | 2018-01-09 | 东北大学 | A kind of convolutional neural networks object identification method based on depth information pre-segmentation |
CN107945192B (en) * | 2017-12-14 | 2021-10-22 | 北京信息科技大学 | Tray carton pile type real-time detection method |
CN107945192A (en) * | 2017-12-14 | 2018-04-20 | 北京信息科技大学 | A kind of pallet carton pile type real-time detection method |
CN108345912A (en) * | 2018-04-25 | 2018-07-31 | 电子科技大学中山学院 | Commodity rapid settlement system based on RGBD information and deep learning |
CN108717553A (en) * | 2018-05-18 | 2018-10-30 | 杭州艾米机器人有限公司 | A kind of robot follows the method and system of human body |
CN108717553B (en) * | 2018-05-18 | 2020-08-18 | 杭州艾米机器人有限公司 | Method and system for robot to follow human body |
CN108846864A (en) * | 2018-05-29 | 2018-11-20 | 珠海全志科技股份有限公司 | A kind of position capture system, the method and device of moving object |
CN108715233A (en) * | 2018-05-29 | 2018-10-30 | 珠海全志科技股份有限公司 | A kind of unmanned plane during flying precision determination method |
CN108876795A (en) * | 2018-06-07 | 2018-11-23 | 四川斐讯信息技术有限公司 | A kind of dividing method and system of objects in images |
CN108994832A (en) * | 2018-07-20 | 2018-12-14 | 上海节卡机器人科技有限公司 | A kind of robot eye system and its self-calibrating method based on RGB-D camera |
CN108994832B (en) * | 2018-07-20 | 2021-03-02 | 上海节卡机器人科技有限公司 | Robot eye system based on RGB-D camera and self-calibration method thereof |
CN109146906A (en) * | 2018-08-22 | 2019-01-04 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment, computer readable storage medium |
CN109816688A (en) * | 2018-12-03 | 2019-05-28 | 安徽酷哇机器人有限公司 | Article follower method and luggage case |
CN109472814A (en) * | 2018-12-05 | 2019-03-15 | 湖南大学 | A kind of multiple quadrotor indoor tracking and positioning methods based on Kinect |
US12008824B2 (en) | 2018-12-24 | 2024-06-11 | Autel Robotics Co., Ltd. | Target positioning method and device, and unmanned aerial vehicle |
WO2020135446A1 (en) * | 2018-12-24 | 2020-07-02 | 深圳市道通智能航空技术有限公司 | Target positioning method and device and unmanned aerial vehicle |
CN109934873B (en) * | 2019-03-15 | 2021-11-02 | 百度在线网络技术(北京)有限公司 | Method, device and equipment for acquiring marked image |
CN109934873A (en) * | 2019-03-15 | 2019-06-25 | 百度在线网络技术(北京)有限公司 | Mark image acquiring method, device and equipment |
CN110084828A (en) * | 2019-04-29 | 2019-08-02 | 北京华捷艾米科技有限公司 | A kind of image partition method, device and terminal device |
CN110244710B (en) * | 2019-05-16 | 2022-05-31 | 达闼机器人股份有限公司 | Automatic tracing method, device, storage medium and electronic equipment |
CN110244710A (en) * | 2019-05-16 | 2019-09-17 | 深圳前海达闼云端智能科技有限公司 | Automatic Track Finding method, apparatus, storage medium and electronic equipment |
WO2021057739A1 (en) * | 2019-09-27 | 2021-04-01 | Oppo广东移动通信有限公司 | Positioning method and device, apparatus, and storage medium |
US12051223B2 (en) | 2019-09-27 | 2024-07-30 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Positioning method, electronic device, and storage medium |
CN111428622A (en) * | 2020-03-20 | 2020-07-17 | 上海健麾信息技术股份有限公司 | Image positioning method based on segmentation algorithm and application thereof |
CN111428622B (en) * | 2020-03-20 | 2023-05-09 | 上海健麾信息技术股份有限公司 | Image positioning method based on segmentation algorithm and application thereof |
CN114049399A (en) * | 2022-01-13 | 2022-02-15 | 上海景吾智能科技有限公司 | Mirror positioning method combining RGBD image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170208 |