CN111351485A - Intelligent robot autonomous positioning method and device, chip and visual robot


Info

Publication number
CN111351485A
CN111351485A (application CN201811581882.XA)
Authority
CN
China
Prior art keywords
intelligent robot
image
images
space
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811581882.XA
Other languages
Chinese (zh)
Inventor
杨武
蒋新桥
赖钦伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Amicro Semiconductor Co Ltd
Original Assignee
Zhuhai Amicro Semiconductor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Amicro Semiconductor Co Ltd filed Critical Zhuhai Amicro Semiconductor Co Ltd
Priority to CN201811581882.XA priority Critical patent/CN111351485A/en
Publication of CN111351485A publication Critical patent/CN111351485A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to an intelligent robot autonomous positioning method and device, a chip and a visual robot, and belongs to the field of intelligent robots. The intelligent robot is provided with a camera, the camera is connected with the intelligent robot through a pan-tilt structure, and the pan-tilt structure has at least two degrees of freedom. The intelligent robot autonomous positioning method comprises the following steps: after the intelligent robot receives a work starting instruction in a work space, controlling the pan-tilt structure to turn to a plurality of preset directions within a preset time period so as to drive the camera to respectively shoot a plurality of images in the plurality of directions; splicing a panoramic image according to the rotation direction of the pan-tilt structure and the image data shot in each rotation direction; and determining the positioning information of the intelligent robot according to the shot image data. Through this technical scheme, the intelligent robot can be accurately positioned.

Description

Intelligent robot autonomous positioning method and device, chip and visual robot
Technical Field
The invention relates to the field of intelligent robots, in particular to an intelligent robot autonomous positioning method, an intelligent robot autonomous positioning device, a chip and a visual robot.
Background
With the popularization and promotion of the intelligent robot, it has become increasingly popular with consumers thanks to the practical performance and good user experience of intelligent planning, automatic cleaning and automatic obstacle avoidance, and has become an essential cleaning tool for many families.
However, autonomous positioning of the intelligent robot remains difficult. If a multi-line laser radar is adopted, the cost is high and the algorithm is complex; and if autonomous positioning is not achieved, the space cannot be accurately detected, which causes operation errors: the operation is not accurate enough, and missed and repeated operation occurs.
Disclosure of Invention
In order to at least partially solve the above problems in the prior art, an object of embodiments of the present invention is to provide an intelligent robot autonomous positioning method, apparatus, chip and visual robot. The specific technical scheme is as follows:
an intelligent robot autonomous positioning method, the intelligent robot having a camera, the camera being connected to the intelligent robot through a pan-tilt structure, the pan-tilt structure having at least two degrees of freedom, wherein the two degrees of freedom at least include a degree of freedom in a vertical direction and a degree of freedom in a horizontal direction, the intelligent robot autonomous positioning method comprising: after the intelligent robot receives a work starting instruction in a work space, controlling the pan-tilt structure to turn to a plurality of preset directions within a preset time period so as to drive the camera to respectively shoot a plurality of images in the plurality of directions; splicing a panoramic image according to the rotation direction of the pan-tilt structure and the image data shot in each rotation direction, wherein the method for splicing the panoramic image comprises determining spatial coordinate information of the images shot in each rotation direction according to the rotation direction of the pan-tilt structure, projecting the corresponding images onto spatial coordinates according to the spatial coordinate information, identifying overlapping areas of adjacent images according to an image feature identification algorithm, and fusing the images of the overlapping areas to generate a three-dimensional image panorama; extracting top features of the work space by using a feature extraction method, and recovering three-dimensional information by using a preset spherical camera model and the multi-view geometric constraint principle; and determining the positioning information of the intelligent robot in the three-dimensional space according to the image data shot by the intelligent robot.
Further, in the intelligent robot autonomous positioning method, the three-dimensional image panorama is used to realize autonomous positioning of the intelligent robot in the environment, and when the image is updated in real time during real-time autonomous positioning, the three-dimensional image panorama is updated according to an extended Kalman filtering and particle filtering method.
Further, the feature extraction method is a SIFT feature extraction method.
Further, the corresponding image is projected onto spatial coordinates according to the spatial coordinate information, wherein the spatial coordinates are spherical polar coordinates of the spherical model.
An intelligent robot autonomous positioning device is also provided. The intelligent robot has a camera, the camera is connected with the intelligent robot through a pan-tilt structure, and the pan-tilt structure has at least two degrees of freedom, wherein the two degrees of freedom at least include a degree of freedom in a vertical direction and a degree of freedom in a horizontal direction. The intelligent robot autonomous positioning device comprises: an image shooting module, used for controlling the pan-tilt structure to turn to a plurality of preset directions within a preset time period after the intelligent robot receives a work starting instruction in a work space, so as to drive the camera to respectively shoot a plurality of images in the plurality of directions; a panorama generation module, used for splicing panoramic images according to the rotation direction of the pan-tilt structure and the image data shot in each rotation direction, wherein the method for splicing the panoramic images comprises determining the spatial coordinate information of the images shot in each rotation direction according to the rotation direction of the pan-tilt structure, projecting the corresponding images onto spatial coordinates according to the spatial coordinate information, identifying the overlapping areas of adjacent images according to an image feature identification algorithm, and fusing the images of the overlapping areas to generate a three-dimensional image panorama; a feature extraction module, used for extracting the top features of the work space by using a feature extraction method and recovering three-dimensional information by using a preset spherical camera model and the multi-view geometric constraint principle; and a positioning information determining module, used for determining the positioning information of the intelligent robot in the three-dimensional space according to the image data shot by the intelligent robot.
Further, the three-dimensional image panorama is used to realize autonomous positioning of the intelligent robot in the environment, and when the image is updated in real time during the autonomous positioning, the three-dimensional image panorama is updated according to an extended Kalman filtering and particle filtering method.
Further, the feature extraction method is a SIFT feature extraction method.
Further, the corresponding image is projected onto spatial coordinates according to the spatial coordinate information, wherein the spatial coordinates are spherical polar coordinates of the spherical model.
A chip is also provided, on which a computer program is stored; according to the computer program, the chip can control a robot to execute the above intelligent robot autonomous positioning method.
A visual robot is also provided, the robot comprising one or more main control chips, wherein the main control chip is the chip described above.
The intelligent robot in the above technical scheme is provided with a pan-tilt structure that supports the camera in shooting images from a plurality of directions of the work space. Shooting in the plurality of directions is completed within a preset time period, so that images in the plurality of directions are obtained and the work space is covered completely, without dead angles. Image space data in each direction are generated according to the rotation direction of the pan-tilt, which ensures the accuracy of data splicing; the images are directly projected onto a model according to their spatial coordinates to quickly obtain a space model; the top features of the work space are extracted by using a feature extraction method, and three-dimensional information is recovered by using a preset spherical camera model and the multi-view geometric constraint principle, so that splicing can be completed quickly and splicing efficiency is ensured. Because the three-dimensional work space can be spliced quickly, the technical effect of accurate positioning is achieved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the embodiments of the invention without limiting the embodiments of the invention. In the drawings:
fig. 1 schematically illustrates an intelligent robot autonomous positioning method according to an embodiment of the present invention;
fig. 2 schematically illustrates an intelligent robot autonomous positioning apparatus provided by an embodiment of the invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
Fig. 1 is a flowchart of an intelligent robot autonomous positioning method according to an embodiment of the present invention. The intelligent robot is provided with a camera, the camera is connected with the intelligent robot through a pan-tilt structure, and the pan-tilt structure has at least two degrees of freedom, wherein the two degrees of freedom at least comprise a degree of freedom in a vertical direction and a degree of freedom in a horizontal direction. As shown in fig. 1, the intelligent robot autonomous positioning method comprises the following steps. Step 101: after the intelligent robot receives a work starting instruction in a work space, controlling the pan-tilt structure to turn to a plurality of preset directions within a preset time period so as to drive the camera to respectively shoot a plurality of images in the plurality of directions. For example, the pan-tilt structure is rotated sequentially through the preset directions within 10 seconds before the start of work, and photographs are taken at a preset frequency while it rotates; for example, one image may be taken every 0.01 seconds. Step 102: splicing the panoramic image according to the rotation direction of the pan-tilt structure and the image data shot in each rotation direction, wherein the method for splicing the panoramic image comprises determining the spatial coordinate information of the image shot in each rotation direction according to the rotation direction of the pan-tilt structure, projecting the corresponding image onto spatial coordinates according to the spatial coordinate information, identifying the overlapping areas of adjacent images according to an image feature recognition algorithm, and fusing the images of the overlapping areas to generate a three-dimensional image panorama.
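To make the mapping in Step 102 concrete, the sketch below assigns each pan-tilt rotation a pair of spherical polar coordinates and a viewing direction on the unit sphere model. This is an illustrative Python sketch under assumed angle conventions (pan as azimuth, tilt measured up from the horizontal); the patent does not fix these conventions.

```python
import math

def pantilt_to_spherical(pan_deg, tilt_deg):
    """Map a pan-tilt rotation (degrees) to spherical polar coordinates
    (azimuth theta, polar angle phi) on a unit sphere model."""
    theta = math.radians(pan_deg % 360.0)   # azimuth from pan (horizontal DOF)
    phi = math.radians(90.0 - tilt_deg)     # polar angle from tilt (vertical DOF)
    return theta, phi

def spherical_to_unit_vector(theta, phi):
    """Direction the camera faces for one captured image."""
    return (math.sin(phi) * math.cos(theta),
            math.sin(phi) * math.sin(theta),
            math.cos(phi))

# Assign a viewing direction to each image taken during the sweep,
# e.g. a horizontal sweep in 45-degree steps at 30 degrees of tilt.
directions = [spherical_to_unit_vector(*pantilt_to_spherical(pan, 30.0))
              for pan in range(0, 360, 45)]
```

Each captured image can then be placed on the sphere at its computed direction before adjacent overlaps are identified and fused.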
Step 103: the method comprises the steps of extracting top features of the operating space by using a feature extraction method, recovering three-dimensional information by using a preset spherical camera model and a multi-view geometric constraint principle, specifically, locating the orientation of the intelligent robot by using the top features, such as the texture features of a ceiling, for example, identifying the horizontal stripe features if the top features are horizontal stripes, identifying all the stripes representing the ceiling according to the horizontal stripe features, then determining the distance between the stripes, and determining the orientation of the intelligent robot according to the position of the corresponding stripe right above the current position of the intelligent robot. Step 104: the positioning information determining module is used for determining the positioning information of the intelligent robot in the three-dimensional space according to the image data shot by the intelligent robot.
Preferably, in the intelligent robot autonomous positioning method, the three-dimensional image panorama is used to realize autonomous positioning of the intelligent robot in the environment, and when the image is updated in real time during real-time autonomous positioning, the three-dimensional image panorama is updated according to an extended Kalman filtering and particle filtering method. Kalman filtering is an algorithm that uses a linear system state equation to optimally estimate the state of a system from its input and output observation data. Because the observation data include the influence of noise and interference in the system, the optimal estimation can also be regarded as a filtering process: new images are obtained in real time during motion, part of the noise in these images is probably caused by the movement of the intelligent robot, and this noise can be filtered out by the extended Kalman filter so as to obtain clear images. Particle filtering approximately represents a probability density function by a set of random samples propagated through the state space, replaces the integral operation with a sample mean, and thereby obtains a minimum-variance estimate of the system state. Interference signals can be identified through particle filtering, so that a better updated image is obtained; the azimuth information of the updated image is re-integrated, and the updated image is then merged into the original panorama.
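A generic predict-weight-resample cycle of the particle filter invoked above can be sketched as follows, for a one-dimensional pose estimate. This is a minimal bootstrap-filter sketch, not the patented algorithm; the observation model `measure` and all names are illustrative.

```python
import math
import random

def particle_filter_step(particles, weights, motion, measure, noise=0.05, rng=None):
    """One predict-weight-resample cycle of a bootstrap particle filter.

    particles: pose hypotheses; weights: their normalised weights;
    motion: odometry increment; measure(p): likelihood of the current
    camera observation given pose p."""
    rng = rng or random.Random(0)
    # Predict: propagate each pose hypothesis through the motion model plus noise.
    moved = [p + motion + rng.gauss(0.0, noise) for p in particles]
    # Weight: rescore each hypothesis against the new image observation.
    w = [wi * measure(p) for wi, p in zip(weights, moved)]
    total = sum(w) or 1.0
    w = [x / total for x in w]
    # Resample proportionally to the weights; the sample mean then
    # approximates the minimum-variance estimate of the robot state.
    resampled = rng.choices(moved, weights=w, k=len(moved))
    return resampled, [1.0 / len(moved)] * len(moved)

# Example: particles near pose 0 move by odometry +1.0 and are scored
# against an observation model that peaks at pose 1.0.
parts, wts = particle_filter_step(
    [0.0] * 200, [1.0 / 200] * 200,
    motion=1.0,
    measure=lambda p: math.exp(-(p - 1.0) ** 2 / 0.02),
    rng=random.Random(42))
estimate = sum(parts) / len(parts)
```

After resampling, the sample mean concentrates near the observation peak, which is the "minimum-variance estimate" role the text assigns to the sample mean.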
Preferably, the feature extraction method is a SIFT feature extraction method. The SIFT feature extraction algorithm has the characteristic of high extraction speed and can quickly identify the features of an image.
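Once SIFT-style descriptors are extracted from two adjacent images, the overlap region is usually located by nearest-neighbour matching with Lowe's ratio test. The sketch below shows that filter on toy two-dimensional "descriptors"; a real pipeline would use 128-dimensional SIFT vectors (e.g. via OpenCV's `SIFT_create`), which this pure-Python sketch deliberately does not depend on.

```python
def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with the ratio test:
    accept a match only if the best candidate in desc_b is clearly
    closer than the second-best."""
    def dist2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))

    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist2(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        # Squared-distance form of the ratio test.
        if dist2(d, desc_b[best]) < (ratio ** 2) * dist2(d, desc_b[second]):
            matches.append((i, best))
    return matches

# Toy descriptors from two adjacent shots: the first two features recur
# in both images (the overlap region); the third is unmatched clutter.
pairs = match_descriptors([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]],
                          [[0.1, 0.0], [1.05, 0.0], [9.0, 9.0]])
```

The surviving matches define the overlapping area whose pixels are then fused into the panorama.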
Preferably, the corresponding image is projected onto spatial coordinates according to the spatial coordinate information, and the spatial coordinates are spherical polar coordinates of the spherical model.
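The projection onto spherical polar coordinates can be carried one step further, from whole images down to individual pixels. The sketch below assumes a pinhole camera with a known horizontal field of view; the function name and parameters are illustrative assumptions, not from the patent.

```python
import math

def pixel_to_sphere(u, v, width, height, fov_h_deg, pan_deg, tilt_deg):
    """Project an image pixel onto the spherical polar coordinates
    (azimuth, polar angle) of the sphere model, given the pan-tilt
    pose at which the image was shot."""
    # Focal length in pixels from the horizontal field of view.
    f = (width / 2.0) / math.tan(math.radians(fov_h_deg) / 2.0)
    # Angular offset of the pixel from the optical axis.
    d_az = math.atan2(u - width / 2.0, f)
    d_el = math.atan2(height / 2.0 - v, f)
    # Add the pan-tilt pose of this shot to get global sphere angles.
    azimuth = (math.radians(pan_deg) + d_az) % (2.0 * math.pi)
    polar = math.pi / 2.0 - (math.radians(tilt_deg) + d_el)
    return azimuth, polar

# The image centre maps straight to the pan-tilt direction itself.
az, pol = pixel_to_sphere(320, 240, 640, 480, 60.0, pan_deg=90.0, tilt_deg=0.0)
```

Projecting every pixel this way paints each shot onto its patch of the sphere, which is the "direct projection onto the model" the text credits for fast panorama assembly.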
As shown in fig. 2, an intelligent robot autonomous positioning device is provided. The intelligent robot has a camera, the camera is connected with the intelligent robot through a pan-tilt structure, and the pan-tilt structure has at least two degrees of freedom, wherein the two degrees of freedom at least include a degree of freedom in a vertical direction and a degree of freedom in a horizontal direction. The intelligent robot autonomous positioning device includes: an image shooting module 31, used for controlling the pan-tilt structure to turn to a plurality of preset directions within a preset time period after the intelligent robot receives a work starting instruction in a work space, so as to drive the camera to respectively shoot a plurality of images in the plurality of directions; for example, the pan-tilt structure rotates sequentially through the preset directions within 10 seconds before work starts and shoots pictures at a preset frequency while rotating. A panorama generation module 32, used for splicing panoramic images according to the rotation direction of the pan-tilt structure and the image data shot in each rotation direction, wherein the method for splicing panoramic images comprises determining the spatial coordinate information of the images shot in each rotation direction according to the rotation direction of the pan-tilt structure, projecting the corresponding images onto spatial coordinates according to the spatial coordinate information, identifying the overlapping areas of adjacent images according to an image feature identification algorithm, and fusing the images of the overlapping areas to generate a three-dimensional image panorama.
A feature extraction module 33, used for extracting the top features of the work space by using a feature extraction method and recovering three-dimensional information by using a preset spherical camera model and the multi-view geometric constraint principle. It will be understood by those skilled in the art that the shooting angle of each image can be calculated from the rotation angle of the pan-tilt, and the spatial coordinates of the image can be obtained through the conversion relations between the spatial coordinate system, the image coordinate system and the geodetic coordinate system, so as to be used for generating the three-dimensional panorama (for example, one image may be shot every 0.01 seconds). A positioning information determining module 34, used for determining the positioning information of the intelligent robot in the three-dimensional space according to the image data shot by the intelligent robot. Specifically, the position of the intelligent robot may be located by using the texture features of the top, for example a ceiling: if the top shows horizontal stripes, the horizontal stripe features are identified, all the stripes of the ceiling are identified accordingly, the distance between the stripes is then determined, and the position of the intelligent robot is determined according to the position of the stripe directly above the robot's current position.
Preferably, the three-dimensional image panorama is used to realize autonomous positioning of the intelligent robot in the environment, and when the image is updated in real time during the autonomous positioning, the three-dimensional image panorama is updated according to an extended Kalman filtering and particle filtering method. Kalman filtering is an algorithm that uses a linear system state equation to optimally estimate the state of a system from its input and output observation data. Because the observation data include the influence of noise and interference in the system, the optimal estimation can also be regarded as a filtering process: new images are obtained in real time during motion, part of the noise in these images is probably caused by the movement of the intelligent robot, and this noise can be filtered out by the extended Kalman filter so as to obtain clear images. Particle filtering approximately represents a probability density function by a set of random samples propagated through the state space, replaces the integral operation with a sample mean, and thereby obtains a minimum-variance estimate of the system state. Interference signals can be identified through particle filtering, so that a better updated image is obtained; the azimuth information of the updated image is re-integrated, and the updated image is then merged into the original panorama.
Preferably, the feature extraction method is a SIFT feature extraction method. The SIFT feature extraction algorithm has the characteristic of high extraction speed and can quickly identify the features of an image.
Preferably, the corresponding image is projected onto spatial coordinates according to the spatial coordinate information, and the spatial coordinates are spherical polar coordinates of the spherical model.
A chip is also provided, on which a computer program is stored; according to the computer program, the chip can control a robot to execute the above intelligent robot autonomous positioning method.
A visual robot is also provided, the robot comprising one or more main control chips, wherein the main control chip is the chip described above. The visual robot may be an indoor cleaning robot such as a floor sweeping robot, a floor mopping robot, a polishing robot or a waxing robot.
Through the above embodiments, the intelligent robot is equipped with a pan-tilt structure that supports the camera in shooting images from multiple directions of the work space. Shooting in the multiple directions is completed within a preset time period, so that images in the multiple directions are obtained and the work space is covered completely, without dead angles. Image space data in each direction are generated according to the rotation direction of the pan-tilt, which guarantees the accuracy of data splicing; the images are directly projected onto a model according to their spatial coordinates to quickly obtain a space model; the top features of the work space are extracted with a feature extraction method, and three-dimensional information is recovered with a preset spherical camera model and the multi-view geometric constraint principle, so that splicing can be completed quickly and splicing efficiency is guaranteed. Because the three-dimensional work space can be spliced quickly, the technical effect of accurate positioning is achieved.
In addition, any combination of the various embodiments of the present invention is also possible, and the same should be considered as disclosed in the embodiments of the present invention as long as it does not depart from the spirit of the embodiments of the present invention.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, the embodiments of the present invention will not be described separately for the various possible combinations.
Those skilled in the art will appreciate that all or part of the steps in the method according to the above embodiments may be implemented by a program, which is stored in a storage medium and includes instructions for causing a single chip, a chip, or a processor (processor) to execute all or part of the steps in the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

Claims (10)

1. An intelligent robot autonomous positioning method, characterized in that the intelligent robot is provided with a camera, the camera is connected with the intelligent robot through a pan-tilt structure, the pan-tilt structure has at least two degrees of freedom, and the two degrees of freedom at least comprise a degree of freedom in a vertical direction and a degree of freedom in a horizontal direction, the intelligent robot autonomous positioning method comprising the following steps:
after the intelligent robot receives a work starting instruction in a work space, controlling the pan-tilt structure to turn to a plurality of preset directions within a preset time period so as to drive the camera to respectively shoot a plurality of images in the plurality of directions;
the method for splicing the panoramic images comprises the steps of determining space coordinate information of images shot in the rotating direction according to the rotating direction of the holder structure, projecting corresponding images onto space coordinates according to the space coordinate information, identifying overlapping areas of adjacent images according to an image feature identification algorithm, and fusing the images of the overlapping areas to generate a three-dimensional image panoramic image;
extracting the top features of the work space by using a feature extraction method, and recovering three-dimensional information by using a preset spherical camera model and the multi-view geometric constraint principle;
and determining the positioning information of the intelligent robot in the three-dimensional space according to the image data shot by the intelligent robot.
2. The method of claim 1,
the three-dimensional image panorama is used to realize autonomous positioning of the intelligent robot in the environment, and when the image is updated in real time during the autonomous positioning, the three-dimensional image panorama is updated according to an extended Kalman filtering and particle filtering method.
3. The method of claim 1,
the feature extraction method is a SIFT feature extraction method.
4. The method of claim 1,
the corresponding image is projected onto spatial coordinates according to the spatial coordinate information, wherein the spatial coordinates are spherical polar coordinates of the spherical model.
5. An intelligent robot autonomous positioning device, characterized in that the intelligent robot has a camera, the camera is connected with the intelligent robot through a pan-tilt structure, and the pan-tilt structure has at least two degrees of freedom, wherein the two degrees of freedom at least include a degree of freedom in a vertical direction and a degree of freedom in a horizontal direction, the intelligent robot autonomous positioning device comprising:
an image shooting module, used for controlling the pan-tilt structure to turn to a plurality of preset directions within a preset time period after the intelligent robot receives a work starting instruction in a work space, so as to drive the camera to respectively shoot a plurality of images in the plurality of directions;
a panorama generation module, used for splicing panoramic images according to the rotation direction of the pan-tilt structure and the image data shot in each rotation direction, wherein the method for splicing the panoramic images comprises determining the spatial coordinate information of the images shot in each rotation direction according to the rotation direction of the pan-tilt structure, projecting the corresponding images onto spatial coordinates according to the spatial coordinate information, identifying the overlapping areas of adjacent images according to an image feature identification algorithm, and fusing the images of the overlapping areas to generate a three-dimensional image panorama;
a feature extraction module, used for extracting the top features of the work space by using a feature extraction method and recovering three-dimensional information by using a preset spherical camera model and the multi-view geometric constraint principle;
and the positioning information determining module is used for determining the positioning information of the intelligent robot in the three-dimensional space according to the image data shot by the intelligent robot.
6. The apparatus of claim 5, wherein the three-dimensional image panorama is used for autonomous positioning of the intelligent robot in an environment, and wherein the image is updated in real time during the real-time autonomous positioning, and wherein the three-dimensional image panorama is updated according to an extended Kalman filter and particle filter method.
7. The apparatus of claim 5, wherein the feature extraction method is a SIFT feature extraction method.
8. The apparatus of claim 5, wherein the corresponding image is projected onto spatial coordinates according to the spatial coordinate information, the spatial coordinates being spherical polar coordinate information of the spherical model.
9. A chip having stored thereon a computer program, according to which the chip is capable of controlling a robot to perform the intelligent robot autonomous positioning method of any one of claims 1-4.
10. A visual robot, characterized in that the robot comprises:
one or more master control chips, the master control chip being the chip of claim 9.
CN201811581882.XA 2018-12-24 2018-12-24 Intelligent robot autonomous positioning method and device, chip and visual robot Pending CN111351485A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811581882.XA CN111351485A (en) 2018-12-24 2018-12-24 Intelligent robot autonomous positioning method and device, chip and visual robot

Publications (1)

Publication Number Publication Date
CN111351485A true CN111351485A (en) 2020-06-30

Family

ID=71191962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811581882.XA Pending CN111351485A (en) 2018-12-24 2018-12-24 Intelligent robot autonomous positioning method and device, chip and visual robot

Country Status (1)

Country Link
CN (1) CN111351485A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203983835U (en) * 2014-03-14 2014-12-03 刘凯 Many rotary wind types Intelligent overhead-line circuit scanning test robot
CN104318604A (en) * 2014-10-21 2015-01-28 四川华雁信息产业股份有限公司 3D image stitching method and apparatus
CN105865419A (en) * 2015-01-22 2016-08-17 青岛通产软件科技有限公司 Autonomous precise positioning system and method based on ground characteristic for mobile robot
CN206077560U (en) * 2016-09-30 2017-04-05 李娜 A kind of Indoor Robot positions camera system
CN207115187U (en) * 2017-05-16 2018-03-16 电子科技大学中山学院 Automatic indoor map construction system oriented to rectangular corridor environment
CN108195472A (en) * 2018-01-08 2018-06-22 亿嘉和科技股份有限公司 A kind of heat transfer method for panoramic imaging based on track mobile robot

Similar Documents

Publication Publication Date Title
CN107329490B (en) Unmanned aerial vehicle obstacle avoidance method and unmanned aerial vehicle
CN109671115B (en) Image processing method and apparatus using depth value estimation
KR101776622B1 (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
KR101725060B1 (en) Apparatus for recognizing location mobile robot using key point based on gradient and method thereof
US9269187B2 (en) Image-based 3D panorama
US11748907B2 (en) Object pose estimation in visual data
US8644557B2 (en) Method and apparatus for estimating position of moving vehicle such as mobile robot
US8896660B2 (en) Method and apparatus for computing error-bounded position and orientation of panoramic cameras in real-world environments
CN111325796A (en) Method and apparatus for determining pose of vision device
CN111141264B (en) Unmanned aerial vehicle-based urban three-dimensional mapping method and system
WO2020113423A1 (en) Target scene three-dimensional reconstruction method and system, and unmanned aerial vehicle
CN109215111B (en) Indoor scene three-dimensional modeling method based on laser range finder
CN107179082B (en) Autonomous exploration method and navigation method based on fusion of topological map and measurement map
CN112700486B (en) Method and device for estimating depth of road surface lane line in image
CN111665826A (en) Depth map acquisition method based on laser radar and monocular camera and sweeping robot
CN105844692A (en) Binocular stereoscopic vision based 3D reconstruction device, method, system and UAV
WO2024087962A1 (en) Truck bed orientation recognition system and method, and electronic device and storage medium
CN111679664A (en) Three-dimensional map construction method based on depth camera and sweeping robot
CN111609854A (en) Three-dimensional map construction method based on multiple depth cameras and sweeping robot
CN107028558B (en) Computer readable recording medium and automatic cleaning machine
CN111380535A (en) Navigation method and device based on visual label, mobile machine and readable medium
JP2019213039A (en) Overlooking video presentation system
CN111351485A (en) Intelligent robot autonomous positioning method and device, chip and visual robot
CN109389677B (en) Real-time building method, system, device and storage medium of house three-dimensional live-action map
CN110036411A (en) The device and method for generating electronics three-dimensional range environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 2706, No. 3000, Huandao East Road, Hengqin new area, Zhuhai, Guangdong

Applicant after: Zhuhai Yiwei Semiconductor Co.,Ltd.

Address before: Room 105-514, No.6 Baohua Road, Hengqin New District, Zhuhai City, Guangdong Province

Applicant before: AMICRO SEMICONDUCTOR Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20200630