CN108406731A - Positioning device and method based on depth vision, and robot - Google Patents

Positioning device and method based on depth vision, and robot

Info

Publication number
CN108406731A
CN108406731A (application CN201810572514.2A)
Authority
CN
China
Prior art keywords
image
landmark
depth
positioning device
terrestrial reference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810572514.2A
Other languages
Chinese (zh)
Other versions
CN108406731B (en)
Inventor
赖钦伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Amicro Semiconductor Co Ltd
Original Assignee
Zhuhai Amicro Semiconductor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Amicro Semiconductor Co Ltd filed Critical Zhuhai Amicro Semiconductor Co Ltd
Priority to CN201810572514.2A (granted as CN108406731B)
Publication of CN108406731A
Application granted
Publication of CN108406731B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 - Vision controlled systems
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00 - Manipulators mounted on wheels or on carriages
    • B25J5/007 - Manipulators mounted on wheels or on carriages mounted on wheels
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1674 - Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1676 - Avoiding collision or forbidden zones
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268 - Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/027 - Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means comprising inertial navigation means, e.g. azimuth detector

Abstract

The present invention discloses a positioning device, a localization method, and a robot based on depth vision. The positioning device is movable and comprises: a rear-facing image acquisition module, for acquiring landmark images to realize positioning; a depth recognition module, for recognizing the ground and objects above the ground; an image processing module, including an image preprocessing submodule and a feature matching submodule, for processing the image information input by the rear-facing image acquisition module and the depth recognition module; an inertia processing module, for sensing displacement information from inertial sensors in real time; and a fusion positioning module, for fusing the environment information acquired by the sensor modules to realize positioning. Compared with the prior art, the 3D depth sensor installed at the front part provides new landmark information in real time to assist the rear-tilted camera in completing positioning, so that computing resources are reduced during localization and navigation and the efficiency of simultaneous localization is improved.

Description

Positioning device and method based on depth vision, and robot
Technical field
The present invention relates to localization methods and devices, and in particular to a positioning device and localization method based on depth vision, and a robot.
Background art
Three-dimensional (3D) depth capture systems extend traditional imaging into the third dimension. Whereas a 2D image obtained from a conventional camera indicates color and brightness at each (x, y) pixel, a 3D point cloud obtained from a 3D depth sensor indicates the distance (z) to the object surface at each (x, y) pixel. A 3D system thus measures the third spatial dimension z directly, instead of inferring depth from cues such as perspective, relative size, occlusion, texture, and parallax. Direct (x, y, z) data is particularly useful for computer interpretation of image data. For example, the three-dimensional point cloud data acquired by a depth camera can be projected onto a two-dimensional plane to obtain two-dimensional projection data, from which a two-dimensional grid map is built.
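The projection of a point cloud onto a 2D grid map can be sketched as follows. The cell size, grid dimensions, and the height band treated as obstacles are illustrative assumptions, not values taken from the patent:

```python
def project_to_grid(points_3d, cell_size=0.05, grid_w=100, grid_h=100,
                    min_z=0.02, max_z=0.50):
    """Project camera-frame 3D points onto a 2D occupancy grid.

    Assumes x/y are ground-plane coordinates (meters) and z is height;
    only points whose height lies inside [min_z, max_z] are treated as
    obstacles, so the floor and tall ceiling structure are ignored.
    """
    grid = [[0] * grid_w for _ in range(grid_h)]
    for x, y, z in points_3d:
        if not (min_z <= z <= max_z):
            continue
        col = int(x / cell_size) + grid_w // 2   # center the sensor in the grid
        row = int(y / cell_size) + grid_h // 2
        if 0 <= row < grid_h and 0 <= col < grid_w:
            grid[row][col] = 1                   # mark the cell as occupied
    return grid
```

Dropping the z coordinate after the height filter is what turns the 3D cloud into the 2D projection data the text describes.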
In existing vision-based sweeper products, the mobile robot includes a camera navigation system embedded in the robot body under the top cover. The navigation system includes one or more cameras that capture images of the surrounding environment (for example, standard cameras, volumetric point cloud image cameras, three-dimensional (3D) image cameras, cameras with depth map sensors, visible light cameras, and/or infrared cameras). The mobile robot can optionally adopt any of various camera configurations, including a tilted front camera combined with a forward camera aligned with the direction of motion (not shown), multiple forward cameras tilted at different angles, a stereo camera pair, two or more tilted cameras with adjacent or partially overlapping fields of view, and/or cameras angled differently. The image data captured by the one or more tilted cameras of the navigation system is used to perform VSLAM, mapping the environment and precisely locating the mobile robot. However, the combination and placement of the above cameras make the visual localization algorithm complex, and the computing resources of the robot's main control processor are heavily consumed during navigation and positioning.
Summary of the invention
A positioning device based on depth vision, the positioning device being a movable device, comprising a rear-facing image acquisition module, a depth recognition module, an image processing module, an inertia processing module, and a fusion positioning module;
the rear-facing image acquisition module including a camera positioned at a backward-facing recessed and/or protruding structure at the tail of the top surface of the positioning device, for acquiring landmark images to realize positioning;
the depth recognition module including a three-dimensional depth sensor positioned at the front part of the top surface of the positioning device, the optical axis of the three-dimensional depth sensor forming a predetermined angle with the top surface of the positioning device, so that the three-dimensional depth sensor recognizes the ground in the forward direction of the positioning device and/or objects above the ground;
the image processing module including an image preprocessing submodule and a feature matching submodule, for processing the image information input by the rear-facing image acquisition module and the depth recognition module; the image preprocessing submodule converts the images input by the rear-facing image acquisition module and the depth recognition module into grayscale images; the feature matching submodule performs feature matching between the image features in the grayscale images and the landmark image features in a landmark database; wherein the landmark database, built into the image processing module, stores the image features of the associated region of a given landmark and a description of the spatial structure in the actual scene;
the inertia processing module, composed of inertial sensors, senses in real time the rotation angle information, acceleration information, and translational velocity information of the positioning device, wherein the inertial sensors include an odometer, a gyroscope, and an accelerometer;
the fusion positioning module, according to the matching result between the landmark database and the input image information, fuses the image feature information acquired by the camera with the inertial data acquired by the inertia processing module, so as to correct the current position information of the positioning device; meanwhile, depth image data features extracted by the three-dimensional depth sensor that do not match the landmark image associated features in the landmark database are added to the landmark database as new landmarks.
Further, the three-dimensional depth sensor may be a 3D TOF sensor, or an image-based binocular or multi-camera sensor array.
Further, in the fusion positioning module, when the landmark image features acquired by the camera successfully match the landmark image features in the landmark database, the coordinates in the map of the landmark in the currently acquired image are obtained from the matched landmark image associated features acquired by the rear-facing image acquisition module; then, combining the relative position relationship between the positioning device and the landmark calculated from the pinhole imaging model, the coordinates of the positioning device in the map are obtained and fused with the inertial data acquired by the inertia processing module, so as to correct the current position information of the positioning device;
when the landmark image features acquired by the camera fail to match the landmark image features in the landmark database, the inertial data recorded between every two frames of landmark images is integrated to obtain the pose change of the inertial sensors; the intrinsic parameters of the camera are then used to calculate the coordinates, in the map of the current frame, of the feature points of the landmark in the previous frame of the landmark image, which are compared with the feature point coordinates of the landmark in the current frame of the landmark image acquired by the camera, so as to update and correct them; the new landmark thus obtained is stored in the landmark database, completing the creation of the new landmark;
when the depth image features acquired by the three-dimensional depth sensor fail to match the landmark image features in the landmark database, the feature information of the acquired unmatched depth image is added to the landmark database as a new landmark;
wherein, the coordinates in the map use the world coordinate system.
Further, the depth recognition module obtains, through the three-dimensional depth sensor, the depth data of landmarks within a set distance in the actual scene and establishes a spatial three-dimensional coordinate system, wherein the Z coordinate represents the depth value of each pixel, and each pixel value reflects the distance from the landmark in the actual scene to the three-dimensional depth sensor.
A localization method based on the above positioning device includes the following steps:
the three-dimensional depth sensor obtains a depth image of objects in the forward driving direction of the positioning device, and extracts from the depth image the image feature information identifying the object;
the camera preprocesses the target image of the landmark in the acquired actual scene and extracts from the target image the features identifying the landmark; then, according to the pinhole imaging model formed by the feature points identified in the target image and the landmark, the position relationship of the positioning device relative to the landmark is calculated;
the descriptors of the grayscale image features corresponding to the target image and the depth image are feature-matched against the descriptors of the landmark image associated features stored in the landmark database, judging whether the features of the target image match the landmark image associated features in the landmark database, and at the same time judging whether the features of the depth image match the landmark image associated features in the landmark database;
if the features of the target image successfully match the landmark image associated features in the landmark database, the coordinates of the landmark in the map are obtained from the matched landmark image associated features acquired by the rear-facing image acquisition module; combining the calculated position relationship of the positioning device relative to the landmark, the coordinates of the positioning device in the map are calculated and then updated and corrected using the inertial data, completing the real-time positioning of the positioning device;
if the features of the target image fail to match the landmark image associated features in the landmark database, the inertial data recorded between the two frames of images of the landmark continuously captured by the camera is integrated to obtain the pose change of the inertial sensors; the intrinsic parameters of the camera are then used to calculate the coordinates, in the current frame, of the feature points of the landmark in the previous frame, which are compared with the feature point coordinates of the landmark in the current frame; the feature point coordinates of the landmark in the current frame are thereby updated and corrected, completing the creation of the new landmark, which is stored and recorded in the landmark database;
if the features of the depth image fail to match the landmark image associated features in the landmark database, the feature information of the unmatched depth image acquired by the three-dimensional depth sensor is added to the landmark database as a new landmark;
wherein, the inertial data has been calibrated and filtered; the landmark database stores the position information of the image feature points of given landmarks in the actual scene and the three-dimensional point clouds of the landmarks.
Further, ORB features are used for matching during the feature matching.
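The patent names ORB matching but gives no procedure. ORB descriptors are 256-bit binary strings compared by Hamming distance; a minimal brute-force nearest-neighbour matcher can be sketched as follows (descriptors abbreviated to small integers for illustration, and the distance threshold is an assumption):

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors given as ints."""
    return bin(d1 ^ d2).count("1")

def match_descriptors(query, train, max_dist=64):
    """Brute-force nearest-neighbour matching by Hamming distance, as is
    done for binary ORB descriptors.  Returns (query_idx, train_idx, dist)
    for each query descriptor whose best match is within max_dist."""
    matches = []
    for qi, qd in enumerate(query):
        best_ti, best_d = -1, max_dist + 1
        for ti, td in enumerate(train):
            d = hamming(qd, td)
            if d < best_d:
                best_ti, best_d = ti, d
        if best_ti >= 0 and best_d <= max_dist:
            matches.append((qi, best_ti, best_d))
    return matches
```

In practice a library implementation (e.g. a brute-force matcher with Hamming norm) would replace this loop, but the comparison it performs is the same.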
Further, the pixel values of the depth image reflect the distance between objects in the forward driving direction of the positioning device and the current position of the three-dimensional depth sensor.
A robot, the robot being a mobile robot equipped with the above positioning device.
Compared with the prior art, the present invention provides a three-dimensional depth sensor at the front part of the positioning device, used to detect and identify objects in the forward driving direction of the positioning device, judge obstacle distance, and create new landmarks, so that previously undetected area information is learned, which helps the rear-tilted camera match the relevant landmarks to realize positioning. The three-dimensional depth sensor itself does not perform the positioning process, which reduces computing resources, realizes simultaneous localization and navigation of the positioning device, and improves navigation efficiency.
Description of the drawings
Fig. 1 is a module block diagram of a positioning device based on depth vision provided by an embodiment of the present invention;
Fig. 2 is a flowchart of a localization method based on depth vision provided by an embodiment of the present invention;
Fig. 3 is a system architecture diagram of a robot based on depth vision provided by an embodiment of the present invention.
Specific implementation mode
The specific implementation modes of the present invention are further described below in conjunction with the accompanying drawings:
In an embodiment of the present invention, the positioning device based on depth vision is implemented in the form of a robot, including mobile robots such as sweeping robots and AGVs. In the following, the positioning device is assumed to be installed on a sweeping robot. However, those skilled in the art will appreciate that, besides being particularly applicable to mobile robots, the construction according to the embodiments of the present invention can be extended to mobile terminals.
In the implementation of the present invention, those skilled in the art will readily appreciate that, during the execution of VSLAM, feature points are buffered into a small map according to the input images, and the position relationship between the current frame and the map is then calculated. The map here is only a provisional concept: the feature points of each frame are cached in one place, constituting a set of feature points called a map. During the execution of VSLAM, every frame of image acquired by the camera contributes some information to the map, for example adding new feature points or updating old feature points, so as to maintain a continuously updated map.
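The provisional map described above can be sketched as a continuously updated set of feature points. Keying points by a descriptor id and refining repeated observations by averaging are illustrative assumptions, not details from the patent:

```python
class FeatureMap:
    """Provisional VSLAM map: a continuously updated set of 3D feature
    points, keyed here by an assumed descriptor id."""

    def __init__(self):
        self.points = {}  # id -> (x, y, z) in world coordinates

    def integrate_frame(self, observations):
        """Each frame contributes information to the map: new feature
        points are added, points already in the map are refined by
        averaging the old and new observations."""
        for fid, xyz in observations.items():
            if fid in self.points:
                old = self.points[fid]
                self.points[fid] = tuple((a + b) / 2 for a, b in zip(old, xyz))
            else:
                self.points[fid] = xyz
```

A real system would weight the update by observation uncertainty rather than averaging equally; the structure of the per-frame contribution is what matters here.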
The present invention provides a positioning device based on depth vision. The positioning device is a movable device, as shown in Fig. 1, including a rear-facing image acquisition module, a depth recognition module, an image processing module, an inertia processing module, and a fusion positioning module. The rear-facing image acquisition module includes a camera positioned at a backward-facing recessed and/or protruding structure at the tail of the top surface of the positioning device, for acquiring landmark images to realize positioning. Under normal circumstances the camera needs to protrude slightly and keep a predetermined angle in order to obtain a relatively good viewing angle, because in the embodiments of the present invention a bump bar and a cylindrical 360-degree infrared receiving device are provided, which easily block the camera; only by keeping the predetermined angle can the camera obtain a relatively good viewing angle. Therefore a camera placed at the front part is not suitable for assisting navigation and positioning, but is used for object detection, in particular for target recognition and analysis of objects in the forward driving direction of the positioning device.
Preferably, as shown in Fig. 3, the depth recognition module includes a three-dimensional depth sensor 108 positioned at the front part of the top surface of the positioning device. The optical axis of the three-dimensional depth sensor forms a predetermined angle with the top surface of the positioning device, so that the three-dimensional depth sensor recognizes the ground in the forward direction of the positioning device and/or objects above the ground. The field of view of the three-dimensional depth sensor can thus reach the space on and above the ground, obtaining the depth data of landmarks within a set distance on or above the ground and establishing a spatial three-dimensional coordinate system, wherein the Z coordinate represents the depth value of each pixel and the depth value of each pixel reflects the distance from the landmark in the actual scene to the three-dimensional depth sensor, so that newly identified landmark information in the forward driving direction can be added to the landmark database.
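The back-projection from a depth pixel (u, v) with depth value Z to a 3D point in the sensor's coordinate system follows the standard pinhole model. The intrinsic parameters fx, fy, cx, cy below are assumed example values, not figures given in the patent:

```python
def depth_pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (meters) into a 3D point in
    the sensor's camera coordinate system, using the pinhole model:
    x = (u - cx) * z / fx, y = (v - cy) * z / fy, z = depth."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)
```

Applying this to every valid pixel of a depth frame yields exactly the spatial coordinate system the module establishes, with Z as the per-pixel depth value.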
Specifically, the three-dimensional depth sensor 108 can be a 3D TOF sensor, or an image-based binocular or multi-camera sensor array. The image-based binocular or multi-camera sensor array is suitable for a binocular measurement system, while the 3D TOF sensor obtains the target distance by detecting emitted and reflected light.
In the implementation of the present invention, the three-dimensional depth sensor generates a data stream of depth images at a rate of 20 frames per second, and a depth map is created from the depth images. A depth map is an image or image channel containing information related to the distance from the viewpoint to the surfaces of scene objects. The 2D representation of a depth map is a grayscale image, except that each pixel value is the actual distance from the sensor to the object, which excludes the influence of object surface and background colors and reduces misjudgment. Using the distance and bearing features of the depth map, obstacles can be judged and the position of each pixel in space obtained directly, so that the positioning device not only introduces new landmarks but also reproduces the surrounding environment in 3D in real time. In 3D computer graphics, the depth image is accompanied by an ordinary RGB three-channel color image; usually the RGB image and the depth image are registered, so there is a one-to-one correspondence between their pixels.
Further, the depth image also includes a two-dimensional (2D) pixel region of the captured scene, where each pixel in the 2D pixel region can represent a depth value, namely the distance, in units such as centimeters or millimeters, from an object in the captured scene to the capture device. The depth image only retains feature point information within the set distance, and feature points outside the set distance are discarded.
Specifically, as shown in Fig. 3, the angle formed by the optical axis of the camera 106 tilted relative to the top surface of the positioning device is defined as an acute angle ɑ, generally near 45 degrees, to ensure a good approximation of the true imaging characteristics and improve the accuracy of detecting landmark features.
As shown in Fig. 1, the image processing module includes an image preprocessing submodule and a feature matching submodule, for processing the image information input by the rear-facing image acquisition module and the depth recognition module. The image preprocessing submodule binarizes the color image data acquired by the camera and the three-dimensional depth sensor and converts it into grayscale images, so as to establish unique, repeatably recognizable landmarks in the surrounding environment, completing the preprocessing of the images. The feature matching submodule then extracts feature points from the grayscale images produced by the image preprocessing submodule, computes descriptors, and performs feature matching against the descriptors corresponding to the landmark image associated features in the landmark database. The input image information includes the image information input by the rear-facing image acquisition module and the depth recognition module.
Specifically, the landmark database is built into the image processing module and includes the image feature points of the associated region of a given landmark and/or the three-dimensional structure of the features. The landmark database contains information about many previously observed landmarks as well as depth image information of unknown landmarks obtained by the three-dimensional depth sensor; the positioning device can use these landmarks to perform navigation and positioning actions. A landmark can be considered a set of features with a specific three-dimensional structure. Any of various features can be used to identify a landmark; when the positioning device is configured as a room-cleaning robot, a landmark may be (but is not limited to) a set of features identified from the two-dimensional structure of the corners of a photo frame. Such features are based on the static geometry of the room, and although they vary somewhat with illumination and scale, compared with objects in frequently displaced regions of the environment (such as chairs, trash cans, pets, etc.) they are generally easier to recognize and identify as landmarks. It will be readily understood that the concrete structure of the landmark database is shaped by the specific application requirements.
As shown in Fig. 1, the inertia processing module is composed of a series of inertial sensors for sensing in real time the rotation angle information, acceleration information, and translational velocity information of the positioning device. The module acquires inertial data through the inertial sensors, then calibrates and filters it before transmitting it to the fusion positioning module. The raw processing of the inertial data includes masking of maximum and minimum values, elimination of static drift, and Kalman filtering of the data. The inertial sensors include sensors used for inertial navigation such as an odometer, a gyroscope, and an accelerometer. In subsequent processing, the images of landmarks are acquired and tracked based on the optical flow observed between consecutive adjacent images, and the data acquired by these inertial sensors is needed to determine the distance traveled. Specifically, during the movement of the positioning device, integrating the encoder data yields the position and attitude change, integrating the data acquired by the accelerometer yields the position change, and integrating the data acquired by the gyroscope yields the attitude change; the weighted average of the above three then yields the position and attitude change of the current moment relative to the previous moment.
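The integrate-then-weighted-average step above can be sketched as a single dead-reckoning update. The weights, the per-step zero-initial-velocity assumption for the accelerometer integral, and the planar (x, y, heading) pose are all illustrative assumptions:

```python
import math

def dead_reckon(pose, odom_delta, gyro_rate, accel, dt,
                w_odom=0.5, w_gyro=0.3, w_accel=0.2):
    """One dead-reckoning step: integrate odometer, gyroscope, and
    accelerometer readings, then blend the redundant estimates by a
    weighted average.  pose = (x, y, heading in radians);
    odom_delta = (distance, rotation) from the wheel encoders."""
    x, y, th = pose
    # Heading change: the odometer and the gyroscope both estimate it.
    dth_odom = odom_delta[1]
    dth_gyro = gyro_rate * dt                 # integrate angular velocity
    dth = (w_odom * dth_odom + w_gyro * dth_gyro) / (w_odom + w_gyro)
    # Travel distance: odometer vs. double-integrated acceleration
    # (assuming zero velocity at the start of this short step).
    ds_odom = odom_delta[0]
    ds_accel = 0.5 * accel * dt * dt
    ds = (w_odom * ds_odom + w_accel * ds_accel) / (w_odom + w_accel)
    th_new = th + dth
    return (x + ds * math.cos(th_new), y + ds * math.sin(th_new), th_new)
```

The weighted average is the simplest fusion rule consistent with the text; the Kalman filtering mentioned above would replace these fixed weights with covariance-driven ones.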
As shown in Fig. 1, the fusion positioning module calculates, according to the matching result between the landmark database and the input image information, the coordinate information of the current position of the positioning device from the image feature information acquired by the camera through the geometric relationship of the pinhole imaging model, and then compares it with the current position coordinate information obtained by operating on the inertial data acquired by the inertia processing module, so as to update and correct the current position information. The inertial sensors can obtain the travel distance of the positioning device through integral operations. Meanwhile, the feature information of depth image data extracted by the three-dimensional depth sensor that does not match the landmark image associated features in the landmark database is added to the landmark database as new landmarks.
Specifically, in the fusion positioning module, when the landmark image acquired by the camera successfully matches the landmark image features in the landmark database, the coordinates on the pixel coordinate system of the landmark in the currently acquired image are obtained from the matched landmark image associated features, and the coordinates of the landmark in the map are obtained through coordinate system conversion; then, combining the relative position relationship between the positioning device and the landmark calculated by the pinhole imaging model, the coordinates of the positioning device in the map are calculated and compared with the integration result of the inertial data acquired by the inertia processing module, so as to correct the current position information of the positioning device.
When the landmark image features acquired by the camera fail to match the landmark image features in the landmark database, the inertial data is recorded between every two frames of landmark images, and the position relationship between the positioning device and the landmark is obtained from the accumulated values of the inertial data; then, combining the inertial sensor readings with the relative pose determined between the adjacent two frames of images acquired by the camera, the inertial sensor readings are transformed from the world coordinate system, through a rotation R and a translation t, into coordinates in the camera coordinate system. The intrinsic parameters of the camera are then used to calculate the coordinates, in the current frame landmark image, of the feature points of the landmark in the previous frame landmark image, which are compared with the feature point coordinates of the landmark image in the current frame acquired by the camera; the feature point coordinates of the grayscale image of the current frame are updated and corrected accordingly, so that a new landmark is obtained and stored in the landmark database, completing the creation of the new landmark.
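Under the pinhole model, predicting where a previous-frame feature point lands in the current frame, given the inter-frame rotation R and translation t recovered from the inertial data, can be sketched as follows. The intrinsic matrix K and the feature's known depth are assumptions for illustration:

```python
import numpy as np

def reproject(p_prev, depth, K, R, t):
    """Predict the current-frame pixel of a feature seen at pixel p_prev
    (with known depth, meters) in the previous frame, given the
    inter-frame rotation R and translation t."""
    K = np.asarray(K, dtype=float)
    u, v = p_prev
    # Back-project to a 3D point in the previous camera frame.
    X_prev = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Move the point into the current camera frame.
    X_cur = np.asarray(R, dtype=float) @ X_prev + np.asarray(t, dtype=float)
    # Project with the pinhole model and dehomogenize.
    uvw = K @ X_cur
    return (uvw[0] / uvw[2], uvw[1] / uvw[2])
```

Comparing this predicted pixel with the feature actually detected in the current frame gives exactly the coordinate correction step described above.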
When the depth image features acquired by the three-dimensional depth sensor fail to match the landmark image features in the landmark database, the features of the depth image data extracted by the three-dimensional depth sensor that do not match the landmark image associated features in the landmark database are added to the landmark database as new landmarks, providing the landmark database with feature sets of the 3D structure of unknown objects, so as to solve the pose pre-estimation problem while the positioning device moves forward in unknown regions. In the positioning device, the three-dimensional depth sensor does not perform positioning operations; it is only used to recognize objects and add new landmarks to the landmark database, so as to coordinate with the positioning operation of the camera.
Wherein, the coordinates in the map use the world coordinate system; there are mapping associations from the inertial sensors to the camera, and from the camera to the grayscale image features and/or landmark image associated features, while the features can be obtained by extraction from the grayscale images.
Based on the same inventive concept, an embodiment of the present invention further provides a positioning method based on depth vision. Since the method relies on the aforementioned depth-vision positioning device as the hardware that solves the positioning problem, embodiments of the method may refer to the application embodiments of that device. In a specific implementation, as shown in Fig. 2, the method comprises:
Step S1: the three-dimensional depth sensor acquires a depth image of objects in the forward travel direction of the positioning device, and image feature information for identifying landmarks is extracted from the depth image. The present invention uses the three-dimensional depth sensor to obtain depth data of the field of view and establishes a three-dimensional spatial coordinate system in which the Z coordinate represents the depth value of each pixel. A set of 3D point cloud data is obtained from the acquired image to build a 3D image; the distance data are normalized onto the pixel coordinate system and converted into image gray values, and the resulting depth image is finally output to an external processing apparatus.
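The depth-to-point-cloud conversion and gray normalization described in Step S1 can be sketched as follows. This is an illustrative numpy sketch, not the patented implementation; the intrinsic parameters and the 4000 mm normalization range are assumed values.

```python
import numpy as np

def depth_to_cloud_and_gray(depth_mm, fx, fy, cx, cy, max_range_mm=4000):
    """Back-project a depth image (millimetres) into a 3D point cloud with
    assumed pinhole intrinsics, and normalize depth to an 8-bit gray image."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64)
    x = (u - cx) * z / fx                       # X axis points right
    y = (v - cy) * z / fy                       # Y axis points down
    cloud = np.stack([x, y, z], axis=-1)        # Z is the per-pixel depth value
    gray = np.clip(z / max_range_mm, 0.0, 1.0) * 255
    return cloud, gray.astype(np.uint8)

# Toy 2x2 depth image (0 marks an invalid pixel).
depth = np.array([[1000, 2000], [0, 4000]], dtype=np.uint16)
cloud, gray = depth_to_cloud_and_gray(depth, fx=500.0, fy=500.0, cx=1.0, cy=1.0)
```

In a real sensor pipeline the invalid-depth pixels would additionally be masked out before the point cloud is used.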
Step S2: the camera pre-processes the target image of the landmark in the captured actual scene, converts it to grayscale, and extracts feature points from the grayscale image to form the feature points of the landmark. Then, from the geometric imaging relationship between the identified feature points of the target image and the landmark (i.e. the pinhole camera model), the positional relationship of the positioning device relative to the landmark is calculated;
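The pinhole (similar-triangles) relationship used in Step S2 can be illustrated with a minimal sketch; the focal length, landmark width and pixel measurements below are hypothetical values, not figures from the patent.

```python
import math

def landmark_distance(fx_px, real_width_m, pixel_width_px):
    """Similar-triangles range estimate under the pinhole model:
    pixel_width / fx = real_width / Z  =>  Z = fx * real_width / pixel_width."""
    return fx_px * real_width_m / pixel_width_px

def pixel_to_bearing(u_px, cx, fx_px):
    """Horizontal bearing (radians) of a pixel column under the pinhole model."""
    return math.atan2(u_px - cx, fx_px)

# A landmark 0.30 m wide spans 100 px in an image with focal length fx = 500 px.
Z = landmark_distance(500.0, 0.30, 100.0)           # range to the landmark, metres
bearing = pixel_to_bearing(320.0, 320.0, 500.0)     # landmark on the optical axis
```

Range plus bearing together give the device-to-landmark positional relationship used in the later fusion step.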
Step S3: feature matching is performed between the descriptors of the grayscale image features and the descriptors of the landmark image associated features stored in the landmark database, and it is judged whether the features of the target image match the landmark image associated features in the landmark database; if so, proceed to Step S4, otherwise proceed to Step S5. At the same time, it is judged whether the features of the depth image match the landmark image associated features in the landmark database; if not, proceed to Step S6;
Step S4: after a successful match, with the intrinsic matrix of the camera known, the coordinates of the landmark in the map are obtained from the matched landmark image associated features. Combined with the calculated positional relationship of the positioning device relative to the landmark, the coordinates of the positioning device in the map are computed; the current position coordinates are then corrected and updated by the inertial sensor, yielding accurate current position coordinates and completing the real-time positioning of the positioning device.
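The inertial correction of Step S4 can be illustrated, in very reduced form, as a weighted blend of the vision-derived position and the dead-reckoned (odometry/inertial) position. The fixed weighting below is an assumed tuning choice for illustration, not the fusion rule claimed by the patent.

```python
def fuse_position(visual_xy, odom_xy, visual_weight=0.7):
    """Blend the camera-derived map position with the dead-reckoned position.
    visual_weight is a hypothetical confidence in the vision estimate."""
    w = visual_weight
    return (w * visual_xy[0] + (1 - w) * odom_xy[0],
            w * visual_xy[1] + (1 - w) * odom_xy[1])

# Vision says (2.0, 1.0) m, odometry says (2.4, 1.2) m.
fused = fuse_position((2.0, 1.0), (2.4, 1.2))
```

A production system would typically replace this fixed blend with a Kalman-style filter whose weights follow the sensor covariances.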
Step S5: between two frames in which the camera continuously captures the landmark, the inertial data are recorded and integrated to obtain the pose change of the inertial sensor. The camera intrinsics are then used to compute the coordinates, in the current frame, of the landmark feature points of the previous frame; these are compared with the feature point coordinates of the landmark in the current frame, and the feature point coordinates of the landmark in the current frame are updated and corrected accordingly, completing the creation of a new landmark, which is stored in the landmark database.
Step S6: the features of the depth image data extracted by the three-dimensional depth sensor that do not match the landmark image associated features in the landmark database are added to the landmark database as new landmarks, providing the database with 3D point cloud data of unknown objects. This compensates for the limited viewing angle of the camera, provides multiple angular fields of view in support of simultaneous localization, and further handles the pose pre-estimation problem when traveling through unknown regions.
The inertial data have been calibrated and filtered. Mapping associations exist from the inertial sensor to the camera and from the camera to the target image features and/or landmark image associated features. The landmark database stores the position information of the image feature points of given landmarks in the actual scene and the 3D point cloud data of the landmarks.
In one embodiment of the present invention, a feature point consists of two parts, a keypoint and a descriptor. The keypoint is the position of the feature point in the image (some feature points also carry orientation and scale information), while the descriptor usually describes the pixels surrounding the keypoint. After the pre-processing described above, the feature points extracted from the grayscale image are ORB feature points, and feature matching is performed with their corresponding descriptors. The ORB descriptor remedies the lack of orientation in FAST detection and uses the extremely fast binary descriptor BRIEF, which greatly accelerates the feature extraction step of the feature matching submodule.
An ORB feature point consists of a keypoint and a descriptor. The keypoint is a FAST corner: FAST mainly detects locations where the local pixel gray level changes markedly and is known for its speed, and the improved FAST corners in ORB carry scale and rotation descriptions, which greatly improves the robustness of matching them between images. BRIEF is a binary descriptor whose description vector consists of many 0s and 1s and is convenient to store; each 0 or 1 encodes the magnitude relationship between two pixels near the keypoint. BRIEF compares randomly selected point pairs, is extremely fast, and has good rotational invariance, so BRIEF descriptors suffice for matching the real-time images of the positioning device. ORB feature points are therefore chosen so that the features are rotation- and scale-invariant and are not lost as the image rotates, while the clear speed advantage helps the positioning device process images faster and strengthens its real-time computing capability.
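Since BRIEF descriptors are binary vectors, they are conventionally compared with the Hamming distance (the count of differing bits). A minimal sketch, with the descriptors shown as packed integers for brevity:

```python
def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two BRIEF-style binary descriptors packed
    into Python ints (one bit per point-pair comparison)."""
    return bin(d1 ^ d2).count("1")

# Two toy 8-bit descriptors differing in two point-pair comparisons.
a = 0b10110100
b = 0b10011100
dist = hamming(a, b)
```

The XOR-and-popcount form is why binary descriptor matching is so fast: on real hardware it maps to a handful of machine instructions per descriptor pair.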
Further, feature matching in practice measures the descriptor distance from each feature point on one image to all feature points on the other, sorts the distances, and takes the nearest as the match. The descriptor distance expresses the degree of similarity between two features; in embodiments of the present invention the Euclidean distance is used, i.e. the Euclidean distances between the features extracted from the grayscale image to be matched and from the landmark images in the landmark database are compared to complete the matching judgment.
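The nearest-neighbour matching with Euclidean distance described above can be sketched as follows; the toy two-dimensional descriptors are invented for illustration (real ORB/BRIEF descriptors are much longer).

```python
import numpy as np

def match_nearest(query, database):
    """For each query descriptor, measure the Euclidean distance to every
    database descriptor and take the nearest one as the match."""
    matches = []
    for i, q in enumerate(query):
        d = np.linalg.norm(database - q, axis=1)   # distance to all candidates
        j = int(np.argmin(d))
        matches.append((i, j, float(d[j])))        # (query idx, db idx, distance)
    return matches

db = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
qs = np.array([[9.0, 1.0], [1.0, 9.0]])
m = match_nearest(qs, db)
```

A practical matcher would additionally reject matches whose nearest distance exceeds a threshold, or apply a ratio test against the second-nearest candidate, to suppress false correspondences.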
The pixel values of the depth image reflect the distances between objects in the actual scene and the current position of the three-dimensional depth sensor, and a depth map is created from the depth image. The three-dimensional depth sensor is the functional unit that obtains depth information within the field of view. In embodiments, the three-dimensional depth sensor captures the 3D point cloud data corresponding to the depth image using structured light, time of flight, stereoscopic vision, or any other sensor technology known to those of ordinary skill in the art; projecting onto a two-dimensional plane yields the coordinate information, in the pixel coordinate system, of the objects captured by the three-dimensional depth sensor.
Between two frames continuously captured by the camera, a current frame and a reference frame are defined. The inertial data are recorded and accumulated to obtain the pose transformation recorded by the inertial sensor between the current frame and the reference frame, which serves as the pose change of the inertial sensor. This pose change then determines the relative pose between the two frames continuously acquired by the camera, so that the landmark is transformed from the world coordinate system into the camera coordinate system; the position of the landmark is further transformed into the pixel coordinate system using the camera intrinsic matrix, yielding the coordinates of the reference-frame features in the current frame. When the target image does not match the landmark image associated features stored in the landmark database, the same transformation is used to predict the image feature point coordinates at the current position obtained from the inertial sensor; these are compared with the feature point coordinates of the current frame, the feature point coordinates in the current frame are updated and corrected, and the result is stored back into the landmark database as the new landmark created at the current position.
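The world-to-camera transformation by rotation R and translation t, followed by projection through the intrinsic matrix, can be sketched as follows. R, t and K below are toy values; in the device described here, R and t would come from the pose change integrated from the inertial data.

```python
import numpy as np

def project_landmark(P_world, R, t, K):
    """Transform a world-frame landmark point into the camera frame
    (P_cam = R @ P_world + t) and project it with the intrinsic matrix K,
    returning (u, v) pixel coordinates."""
    P_cam = R @ P_world + t
    uvw = K @ P_cam
    return uvw[:2] / uvw[2]     # divide by depth to land on the pixel plane

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                         # identity rotation for this toy case
t = np.array([0.0, 0.0, 0.0])         # no translation between the frames
uv = project_landmark(np.array([0.2, -0.1, 2.0]), R, t, K)
```

Comparing such predicted pixel coordinates with the features actually extracted from the current frame is what drives the update-and-correct step above.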
In one embodiment of the present invention, the camera model is the conventional pinhole model: a similar-triangles relationship is established between the feature points, in the pixel coordinate system, of the target image acquired with the camera lens at the predetermined position tilted at angle ɑ (as shown in Fig. 3), and the position of the landmark in the world coordinate system sensed by the inertial sensor. With the camera intrinsics known, triangulating the distances and positions of features on the landmarks photographed as the positioning device advances builds the similar-triangles geometry, from which the two-dimensional coordinates, in the camera coordinate system, of the corresponding corner points on the landmark can be calculated.
As a robot embodiment of the present invention, Fig. 3 provides a structural diagram of a sweeping robot, which may serve as a concrete product structure diagram for the depth-vision positioning device provided in embodiments of the present invention; for ease of explanation, only the parts relevant to the embodiment are shown. The image processing module and the fusion locating module of the positioning device are built into the signal processing board 102. The rear image capture module comprises a camera 106 mounted at the tail of the body 101 in a rearward-protruding structure, so that the camera, being far from the collision detection sensor 105, avoids being struck by objects that are difficult to detect. The optic axis of the camera 106 forms a tilt angle ɑ with the top surface of the positioning device, giving the camera a favorable observation orientation. The depth recognition module comprises a three-dimensional depth sensor 108 mounted on the front half of the body 101, with its lens angled slightly upward to acquire depth information of the ground ahead and of objects above the ground. The inertia processing module comprises the collision detection sensor 105 and performs its sensing while the drive wheels 104 and universal wheel 107 move the body 101. The data acquired by the inertia processing module and the rear image capture module are fused, using the relative pose and the intrinsic parameters of the camera 106, to correct the position coordinates, after which navigation actions are executed; the landmark database may also be updated to serve as the basis for building the navigation map. Finally, the man-machine interface 103 outputs the accurate coordinate values, calculated by the signal processing board, of the current location of the sweeping robot.
The above embodiments are merely a full disclosure and do not limit the present invention; any replacement by equivalent technical features that requires no creative labour and is based on the inventive purport of the present invention shall be deemed within the scope of this disclosure.

Claims (8)

1. A positioning device based on depth vision, the positioning device being a movable apparatus, characterized by comprising a rear image capture module, a depth recognition module, an image processing module, an inertia processing module and a fusion locating module;
the rear image capture module comprising a camera arranged at a rearward-opening recessed and/or protruding structure at the tail of the top surface of the positioning device, for acquiring landmark images to realize positioning;
the depth recognition module comprising a three-dimensional depth sensor arranged at the front of the top surface of the positioning device, the optic axis of the three-dimensional depth sensor forming a predetermined angle with the top surface of the positioning device, so that the three-dimensional depth sensor recognizes the ground in the forward direction of the positioning device and/or objects above the ground;
the image processing module comprising an image preprocessing submodule and a feature matching submodule, for processing the image information input by the rear image capture module and the depth recognition module; the image preprocessing submodule being configured to convert the images input by the rear image capture module and the depth recognition module into grayscale images; the feature matching submodule being configured to match the image features in the grayscale images against the landmark image features in a landmark database; wherein the landmark database stores the image features of given landmarks and descriptions of the spatial structure of associated regions in the actual scene, and is built into the image processing module;
the inertia processing module consisting of inertial sensors and sensing in real time the rotation angle information, acceleration information and translational velocity information of the positioning device, wherein the inertial sensors include an odometer, a gyroscope and an accelerometer;
the fusion locating module being configured to fuse, according to the result of matching the input image information against the landmark database, the image feature information acquired by the camera with the inertial data acquired by the inertia processing module, so as to correct the current position information of the positioning device; and at the same time to add to the landmark database, as new landmarks, the depth image data features extracted by the three-dimensional depth sensor that do not match the landmark image associated features in the landmark database.
2. The positioning device according to claim 1, characterized in that the three-dimensional depth sensor is a 3D TOF sensor, or an image-based binocular or multi-view sensor array.
3. The positioning device according to claim 1, characterized in that, in the fusion locating module, when the landmark image features acquired by the camera match the landmark image features in the landmark database successfully, the coordinates in the map of the landmark in the currently acquired image are obtained from the matched landmark image associated features obtained by the rear image capture module; combined with the relative positional relationship between the positioning device and the landmark calculated by the pinhole camera model, the coordinates of the positioning device in the map are obtained and fused with the inertial data acquired by the inertia processing module, so as to correct the current position information of the positioning device;
when the landmark image features acquired by the camera fail to match the landmark image features in the landmark database, the inertial data recorded between every two frames of landmark images are integrated to obtain the pose change of the inertial sensor; the camera intrinsics are then used to compute the coordinates, in the current-frame landmark image, of the landmark feature points of the previous-frame landmark image, which are compared with the landmark feature point coordinates of the current-frame landmark image acquired by the camera, so as to update, correct and obtain a new landmark that is stored in the landmark database, completing the creation of the new landmark;
when the depth image features acquired by the three-dimensional depth sensor fail to match the landmark image features in the landmark database, the feature information of the unmatched acquired depth images is added to the landmark database as new landmarks;
wherein the coordinates in the map use the world coordinate system.
4. The positioning device according to claim 1, characterized in that the depth recognition module is configured to obtain, through the three-dimensional depth sensor, depth data of landmarks within a set distance in the actual scene and to establish a three-dimensional spatial coordinate system in which the Z coordinate represents the depth value of each pixel, each pixel value reflecting the distance from the landmark in the actual scene to the three-dimensional depth sensor.
5. A positioning method based on the positioning device according to any one of claims 1 to 4, characterized by comprising the following steps:
the three-dimensional depth sensor obtaining a depth image of objects in the forward travel direction of the positioning device, and extracting identifying object image feature information from the depth image;
the camera pre-processing the target image of a landmark in the captured actual scene and extracting from the target image the feature points identifying the landmark; then, according to the pinhole camera model formed by the identified feature points of the target image and the landmark, calculating the positional relationship of the positioning device relative to the landmark;
performing feature matching between the descriptors of the grayscale image features corresponding to the target image and to the depth image and the descriptors of the landmark image associated features stored in the landmark database, judging whether the features of the target image match the landmark image associated features in the landmark database, and at the same time judging whether the features of the depth image match the landmark image associated features in the landmark database;
if the features of the target image match the landmark image associated features in the landmark database successfully, obtaining the coordinates of the landmark in the map from the matched landmark image associated features obtained by the rear image capture module, computing the coordinates of the positioning device in the map in combination with the calculated positional relationship of the positioning device relative to the landmark, and correcting the update with the inertial data, completing the real-time positioning of the positioning device;
if the features of the target image fail to match the landmark image associated features in the landmark database,
between two frames in which the camera continuously captures the landmark, recording the inertial data and integrating them to obtain the pose change of the inertial sensor; then using the camera intrinsics to compute the coordinates, in the current frame, of the landmark feature points of the previous frame, comparing them with the landmark feature point coordinates of the current frame, and updating and correcting the landmark feature point coordinates of the current frame, thereby completing the creation of a new landmark, which is stored in the landmark database;
if the features of the depth image fail to match the landmark image associated features in the landmark database, adding the feature information of the unmatched depth images obtained by the three-dimensional depth sensor to the landmark database as new landmarks;
wherein the inertial data have been calibrated and filtered, and the landmark database stores the position information of the image feature points of given landmarks in the actual scene and the 3D point cloud data of the landmarks.
6. The positioning method according to claim 5, characterized in that ORB features are used for matching during the feature matching.
7. The positioning method according to claim 5, characterized in that the pixel values of the depth image reflect the distance between objects in the forward travel direction of the positioning device and the current position of the three-dimensional depth sensor.
8. A robot, characterized in that the robot is a mobile robot equipped with the positioning device according to any one of claims 1 to 4.
CN201810572514.2A 2018-06-06 2018-06-06 Positioning device, method and robot based on depth vision Active CN108406731B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810572514.2A CN108406731B (en) 2018-06-06 2018-06-06 Positioning device, method and robot based on depth vision


Publications (2)

Publication Number Publication Date
CN108406731A true CN108406731A (en) 2018-08-17
CN108406731B CN108406731B (en) 2023-06-13

Family

ID=63141427

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810572514.2A Active CN108406731B (en) 2018-06-06 2018-06-06 Positioning device, method and robot based on depth vision

Country Status (1)

Country Link
CN (1) CN108406731B (en)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109213177A (en) * 2018-11-09 2019-01-15 苏州瑞得恩光能科技有限公司 Algorithms of Robots Navigation System and air navigation aid
CN109360235A (en) * 2018-09-29 2019-02-19 中国航空工业集团公司上海航空测控技术研究所 A kind of interacting depth estimation method based on light field data
CN109405850A (en) * 2018-10-31 2019-03-01 张维玲 A kind of the inertial navigation positioning calibration method and its system of view-based access control model and priori knowledge
CN109579852A (en) * 2019-01-22 2019-04-05 杭州蓝芯科技有限公司 Robot autonomous localization method and device based on depth camera
CN109633664A (en) * 2018-12-29 2019-04-16 南京理工大学工程技术研究院有限公司 Joint positioning method based on RGB-D Yu laser odometer
CN110012280A (en) * 2019-03-22 2019-07-12 盎锐(上海)信息科技有限公司 TOF mould group and VSLAM calculation method for VSLAM system
CN110018688A (en) * 2019-04-11 2019-07-16 清华大学深圳研究生院 The automatic guide vehicle localization method of view-based access control model
CN110135396A (en) * 2019-05-27 2019-08-16 百度在线网络技术(北京)有限公司 Recognition methods, device, equipment and the medium of surface mark
CN110174891A (en) * 2019-04-08 2019-08-27 江苏大学 A kind of AGV cluster control system and method based on WIFI wireless communication
CN110176034A (en) * 2019-05-27 2019-08-27 盎锐(上海)信息科技有限公司 Localization method and end of scan for VSLAM
CN110275168A (en) * 2019-07-09 2019-09-24 厦门金龙联合汽车工业有限公司 A kind of multi-targets recognition and anti-collision early warning method and system
CN110415297A (en) * 2019-07-12 2019-11-05 北京三快在线科技有限公司 Localization method, device and unmanned equipment
CN110689572A (en) * 2019-08-13 2020-01-14 中山大学 System and method for positioning mobile robot in three-dimensional space
CN110782506A (en) * 2019-11-21 2020-02-11 大连理工大学 Method for constructing grid map by fusing infrared camera and depth camera
CN110967018A (en) * 2019-11-25 2020-04-07 斑马网络技术有限公司 Parking lot positioning method and device, electronic equipment and computer readable medium
CN111080627A (en) * 2019-12-20 2020-04-28 南京航空航天大学 2D +3D large airplane appearance defect detection and analysis method based on deep learning
CN111239761A (en) * 2020-01-20 2020-06-05 西安交通大学 Method for indoor real-time establishment of two-dimensional map
CN111596656A (en) * 2020-04-30 2020-08-28 南京理工大学 Heavy-load AGV hybrid navigation device based on binocular video and magnetic sensors
CN111627054A (en) * 2019-06-24 2020-09-04 长城汽车股份有限公司 Method and device for predicting depth completion error map of high-confidence dense point cloud
CN111844036A (en) * 2020-07-21 2020-10-30 上汽大通汽车有限公司 Method for sequencing multi-vehicle type and multi-variety automobile glass assemblies
CN112288811A (en) * 2020-10-30 2021-01-29 珠海市一微半导体有限公司 Key frame fusion control method for multi-frame depth image positioning and visual robot
WO2021016854A1 (en) * 2019-07-30 2021-02-04 深圳市大疆创新科技有限公司 Calibration method and device, movable platform, and storage medium
CN112465723A (en) * 2020-12-04 2021-03-09 北京华捷艾米科技有限公司 Method and device for repairing depth image, electronic equipment and computer storage medium
CN112747746A (en) * 2020-12-25 2021-05-04 珠海市一微半导体有限公司 Point cloud data acquisition method based on single-point TOF, chip and mobile robot
CN113761255A (en) * 2021-08-19 2021-12-07 劢微机器人科技(深圳)有限公司 Robot indoor positioning method, device, equipment and storage medium
CN113820697A (en) * 2021-09-09 2021-12-21 中国电子科技集团公司第五十四研究所 Visual positioning method based on urban building characteristics and three-dimensional map
CN114111787A (en) * 2021-11-05 2022-03-01 上海大学 Visual positioning method and system based on three-dimensional road sign
CN114424023A (en) * 2019-09-17 2022-04-29 赛峰电子与防务公司 Method and system for locating a vehicle using an image capture device
CN114451830A (en) * 2022-03-17 2022-05-10 上海飞博激光科技有限公司 Device and method for cleaning glass curtain wall by laser
CN114729807A (en) * 2020-11-30 2022-07-08 深圳市大疆创新科技有限公司 Positioning method, positioning device, movable platform, landmark and landmark array
CN115019167A (en) * 2022-05-26 2022-09-06 中国电信股份有限公司 Fusion positioning method, system, equipment and storage medium based on mobile terminal
CN117058209A (en) * 2023-10-11 2023-11-14 山东欧龙电子科技有限公司 Method for calculating depth information of visual image of aerocar based on three-dimensional map
US11940269B1 (en) * 2023-09-29 2024-03-26 Mloptic Corp. Feature location detection utilizing depth sensor

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120215380A1 (en) * 2011-02-23 2012-08-23 Microsoft Corporation Semi-autonomous robot that supports multiple modes of navigation
CN103067856A (en) * 2011-10-24 2013-04-24 康佳集团股份有限公司 Geographic position locating method and system based on image recognition
CN104833354A (en) * 2015-05-25 2015-08-12 梁步阁 Multibasic multi-module network integration indoor personnel navigation positioning system and implementation method thereof
US20160147230A1 (en) * 2014-11-26 2016-05-26 Irobot Corporation Systems and Methods for Performing Simultaneous Localization and Mapping using Machine Vision Systems
CN106002921A (en) * 2016-07-08 2016-10-12 东莞市开胜电子有限公司 Stable control and movement system for idler wheel type service robot
CN107030690A (en) * 2016-12-22 2017-08-11 中国科学院沈阳自动化研究所 A kind of mechanical arm barrier-avoiding method of view-based access control model
CN107402569A (en) * 2016-05-19 2017-11-28 科沃斯机器人股份有限公司 Self-movement robot and map constructing method, assembly robot's map call method
CN208323361U (en) * 2018-06-06 2019-01-04 珠海市一微半导体有限公司 A kind of positioning device and robot based on deep vision


Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109360235A (en) * 2018-09-29 2019-02-19 中国航空工业集团公司上海航空测控技术研究所 A kind of interacting depth estimation method based on light field data
CN109405850A (en) * 2018-10-31 2019-03-01 张维玲 A kind of the inertial navigation positioning calibration method and its system of view-based access control model and priori knowledge
CN109213177B (en) * 2018-11-09 2022-01-11 苏州瑞得恩光能科技有限公司 Robot navigation system and navigation method
CN109213177A (en) * 2018-11-09 2019-01-15 苏州瑞得恩光能科技有限公司 Algorithms of Robots Navigation System and air navigation aid
CN109633664A (en) * 2018-12-29 2019-04-16 南京理工大学工程技术研究院有限公司 Joint positioning method based on RGB-D Yu laser odometer
CN109579852A (en) * 2019-01-22 2019-04-05 杭州蓝芯科技有限公司 Robot autonomous localization method and device based on depth camera
CN110012280A (en) * 2019-03-22 2019-07-12 盎锐(上海)信息科技有限公司 TOF mould group and VSLAM calculation method for VSLAM system
CN110012280B (en) * 2019-03-22 2020-12-18 盎锐(上海)信息科技有限公司 TOF module for VSLAM system and VSLAM calculation method
CN110174891A (en) * 2019-04-08 2019-08-27 江苏大学 A kind of AGV cluster control system and method based on WIFI wireless communication
CN110018688A (en) * 2019-04-11 2019-07-16 清华大学深圳研究生院 The automatic guide vehicle localization method of view-based access control model
CN110176034A (en) * 2019-05-27 2019-08-27 盎锐(上海)信息科技有限公司 Localization method and end of scan for VSLAM
CN110135396A (en) * 2019-05-27 2019-08-16 百度在线网络技术(北京)有限公司 Recognition methods, device, equipment and the medium of surface mark
CN111627054B (en) * 2019-06-24 2023-09-01 长城汽车股份有限公司 Method and device for predicting depth complement error map of confidence dense point cloud
CN111627054A (en) * 2019-06-24 2020-09-04 长城汽车股份有限公司 Method and device for predicting depth completion error map of high-confidence dense point cloud
CN110275168A (en) * 2019-07-09 2019-09-24 厦门金龙联合汽车工业有限公司 A kind of multi-targets recognition and anti-collision early warning method and system
CN110415297A (en) * 2019-07-12 2019-11-05 北京三快在线科技有限公司 Localization method, device and unmanned equipment
CN110415297B (en) * 2019-07-12 2021-11-05 北京三快在线科技有限公司 Positioning method and device and unmanned equipment
WO2021016854A1 (en) * 2019-07-30 2021-02-04 深圳市大疆创新科技有限公司 Calibration method and device, movable platform, and storage medium
CN110689572A (en) * 2019-08-13 2020-01-14 中山大学 System and method for positioning mobile robot in three-dimensional space
CN114424023A (en) * 2019-09-17 2022-04-29 赛峰电子与防务公司 Method and system for locating a vehicle using an image capture device
CN110782506B (en) * 2019-11-21 2021-04-20 大连理工大学 Method for constructing grid map by fusing infrared camera and depth camera
CN110782506A (en) * 2019-11-21 2020-02-11 大连理工大学 Method for constructing grid map by fusing infrared camera and depth camera
CN110967018B (en) * 2019-11-25 2024-04-12 斑马网络技术有限公司 Parking lot positioning method and device, electronic equipment and computer readable medium
CN110967018A (en) * 2019-11-25 2020-04-07 斑马网络技术有限公司 Parking lot positioning method and device, electronic equipment and computer readable medium
CN111080627A (en) * 2019-12-20 2020-04-28 南京航空航天大学 2D+3D large aircraft appearance defect detection and analysis method based on deep learning
CN111239761A (en) * 2020-01-20 2020-06-05 西安交通大学 Method for real-time construction of an indoor two-dimensional map
CN111596656A (en) * 2020-04-30 2020-08-28 南京理工大学 Heavy-load AGV hybrid navigation device based on binocular video and magnetic sensors
CN111844036A (en) * 2020-07-21 2020-10-30 上汽大通汽车有限公司 Method for sequencing automobile glass assemblies across multiple vehicle models and varieties
CN112288811A (en) * 2020-10-30 2021-01-29 珠海市一微半导体有限公司 Keyframe fusion control method for multi-frame depth image positioning, and vision robot
CN114729807A (en) * 2020-11-30 2022-07-08 深圳市大疆创新科技有限公司 Positioning method, positioning device, movable platform, landmark and landmark array
CN112465723A (en) * 2020-12-04 2021-03-09 北京华捷艾米科技有限公司 Method and device for repairing depth image, electronic equipment and computer storage medium
CN112747746A (en) * 2020-12-25 2021-05-04 珠海市一微半导体有限公司 Point cloud data acquisition method based on single-point TOF, chip and mobile robot
CN113761255A (en) * 2021-08-19 2021-12-07 劢微机器人科技(深圳)有限公司 Robot indoor positioning method, device, equipment and storage medium
CN113761255B (en) * 2021-08-19 2024-02-09 劢微机器人科技(深圳)有限公司 Robot indoor positioning method, device, equipment and storage medium
CN113820697A (en) * 2021-09-09 2021-12-21 中国电子科技集团公司第五十四研究所 Visual positioning method based on urban building characteristics and three-dimensional map
CN113820697B (en) * 2021-09-09 2024-03-26 中国电子科技集团公司第五十四研究所 Visual positioning method based on city building features and three-dimensional map
CN114111787A (en) * 2021-11-05 2022-03-01 上海大学 Visual positioning method and system based on three-dimensional road sign
CN114111787B (en) * 2021-11-05 2023-11-21 上海大学 Visual positioning method and system based on three-dimensional road sign
CN114451830A (en) * 2022-03-17 2022-05-10 上海飞博激光科技有限公司 Device and method for cleaning glass curtain wall by laser
CN115019167B (en) * 2022-05-26 2023-11-07 中国电信股份有限公司 Fusion positioning method, system, equipment and storage medium based on mobile terminal
CN115019167A (en) * 2022-05-26 2022-09-06 中国电信股份有限公司 Fusion positioning method, system, equipment and storage medium based on mobile terminal
US11940269B1 (en) * 2023-09-29 2024-03-26 Mloptic Corp. Feature location detection utilizing depth sensor
CN117058209A (en) * 2023-10-11 2023-11-14 山东欧龙电子科技有限公司 Method for calculating depth information of flying car visual images based on a three-dimensional map
CN117058209B (en) * 2023-10-11 2024-01-23 山东欧龙电子科技有限公司 Method for calculating depth information of flying car visual images based on a three-dimensional map

Also Published As

Publication number Publication date
CN108406731B (en) 2023-06-13

Similar Documents

Publication Publication Date Title
CN108406731A (en) Positioning device and method based on deep vision, and robot
CN208323361U (en) Positioning device and robot based on deep vision
CN110070615B (en) Multi-camera cooperation-based panoramic vision SLAM method
CN109993113B (en) Pose estimation method based on RGB-D and IMU information fusion
CN112734852B (en) Robot mapping method and device and computing equipment
CN110044354A (en) Binocular vision indoor positioning and mapping method and device
US9990726B2 (en) Method of determining a position and orientation of a device associated with a capturing device for capturing at least one image
CN109540126A (en) Inertial-visual combined navigation method based on optical flow
CN110221603A (en) Long-distance obstacle detection method based on lidar multi-frame point cloud fusion
CN109166149A (en) Positioning and three-dimensional wireframe reconstruction method and system fusing binocular camera and IMU
CN107909614B (en) Positioning method of inspection robot in GPS failure environment
CN113706626B (en) Positioning and mapping method based on multi-sensor fusion and two-dimensional code correction
Abuhadrous et al. Digitizing and 3D modeling of urban environments and roads using vehicle-borne laser scanner system
CN108481327A (en) Vision-enhanced positioning device, positioning method and robot
CN111968228B (en) Augmented reality self-positioning method based on aviation assembly
CN108544494A (en) Positioning device and method based on inertial and visual features, and robot
CN111161337A (en) Simultaneous localization and mapping method for an accompanying robot in dynamic environments
CN108364304A (en) System and method for monocular airborne target detection
CN115272596A (en) Multi-sensor fusion SLAM method for large monotonous texture-less scenes
CN208289901U (en) Vision-enhanced positioning device and robot
CN111998862A (en) Dense binocular SLAM method based on BNN
JP6410231B2 (en) Alignment apparatus, alignment method, and computer program for alignment
JP2881193B1 (en) Three-dimensional object recognition apparatus and method
CN113701750A (en) Underground multi-sensor fusion positioning system
CN112731503A (en) Pose estimation method and system based on front-end tight coupling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 2706, No. 3000, Huandao East Road, Hengqin new area, Zhuhai, Guangdong

Applicant after: Zhuhai Yiwei Semiconductor Co.,Ltd.

Address before: 519000 room 105-514, No. 6, Baohua Road, Hengqin new area, Zhuhai, Guangdong

Applicant before: AMICRO SEMICONDUCTOR Co.,Ltd.

GR01 Patent grant