CN208323361U - A kind of positioning device and robot based on deep vision - Google Patents

A kind of positioning device and robot based on deep vision

Info

Publication number
CN208323361U
CN208323361U (application CN201820866263.4U)
Authority
CN
China
Prior art keywords
image
landmark
positioning device
depth
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn - After Issue
Application number
CN201820866263.4U
Other languages
Chinese (zh)
Inventor
赖钦伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Amicro Semiconductor Co Ltd
Original Assignee
Zhuhai Amicro Semiconductor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Amicro Semiconductor Co Ltd filed Critical Zhuhai Amicro Semiconductor Co Ltd
Priority to CN201820866263.4U priority Critical patent/CN208323361U/en
Application granted granted Critical
Publication of CN208323361U publication Critical patent/CN208323361U/en
Withdrawn - After Issue legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Navigation (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The utility model discloses a positioning device and robot based on deep vision. The positioning device is a movable vision positioning device comprising: a rear-facing image capture module, for acquiring landmark images to realize positioning; a depth recognition module, for identifying the ground and objects above the ground; an image processing module, including an image preprocessing submodule and a feature matching submodule, for processing the image information input by the rear-facing image capture module and the depth recognition module; an inertia processing module, for sensing the displacement information of the inertial sensors in real time; and a fusion positioning module, for fusing the environmental information acquired by each sensor module to realize positioning. Compared with the prior art, the three-dimensional depth sensor installed at the front provides new landmark information in real time for the rearward-tilted camera, cooperating to complete positioning, so that computing resources are reduced during localization and navigation and the efficiency of simultaneous localization is improved.

Description

A kind of positioning device and robot based on deep vision
Technical field
The utility model relates to positioning devices, and in particular to a positioning device and robot based on deep vision.
Background art
Three-dimensional (3D) depth capture systems extend traditional imaging into a third dimension. Whereas the 2D image obtained from a traditional camera indicates color and brightness at each (x, y) pixel, the 3D point cloud obtained from a 3D depth sensor indicates the distance (z) to the object surface at each (x, y) pixel. In this way, the 3D sensor provides a measurement of the third spatial dimension z. A 3D system acquires depth information directly, rather than relying on perspective, relative size, occlusion, texture, parallax and other cues to infer depth. Direct (x, y, z) data are particularly useful for computer interpretation of image data. For example, the three-dimensional point cloud acquired by a depth camera can be projected onto a two-dimensional plane to obtain two-dimensional projection data, from which a two-dimensional grid map is constructed.
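As a concrete illustration of that projection, the following is a minimal sketch (all parameters are illustrative assumptions, not taken from the utility model) of collapsing depth-camera points onto a two-dimensional occupancy grid:

```python
import numpy as np

def point_cloud_to_grid(points, resolution=0.05, size=(200, 200)):
    """Project (x, y, z) points onto a 2D occupancy grid in the x-y plane."""
    grid = np.zeros(size, dtype=np.uint8)
    for x, y, z in points:
        if z <= 0.02:          # assumed floor threshold: skip ground returns
            continue
        row = int(y / resolution) + size[0] // 2   # center the sensor in the grid
        col = int(x / resolution) + size[1] // 2
        if 0 <= row < size[0] and 0 <= col < size[1]:
            grid[row, col] = 1                     # mark the cell as occupied
    return grid
```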
In existing vision sweeper products, the mobile robot includes a camera navigation system embedded in the robot body under the top cover. The navigation system includes one or more cameras that capture images of the surrounding environment (for example, standard cameras, volumetric point cloud imaging cameras, three-dimensional (3D) imaging cameras, cameras with depth map sensors, visible light cameras and/or infrared cameras). The mobile robot can optionally adopt any of various camera configurations, including a tilted front camera combined with a forward camera aligned in the direction of movement (not shown), multiple forward-facing cameras tilted at different angles, stereo camera pairs, two or more tilted cameras with adjacent or partially overlapping fields of view, and/or cameras angled at different angles. The navigation system uses the image data captured by one or more tilted cameras to execute VSLAM, mapping the environment and precisely locating the position of the mobile robot. However, the above combinations and placements of cameras make the visual localization algorithm complex, and during navigation and localization the computing resource consumption of the robot's main control processor is large.
Utility model content
A positioning device based on deep vision, the positioning device being a movable device, comprising a rear-facing image capture module, a depth recognition module, an image processing module, an inertia processing module and a fusion positioning module;
the rear-facing image capture module includes a camera positioned at the tail of the top surface of the positioning device, in a recessed and/or protruding structure opening backward, for acquiring landmark images to realize positioning;
the depth recognition module includes a three-dimensional depth sensor positioned at the front of the top surface of the positioning device, the optical axis of the three-dimensional depth sensor forming a predetermined angle with the top surface of the positioning device, so that the three-dimensional depth sensor identifies the ground in the forward direction of the positioning device and/or objects above the ground;
the image processing module includes an image preprocessing submodule and a feature matching submodule, for processing the image information input by the rear-facing image capture module and the depth recognition module; the image preprocessing submodule converts the images input by the rear-facing image capture module and the depth recognition module into grayscale images; the feature matching submodule performs feature matching between the image features in the grayscale image and the landmark image features in a landmark database; wherein the landmark database stores the image features of given landmark-associated regions and descriptions of the spatial structure in the actual scene, and is built into the image processing module;
the inertia processing module is composed of inertial sensors and senses in real time the rotation angle information, acceleration information and translational velocity information of the positioning device, wherein the inertial sensors include an odometer, a gyroscope and an accelerometer;
the fusion positioning module fuses, according to the matching result between the landmark database and the input image information, the image feature information acquired by the camera with the inertial data acquired by the inertia processing module, to correct the current position information of the positioning device; at the same time, the features of depth image data extracted by the three-dimensional depth sensor that do not match the landmark image associated features in the landmark database are added to the landmark database to become new landmarks.
Further, the three-dimensional depth sensor is a 3D TOF sensor, or an image-based binocular or multi-camera sensor array.
Further, in the fusion positioning module, when the landmark image features acquired by the camera successfully match the landmark image features in the landmark database, the coordinates in the map of the landmark of the currently acquired image are obtained from the matched landmark image associated features obtained by the rear-facing image capture module; then, combined with the relative positional relationship between the positioning device and the landmark calculated from the pinhole imaging model, the coordinates of the positioning device in the map are obtained and fused with the inertial data acquired by the inertia processing module, so as to correct the current position information of the positioning device;
when the landmark image features acquired by the camera fail to match the landmark image features in the landmark database, the inertial data are recorded between every two frames of landmark images and integrated to obtain the pose change of the inertial sensors; the intrinsic parameters of the camera are then used to calculate the coordinates in the current frame landmark image of the feature points of the landmark in the previous frame landmark image, which are compared with the feature point coordinates of the landmark in the current frame landmark image acquired by the camera, so that a new landmark is obtained by update and correction and stored in the landmark database, completing the creation of a new landmark;
when the depth image features acquired by the three-dimensional depth sensor fail to match the landmark image features in the landmark database, the feature information of the acquired unmatched depth image is added to the landmark database as a new landmark;
wherein the coordinates in the map use the world coordinate system.
Further, the depth recognition module obtains, through the three-dimensional depth sensor, the depth data of landmarks within a set distance in the actual scene and establishes a spatial three-dimensional coordinate system, wherein the Z coordinate represents the depth value of each pixel, and each pixel value reflects the distance from the landmark in the actual scene to the three-dimensional depth sensor.
A robot, the robot being a mobile robot equipped with the positioning device.
Compared with the prior art, the utility model provides a three-dimensional depth sensor at the front of the positioning device for detecting and identifying objects in the forward driving direction of the positioning device, determining obstacle distances, and creating new landmarks, so that previously undetected area information is learned and the rearward-tilted camera is helped to match relevant landmarks and realize positioning. The three-dimensional depth sensor does not perform the positioning process itself, which reduces computing resources, enables simultaneous localization and navigation of the positioning device, and improves navigation efficiency.
Description of the drawings
Fig. 1 is a module block diagram of a positioning device based on deep vision provided by an implementation of the utility model;
Fig. 2 is a flowchart of a localization method based on deep vision provided by an implementation of the utility model;
Fig. 3 is a system structure diagram of a robot based on deep vision provided by an implementation of the utility model.
Specific embodiment
Specific embodiments of the utility model are further described below with reference to the accompanying drawings:
In one embodiment of the utility model, the positioning device based on deep vision is implemented in the form of a robot, including mobile robots such as sweeping robots and AGVs. In the following, the positioning device is assumed to be installed on a sweeping robot. However, those skilled in the art will understand that, beyond its particular use in mobile robots, the construction according to embodiments of the utility model can be extended to mobile terminals.
In the implementation of the utility model, those skilled in the art will readily understand that during the execution of VSLAM, the feature points of the input images are buffered into a small map, and the positional relationship between the current frame and the map is then calculated. Here the map is only a provisional concept: the feature points of each frame are cached in one place, constituting a set of feature points, called the map. During the execution of VSLAM, every frame of the image acquired by the camera contributes some information to the map, for example by adding new feature points or refreshing old ones, thereby maintaining a continuously updated map.
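The following sketch is one way to picture that provisional map; it is an illustrative assumption, not the patent's implementation — a plain buffer of feature points that each frame is matched against and then extends:

```python
import numpy as np

class FeatureMap:
    """Provisional VSLAM map: a growing cache of feature points."""
    def __init__(self):
        self.descriptors = []   # one binary descriptor per cached feature
        self.positions = []     # one estimated 3D position per cached feature

    def match(self, frame_descriptors, max_dist=64):
        """Pair each frame feature with its nearest cached feature, if close."""
        pairs = []
        for i, d in enumerate(frame_descriptors):
            dists = [int(np.count_nonzero(d != m)) for m in self.descriptors]
            if dists and min(dists) < max_dist:
                pairs.append((i, int(np.argmin(dists))))
        return pairs

    def add(self, descriptor, position):
        """A frame contributes a new feature point to the map."""
        self.descriptors.append(descriptor)
        self.positions.append(position)
```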
The utility model provides a positioning device based on deep vision; the positioning device is a movable device and, as shown in Fig. 1, includes a rear-facing image capture module, a depth recognition module, an image processing module, an inertia processing module and a fusion positioning module. The rear-facing image capture module includes a camera positioned at the tail of the top surface of the positioning device, in a recessed and/or protruding structure opening backward, for acquiring landmark images to realize positioning. Under normal circumstances the camera needs to protrude slightly and be held at a predetermined angle to obtain a relatively good viewing angle, because the embodiment of the utility model is provided with a bumper bar and a 360-degree cylindrical infrared receiving device, which easily block the camera, so the camera must be held at a predetermined angle to obtain a good viewing angle. A camera placed at the front is therefore unfavorable for assisting navigation and positioning, and is instead used for object detection, in particular for target identification and analysis of objects in the forward driving direction of the positioning device.
Preferably, as shown in Fig. 3, the depth recognition module includes a three-dimensional depth sensor 108 positioned at the front of the top surface of the positioning device. The optical axis of the three-dimensional depth sensor forms a predetermined angle with the top surface of the positioning device, so that the three-dimensional depth sensor identifies the ground in the forward direction of the positioning device and/or objects above the ground; the field of view of the three-dimensional depth sensor thus covers the ground and the space above it, obtaining the depth data of landmarks within a set distance on or above the ground and establishing a spatial three-dimensional coordinate system, where the Z coordinate represents the depth value of each pixel and the depth value of each pixel reflects the distance from the landmark in the actual scene to the three-dimensional depth sensor, so that newly identified landmark information in the forward driving direction can be added to the landmark database.
Specifically, the three-dimensional depth sensor 108 can be a 3D TOF sensor, or an image-based binocular or multi-camera sensor array. The image-based binocular or multi-camera sensor array is suited to binocular measurement systems, while the 3D TOF sensor obtains the target distance by detecting emitted and reflected light.
In the implementation of the utility model, the three-dimensional depth sensor generates a data stream of depth images at a rate of 20 frames per second and creates a depth map from the depth images. A depth map is an image or image channel containing information related to the distance from a viewpoint to the surfaces of scene objects. The depth map resembles a grayscale image, except that each pixel value is the actual distance from the sensor to the object, unaffected by object appearance and background color, which reduces the occurrence of misjudgments. Using the depth image, the distance and bearing of obstacles can be judged and the position of each pixel in space obtained directly, so that the positioning device not only introduces new landmarks but also reproduces the surrounding environment in 3D in real time. In 3D computer graphics, the depth image is accompanied by an ordinary RGB three-channel color image; usually the RGB image and the depth image are registered, so there is a one-to-one correspondence between their pixels.
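A small sketch of how such a depth map can be rendered as the grayscale image just described (the 4 m range limit is an assumed set distance, not a value from the utility model):

```python
import numpy as np

def depth_to_gray(depth_mm, max_range_mm=4000):
    """Normalize a millimeter depth image to an 8-bit grayscale depth map."""
    depth = depth_mm.astype(np.float32)
    depth[depth > max_range_mm] = 0        # drop points beyond the set distance
    return (depth / max_range_mm * 255.0).astype(np.uint8)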
Further, the depth image also includes a two-dimensional (2D) pixel region of the captured scene, where each pixel in the 2D pixel region can indicate a depth value: the distance, in centimeters, millimeters or the like, from an object in the captured scene to the capture device. The depth image retains only the feature point information within the set distance; feature points outside the set distance are discarded.
Specifically, as shown in Fig. 3, the angle formed between the optical axis of the camera 106 and the top surface of the positioning device, defined as an acute angle ɑ, is generally near 45 degrees, to guarantee a good approximation of the true imaging characteristics and improve the precision of detecting landmark features.
As shown in Fig. 1, the image processing module includes an image preprocessing submodule and a feature matching submodule, for processing the image information input by the rear-facing image capture module and the depth recognition module. The image preprocessing submodule converts the color image data acquired by the camera and the three-dimensional depth sensor into grayscale images, in order to establish uniquely and repeatably identifiable landmarks in the surrounding environment, completing the image preprocessing process. The feature matching submodule then extracts feature points from the grayscale image produced by the image preprocessing submodule, computes descriptors, and performs feature matching against the descriptors of the corresponding landmark image associated features in the landmark database; the input image information includes that of the rear-facing image capture module and the depth recognition module.
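As a sketch of this preprocessing and extraction chain (the file name and parameter values are assumptions; ORB is the detector the embodiment names further below):

```python
import cv2

frame = cv2.imread("frame.png")                   # image from the rear camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # preprocessing: grayscale
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(gray, None)
# `descriptors` is then matched against the landmark database descriptors.
```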
Specifically, the landmark database is built into the image processing module and includes the image feature points of given landmark-associated regions and/or the three-dimensional structure of features. The landmark database includes information about many previously observed landmarks as well as the depth image information of unknown landmarks obtained by the three-dimensional depth sensor; the positioning device can use these landmarks to execute navigation and positioning actions. A landmark can be considered a set of features with a specific three-dimensional structure. Any of various features can be used to identify a landmark; when the positioning device is configured as a room-cleaning robot, a landmark may be (but is not limited to) a set of features identified from the two-dimensional structure of the corner of a picture frame. Such features are based on the static geometry of the room and, although they carry some illumination and scale variation, they are generally easier to identify and recognize as landmarks than objects in lower regions of the environment that are frequently displaced (such as chairs, trash bins, pets, etc.). It will be readily understood that the specific structure of the landmark database depends on the specific application requirements.
As shown in Fig. 1, the inertia processing module is composed of a series of inertial sensors and senses in real time the rotation angle information, acceleration information and translational velocity information of the positioning device. The module acquires inertial data through the inertial sensors, then performs calibration and filtering and transmits the result to the fusion positioning module. The raw processing of the inertial data includes masking of maximum and minimum values, elimination of static drift, and Kalman filtering of the data. The inertial sensors include the odometer, gyroscope, accelerometer and other sensors used for inertial navigation. In subsequent processing, the images of landmarks are acquired and tracked based on the optical flow observed between consecutive adjacent images, and the data acquired by these inertial sensors are needed to determine the distance traveled. Specifically, during the movement of the positioning device, integrating the encoder data yields the position and attitude change, integrating the data acquired by the accelerometer yields the position change, and integrating the data acquired by the gyroscope yields the attitude change; a weighted average of the three then gives the position and attitude change of the current moment relative to the previous moment.
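A minimal dead-reckoning sketch of that integration and weighted averaging (the weights and the scalar signals are illustrative assumptions, not values from the utility model):

```python
def integrate_pose(encoder_v, accel, gyro_w, dt, w_enc=0.7, w_acc=0.3):
    """One-interval pose change from odometer, accelerometer and gyroscope."""
    pos_from_encoder = encoder_v * dt        # integrate encoder velocity
    pos_from_accel = 0.5 * accel * dt * dt   # double-integrate acceleration
    heading_delta = gyro_w * dt              # integrate angular rate
    # Weighted blend of the translational estimates, per the fusion above.
    pos_delta = w_enc * pos_from_encoder + w_acc * pos_from_accel
    return pos_delta, heading_delta
```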
As shown in Fig. 1, the fusion positioning module, according to the matching result between the landmark database and the input image information, calculates the coordinate information of the current position of the positioning device from the image feature information acquired by the camera through the geometric relationship of the pinhole imaging model, then compares it with the coordinate information of the current position computed from the inertial data acquired by the inertia processing module, so as to update and correct the current position information. The inertial sensors can obtain the travel distance of the positioning device through integral operations. At the same time, the feature information of depth image data extracted by the three-dimensional depth sensor that does not match the landmark image associated features in the landmark database is added to the landmark database to become new landmarks.
Specifically, in the fusion positioning module, when the landmark image acquired by the camera successfully matches the landmark images in the landmark database, the coordinates of the landmark of the currently acquired image in the pixel coordinate system are obtained from the matched landmark image associated features, and the coordinates of the landmark in the map are obtained by coordinate system conversion; then, combined with the relative positional relationship between the positioning device and the landmark calculated from the pinhole imaging model, the coordinates of the positioning device in the map are calculated and compared with the result of integrating the inertial data acquired by the inertia processing module, thereby correcting the current position information of the positioning device.
When the landmark image features acquired by the camera fail to match the landmark image features in the landmark database, the inertial data are recorded between every two frames of landmark images, and the positional relationship between the positioning device and the landmark is derived from the accumulated value of the inertial data; then, combining the inertial sensor readings with the relative attitude determined between the adjacent two frames acquired by the camera, the inertial sensor readings are transformed from the world coordinate system through the rotation R and the translation t into coordinates in the camera coordinate system; the intrinsic parameters of the camera are then used to calculate the coordinates in the current frame landmark image of the feature points of the landmark in the previous frame landmark image, which are compared with the feature point coordinates of the current frame landmark image acquired by the camera; the grayscale image feature point coordinates of the current frame are updated and corrected so that a new landmark is obtained and stored and recorded in the landmark database, completing the creation of a new landmark.
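A hedged sketch of that reprojection step: rotate and translate a landmark point with the IMU-accumulated R and t, then project it with the camera intrinsic matrix K (the values of K, R and t here are assumptions for illustration):

```python
import numpy as np

K = np.array([[520.0,   0.0, 320.0],
              [  0.0, 520.0, 240.0],
              [  0.0,   0.0,   1.0]])   # assumed camera intrinsics

def reproject(point_world, R, t):
    """Predict a world point's pixel coordinates in the current frame."""
    p_cam = R @ point_world + t         # world -> camera coordinates
    uvw = K @ p_cam                     # camera -> homogeneous pixel coords
    return uvw[:2] / uvw[2]             # perspective division: (u, v)
```

Comparing the prediction against the feature point actually detected in the current frame gives the correction that updates the landmark database.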
When the depth image features acquired by the three-dimensional depth sensor fail to match the landmark image features in the landmark database, the three-dimensional depth sensor extracts the features of the depth image data that do not match the landmark image associated features in the landmark database and adds them to the landmark database to become new landmarks, providing the landmark database with feature sets of the 3D structure of unknown objects, so as to solve the pose pre-estimation problem while the positioning device travels in unknown regions. In the positioning device, the three-dimensional depth sensor does not execute positioning operations and is used only to identify objects and add new landmarks to the landmark database, in cooperation with the positioning operation of the camera.
The coordinates in the map use the world coordinate system; mapping associations exist from the inertial sensors to the camera and from the camera to the grayscale image features and/or landmark image associated features, and the features can be extracted from the grayscale image.
Based on the same utility model concept, this embodiment provides a localization method based on deep vision. Since the hardware device that uses this localization method to solve the positioning problem is the aforementioned positioning device based on deep vision, the embodiment of the localization method may refer to the application embodiments of the aforementioned positioning device. In specific implementation, as shown in Fig. 2, the method specifically includes:
Step S1: the three-dimensional depth sensor obtains the depth image of objects in the forward driving direction of the positioning device and extracts from the depth image the image feature information used to identify landmarks. The utility model uses the three-dimensional depth sensor to obtain the depth data of the field of view and establishes a spatial three-dimensional coordinate system in which the Z coordinate represents the depth value of each pixel. A three-dimensional point cloud set is obtained from the acquired image to construct a 3D image, the distance data are normalized onto the pixel coordinate system and converted into image gray values, and the generated depth image is finally output to an external processing device.
Step S2: the camera preprocesses the target image of the landmark in the acquired actual scene, performs grayscale conversion, and extracts feature points from the grayscale image to form the feature points of the landmark; then, according to the imaging geometric relationship (i.e. the pinhole imaging model) between the identified feature points of the target image and the landmark, the positional relationship of the positioning device relative to the landmark is calculated;
Step S3: the descriptors of the grayscale image features are feature-matched against the descriptors of the landmark image associated features stored in the landmark database, to judge whether the features of the target image match the landmark image associated features in the landmark database; if so, proceed to step S4, otherwise proceed to step S5. At the same time, judge whether the features of the depth image match the landmark image associated features in the landmark database; if not, proceed to step S6;
Step S4: after a successful match, with the intrinsic matrix of the camera known, the coordinates of the landmark in the map are obtained from the matched landmark image associated features; combined with the calculated positional relationship of the positioning device relative to the landmark, the coordinates of the positioning device in the map are computed and then corrected and updated with the current position coordinates from the inertial sensors, yielding accurate current position coordinates and completing the real-time localization of the positioning device.
Step S5: between the two frames of the landmark continuously captured by the camera, the inertial data are recorded and integrated to obtain the pose change of the inertial sensors; the intrinsic parameters of the camera are then used to calculate the coordinates in the current frame image of the feature points of the landmark in the previous frame image, which are compared with the feature point coordinates of the landmark in the current frame image; the feature point coordinates of the landmark in the current frame image are then updated and corrected, so as to complete the creation of a new landmark, which is stored and recorded in the landmark database.
Step S6: the features of the depth image data extracted by the three-dimensional depth sensor that do not match the landmark image associated features in the landmark database are added to the landmark database to become new landmarks, providing the landmark database with 3D point cloud data of unknown objects, compensating for the limitation of the camera's viewing angle, providing multiple angular fields of view to support simultaneous localization, and further handling the pose pre-estimation problem while traveling in unknown regions.
The inertial data have undergone calibration and filtering; mapping associations exist from the inertial sensors to the camera and from the camera to the target image features and/or landmark image associated features; the landmark database stores the image feature points of given landmarks, their position information in the actual scene, and the three-dimensional point cloud data of the landmarks.
In one implementation of the utility model, a feature point is composed of two parts, a keypoint and a descriptor: the keypoint is the position of the feature point in the image (some feature points also carry information such as orientation and size), and the descriptor usually describes the information of the pixels around the keypoint. In the above implementation, after preprocessing, the feature points extracted from the grayscale image are ORB feature points, and feature matching is performed using their corresponding descriptors. The ORB feature descriptor remedies the lack of directionality in FAST detection and uses the extremely fast binary descriptor BRIEF, so that the feature extraction stage of the feature matching submodule is greatly accelerated.
An ORB feature point is composed of a keypoint and a descriptor. The keypoint is FAST, a kind of corner point that mainly detects places where the local pixel grayscale changes significantly, famous for its speed; the improved FAST corner in ORB carries a description of scale and rotation, greatly improving its robustness across images. The BRIEF descriptor is a binary descriptor whose description vector is composed of many 0s and 1s, convenient to store; here the 0s and 1s encode the size relationship between pairs of pixels near the keypoint. BRIEF compares randomly selected points, is extremely fast, and has fairly good rotation invariance, so that the BRIEF descriptor satisfies the real-time image matching needs of the positioning device. Selecting ORB feature points therefore keeps features rotation- and scale-invariant, prevents loss during image rotation changes, and brings a clear speed advantage, which helps improve the speed at which the positioning device acquires and processes images and strengthens its real-time computing capability.
Further, feature matching actually measures the descriptor distance from each feature point on one image to all descriptors on the other, sorts them, and takes the nearest one as the match point. The descriptor distance expresses the degree of similarity between two features; the Euclidean distance is selected for measurement in this implementation of the utility model, i.e. the Euclidean distances between the features respectively extracted from the grayscale image to be measured and from the landmark images in the landmark database, completing the judgment of the matching process.
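A hedged sketch of the nearest-descriptor matching just described, using OpenCV's brute-force matcher. Note that while this implementation selects the Euclidean distance, for ORB's binary BRIEF descriptors OpenCV conventionally uses the Hamming norm, which is what this illustrative snippet does:

```python
import cv2

def match_features(query_desc, landmark_desc):
    """Nearest-neighbor matching of frame descriptors against the database."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(query_desc, landmark_desc)
    # Sort by descriptor distance; the closest pairs are the match points.
    return sorted(matches, key=lambda m: m.distance)
```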
The pixel values of the depth image reflect the distance from objects in the actual scene to the current position of the three-dimensional depth sensor, and a depth map is created from the depth image. The three-dimensional depth sensor is the functional unit that obtains depth information within the field of view. In embodiments, the three-dimensional depth sensor captures the depth image using structured light, time of flight, stereo vision or any other sensor technology known to those of ordinary skill in the art; projecting the corresponding three-dimensional point cloud onto a two-dimensional plane yields the coordinate information, in the pixel coordinate system, of the objects captured by the three-dimensional depth sensor.
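A minimal sketch (with assumed intrinsics) of the relation between a depth pixel and its point in space: back-projecting (u, v, z) through the pinhole model, the inverse of the point-cloud-to-pixel projection described above:

```python
import numpy as np

fx, fy, cx, cy = 520.0, 520.0, 320.0, 240.0   # assumed sensor intrinsics

def pixel_to_point(u, v, z):
    """Lift a depth pixel (u, v) with depth z into 3D sensor coordinates."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```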
Between the two frames continuously captured by the camera, a current frame and a reference frame are defined; the inertial data are recorded and accumulated to obtain the pose transformation recorded by the inertial sensors between the current frame and the reference frame, serving as the pose change of the inertial sensors. Then, using the pose change of the inertial sensors together with the relative attitude determined between the two frames continuously acquired by the camera, the landmark is transformed from the world coordinate system into the camera coordinate system, and the position information of the landmark is further transformed onto the pixel coordinate system according to the camera intrinsic matrix, obtaining the coordinates of the reference frame features in the current frame. When the target image does not match the landmark image associated features stored in the landmark database, the coordinates of the image feature points are predicted, with the aforementioned conversion method, from the current position coordinates obtained by the inertial sensors and compared against the feature point coordinates in the current frame image features; the feature point coordinates in the current frame image features are updated and corrected and stored back into the landmark database as new landmarks created at the current position.
In one implementation of the utility model, the camera model uses the traditional pinhole imaging model: based on the lens of the camera at the predetermined position facing the acquisition angle ɑ (as shown in Fig. 3), the feature points of the target image on the pixel coordinate system, combined with the positional relationship of the landmark in the world coordinate system sensed by the inertial sensors, establish a similar-triangles relationship. With the camera intrinsics known, combining the distances and positions of features on the landmarks photographed while the positioning device advances, triangulation builds the geometric relationship of similar triangles, and the two-dimensional coordinates in the camera coordinate system of the corresponding corner points on the landmark can be calculated.
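The similar-triangles relation reduces to the familiar pinhole projection; a worked example with assumed numbers (focal length, principal point and landmark position are illustrative, not from the utility model):

```python
fx, cx = 520.0, 320.0      # assumed focal length (pixels) and principal point
X, Z = 0.30, 1.5           # assumed corner: 0.30 m off-axis at 1.5 m depth

u = fx * X / Z + cx        # similar triangles: X / Z = (u - cx) / fx
print(u)                   # -> 424.0, the corner's pixel column
```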
As a robot embodiment of the utility model, Fig. 3 provides a structural diagram of a sweeping robot, which can serve as the structural diagram of a concrete application product of the positioning device based on deep vision provided in the implementation of the utility model; for convenience of explanation, only the parts relevant to the embodiment of the utility model are illustrated. In the positioning device, the image processing module and the fusion positioning module are built into the signal processing board 102. The rear-facing image capture module includes the camera 106, installed at the tail of the body 101 in a backward protruding structure, so that the camera stays away from the collision detection sensor 105 and avoids being struck by objects that are difficult to detect. The optical axis of the camera 106 forms a certain tilt angle ɑ with the top surface of the positioning device, giving the camera a good observation bearing. The depth recognition module includes the three-dimensional depth sensor 108, installed in the front half of the body 101 with the lens angled slightly upward, to obtain the depth information of the ground and the objects above the ground in the forward direction. The inertia processing module includes the collision detection sensor 105 and senses while the moving wheels 107 and the universal wheel 104 drive the body 101; the data acquired by the inertia processing module and the rear-facing image capture module are fused, using the computed relative attitude and the intrinsic parameters of the camera 106, to correct the position coordinates, and the device then executes navigation and positioning actions; the landmark database can also be updated to serve as the basis for building a navigation map. Finally, the human-machine interface 103 outputs the accurate coordinate values of the current position of the sweeping robot calculated by the signal processing board.
The above embodiments are intended only for full disclosure and not to limit the utility model; any replacement of equivalent technical features based on the creative purport of the utility model that requires no creative work shall be considered within the scope disclosed by this application.

Claims (5)

1. A positioning device based on deep vision, the positioning device being a movable device, characterized in that it comprises a rear-facing image capture module, a depth recognition module, an image processing module, an inertia processing module and a fusion positioning module;
the rear-facing image capture module includes a camera positioned at the tail of the top surface of the positioning device, in a recessed and/or protruding structure opening backward, for acquiring landmark images to realize positioning;
the depth recognition module includes a three-dimensional depth sensor positioned at the front of the top surface of the positioning device, the optical axis of the three-dimensional depth sensor forming a predetermined angle with the top surface of the positioning device, so that the three-dimensional depth sensor identifies the ground in the forward direction of the positioning device and/or objects above the ground;
the image processing module includes an image preprocessing submodule and a feature matching submodule, for processing the image information input by the rear-facing image capture module and the depth recognition module; the image preprocessing submodule converts the images input by the rear-facing image capture module and the depth recognition module into grayscale images; the feature matching submodule performs feature matching between the image features in the grayscale image and the landmark image features in a landmark database; wherein the landmark database stores the image features of given landmark-associated regions and descriptions of the spatial structure in the actual scene, and is built into the image processing module;
the inertia processing module is composed of inertial sensors and senses in real time the rotation angle information, acceleration information and translational velocity information of the positioning device, wherein the inertial sensors include an odometer, a gyroscope and an accelerometer;
the fusion positioning module fuses, according to the matching result between the landmark database and the input image information, the image feature information acquired by the camera with the inertial data acquired by the inertia processing module, to correct the current position information of the positioning device; at the same time, the features of depth image data extracted by the three-dimensional depth sensor that do not match the landmark image associated features in the landmark database are added to the landmark database to become new landmarks.
2. The positioning device according to claim 1, characterized in that the three-dimensional depth sensor is a 3D TOF sensor, or an image-based binocular or multi-camera sensor array.
3. The positioning device according to claim 1, characterized in that, in the fusion positioning module, when the landmark image features acquired by the camera successfully match the landmark image features in the landmark database, the coordinates in the map of the landmark of the currently acquired image are obtained from the matched landmark image associated features obtained by the rear-facing image capture module; then, combined with the relative positional relationship between the positioning device and the landmark calculated from the pinhole imaging model, the coordinates of the positioning device in the map are obtained and fused with the inertial data acquired by the inertia processing module, so as to correct the current position information of the positioning device;
when the landmark image features acquired by the camera fail to match the landmark image features in the landmark database, the inertial data are recorded between every two frames of landmark images and integrated to obtain the pose change of the inertial sensors; the intrinsic parameters of the camera are then used to calculate the coordinates in the current frame landmark image of the feature points of the landmark in the previous frame landmark image, which are compared with the feature point coordinates of the landmark in the current frame landmark image acquired by the camera, so that a new landmark is obtained by update and correction and stored in the landmark database, completing the creation of a new landmark;
when the depth image features acquired by the three-dimensional depth sensor fail to match the landmark image features in the landmark database, the feature information of the acquired unmatched depth image is added to the landmark database as a new landmark;
wherein the coordinates in the map use the world coordinate system.
4. The positioning device according to claim 1, characterized in that the depth recognition module obtains, through the three-dimensional depth sensor, the depth data of landmarks within a set distance in the actual scene and establishes a spatial three-dimensional coordinate system, wherein the Z coordinate represents the depth value of each pixel, and each pixel value reflects the distance from the landmark in the actual scene to the three-dimensional depth sensor.
5. A robot, characterized in that the robot is a mobile robot equipped with the positioning device according to any one of claims 1 to 4.
CN201820866263.4U 2018-06-06 2018-06-06 A kind of positioning device and robot based on deep vision Withdrawn - After Issue CN208323361U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201820866263.4U CN208323361U (en) 2018-06-06 2018-06-06 A kind of positioning device and robot based on deep vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201820866263.4U CN208323361U (en) 2018-06-06 2018-06-06 A kind of positioning device and robot based on deep vision

Publications (1)

Publication Number Publication Date
CN208323361U true CN208323361U (en) 2019-01-04

Family

ID=64773379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201820866263.4U Withdrawn - After Issue CN208323361U (en) 2018-06-06 2018-06-06 A kind of positioning device and robot based on deep vision

Country Status (1)

Country Link
CN (1) CN208323361U (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108406731A (en) * 2018-06-06 2018-08-17 珠海市微半导体有限公司 A kind of positioning device, method and robot based on deep vision
CN110018688A (en) * 2019-04-11 2019-07-16 清华大学深圳研究生院 The automatic guide vehicle localization method of view-based access control model
CN110728245A (en) * 2019-10-17 2020-01-24 珠海格力电器股份有限公司 Optimization method and device for VSLAM front-end processing, electronic equipment and storage medium
CN111055283A (en) * 2019-12-30 2020-04-24 广东省智能制造研究所 FOC position servo driving device and method of foot type robot
CN111055283B (en) * 2019-12-30 2021-06-25 广东省智能制造研究所 FOC position servo driving device and method of foot type robot
CN112747746A (en) * 2020-12-25 2021-05-04 珠海市一微半导体有限公司 Point cloud data acquisition method based on single-point TOF, chip and mobile robot

Similar Documents

Publication Publication Date Title
CN108406731A (en) A kind of positioning device, method and robot based on deep vision
CN208323361U (en) A kind of positioning device and robot based on deep vision
CN109993113B (en) Pose estimation method based on RGB-D and IMU information fusion
CN110070615B (en) Multi-camera cooperation-based panoramic vision SLAM method
CN112634451B (en) Outdoor large-scene three-dimensional mapping method integrating multiple sensors
CN110044354A (en) A kind of binocular vision indoor positioning and build drawing method and device
Kanade et al. Real-time and 3D vision for autonomous small and micro air vehicles
CN109540126A (en) A kind of inertia visual combination air navigation aid based on optical flow method
CN107909614B (en) Positioning method of inspection robot in GPS failure environment
US20150235367A1 (en) Method of determining a position and orientation of a device associated with a capturing device for capturing at least one image
CN113706626B (en) Positioning and mapping method based on multi-sensor fusion and two-dimensional code correction
Abuhadrous et al. Digitizing and 3D modeling of urban environments and roads using vehicle-borne laser scanner system
CN108481327A (en) A kind of positioning device, localization method and the robot of enhancing vision
CN108364304A (en) A kind of system and method for the detection of monocular airborne target
CN108544494A (en) A kind of positioning device, method and robot based on inertia and visual signature
CN208289901U (en) A kind of positioning device and robot enhancing vision
CN117367427A (en) Multi-mode slam method applicable to vision-assisted laser fusion IMU in indoor environment
CN113701750A (en) Fusion positioning system of underground multi-sensor
CN112731503A (en) Pose estimation method and system based on front-end tight coupling
CN112432653A (en) Monocular vision inertial odometer method based on point-line characteristics
CN116804553A (en) Odometer system and method based on event camera/IMU/natural road sign
CN212044739U (en) Positioning device and robot based on inertial data and visual characteristics
JP2021047024A (en) Estimation device, estimation method, and program
CN116380079A (en) Underwater SLAM method for fusing front-view sonar and ORB-SLAM3
JP6886136B2 (en) Alignment device, alignment method and computer program for alignment

Legal Events

Date Code Title Description
GR01 Patent grant
AV01 Patent right actively abandoned

Granted publication date: 20190104

Effective date of abandoning: 20230613