CN105678842A - Manufacturing method and device for three-dimensional map of indoor environment - Google Patents

Manufacturing method and device for three-dimensional map of indoor environment

Info

Publication number
CN105678842A
CN105678842A
Authority
CN
China
Prior art keywords
image
frame images
key frame
indoor environment
conversion parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610014802.7A
Other languages
Chinese (zh)
Inventor
马燕新 (Ma Yanxin)
李洪 (Li Hong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Visualtouring Information Technology Co Ltd
Original Assignee
Hunan Visualtouring Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Visualtouring Information Technology Co Ltd filed Critical Hunan Visualtouring Information Technology Co Ltd
Priority to CN201610014802.7A priority Critical patent/CN105678842A/en
Publication of CN105678842A publication Critical patent/CN105678842A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention provides a manufacturing method and a device for the three-dimensional map of an indoor environment. The method comprises the steps of acquiring the images of the indoor environment via a three-dimensional vision sensor, wherein the images of the indoor environment are composed of a color image and a depth image; converting the depth image into point cloud data; extracting a feature vector out of the color image and the depth image; determining the key-frame images of the images of the indoor environment according to the feature vector; in combination with the point cloud data, calculating transformation parameters among the key-frame images; according to the transformation parameters among the key-frame images, connecting all the key-frame images to generate a three-dimensional point cloud map of the indoor environment; and manufacturing the three-dimensional map of the indoor environment according to the three-dimensional point cloud map. According to the technical scheme of the invention, the three-dimensional map can be manufactured based on the depth features of the indoor environment, so that the manufactured three-dimensional map is good in expressing ability and better in visual effect.

Description

Three-dimensional map making method and device for an indoor environment
Technical field
The present invention relates to the field of three-dimensional modeling, and in particular to a three-dimensional map making method and device for an indoor environment.
Background technology
With the popularization and application of the Internet, mobile communication, mobile positioning and intelligent mobile terminals, location-based services have become an important foundation for building smart cities, realizing intelligent transportation, responding to natural disasters and delivering public services. At the technical level, the provision of location-based services depends on the establishment of maps. In recent years, with the rapid development of computer technology, and in particular of computer graphics, networking, multimedia, virtual reality and three-dimensional simulation, maps have been developing in a three-dimensional direction.
For location-based services in indoor environments, the prior art provides several methods for creating a three-dimensional map of an indoor environment. The prior art usually establishes such a map according to the following principle: first, images of the indoor environment are acquired by a camera; second, feature extraction and frame matching are performed on the images; finally, three-dimensional modeling is carried out according to the results of the feature extraction and frame matching.
However, when extracting features in the process of making the three-dimensional map, the prior art considers only the two-dimensional image features of the indoor environment and ignores its depth features, so the resulting map has weak environmental expressive ability and a poor visualization effect.
Summary of the invention
The present invention provides a three-dimensional map making method and device for an indoor environment that make the three-dimensional map in combination with the depth features of the indoor environment, thereby ensuring that the produced three-dimensional map has strong environmental expressive ability and a better visualization effect.
In a first aspect, an embodiment of the present invention provides a three-dimensional map making method for an indoor environment, the method comprising:
acquiring indoor environment images by a three-dimensional vision sensor, the indoor environment images comprising a color image and a depth image, and converting the depth image into point cloud data;
extracting feature vectors from the color image and the depth image, and determining key frame images of the indoor environment images according to the feature vectors;
calculating transformation parameters between the key frame images in combination with the point cloud data;
and connecting the key frame images according to the transformation parameters between them to generate a three-dimensional point cloud map of the indoor environment, and making the three-dimensional map of the indoor environment according to the three-dimensional point cloud map.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation of the first aspect, wherein extracting feature vectors from the color image and the depth image comprises:
extracting a 256-bit binary robust independent elementary features (BRIEF) descriptor from the color image and from the depth image respectively, concatenating the two extracted 256-bit BRIEF descriptors to obtain a 512-bit descriptor, and using the 512-bit descriptor as the extracted feature vector.
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation of the first aspect, wherein determining the key frame images of the indoor environment images according to the feature vectors comprises:
calculating a matching parameter between the current frame image of the indoor environment images and the previous key frame image according to the feature vectors, and determining the current frame image to be a key frame image of the indoor environment images when the matching parameter meets a preset condition,
wherein the matching parameter comprises at least one of the following: the matching degree between the current frame image and the previous key frame image; and the acquisition time interval between the current frame image and the previous key frame image;
and the preset condition comprises at least one of the following: the matching degree between the current frame image and the previous key frame image is less than a preset matching degree threshold; and the acquisition time interval between the current frame image and the previous key frame image is greater than a preset time interval threshold.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation of the first aspect, wherein calculating the transformation parameters between the key frame images in combination with the point cloud data comprises:
applying a random sample consensus (RANSAC) algorithm to the key frame images to obtain initial transformation parameters between the key frame images;
and, taking the initial transformation parameters as a starting point, applying an iterative closest point (ICP) algorithm to the point cloud data to obtain accurate transformation parameters between the key frame images, and using the accurate transformation parameters as the transformation parameters between the key frame images.
With reference to the above implementations of the first aspect, an embodiment of the present invention provides a fourth possible implementation of the first aspect, wherein, after calculating the transformation parameters between the key frame images in combination with the point cloud data, the method further comprises:
performing closed-loop detection on the key frame images, and, when a closed-loop point is detected, optimizing the transformation parameters between the key frame images with a general graph optimization (G2O) algorithm and taking the optimized transformation parameters as the transformation parameters between the key frame images.
In a second aspect, an embodiment of the present invention provides a three-dimensional map making device for an indoor environment, the device comprising:
an image acquisition module, configured to acquire indoor environment images by a three-dimensional vision sensor, the indoor environment images comprising a color image and a depth image, and to convert the depth image into point cloud data;
a key frame determination module, configured to extract feature vectors from the color image and the depth image, and to determine key frame images of the indoor environment images according to the feature vectors;
a transformation parameter calculation module, configured to calculate transformation parameters between the key frame images in combination with the point cloud data;
and a three-dimensional map making module, configured to connect the key frame images according to the transformation parameters between them, generate a three-dimensional point cloud map of the indoor environment, and make the three-dimensional map of the indoor environment according to the three-dimensional point cloud map.
With reference to the second aspect, an embodiment of the present invention provides a first possible implementation of the second aspect, wherein the key frame determination module comprises:
a feature vector extraction unit, configured to extract a 256-bit BRIEF descriptor from the color image and from the depth image respectively, concatenate the two extracted 256-bit BRIEF descriptors to obtain a 512-bit descriptor, and use the 512-bit descriptor as the extracted feature vector.
With reference to the second aspect, an embodiment of the present invention provides a second possible implementation of the second aspect, wherein the key frame determination module comprises:
a key frame matching unit, configured to calculate a matching parameter between the current frame image of the indoor environment images and the previous key frame image according to the feature vectors, and to determine the current frame image to be a key frame image of the indoor environment images when the matching parameter meets a preset condition,
wherein the matching parameter comprises at least one of the following: the matching degree between the current frame image and the previous key frame image; and the acquisition time interval between the current frame image and the previous key frame image;
and the preset condition comprises at least one of the following: the matching degree between the current frame image and the previous key frame image is less than a preset matching degree threshold; and the acquisition time interval between the current frame image and the previous key frame image is greater than a preset time interval threshold.
With reference to the second aspect, an embodiment of the present invention provides a third possible implementation of the second aspect, wherein the transformation parameter calculation module comprises:
an initial transformation parameter calculation unit, configured to apply a RANSAC algorithm to the key frame images to obtain initial transformation parameters between the key frame images;
and an accurate transformation parameter calculation unit, configured to take the initial transformation parameters as a starting point, apply an ICP algorithm to the point cloud data to obtain accurate transformation parameters between the key frame images, and use the accurate transformation parameters as the transformation parameters between the key frame images.
With reference to the above implementations of the second aspect, an embodiment of the present invention provides a fourth possible implementation of the second aspect, wherein the device further comprises:
a transformation parameter optimization module, configured to perform closed-loop detection on the key frame images, and, when a closed-loop point is detected, optimize the transformation parameters between the key frame images with a G2O algorithm and take the optimized transformation parameters as the transformation parameters between the key frame images.
With the method and device in this embodiment, both the two-dimensional image features and the depth features of the indoor environment are considered during feature extraction, and the three-dimensional map is made in combination with the depth features of the indoor environment, which ensures that the produced three-dimensional map has strong environmental expressive ability and a better visualization effect.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should therefore not be regarded as limiting its scope; those of ordinary skill in the art can obtain other relevant drawings from them without creative effort.
Fig. 1 shows a first schematic flow chart of the three-dimensional map making method for an indoor environment provided by the first embodiment of the present invention;
Fig. 2 shows a second schematic flow chart of the three-dimensional map making method for an indoor environment provided by the first embodiment of the present invention;
Fig. 3 shows a third schematic flow chart of the three-dimensional map making method for an indoor environment provided by the first embodiment of the present invention;
Fig. 4 shows a first schematic structural diagram of the three-dimensional map making device for an indoor environment provided by the second embodiment of the present invention;
Fig. 5 shows a second schematic structural diagram of the three-dimensional map making device for an indoor environment provided by the second embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below in combination with the drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. The components of the embodiments, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Considering that, in the process of making a three-dimensional map of an indoor environment, the prior art extracts only the two-dimensional image features of the indoor environment and ignores its depth features, which leads to a map with weak environmental expressive ability and a poor visualization effect, the present invention provides a three-dimensional map making method and device for an indoor environment that make the three-dimensional map in combination with the depth features of the indoor environment, thereby ensuring strong environmental expressive ability and a better visualization effect. The invention is described in detail below in combination with the embodiments.
Embodiment one
Fig. 1 shows a first schematic flow chart of the three-dimensional map making method for an indoor environment provided by the first embodiment of the present invention. As shown in Fig. 1, the method in this embodiment comprises the following steps.
Step 102: acquire indoor environment images by a three-dimensional vision sensor, the indoor environment images comprising a color image and a depth image, and convert the depth image into point cloud data.
Considering that three-dimensional vision sensors are convenient, economical and information-rich, a low-cost three-dimensional vision sensor (such as Kinect, Kinect 2, Tango, RealSense or Asus Xtion) is preferably used to obtain the color image and depth image of the indoor environment. In particular, the Kinect series of sensors works stably and is not disturbed by the visible-light spectrum of the environment.
The depth image is converted into point cloud data based on the photogrammetric (pinhole camera) principle. Considering the visual effect of the point cloud data, in this embodiment the color image is used to assign a color to each point after the depth image has been converted.
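The pinhole back-projection used in this conversion can be sketched as follows. This is an illustrative sketch only: the intrinsic parameters fx, fy, cx, cy and the depth scale of 1000 (millimeter depth units, typical of Kinect-class sensors) are assumptions, not values specified in this disclosure.

```python
import numpy as np

def depth_to_point_cloud(depth, color, fx, fy, cx, cy, depth_scale=1000.0):
    """Back-project a depth image into a colored point cloud with the
    pinhole camera model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64) / depth_scale   # raw depth units -> meters
    valid = z > 0                                # drop pixels with missing depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = color[valid]                        # assign an RGB color per point
    return points, colors
```

Each valid pixel thus yields one colored 3-D point, which is the point cloud data used in the following steps.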
Step 104: extract feature vectors from the color image and the depth image, and determine the key frame images of the indoor environment images according to the feature vectors.
Considering that the depth image contains abundant invariant features, and to ensure the completeness of the depth image features, extracting the feature vectors from the color image and the depth image comprises: extracting a 256-bit BRIEF descriptor from the color image and from the depth image respectively, concatenating the two extracted 256-bit descriptors to obtain a 512-bit descriptor, and using the 512-bit descriptor as the extracted feature vector.
In a preferred implementation, the scale-invariant feature transform (SIFT) algorithm is first used to extract feature points from the color image, and an improved binary robust appearance and normal descriptor (BRAND) method is then used to describe the extracted feature points on the color image and the depth image. The description process is: extract a 256-bit BRIEF descriptor from the color image and from the depth image respectively, concatenate the two 256-bit descriptors into one 512-bit descriptor, and use the 512-bit descriptor as the extracted feature vector.
Compared with the prior art, the improved BRAND description method in this embodiment is convenient to extract, computationally light and effective for image matching; it preserves the completeness of the features and improves their representational power. Extracting binary feature vectors from the images guarantees matching precision while effectively improving matching speed.
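The concatenation and matching of binary descriptors described above can be sketched as follows; representing each descriptor as a 0/1 array is an illustrative choice (real implementations pack the bits into bytes), not a detail from the disclosure.

```python
import numpy as np

def brand_like_descriptor(brief_color, brief_depth):
    """Concatenate a 256-bit BRIEF descriptor from the color image with a
    256-bit BRIEF descriptor from the depth image into one 512-bit vector."""
    assert brief_color.size == 256 and brief_depth.size == 256
    return np.concatenate([brief_color, brief_depth])

def hamming_distance(d1, d2):
    """Binary descriptors are compared by Hamming distance (count of
    differing bits), which is what makes matching fast."""
    return int(np.count_nonzero(d1 != d2))
```

A small Hamming distance between two 512-bit descriptors then indicates a likely feature match between two frames.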
In step 102, when the three-dimensional vision sensor captures the indoor environment, a large number of images are collected. To reduce the computation, representative key frame images need to be determined among the collected images, so that subsequent processing can operate on key frames only.
Since key frame images are representative images, determining the key frame images of the indoor environment images according to the feature vectors comprises: calculating a matching parameter between the current frame image and the previous key frame image according to the feature vectors, and determining the current frame image to be a key frame image when the matching parameter meets a preset condition. The matching parameter comprises at least one of the matching degree between the current frame image and the previous key frame image and the acquisition time interval between them; the preset condition comprises at least one of the matching degree being less than a preset matching degree threshold and the acquisition time interval being greater than a preset time interval threshold. The matching degree is defined as the ratio of the number of pixels in the overlapping region after two images are matched to the number of pixels of the whole image.
Since the images collected by the sensor need to be analyzed one by one, the first frame is preferably taken as a key frame image. Each subsequent frame is matched against the most recent key frame image (i.e. the previous key frame image); if the matching degree is less than the preset matching degree threshold, the current frame is taken as a new key frame image.
In another implementation, the acquisition time interval between the current frame and the most recent key frame image can also be calculated; if the interval is greater than the preset time interval threshold, the current frame is taken as a new key frame image. Determining key frame images reduces the computation of subsequent image processing and avoids the waste of system resources caused by processing a large number of images.
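The key frame decision rule described above can be sketched as follows; the threshold values are illustrative placeholders, since the disclosure does not specify them.

```python
def is_key_frame(matching_degree, time_interval,
                 match_threshold=0.6, interval_threshold=2.0):
    """A frame is promoted to key frame when its overlap with the previous
    key frame falls below the matching-degree threshold, OR when too much
    time has elapsed since the previous key frame was acquired."""
    return matching_degree < match_threshold or time_interval > interval_threshold
```

For example, a frame with 50% overlap with the previous key frame would become a new key frame, while one with 90% overlap captured a fraction of a second later would be skipped.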
Step 106: calculate the transformation parameters between the key frame images in combination with the point cloud data.
Those skilled in the art will appreciate that the key frame images obtained after the three-dimensional vision sensor captures the indoor environment are independent images. Transformation parameters (also called rotation and translation parameters) exist between key frame images; by connecting multiple key frame images through these parameters, the indoor environment can be described and three-dimensional modeling can be performed.
In this embodiment, calculating the transformation parameters between the key frame images in combination with the point cloud data comprises: (1) applying the RANSAC algorithm to the key frame images to obtain initial transformation parameters between them; (2) taking the initial transformation parameters as a starting point, applying the ICP algorithm to the point cloud data to obtain accurate transformation parameters between the key frame images, and using the accurate transformation parameters as the transformation parameters between the key frame images.
In process (1), the RANSAC algorithm rejects possible mismatched points to improve matching efficiency and computes the initial transformation parameters. In process (2), with the RANSAC result as the starting point, iterative matching based on the ICP algorithm yields the accurate transformation parameters.
In this embodiment, combining the RANSAC and ICP algorithms to calculate the transformation parameters gives fast computation and improves image processing efficiency. The ICP algorithm can be replaced by variants such as the Fast-ICP (accelerated iterative closest point) algorithm or the GICP (iterative closest point based on genetic iteration) algorithm.
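The core alignment step that ICP repeats can be illustrated by the closed-form rigid transform for known point correspondences (the SVD-based Kabsch solution). This is a standard textbook sketch, not code from the disclosure; a full ICP loop would alternate this step with nearest-neighbor correspondence search.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Find the rotation R and translation t minimizing ||R @ p + t - q||
    over corresponding point pairs (p, q), via the Kabsch/SVD method."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Given point clouds of two key frames with matched points, this recovers the rotation and translation parameters between them.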
Step 108: connect the key frame images according to the transformation parameters between them, generate the three-dimensional point cloud map of the indoor environment, and make the three-dimensional map of the indoor environment according to the three-dimensional point cloud map.
The three-dimensional point cloud map can be used as required to make a three-dimensional surface patch map, a three-dimensional grid map, a two-dimensional grid map and so on. A three-dimensional surface patch map generated by a signed distance field method is mainly used for real-time human-machine interaction and visualization; rasterizing this map further yields a three-dimensional grid map that can be used for real-time obstacle avoidance; projecting it yields a two-dimensional grid map that can be used for global path planning.
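The projection to a two-dimensional grid map can be sketched as follows. The cell size and map extent are illustrative assumptions; the disclosure does not specify them.

```python
import numpy as np

def project_to_grid(points, cell=0.05, x_range=(-5.0, 5.0), y_range=(-5.0, 5.0)):
    """Project 3-D map points onto the ground plane and mark occupied
    cells, yielding the kind of 2-D grid map usable for path planning."""
    nx = int(round((x_range[1] - x_range[0]) / cell))
    ny = int(round((y_range[1] - y_range[0]) / cell))
    grid = np.zeros((ny, nx), dtype=bool)
    ix = np.floor((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = np.floor((points[:, 1] - y_range[0]) / cell).astype(int)
    inside = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)   # drop points outside the map
    grid[iy[inside], ix[inside]] = True
    return grid
```

A planner can then treat occupied cells as obstacles when computing a global path.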
With the method in this embodiment, both the two-dimensional image features and the depth features of the indoor environment are considered during feature extraction, and the three-dimensional map is made in combination with the depth features of the indoor environment, which ensures that the produced three-dimensional map has strong environmental expressive ability and a better visualization effect.
Fig. 2 shows a second schematic flow chart of the three-dimensional map making method for an indoor environment provided by the first embodiment of the present invention. To guarantee the visualization effect of the final three-dimensional map, as shown in Fig. 2, after the transformation parameters between the key frame images are calculated in step 106, the method further comprises step 202: perform closed-loop detection on the key frame images, and, when a closed-loop point is detected, optimize the transformation parameters between the key frame images with the G2O algorithm and take the optimized transformation parameters as the transformation parameters between the key frame images.
Step 202 is preferably performed after step 106 and before step 108. When the three-dimensional vision sensor starts collecting images at a certain indoor location and eventually returns to the same location, the collected images form a complete closed loop, and so do the key frame images. It will be appreciated that when the key frame images form a complete closed loop, the first and last frames depict the same scene, so their matching degree is high.
On this basis, in step 202, performing closed-loop detection on the key frame images comprises comparing the key frame images pairwise; when the matching degree between two frames is greater than a preset threshold, the two frames are determined to form a closed-loop point. Preferably, the current key frame image is feature-matched against every earlier key frame image and the matching degree is calculated; when the matching degree between the current key frame image and some earlier frame exceeds the preset threshold, that frame is determined to form a closed-loop point.
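The pairwise comparison just described can be sketched as follows, with a caller-supplied matching-degree function standing in for the feature matching; the threshold of 0.8 is an illustrative placeholder.

```python
def detect_loop_closure(current, past_key_frames, matching_degree, threshold=0.8):
    """Compare the current key frame against every earlier key frame; the
    first pair whose matching degree exceeds the threshold is reported as
    a closed-loop point. Returns the index of that frame, or None."""
    for index, past in enumerate(past_key_frames):
        if matching_degree(current, past) > threshold:
            return index
    return None
```

When an index is returned, the graph optimization of step 202 is triggered; when None is returned, step 108 proceeds without optimization.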
In step 202, when a closed-loop point is detected, the G2O algorithm is used to optimize the transformation parameters between the key frame images, the optimized transformation parameters are taken as the transformation parameters between the key frame images, and step 108 is then performed. Relying on the closure of the loop, the errors accumulated during image processing are distributed evenly over the nodes, which guarantees the overall structure of the produced three-dimensional map. The G2O algorithm can be replaced by the TORO (tree-based network optimizer) algorithm.
In step 202, if no closed-loop point is detected, the transformation parameter optimization is skipped and step 108 is performed directly.
Fig. 3 shows a third schematic flow chart of the three-dimensional map making method for an indoor environment provided by the first embodiment of the present invention. As shown in Fig. 3, the method comprises:
Step 301: obtain the color image and depth image of the indoor environment by a three-dimensional vision sensor;
Step 302: convert the depth image into point cloud data based on the pinhole camera model;
Step 303: use the color image to assign colors to the point cloud data;
Step 304: extract feature vectors from the color image and the depth image according to the SIFT algorithm and the improved BRAND algorithm;
Step 305: determine the key frame images according to the feature vectors;
Step 306: combine the RANSAC algorithm, the ICP algorithm and the point cloud data to calculate the rotation and translation parameters between the key frame images;
Step 307: perform closed-loop point detection on the key frame images; when a closed-loop point is detected, perform step 308, otherwise perform step 309;
Step 308: use the G2O algorithm to optimize the rotation and translation parameters, and take the optimized parameters as the rotation and translation parameters between the key frame images;
Step 309: make the three-dimensional scene according to the rotation and translation parameters between the key frame images.
All the algorithms involved in Figs. 1 to 3 can be replaced by corresponding GPU (graphics processing unit) accelerated algorithms.
In summary, compared with the prior art, the three-dimensional map making method for an indoor environment in this embodiment has the following advantages:
(1) it makes full use of the two-dimensional image and depth information provided by the three-dimensional sensor, so that the constructed color three-dimensional environment map has stronger environmental expressive ability and better visualization and interactivity;
(2) the improved BRAND feature description method has strong expressive ability and processing speed; the extracted features exhibit good invariance to scale, rotation, translation and illumination, which effectively improves mapping accuracy and running time when applied to three-dimensional mapping;
(3) according to user needs, the map can be expressed in multiple forms (three-dimensional surface patch map, three-dimensional grid map, two-dimensional grid map), serving both global path planning and local obstacle avoidance, with a wide range of applications;
(4) image collection by a three-dimensional vision sensor has the advantages of low platform cost, simple structure, easy operation and good portability, and can be transplanted to a corresponding mobile platform to build an intelligent mobile terminal;
(5) there is no need to understand the spatial structure of the environment in advance, the method is not disturbed by the visible-light spectrum of the environment, mapping is fast, and online creation and updating of the indoor map can be achieved.
Embodiment two
Corresponding to the three-dimensional map making method for an indoor environment above, an embodiment of the present invention further provides a three-dimensional map producing device for an indoor environment, used to perform the above method.
Fig. 4 shows a first structural schematic of the three-dimensional map producing device for an indoor environment provided by the second embodiment of the invention. As shown in Fig. 4, the device in this embodiment comprises:
Image capture module 41, for gathering an indoor environment image through a three-dimensional visual sensor, the indoor environment image comprising a colour image and a depth image, and for converting the depth image into point cloud data;
Key frame determination module 42, for extracting feature vectors from the colour image and the depth image, and determining the key frame images of the indoor environment image from the feature vectors;
Conversion parameter calculation module 43, for calculating the conversion parameters between key frame images in combination with the point cloud data;
Three-dimensional map making module 44, for stitching the key frame images together according to the conversion parameters between them, generating a three-dimensional point cloud map of the indoor environment, and making the three-dimensional map of the indoor environment from that point cloud map.
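The depth-to-point-cloud conversion performed inside the image capture module can be illustrated with standard pinhole back-projection. The sketch below is illustrative only: the intrinsic parameters (fx, fy, cx, cy) are assumed Kinect-like values, since the patent does not specify a sensor model.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) to an N x 3 point cloud with
    the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]        # drop pixels with no depth reading

# Tiny synthetic example: a 2 x 2 depth image, every pixel 1 m away
cloud = depth_to_point_cloud(np.ones((2, 2)),
                             fx=525.0, fy=525.0, cx=0.5, cy=0.5)
```

Each valid depth pixel yields one 3D point; pixels with zero depth (no sensor reading) are discarded.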
Preferably, the key frame determination module 42 comprises a feature vector extraction unit, for extracting a 256-bit binary independent feature (BRIEF) from the colour image and another from the depth image, concatenating the two extracted 256-bit BRIEF descriptors to obtain a 512-bit BRIEF, and taking the 512-bit BRIEF as the extracted feature vector.
In this embodiment, feature extraction by the feature vector extraction unit is convenient, computationally light and efficient for image matching; it preserves the completeness of the features and improves their descriptive power. Extracting binary feature vectors from the images guarantees matching precision while effectively raising matching speed.
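A minimal sketch of the 256 + 256 to 512-bit concatenation described above, with random bit strings standing in for real BRIEF outputs. Binary descriptors are conventionally compared by Hamming distance, which is why they match quickly; the patent does not spell out the metric, so that part is an assumption.

```python
import numpy as np

def concat_brief(brief_color, brief_depth):
    """Concatenate the 256-bit BRIEF from the colour image with the
    256-bit BRIEF from the depth image into one 512-bit descriptor."""
    assert brief_color.size == 256 and brief_depth.size == 256
    return np.concatenate([brief_color, brief_depth])

def hamming(a, b):
    """Binary descriptors are compared with the Hamming distance."""
    return int(np.count_nonzero(a != b))

# Random bit strings standing in for real BRIEF outputs
rng = np.random.default_rng(0)
color_bits = rng.integers(0, 2, 256, dtype=np.uint8)
depth_bits = rng.integers(0, 2, 256, dtype=np.uint8)
descriptor = concat_brief(color_bits, depth_bits)
```

The joint descriptor keeps the appearance half and the geometry half side by side, so one Hamming comparison weighs both cues.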
Preferably, the key frame determination module 42 comprises a key frame matching unit, for calculating a matching parameter between the current frame image of the indoor environment image and the previous key frame image from the feature vectors, and determining the current frame image to be a key frame image of the indoor environment image when the matching parameter meets a preset condition. The matching parameter comprises at least one of: the matching degree between the current frame image and the previous key frame image; the acquisition time interval between the current frame image and the previous key frame image. The preset condition comprises at least one of: the matching degree between the current frame image and the previous key frame image is below a preset matching degree threshold; the acquisition time interval between the current frame image and the previous key frame image exceeds a preset time interval threshold.
In this embodiment, determining key frame images with the key frame matching unit reduces the computational load of subsequent image processing and avoids wasting system resources on processing a large number of images.
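The key-frame test above reduces to a small predicate: keep a frame when it matches the previous key frame poorly, or when too much time has elapsed. The threshold values below are illustrative assumptions; the patent only calls them "preset".

```python
def is_new_keyframe(match_degree, seconds_since_keyframe,
                    match_threshold=0.3, time_threshold=2.0):
    """A frame becomes a key frame when it matches the previous key frame
    poorly (match degree below the threshold) OR too much time has passed
    since that key frame was acquired. Thresholds are illustrative."""
    return (match_degree < match_threshold
            or seconds_since_keyframe > time_threshold)

# A recent, well-matched frame is redundant; a poorly matched one is kept
redundant = is_new_keyframe(match_degree=0.8, seconds_since_keyframe=0.5)
kept = is_new_keyframe(match_degree=0.1, seconds_since_keyframe=0.5)
```

The low-match branch captures viewpoint change, while the time branch guarantees the map is refreshed even when the camera barely moves.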
Preferably, the conversion parameter calculation module 43 comprises: an initial conversion parameter calculation unit, for applying the RANSAC algorithm to the key frame images to obtain the initial conversion parameters between them; and an accurate conversion parameter calculation unit, for taking the initial conversion parameters as a starting point, applying the ICP algorithm to the point cloud data to obtain the accurate conversion parameters between the key frame images, and taking the accurate conversion parameters as the conversion parameters between key frame images.
In this embodiment, combining the RANSAC and ICP algorithms through the initial and accurate conversion parameter calculation units computes the conversion parameters quickly and improves image processing efficiency.
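Both the RANSAC hypothesis step and each ICP iteration ultimately estimate a rigid rotation-translation pair from point correspondences. The closed-form SVD (Kabsch) solution below illustrates that core step; the full RANSAC sampling loop and the ICP nearest-neighbour loop are omitted, so this is a sketch of the shared sub-problem, not of the patented pipeline.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t.
    This SVD step is the core alignment used inside both RANSAC hypothesis
    fitting and each ICP iteration."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Recover a known 90-degree rotation about z plus a translation
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
src = np.random.default_rng(1).normal(size=(20, 3))
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
```

With exact correspondences the estimate recovers the ground-truth motion; RANSAC supplies robust correspondences and ICP refines them against the dense point clouds.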
Fig. 5 shows a second structural schematic of the three-dimensional map producing device for an indoor environment provided by the second embodiment of the invention. As shown in Fig. 5, the device in this embodiment further comprises:
Conversion parameter optimization module 51, for performing loop-closure detection on the key frame images, and, when a loop-closure point is detected, optimising the conversion parameters between key frame images with the G2O algorithm and taking the optimised conversion parameters as the conversion parameters between key frame images. In this embodiment, the conversion parameter optimization module 51 guarantees the overall consistency of the three-dimensional map produced.
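What loop-closure optimisation achieves can be seen in a deliberately simplified one-dimensional analogue: a loop constraint says the chained relative transforms should return to the starting pose, and the accumulated drift is redistributed over the edges. G2O solves the corresponding least-squares problem over full 6-DoF poses in a graph; the toy below is only an assumed illustration of the idea, not the G2O algorithm itself.

```python
def correct_drift(relative_steps, loop_error):
    """Toy 1-D stand-in for pose-graph optimisation: spread the accumulated
    loop-closure error evenly over every edge of the loop. A real graph
    optimiser such as G2O does this in a least-squares sense over full
    6-DoF poses with per-edge information matrices."""
    n = len(relative_steps)
    return [s - loop_error / n for s in relative_steps]

# Odometry around a closed loop should sum to zero but drifted by +0.4
steps = [1.1, 1.1, -1.0, -0.8]
corrected = correct_drift(steps, sum(steps))
```

After correction the loop closes, which is exactly the global consistency the conversion parameter optimization module enforces on the map.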
With the device in this embodiment, image feature extraction takes into account both the two-dimensional image features and the depth features of the indoor environment, and the three-dimensional map is made in combination with those depth features, which ensures that the resulting three-dimensional map expresses the environment strongly and offers better visualisation.
The three-dimensional map producing device for an indoor environment provided by the embodiment of the present invention may be specific hardware on a device, or software or firmware installed on a device. The device provided by the embodiment of the present invention realises the same principle and produces the same technical effect as the foregoing method embodiment; for brevity of description, where the device embodiment is silent, reference may be made to the corresponding content of the foregoing method embodiment. Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the system, device and units described above may all refer to the corresponding processes in the foregoing method embodiment and are not repeated here.
In the embodiments provided by the present invention, it should be understood that the disclosed device and method may be realised in other ways. The device embodiment described above is merely schematic; for example, the division into units is only a division by logical function, and other divisions are possible in actual implementation. For another example, multiple units or assemblies may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through communication interfaces, devices or units, and may be electrical, mechanical or of other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the scheme of this embodiment.
In addition, the functional units in the embodiments provided by the invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit.
If the functions are realised in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical scheme of the present invention, in essence, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and comprises instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard drive, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disc or an optical disc.
It should be noted that similar labels and letters denote similar items in the accompanying drawings, so once an item is defined in one drawing it need not be further defined or explained in subsequent drawings. In addition, the terms "first", "second", "third" and so on are used only to distinguish the description and shall not be interpreted as indicating or implying relative importance.
Finally, it should be noted that the embodiments above are only specific embodiments of the present invention, intended to illustrate the technical scheme of the invention rather than to limit it, and the protection scope of the invention is not limited thereto. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may, within the technical scope disclosed by the invention, still modify the technical schemes described in the foregoing embodiments, readily conceive of changes to them, or replace some of their technical features with equivalents; such modifications, changes or replacements do not make the essence of the corresponding technical scheme depart from the spirit and scope of the technical schemes of the embodiments of the invention, and shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A three-dimensional map making method for an indoor environment, characterised in that the method comprises:
gathering an indoor environment image through a three-dimensional visual sensor, the indoor environment image comprising a colour image and a depth image, and converting the depth image into point cloud data;
extracting feature vectors from the colour image and the depth image, and determining key frame images of the indoor environment image from the feature vectors;
calculating conversion parameters between the key frame images in combination with the point cloud data;
stitching the key frame images together according to the conversion parameters between the key frame images, generating a three-dimensional point cloud map of the indoor environment, and making a three-dimensional map of the indoor environment from the three-dimensional point cloud map.
2. The method according to claim 1, characterised in that extracting feature vectors from the colour image and the depth image comprises:
extracting a 256-bit binary independent feature (BRIEF) from the colour image and another from the depth image, concatenating the two extracted 256-bit BRIEF descriptors to obtain a 512-bit BRIEF, and taking the 512-bit BRIEF as the extracted feature vector.
3. The method according to claim 1, characterised in that determining key frame images of the indoor environment image from the feature vectors comprises:
calculating a matching parameter between the current frame image of the indoor environment image and the previous key frame image from the feature vectors, and determining the current frame image to be a key frame image of the indoor environment image when the matching parameter meets a preset condition,
wherein the matching parameter comprises at least one of: the matching degree between the current frame image and the previous key frame image; the acquisition time interval between the current frame image and the previous key frame image;
and the preset condition comprises at least one of: the matching degree between the current frame image and the previous key frame image is below a preset matching degree threshold; the acquisition time interval between the current frame image and the previous key frame image exceeds a preset time interval threshold.
4. The method according to claim 1, characterised in that calculating the conversion parameters between the key frame images in combination with the point cloud data comprises:
applying the RANSAC algorithm to the key frame images to obtain initial conversion parameters between the key frame images;
taking the initial conversion parameters as a starting point, applying the ICP algorithm to the point cloud data to obtain accurate conversion parameters between the key frame images, and taking the accurate conversion parameters as the conversion parameters between the key frame images.
5. The method according to any one of claims 1 to 4, characterised in that, after calculating the conversion parameters between the key frame images in combination with the point cloud data, the method further comprises:
performing loop-closure detection on the key frame images, and, when a loop-closure point is detected, optimising the conversion parameters between the key frame images with the G2O algorithm and determining the optimised conversion parameters to be the conversion parameters between the key frame images.
6. A three-dimensional map producing device for an indoor environment, characterised in that the device comprises:
an image capture module, for gathering an indoor environment image through a three-dimensional visual sensor, the indoor environment image comprising a colour image and a depth image, and for converting the depth image into point cloud data;
a key frame determination module, for extracting feature vectors from the colour image and the depth image, and determining key frame images of the indoor environment image from the feature vectors;
a conversion parameter calculation module, for calculating conversion parameters between the key frame images in combination with the point cloud data;
a three-dimensional map making module, for stitching the key frame images together according to the conversion parameters between the key frame images, generating a three-dimensional point cloud map of the indoor environment, and making a three-dimensional map of the indoor environment from the three-dimensional point cloud map.
7. The device according to claim 6, characterised in that the key frame determination module comprises:
a feature vector extraction unit, for extracting a 256-bit binary independent feature (BRIEF) from the colour image and another from the depth image, concatenating the two extracted 256-bit BRIEF descriptors to obtain a 512-bit BRIEF, and taking the 512-bit BRIEF as the extracted feature vector.
8. The device according to claim 6, characterised in that the key frame determination module comprises:
a key frame matching unit, for calculating a matching parameter between the current frame image of the indoor environment image and the previous key frame image from the feature vectors, and determining the current frame image to be a key frame image of the indoor environment image when the matching parameter meets a preset condition,
wherein the matching parameter comprises at least one of: the matching degree between the current frame image and the previous key frame image; the acquisition time interval between the current frame image and the previous key frame image;
and the preset condition comprises at least one of: the matching degree between the current frame image and the previous key frame image is below a preset matching degree threshold; the acquisition time interval between the current frame image and the previous key frame image exceeds a preset time interval threshold.
9. The device according to claim 6, characterised in that the conversion parameter calculation module comprises:
an initial conversion parameter calculation unit, for applying the RANSAC algorithm to the key frame images to obtain initial conversion parameters between the key frame images;
an accurate conversion parameter calculation unit, for taking the initial conversion parameters as a starting point, applying the ICP algorithm to the point cloud data to obtain accurate conversion parameters between the key frame images, and taking the accurate conversion parameters as the conversion parameters between the key frame images.
10. The device according to any one of claims 6 to 9, characterised in that the device further comprises:
a conversion parameter optimization module, for performing loop-closure detection on the key frame images, and, when a loop-closure point is detected, optimising the conversion parameters between the key frame images with the G2O algorithm and determining the optimised conversion parameters to be the conversion parameters between the key frame images.
CN201610014802.7A 2016-01-11 2016-01-11 Manufacturing method and device for three-dimensional map of indoor environment Pending CN105678842A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610014802.7A CN105678842A (en) 2016-01-11 2016-01-11 Manufacturing method and device for three-dimensional map of indoor environment


Publications (1)

Publication Number Publication Date
CN105678842A true CN105678842A (en) 2016-06-15

Family

ID=56299897

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610014802.7A Pending CN105678842A (en) 2016-01-11 2016-01-11 Manufacturing method and device for three-dimensional map of indoor environment

Country Status (1)

Country Link
CN (1) CN105678842A (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292949A (en) * 2017-05-25 2017-10-24 深圳先进技术研究院 Three-dimensional rebuilding method, device and the terminal device of scene
CN107516339A (en) * 2017-08-25 2017-12-26 联想(北京)有限公司 A kind of information processing method and information processor
WO2018040982A1 (en) * 2016-08-30 2018-03-08 成都理想境界科技有限公司 Real time image superposition method and device for enhancing reality
CN108256060A (en) * 2018-01-16 2018-07-06 广州视源电子科技股份有限公司 A kind of closed loop detection method, device, terminal and storage medium
CN108364257A (en) * 2018-02-06 2018-08-03 深圳市菲森科技有限公司 The joining method and system of 3-D scanning point cloud data
CN108854031A (en) * 2018-05-29 2018-11-23 深圳臻迪信息技术有限公司 The method and relevant apparatus of exercise data are analyzed by unmanned camera work
WO2018214086A1 (en) * 2017-05-25 2018-11-29 深圳先进技术研究院 Method and apparatus for three-dimensional reconstruction of scene, and terminal device
CN108961385A (en) * 2017-05-22 2018-12-07 中国人民解放军信息工程大学 A kind of SLAM patterning process and device
CN109949414A (en) * 2019-01-31 2019-06-28 顺丰科技有限公司 The construction method and device of indoor map
CN110325938A (en) * 2017-05-23 2019-10-11 东芝生活电器株式会社 Electric dust collector
CN110874851A (en) * 2019-10-25 2020-03-10 深圳奥比中光科技有限公司 Method, device, system and readable storage medium for reconstructing three-dimensional model of human body
CN112085026A (en) * 2020-08-26 2020-12-15 的卢技术有限公司 Closed loop detection method based on deep neural network semantic segmentation
CN112198878A (en) * 2020-09-30 2021-01-08 深圳市银星智能科技股份有限公司 Instant map construction method and device, robot and storage medium
CN113447014A (en) * 2021-08-30 2021-09-28 深圳市大道智创科技有限公司 Indoor mobile robot, mapping method, positioning method, and mapping positioning device
CN115222602A (en) * 2022-08-15 2022-10-21 北京城市网邻信息技术有限公司 Image splicing method, device, equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103247075A (en) * 2013-05-13 2013-08-14 北京工业大学 Variational mechanism-based indoor scene three-dimensional reconstruction method
CN104677347A (en) * 2013-11-27 2015-06-03 哈尔滨恒誉名翔科技有限公司 Indoor mobile robot capable of producing 3D navigation map based on Kinect


Non-Patent Citations (4)

Title
PETER HENRY ET AL.: "RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments", ISER 2010 *
LIU Yanli: "Research on 3D Simultaneous Localization and Mapping Fusing Colour and Depth Information", China Doctoral Dissertations Full-text Database, Information Science and Technology *
LI Qinghua: "Environment Feature Extraction and Scene Reconstruction Based on RGB-D Data", China Master's Theses Full-text Database, Information Science and Technology *
WANG Yalong et al.: "3D Map Creation of Indoor Environments Based on an RGB-D Camera", Application Research of Computers *

Cited By (21)

Publication number Priority date Publication date Assignee Title
WO2018040982A1 (en) * 2016-08-30 2018-03-08 成都理想境界科技有限公司 Real time image superposition method and device for enhancing reality
CN108961385A (en) * 2017-05-22 2018-12-07 中国人民解放军信息工程大学 A kind of SLAM patterning process and device
CN108961385B (en) * 2017-05-22 2023-05-02 中国人民解放军信息工程大学 SLAM composition method and device
CN110325938A (en) * 2017-05-23 2019-10-11 东芝生活电器株式会社 Electric dust collector
WO2018214086A1 (en) * 2017-05-25 2018-11-29 深圳先进技术研究院 Method and apparatus for three-dimensional reconstruction of scene, and terminal device
CN107292949A (en) * 2017-05-25 2017-10-24 深圳先进技术研究院 Three-dimensional rebuilding method, device and the terminal device of scene
CN107292949B (en) * 2017-05-25 2020-06-16 深圳先进技术研究院 Three-dimensional reconstruction method and device of scene and terminal equipment
CN107516339A (en) * 2017-08-25 2017-12-26 联想(北京)有限公司 A kind of information processing method and information processor
CN107516339B (en) * 2017-08-25 2020-06-23 联想(北京)有限公司 Information processing method and information processing device
CN108256060B (en) * 2018-01-16 2021-02-09 广州视源电子科技股份有限公司 Closed loop detection method, device, terminal and storage medium
CN108256060A (en) * 2018-01-16 2018-07-06 广州视源电子科技股份有限公司 A kind of closed loop detection method, device, terminal and storage medium
CN108364257A (en) * 2018-02-06 2018-08-03 深圳市菲森科技有限公司 The joining method and system of 3-D scanning point cloud data
CN108364257B (en) * 2018-02-06 2023-05-09 深圳市菲森科技有限公司 Splicing method and system for three-dimensional scanning point cloud data
CN108854031A (en) * 2018-05-29 2018-11-23 深圳臻迪信息技术有限公司 The method and relevant apparatus of exercise data are analyzed by unmanned camera work
CN109949414A (en) * 2019-01-31 2019-06-28 顺丰科技有限公司 The construction method and device of indoor map
CN110874851A (en) * 2019-10-25 2020-03-10 深圳奥比中光科技有限公司 Method, device, system and readable storage medium for reconstructing three-dimensional model of human body
CN112085026A (en) * 2020-08-26 2020-12-15 的卢技术有限公司 Closed loop detection method based on deep neural network semantic segmentation
CN112198878A (en) * 2020-09-30 2021-01-08 深圳市银星智能科技股份有限公司 Instant map construction method and device, robot and storage medium
CN112198878B (en) * 2020-09-30 2021-09-28 深圳市银星智能科技股份有限公司 Instant map construction method and device, robot and storage medium
CN113447014A (en) * 2021-08-30 2021-09-28 深圳市大道智创科技有限公司 Indoor mobile robot, mapping method, positioning method, and mapping positioning device
CN115222602A (en) * 2022-08-15 2022-10-21 北京城市网邻信息技术有限公司 Image splicing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN105678842A (en) Manufacturing method and device for three-dimensional map of indoor environment
Li et al. Manhattan-world urban reconstruction from point clouds
CN111008422B (en) Building live-action map making method and system
JP2016218999A (en) Method for training classifier to detect object represented in image of target environment
CN104778654A (en) Intangible cultural heritage digital display system and method thereof
KR100738107B1 (en) Apparatus and method for modeling based on 3 dimension point
CN105913485A (en) Three-dimensional virtual scene generation method and device
CN101414383B (en) Image processing apparatus and image processing method
CN103577793A (en) Gesture recognition method and device
CN104199659A (en) Method and device for exporting model information capable of being identified by 3DMAX
Cotella From 3D point clouds to HBIM: application of artificial intelligence in cultural heritage
KR20200136723A (en) Method and apparatus for generating learning data for object recognition using virtual city model
Wang et al. A Gestalt rules and graph-cut-based simplification framework for urban building models
CN115330940A (en) Three-dimensional reconstruction method, device, equipment and medium
CN114266780A (en) Building single instance dividing method and device
JP2019185776A (en) Method and apparatus for generating three-dimensional map of indoor space
Zhang et al. A geometry and texture coupled flexible generalization of urban building models
CN113724388A (en) Method, device and equipment for generating high-precision map and storage medium
CN111881121B (en) Automatic driving data filling method and device
CN114969586A (en) BIM (building information modeling) graphic engine loading method and device based on WEB side
Lin et al. Visual saliency and quality evaluation for 3D point clouds and meshes: An overview
Zollmann et al. Dense depth maps from sparse models and image coherence for augmented reality
CN114445574B (en) Method, device and equipment for converting GeoJSON data format into three-dimensional GLB format
Li et al. Combining data-and-model-driven 3D modelling (CDMD3DM) for small indoor scenes using RGB-D data
KR102521565B1 (en) Apparatus and method for providing and regenerating augmented reality service using 3 dimensional graph neural network detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160615