CN110148217A - Real-time three-dimensional reconstruction method, apparatus and device - Google Patents

Real-time three-dimensional reconstruction method, apparatus and device

Info

Publication number
CN110148217A
CN110148217A (application CN201910437856.8A)
Authority
CN
China
Prior art keywords
image
model
reconstruction
key frame
pose information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910437856.8A
Other languages
Chinese (zh)
Inventor
郭建亚
李骊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing HJIMI Technology Co Ltd
Original Assignee
Beijing HJIMI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing HJIMI Technology Co Ltd
Priority to CN201910437856.8A
Publication of CN110148217A
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The real-time three-dimensional reconstruction method, apparatus and device disclosed by the invention belong to the technical field of computer vision. The method comprises: segmenting an object in an acquired first image based on an initial 3D model to obtain a second image containing only an object of interest; obtaining object pose information from the second image and the initial 3D model, and extracting key frame images from the first image according to the object pose information; transforming the second image according to the object pose information to obtain a third image, fusing the third image with the initial 3D model to obtain a new 3D model, performing mesh reconstruction on the new 3D model to obtain a first reconstruction model, and performing texture mapping on the first reconstruction model according to the object pose information and the key frame images to obtain a second reconstruction model.

Description

Real-time three-dimensional reconstruction method, apparatus and device
Technical field
The present invention relates to the field of computer vision, and in particular to a real-time three-dimensional reconstruction method, apparatus and device.
Background technique
Traditional three-dimensional reconstruction usually takes two-dimensional images as input and reconstructs the three-dimensional model of the scene. Limited by the input data, however, the reconstructed three-dimensional model is often incomplete, differs greatly from the real object, and has a low sense of realism.
Later, as consumer-grade 3D sensors (such as Microsoft Kinect for XBOX, ASUS Xtion, Apple iPhone X, Intel RealSense, etc.) appeared more and more in people's field of view, object three-dimensional reconstruction techniques based on 3D sensors also became more and more widely used. Since most such scanning devices are equipped with a color sensor and can simultaneously acquire the depth map and the RGB image of the object scene, such sensors are generally referred to as RGBD sensors or RGBD cameras. The KinectFusion three-dimensional reconstruction method published in 2011 was the earliest real-time dense three-dimensional reconstruction method based on an RGBD camera, and it remains the most popular three-dimensional reconstruction method to date; it is suitable for modeling static scenes or moving rigid bodies. The Fusion4D three-dimensional reconstruction method that appeared in 2016 can reconstruct rapidly changing non-rigid bodies, but it requires multiple depth cameras, whose installation and arrangement are relatively complex and costly.
Summary of the invention
In view of the deficiencies of the prior art, it is an object of the present invention to propose a real-time three-dimensional reconstruction method, apparatus and device that can effectively segment an object of interest and achieve rapid reconstruction of a dynamic object in a scene. One aspect of the present invention provides a real-time three-dimensional reconstruction method, comprising:
segmenting an object in an acquired first image based on an initial 3D model to obtain a second image containing only an object of interest, the first image being a 3D image;
obtaining object pose information from the second image and the initial 3D model, and extracting key frame images from the first image according to the object pose information;
transforming the second image according to the object pose information to obtain a third image, fusing the third image with the initial 3D model to obtain a new 3D model, and performing mesh reconstruction on the new 3D model to obtain a first reconstruction model.
Preferably, after obtaining the first reconstruction model, the method further includes performing texture mapping on the first reconstruction model according to the object pose information and the key frame images to obtain a second reconstruction model.
The above segmentation of the object in the acquired first image based on the initial 3D model, to obtain the second image containing only the object of interest, specifically includes:
Step a1: performing plane detection on the first image and counting the area of each detected plane; rejecting, according to the detected plane areas, the pixels belonging to planes in the first image whose area is greater than a threshold;
Step a2: obtaining object-of-interest form statistics from the initial 3D model, the form statistics including the centroid of the image, the boundary of the image and the surface model of the image;
Step a3: segmenting the foreground image containing the object of interest out of the first image according to the image centroid; labeling the pixels in the foreground image using the image boundary or the image surface model, marking pixels that satisfy a preset condition as the object of interest and pixels that do not as extraneous objects;
Step a4: performing morphological operations respectively on the pixels labeled as the object of interest and the pixels labeled as extraneous objects, to obtain an adjusted object-of-interest label region;
Step a5: rejecting from the object-of-interest label region the pixels that are simultaneously labeled as object of interest and as extraneous object, taking the remaining pixels as the final object-of-interest segmentation points, and determining the second image containing only the object of interest from the final segmentation points.
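The labeling and rejection logic of steps a1 and a3-a5 can be sketched on toy 2-D arrays. This is a minimal illustration, not the patent's implementation: the function names, the use of the surface-distance criterion for step a3, and the simple 4-neighbour dilation standing in for the unspecified morphological operations are all assumptions.

```python
import numpy as np

def binary_dilate(mask, it=1):
    """4-neighbour binary dilation (toy stand-in for a morphology library)."""
    m = mask.copy()
    for _ in range(it):
        grown = m.copy()
        grown[1:, :] |= m[:-1, :]
        grown[:-1, :] |= m[1:, :]
        grown[:, 1:] |= m[:, :-1]
        grown[:, :-1] |= m[:, 1:]
        m = grown
    return m

def segment_object_of_interest(plane_id, plane_areas, surface_dist, area_thr, d):
    """Sketch of steps a1, a3, a4, a5.

    plane_id     : per-pixel id of the detected plane (-1 = no plane)
    plane_areas  : dict plane id -> area, from plane detection
    surface_dist : per-pixel distance to the initial-model surface (hypothetical)
    """
    # a1: reject pixels lying on any detected plane larger than area_thr
    big_ids = [i for i, a in plane_areas.items() if a > area_thr]
    valid = ~np.isin(plane_id, big_ids)
    # a3: label by the surface-model criterion (mode 2 of the labeling step)
    obj = valid & (surface_dist < d)
    foreign = valid & ~obj
    # a4: morphological adjustment of the object-of-interest label set
    obj_adj = binary_dilate(obj)
    # a5: drop pixels carrying both the object and the extraneous label
    return obj_adj & ~(obj_adj & foreign)
```

With a 5x5 frame whose bottom row is a large plane and a 2x2 region close to the model surface, the dilated boundary pixels pick up both labels and are rejected, leaving only the core object pixels.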
Preferably, the above texture mapping of the first reconstruction model according to the object pose information and the key frame images, to obtain the second reconstruction model, specifically includes:
a step of obtaining the color-image local region of each mesh face based on the first reconstruction model, the key frame images and the object pose information;
a step of obtaining the texture region of each mesh face based on the first reconstruction model;
and a step of obtaining a texture atlas and the second reconstruction model based on the color-image local region of the face and the texture region of the face.
The step of obtaining the color-image local region of each face based on the first reconstruction model, the key frame images and the object pose information specifically includes:
Step c1: projecting the first reconstruction model into the coordinate system of a key frame image using the object pose information, and obtaining the correspondence between each face and a color-image local region from the projected positions of the face vertices of the first reconstruction model in the color image of the key frame;
Step c2: calculating the position difference between the projected position of each face vertex of the first reconstruction model in the shape image of the key frame and the corresponding point in the shape image; calculating the angle between the normal vector of the face vertex projected into the shape image of the key frame and the negative z-axis direction;
Step c3: choosing the key frame image with the smallest position difference and angle, and obtaining the color-image local region of the face from the face-to-color-image-region correspondence and the color-image local region of the chosen key frame.
The step of obtaining the texture region of each face based on the first reconstruction model specifically includes:
Step e1: segmenting the first reconstruction model into segment blocks;
Step e2: UV-parameterizing each segment block and packing the blocks compactly into an atlas, then calculating the image coordinates in the atlas corresponding to each face vertex of the first reconstruction model so as to obtain the texture region of the face.
The present invention also provides a real-time three-dimensional reconstruction apparatus, including an image acquisition device and a computing device;
wherein the image acquisition device is used to capture scene images and transmit the first images acquired in real time to the computing device; the computing device includes an image processing unit that performs three-dimensional reconstruction based on the first images, the image processing unit including an image segmentation subunit, a registration subunit, a key frame image extraction subunit, an image fusion subunit, a mesh reconstruction subunit and a texture mapping subunit;
Preferably, the image segmentation subunit is configured to segment the object of interest in the first image based on the initial 3D model to obtain a second image containing only the object of interest;
the registration subunit is configured to obtain object pose information from the second image and the initial 3D model;
the key frame image extraction subunit is configured to extract key frame images from the first image according to the object pose information;
the image fusion subunit is configured to transform the second image according to the object pose information to obtain a third image, and to fuse the third image with the initial 3D model to obtain a new 3D model;
the mesh reconstruction subunit is configured to perform mesh reconstruction on the new 3D model produced by the image fusion subunit to obtain a first reconstruction model;
the texture mapping subunit is configured to perform texture mapping on the first reconstruction model produced by the mesh reconstruction subunit according to the object pose information and the key frame images to obtain a second reconstruction model.
The present invention also provides a computing device, including a processor and a memory;
the memory is used to store program code and transfer the program code to the processor;
the processor is used to execute the above real-time three-dimensional reconstruction method according to the instructions in the program code.
The present invention has the following advantages: the object of interest can be segmented effectively and modeled separately, so the reconstructed object of interest contains no extraneous scene information such as the ground, the background or other attached objects; in addition, dynamic modeling requires only one 3D sensor, which reduces its cost and is easy to operate; and multi-view high-precision texture mapping improves model resolution.
Description of the drawings
Fig. 1 is a schematic diagram of an application scenario of the real-time three-dimensional reconstruction method proposed by the embodiments of the present application;
Fig. 2 is a flowchart of a real-time three-dimensional reconstruction method proposed by the embodiments of the present application;
Fig. 3 is a flowchart of the method of segmenting the object of interest out of a 3D image proposed by the embodiments of the present application;
Fig. 4 is a flowchart of the method of performing multi-view texture mapping on a 3D model proposed by the embodiments of the present application;
Fig. 5 is a block diagram of a real-time three-dimensional reconstruction apparatus proposed by the embodiments of the present application;
Fig. 6 is a structural diagram of the image processing unit in a real-time three-dimensional reconstruction apparatus proposed by the embodiments of the present application;
Fig. 7 is a flowchart of another real-time three-dimensional reconstruction method proposed by the embodiments of the present application.
Detailed description of the embodiments
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments recorded in the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of an application scenario of the real-time three-dimensional reconstruction method proposed by the embodiments of the present application. In this scenario the method requires one 3D sensor and one computing device; the computing device contains an image processing unit, which runs the real-time three-dimensional reconstruction method proposed by the embodiments of the present application and completes real-time dynamic segmentation and three-dimensional reconstruction of the object of interest, where the object of interest is part of the scene captured by the 3D sensor.
The 3D sensor shown in Fig. 1 can be a lidar equipped with a color camera, a TOF depth camera, a structured-light depth camera, a binocular stereo vision depth camera, etc. The 3D image sequence acquired by the 3D sensor includes a shape image and a color image of the scene; the shape image can be a depth map, a distance map or a point cloud, and the color image can be a color or grayscale image. Common 3D images include RGBD images, colored point cloud images, etc. For example, the most common 3D sensor is the RGBD camera, whose captured 3D image is called an RGBD image (RGBD: RGB color image plus Depth map).
In the embodiments of the present application, the object of interest can move in the scene. The present invention tracks in real time the relative pose between the object of interest and the 3D sensor; this relative pose is dynamic and changing. The object of interest in each image frame acquired in real time is segmented using the information of the initial 3D model combined with the relative pose, thereby achieving real-time dynamic segmentation and three-dimensional reconstruction of the object of interest.
Embodiment of the method
Referring to Fig. 2, which is a flowchart of a real-time three-dimensional reconstruction method provided by the embodiments of the present application, the method includes:
Step 101: segmenting the object of interest in the acquired first image based on the initial 3D model to obtain a second image containing only the object of interest;
In the embodiments of the present application, before this step the initial 3D model is obtained through model initialization. As a preferred implementation, the model initialization procedure can be as follows:
1. Place the object of interest on a flat surface in the scene, not in contact with other non-interest objects;
2. Acquire a scene 3D image using the 3D sensor, perform plane detection on it, and count the area of each detected plane;
3. According to the detected plane areas, reject the pixels belonging to planes in the scene 3D image whose area is greater than the threshold S;
4. Search the scene 3D image for the valid pixel nearest to the image center (a pixel whose value is valid, that has not been rejected, and that is not marked as an invalid starting pixel) as the candidate starting pixel;
5. Use a region-growing method to find all pixels connected to the candidate starting pixel, and take these pixels as the initial segmentation pixels of the object of interest;
6. If the number of pixels found is greater than N, go to step 7; otherwise mark the candidate starting pixel as an invalid starting pixel and return to step 4;
7. Generate the initial 3D model from the initial segmentation pixels of the object of interest, and terminate. (Note: the coordinate system of the initial 3D model is called the world coordinate system.)
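The region-growing search of the initialization steps above can be sketched as a breadth-first flood fill over a validity mask. This is a toy stand-in under stated assumptions: a real system would also compare neighbouring depth values when deciding connectivity, and the function name is hypothetical.

```python
from collections import deque
import numpy as np

def region_grow(valid, start):
    """4-connected region growing from `start` over the boolean `valid` mask.

    Returns the connected component containing `start` (the candidate
    starting pixel); pixels rejected by the plane filter should already be
    False in `valid`.
    """
    h, w = valid.shape
    grown = np.zeros_like(valid)
    if not valid[start]:
        return grown
    q = deque([start])
    grown[start] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and valid[ny, nx] and not grown[ny, nx]:
                grown[ny, nx] = True
                q.append((ny, nx))
    return grown
```

If the grown region has fewer than N pixels, the caller marks the start pixel invalid and retries with the next candidate, as in step 6.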
The first image acquired in this embodiment is a 3D image, such as an RGBD image or a colored point cloud image. It will be understood that the second image, obtained by segmenting the object of interest out of the first image and containing only the object of interest, is still a 3D image.
The specific implementation of this step is shown in Fig. 3 and includes the following steps:
Step 1011: performing plane detection on the first image and counting the area of each detected plane;
Step 1012: according to the detected plane areas, rejecting the pixels belonging to planes in the first image whose area is greater than the threshold S;
In the embodiments of the present application, the range of the threshold S is chosen according to the application. For example, when the object of interest is a doll, S can be set to 0.004 square meters; when the object of interest is a human body, S can be set to 0.1 square meters.
Step 1013: obtaining object-of-interest form statistics from the initial 3D model, the form statistics including the centroid of the image, the boundary of the image and the surface model of the image;
In this embodiment, this step is implemented as follows:
1) initially registering the first image of the current frame with the initial 3D model, to obtain the initial pose of the object of interest in the first image of the current frame relative to the 3D sensor;
2) projecting the initial 3D model into the current frame according to the initial pose, to obtain the 3D-model-projection 3D image;
3) performing form statistics on the 3D-model-projection 3D image, to obtain the object-of-interest form statistics, which include the boundary of the image, the centroid of the image and its bounding box or bounding ring, the surface model of the image, etc.
The surface model of the image can be the point cloud formed by the pixels of the 3D-model-projection 3D image, or the mesh surface reconstructed from that point cloud.
Step 1014: segmenting the foreground image containing the object of interest out of the first image according to the image centroid;
In the concrete implementation of this step, the pixel nearest to the centroid in the image is taken as the starting point, and all connected-domain pixels are found by region growing (a kind of morphological method). These connected-domain pixels contain the object of interest and the objects in contact with it (for example, the hand of the person holding the object of interest), commonly referred to here as the foreground. The pixels in the first image that do not belong to this connected domain are rejected, which yields the foreground image containing the object of interest referred to in this step.
Step 1015: labeling the pixels in the foreground image using the image boundary or the image surface model, marking the pixels that satisfy the preset condition as object of interest and the pixels that do not as extraneous objects;
In practical applications it will be understood that this step can label the object of interest and the extraneous objects in the following ways:
Mode 1: pixels in the foreground image inside the boundary of the object of interest are labeled as object of interest, and pixels outside the boundary are labeled as extraneous objects.
Mode 2: pixels in the foreground image whose minimum distance to the surface model of the object of interest is less than d are labeled as object of interest, and other pixels are labeled as extraneous objects.
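Mode 2 above amounts to a nearest-neighbour distance test against the surface model. A minimal sketch, assuming the surface model is sampled as a point cloud; the brute-force pairwise distance is for illustration only, since a real implementation would use a k-d tree or the projected model image.

```python
import numpy as np

def label_by_surface_distance(points, surface_points, d):
    """Label each foreground point (N x 3) as object-of-interest when its
    nearest distance to the surface-model samples (M x 3) is below d."""
    # (N, M) pairwise Euclidean distances, then the nearest surface sample
    diff = points[:, None, :] - surface_points[None, :, :]
    nearest = np.sqrt((diff ** 2).sum(-1)).min(axis=1)
    return nearest < d  # True = object of interest, False = extraneous
```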
Step 1016: performing morphological operations respectively on the pixels labeled as object of interest and the pixels labeled as extraneous objects, to obtain the adjusted object-of-interest label region;
Step 1017: rejecting from the object-of-interest label region the pixels that are simultaneously labeled as object of interest and as extraneous object, taking the remaining pixels as the final object-of-interest segmentation points, and determining the second image containing only the object of interest from these final segmentation points.
Step 102: obtaining object pose information from the second image and the initial 3D model, and extracting key frame images from the first image according to the object pose information;
Specifically, in this embodiment the second image containing only the object of interest is registered with the initial 3D model to obtain the pose of the current object of interest relative to the 3D sensor, i.e. the object pose information referred to in this step.
Preferably, the registration of the second image with the initial 3D model, to obtain the pose of the object of interest relative to the 3D sensor, can use the ICP method or the ElasticFusion method.
Extracting key frame images from the first image according to the object pose information can specifically mean extracting some of the 3D images as key frame images at fixed angular intervals, according to the rotation angle in the object pose information.
This can mean extracting from the first image of the current frame a 3D image that satisfies either of the following first and second preset conditions as a key frame image, or a 3D image that satisfies both conditions simultaneously:
First preset condition: the difference between the rotation angle in the pose information of the current frame and the rotation angles in the pose information of all previous key frames is greater than a first preset value (angle θ); θ usually ranges between 10 and 30 degrees.
Second preset condition: the Euclidean distance between the translation in the pose information of the current frame and the translations in the pose information of all previous key frames is greater than a second preset value (t); t usually ranges between 10 cm and 50 cm.
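The two preset conditions can be sketched as a simple predicate over the pose history. A minimal sketch, assuming the rotation is summarized by a single scalar angle (a simplification of a full rotation matrix) and implementing the "satisfies both conditions" variant; the function name and default thresholds are illustrative.

```python
import numpy as np

def is_keyframe(pose, keyframes, theta_deg=20.0, t_m=0.2):
    """pose = (angle_deg, translation). The frame becomes a key frame only
    when it differs from EVERY previous key frame by more than theta_deg in
    rotation AND more than t_m in translation."""
    for k_angle, k_t in keyframes:
        if abs(pose[0] - k_angle) <= theta_deg:
            return False  # first preset condition fails
        if np.linalg.norm(np.asarray(pose[1]) - np.asarray(k_t)) <= t_m:
            return False  # second preset condition fails
    return True
```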
Step 103: transforming the second image according to the object pose information to obtain a third image, fusing the third image with the initial 3D model to obtain a new 3D model, and performing mesh reconstruction on the new 3D model to obtain a first reconstruction model;
In the embodiments of the present application, transforming the second image according to the object pose information to obtain the third image specifically means transforming the second image into the coordinate system of the initial 3D model (the world coordinate system) using the rotation and translation in the object pose information, which yields the third image.
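The rigid transform into the world coordinate system is the standard X_world = R·X_cam + t applied to every point of the second image. A one-function sketch; the camera-to-world pose convention is an assumption.

```python
import numpy as np

def to_world(points, R, t):
    """Transform (N, 3) camera-frame points into the world (initial-model)
    coordinate system using the rotation R and translation t of the pose."""
    return points @ np.asarray(R).T + np.asarray(t)
```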
The fusion of the third image with the initial 3D model to obtain the new 3D model can be realized with methods such as point cloud fusion, TSDF, Voxel Hashing or Surfel.
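The TSDF option mentioned above folds each new depth observation into a truncated signed-distance volume with a weighted running average. A toy 1-D sketch along a single camera ray, under stated assumptions (sign convention: voxels in front of the surface get positive distances); a full implementation operates on a 3-D voxel grid.

```python
import numpy as np

def tsdf_update(tsdf, weight, depth_obs, voxel_depths, trunc=0.05):
    """Fuse one depth observation into per-voxel TSDF values and weights:
    new = (old * w + clipped_sdf) / (w + 1)."""
    sdf = np.clip(depth_obs - voxel_depths, -trunc, trunc)
    new_w = weight + 1.0
    tsdf = (tsdf * weight + sdf) / new_w
    return tsdf, new_w
```

The fused surface is then extracted from the zero crossing of the volume, e.g. by Marching Cubes as in the mesh reconstruction step below.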
Mesh reconstruction on the new 3D model to obtain the first reconstruction model can use the Marching Cubes or Poisson reconstruction method. The meshed 3D model is a three-dimensional surface composed of many faces, each face containing several vertices.
Step 104: performing texture mapping on the first reconstruction model according to the object pose information and the key frame images to obtain a second reconstruction model.
In the embodiments of the present application, multi-view texture mapping is applied to the first reconstruction model using the key frames and their pose information. The texturing in this step is carried out after the first reconstruction model is complete; whether the 3D model reconstruction is complete depends on whether the completeness of the new 3D model described in step 103 meets a set threshold. In addition, the key frame images used for texture mapping in this step include both the shape image and the color image.
A specific implementation of step 104 is described below with reference to the steps shown in Fig. 4:
Step 1040: when the first reconstruction model, the key frame images and the object pose information are input, start the first thread and the second thread simultaneously;
After the first thread is started, the color-image local regions of the faces are obtained by executing steps 1041-1044; after the second thread is started, the texture regions of the faces are obtained by executing steps 1045-1046.
It will be understood that step 104 can be completed entirely by one functional module, the texture mapping subunit: the first reconstruction model, the key frame images and the object pose information are input to the texture mapping subunit, which then outputs the second reconstruction model.
Step 1041: projecting the first reconstruction model into the coordinate system of a key frame image using the object pose information, and obtaining the face-to-color-image-local-region correspondence from the projected positions of the face vertices of the first reconstruction model in the color image of the key frame;
Step 1042: calculating the position difference between the projected position of each face vertex of the first reconstruction model in the shape image of the key frame and the corresponding point in the shape image;
Step 1043: calculating the angle between the normal vector of each face vertex of the first reconstruction model projected into the shape image of the key frame and the negative z-axis direction;
Step 1044: choosing the key frame image with the smallest position difference and angle, and obtaining the color-image local region of the face from the face-to-color-image-region correspondence and the color-image local region of the chosen key frame;
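Steps 1042-1044 amount to scoring each key frame per vertex and keeping the best view. A minimal sketch, assuming the pose maps world points into the camera frame, the depth residual stands in for the position difference, and the two terms are combined with equal weight (the patent does not specify the combination).

```python
import numpy as np

def best_keyframe(vertex, normal, keyframes):
    """keyframes = list of (R, t, depth_at_projection). Returns the index of
    the key frame with the smallest depth residual + viewing-angle score."""
    best, best_score = None, np.inf
    for idx, (R, t, depth_at_proj) in enumerate(keyframes):
        v_cam = np.asarray(R) @ vertex + np.asarray(t)   # vertex in camera frame
        residual = abs(v_cam[2] - depth_at_proj)         # position difference (1042)
        n_cam = np.asarray(R) @ normal
        cos_a = -n_cam[2] / np.linalg.norm(n_cam)        # angle with -z axis (1043)
        score = residual + np.arccos(np.clip(cos_a, -1.0, 1.0))
        if score < best_score:
            best, best_score = idx, score
    return best
```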
As shown in the figure, after this step is executed, the process proceeds to step 1047.
Step 1045: segmenting the first reconstruction model into segment blocks;
In this embodiment, the Iso-charts method is preferably used to segment the first reconstruction model (also called the 3D model). After segmentation, every vertex of every face of the 3D model has a segment block to which it belongs.
Step 1046: UV-parameterizing each segment block and packing the blocks compactly into the atlas, then calculating the image coordinates in the atlas corresponding to each face vertex of the first reconstruction model so as to obtain the texture region of the face;
It will be understood that the atlas is a compact arrangement of many image blocks, and the image blocks correspond one-to-one to the segment blocks of the 3D model. This is a kind of 3D-to-2D mapping, and the mapping is established by the UV parameterization. According to the correspondence established by the UV parameterization, the image coordinates in the atlas corresponding to each vertex of a (polygonal) face of the first reconstruction model are calculated, which yields the (polygonal) texture region. The concrete implementation is as follows: first find the corresponding image block in the atlas through the segment block to which each vertex of the polygonal face belongs, then find the image coordinate point corresponding to each vertex in the image block according to the correspondence; these coordinate points form a polygonal texture region, which is the texture region of the face referred to in this step.
In addition, the faces of a 3D model are generally triangles or polygons (usually quadrilaterals), so a face has multiple vertices: three for a triangle, four for a quadrilateral. In the embodiments of the present application, a face of the 3D model is a polygon with multiple vertices, and its counterpart in the atlas is also a polygon containing multiple image coordinate points.
As shown in the figure, after this step is executed, the process proceeds to step 1047.
Step 1047: coloring the texture region of the face with the color-image local region of the face, to obtain the colored texture atlas and the second reconstruction model.
As can be seen from the above description, this embodiment can effectively segment the object of interest and model it separately, so the reconstructed object of interest contains no extraneous scene information such as the ground, the background or other attached objects. Secondly, this embodiment can effectively segment the object of interest without a turntable or multiple 3D sensors: full-angle modeling of the object can be carried out with only one 3D sensor. The embodiments of the present application use per-face texture mapping, so the texturing effect is clearer.
Apparatus embodiment
Referring to Fig. 5, which is a block diagram of a real-time three-dimensional reconstruction apparatus provided by the embodiments of the present application, as shown in the figure, the apparatus includes an image acquisition device 300 and a computing device 500.
The image acquisition device 300 is used to capture scene images and transmit the first images (the 3D image sequence) acquired in real time to the computing device 500;
In the embodiments of the present application, the image acquisition device 300 (also called the 3D sensor) can be a lidar equipped with a color camera, a TOF depth camera, a structured-light depth camera, a binocular stereo vision depth camera, etc. The 3D image sequence acquired by the image acquisition device 300 includes the shape image and the color image of the scene; the shape image can be a depth map, a distance map or a point cloud, and the color image can be a color or grayscale image. The image acquisition device 300 described in this embodiment is an RGBD camera.
Equipment 500 is calculated, the first image for acquiring based on initial 3D model and described image acquisition equipment 300 is complete Attention object real-time three-dimensional in pairs of the first image is rebuild.It should be noted that interested in scene in the application Object can move.
When specific implementation, the calculating equipment 500 is specifically used for acquiring equipment to described image based on initial 3D model Object in first image of 300 acquisitions is split to obtain the second image only comprising attention object;For according to institute It states the second image and the initial 3D model obtains object posture information, according to object posture information from the first image Middle acquisition key frame images;And it is obtained for carrying out conversion process to second image according to the object posture information Third image merges to obtain new 3D model with the initial 3D model based on the third image, to the new 3D mould Type carries out gridding reconstruction and obtains the first reconstruction model.
In the embodiments of the present application, the computing device 500 includes an image processing unit 400. Further, as shown in Fig. 6, the image processing unit 400 includes an image segmentation subunit 401, a registration subunit 402, a key frame image extraction subunit 403, an image fusion subunit 404, a judgment subunit 405, a mesh reconstruction subunit 406, and a texture mapping subunit 407, wherein:
The image segmentation subunit 401 is configured to segment, based on the initial 3D model, the object of interest in the acquired first image to obtain a second image containing only the object of interest.
The registration subunit 402 is configured to obtain object pose information according to the second image and the initial 3D model; specifically, the second image obtained by the image segmentation subunit 401 is registered with the initial 3D model to obtain the object pose information.
The key frame image extraction subunit 403 is configured to extract key frame images from the first image according to the object pose information.
The image fusion subunit 404 is configured to transform the second image according to the object pose information to obtain a third image, and to fuse the third image with the initial 3D model to obtain a new 3D model.
The judgment subunit 405 is configured to judge whether the new 3D model obtained by the image fusion subunit 404 is complete; if complete, the mesh reconstruction subunit 406 is triggered, otherwise the image segmentation subunit 401 is triggered to receive the next first image acquired by the image capture device 300.
The mesh reconstruction subunit 406 is configured to perform mesh reconstruction on the new 3D model obtained by the image fusion subunit 404 to obtain the first reconstruction model; specifically, mesh reconstruction is performed using a surface reconstruction method.
The texture mapping subunit 407 is configured to perform texture mapping on the first reconstruction model obtained by the mesh reconstruction subunit 406 according to the object pose information and the key frame images, to obtain the second reconstruction model.
In conclusion in the embodiment of the present application, attention object can move in scene, and reality may be implemented in the present invention When track the relative pose of attention object and 3D sensor, this relative pose be it is dynamic, change.Utilize initial 3D mould The information of type is simultaneously split the attention object in real-time acquired image frame in conjunction with relative pose, to realize dynamic Attention object segmentation in state scene, in addition the present invention only needs by 1 3D effective segmentation that attention object carries out Sensor can be completed to carry out full angle modeling to object, and equipment cost is low, easy to operate.The last present invention uses dough sheet texture Textures, texture mapping effect is apparent, and the textures of model can regular be individual texture atlas.
Computing Device Embodiment
This embodiment provides a computing device including a memory and a processor.
The memory is configured to store a computer program.
The processor is configured to run the computer program and, when the program runs, to execute the real-time three-dimensional reconstruction method described in Embodiment One above.
Further, when running the program, the computing device may also execute the method flow shown in Fig. 7, specifically as follows:
Step 201: receive a first image acquired by the image capture device.
Step 202: segment the first image using the initial 3D model to obtain a second image containing only the object of interest.
Step 203: register the second image with the initial 3D model to obtain object pose information, and extract key frame images from the first image according to the object pose information.
Step 204: transform the second image according to the object pose information to obtain a third image, and fuse the third image with the initial 3D model to obtain a new 3D model.
Step 205: judge whether the new 3D model is complete; if so, execute step 206, otherwise return to step 201.
Preferably, this step specifically judges whether the completeness of the new 3D model meets a set threshold α, where α is usually set between 90% and 98%.
Step 206: perform mesh reconstruction on the new 3D model using a surface reconstruction method to obtain the first reconstruction model, and perform texture mapping on the first reconstruction model according to the object pose information and the key frames to obtain the second reconstruction model.
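The control flow of steps 201 to 206 can be sketched as a loop. Every callable below is a placeholder for a routine the patent leaves open (segmentation, registration, key-frame test, fusion, completeness measure, surface reconstruction, texture mapping), so all the parameter names are assumptions made for illustration.

```python
# Minimal control-flow sketch of the Fig. 7 method flow; the callables are
# placeholders, not the patent's concrete algorithms.

def reconstruct(frames, model, segment, register, keyframe_ok, fuse,
                completeness, mesh, texture, alpha=0.95):
    """Loop until the fused model's completeness reaches the threshold alpha."""
    keyframes = []
    for frame in frames:                      # step 201: receive a first image
        second = segment(frame, model)        # step 202: keep only the object of interest
        pose = register(second, model)        # step 203: register to get pose information
        if keyframe_ok(pose, keyframes):      #           extract key frames by pose change
            keyframes.append((frame, pose))
        model = fuse(second, pose, model)     # step 204: transform and fuse -> new 3D model
        if completeness(model) >= alpha:      # step 205: completeness vs. threshold alpha
            break
    first_model = mesh(model)                 # step 206: surface reconstruction -> first model
    return texture(first_model, keyframes)    #           texture mapping -> second model
```

The loop returns to acquisition whenever the model is still incomplete, which matches the branch back to step 201 in the flow above.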
As another optional implementation, for applications that require 100% completeness of the reconstruction model, the above steps 205 and 206 may be specified as follows: the threshold α is set to 98%; after the completeness of the new 3D model is judged to reach α, a model hole-filling method, for example a triangle-mesh hole-filling method based on radial basis functions (RBF: Radial Basis Function), is used to fill the holes of the new 3D model to form a 100% complete, closed three-dimensional model; texture mapping is then performed on the resulting closed three-dimensional model according to the object pose information and the key frames to obtain the second reconstruction model.
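The idea behind RBF interpolation can be shown on a toy example. This is illustrative only: real mesh hole filling operates on triangle meshes, whereas this sketch interpolates missing surface heights from scattered samples with a Gaussian kernel, whose width `eps` is an assumed parameter.

```python
import numpy as np

# Toy radial-basis-function interpolation in the spirit of the hole-filling step:
# solve for weights so the RBF surface passes through the known samples, then
# evaluate the surface at the query (hole) positions.

def rbf_fill(known_xy, known_z, query_xy, eps=1.0):
    """Interpolate missing surface heights at query_xy from known samples."""
    P = np.asarray(known_xy, float)
    z = np.asarray(known_z, float)
    Q = np.asarray(query_xy, float)
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)   # pairwise centre distances
    A = np.exp(-(eps * d) ** 2)                                  # Gaussian RBF system matrix
    w = np.linalg.solve(A, z)                                    # exact interpolation weights
    dq = np.linalg.norm(Q[:, None, :] - P[None, :, :], axis=-1)  # query-to-centre distances
    return np.exp(-(eps * dq) ** 2) @ w                          # evaluate at the hole points
```

By construction the interpolant reproduces the known samples exactly, which is the property a hole-filling method relies on at the hole boundary.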
It should be noted that the embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may refer to each other, and each embodiment focuses on its differences from the others. Since the equipment and device embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the corresponding parts of the method embodiments.
The above content is a further detailed description of the present invention in conjunction with specific preferred embodiments, and it cannot be concluded that specific implementations of the present invention are limited thereto. For those of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may also be made without departing from the concept of the present invention, all of which shall be regarded as falling within the protection scope determined by the claims submitted with the present invention.

Claims (11)

1. A real-time three-dimensional reconstruction method, characterized by comprising:
segmenting, based on an initial 3D model, an object in an acquired first image to obtain a second image containing only an object of interest, the first image being a 3D image;
obtaining object pose information according to the second image and the initial 3D model, and obtaining key frame images from the first image according to the object pose information; and
transforming the second image according to the object pose information to obtain a third image, fusing the third image with the initial 3D model to obtain a new 3D model, and performing mesh reconstruction on the new 3D model to obtain a first reconstruction model.
2. The method according to claim 1, wherein after performing mesh reconstruction on the new 3D model to obtain the first reconstruction model, the method further comprises: performing texture mapping on the first reconstruction model according to the object pose information and the key frame images to obtain a second reconstruction model.
3. The method according to claim 1, wherein segmenting, based on the initial 3D model, the object in the acquired first image to obtain the second image containing only the object of interest specifically comprises:
step a1: performing plane detection on the first image and counting the area of each detected plane; rejecting, from the first image, the pixels contained in any detected plane whose area is greater than a threshold;
step a2: obtaining object-of-interest shape statistics according to the initial 3D model, the shape statistics including centroid information of the image, boundary information of the image, and surface-model information of the image;
step a3: segmenting the foreground image containing the object of interest out of the first image according to the centroid information; labeling the pixels in the foreground image using the boundary information or the surface-model information of the image, marking pixels that meet a preset condition as the object of interest and pixels that do not meet the preset condition as foreign matter;
step a4: performing morphological operations on the pixels marked as the object of interest and on the pixels marked as foreign matter, respectively, to obtain an adjusted object-of-interest label region;
step a5: rejecting, from the object-of-interest label region, pixels that are marked both as the object of interest and as foreign matter, taking the remaining pixels as the final object-of-interest segmentation points, and determining, according to these final segmentation points, the second image containing only the object of interest.
4. The method according to claim 3, wherein the pixels in the foreground image are labeled in step a3 in one of the following ways:
mode 1: according to the boundary information of the image, marking the pixels within the boundary of the object of interest in the foreground image as the object of interest and the pixels outside the boundary as foreign matter;
mode 2: according to the surface-model information of the image, marking the pixels in the foreground image whose minimum distance to the surface model of the object of interest is less than a preset distance value as the object of interest and the other pixels as foreign matter.
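Labeling mode 2 can be sketched as a distance test against the surface model. This is a simplified illustration under assumed representations: the surface model is reduced to a point sampling and the distance threshold `max_dist` is an example value; the claim fixes neither.

```python
import numpy as np

# Sketch of labeling mode 2: a foreground pixel is marked as the object of
# interest when its 3D point lies within max_dist of the (point-sampled)
# surface model; otherwise it is marked as foreign matter.

def label_foreground(points, surface_points, max_dist=0.02):
    """points: (N,3) 3D points of the foreground pixels; returns a boolean mask."""
    P = np.asarray(points, float)
    S = np.asarray(surface_points, float)
    # nearest distance from every pixel's 3D point to the surface sampling
    d = np.linalg.norm(P[:, None, :] - S[None, :, :], axis=-1).min(axis=1)
    return d < max_dist    # True -> object of interest, False -> foreign matter
```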
5. The method according to claim 1, wherein obtaining key frame images from the first image according to the object pose information specifically comprises: extracting, from the current first image, a 3D image that meets either the first condition or the second condition as a key frame image, or extracting, from the current first image, a 3D image that meets both the first condition and the second condition as a key frame image;
the first condition being that the difference between the rotation angle in the pose information of the current frame and the rotation angle in the pose information of every previous key frame is greater than a first preset value;
the second condition being that the Euclidean distance between the displacement in the pose information of the current frame and the displacement in the pose information of every previous key frame is greater than a second preset value.
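The two key-frame conditions can be sketched as follows. This is a hedged sketch under assumed representations: poses are taken as (R, t) tuples with 3x3 rotation matrices, and the preset thresholds `min_angle_rad` and `min_shift` are example values; the claim prescribes none of these.

```python
import numpy as np

# Sketch of the key-frame test: condition 1 compares the relative rotation angle
# with every previous key frame; condition 2 compares the Euclidean distance
# between displacement vectors. The claim allows either condition or both.

def is_keyframe(pose, keyframe_poses, min_angle_rad=0.3, min_shift=0.1, need_both=False):
    """True if the current pose differs enough from every stored key-frame pose."""
    R, t = pose
    angles, shifts = [], []
    for R_k, t_k in keyframe_poses:
        # relative rotation angle from the trace of R_k^T R
        cos_a = (np.trace(R_k.T @ R) - 1.0) / 2.0
        angles.append(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        # Euclidean distance between the two displacement vectors
        shifts.append(np.linalg.norm(np.asarray(t) - np.asarray(t_k)))
    cond1 = all(a > min_angle_rad for a in angles)   # rotation differs from every key frame
    cond2 = all(s > min_shift for s in shifts)       # displacement differs from every key frame
    return (cond1 and cond2) if need_both else (cond1 or cond2)
```

With an empty key-frame list both conditions hold vacuously, so the first frame is always accepted as a key frame.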
6. The method according to claim 1, wherein before performing mesh reconstruction on the new 3D model to obtain the first reconstruction model, the method further comprises: judging whether the new 3D model is complete.
7. The method according to claim 2, wherein performing texture mapping on the first reconstruction model according to the object pose information and the key frame images to obtain the second reconstruction model specifically comprises:
a step of obtaining the color-map local region of each patch based on the first reconstruction model, the key frame images, and the object pose information;
a step of obtaining the texture region of each patch based on the first reconstruction model; and
a step of obtaining the texture atlas and the second reconstruction model based on the color-map local region and the texture region of each patch.
8. The method according to claim 7, wherein the step of obtaining the color-map local region of each patch based on the first reconstruction model, the key frame images, and the object pose information specifically comprises:
step c1: projecting the first reconstruction model into the coordinate system of each key frame image using the object pose information, and obtaining the correspondence between each patch and a color-map local region from the projected positions of the patch vertices of the first reconstruction model in the color map of the key frame image;
step c2: calculating the position difference between the projected position of each patch vertex of the first reconstruction model in the shape map of the key frame image and its corresponding point in the shape map, and calculating the angle between the normal vector of each patch vertex projected into the shape map of the key frame image and the negative z-axis direction;
step c3: choosing the key frame image with the smallest position difference and angle, and obtaining the color-map local region of the patch according to the patch-to-color-map-local-region correspondence and the color-map local region of the chosen key frame image.
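Steps c2 and c3 amount to scoring each key frame and keeping the best one. This sketch assumes pre-computed inputs (`proj_uv`, `shape_uv`, `normals_cam`) and combines the two criteria by a simple sum, since the claim only says the chosen key frame minimizes both; the combination rule is an assumption.

```python
import numpy as np

# Sketch of the key-frame choice in steps c2/c3: score each key frame by the mean
# reprojection position difference and the mean angle between the patch-vertex
# normals (in camera coordinates) and the -z viewing direction.

def pick_keyframe(proj_uv, shape_uv, normals_cam):
    """proj_uv[k]     : (V,2) projected vertex positions in key frame k's shape map
       shape_uv[k]    : (V,2) corresponding points found in the shape map itself
       normals_cam[k] : (V,3) vertex normals transformed into key frame k's camera frame"""
    best_k, best_score = -1, np.inf
    for k in range(len(proj_uv)):
        pos_diff = np.linalg.norm(proj_uv[k] - shape_uv[k], axis=1).mean()
        n = normals_cam[k] / np.linalg.norm(normals_cam[k], axis=1, keepdims=True)
        cos_t = np.clip(-n[:, 2], -1.0, 1.0)   # dot product with the -z axis
        angle = np.arccos(cos_t).mean()
        score = pos_diff + angle               # assumed combination of the two criteria
        if score < best_score:
            best_k, best_score = k, score
    return best_k
```

A key frame that both reprojects accurately and views the patch head-on (normals toward the camera) scores lowest and is selected.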
9. The method according to claim 7, wherein the step of obtaining the texture region of each patch based on the first reconstruction model specifically comprises:
step e1: segmenting the first reconstruction model to obtain segmentation blocks;
step e2: UV-parameterizing each segmentation block and arranging the blocks compactly into an atlas, and calculating the image coordinate of each patch vertex of the first reconstruction model in the atlas to obtain the texture region of the patch.
10. A real-time three-dimensional reconstruction device, characterized in that the device comprises an image capture device and a computing device;
the image capture device is used to capture scene images and to transmit the first images acquired in real time to the computing device; the computing device comprises an image processing unit that realizes three-dimensional reconstruction based on the first images, the image processing unit comprising an image segmentation subunit, a registration subunit, a key frame image extraction subunit, an image fusion subunit, a mesh reconstruction subunit, and a texture mapping subunit;
the image segmentation subunit is configured to segment, based on an initial 3D model, the object of interest in the first image to obtain a second image containing only the object of interest;
the registration subunit is configured to obtain object pose information according to the second image and the initial 3D model;
the key frame image extraction subunit is configured to extract key frame images from the first image according to the object pose information;
the image fusion subunit is configured to transform the second image according to the object pose information to obtain a third image, and to fuse the third image with the initial 3D model to obtain a new 3D model;
the mesh reconstruction subunit is configured to perform mesh reconstruction on the new 3D model obtained by the image fusion subunit to obtain a first reconstruction model; and
the texture mapping subunit is configured to perform texture mapping on the first reconstruction model obtained by the mesh reconstruction subunit according to the object pose information and the key frame images to obtain a second reconstruction model.
11. A computing device, characterized by comprising a processor and a memory;
the memory being configured to store program code and to transfer the program code to the processor; and
the processor being configured to execute the method according to any one of claims 1 to 9 in accordance with instructions in the program code.
CN201910437856.8A 2019-05-24 2019-05-24 A kind of real-time three-dimensional method for reconstructing, device and equipment Pending CN110148217A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910437856.8A CN110148217A (en) 2019-05-24 2019-05-24 A kind of real-time three-dimensional method for reconstructing, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910437856.8A CN110148217A (en) 2019-05-24 2019-05-24 A kind of real-time three-dimensional method for reconstructing, device and equipment

Publications (1)

Publication Number Publication Date
CN110148217A true CN110148217A (en) 2019-08-20

Family

ID=67593183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910437856.8A Pending CN110148217A (en) 2019-05-24 2019-05-24 A kind of real-time three-dimensional method for reconstructing, device and equipment

Country Status (1)

Country Link
CN (1) CN110148217A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553302A (en) * 2020-05-08 2020-08-18 深圳前海微众银行股份有限公司 Key frame selection method, device, equipment and computer readable storage medium
CN111754512A (en) * 2020-07-17 2020-10-09 成都盛锴科技有限公司 Pantograph state information acquisition method and system
CN111753739A (en) * 2020-06-26 2020-10-09 北京百度网讯科技有限公司 Object detection method, device, equipment and storage medium
CN112750159A (en) * 2019-10-31 2021-05-04 华为技术有限公司 Method, device and storage medium for acquiring pose information and determining object symmetry
CN112755523A (en) * 2021-01-12 2021-05-07 网易(杭州)网络有限公司 Target virtual model construction method and device, electronic equipment and storage medium
CN112785682A (en) * 2019-11-08 2021-05-11 华为技术有限公司 Model generation method, model reconstruction method and device
CN112926614A (en) * 2019-12-06 2021-06-08 顺丰科技有限公司 Box labeling image expansion method and device and computer readable storage medium
CN113160102A (en) * 2021-04-25 2021-07-23 北京华捷艾米科技有限公司 Method, device and equipment for reconstructing three-dimensional scene and storage medium
CN113269859A (en) * 2021-06-09 2021-08-17 中国科学院自动化研究所 RGBD vision real-time reconstruction method and system facing actuator operation space
TWI766218B (en) * 2019-12-27 2022-06-01 財團法人工業技術研究院 Reconstruction method, reconstruction system and computing device for three-dimensional plane
CN114859942A (en) * 2022-07-06 2022-08-05 北京云迹科技股份有限公司 Robot motion control method and device, electronic equipment and storage medium
WO2022253677A1 (en) * 2021-06-03 2022-12-08 Koninklijke Philips N.V. Depth segmentation in multi-view videos
WO2024032165A1 (en) * 2022-08-12 2024-02-15 华为技术有限公司 3d model generating method and system, and electronic device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246602A (en) * 2008-02-04 2008-08-20 东华大学 Human body posture reconstruction method based on geometry backbone
CN105631431A (en) * 2015-12-31 2016-06-01 华中科技大学 Airplane interesting area spectrum measuring method guided by visible light target outline model
CN105654492A (en) * 2015-12-30 2016-06-08 哈尔滨工业大学 Robust real-time three-dimensional (3D) reconstruction method based on consumer camera
CN106910242A (en) * 2017-01-23 2017-06-30 中国科学院自动化研究所 The method and system of indoor full scene three-dimensional reconstruction are carried out based on depth camera
US20180018805A1 (en) * 2016-07-13 2018-01-18 Intel Corporation Three dimensional scene reconstruction based on contextual analysis
CN108564616A (en) * 2018-03-15 2018-09-21 中国科学院自动化研究所 Method for reconstructing three-dimensional scene in the rooms RGB-D of fast robust
CN108898630A (en) * 2018-06-27 2018-11-27 清华-伯克利深圳学院筹备办公室 A kind of three-dimensional rebuilding method, device, equipment and storage medium
CN109242873A (en) * 2018-08-22 2019-01-18 浙江大学 A method of 360 degree of real-time three-dimensionals are carried out to object based on consumer level color depth camera and are rebuild


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
何晓昀: "一种建筑物室内三维场景重建机器人", 《信息化建设》 *
王征等: "基于多幅图像的低成本三维人体重建", 《计算机应用》 *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112750159A (en) * 2019-10-31 2021-05-04 华为技术有限公司 Method, device and storage medium for acquiring pose information and determining object symmetry
WO2021082736A1 (en) * 2019-10-31 2021-05-06 华为技术有限公司 Method and device for acquiring posture information and determining object symmetry, and storage medium
CN112785682A (en) * 2019-11-08 2021-05-11 华为技术有限公司 Model generation method, model reconstruction method and device
CN112926614A (en) * 2019-12-06 2021-06-08 顺丰科技有限公司 Box labeling image expansion method and device and computer readable storage medium
US11699264B2 (en) 2019-12-27 2023-07-11 Industrial Technology Research Institute Method, system and computing device for reconstructing three-dimensional planes
TWI766218B (en) * 2019-12-27 2022-06-01 財團法人工業技術研究院 Reconstruction method, reconstruction system and computing device for three-dimensional plane
CN111553302A (en) * 2020-05-08 2020-08-18 深圳前海微众银行股份有限公司 Key frame selection method, device, equipment and computer readable storage medium
CN111553302B (en) * 2020-05-08 2022-01-04 深圳前海微众银行股份有限公司 Key frame selection method, device, equipment and computer readable storage medium
CN111753739A (en) * 2020-06-26 2020-10-09 北京百度网讯科技有限公司 Object detection method, device, equipment and storage medium
CN111753739B (en) * 2020-06-26 2023-10-31 北京百度网讯科技有限公司 Object detection method, device, equipment and storage medium
CN111754512A (en) * 2020-07-17 2020-10-09 成都盛锴科技有限公司 Pantograph state information acquisition method and system
CN112755523A (en) * 2021-01-12 2021-05-07 网易(杭州)网络有限公司 Target virtual model construction method and device, electronic equipment and storage medium
CN112755523B (en) * 2021-01-12 2024-03-15 网易(杭州)网络有限公司 Target virtual model construction method and device, electronic equipment and storage medium
CN113160102A (en) * 2021-04-25 2021-07-23 北京华捷艾米科技有限公司 Method, device and equipment for reconstructing three-dimensional scene and storage medium
WO2022253677A1 (en) * 2021-06-03 2022-12-08 Koninklijke Philips N.V. Depth segmentation in multi-view videos
CN113269859B (en) * 2021-06-09 2023-11-24 中国科学院自动化研究所 RGBD vision real-time reconstruction method and system for actuator operation space
CN113269859A (en) * 2021-06-09 2021-08-17 中国科学院自动化研究所 RGBD vision real-time reconstruction method and system facing actuator operation space
CN114859942A (en) * 2022-07-06 2022-08-05 北京云迹科技股份有限公司 Robot motion control method and device, electronic equipment and storage medium
WO2024032165A1 (en) * 2022-08-12 2024-02-15 华为技术有限公司 3d model generating method and system, and electronic device

Similar Documents

Publication Publication Date Title
CN110148217A (en) A kind of real-time three-dimensional method for reconstructing, device and equipment
Rematas et al. Soccer on your tabletop
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
CN104992441B (en) A kind of real human body three-dimensional modeling method towards individualized virtual fitting
US9836645B2 (en) Depth mapping with enhanced resolution
CN102902355B (en) The space interaction method of mobile device
CN107833270A (en) Real-time object dimensional method for reconstructing based on depth camera
CN109636831A (en) A method of estimation 3 D human body posture and hand information
CN108154550A (en) Face real-time three-dimensional method for reconstructing based on RGBD cameras
US20150178988A1 (en) Method and a system for generating a realistic 3d reconstruction model for an object or being
US20100328308A1 (en) Three Dimensional Mesh Modeling
CN103400409A (en) 3D (three-dimensional) visualization method for coverage range based on quick estimation of attitude of camera
CN113012293A (en) Stone carving model construction method, device, equipment and storage medium
EP2766875A1 (en) Generating free viewpoint video using stereo imaging
CN102938142A (en) Method for filling indoor light detection and ranging (LiDAR) missing data based on Kinect
CN106875437A (en) A kind of extraction method of key frame towards RGBD three-dimensional reconstructions
CN110555412A (en) End-to-end human body posture identification method based on combination of RGB and point cloud
CN108648194A (en) Based on the segmentation of CAD model Three-dimensional target recognition and pose measuring method and device
CN107507269A (en) Personalized three-dimensional model generating method, device and terminal device
CN112784621A (en) Image display method and apparatus
Cheung Visual hull construction, alignment and refinement for human kinematic modeling, motion tracking and rendering
JP2016071645A (en) Object three-dimensional model restoration method, device, and program
Jinka et al. Peeledhuman: Robust shape representation for textured 3d human body reconstruction
CN106548508B (en) A kind of high quality 3D texture reconstruction method
US20040095484A1 (en) Object segmentation from images acquired by handheld cameras

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190820

WD01 Invention patent application deemed withdrawn after publication