CN110490967A - Image processing and object modeling method and device, image processing apparatus and medium - Google Patents
- Publication number
- CN110490967A (application CN201910296077.0A)
- Authority
- CN
- China
- Prior art keywords
- panoramic image
- profile
- dimensional
- width
- height
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image processing and object modeling method and device, an image processing apparatus, and a medium. The image processing method includes: estimating, using the geometric relationship of at least one captured panoramic image, the position of the camera and the three-dimensional point coordinates of the matched feature points on the panoramic images; generating, for each panoramic image, the plane profile of the panoramic image in three-dimensional space based on the contour enclosed by the edge pixels among the pixels whose contour features belong to a specific category on the panoramic image; and normalizing the scale of the position of the panorama camera at the time each panoramic image was captured and the scale of the plane profile of each panoramic image in three-dimensional space. By using panoramic images captured by a panorama camera to generate the plane profiles of three-dimensional objects in three-dimensional space, and even to generate 3D models of the three-dimensional objects, the resolution of the generated object model can be effectively improved.
Description
Technical field
The present invention relates to the field of object modeling, and more particularly to an image processing and object modeling method and device, an image processing apparatus, and a medium.
Background art
In the field of object modeling, how to make the generated object model have high resolution and/or high accuracy is a goal the industry pursues intensely.
Object modeling allows a user, without leaving home (for example, over a network), to browse the 2D and/or 3D structure of a three-dimensional object; moreover, 3D modeling of an object can produce an immersive, on-the-spot effect, which is an extremely important application in the field of virtual reality.
In the field of object modeling, and especially 2D and 3D modeling, technical solutions at home and abroad fall broadly into two categories: manual production and automated modeling.
Manual production methods rely on a large amount of manual work to identify the three-dimensional structure of an object and to stitch multiple object models together by hand. Manually producing the 3D model of a set of three-dimensional objects takes a long time, so producing a large amount of three-dimensional object data manually requires many workers; the labor cost is too high, and practical application is difficult.
Most current automated 3D modeling methods use professional 3D scanning devices that directly acquire the three-dimensional point cloud of a single object and then stitch the point clouds together to generate a 3D model. However, the precision of the image acquisition equipment of such professional 3D scanners is not high, so the resolution of the captured images is not high, and the resolution of the generated three-dimensional model is therefore insufficient. Moreover, such 3D scanning devices are generally expensive and can hardly meet the needs of consumer-level applications.
Therefore, how to obtain high-resolution captured images, how to process the captured images so as to provide high-resolution preparation data for efficient object modeling, how to make the provided data simplify the subsequent model generation process, and how to effectively improve the resolution and/or accuracy of the generated object model are the technical problems the present invention sets out to solve.
Summary of the invention
To solve at least one of the above problems, the present invention provides an image processing and object modeling method and device, an image processing apparatus, and a medium.
According to one embodiment of the present invention, an image processing method is provided, the image processing method including: a camera position estimation step, in which, using the geometric relationship of at least one captured panoramic image, the position of the panorama camera at the time each panoramic image was captured and the three-dimensional point coordinates of the matched feature points on each panoramic image are estimated, wherein each panoramic image is captured for one three-dimensional object and each three-dimensional object corresponds to one or more panoramic images; a single-image plane profile generation step, in which, for each panoramic image, the plane profile of the panoramic image in three-dimensional space is generated based on the contour enclosed by the edge pixels among the pixels whose contour features belong to a specific category on the panoramic image; and a scale normalization step, in which the scale of the estimated position of the panorama camera at the time each panoramic image was captured and the scale of the plane profile of each panoramic image in three-dimensional space are normalized, to obtain the normalized plane profile of each panoramic image in three-dimensional space.
Optionally, the camera position estimation step includes: performing feature point matching between the panoramic images using the geometric relationship of the at least one captured panoramic image, and recording feature points that match each other across the panoramic images as matched feature points; and, for each panoramic image, reducing the reprojection error of the matched feature points on that panoramic image, to obtain the camera position at the time each panoramic image was captured and the three-dimensional point coordinates of the matched feature points on the panoramic image.
Optionally, the single-image plane profile generation step includes: determining, based on the feature similarity between pixels on a panoramic image, the edge pixels among the pixels whose contour features belong to a specific category on the panoramic image, wherein the feature similarity of two pixels is the absolute value of the difference of the features of the two pixels, and the features of a pixel include grayscale and color.
Optionally, the scale normalization step includes: sorting, from small to large, the height values in all the three-dimensional point coordinates on the at least one panoramic image obtained in the camera position estimation step, and taking the median or mean of the height values near the front of the sorted order as the specific-category profile estimated height hc'; and generating, from the plane profile of each panoramic image in three-dimensional space, the normalized plane profile of each panoramic image in three-dimensional space using the ratio of the specific-category profile assumed height hc to the specific-category profile estimated height hc', wherein the specific-category profile assumed height hc is an arbitrarily assumed height.
According to one embodiment of the present invention, an object modeling method is provided, the object modeling method including: an image processing step, in which image processing is performed on at least one panoramic image using one of the image processing methods described above, to obtain the normalized plane profile of each panoramic image in three-dimensional space; and a multi-object stitching step, in which the multi-object plane profile is obtained by stitching based on the normalized plane profiles of the panoramic images in three-dimensional space.
Optionally, the object modeling method further includes: a single-object plane profile generation step, in which the plane profile of each single three-dimensional object in three-dimensional space is obtained based on the normalized plane profiles of the panoramic images obtained in the image processing step.
Optionally, the single-object plane profile generation step includes: for the at least one panoramic image, determining one by one, in the following way, whether several panoramic images belong to the same three-dimensional object: if the proportion of matched feature points between two panoramic images exceeds a specific ratio, determining that the two panoramic images belong to the same three-dimensional object; and, if it is determined that several panoramic images belong to the same three-dimensional object, taking, for the plane profiles of the same three-dimensional object obtained from these panoramic images, the union of these plane profiles as the plane profile of the three-dimensional object.
Optionally, in the multi-object stitching step, the multi-object plane profile in three-dimensional space can also be obtained by stitching based on the plane profile of each single three-dimensional object in three-dimensional space.
Optionally, the object modeling method further includes: a 3D model generation step, in which, after the multi-object stitching step, the multi-object plane profile in three-dimensional space obtained by stitching is converted into an object 3D model.
Optionally, the 3D model generation step includes: performing three-dimensional point interpolation inside the top plane profiles of the multi-object plane profile obtained by stitching, and projecting all the three-dimensional point coordinates in each resulting top plane profile into the corresponding panoramic image coordinate system to obtain the top texture; performing three-dimensional point interpolation inside the bottom plane profiles of the multi-object plane profile obtained by stitching, and projecting all the three-dimensional point coordinates on each resulting bottom plane profile into the corresponding panoramic image coordinate system to obtain the bottom texture; connecting the three-dimensional vertices at the same horizontal position between a top profile and a bottom profile to form the plane profiles of the support portions, performing three-dimensional point interpolation inside the plane profiles of the support portions, and projecting all the three-dimensional point coordinates of each resulting support-portion plane profile into the corresponding panoramic image coordinate system to obtain the support-portion texture; and generating the 3D texture model of the entire three-dimensional object based on the top texture, the bottom texture, and the support-portion texture.
Optionally, in the 3D model generation step, the height value in all the three-dimensional point coordinates in each obtained three-dimensional object top plane profile (the profile estimated height hc' described above, i.e. the estimated height from the camera to the top of the corresponding three-dimensional object) is replaced with the estimated height hf' from the camera to the bottom of the corresponding three-dimensional object, while the length and width values in all the three-dimensional point coordinates on each three-dimensional object top plane profile are kept unchanged, to obtain the corresponding bottom plane profile of each three-dimensional object, wherein the estimated height hc' from the camera to the top of the corresponding three-dimensional object is obtained as follows: the height values in all the three-dimensional point coordinates on the at least one panoramic image obtained in the camera position estimation step are sorted from small to large, and the median or mean of the height values near the front of the sorted order is taken as the estimated height hc' from the camera to the top of the corresponding three-dimensional object; and the estimated height hf' from the camera to the bottom of the corresponding three-dimensional object is obtained as follows: the height values in all the three-dimensional point coordinates on the at least one panoramic image obtained in the camera position estimation step are sorted from small to large, and the median or mean of the height values near the back of the sorted order is taken as the estimated height hf' from the camera to the bottom of the corresponding three-dimensional object.
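Under the assumption (purely for illustration) that the second coordinate of each three-dimensional point stores the height value, the derivation of a bottom plane profile from a top plane profile described above can be sketched as:

```python
import numpy as np

def bottom_from_top(top_points, floor_height):
    """Derive a bottom plane profile from a top plane profile by replacing
    each point's height value with the estimated camera-to-bottom height
    h_f' while keeping the length and width values unchanged (the second
    coordinate is assumed to store the height)."""
    pts = np.asarray(top_points, dtype=float).copy()
    pts[:, 1] = floor_height  # overwrite h_c' with h_f'
    return pts
```

The function name and coordinate layout are assumptions, not the patent's prescribed implementation; the point is only that the top and bottom profiles share their length/width coordinates and differ in a single height value.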
According to one embodiment of the present invention, an image processing device is provided, the image processing device including: a camera position estimation means, configured to estimate, using the geometric relationship of at least one captured panoramic image, the position of the panorama camera at the time each panoramic image was captured and the three-dimensional point coordinates of the matched feature points on each panoramic image, wherein each panoramic image is captured for one three-dimensional object and each three-dimensional object corresponds to one or more panoramic images; a single-image plane profile generation means, configured to generate, for each panoramic image, the plane profile of the panoramic image in three-dimensional space based on the contour enclosed by the edge pixels among the pixels whose contour features belong to a specific category on the panoramic image; and a scale normalization means, configured to normalize the scale of the estimated position of the panorama camera at the time each panoramic image was captured and the scale of the plane profile of each panoramic image in three-dimensional space, to obtain the normalized plane profile of each panoramic image in three-dimensional space.
Optionally, the single-image plane profile generation means is further configured to: determine, based on the feature similarity between pixels on a panoramic image, the edge pixels among the pixels whose contour features belong to a specific category on the panoramic image, wherein the feature similarity of two pixels is the absolute value of the difference of the features of the two pixels, and the features of a pixel include grayscale and color.
Optionally, the scale normalization means is further configured to: sort, from small to large, the height values in all the three-dimensional point coordinates on the at least one panoramic image obtained by the camera position estimation means, and take the median or mean of the height values near the front of the sorted order as the specific-category profile estimated height hc', i.e. the estimated height from the camera to the top of the corresponding three-dimensional object; and generate, from the plane profile of each panoramic image in three-dimensional space, the normalized plane profile of each panoramic image using the ratio of the assumed height hc from the camera to the top of the corresponding three-dimensional object to the estimated height hc' from the camera to the top of the corresponding three-dimensional object, wherein the assumed height hc from the camera to the top of the corresponding three-dimensional object is an arbitrarily assumed height.
According to one embodiment of the present invention, an object modeling device is provided, the object modeling device including: one of the image processing devices described above, configured to perform image processing on at least one panoramic image to obtain the normalized plane profile of each panoramic image in three-dimensional space; and a multi-object stitching means, configured to obtain the multi-object plane profile in three-dimensional space by stitching based on the normalized plane profiles of the panoramic images in three-dimensional space.
Optionally, the object modeling device further includes: a single-object plane profile generation means, configured to obtain the plane profile of each three-dimensional object in three-dimensional space based on the normalized plane profiles of the panoramic images in three-dimensional space.
Optionally, the single-object plane profile generation means is further configured to: for the at least one panoramic image, determine one by one, in the following way, whether several panoramic images belong to the same three-dimensional object: if the proportion of matched feature points between two panoramic images exceeds a specific ratio, determine that the two panoramic images belong to the same three-dimensional object; and, if it is determined that several panoramic images belong to the same three-dimensional object, take, for the plane profiles of the same three-dimensional object obtained from these panoramic images, the union of the plane profiles as the plane profile of the three-dimensional object in three-dimensional space.
Optionally, the multi-object stitching means is further configured to obtain the multi-object plane profile in three-dimensional space by stitching based on the plane profile of each single three-dimensional object in three-dimensional space.
Optionally, the object modeling device further includes: a 3D model generation means, configured to convert the multi-object plane profile in three-dimensional space obtained by stitching into a three-dimensional object 3D model.
According to still another embodiment of the present invention, an image processing apparatus is provided, including: a processor; and a memory having executable code stored thereon which, when executed by the processor, causes the processor to perform one of the methods described above.
According to still another embodiment of the present invention, a non-transitory machine-readable storage medium is provided, having executable code stored thereon which, when executed by a processor, causes the processor to perform one of the methods described above.
The present invention performs 2D and 3D modeling of three-dimensional objects based on multiple panoramic images of the objects captured with a panorama camera and, without using a 3D scanning device, overcomes the prior-art defect that the resolution of three-dimensional object models generated with 3D scanning devices is not high.
In the present invention, high-resolution captured images are provided for object modeling (for example, house modeling) by using panoramic images of rooms captured with a panorama camera.
Further, in the present invention, an efficient image processing method is used to provide high-resolution modeling preparation data for object modeling (for example, house modeling); moreover, the provided modeling preparation data can simplify the subsequent model generation process.
Still further, the modeling method of the present invention can effectively improve the resolution and/or accuracy of the generated model (for example, the 2D and/or 3D model of a three-dimensional object).
Moreover, the present invention does not limit the application scenario of the modeling; for example, the present invention can be applied to various image-based modeling scenarios such as house modeling and vehicle modeling. The present invention in fact provides an innovative and comprehensive image processing scheme.
Brief description of the drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following more detailed description of exemplary embodiments of the disclosure in conjunction with the accompanying drawings, in which identical reference numerals generally denote the same components.
Fig. 1 shows a schematic flowchart of an image processing method according to an exemplary embodiment of the present invention.
Fig. 2 shows a schematic flowchart of the overall process of image processing and modeling according to an exemplary embodiment of the present invention.
Fig. 3 shows a schematic block diagram of an image processing device according to an exemplary embodiment of the present invention.
Fig. 4 shows a schematic block diagram of an automated object modeling device according to an exemplary embodiment of the present invention.
Fig. 5 shows a schematic block diagram of an image processing apparatus according to an exemplary embodiment of the present invention.
Detailed description of embodiments
Preferred embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although preferred embodiments of the disclosure are shown in the drawings, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the disclosure to those skilled in the art. It should be noted that the numbers, serial numbers and reference signs in this application appear only for convenience of description and impose no limitation on the order of the steps of the invention, unless the specification explicitly states that steps must be performed in a specific order.
The present invention provides an image processing method, an image processing device, an object modeling method, an object modeling device, an image processing apparatus, and a computer medium.
First, in the present invention, high-resolution panoramic images of three-dimensional objects are captured with an ordinary panorama camera, overcoming the defect described in the background art that the resolution of images captured by 3D scanning cameras is not high.
Then, using the multiple captured panoramic images, the plane profile of each single panoramic image in three-dimensional space (which may be called the "single-image plane profile") can be extracted.
Further, through scale normalization, the scale of the single-image plane profiles and the scale of the camera positions can be unified, producing normalized single-image plane profiles; this provides high-resolution and sufficient data preparation for subsequent object modeling and reduces the difficulty of the subsequent processing.
Still further, an accurate single-object plane profile can be obtained by single-object fusion of the single-image plane profiles that belong to the same three-dimensional object.
Still further, the single-image plane profiles can be stitched in three-dimensional space to obtain a multi-object model (at this stage a 2D model).
Furthermore, the multi-object model can be corrected to obtain a more accurate model with a better display effect.
Finally, through 3D model generation, a complete, high-resolution and accurate 3D model is obtained.
Fig. 1 shows a schematic flowchart of the image processing method according to an exemplary embodiment of the present invention.
The image processing method according to an exemplary embodiment of the present invention will first be introduced below with reference to Fig. 1; it prepares sufficient data for the subsequent modeling processing and simplifies the subsequent processing. As shown in Fig. 1, the image processing process includes three steps: camera position estimation S110, single-image plane profile generation S120, and scale normalization S130; the modeling process may include several subsequent steps.
Here, the panorama camera is briefly introduced first. A panorama camera differs from an ordinary camera in that an ordinary camera usually shoots with a single lens, whereas a panorama camera shoots with two or more lenses, so a panorama camera can achieve 360-degree shooting.
The image processing method according to an exemplary embodiment of the present invention may include the following steps.
In the camera position estimation step S110, using the geometric relationship of at least one captured panoramic image, the position of the panorama camera at the time each panoramic image was captured and the three-dimensional point coordinates of the matched feature points on each panoramic image are estimated. Here, each panoramic image is captured for one three-dimensional object, and each three-dimensional object corresponds to one or more panoramic images.
In the single-image plane profile generation step S120, for each panoramic image, the plane profile of the panoramic image in three-dimensional space is generated based on the contour enclosed by the edge pixels among the pixels whose contour features belong to a specific category on the panoramic image.
Thus, in the present invention, the plane profile of an image can be obtained automatically from the panoramic image, without manual production and without using expensive 3D scanning devices.
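One way a contour enclosed by edge pixels could be placed in three-dimensional space is to cast a ray through each edge pixel of the equirectangular panorama and intersect it with a horizontal plane at an assumed profile height. The sketch below illustrates this under assumed coordinate conventions (x right, y up, z forward, image origin at the top-left corner); it is an illustrative assumption, not the patent's prescribed implementation.

```python
import numpy as np

def pixel_to_plane_point(u, v, width, height, plane_height):
    """Cast a ray through pixel (u, v) of an equirectangular panorama and
    intersect it with a horizontal plane `plane_height` above the camera
    (y up, image origin at the top-left corner -- assumed conventions;
    the pixel must lie above the horizon, i.e. v < height / 2)."""
    lon = (u / width - 0.5) * 2.0 * np.pi   # longitude in [-pi, pi]
    lat = (0.5 - v / height) * np.pi        # latitude in [-pi/2, pi/2]
    d = np.array([np.cos(lat) * np.sin(lon),   # x
                  np.sin(lat),                 # y (up)
                  np.cos(lat) * np.cos(lon)])  # z
    t = plane_height / d[1]                    # ray-plane intersection
    return d * t
```

Applying this to every edge pixel of the specific-category contour would yield the plane profile of the panoramic image in three-dimensional space, up to the scale fixed later by normalization.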
In the scale normalization step S130, the scale of the estimated position of the panorama camera at the time each panoramic image was captured and the scale of the plane profile of each panoramic image in three-dimensional space are normalized, to obtain the normalized plane profile of each panoramic image in three-dimensional space.
Here, optionally, the above camera position estimation step S110 may include the following operations:
1) using the geometric relationship of the at least one captured panoramic image, performing feature point matching between the panoramic images, and recording feature points that match each other across the panoramic images as matched feature points; and
2) for each panoramic image, reducing the reprojection error of the matched feature points on that panoramic image, to obtain the camera position at the time each panoramic image was captured and the three-dimensional point coordinates of the matched feature points on the panoramic image.
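As a concrete illustration of operation 2), the sketch below shows how a reprojection error for an equirectangular panorama might be computed. The projection convention (x right, y up, z forward, image origin at the top-left corner) and the function names are assumptions for illustration, not the patent's prescribed implementation; minimizing such an error over camera positions and 3D points is the classical bundle-adjustment formulation.

```python
import numpy as np

def project_to_panorama(point, cam_pos, width, height):
    """Project a 3D point into the pixel coordinates of an equirectangular
    panorama captured at cam_pos (assumed conventions: x right, y up,
    z forward, image origin at the top-left corner)."""
    d = np.asarray(point, dtype=float) - np.asarray(cam_pos, dtype=float)
    d = d / np.linalg.norm(d)
    lon = np.arctan2(d[0], d[2])   # longitude in [-pi, pi]
    lat = np.arcsin(d[1])          # latitude in [-pi/2, pi/2]
    u = (lon / (2.0 * np.pi) + 0.5) * width
    v = (0.5 - lat / np.pi) * height
    return np.array([u, v])

def reprojection_error(points3d, observations, cam_pos, width, height):
    """Mean pixel distance between observed matched feature points and the
    re-projections of their estimated 3D coordinates; operation 2) reduces
    this quantity over the camera positions and the 3D point coordinates."""
    errors = [np.linalg.norm(project_to_panorama(p, cam_pos, width, height)
                             - np.asarray(obs, dtype=float))
              for p, obs in zip(points3d, observations)]
    return float(np.mean(errors))
```

In a full pipeline this error would be summed over all panoramas and minimized jointly, e.g. with a nonlinear least-squares solver.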
Further optionally, the above single-image plane profile generation step S120 may include: determining, based on the feature similarity between pixels on the panoramic image, the edge pixels among the pixels whose contour features belong to a specific category on the panoramic image. Here, the feature similarity of two pixels may be the absolute value of the difference of the features of the two pixels, and the features of a pixel may include, for example, grayscale, color, and so on.
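A minimal sketch of edge-pixel detection by grayscale similarity follows, under the assumption that a pixel counts as an edge pixel when the absolute grayscale difference to its right or lower neighbour exceeds a threshold; the threshold value and the neighbourhood choice are illustrative assumptions.

```python
import numpy as np

def edge_pixels(gray, threshold=30):
    """Mark a pixel as an edge pixel when the absolute grayscale difference
    (the feature-similarity measure) to its right or lower neighbour
    exceeds `threshold` (an illustrative value)."""
    g = gray.astype(np.int32)        # avoid uint8 wrap-around in diffs
    dx = np.abs(np.diff(g, axis=1))  # difference to the right neighbour
    dy = np.abs(np.diff(g, axis=0))  # difference to the lower neighbour
    edges = np.zeros(g.shape, dtype=bool)
    edges[:, :-1] |= dx > threshold
    edges[:-1, :] |= dy > threshold
    return edges
```

The same scheme extends to color by taking the difference per channel; in practice the detected edge pixels would then be restricted to the pixels whose contour features belong to the specific category.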
Further optionally, the above scale normalization step S130 may include the following operations:
1) sorting, from small to large, the height values in all the three-dimensional point coordinates on the at least one panoramic image obtained in the camera position estimation step, and taking the median or mean of the height values near the front of the sorted order as the specific-category profile estimated height hc'; and
2) generating, from the plane profile of each panoramic image in three-dimensional space, the normalized plane profile of each panoramic image in three-dimensional space using the ratio of the specific-category profile assumed height hc to the specific-category profile estimated height hc'.
Here, the above specific-category profile assumed height hc may be an arbitrarily assumed height.
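Operations 1) and 2) might be sketched as follows; the fraction of "front" height values used for the median and the contour representation (an array of coordinates scaled uniformly) are assumptions for illustration.

```python
import numpy as np

def normalize_scale(heights, contour, assumed_height=1.0, frac=0.25):
    """Sort the height values of the matched 3D points from small to large,
    take the median of the front `frac` of them as the estimated height
    h_c', then rescale the contour by the ratio h_c / h_c' (assumed_height
    plays the role of the arbitrarily assumed height h_c; `frac` is an
    illustrative choice of how many 'front' values to use)."""
    hs = np.sort(np.asarray(heights, dtype=float))
    k = max(1, int(len(hs) * frac))
    estimated = float(np.median(hs[:k]))   # h_c'
    scale = assumed_height / estimated     # h_c / h_c'
    return np.asarray(contour, dtype=float) * scale, estimated
```

Because every panoramic image is rescaled by the same kind of ratio, the camera positions and the plane profiles end up on one common scale, which is what the subsequent stitching relies on.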
The above image processing process provides a high-resolution basis for subsequent model generation. Moreover, it provides the subsequent model generation with the normalized plane profile of each panoramic image in three-dimensional space, which simplifies the subsequent model generation work, reduces the processing time, and improves the processing efficiency.
Through the above image processing, the plane profile data required for model generation are provided for object modeling.
Fig. 2 shows a schematic flowchart of the overall process of image processing and object modeling according to an exemplary embodiment of the present invention.
Fig. 2 includes the image processing part described above and the object modeling part described below.
The object modeling method according to an exemplary embodiment of the present invention is described next with reference to the object modeling part of Fig. 2.
The object modeling process according to an exemplary embodiment of the present invention may include the following steps.
In the image processing step, image processing is performed on at least one panoramic image using any of the image processing methods described above, to obtain the normalized plane profile of each panoramic image in three-dimensional space.
In the multi-object stitching step S140, the multi-object plane profile is obtained by stitching based on the above normalized plane profiles of the panoramic images in three-dimensional space.
Further optionally, the above object modeling method may also include a single-object plane profile generation step S135, for obtaining the plane profile of each single three-dimensional object in three-dimensional space based on the normalized plane profiles of the panoramic images obtained in the above image processing step.
Further optionally, the above single-object plane profile generation step S135 may include:
1) for the at least one panoramic image, determining one by one, in the following way, whether several panoramic images belong to the same three-dimensional object: if the proportion of matched feature points between two panoramic images exceeds a specific ratio, determining that the two panoramic images belong to the same three-dimensional object; and
2) if it is determined that several panoramic images belong to the same three-dimensional object, taking, for the plane profiles of the same three-dimensional object obtained from these panoramic images, the union of these plane profiles as the plane profile of the three-dimensional object.
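A schematic sketch of this decision and fusion follows, assuming (purely for illustration) that plane profiles are represented as boolean occupancy grids on a common ground plane and that the "specific ratio" is taken relative to the smaller of the two feature sets; the patent leaves both representations open.

```python
import numpy as np

def same_object(matches, features_a, features_b, ratio=0.5):
    """Judge two panoramas to show the same three-dimensional object when
    the number of matched feature points exceeds `ratio` of the smaller
    feature set (how the 'specific ratio' is referenced is assumed here)."""
    return len(list(matches)) > ratio * min(len(list(features_a)),
                                            len(list(features_b)))

def union_of_profiles(masks):
    """Union of several plane profiles of the same object, each represented
    as a boolean occupancy grid on a common ground plane."""
    out = np.zeros_like(masks[0], dtype=bool)
    for m in masks:
        out |= m
    return out
```

With polygonal profiles instead of grids, the union could equivalently be computed with a polygon-clipping library; the grid form is used here only to keep the sketch self-contained.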
Further optionally, the multi-object stitching step S140 can also obtain the multi-object plane profile in three-dimensional space by stitching based on the plane profile of each single three-dimensional object in three-dimensional space.
That is, in the present invention, the multi-object plane profile in three-dimensional space may be stitched directly from the plane profiles of the panoramic images in three-dimensional space; alternatively, the plane profiles of the single three-dimensional objects in three-dimensional space may be obtained first, and the multi-object plane profile in three-dimensional space may then be stitched from the plane profiles of the single three-dimensional objects.
Further optionally, the above object modeling method may also include a 3D model generation step S150, which, after the multi-object stitching step S140, converts the stitched plane contours of the multiple objects in three-dimensional space into a 3D object model.
Further optionally, the 3D model generation step S150 may include the following operations:
1) Performing three-dimensional point interpolation inside the top plane contour of each of the stitched multi-object plane contours, and projecting all the resulting three-dimensional point coordinates in each top plane contour into the corresponding panoramic image coordinate system to obtain the top texture;
2) Performing three-dimensional point interpolation inside the bottom plane contour of each of the stitched multi-object plane contours, and projecting all the resulting three-dimensional point coordinates on each bottom plane contour into the corresponding panoramic image coordinate system to obtain the bottom texture;
3) Connecting the three-dimensional vertices at the same horizontal position between the top contour and the bottom contour to form the plane contours of the support portions, performing three-dimensional point interpolation inside the plane contours of the support portions, and projecting all the resulting three-dimensional point coordinates of each support portion contour into the corresponding panoramic image coordinate system to obtain the support portion texture; and
4) Generating the 3D texture model of the entire three-dimensional object based on the top texture, the bottom texture, and the support portion texture.
Further optionally, in the 3D model generation step S150, the height value in all the three-dimensional point coordinates in the top plane contour of each three-dimensional object, namely the estimated height hc' from the camera to the top of the corresponding three-dimensional object, is replaced with the estimated height hf' from the camera to the bottom of the corresponding three-dimensional object, while the length and width values in all the three-dimensional point coordinates in the top plane contour of each three-dimensional object remain unchanged, thereby obtaining the bottom plane contour of each corresponding three-dimensional object.
Wherein, the above "estimated height hc' from the camera to the top of the corresponding three-dimensional object" is obtained in the following manner: the height values in all the three-dimensional point coordinates on the at least one panoramic image obtained in the camera position estimation step are sorted from small to large, and the median or mean of the smaller height values is taken as the estimated height hc' from the camera to the top of the corresponding three-dimensional object.
Similarly, the above "estimated height hf' from the camera to the bottom of the corresponding three-dimensional object" is obtained in the following manner: the height values in all the three-dimensional point coordinates on the at least one panoramic image obtained in the camera position estimation step are sorted from small to large, and the median or mean of the larger height values is taken as the estimated height hf' from the camera to the bottom of the corresponding three-dimensional object.
In the following, for ease of understanding and description, each processing procedure of the present invention will be described in further detail by taking house image processing and modeling as an example.
In the example given here, the house may include multiple rooms, each room can be regarded as a three-dimensional object, and the modeling of the house can be regarded as the modeling of multiple three-dimensional objects. For example, the overall model of the house can be constructed by stitching the multiple rooms of the house, which serve as the multiple three-dimensional objects.
Regarding the terms in the above overview of the image processing and object modeling methods: for example, the "top" may in this example be the "ceiling" of a room, the "bottom" may be the "floor" of the room, and the "support portion" may in this example be a "wall". In addition, "the contour surrounded by the edge pixels among the pixels belonging to the specific category" may in this example refer to "the ceiling contour surrounded by the edge pixels among the pixels belonging to the ceiling". The "estimated height of the specific-category contour" may in this example be the "estimated height from the camera to the ceiling", and similarly, the "assumed height of the specific-category contour" may in this example be the "assumed height from the camera to the ceiling".
In the image processing method according to an exemplary embodiment of the present invention, based on at least one panoramic image shot in the house (a panoramic image corresponds to only one room, but multiple panoramic images may be shot in one room, i.e., one room may correspond to multiple panoramic images), the position of the panoramic camera that shot these images is estimated; then, based on the estimated camera position, the plane contour of each panoramic image is extracted, and the extracted plane contours are normalized to obtain the plane contours required for modeling.
Therefore, as shown in Figure 1, in step S110, the position of the panoramic camera that shot the at least one panoramic image taken in a room is estimated using the geometric relationship of these panoramic images.
In the present invention, optionally, this problem can be solved using a method based on multi-view geometry.
Specifically, the camera position estimation step S110 may, for example, include the following operations:
1) Performing feature point matching between these panoramic images, and recording the feature points that match each other in these images; and
2) For each panoramic image, finding the camera position of that panoramic image and the three-dimensional point coordinates of the matching feature points on it by reducing the reprojection error of the matching feature points on the panoramic image.
Regarding step 1) above: in the image processing field, an image feature point refers to a point where the gray value of the image changes drastically, or a point with large curvature on an image edge (i.e., the intersection of two edges). Image feature points reflect the essential characteristics of the image and can identify the target object in the image.
How to efficiently and accurately match the same object in images taken from two different viewpoints is the first step in many computer vision applications. Although an image exists in a computer in the form of a gray-level matrix, the same object in two images cannot be accurately found using image gray levels alone. This is because the gray level is affected by illumination, and when the viewing angle of the image changes, the gray value of the same object also changes accordingly. It is therefore necessary to find features that remain invariant when the camera moves and rotates (i.e., when the viewing angle changes), and to use these invariant features to find the same object in images taken from different viewpoints.
Therefore, in order to better perform image matching, representative regions need to be selected in the image, for example: corner points, edges, and some blocks in the image, among which corner points have the highest distinguishability. In many computer vision processing tasks, corner points are usually extracted as feature points to match images; methods that can be used include, for example, SFM (Structure from Motion) and SLAM (Simultaneous Localization and Mapping).
However, simple corner points cannot fully meet the demand. For example, what the camera sees as a corner point from a distance may not be a corner point up close; or a corner point may change after the camera rotates. To this end, computer vision researchers have designed many more stable feature points that do not change with the movement or rotation of the camera or with changes in illumination; methods that can be used include, for example, SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features).
The feature point of an image is composed of two parts: the keypoint (Keypoint) and the descriptor (Descriptor). The keypoint refers to the position of the feature point in the image; some keypoints also carry direction and scale information. The descriptor is usually a vector that describes the information of the pixels around the keypoint. In general, during matching, as long as the descriptors of two feature points are close in vector space, they can be considered the same feature point.
Feature point matching usually requires the following three steps: 1) extracting the keypoints in the image; 2) computing the descriptors of the feature points according to the obtained keypoint positions; and 3) matching according to the descriptors of the feature points.
Optionally, the relevant processing of feature point matching in this step can be implemented, for example, using the open-source computer vision library OpenCV. For brevity, and so as not to obscure the subject of the present invention, the more detailed processing of this part is not repeated here.
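As an illustration of matching step 3) above, descriptor matching can be sketched as a brute-force nearest-neighbor search in descriptor space. This is a minimal sketch with hypothetical toy descriptors and a hypothetical distance threshold, not the OpenCV implementation the text refers to:

```python
import numpy as np

def match_descriptors(desc1, desc2, max_dist=0.7):
    """Brute-force matching: for each descriptor in desc1, find the
    nearest descriptor in desc2 by Euclidean distance; accept the pair
    only if that distance is below max_dist (hypothetical threshold)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches

# Toy 2-D descriptors (real SIFT descriptors are 128-D).
d1 = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
d2 = np.array([[1.1, 0.9], [0.1, -0.1]])
print(match_descriptors(d1, d2))  # → [(0, 1), (1, 0)]
```

The third descriptor of `d1` has no close counterpart in `d2` and is correctly left unmatched, which is the role of the distance threshold.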
After the feature point matching between these panoramic images is performed, the feature points that match each other in these panoramic images (also called "matching feature points") are recorded. The recording of matching feature points can, for example, be carried out as follows.
For example, if feature point a on image 1 and feature point b on image 2 match, feature point b on image 2 in turn matches feature point c on image 3, and feature point c on image 3 in turn matches feature point d on image 4, then a piece of feature point matching data (a, b, c, d) (also called a "feature point track") can be recorded. In this way, the record data of the feature points that match each other in these panoramic images are obtained.
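The chaining of pairwise matches into tracks such as (a, b, c, d) can be sketched with a small union-find over (image, feature) observations. The data layout here is an assumption for illustration; only the chaining idea comes from the text:

```python
def build_tracks(pairwise_matches):
    """Chain pairwise matches (img_a, feat_a, img_b, feat_b) into tracks.
    A track is the set of (image, feature) observations of one 3D point."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for ia, fa, ib, fb in pairwise_matches:
        union((ia, fa), (ib, fb))
    tracks = {}
    for node in parent:
        tracks.setdefault(find(node), []).append(node)
    return [sorted(t) for t in tracks.values()]

# a on image 1 ↔ b on image 2, b ↔ c on image 3, c ↔ d on image 4
matches = [(1, 'a', 2, 'b'), (2, 'b', 3, 'c'), (3, 'c', 4, 'd')]
print(build_tracks(matches))  # → [[(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]]
```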
Regarding step 2) above: image reprojection means projecting a reference image onto an arbitrary viewpoint to generate a new image; that is, image reprojection can change the viewing direction of the generated image.
Specifically, in the present invention, image reprojection means projecting the three-dimensional point coordinates corresponding to a feature point p1 on image 1 into another image 2 using the current camera parameters. The position difference between the resulting projected point q2 on image 2 and the matching feature point p2 of feature point p1 in image 2 constitutes the reprojection error (Reprojection Error). The matching feature point p2 in image 2 is the actual position, while the projected point q2 obtained by reprojection is the estimated position; the camera position is solved by minimizing the position difference between the projected point q2 and the matching feature point p2 as far as possible, that is, by making q2 coincide with p2 as much as possible.
The variables included in the objective function for optimizing (reducing) the reprojection error include the camera positions and the three-dimensional coordinates of the feature points; the camera positions and the three-dimensional coordinates of the feature points are obtained in the process of gradually reducing (optimizing) the reprojection error.
Optionally, in the present invention, the reprojection error can be reduced, and the optimization goal thus achieved, by combining a gradient descent algorithm with the Delaunay triangulation algorithm (Delaunay Triangulation).
When the gradient descent algorithm is used, the three-dimensional point coordinates of the matching feature points are treated as constants and the camera positions as variables; conversely, when the Delaunay triangulation algorithm is used, the three-dimensional point coordinates of the matching feature points are treated as variables and the camera positions as constants.
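The alternating scheme above can be sketched in miniature. The sketch below holds the 3D points constant and descends on a camera translation only, and it substitutes a trivial orthographic projection for the panoramic projection used in the text, so every modeling choice here (projection, learning rate, iteration count) is a stand-in for illustration:

```python
import numpy as np

def reproj_error(points3d, observed2d, cam_t):
    """Sum of squared differences between observed image points and 3D
    points projected by a toy orthographic camera translated by cam_t."""
    projected = points3d[:, :2] + cam_t
    return float(np.sum((projected - observed2d) ** 2))

# Fix the 3D points, descend on the camera translation (toy data).
pts = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0]])
obs = np.array([[0.5, 0.2], [1.5, 0.2]])
t = np.zeros(2)
for _ in range(100):
    grad = 2 * np.sum((pts[:, :2] + t) - obs, axis=0)  # d(error)/d(t)
    t -= 0.1 * grad
print(np.round(t, 3))  # → [0.5 0.2]
```

In a real pipeline both camera poses and point coordinates enter the objective, which is why the text alternates the roles of constants and variables between the two algorithms.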
Optionally, in the present invention, a progressive solution can be used to improve the precision of the solved camera positions and three-dimensional point coordinates; that is, in the solving process, the camera position and the three-dimensional coordinates of the matching feature points are solved by adding one image at a time. Methods of progressive solution include, for example, ISFM (Incremental SFM).
In addition, still optionally, bundle adjustment can be used to further reduce the reprojection error. Specifically, after the processing of reducing the reprojection error to obtain the camera position and three-dimensional point coordinates has been carried out on each panoramic image, bundle adjustment can finally be applied uniformly to optimize all the camera positions and all the three-dimensional point coordinates at the same time. Alternatively, during the processing of reducing the reprojection error to obtain the camera positions and three-dimensional point coordinates, a bundle adjustment process can be added after the camera position and three-dimensional point coordinates of any panoramic image are obtained, to optimize the obtained camera position and three-dimensional point coordinates.
Here, bundle adjustment refers to optimizing the positions of all the cameras and all the three-dimensional point coordinates simultaneously, which differs from the progressive solution, in which only the current camera position and the three-dimensional point coordinates on the current image are optimized at each step.
In addition, besides the progressive solution described above, a global solution method can also be used.
In step S120, the plane contour of each panoramic image in three-dimensional space is generated based on the contour surrounded by the edge pixels belonging to the ceiling on that panoramic image.
Since the ceiling must be above the camera, the topmost pixels of a shot panoramic image must belong to the ceiling. Furthermore, since most of the pixels belonging to the ceiling have similar features, all the pixels belonging to the ceiling can finally be obtained according to the feature similarity of the pixels.
For example, all the pixels in the first row of the panoramic image are first regarded as pixels belonging to the ceiling. For each pixel in the second row, the feature similarity between it and the pixel in the same column of the first row is calculated (features such as color and gray level can be used here; the feature similarity of two pixels can be, for example, the absolute value of the difference of the features of the two pixels, such as the difference in gray level or color). If the feature similarity is within some threshold (for 0-255 gray values, the threshold may, for example, be set to 10), then that pixel also belongs to the ceiling; the similarity between the third row and the second row in that column is then calculated, then the similarity between the fourth row and the third row, and so on, until the feature similarity exceeds the threshold. The pixel position at that time is an edge pixel of the ceiling.
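The column-by-column walk described above can be sketched as follows, using gray level as the feature and the example threshold of 10 from the text; the toy image values are invented for illustration:

```python
import numpy as np

def ceiling_edge_rows(gray, threshold=10):
    """For each column of a gray-level image, walk down from the top row
    and stop at the first row whose value differs from the row above by
    more than `threshold`; that row is taken as the ceiling edge pixel."""
    h, w = gray.shape
    edges = []
    for col in range(w):
        row = 1
        while row < h and abs(int(gray[row, col]) - int(gray[row - 1, col])) <= threshold:
            row += 1
        edges.append(row)  # first row that broke the similarity (or h)
    return edges

# Toy 5x3 image: ceiling (value 200) ends at a different row per column.
img = np.array([[200, 200, 200],
                [200, 200, 200],
                [200,  50, 200],
                [ 50,  50, 200],
                [ 50,  50,  50]], dtype=np.uint8)
print(ceiling_edge_rows(img))  # → [3, 2, 4]
```

The cast to `int` avoids uint8 wraparound when subtracting pixel values.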
The edge pixels of the ceiling form the edge of the ceiling; therefore, projecting these edge pixels into three-dimensional space forms the plane contour of the ceiling.
The projection of pixels into three-dimensional space is described below.
Assume that a panoramic image has width W and height H, and that the coordinates of an obtained ceiling edge pixel c in the image coordinate system are (pc, qc). Since the panoramic image is obtained by spherical projection, it can be expressed in the spherical coordinate system as (θc, φc), where θc ∈ [-π, π] is the longitude and φc ∈ [-π/2, π/2] is the latitude.
The relationship between the spherical coordinate system and the image coordinate system can be obtained by the following Formula 1:
The following Formula 2 is used to project the coordinates (θc, φc) of the ceiling edge pixel c in the spherical coordinate system to the three-dimensional point coordinates (xc, yc, zc) on the three-dimensional plane:
Here, the term "image coordinate system" refers to the coordinate system in which the image pixels lie, and is mainly used to describe the positions of pixels in the image. Accordingly, the panoramic image coordinate system means the coordinate system in which the pixels of the panoramic image lie, and is mainly used to describe the positions of pixels in the panoramic image.
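Formulas 1 and 2 are not reproduced in this text, so the sketch below assumes the standard equirectangular convention for pixel-to-sphere mapping and a simple ray-plane intersection for the sphere-to-plane projection; the exact formulas of the patent may differ:

```python
import math

def pixel_to_sphere(p, q, W, H):
    """Assumed equirectangular convention: longitude theta in [-pi, pi],
    latitude phi in [-pi/2, pi/2] (a stand-in for Formula 1)."""
    theta = (2.0 * p / W - 1.0) * math.pi
    phi = (0.5 - q / H) * math.pi
    return theta, phi

def sphere_to_plane(theta, phi, hc):
    """Intersect the viewing ray with the horizontal ceiling plane at the
    assumed camera-to-ceiling height hc (a stand-in for Formula 2)."""
    r = hc / math.tan(phi)          # horizontal distance from the camera
    x = r * math.sin(theta)
    z = r * math.cos(theta)
    return x, hc, z

theta, phi = pixel_to_sphere(1024, 128, 2048, 1024)
x, y, z = sphere_to_plane(theta, phi, hc=100.0)
print(round(phi, 4), round(y, 1))  # → 1.1781 100.0
```

Note that every ceiling edge pixel lands on the plane y = hc, which is exactly why a single assumed height suffices at this stage.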
Note that the above merely gives one example of generating the plane contour of a panoramic image in three-dimensional space based on the similarity of the ceiling feature points on the panoramic image; the methods that can be used by the present invention are not limited to this example.
Since the ceiling can be regarded as a plane, it can be considered that every pixel on the ceiling edge has a uniform height from the camera, which can be described as "the height from the camera to the ceiling".
Here, since a panoramic camera is generally supported on a tripod, its height is fixed, so it can also be considered that the height from the camera to the ceiling and the height from the camera to the floor are fixed.
For the plane contour in three-dimensional space obtained in this step, a height value can be assumed for each three-dimensional point on the contour; for example, the height from the camera to the ceiling can be assumed to be hc, and this assumed height can be an arbitrary value, such as 100 (the true height from the camera to the ceiling can be found by subsequent processing). To avoid confusion, the height hc from the camera to the ceiling assumed here is hereinafter referred to as "the assumed height hc from the camera to the ceiling".
In the above embodiments, the plane contour of the image can be obtained automatically based on the panoramic image, without manual participation in production and without the use of expensive 3D scanning equipment.
In step S130, the scale of the camera positions when shooting each panoramic image obtained in step S110 and the scale of the three-dimensional-space plane contours of the panoramic images obtained in step S120 are normalized.
On the one hand, since the scale of the camera position estimation in step S110 is uncertain, the true height from the camera to the ceiling contour cannot be determined. On the other hand, the assumed height hc from the camera to the ceiling is used when obtaining the three-dimensional-space plane contour of a room in step S120. Therefore, the scale of the obtained camera positions and the scale of the three-dimensional-space plane contours of the rooms are not unified, which causes certain difficulty for the subsequent room contour stitching.
Optionally, in this step, the height coordinate values (the values on the Y axis) of all the three-dimensional points obtained in step S110 are sorted from small to large, and the median (or the mean, or another reasonable statistic) of the first a sorted height coordinate values is taken as the estimated height hc' from the camera to the ceiling.
Finally, a scale-normalized single-room plane contour is regenerated using the ratio of the assumed height hc from the camera to the ceiling to the estimated height hc'.
For example, the ratio of the assumed height hc to the estimated height hc' can be multiplied by the coordinates of the boundary points on the plane contour obtained in step S120 to obtain the coordinates of the boundary points on the scale-normalized plane contour, thereby obtaining the scale-normalized plane contour.
On the other hand, the median (or the mean, or another reasonable statistic) of the last b sorted height coordinate values can also be taken as the estimated height hf' from the camera to the floor (this estimated height will be used in the subsequent 3D model generation step, among others).
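The height estimation and rescaling above can be sketched as follows. The choices of a and b, the toy height values, and the direction of the rescaling ratio are all assumptions for illustration (the exact ratio convention of the text may differ):

```python
import numpy as np

def estimate_heights(y_values, a=5, b=5):
    """Sort the height coordinates and take the median of the a smallest
    as the camera-to-ceiling estimate hc', and the median of the b
    largest as the camera-to-floor estimate hf'."""
    ys = np.sort(np.asarray(y_values, dtype=float))
    hc_est = float(np.median(ys[:a]))
    hf_est = float(np.median(ys[-b:]))
    return hc_est, hf_est

def normalize_contour(contour, hc_assumed, hc_est):
    """Rescale a contour built at the assumed camera-to-ceiling height
    into the scale of the estimated camera positions (the ratio
    direction here is an assumed convention)."""
    return np.asarray(contour, dtype=float) * (hc_est / hc_assumed)

ys = [1.0, 1.1, 0.9, 1.0, 1.0, 4.1, 3.9, 4.0, 4.0, 4.0]
hc_est, hf_est = estimate_heights(ys)
print(hc_est, hf_est)  # → 1.0 4.0
contour = [[2.0, 100.0, 2.0], [-2.0, 100.0, 2.0]]
print(normalize_contour(contour, hc_assumed=100.0, hc_est=hc_est)[0][1])
```

After rescaling, the contour's height coordinate matches the camera-scale ceiling height, so camera positions and room contours share one scale.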
The above image processing procedure provides a high-resolution basis for the generation of subsequent models. Moreover, the above image processing procedure provides the subsequent model generation with the scale-normalized plane contour of each panoramic image in three-dimensional space, which simplifies the subsequent model generation work, reduces the processing time, and improves the processing efficiency.
With the plane contour data required for model generation provided by the above image processing, the modeling method will next be described by way of example.
Optionally, in step S135, the plane contour of each single room can be obtained based on the scale-normalized plane contours of the panoramic images.
In the present invention, the corresponding plane contour in three-dimensional space obtained from one panoramic image can be called a "single-image plane contour".
In this case, since the shot panoramic images may include multiple panoramic images of the same room, the same room corresponds to multiple plane contours in three-dimensional space. Among the multi-room plane contours obtained by the subsequent multi-room stitching, the plane contours obtained from the different panoramic images of one or more rooms may fail to coincide, causing the stitched contours to overlap or become confused. Therefore, the fusion of same-room contours (which can be called "single-room fusion") can be considered first, to avoid this phenomenon. Moreover, the fusion of same-room contours can also eliminate the phenomenon of incomplete single-room contours.
For the above situation in which single-room fusion is needed, the present inventors give an exemplary method below.
First, it is determined whether two panoramic images belong to the same room.
Here, a method based on feature point matching can be used: if the matching feature points between two panoramic images exceed some proportion (a specific ratio, such as 50%), it can be determined that the two panoramic images belong to the same room.
Then, if several panoramic images belong to the same room, the union of the plane contours of the same room obtained from the different panoramic images is taken as the single-room plane contour (one contour per room in three-dimensional space, avoiding the situation of multiple single-image contours for one room), thereby achieving the fusion of same-room contours.
The ratio of matching feature points can be defined in the following way: assume that image 1 has n1 feature points, image 2 has n2 feature points, and the number of matching feature points of the two images is n. Then the ratio of matching feature points can be n/min(n1, n2).
Optionally, if this ratio is greater than, for example, 50%, the two images can be considered to be of the same room. Here, the manner of setting the ratio of matching feature points and the actual size of the ratio can be determined according to the actual situation, by experiment, or empirically; the present invention places no additional limitation on this.
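The ratio test just described is a one-liner; the sketch below uses the 50% example threshold from the text, with invented feature counts for illustration:

```python
def same_room(n1, n2, n_matches, ratio_threshold=0.5):
    """Decide whether two panoramic images belong to the same room by
    the ratio n / min(n1, n2) of matching feature points (the 50%
    threshold is the example value given in the text)."""
    return n_matches / min(n1, n2) > ratio_threshold

print(same_room(200, 120, 80))   # 80/120 ≈ 0.67 → True
print(same_room(200, 120, 30))   # 30/120 = 0.25 → False
```

Dividing by min(n1, n2) rather than by the total keeps the test meaningful when one image has far more detected features than the other.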
As described above, in the present invention, for the above at least one panoramic image, whether several panoramic images belong to the same room can be determined in the following single-room fusion manner: if the matching feature points between two panoramic images exceed a specific ratio, it can be determined that the two panoramic images belong to the same room.
If it is determined that several panoramic images belong to the same room, the union of the plane contours of the same room obtained from these panoramic images is taken as the plane contour of the room.
In addition, after the fusion of same-room contours, the obtained contour edges may contain noise; for example, the edges may not be straight, and adjacent edges may not be perpendicular. Therefore, the present invention can further perform orthogonal fitting on the contour of each room to obtain a more reasonable room plane contour.
Through the above optimization processing carried out specifically for single rooms, such as single-room fusion and/or orthogonal fitting, a more accurate single-room plane contour can be obtained, which is conducive to the generation of the subsequent 2D and 3D models and improves the resolution and accuracy of the models.
Note that this step is not a necessary step for two-dimensional or three-dimensional modeling of a house, but a preferred processing manner that can improve the accuracy of the model.
In step S140, the contours of the multiple rooms are stitched based on the camera positions estimated in step S110 and the scale-normalized plane contour of each room obtained in step S130.
In this step, the stitching of the scale-normalized plane contours of the rooms into a multi-room contour can be realized manually.
Alternatively, the multi-room stitching can also be realized in an automatic manner; an automated multi-room stitching scheme proposed by an inventor of the present invention is given below.
Optionally, in this step, the estimated camera positions can be used to perform rotation and translation operations on the three-dimensional point coordinates of each scale-normalized room contour, so as to unify the three-dimensional point coordinates of the rooms into the same coordinate system, thereby realizing the stitching of the multi-room plane contours.
Assume that there are contours of N rooms and that the p-th three-dimensional point of the n-th room contour is denoted accordingly; the camera position of that room is expressed as {Rn, tn}, where Rn is the rotation matrix representing the rotation parameters of the camera position and tn is the translation vector representing the translation parameters of the camera position.
At this point, the camera position of the first room can be selected as the reference coordinate system, because the room contours obtained so far are contour positions in their respective coordinate systems and need to be unified into one coordinate system, so a reference coordinate system needs to be selected. Specifically, the coordinate system in which the camera position of the first room lies can be selected as the reference coordinate system. The contour three-dimensional points of the other rooms can then be unified into this coordinate system by the following Formula 3:
By transforming all the scale-normalized contour three-dimensional points of the rooms other than the first room (for example, the three-dimensional points on the ceiling edges, wall edges, and floor edges) by Formula 3, the three-dimensional points of all the rooms can be unified into the same coordinate system (that is, the reference coordinate system of the first room), and the stitching of the multi-room plane contours can thus be realized.
Here, the coordinate system of any room can be selected as the reference coordinate system; the present invention places no limitation on this, because what matters in the present invention is the relative positional relationship, not the absolute positional relationship.
Here, the multi-room contour obtained after the multi-room stitching of this step can be output as the 2D model of the house (for example, a 2D floor plan).
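Formula 3 is not reproduced in this text, so the rigid-transform sketch below assumes the common convention "world point = R·p + t" for each room's camera pose and maps points into the reference room's frame by the inverse transform; the patent's exact formula may differ:

```python
import numpy as np

def to_reference(points, R_n, t_n, R_ref, t_ref):
    """Map one room's contour points (rows of an Nx3 array) into the
    reference room's coordinate system, assuming world = R @ p + t."""
    world = points @ R_n.T + t_n          # room n frame -> world
    return (world - t_ref) @ R_ref        # world -> reference frame (R_ref^T applied row-wise)

# Room n is rotated 90 degrees about the vertical axis and shifted by 1 in x;
# the reference room's pose is the identity (toy values).
theta = np.pi / 2
R_n = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
t_n = np.array([1.0, 0.0, 0.0])
R_ref, t_ref = np.eye(3), np.zeros(3)
pts = np.array([[1.0, 0.0, 0.0]])
print(np.round(to_reference(pts, R_n, t_n, R_ref, t_ref), 3))  # → [[1. 1. 0.]]
```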
Optionally, the contours of the multiple rooms can also be corrected in step S145.
Note that this step is likewise not a necessary step for two-dimensional or three-dimensional modeling of a house, but a preferred processing manner that can improve the accuracy of the model.
In the present invention, after the contours of the multiple rooms are stitched, the contours of the multiple rooms can be further corrected to obtain more accurate room contours.
Due to the influence of the single-image plane contour extraction accuracy and the camera position estimation accuracy, the contours of adjacent rooms may overlap or leave gaps after stitching; therefore, contour correction can be further carried out for these two cases.
The correction method can, for example, be as follows. First, the distance between every pair of adjacent contour edges is calculated (these adjacent edges should theoretically coincide, that is, should theoretically be regarded as one coincident edge of the multi-room contour). If the distance is less than some threshold, it can be determined that the two edges are adjacent; the contours can then be offset accordingly so that the distance between the adjacent edges becomes 0 (so that they coincide and become a coincident edge), thereby correcting the overlap or gap between the adjacent edges.
As an example of the above threshold, the average length L of the adjacent edges regarded as one coincident edge can be calculated, and some proportion of the average length, for example 0.2*L, can be taken as the distance threshold.
Note that the threshold exemplified above is given merely for ease of understanding; in fact, the present invention places no additional limitation on the threshold, which can be determined by experiment and experience.
As a result, the multi-room contour after the above single-room contour fusion and multi-room contour correction can serve as a complete and accurate 2D floor plan of the house (the 2D model of the house).
Optionally, in step S150, the generated multi-room plane contours can be further converted into a 3D model of the house.
First, for the ceiling plane contours among the multi-room plane contours obtained in the preceding steps, three-dimensional point interpolation is performed in their interior, and all the three-dimensional point coordinates are then projected into the corresponding panoramic images to obtain the ceiling texture (color values).
Here, the method for three-dimensional point interpolation will be illustrated.Such as, it is assumed that obtained more room face profiles
Ceiling profile is a rectangle, it is assumed that its a length of H, width W can then be grown and be respectively divided into N number of interval with width, can be obtained
N*N interpolation point in total.Then, some vertex (assuming that the three-dimensional point coordinate on the vertex is (x, y, z)) of the rectangle may be selected
As origin, then the coordinate of available this N*N point is followed successively by (x+H/N, y, z), (x+2*H/N, y, z) ... (x, y+W/N,
z)(x,y+2*W/N,z),…(x+H/N,y+W/N,z)…).It can be seen that obtaining contoured interior after three-dimensional point interpolation
Dense three-dimensional point coordinate.
Note that is above for the purpose of understanding and gives the specific example of a three-dimensional point interpolation, in fact,
The method of the available three-dimensional point interpolation of the present invention can have very much, however it is not limited to the example.
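The rectangular grid interpolation above can be sketched as follows. The text's example varies two in-plane coordinates; here the height axis is taken to be y, consistent with the rest of this document, and the rectangle dimensions are invented:

```python
def interpolate_rectangle(origin, length, width, n):
    """Generate the n*n grid of interior interpolation points of a
    rectangular (here: ceiling) contour lying in the plane y = origin[1]."""
    x0, y0, z0 = origin
    return [(x0 + i * length / n, y0, z0 + j * width / n)
            for i in range(1, n + 1) for j in range(1, n + 1)]

pts = interpolate_rectangle((0.0, 100.0, 0.0), length=4.0, width=2.0, n=2)
print(len(pts))   # → 4
print(pts[0])     # → (2.0, 100.0, 1.0)
```

Each interpolated point keeps the contour's height and steps through the interior in increments of length/n and width/n, which is the dense point set the text describes.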
In addition, a specific projection method can, for example, be as follows. Suppose the three-dimensional point coordinate after interpolation is (xi, yi, zi), and the longitude and latitude of its projection on the panoramic image is (θi, φi); the projection can then be expressed by Formula 4.
After the longitude and latitude are obtained through this formula, the coordinate of the three-dimensional point in the panoramic image plane can be further obtained according to Formula 1, and the color value at that point can be used as the texture of the three-dimensional point.
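Formula 4 and Formula 1 themselves are not reproduced in this text; the following is a commonly used spherical (longitude/latitude) projection onto an equirectangular panorama, given purely as an assumed sketch of this kind of mapping, not as the patent's actual formulas:

```python
import math

def project_to_panorama(point, width, height):
    """Project a 3D point (xi, yi, zi) in the camera coordinate system
    onto an equirectangular panoramic image of size width x height.
    The longitude/latitude and pixel formulas below are a standard
    spherical projection assumed for illustration."""
    x, y, z = point
    theta = math.atan2(x, z)                # longitude in (-pi, pi]
    phi = math.atan2(y, math.hypot(x, z))   # latitude in [-pi/2, pi/2]
    u = (theta / (2.0 * math.pi) + 0.5) * width   # image column
    v = (0.5 - phi / math.pi) * height            # image row
    return u, v
```

Under this convention, a point straight ahead of the camera (0, 0, 1) lands at the center of the panorama.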
For most scenes, the contour of the ceiling can be assumed to be parallel to and identical with the contour of the floor. Therefore, using the corrected ceiling plane contour of each room obtained above, together with the estimated height hf' of the camera from the floor obtained above, the three-dimensional points of the multi-room floor plane contours can likewise be generated by Formula 2.
Here, the shape of the floor plane contour is assumed to be the same as that of the ceiling; that is, the three-dimensional coordinates x and z in the horizontal plane are the same, and only the height, i.e., the y value in the vertical direction, differs (for example, the ceiling plane contour is above the camera while the floor is below the camera, so the heights are different). Therefore, it is only necessary to replace the y value in the three-dimensional point coordinates of the ceiling contour obtained above (the estimated height hc' of the camera from the ceiling) with the estimated height hf' of the camera from the floor.
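A minimal sketch of this replacement step (assuming the y column holds the vertical coordinate; the sign convention for heights below the camera is not spelled out in the text, so the caller supplies the signed value):

```python
import numpy as np

def floor_from_ceiling(ceiling_points, h_f):
    """Produce the floor plane contour points from the ceiling contour
    points: x and z stay unchanged, and the vertical y value is
    replaced with the (signed) estimated camera-to-floor height h_f'."""
    floor_points = ceiling_points.copy()
    floor_points[:, 1] = h_f  # replace the vertical coordinate only
    return floor_points
```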
Similarly to the three-dimensional point interpolation of the ceiling plane contours, three-dimensional point interpolation is performed inside the floor plane contours, and the results are then projected into the corresponding panoramic image using Formula 4, so as to obtain the floor texture.
Then, the three-dimensional vertices at the same horizontal positions on the ceiling contour and the floor contour are connected, so as to form the plane contours of multiple walls. Similarly, three-dimensional point interpolation is performed inside these plane contours, and the results are then projected into the corresponding panoramic image using Formula 4, so as to obtain the wall textures.
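The wall construction step can be sketched as follows: each pair of adjacent contour vertices, together with the matching floor vertices, forms one quadrilateral wall face (the vertex ordering is an illustrative choice):

```python
def wall_faces(ceiling_pts, floor_pts):
    """Connect ceiling and floor vertices at the same horizontal
    positions into quadrilateral wall faces, one face per edge of
    the (closed) contour polygons."""
    n = len(ceiling_pts)
    faces = []
    for i in range(n):
        j = (i + 1) % n  # wrap around to close the contour
        faces.append((ceiling_pts[i], ceiling_pts[j],
                      floor_pts[j], floor_pts[i]))
    return faces
```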
As a result, a complete 3D texture model of the house is generated.
The house modeling method of the present invention can thus effectively improve the resolution and accuracy of the generated model.
In addition, it should be pointed out that the above description takes house modeling as an example only for ease of understanding and description when explaining the image-based modeling method of the present invention. In fact, the present invention is not limited to the application scenario of house modeling, but is applicable to various scenarios of image-based modeling, for example modeling a vehicle to realize scenarios such as VR (virtual reality) vehicle viewing. The present invention in fact provides an innovative and comprehensive image processing scheme.
Fig. 3 shows a schematic block diagram of an image processing device according to an exemplary embodiment of the present invention.
As shown in Fig. 3, the image processing device 100 according to an exemplary embodiment of the present invention may include a camera position estimation means 110, a single-image plane contour generation means 120, and a scale normalization means 130.
The camera position estimation means 110 can be configured to use the geometric relationship of at least one captured panoramic image to estimate the position of the panorama camera at the time each panoramic image was captured and the three-dimensional point coordinates of the matching feature points on each panoramic image, wherein each panoramic image is captured for one three-dimensional object, and each three-dimensional object corresponds to one or more panoramic images.
The single-image plane contour generation means 120 can be configured to, for each panoramic image, generate the plane contour of that panoramic image in three-dimensional space based on the contour enclosed by the edge pixels, among the pixels on the panoramic image, whose contour features belong to a specific category.
The scale normalization means 130 can be configured to normalize the scale of the estimated position of the panorama camera at the time each panoramic image was captured and the scale of the plane contour of each panoramic image in three-dimensional space, so as to obtain the normalized plane contour of each panoramic image in three-dimensional space.
Optionally, the single-image plane contour generation means 120 may be further configured to determine, based on the feature similarity between pixels on the panoramic image, the edge pixels on the panoramic image whose contour features belong to the specific category.
Here, the feature similarity of two pixels is the absolute value of the difference of the features of the two pixels, and the features of a pixel may include grayscale, color, and so on.
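A simple sketch of such a similarity test, using grayscale as the pixel feature and comparing each pixel against its immediate neighbors (both the feature choice and the threshold value are illustrative; the text fixes neither):

```python
import numpy as np

def edge_pixels(gray, thresh):
    """Mark a pixel as an edge pixel when the absolute grayscale
    difference (the feature similarity described in the text) to its
    right or lower neighbor exceeds `thresh`."""
    g = gray.astype(float)
    edges = np.zeros(g.shape, dtype=bool)
    edges[:, :-1] |= np.abs(np.diff(g, axis=1)) > thresh  # horizontal
    edges[:-1, :] |= np.abs(np.diff(g, axis=0)) > thresh  # vertical
    return edges
```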
Optionally, the scale normalization means 130 may be further configured to sort, from small to large, the height values in all the three-dimensional point coordinates on the at least one panoramic image obtained by the camera position estimation means, and to take the median or mean of the top-ranked height values as the specific-category contour estimated height hc', i.e., the estimated height hc' of the camera from the top of the corresponding three-dimensional object; and to use the ratio of the assumed height hc of the camera from the top of the corresponding three-dimensional object to the estimated height hc' of the camera from the top of the corresponding three-dimensional object to generate, from the plane contour of each panoramic image in three-dimensional space, the normalized plane contour of each panoramic image.
Here, the assumed height hc of the camera from the top of the corresponding three-dimensional object is an arbitrarily assumed height.
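The height estimation and rescaling above can be sketched as follows. The fraction of the sorted heights taken for the median is a hypothetical parameter (the text says only "the top-ranked height values"), and the assumption that the second coordinate holds the height is likewise an illustrative convention:

```python
import numpy as np

def normalize_scale(contour_pts, sparse_pts, h_c, top_fraction=0.1):
    """Estimate h_c' as the median of the smallest height values among
    the reconstructed sparse 3D points, then rescale the plane contour
    by the ratio h_c / h_c', where h_c is the arbitrarily assumed
    camera-to-ceiling height."""
    heights = np.sort(sparse_pts[:, 1])           # ascending order
    k = max(1, int(len(heights) * top_fraction))  # top-ranked values
    h_c_est = float(np.median(heights[:k]))
    return contour_pts * (h_c / h_c_est), h_c_est
```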
In the present invention, by capturing panoramic images of rooms with a panorama camera, high-resolution captured images are provided for three-dimensional object modeling (such as house modeling).
Further, in the present invention, an efficient image processing device is used to provide high-resolution modeling preparation data for three-dimensional object modeling (such as house modeling); moreover, the provided modeling preparation data can simplify the subsequent model generation process.
Fig. 4 shows a schematic block diagram of an object modeling device according to an exemplary embodiment of the present invention.
As shown in Fig. 4, the object modeling device 1000 may include the image processing device 100 shown in Fig. 3 and a multi-object splicing means 140.
The image processing device 100 can be configured to process at least one panoramic image and generate the normalized plane contour of each panoramic image in three-dimensional space.
The multi-object splicing means 140 can be configured to splice the normalized plane contours of the panoramic images, so as to obtain multi-object plane contours.
Optionally, the object modeling device 1000 may further include a single-object plane contour generation means 135, which can be configured to obtain the plane contour of each single room from the normalized plane contours of the panoramic images.
Optionally, the single-object plane contour generation means 135 may be further configured to determine, for the at least one panoramic image, one by one in the following manner, whether several panoramic images belong to the same three-dimensional object: if there are more than a specific ratio of matching feature points between two panoramic images, the two panoramic images can be determined to belong to the same three-dimensional object; and, if it is determined that several panoramic images belong to the same three-dimensional object, for the plane contours of the same three-dimensional object obtained from the several panoramic images, the union of these plane contours is taken as the plane contour of the three-dimensional object in three-dimensional space.
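The decision rule above can be sketched as follows; both the default value of `ratio` and the use of the smaller feature count as the base are illustrative choices, as the text says only "more than a specific ratio of matching feature points":

```python
def belong_to_same_object(num_matches, num_feats_a, num_feats_b, ratio=0.5):
    """Decide whether two panoramic images belong to the same 3D
    object: true when the number of matched feature points exceeds
    `ratio` of the smaller of the two images' feature counts."""
    base = min(num_feats_a, num_feats_b)
    return base > 0 and num_matches > ratio * base
```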
Further optionally, the multi-object splicing means 140 may be further configured to splice the plane contours of the single three-dimensional objects in three-dimensional space generated by the single-object plane contour generation means 135, so as to obtain multi-object plane contours in three-dimensional space.
Further optionally, the three-dimensional object modeling device 1000 may further include a multi-object contour optimization means 145, which can be configured to perform contour correction on the multi-object plane contours obtained by the multi-object splicing means 140.
Optionally, the object modeling device 1000 may further include a 3D model generation means 150, which can be configured to convert the multi-object plane contours in three-dimensional space obtained by splicing into a three-dimensional object 3D model.
Here, the means 110, 120, 130, 135, 140, 145, and 150 of the above object modeling device 1000 correspond respectively to the steps S110, S120, S130, S135, S140, S145, and S150 described above, and the details are not repeated here.
As a result, the above object modeling device of the present invention can effectively improve the resolution and accuracy of the generated model.
In addition, it should be pointed out that the above description takes three-dimensional object modeling as an example only for ease of understanding and description when explaining the image-based modeling technical solution of the present invention. In fact, the present invention is not limited to the application scenario of three-dimensional object modeling, but is applicable to various scenarios of image-based modeling, such as house modeling and vehicle modeling. The present invention in fact provides an innovative and comprehensive image processing scheme.
Fig. 5 shows a schematic block diagram of an image processing apparatus according to an exemplary embodiment of the present invention.
Referring to Fig. 5, the image processing apparatus includes a memory 10 and a processor 20.
The processor 20 may be a multi-core processor, or may include multiple processors. In some embodiments, the processor 20 may include a general-purpose main processor and one or more special-purpose coprocessors, such as a graphics processing unit (GPU) or a digital signal processor (DSP). In some embodiments, the processor 20 may be implemented with customized circuits, such as an application-specific integrated circuit (ASIC, Application Specific Integrated Circuit) or a field-programmable gate array (FPGA, Field Programmable Gate Array).
The memory 10 stores executable code which, when executed by the processor 20, causes the processor 20 to perform one of the methods described above. The memory 10 may include various types of storage units, such as system memory, read-only memory (ROM), and a permanent storage device. The ROM can store static data or instructions needed by the processor 20 or other modules of the computer. The permanent storage device can be a readable and writable storage device, i.e., a non-volatile storage device that does not lose the stored instructions and data even after the computer is powered off. In some embodiments, a mass storage device (such as a magnetic or optical disk, or flash memory) is used as the permanent storage device. In other embodiments, the permanent storage device may be a removable storage device (such as a floppy disk or an optical disc drive). The system memory may be a readable and writable storage device, or a volatile readable and writable storage device, such as dynamic random access memory. The system memory can store the instructions and data needed by some or all of the processors at runtime. In addition, the memory 10 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory); magnetic disks and/or optical discs may also be used. In some embodiments, the memory 10 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (such as an SD card, a mini SD card, a Micro-SD card, etc.), a magnetic floppy disk, and so on. The computer-readable storage media do not include carrier waves or transient electronic signals transmitted wirelessly or by wire.
In addition, the method according to the present invention may also be implemented as a computer program or computer program product, the computer program or computer program product comprising computer program code instructions for performing the steps defined above in the above method of the present invention.
Alternatively, the present invention may also be implemented as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) on which executable code (or a computer program, or computer instruction code) is stored; when the executable code (or computer program, or computer instruction code) is executed by a processor of an electronic device (or computing device, server, etc.), the processor is caused to perform the steps of the above method according to the present invention.
Those skilled in the art will also understand that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions, and operations of systems and methods according to multiple embodiments of the present invention. In this regard, each box in a flowchart or block diagram may represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two consecutive boxes may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, can be implemented with dedicated hardware-based systems that perform the specified functions or operations, or can be implemented with combinations of dedicated hardware and computer instructions.
The embodiments of the present invention have been described above; the above description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the illustrated embodiments. The terms used herein are chosen to best explain the principles of the embodiments, their practical applications or improvements over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (21)
1. An image processing method, characterized in that the image processing method comprises:
a camera position estimation step, in which the geometric relationship of at least one captured panoramic image is used to estimate the position of the panorama camera at the time each panoramic image was captured and the three-dimensional point coordinates of the matching feature points on each panoramic image, wherein each panoramic image is captured for one three-dimensional object, and each three-dimensional object corresponds to one or more panoramic images;
a single-image plane contour generation step, in which, for each panoramic image, the plane contour of the panoramic image in three-dimensional space is generated based on the contour enclosed by the edge pixels, among the pixels on the panoramic image, whose contour features belong to a specific category; and
a scale normalization step, in which the scale of the estimated position of the panorama camera at the time each panoramic image was captured and the scale of the plane contour of each panoramic image in three-dimensional space are normalized, to obtain the normalized plane contour of each panoramic image in three-dimensional space.
2. The image processing method according to claim 1, characterized in that the camera position estimation step comprises:
using the geometric relationship of the at least one captured panoramic image to perform feature point matching between the panoramic images, and recording the feature points that match each other in the panoramic images as matching feature points; and
obtaining the camera position at the time each panoramic image was captured and the three-dimensional point coordinates of the matching feature points on the panoramic image by reducing, for each panoramic image, the reprojection error of the matching feature points on that panoramic image.
3. The image processing method according to claim 1, characterized in that the single-image plane contour generation step comprises:
determining, based on the feature similarity between pixels on the panoramic image, the edge pixels on the panoramic image whose contour features belong to the specific category,
wherein the feature similarity of two pixels is the absolute value of the difference of the features of the two pixels, and the features of a pixel include grayscale and color.
4. The image processing method according to claim 1, characterized in that the scale normalization step comprises:
sorting, from small to large, the height values in all the three-dimensional point coordinates on the at least one panoramic image obtained in the camera position estimation step, and taking the median or mean of the top-ranked height values as the specific-category contour estimated height hc'; and
using the ratio of the specific-category contour assumed height hc to the specific-category contour estimated height hc' to generate, from the plane contour of each panoramic image in three-dimensional space, the normalized plane contour of each panoramic image in three-dimensional space,
wherein the specific-category contour assumed height hc is an arbitrarily assumed height.
5. An object modeling method, characterized in that the object modeling method comprises:
an image processing step, in which the image processing method according to any one of claims 1 to 4 is used to perform image processing on at least one panoramic image, to obtain the normalized plane contour of each panoramic image in three-dimensional space; and
a multi-object splicing step, in which multi-object plane contours are obtained by splicing based on the normalized plane contours of the panoramic images in three-dimensional space.
6. The object modeling method according to claim 5, characterized in that the object modeling method further comprises:
a single-object plane contour generation step, in which the plane contour of each single three-dimensional object in three-dimensional space is obtained based on the normalized plane contours of the panoramic images obtained in the image processing step.
7. The object modeling method according to claim 6, characterized in that the single-object plane contour generation step comprises:
determining, for the at least one panoramic image, one by one in the following manner, whether several panoramic images belong to the same three-dimensional object: if there are more than a specific ratio of matching feature points between two panoramic images, determining that the two panoramic images belong to the same three-dimensional object; and
if it is determined that several panoramic images belong to the same three-dimensional object, taking, for the plane contours of the same three-dimensional object obtained from the several panoramic images, the union of these plane contours as the plane contour of the three-dimensional object.
8. The object modeling method according to claim 7, characterized in that the multi-object splicing step is further capable of obtaining multi-object plane contours in three-dimensional space by splicing based on the plane contour of each panoramic image in three-dimensional space.
9. The object modeling method according to any one of claims 5 to 8, characterized in that the object modeling method further comprises:
a 3D model generation step, in which, after the multi-object splicing step, the multi-object plane contours in three-dimensional space obtained by splicing are converted into an object 3D model.
10. The object modeling method according to claim 9, characterized in that the 3D model generation step comprises:
performing three-dimensional point interpolation inside the top plane contours among the multi-object plane contours obtained by splicing, and projecting all the obtained three-dimensional point coordinates in each top plane contour into the corresponding panoramic image coordinate system, to obtain the top texture;
performing three-dimensional point interpolation inside the bottom plane contours among the multi-object plane contours obtained by splicing, and projecting all the obtained three-dimensional point coordinates on each bottom plane contour into the corresponding panoramic image coordinate system, to obtain the bottom texture;
connecting the three-dimensional vertices at the same horizontal positions on the top contours and the bottom contours to form the plane contours of the support portions, performing three-dimensional point interpolation inside the plane contours of the support portions, and projecting all the obtained three-dimensional point coordinates of the plane contour of each support portion into the corresponding panoramic image coordinate system, to obtain the support portion texture; and
generating a 3D texture model of the entire three-dimensional object based on the top texture, the bottom texture, and the support portion texture.
11. The object modeling method according to claim 10, characterized in that, in the 3D model generation step, the estimated height hc' of the camera from the top of the corresponding three-dimensional object, i.e., the height value in all the obtained three-dimensional point coordinates in each three-dimensional object top plane contour, is replaced with the estimated height hf' of the camera from the bottom of the corresponding three-dimensional object, while the length and width values in all the three-dimensional point coordinates in each three-dimensional object top plane contour are kept unchanged, to obtain the corresponding bottom plane contour of each three-dimensional object, wherein
the estimated height hc' of the camera from the top of the corresponding three-dimensional object is obtained in the following way: sorting, from small to large, the height values in all the three-dimensional point coordinates on the at least one panoramic image obtained in the camera position estimation step, and taking the median or mean of the top-ranked height values as the estimated height hc' of the camera from the top of the corresponding three-dimensional object; and
the estimated height hf' of the camera from the bottom of the corresponding three-dimensional object is obtained in the following way: sorting, from small to large, the height values in all the three-dimensional point coordinates on the at least one panoramic image obtained in the camera position estimation step, and taking the median or mean of the bottom-ranked height values as the estimated height hf' of the camera from the bottom of the corresponding three-dimensional object.
12. An image processing device, characterized in that the image processing device comprises:
a camera position estimation means, configured to use the geometric relationship of at least one captured panoramic image to estimate the position of the panorama camera at the time each panoramic image was captured and the three-dimensional point coordinates of the matching feature points on each panoramic image, wherein each panoramic image is captured for one three-dimensional object, and each three-dimensional object corresponds to one or more panoramic images;
a single-image plane contour generation means, configured to, for each panoramic image, generate the plane contour of each panoramic image in three-dimensional space based on the contour enclosed by the edge pixels, among the pixels on the panoramic image, whose contour features belong to a specific category; and
a scale normalization means, configured to normalize the scale of the estimated position of the panorama camera at the time each panoramic image was captured and the scale of the plane contour of each panoramic image in three-dimensional space, to obtain the normalized plane contour of each panoramic image in three-dimensional space.
13. The image processing device according to claim 12, characterized in that the single-image plane contour generation means is further configured to:
determine, based on the feature similarity between pixels on the panoramic image, the edge pixels on the panoramic image whose contour features belong to the specific category,
wherein the feature similarity of two pixels is the absolute value of the difference of the features of the two pixels, and the features of a pixel include grayscale and color.
14. The image processing device according to claim 12, characterized in that the scale normalization means is further configured to:
sort, from small to large, the height values in all the three-dimensional point coordinates on the at least one panoramic image obtained by the camera position estimation means, and take the median or mean of the top-ranked height values as the specific-category contour estimated height hc', i.e., the estimated height hc' of the camera from the top of the corresponding three-dimensional object; and
use the ratio of the assumed height hc of the camera from the top of the corresponding three-dimensional object to the estimated height hc' of the camera from the top of the corresponding three-dimensional object to generate, from the plane contour of each panoramic image in three-dimensional space, the normalized plane contour of each panoramic image,
wherein the assumed height hc of the camera from the top of the corresponding three-dimensional object is an arbitrarily assumed height.
15. An object modeling device, characterized in that the object modeling device comprises:
the image processing device according to any one of claims 12 to 14, configured to perform image processing on at least one panoramic image, to obtain the normalized plane contour of each panoramic image in three-dimensional space; and
a multi-object splicing means, configured to obtain multi-object plane contours in three-dimensional space by splicing based on the normalized plane contours of the panoramic images in three-dimensional space.
16. The object modeling device according to claim 15, characterized in that the object modeling device further comprises:
a single-object plane contour generation means, configured to obtain the plane contour of each three-dimensional object in three-dimensional space based on the normalized plane contours of the panoramic images in three-dimensional space.
17. The object modeling device according to claim 16, characterized in that the single-object plane contour generation means is further configured to:
determine, for the at least one panoramic image, one by one in the following manner, whether several panoramic images belong to the same three-dimensional object: if there are more than a specific ratio of matching feature points between two panoramic images, the two panoramic images can be determined to belong to the same three-dimensional object; and
if it is determined that several panoramic images belong to the same three-dimensional object, take, for the plane contours of the same three-dimensional object obtained from the several panoramic images, the union of these plane contours as the plane contour of the three-dimensional object in three-dimensional space.
18. The object modeling device according to claim 17, characterized in that the multi-object splicing means is further configured to obtain multi-object plane contours in three-dimensional space by splicing based on the plane contour of each single three-dimensional object in three-dimensional space.
19. The object modeling device according to any one of claims 15 to 18, characterized in that the object modeling device further comprises:
a 3D model generation means, configured to convert the multi-object plane contours in three-dimensional space obtained by splicing into a three-dimensional object 3D model.
20. An image processing apparatus, comprising:
a processor; and
a memory on which executable code is stored, wherein the executable code, when executed by the processor, causes the processor to perform the method according to any one of claims 1 to 11.
21. A non-transitory machine-readable storage medium on which executable code is stored, wherein the executable code, when executed by a processor, causes the processor to perform the method according to any one of claims 1 to 11.
Priority Applications (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010574085.XA CN111862301B (en) | 2019-04-12 | 2019-04-12 | Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium |
CN202010574097.2A CN111862302B (en) | 2019-04-12 | 2019-04-12 | Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium |
CN201910296077.0A CN110490967B (en) | 2019-04-12 | 2019-04-12 | Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium |
SG11202111295UA SG11202111295UA (en) | 2019-04-12 | 2020-06-11 | Three-dimensional object modeling method, image processing method, and image processing device |
AU2020256662A AU2020256662B2 (en) | 2019-04-12 | 2020-06-11 | Three-dimensional object modeling method, image processing method, and image processing device |
PCT/CN2020/095629 WO2020207512A1 (en) | 2019-04-12 | 2020-06-11 | Three-dimensional object modeling method, image processing method, and image processing device |
EP20787452.0A EP3955213A4 (en) | 2019-04-12 | 2020-06-11 | Three-dimensional object modeling method, image processing method, and image processing device |
NZ782222A NZ782222A (en) | 2019-04-12 | 2020-06-11 | Three-dimensional object modeling method, image processing method, and image processing device |
US17/603,264 US11869148B2 (en) | 2019-04-12 | 2020-06-11 | Three-dimensional object modeling method, image processing method, image processing device |
JP2022506321A JP7311204B2 (en) | 2019-04-12 | 2020-06-11 | 3D OBJECT MODELING METHOD, IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910296077.0A CN110490967B (en) | 2019-04-12 | 2019-04-12 | Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010574085.XA Division CN111862301B (en) | 2019-04-12 | 2019-04-12 | Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium |
CN202010574097.2A Division CN111862302B (en) | 2019-04-12 | 2019-04-12 | Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110490967A true CN110490967A (en) | 2019-11-22 |
CN110490967B CN110490967B (en) | 2020-07-17 |
Family
ID=68545807
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010574085.XA Active CN111862301B (en) | 2019-04-12 | 2019-04-12 | Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium |
CN201910296077.0A Active CN110490967B (en) | 2019-04-12 | 2019-04-12 | Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium |
CN202010574097.2A Active CN111862302B (en) | 2019-04-12 | 2019-04-12 | Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010574085.XA Active CN111862301B (en) | 2019-04-12 | 2019-04-12 | Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010574097.2A Active CN111862302B (en) | 2019-04-12 | 2019-04-12 | Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium |
Country Status (1)
Country | Link |
---|---|
CN (3) | CN111862301B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111127655A (en) * | 2019-12-18 | 2020-05-08 | 北京城市网邻信息技术有限公司 | House layout drawing construction method and device, and storage medium |
CN111127357A (en) * | 2019-12-18 | 2020-05-08 | 北京城市网邻信息技术有限公司 | House type graph processing method, system, device and computer readable storage medium |
WO2020207512A1 (en) * | 2019-04-12 | 2020-10-15 | 北京城市网邻信息技术有限公司 | Three-dimensional object modeling method, image processing method, and image processing device |
CN112055192A (en) * | 2020-08-04 | 2020-12-08 | 北京城市网邻信息技术有限公司 | Image processing method, image processing apparatus, electronic device, and storage medium |
CN112686989A (en) * | 2021-01-04 | 2021-04-20 | 北京高因科技有限公司 | Three-dimensional space roaming implementation method |
CN116246085A (en) * | 2023-03-07 | 2023-06-09 | 北京甲板智慧科技有限公司 | Azimuth generating method and device for AR telescope |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114332648B (en) * | 2022-03-07 | 2022-08-12 | 荣耀终端有限公司 | Position identification method and electronic equipment |
CN114442895A (en) * | 2022-04-07 | 2022-05-06 | 阿里巴巴达摩院(杭州)科技有限公司 | Three-dimensional model construction method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101923730A (en) * | 2010-09-21 | 2010-12-22 | 北京大学 | Fisheye camera and multiple plane mirror devices-based three-dimensional reconstruction method |
US20170134713A1 (en) * | 2015-11-06 | 2017-05-11 | Toppano Co., Ltd. | Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof |
CN108389157A (en) * | 2018-01-11 | 2018-08-10 | 江苏四点灵机器人有限公司 | A kind of quick joining method of three-dimensional panoramic image |
CN108416840A (en) * | 2018-03-14 | 2018-08-17 | 大连理工大学 | A kind of dense method for reconstructing of three-dimensional scenic based on monocular camera |
CN108537848A (en) * | 2018-04-19 | 2018-09-14 | 北京工业大学 | A kind of two-stage pose optimal estimating method rebuild towards indoor scene |
CN108961395A (en) * | 2018-07-03 | 2018-12-07 | 上海亦我信息技术有限公司 | A method of three dimensional spatial scene is rebuild based on taking pictures |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7149367B2 (en) * | 2002-06-28 | 2006-12-12 | Microsoft Corp. | User interface for a system and method for head size equalization in 360 degree panoramic images |
CN1758284A (en) * | 2005-10-17 | 2006-04-12 | 浙江大学 | Method for quickly rebuilding-up three-D jaw model from tomographic sequence |
CN100576907C (en) * | 2007-12-25 | 2009-12-30 | 谢维信 | Utilize the method for single camera real-time generating 360 degree seamless full-view video image |
CN101739717B (en) * | 2009-11-12 | 2011-11-16 | 天津汇信软件有限公司 | Non-contact scanning method for three-dimensional colour point clouds |
CN101950426B (en) * | 2010-09-29 | 2014-01-01 | 北京航空航天大学 | Vehicle relay tracking method in multi-camera scene |
TW201342303A (en) * | 2012-04-13 | 2013-10-16 | Hon Hai Prec Ind Co Ltd | Three-dimensional image obtaining system and three-dimensional image obtaining method |
CN103379267A (en) * | 2012-04-16 | 2013-10-30 | 鸿富锦精密工业(深圳)有限公司 | Three-dimensional space image acquisition system and method |
CN102650886B (en) * | 2012-04-28 | 2014-03-26 | 浙江工业大学 | Vision system based on active panoramic vision sensor for robot |
CN105488775A (en) * | 2014-10-09 | 2016-04-13 | 东北大学 | Six-camera around looking-based cylindrical panoramic generation device and method |
CN106780421A (en) * | 2016-12-15 | 2017-05-31 | 苏州酷外文化传媒有限公司 | Finishing effect methods of exhibiting based on panoramic platform |
CN106651767A (en) * | 2016-12-30 | 2017-05-10 | 北京星辰美豆文化传播有限公司 | Panoramic image obtaining method and apparatus |
CN107564039A (en) * | 2017-08-31 | 2018-01-09 | 成都观界创宇科技有限公司 | Multi-object tracking method and panorama camera applied to panoramic video |
JP6337226B1 (en) * | 2018-03-02 | 2018-06-06 | 株式会社エネルギア・コミュニケーションズ | Abnormal point detection system |
CN108876909A (en) * | 2018-06-08 | 2018-11-23 | 桂林电子科技大学 | A kind of three-dimensional rebuilding method based on more image mosaics |
CN109508682A (en) * | 2018-11-20 | 2019-03-22 | 成都通甲优博科技有限责任公司 | A kind of detection method on panorama parking stall |
2019
- 2019-04-12 CN CN202010574085.XA patent/CN111862301B/en active Active
- 2019-04-12 CN CN201910296077.0A patent/CN110490967B/en active Active
- 2019-04-12 CN CN202010574097.2A patent/CN111862302B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101923730A (en) * | 2010-09-21 | 2010-12-22 | 北京大学 | Fisheye camera and multiple plane mirror devices-based three-dimensional reconstruction method |
US20170134713A1 (en) * | 2015-11-06 | 2017-05-11 | Toppano Co., Ltd. | Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof |
CN108389157A (en) * | 2018-01-11 | 2018-08-10 | 江苏四点灵机器人有限公司 | A kind of quick joining method of three-dimensional panoramic image |
CN108416840A (en) * | 2018-03-14 | 2018-08-17 | 大连理工大学 | A kind of dense method for reconstructing of three-dimensional scenic based on monocular camera |
CN108537848A (en) * | 2018-04-19 | 2018-09-14 | 北京工业大学 | A kind of two-stage pose optimal estimating method rebuild towards indoor scene |
CN108961395A (en) * | 2018-07-03 | 2018-12-07 | 上海亦我信息技术有限公司 | A method of three dimensional spatial scene is rebuild based on taking pictures |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020207512A1 (en) * | 2019-04-12 | 2020-10-15 | 北京城市网邻信息技术有限公司 | Three-dimensional object modeling method, image processing method, and image processing device |
US11869148B2 (en) | 2019-04-12 | 2024-01-09 | Beijing Chengshi Wanglin Information Technology Co., Ltd. | Three-dimensional object modeling method, image processing method, image processing device |
CN111127655A (en) * | 2019-12-18 | 2020-05-08 | 北京城市网邻信息技术有限公司 | House layout drawing construction method and device, and storage medium |
CN111127357A (en) * | 2019-12-18 | 2020-05-08 | 北京城市网邻信息技术有限公司 | House type graph processing method, system, device and computer readable storage medium |
CN113240768A (en) * | 2019-12-18 | 2021-08-10 | 北京城市网邻信息技术有限公司 | House type graph processing method, system, device and computer readable storage medium |
CN113240768B (en) * | 2019-12-18 | 2022-03-15 | 北京城市网邻信息技术有限公司 | House type graph processing method, system, device and computer readable storage medium |
CN112055192A (en) * | 2020-08-04 | 2020-12-08 | 北京城市网邻信息技术有限公司 | Image processing method, image processing apparatus, electronic device, and storage medium |
CN112055192B (en) * | 2020-08-04 | 2022-10-11 | 北京城市网邻信息技术有限公司 | Image processing method, image processing apparatus, electronic device, and storage medium |
CN112686989A (en) * | 2021-01-04 | 2021-04-20 | 北京高因科技有限公司 | Three-dimensional space roaming implementation method |
CN116246085A (en) * | 2023-03-07 | 2023-06-09 | 北京甲板智慧科技有限公司 | Azimuth generating method and device for AR telescope |
CN116246085B (en) * | 2023-03-07 | 2024-01-30 | 北京甲板智慧科技有限公司 | Azimuth generating method and device for AR telescope |
Also Published As
Publication number | Publication date |
---|---|
CN110490967B (en) | 2020-07-17 |
CN111862302B (en) | 2022-05-17 |
CN111862301B (en) | 2021-10-22 |
CN111862302A (en) | 2020-10-30 |
CN111862301A (en) | 2020-10-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110675314B (en) | Image processing method, image processing apparatus, three-dimensional object modeling method, three-dimensional object modeling apparatus, image processing apparatus, and medium | |
CN110490916B (en) | Three-dimensional object modeling method and apparatus, image processing device, and medium | |
WO2020207512A1 (en) | Three-dimensional object modeling method, image processing method, and image processing device | |
CN110490967A (en) | Image procossing and object-oriented modeling method and equipment, image processing apparatus and medium | |
Hedman et al. | Scalable inside-out image-based rendering | |
EP2272050B1 (en) | Using photo collections for three dimensional modeling | |
Mastin et al. | Automatic registration of LIDAR and optical images of urban scenes | |
Zhang et al. | A UAV-based panoramic oblique photogrammetry (POP) approach using spherical projection | |
Pagani et al. | Dense 3D Point Cloud Generation from Multiple High-resolution Spherical Images. | |
CN112055192B (en) | Image processing method, image processing apparatus, electronic device, and storage medium | |
Yang et al. | Noise-resilient reconstruction of panoramas and 3d scenes using robot-mounted unsynchronized commodity rgb-d cameras | |
Rander | A multi-camera method for 3D digitization of dynamic, real-world events | |
Gouiaa et al. | 3D reconstruction by fusioning shadow and silhouette information | |
Karras et al. | Generation of orthoimages and perspective views with automatic visibility checking and texture blending | |
Bartczak et al. | Extraction of 3D freeform surfaces as visual landmarks for real-time tracking | |
Ventura et al. | Online environment model estimation for augmented reality | |
Pollefeys et al. | Acquisition of detailed models for virtual reality | |
Kim et al. | Planar Abstraction and Inverse Rendering of 3D Indoor Environments | |
Hwang et al. | 3D modeling and accuracy assessment-a case study of photosynth | |
Yao et al. | A new environment mapping method using equirectangular panorama from unordered images | |
Heng et al. | Keyframe-based texture mapping for rgbd human reconstruction | |
Do et al. | On multi-view texture mapping of indoor environments using Kinect depth sensors | |
Srinath et al. | 3D Experience on a 2D Display | |
Finnie | Real-Time Dynamic Full Scene Reconstruction Using a Heterogeneous Sensor System | |
Mihut et al. | Lighting and Shadow Techniques for Realistic 3D Synthetic Object Compositing in Images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
REG | Reference to a national code |
Ref country code: HK
Ref legal event code: DE
Ref document number: 40017574
Country of ref document: HK