CN111862302B - Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium - Google Patents

Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium Download PDF

Info

Publication number
CN111862302B
CN111862302B CN202010574097.2A CN202010574097A CN111862302B CN 111862302 B CN111862302 B CN 111862302B CN 202010574097 A CN202010574097 A CN 202010574097A CN 111862302 B CN111862302 B CN 111862302B
Authority
CN
China
Prior art keywords
dimensional
panoramic
contour
image
panoramic image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010574097.2A
Other languages
Chinese (zh)
Other versions
CN111862302A (en
Inventor
不公告发明人
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Chengshi Wanglin Information Technology Co Ltd
Original Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Chengshi Wanglin Information Technology Co Ltd filed Critical Beijing Chengshi Wanglin Information Technology Co Ltd
Priority to CN202010574097.2A priority Critical patent/CN111862302B/en
Publication of CN111862302A publication Critical patent/CN111862302A/en
Application granted granted Critical
Publication of CN111862302B publication Critical patent/CN111862302B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image processing and object modeling method and apparatus, an image processing device, and a medium. The image processing method includes: estimating the position of a camera and the three-dimensional point coordinates of the matched characteristic points on the panoramic image by utilizing the geometric relationship of at least one shot panoramic image; for each panoramic image, generating a plane contour of the panoramic image in a three-dimensional space based on a contour surrounded by edge pixel points in pixel points of which the characteristic contour belongs to a specific category on the panoramic image; and normalizing the scale of the position of the panoramic camera when each panoramic image is shot and the scale of the plane profile in the three-dimensional space of each panoramic image. By generating a planar contour of a three-dimensional object in a three-dimensional space or even generating a 3D model of the three-dimensional object using a panoramic image photographed by a panoramic camera, the resolution of the generated object model can be effectively improved.

Description

Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium
Technical Field
The present invention relates to the field of object modeling, and in particular, to an image processing method and apparatus, an image processing apparatus, and a medium.
Background
In the field of object modeling, how to make the generated object model have high resolution and/or high accuracy is a goal that is strongly pursued in the industry.
Object modeling can enable a user to browse 2D and/or 3D structures of a three-dimensional object without going out (e.g., via a network), and 3D modeling of an object can achieve an effect as if it were immersive, which is a very important application in the field of virtual reality.
In the field of object modeling, especially 2D and 3D modeling, the technical solutions at home and abroad are mainly divided into two categories: manual fabrication and automated modeling.
For the manual manufacturing method, a large amount of manual operations are required, the three-dimensional structure of the object is identified, and the multiple object models are spliced manually. The manual production of a set of 3D models of three-dimensional objects takes a long time, so that a large amount of three-dimensional object data needs a lot of personnel to produce manually, the personnel cost is too high, and the practical application is difficult.
For the automatic 3D modeling method, currently, most professional 3D scanning devices are used, and three-dimensional point clouds of a single object can be directly obtained, and then the three-dimensional point clouds are spliced to generate a 3D model. However, the image acquisition equipment of such professional 3D scanning equipment is not accurate, resulting in a captured image with a low resolution, resulting in a generated three-dimensional model with a low resolution. Moreover, such 3D scanning devices are often expensive and difficult to meet with the requirements of consumer-grade applications.
Therefore, how to obtain a high-resolution acquired image, how to efficiently process the acquired image so as to provide high-resolution modeling preparation data for object modeling, and even how to enable the provided data to simplify a subsequent model generation process, and how to effectively improve the resolution and/or accuracy of a generated object model are technical problems that the present invention considers to solve.
Disclosure of Invention
In order to solve one of the above problems, the present invention provides an image processing and object modeling method and apparatus, an image processing device, and a medium.
According to an embodiment of the present invention, there is provided an image processing method including: a camera position estimation step of estimating a position of the panoramic camera at the time of photographing each panoramic image and three-dimensional point coordinates of matching feature points on each panoramic image, using a geometric relationship of at least one of the photographed panoramic images, wherein each panoramic image is photographed for one three-dimensional object, each three-dimensional object corresponding to one or more panoramic images; generating a planar contour of a single image, wherein for each panoramic image, a planar contour of the panoramic image in a three-dimensional space is generated based on a contour surrounded by edge pixels among pixels of which contour features belong to a specific category on the panoramic image; and a scale normalization step, wherein the scale of the position of the panoramic camera estimated when each panoramic image is shot and the scale of the plane contour of each panoramic image in the three-dimensional space are normalized, and the plane contour of each panoramic image in the three-dimensional space after normalization is obtained.
Optionally, the camera position estimating step comprises: matching feature points between the panoramic images by using the geometric relationship of at least one shot panoramic image, and recording the mutually matched feature points in the panoramic images as matched feature points; and reducing the reprojection error of the matching feature points on the panoramic image for each panoramic image to obtain the camera position when each panoramic image is shot and the three-dimensional point coordinates of the matching feature points on the panoramic image.
Optionally, the single image plane contour generating step comprises: determining the edge pixel points of the pixel points of which the outline features belong to a specific category on the panoramic image based on the feature similarity between the pixel points on the panoramic image, wherein the feature similarity of two pixel points is the absolute value of the difference between the features of the two pixel points, and the features of the pixel points comprise gray scale and color.
Optionally, the scale normalization step includes: sorting the height values in all three-dimensional point coordinates on the at least one panoramic image obtained in the camera position estimation step from small to large, and taking the median or mean of the height values sorted at the top as the estimated height h of the profile of the specific categoryc'; and assuming a height h with a particular class profilecEstimating height h with profile of specific categoryc' generating a normalized planar profile in three-dimensional space of each panoramic image from the planar profile in three-dimensional space of each panoramic image, wherein the profile of the specific category assumes a height hcIs an arbitrarily assumed height.
According to an embodiment of the present invention, there is provided an object modeling method including: an image processing step of performing image processing on at least one panoramic image using one of the image processing methods described above to obtain a normalized planar profile of each panoramic image in a three-dimensional space; and a multi-object splicing step, wherein based on the plane contour in the three-dimensional space of each normalized panoramic image, a multi-object plane contour is obtained through splicing.
Optionally, the three-dimensional object modeling method further includes: and generating a planar contour of the single object, wherein the planar contour in the three-dimensional space of each single three-dimensional object is obtained based on the normalized planar contour of each panoramic image obtained in the image processing step.
Optionally, the single-object plane contour generating step includes: for the at least one panoramic image, determining whether a plurality of panoramic images belong to the same three-dimensional object one by the following method: if more than specific proportion of matching feature points exist between the two panoramic images, the two panoramic images are determined to belong to the same three-dimensional object; and if the plurality of panoramic images are determined to belong to the same three-dimensional object, taking a union set of plane outlines of the same three-dimensional object obtained from the plurality of panoramic images as the plane outline of the three-dimensional object.
Optionally, in the multi-object stitching step, a multi-object plane contour in a three-dimensional space can be stitched based on the plane contour in the three-dimensional space of each panoramic image.
Optionally, the object modeling method further comprises: and a 3D model generation step, wherein after the multi-object splicing step, the multi-object plane contour in the three-dimensional space obtained by splicing is converted into an object 3D model.
Optionally, the 3D model generating step comprises: performing three-dimensional point interpolation on the top plane contour in the spliced multi-object plane contours, and projecting all three-dimensional point coordinates on each top plane contour to a corresponding panoramic image coordinate system to obtain top textures; carrying out three-dimensional point interpolation on the bottom plane contour in the spliced multi-object plane contours, and projecting all three-dimensional point coordinates on each obtained bottom plane contour into a corresponding panoramic image coordinate system to obtain bottom textures; connecting three-dimensional vertexes on the same plane position between the top outline and the bottom outline to form a plane outline of the supporting part, performing three-dimensional point interpolation inside the plane outline of the supporting part, and projecting all three-dimensional point coordinates of the obtained plane outline of each supporting part into a corresponding panoramic image coordinate system so as to obtain supporting part textures; and generating a 3D texture model of the whole three-dimensional object based on the top texture, the bottom texture and the supporting part texture.
Optionally, in the step of generating a 3D model, the estimated height h of the specific contour is selected from all three-dimensional point coordinates on the top plane contour of each three-dimensional objectc' the height value of the estimated height of the camera from the top of the corresponding three-dimensional object is replaced by the estimated height h of the camera from the bottom of the corresponding three-dimensional objectf' and keeping the length and width values in all three-dimensional point coordinates on the top plane profile of each three-dimensional object constant to obtain the bottom plane profile of each three-dimensional object, wherein the estimated height h of the camera from the top of each three-dimensional object is estimatedc' is obtained by: sorting the height values in all three-dimensional point coordinates on the at least one panoramic image obtained in the camera position estimation step from small to large, and taking the median or mean of the height values sorted at the front as the estimated height h of the camera from the top of the corresponding three-dimensional objectc', and an estimated height h of said camera from the bottom of the corresponding three-dimensional objectf' is obtained by: sorting the height values in all three-dimensional point coordinates on the at least one panoramic image obtained in the camera position estimation step from small to large, and taking the median or mean of the height values after sorting as the estimated height h of the camera from the bottom of the corresponding three-dimensional objectf’。
According to an embodiment of the present invention, there is provided an image processing apparatus including: camera position estimation means configured to estimate a position of the panoramic camera at the time of photographing each panoramic image, and three-dimensional point coordinates of matching feature points on each panoramic image, using a geometric relationship of at least one of the photographed panoramic images, wherein each panoramic image is photographed for one three-dimensional object, each three-dimensional object corresponding to one or more panoramic images; the single-image plane contour generation device is configured to generate a plane contour of each panoramic image in a three-dimensional space based on a contour surrounded by edge pixel points in pixel points, of which contour features belong to a specific category, on the panoramic image; and a scale normalization means configured to normalize the scale of the estimated position of the panoramic camera at the time of photographing each panoramic image and the scale of the planar contour of each panoramic image in the three-dimensional space, to obtain the normalized planar contour of each panoramic image in the three-dimensional space.
Optionally, the single image plane contour generating device is further configured to: determining edge pixels of pixels, on a panoramic image, of which outline features belong to a specific category based on feature similarity between pixels on the panoramic image, wherein the feature similarity of two pixels is an absolute value of a difference between features of the two pixels, and the features of the pixels comprise gray scale and color.
Optionally, the scale normalization means is further configured to: sorting the height values in all three-dimensional point coordinates on the at least one panoramic image obtained by the camera position estimation device from small to large, and taking the median or mean of the height values sorted at the top as the estimated height h serving as the profile of the specific categoryc' estimated height h of camera from top of corresponding three-dimensional objectc'; and using the assumed height h of the camera from the top of the respective three-dimensional objectcEstimated height h of the top of the three-dimensional object corresponding to the camera distancec' generating a normalized planar profile of each panoramic image from a planar profile of each panoramic image in three-dimensional space, wherein the camera is at an assumed height h from the top of the corresponding three-dimensional objectcIs an arbitrarily assumed height.
According to an embodiment of the present invention, there is provided an object modeling apparatus further including: one of the image processing apparatuses as described above, configured to perform image processing on at least one panoramic image to obtain a planar contour in a three-dimensional space of each panoramic image subjected to normalization; and the multi-object splicing device is configured to splice the plane profiles in the three-dimensional space of the normalized panoramic images to obtain the multi-object plane profiles in the three-dimensional space.
Optionally, the object modeling apparatus further includes: and the single-object plane contour generating device is configured to obtain a plane contour in a three-dimensional space of each three-dimensional object based on the normalized plane contour in the three-dimensional space of each panoramic image.
Optionally, the single-object plane contour generating device is further configured to: for the at least one panoramic image, determining whether a plurality of panoramic images belong to the same three-dimensional object one by: if more than specific proportion of matching feature points exist between the two panoramic images, the two panoramic images can be judged to belong to the same three-dimensional object; and if the plurality of panoramic images are determined to belong to the same three-dimensional object, taking a union set of plane profiles of the same three-dimensional object obtained from the plurality of panoramic images as the plane profile of the three-dimensional object in the three-dimensional space.
Optionally, the multi-object stitching device is further configured to stitch the multi-object plane contour in the three-dimensional space based on the plane contour in the three-dimensional space of each single three-dimensional object.
Optionally, the object modeling apparatus further includes: and the 3D model generation device is configured for converting the multi-object plane contour in the three-dimensional space obtained by splicing into a three-dimensional object 3D model.
According to still another embodiment of the present invention, there is provided an image processing apparatus including: a processor; and a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform one of the methods described above.
According to yet another embodiment of the invention, there is provided a non-transitory machine-readable storage medium having stored thereon executable code which, when executed by a processor, causes the processor to perform one of the methods described above.
The invention carries out 2D modeling and 3D modeling of the three-dimensional object based on a plurality of panoramic images shot from the three-dimensional object by using the panoramic camera, and overcomes the defect of low model resolution caused by using a 3D scanning device to generate a three-dimensional object model in the prior art.
In the present invention, a high-resolution captured image is provided for object modeling (e.g., house modeling, etc.) by taking a panoramic image of a room using a panoramic camera.
Further, in the present invention, an efficient image processing method is employed, which provides high-resolution modeling preparation data for object modeling (e.g., house modeling), and the provided modeling preparation data can simplify the subsequent model generation process.
Still further, by the modeling method of the present invention, the resolution and/or accuracy of the generated model (e.g., a 2D and/or 3D model of a three-dimensional object) can be effectively improved.
Moreover, the invention does not limit the application scenarios of modeling, for example, the invention can be applied to various image-based modeling scenarios such as house modeling, vehicle modeling, and the like, and actually provides an innovative comprehensive image processing scheme.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
Fig. 1 presents a schematic flow-chart of an image processing method according to an exemplary embodiment of the present invention.
Fig. 2 presents a schematic flow diagram of the overall process of image processing and modeling according to an exemplary embodiment of the present invention.
Fig. 3 presents a schematic block diagram of an image processing apparatus according to an exemplary embodiment of the present invention.
FIG. 4 presents a schematic block diagram of an automated object modeling apparatus in accordance with an exemplary embodiment of the present invention.
Fig. 5 gives a schematic block diagram of an image processing apparatus according to an exemplary embodiment of the present invention.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. It should be noted that the numbers, serial numbers and reference numbers in the present application are only presented for convenience of description, and no limitation is made to the steps, the sequence and the like of the present invention unless the specific sequence of the steps is explicitly indicated in the specification.
The invention provides an image processing method, an image processing apparatus, an object modeling method, an object modeling apparatus, an image processing device, and a computer medium.
Firstly, in the present invention, a common panoramic camera is used to capture a high resolution panoramic image of a three-dimensional object, overcoming the disadvantage of low resolution of the image captured by the 3D scanning camera described in the background art.
Then, using the plurality of panoramic images photographed, a planar contour in a three-dimensional space of a single panoramic image (may be referred to as a "single-image planar contour") may be extracted.
Furthermore, through the scale normalization, the unification between the scale of the single image plane outline and the scale of the camera position can be realized, the normalized single image plane outlines are generated, high-resolution and sufficient data preparation is provided for the subsequent object modeling, and the difficulty of the subsequent processing work is reduced.
Still further, the accurate single-object plane contour can be obtained by fusing the single objects of the single image plane contours belonging to the same three-dimensional object.
Still further, the plane outlines of the single images can be stitched in a three-dimensional space to obtain a multi-object model (in this case, a 2D model).
In addition, the multi-object model can be corrected to obtain a more accurate model, so that the model display effect is better.
Finally, a complete, high resolution and accurate 3D model is obtained by 3D model generation.
Fig. 1 presents a schematic flow-chart of an image processing method according to an exemplary embodiment of the present invention.
An image processing method according to an exemplary embodiment of the present invention will be described with reference to fig. 1 to make sufficient data preparation for a subsequent modeling process and simplify the subsequent processing. As shown in fig. 1, the image processing process includes three steps of camera position estimation S110, single image plane contour generation S120, and scale normalization S130, and the modeling process may include a plurality of subsequent steps.
Here, the panoramic camera is first briefly described. The panoramic camera is different from the general camera in that the general camera generally photographs with only one lens, and the panoramic camera photographs with two or more lenses, so that the panoramic camera can realize 360-degree photographing.
An image processing method according to an exemplary embodiment of the present invention may include the following steps.
In the camera position estimation step S110, the position of the panoramic camera at the time of photographing each panoramic image and the three-dimensional point coordinates of the matching feature points on each panoramic image are estimated using the geometric relationship of the photographed at least one panoramic image.
Wherein each panoramic image is taken for one three-dimensional object, each three-dimensional object corresponding to one or more panoramic images.
In the single image plane contour generating step S120, for each panoramic image, a plane contour of the panoramic image in the three-dimensional space is generated based on a contour surrounded by edge pixels among pixels whose contour features belong to a specific category on the panoramic image.
Thus, in the present invention, the planar profile of the image can be automatically obtained based on the panoramic image without human intervention and without using an expensive 3D scanning device.
In the scale normalization step S130, the scale of the estimated position of the panoramic camera at the time of photographing each panoramic image and the scale of the planar contour of each panoramic image in the three-dimensional space are normalized, and the planar contour of each panoramic image in the three-dimensional space is obtained through normalization.
Here, optionally, the above-mentioned camera position estimating step S110 may include the following operations:
1) matching feature points among the panoramic images by using the geometric relationship of at least one shot panoramic image, and recording the mutually matched feature points in the panoramic images as matched feature points; and
2) and reducing the reprojection error of the matching characteristic points on each panoramic image to obtain the position of the camera when each panoramic image is shot and the three-dimensional point coordinates of the matching characteristic points on the panoramic image.
In addition, optionally, the single-image plane contour generating step S120 may include: and determining edge pixel points among the pixel points of which the outline features belong to the specific category on the panoramic image based on the feature similarity between the pixel points on the panoramic image. Here, the feature similarity of two pixels may be an absolute value of a difference between features of the two pixels. The characteristics of the pixel points may include, for example, gray scale, color, and the like.
In addition, optionally, the above-mentioned scale normalization step S130 may include the following operations:
1) sorting the height values in all three-dimensional point coordinates on at least one panoramic image obtained in the camera position estimation step from small to large, and taking the median or mean of the height values sorted at the top as the estimated height h of the profile of the specific categoryc'; and
2) presume height h with class-specific profilecEstimating height h with profile of specific categoryc' the ratio of which is generated from a planar contour in three-dimensional space of each panoramic imageNormalized planar profiles in three-dimensional space of the respective panoramic images.
Wherein the above-mentioned specific class profile assumes a height hcIs a height that can be arbitrarily assumed.
Through the image processing process, a high-resolution basis is provided for the subsequent model generation. Moreover, through the image processing process, the plane contour of each panoramic image which is provided in the three-dimensional space and is normalized is generated for the subsequent model, so that the subsequent model generation work is simplified, the processing time is reduced, and the processing efficiency is improved.
Through the image processing, the plane contour data required by model generation is provided for object modeling.
Fig. 2 presents a schematic flow-chart of an overall process of image processing and object modeling according to an exemplary embodiment of the present invention.
Fig. 2 includes the image processing section described above and an object modeling section to be described next.
An object modeling method according to an exemplary embodiment of the present invention will be described below with reference to the object modeling section of fig. 2.
An object modeling process according to an exemplary embodiment of the present invention may include the following steps.
In the image processing step, at least one panoramic image is subjected to image processing using any one of the image processing methods described above to obtain a normalized planar contour of each panoramic image in three-dimensional space.
In the multi-object stitching step S140, a multi-object plane contour is stitched based on the normalized plane contours of the panoramic images in the three-dimensional space.
In addition, optionally, the object modeling method may further include a single-object plane contour generating step S135, configured to obtain a plane contour in a three-dimensional space of each single three-dimensional object based on the normalized plane contour of each panoramic image obtained in the image processing step.
In addition, optionally, the single object plane contour generating step S135 may include:
1) for at least one panoramic image, determining whether a plurality of panoramic images belong to the same three-dimensional object one by the following method: if more than specific proportion of matching feature points exist between the two panoramic images, the two panoramic images are determined to belong to the same three-dimensional object; and
2) if the plurality of panoramic images are determined to belong to the same three-dimensional object, taking the union set of the plane outlines of the same three-dimensional object obtained from the plurality of panoramic images as the plane outline of the three-dimensional object.
In addition, optionally, in the multi-object stitching step S140, a multi-object plane contour in the three-dimensional space can also be stitched based on the plane contour in the three-dimensional space of each single three-dimensional object.
That is, in the present invention, the multi-object plane contour in the three-dimensional space may be obtained by stitching directly based on the plane contour in the three-dimensional space of each panoramic image; the planar contour in the three-dimensional space of the single three-dimensional object can be obtained first, and then the multi-object planar contour in the three-dimensional space can be obtained by splicing based on the planar contour in the three-dimensional space of each single three-dimensional object.
In addition, optionally, the above object modeling method may further include a 3D model generating step S150, configured to convert the multi-object plane contour in the three-dimensional space obtained by stitching into an object 3D model after the multi-object stitching step S140.
In addition, optionally, the 3D model generating step S150 may include the following operations:
1) performing three-dimensional point interpolation on the top plane contour in the spliced multi-object plane contours, and projecting all three-dimensional point coordinates on each top plane contour to a corresponding panoramic image coordinate system to obtain top textures;
2) carrying out three-dimensional point interpolation on the bottom plane contour in the spliced multi-object plane contours, and projecting all three-dimensional point coordinates on each obtained bottom plane contour into a corresponding panoramic image coordinate system to obtain bottom textures;
3) connecting three-dimensional vertexes on the same plane position between the top outline and the bottom outline to form a plane outline of the supporting part, performing three-dimensional point interpolation inside the plane outline of the supporting part, and projecting all three-dimensional point coordinates of the obtained plane outline of each supporting part into a corresponding panoramic image coordinate system so as to obtain supporting part textures; and
4) and generating a 3D texture model of the whole three-dimensional object based on the top texture, the bottom texture and the support part texture.
In addition, optionally, in the 3D model generating step S150, h, which is the estimated height of the specific contour, of all the three-dimensional point coordinates on the top plane contour of each obtained three-dimensional object is usedc' the height value of the estimated height of the camera from the top of the corresponding three-dimensional object is replaced by the estimated height h of the camera from the bottom of the corresponding three-dimensional objectf' and keeping the length and width values in all three-dimensional point coordinates on the top plane profile of each three-dimensional object constant, and obtaining the bottom plane profile of each three-dimensional object correspondingly.
Wherein, the above-mentioned "estimated height h of the camera from the top of the corresponding three-dimensional objectc' "is obtained by: sorting the height values in all three-dimensional point coordinates on the at least one panoramic image obtained in the camera position estimation step from small to large, and taking the median or mean of the height values sorted at the front as the estimated height h of the camera from the top of the corresponding three-dimensional objectc’。
In addition, similarly, the above-described "estimated height h of the camera from the bottom of the corresponding three-dimensional objectf' "is obtained by: sorting the height values in all three-dimensional point coordinates on the at least one panoramic image obtained in the camera position estimation step from small to large, and taking the median or mean of the height values after sorting as the estimated height h of the camera from the bottom of the corresponding three-dimensional objectf’。
Hereinafter, for convenience of understanding and description, the respective processes of the present invention will be further described in detail by taking house image processing and house modeling as examples.
In the example given here, the house described above may comprise a plurality of rooms, each room may be considered as one three-dimensional object, and the modeling of the house may be considered as the modeling of a plurality of three-dimensional objects. For example, an integral model of a house may be constructed by splicing a plurality of rooms in the house as a plurality of three-dimensional objects.
For the terms in the general image processing and object modeling method described above, for example, the "top" therein may be the "ceiling" of the room in the present example, the "bottom" may be the "floor" of the room in the present example, and the "support" may be the "wall" in the present example. In addition, the "outline surrounded by edge pixels among pixels whose outline feature belongs to a specific category" may refer to "a ceiling outline surrounded by edge pixels among pixels belonging to a ceiling" in this example. The "estimated height of the profile for the specific category" may be "estimated height of the camera to the ceiling" in this example, and similarly, the "assumed height of the profile for the specific category" may be "assumed height of the camera to the ceiling" in this example.
In an image processing method according to an exemplary embodiment of the present invention, a position of a panoramic camera that takes at least one panoramic image taken within a house (one panoramic image corresponds to only one room, but a plurality of panoramic images may be taken within one room, that is, one room may correspond to a plurality of panoramic images) is estimated, a planar contour of the panoramic image is extracted based on the estimated camera position, and then the extracted planar contour is normalized to obtain a planar contour required for modeling.
Therefore, as shown in fig. 1, in step S110, the position of the panoramic camera that captured the panoramic images is estimated using the geometric relationship of at least one of the panoramic images captured in one room.
In the present invention, a multi-view geometry based approach can optionally be employed to solve this problem.
Specifically, the camera position estimating step S110 may include, for example, the following operations:
1) matching the characteristic points of the panoramic images, and recording the characteristic points matched with each other in the images; and
2) for each panoramic image, the reprojection error of the matching feature points on the panoramic image is reduced, and the camera position of each panoramic image and the three-dimensional point coordinates of the matching feature points on the panoramic image are obtained.
For the above step 1), in the image processing technology, the image feature point refers to a point where the image gray value changes drastically or a point with a large curvature on the image edge (i.e. the intersection of two edges). The image feature points can reflect the essential features of the image and can identify the target object in the image.
How to efficiently and accurately match the same object in two images from different perspectives is the first step in many computer vision applications. Although the image exists in the form of a gray matrix in the computer, the same object in the two images cannot be accurately found by using the gray of the image. This is because the gray scale is affected by the light, and when the image viewing angle changes, the gray scale value of the same object will also change. Therefore, it is desirable to find a feature that can remain unchanged when the camera moves and rotates (the angle of view changes), and use the unchanged feature to find the same object in images from different angles of view.
Therefore, in order to better perform image matching, it is necessary to select a representative region in an image, for example: corners, edges and some blocks in the image. Wherein the identification degree of the corner point is the highest. In many computer vision processes, angular points are usually extracted as feature points to match images, and examples of usable methods include SFM (Structure Form Motion), SLAM (Simultaneous Localization and Mapping), and the like.
However, a simple corner point does not meet the requirements well, for example: the camera gets a corner point from far, but may not be at near; alternatively, the corner points change when the camera is rotated. For this reason, researchers in computer vision have designed many more stable Feature points that do not change with the movement, rotation, or illumination of the camera, and examples of the method that can be used include SIFT (Scale-Invariant Feature Transform), SURF (speedup Robust Features), and the like.
The feature points of an image are composed of two parts: a Keypoint (Keypoint) and a Descriptor (Descriptor). The key points refer to the positions of the feature points in the image, and some feature points also have direction and scale information; a descriptor is typically a vector that describes the information of the pixels around a keypoint. In general, in matching, two feature points can be considered as the same feature point as long as their descriptors are close to each other in the vector space.
Matching of feature points typically requires the following three steps: 1) extracting key points in the image; 2) calculating descriptors of the feature points according to the obtained key point positions; 3) and matching according to the descriptors of the characteristic points.
Alternatively, the related processing of feature point matching in this step may be implemented using, for example, the open source computer vision library OpenCV. For brevity and without obscuring the subject matter of the present invention, further details of the processing of this section are not provided herein.
After feature point matching between these panoramic images is performed, feature points (also referred to as "matching feature points") that match each other in these panoramic images are recorded, and recording of the matching feature points may be performed, for example, as follows.
For example, if a feature point a on the image 1 matches a feature point b on the image 2, the feature point b on the image 2 matches a feature point c on the image 3, and the feature point c on the image 3 matches a feature point d on the image 4, a piece of feature point matching data (a, b, c, d) (also referred to as a "feature point tracking trajectory") may be recorded. Thereby, the input panoramic images are recorded with respect to the mutually matched feature points.
For step 2) above, image re-projection refers to generating a new image by projecting a reference image from an arbitrary viewpoint, that is, image re-projection can change the direction of line of sight of the generated image.
Specifically, in the present invention, the image Reprojection refers to projecting the three-dimensional point coordinates corresponding to one feature point p1 on the image 1 into another image 2 by the current camera parameters, and the position difference between the resulting projected point q2 on this image 2 and the feature point p1 on the image 1 in the matching feature point p2 in this image 2 constitutes a Reprojection Error (Reprojection Error). Here, the matching feature point p2 in the image 2 is an actual position, and the projected point q2 obtained by the re-projection is an estimated position, and the camera position is solved by minimizing the difference in position between the projected point q2 and the matching feature point p2 as much as possible, that is, by making the projected point q2 and the matching feature point p2 coincide as much as possible.
The variables contained in the objective function for optimizing (reducing) the re-projection error comprise the three-dimensional coordinates of the camera position and the feature points, and the three-dimensional coordinates of the camera position and the feature points are obtained in the process of gradually reducing (optimizing) the re-projection error.
Optionally, in the present invention, the reprojection error may be reduced by combining a gradient descent algorithm and a Delaunay Triangulation algorithm (Delaunay Triangulation), so as to achieve the purpose of optimization.
When the gradient descent algorithm is used, the three-dimensional point coordinates of the matched characteristic points are taken as a constant, and the position of the camera is taken as a variable, and conversely, when the Delaunay triangle algorithm is used, the three-dimensional point coordinates of the matched characteristic points are taken as a variable, and the position of the camera is taken as a constant.
Alternatively, in the present invention, progressive solution may be used to improve the accuracy of the solved camera position and three-dimensional point coordinates, i.e., in the solution process, its camera position and the three-dimensional point coordinates of the matching feature points are solved by adding one image at a time. Among them, methods for progressive solution include, for example, isfm (incremental sfm).
Additionally, further optionally, bundle adaptation (bundle adaptation) may be employed to further reduce the reprojection error. Specifically, after the process of reducing the reprojection error to obtain the camera positions and the three-dimensional point coordinates is performed for each panoramic image, all the camera positions and all the three-dimensional point coordinates can be optimized simultaneously using the bundle optimization in the lump finally. In the process of reducing the reprojection error to obtain the camera position and the three-dimensional point coordinates, after the camera position and the three-dimensional point coordinates are acquired for any panoramic image, the processing of bundle optimization may be added to optimize the acquired camera position and the three-dimensional point coordinates.
Here, the bundle optimization refers to a method of optimizing the positions of all cameras and all three-dimensional point coordinates at the same time, and is different from a method of optimizing only the current camera position and the three-dimensional point coordinates on the current image in the progressive solution, respectively.
In addition, in addition to the progressive solution described above, a global solution method may be employed.
In step S120, a planar contour in a three-dimensional space of each panoramic image is generated based on a contour surrounded by edge pixel points belonging to a ceiling on the panoramic image.
Because the ceiling must be above the camera, in the panoramic picture of shooing, the pixel of the top must belong to the ceiling. And thirdly, most of the pixel points belonging to the ceiling have similar characteristics, so that all the pixel points belonging to the ceiling can be finally obtained according to the characteristic similarity of the pixel points.
For example, all the pixel points in the first row of the panoramic image are regarded as pixel points belonging to the ceiling; for each pixel in the second row, the feature similarity (the feature may adopt color, gray scale, etc., and the feature similarity of two pixels may be, for example, the absolute value of the difference between the features of two pixels (e.g., the difference between gray scales or the difference between colors, etc.)) with the pixel belonging to the same column in the first row is calculated. If the feature similarity is within a certain threshold (if a gray scale value of 0-255 is used, the threshold may be set to 10, for example), the pixel also belongs to the ceiling, and the similarity between the third row and the second row and the similarity between the fourth row and the third row on the column are continuously calculated … until the feature similarity exceeds the threshold, and the pixel position at this time is an edge pixel of the ceiling.
The edge pixels of the ceiling form the edge of the ceiling, and therefore, the plane outline of the ceiling can be formed by projecting the edge pixels to the three-dimensional space.
The projection of the pixel points into three-dimensional space will be described below.
Assume that the width of a panoramic image is W and the height is H, and assume that the obtained edge pixel point c of the ceiling has the coordinate (p) in the image coordinate systemc,qc). Since the panoramic image is obtained by spherical projection, it is expressed as (θ) in a spherical coordinate systemc,φc) Wherein thetac∈[-π,π]Is longitude, phic∈[-π/2,π/2]Is a dimension.
The relationship between the spherical coordinate system and the image coordinate system can be obtained by the following formula 1:
Figure GDA0003577016980000151
the following formula 2 is a coordinate (θ) of a pixel point c at the edge of the ceiling in a spherical coordinate systemc,φc) Three-dimensional point coordinates (x) projected onto a three-dimensional planec,yc,zc):
Figure GDA0003577016980000152
In this document, the term "image coordinate system" refers to a coordinate system where image pixels are located, and is mainly used to describe the locations of the pixels in the image. Therefore, the panoramic image coordinate system refers to a coordinate system where the pixel points of the panoramic image are located, and is mainly used for describing the positions where the pixel points are located in the panoramic image.
Note that the above gives only one example of generating a plane contour in a three-dimensional space of a panoramic image based on the similarity of ceiling feature points on the panoramic image, and the method that can be used by the present invention is not limited to this example.
Since the ceiling can be regarded as a plane, it can be regarded that each pixel point at the edge of the ceiling has a uniform height from the camera, which can be referred to as "height of the camera from the ceiling".
Here, since the panoramic camera is generally supported by a tripod and has a fixed height, it can be considered that the height of the camera from the ceiling and the height of the camera from the floor are fixed.
For the planar contour in the three-dimensional space obtained in this step, a height value can be assumed for each three-dimensional point on the contour, for example, the height of the camera from the ceiling is assumed to be hcAnd the assumed height may be any value, such as 100 (the true camera height from the ceiling may be found by subsequent processing). To avoid confusion, the height h of the camera from the ceiling will be assumed here belowcReferred to as "assumed height of camera from ceiling" hc
In the above embodiments, the planar profile of the image can be automatically obtained based on the panoramic image without human intervention for production and without using expensive 3D scanning equipment.
In step S130, the scale of the camera position at the time of photographing each panoramic image obtained in step S110 and the scale of the three-dimensional spatial plane profile of the panoramic image obtained in step S120 are normalized.
On the one hand, due to the scale uncertainty in the camera position estimation in step S110, the true height of the camera to the ceiling profile cannot be determined. On the other hand, the three-dimensional spatial plane profile of the room obtained in step S120 is assumed to be the height h of the camera from the ceilingcTherefore, the scale of the obtained camera position is not uniform with the scale of the three-dimensional space plane outline of the room, and certain difficulty is caused for splicing the subsequent room outlines.
Alternatively, in this step, the height coordinate values (values on the Y axis) of all the three-dimensional points obtained in step S110 are set) Sorting from small to large, and taking the median (or mean or other reasonable value-taking modes) of a height coordinate values which are sorted at the front as the estimated height h of the camera from the ceilingc’。
Finally, the assumed height h of the camera from the ceiling is usedcAnd the estimated height hc' regenerating a dimension-normalized single room plane profile.
For example, the height h may be assumed bycAnd the estimated height hc' is multiplied by the coordinates of the boundary points on the plane contour obtained in step S120 to obtain the coordinates of the boundary points on the scale-normalized plane contour, thereby obtaining the scale-normalized plane contour.
On the other hand, the median (or the mean or other reasonable value mode) of the b height coordinate values in the next sequence can be taken as the estimated height h of the camera from the floorf' (this estimated height will be used in the subsequent 3D model generation step, etc.).
Through the image processing process, a high-resolution basis is provided for the subsequent model generation. Moreover, through the image processing process, the plane contour of each panoramic image which is provided in the three-dimensional space and is normalized is generated for the subsequent model, so that the subsequent model generation work is simplified, the processing time is reduced, and the processing efficiency is improved.
The plane contour data required for model generation is provided for modeling through the above-described image processing, and the house modeling method will be described by continuing the example.
Alternatively, in step S135, a planar contour of each single room may be obtained based on the normalized planar contours of each panoramic image.
In the present invention, a corresponding planar contour in three-dimensional space is obtained from a panoramic image, which may be referred to as a "single-image planar contour".
Since the taken panoramic image may include a plurality of panoramic images of the same room, in this case, the same room corresponds to a plurality of plane profiles in the three-dimensional space, and therefore, in the plane profiles of multiple rooms obtained by the subsequent multi-room stitching process, the plane profiles obtained from different panoramic images of one or more rooms may not coincide, and the stitched profiles may overlap or be confused. Therefore, it can be considered to perform a fusion of the same room contour (which may be called "single room fusion") first to avoid this phenomenon. Moreover, the fusion of the same room contour can also eliminate the incomplete phenomenon of the single room contour.
The inventors of the present invention have given the following exemplary approach to the above-described situation requiring single room fusion.
First, it is determined whether two panoramic images belong to the same room.
Here, a feature point matching-based approach may be adopted, and if there are more than a certain proportion (a certain proportion, for example, 50%, etc.) of matching feature points between two panoramic images, it may be determined that the two panoramic images belong to the same room.
Then, if a plurality of panoramic images belong to the same room, that is, for plane contours of the same room obtained from different panoramic images, a union of these plane contours is taken as a single room plane contour in a three-dimensional space (one room contour, avoiding the case of multiple single image contours of one room), thereby realizing fusion of the same room contour.
The proportion of the matching feature points can be set in the following manner: suppose that image 1 has n1A feature point, image 2 has n2And n characteristic points are matched with the two images. The proportion of matching feature points may be n/min (n)1,n2)。
Alternatively, it may be set that if the ratio is larger than, for example, 50%, the two images are considered to be the same room.
Here, the setting of the proportion of the matching feature points and the actual size of the proportion may be tested or determined empirically according to actual circumstances, and the present invention is not limited thereto.
As described above, in the present invention, for at least one panoramic image described above, it can be determined whether a plurality of panoramic images belong to the same room by means of single-room fusion as follows: if there are more than a certain proportion of matching feature points between two panoramic images, it can be determined that the two panoramic images belong to the same room.
If it is determined that the plurality of panoramic images belong to the same room, for plane profiles of the same room obtained from the plurality of panoramic images, a union of the plane profiles is taken as a plane profile of the room.
In addition, after the contours of the same room are fused, noise may exist due to the obtained contour edges, and for example, the phenomena that the edge lines are not straight and the edge lines are not perpendicular to the edge lines may appear. Therefore, the invention can further carry out right-angle polygon fitting on the outline of each room to obtain a more reasonable room plane outline.
Through the optimization processing specially performed for the single room, such as single room fusion and/or right-angle polygon fitting, a more accurate single room plane contour can be obtained, the generation of subsequent 2D and 3D models is facilitated, and the resolution and the accuracy of the models are improved.
Note that this step is not a necessary step for two-dimensional or three-dimensional modeling of the house, but a preferred way of processing that can improve the accuracy of the model.
In step S140, the contours of the plurality of rooms are stitched based on the camera position estimated in step S110 and the scale-normalized contours of the respective rooms obtained in step S130.
In this step, the splicing of the planar contours of the rooms subjected to the scale normalization to splice the planar contours of the rooms into the multi-room contour can be realized manually.
In addition, the multi-room splicing can also be realized in an automatic mode, and an automatic multi-room splicing scheme proposed by the inventor of the invention is given below.
Optionally, in this step, the three-dimensional point coordinates of each room contour subjected to scale normalization may be rotated and translated by using the estimated camera position, so as to unify the three-dimensional point coordinates of each room into the same coordinate system, thereby implementing the splicing of the multi-room plane contours.
Assuming the contour of N rooms, the p-th three-dimensional point of the nth room contour is represented as
Figure GDA0003577016980000181
The camera position of the room is denoted as Rn,tnIn which R isnAs a rotation matrix for rotation parameters representing the camera position, tnIs a translation vector used to represent translation parameters for the camera position.
At this time, the camera position of the first room can be selected as the reference coordinate system, because the currently obtained room outlines are the outline positions in the respective coordinate systems, and need to be unified into one coordinate system, so that one reference coordinate system needs to be selected. Specifically, the coordinate system in which the camera position of the first room is located may be selected as the reference coordinate system. Then, the contour three-dimensional points of other rooms can be unified under the coordinate system by the following equation 3:
Figure GDA0003577016980000191
all dimension-normalized contour three-dimensional points (for example, three-dimensional points on a ceiling edge, a wall surface edge and a floor edge) except the first room are converted through a formula 3, so that the three-dimensional points of all rooms can be unified to the same coordinate system (namely, a reference coordinate system of the first room), and therefore splicing of the multi-room plane contour can be achieved.
Here, the coordinate system of any one room can be selected as the reference coordinate system, and the present invention is not limited in this respect, because the present invention requires a relative positional relationship, not an absolute positional relationship.
Here, the multi-room profile obtained after the multi-room stitching at this step may be output as a 2D model (e.g., a 2D house figure) of the house.
Optionally, in step S145, the contour of the multi-room may be corrected.
Note that this step is also not a necessary step for two-dimensional or three-dimensional modeling of the house, but a preferred way of handling that can improve the accuracy of the model.
In the invention, after the outlines of the multiple rooms are spliced, the outlines of the multiple rooms can be further corrected to obtain a more accurate room outline.
Due to the influence of the single-image plane contour extraction precision and the camera position estimation precision, the contours of adjacent rooms may have an overlapping area or a gap after splicing, and therefore, the contours can be further corrected for the two cases.
The correction method may be, for example, as follows. First, the distance between adjacent edges of two contours (which should theoretically be overlapped, that is, should theoretically be one overlapped edge of the multi-room contour) is calculated, if the distance is smaller than a certain threshold, it can be determined that the two edges are in an adjacent relationship, at this time, the contour can be shifted accordingly, so that the distance between the adjacent edges becomes 0 (becomes overlapped, becomes an overlapped edge), thereby correcting the overlap or gap between the adjacent edges.
For the above threshold, for example, an average length L of the adjacent edges that should be an overlapped edge may be calculated, and a certain proportion of the average length may be used as the threshold, for example, 0.2 × L may be used as the distance threshold.
Note that the above is merely an exemplary threshold value given for ease of understanding, and in fact, the present invention does not impose additional limitations on the threshold value, which can be determined experimentally and empirically.
Thus, the multi-room contour after the above single-room contour fusion and multi-room contour modification can be used as a complete and accurate 2D floor plan (2D model of the house) of the set of houses.
Optionally, the generated multi-room plane profile may be further converted into a house 3D model in step S150.
First, three-dimensional point interpolation is performed inside the ceiling plane contour among the multi-room plane contours obtained in the previous step, and then all three-dimensional point coordinates are projected into the corresponding panoramic image so as to acquire the ceiling texture (color value).
Here, a method of interpolating three-dimensional points will be exemplified. For example, assuming that the ceiling profile of the obtained multi-room plane profile is a rectangle, assuming that the length is H and the width is W, the length and the width can be divided into N intervals, respectively, so that a total of N × N interpolation points can be obtained. Then, a vertex of the rectangle may be selected (assuming that the three-dimensional point coordinates of the vertex are (x, y, z)) as an origin, and the coordinates of the N × N points may be sequentially (x + H/N, y, z), (x +2 × H/N, y, z) … (x, y + W/N, z) (x, y +2 × W/N, z), … (x + H/N, y + W/N, z) …). Therefore, after the three-dimensional point interpolation, the dense three-dimensional point coordinates inside the contour are obtained.
It should be noted that a specific example of three-dimensional point interpolation is given above for the sake of understanding, and in fact, the three-dimensional point interpolation method applicable to the present invention may be many and is not limited to this example.
In addition, for example, a specific projection method may be as follows. The coordinate of the three-dimensional point after interpolation is assumed to be (x)i,yi,zi) The longitude and latitude projected on the panoramic image is (theta)i,φi) Then the projection formula can be represented by the following formula 4:
Figure GDA0003577016980000201
after the latitude and longitude are obtained by the formula, the coordinate of the three-dimensional point on the panoramic image plane can be obtained according to the formula 1, and the color value of the point can be used as the texture of the three-dimensional point.
For most scenes, the contour of the ceiling and the contour of the floor may be assumed to be parallel and the same. Thus, the corrected ceiling plane profile of each room obtained as described above is used, plus the estimated height h of the camera from the floor obtained abovef' also, three-dimensional points of the multi-room floor plan profile can be generated by equation 2.
Here, the shape of the plane contour of the floor is assumed to be the same as the ceiling, i.e., the three-dimensional coordinates x and z of the horizontal plane are the same, except for the height, i.e., the y value in the vertical direction (e.g., the plane contour of the ceiling is above the camera, and the floor is below the camera, so the heights are different). Therefore, it is only necessary to compare the y value (estimated height h of the camera from the ceiling) in the three-dimensional point coordinates of the ceiling profile obtained abovec') replace with an estimated height h of the camera from the floorf' then, the process is finished.
Similarly to the three-dimensional point interpolation of the planar contour of the ceiling, for the planar contour of the floor, the three-dimensional point interpolation is internally performed and then projected into the corresponding panoramic image using equation 4 so as to obtain the texture of the floor.
Then, three-dimensional vertices at the same plane position between the ceiling profile and the floor profile are connected to form plane profiles of a plurality of wall surfaces, and similarly, three-dimensional point interpolation is performed on the interiors of the plane profiles, and then the three-dimensional point interpolation is projected into the corresponding panoramic image by using formula 4 so as to obtain the texture of the wall surface.
Thus, a 3D texture model of the complete house may be generated.
By the house modeling method, the resolution and the accuracy of the generated model can be effectively improved.
Moreover, it should be noted that, for the sake of understanding and description, the method for modeling based on images of the present invention is described by taking house modeling as an example, and actually, the present invention should not be limited to the application scenario of house modeling, but can be applied to various scenarios for modeling based on images, such as the scenario of modeling vehicles to realize VR (virtual reality) driving, and the present invention actually provides an innovative comprehensive image processing scheme.
Fig. 3 presents a schematic block diagram of an image processing apparatus according to an exemplary embodiment of the present invention.
As shown in fig. 3, the image processing apparatus 100 according to an exemplary embodiment of the present invention may include a camera position estimating device 110, a single image plane contour generating device 120, and a scale normalizing device 130.
Wherein the camera position estimating means 110 may be configured to estimate the position of the panoramic camera when each panoramic image is taken, and the three-dimensional point coordinates of the matching feature points on each panoramic image, using the geometrical relationship of at least one panoramic image taken, wherein each panoramic image is taken for one three-dimensional object, each three-dimensional object corresponding to one or more panoramic images.
The single-image planar contour generating means 120 may be configured to generate, for each panoramic image, a planar contour in a three-dimensional space of each panoramic image based on a contour surrounded by edge pixels among pixels whose contour features belong to a specific category on the panoramic image.
The scale normalization means 130 may be configured to normalize the scale of the estimated position of the panoramic camera when each panoramic image is captured and the scale of the planar contour of each panoramic image in the three-dimensional space, resulting in the normalized planar contour of each panoramic image in the three-dimensional space.
Optionally, the single-image plane contour generating device 120 may be further configured to determine edge pixels among pixels on the panoramic image whose contour features belong to a specific category based on feature similarity between the pixels on the panoramic image.
The feature similarity of the two pixels is the absolute value of the difference between the features of the two pixels. Here, the characteristics of the pixel points may include gray scale, color, and the like.
Optionally, the scale normalization device 130 may be further configured to rank the height values in all three-dimensional point coordinates on the at least one panoramic image obtained by the camera position estimation device from small to large, and take the median or mean of the top-ranked height values as the estimated height h serving as the profile for the specific categoryc' estimated height h of camera from top of corresponding three-dimensional objectc'; and using the assumed height h of the camera from the top of the respective three-dimensional objectcEstimated height h of the top of the three-dimensional object corresponding to the camera distancec' generating a normalized planar profile of each panoramic image from the planar profile of each panoramic image in three-dimensional space.
Wherein the camera is at an assumed height h from the top of the respective three-dimensional objectcIs an arbitrarily assumed height.
In the present invention, a high-resolution captured image is provided for three-dimensional object modeling (e.g., house modeling, etc.) by taking a panoramic image of a room using a panoramic camera.
Further, in the present invention, an efficient image processing apparatus is employed, high-resolution modeling preparation data is provided for three-dimensional object modeling (e.g., house modeling), and the provided modeling preparation data can simplify the subsequent model generation process.
Fig. 4 presents a schematic block diagram of an object modeling apparatus in accordance with an exemplary embodiment of the present invention.
As shown in fig. 4, the object modeling apparatus 1000 may include the image processing apparatus 100 shown in fig. 3 and the multi-object stitching device 140.
Therein, the image processing device 100 may be configured to process at least one panoramic image, generating a planar contour in three-dimensional space of the normalized panoramic images.
The multi-object stitching device 140 may be configured to stitch the multi-object planar profiles based on the normalized planar profiles of the panoramic images.
Optionally, the three-dimensional object modeling apparatus 1000 may further include: a single-object planar contour generating means 135, which may be configured to derive a planar contour of each individual room based on the normalized planar contours of each panoramic image.
Optionally, the single-object plane contour generating means 135 may be further configured to determine, for the at least one panoramic image, whether a plurality of panoramic images belong to the same three-dimensional object one by: if more than specific proportion of matching feature points exist between the two panoramic images, the two panoramic images can be judged to belong to the same three-dimensional object; and if the plurality of panoramic images are determined to belong to the same three-dimensional object, taking a union set of plane outlines of the same three-dimensional object obtained from the plurality of panoramic images as the plane outline of the three-dimensional object in the three-dimensional space.
In addition, optionally, the multi-object stitching device 140 may be further configured to stitch the multi-object plane contour in the three-dimensional space based on the plane contour in the three-dimensional space of each single three-dimensional object generated by the single-object plane contour generation device 135.
In addition, optionally, the three-dimensional object modeling apparatus 1000 may further include a multi-object contour optimization device 145, which may be configured to perform contour correction on the multi-object plane contour obtained by the multi-object stitching device 140.
Optionally, the three-dimensional object modeling apparatus 1000 may further include a 3D model generation device 150, which may be configured to convert the stitched multi-object plane contour in the three-dimensional space into a three-dimensional object 3D model.
Here, the devices 110, 120, 130, 135, 140, 145, 150, etc. of the object modeling apparatus 1000 described above correspond to the steps S110, 120, 130, 135, 140, 145, 150, etc. described above, respectively, and are not described again here.
Therefore, the object modeling device can effectively improve the resolution and the accuracy of the generated model.
Moreover, it should be noted that, for the sake of understanding and description, the technical solution of the present invention for modeling based on images is described by taking three-dimensional object modeling as an example, and actually, the present invention should not be limited to the application scenario of three-dimensional object modeling, but can be applied to various scenarios for modeling based on images, such as house modeling, vehicle modeling, and the like. The present invention actually provides an innovative comprehensive image processing scheme.
Fig. 5 gives a schematic block diagram of an image processing apparatus according to an exemplary embodiment of the present invention.
Referring to fig. 5, the image processing apparatus 1 includes a memory 10 and a processor 20.
The processor 20 may be a multi-core processor or may include a plurality of processors. In some embodiments, processor 20 may comprise a general-purpose host processor and one or more special purpose coprocessors such as a Graphics Processor (GPU), Digital Signal Processor (DSP), or the like. In some embodiments, processor 20 may be implemented using custom circuits, such as an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA).
The memory 10 has stored thereon executable code which, when executed by the processor 20, causes the processor 20 to perform one of the methods described above. The memory 10 may include various types of storage units, such as a system memory, a Read Only Memory (ROM), and a permanent storage device, among others. Wherein the ROM may store static data or instructions that are required by the processor 20 or other modules of the computer. The persistent storage device may be a read-write storage device. The persistent storage may be a non-volatile storage device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, the persistent storage device employs a mass storage device (e.g., magnetic or optical disk, flash memory) as the persistent storage device. In other embodiments, the permanent storage may be a removable storage device (e.g., floppy disk, optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as a dynamic random access memory. The system memory may store instructions and data that some or all of the processors require at run-time. Further, the memory 10 may comprise any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory), magnetic and/or optical disks, may also be employed. In some embodiments, memory 10 may include a removable storage device that is readable and/or writable, such as a Compact Disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., SD card, min SD card, Micro-SD card, etc.), a magnetic floppy disk, or the like. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted by wireless or wired means.
Furthermore, the method according to the invention may also be implemented as a computer program or computer program product comprising computer program code instructions for carrying out the above-mentioned steps defined in the above-mentioned method of the invention.
Alternatively, the invention may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or computing device, server, etc.), causes the processor to perform the steps of the above-described method according to the invention.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowcharts, block diagrams, etc. in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (21)

1. An image processing method, characterized in that the image processing method comprises:
generating a planar contour of a single image, wherein for a panoramic image shot by a panoramic camera, a planar contour of the panoramic image in a three-dimensional space is generated based on a contour surrounded by edge pixel points in pixel points of which contour features belong to a specific category on the panoramic image, and the planar contour is used for constructing a 3D model; and
wherein the specific categories include at least: a top, a bottom, a support;
when at least two panoramic images exist, before the single image plane contour generation step, the method further comprises the following steps:
a camera position estimation step in which a position of the panoramic camera at the time of photographing each panoramic image, and three-dimensional point coordinates of matching feature points on each panoramic image, each panoramic image being photographed for one three-dimensional object, each three-dimensional object corresponding to one or more panoramic images, are obtained using a geometric relationship of at least two panoramic images photographed for each of the at least one three-dimensional object photographed;
when there are at least two panoramic images, after the single image plane contour generating step, further comprising:
and a scale normalization step, wherein the scale of the position of the panoramic camera when the at least one panoramic image of each three-dimensional object is shot and the scale of the plane contour of the at least one panoramic image of each three-dimensional object in the three-dimensional space are normalized, and the plane contour of each normalized panoramic image in the three-dimensional space is obtained.
2. The image processing method of claim 1, wherein the camera position estimating step includes:
performing feature point matching between the panoramic images by using the geometric relationship of at least two panoramic images shot by each three-dimensional object in at least one shot three-dimensional object, and recording mutually matched feature points in the panoramic images as matching feature points; and
and reducing the reprojection error of the matching characteristic points on the panoramic image for at least two panoramic images shot by each three-dimensional object in the at least one three-dimensional object to obtain the position of a camera when each panoramic image is shot and the three-dimensional point coordinates of the matching characteristic points on the panoramic image.
3. The image processing method of claim 1, wherein the single image plane contour generating step comprises:
determining the edge pixel points among the pixel points on the panoramic image, of which the contour features belong to a specific category, based on the feature similarity between the pixel points on the panoramic image,
the feature similarity of the two pixels is an absolute value of a difference between features of the two pixels, and the features of the pixels comprise gray scale and color.
4. The image processing method of claim 1, wherein the scale normalization step comprises:
estimating the height in all three-dimensional point coordinates on the at least two panoramic images obtained in the camera position estimating stepThe values are sorted from small to large, and the median or the mean of the height values sorted at the top is taken as the estimated height h of the profile of the specific categoryc'; and
presume height h with class-specific profilecEstimating height h with profile of specific categoryc' generating a normalized planar contour in three-dimensional space of each panoramic image from the planar contour in three-dimensional space of at least one panoramic image of each three-dimensional object,
wherein the class-specific profile assumes a height hcIs an arbitrarily assumed height.
5. An object modeling method, characterized in that the object modeling method comprises:
an image processing step of performing image processing on at least one panoramic image using the image processing method according to any one of claims 1 to 4 to obtain a planar contour in a three-dimensional space of each panoramic image subjected to normalization; and
and a multi-object splicing step, wherein based on the normalized plane profiles of the panoramic images in the three-dimensional space, a multi-object plane profile is obtained through splicing.
6. The object modeling method of claim 5, further comprising:
and generating a planar contour of the single object, wherein the planar contour in the three-dimensional space of each single three-dimensional object is obtained based on the normalized planar contour of each panoramic image obtained in the image processing step.
7. The object modeling method of claim 6, wherein the single object plane contour generating step comprises:
for the at least one panoramic image, determining whether a plurality of panoramic images belong to the same three-dimensional object one by the following method: if more than specific proportion of matching feature points exist between the two panoramic images, the two panoramic images are determined to belong to the same three-dimensional object; and
and if the plurality of panoramic images belong to the same three-dimensional object, taking the union set of the plane outlines of the same three-dimensional object obtained from the plurality of panoramic images as the plane outline of the three-dimensional object.
8. The object modeling method according to claim 7, wherein in the multi-object stitching step, a multi-object planar contour in a three-dimensional space can be further stitched based on a planar contour in a three-dimensional space of each panoramic image.
9. The object modeling method of any of claims 6-8, further comprising:
and a 3D model generation step, wherein after the multi-object splicing step, the multi-object plane contour in the three-dimensional space obtained by splicing is converted into an object 3D model.
10. The object modeling method of claim 9, wherein the 3D model generating step includes:
performing three-dimensional point interpolation on the top plane contour in the spliced multi-object plane contours, and projecting all three-dimensional point coordinates on each top plane contour to a corresponding panoramic image coordinate system to obtain top textures;
carrying out three-dimensional point interpolation on the bottom plane contour in the spliced multi-object plane contours, and projecting all three-dimensional point coordinates on each obtained bottom plane contour into a corresponding panoramic image coordinate system to obtain bottom textures;
connecting three-dimensional vertexes on the same plane position between the top outline and the bottom outline to form a plane outline of the supporting part, performing three-dimensional point interpolation inside the plane outline of the supporting part, and projecting all three-dimensional point coordinates of the obtained plane outline of each supporting part into a corresponding panoramic image coordinate system so as to obtain supporting part textures;
and generating a 3D texture model of the whole three-dimensional object based on the top texture, the bottom texture and the supporting part texture.
11. The object modeling method according to claim 10, wherein in the 3D model generation step, h, which is an estimated height of the specific contour, is selected from coordinates of all three-dimensional points on the top plane contour of each of the three-dimensional objects obtainedc' the height value of the estimated height of the camera from the top of the corresponding three-dimensional object is replaced by the estimated height h of the camera from the bottom of the corresponding three-dimensional objectf' and keeping the length and width values in all three-dimensional point coordinates on the top planar contour of each three-dimensional object constant, obtaining the bottom planar contour of each three-dimensional object correspondingly, wherein,
the estimated height h of the camera from the top of the corresponding three-dimensional objectc' is obtained by: sorting the height values in all three-dimensional point coordinates on the at least one panoramic image obtained in the camera position estimation step from small to large, and taking the median or mean of the height values sorted at the front as the estimated height h of the camera from the top of the corresponding three-dimensional objectc', and
the estimated height h of the camera from the bottom of the corresponding three-dimensional objectf' is obtained by: sorting the height values in all three-dimensional point coordinates on the at least one panoramic image obtained in the camera position estimation step from small to large, and taking the median or mean of the height values after sorting as the estimated height h of the camera from the bottom of the corresponding three-dimensional objectf’。
12. An image processing apparatus characterized by comprising:
a single-image planar contour generation device configured to generate, for a panoramic image taken with a panoramic camera, a planar contour in a three-dimensional space of each panoramic image based on a contour surrounded by edge pixels among pixels whose contour features belong to a specific category on the panoramic image, the planar contour being used to construct a 3D model; and
wherein the specific categories include at least: top, bottom, support portion
When there are at least two panoramic images, the image processing apparatus further includes:
a camera position estimation device configured to obtain a position of the panoramic camera at the time of photographing each panoramic image and three-dimensional point coordinates of matching feature points on each panoramic image, using a geometric relationship of at least two panoramic images photographed by each of at least one three-dimensional object photographed, each panoramic image being photographed for one three-dimensional object, each three-dimensional object corresponding to one or more panoramic images;
and the scale normalization device is configured to normalize the obtained scale of the position of the panoramic camera when the at least one panoramic image of each three-dimensional object is shot and the scale of the plane contour of the at least one panoramic image of each three-dimensional object in the three-dimensional space to obtain the normalized plane contour of each panoramic image in the three-dimensional space.
13. The image processing apparatus of claim 12, wherein the single image plane contour generating device is further configured to:
determining edge pixel points among pixel points on the panoramic image, of which the outline features belong to a specific category, based on feature similarity among the pixel points on the panoramic image,
the feature similarity of the two pixels is an absolute value of a difference between features of the two pixels, and the features of the pixels comprise gray scale and color.
14. The image processing apparatus of claim 12, wherein the scale normalization means is further configured for:
the height values in all three-dimensional point coordinates on the at least two panoramic images obtained by the camera position estimation device are usedSorting from small to large, taking the median or mean of the height values sorted at the top as the estimated height h of the profile of a specific categoryc' estimated height h of camera from top of corresponding three-dimensional objectc'; and
using the assumed height h of the camera from the top of the respective three-dimensional objectcEstimated height h of the top of the three-dimensional object corresponding to the camera distancec' generating a normalized planar contour of each panoramic image from the planar contour of at least one panoramic image of each three-dimensional object in three-dimensional space,
wherein the camera is at an assumed height h from the top of the respective three-dimensional objectcIs an arbitrarily assumed height.
15. An object modeling apparatus, characterized in that the object modeling apparatus further comprises:
an image processing apparatus according to any one of claims 12 to 14, configured to image process at least one panoramic image to obtain a normalized planar profile of each panoramic image in three-dimensional space; and
and the multi-object splicing device is configured to splice the plane profiles in the three-dimensional space of the normalized panoramic images to obtain the multi-object plane profiles in the three-dimensional space.
16. The object modeling apparatus of claim 15, wherein the object modeling apparatus further comprises:
and the single-object plane contour generating device is configured to obtain a plane contour in a three-dimensional space of each three-dimensional object based on the normalized plane contour in the three-dimensional space of each panoramic image.
17. The object modeling apparatus of claim 16, wherein the single object plane contour generation means is further configured for:
for the at least one panoramic image, determining whether a plurality of panoramic images belong to the same three-dimensional object one by: if more than specific proportion of matching feature points exist between the two panoramic images, the two panoramic images can be judged to belong to the same three-dimensional object; and
and if the plurality of panoramic images belong to the same three-dimensional object, taking a union set of plane outlines of the same three-dimensional object obtained from the plurality of panoramic images as the plane outline of the three-dimensional object in the three-dimensional space.
18. The object modeling apparatus of claim 17, wherein the multi-object stitching means is further configured to stitch the multi-object planar contour in three-dimensional space based on planar contours of individual three-dimensional objects in three-dimensional space.
19. The object modeling apparatus of any of claims 16-18, further comprising:
and the 3D model generation device is configured to convert the spliced multi-object plane contour in the three-dimensional space into a three-dimensional object 3D model.
20. An image processing apparatus comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor causes the processor to perform the method of any of claims 1 to 11.
21. A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor, causes the processor to perform the method of any of claims 1-11.
CN202010574097.2A 2019-04-12 2019-04-12 Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium Active CN111862302B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010574097.2A CN111862302B (en) 2019-04-12 2019-04-12 Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010574097.2A CN111862302B (en) 2019-04-12 2019-04-12 Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium
CN201910296077.0A CN110490967B (en) 2019-04-12 2019-04-12 Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910296077.0A Division CN110490967B (en) 2019-04-12 2019-04-12 Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium

Publications (2)

Publication Number Publication Date
CN111862302A CN111862302A (en) 2020-10-30
CN111862302B true CN111862302B (en) 2022-05-17

Family

ID=68545807

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202010574097.2A Active CN111862302B (en) 2019-04-12 2019-04-12 Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium
CN202010574085.XA Active CN111862301B (en) 2019-04-12 2019-04-12 Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium
CN201910296077.0A Active CN110490967B (en) 2019-04-12 2019-04-12 Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN202010574085.XA Active CN111862301B (en) 2019-04-12 2019-04-12 Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium
CN201910296077.0A Active CN110490967B (en) 2019-04-12 2019-04-12 Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium

Country Status (1)

Country Link
CN (3) CN111862302B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020207512A1 (en) * 2019-04-12 2020-10-15 北京城市网邻信息技术有限公司 Three-dimensional object modeling method, image processing method, and image processing device
CN111127357B (en) * 2019-12-18 2021-05-04 北京城市网邻信息技术有限公司 House type graph processing method, system, device and computer readable storage medium
CN111127655B (en) * 2019-12-18 2021-10-12 北京城市网邻信息技术有限公司 House layout drawing construction method and device, and storage medium
CN113436311A (en) * 2020-03-23 2021-09-24 阿里巴巴集团控股有限公司 House type graph generation method and device
CN112055192B (en) * 2020-08-04 2022-10-11 北京城市网邻信息技术有限公司 Image processing method, image processing apparatus, electronic device, and storage medium
CN112686989A (en) * 2021-01-04 2021-04-20 北京高因科技有限公司 Three-dimensional space roaming implementation method
CN114332648B (en) * 2022-03-07 2022-08-12 荣耀终端有限公司 Position identification method and electronic equipment
CN114442895A (en) * 2022-04-07 2022-05-06 阿里巴巴达摩院(杭州)科技有限公司 Three-dimensional model construction method
CN116246085B (en) * 2023-03-07 2024-01-30 北京甲板智慧科技有限公司 Azimuth generating method and device for AR telescope

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105488775A (en) * 2014-10-09 2016-04-13 东北大学 Six-camera around looking-based cylindrical panoramic generation device and method

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7149367B2 (en) * 2002-06-28 2006-12-12 Microsoft Corp. User interface for a system and method for head size equalization in 360 degree panoramic images
CN1758284A (en) * 2005-10-17 2006-04-12 浙江大学 Method for quickly rebuilding-up three-D jaw model from tomographic sequence
CN100576907C (en) * 2007-12-25 2009-12-30 谢维信 Utilize the method for single camera real-time generating 360 degree seamless full-view video image
CN101739717B (en) * 2009-11-12 2011-11-16 天津汇信软件有限公司 Non-contact scanning method for three-dimensional colour point clouds
CN101923730B (en) * 2010-09-21 2012-05-02 北京大学 Fisheye camera and multiple plane mirror devices-based three-dimensional reconstruction method
CN101950426B (en) * 2010-09-29 2014-01-01 北京航空航天大学 Vehicle relay tracking method in multi-camera scene
TW201342303A (en) * 2012-04-13 2013-10-16 Hon Hai Prec Ind Co Ltd Three-dimensional image obtaining system and three-dimensional image obtaining method
CN103379267A (en) * 2012-04-16 2013-10-30 鸿富锦精密工业(深圳)有限公司 Three-dimensional space image acquisition system and method
CN102650886B (en) * 2012-04-28 2014-03-26 浙江工业大学 Vision system based on active panoramic vision sensor for robot
TWI555379B (en) * 2015-11-06 2016-10-21 輿圖行動股份有限公司 An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
CN106780421A (en) * 2016-12-15 2017-05-31 苏州酷外文化传媒有限公司 Finishing effect methods of exhibiting based on panoramic platform
CN106651767A (en) * 2016-12-30 2017-05-10 北京星辰美豆文化传播有限公司 Panoramic image obtaining method and apparatus
CN107564039A (en) * 2017-08-31 2018-01-09 成都观界创宇科技有限公司 Multi-object tracking method and panorama camera applied to panoramic video
CN108389157A (en) * 2018-01-11 2018-08-10 江苏四点灵机器人有限公司 A kind of quick joining method of three-dimensional panoramic image
JP6337226B1 (en) * 2018-03-02 2018-06-06 株式会社エネルギア・コミュニケーションズ Abnormal point detection system
CN108416840B (en) * 2018-03-14 2020-02-18 大连理工大学 Three-dimensional scene dense reconstruction method based on monocular camera
CN108537848B (en) * 2018-04-19 2021-10-15 北京工业大学 Two-stage pose optimization estimation method for indoor scene reconstruction
CN108876909A (en) * 2018-06-08 2018-11-23 桂林电子科技大学 A kind of three-dimensional rebuilding method based on more image mosaics
CN108961395B (en) * 2018-07-03 2019-07-30 上海亦我信息技术有限公司 A method of three dimensional spatial scene is rebuild based on taking pictures
CN109508682A (en) * 2018-11-20 2019-03-22 成都通甲优博科技有限责任公司 A kind of detection method on panorama parking stall

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105488775A (en) * 2014-10-09 2016-04-13 东北大学 Six-camera around looking-based cylindrical panoramic generation device and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
景物三维立体图像真实性建模仿真研究;黄晓洲;《计算机仿真》;20170115(第01期);全文 *

Also Published As

Publication number Publication date
CN111862302A (en) 2020-10-30
CN110490967A (en) 2019-11-22
CN110490967B (en) 2020-07-17
CN111862301A (en) 2020-10-30
CN111862301B (en) 2021-10-22

Similar Documents

Publication Publication Date Title
CN110675314B (en) Image processing method, image processing apparatus, three-dimensional object modeling method, three-dimensional object modeling apparatus, image processing apparatus, and medium
CN110490916B (en) Three-dimensional object modeling method and apparatus, image processing device, and medium
CN111862302B (en) Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium
WO2020207512A1 (en) Three-dimensional object modeling method, image processing method, and image processing device
US11727587B2 (en) Method and system for scene image modification
Scharstein View synthesis using stereo vision
US9872010B2 (en) Lidar stereo fusion live action 3D model video reconstruction for six degrees of freedom 360° volumetric virtual reality video
Fitzgibbon et al. Automatic 3D model acquisition and generation of new images from video sequences
US10846844B1 (en) Collaborative disparity decomposition
US20190012804A1 (en) Methods and apparatuses for panoramic image processing
WO2015017941A1 (en) Systems and methods for generating data indicative of a three-dimensional representation of a scene
CN112055192B (en) Image processing method, image processing apparatus, electronic device, and storage medium
Willi et al. Robust geometric self-calibration of generic multi-projector camera systems
Banno et al. Omnidirectional texturing based on robust 3D registration through Euclidean reconstruction from two spherical images
Pagani et al. Dense 3D Point Cloud Generation from Multiple High-resolution Spherical Images.
GB2567245A (en) Methods and apparatuses for depth rectification processing
Alsadik Guided close range photogrammetry for 3D modelling of cultural heritage sites
Yang et al. Noise-resilient reconstruction of panoramas and 3d scenes using robot-mounted unsynchronized commodity rgb-d cameras
Ackermann et al. Removing the example from example-based photometric stereo
Hu et al. Multiple-view 3-D reconstruction using a mirror
Ortiz-Cayon et al. Automatic 3d car model alignment for mixed image-based rendering
Bartczak et al. Extraction of 3D freeform surfaces as visual landmarks for real-time tracking
JP2002092597A (en) Method and device for processing image
Rolin et al. View synthesis for pose computation
da Silveira et al. 3D Scene Geometry Estimation from 360$^\circ $ Imagery: A Survey

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant