CN107798725B - Android-based two-dimensional house type identification and three-dimensional presentation method - Google Patents

Android-based two-dimensional house type identification and three-dimensional presentation method

Info

Publication number
CN107798725B
CN107798725B (application CN201710783949.7A)
Authority
CN
China
Prior art keywords
dimensional
image
vector
line segments
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710783949.7A
Other languages
Chinese (zh)
Other versions
CN107798725A (en)
Inventor
蔡毅
李雨龙
闵华清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201710783949.7A priority Critical patent/CN107798725B/en
Publication of CN107798725A publication Critical patent/CN107798725A/en
Application granted granted Critical
Publication of CN107798725B publication Critical patent/CN107798725B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/155 Segmentation; Edge detection involving morphological operators

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an Android-based two-dimensional house type identification and three-dimensional presentation method, which comprises the following steps: extracting and classifying two-dimensional house-type graph information and constructing three-dimensional house-type modeling elements; selecting a two-dimensional house-type graph in an image format; cropping and correcting the two-dimensional house-type picture to a suitable size, and applying binarization and morphological operations to it; extracting the basic data of each modeling element for three-dimensional drawing; automatically drawing the three-dimensional house-type graph; and roaming the three-dimensional house-type graph through gesture operations. The method can quickly and conveniently identify two-dimensional house types and present them in three dimensions on a mobile terminal, while also providing convenient interactive operation. It therefore features low modeling difficulty and high efficiency, aids the user's understanding, and greatly improves the user experience.

Description

Android-based two-dimensional house type identification and three-dimensional presentation method
Technical Field
The invention relates to the technical field of two-dimensional image recognition and three-dimensional modeling, in particular to a two-dimensional house type recognition and three-dimensional presentation method based on Android.
Background
With the rapid advance of computer software and hardware technologies, applications of computer graphics have spread quickly across industries. Computer graphics has entered the three-dimensional era, and three-dimensional graphics are ubiquitous in daily life. The virtual presentation and display of three-dimensional graphics are intuitive, expressive, and engaging. Meanwhile, computer visualization and computer animation have become popular topics in computer graphics in recent years, and the technical core of both is three-dimensional graphics. Modeling is one of the most important technical fields of three-dimensional graphics and among its hottest research directions, playing an important role in industries such as architecture, sculpture, the military, and entertainment.
China's real estate market has developed rapidly and is currently dominated by housing: the residential market holds an 87.2% share, commercial real estate accounts for 7.5%, and the remainder comprises office, cultural and educational, tourism, and elder-care real estate. At present, home buyers choose a suitable house type mainly from the two-dimensional house-type graphs provided by developers. Market research shows that house types are displayed mainly as two-dimensional house-type graphs, chiefly on paper media. The two-dimensional house-type graph is cheap, simple to display, and easy to produce, but compared with a three-dimensional house-type graph it is less intuitive and less expressive. Because of its technical complexity, three-dimensional house-type modeling requires professional engineers and has not been widely applied in the real estate industry. Yet as real estate development becomes more refined, three-dimensional house-type modeling is a clear trend and a growth point for new business, especially terminal applications that tie into the mobile internet. In the vast housing market, a mobile phone application that converts a house-type drawing directly into a three-dimensional scene is keenly awaited.
Research shows that no mobile phone application offering three-dimensional house-type presentation currently exists on the market. Some large companies, such as Autodesk, have produced related software that assembles rooms from given room and furniture models, but it is complex to operate, building a single room is time-consuming, and its main purpose is as a design reference for furniture arrangement. Such software does not build rooms from actual house types, so the resulting room layouts are not realistic.
The rise of three-dimensional modeling technology and the emergence of virtual reality technology provide an excellent working platform for design and innovation. Various modeling methods can be roughly classified into geometric modeling techniques, parametric modeling techniques, variable quantity modeling techniques, feature modeling techniques, and the like.
The common three-dimensional geometric modeling techniques are wire frame modeling, surface modeling and solid modeling.
1. Wire frame modeling: wire-frame modeling focuses on the ridge lines on the boundaries of a shape. Its main aim was to solve the representation of shapes for automated drawing by computer. Wire-frame modeling is very effective for planar shapes, but it has obvious defects for bodies containing curved surfaces, because their contour lines are not all ridge lines; a contour line that is not a ridge line is not fixed and varies with the viewer's viewpoint and viewing angle.
2. Surface modeling: surface modeling adds face information to the wire-frame model; an object is represented by the geometry of its faces, with each face's boundary defined by a loop. The surface model overcomes many defects of the wire-frame model and completely defines the three-dimensional surface, so objects that the wire-frame model cannot describe unambiguously can be described by the surface model. However, the surface model also has shortcomings: it represents only the surface boundary of an object and does not express true solid attributes such as mass, moment of inertia, and center of mass.
3. Solid modeling: solid modeling is the highest-level three-dimensional object model at present, and can completely represent all shape information of an object. A point on the outside, inside or surface of the object can be unambiguously determined, and the model can further meet the requirements of applications such as physical property calculation, finite element analysis, etc.
Parametric modeling and variable-quantity modeling are the two main forms of constraint-based design. They share common traits: both can process the constraint relations that designers add interactively to a part model, and both can update the geometry automatically when constraint parameters change, so designers need not work out how to update the geometric model's constraint relations to satisfy the design requirements.
Variable-quantity modeling provides greater freedom for modifying design objects, allowing flexible handling of size constraints.
Feature modeling has evolved from solid modeling techniques, which are techniques for modeling based on features of a product. Feature modeling can provide not only geometric information of a product, but also various functional information of the product.
In summary, when the above three-dimensional modeling methods are used to model a three-dimensional house type, they have the following disadvantages:
1. Modeling is highly specialized; the design is complex and difficult and requires skilled modeling engineers.
2. Operation is complicated, parameters are numerous, and the learning cost is very high.
3. Professional desktop modeling software and hardware are required, so the need for rapid modeling on a mobile terminal cannot be met.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a two-dimensional house type identification and three-dimensional presentation method based on Android.
The purpose of the invention can be achieved by adopting the following technical scheme:
a two-dimensional house type identification and three-dimensional presentation method based on Android comprises the following steps:
s0, extracting and classifying two-dimensional house type graph information, and constructing a three-dimensional house type graph modeling element;
s1, selecting a two-dimensional house-type picture in an image format by taking a photo or loading a picture from the photo album;
s2, correcting, cutting and processing the picture to obtain a binary image of the wall;
s3, extracting data for 3D drawing, automatically identifying rooms after a user slides and selects the positions of the rooms through an interactive interface, and performing 3D drawing after all the rooms are selected;
s4, drawing the selected room into a three-dimensional space to present a three-dimensional house type;
and S5, implementing gesture operation on the mobile terminal and roaming the three-dimensional house-type graph through gestures, wherein the roaming operation comprises a) moving the whole sight-line vector horizontally through a single-finger gesture, b) translating the sight-line vector along its own direction through a two-finger gesture, and c) rotating the sight-line vector around a point through a two-finger gesture.
Further, the step S2 includes the following steps:
s201, cutting a non-graphic part of the picture;
s202, carrying out image binarization based on a global threshold, determining a threshold according to the spatial distribution of the image gray level, and realizing the conversion from a gray level image to a binarized image according to the threshold;
and S203, morphological image processing, wherein the bright area in the image is increased through the expansion operation, and the bright area in the image is reduced through the erosion operation.
Further, the step S202 includes the steps of:
s2021, obtaining each gray value appearing in the picture and the probability of its appearance, storing the gray values in the first row of a two-dimensional array and the corresponding occurrence probabilities in the second row, so that PPjx(0, j) stores the j-th gray value appearing in the image and PPjx(1, j) stores the probability of the j-th gray value occurring;
s2022, obtaining a discrete function distribution f (i) of gray-scale values:
f(i) = Σ_{j=1}^{i} PPjx(0, j) · PPjx(1, j), i.e. the cumulative weighted gray value,
where i ranges over the distinct gray values appearing in the image;
s2023, calculating the sum PSum (i) of the probabilities of the occurrence of the first i gray-scale values:
PSum(i) = Σ_{j=1}^{i} PPjx(1, j),
where i ranges over the distinct gray values appearing in the image;
s2024, solving the gray average value AGray of the whole image, namely summing the gray values of all pixels in the image, and then dividing the sum by the total number of the pixels;
s2025, obtaining the threshold weight wvalue (i) at different gray levels:
WValue(i) = (AGray · PSum(i) - f(i))² / (PSum(i) · (1 - PSum(i))),
where i ranges over the distinct gray values appearing in the image;
s2026, obtaining the gray value at which WValue(i) attains its maximum as the optimal binarization threshold.
Further, the step S3 includes the following steps:
s301, edge detection, namely firstly generating a group of normalized Gaussian kernels by using a discretized Gaussian function, then carrying out weighted summation on each point of an image gray matrix based on the Gaussian kernels, then highlighting the point with the obvious change of the neighborhood intensity value of the image gray point by an enhancement algorithm, and then detecting edge points by a thresholding method;
s302, Hough transformation, namely mapping curves or straight lines with the same shape in a Cartesian coordinate space to a point in a polar coordinate space to form a peak value by using transformation between the Cartesian coordinate space and the polar coordinate space, and converting the problem of detecting any shape into a statistical peak value problem;
s303, optimizing the extracted information;
and S304, constructing a scene tree.
Further, the step S303 includes the steps of:
s3031, classifying all line segments into vertical, horizontal and other segments, and keeping only the vertical and horizontal ones: for a straight line AB on the plane, if the coordinates of A and B satisfy |xA - xB| < Factor the line is vertical, and similarly if they satisfy |yA - yB| < Factor the line is horizontal, where Factor is the error factor;
s3032, sorting the classified line segments by the coordinates of their starting points, using merge sort;
s3033, optimizing the sorted line segments by removing useless segments and merging redundant ones.
Further, the step S3033 of optimizing the sorted line segments specifically includes:
according to the sorted straight-line segments, the relationship between two adjacent straight-line segments is handled as follows: if the two segments are in a containment or identity relation, the longer segment is retained;
if the two line segments are in an intersecting relationship, merging the two adjacent line segments;
if the two line segments are in a separated relation, the two line segments are reserved and the processing is continued.
Further, the step S304 includes the following steps:
s3041, loading an image;
s3042, selecting a room area;
s3043, according to the selected region, obtaining the nearest linear range of the region by using insertion sorting;
s3044, determining a rectangular frame, expanding the x direction and the y direction of the rectangular frame to the obtained range, detecting whether a straight line exists in the range, if so, returning an object recording the result, and if not, continuing to increase in the x direction or the y direction until the straight line falls in the edge determined by the rectangular frame;
s3045, expanding the rectangular frame again to obtain a second boundary, sorting the distances between the two rectangular frames obtained, namely the inner and outer frames, taking two adjacent distance values as ordinates with an abscissa spacing of 1 to obtain the slope between adjacent distances, and using the slope to screen and optimize the obtained result;
s3046, determining the final range according to the nearest straight line range, the inner boundary and the outer boundary of the room and the determined size of the column, determining the position of the door or the window according to the size of the non-intersected blank area of the inner boundary and the outer boundary, and finally calculating all walls and the ground according to the obtained result.
Further, the screening and optimizing the obtained result by using the slope in the step S3045 specifically includes:
setting the inner rectangular frame (p1, p2, p3, p4) and the outer rectangular frame (p1', p2', p3', p4');
calculating and sorting the upper, lower, left and right boundary values;
sorting the obtained distances without changing the values of the original array, recording the sorted positions of the original array in a new array, the four sorted distance values being d1, d2, d3 and d4;
calculating the inclination angle θ = (d4 - d3) · π/180 of two adjacent distances;
and screening according to the size of the inclination angle, with the set ranges as follows:
if θ ∈ (π/6, +∞), the maximum obtained distance is too large and should be eliminated, with the outer frame used as the inner boundary and the average of d2 and d3 as the outer boundary;
if θ ∈ (π/10, π/6), a large column is contained between the inner and outer boundaries; after judging the column position, the column is stored, the edge continues to expand, the outer frame is used as the inner boundary, and the average of d2 and d3 as the outer boundary;
if θ ∈ (0, π/10), the error is within the normal range and no processing is performed.
Further, the step S5 includes the following steps:
s501, moving the entire sight line vector horizontally by a single finger operation:
each movement of a single-finger gesture on the screen generates a pair of variables (dx, dy); letting the sight-line vector run from its starting point A to its end point B, the vector AB is based on the current position rather than the fixed x-axis and z-axis, as follows:
S5011, calculating the unit vector p of the projection of the vector AB onto the x0z plane;
S5012, finding the unit vector q that lies in the x0z plane and is perpendicular to p (p · q = 0);
S5013, calculating the translated sight-line vector from (dx, dy) and the unit vectors p and q;
S5014, applying the translated vector to a camera;
s502, translating the vector direction of the sight line vector through the double-finger operation:
on the screen, each movement of the two fingers generates a distance variation ds, obtained as the difference between the current and previous finger-to-finger distances, each computed as √((x1 - x2)² + (y1 - y2)²);
calculating the position after vector translation according to the distance variation ds, which comprises the following steps:
S5021, calculating the unit vector u = AB / |AB| of the sight-line vector AB;
S5022, calculating the translated vector by moving AB along u through the distance ds;
S5023, applying the translated vector to the camera;
s503, rotating the sight line vector around the point A through the double-finger operation:
placing the vector AB in a sphere, with the camera at the sphere center A and the viewpoint B on the spherical surface; when the distance between the two fingers on the screen is kept constant, the two fingers are equivalent to one finger, and each movement generates two identical pairs of variables (dx, dy); after a projection parameter λ is set, the coordinate change on the screen is projected into an angular change on the sphere, whose parametric equation is
x = a + r·sinθ·cosφ, y = b + r·sinθ·sinφ, z = c + r·cosθ,
where φ is the longitude, θ ∈ (0°, 180°) is the latitude, and (a, b, c) are the sphere-center coordinates; the specific operations of this step are as follows:
S5031, projecting (dx, dy) as angular changes in the sphere's parametric equation, Δφ = λ·dx and Δθ = λ·dy;
S5032, calculating the radius r = |AB| of the sphere;
S5033, recalculating the coordinates of point B from the parametric equation with the updated angles;
S5034, calculating the translated sight-line vector as the new B minus A;
s5035, applying the translated vector to the camera.
Compared with the prior art, the invention has the following advantages and effects:
1) The invention turns the complex three-dimensional house-type modeling process into an automatic one that is simple to operate and easy to learn; in particular, an ordinary user without professional knowledge can build a three-dimensional house type in just a few simple steps.
2) The invention completes the identification of the two-dimensional house-type picture and the presentation of the three-dimensional house type on a mobile terminal and realizes in-scene roaming, which is especially suitable for service scenarios such as remote house viewing. Mobile devices are portable and easy to use, which breaks the limitation of traditional methods that require display on a desktop computer or special equipment and greatly improves the user experience.
3) The method can be used in the fields of three-dimensional house type display, household product display, house sale advertisement and the like, and has a wide application range.
Drawings
FIG. 1 is a flowchart of the steps of the Android-based two-dimensional house type identification and three-dimensional presentation method disclosed by the invention;
FIG. 2 is a graph of edge detection results;
FIG. 3 is a schematic diagram of the parametric equations for a sphere.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
As shown in fig. 1, this embodiment discloses an Android-based two-dimensional house type identification and three-dimensional presentation method, which generates a three-dimensional house-type scene from a given house-type graph and supports indoor roaming; being based on Android mobile devices, it removes the need for special equipment to display the modeling result, and is simple and convenient.
The method comprises the following steps:
and S0, extracting and classifying information of the two-dimensional house type graph, and constructing modeling elements of the three-dimensional house type graph.
S1, selecting a two-dimensional house-type picture in an image format by taking a picture or loading the picture in the photo album: first a house picture is opened or a house picture is taken with a camera.
S2, correcting the picture properly, and cutting: and opening cutting software to cut, clicking an identification button after cutting, and performing a series of image processing to obtain a binary image of the wall.
And S3, extracting basic data of each modeling element for 3D drawing on the basis of the binarized two-dimensional house type picture.
After clicking to continue, an interactive interface is entered; once the user slides to select the room positions, the system automatically identifies the rooms, and when all rooms are selected, clicking to draw starts the drawing.
In this application, cropping and correcting the two-dimensional house-type picture to a suitable size and applying binarization and morphological operations to it comprise the following steps: binarizing the two-dimensional house-type picture using global-threshold image binarization; and applying the two kinds of morphological operations, dilation and erosion.
S4, presenting a three-dimensional house type: the software will render the selected room into three-dimensional space.
The three-dimensional presentation of the two-dimensional house-type graph in this application adopts a construction mode that combines multi-point touch interaction with automation on the Android-based mobile terminal.
S5, realizing roaming in the three-dimensional floor plan: in three-dimensional space, roaming can be achieved using single-finger and two-finger operational models.
In this step, when building the house-type graph or roaming it in three dimensions, different scenes can be switched and displayed dynamically according to the viewpoint position on the Android-based mobile terminal.
In step S0, the two-dimensional house-type graph information is extracted and classified into: (1) planar black-and-white hand-drawn wire-frame type (no furniture), with the wall as a black or hollow frame; (2) planar black-and-white free-drawn wire-frame type, with the wall as a black or hollow frame; (3) a type in which the wall is a hollow frame; (4) planar color simulated-real-object wire-frame type, with the wall as a black or hollow frame; (5) three-dimensional color simulated-real-object wire-frame type, with the wall as a black or hollow frame.
The specific process of step S2 is as follows:
s201, cutting a non-graphic part of the picture;
s202, image binarization based on a global threshold: a threshold is determined from the spatial distribution of the image gray levels, and the grayscale image is converted to a binary image according to this threshold. Global-threshold binarization gives the whole image a clear black-and-white appearance, preserves the regions of interest to the greatest extent, and greatly reduces the amount of data in the image.
The global optimal binarization threshold value selection method comprises the following steps:
s2021, acquiring each gray value appearing in the picture and the probability of the appearance of the gray value.
First, the gray values that appear in the picture are sorted.
Secondly, calculating the number of all the gray values and counting the occurrence times of each gray value.
And finally, calculating the percentage (discrete probability) of the occurrences of each gray value relative to the total count. A two-dimensional array is used, its first row storing the gray values and its second row the corresponding occurrence probabilities; e.g., PPjx(0, j) stores the j-th gray value appearing in the image and PPjx(1, j) stores the probability of the j-th gray value occurring.
S2022, obtaining a discrete function distribution f (i) of gray-scale values:
f(i) = Σ_{j=1}^{i} PPjx(0, j) · PPjx(1, j), i.e. the cumulative weighted gray value,
where i ranges over the distinct gray values appearing in the image.
S2023, calculating the sum PSum (i) of the probabilities of the occurrence of the first i gray-scale values:
PSum(i) = Σ_{j=1}^{i} PPjx(1, j),
where i ranges over the distinct gray values appearing in the image.
S2024, average the gray level of the whole image, AGray, i.e. sum the gray values of all pixels in the image, then divide by the total number of pixels.
S2025, obtaining the threshold weight wvalue (i) at different gray levels:
WValue(i) = (AGray · PSum(i) - f(i))² / (PSum(i) · (1 - PSum(i))),
where i ranges over the distinct gray values appearing in the image.
S2025, obtaining the gray value corresponding to the maximum value i of WValue (i) as the optimal binary threshold value.
S203, morphological image processing: the bright areas in the image can be increased by the dilation operation and decreased by the erosion operation.
The morphological operations on the two-dimensional house-type picture are of two types: dilation and erosion. Together they can achieve a variety of functions, mainly the following: eliminating noise; segmenting independent image elements and connecting adjacent elements in the image; finding obvious maximum-value or minimum-value regions in the image; and determining the gradient of the image.
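The patent does not name an image-processing library. As one possible realization of the binarization (S202) and the dilation and erosion clean-up (S203), the sketch below uses the OpenCV Java bindings available on Android; the kernel size and the erode-then-dilate order (a morphological opening) are assumptions.

```java
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public final class WallMask {

    /**
     * Binarize a grayscale floor-plan image with a global threshold, then apply
     * erosion followed by dilation (a morphological opening) to remove speckle
     * noise while preserving the wall regions.
     */
    public static Mat binarizeAndClean(Mat gray, double threshold) {
        Mat bin = new Mat();
        Imgproc.threshold(gray, bin, threshold, 255, Imgproc.THRESH_BINARY);

        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3));
        Imgproc.erode(bin, bin, kernel);  // shrink bright areas: removes isolated noise
        Imgproc.dilate(bin, bin, kernel); // grow bright areas back: reconnects walls
        return bin;
    }
}
```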
The specific process of step S3 is as follows:
s301, edge detection:
filtering: edge-detection algorithms are mainly based on the first and second derivatives of the image intensity, but derivatives are usually very sensitive to noise, so filters must be used to improve the edge detector's robustness to noise. The conventional choice is Gaussian filtering: a set of normalized Gaussian kernels is generated from a discretized Gaussian function, and each point of the image gray matrix is then weighted and summed with the Gaussian kernel.
Enhancing: the basis of the enhanced edge is to determine the variation value of the neighborhood intensity of each point of the image. The enhancement algorithm can highlight points with significant changes in the intensity values of the image gray point neighborhood.
And (3) detection: in the enhanced image, many points in a neighborhood tend to have large gradient values, yet in certain applications these are not the edge points we are looking for, so some method is needed to discard them. In practical engineering, the common method is thresholding; fig. 2 shows an edge-detection result.
S302, Hough transform: the transformation between two coordinate spaces (Cartesian coordinates and polar coordinates) is used for mapping a curve or a straight line with the same shape in the Cartesian coordinate space to a point in the polar coordinate space to form a peak value, so that the problem of detecting any shape is converted into a statistical peak value problem.
S303, optimization of the extracted information:
s3031, classifying all line segments: all segments are divided into vertical, horizontal and other segments, and the other segments are discarded. For a straight line AB on the plane, if the coordinates of A and B satisfy |xA - xB| < Factor the line is vertical; similarly, if they satisfy |yA - yB| < Factor the line is horizontal, where Factor is the error factor.
And S3032, sorting the classified line segments by the coordinates of their starting points. Merge sort is used: a stable sort with O(n lg n) time complexity and O(n) space complexity.
And S3033, optimizing the sorted line segments. Mainly, useless line segments are removed, and redundant line segments are merged. The sorted straight line segments have three relationships between two adjacent straight line segments:
a. containment or identity: only the longer segment is retained;
b. intersection: the two adjacent segments are merged;
c. separation: both segments are retained and processing continues, as shown in the sketch after this list.
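As a concrete illustration of S3031 to S3033 for the horizontal case, the Java sketch below classifies segments with the error factor, sorts them (Java's List.sort is a stable merge-sort variant, matching the merge sort of S3032), and applies the three relationship rules above. The {x1, y1, x2, y2} segment encoding and the Factor value are assumptions.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public final class SegmentCleanup {

    /** Error factor in pixels; the patent leaves its value unspecified. */
    public static final double FACTOR = 3.0;

    /**
     * S3031-S3033 for horizontal segments: classify with the error factor,
     * sort by starting coordinate, then merge contained, identical or
     * intersecting neighbors while keeping separated ones.
     * Each segment is encoded as {x1, y1, x2, y2}.
     */
    public static List<double[]> cleanHorizontal(List<double[]> segments) {
        List<double[]> horizontal = new ArrayList<>();
        for (double[] s : segments) {
            if (Math.abs(s[1] - s[3]) < FACTOR) { // |yA - yB| < Factor
                if (s[0] > s[2]) { // normalize so the segment runs left to right
                    double tx = s[0]; s[0] = s[2]; s[2] = tx;
                    double ty = s[1]; s[1] = s[3]; s[3] = ty;
                }
                horizontal.add(s);
            }
        }

        // List.sort is a stable merge-sort variant, matching S3032
        horizontal.sort(Comparator.<double[]>comparingDouble(s -> s[1])
                                  .thenComparingDouble(s -> s[0]));

        List<double[]> merged = new ArrayList<>();
        for (double[] s : horizontal) {
            double[] last = merged.isEmpty() ? null : merged.get(merged.size() - 1);
            boolean sameRow = last != null && Math.abs(last[1] - s[1]) < FACTOR;
            if (sameRow && s[0] <= last[2]) {
                // containment, identity or intersection: keep the merged span
                last[2] = Math.max(last[2], s[2]);
            } else {
                merged.add(s); // separated: keep both and continue
            }
        }
        return merged;
    }
}
```

Vertical segments are handled symmetrically by exchanging the roles of the x and y coordinates.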
S304, constructing a scene tree:
s3041, load the image.
S3042, selecting a room area.
S3043, according to the selected region, using the insertion sorting to obtain the nearest straight line range of the region.
S3044, determining a rectangular frame, expanding the x-direction and y-direction of the rectangular frame to the obtained range, detecting whether there is a straight line in the range, if so, returning an object with a recorded result, and if not, continuing to grow in the x-direction or y-direction until there is a straight line within the edge determined by the rectangular frame.
S3045, a second boundary may be obtained by expanding the rectangular frame again (selecting a rectangle slightly larger than the one already obtained). At this point, the distances between the two rectangular frames obtained (inner and outer) are sorted; taking two adjacent distance values as ordinates with an abscissa spacing of 1 gives the slope between adjacent distances. The obtained result is screened and optimized using this slope.
Setting the inner rectangular frame (p1, p2, p3, p4) and the outer rectangular frame (p1', p2', p3', p4');
calculating and sorting the upper, lower, left and right boundary values: because every line is vertical or horizontal, the calculation simply subtracts the abscissas or ordinates of two points;
sorting the obtained distances without changing the values of the original array, recording the sorted positions of the original array in a new array, the four sorted distance values being d1, d2, d3 and d4;
calculating the inclination angle θ = (d4 - d3) · π/180 of two adjacent distances;
screening according to the size of the inclination angle, with the ranges set in this embodiment as follows: if θ ∈ (π/6, +∞), the maximum obtained distance is too large and should be eliminated, with the outer frame used as the inner boundary and the average of d2 and d3 as the outer boundary; if θ ∈ (π/10, π/6), a large column is contained between the inner and outer boundaries, so after judging the column position, the column is stored, the edge continues to expand, the outer frame is used as the inner boundary, and the average of d2 and d3 as the outer boundary; if θ ∈ (0, π/10), the error is within the normal range and no processing is performed.
S3046, determining the final range according to the nearest straight line range, the inner boundary and the outer boundary of the room and the size of the column determined in the step S3045, determining the position of the door or the window according to the size of the non-intersected blank area of the inner boundary and the outer boundary, and finally calculating all walls and the ground according to the obtained result.
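A minimal Java sketch of the slope screening in S3045; the verdict names are illustrative, and the boundary rebuilding that follows each verdict (outer frame as inner boundary, average of d2 and d3 as outer boundary) is left to the caller, as described above.

```java
public final class BoundaryScreen {

    /** Outcome of the slope screening; names are illustrative. */
    public enum Verdict { DROP_OUTLIER, COLUMN_BETWEEN_BOUNDARIES, WITHIN_ERROR }

    /**
     * dSorted holds the four boundary distances in ascending order d1..d4;
     * theta = (d4 - d3) * pi / 180, as in the text above.
     */
    public static Verdict screen(double[] dSorted) {
        double theta = (dSorted[3] - dSorted[2]) * Math.PI / 180.0;
        if (theta > Math.PI / 6) {
            return Verdict.DROP_OUTLIER;              // theta in (pi/6, +inf)
        }
        if (theta > Math.PI / 10) {
            return Verdict.COLUMN_BETWEEN_BOUNDARIES; // theta in (pi/10, pi/6)
        }
        return Verdict.WITHIN_ERROR;                  // theta in (0, pi/10)
    }
}
```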
The specific process of step S5 is as follows:
s501, overall horizontal movement of sight line vector (single finger):
on the cell phone screen, each movement of the gesture produces a pair of variables (dx, dy). The sight-line vector AB is based on the current position, not the fixed x-axis and z-axis, so each move is based on the new "temporary x-axis and z-axis" of the current vector. The specific operation steps are as follows:
S5011, calculating the unit vector p of the projection of the vector AB onto the x0z plane;
S5012, finding the unit vector q lying in the x0z plane and perpendicular to p (so that p · q = 0);
S5013, calculating the translated sight-line vector from (dx, dy) and the unit vectors p and q;
S5014, applying the translated vector to the camera.
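A sketch of S5011 to S5014, assuming camera position A and viewpoint B are stored as {x, y, z} arrays; the exact combination of dx and dy with the two temporary axes, and the speed scale factor, are assumptions not fixed by the patent.

```java
public final class PanGesture {

    /**
     * S5011-S5014: pan camera position A and viewpoint B horizontally,
     * relative to the current sight-line vector AB.
     */
    public static void pan(double[] a, double[] b, double dx, double dy, double speed) {
        // S5011: unit vector p of the projection of AB onto the x0z plane
        double px = b[0] - a[0];
        double pz = b[2] - a[2];
        double len = Math.sqrt(px * px + pz * pz);
        px /= len;
        pz /= len;

        // S5012: unit vector q in the x0z plane with p . q = 0
        double qx = -pz;
        double qz = px;

        // S5013: shift both endpoints along the "temporary axes" p and q
        double mx = speed * (dy * px + dx * qx);
        double mz = speed * (dy * pz + dx * qz);
        a[0] += mx; a[2] += mz;
        b[0] += mx; b[2] += mz;
        // S5014: feed the updated A and B to the camera (e.g. a look-at matrix)
    }
}
```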
S502, vector direction translation (double finger) of the sight line vector:
on the screen of the mobile phone, each movement of the two fingers generates a distance variation ds, obtained as the difference between the current and previous finger-to-finger distances, each computed as √((x1 - x2)² + (y1 - y2)²).
The position after vector translation can be calculated from the distance variation ds; the specific operation steps are summarized as follows:
S5021, calculating the unit vector u = AB / |AB| of the sight-line vector AB;
S5022, calculating the translated vector by moving AB along u through the distance ds;
And S5023, applying the translated vector to the camera.
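A sketch of S5021 to S5023 under the same representation; on Android, ds would typically be obtained by comparing successive two-finger distances from MotionEvent pointer coordinates, and the speed scale factor is an assumption.

```java
public final class PinchGesture {

    /**
     * S5021-S5023: move the camera along the sight line by the change ds in
     * the two-finger distance. A and B are camera position and viewpoint
     * as {x, y, z} arrays.
     */
    public static void dolly(double[] a, double[] b, double ds, double speed) {
        // S5021: unit vector u = AB / |AB|
        double ux = b[0] - a[0];
        double uy = b[1] - a[1];
        double uz = b[2] - a[2];
        double len = Math.sqrt(ux * ux + uy * uy + uz * uz);
        ux /= len;
        uy /= len;
        uz /= len;

        // S5022: translate both endpoints by ds along u
        double k = speed * ds;
        a[0] += k * ux; a[1] += k * uy; a[2] += k * uz;
        b[0] += k * ux; b[1] += k * uy; b[2] += k * uz;
        // S5023: apply the translated vector to the camera
    }
}
```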
S503, the sight line vector rotates around point a (double finger):
the operation idea of this step is to place the vector AB in a sphere, with the camera at the sphere center A and the viewpoint B on the spherical surface. When the distance between the two fingers on the mobile phone screen is kept constant, the two fingers are equivalent to one finger, and each movement generates two identical pairs of variables (dx, dy). After a projection parameter λ is set, the coordinate change of the mobile phone screen is projected into an angular change on the sphere, as shown in fig. 3.
The parametric equation of the sphere is
x = a + r·sinθ·cosφ, y = b + r·sinθ·sinφ, z = c + r·cosθ,
where φ is the longitude, θ ∈ (0°, 180°) is the latitude, and (a, b, c) are the coordinates of the sphere center.
The specific operations of this step are as follows:
S5031, projecting (dx, dy) as angular changes in the sphere's parametric equation, Δφ = λ·dx and Δθ = λ·dy;
S5032, calculating the radius r = |AB| of the sphere;
S5033, recalculating the coordinates of point B from the parametric equation with the updated angles;
S5034, calculating the translated sight-line vector as the new B minus A;
s5035, applying the translated vector to the camera.
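A sketch of S5031 to S5035 following the sphere parametric equation above, with the z-axis as the polar axis; the projection parameter lambda maps screen deltas to angle changes, and clamping θ away from the poles is an added safeguard, not from the patent.

```java
public final class OrbitGesture {

    /**
     * S5031-S5035: rotate viewpoint B around camera A on a sphere of radius
     * r = |AB|, following the parametric equation above.
     */
    public static void orbit(double[] a, double[] b, double dx, double dy, double lambda) {
        double vx = b[0] - a[0];
        double vy = b[1] - a[1];
        double vz = b[2] - a[2];

        // S5032: sphere radius r = |AB|
        double r = Math.sqrt(vx * vx + vy * vy + vz * vz);

        // current latitude theta and longitude phi of B around the center A
        double theta = Math.acos(vz / r);
        double phi = Math.atan2(vy, vx);

        // S5031: project (dx, dy) into angular changes on the sphere
        phi += lambda * dx;
        theta = clamp(theta + lambda * dy, 0.01, Math.PI - 0.01);

        // S5033: recompute B from the sphere's parametric equation
        b[0] = a[0] + r * Math.sin(theta) * Math.cos(phi);
        b[1] = a[1] + r * Math.sin(theta) * Math.sin(phi);
        b[2] = a[2] + r * Math.cos(theta);
        // S5034/S5035: the new sight-line vector B - A is applied to the camera
    }

    private static double clamp(double v, double lo, double hi) {
        return Math.max(lo, Math.min(hi, v));
    }
}
```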
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (7)

1. A two-dimensional house type identification and three-dimensional presentation method based on Android is characterized by comprising the following steps:
s0, extracting and classifying two-dimensional house type graph information, and constructing a three-dimensional house type graph modeling element;
s1, selecting a two-dimensional house-type picture in an image format by taking a picture or loading the picture in the photo album;
s2, correcting, cutting and processing the picture to obtain a binary image of the wall; wherein, the step S2 includes the following steps:
s201, cutting a non-graphic part of the picture;
s202, carrying out image binarization based on a global threshold, determining a threshold according to the spatial distribution of the image gray level, and realizing the conversion from a gray level image to a binarized image according to the threshold, wherein the step S202 includes the steps of:
s2021, obtaining each gray value appearing in the picture and the probability of its appearance, storing the gray values in the first row of a two-dimensional array and the corresponding occurrence probabilities in the second row, so that PPjx(0, j) stores the j-th gray value appearing in the image and PPjx(1, j) stores the probability of the j-th gray value occurring;
s2022, obtaining a discrete function distribution f (i) of gray-scale values:
f(i) = Σ_{j=1}^{i} PPjx(0, j) · PPjx(1, j), i.e. the cumulative weighted gray value,
where i ranges over the distinct gray values appearing in the image;
s2023, calculating the sum PSum (i) of the probabilities of the occurrence of the first i gray-scale values:
PSum(i) = Σ_{j=1}^{i} PPjx(1, j),
where i ranges over the distinct gray values appearing in the image;
s2024, solving the gray average value AGray of the whole image, namely summing the gray values of all pixels in the image, and then dividing the sum by the total number of the pixels;
s2025, obtaining the threshold weight wvalue (i) at different gray levels:
WValue(i) = (AGray · PSum(i) - f(i))² / (PSum(i) · (1 - PSum(i))),
where i ranges over the distinct gray values appearing in the image;
s2026, obtaining the gray value at which WValue(i) attains its maximum as the optimal binarization threshold value;
s203, morphological image processing, namely increasing a bright area in the image through expansion operation and reducing the bright area in the image through corrosion operation;
s3, extracting data for 3D drawing, automatically identifying rooms after a user slides and selects the positions of the rooms through an interactive interface, and performing 3D drawing after all the rooms are selected;
s4, drawing the selected room into a three-dimensional space to present a three-dimensional house type;
and S5, implementing gesture operation on the mobile terminal and roaming the three-dimensional house-type graph through gestures, wherein the roaming operation comprises a) moving the whole sight-line vector horizontally through a single-finger gesture, b) translating the sight-line vector along its own direction through a two-finger gesture, and c) rotating the sight-line vector around a point through a two-finger gesture.
2. The Android-based two-dimensional house type identification and three-dimensional presentation method of claim 1, wherein the step S3 comprises the steps of:
s301, edge detection, namely firstly generating a group of normalized Gaussian kernels by using a discretized Gaussian function, then carrying out weighted summation on each point of an image gray matrix based on the Gaussian kernels, then highlighting the point with the obvious change of the neighborhood intensity value of the image gray point by an enhancement algorithm, and then detecting edge points by a thresholding method;
s302, Hough transformation, namely mapping curves or straight lines with the same shape in a Cartesian coordinate space to a point in a polar coordinate space to form a peak value by using transformation between the Cartesian coordinate space and the polar coordinate space, and converting the problem of detecting any shape into a statistical peak value problem;
s303, optimizing the extracted information;
and S304, constructing a scene tree.
3. The Android-based two-dimensional house type identification and three-dimensional presentation method of claim 2, wherein the step S303 comprises the steps of:
s3031, classifying all line segments into vertical, horizontal and other segments, and keeping only the vertical and horizontal ones: for a straight line AB on the plane, if the coordinates of A and B satisfy |xA - xB| < Factor the line is vertical, and if they satisfy |yA - yB| < Factor the line is horizontal, where Factor is the error factor;
s3032, sorting the classified line segments by the coordinates of their starting points, using merge sort;
s3033, optimizing the sorted line segments by removing useless segments and merging redundant ones.
4. The Android-based two-dimensional house type identification and three-dimensional presentation method of claim 3, wherein the step S3033 of optimizing the sorted line segments specifically comprises:
according to the sorted straight-line segments, the relationship between two adjacent straight-line segments is handled as follows: if the two segments are in a containment or identity relation, the longer segment is retained;
if the two line segments are in an intersecting relationship, merging the two adjacent line segments;
if the two line segments are in a separated relation, the two line segments are reserved and the processing is continued.
5. The Android-based two-dimensional house type identification and three-dimensional presentation method of claim 2, wherein the step S304 comprises the following steps:
s3041, loading an image;
s3042, selecting a room area;
s3043, according to the selected region, obtaining the nearest linear range of the region by using insertion sorting;
s3044, determining a rectangular frame, expanding the x direction and the y direction of the rectangular frame to the obtained range, detecting whether a straight line exists in the range, if so, returning an object recording the result, and if not, continuing to increase in the x direction or the y direction until the straight line falls in the edge determined by the rectangular frame;
s3045, expanding the rectangular frame again to obtain a second boundary, sorting the distances between the two rectangular frames obtained, namely the inner and outer frames, taking two adjacent distance values as ordinates with an abscissa spacing of 1 to obtain the slope between adjacent distances, and using the slope to screen and optimize the obtained result;
s3046, determining the final range according to the nearest straight line range, the inner boundary and the outer boundary of the room and the determined size of the column, determining the position of the door or the window according to the size of the non-intersected blank area of the inner boundary and the outer boundary, and finally calculating all walls and the ground according to the obtained result.
6. The Android-based two-dimensional house type identification and three-dimensional presentation method of claim 5, wherein the screening and optimization of the obtained result using the slope in the step S3045 is specifically:
setting the inner rectangular frame (p1, p2, p3, p4) and the outer rectangular frame (p1', p2', p3', p4');
calculating and sorting the upper, lower, left and right boundary values;
sorting the obtained distances without changing the values of the original array, recording the sorted positions of the original array in a new array, the four sorted distance values being d1, d2, d3 and d4;
calculating the inclination angle θ = (d4 - d3) · π/180 of two adjacent distances;
and screening according to the size of the inclination angle, with the set ranges as follows:
if θ ∈ (π/6, +∞), the maximum obtained distance is too large and should be eliminated, with the outer frame used as the inner boundary and the average of d2 and d3 as the outer boundary;
if θ ∈ (π/10, π/6), a large column is contained between the inner and outer boundaries; after judging the column position, the column is stored, the edge continues to expand, the outer frame is used as the inner boundary, and the average of d2 and d3 as the outer boundary;
if θ ∈ (0, π/10), the error is within the normal range and no processing is performed.
7. The Android-based two-dimensional house type identification and three-dimensional presentation method of claim 1, wherein the step S5 comprises the steps of:
s501, moving the entire sight line vector horizontally by a single finger operation:
each movement of a single-finger gesture on the screen generates a pair of variables (dx, dy); letting the sight-line vector run from its starting point A to its end point B, the vector AB is based on the current position rather than the fixed x-axis and z-axis, as follows:
S5011, calculating the unit vector p of the projection of the vector AB onto the x0z plane;
S5012, finding the unit vector q that lies in the x0z plane and is perpendicular to p (p · q = 0);
S5013, calculating the translated sight-line vector from (dx, dy) and the unit vectors p and q;
S5014, applying the translated vector to a camera;
s502, translating the vector direction of the sight line vector through the double-finger operation:
on the screen, each movement of the two fingers generates a distance variation ds, obtained as the difference between the current and previous finger-to-finger distances, each computed as √((x1 - x2)² + (y1 - y2)²);
calculating the position after vector translation according to the distance variation ds, which comprises the following steps:
S5021, calculating the unit vector u = AB / |AB| of the sight-line vector AB;
S5022, calculating the translated vector by moving AB along u through the distance ds;
S5023, applying the translated vector to the camera;
s503, rotating the sight line vector around the point A through the double-finger operation:
placing the vector AB in a sphere, with the camera at the sphere center A and the viewpoint B on the spherical surface; when the distance between the two fingers on the screen is kept constant, the two fingers are equivalent to one finger, and each movement generates two identical pairs of variables (dx, dy); after a projection parameter λ is set, the coordinate change on the screen is projected into an angular change on the sphere, whose parametric equation is
x = a + r·sinθ·cosφ, y = b + r·sinθ·sinφ, z = c + r·cosθ,
where φ is the longitude, θ ∈ (0°, 180°) is the latitude, and (a, b, c) are the sphere-center coordinates; the specific operations of this step are as follows:
S5031, projecting (dx, dy) as angular changes in the sphere's parametric equation, Δφ = λ·dx and Δθ = λ·dy;
S5032, calculating the radius r = |AB| of the sphere;
S5033, recalculating the coordinates of point B from the parametric equation with the updated angles;
S5034, calculating the translated sight-line vector as the new B minus A;
s5035, applying the translated vector to the camera.
CN201710783949.7A 2017-09-04 2017-09-04 Android-based two-dimensional house type identification and three-dimensional presentation method Active CN107798725B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710783949.7A CN107798725B (en) 2017-09-04 2017-09-04 Android-based two-dimensional house type identification and three-dimensional presentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710783949.7A CN107798725B (en) 2017-09-04 2017-09-04 Android-based two-dimensional house type identification and three-dimensional presentation method

Publications (2)

Publication Number Publication Date
CN107798725A CN107798725A (en) 2018-03-13
CN107798725B true CN107798725B (en) 2020-05-22

Family

ID=61532205

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710783949.7A Active CN107798725B (en) 2017-09-04 2017-09-04 Android-based two-dimensional house type identification and three-dimensional presentation method

Country Status (1)

Country Link
CN (1) CN107798725B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108961395B (en) * 2018-07-03 2019-07-30 上海亦我信息技术有限公司 A method of three dimensional spatial scene is rebuild based on taking pictures
CN109598783A (en) * 2018-11-20 2019-04-09 西南石油大学 A kind of room 3D modeling method and furniture 3D prebrowsing system
CN109542234A (en) * 2018-12-04 2019-03-29 广东三维家信息科技有限公司 A kind of information displaying method and device for Size Dwelling Design
CN110096949A (en) * 2019-03-16 2019-08-06 平安城市建设科技(深圳)有限公司 Floor plan intelligent identification Method, device, equipment and computer readable storage medium
CN112150492A (en) * 2019-06-26 2020-12-29 司空定制家居科技有限公司 Method and device for reading house-type graph and storage medium
CN110458927A (en) * 2019-08-02 2019-11-15 广州彩构网络有限公司 A kind of information processing method that picture is generated to three-dimensional house type model automatically
CN111105473B (en) * 2019-12-18 2020-09-25 北京城市网邻信息技术有限公司 Two-dimensional house-type graph construction method and device and storage medium
CN112015314B (en) * 2020-08-21 2022-05-31 北京五八信息技术有限公司 Information display method and device, electronic equipment and medium
CN113538708B (en) * 2021-06-17 2023-10-31 上海建工四建集团有限公司 Method for displaying and interacting three-dimensional BIM model in two-dimensional view


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090237396A1 (en) * 2008-03-24 2009-09-24 Harris Corporation, Corporation Of The State Of Delaware System and method for correlating and synchronizing a three-dimensional site model and two-dimensional imagery

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101299270A (en) * 2008-05-27 2008-11-05 东南大学 Multiple video cameras synchronous quick calibration method in three-dimensional scanning system
CN101930627A (en) * 2010-09-10 2010-12-29 西安新视角信息科技有限公司 Three-dimensional dwelling size modeling method based on two-dimensional dwelling size diagram
CN105279787A (en) * 2015-04-03 2016-01-27 北京明兰网络科技有限公司 Method for generating three-dimensional (3D) building model based on photographed house type image identification
CN104751517A (en) * 2015-04-28 2015-07-01 努比亚技术有限公司 Graphic processing method and graphic processing device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
License Plate Segmentation Based on Edge Detection and Hough Transform; Xu Xiaobing et al.; Academic Exchange Conference on System Simulation Technology and Its Applications; 2003-12-31; pp. 626-630 *

Also Published As

Publication number Publication date
CN107798725A (en) 2018-03-13

Similar Documents

Publication Publication Date Title
CN107798725B (en) Android-based two-dimensional house type identification and three-dimensional presentation method
US20240153290A1 (en) Systems and methods for extracting information about objects from scene information
CN107622244B (en) Indoor scene fine analysis method based on depth map
CN112348815B (en) Image processing method, image processing apparatus, and non-transitory storage medium
Pintore et al. State‐of‐the‐art in automatic 3D reconstruction of structured indoor environments
CN110310175B (en) System and method for mobile augmented reality
Zhang et al. Online structure analysis for real-time indoor scene reconstruction
Mura et al. Piecewise‐planar reconstruction of multi‐room interiors with arbitrary wall arrangements
CN105993034B (en) Contour completion for enhanced surface reconstruction
CN106952338B (en) Three-dimensional reconstruction method and system based on deep learning and readable storage medium
US20120075433A1 (en) Efficient information presentation for augmented reality
CN111563502A (en) Image text recognition method and device, electronic equipment and computer storage medium
Jia et al. 3d reasoning from blocks to stability
AU2022345532B2 (en) Browser optimized interactive electronic model based determination of attributes of a structure
Pound et al. A patch-based approach to 3D plant shoot phenotyping
WO2023024441A1 (en) Model reconstruction method and related apparatus, and electronic device and storage medium
CN115439607A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
Tatzgern Situated visualization in augmented reality
Wang et al. Understanding of wheelchair ramp scenes for disabled people with visual impairments
Xiao et al. Coupling point cloud completion and surface connectivity relation inference for 3D modeling of indoor building environments
Park et al. Segmentation of Lidar data using multilevel cube code
Parente et al. Integration of convolutional and adversarial networks into building design: A review
Qian et al. LS3D: Single-view gestalt 3D surface reconstruction from Manhattan line segments
Yin et al. [Retracted] Virtual Reconstruction Method of Regional 3D Image Based on Visual Transmission Effect
CN115668271A (en) Method and device for generating plan

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant