WO2016084389A1 - Modeling device, three-dimensional model production device, modeling method, and program

Modeling device, three-dimensional model production device, modeling method, and program

Info

Publication number: WO2016084389A1
Authority: WIPO (PCT)
Prior art keywords: unit, measurement, region, dimensional object, modeling
Application number: PCT/JP2015/005918
Other languages: English (en), Japanese (ja)
Inventors: 治美 山本, 俊嗣 堀井
Original assignee: パナソニックIpマネジメント株式会社 (Panasonic Intellectual Property Management Co., Ltd.)
Application filed by パナソニックIpマネジメント株式会社
Priority to EP15863391.7A (patent EP3226212B1)
Priority to US 15/528,174 (patent US10127709B2)
Priority to JP 2016561259 (patent JP6238183B2)
Priority to CN 201580063371.7A (patent CN107004302B)
Publication of WO2016084389A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G06T1/00: General purpose image data processing
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205: Re-meshing
    • G06T2210/04: Architectural design, interior design
    • G06T2215/16: Using real world measurements to influence rendering

Description

  • The present invention relates to a modeling device, a three-dimensional model generation device, a modeling method, and a program.
  • In the prior art, plane adjacency relationships are classified into creases, step boundaries, and in-plane boundaries based on the position and inclination of the planes, and the corresponding lines are calculated.
  • That is, the configuration described in Document 1 extracts creases, step boundaries, and in-plane boundaries in order to extract indoor structural lines, including objects already arranged in the target space.
  • Document 1 describes that, by extracting structural lines in this way, a structural line that is blocked by a small obstacle and cannot be detected from an image is restored.
  • Document 1 also describes that the structural lines can be stably restored even when a part of a region is missing due to noise or a small obstacle.
  • However, the technique described in Document 1 treats large objects such as desks as constituent elements of the three-dimensional model. It therefore cannot generate a model of the three-dimensional object with the objects placed indoors removed, as would be needed, for example, when re-papering wall cloth or installing heat insulating material.
  • In view of this, the present invention provides a modeling apparatus capable of generating a model of a three-dimensional object even when a relatively large object blocks a part of the three-dimensional object and creates a region that cannot be measured by the measurement apparatus, and likewise provides a three-dimensional model production device, a modeling method, and a program.
  • The modeling device according to the present invention includes: a data acquisition unit that acquires, from a measurement device that performs three-dimensional measurement of a three-dimensional object, measurement data that are three-dimensional coordinate values for a plurality of measurement points belonging to the three-dimensional object;
  • a surface extraction unit that uses the measurement data to generate mathematical expressions representing the surfaces that constitute the three-dimensional object; and a modeling unit that generates a shape model representing the three-dimensional object using the mathematical expressions.
  • The surface extraction unit first obtains a boundary line surrounding one surface of the three-dimensional object using the mathematical expressions respectively representing that surface and the plurality of surfaces surrounding it; it then extracts, from among the measurement points belonging to the one surface, the measurement points in a region defined with a predetermined width inward of the boundary line, and re-obtains the mathematical expression representing the one surface using the extracted measurement points.
  • In the modeling method according to the present invention, a data acquisition unit acquires, from a measurement device that performs three-dimensional measurement of a three-dimensional object, measurement data that are three-dimensional coordinate values for a plurality of measurement points belonging to the three-dimensional object;
  • a surface extraction unit generates, using the measurement data, mathematical expressions representing the surfaces constituting the three-dimensional object;
  • a modeling unit generates a shape model representing the three-dimensional object using the mathematical expressions; and
  • the surface extraction unit, after obtaining a boundary line surrounding one surface of the three-dimensional object using the mathematical expressions respectively representing that surface and the plurality of surfaces surrounding it,
  • extracts, from among the measurement points belonging to the one surface, the measurement points in a region defined with a predetermined width inward of the boundary line, and re-obtains the mathematical expression representing the one surface using the extracted measurement points.
  • the program according to the present invention causes a computer to function as the above-described modeling apparatus.
  • FIG. 8A and FIG. 8B are diagrams explaining how the distance changes according to the shape of the surface in the embodiment; a further figure shows a flowchart of another operation example in the embodiment, and another figure illustrates yet another operation example.
  • The present embodiment relates to a modeling apparatus that generates a shape model of a three-dimensional object using the result of three-dimensional measurement of that object. The present embodiment further relates to a three-dimensional model generation device that generates a shape model of a three-dimensional object in real space, to a modeling method that generates a model of a three-dimensional object using the result of three-dimensional measurement, and to a program that causes a computer to function as the modeling apparatus.
  • the three-dimensional model generation device described below includes a measurement device 20, a modeling device 10, and a monitor device (output device 41).
  • The measurement device 20 performs three-dimensional measurement of the three-dimensional object 30, the modeling device 10 generates a shape model of the three-dimensional object 30, and the monitor device displays an image of the shape model using the image information output from the modeling device 10.
  • The output device 41 includes the monitor device, and preferably includes a printer in addition to the monitor device. Further, as will be described later, it is desirable to include an input device 42 that gives instructions to the modeling device 10.
  • This embodiment assumes a room provided inside the building as the three-dimensional object 30. That is, this embodiment pays attention to the inner surface of the room.
  • However, the three-dimensional object 30 may be inside or outside a building, and the technology described below can also be applied to three-dimensional objects 30 other than buildings.
  • The room that is the three-dimensional object 30 of the present embodiment includes one floor surface 31, one ceiling surface 32, and a plurality of wall surfaces 33. Below, when it is not necessary to distinguish the floor surface 31, the ceiling surface 32, and the wall surface 33, each is referred to simply as a surface 3. Further, the surfaces 3 are described on the assumption that they are flat; however, the technique of the present embodiment described below can also be employed when a surface 3 is curved.
  • the measuring device 20 is a so-called 3D laser scanner.
  • the measuring device 20 is configured to project a beam-like laser in space and receive a reflected wave from an object.
  • A 3D laser scanner generally employs either a time-of-flight method or a phase shift method.
  • Some 3D laser scanners instead adopt a method based on the principle of triangulation.
  • In the present embodiment, a measuring device 20 adopting the phase shift method is assumed.
  • The phase shift method is a technique in which a continuous laser wave whose intensity is modulated is projected from the measuring device 20 into the space, and the distance to the object reflecting the laser is measured from the phase difference between the projected irradiation wave and the received reflected wave.
  • Since the phase shift method measures the flight time through this phase difference, it may be treated as a kind of time-of-flight method.
  • Since the measuring device 20 adopting the phase shift method measures the distance continuously, it can measure distances at a higher speed than a configuration using pulse waves, which measures distances intermittently. Further, this type of measuring apparatus 20 can measure the distance with an error of 1 cm or less (for example, an error of 1/10,000 or less with respect to the distance). In the technique of the present embodiment described below, the same effect can be expected even when the time-of-flight method or the triangulation-based method is adopted.
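As a minimal numerical sketch of the phase shift principle described above (the parameter values are assumptions for illustration):

```python
import math

def phase_shift_distance(phase_diff_rad, mod_freq_hz):
    """Distance from the phase difference of an intensity-modulated laser.

    The round trip adds 2*pi*f*(2*d/c) to the phase of the modulation
    envelope, so d = c * phase_diff / (4*pi*f). The result is unambiguous
    only for distances below c / (2*f).
    """
    c = 299_792_458.0  # speed of light [m/s]
    return c * phase_diff_rad / (4.0 * math.pi * mod_freq_hz)

# Example: a 10 MHz modulation and a 0.5 rad phase shift give about 1.19 m.
print(phase_shift_distance(0.5, 10e6))
```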
  • The measuring device 20 used in the present embodiment includes a measurement unit that rotates along the surface on which the device is installed, and the measurement unit scans the laser in a plane that intersects the installed surface at each position during the rotation.
  • Here, the surface in which the measurement unit rotates is taken to be a horizontal plane,
  • and a plane orthogonal to the horizontal plane is a vertical plane. That is, the direction of laser irradiation from the measuring device 20 is represented by a combination of the angle through which the measurement unit rotates in the horizontal plane and the angle at which the laser is scanned in the vertical plane.
  • The angle through which the measurement unit rotates is an angle with respect to a reference direction determined in the measuring device 20, and the angle at which the laser is scanned in the vertical plane is determined, for example, as an angle with respect to the vertical direction (the direction perpendicular to the horizontal plane).
  • Denoting the rotation angle in the horizontal plane by θ, the scan angle in the vertical plane by φ, and the measured distance by ρ, the position of the object reflecting the laser is (θ, φ, ρ).
  • The angles θ and φ are determined by the measuring device 20, and the distance ρ is measured according to the principle described above.
  • In other words, the coordinate value of the part irradiated with the laser is represented by a coordinate value of the polar coordinate system (spherical coordinate system) defined in the measuring device 20.
  • Since the laser is a continuous wave, if the object irradiated with the laser is continuous, the laser strikes the object without interruption. However, the measuring device 20 obtains coordinate values at regular time intervals. This time interval corresponds to an angular step Δφ when scanning the laser in the vertical plane; in other words, the measurement device 20 performs measurement with a resolution corresponding to the angle Δφ in the vertical plane. In the horizontal plane, measurement is likewise performed with a resolution corresponding to an angle Δθ. The angles Δθ and Δφ are determined appropriately. In addition, the measuring device 20 performs three-dimensional measurement over a spatial region covering almost the entire circumference, except for the area around the measuring device 20 on the installed surface.
  • The measuring device 20 measures the coordinate values of the parts of the three-dimensional object 30 irradiated with the laser at the resolution of the angles Δθ and Δφ. In other words, the measuring device 20 obtains the coordinate values of the three-dimensional object 30 discretely.
  • A position at which a coordinate value is obtained on the three-dimensional object 30 is referred to as a "measurement point".
  • The data relating to the three-dimensional coordinate values output from the measuring device 20 are referred to as "measurement data". A large number of measurement points are obtained at intervals of the angles Δθ and Δφ.
  • The measuring device 20 of this embodiment converts the measured coordinate values into coordinate values of an orthogonal coordinate system.
  • The orthogonal coordinate system can be defined in the measuring device 20 in the same way as the polar coordinate system; alternatively, if a coordinate system is set outside the measuring device 20 and the position of the measuring device 20 can be determined in that coordinate system, the coordinate values of the measurement points may be expressed in the external coordinate system.
  • The process of converting the coordinate values of the polar coordinate system defined in the measurement device 20 into coordinate values of the orthogonal coordinate system may be performed by either the measurement device 20 or the modeling device 10. In the present embodiment, it is assumed that the measurement device 20 performs the conversion from polar coordinate values to orthogonal coordinate values.
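A minimal sketch of this polar-to-orthogonal conversion, assuming θ is measured in the horizontal plane from the reference direction and φ from the vertical (z) axis, as described above:

```python
import math

def polar_to_cartesian(theta, phi, rho):
    """Convert a measurement (theta, phi, rho) into orthogonal coordinates.

    theta: rotation angle of the measurement unit in the horizontal plane [rad]
    phi:   scan angle in the vertical plane, measured from the z axis [rad]
    rho:   measured distance [m]
    """
    x = rho * math.sin(phi) * math.cos(theta)
    y = rho * math.sin(phi) * math.sin(theta)
    z = rho * math.cos(phi)
    return x, y, z

print(polar_to_cartesian(math.radians(30), math.radians(60), 5.0))
```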
  • When the measuring device 20 is installed on the floor surface 31, it irradiates the entire range of the room except a part of the floor surface while the measurement unit rotates in a plane parallel to the floor surface. The laser is thus directed in various directions around the measurement unit, and as a result the beam-shaped laser is scanned three-dimensionally. However, the directions in which the laser is irradiated exclude the direction toward the member that supports the measurement unit. Therefore, the measurement unit does not irradiate the laser around the spot on the floor surface where the measuring device 20 is installed.
  • Since the measuring device 20 measures, via the phase difference, the time taken for the irradiated laser to be reflected by the three-dimensional object 30 and return to the measuring device 20, it uses a laser whose intensity is modulated so as to change periodically with the passage of time. The laser is projected into the space where the three-dimensional object 30 exists. When the measuring device 20 receives the laser reflected by the three-dimensional object 30, it obtains the phase difference of the modulation waveform between the received reflected wave and the irradiation wave projected into the space.
  • To obtain the phase difference between the reflected wave and the irradiation wave, either a technique using a reference wave corresponding to the irradiation wave or a technique using an electric signal carrying information corresponding to the phase of the irradiation wave is used.
  • The measuring device 20 then converts the phase difference into a distance.
  • In the technique using a reference wave, the laser to be projected into the space is branched into two: one branch is used as the irradiation wave and the other as the reference wave.
  • The irradiation wave is projected into the space in order to measure the distance, while the reference wave propagates over a known distance inside the measuring device 20. Since the reference wave propagates over a known distance, the phase difference between the reflected wave and the irradiation wave is obtained by obtaining the phase difference between the reference wave and the reflected wave.
  • In the technique using an electric signal, the modulation signal used for generating the irradiation wave serves as the electric signal carrying information corresponding to the phase of the irradiation wave. Since the relationship between the phase of the modulation signal and the phase of the irradiation wave is in general constant, the phase difference between the irradiation wave and the reflected wave can be obtained by using the modulation signal instead of a reference wave. In the present embodiment, the technique using a reference wave is employed.
  • The measurement points measured by a 3D laser scanner are denser the shorter the distance and sparser the longer the distance. Therefore, when a planar lattice is set and each lattice point is associated with a measurement point, one lattice step corresponds to a short real-space interval for nearby measurement points and to a long real-space interval for distant ones. That is, the real-space distance between adjacent measurement points increases with distance.
  • In the developed image described below, the measurement points measured by the measurement device 20 are associated one-to-one with the lattice points of a planar lattice having a constant lattice constant.
  • As the planar lattice, a square lattice, a rectangular lattice, an orthorhombic (centered rectangular) lattice, a hexagonal lattice, an oblique (parallelogram) lattice, and the like can be adopted.
  • The coordinate values obtained by measurement with the 3D laser scanner are coordinate values in the polar coordinate system; when they are developed in association with the lattice points of the planar lattice, a distorted shape results in which a straight horizontal line in real space becomes a curve.
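The one-to-one association between measurement directions and lattice points can be pictured as indexing a two-dimensional array by the two scan angles; a sketch, with Δθ and Δφ as assumed angular resolutions:

```python
import math

def lattice_index(theta, phi, d_theta, d_phi):
    """Map a measurement direction to the (column, row) of the planar lattice.

    Because the lattice constant is fixed while the real-space spacing of
    the measurement points grows with distance, straight horizontal lines
    in the room appear as curves in the developed image.
    """
    col = round(theta / d_theta)  # horizontal rotation steps
    row = round(phi / d_phi)      # vertical scan steps
    return col, row

# 0.5 degree resolution in both directions (assumed values)
print(lattice_index(math.radians(45.0), math.radians(80.0),
                    math.radians(0.5), math.radians(0.5)))
```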
  • The measuring device 20 has not only the three-dimensional measurement function but also a function of outputting the reflection intensity of the laser and a function of imaging the three-dimensional object 30.
  • That is, the measuring device 20 has a function of outputting image data of a grayscale image (reflection intensity image) whose pixel values are the reflection intensities of the laser.
  • When this grayscale image is developed two-dimensionally, it takes the distorted shape described above.
  • For imaging, the measuring device 20 includes a solid-state image sensor such as a CCD image sensor or a CMOS image sensor, and a wide-angle optical system disposed in front of the solid-state image sensor.
  • The measurement unit of the measuring device 20 makes two rotations in a plane parallel to the installation surface: in the first rotation it obtains the three-dimensional coordinate values of the three-dimensional object 30 using the laser, and in the second rotation it images the three-dimensional object 30.
  • The solid-state imaging device outputs image data of a color image.
  • The color image is associated with the coordinate values of the grayscale image according to the orientation at the time the three-dimensional object 30 was imaged by the solid-state imaging device, and RGB color information obtained from the color image is assigned to each measurement point.
  • Kinect (registered trademark) is also known as a measuring device 20 capable of outputting measurement data and image data.
  • An example of an image output from the measuring device 20 is shown in FIG. 2.
  • As illustrated, a distorted image is obtained.
  • The reason for the distortion is that the measurement points measured by the measurement device 20 are developed on a plane. That is, the measurement points are associated with the lattice points of a two-dimensional planar lattice, so that the interval between adjacent measurement points is widened for short-distance measurement points and narrowed for long-distance measurement points. In other words, in the two-dimensional image shown in FIG. 2, the spacing of the measurement points differs depending on the location in the image.
  • Such an image developed on a plane is hereinafter referred to as a developed image.
  • The developed image has, as pixel values, the coordinate values corresponding to the measurement data and the gray values (color values) that are the image data of the three-dimensional object 30.
  • The measuring device 20 is not limited to a configuration that scans a beam-shaped laser; a configuration that projects a pattern such as a line, stripe, or lattice pattern may also be employed.
  • Alternatively, the measurement device 20 may be a distance image sensor that receives the reflected intensity-modulated light with an area image sensor and generates, from the output of the area image sensor, a distance image whose pixel values are distance values.
  • The measuring device 20 may also measure the flight time from light projection to light reception without using intensity-modulated light, or a stereo image method may be employed.
  • When the measurement device 20 is configured to obtain measurement data by the stereo image method, a camera is used to obtain the measurement data, and image data can therefore be obtained simultaneously with this camera.
  • When a distance image sensor is used, a grayscale image corresponding to the received light intensity of the reflected light can be generated from the output of the area image sensor, and these data can be used as the image data.
  • In this case, the received light intensity of the reflected light is integrated over one period or a plurality of periods of the intensity-modulated light, so that temporal changes in the intensity of the reflected light are removed.
  • Alternatively, the measuring device 20 may be configured to output only measurement data.
  • the coordinate system of the measuring device 20 is set without depending on the arrangement of the three-dimensional object 30.
  • the vertical direction is set to the z axis
  • the reference point of the z axis is set to 0 m above sea level.
  • the surface on which the measuring device 20 is installed is determined as a surface parallel to the xy plane.
  • the origin of the coordinate system is set at a fixed position in real space.
  • the origin is determined based on the position where the measuring device 20 is installed. That is, the base point when the measuring device 20 measures the distance is set as the origin.
  • the origin can be determined in another place in the real space, but when the position where the measuring device 20 is installed is used as a reference, calculation for determining the origin is not necessary.
  • the processing for associating the measurement data with the image data may be performed by the modeling device 10 instead of the measurement device 20. That is, the process of associating measurement data and image data and the process of coordinate conversion from the polar coordinate system to the orthogonal coordinate system may be performed by either the measurement apparatus 20 or the modeling apparatus 10.
  • The measurement device 20 can take various configurations. In the three-dimensional model generation device described below, however, a configuration is assumed in which the measurement device 20 outputs three-dimensional coordinate values measured using the laser as measurement data and outputs a captured color image of the three-dimensional object 30 as image data. In the following, the processing after the orthogonal coordinate values of the three-dimensional object 30 have been obtained will be described.
  • the three-dimensional object 30 is a room including a floor surface 31, a ceiling surface 32, and a wall surface 33 as shown in FIG.
  • the floor surface 31, the ceiling surface 32, and the wall surface 33 will be described on the assumption that each is a flat surface or a combination of flat surfaces.
  • the combination of planes corresponds to a shape in which a step is formed.
  • The technique of this embodiment is also applicable when any of the floor surface 31, the ceiling surface 32, and the wall surfaces 33 is a curved surface (for example, with a U-shaped cross section, hemispherical, etc.).
  • In that case, a mathematical expression described later is either an expression that represents the curved surface or an expression that approximates the curved surface by a set of planes.
  • the measuring device 20 is installed on the floor 31 which is the inner space of the three-dimensional object 30.
  • the floor surface 31, the ceiling surface 32, and the wall surface 33 are not distinguished, they are simply referred to as a surface 3.
  • Although the ceiling surface 32 may not be parallel to the floor surface 31, and two wall surfaces 33 facing each other may not be parallel, in the following the ceiling surface 32 is assumed to be parallel to the floor surface 31.
  • The two wall surfaces 33 facing each other are basically parallel, but a part of a wall surface 33 may have a step, a dent, or the like. In other words, the distance between two facing wall surfaces 33 is allowed to change in a plurality of stages.
  • The surfaces 3 may be provided with openings such as windows and doorways, and may be equipped with wiring devices such as switches and outlets (receptacles), lighting fixtures, and the like. Furthermore, articles such as furniture are allowed to be arranged in the room.
  • the modeling device 10 generates a shape model of the three-dimensional object 30 using the measurement data and image data acquired from the measurement device 20.
  • the shape model of the three-dimensional object 30 is appropriately selected from, for example, a wire frame model and a surface model.
  • the wire frame model is a data structure model in which points on the surface of the three-dimensional object 30 are connected by line segments in order to represent the surface shape of the three-dimensional object 30.
  • the modeling apparatus 10 may have a function of extracting various information related to the three-dimensional object 30 using a shape model.
  • the modeling device 10 is composed of a computer that operates according to a program.
  • The computer preferably includes a keyboard and a pointing device as the input device 42 and a display device as the output device 41. The computer may also be a tablet terminal or a smartphone in which a touch panel serving as the input device 42 is integrated with the display device serving as the output device 41.
  • The modeling apparatus 10 can also be realized by a specially designed computer rather than a general-purpose computer.
  • The computer constituting the modeling device 10 may, besides a standalone configuration, be configured such that a computer server or a cloud computing system cooperates with a terminal device.
  • In the latter configuration, the user can use the functions of the modeling device 10 described below through a terminal device capable of communicating with the computer server or the cloud computing system.
  • the program is provided by a computer-readable recording medium or through a telecommunication line such as the Internet. This program causes a computer to function as a modeling apparatus 10 described below.
  • the modeling apparatus 10 includes a data acquisition unit 11 that acquires measurement data and image data from the measurement apparatus 20, as shown in FIG.
  • The data acquisition unit 11 has at least one of a configuration for receiving the measurement data and image data from the measurement device 20 as electrical signals over a wired or wireless transmission path and a configuration for receiving them from the measurement device 20 via a recording medium such as a memory card.
  • the data acquisition unit 11 may be configured to acquire only measurement data.
  • the modeling device 10 stores the measurement data and image data related to the three-dimensional object 30 acquired by the data acquisition unit 11 in the storage unit 14.
  • the modeling apparatus 10 includes a modeling unit 13 that generates a shape model of the three-dimensional object 30.
  • The modeling apparatus 10 estimates the overall shape of each surface 3 using information obtained by measuring a part of the surface 3 together with knowledge about the surfaces 3 (that is, rules or laws), and estimates the overall shape of the three-dimensional object 30 from the overall shapes of the surfaces.
  • The knowledge about the surfaces 3 includes knowledge about the shape of the surfaces 3 and knowledge about the arrangement of the surfaces 3.
  • As knowledge about the shape of a surface 3, for example, the knowledge that "the room is surrounded by a set of planes" is used. This knowledge covers the case where the distance between surfaces 3 facing each other is not constant.
  • the surface 3 may be a curved surface instead of a flat surface, but the present embodiment will be described assuming only the case where the surface 3 is a flat surface.
  • the knowledge that “a boundary line separating two adjacent surfaces 3 is at least a part of an intersection line of the two adjacent surfaces 3” is used.
  • Since the surfaces 3 are planes, the knowledge that "the vertex that is one end of a boundary line is shared by three surfaces 3" is also used.
  • such knowledge is incorporated in a procedure (algorithm) for generating a shape model of the three-dimensional object 30.
  • the modeling apparatus 10 includes a surface extraction unit 12 that obtains mathematical expressions corresponding to the individual surfaces 3 that constitute the three-dimensional object 30 using the measurement data and image data stored in the storage unit 14.
  • the mathematical formulas of the individual surfaces 3 obtained by the surface extraction unit 12 are transferred to the modeling unit 13 to generate a shape model of the three-dimensional object 30.
  • The measurement data output from the measurement device 20 include the coordinate values (here, coordinate values represented in the orthogonal coordinate system) of the plurality of surfaces 3 constituting the three-dimensional object 30,
  • but the correspondence between the coordinate values and the surfaces 3 is unknown. Therefore, the surface extraction unit 12 is required to classify the coordinate values in association with the surfaces 3 before generating the mathematical expression for each surface 3.
  • The process of classifying the coordinate values in association with the surfaces 3 could be performed for each coordinate value individually.
  • However, in that case the processing load increases, and errors may occur depending on the state of the surface 3.
  • Therefore, a plurality of measurement data that are likely to belong to the same surface 3 are handled collectively as a set, the set is associated with a surface 3 as a unit, and the mathematical expression representing the surface 3 is obtained from the set associated with that surface.
  • the surface extraction unit 12 includes a first processing unit 121, a second processing unit 122, and a third processing unit 123 in order to perform such processing.
  • The first processing unit 121 divides a developed image P1, such as that shown in FIG. 2, in which the three-dimensional object 30 is developed on a plane.
  • The developed image has, as pixel values, the coordinate values corresponding to the measurement data and the gray values (preferably color values) that are the image data of the three-dimensional object 30.
  • The first processing unit 121 divides the developed image P1 into a plurality of unit areas Ui (i = 1, 2, ...).
  • The unit area Ui is set to, for example, 10 pixels × 10 pixels.
  • It suffices that the number of pixels Qn (n = 1, 2, ...) included in a unit region Ui is four or more and that each surface 3 of the three-dimensional object 30 appearing in the developed image P1 can be divided into a plurality of unit regions Ui.
  • That is, the unit areas Ui are set so as to divide the developed image P1 into hundreds to thousands of regions (see the sketch below). Note that the aspect ratio of a unit region Ui is not necessarily 1:1.
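Dividing the developed image into unit regions can be sketched as follows (NumPy; the per-pixel layout, one coordinate 3-vector per pixel, is an assumption for illustration):

```python
import numpy as np

def split_into_unit_regions(developed, size=10):
    """Yield (i, block) pairs, each block being a unit region Ui.

    developed: H x W x 3 array holding one coordinate value per pixel.
    Edge blocks smaller than `size` are skipped here for simplicity,
    a detail the text leaves open.
    """
    h, w, _ = developed.shape
    i = 0
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            yield i, developed[r:r + size, c:c + size]
            i += 1

developed = np.random.rand(40, 60, 3)  # stand-in for a developed image P1
print(sum(1 for _ in split_into_unit_regions(developed)))  # 4 * 6 = 24 regions
```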
  • Next, the first processing unit 121 obtains the orientation of each unit area Ui in real space.
  • The unit region Ui includes, for example, 100 pixels Qn.
  • The direction computed for a unit region Ui could therefore vary depending on which pixels Qn are selected from it. For this reason, as shown in FIG. 3, the first processing unit 121 obtains a normal vector Vi representing a representative direction for each unit region Ui.
  • That is, the first processing unit 121 obtains a normal vector Vi that represents the unit region Ui.
  • The normal vector Vi of the unit region Ui is obtained by statistical processing (in practice, linear regression analysis).
  • Alternatively, the normal vector Vi representing the unit region Ui can be obtained from the frequency distribution of the normal vectors computed from combinations of three pixels Qn included in the unit region Ui.
  • A normal vector obtained from three pixels Qn extracted from the unit area Ui corresponds to the outer product of two vectors directed from one of the three pixels Qn toward the remaining two; its absolute value (magnitude) is therefore proportional to the area enclosed by the three pixels Qn.
  • Since the measurement points are set at constant angular intervals around the origin, the distance between measurement points increases with the distance from the origin, and the absolute value of such a normal vector therefore also increases with the distance from the origin.
  • For this reason, a unit normal vector, obtained by normalizing the magnitude of the normal vector Vi to 1, is used. In the following, the normal vector Vi means a unit normal vector in principle. A sketch of one way to compute it is given below.
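As one possible sketch of the statistical processing, the representative unit normal of a unit region can be taken from a least-squares plane fit over the region's pixels (the same regression used later for whole surfaces):

```python
import numpy as np

def unit_region_normal(points):
    """Representative unit normal vector Vi of a unit region Ui.

    points: (N, 3) array of the coordinate values of the region's pixels.
    Fits the plane a*x + b*y + c*z = 1 by least squares; (a, b, c) is
    normal to that plane and is normalized to a unit normal vector.
    """
    X = np.asarray(points, dtype=float)  # observation matrix
    ones = np.ones(len(X))
    abc, *_ = np.linalg.lstsq(X, ones, rcond=None)
    return abc / np.linalg.norm(abc)

# Points near the plane z = 2: the unit normal is close to (0, 0, 1).
pts = np.random.rand(100, 3)
pts[:, 2] = 2.0 + 0.001 * np.random.randn(100)
print(unit_region_normal(pts))
```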
  • The normal vector Vi representing the unit area Ui obtained by the first processing unit 121 is delivered to the second processing unit 122.
  • Each time the first processing unit 121, while scanning the unit regions Ui included in the developed image P1, calculates the normal vector Vi for one unit region Ui, it delivers that normal vector Vi to the second processing unit 122.
  • The second processing unit 122 uses the normal vector Vi of each unit region Ui delivered from the first processing unit 121 to integrate unit regions Ui having substantially the same orientation into surface regions Sk (k = 1, 2, ...), as shown in FIG. 4.
  • That is, a surface region Sk is formed by grouping unit regions Ui whose normal vectors Vi point in substantially the same direction.
  • For each surface region Sk, a normal vector Tk representative of the surface region Sk is determined in the same manner as for the unit areas Ui.
  • The normal vector Tk representing the surface region Sk is selected from, for example, the average value, the median value, or the mode of the normal vectors Vi of the unit regions Ui constituting the surface region Sk.
  • A unit normal vector is used as the normal vector Tk as well.
  • The second processing unit 122 determines whether or not to include a unit region Ui delivered from the first processing unit 121 in an existing surface region Sk. This determination uses the magnitude of the inner product of the normal vectors of the unit region Ui and the surface region Sk, and the distance between the surface region Sk and the representative point of the unit region Ui. As this distance, it is desirable to use the minimum of the distances between the representative points of all unit areas Ui already included in the surface area Sk and the representative point of the unit area Ui delivered from the first processing unit 121. Since evaluating this minimum distance requires only the four arithmetic operations, the processing load does not increase significantly.
  • The second processing unit 122 selects a surface region Sk using an evaluation function based on the inner product and the distance.
  • The evaluation function is, for example, (inner product / distance), and the second processing unit 122 selects the surface region Sk that maximizes this evaluation function.
  • The evaluation function and the determination using it are not limited to this example and are determined appropriately.
  • The second processing unit 122 classifies all the unit areas Ui included in the developed image P1 into surface areas Sk by the above-described processing.
  • Each surface region Sk is a set of unit regions Ui that can be regarded as belonging to the same surface 3, and includes a plurality of unit regions Ui.
  • However, a surface area Sk does not necessarily correspond to a surface 3 constituting the three-dimensional object 30; it may correspond to furniture, an opening such as a window or doorway, a wiring device, or a lighting fixture. Furthermore, a surface region Sk may contain noise caused by measurement errors or the like. Therefore, the second processing unit 122 compares the number of unit areas Ui included in each surface area Sk with a predetermined threshold value, and excludes from processing any surface area Sk at or below the threshold by removing it from the list of surface areas Sk (that is, its stored content is deleted).
  • Although the threshold could be as small as one, the number of unit areas Ui included in a surface 3 of the three-dimensional object 30 is generally relatively large, so the threshold may be set relatively large in order to distinguish the floor surface 31, the ceiling surface 32, the wall surfaces 33, and the like from surface regions generated by other objects or noise.
  • Alternatively, the second processing unit 122 can be configured to receive an operation input from the input device 42 through the input unit 16 and extract only the surface regions Sk corresponding to the surfaces 3.
  • In this case, when the user designates a position through the input device 42, the second processing unit 122 is configured to treat the entire surface region Sk corresponding to the designated position as a processing target.
  • After the classification, the second processing unit 122 newly calculates, for each surface region Sk, the normal vector Tk representing that surface region.
  • In summary, the second processing unit 122 scans the developed image P1 and sequentially extracts the unit areas Ui (S101).
  • The second processing unit 122 evaluates the inner product of the normal vectors and the distance for each extracted unit region Ui against the surface regions Sk stored in the list (S102).
  • When the evaluation result of the evaluation function indicates that the unit region Ui belongs to an existing surface region Sk, the second processing unit 122 integrates the unit area Ui into that surface area Sk (S103).
  • Otherwise, the second processing unit 122 adds the unit area Ui to the list as a new surface area Sk (S104). This processing is performed for all the unit areas Ui of the developed image P1 (S105). A sketch of this flow is given below.
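A greedy sketch of steps S101–S105 (the threshold, the choice of representative points, and the incremental update of Tk are assumptions for illustration):

```python
import numpy as np

def group_unit_regions(units, min_score=5.0):
    """Group unit regions Ui into surface regions Sk (steps S101-S105).

    units: iterable of (normal, rep_point) pairs, one per unit region Ui,
           where normal is the unit normal vector Vi and rep_point a
           representative coordinate of the region.
    """
    regions = []  # the "list" of surface regions Sk
    for normal, rep in units:                   # S101: scan unit regions
        best, best_score = None, min_score
        for region in regions:                  # S102: evaluate dot / distance
            dot = float(np.dot(normal, region["normal"]))
            dist = min(np.linalg.norm(rep - p) for p in region["points"])
            score = dot / max(dist, 1e-9)       # evaluation function
            if score > best_score:
                best, best_score = region, score
        if best is not None:                    # S103: integrate into Sk
            best["points"].append(rep)
            best["normals"].append(normal)
            mean = np.mean(best["normals"], axis=0)
            best["normal"] = mean / np.linalg.norm(mean)  # refresh Tk
        else:                                   # S104: new surface region Sk
            regions.append({"normal": normal,
                            "normals": [normal],
                            "points": [rep]})
    return regions                              # S105: all Ui processed
```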
  • The modeling apparatus 10 preferably includes a color processing unit 17 that assigns color information to each surface region Sk classified by the second processing unit 122.
  • In addition to the color processing unit 17, it is desirable to include an output unit 15 that outputs image information in which the pixel values of the developed image P1 are replaced with the color information assigned by the color processing unit 17.
  • The output unit 15 outputs the image information to the output device 41 and causes the output device 41 to display a color image based on the image information.
  • FIG. 6 schematically shows an image in which a color is given to each surface area Sk by the color processing unit 17.
  • In FIG. 6, only six surface areas S1, S2, S3, S4, S5, and S6 are labeled with reference numerals, and the different colors are represented by the patterns applied to the areas; surface areas Sk are extracted from the other portions of the image as well.
  • The third processing unit 123 treats all the pixels included in a designated surface region Sk as pixels belonging to a specific surface 3 of the three-dimensional object 30, and determines the mathematical expression representing that surface 3 using the coordinate values of the pixels of the designated surface region Sk. To determine this mathematical expression, statistical processing (in practice, linear regression analysis) is performed. The procedure for obtaining the mathematical expression representing the surface 3 is described below.
  • Since the surface 3 is a plane, it is represented by the equation ax + by + cz = 1, in which the x-intercept is 1/a,
  • the y-intercept is 1/b,
  • and the z-intercept is 1/c.
  • Obtaining the mathematical expression representing the surface 3 using the coordinate values of the measurement points therefore means obtaining estimated values of a, b, and c.
  • Obtaining the estimated values of a, b, and c is a linear regression problem in which the coordinate values (xi, yi, zi) of the measurement points are the observed values and ei is the error term of each observation.
  • It is assumed that the expected value of each error term ei is 0 and that the error terms of the observations are uncorrelated and of equal variance.
  • Under these assumptions, the estimated values of a, b, and c are given by the least squares estimator [A] = ([X]^T [X])^(-1) [X]^T [1].
  • Here, [X] is the matrix whose rows are the observed values (xi, yi, zi),
  • [A] is the column vector having (a, b, c) as its elements,
  • [E] is the column vector having the error terms ei as its elements, and
  • [1] is the column vector in which all elements are 1; in this notation the observation model reads [X][A] = [1] + [E].
  • [X]^T represents the transposed matrix of [X], and (...)^(-1) the matrix inverse.
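A direct NumPy transcription of this estimator, with synthetic data generated from an assumed plane for illustration:

```python
import numpy as np

# Synthetic measurement points near the plane 0.25*x + 0.5*y + 0.125*z = 1
# (a = 0.25, b = 0.5, c = 0.125, chosen arbitrarily for the example).
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 2.0, size=(200, 2))
z = (1.0 - 0.25 * xy[:, 0] - 0.5 * xy[:, 1]) / 0.125
X = np.column_stack([xy, z + 0.01 * rng.standard_normal(200)])  # matrix [X]
ones = np.ones(len(X))                                          # vector [1]

# Least squares estimator [A] = ([X]^T [X])^(-1) [X]^T [1]
A = np.linalg.solve(X.T @ X, X.T @ ones)
print(A)  # approximately [0.25, 0.5, 0.125]
```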
  • The same calculation can be used when the first processing unit 121 obtains the normal vector Vi of a unit region Ui.
  • That is, when the coordinate values of the pixels included in the unit region Ui are applied to the above-described calculation, values corresponding to a, b, and c are obtained for the unit region Ui as well. Since these values represent the orientation of a plane, obtaining the values corresponding to a, b, and c from the coordinate values associated with the pixels of the unit region Ui is equivalent to finding the direction of the normal representing the unit region Ui.
  • When the mathematical expressions representing the surfaces 3 have been obtained for all the surfaces 3 constituting the three-dimensional object 30, these expressions are delivered to the modeling unit 13.
  • The modeling unit 13 extracts the contour lines of the surfaces 3 constituting the three-dimensional object 30 by extracting the intersection line shared by each pair of adjacent surfaces 3 and the vertex shared by each triple of surfaces 3. That is, the modeling unit 13 generates the information of a shape model representing the three-dimensional object 30. A sketch of these computations is given after this paragraph.
  • This shape model corresponds to a wire frame model represented by the set of coordinate values of the vertices of the three-dimensional object 30 and the mathematical expressions of the boundary lines of the surfaces 3, each of which is a line segment connecting two vertices.
  • Alternatively, the modeling unit 13 may form a shape model corresponding to a surface model, represented by the mathematical expressions of the surfaces 3 and the vertices surrounding each surface 3.
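For instance, with each surface 3 written as a·x + b·y + c·z = 1 (one triple (a, b, c) per surface), the vertex shared by three surfaces is the solution of a 3 × 3 linear system, and the intersection line of two surfaces runs along the cross product of their normals (a sketch; the coefficients are illustrative):

```python
import numpy as np

def shared_vertex(p1, p2, p3):
    """Vertex shared by three planes, each given as (a, b, c) with
    a*x + b*y + c*z = 1; fails if there is no unique common point."""
    return np.linalg.solve(np.array([p1, p2, p3], dtype=float), np.ones(3))

def intersection_direction(p1, p2):
    """Direction vector of the intersection line shared by two planes."""
    return np.cross(p1, p2)

floor = (0.0, 0.0, 1.0)   # plane z = 1
wall_a = (1.0, 0.0, 0.0)  # plane x = 1
wall_b = (0.0, 0.5, 0.0)  # plane y = 2
print(shared_vertex(floor, wall_a, wall_b))    # [1. 2. 1.]
print(intersection_direction(wall_a, wall_b))  # parallel to the z axis
```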
  • A surface 3 represented by a mathematical expression is usually one of a floor surface 31, a ceiling surface 32, and a wall surface 33. Therefore, when generating the shape model of the room, it is necessary to associate each surface 3 with one of the floor surface 31, the ceiling surface 32, and the wall surfaces 33.
  • For this purpose, the modeling unit 13 may display the shape model on the screen of the monitor device serving as the output device 41 and receive, from the input device 42, information on the type of each surface in the shape model.
  • That is, the user specifies, for each surface of the shape model, one of the floor surface 31, the ceiling surface 32, and the wall surfaces 33.
  • Alternatively, the type of each surface of the shape model may be determined by the order of selection, for example the ceiling surface 32, then the floor surface 31, and then the wall surfaces 33 in the clockwise direction.
  • The modeling unit 13 may also identify the surfaces 3 located at the top and bottom of the room, namely the ceiling surface 32 and the floor surface 31, based on the coordinate values of the measurement data. That is, the modeling unit 13 may automatically assign the floor surface 31 and the ceiling surface 32 in the shape model of the room, and assign, as wall surfaces 33, the surfaces 3 of the shape model other than the floor surface 31 and the ceiling surface 32.
  • The configuration in which the modeling unit 13 automatically determines the surfaces 3 in this way can be used in combination with the configuration described above in which the user designates the type of each surface 3. For example, it is desirable that the modeling unit 13 be configured so that an automatically determined surface type can be corrected through the input device 42.
  • If the intersection lines are not orthogonal, a surface 3 that should be rectangular may take a parallelogram, trapezoidal, or irregular quadrangular shape.
  • Such a shape could be corrected manually using the monitor device serving as the output device 41 and the input device 42,
  • but the result obtained when manual operation intervenes may vary depending on the user.
  • Therefore, the third processing unit 123 sets, for each extracted surface 3, a rectangular area 25 circumscribing the surface 3, as shown in the figure.
  • This rectangular area 25 is defined as the maximum range over which the surface 3 obtained from the mathematical expression may be expanded, and it is evaluated whether the measurement points included in the rectangular area 25 may belong to the surface 3. That is, the third processing unit 123 obtains the distance, in the normal direction of the surface 3, between each measurement point included in the rectangular area 25 and the surface 3; if this distance is equal to or less than a predetermined reference distance, the measurement point is taken as a candidate to be added to the surface 3, as sketched below.
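The candidate test amounts to thresholding the normal-direction distance from each measurement point to the plane a·x + b·y + c·z = 1; a sketch (the reference distance of 1 cm is an assumed value):

```python
import numpy as np

def plane_candidates(points, abc, reference_distance=0.01):
    """Return the measurement points whose distance to the plane
    a*x + b*y + c*z = 1, measured along the plane normal, is at most
    the reference distance.

    points: (N, 3) array of measurement points inside the rectangular area 25.
    """
    abc = np.asarray(abc, dtype=float)
    dist = np.abs(points @ abc - 1.0) / np.linalg.norm(abc)
    return points[dist <= reference_distance]

pts = np.array([[0.0, 0.0, 1.0],    # exactly on the plane z = 1
                [0.5, 0.5, 1.005],  # 5 mm away: kept as a candidate
                [0.5, 0.5, 1.2]])   # 20 cm away: rejected
print(plane_candidates(pts, (0.0, 0.0, 1.0)))
```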
  • The third processing unit 123 then recalculates the mathematical expression of the surface including the candidates, in the same manner as the processing for obtaining the mathematical expression representing the surface 3.
  • The measurement points used for the recalculation need only be the measurement points targeted when the rectangular area 25 was set.
  • Next, three points appropriately extracted from the surface 3 are projected onto the surface obtained by the recalculation, and the coordinate values of the three extracted points are replaced by the coordinate values after projection. With this process, new coordinate values are set for the three points extracted from the surface 3.
  • The third processing unit 123 then recalculates the mathematical expression corresponding to the surface 3 using the coordinate values of these three points. A similar recalculation is performed for each adjacent surface 3 to obtain its mathematical expression, and intersection lines and vertices are then obtained using the recalculated expressions.
  • As a result, a surface 3 that should be rectangular in the three-dimensional object 30 is corrected so as to become rectangular.
  • In this way, the shape model of the three-dimensional object 30 can be generated with high reproducibility.
  • The modeling unit 13 can place the three-dimensional object 30 represented by the three-dimensional coordinate values in a virtual three-dimensional space constructed by the modeling apparatus 10.
  • The three-dimensional shape model generated by the modeling unit 13 is stored in a model storage unit 131 provided in the modeling unit 13.
  • The shape model data stored in the model storage unit 131 can be output to the output device 41 through the output unit 15. That is, the output unit 15 forms a virtual three-dimensional space by computer graphics on the output device 41 and arranges the virtual three-dimensional object 30 represented by the three-dimensional coordinate values in that three-dimensional space.
  • When the output device 41 is a monitor device,
  • the user can operate the input device 42 so as to change the coordinate axes of the three-dimensional space or the position of the viewpoint from which the three-dimensional object 30 is viewed.
  • The three-dimensional object 30 can thus be viewed from various directions.
  • The shape model data include the mathematical expressions representing the surfaces 3, the mathematical expressions representing the boundary lines between adjacent pairs of surfaces 3, the coordinate values of the intersection points of adjacent triples of surfaces 3, and the coordinate values of the measurement points.
  • When displaying on the monitor device serving as the output device 41, the output unit 15 can display the shape model as seen from the inside of the room.
  • The shape model seen from the inside of the room can be used for simulation when renovating the room. That is, it becomes possible to confirm on the screen of the monitor device serving as the output device 41 how the appearance of the room changes when members having a decorative function, such as wall cloth, curtains, or lighting fixtures, are changed.
  • In the configuration described above, the user designates, from among the surface areas Sk that are sets of unit areas Ui, the surface areas Sk corresponding to the surfaces 3 constituting the three-dimensional object 30; therefore, if the designation of the surface areas Sk is appropriate, the mathematical expressions representing the surfaces 3 can be determined appropriately.
  • However, the surfaces 3 and the surface regions Sk do not always correspond one-to-one, and the surface region Sk corresponding to a surface 3 may be divided into a plurality of regions. In this case, although the remaining information can still be used to obtain the mathematical expression representing the surface 3, the available information may not be fully utilized.
  • It is therefore desirable that the second processing unit 122 have a function of integrating surface areas Sk corresponding to the same surface 3. That is, when the user designates a pixel included in one surface region Sk, if there is another surface region Sk corresponding to the same surface 3, it is desirable that the second processing unit 122 extract the pixels of that other surface area Sk at the same time.
  • To integrate a plurality of surface regions Sk into one, the second processing unit 122 uses additional information other than the orientation of the unit areas Ui. That is, it is desirable that the second processing unit 122 use the additional information to verify, for the surface region Sk candidates formed by classifying the unit regions Ui by orientation, whether a plurality of them should be integrated into one surface region Sk. However, when it is clear that the surface 3 is a continuous surface, the processing using additional information is unnecessary.
  • The additional information may be the distance from the origin to the unit area Ui, or both the distance from the origin to the unit area Ui and the distances between the unit areas Ui.
  • In the first operation example, the second processing unit 122 extracts two unit regions Ui from among the plurality of unit regions Ui formed for the developed image P1. Next, the second processing unit 122 obtains the inner product of the normal vectors of the two extracted unit regions Ui and, for the two unit regions Ui whose inner product is at least the reference value, obtains the distance of each from the origin. Further, the second processing unit 122 obtains the difference between the distances from the origin to the two unit regions Ui, and classifies the two unit regions Ui into the same surface region Sk if the difference is within the reference range. A sketch of this test is given below.
  • Here, the distance from the origin to a unit area Ui is the distance between the representative point of the unit area Ui and the origin.
  • As the representative point, for example, the pixel at the center of the unit area Ui in the developed image P1 or the pixel at a corner (for example, the upper left corner) of the unit area Ui is used.
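A sketch of this test for two unit regions (the threshold values are assumptions):

```python
import numpy as np

def same_surface_region(u1, u2, min_dot=0.95, max_diff=0.02):
    """First operation example: classify two unit regions Ui into the same
    surface region Sk if their normals point in substantially the same
    direction and their distances from the origin differ only slightly.

    u1, u2: (normal, rep_point) pairs for the two unit regions.
    """
    n1, p1 = u1
    n2, p2 = u2
    if float(np.dot(n1, n2)) < min_dot:  # orientations disagree
        return False
    d1 = np.linalg.norm(p1)              # distance origin -> unit region
    d2 = np.linalg.norm(p2)
    return abs(d1 - d2) <= max_diff      # difference within reference range

n = np.array([0.0, 0.0, 1.0])
print(same_surface_region((n, np.array([0.10, 0.2, 1.0])),
                          (n, np.array([0.15, 0.2, 1.0]))))  # True
```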
  • When the distance from the origin to the wall surface 33 is examined along the horizontal direction, it varies as shown in FIG. 8A and FIG. 8B.
  • Here it is assumed that the measuring device 20 is disposed near the center of the room that is the three-dimensional object 30,
  • and that the origin for measuring the distance is set at the place where the measuring device 20 is installed. Under these conditions, the distance between the measuring device 20 and the wall surface 33 is minimal directly in front of the wall and maximal near both ends of the wall surface 33.
  • When the wall surface 33 is a single flat plane, a smooth curve is obtained as shown in FIG. 8A.
  • This curve is ideally a secant curve (the reciprocal of the cosine) of the angle at which the wall surface 33 is viewed from the origin.
  • FIG. 8B shows a case where there are dents at two locations on the wall surface 33; the distance increases discontinuously at the parts corresponding to the dents.
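The ideal profile of FIG. 8A corresponds to d(θ) = d0 / cos θ for a flat wall at perpendicular distance d0, and a dent shifts d0 over part of the angular range, producing the discontinuity of FIG. 8B; a sketch with assumed dimensions:

```python
import math

def wall_distance(theta, d0=3.0, dent=None):
    """Origin-to-wall distance at horizontal viewing angle theta [rad].

    d0:   perpendicular distance to the wall [m] (assumed value).
    dent: optional (start, end, depth); within this angular range the
          wall is recessed by `depth`, so the distance jumps
          discontinuously, as in FIG. 8B.
    """
    depth = 0.0
    if dent is not None:
        start, end, d = dent
        if start <= theta <= end:
            depth = d
    return (d0 + depth) / math.cos(theta)  # secant curve, as in FIG. 8A

for deg in (-40, -20, 0, 10, 20, 40):
    t = math.radians(deg)
    print(deg, round(wall_distance(t, dent=(0.1, 0.5, 0.2)), 3))
```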
  • In the first operation example, the spacing between the two unit regions Ui whose distances are compared is kept relatively small by the constraint that the inner product of the normal vectors be at least the reference value, but the possibility that another unit region Ui lies between them is not excluded. Therefore, the granularity with which surface areas Sk are separated may be coarser than the size of a unit area Ui.
  • In the second operation example, the additional information includes not only the distance from the origin to each unit area Ui but also the distances between the unit areas Ui, so the amount of information used for the determination by the second processing unit 122 increases further.
  • The distance between two unit areas Ui is the distance between their representative points, and the additional information is the result of determining whether this distance is within the range of a reference distance. That is, the additional information is the information as to whether the two unit regions Ui whose normal vectors' inner product is calculated are adjacent to each other.
  • The reference distance is determined by the size of the unit area Ui; for example, a distance obtained by adding several pixels to the number of pixels on one side of the unit area Ui is used as the reference distance.
  • In the second operation example as well, the second processing unit 122 extracts two unit regions Ui from among the plurality of unit regions Ui formed for the developed image P1.
  • However, the two unit regions Ui extracted by the second processing unit 122 are unit regions for which the distance between the representative points has been determined to be within the range of the reference distance, in other words, adjacent unit regions Ui. If the inner product of the normal vectors of two adjacent unit regions Ui is at least the reference value, the two unit regions Ui are estimated to have substantially the same orientation. Furthermore, if the difference between the distances from the origin to the two unit regions Ui is within the reference range, the two unit regions Ui are estimated to belong to the same surface 3.
  • The other operations in the second operation example are the same as those in the first operation example.
  • In the procedure described above, the inner product of the normal vectors of the two unit regions Ui is obtained after determining that the distance between the two unit regions Ui is within the range of the reference distance;
  • however, the order of these processes may be reversed.
  • the second processing unit 122 evaluates whether each unit region Ui delivered from the first processing unit 121 belongs to the surface region Sk previously added to the list. ing. That is, the unit region Ui is integrated into the surface region Sk using the inner product of the normal vector Vi of the unit region Ui and the normal vector Tk of the surface region Sk in the list and the distance between the unit region Ui and the surface region Sk. It is decided whether to do it.
  • the normal vector Tk of the surface region Sk changes as the unit region Ui is integrated into the surface region Sk, and the normal vector Tk deviates from the normal vector of the original surface 3. there is a possibility.
  • It is therefore desirable that the second processing unit 122 perform a re-evaluation process after the surface regions Sk have been extracted by the processing of operation example 1, determining whether each unit region Ui of the developed image P1 belongs to one of the already extracted surface regions Sk. That is, the second processing unit 122 sequentially evaluates whether each unit region Ui included in the developed image P1 should be integrated into an existing, already extracted surface region Sk.
  • integrating the unit area Ui into the existing surface area Sk actually means giving the unit area Ui the label given to the surface area Sk.
  • The evaluation method is the same as in operation example 1: the conditions for integration into the surface region Sk are that the inner product of the normal vector Vi of the unit region Ui and the normal vector Tk of the surface region Sk is equal to or greater than the reference value, and that the distance between the unit region Ui and the surface region Sk is equal to or less than the reference distance.
  • A unit region Ui that does not satisfy these conditions is excluded from the surface region Sk, even if it had already been integrated into it.
  • The second processing unit 122 uses an evaluation function including the inner product and the distance, as in the first operation example, to determine the surface region Sk into which each unit region Ui is integrated.
  • the normal vector Tk is calculated for each surface area Sk.
  • The normal vector Tk of a surface region Sk is chosen as, for example, the average, median, or mode of the normal vectors of its unit regions.
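  • A minimal sketch of computing the representative normal Tk from the member unit regions' normals Vi follows; choosing the average is one of the options named above, and the function name is an assumption.

```python
import numpy as np

def surface_normal(unit_normals, method="mean"):
    """Representative normal Tk of a surface region Sk from its unit-region normals Vi."""
    stacked = np.vstack(unit_normals)          # shape (n, 3)
    if method == "mean":
        t = stacked.mean(axis=0)               # average value
    elif method == "median":
        t = np.median(stacked, axis=0)         # componentwise median
    else:
        raise ValueError("unsupported method")
    return t / np.linalg.norm(t)               # renormalize to a unit vector
```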
  • The process in which the second processing unit 122 obtains the surface regions Sk in operation example 1 or operation example 2 is treated as preprocessing; by adding the processing described above to this preprocessing, the accuracy of extracting the surface regions Sk from the developed image P1 is increased.
  • The processing of operation example 3 that follows the preprocessing is summarized in the figure. After the preprocessing (S110), in order to generate the surface regions Sk from the developed image P1, the second processing unit 122 scans the developed image P1 again and sequentially extracts the unit regions Ui (S111). The second processing unit 122 then evaluates, for each extracted unit region Ui and the existing surface regions Sk stored in the list, the inner product of the normal vectors and the distance (S112).
  • When the evaluation result of the evaluation function satisfies the conditions for integration, the second processing unit 122 integrates the unit region Ui into the existing surface region Sk (S113).
  • Otherwise, the second processing unit 122 discards the corresponding unit region Ui. The above processing is performed for all the unit regions Ui of the developed image P1 (S114).
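  • The re-evaluation pass (S111 to S114) might look like the following minimal sketch, assuming the preprocessing has produced a list of surface regions whose normals are now fixed; the combined evaluation function and the threshold values are illustrative assumptions.

```python
import numpy as np

INNER_PRODUCT_MIN = 0.95  # assumed reference value
DISTANCE_MAX = 0.05       # assumed reference distance (m) between Ui and Sk

def reevaluate(unit_regions, surface_regions):
    """Assign every unit region Ui of the developed image P1 to an existing
    surface region Sk, or discard it (operation example 3, after preprocessing)."""
    for ui in unit_regions:                       # S111: scan the developed image again
        best_label = None
        best_score = -np.inf
        for label, sk in surface_regions.items(): # S112: evaluate against each Sk in the list
            inner = np.dot(ui["normal"], sk["normal"])
            dist = abs(ui["origin_dist"] - sk["origin_dist"])
            if inner >= INNER_PRODUCT_MIN and dist <= DISTANCE_MAX:
                score = inner - dist              # simple illustrative evaluation function
                if score > best_score:
                    best_score, best_label = score, label
        ui["label"] = best_label                  # S113: integrate; None means discarded
    return unit_regions                           # S114: all unit regions processed
```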
  • In the operation examples described above, the inner product of the normal vector Vi of the unit region Ui and the normal vector Tk of the surface region Sk is used. That is, the orientations of the unit region Ui and the surface region Sk are evaluated by means of the normal vectors Vi and Tk.
  • Alternatively, the orientations of the unit region Ui and the surface region Sk can be expressed by angles with respect to a reference direction. That is, instead of obtaining the normal vector from the mathematical expression of the surface 3 written in coordinate values of the orthogonal coordinate system, the orientations of the unit region Ui and the surface region Sk may be represented by a pair of angles, by expressing them in polar coordinates (that is, spherical coordinates).
  • When the orientation of the surface 3 is expressed in polar coordinates, the coordinate values of the measurement points can be handled as they are, without converting the coordinate values of the polar coordinate system into coordinate values of the orthogonal coordinate system.
  • The direction of the unit region Ui is represented by a set of two angles: the angle that the straight line obtained by projecting the normal vector onto the xy plane forms with the x-axis, and the angle that the normal vector forms with the z-axis.
  • The former corresponds to the azimuth angle and the latter to the elevation angle (or dip angle); hereafter, the former is called the first angle and the latter the second angle.
  • As the evaluation value, the sum of the square of the difference of the first angles and the square of the difference of the second angles may be used. That is, the second processing unit 122 determines that two unit regions Ui belong to the same surface region Sk when this sum of squares is equal to or less than a suitably set threshold value.
  • This technique for determining whether two unit regions Ui are regarded as facing the same direction can also be applied to determining whether a single unit region Ui belongs to a surface region Sk.
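  • A minimal sketch of the angle-based test follows, assuming the first angle is the azimuth of the normal's projection onto the xy plane and the second angle is measured from the z-axis; the threshold is an assumed value, and wraparound of the azimuth is ignored for brevity.

```python
import math

ANGLE_THRESHOLD_SQ = math.radians(5.0) ** 2  # assumed threshold on the sum of squares

def angles_of(normal):
    """First angle (azimuth in the xy plane) and second angle (from the z-axis)."""
    nx, ny, nz = normal
    first = math.atan2(ny, nx)                    # projection's angle w.r.t. the x-axis
    second = math.acos(max(-1.0, min(1.0, nz)))   # angle w.r.t. the z-axis (unit normal)
    return first, second

def same_orientation(normal_a, normal_b):
    a1, a2 = angles_of(normal_a)
    b1, b2 = angles_of(normal_b)
    # Sum of squared angle differences at or below the threshold => same orientation
    return (a1 - b1) ** 2 + (a2 - b2) ** 2 <= ANGLE_THRESHOLD_SQ
```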
  • When a three-dimensional lattice is set in the space containing the three-dimensional object 30, a unit cell may contain a plurality of measurement points, or none at all.
  • measurement data representing each unit cell is obtained.
  • the measurement data representing the unit cell is an average value of coordinate values or a coordinate value of the center of gravity obtained from the coordinate values of the measurement points included in the unit cell. That is, one piece of measurement data is obtained for each unit cell.
  • When the measurement data representing the unit cells are applied to the developed image P1, variation in the distribution density of the measurement data in the developed image P1 is suppressed, and the number of measurement data in the developed image P1 is reduced.
  • the processing after setting the measurement data for each unit grid in the developed image P1 is as described above. In other words, the developed image P1 is divided into unit regions Ui, classified into surface regions Sk for each unit region Ui, a mathematical expression representing the surface 3 is obtained from the surface region Sk, and finally a shape model is generated.
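  • Reducing the measurement points to one representative datum per unit cell might be sketched as follows; the lattice constant is an assumed value, and the average of the coordinate values is used as described above.

```python
import numpy as np
from collections import defaultdict

LATTICE_CONSTANT = 0.05  # m: assumed size of a unit cell

def representative_points(points):
    """One measurement datum per unit cell: the average of the coordinate values
    (the centre of gravity) of the measurement points included in that cell."""
    cells = defaultdict(list)
    for p in points:                                # points assumed to be 3D NumPy arrays
        key = tuple(np.floor(p / LATTICE_CONSTANT).astype(int))
        cells[key].append(p)
    # Unit cells containing no measurement point simply do not appear in the result.
    return [np.mean(group, axis=0) for group in cells.values()]
```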
  • The third processing unit 123 can perform the following processing (referred to as M estimation) in order to estimate the plane formula (the formula of the surface 3) obtained in the above-described operation examples with higher robustness.
  • To perform M estimation, first an estimation range (for example, ±5 mm), a convergence condition (for example, ±1 mm), and the number of repetitions are determined.
  • Next, the third processing unit 123 obtains a mathematical expression representing the surface 3 in the same procedure as in the first operation example, using the measurement points included in one of the surface regions Sk (these may be measurement points representing unit cells).
  • the surface 3 represented by this mathematical formula is referred to as a candidate surface.
  • the third processing unit 123 recalculates the mathematical formula of the surface 3 using the measurement points included in the three-dimensional space estimated from the candidate surface.
  • the third processing unit 123 obtains a distance from the candidate surface (that is, an error with respect to the candidate surface) for the measurement point used for recalculation, and determines a weighting coefficient corresponding to the distance.
  • The weighting coefficient is a value of 1 or less, and is set, for example, to {1 − (distance / estimation range)²}².
  • Then, the third processing unit 123 recalculates the formula of the surface 3 using, as the coordinate value of each measurement point, the value obtained by multiplying the coordinate value of the measurement point by the weighting coefficient.
  • The third processing unit 123 takes the surface 3 represented by the recalculated formula as a new candidate surface, and repeats the above processing until the convergence condition is satisfied or the predetermined number of repetitions is reached.
  • In this way, a weighting coefficient corresponding to the degree of contribution to the formula of the surface 3 is applied to each measurement point; when determining the formula of the surface 3, the influence of abnormal values with large errors is reduced, and as a result the surface 3 can be estimated with high robustness.
  • Other processes may be performed in the same manner as the above-described operation example.
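  • The M estimation loop of operation example 4 might be sketched as follows, using a standard weighted least-squares plane fit in place of the coordinate-multiplication detail described above; all helper names and parameter values are illustrative assumptions.

```python
import numpy as np

ESTIMATION_RANGE = 0.005   # estimation range, e.g. ±5 mm (in metres)
CONVERGENCE = 0.001        # convergence condition, e.g. ±1 mm
MAX_ITERATIONS = 10        # assumed number of repetitions

def fit_plane(points, weights=None):
    """Weighted least-squares plane n·x = d through the points; returns (n, d)."""
    w = np.ones(len(points)) if weights is None else weights
    centroid = np.average(points, axis=0, weights=w)
    centered = (points - centroid) * np.sqrt(w)[:, None]
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                            # direction of least variance
    return normal, float(normal @ centroid)

def m_estimate(points):
    points = np.asarray(points, dtype=float)
    normal, d = fit_plane(points)              # initial candidate surface
    for _ in range(MAX_ITERATIONS):
        dist = np.abs(points @ normal - d)     # error with respect to the candidate surface
        weights = np.where(dist < ESTIMATION_RANGE,
                           (1.0 - (dist / ESTIMATION_RANGE) ** 2) ** 2,  # {1-(e/r)^2}^2
                           0.0)                # points outside the range contribute nothing
        new_normal, new_d = fit_plane(points, weights)
        if new_normal @ normal < 0:            # keep the normal's sign consistent
            new_normal, new_d = -new_normal, -new_d
        if abs(new_d - d) < CONVERGENCE:
            return new_normal, new_d           # convergence condition satisfied
        normal, d = new_normal, new_d
    return normal, d                           # iteration limit reached
```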
  • The distribution density of the measurement points measured by the measurement device 20 varies. That is, as described above, since the measuring device 20 measures coordinate values of the polar coordinate system, the distribution density of the measurement points is high where the distance to the three-dimensional object 30 is short, and low where the distance to the three-dimensional object 30 is long. On the other hand, if the surface 3 measured by the measurement device 20 is distorted (deviates from a plane) despite being assumed to be a plane, the measurement points used to calculate the formula of the surface 3 include measurement points that deviate from the plane to be obtained.
  • Since the measurement points of the part close to the measurement device 20 have a higher distribution density than those of the part far from it, information on the measurement points close to the measurement device 20 is easily reflected in the formula of the surface 3. That is, if the surface 3 is distorted in a region close to the measuring device 20, the obtained formula of the surface 3 is likely to deviate from the formula of the surface 3 to be obtained.
  • the measurement unit rotates along the horizontal plane, and scans the laser along the vertical plane. Therefore, the laser passes many times immediately above the measuring device 20 and the density of scanning points near the measuring device 20 increases. That is, the distribution density of the measurement points immediately above the measuring device 20 inevitably increases due to the characteristics of the measuring device 20.
  • The ceiling surface 32 is closest to the measuring device 20 in the vicinity immediately above it; therefore, in addition to the characteristic of the measuring device 20 described above, the short distance also contributes, and the distribution density of the measurement points immediately above the measuring device 20 becomes very high. The formula of the ceiling surface 32 obtained from a measurement point group containing such measurement points is likely to deviate from the ceiling surface 32 that is actually to be obtained.
  • When measuring the surfaces 3 surrounding a room using the measuring device 20, the measuring device 20 is usually placed at a site away from the wall surfaces 33 of the room.
  • the floor surface 31 and the wall surface 33 are relatively less likely to be distorted, but the ceiling surface 32 is more likely to be distorted than the floor surface 31 and the wall surface 33.
  • The downward load acting on the ceiling material constituting the ceiling is received by the members constituting the walls and the like, while the downward load of lighting fixtures and the like acts on the central part of the ceiling material, which is nevertheless not supported from below. Therefore, the central part of the ceiling material is more likely to sag than the peripheral part.
  • For this reason, it can be said that it is desirable to exclude the measurement points obtained from the central portion of the ceiling surface 32. Since the distortion of the surfaces 3 surrounding the room is most likely to occur on the ceiling surface 32, only the ceiling surface 32 is targeted here; however, the processing described below can also be applied to the floor surface 31 or the wall surfaces 33.
  • Specifically, the measurement points included in the central region D2 of the ceiling surface 32 are excluded, and only the measurement points included in the region D1 defined in the peripheral portion of the ceiling surface 32 are used.
  • As for the size of the region D1, the width W1 is desirably approximately 40 cm to 50 cm from the boundary line of the ceiling surface 32 (the sides surrounding the ceiling surface 32).
  • Even if the boundary line obtained from the candidate surfaces does not coincide with the true boundary line between the ceiling surface 32 and the wall surfaces 33, the assumption that the true boundary line exists within a predetermined distance (for example, within 10 cm) of this boundary line is valid. Therefore, when the measurement points included in the region D1 of predetermined width W1 determined with reference to the boundary line surrounding the ceiling surface 32 are used, the measurement points included in the central region D2 of the ceiling surface 32 are still excluded. That is, even if the region D1 is set using the boundary line obtained on the basis of the candidate surfaces, it is possible to exclude the measurement points at the center of the ceiling surface 32 and extract the measurement points at its periphery.
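  • Restricting the ceiling points to the peripheral region D1 might be sketched as follows; for brevity the boundary line is approximated by the axis-aligned bounding rectangle of the ceiling points, which is an assumption (in the embodiment the boundary comes from the intersections with the wall surfaces 33).

```python
import numpy as np

WIDTH_W1 = 0.45  # m: approximately 40 cm to 50 cm from the boundary line

def peripheral_points(ceiling_points):
    """Keep only the measurement points within width W1 of the boundary of the
    ceiling surface 32; points in the central region D2 are excluded."""
    pts = np.asarray(ceiling_points, dtype=float)
    xmin, ymin = pts[:, 0].min(), pts[:, 1].min()
    xmax, ymax = pts[:, 0].max(), pts[:, 1].max()
    near_edge = ((pts[:, 0] - xmin < WIDTH_W1) | (xmax - pts[:, 0] < WIDTH_W1) |
                 (pts[:, 1] - ymin < WIDTH_W1) | (ymax - pts[:, 1] < WIDTH_W1))
    return pts[near_edge]
```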
  • It is desirable that the program be configured so that the third processing unit 123 uses part of the processing of the modeling unit 13 as a subroutine.
  • configurations corresponding to the first processing unit 121 and the second processing unit 122 may be omitted. That is, the surface extraction unit 12 only needs to be able to perform processing corresponding to the third processing unit 123 in addition to extracting measurement points in the region D1.
  • a condition is set such that the distance from all the wall surfaces 33 of the room to the measuring device 20 is a predetermined value or more.
  • Since the coordinate values of the measurement points are coordinate values of the polar coordinate system, one constraint condition is that the angle at which the measuring device 20 scans the laser, measured with reference to the vertical direction, is greater than or equal to a predetermined angle.
  • Another constraint condition is that the distance to the measurement point (a coordinate value in the polar coordinate system) is greater than or equal to a predetermined value, or greater than or equal to the minimum distance multiplied by a predetermined magnification greater than 1.
  • Constraint conditions are determined in order to extract measurement points on the periphery of the ceiling surface 32.
  • For this purpose, the measuring device 20 should be sufficiently separated from the wall surfaces 33; if the room is 5 m × 3.5 m, the minimum distance from the wall surface 33 to the measuring device 20 is set to, for example, 1.5 m or more.
  • In this case, the constraint conditions are determined such that, in the surface region Sk designated as the ceiling surface 32, the scan angle is 35° or more, and the distance is 2.5 m or more, or 120% or more of the minimum distance between the ceiling surface 32 and the measuring device 20.
  • These numerical values are only examples, and can be determined appropriately within a range that achieves the purpose of excluding the measurement points at the center of the ceiling surface 32 and extracting the measurement points on its periphery.
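  • The constraint conditions of operation example 5 can be applied directly to polar-coordinate measurements, as in the following minimal sketch; the 35° and 2.5 m / 120% figures are the example values given above, and the function name is an assumption.

```python
import math

MIN_ANGLE = math.radians(35.0)   # scan angle from the vertical: 35° or more
MIN_DISTANCE = 2.5               # m, or ...
MIN_RATIO = 1.2                  # ... 120% of the minimum ceiling distance

def satisfies_constraints(theta, rho, min_ceiling_dist):
    """theta: laser scan angle from the vertical; rho: distance to the point.
    Returns True for points on the periphery of the ceiling surface 32."""
    return theta >= MIN_ANGLE and rho >= max(MIN_DISTANCE, MIN_RATIO * min_ceiling_dist)
```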
  • When performing the process of extracting the measurement points in the region D1 around the ceiling surface 32, the third processing unit 123 must provisionally define a boundary line and then determine the region D1 using this boundary line.
  • With the constraint conditions described above, by contrast, the processing load is reduced because a process for obtaining the boundary line is unnecessary. That is, in the process of obtaining the formula of the surface 3, the processing load is reduced by limiting the number of measurement points.
  • The process of extracting the measurement points that satisfy the constraint conditions can also be placed before the third processing unit 123; in that case, too, the formula of the surface 3 can be obtained accurately.
  • the operation example 5 can be used in combination with the other operation examples described above.
  • By reducing the influence of the distortion of the ceiling surface 32 in this way, the formula for the ceiling surface 32 can be determined accurately. For example, when the height of the ceiling surface 32 (its height relative to the floor surface 31) is obtained from a surface formula determined using all the measurement points of the ceiling surface 32, an error of 1% to 2% with respect to the measured height can occur; when the above-described processing is performed, the error improves to about 0.1% to 0.2%. Since the height of the ceiling surface 32 of a house above the floor surface 31 is generally about 2400 mm to 2700 mm, the error can thus be suppressed to ±10 mm or less.
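  • As a quick arithmetic check of the stated bound: 0.1% to 0.2% of 2400 mm to 2700 mm is 2.4 mm to 5.4 mm, which is indeed within ±10 mm.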
  • Operation example 5 has described the case where the formula of the ceiling surface 32 is obtained; however, the same technique can be adopted, as necessary, when obtaining the formula of the floor surface 31 or a wall surface 33. Further, as in operation example 5, after the formula of the surface 3 has been obtained while excluding the measurement points that may reduce its accuracy, the distortion of the surface 3 can be measured by obtaining the relationship between the measurement points distributed on the corresponding surface 3 and that surface 3.
  • To summarize, the modeling method of the present embodiment generates a shape model of the three-dimensional object 30 according to the following procedure. That is, the data acquisition unit 11 acquires, from the measurement device 20 that performs three-dimensional measurement of the three-dimensional object 30, measurement data that are three-dimensional coordinate values of a plurality of measurement points belonging to the three-dimensional object 30. Next, the surface extraction unit 12 generates mathematical expressions representing the surfaces 3 constituting the three-dimensional object 30 using the measurement data, and the modeling unit 13 generates a shape model representing the three-dimensional object 30 using these mathematical expressions.
  • the surface extraction unit 12 performs the first process, the second process, and the third process.
  • In the first process, a developed image P1, which is a two-dimensional image in which the three-dimensional object 30 is developed on a plane and which has the measurement data as pixel values, is used; the developed image P1 is divided into a plurality of unit regions Ui, and the directions of the unit regions Ui are obtained.
  • each unit region Ui is classified into a surface region Sk corresponding to the surface 3 to which the unit region Ui belongs using a condition including a direction.
  • a mathematical expression representing the surface 3 is determined for each surface region Sk.
  • the modeling device 10 of the present embodiment includes a data acquisition unit 11, a surface extraction unit 12, and a modeling unit 13.
  • the data acquisition unit 11 acquires measurement data that is a three-dimensional coordinate value for a plurality of measurement points belonging to the three-dimensional object 30 from the measurement device 20 that performs three-dimensional measurement of the three-dimensional object 30.
  • the surface extraction unit 12 generates a mathematical expression representing the surface 3 constituting the three-dimensional object 30 using the measurement data.
  • the modeling unit 13 generates a shape model representing the three-dimensional object 30 using mathematical expressions.
  • the surface extraction unit 12 preferably includes a first processing unit 121, a second processing unit 122, and a third processing unit 123.
  • the first processing unit 121 uses a developed image P1 that is a two-dimensional image in which the three-dimensional object 30 is developed on a plane and has measurement data as pixel values, divides the developed image P1 into a plurality of unit regions Ui, and The direction of each unit area Ui is obtained.
  • the second processing unit 122 classifies each unit region Ui into a surface region Sk corresponding to the surface 3 to which the unit region Ui belongs using a condition including a direction.
  • the third processing unit 123 determines a mathematical expression representing the surface 3 for each surface region Sk.
  • With this configuration, the unit regions Ui obtained by dividing the developed image P1 are grouped by substantially identical direction, and the surface regions Sk are formed from these unit regions Ui.
  • The surface region Sk corresponding to a surface 3 usually includes a plurality of unit regions Ui. Therefore, when the mathematical expression representing the surface 3 is determined using such a surface region Sk, the influence of outliers is reduced compared with the case where the expression is determined using only a few measurement points.
  • the shape model of the three-dimensional object 30 can be generated with high accuracy. Even when only measurement points relating to a part of the surface 3 constituting the three-dimensional object 30 are obtained, the surface region Sk is formed by accumulating unit regions Ui having information regarding the surface 3. Therefore, the amount of information that can be used to determine the mathematical expression representing the surface 3 increases, and it is possible to determine a mathematical expression with higher accuracy.
  • the unit area Ui is a super pixel including a plurality of pixels, and the processing load is small as compared with the case where individual measurement points are handled individually.
  • statistical processing can be performed in the processing for determining the direction of the unit region Ui and the processing for obtaining the mathematical expression representing the surface 3 from the surface region Sk, so that an outlier is included in the measurement point. Even in such a case, it is possible to reduce or eliminate the influence of outliers. That is, an appropriate mathematical expression representing the surface 3 constituting the three-dimensional object 30 can be obtained, and as a result, the accuracy of the generated shape model is increased.
  • Furthermore, the surface extraction unit 12 obtains a mathematical expression representing one surface 3 using measurement points extracted under the following conditions. That is, the surface extraction unit 12 obtains a boundary line surrounding the one surface 3 using the mathematical expressions respectively representing the one surface 3 of the three-dimensional object 30 and the plurality of surfaces 3 surrounding it. Thereafter, the surface extraction unit 12 extracts, from the measurement points belonging to the one surface 3, the measurement points in the region D1 defined by the predetermined width W1 from the boundary line toward the inside of the one surface 3.
  • the three-dimensional object 30 may be a room surrounded by a floor surface 31, a ceiling surface 32, and a plurality of wall surfaces 33.
  • the one surface 3 described above is a ceiling surface 32 and the plurality of surfaces 3 surrounding the one surface 3 are wall surfaces 33.
  • It is also desirable that the third processing unit 123 determine the mathematical expression representing the surface 3 using, among the measurement points, those that satisfy constraint conditions set with respect to the position relative to the measurement device 20, and that the constraint conditions be determined so as to exclude the measurement points at the center of the surface 3.
  • the measurement point at the center of the surface 3 constituting the three-dimensional object 30 is excluded, and the mathematical formula of the surface 3 is determined using only the measurement points at the periphery near the boundary line of the surface 3. Therefore, when modeling the three-dimensional object 30 by representing the three-dimensional object 30 with the boundary line of the surface 3, it is possible to accurately determine the mathematical expression of the surface 3.
  • It is desirable that the modeling device 10 include a color processing unit 17 that assigns color information to each surface region classified by the second processing unit 122, and an output unit 15 that outputs image information of a color image obtained by replacing the pixel values of the developed image P1 with the color information.
  • In this case, the surface regions Sk can be identified by color. Therefore, when the image information of the output color image is displayed on the screen of a monitor device or printed by a printer, it becomes easy for the user to recognize the surface regions Sk. For example, when the user designates a specific surface region Sk, since the surface regions are distinguished by color, the possibility of erroneously designating an unintended surface region Sk is reduced.
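  • The labeling by the color processing unit 17 might be sketched as follows; the palette and the array layout of the developed image P1 are assumptions for illustration.

```python
import numpy as np

PALETTE = np.array([[230, 25, 75], [60, 180, 75], [0, 130, 200],
                    [245, 130, 48], [145, 30, 180], [70, 240, 240]], dtype=np.uint8)

def colorize(label_image):
    """Replace each pixel's surface-region label with color information.

    label_image: integer array with one surface-region label per pixel of the
    developed image P1 (-1 for pixels belonging to no surface region).
    Returns an RGB image in which each surface region Sk has its own color.
    """
    h, w = label_image.shape
    color_image = np.zeros((h, w, 3), dtype=np.uint8)   # unlabeled pixels stay black
    labeled = label_image >= 0
    color_image[labeled] = PALETTE[label_image[labeled] % len(PALETTE)]
    return color_image
```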
  • the second processing unit 122 sequentially extracts the unit areas Ui from the developed image P1 after the preprocessing, and associates the extracted unit areas Ui with the surface area Sk determined by the preprocessing.
  • the second processing unit 122 sequentially extracts the unit areas Ui from the developed image P1 and classifies the extracted unit areas Ui in order to determine the surface area Sk.
  • In the preprocessing, the unit regions Ui are sequentially extracted from the developed image P1 and distributed to the surface regions Sk, and the direction of a surface region Sk may change as unit regions Ui are integrated into it. That is, with the preprocessing alone, the range of directions of the unit regions Ui to be integrated may differ depending on the timing at which a unit region Ui is integrated into the surface region Sk. In other words, with the preprocessing alone, the conditions for integration can change depending on that timing, and unnecessary unit regions Ui may be integrated into the surface region Sk.
  • Since, after the preprocessing, the unit regions Ui are integrated into the surface regions Sk with the orientation of each surface region Sk fixed, the conditions for integration into a surface region Sk do not change. That is, the unit regions Ui integrated into a surface region Sk are guaranteed to face substantially the same direction; as a result, the accuracy of the mathematical expression representing the surface 3 increases, and so does the accuracy of the shape model representing the three-dimensional object 30.
  • the surface 3 is a plane and the direction is represented by a normal vector Vi of the unit region Ui.
  • In this case, the mathematical expression is a linear (first-order) expression.
  • Since the normal vector Vi of a unit region Ui only needs to be obtained for a plane, the amount of calculation is relatively small; even though the shape model is generated using information on many measurement points, the processing load is relatively small and the processing can be performed within a practical time.
  • In one configuration, the second processing unit 122 calculates the inner product of the normal vectors Vi for each pair of unit regions Ui among the plurality of unit regions Ui, and when the inner product is equal to or greater than a predetermined reference value, the two unit regions Ui for which the inner product was obtained may be classified into the same surface region Sk.
  • For each unit region Ui, not only the inner product but also the distance from the origin is used to evaluate the orientation of the two unit regions Ui. If the inner product is equal to or greater than the reference value, the two unit regions Ui are regarded as facing substantially the same direction; however, if their distances differ beyond the reference range, the two unit regions Ui are judged not to constitute the same plane. By using the distance from the origin in addition to the inner product, the amount of information increases and the accuracy of classifying the unit regions Ui improves.
  • Alternatively, the second processing unit 122 calculates, for each pair of adjacent unit regions Ui among the plurality of unit regions Ui, the inner product of their normal vectors Vi and the distances from an origin determined in the real space where the three-dimensional object 30 exists to each of the two unit regions Ui. In this case as well, when the inner product is equal to or greater than the predetermined reference value and the difference of the distances is within the predetermined reference range, the second processing unit 122 classifies the two unit regions Ui for which the inner product was obtained into the same surface region.
  • the surface 3 may be a plane, and the direction may be represented by an angle of the unit region Ui with respect to the reference direction.
  • In this case, the second processing unit 122 obtains the angle difference for each pair of unit regions Ui among the plurality of unit regions Ui, and when the angle difference is equal to or less than a predetermined reference value, classifies the two unit regions Ui for which the angle difference was obtained into the same surface region Sk.
  • When the direction of the unit region Ui is represented by angles, and the measurement device 20 measures the three-dimensional object 30 from a fixed point and expresses the coordinates of the measurement points by polar coordinates about that fixed point, there is no need to convert the coordinate values to an orthogonal coordinate system, and the processing load can be reduced accordingly.
  • In the embodiment described above, the case where the surfaces 3 constituting the three-dimensional object 30 are planes has been taken as an example; however, the technique described above can also be applied to curved surfaces. Further, the technique can be applied if a curved surface is approximated by a plurality of planes, by dividing the curved surface into appropriate sections and representing each divided section by a plane.
  • The first processing unit 121 may set a three-dimensional lattice in the space including the three-dimensional object 30, and obtain measurement data representing each unit lattice using the measurement data of the measurement points included in each unit lattice constituting the three-dimensional lattice. In this case, these representative measurement data are used as the pixel values of the developed image P1.
  • If the lattice constant is determined so that a plurality of measurement points is included in each unit lattice of the three-dimensional lattice, and the measurement data representing the unit lattices are used, it becomes possible to obtain measurement data at substantially constant intervals even if the distribution density of the measurement points actually measured by the measurement device 20 varies.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Image Generation (AREA)

Abstract

The problem addressed by the present invention is to build a shape model with good accuracy even when measurement points can be obtained for only part of a plane. A modeling device (10) according to the present invention comprises a data acquisition unit (11), a plane extraction unit (12), and a modeling unit (13). Using numerical expressions that respectively represent a plane (3) of a three-dimensional object (30) and a plurality of planes (3) surrounding that plane (3), the plane extraction unit (12) extracts a boundary line surrounding the plane (3). The plane extraction unit (12) then extracts, from the measurement points associated with the plane (3), the measurement points inside a region (D1) established at a prescribed width (W1) inward on the plane (3) from the boundary line.
PCT/JP2015/005918 2014-11-28 2015-11-27 Dispositif de modélisation, dispositif de production de modèle tridimensionnel, procédé de modélisation et programme WO2016084389A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP15863391.7A EP3226212B1 (fr) 2014-11-28 2015-11-27 Dispositif de modélisation, dispositif de production de modèle tridimensionnel, procédé de modélisation et programme
US15/528,174 US10127709B2 (en) 2014-11-28 2015-11-27 Modeling device, three-dimensional model generating device, modeling method, and program
JP2016561259A JP6238183B2 (ja) 2014-11-28 2015-11-27 モデリング装置、3次元モデル生成装置、モデリング方法、プログラム
CN201580063371.7A CN107004302B (zh) 2014-11-28 2015-11-27 建模装置、三维模型生成装置、建模方法

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2014241554 2014-11-28
JP2014-241554 2014-11-28
JP2015097956 2015-05-13
JP2015-097956 2015-05-13

Publications (1)

Publication Number Publication Date
WO2016084389A1 true WO2016084389A1 (fr) 2016-06-02

Family

ID=56073976

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/005918 WO2016084389A1 (fr) 2014-11-28 2015-11-27 Dispositif de modélisation, dispositif de production de modèle tridimensionnel, procédé de modélisation et programme

Country Status (5)

Country Link
US (1) US10127709B2 (fr)
EP (1) EP3226212B1 (fr)
JP (1) JP6238183B2 (fr)
CN (1) CN107004302B (fr)
WO (1) WO2016084389A1 (fr)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102014211972A1 (de) * 2014-04-30 2015-11-05 Zumtobel Lighting Gmbh Sensoranordnung zum Erfassen ortsaufgelöster photometrischer Daten
US11481968B2 (en) * 2016-02-29 2022-10-25 Accurence, Inc. Systems and methods for improving property inspection efficiency
CN108154556A (zh) * 2017-12-12 2018-06-12 上海爱优威软件开发有限公司 一种终端虚拟装饰方法及系统
CN108830795B (zh) * 2018-06-14 2022-05-06 合肥市商巨智能装备有限公司 去除图像检测过程中摩尔纹的方法
CN109084700B (zh) * 2018-06-29 2020-06-05 上海摩软通讯技术有限公司 物品的三维位置信息获取方法及系统
CN110659547B (zh) * 2018-06-29 2023-07-14 比亚迪股份有限公司 物体识别方法、装置、车辆和计算机可读存储介质
US10679367B2 (en) * 2018-08-13 2020-06-09 Hand Held Products, Inc. Methods, systems, and apparatuses for computing dimensions of an object using angular estimates
JP7211005B2 (ja) * 2018-10-29 2023-01-24 富士通株式会社 地形推定プログラム、地形推定方法、及び、地形推定装置
CN109754453A (zh) * 2019-01-10 2019-05-14 珠海格力电器股份有限公司 基于微波雷达的房间效果图的构建方法及装置,系统
CN113255033B (zh) * 2021-05-11 2022-04-05 上海慧之建建设顾问有限公司 基于bim技术的建筑工程监理智能一体化云平台及监理方法
CN114820329B (zh) * 2022-07-01 2022-11-25 之江实验室 基于高斯过程大核注意力装置引导的曲面测量方法及装置
CN116452770B (zh) * 2023-02-17 2023-10-20 北京德风新征程科技股份有限公司 三维模型重建方法、装置、设备和介质


Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5819016A (en) 1993-10-05 1998-10-06 Kabushiki Kaisha Toshiba Apparatus for modeling three dimensional information
JP2001052163A (ja) 1999-08-11 2001-02-23 Sony Corp 情報処理装置及び方法、表示装置及び方法並びに記録媒体
JP4226360B2 (ja) 2003-03-10 2009-02-18 株式会社パスコ レーザデータのフィルタリング方法及びプログラム
WO2010095107A1 (fr) * 2009-02-19 2010-08-26 Dimensional Perception Technologies Ltd. Système et procédé de modélisation géométrique utilisant de multiples moyens d'acquisition de données
WO2011070927A1 (fr) 2009-12-11 2011-06-16 株式会社トプコン Dispositif, procédé et programme de traitement de données de groupes de points
CN101833786B (zh) * 2010-04-06 2011-12-28 清华大学 三维模型的捕捉及重建方法和系统
JP5343042B2 (ja) * 2010-06-25 2013-11-13 株式会社トプコン 点群データ処理装置および点群データ処理プログラム
JP5462093B2 (ja) * 2010-07-05 2014-04-02 株式会社トプコン 点群データ処理装置、点群データ処理システム、点群データ処理方法、および点群データ処理プログラム
JP5606340B2 (ja) 2011-01-14 2014-10-15 株式会社東芝 構造物計測システム
JP5711039B2 (ja) 2011-04-27 2015-04-30 株式会社トプコン 三次元点群位置データ処理装置、三次元点群位置データ処理方法、三次元点群位置データ処理システムおよびプログラム
US20150254861A1 (en) * 2012-10-18 2015-09-10 T. Eric Chornenky Apparatus and method for determining spatial information about environment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04133184A (ja) * 1990-09-26 1992-05-07 Secom Co Ltd 室内3次元モデル作成方法とその装置
JPH07152810A (ja) * 1993-11-26 1995-06-16 Toshiba Corp 環境モデル作成装置
JP2003323461A (ja) * 2002-05-01 2003-11-14 Mitsubishi Heavy Ind Ltd Cadデータ作成装置および情報加工方法
WO2005088244A1 (fr) * 2004-03-17 2005-09-22 Sony Corporation Detecteur de plans, procede de detection de plans, et appareil robotise avec detecteur de plans
JP2006098256A (ja) * 2004-09-30 2006-04-13 Ricoh Co Ltd 3次元サーフェスモデル作成システム、画像処理システム、プログラム及び情報記録媒体
JP2012103134A (ja) * 2010-11-10 2012-05-31 Topcon Corp 構造物モデル作成装置及びその方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3226212A4 *

Also Published As

Publication number Publication date
US20170330364A1 (en) 2017-11-16
EP3226212A4 (fr) 2017-10-04
EP3226212A1 (fr) 2017-10-04
EP3226212B1 (fr) 2020-07-08
US10127709B2 (en) 2018-11-13
CN107004302A (zh) 2017-08-01
JP6238183B2 (ja) 2017-11-29
CN107004302B (zh) 2020-09-15
JPWO2016084389A1 (ja) 2017-08-31

Similar Documents

Publication Publication Date Title
JP6238183B2 (ja) モデリング装置、3次元モデル生成装置、モデリング方法、プログラム
JP6286805B2 (ja) モデリング装置、3次元モデル生成装置、モデリング方法、プログラム、レイアウトシミュレータ
US11210806B1 (en) Using satellite imagery to enhance a 3D surface model of a real world cityscape
JP6489552B2 (ja) シーン内の寸法を求める方法
Hong et al. Semi-automated approach to indoor mapping for 3D as-built building information modeling
US9972067B2 (en) System and method for upsampling of sparse point cloud for 3D registration
AU2014222457B2 (en) Image processing
JP5799273B2 (ja) 寸法計測装置、寸法計測方法、寸法計測システム、プログラム
TW201514446A (zh) 三維量測模擬取點系統及方法
KR20140102108A (ko) 3차원 실내공간 정보 구축을 위한 라이다 데이터 모델링 방법 및 시스템
JP6185385B2 (ja) 空間構造推定装置、空間構造推定方法及び空間構造推定プログラム
JP2016217941A (ja) 3次元データ評価装置、3次元データ測定システム、および3次元計測方法
JP2012141758A (ja) 三次元データ処理装置、方法及びプログラム
JP6132246B2 (ja) 寸法計測方法
Afghantoloee et al. Coevrage Estimation of Geosensor in 3d Vector Environments
Boerner et al. Brute force matching between camera shots and synthetic images from point clouds
JP6595100B2 (ja) 接続要素の製作寸法を決定する方法及びシステム
KR101621858B1 (ko) 정점과 구조물이 위치하는 지점 간의 수평거리를 산출하는 장치 및 방법
JP2018041169A (ja) 情報処理装置およびその制御方法、プログラム
Majid et al. Three-dimensional recording of bastion middleburg monument using terrestrial laser scanner
Mohd Yusoff et al. Optimal camera placement for 3D environment
JP6530685B2 (ja) 物体検出装置、物体検出システム、物体検出方法および物体検出プログラム
Kachanov et al. Development of a method for the synthesis of a three-dimensional model of power transmission lines for visualization systems of training complexes
KR100910203B1 (ko) 자율이동플랫폼 장치의 지형영상 출력장치, 이를 구비하는 자율이동플랫폼 장치 및 자율이동플랫폼 장치의 지형영상 출력방법
Rzonca et al. Lidarometry as a Variant of Integration of Photogrammetric and Laser Scanning Data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15863391; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2016561259; Country of ref document: JP; Kind code of ref document: A)
REEP Request for entry into the european phase (Ref document number: 2015863391; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 15528174; Country of ref document: US)
NENP Non-entry into the national phase (Ref country code: DE)