CN114755696A - Surveying and mapping method, device, equipment and storage medium - Google Patents

Surveying and mapping method, device, equipment and storage medium

Info

Publication number
CN114755696A
Authority
CN
China
Prior art keywords
contour
points
acquisition
indoor scene
determining
Prior art date
Legal status
Pending
Application number
CN202011561983.8A
Other languages
Chinese (zh)
Inventor
萧敦育
管理
华刚
Current Assignee
Wormpex Technology Beijing Co Ltd
Original Assignee
Wormpex Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Wormpex Technology Beijing Co Ltd filed Critical Wormpex Technology Beijing Co Ltd
Priority to CN202011561983.8A
Publication of CN114755696A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/89: Lidar systems specially adapted for specific applications for mapping or imaging
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a surveying and mapping method, device, equipment and storage medium. The method includes: determining a plurality of acquisition points in an indoor scene; for each acquisition point, measuring the distances between the acquisition point and the boundary of the indoor scene in a plurality of orientations and determining contour points of the indoor scene from those distances; and determining a contour line from the contour points. The contour points of the plurality of acquisition points are stitched to form the contour points of the whole indoor scene before the step of determining the contour line, or the contour lines of the plurality of acquisition points are stitched to form the overall contour line of the whole indoor scene after that step. With this surveying and mapping method, a plan view of the indoor scene can be drawn automatically.

Description

Surveying and mapping method, device, equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a mapping method, a mapping apparatus, a mapping device, and a computer storage medium.
Background
In existing store surveying practice, a designer must go to the site, survey the data manually and record it by hand on paper, then return to the design office to enter the data and manually draw an engineering red-line drawing. The work items and durations for a typical storefront drawing include: plan collection and basic mapping (about 6 hours), plan design drawing (about 5 hours), manual construction drawing (about 5 hours), manual correction of automatically generated construction drawings (about 3 hours), manual camera spot-location design (about 0.5 hours), and rework to resolve items that could not be surveyed earlier (about 5 hours). The existing approach therefore suffers from pain points such as a long turnaround time and low surveying efficiency for unusual floor plans.
Disclosure of Invention
The invention aims to provide a novel surveying and mapping method, device, equipment and storage medium.
The purpose of the invention is realized by adopting the following technical scheme. The mapping method provided by the invention comprises the following steps: determining a plurality of acquisition points in an indoor scene; for each acquisition point, measuring distances between the acquisition point and the boundary of the indoor scene in a plurality of orientations, and determining contour points of the indoor scene according to the distances; determining a contour line according to the contour points; wherein the contour points of the plurality of acquisition points are stitched to form the contour points of the whole indoor scene before the step of determining a contour line according to the contour points, or the contour lines of the plurality of acquisition points are stitched to form the overall contour line of the whole indoor scene after the step of determining a contour line according to the contour points.
The object of the invention can be further achieved by the following technical measures.
The aforementioned mapping method, which measures distances between the acquisition points and boundaries of the indoor scene in a plurality of orientations, comprises: rotating the distance measuring mechanism, and measuring the distances between the acquisition point and the boundaries of the indoor scene in a plurality of directions in the rotating process; the determining contour points of the indoor scene according to the distance comprises: and determining a plurality of points on a plane as the contour points according to the distance.
The aforementioned mapping method, wherein the determining a contour line according to the contour point includes: and determining a line which passes through the contour points simultaneously according to the contour points as the contour line, and/or determining a fitted line as the contour line according to the contour points.
The aforementioned mapping method, wherein the determining a contour line according to the contour point includes: and connecting the contour points according to the distance of the contour points to obtain the contour line.
In the above mapping method, the connecting the contour points according to the intervals of the contour points to obtain the contour line includes connecting according to one or more of the following connecting conditions: connecting the contour points with the nearest contour points; if the distance between the contour points is smaller than a preset first distance threshold value, connecting the lines, otherwise, not connecting the lines; sequentially connecting all contour points to form one or more closed figures; and if the distances among the contour points are larger than a preset second distance threshold value, the contour points are not connected.
The aforementioned mapping method, wherein the determining a contour line according to the contour point includes: dividing a plurality of the contour points into a plurality of point groups; for each point group, converting the contour points of the same point group into a contour line segment; and connecting a plurality of contour line segments determined by a plurality of point groups to form the contour line.
The aforementioned mapping method, wherein the determining a contour line according to the contour point includes: recording the acquisition sequence or acquisition time of the contour points; and connecting the contour points in sequence according to the acquisition sequence or the acquisition time to obtain the contour line.
The aforementioned mapping method, wherein the determining of the contour line according to the contour point includes one or more of the following steps: distinguishing the indoor object represented by the contour point by machine learning; and judging whether a plurality of contour points need to be connected or not and/or judging a connection mode which needs to be adopted by utilizing machine learning, wherein the connection mode comprises an interpolation mode or a fitting mode.
The aforementioned mapping method, wherein the stitching the contour points of the plurality of acquisition points or the stitching the contour lines of the plurality of acquisition points comprises: determining relative positional relationships between a plurality of said acquisition points; and splicing the contour points or the contour lines corresponding to the plurality of acquisition points according to the relative position relation.
The aforementioned mapping method, the method further comprising: moving a ranging mechanism between a plurality of said acquisition points and acquiring tracking data while moving, said tracking data comprising one or more of position data of said acquisition points, displacement data while moving between a plurality of said acquisition points, a tracking trajectory, an indoor scene image.
The aforementioned mapping method, wherein the stitching of the contour points of the plurality of acquisition points or the stitching of the contour lines of the plurality of acquisition points comprises: respectively determining a feature point or a feature area of each acquisition point according to the tracking data through feature extraction, wherein the feature point/feature area has feature data; associating the feature points/feature areas of different acquisition points that have the same or similar feature data to form feature point pairs/feature area pairs; and stitching the contour lines or contour points of the plurality of acquisition points according to the feature point pairs/feature area pairs.
The aforementioned mapping method, the method further comprising: synchronizing the collected multiple data to eliminate the collection time difference; wherein the plurality of data that is synchronized comprises one or more of the distance between the acquisition point and a boundary of the indoor scene, the position data of the acquisition point, the displacement data, the tracking data, an image of the indoor scene.
The aforementioned mapping method, the method further comprising: during the process of measuring the distances between the acquisition points and the boundary of the indoor scene in a plurality of directions at each acquisition point, acquiring one or more of displacement data and distance measurement direction data of a distance measurement mechanism; the determining contour points of the indoor scene according to the distance comprises: and converting the measured distance into the distance on a horizontal plane and/or with a rotation center as the acquisition point according to one or more of the displacement data and the ranging azimuth data of the ranging mechanism.
The mapping method described above, wherein the acquiring ranging orientation data comprises: collecting a horizontal direction and a vertical direction during ranging; the determining contour points of the indoor scene according to the distance comprises: for each of the horizontal orientations, projecting the measured distance to a horizontal plane according to the vertical orientation to obtain the distance on the horizontal plane; determining the contour points according to the distances on the horizontal plane and the corresponding horizontal orientations.
The aforementioned mapping method, the method further comprising one or more of the following steps: determining the area of the indoor scene according to the overall contour line of the indoor scene; and determining the side length of all or part of the indoor scene according to the contour line.
The aforementioned mapping method, the method further comprising: and determining the contour lines of a plurality of heights in the indoor scene, and superposing to form a three-dimensional graph of the indoor scene.
The aforementioned mapping method, the method further comprising: and acquiring one or more of a panoramic picture and a point cloud picture of the indoor scene to obtain information of the top, the bottom and the pipeline of the indoor scene.
The object of the present invention is also achieved by the following technical means. According to the invention, a surveying device is proposed, comprising: an acquisition module comprising a ranging unit to measure, for each of a plurality of acquisition points, a distance between the acquisition point and a boundary of an indoor scene in a plurality of orientations; wherein the acquisition points are predetermined in the indoor scene; the contour point determining module is used for determining contour points of the indoor scene according to the distance; the contour line determining module is used for determining a contour line according to the contour points; a stitching module for stitching the contour points of the plurality of acquisition points to form contour points of the whole of the indoor scene before the contour line determination module performs the aforementioned processing, or stitching the contour lines of the plurality of acquisition points to form the contour lines of the whole of the indoor scene after the contour line determination module performs the aforementioned processing.
The object of the invention can be further achieved by the following technical measures.
In the aforementioned surveying device, the acquisition module further includes: a rotation unit for rotating the distance measuring unit; the distance measurement unit is specifically used for measuring distances between the acquisition point and boundaries of the indoor scene in multiple directions in a rotating process; the contour point determination module is specifically configured to determine a plurality of points located on a plane as the contour points according to the distance.
In the aforementioned surveying and mapping apparatus, the contour line determining module is specifically configured to: and determining a line passing through the contour points simultaneously according to the contour points as the contour line, and/or determining a fitting line according to the contour points as the contour line.
In the aforementioned surveying device, the contour line determining module is specifically configured to: and connecting the contour points according to the distance of the contour points to obtain the contour line.
In the aforementioned surveying and mapping apparatus, the contour line determining module is specifically configured to perform a line connection according to one or more of the following line connection conditions: connecting the contour points with the nearest contour points; if the distances among the contour points are smaller than a preset first distance threshold value, connecting the lines, and otherwise, not connecting the lines; sequentially connecting all contour points to form one or more closed figures; and if the distance between the contour points is larger than a preset second distance threshold value, the contour points are not connected.
In the aforementioned surveying and mapping apparatus, the contour line determining module is specifically configured to: dividing a plurality of the contour points into a plurality of point groups; for each point group, converting the contour points of the same point group into a contour line segment; and connecting a plurality of contour line segments determined by a plurality of point groups to form the contour line.
In the aforementioned surveying device, the acquisition module further includes: the acquisition sequence recording unit or the acquisition time recording unit is respectively used for recording the acquisition sequence or the acquisition time of the contour points; the contour line determination module is specifically configured to: and connecting the contour points in sequence according to the acquisition sequence or the acquisition time to obtain the contour line.
In the aforementioned surveying device, the contour line determining module includes one or more of the following units: a first machine learning unit configured to discriminate an indoor object represented by the contour point by machine learning; and the second machine learning unit is used for judging whether a plurality of contour points need to be connected and/or judging a connection mode which needs to be adopted by utilizing machine learning, wherein the connection mode comprises an interpolation mode or a fitting mode.
In the aforementioned surveying and mapping device, the splicing module is specifically configured to: determining relative positional relationships between a plurality of said acquisition points; and splicing the contour points or the contour lines corresponding to the plurality of acquisition points according to the relative position relation.
The aforementioned surveying device, the acquisition module further comprises a tracking data acquisition unit for: acquiring tracking data while moving the acquisition module (or moving the surveying device as a whole) between a plurality of the acquisition points; wherein the tracking data comprises one or more of position data of the acquisition points, displacement data when moving between a plurality of the acquisition points, a tracking trajectory, an indoor scene image.
In the aforementioned surveying and mapping device, the splicing module is specifically configured to: respectively determining a feature point or a feature area of each acquisition point according to the tracking data through feature extraction, wherein the feature point/the feature area has feature data; associating the characteristic points/the characteristic areas of different acquisition points with the same or similar characteristic data to form characteristic point pairs/characteristic area pairs; and splicing the contour lines or contour points of the plurality of acquisition points according to the feature point pairs/feature region pairs.
The aforementioned mapping apparatus, the apparatus further comprising: the synchronization module is used for synchronizing the acquired various data so as to eliminate the acquisition time difference; wherein the plurality of data that is synchronized comprises one or more of the distance between the acquisition point and a boundary of the indoor scene, the position data of the acquisition point, the displacement data, the tracking data, an image of the indoor scene.
In the surveying and mapping apparatus, the acquisition module further includes one or more of a ranging offset acquisition unit and a ranging azimuth acquisition unit; the distance measurement offset acquisition unit is used for acquiring displacement data of the distance measurement unit in the process that each acquisition point measures the distance between the acquisition point and the boundary of the indoor scene in multiple directions; the distance measurement azimuth acquisition unit is used for acquiring distance measurement azimuth data in the process that each acquisition point performs the measurement to measure the distance between the acquisition point and the boundary of the indoor scene in a plurality of azimuths; the contour point determination module includes a normalization unit configured to: and converting the measured distance into the distance on a horizontal plane and/or with a rotation center as the acquisition point according to one or more of the displacement data and the ranging azimuth data of the ranging unit.
In the aforementioned surveying device, the ranging azimuth acquisition unit is specifically configured to: acquire the horizontal orientation and the vertical orientation when the ranging unit measures the distance; the normalization unit is specifically configured to: for each of the horizontal orientations, project the measured distance onto a horizontal plane according to the vertical orientation to obtain the distance on the horizontal plane, and determine the contour points according to the distance on the horizontal plane and the corresponding horizontal orientation.
The aforementioned mapping apparatus, the apparatus further comprising one or more of: the area determining unit is used for determining the area of the indoor scene according to the overall contour line of the indoor scene; and the side length determining unit is used for determining the side length of all or part of the indoor scene according to the contour line.
The aforementioned mapping apparatus, the apparatus further comprising: and the superposition module is used for determining the contour lines of the heights in the indoor scene by utilizing the acquisition module, the contour point determination module, the contour line determination module and the splicing module and forming a three-dimensional graph of the indoor scene through superposition.
The aforementioned surveying device, the acquisition module further comprises: one or more of a panoramic picture acquisition unit and a point cloud picture acquisition unit; the panoramic picture acquisition unit is used for acquiring a panoramic picture of the indoor scene so as to obtain information of the top, the bottom and a pipeline of the indoor scene; the point cloud picture acquisition unit is used for acquiring the point cloud pictures of the indoor scene and obtaining the information of the top, the bottom and the pipeline of the indoor scene.
The aforementioned mapping device, the device comprising: one or more of a tracking camera, a depth camera; the tracking camera and/or the depth camera comprises one or more of the tracking data acquisition unit, the ranging offset acquisition unit and the ranging direction acquisition unit; the tracking camera and the depth camera are fixedly connected with the distance measuring unit and are used for synchronous rotation.
The object of the present invention is also achieved by the following technical means. According to the invention, a surveying device is proposed, comprising: a memory for storing non-transitory computer readable instructions; and a processor for executing the computer readable instructions, such that the computer readable instructions, when executed by the processor, implement any one of the possible mapping methods described above.
The object of the present invention is also achieved by the following technical means. A computer storage medium according to the present invention includes computer instructions that, when executed on a device, cause the device to perform any one of the possible mapping methods described above.
Compared with the prior art, the invention has obvious advantages and beneficial effects. By the technical scheme, the surveying and mapping method, the surveying and mapping device, the equipment and the storage medium provided by the invention at least have the following advantages and beneficial effects:
(1) the invention can scan the indoor scene to be surveyed on site and automatically draw a plan view of the indoor scene on site;
(2) by collecting data at a plurality of positions in the indoor space and automatically stitching the collected results, more accurate surveying results can be obtained;
(3) by synchronizing data across multiple pieces of hardware, more accurate surveying results are obtained;
(4) by normalizing the collected data, errors caused by offsets of the rangefinder can be eliminated, and surveying can even be carried out with a handheld acquisition device.
The foregoing description is only an overview of the technical solutions of the present invention, and in order to make the technical means of the present invention more clearly understood, the present invention may be implemented in accordance with the content of the description, and in order to make the above and other objects, features, and advantages of the present invention more clearly understandable, the following preferred embodiments are described in detail with reference to the accompanying drawings.
Drawings
FIG. 1 is a block flow diagram of a mapping method of one embodiment of the present invention;
FIG. 2 is a schematic illustration of contour points obtained in one embodiment of the present invention;
FIG. 3 is a schematic illustration of a red line graph obtained in one embodiment of the present invention;
FIG. 4 is a schematic diagram of a top view of an indoor scene obtained in one embodiment of the present invention;
FIG. 5 is a schematic diagram of a mapping device in accordance with one embodiment of the present invention;
FIG. 6 is a schematic diagram of a surveying device according to another embodiment of the invention;
FIG. 7 is a schematic structural diagram of an acquisition module according to an embodiment of the present invention;
FIG. 8 is a schematic view of a surveying device according to a further embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means and effects of the present invention adopted to achieve the predetermined objects, the following detailed description will be given of the embodiments, structures, features and effects of the surveying and mapping method, the surveying and mapping apparatus, the surveying and mapping device and the computer storage medium according to the present invention, in conjunction with the accompanying drawings and the preferred embodiments.
FIG. 1 is a schematic flow chart diagram of one embodiment of a mapping method of the present invention. Referring to fig. 1, the mapping method of the present invention mainly includes the following steps:
in step S10, one or more acquisition points (also referred to as acquisition sites, acquisition locations) are determined in an indoor scene (also referred to as an indoor space). Generally, the number of acquisition points is plural.
In step S11, for each acquisition point, the boundary of the indoor scene is scanned, i.e., the distances between the acquisition point and the boundary of the indoor scene are measured in multiple orientations. Note that the boundary of the indoor scene includes boundaries formed by walls as well as the boundaries of indoor objects such as pillars. Optionally, the distance to the boundary of the indoor scene is measured point to point. Optionally, ranging is performed based on directional light; specifically, a directional beam is generated and the distance is measured point to point, for example ranging based on laser or infrared light. Optionally, the ranging is performed using a ranging mechanism such as a laser rangefinder (also referred to as a laser ruler).
And step S12, determining contour points of the indoor scene according to the distance between the measured acquisition points and the boundary of the indoor scene.
In step S13, a contour line is determined from the contour points.
In step S14, the contour points corresponding to the plurality of acquisition points are stitched to form the contour points of the whole indoor scene before the step of determining contour lines from the contour points in step S13, or the contour lines corresponding to the plurality of acquisition points are stitched to form the contour lines of the whole indoor scene after the step of determining contour lines from the contour points in step S13.
Optionally, the foregoing step S12 further includes: the contour points are drawn in a two-dimensional plane. Optionally, the foregoing step S13 further includes: the contour lines are drawn in a two-dimensional plane.
It should be noted that the foregoing embodiment of the present invention adopts a 2D scanning approach and generates a two-dimensional plan in which the contour line represents the indoor scene. The contour line in fact represents the boundary of the indoor scene, so the drawing may be called a boundary map or a two-dimensional scan map; because building boundaries are conventionally represented by red-line drawings in construction, it may also be called a red-line map or a construction red-line map. Further, the aforementioned method in fact determines a plan view at a certain height, which may be preset or adjusted according to the actual situation of the indoor scene to be surveyed.
By utilizing the surveying and mapping method provided by the invention, the indoor scene to be surveyed can be scanned on site, and the plan view of the indoor scene can be automatically drawn on site.
In some embodiments of the present invention, the aforementioned step S11 includes a circular field scan, specifically: the ranging mechanism is rotated at one or more acquisition points, the rotation defining a rotation plane, and the distances between the acquisition point and the boundary of the indoor scene are measured in a plurality of orientations during the rotation. The aforementioned step S12 then includes: determining a plurality of points as contour points from the measured distances. Optionally, the determined contour points are a plurality of points located on one plane. Optionally, a circular field scan with constant speed and constant time interval is adopted, i.e., the rotation speed is a preset constant and the time interval between distance measurements during rotation is a preset constant, so that step S12 specifically includes: for each distance measurement, obtaining the position of a point on the rotation plane as the contour point of that measurement, from the measured distance to the acquisition point and the azimuth angle corresponding to the preset rotation speed and time interval. It is noted that although this example uses polar coordinates to represent the positions of the contour points, a Cartesian coordinate system or any other coordinate system may also be used, or the positions may be converted into such a system.
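The following is a minimal Python sketch, not part of the patent, of how such a constant-speed, constant-interval rotary scan can be turned into planar contour points; the function and parameter names (contour_points_from_scan, angular_speed_deg_s, interval_s) are illustrative assumptions:

    import math

    def contour_points_from_scan(distances, angular_speed_deg_s, interval_s,
                                 origin=(0.0, 0.0), start_angle_deg=0.0):
        # distances: ranges (metres) measured at a fixed time interval during rotation.
        # angular_speed_deg_s, interval_s: preset rotation speed and sampling interval,
        # so the i-th measurement is taken at azimuth
        # start_angle_deg + i * angular_speed_deg_s * interval_s.
        # origin: planar position of the acquisition point.
        points = []
        for i, d in enumerate(distances):
            azimuth = math.radians(start_angle_deg + i * angular_speed_deg_s * interval_s)
            # polar (distance, azimuth) about the acquisition point -> Cartesian (x, y)
            points.append((origin[0] + d * math.cos(azimuth),
                           origin[1] + d * math.sin(azimuth)))
        return points

The same conversion applies when the azimuth is measured directly during rotation rather than derived from the preset speed and time interval.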
Optionally, during the rotation, the plane azimuth angle during the distance measurement is acquired while the distance measurement is performed, so that the contour point is determined in the plane according to the distance between the contour point and the acquisition point and the plane azimuth angle of the contour point.
Optionally, during the rotation, ranging is performed many times at fixed time intervals, thereby forming a large number of contour points that represent the boundary of the indoor scene. Optionally, at one acquisition point the mechanism may rotate only one revolution and perform ranging within that revolution, or rotate multiple revolutions and perform ranging in every revolution.
In some embodiments of the present invention, the mapping method of the present invention utilizes a mapping device that includes a positioning mechanism and a rotation mechanism. The positioning mechanism is a mechanism such as a tripod or a handheld support bar for supporting and fixing the whole surveying and mapping device, or at least the ranging mechanism within it, at an acquisition point. The rotation mechanism is used for driving the ranging mechanism, such as a laser ruler, and other acquisition mechanisms to rotate relative to the positioning mechanism. Optionally, the ranging mechanism and the other acquisition mechanisms are fixedly connected to the rotation mechanism in a non-detachable or detachable manner.
Note that the foregoing step S13 may refer to determining the contour line of the part of the indoor space corresponding to one acquisition point from the contour points of that acquisition point, or to determining the contour line of the whole indoor scene from the contour points of the whole indoor scene. Indeed, according to the foregoing step S14, the method of the present invention may determine the contour line after stitching, or determine the contour line before stitching.
Alternatively, the determining the contour line in the aforementioned step S13 includes: a plurality of contour points are connected in a two-dimensional planar space by lines.
Note that "connecting" a plurality of contour points with lines (also referred to as "drawing connection lines") may be done by interpolation or by fitting. In some embodiments of the present invention, the aforementioned step S13 includes: determining, from a plurality of contour points, a line that passes through the contour points as the contour line, for example connecting adjacent points with straight segments, or connecting a plurality of points with a Lagrange interpolation curve; and/or using a line fitted from the plurality of contour points as the contour line, where the fitted line only needs to be generally close to the contour points and need not pass through every one of them, for example fitting a straight line or a least-squares curve to a plurality of adjacent contour points. "And/or" here means that the contour line may be obtained entirely by interpolation, entirely by fitting, or with one part of the contour points connected by interpolation and another part connected by fitting.
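As an illustration only (not taken from the patent; names such as fit_segment are assumed), the difference between interpolation-style and fitting-style connection can be sketched in Python, using a least-squares line for the fitting case:

    import numpy as np

    def interpolate_polyline(points):
        # Interpolation-style connection: the contour passes through every contour
        # point, here simply as straight segments between consecutive points.
        return [(points[i], points[i + 1]) for i in range(len(points) - 1)]

    def fit_segment(points):
        # Fitting-style connection: a least-squares line that only needs to be close
        # to the points, returned as the segment between the projections of the
        # extreme points onto the fitted line.
        pts = np.asarray(points, dtype=float)
        a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)   # y = a*x + b (swap axes for near-vertical runs)
        direction = np.array([1.0, a]) / np.hypot(1.0, a)
        anchor = np.array([0.0, b])                  # a point on the fitted line
        t = (pts - anchor) @ direction               # scalar projections along the line
        start, end = anchor + t.min() * direction, anchor + t.max() * direction
        return tuple(start), tuple(end)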
As an optional specific example: through steps S11 and S12, a large number of contour points approximating the contour of the indoor scene are obtained at the plurality of acquisition points, which may be referred to as a point map; then, through step S14, the point maps of the plurality of acquisition points are stitched where their contours coincide; then, in step S13, collinear points in the point map are converted into line segments; finally, the result is converted into a red-line drawing format.
In some embodiments of the present invention, the aforementioned step S13 includes: and connecting the contour points according to the intervals among the contour points to obtain the contour line.
In some embodiments of the present invention, the step S13 specifically includes: judging one or more of the following connecting conditions during connecting, and only connecting the contour points meeting the connecting conditions:
connecting the contour points with the nearest distance;
if the distance between the plurality of contour points is smaller than a preset distance threshold (which can be called a connection threshold or a first distance threshold), performing line connection, otherwise, not performing line connection;
connecting all contour points in sequence to form one or more closed graphs, wherein for example, a wall in an indoor scene forms one closed graph, and an upright column forms another closed graph; alternatively, all the contour points may not be connected, but a distance threshold (which may be called a disconnection threshold or a second distance threshold) may be preset, and if the distance between the contour points is greater than the preset second distance threshold, the contour points are not connected.
It is noted that the first spacing threshold is set to identify collinear contour points, while the second spacing threshold is set to exclude cases in which points should not be connected. For example, in some examples a circular field scan with equal time intervals is used, so the spacing between two adjacent contour points varies with the distance from the room boundary to the acquisition point; when the boundary is far from the acquisition point, the measured contour positions may be less accurate, so measurement points far from the acquisition point can be excluded by setting the second spacing threshold. As another example, near an indoor object such as a pillar, door or window, two contour points with adjacent scan times may belong to the indoor boundary and to the indoor object respectively and should not be connected, so setting the second spacing threshold allows the indoor object to be separated from the indoor boundary.
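A minimal Python sketch of these spacing-based connection conditions follows; the threshold values and the function name connect_by_spacing are illustrative assumptions, not values from the patent:

    import math

    def connect_by_spacing(points, first_threshold=0.3, second_threshold=2.0):
        # Connect each contour point to its nearest neighbour, but only when the
        # spacing is below the first (connection) threshold, and never when it
        # exceeds the second (disconnection) threshold. Thresholds in metres,
        # purely illustrative.
        if len(points) < 2:
            return []
        def dist(p, q):
            return math.hypot(p[0] - q[0], p[1] - q[1])
        edges = set()
        for i, p in enumerate(points):
            j, d = min(((k, dist(p, q)) for k, q in enumerate(points) if k != i),
                       key=lambda kd: kd[1])
            if d < first_threshold and d <= second_threshold:
                edges.add(tuple(sorted((i, j))))
        return sorted(edges)   # index pairs forming the contour segments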
In some embodiments of the present invention, the step S13 includes performing connection by dividing into groups, which specifically includes: dividing a plurality of contour points into a plurality of point groups, for example, obtaining the point groups of the contour points according to the positions of the contour points by using Hough Transform and random sample consensus (RANSAC); then, for each point group, converting the contour points of the same point group into a contour line segment; then, a plurality of contour line segments determined by the different point groups are connected to form an overall contour line. Optionally, the connecting the contour line segments includes connecting end points of a plurality of contour line segments by an interpolation method or a fitting method to form an overall contour line.
It should be noted that the intersections of adjoining line segments having different slopes form the corner (inflection) points of the contour.
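As a hedged illustration of the grouping step, the sketch below uses a simplified RANSAC-style line grouping rather than the exact Hough/RANSAC procedure the patent may use; inlier_tol and min_group are assumed parameters:

    import random
    import numpy as np

    def group_collinear(points, inlier_tol=0.05, min_group=8, iterations=200, seed=0):
        # Greedy RANSAC-style grouping: repeatedly find the line supported by the
        # most remaining contour points and split those inliers off as one point
        # group; each group can then be converted into one contour line segment.
        rng = random.Random(seed)
        remaining = [np.asarray(p, dtype=float) for p in points]
        groups = []
        while len(remaining) >= min_group:
            best = []
            for _ in range(iterations):
                p, q = rng.sample(remaining, 2)
                d = q - p
                norm = np.linalg.norm(d)
                if norm == 0.0:
                    continue
                normal = np.array([-d[1], d[0]]) / norm
                inliers = [r for r in remaining if abs((r - p) @ normal) < inlier_tol]
                if len(inliers) > len(best):
                    best = inliers
            if len(best) < min_group:
                break
            groups.append(best)
            kept = {id(r) for r in best}
            remaining = [r for r in remaining if id(r) not in kept]
        return groups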
In some embodiments of the present invention, the aforementioned step S13 performs the connection pairwise, specifically: starting from one contour point, another contour point satisfying the connection conditions is found and connected to it to obtain a line segment; then a further contour point satisfying the connection conditions with an endpoint of that segment is found and connected, and so on until all contour points satisfying the connection conditions have been connected.
In some embodiments of the present invention, unlike the foregoing embodiments in which the connection is made according to the spacing between contour points, the connection does not use the spacing between contour points, or uses not only the spacing but also the acquisition order or acquisition time. Specifically, the step S13 includes: recording the acquisition order or acquisition time of the contour points; and connecting the contour points in that order to obtain the contour line. It should be noted that, while connecting the contour points in sequence, one or more of the foregoing connection conditions may also be checked, and only contour points satisfying the connection conditions are connected.
Further, in some examples, since the acquisition order/acquisition time at different acquisition points is independent, connecting the contour points purely by acquisition order/acquisition time could divide the indoor scene into multiple regions instead of forming one overall plan view. For this reason, in some embodiments of the present invention, the aforementioned step S13 further includes: if the spacing between contour points exceeds a certain threshold (for example, the second spacing threshold), no connection is made.
In some embodiments of the present invention, the determination of the contour line according to the contour points in the foregoing step S13 is performed by using a machine learning method. As an alternative specific embodiment, the foregoing step S13 includes one or more of the following steps:
using machine learning to determine the indoor objects represented by the contour points, such as whether the objects are "walls", "columns", "doors", "windows", etc.;
and judging whether a plurality of contour points need to be connected or not and/or judging which connection mode or connection modes need to be adopted by utilizing machine learning, wherein the connection modes comprise the interpolation mode or the fitting mode and the like.
Optionally, in some embodiments of the present invention, an interface for manually adjusting the contour line and an interactive interface are also provided.
In some embodiments of the present invention, the stitching process of the foregoing step S14 includes: determining the relative positional relationship among the plurality of acquisition points; and stitching the contour points or contour lines corresponding to the plurality of acquisition points according to that relative positional relationship, so as to form the scan result and plan view of the whole indoor scene. Note that the contour points can be stitched before the connection step, or the contour lines can be stitched after the connection step. It should also be noted that when contour points or contour lines are stitched, the plan drawings may be stitched and/or the position data may be merged; for example, stitching the contour points includes stitching the contour points in the two-dimensional plan and/or merging the position data of the contour points.
Further, in some embodiments of the present invention, stitching is performed using tracking. In some embodiments, the mapping method further includes: in addition to performing step S11 of measuring, at each acquisition point, the distances between the acquisition point and the boundary of the indoor scene in a plurality of orientations, the ranging mechanism is also moved between the plurality of acquisition points, and tracking data is acquired while moving. The tracking data includes one or more of the position data of the acquisition points, motion data such as displacement speed and angular velocity during movement, the tracking trajectory, and indoor scene images. It should be noted that the manner of moving between the acquisition points is not limited; for example, a robot may carry the acquisition device or the whole mapping system so that it automatically moves and stays stably at the acquisition points in the indoor scene, or a person may manually carry the acquisition device or the whole mapping system through the indoor scene. The aforementioned step S14 then includes: associating (also referred to as matching) the contour data (contour lines or contour points) of the plurality of acquisition points based on the tracking data, so as to stitch the contour data of the plurality of acquisition points.
As a specific example, the foregoing step S14 specifically includes:
First, using feature extraction, a feature point of each acquisition point is determined from the tracking data, or a feature region of each acquisition point is determined, the feature point/feature region having feature data describing its properties. For example, feature extraction applied to the tracking data can identify regions such as corners, wall protrusions or wall start and end points, windows, doors and the like in the indoor scene. The feature points/feature regions determined through tracking are treated as the coinciding portions of the contours and serve as the basis for stitching. Note that the feature points and feature regions may be of various types: in one optional example, feature points among the plurality of contour points of each acquisition point are determined by feature extraction; in another optional example, feature points in the indoor scene image of each acquisition point are determined by feature extraction; in another optional example, a feature region in the vicinity of a contour point of each acquisition point is determined by feature extraction; in another optional example, a feature region containing a plurality of feature points of each acquisition point is determined by feature extraction; in another optional example, a feature region containing a section of contour line of each acquisition point is determined by feature extraction; in yet another optional example, a feature region in the indoor scene image of each acquisition point is determined by feature extraction. It should be noted that the present invention does not limit the feature extraction and feature matching methods used.
Then, the feature points/feature regions of different acquisition points that have the same or similar feature data are associated to form feature point pairs or feature region pairs.
And then, splicing the contour data of the plurality of acquisition points according to the characteristic point pairs or the characteristic region pairs.
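A minimal Python sketch of stitching with feature point pairs, assuming the pairs have already been matched; the 2D rigid-transform estimation below is a standard least-squares/SVD approach offered for illustration and is not stated in the patent:

    import numpy as np

    def estimate_rigid_2d(src, dst):
        # Least-squares rotation R and translation t mapping the src feature
        # points onto their paired dst feature points (SVD-based).
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # keep a proper rotation (no reflection)
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cd - R @ cs

    def stitch_contour(points_b, pairs_b, pairs_a):
        # Bring acquisition point B's contour points into A's frame using the
        # transform estimated from matched feature point pairs (B -> A).
        R, t = estimate_rigid_2d(pairs_b, pairs_a)
        return [tuple(R @ np.asarray(p, float) + t) for p in points_b]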
Optionally, the step S14 further includes: after associating the contour data of the plurality of acquisition points, performing fine adjustment according to a threshold, which may be referred to as the fine-tuning threshold. As an optional specific example, in the foregoing example of stitching with feature points, the fine-tuning threshold is a threshold on the distance difference of a feature point pair. Optionally, this distance-difference threshold may be adjusted according to the stitching result.
It should be noted that the data differences between multiple acquisition points may be too large to automatically link them into a closed contour. For this reason, in some examples, the mapping method of the examples of the invention further includes: judging whether the stitching of the contour data of all acquisition points in step S14 can be completed, or whether the fine-tuning range is exceeded (for example, the distance-difference fine-tuning threshold is exceeded so that the system cannot automatically complete the stitched contour region of step S14); if the stitching cannot be completed or the fine-tuning range is exceeded, a prompt signal requiring manual correction is issued. If manual stitching is not possible, or a preset time has elapsed after the prompt signal, an error is reported and acquisition is repeated, or acquisition points covering the region needed for stitching are added and acquisition is performed at the newly added points. Note that in other examples, once the stitching cannot be completed or the fine-tuning range is exceeded, an error is reported directly and acquisition is repeated, or acquisition points covering the region needed for stitching are added and acquisition is performed at the newly added points.
It should be noted that, in general, the relative positional relationship of the multiple acquisition points can be determined from any one of the aforementioned position data, motion data, tracking trajectory and indoor scene images, but a combination of several kinds of data can give a more accurate relative positional relationship. For example, if the position data and motion data are accurate enough, the relative positional relationship of the acquisition points can be determined from them directly; alternatively, after the relative positional relationship is determined from the position data and motion data, the picture information of the indoor scene images can be used for correction, guarding against inaccuracies in the position and motion data.
During stitching, multiple acquisition points will generally have coincident contour points. Therefore, in some embodiments of the present invention, the aforementioned step S14 further includes: stitching is guided based on coincident contour points or contours between multiple acquisition points. As a specific embodiment, the step S14 specifically includes:
determining relative position relation among a plurality of acquisition points;
according to the relative position relation, unifying the position data of the contour points or contour lines corresponding to the plurality of acquisition points into the same plane; for example, the polar coordinates of the contour points obtained in step S12 with respect to the corresponding acquisition points are converted into plane rectangular coordinates of the same plane according to the relative positional relationship between the plurality of acquisition points;
then, judging whether the unified position data of all contour points or all contour lines have an overlapping area, and splicing the contour points or contour lines in the overlapping area; for example, after the overlap region is determined, the contour point acquired by the acquisition point closest to the overlap region or the contour point acquired by the acquisition point with the highest density of the contour points in the overlap region is selected as the contour point of the overlap region.
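A simplified Python sketch of handling the overlap region after the position data have been unified into one plane; the rectangular overlap region and the "closest acquisition point wins" rule are assumptions made for illustration:

    import math

    def merge_overlap(points_by_acq, acq_positions, overlap_box):
        # points_by_acq: {acquisition id: [(x, y), ...]} already in one common plane.
        # acq_positions: {acquisition id: (x, y)} positions of the acquisition points.
        # overlap_box: (xmin, ymin, xmax, ymax), a simplified rectangular overlap region.
        xmin, ymin, xmax, ymax = overlap_box
        centre = ((xmin + xmax) / 2.0, (ymin + ymax) / 2.0)
        nearest = min(acq_positions, key=lambda a: math.dist(acq_positions[a], centre))
        merged = []
        for acq, pts in points_by_acq.items():
            for p in pts:
                inside = xmin <= p[0] <= xmax and ymin <= p[1] <= ymax
                # inside the overlap, keep only points from the nearest acquisition point
                if not inside or acq == nearest:
                    merged.append(p)
        return merged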
In another embodiment of the present invention, when the contour points are first stitched in step S14 and the overall contour line of the whole indoor scene is then determined from the overall contour points in step S13, the overlapping contour points do not need to be specially processed during the stitching of step S14; instead, in step S13 the contour lines are formed directly with the connection methods of the foregoing examples. For example, with the group-division connection method described above, the overlapping contour points are assigned to the same point group when computing the line segments and are simply merged into the calculation when converting to a line segment, so the result is not affected.
Optionally, in some embodiments of the present invention, an interface and an interactive interface for manually adjusting the splicing condition of the collection point are further provided.
It is noted that in some embodiments of the present invention, the contour points, contour lines, or maps of two adjacent acquisition points need to be stitchable. Since contour points and contour lines are elements of the plan view, they may also be referred to collectively as maps. "Stitchable" means at least that the two maps to be stitched represent the same indoor scene. Optionally, stitching the maps of two acquisition points also requires that the two maps share a common portion. In some examples, the acquisition points are set so that the maps together cover the whole indoor scene; if one mapping pass cannot cover the whole indoor scene, the acquisition points need to be changed and mapping repeated in order to obtain a plan view of the whole indoor scene. Note that although in most examples two maps can be stitched only if they share a common portion, if they do not share a common portion the stitching may still be performed based on the relative position information. If, after stitching, the maps of two adjacent acquisition points have a missing part at the seam, lines are drawn according to the data adjoining the two ends of the missing part to fill it in.
In embodiments where multiple kinds of data are collected at an acquisition point, multiple pieces of hardware such as a laser ruler, a tracking camera and a depth camera are typically used; their data are streamed back, usually with timestamps, and may not be synchronized. The lack of synchronization can have several causes, for example acquisition takes place while rotating or moving, and the different pieces of hardware acquire at slightly different times while the rotation or movement continues. In some embodiments of the invention, the mapping method further comprises: synchronizing the collected data to eliminate the acquisition time differences; wherein the synchronized data include one or more of the aforementioned distance between the acquisition point and the boundary of the indoor scene, the position data of the acquisition point, the displacement data when moving between acquisition points, the tracking data, and images of the indoor scene.
Specifically, a variety of synchronization approaches may be employed; a sketch of both follows the list:
Synchronization approach one: synchronization alignment (matching), optionally based on correlation, for example using normalized cross-correlation (NCC);
Synchronization approach two: interpolation, for example with smoothing.
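The following Python sketch illustrates both approaches under simplifying assumptions (equal sampling rates for the NCC case; estimate_lag_ncc and resample_to are illustrative names, not functions named in the patent):

    import numpy as np

    def estimate_lag_ncc(sig_a, sig_b, max_lag):
        # Estimate the sample lag of sig_b relative to sig_a by maximising the
        # normalised cross-correlation (NCC) over lags in [-max_lag, max_lag].
        a = (np.asarray(sig_a, float) - np.mean(sig_a)) / np.std(sig_a)
        b = (np.asarray(sig_b, float) - np.mean(sig_b)) / np.std(sig_b)
        best_lag, best_ncc = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            x = a[lag:] if lag >= 0 else a[:len(a) + lag]
            y = b[:len(b) - lag] if lag >= 0 else b[-lag:]
            n = min(len(x), len(y))
            ncc = float(np.dot(x[:n], y[:n]) / n)
            if ncc > best_ncc:
                best_lag, best_ncc = lag, ncc
        return best_lag

    def resample_to(timestamps, values, target_timestamps):
        # Interpolation-style synchronisation: resample one stream onto
        # another stream's timestamps.
        return np.interp(target_timestamps, timestamps, values)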
During actual distance measurement, the data collected by the acquisition mechanism may deviate somewhat: for example, the acquisition mechanism such as the ranging mechanism is not necessarily level; in the rotational ranging example, the rotation plane of the scan may not be parallel to the horizontal plane, the rotation center of the rangefinder may be offset from the acquisition point, and so on. For this reason, in some embodiments of the present invention, the mapping method proposed by the present invention further includes a normalization process that converts the data acquired by the ranging mechanism onto a common plane. Specifically, the mapping method further includes: while measuring, at each acquisition point, the distances between the acquisition point and the boundary of the indoor scene in a plurality of orientations as described above, performing one or more of acquiring displacement data of the ranging mechanism (for example, the ranging mechanism may shake and produce a positional offset) and acquiring ranging orientation data. The aforementioned step S12 then includes: normalizing the measured distances to the boundary of the indoor scene according to one or more of the displacement data and the ranging orientation data of the ranging mechanism. Optionally, the normalization includes: converting the distances measured by the ranging mechanism into distances on the horizontal plane and/or distances whose rotation center is the acquisition point. Note that "on the horizontal plane and/or with the rotation center as the acquisition point" means the distance may be converted onto the horizontal plane, converted so that the rotation center coincides with the acquisition point, or converted so that both conditions are satisfied.
As a specific example of normalization, in some embodiments of the invention, acquiring the ranging orientation data includes: acquiring the horizontal orientation and the vertical orientation of the ranging mechanism at the time of ranging, for example a horizontal angle in a horizontal coordinate system and a vertical angle formed with the horizontal plane. The horizontal orientation can be used to determine the horizontal bearing of the contour point. Note that in this example the measured distance to the boundary of the indoor scene is in fact a distance in three-dimensional space. The step S12 then specifically includes: for each horizontal orientation, projecting the measured distance between the acquisition point and the boundary of the indoor scene onto the horizontal plane according to the acquired vertical orientation, obtaining the distance to the boundary on the horizontal plane; and determining the position of the contour point from that horizontal distance and the corresponding horizontal orientation. Optionally, a preset position of the acquisition point is used, or the position of the acquisition point is acquired, so that the position of the contour point obtained by ranging is determined from the horizontal distance to the boundary, the horizontal orientation of the ranging mechanism at the time of ranging, and the position of the acquisition point. Specifically, trigonometric relationships can be used to convert the measured distance into a horizontal distance based on the vertical orientation.
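A minimal sketch of this projection, assuming the vertical orientation is the angle above or below the horizontal plane; the function name and argument conventions are assumptions:

    import math

    def contour_point_from_slant(distance, horizontal_deg, vertical_deg,
                                 acq_position=(0.0, 0.0)):
        # Project the measured slant range onto the horizontal plane using the
        # vertical angle, then place the contour point along the horizontal
        # bearing from the acquisition point.
        horizontal_distance = distance * math.cos(math.radians(vertical_deg))
        bearing = math.radians(horizontal_deg)
        return (acq_position[0] + horizontal_distance * math.cos(bearing),
                acq_position[1] + horizontal_distance * math.sin(bearing))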
As another specific example of normalization, in some embodiments of the present invention, the acquiring of displacement data of the distance measuring mechanism comprises: acquiring the offset vector between the rotation center and the acquisition point at the time the distance measuring mechanism takes a measurement. Note that in this example the measured distance to the boundary of the indoor scene is in fact the distance between the rotation center and the boundary of the indoor scene. Step S12 then specifically includes: converting, according to the offset vector, the measured distance between the rotation center and the boundary of the indoor scene into the distance between the acquisition point and the boundary of the indoor scene.
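For illustration, one way such an offset-vector correction could be computed is sketched below (the function and parameter names are assumptions of this sketch; the offset vector is taken to point from the acquisition point to the rotation center in the horizontal plane):

```python
import numpy as np

def correct_for_center_offset(distance, azimuth_rad, offset_vec):
    """Convert a range measured from the rotation center into the distance
    from the acquisition point, given the offset vector from the acquisition
    point to the rotation center (in the horizontal plane)."""
    # Boundary point expressed relative to the rotation center.
    boundary_rel_center = distance * np.array([np.cos(azimuth_rad),
                                               np.sin(azimuth_rad)])
    # Shift by the offset so the point is expressed relative to the acquisition point.
    boundary_rel_acq = np.asarray(offset_vec, dtype=float) + boundary_rel_center
    return float(np.linalg.norm(boundary_rel_acq)), boundary_rel_acq

# Example: a 4.0 m reading at azimuth 90 degrees with a 5 cm / 2 cm center offset.
dist, point = correct_for_center_offset(4.0, np.radians(90), offset_vec=(0.05, 0.02))
```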
As yet another specific example of normalization, in some embodiments of the present invention, normalization is performed based on wall information. Specifically, distortion or offset in the scan is corrected based on assumptions about the walls, for example that a wall is straight rather than arc-shaped, or that wall corners are right angles rather than oblique angles.
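One possible reading of the wall-straightness assumption is sketched below: contour points believed to belong to a single wall are snapped onto their least-squares fitted line. This is only an illustrative assumption of how such a correction might be realized, not the invention's prescribed algorithm:

```python
import numpy as np

def straighten_wall(points):
    """Snap contour points assumed to lie on one straight wall onto the
    best-fit line through them (a least-squares / PCA fit)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Principal direction of the point set = estimated wall direction.
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    # Project every point onto the fitted line.
    offsets = (pts - centroid) @ direction
    return centroid + np.outer(offsets, direction)
```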
As yet another specific example of normalization, in some embodiments of the present invention, the foregoing examples may be combined. For example, both the ranging orientation data and the displacement data of the ranging mechanism are collected, and in the aforementioned step S12 the normalization is performed based on both.
Note that, to reduce or eliminate deviation in the acquired data, the normalization may also be achieved in hardware, in addition to the aforementioned approach of acquiring the corresponding data and computing an adjustment. As one specific example, a zeroed, rigid support is used to hold and secure all acquisition devices such as sensors and laser rulers so that they remain stationary relative to one another.
Note that the normalization process may be performed by data computation alone, by hardware alone, or by a combination of the two.
By using the surveying and mapping method provided by the invention, normalization can eliminate errors caused by offsets of the rangefinder, so that surveying and mapping can even be carried out with a handheld acquisition device.
In some embodiments of the invention, exemplary mapping methods of the invention further comprise one or more of the following: determining the area of the indoor scene according to the overall contour line of the whole indoor scene; and determining, based on the contour line, the side lengths of all or part of the indoor scene, such as the perimeter of the indoor scene or the length of a wall. Optionally, when the generated overall contour line of the indoor scene is not a closed curve (or closed polyline), determining the area of the indoor scene according to the overall contour line includes: for each non-closed part of the overall contour line, determining its type (such as a door or a window) automatically (for example, by machine learning) or by manual input, connecting the end points of the non-closed part to produce a closed contour line, and calculating the area from the closed contour line.
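For illustration, the area of a closed contour can be computed with the shoelace formula and the perimeter by summing edge lengths; the sketch below assumes the contour is given as an ordered list of vertices (the function name is an assumption of this sketch):

```python
def polygon_area_and_perimeter(contour):
    """Compute the enclosed area (shoelace formula) and perimeter of a closed
    contour given as an ordered list of (x, y) vertices."""
    n = len(contour)
    area2 = 0.0
    perimeter = 0.0
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]   # wrap around to close the polygon
        area2 += x1 * y2 - x2 * y1
        perimeter += ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
    return abs(area2) / 2.0, perimeter

# Example: a 4 m x 3 m rectangular room -> area 12.0 m^2, perimeter 14.0 m.
print(polygon_area_and_perimeter([(0, 0), (4, 0), (4, 3), (0, 3)]))
```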
Fig. 2 is a schematic diagram of contour points obtained in a specific example of the mapping method of the present invention. In this example, scanning and ranging are performed at a plurality of acquisition points in an indoor scene to determine contour points, and the contour points of the acquisition points are then stitched together. When these contour points are drawn in a two-dimensional plane, a "point diagram" composed of a large number of points is obtained, as shown in fig. 2, in which each point represents one contour point.
Fig. 3 is a schematic diagram of the red line graph obtained in a specific example of the mapping method of the present invention. In this example, after the contour point information of the whole indoor scene shown in fig. 2 has been obtained, a two-dimensional plan representing the whole indoor scene, as shown in fig. 3, is generated from that contour point information.
In some embodiments of the invention, exemplary mapping methods of the invention further comprise: according to the method shown in the previous embodiment, contour lines of a plurality of heights in the indoor scene are determined and are superposed to form a three-dimensional graph of the indoor scene.
In some embodiments of the invention, exemplary mapping methods of the invention further comprise: one or more of a panoramic picture, a point cloud picture, of the indoor scene is captured using a mechanism such as a 360 degree panoramic camera to obtain more information, e.g., top, bottom, pipeline information of the indoor scene. Fig. 4 is a schematic diagram of the top ceiling condition of an indoor scene obtained in one specific example of the mapping method of the present invention.
Optionally, in some embodiments of the present invention, the method further comprises storing the various data after they have been acquired.
Fig. 5 is a schematic structural block diagram of an embodiment of the surveying apparatus of the present invention, fig. 6 is a schematic structural block diagram of another embodiment of the surveying apparatus of the present invention, and fig. 7 is a schematic structural block diagram of an acquisition module provided in an embodiment of the surveying apparatus of the present invention. Referring to fig. 5, 6, and 7, a mapping apparatus 100 according to an example of the present invention mainly includes: an acquisition module 110, a contour point determination module 120, a contour line determination module 130, and a stitching module 140.
The acquisition module 110 includes a distance measurement unit 111. The ranging unit 111 is configured to measure, for each acquisition point of the one or more acquisition points, a distance between the acquisition point and a boundary of the indoor scene in a plurality of orientations. Optionally, the distance measuring unit 111 is a laser distance meter (also called a laser ruler). Optionally, the acquisition points are predetermined in an indoor scenario.
The contour point determination module 120 is configured to determine contour points of the indoor scene according to the measured distances.
The contour line determining module 130 is configured to determine a contour line from the contour points.
The stitching module 140 is configured to stitch the contour points corresponding to the plurality of acquisition points to form the overall contour points of the entire indoor scene before the contour line determining module 130 determines the contour line from the contour points, or to stitch the contour lines corresponding to the plurality of acquisition points to form the overall contour line of the entire indoor scene after the contour line determining module 130 determines the contour lines from the contour points.
In some embodiments of the present invention, the acquisition module 110 further comprises a rotation unit 112 for rotating the distance measuring unit 111. The ranging unit 111 is specifically configured to measure the distances between the acquisition point and the boundary of the indoor scene in multiple orientations during the rotation. The contour point determination module 120 is specifically configured to determine, from the ranging results, a plurality of points located on a plane as the contour points.
In some embodiments of the present invention, the acquisition module 110 further comprises a positioning mechanism (not shown in the figures). In some alternative examples, the positioning mechanism is a mechanism such as a tripod, a handheld support bar, or the like, for holding the entire mapping apparatus 100, or at least the acquisition module 110, at the acquisition point. The surveying personnel may manually move the acquisition module 110 or the entire surveying device 100. In other alternative examples, the positioning mechanism is an automatically movable device, such as a robot, that moves around the indoor scene carrying the acquisition module 110 and can be stopped steadily at the acquisition point. The aforementioned rotating unit 112 is used to generate rotation relative to the positioning mechanism. Optionally, the distance measuring unit 111 and other units in the acquisition module 110 are fixedly or detachably connected with the rotating unit 112.
In some embodiments of the present invention, the contour line determining module 130 is specifically configured to: determine, from the contour points, a line that passes through the contour points as the contour line, and/or determine a line fitted to the contour points as the contour line.
In some embodiments of the present invention, the contour line determining module 130 is specifically configured to: connect the contour points according to their spacing to obtain the contour line.
In some embodiments of the present invention, the contour line determining module 130 is specifically configured to perform the connection according to one or more of the following connection conditions (an illustrative sketch follows this list):
connecting each contour point to its nearest contour point;
connecting contour points if the distance between them is smaller than a preset first distance threshold, and otherwise not connecting them;
sequentially connecting all contour points to form one or more closed figures;
not connecting contour points if the distance between them is larger than a preset second distance threshold.
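As referenced above, the following minimal sketch illustrates how such connection conditions might be applied; the single distance threshold and the function name are simplifying assumptions of this sketch (the invention describes separate first and second thresholds):

```python
import math

def chain_contour_points(points, max_link_distance):
    """Greedily connect each point to its nearest unvisited neighbour,
    starting a new polyline whenever the nearest neighbour is farther
    than max_link_distance."""
    remaining = list(points)
    polylines = []
    while remaining:
        current = remaining.pop(0)
        line = [current]
        while remaining:
            # Nearest remaining point to the current end of the polyline.
            nearest = min(remaining, key=lambda p: math.dist(current, p))
            if math.dist(current, nearest) > max_link_distance:
                break                    # too far apart: do not connect
            remaining.remove(nearest)
            line.append(nearest)         # close enough: connect
            current = nearest
        polylines.append(line)
    return polylines
```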
In some embodiments of the present invention, the contour line determining module 130 is specifically configured to: dividing a plurality of contour points into a plurality of point groups; for each point group, converting the contour points of the same point group into a contour line segment; a plurality of contour line segments determined from the plurality of point groups are connected to form a contour line.
In some embodiments of the present invention, the contour line determining module 130 is specifically configured to: firstly, one contour point is searched for another contour point meeting the connection condition and connected to obtain a line segment, and then another contour point meeting the connection condition and corresponding to the end point of the line segment is searched for and connected to the line segment until all contour points meeting the connection condition are connected.
In some embodiments of the present invention, the acquisition module 110 further comprises: an acquisition sequence recording unit or an acquisition time recording unit (not shown in the figure). The acquisition sequence recording unit is used for recording the acquisition sequence of the contour points, and the acquisition time recording unit is used for recording the acquisition time of the contour points. The contour line determining module 130 is specifically configured to: and connecting the contour points in sequence according to the acquisition sequence or the acquisition time to obtain the contour line.
In some embodiments of the invention, the contour line determining module 130 includes one or more of a first machine learning unit and a second machine learning unit (not shown). The first machine learning unit is configured to discriminate, by machine learning, the indoor object represented by the contour points. The second machine learning unit is configured to judge, by machine learning, whether contour points need to be connected and/or which connection manner should be adopted. The connection manner includes an interpolation manner, a fitting manner, and the like.
In some embodiments of the present invention, the splicing module 140 is specifically configured to: determining relative position relation among a plurality of acquisition points; and splicing contour points or contour lines corresponding to the plurality of acquisition points according to the relative position relation.
In some embodiments of the present invention, the acquisition module 110 further comprises a tracking data acquisition unit 113 configured to acquire tracking data while the acquisition module 110 (or the surveying device 100 as a whole) is moved between the plurality of acquisition points. The tracking data comprises one or more of: position data of the acquisition points, displacement data recorded while moving between the acquisition points, a tracking trajectory, and indoor scene images. The stitching module 140 is configured to correlate the contour lines or contour points of the plurality of acquisition points according to the tracking data, for example by determining the relative positional relationships between the acquisition points from one or more of the position data, displacement data, tracking trajectory, and indoor scene images.
As a specific example, the stitching module 140 is specifically configured to: determine, by feature extraction from the tracking data, a feature point or feature area for each acquisition point, each feature point/feature area having feature data describing its properties; associate feature points/feature areas of different acquisition points that have the same or similar feature data to form feature point pairs/feature area pairs; and stitch the contour lines or contour points of the plurality of acquisition points according to the feature point pairs/feature area pairs.
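For illustration, once feature point pairs are available, the relative pose between two acquisition points can be estimated by a least-squares rigid alignment (Kabsch-style); the sketch below is an assumed realization, not the invention's prescribed procedure:

```python
import numpy as np

def estimate_rigid_transform(src_pts, dst_pts):
    """Estimate the 2D rotation R and translation t that best map matched
    feature points of one acquisition point onto those of another."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_c).T @ (dst - dst_c)       # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:                  # avoid reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    t = dst_c - r @ src_c
    return r, t

def stitch_contour(contour_pts, r, t):
    """Apply the estimated transform to contour points before merging them."""
    return (np.asarray(contour_pts, dtype=float) @ r.T) + t
```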
In some embodiments of the present invention, the exemplary mapping apparatus 100 of the present invention further comprises a drawing module 150 for drawing contour points in a two-dimensional plane, or for drawing contour lines in a two-dimensional plane.
In some embodiments of the present invention, the mapping apparatus 100 of the present examples further includes a synchronization module 160. The synchronization module 160 is used for synchronizing the various collected data to eliminate differences in collection time. The synchronized data include one or more of: the distance between the acquisition point and the boundary of the indoor scene, the position data of the acquisition point, the displacement data, the tracking data, and images of the indoor scene.
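For illustration, one simple way to eliminate collection time differences is to resample one data stream onto the timestamps of another; the sketch below assumes scalar measurements and monotonically increasing timestamps (the function and parameter names are assumptions of this sketch):

```python
import numpy as np

def synchronize(timestamps_a, values_a, timestamps_b):
    """Resample stream A onto the timestamps of stream B by linear
    interpolation, so that both streams refer to the same instants."""
    return np.interp(timestamps_b, timestamps_a, values_a)

# Example: resample distance readings onto the tracking camera's timestamps.
distances_at_camera_times = synchronize([0.0, 0.1, 0.2], [5.0, 5.1, 5.3],
                                        [0.05, 0.15])
```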
In some embodiments of the present invention, the acquisition module 110 further comprises one or more of a ranging offset acquisition unit 114 and a ranging orientation acquisition unit 115. The ranging offset acquisition unit 114 is configured to acquire the displacement data of the ranging unit 111 while the distances between the acquisition point and the boundary of the indoor scene are measured in multiple orientations at each acquisition point. The ranging orientation acquisition unit 115 is configured to acquire the ranging orientation data during the same measurement process. The contour point determination module 120 includes a normalization unit (not shown in the figure) configured to normalize the measured distances to the boundary of the indoor scene according to one or more of the displacement data and the ranging orientation data of the ranging unit 111. Optionally, the normalization comprises: converting the measured distances to the boundary of the indoor scene into distances on a horizontal plane and/or distances whose rotation center coincides with the acquisition point.
In some embodiments of the present invention, the ranging orientation acquisition unit 115 is specifically configured to acquire the horizontal and vertical orientations at the time the ranging unit 111 takes a measurement. The normalization unit is specifically configured to: for each horizontal orientation, project the measured distance between the acquisition point and the boundary of the indoor scene onto the horizontal plane according to the vertical orientation to obtain the distance on the horizontal plane, and determine the contour point from the distance on the horizontal plane and the corresponding horizontal orientation.
In some embodiments of the present invention, the mapping apparatus 100 of the present example further includes one or more of an area determination unit 171 and a side length determination unit 172. The area determination unit 171 is configured to determine the area of the indoor scene according to the overall contour line of the whole indoor scene. The side length determination unit 172 is configured to determine the side lengths of all or part of the indoor scene according to the contour line.
In some embodiments of the present invention, mapping apparatus 100 of examples of the present invention further comprises: and the superposition module (not shown in the figure) is used for determining the contour lines of the plurality of heights in the indoor scene by utilizing the acquisition module 110, the contour point determination module 120, the contour line determination module 130 and the splicing module 140, and forming a three-dimensional map of the indoor scene through superposition.
In some embodiments of the present invention, the acquisition module 110 further comprises: a panorama picture acquisition unit (not shown in the figure), a point cloud image acquisition unit (not shown in the figure). The panoramic picture acquisition unit is used for acquiring panoramic pictures of the indoor scene and obtaining information of the top, the bottom and the pipeline of the indoor scene. The point cloud picture acquisition unit is used for acquiring a point cloud picture of an indoor scene and obtaining information of the top, the bottom and a pipeline of the indoor scene.
Fig. 7 is a schematic block diagram of a surveying apparatus according to still another embodiment of the present invention. Referring to Fig. 7, in some embodiments of the present invention, an exemplary mapping apparatus 100 of the present invention comprises one or more of a ranging unit 111, a tracking camera 191, and a depth camera 192. Specifically, the depth camera 192 can acquire depth information; optionally, it is specifically configured to acquire a planar image and to acquire depth information of a photographed object, including three-dimensional position and size information. Alternatively, a depth tracking device combining the tracking camera 191 and the depth camera 192 may be used. The tracking camera 191 and/or the depth camera 192 include one or more of the tracking data acquisition unit 113, the ranging offset acquisition unit 114, and the ranging orientation acquisition unit 115. The tracking camera 191 and the depth camera 192 are fixedly connected to the distance measuring unit 111 so as to rotate synchronously with it.
In some embodiments of the present invention, a mapping apparatus 100 of examples of the present invention includes: a data processing module 193. The data processing module 193 includes one or more of the contour point determining module 120, the contour line determining module 130, the stitching module 140, the drawing module 150, the synchronizing module 160, the area determining unit 171, the side length determining unit 172, and the superimposing module 180 in the foregoing embodiments. The data processing module 193 may be a stand-alone hardware device, such as a computer, a notebook, a tablet, and is provided with a data transceiving module; alternatively, the data processing module 193 may be integrated with other modules of the mapping apparatus 100 to form a hardware device, such as a chip, a single chip, an FPGA, or the like.
In some embodiments of the present invention, one or more of the aforementioned hardware modules comprise a data transmission unit for transmitting data in a wired manner or by wireless signal. It should be noted that the present invention does not limit the type of signal; it is generally a wireless network signal, including but not limited to a wireless fidelity (Wi-Fi) signal, a Bluetooth signal, a ZigBee signal, or any other wireless signal. Optionally, the aforementioned modules comprise a Bluetooth unit for transmitting data using Bluetooth signals.
It should be noted that, in the embodiments of the present invention, the present invention does not limit the specific presentation effect of the interactive interface, and the specific presentation effect may be adjusted accordingly according to the development requirement and the user requirement.
It should be noted that, for all relevant details of the steps involved in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules of the mapping apparatus 100 of the present invention, which are not repeated here.
The surveying apparatus 100 of the present example is used to perform the surveying method described above, and can therefore achieve the same effects as the above-described method.
Alternatively, the mapping method provided by the present invention may be implemented by software such as an application (APP) or a service running in the background of the device. The surveying device or related devices are equipped with the application or service implementing the mapping method proposed by the invention, so that a user can interact with the device.
FIG. 8 is a hardware block diagram illustrating a mapping device in accordance with one embodiment of the present invention. As shown in FIG. 8, a mapping device 200 in accordance with an embodiment of the present invention includes a memory 201 and a processor 202. The components of the mapping device 200 are interconnected by a bus system and/or other form of connection mechanism (not shown).
The memory 201 is used to store non-transitory computer readable instructions. In particular, memory 201 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc.
The processor 202 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the mapping device 200 to perform desired functions. In one embodiment of the present invention, the processor 202 is configured to execute the computer-readable instructions stored in the memory 201, so that the mapping apparatus 200 performs all or part of the steps of the mapping method of the embodiments of the present invention described above.
It will be appreciated that, in order to carry out the above-described functions, the apparatus comprises corresponding hardware and/or software modules for performing the respective functions. The present application is capable of being implemented in hardware, or in a combination of hardware and computer software, in conjunction with the exemplary algorithm steps described in connection with the embodiments disclosed herein. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In this embodiment, the device may be divided into functional modules according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
The integrated module may be implemented in the form of hardware. It should be noted that the division of the modules in this embodiment is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Embodiments of the present invention also provide a computer storage medium having stored thereon computer instructions that, when executed on an apparatus, cause the apparatus to perform the above-mentioned related method steps to implement the mapping method in the above-mentioned embodiments.
Embodiments of the present invention also provide a computer program product, which when run on a computer causes the computer to perform the above-mentioned related steps to implement the mapping method in the above-mentioned embodiments.
In addition, an embodiment of the present invention further provides an apparatus, which may be specifically a chip, a component or a module, and may include a processor and a memory connected to each other; the memory is used for storing computer-executable instructions, and when the device runs, the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the mapping method in the above-mentioned embodiments of the methods.
The device, the computer storage medium, the computer program product, or the chip provided by the present invention are all configured to execute the corresponding method provided above, and therefore, the beneficial effects achieved by the device, the computer storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (21)

1. A method of mapping, the method comprising the steps of:
determining a plurality of acquisition points in an indoor scene;
for each acquisition point, measuring distances between the acquisition point and the boundary of the indoor scene in a plurality of directions, and determining contour points of the indoor scene according to the distances;
determining a contour line according to the contour points;
wherein the contour points of the plurality of acquisition points are stitched to form overall contour points of the indoor scene before the step of determining a contour line from the contour points,
or after the step of determining the contour lines according to the contour points, splicing the contour lines of the plurality of acquisition points to form the overall contour line of the indoor scene.
2. The mapping method according to claim 1,
the measuring distances between the acquisition points and the boundary of the indoor scene in a plurality of orientations comprises: rotating the distance measuring mechanism, and measuring the distances between the acquisition point and the boundaries of the indoor scene in a plurality of directions in the rotating process;
the determining contour points of the indoor scene according to the distance comprises: determining a plurality of points located on a plane as the contour points according to the distances.
3. The method of mapping according to claim 1, wherein the determining a contour line from the contour points comprises:
determining, according to the contour points, a line that passes through the contour points as the contour line, and/or determining a line fitted to the contour points as the contour line.
4. The method of mapping according to claim 1, wherein the determining a contour line from the contour points comprises:
connecting the contour points according to the distances between the contour points to obtain the contour line.
5. The mapping method according to claim 4, wherein: the connecting the contour points according to the distance between the contour points to obtain the contour line includes connecting according to one or more of the following connecting conditions:
connecting each contour point to its nearest contour point;
connecting the contour points if the distance between them is smaller than a preset first distance threshold, and otherwise not connecting them;
sequentially connecting all contour points to form one or more closed figures;
not connecting the contour points if the distance between them is larger than a preset second distance threshold.
6. The method of mapping according to claim 1, wherein the determining a contour line from the contour points comprises:
dividing a plurality of the contour points into a plurality of point groups;
for each point group, converting the contour points of the same point group into a contour line segment;
and connecting a plurality of contour line segments determined by a plurality of point groups to form the contour line.
7. The method of mapping according to claim 1 or 5, wherein the determining a contour line from the contour points comprises:
recording the acquisition sequence or acquisition time of the contour points; and connecting the contour points in sequence according to the acquisition sequence or the acquisition time to obtain the contour line.
8. The method of mapping according to claim 1, wherein the determining of contour lines from the contour points comprises one or more of:
distinguishing the indoor object represented by the contour point by machine learning;
judging, by machine learning, whether a plurality of contour points need to be connected and/or which connection manner should be adopted, wherein the connection manner comprises an interpolation manner or a fitting manner.
9. The method of mapping according to claim 1, wherein the stitching the contour points of the plurality of acquisition points or the stitching the contour lines of the plurality of acquisition points comprises:
determining relative positional relationships between a plurality of said acquisition points;
splicing the contour points or the contour lines corresponding to the plurality of acquisition points according to the relative positional relationships.
10. The mapping method according to claim 9, wherein:
the method further comprises the following steps: moving a ranging mechanism between a plurality of said acquisition points and acquiring tracking data while moving, said tracking data comprising one or more of position data of said acquisition points, displacement data while moving between a plurality of said acquisition points, a tracking trajectory, an indoor scene image.
11. The method of mapping according to claim 10, wherein said stitching the contour points of the plurality of acquisition points or said stitching the contour lines of the plurality of acquisition points comprises:
respectively determining a feature point or a feature area of each acquisition point according to the tracking data through feature extraction, wherein the feature point/the feature area has feature data;
associating the characteristic points/characteristic areas of different acquisition points with the same or similar characteristic data to form characteristic point pairs/characteristic area pairs;
splicing the contour lines or the contour points of the plurality of acquisition points according to the feature point pairs/the feature area pairs.
12. The method of mapping according to claim 10, further comprising:
synchronizing the collected multiple data to eliminate the collection time difference; wherein the plurality of data synchronized comprises one or more of the distance between the acquisition point and a boundary of the indoor scene, the position data of the acquisition point, the displacement data, the tracking data, an image of the indoor scene.
13. The mapping method according to claim 1, characterized in that:
the method further comprises the following steps: during the process of measuring the distances between the acquisition points and the boundary of the indoor scene in a plurality of directions at each acquisition point, acquiring one or more of displacement data and distance measurement direction data of a distance measurement mechanism;
the determining contour points of the indoor scene according to the distance comprises: converting the measured distances into distances on a horizontal plane and/or distances whose rotation center is the acquisition point, according to one or more of the displacement data and the ranging azimuth data of the ranging mechanism.
14. The mapping method according to claim 13, wherein:
the collecting ranging position data comprises: collecting a horizontal direction and a vertical direction during distance measurement;
the determining contour points of the indoor scene according to the distance comprises: for each of the horizontal orientations, projecting the measured distance to a horizontal plane according to the vertical orientation to obtain the distance on the horizontal plane; determining the contour points according to the distances on the horizontal plane and the corresponding horizontal orientations.
15. The method of mapping according to claim 1, further comprising one or more of the following steps:
determining the area of the indoor scene according to the overall contour line of the indoor scene;
determining the side length of all or part of the indoor scene according to the contour line.
16. The method of mapping according to claim 1, further comprising:
determining contour lines at a plurality of heights in the indoor scene, and superimposing them to form a three-dimensional graph of the indoor scene.
17. The method of mapping according to claim 1, further comprising:
acquiring one or more of a panoramic picture and a point cloud picture of the indoor scene to obtain information about the top, the bottom, and the pipelines of the indoor scene.
18. A mapping apparatus, characterized in that the apparatus comprises:
an acquisition module comprising a ranging unit to measure, for each of a plurality of acquisition points, a distance between the acquisition point and a boundary of an indoor scene in a plurality of orientations; wherein the acquisition points are predetermined in the indoor scene;
the contour point determining module is used for determining contour points of the indoor scene according to the distance;
the contour line determining module is used for determining a contour line according to the contour points;
and a splicing module, configured to splice the contour points corresponding to the plurality of acquisition points to form overall contour points of the indoor scene before the contour line determining module determines the contour line from the contour points, or to splice the contour lines corresponding to the plurality of acquisition points to form an overall contour line of the indoor scene after the contour line determining module determines the contour lines from the contour points.
19. The mapping apparatus of claim 18, wherein: the apparatus further comprises a module or unit for performing the steps of any of claims 2 to 17.
20. A mapping apparatus, comprising:
a memory for storing non-transitory computer readable instructions; and
a processor for executing the computer readable instructions such that the computer readable instructions, when executed by the processor, implement the mapping method of any of claims 1 to 17.
21. A computer storage medium comprising computer instructions which, when run on a device, cause the device to perform the mapping method of any of claims 1 to 17.
CN202011561983.8A 2020-12-25 2020-12-25 Surveying and mapping method, device, equipment and storage medium Pending CN114755696A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011561983.8A CN114755696A (en) 2020-12-25 2020-12-25 Surveying and mapping method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011561983.8A CN114755696A (en) 2020-12-25 2020-12-25 Surveying and mapping method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114755696A true CN114755696A (en) 2022-07-15

Family

ID=82324361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011561983.8A Pending CN114755696A (en) 2020-12-25 2020-12-25 Surveying and mapping method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114755696A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063557A (en) * 2022-08-18 2022-09-16 北京山维科技股份有限公司 Building intelligent extraction method and device based on tilt model
CN115063557B (en) * 2022-08-18 2022-11-08 北京山维科技股份有限公司 Building intelligent extraction method and device based on tilt model
CN115098242A (en) * 2022-08-24 2022-09-23 广州市城市排水有限公司 Real-time acquisition and processing method and system for deep tunnel surveying and mapping data
CN115098242B (en) * 2022-08-24 2022-11-08 广州市城市排水有限公司 Real-time acquisition and processing method and system for deep tunnel surveying and mapping data

Similar Documents

Publication Publication Date Title
EP3967972A1 (en) Positioning method, apparatus, and device, and computer-readable storage medium
US10810734B2 (en) Computer aided rebar measurement and inspection system
US8699005B2 (en) Indoor surveying apparatus
CN104964673B (en) It is a kind of can positioning and orientation close range photogrammetric system and measuring method
EP2775257B1 (en) Measuring instrument
EP1931945B1 (en) Surveying instrument and method of providing survey data using a surveying instrument
US20060167648A1 (en) 3-Dimensional measurement device and electronic storage medium
JP2004163292A (en) Survey system and electronic storage medium
CN114755696A (en) Surveying and mapping method, device, equipment and storage medium
CN102927917A (en) Multi-view vision measurement method of iron tower
CN112492292B (en) Intelligent visual 3D information acquisition equipment of free gesture
CN204963858U (en) Can fix a position close -range photogrammetry system of appearance
CN112254675A (en) Space occupancy rate acquisition and judgment equipment and method containing moving object
JP2017151026A (en) Three-dimensional information acquiring device, three-dimensional information acquiring method, and program
EP4332631A1 (en) Global optimization methods for mobile coordinate scanners
CN112253913B (en) Intelligent visual 3D information acquisition equipment deviating from rotation center
JPH06186036A (en) Three-dimensional location measuring system
US20230324167A1 (en) Laser scanner for verifying positioning of components of assemblies
JP4776983B2 (en) Image composition apparatus and image composition method
WO2022078444A1 (en) Program control method for 3d information acquisition
US20220018950A1 (en) Indoor device localization
CN112254669B (en) Intelligent visual 3D information acquisition equipment of many bias angles
CN112257535B (en) Three-dimensional matching equipment and method for avoiding object
JP6448413B2 (en) Roof slope estimation system and roof slope estimation method
KR20010087493A (en) A survey equipment and method for rock excavation surface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination