CN112037336B - Adjacent point segmentation method and device - Google Patents

Adjacent point segmentation method and device

Info

Publication number
CN112037336B
CN112037336B
Authority
CN
China
Prior art keywords
camera
point
cameras
pair
orthogonal distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010929302.2A
Other languages
Chinese (zh)
Other versions
CN112037336A (en)
Inventor
刘程林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Seashell Housing Beijing Technology Co Ltd
Original Assignee
Seashell Housing Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Seashell Housing Beijing Technology Co Ltd filed Critical Seashell Housing Beijing Technology Co Ltd
Priority to CN202010929302.2A priority Critical patent/CN112037336B/en
Publication of CN112037336A publication Critical patent/CN112037336A/en
Application granted granted Critical
Publication of CN112037336B publication Critical patent/CN112037336B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the invention disclose a method and a device for segmenting adjacent point locations, in which a Thiessen polygon (Voronoi) segmentation method based on orthogonal constraints segments the point locations of adjacent point clouds: the cameras that acquire the point cloud data are paired two by two to obtain camera pairs; the orthogonal distance between any point location in the point cloud data acquired by each camera in a pair and each camera in that pair is acquired; the orthogonal distances from the point location to each camera are compared, and the point location is reserved when its orthogonal distance to the camera to which it belongs is the minimum, and otherwise deleted; the reserved point locations are then segmented according to the cameras to which they belong. By constraining the point locations with orthogonality, the invention makes the segmentation result follow the orthogonal coordinate axes, which is cleaner and clearer, avoids messy segmentation, and displays better in indoor space.

Description

Adjacent point segmentation method and device
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method and an apparatus for segmenting adjacent point locations.
Background
In the data acquisition stage of an indoor three-dimensional model, dedicated hardware is used to collect data at different shooting point locations. After collection, the global coordinate systems of the different shooting points are determined so that adjacent point locations can be registered and spliced into a complete model; during splicing, the overlapping point cloud regions of adjacent point locations must be fused.
In the algorithm that generates a three-dimensional minimalist model from RGB panoramic images, point cloud splicing arises in a different scenario. In the prior art, a three-dimensional minimalist model is generated from RGB panoramas as follows:
1. model a single panoramic image through a neural network algorithm to obtain a minimalist room model for a single point location;
2. splice the local minimalist models of different point locations according to the obtained pose data;
3. fuse the overlapping areas of adjacent point locations through a fusion algorithm.
However, the minimalist model assumes that each single-point RGB panorama describes a closed space. For a space with a large area and a complex structure, view-angle occlusion means a single point location cannot completely describe the space, and the mutually occluded regions between different point locations carry prediction errors due to their differing view angles, which causes errors in the segmentation of adjacent point locations during point cloud fusion.
Therefore, a new fusion mechanism is needed between adjacent point locations, one that retains the accurately predicted parts and discards the inaccurately predicted regions.
Disclosure of Invention
The embodiments of the invention aim to solve the following technical problem: a method and a device for segmenting adjacent point locations are provided, addressing the prediction errors that arise in mutually occluded regions due to differing view angles, and the resulting errors in segmenting adjacent point locations, in the prior art.
According to an aspect of the present invention, there is provided a method of adjacent point location segmentation, the method including:
pairwise pairing cameras for acquiring point cloud data to obtain a camera pair; the cameras of the pair are visible to each other;
acquiring the orthogonal distance between any point in the point cloud data acquired by each camera in the camera pair and each camera in the camera pair;
comparing the size of the orthogonal distance between the point location and each camera, and reserving the point location when the orthogonal distance between the point location and the camera to which the point location belongs is determined to be the minimum; otherwise, deleting the point location;
and segmenting the reserved point positions according to the cameras.
Optionally, pairwise pairing the cameras for acquiring the point cloud data to obtain a camera pair, including:
shooting through a camera to obtain point cloud data;
pairing the cameras whose acquired point cloud data overlap with each other; each camera is combined with every camera whose point cloud data overlaps its own, forming distinct camera pairs.
Optionally, the obtaining an orthogonal distance between any point location in the point cloud data obtained by each camera in the pair of cameras and each camera in the pair of cameras includes:
taking the direction with the minimum orthogonal distance of the two cameras in each camera pair as the orthogonal direction of the two cameras;
and calculating the orthogonal distance between any point position and each camera in the camera pair in the orthogonal direction.
Optionally, the comparing the size of the orthogonal distance of the point location from each camera includes:
respectively acquiring the orthogonal distance between each point location and each camera in the corresponding camera pair;
and comparing the size of the orthogonal distance between the point position and each camera.
Optionally, the method further comprises:
traversing each point location in each of the camera pairs;
the reserved point location is attributed to one and only one of the cameras.
Optionally, the segmenting the remaining point locations according to the cameras to which the point locations belong includes:
each reserved point position is respectively and correspondingly attributed to a camera;
and segmenting adjacent point positions according to the point positions to which the two cameras in the camera pair belong.
According to another aspect of the present invention, there is provided an adjacent point location segmentation apparatus, including:
the camera pair generation unit is used for pairing the cameras which acquire the point cloud data to obtain a camera pair; the cameras of the pair are visible to each other;
an orthogonal distance acquisition unit, configured to acquire an orthogonal distance between any point location in the point cloud data acquired by each camera in the camera pair and each camera in the camera pair;
the point location comparing unit is used for comparing the size of the orthogonal distance between the point location and each camera, and reserving the point location when the orthogonal distance between the point location and the camera to which the point location belongs is determined to be the smallest; otherwise, deleting the point location;
and the point location segmentation unit is used for segmenting the reserved point locations according to the cameras.
Optionally, the camera pair generation unit specifically includes:
the point cloud data acquisition subunit is used for acquiring point cloud data through camera shooting;
the camera pair forming subunit is used for forming pairs of cameras which acquire the point cloud data and are overlapped with each other; each camera is respectively combined with the camera with the point cloud data superposed with each other to form a different camera pair.
Optionally, the orthogonal distance obtaining unit specifically includes:
an orthogonal direction acquiring subunit, configured to take a direction in which an orthogonal distance between two cameras in each of the camera pairs is smallest as an orthogonal direction of the two cameras;
and the orthogonal distance acquisition subunit is used for calculating the orthogonal distance between any point position and each camera in the camera pair in the orthogonal direction.
Optionally, the point location comparing unit specifically includes:
the orthogonal distance calculation subunit is used for respectively acquiring the orthogonal distance between each point position and each camera in the corresponding camera pair;
and the orthogonal distance comparison subunit is used for comparing the size of the orthogonal distance between the point position and each camera.
Optionally, the point location segmentation unit specifically includes:
a home location subunit, configured to respectively and correspondingly home a camera to each of the reserved point locations;
and the segmentation subunit is used for segmenting adjacent point positions according to the point positions to which the two cameras in the camera pair belong.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing a computer program for executing the method described above.
According to another aspect of the present invention, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
the processor is used for reading the executable instructions from the memory and executing the instructions to realize the method.
Based on the scheme provided by the embodiments of the invention, the main beneficial effects are as follows:
the invention provides a Thiessen polygon (voronoi) segmentation method based on orthogonal constraint, which comprises the steps of segmenting point locations of adjacent point clouds, pairing cameras for acquiring point cloud data to obtain a camera pair; the cameras of the pair are visible to each other; acquiring the orthogonal distance between any point in the point cloud data acquired by each camera in the camera pair and each camera in the camera pair; comparing the size of the orthogonal distance between the point location and each camera, and reserving the point location when the orthogonal distance between the point location and the camera to which the point location belongs is determined to be the minimum; otherwise, deleting the point location; and segmenting the reserved point positions according to the cameras to which the point positions belong.
According to the point location segmentation scheme provided by the invention, each point location is assigned to a specific control point, an overlapped area is not reserved, and any point location in the reconstructed point cloud is certain and unique to belong to a reconstruction result of a certain point location. The reserved point cloud model is satisfied to be nearest to the control point of the point cloud model in the orthogonal direction. The invention restrains the point positions by using the orthogonality, leads the point position segmentation results to be along orthogonal coordinate axes, is cleaner and clearer, does not generate messy segmentation to cause segmentation errors, and has better display effect in indoor space.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
Meanwhile, it should be understood that, for convenience of description, the sizes of the portions shown in the drawings are not drawn to actual scale.
The invention will be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
fig. 1 is a diagram illustrating a voronoi segmentation result in the prior art.
Fig. 2 is a schematic diagram of a voronoi segmentation result based on orthogonal constraint according to an embodiment of the present invention.
Fig. 3 is a schematic flowchart of a method for segmenting adjacent point locations according to an embodiment of the present invention.
Fig. 4 is a schematic flowchart of another method for segmenting adjacent point locations according to an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of an adjacent point location segmentation apparatus according to an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
In the embodiments of the invention, a Thiessen polygon (Voronoi) segmentation method based on orthogonal constraints is provided for segmenting adjacent point clouds; the point cloud model retained by this segmentation method is closest, in the orthogonal direction, to its control point. Put simply, when two adjacent point locations reconstruct the whole space, no overlapping region is retained: any model point P in the reconstructed point cloud belongs definitely and uniquely to the reconstruction result of a certain point location. The criterion for judging the attribution of model point P is that it should belong to the control point closest to it, which accords with the principle of optimal confidence. However, a result produced by plain Thiessen polygon segmentation does not meet visual aesthetic requirements and looks messy; therefore, the scheme of the invention constrains the segmentation with orthogonality so that the segmentation result follows the orthogonal coordinate axes.
The scheme of generating a three-dimensional minimalist model from RGB panoramic images acquires the spatial structure information in each panoramic image through deep learning, ignores the various objects in the space, and establishes a spatial three-dimensional model containing only the main structures, such as the walls.
Adjacent point locations are not consistent in their predictions of the same region, and after direct splicing, regions that should not overlap end up overlapping. Therefore, the overlapping area must be deleted, finally retaining the point locations with the highest reliability. The most straightforward solution is to describe confidence by distance, as in conventional algorithms such as Voronoi polygons. This ensures that any point within a polygon is closer to the control point of that polygon than to the control points of other polygons. However, as shown in fig. 1, this division scheme displays poorly in an indoor space and looks disordered.
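As a minimal sketch of the conventional nearest-control-point rule described above (the function name, variable names, and data are illustrative, not from the patent):

```python
import math

def voronoi_owner(point, control_points):
    # Classic Voronoi rule used in the prior art: a model point belongs to
    # the control point (shooting point) nearest to it by straight-line
    # Euclidean distance. Returns the index of the owning control point.
    return min(range(len(control_points)),
               key=lambda i: math.dist(point, control_points[i]))

# A point near camera 0 is assigned to camera 0 under this rule.
owner = voronoi_owner((1.0, 1.0), [(0.0, 0.0), (5.0, 0.0)])
```

The resulting partition boundaries are the perpendicular bisectors between control points, which in general are not axis-aligned; this is the visually messy behavior the orthogonal constraint is meant to avoid.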
The scheme of the invention provides an orthogonally constrained Voronoi segmentation scheme, changing the original credibility measure from the straight-line distance between a control point and a point in the polygon to the minimum distance in the horizontal or vertical direction. As shown in fig. 2, this ensures that the parting lines all lie in the horizontal or vertical orthogonal directions.
As shown in fig. 3, a flow chart of the adjacent point location segmentation principle provided in this embodiment, wherein,
step 11, pairwise pairing cameras for acquiring point cloud data to obtain a camera pair; the cameras in the pair are visible to each other.
Mutual visibility means that the two cameras of the camera pair are located within each other's visual range and their acquired point cloud data overlap.
And 12, acquiring the orthogonal distance between any point in the point cloud data acquired by each camera in the camera pair and each camera in the camera pair.
Step 13, comparing the size of the orthogonal distance between the point location and each camera, and reserving the point location when the orthogonal distance between the point location and the camera to which the point location belongs is determined to be the minimum; otherwise, deleting the point.
And 14, segmenting the reserved point positions according to the cameras to which the point positions belong.
Through this point location segmentation, the attribution of adjacent point locations can be divided clearly, so that each point location belongs only to the point cloud data acquired by one camera. This avoids errors caused by duplicated point locations and solves the problem of one point location belonging simultaneously to the point cloud data acquired by multiple cameras.
In one embodiment of the invention, an orthogonally constrained Voronoi partitioning scheme constrains the Voronoi polygons by orthogonal direction and orthogonal distance, so that the partition lines all lie in orthogonal directions, horizontal or vertical.
A Thiessen polygon (Voronoi) diagram is a subdivision of a spatial plane with the property that any position inside a polygon is closer to that polygon's sampling point than to the sampling point of any adjacent polygon, and each polygon contains exactly one sampling point. Because of this even division of space, Thiessen polygons can be used to solve problems such as the nearest point and the minimum enclosing circle, as well as many spatial analysis problems such as adjacency, proximity, and accessibility analysis.
In one embodiment of the invention, point cloud data is acquired by camera shooting;
pairing the cameras which acquire the point cloud data and are overlapped with each other; each camera is respectively combined with the camera with the point cloud data superposed with each other to form a different camera pair.
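The pairing step above can be sketched as follows. The patent does not specify a data structure for overlap information, so `overlapping` (a set of camera-id pairs whose point clouds coincide) and the function name are assumptions for illustration:

```python
from itertools import combinations

def pair_cameras(cameras, overlapping):
    # Pair every two cameras whose acquired point clouds overlap,
    # i.e. that are mutually visible. `overlapping` is assumed to be
    # a set of frozensets of camera ids.
    return [(a, b) for a, b in combinations(cameras, 2)
            if frozenset((a, b)) in overlapping]

# Cameras 0-1 and 1-2 overlap; 0-2 do not, so they form no pair.
pairs = pair_cameras([0, 1, 2], {frozenset((0, 1)), frozenset((1, 2))})
```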
In one embodiment of the invention, the direction in which the orthogonal distance between the two cameras in each camera pair is the smallest is taken as the orthogonal direction of the two cameras;
and calculating the orthogonal distance between any point position and each camera in the camera pair in the orthogonal direction.
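A minimal sketch of the orthogonal direction and orthogonal distance defined above, assuming 2D (x, y) camera and point coordinates; the function names are illustrative, and the axis choice follows the rule as stated in this embodiment (the axis along which the two cameras are closest):

```python
def orthogonal_axis(cam_a, cam_b):
    # Take the axis (0 = x, 1 = y) along which the two cameras are
    # closest as their orthogonal direction, per the embodiment above.
    dx = abs(cam_a[0] - cam_b[0])
    dy = abs(cam_a[1] - cam_b[1])
    return 0 if dx <= dy else 1

def orthogonal_distance(point, camera, axis):
    # Axis-aligned (horizontal or vertical) distance between a point
    # location and a camera, measured only along the chosen axis.
    return abs(point[axis] - camera[axis])
```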
In one embodiment of the present invention, the orthogonal distance between each point location and each camera in the corresponding camera pair is obtained respectively;
and comparing the orthogonal distance of the point position and each camera.
In one embodiment of the invention, each point location in each said camera pair is traversed;
the reserved point location is attributed to and only one of the cameras.
In an embodiment of the present invention, each of the reserved point locations respectively corresponds to one camera;
and segmenting adjacent point positions according to the point positions to which the two cameras in the camera pair belong.
As shown in fig. 4, another flow chart of the adjacent point segmentation principle provided in the embodiment of the present invention is specifically as follows:
in the process of point cloud fusion between different points, starting from the ith point, i is 1,2,3 … n, and n is the number of all points. For each point P in the point locationxyA determination is made whether it satisfies the reserved condition.
Find the cameras that satisfy mutual visibility with point location i, obtaining the visible camera list.
Pair the visible cameras pairwise to obtain a list of camera pairs.
For each camera pair, find the distance between the two cameras along each orthogonal direction and take the direction in which their distance is smaller; the distance of a coordinate point from a camera along that direction is taken as their orthogonal distance.
Traverse the coordinate points in point location i; for any point P_xy, calculate its orthogonal distance to each camera in the pair.
Traverse all the camera pairs and find the camera with the minimum orthogonal distance from the point as the candidate camera.
Take the other cameras in the other camera pairs containing the candidate camera as comparison objects, and find the camera with the smallest orthogonal distance.
If the camera with the smallest orthogonal distance is the camera to which the point belongs, the point is reserved; otherwise, the point is deleted.
Traverse all the coordinate points of point location i.
Traverse all the shooting point locations. The finally reserved point locations are non-overlapping point locations. Fusing the point cloud data according to these non-overlapping point locations avoids the problem of mutual overlap.
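The traversal above can be sketched end to end as follows. This is an illustrative reading of the retention rule, not the patent's reference implementation: all names are assumptions, and keeping a point on a tie (equal orthogonal distances) is an assumed convention:

```python
def orthogonal_axis(cam_a, cam_b):
    # Axis (0 = x, 1 = y) along which the two cameras are closest,
    # per the orthogonal-direction rule described above.
    return 0 if abs(cam_a[0] - cam_b[0]) <= abs(cam_a[1] - cam_b[1]) else 1

def retain_points(points_by_camera, camera_pos, pairs):
    # A point acquired by camera c is reserved only if, in every camera
    # pair containing c, its orthogonal distance to c does not exceed
    # its orthogonal distance to the partner camera; otherwise deleted.
    kept = {c: [] for c in points_by_camera}
    for c, points in points_by_camera.items():
        for p in points:
            keep = True
            for a, b in pairs:
                if c not in (a, b):
                    continue
                partner = b if c == a else a
                axis = orthogonal_axis(camera_pos[c], camera_pos[partner])
                if (abs(p[axis] - camera_pos[c][axis]) >
                        abs(p[axis] - camera_pos[partner][axis])):
                    keep = False
                    break
            if keep:
                kept[c].append(p)
    return kept
```

After this pass, every reserved point belongs to exactly one camera, so the retained point clouds can be spliced without mutual overlap.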
An embodiment of the present invention provides an adjacent point bit splitting apparatus, as shown in fig. 5, the apparatus including:
a camera pair generation unit 21, configured to pair two cameras that acquire point cloud data to obtain a camera pair; the cameras of the pair are visible to each other;
an orthogonal distance acquiring unit 22 configured to acquire an orthogonal distance between any point in the point cloud data acquired by each camera in the camera pair and each camera in the camera pair;
a point location comparing unit 23, configured to compare the size of the orthogonal distance between the point location and each camera, and when it is determined that the orthogonal distance between the point location and the camera to which the point location belongs is the smallest, the point location is reserved; otherwise, deleting the point location;
and the point location segmentation unit 24 is configured to segment the reserved point location according to the camera to which the point location belongs.
In an embodiment of the present invention, the camera pair generating unit 21 specifically includes:
the point cloud data acquisition subunit is used for acquiring point cloud data through camera shooting;
the camera pair forming subunit is used for forming pairs of cameras which acquire the point cloud data and are overlapped with each other; each camera is respectively combined with the camera with the point cloud data superposed with each other to form a different camera pair.
In an embodiment of the present invention, the orthogonal distance obtaining unit 22 specifically includes:
an orthogonal direction acquiring subunit, configured to take a direction in which an orthogonal distance between two cameras in each of the camera pairs is smallest as an orthogonal direction of the two cameras;
and the orthogonal distance acquisition subunit is used for calculating the orthogonal distance between any point position and each camera in the camera pair in the orthogonal direction.
In an embodiment of the present invention, the point location comparing unit 23 specifically includes:
the orthogonal distance calculation subunit is used for respectively acquiring the orthogonal distance between each point position and each camera in the corresponding camera pair;
and the orthogonal distance comparison subunit is used for comparing the size of the orthogonal distance between the point position and each camera.
In an embodiment of the present invention, the point location segmentation unit 24 specifically includes:
a home location subunit, configured to respectively and correspondingly home a camera to each of the reserved point locations;
and the segmentation subunit is used for segmenting adjacent point positions according to the point positions to which the two cameras in the camera pair belong.
In an embodiment of the present invention, there is also provided a computer-readable storage medium storing a computer program for executing the above-mentioned method.
In one embodiment of the present invention, there is also provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
the processor is used for reading the executable instructions from the memory and executing the instructions to realize the method.
Fig. 6 is a schematic structural diagram of an embodiment of an electronic device according to the present invention. As shown in fig. 6, the electronic device includes one or more processors and memory.
The processor may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions.
The memory may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by a processor to implement the adjacent point location segmentation methods of the various embodiments of the invention described above and/or other desired functions.
In one example, the electronic device may further include: an input device and an output device, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device may also include, for example, a keyboard, a mouse, and the like.
The output device may output various information including the determined distance information, direction information, and the like to the outside. The output devices may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for simplicity, only some of the components of the electronic device that are relevant to the present invention are shown in fig. 6, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device may include any other suitable components, depending on the particular application.
In addition to the above-described methods and apparatus, embodiments of the present invention may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the adjacent point location segmentation method according to the various embodiments of the present invention described above in this specification.
The computer program product may write program code for carrying out operations of embodiments of the present invention in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Based on the scheme provided by the embodiments of the invention, the main beneficial effects are as follows:
A Thiessen polygon (Voronoi) segmentation method based on orthogonal constraints is provided for segmenting the point locations of adjacent point clouds: pair the cameras that acquire the point cloud data to obtain camera pairs, the cameras of each pair being visible to each other; acquire the orthogonal distance between any point location in the point cloud data acquired by each camera in a pair and each camera in that pair; compare the orthogonal distances from the point location to each camera, reserving the point location when its orthogonal distance to the camera to which it belongs is the minimum, and otherwise deleting it; and segment the reserved point locations according to the cameras to which they belong.
According to this point location segmentation scheme, each point location is assigned to a specific control point and no overlapping area is reserved: any point location in the reconstructed point cloud belongs definitely and uniquely to the reconstruction result of a certain point location. A reserved point location is nearest, in the orthogonal direction, to the control point of its own point cloud model. By constraining the point locations with orthogonality, the segmentation result follows the orthogonal coordinate axes, which is cleaner and clearer, avoids messy segmentation, and displays better in indoor space.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the method embodiments may be implemented by program instructions executed on related hardware. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, and magnetic or optical disks.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The method and apparatus of the present invention may be implemented in a number of ways. For example, the methods and apparatus of the present invention may be implemented in software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustrative purposes only, and the steps of the method of the present invention are not limited to the order specifically described above unless specifically indicated otherwise. Furthermore, in some embodiments, the present invention may also be embodied as a program recorded in a recording medium, the program including machine-readable instructions for implementing a method according to the present invention. Thus, the present invention also covers a recording medium storing a program for executing the method according to the present invention.
The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or to limit the invention to the form disclosed. Many modifications and variations will be apparent to practitioners skilled in this art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, and to enable others of ordinary skill in the art to understand the invention in its various embodiments, with the various modifications suited to the particular use contemplated.

Claims (11)

1. A method for segmenting point locations of adjacent points, the method comprising:
pairwise pairing cameras for acquiring point cloud data to obtain a camera pair; the cameras of the pair are visible to each other;
acquiring the orthogonal distance between any point in the point cloud data acquired by each camera in the camera pair and each camera in the camera pair;
comparing the size of the orthogonal distance between the point location and each camera, and retaining the point location when the orthogonal distance between the point location and the camera to which it belongs is determined to be the smallest; otherwise, deleting the point location;
dividing the reserved point positions according to the cameras to which the reserved point positions belong;
wherein, the pair of cameras that will acquire the point cloud data, obtains the camera pair, includes:
shooting through a camera to obtain point cloud data;
pairing the cameras whose acquired point cloud data overlap with each other; each camera forms a different camera pair with each camera whose point cloud data overlaps with its own.
2. The method of claim 1, wherein said obtaining orthogonal distances from each camera of the pair of cameras of any point location in the point cloud data obtained by each camera of the pair of cameras comprises:
taking, for each camera pair, the direction in which the orthogonal distance between the two cameras is smallest as the orthogonal direction of the two cameras;
and calculating the orthogonal distance between any point position and each camera in the camera pair in the orthogonal direction.
3. The method of claim 2, wherein said comparing the size of the orthogonal distance of the point location from each camera comprises:
respectively acquiring the orthogonal distance between each point location and each camera in the corresponding camera pair;
and comparing the size of the orthogonal distance between the point position and each camera.
4. The method of claim 1, wherein the method further comprises:
traversing each point location in each of the camera pairs;
attributing each reserved point location to one and only one of the cameras.
5. The method of claim 1, wherein the segmenting the remaining points according to the camera to which the points belong comprises:
each reserved point position is respectively and correspondingly attributed to a camera;
and segmenting adjacent point positions according to the point positions to which the two cameras in the camera pair belong.
6. An apparatus for segmenting point locations of adjacent points, comprising:
the camera pair generation unit is used for pairing the cameras which acquire the point cloud data to obtain a camera pair; the cameras of the pair are visible to each other;
an orthogonal distance acquisition unit, configured to acquire an orthogonal distance between any point location in the point cloud data acquired by each camera in the camera pair and each camera in the camera pair;
the point location comparing unit is used for comparing the size of the orthogonal distance between the point location and each camera, and retaining the point location when the orthogonal distance between the point location and the camera to which it belongs is determined to be the smallest; otherwise, deleting the point location;
the point location segmentation unit is used for segmenting the reserved point locations according to the cameras to which the reserved point locations belong;
the camera pair generation unit specifically includes:
the point cloud data acquisition subunit is used for acquiring point cloud data through camera shooting;
the camera pair forming subunit is used for forming pairs of cameras which acquire the point cloud data and are overlapped with each other; each camera is respectively combined with the camera with the point cloud data superposed with each other to form a different camera pair.
7. The apparatus according to claim 6, wherein the orthogonal distance obtaining unit specifically includes:
an orthogonal direction acquiring subunit, configured to take a direction in which an orthogonal distance between two cameras in each of the camera pairs is smallest as an orthogonal direction of the two cameras;
and the orthogonal distance acquisition subunit is used for calculating the orthogonal distance between any point position and each camera in the camera pair in the orthogonal direction.
8. The apparatus of claim 6, wherein the point location comparing unit specifically includes:
the orthogonal distance calculation subunit is used for respectively acquiring the orthogonal distance between each point position and each camera in the corresponding camera pair;
and the orthogonal distance comparison subunit is used for comparing the size of the orthogonal distance between the point position and each camera.
9. The apparatus according to claim 6, wherein the point location segmentation unit specifically includes:
a home location subunit, configured to respectively and correspondingly home a camera to each of the reserved point locations;
and the segmentation subunit is used for segmenting adjacent point positions according to the point positions to which the two cameras in the camera pair belong.
10. A computer-readable storage medium, in which a computer program is stored, characterized in that the computer program is adapted to perform the method of any of the preceding claims 1-5.
11. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing the processor-executable instructions;
the processor is used for reading the executable instructions from the memory and executing the instructions to realize the method of any one of the claims 1 to 5.
CN202010929302.2A 2020-09-07 2020-09-07 Adjacent point segmentation method and device Active CN112037336B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010929302.2A CN112037336B (en) 2020-09-07 2020-09-07 Adjacent point segmentation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010929302.2A CN112037336B (en) 2020-09-07 2020-09-07 Adjacent point segmentation method and device

Publications (2)

Publication Number Publication Date
CN112037336A CN112037336A (en) 2020-12-04
CN112037336B true CN112037336B (en) 2021-08-31

Family

ID=73584958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010929302.2A Active CN112037336B (en) 2020-09-07 2020-09-07 Adjacent point segmentation method and device

Country Status (1)

Country Link
CN (1) CN112037336B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105989604A (en) * 2016-02-18 2016-10-05 合肥工业大学 Target object three-dimensional color point cloud generation method based on KINECT
CN111009002A (en) * 2019-10-16 2020-04-14 贝壳技术有限公司 Point cloud registration detection method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0221144D0 (en) * 2002-09-12 2002-10-23 Snell & Wilcox Ltd Image processing using vectors
CN105844629B (en) * 2016-03-21 2018-12-18 河南理工大学 A kind of large scene City Building facade point cloud automatic division method
CN107944383A (en) * 2017-11-21 2018-04-20 航天科工智慧产业发展有限公司 Building roof patch division method based on three-dimensional Voronoi diagram
CN110227876B (en) * 2019-07-15 2021-04-20 西华大学 Robot welding path autonomous planning method based on 3D point cloud data


Also Published As

Publication number Publication date
CN112037336A (en) 2020-12-04

Similar Documents

Publication Publication Date Title
US11238644B2 (en) Image processing method and apparatus, storage medium, and computer device
CN110009727B (en) Automatic reconstruction method and system for indoor three-dimensional model with structural semantics
JP6745328B2 (en) Method and apparatus for recovering point cloud data
CN110335316B (en) Depth information-based pose determination method, device, medium and electronic equipment
CN114902294B (en) Fine-grained visual recognition in mobile augmented reality
US20240062488A1 (en) Object centric scanning
US20120075433A1 (en) Efficient information presentation for augmented reality
EP3326156B1 (en) Consistent tessellation via topology-aware surface tracking
CN110648363A (en) Camera posture determining method and device, storage medium and electronic equipment
Zhao et al. A novel three-dimensional object detection with the modified You Only Look Once method
CN111986214B (en) Construction method of pedestrian crossing in map and electronic equipment
US10217224B2 (en) Method and system for sharing-oriented personalized route planning via a customizable multimedia approach
CN112907569B (en) Head image region segmentation method, device, electronic equipment and storage medium
JP2024508024A (en) Image data processing method and device
CN107223245A (en) A kind of data display processing method and device
CN113298708A (en) Three-dimensional house type generation method, device and equipment
CN115375836A (en) Point cloud fusion three-dimensional reconstruction method and system based on multivariate confidence filtering
Xiao et al. Coupling point cloud completion and surface connectivity relation inference for 3D modeling of indoor building environments
CN113129362B (en) Method and device for acquiring three-dimensional coordinate data
US20220358694A1 (en) Method and apparatus for generating a floor plan
CN114651246B (en) Method for searching for image using rotation gesture input
CN113284237A (en) Three-dimensional reconstruction method, system, electronic equipment and storage medium
CN111739134B (en) Model processing method and device for virtual character and readable storage medium
CN113763419A (en) Target tracking method, target tracking equipment and computer-readable storage medium
CN112037336B (en) Adjacent point segmentation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210413

Address after: 100085 Floor 101 102-1, No. 35 Building, No. 2 Hospital, Xierqi West Road, Haidian District, Beijing

Applicant after: Seashell Housing (Beijing) Technology Co.,Ltd.

Address before: Unit 05, room 112, 1st floor, office building, Nangang Industrial Zone, economic and Technological Development Zone, Binhai New Area, Tianjin 300457

Applicant before: BEIKE TECHNOLOGY Co.,Ltd.

GR01 Patent grant