CN115908696A - Data point visibility characterization method and system for 3D modeling

Data point visibility characterization method and system for 3D modeling

Info

Publication number
CN115908696A
Authority
CN
China
Prior art keywords
point
cloud data
point cloud
data
visibility
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211256178.3A
Other languages
Chinese (zh)
Inventor
肖东晋
张立群
刘顺宗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alva Systems
Original Assignee
Alva Systems
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alva Systems filed Critical Alva Systems
Priority to CN202211256178.3A
Publication of CN115908696A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

An embodiment of the present disclosure provides a point cloud data visibility characterization method for 3D modeling, including: acquiring multi-frame images and obtaining corresponding point cloud data based on them, where the multi-frame images come from a computer vision system or a lidar; obtaining the visible principal direction of each data point in the point cloud data based on data point visibility calculation and fitting of the data point's full-angle visibility distribution; obtaining the visible direction range of the point cloud data based on the visible principal direction of each data point and using that range as feature data of the point cloud data; and characterizing the point cloud data with fusion information obtained by fusing the feature data and the visibility information of the point cloud data, the visibility information comprising the 3D coordinates and the local descriptors of the point cloud data. Corresponding systems, electronic devices, and computer-readable storage media are also provided. Adding visibility information to the point cloud data significantly improves its usability.

Description

Data point visibility characterization method and system for 3D modeling
Technical Field
The disclosure relates to the technical fields of robotics and computer vision, and in particular to a point cloud data visibility characterization method and system for 3D modeling.
Background
In recent years, 3D modeling techniques have developed rapidly in the fields of robotics and computer vision. Both robot vision and SLAM inevitably involve a series of steps such as sensing the surrounding environment by monocular (or binocular) vision or laser (or microwave) radar detection, generating and recording the three-dimensional coordinates of a large number of 3D data points, and forming a point cloud. Current point cloud data representations emphasize two kinds of information: the 3D coordinates of the point cloud data and the descriptors of the point cloud data. The former captures the spatial relationships among the data points, and the latter captures the local texture information at each data point's position. Both play an important role in subsequent applications of the point cloud data.
However, this point cloud data characterization has a significant drawback: the "visibility" of the point cloud data is not effectively characterized. The visibility of a point in space is related both to the point's position and to the line-of-sight direction, and position information alone is not sufficient to determine whether the point is visible; in fact, the point's local reflection state, color, texture and so on also have a significant influence on its visibility. As a result, many data points cannot be found by vision or radar detection even when the line of sight is unobstructed, so the point cloud data cannot completely reflect the observed reality, and the usability and application efficiency of the data are greatly reduced.
Disclosure of Invention
In the present disclosure, the information involved in the point cloud data extraction process is analyzed in depth to obtain the visible direction range of each point in the point cloud data. This range is stored as a feature of the point cloud data, together with the 3D coordinates and local descriptors of the data, for subsequent processing. Adding visibility information to the point cloud data in this way significantly improves its usability.
According to a first aspect of the present disclosure, there is provided a point cloud data visibility characterization method for 3D modeling, comprising:
s1, acquiring multi-frame images and acquiring corresponding point cloud data based on the multi-frame images; wherein the multi-frame image is from a computer vision system or a lidar;
s2, obtaining a visible main direction of each data point in the point cloud data based on data point visibility calculation and data point full-angle visibility distribution fitting;
s3, acquiring a visible direction range of the point cloud data based on the visible main direction of each data point, and taking the visible direction range as feature data of the point cloud data; and fusion information obtained by fusing the feature data and the visibility information of the point cloud data is used for representing the point cloud data, and the visibility information comprises the 3D coordinates and the local description words of the point cloud data.
The above-described aspect and any possible implementation further provide an implementation in which S2 includes:
S21, collecting the actually measured visible angle of each data point in the point cloud data;
S22, calculating the angle visibility of each data point in the point cloud data at each actually measured visible angle, and obtaining a full-angle visibility distribution fitting curve of the data point based on the angle visibility of the data point at each actually measured visible angle;
S23, extracting the visible principal direction of the point cloud data points based on the full-angle visibility distribution fitting curve of the data points.
The above-described aspect and any possible implementation further provide an implementation in which, for computer vision, S21 includes:
S211, determining whether a certain point cloud data point enters a certain frame image and can be detected at a certain measured angle;
S212, if the point cloud data point enters a certain frame image and can be detected, generating a local descriptor of the point cloud data point in that frame image;
S213, traversing each frame of the other multi-frame images, and respectively generating a plurality of other local descriptors of the point cloud data point in all the other images it enters;
S214, determining the angle visibility of the point cloud data point based on the sum of the matching scores between the local descriptor and each of the other local descriptors.
The above-mentioned aspect and any possible implementation further provide an implementation in which the local descriptors in S212 and S213 are obtained as follows:
processing each frame of the input images, the processing including numbering the image frames and denoting the k-th frame image as I_k;
performing corner detection on the k-th frame image I_k to obtain the corner points on each frame image, denoted A_k(1), A_k(2), …, A_k(n); the corner detection method is FAST, Harris or Shi-Tomasi;
extracting a normalized feature descriptor for each corner point from its neighbourhood, giving F_k(1), F_k(2), …, F_k(n); the method for obtaining the normalized feature descriptors is SIFT, SURF or ORB.
The above-mentioned aspect and any possible implementation further provide an implementation in which, for radar detection, S21 includes:
S211', determining the radar echo intensity of the point cloud data point at a certain measured visible angle;
S212', judging whether the radar echo intensity exceeds a set first threshold value;
S213', when the radar echo intensity is smaller than the set first threshold value, the angle visibility of the data point is zero; and when the radar echo intensity is larger than or equal to the set first threshold value, the angle visibility of the data point is the radar echo intensity of the point cloud data point.
The above aspect and any possible implementation further provide an implementation in which, in S22, the full-angle visibility distribution fitting curve of the data points is obtained from the angular visibilities of the data points at the multiple measured visible angles by curve fitting and interpolation.
The above-described aspect and any possible implementation further provide an implementation in which S23 includes: calculating the maximum value of the full-angle visibility distribution fitting curve of the data points, and taking the position corresponding to the maximum value as the visible principal direction of the point cloud data points.
According to a second aspect of the present disclosure, there is provided a point cloud data visibility characterization system for 3D modeling, comprising:
the data acquisition and conversion module (101) is used for acquiring multi-frame images and acquiring corresponding point cloud data based on the multi-frame images;
a data point visibility information acquisition module (102) for obtaining a visible principal direction of each data point in the point cloud data based on data point visibility calculation and data point full-angle visibility distribution fitting;
a feature fusion and characterization module (103) for obtaining a visible direction range of the point cloud data based on the visible main direction of each data point, and using the visible direction range as feature data of the point cloud data; and fusing the characteristic data and the visibility information of the point cloud data to obtain fusion information, wherein the fusion information is used for representing the point cloud data.
According to a third aspect of the present disclosure, there is provided an electronic device comprising a processor and a memory, the memory storing a plurality of instructions, the processor being configured to read the instructions and to perform the method according to the first aspect.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium storing a plurality of instructions readable by a processor and performing the method of the first aspect.
The invention has the beneficial effects that:
based on the data point visibility characterization method, the data point visibility characterization system, the electronic equipment and the computer-readable storage medium, the following beneficial effects are achieved:
the information involved in the point cloud data extraction process is deeply analyzed, the visible direction range of each point in the point cloud data is obtained, the range is used as the characteristic and the visibility information of the point cloud data and is stored together with the 3D coordinates and the local descriptors of the data for subsequent processing, and the usability of the point cloud data in 3D modeling is remarkably improved.
It should be understood that the statements herein reciting aspects are not intended to limit the critical or essential features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. The accompanying drawings are included to provide a further understanding of the present disclosure, and are not intended to limit the disclosure thereto, and the same or similar reference numerals will be used to indicate the same or similar elements, where:
FIG. 1 shows a flow diagram of a point cloud data visibility characterization method for 3D modeling according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a method of obtaining a visible principal direction for each data point in the point cloud data based on a data point visibility calculation and a data point full-angle visibility distribution fit in accordance with an embodiment of the disclosure;
FIG. 3 illustrates a flow chart of a method of collecting measured angles for each data point in point cloud data for computer vision and calculating the visibility of the data point angle based on the measured angles, according to an embodiment of the present disclosure;
fig. 4 illustrates a flow chart of a method for collecting measured angles for each data point in point cloud data for radar detection and calculating the visibility of the data point angle based on the measured angles, according to an embodiment of the disclosure.
FIG. 5 illustrates a point cloud data visibility characterization system architecture diagram for 3D modeling according to an embodiment of the present disclosure.
Fig. 6 shows a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
In addition, the term "and/or" herein only describes an association relationship between associated objects and indicates that three kinds of relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Example one
As shown in Fig. 1, a point cloud data visibility characterization method for 3D modeling comprises:
S1, acquiring multi-frame images and obtaining corresponding point cloud data based on the multi-frame images;
in this example, S1 may be implemented in several ways as follows:
(1) Obtaining three-dimensional point cloud data from multiple (multi-frame) two-dimensional images on a computer platform through camera calibration, stereo matching and three-dimensional reconstruction;
(2) Capturing multi-frame images around a target object bearing a BCH code marker with an ordinary camera to construct point cloud data (and a 3D model based on the point cloud data), as follows:
placing the target object on the BCH code, taking pictures at different angles, and storing a depth map and an RGB map (or grayscale map) for each shot;
extracting and matching feature points on two of the captured RGB images; using the camera intrinsics and the depth-map point clouds corresponding to the matched 2D points, obtaining the extrinsic parameters; converting the point cloud of one image into the coordinate system of the other and stitching them; then mapping the point clouds of all the other images into the camera coordinate system of the first image and stitching them to obtain the point cloud data (see the sketch below).
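The following is a minimal sketch, in Python with NumPy, of the back-projection, pose-estimation and stitching step just described. It is an illustration only: the function names, the pinhole intrinsics matrix K, and the SVD-based (Kabsch) rigid alignment of the matched 3D points are assumptions rather than the patent's prescribed implementation.

    import numpy as np

    def backproject(uv, depth, fx, fy, cx, cy):
        """Lift matched pixel coordinates (N, 2) to 3D camera coordinates using the depth map."""
        u, v = uv[:, 0], uv[:, 1]
        z = depth[v.astype(int), u.astype(int)]
        return np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)

    def rigid_transform(src, dst):
        """Least-squares rotation R and translation t such that dst ~ R @ src + t (Kabsch/SVD)."""
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
        R = Vt.T @ S @ U.T
        t = dst.mean(0) - R @ src.mean(0)
        return R, t

    def stitch(cloud_ref, cloud_other, uv_ref, uv_other, depth_ref, depth_other, K):
        """Map cloud_other into the reference camera frame and concatenate the two point clouds."""
        fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        p_ref = backproject(uv_ref, depth_ref, fx, fy, cx, cy)        # matched 2D points, reference view
        p_other = backproject(uv_other, depth_other, fx, fy, cx, cy)  # same matches, other view
        R, t = rigid_transform(p_other, p_ref)                        # extrinsics between the two views
        return np.vstack([cloud_ref, cloud_other @ R.T + t])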
S2, obtaining a visible principal direction of each data point in the point cloud data based on data point visibility calculation and data point full-angle visibility distribution fitting;
S3, acquiring a visible direction range of the point cloud data based on the visible principal direction of each data point, and taking the visible direction range as feature data of the point cloud data; and fusing the feature data and the visibility information of the point cloud data to obtain fusion information, wherein the fusion information is used for characterizing the point cloud data (an illustrative sketch of such a fused per-point record is given below).
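As an illustration only, the fused per-point record of S3 might look like the following minimal sketch. The field names are assumptions, the viewing direction is reduced to a single scalar angle for simplicity (in general it is a direction in 3D space), and the two Gaussian parameters anticipate the visibility distribution fitted later in this embodiment.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class FusedPoint:
        xyz: np.ndarray             # 3D coordinates of the data point
        descriptor: np.ndarray      # local descriptor extracted at the data point
        principal_direction: float  # visible principal direction (fitted mean a)
        sigma_sq: float             # spread sigma^2 of the fitted visibility distribution

        def visibility(self, angle: float) -> float:
            """Evaluate the fitted Gaussian visibility distribution at a given viewing angle."""
            return float(np.exp(-(angle - self.principal_direction) ** 2 / (2.0 * self.sigma_sq)))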
The above aspects and any possible implementations further provide an implementation in which the multi-frame images come from the input of a computer vision system and are captured in succession, or the multi-frame images come from the input of radar detection.
As shown in Fig. 2, according to the above-described aspect and any possible implementation, in a further implementation S2 includes:
S21, collecting the actually measured visible angle of each data point in the point cloud data, and calculating the angle visibility of the data point based on the measured angle;
In this embodiment, the angle visibility of a data point, calculated from the measured-angle data, characterizes how visible the data point is when viewed from that measured angle. In practical applications, both vision and radar technologies repeatedly "see" the same point cloud data point at several different angles. "Seen repeatedly" means the following:
(1) For computer vision, the point cloud data point enters more than one image frame, is detected by the key point detection module of the vision system in each such frame, and generates a local descriptor; at the same time, the matching scores obtained by matching the local descriptors of this point across different frames are high, so the point is "seen repeatedly";
(2) For radar, it usually means that the radar echo intensity of the point cloud data point at several different sampling angles exceeds a preset threshold, so the point is "seen repeatedly".
As shown in Fig. 3, for computer vision, S21 includes:
S211, determining whether a certain point cloud data point enters a certain frame image and can be detected at a certain measured angle;
S212, if the point cloud data point enters a certain frame image and can be detected, generating a local descriptor of the point cloud data point in that frame image;
S213, repeating S211-S212 so that the point cloud data point traverses every other frame of the multi-frame images, generating the other local descriptors of the point cloud data point in all the other images it enters;
in this embodiment, the manner of acquiring the local descriptor in S212 and S213 includes:
processing each frame of input image: for convenience, the processing of this embodiment includes digitally marking the image frames, with the k-th frame image being marked as I k
For the k frame image I k Performing angular point detection to obtain angular points on each frame image, and setting the angular points on each frame image as A k (1), k (2),…,A k (n); methods of such corner detection include, but are not limited to, FAST, harris, shi-Tomasi, and the like;
extracting normalized feature descriptors of each corner point from its neighbourhood, F k (1),F k (2),…,F k (n) the method for obtaining the normalized feature descriptor includes, but is not limited to, SIFT, SURF, ORB, etc.
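A minimal per-frame sketch of this step is given below, assuming OpenCV is available (the patent names the algorithms but not a library); FAST corners with ORB descriptors are one of the combinations listed above.

    import cv2

    def extract_corners_and_descriptors(frame_gray):
        """Detect corner points A_k(i) with FAST and compute ORB descriptors F_k(i) in their neighbourhoods."""
        fast = cv2.FastFeatureDetector_create(threshold=20)
        corners = fast.detect(frame_gray, None)
        orb = cv2.ORB_create()
        corners, descriptors = orb.compute(frame_gray, corners)
        return corners, descriptors

    # Usage sketch: number the frames I_k and keep each frame's corners and descriptors.
    # frames = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in image_paths]   # image_paths is assumed
    # per_frame = [extract_corners_and_descriptors(f) for f in frames]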
S214, determining the angle visibility of the point cloud data point based on the sum of the matching scores between the local descriptor and each of the other local descriptors.
This embodiment takes the k-th frame image I_k as an example. Assume that the key point of interest on I_k corresponds to the corner point A_k(m), i.e. key point m, that its feature descriptor is F_k(m), and that the pose (i.e. the angle in global coordinates) corresponding to I_k is O_k. Then at pose O_k the visibility of key point m is V(O_k), whose defining equation appears only as an image in the original publication; per S214, it is built from the matching scores between F_k(m) and the descriptors of the same key point in the other frames, and the parameter N in it is determined according to whether the key point m can be detected on the frame image.
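Because the formula is not reproduced in the text, the sketch below encodes the prose definition of S214: summed matching scores, divided by a parameter N counting the frames on which the key point is detectable. Both the normalization by N and the Hamming-distance-based score for binary (ORB-style) descriptors are assumptions.

    import numpy as np

    def match_score(d1, d2):
        """Similarity in [0, 1] derived from the Hamming distance between two binary descriptors."""
        total_bits = 8 * d1.size
        differing_bits = int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())
        return 1.0 - differing_bits / total_bits

    def angle_visibility(descriptor_by_frame, k):
        """V(O_k) for one key point m: its descriptor in frame k matched against its descriptors in
        every other frame where it was detected, summed and divided by N (frames where it is detectable)."""
        detected = {f: d for f, d in descriptor_by_frame.items() if d is not None}
        if k not in detected or len(detected) < 2:
            return 0.0
        n = len(detected)  # parameter N
        return sum(match_score(detected[k], d) for f, d in detected.items() if f != k) / n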
As shown in Fig. 4, for radar detection, S21 includes:
S211', determining the radar echo intensity of the point cloud data point at a certain measured angle;
S212', judging whether the radar echo intensity exceeds a set first threshold value;
S213', when the radar echo intensity does not exceed the set first threshold value, the angle visibility of the data point is zero; and when the radar echo intensity exceeds the set first threshold value, the angle visibility of the data point is the radar echo intensity of the point cloud data point.
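A direct transcription of S211'-S213' as code follows; the threshold value itself is application-specific and not given in the patent.

    def radar_angle_visibility(echo_intensity: float, first_threshold: float) -> float:
        """Angle visibility of a point cloud data point from a single radar return: zero when the echo
        intensity does not exceed the set first threshold, otherwise the echo intensity itself."""
        return echo_intensity if echo_intensity > first_threshold else 0.0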
S22, calculating the angle visibility of each data point in the point cloud data at each actually measured visible angle, and obtaining the full-angle visibility distribution fitting curve of the data point based on the angle visibility of the data point at each actually measured visible angle;
In this embodiment, each point in the point cloud data thus carries two pieces of information, a "visible principal direction" and a "visibility distribution". From the visible principal direction, a user of the point cloud data knows the optimal line-of-sight direction to the point when the line of sight is unobstructed; from the visibility distribution, the user knows the actual visibility of the point at different viewing angles. This information is very important for the smooth implementation of path planning, scene matching, obstacle avoidance and similar technologies in robotics and SLAM applications.
In this embodiment, obtaining the full-angle visibility distribution fitting curve of the data points in S22, based on the angular visibilities of the data points at multiple angles, is implemented by curve fitting and interpolation.
The above aspects and any possible implementations further provide an implementation in which the curve fitting fits the data points (X_k, Y_k) with a symmetric bell-shaped curve, where X_k is an angle at which the point cloud data point is visible and Y_k is the angle visibility of the data point at that angle. The full-angle visibility distribution fitting curve of the data points is a Gaussian curve, whose mathematical expression is
g(x) = exp(−(x − a)² / (2σ²))        (2)
where a and σ² are, respectively, the mean and the variance of the fitted distribution.
S23, extracting the visible principal direction of the point cloud data points based on the full-angle visibility distribution fitting curve of the data points.
The maximum of the full-angle visibility distribution fitting curve of the data points is calculated, and the position corresponding to the maximum is the visible principal direction of the point cloud data points, i.e.
argmax_x g(x)        (3)
If a Gaussian curve is used, the dominant direction is the parameter a.
In this embodiment, let the measured angles at which the data point can be "seen" be X_k and the corresponding visibilities be Y_k. The data points (X_k, Y_k), k = 1, 2, …, n (assuming there are n measured angles), are fitted with a symmetric "bell-shaped" curve, i.e. the following optimization is performed:
min_g Σ_k (Y_k − g(X_k))²        (4)
Taking a Gaussian curve as an example, the target of the fitting optimization is the parameters a and σ² with maximum likelihood.
In the present embodiment, as described above, the data (O_1, V(O_1)), (O_2, V(O_2)), …, (O_n, V(O_n)) are used to fit the curve g(x) of equation (2).
The optimization problem
min_{a, σ²} Σ_k (V(O_k) − g(O_k))²
is then solved, thereby obtaining the optimal parameters a_opt(m) and σ²_opt(m).
Here a_opt(m) is the visible principal direction of the key point m, and the curve
g(x) = exp(−(x − a_opt(m))² / (2σ²_opt(m)))
is the visibility distribution of the key point m.
The above-described aspects and any possible implementations further provide an implementation in which the visibility information includes 3D coordinates and a local descriptor of the point cloud data.
This embodiment takes the field of computer vision as an example. Here the input to the computer vision system is a number of image frames taken in succession, all of which contain a key point of interest (e.g., a significant mark on a wall). In a real environment, this point is not visible from every angle. It is therefore necessary to calculate the visible principal direction and visibility distribution of the key point, so as to obtain the important information of whether the key point can actually be seen from different angles and make its representation in the computer vision system closer to the real environment.
Each frame of the input images is processed. For convenience, the image frames are numbered, and the k-th frame image is designated I_k. Corner detection is performed on I_k by methods including, but not limited to, FAST, Harris, Shi-Tomasi, etc., yielding the corner points on each frame image; let these corner points be A_k(1), A_k(2), …, A_k(n). A normalized feature descriptor is extracted for each corner point from its neighbourhood, giving F_k(1), F_k(2), …, F_k(n); methods for obtaining the feature descriptors include, but are not limited to, SIFT, SURF, ORB, etc.
Consider the k-th frame image I_k. Assume that the key point of interest on this frame image is m, that its feature descriptor is F_k(m), and that the pose (i.e. the angle in global coordinates) corresponding to this frame image is O_k. Then at this angle the visibility of key point m is V(O_k), computed as described above from the matching scores; the parameter N is determined according to whether the m-th key point can be detected on the image frame.
The data (O_1, V(O_1)), (O_2, V(O_2)), …, (O_n, V(O_n)) are used to fit the curve g(x) of equation (2), that is, the optimization problem
min_{a, σ²} Σ_k (V(O_k) − g(O_k))²
is solved. This gives the optimal parameters a_opt(m) and σ²_opt(m). Here a_opt(m) is the visible principal direction of key point m, and the curve
g(x) = exp(−(x − a_opt(m))² / (2σ²_opt(m)))
is the visibility distribution of key point m.
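Below is a minimal sketch of this fitting step using SciPy's least-squares curve fitting (SciPy is an assumption; the patent only specifies fitting a symmetric bell-shaped or Gaussian curve). The fitted mean a_opt is the visible principal direction, and evaluating the fitted curve gives the visibility distribution.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, a, sigma_sq):
        """Symmetric bell-shaped curve g(x) = exp(-(x - a)^2 / (2 * sigma^2)) from equation (2)."""
        return np.exp(-(x - a) ** 2 / (2.0 * sigma_sq))

    def fit_visibility_distribution(angles, visibilities):
        """Least-squares fit of the samples (O_k, V(O_k)); returns (a_opt, sigma_sq_opt)."""
        angles = np.asarray(angles, dtype=float)
        visibilities = np.asarray(visibilities, dtype=float)
        # Crude initial guess: principal direction near the best-observed angle.
        # Note: no bounds are set here; a production fit would constrain sigma^2 > 0.
        p0 = [float(angles[np.argmax(visibilities)]), float(np.var(angles)) + 1e-6]
        (a_opt, sigma_sq_opt), _ = curve_fit(gaussian, angles, visibilities, p0=p0)
        return a_opt, sigma_sq_opt

    # The visible principal direction is argmax_x g(x) = a_opt, and gaussian(x, a_opt, sigma_sq_opt)
    # is the visibility distribution of the key point.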
As shown in Fig. 5, according to a second aspect of the present disclosure, there is provided a point cloud data visibility characterization system for 3D modeling, comprising:
the data acquisition and conversion module 101 is configured to acquire a plurality of frames of images and obtain corresponding point cloud data based on the plurality of frames of images;
the data point visibility information acquisition module 102 is configured to obtain a visible principal direction of each data point in the point cloud data based on data point visibility calculation and data point full-angle visibility distribution fitting; and
a feature fusion and characterization module 103, configured to obtain a visible direction range of the point cloud data based on the visible principal direction of each data point, and use the visible direction range as feature data of the point cloud data; fusion information obtained by fusing the feature data and the visibility information of the point cloud data is used for characterizing the point cloud data.
As shown in Fig. 6, the present invention further provides an electronic device, which includes a processor 301 and a memory 302 connected to the processor 301, where the memory 302 stores a plurality of instructions that can be loaded and executed by the processor, so that the processor can perform the method described in the embodiment above.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (10)

1. A point cloud data visibility characterization method for 3D modeling, comprising:
S1, acquiring a multi-frame image, and acquiring corresponding point cloud data based on the multi-frame image; wherein the multi-frame image is from a computer vision system or a lidar;
S2, obtaining a visible main direction of each data point in the point cloud data based on data point visibility calculation and data point full-angle visibility distribution fitting;
S3, acquiring a visible direction range of the point cloud data based on the visible main direction of each data point, and taking the visible direction range as feature data of the point cloud data; and fusion information obtained by fusing the feature data and the visibility information of the point cloud data is used for representing the point cloud data.
2. The method of claim 1, wherein the step S2 comprises:
S21, collecting the actually measured visible angle of each data point in the point cloud data;
S22, calculating the angle visibility of each data point in the point cloud data at each actually measured visible angle, and obtaining a full-angle visibility distribution fitting curve of the data point based on the angle visibility of the data point at each actually measured visible angle;
S23, extracting the visible main direction of the point cloud data points based on the full-angle visibility distribution fitting curve of the data points.
3. The method of claim 2, wherein, for computer vision, the step S21 comprises:
S211, determining whether a certain point cloud data point enters a certain frame image and can be detected at a certain measured angle;
S212, if the point cloud data point enters a certain frame image and can be detected, generating a local descriptor of the point cloud data point in that frame image;
S213, traversing each frame of the other multi-frame images, and respectively generating a plurality of other local descriptors of the point cloud data point in all the other images it enters;
S214, determining the angle visibility of the point cloud data point based on the sum of the matching scores between the local descriptor and each of the other local descriptors.
4. The method of claim 3, wherein the local descriptors in S212 and S213 are obtained by:
processing each frame of the input images, the processing including numbering the image frames and denoting the k-th frame image as I_k;
performing corner detection on the k-th frame image I_k to obtain the corner points on each frame image, denoted A_k(1), A_k(2), …, A_k(n); the corner detection method is FAST, Harris or Shi-Tomasi;
extracting a normalized feature descriptor of each corner point from its neighbourhood, giving F_k(1), F_k(2), …, F_k(n); the method for obtaining the normalized feature descriptors is SIFT, SURF or ORB.
5. The method of claim 2, wherein, for radar detection, the step S21 comprises:
S211', determining the radar echo intensity of the point cloud data point at a certain actually measured visible angle;
S212', judging whether the radar echo intensity exceeds a set first threshold value;
S213', when the radar echo intensity is smaller than the set first threshold value, the angle visibility of the data point is zero; and when the radar echo intensity is larger than or equal to the set first threshold value, the angle visibility of the data point is the radar echo intensity of the point cloud data point.
6. The point cloud data visibility characterization method for 3D modeling according to claim 2, wherein
in S22, the full-angle visibility distribution fitting curve of the data points is obtained from the angular visibilities of the data points at the multiple measured visible angles by curve fitting and interpolation.
7. The point cloud data visibility characterization method for 3D modeling according to claim 2, wherein
S23 comprises: calculating the maximum value of the full-angle visibility distribution fitting curve of the data points, and taking the position corresponding to the maximum value as the visible main direction of the point cloud data points.
8. A point cloud data visibility characterization system for 3D modeling, for implementing the point cloud data visibility characterization method according to any one of claims 1-7, comprising:
the data acquisition and conversion module (101) is used for acquiring multi-frame images and acquiring corresponding point cloud data based on the multi-frame images;
a data point visibility information acquisition module (102) for obtaining a visible principal direction of each data point in the point cloud data based on data point visibility calculation and data point full-angle visibility distribution fitting;
a feature fusion and characterization module (103) for obtaining a visible direction range of the point cloud data based on the visible main direction of each data point, and using the visible direction range as feature data of the point cloud data; and fusion information obtained by fusing the characteristic data and the visibility information of the point cloud data is used for representing the point cloud data.
9. An electronic device comprising a processor and a memory, the memory storing a plurality of instructions, the processor configured to read the instructions and perform the method of any of claims 1-7.
10. A computer-readable storage medium storing a plurality of instructions readable by a processor and performing the method of any one of claims 1-7.
CN202211256178.3A 2022-10-13 2022-10-13 Data point visibility characterization method and system for 3D modeling Pending CN115908696A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211256178.3A CN115908696A (en) 2022-10-13 2022-10-13 Data point visibility characterization method and system for 3D modeling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211256178.3A CN115908696A (en) 2022-10-13 2022-10-13 Data point visibility characterization method and system for 3D modeling

Publications (1)

Publication Number Publication Date
CN115908696A true CN115908696A (en) 2023-04-04

Family

ID=86490366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211256178.3A Pending CN115908696A (en) 2022-10-13 2022-10-13 Data point visibility characterization method and system for 3D modeling

Country Status (1)

Country Link
CN (1) CN115908696A (en)

Similar Documents

Publication Publication Date Title
CN111325796B (en) Method and apparatus for determining pose of vision equipment
CN113362444B (en) Point cloud data generation method and device, electronic equipment and storage medium
US10032082B2 (en) Method and apparatus for detecting abnormal situation
US10706567B2 (en) Data processing method, apparatus, system and storage media
CN108898676B (en) Method and system for detecting collision and shielding between virtual and real objects
WO2021052283A1 (en) Method for processing three-dimensional point cloud data and computing device
CN112785625B (en) Target tracking method, device, electronic equipment and storage medium
JP5833507B2 (en) Image processing device
US11847796B2 (en) Calibrating cameras using human skeleton
CN114119864A (en) Positioning method and device based on three-dimensional reconstruction and point cloud matching
JP2020518918A (en) Information processing method, apparatus, cloud processing device, and computer program product
JP2016179534A (en) Information processor, information processing method, program
CN115147831A (en) Training method and device of three-dimensional target detection model
CN116188893A (en) Image detection model training and target detection method and device based on BEV
CN113705390B (en) Positioning method, positioning device, electronic equipment and storage medium
CN112509126A (en) Method, device, equipment and storage medium for detecting three-dimensional object
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN115909253A (en) Target detection and model training method, device, equipment and storage medium
CN115908696A (en) Data point visibility characterization method and system for 3D modeling
CN114419564A (en) Vehicle pose detection method, device, equipment, medium and automatic driving vehicle
CN114494857A (en) Indoor target object identification and distance measurement method based on machine vision
CN117523428B (en) Ground target detection method and device based on aircraft platform
CN116844133A (en) Target detection method, device, electronic equipment and medium
WO2024083006A1 (en) Three-dimensional imaging method and apparatus, device, and storage medium
Zeineldin et al. FRANSAC: Fast RANdom sample consensus for 3D plane segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination