CN115239951A - Wall surface segmentation and identification method and system based on point cloud data processing - Google Patents

Wall surface segmentation and identification method and system based on point cloud data processing Download PDF

Info

Publication number
CN115239951A
Authority
CN
China
Prior art keywords
point cloud
data
cloud data
points
picture
Prior art date
Legal status
Granted
Application number
CN202210641330.3A
Other languages
Chinese (zh)
Other versions
CN115239951B (en)
Inventor
陈铭昌
Current Assignee
Guangdong Linghui Construction Technology Co ltd
Original Assignee
Guangdong Linghui Construction Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Linghui Construction Technology Co ltd filed Critical Guangdong Linghui Construction Technology Co ltd
Priority to CN202210641330.3A priority Critical patent/CN115239951B/en
Publication of CN115239951A publication Critical patent/CN115239951A/en
Application granted granted Critical
Publication of CN115239951B publication Critical patent/CN115239951B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V 10/26: Image preprocessing; segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/44: Extraction of image or video features; local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/762: Recognition or understanding using pattern recognition or machine learning; clustering, e.g. of similar faces in social networks
    • G06V 10/774: Processing image or video features in feature spaces; generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 20/64: Scenes; scene-specific elements; type of objects; three-dimensional objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a wall surface segmentation and identification method and system based on point cloud data processing, and relates to the technical field of building measurement.

Description

Wall surface segmentation and identification method and system based on point cloud data processing
Technical Field
The invention relates to the technical field of building measurement, in particular to a wall surface segmentation and identification method and system based on point cloud data processing.
Background
Point cloud data is a set of vectors in a three-dimensional coordinate system: scan data is recorded in the form of points, each containing three-dimensional coordinates, and some points may also carry color information or reflection intensity information.
In the field of building measurement, actual measurement scenes involve different types of walls and house layouts, and effectively detecting the different walls and distinguishing the different types of openings in them is the basis for subsequent measurement work. Deep learning has so far been applied sparingly in this field. On the one hand, point cloud data is more complicated to process than traditional image data; on the other hand, deep learning requires large amounts of varied training data, while scanners are costly to use, a single scan takes much longer than an ordinary photograph, and data covering different regions, house types and styles must be sought out, so the high cost of data acquisition deters many enterprises. Unlike traditional image recognition, point cloud data is strongly discrete: recognition in three-dimensional space must overcome a severe data imbalance problem, while recognition in two-dimensional space must solve the problem of three-dimensional projection.
The method first uses a three-dimensional scanner set up at several measuring stations to collect three-dimensional point cloud data covering the complete wall surface to be measured; the data is preprocessed to eliminate unreasonably distributed points and a three-dimensional coordinate system is established; training samples are then computed from the measured data and the point cloud data, and a wall surface segmentation and recognition result is obtained with a trained deep learning model.
Disclosure of Invention
In order to solve the problems of the complex calculation process and low precision of current wall surface segmentation and identification methods, the invention provides a wall surface segmentation and identification method and system based on point cloud data processing.
In order to achieve the technical effects, the technical scheme of the invention is as follows:
a wall surface segmentation and identification method based on point cloud data processing comprises the following steps:
s1, loading point cloud data of a room, and performing plane fitting based on the point cloud data to obtain a fitting plane of the point cloud data;
s2, eliminating interference data in the fitting plane by using a clustering algorithm, and separating out a main plane;
s3, repeating the steps S1-S2, separating each vertical wall surface and each horizontal plane of the room, and obtaining position set data of the middle points of each vertical plane and each horizontal plane;
s4, mapping the point cloud data in a three-dimensional projection mode to obtain a plurality of basic pictures;
s5, adaptively scaling the heights of the plurality of basic pictures and extracting features from them, then manually generating labeled pictures; randomly extracting several feature-extracted labeled pictures together with their labels and recombining them to obtain new labeled pictures and new instance segmentation information, which serve as training samples;
s6, introducing an instance segmentation network, and training it with the training samples to obtain a trained instance segmentation network;
and S7, acquiring point cloud data of the room to be measured and processing it as in S1-S4 to obtain basic pictures, performing adaptive scaling and feature extraction on the basic pictures, inputting them into the trained instance segmentation network, and outputting the wall segmentation and recognition result.
This technical scheme takes into account both the discreteness of point cloud data and the severe data imbalance faced by recognition in three-dimensional space. The point cloud data is processed so that planes are first extracted from the main data by plane fitting, which accelerates subsequent computation and improves accuracy; interference data within each plane is then removed by clustering and the main plane is separated out. The point cloud data is projected into two dimensions and recognition is performed in two-dimensional space, which reduces the complexity of the data, avoids the lack of precision and slow operation of three-dimensional detection, and saves computing power. The plurality of basic pictures are preprocessed and manually annotated to generate labeled pictures; adaptive height scaling and feature extraction deliberately help the network model learn useful features and reduce the uncertainty brought by random feature learning. Several feature-extracted labeled pictures and their labels are then randomly drawn from the labeled pictures and recombined into new labeled pictures with new instance segmentation information, so that a large training set is obtained from a small amount of collected data; these samples are used to train the introduced instance segmentation network, making the training process more robust and yielding the trained network used for wall surface recognition.
Preferably, in step S1, assuming that the position coordinates of three points in the point cloud data are (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3) respectively, a plane equation is derived from the position coordinates of the three points: Ax + By + Cz + D = 0, where A, B and C are unknown coefficients, D is a constant term, and A, B and C respectively satisfy:
A=(y2-y1)*(z3-z1)-(z2-z1)*(y3-y1);
B=(x3-x1)*(z2-z1)-(x2-x1)*(z3-z1);
C=(x2-x1)*(y3-y1)-(x3-x1)*(y2-y1);
The remaining points other than (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3) are substituted into the plane equation to obtain each point's distance to the plane; the plane H for which the sum of the absolute values of the distances of all points is minimal is taken as the fitting plane of the point cloud data.
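For illustration, a minimal Python sketch of this three-point plane fit follows. The random sampling loop, the function names and the division of the residual by sqrt(A^2 + B^2 + C^2) (to turn the substituted value into a true point-to-plane distance) are assumptions added for demonstration, not the patent's own implementation.

import numpy as np

def plane_from_points(p1, p2, p3):
    # Derive A, B, C, D of Ax + By + Cz + D = 0 from three points:
    # the normal is the cross product of the two in-plane edge vectors.
    n = np.cross(p2 - p1, p3 - p1)      # normal vector (A, B, C)
    return n[0], n[1], n[2], -np.dot(n, p1)

def fit_plane(points, n_trials=200, seed=0):
    # Sample random point triples and keep the plane H that minimizes
    # the sum of absolute point-to-plane distances, as described above.
    rng = np.random.default_rng(seed)
    best, best_cost = None, np.inf
    for _ in range(n_trials):
        i, j, k = rng.choice(len(points), size=3, replace=False)
        a, b, c, d = plane_from_points(points[i], points[j], points[k])
        norm = np.sqrt(a * a + b * b + c * c)
        if norm < 1e-12:                # degenerate (collinear) triple
            continue
        cost = np.abs(points @ np.array([a, b, c]) + d).sum() / norm
        if cost < best_cost:
            best, best_cost = (a, b, c, d), cost
    return best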
Preferably, in step S2, interference data in the fitting plane is removed with the clustering algorithm and the main plane is separated as follows (a condensed code sketch is given after the distance formula below):
S21, set the neighborhood minimum distance of the fitting plane to epsilon, the minimum number of elements in a neighborhood to minPts, and the total number of points in the point cloud to num; set omega as the set of cluster labels; introduce an empty matrix distMatrix of shape [num, num]; set gamma as the set of point cloud samples not yet visited, initialized to the index set from 0 to num;
S22, select two points M and N from the point cloud data, calculate the distance MN between them, and put the distance result d into the matrix distMatrix;
S23, let count be the matching number of point M; judge whether d is smaller than epsilon, and if so, add 1 to the matching number of point M; otherwise the matching number count of point M is unchanged;
S24, select the num-1 points other than N from the point cloud data, calculate their distances to point M, and repeat step S23 until every point has been compared with M, obtaining the final matching number count of point M;
S25, judge whether the matching number count of point M is greater than minPts; if so, record the label information value of point M in omega as 1; otherwise record it as 0;
S26, add up the label information values recorded in omega; if the sum is greater than 0, execute step S27, otherwise end;
S27, select from omega any point whose label information value is 1, modify that point's label information in gamma to -1, look up the distances between that point and the other points in the matrix distMatrix, and determine the number q of sample points whose distance is smaller than epsilon;
S28, judge whether q is greater than minPts; if so, modify the position information of point M in omega to 0, and intersect the points in the sample with the unmarked points to generate a cluster;
S29, traverse all points in the neighborhood of the fitting plane to obtain one or more clusters, retain the cluster containing the most points, remove the interference data from the fitting plane, and separate out the main plane.
Preferably, the solving formula of the distance result between the two points M and N is expressed as:
d=sqrt((x1-x2)^2+(y1-y2)^2+(z1-z2)^2)
where d represents the distance between points M and N, the coordinates of point M are (x1, y1, z1), and the coordinates of point N are (x2, y2, z2).
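Steps S21-S29 amount to a DBSCAN-style density clustering. The following condensed numpy sketch captures the same idea, assuming the plane's points are given as a (num, 3) array; the cluster-growing loop is simplified relative to the stepwise bookkeeping above and, like distMatrix, uses O(num^2) memory.

import numpy as np

def separate_main_plane(points, eps, min_pts):
    # Pairwise distance matrix, as distMatrix in S21-S22.
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Core points: matching count exceeds minPts (self excluded), as S23-S25.
    core = (dist < eps).sum(axis=1) - 1 > min_pts
    labels = np.full(len(points), -1)   # -1 = unvisited / noise
    cluster_id = 0
    for seed in np.where(core)[0]:
        if labels[seed] != -1:
            continue
        labels[seed] = cluster_id
        frontier = [seed]
        while frontier:                 # grow the cluster, as S27-S28
            p = frontier.pop()
            if not core[p]:
                continue
            for q in np.where(dist[p] < eps)[0]:
                if labels[q] == -1:
                    labels[q] = cluster_id
                    frontier.append(q)
        cluster_id += 1
    if cluster_id == 0:
        return points                   # nothing dense enough to cluster
    largest = np.argmax(np.bincount(labels[labels >= 0]))
    return points[labels == largest]    # the main plane, as S29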
Preferably, in step S4,
let the three-dimensional projection (cross product) formula be cross = (ay*bz - az*by, az*bx - ax*bz, ax*by - ay*bx), where (ax, ay, az) represents a vector A and (bx, by, bz) represents a vector B; to project the wall surface coordinates along the Z axis onto the Y axis, the value data_y = (0, 1, 0) is set, and the transformation matrix trans_matrix is a matrix with 4 rows and 4 columns;
assuming the plane equation is Ax + By + Cz + D = 0, the normal vector is (A, B, C); alpha = sqrt(A^2 + B^2 + C^2) is computed to obtain the unit normal normals = array([A/alpha, B/alpha, C/alpha]); data_y and normals are substituted into the three-dimensional projection formula to obtain data_x, and data_x and data_y are substituted into the same formula to obtain data_z, where data_x and data_z represent the projected axes;
data_x, data_y and data_z are placed into the first three rows and columns of the transformation matrix, i.e. trans_matrix[:3, :3] = [data_x, data_y, data_z]; the coordinates (x, y, z) of every point in the cloud are averaged to obtain the point cloud center; the center is multiplied by -1 and element-wise by data_z to obtain a vector (xa, ya, za), and xa, ya and za are added to obtain the value of the variable origin; the first three values of each of the first three rows of trans_matrix are extracted, multiplied by origin and by -1, and summed to obtain the values xb, yb and zb, which are placed in the fourth position of the corresponding rows of trans_matrix, determining the transformation matrix;
The point cloud is transformed with the transformation matrix trans_matrix: the position information on the x and y axes of the three-dimensional coordinate system of the point cloud data is mapped to the h and w coordinates of a picture, and the z-axis value minus the z-axis mean is filled into the corresponding (h, w) position, giving a concave-convex (relief) information matrix of the wall surface; the values in this matrix are normalized and mapped to the integer range 0-255 to obtain a displayable picture format, completing the three-dimensional projection and yielding a picture in which h and w correspond to values on the x and y axes, the value on the z axis is the pixel value, h represents height and w represents width.
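As a sketch of this projection step, assuming data_y = (0, 1, 0) and reading the fourth column of trans_matrix as the translation that recenters the cloud, the matrix could be assembled as follows; this is one plausible reading of the description above, not a definitive implementation.

import numpy as np

def build_trans_matrix(A, B, C, points):
    # Rotation rows are the projected axes data_x, data_y, data_z.
    alpha = np.sqrt(A * A + B * B + C * C)
    normals = np.array([A, B, C]) / alpha        # unit plane normal
    data_y = np.array([0.0, 1.0, 0.0])           # Y axis kept as "up"
    data_x = np.cross(data_y, normals)           # assumes a non-horizontal wall
    data_x /= np.linalg.norm(data_x)
    data_z = np.cross(data_x, data_y)
    R = np.stack([data_x, data_y, data_z])
    trans = np.eye(4)
    trans[:3, :3] = R
    trans[:3, 3] = -R @ points.mean(axis=0)      # translation recentering the cloud
    return trans

def project_to_picture(points, trans):
    # Rotate/translate, keep (x, y) as (h, w), map relief z to 0-255 pixels.
    hom = np.c_[points, np.ones(len(points))]    # homogeneous coordinates
    local = (trans @ hom.T).T[:, :3]
    z = local[:, 2] - local[:, 2].mean()         # relief relative to the wall
    pix = ((z - z.min()) / (np.ptp(z) + 1e-9) * 255).astype(np.uint8)
    return local[:, :2], pix                     # (h, w) positions, pixel values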
Preferably, after the point cloud data is mapped by three-dimensional projection, a two-dimensional picture array is obtained, and this array is processed with a region averaging algorithm to reduce its dispersion, lower the complexity of subsequent related calculations, and improve computational efficiency.
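A minimal sketch of such region averaging (non-overlapping block-mean pooling) follows; the block size is an assumed parameter, since none is specified here.

import numpy as np

def region_average(img, block=4):
    # Downsample a picture array by averaging block x block regions.
    h, w = img.shape
    h2, w2 = h - h % block, w - w % block        # crop to a multiple of block
    v = img[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    return v.mean(axis=(1, 3))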
Preferably, the process of performing adaptive height scaling on the sizes of the plurality of base pictures and performing feature extraction on the base pictures in step S5 specifically includes:
s51, contrast enhancement is carried out on the basic picture based on a self-adaptive histogram equalization algorithm, then feature extraction is carried out, and the basic picture is stored as a new picture;
and S52, carrying out data annotation on the new picture by using data annotation software to obtain an annotated picture.
Preferably, in step S5, when several feature-extracted, labeled pictures and their labels are randomly drawn from the labeled pictures and recombined, the labeled pictures are first unified to an adaptive height and then randomly ordered and stitched to obtain a picture U, and a picture V carrying the position segmentation labeling information corresponding to picture U is generated; a window is generated at a random position, with the height of picture U as reference, and picture U is cropped to obtain a picture W; the labeling information is then cut from picture V based on the window position to obtain the actual segmentation labeling of picture W.
Preferably, the instance segmentation network uses Darknet53 as its backbone; the training samples are input into the instance segmentation network, and after a series of outputs is obtained the results are post-processed. The values in the heatmap are filtered to find positions whose value is larger than 0.5; the offset values of the corresponding points are then taken from the offset output based on the heatmap positions and added to them to obtain the center point of the detection target. Next, the width and height of the detection frame are read from the size output at the heatmap position; subtracting half of the width and height from the center point gives the top-left corner of the detection frame, and adding half of the width and height gives the bottom-right corner, yielding the position of a 2d detection frame. The data in the shape output is then computed based on the position of the detection frame: the data inside the frame is retained, all data outside the frame is replaced with 0, and finally the processed shape matrix is multiplied with the saliency matrix to obtain the instance segmentation result. An upper limit on the number of iterations is set for training; training ends when this limit is reached, after which the final weight parameters and the trained instance segmentation network are saved.
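A numpy sketch of this post-processing follows. The tensor layouts (heatmap, shape and saliency as (H, W) arrays; offset and size as (2, H, W)) and the element-wise shape-saliency product are assumptions for demonstration.

import numpy as np

def decode_outputs(heatmap, offset, size, shape, saliency, thresh=0.5):
    # Threshold the heatmap, refine centers with offsets, derive 2d boxes
    # from sizes, zero `shape` outside each box, combine with saliency.
    H, W = heatmap.shape
    results = []
    for y, x in zip(*np.where(heatmap > thresh)):
        cx = x + offset[0, y, x]                 # offset-corrected center
        cy = y + offset[1, y, x]
        w, h = size[0, y, x], size[1, y, x]
        x0, y0 = max(int(cx - w / 2), 0), max(int(cy - h / 2), 0)
        x1, y1 = min(int(cx + w / 2), W), min(int(cy + h / 2), H)
        m = np.zeros_like(shape)                 # keep data inside the box only
        m[y0:y1, x0:x1] = shape[y0:y1, x0:x1]
        results.append(((x0, y0, x1, y1), m * saliency))
    return results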
The application also provides a wall segmentation identification system based on point cloud data processing, the system includes:
the point cloud data fitting module is used for loading point cloud data of a room and performing plane fitting on the basis of the point cloud data to obtain a fitting plane of the point cloud data;
the clustering module is used for eliminating interference data in the fitting plane by using a clustering algorithm and separating out a main plane; repeatedly executing the point cloud data fitting module and the clustering module, separating each vertical wall surface and each horizontal surface of the room, and obtaining position set data of the midpoint of each vertical surface and each horizontal surface;
the three-dimensional projection module is used for mapping the point cloud data in a three-dimensional projection mode to obtain a plurality of basic pictures;
the preprocessing module, used for adaptively scaling the heights of the plurality of basic pictures and extracting features from them, then manually generating labeled pictures; several feature-extracted labeled pictures and their labels are randomly drawn and recombined to obtain new labeled pictures and new instance segmentation information as training samples;
the instance segmentation network construction and training module, which introduces an instance segmentation network and trains it with the training samples to obtain a trained instance segmentation network;
and the segmentation and recognition module, which acquires point cloud data of the room to be measured, inputs it into the trained instance segmentation network, and outputs the wall segmentation and recognition result.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention provides a wall surface segmentation and recognition method and system based on point cloud data processing, which are characterized in that point cloud data of a room are loaded, plane fitting is carried out on the point cloud data to obtain a fitting plane of the point cloud data, subsequent calculation acceleration is facilitated, the accuracy is improved, then interference data in the fitting plane are eliminated by using a clustering algorithm, a main plane is separated, then three-dimensional point cloud data are projected onto two dimensions based on three-dimensional projection to obtain a plurality of basic pictures, the complexity of the data is reduced, then the plurality of basic pictures are subjected to self-adaptive height scaling, the basic pictures are subjected to feature extraction, a network model is artificially helped to learn useful features better, the uncertainty caused by network random learning is reduced, then pictures are artificially generated, a plurality of marked pictures subjected to feature extraction are randomly extracted from the marked pictures and then combined again, new marked pictures and new instance segmentation information are obtained to serve as training samples, an instance segmentation network is introduced, the instance segmentation network is trained by using the training sample to obtain a trained instance segmentation network for wall surface recognition, and the matching of the above process also avoids the defect that a large amount of data which are different in the traditional deep learning mode, and the data cost is high.
Drawings
Fig. 1 is a schematic flow chart of a wall surface segmentation and identification method based on point cloud data processing according to embodiment 1 of the present invention;
fig. 2 is a diagram illustrating an effect of a picture U according to embodiment 1 of the present invention;
fig. 3 is a diagram illustrating an effect that a picture V having position division labeling information corresponds to a picture U generated in embodiment 1 of the present invention;
fig. 4 is an effect diagram of generating a window based on the height of picture U and cutting picture U to obtain picture W in embodiment 1 of the present invention;
FIG. 5 is a diagram illustrating an effect of obtaining an actual segmentation labeling status of a picture W based on the segmentation labeling information of the position in the picture V;
FIG. 6 is a diagram showing the input, output and image processing flow of the instance segmentation network proposed in embodiment 2 of the present invention;
fig. 7 is a schematic structural diagram of a wall surface segmentation and identification system based on point cloud data processing according to embodiment 3 of the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for better illustration of the present embodiment, some parts of the drawings may be omitted, enlarged or reduced, and do not represent actual sizes;
it will be understood by those skilled in the art that certain well-known descriptions of the figures may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
The positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
example 1
As shown in fig. 1, the present embodiment provides a wall surface segmentation and identification method based on point cloud data processing, and referring to fig. 1, the method specifically includes the following steps:
s1, loading point cloud data of a room, and performing plane fitting based on the point cloud data to obtain a fitting plane of the point cloud data;
s2, eliminating interference data in the fitting plane by using a clustering algorithm, and separating out a main plane;
s3, repeating the steps S1-S2, separating each vertical wall surface and each horizontal plane of the room, and obtaining position set data of the middle points of each vertical plane and each horizontal plane;
s4, mapping the point cloud data in a three-dimensional projection mode to obtain a plurality of basic pictures;
S5, adaptively scaling the heights of the plurality of basic pictures and extracting features from them, then manually generating labeled pictures; randomly extracting several feature-extracted labeled pictures together with their labels and recombining them to obtain new labeled pictures and new instance segmentation information, which serve as training samples;
s6, introducing an instance segmentation network, and training it with the training samples to obtain a trained instance segmentation network;
and S7, acquiring point cloud data of the room to be measured and processing it as in S1-S4 to obtain basic pictures, performing adaptive scaling and feature extraction on the basic pictures, inputting them into the trained instance segmentation network, and outputting the wall segmentation and recognition result.
In this embodiment, the position coordinates of three points in the point cloud data are respectively (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3), and a plane equation is obtained based on the position coordinates of the three points: Ax + By + Cz + D = 0, where A, B and C are unknown coefficients, D is a constant term, and A, B and C respectively satisfy:
A=(y2-y1)*(z3-z1)-(z2-z1)*(y3-y1);
B=(x3-x1)*(z2-z1)-(x2-x1)*(z3-z1);
C=(x2-x1)*(y3-y1)-(x3-x1)*(y2-y1);
The remaining points other than (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3) are substituted into the plane equation to obtain each point's distance to the plane; the plane H for which the sum of the absolute values of the distances of all points is minimal is taken as the fitting plane of the point cloud data.
In step S2, interference data in the fitting plane is eliminated by using a clustering algorithm and the main plane is separated out as follows:
S21, set the neighborhood minimum distance of the fitting plane to epsilon, the minimum number of elements in a neighborhood to minPts, and the total number of points in the point cloud to num; set omega as the set of cluster labels; introduce an empty matrix distMatrix of shape [num, num]; set gamma as the set of point cloud samples not yet visited, initialized to the index set from 0 to num;
S22, select two points M and N from the point cloud data, calculate the distance MN between them, and put the distance result d into the matrix distMatrix;
S23, let count be the matching number of point M; judge whether d is smaller than epsilon, and if so, add 1 to the matching number of point M; otherwise the matching number count of point M is unchanged;
S24, select the num-1 points other than N from the point cloud data, calculate their distances to point M, and repeat step S23 until every point has been compared with M, obtaining the final matching number count of point M;
S25, judge whether the matching number count of point M is greater than minPts; if so, record the label information value of point M in omega as 1; otherwise record it as 0;
S26, add up the label information values recorded in omega; if the sum is greater than 0, execute step S27, otherwise end;
S27, select from omega any point whose label information value is 1, modify that point's label information in gamma to -1, look up the distances between that point and the other points in the matrix distMatrix, and determine the number q of sample points whose distance is smaller than epsilon;
S28, judge whether q is greater than minPts; if so, modify the position information of point M in omega to 0, and intersect the points in the sample with the unmarked points to generate a cluster;
S29, traverse all points in the neighborhood of the fitting plane to obtain one or more clusters, retain the cluster containing the most points, remove the interference data from the fitting plane, and separate out the main plane.
Specifically, the solving formula of the distance result between the two points M and N is expressed as follows:
d=sqrt((x1-x2)^2+(y1-y2)^2+(z1-z2)^2)
where d represents the distance between points M and N, the coordinates of point M are (x1, y1, z1), and the coordinates of point N are (x2, y2, z2).
In step S4, the three-dimensional projection (cross product) formula is set as cross = (ay*bz - az*by, az*bx - ax*bz, ax*by - ay*bx), where (ax, ay, az) represents a vector A and (bx, by, bz) represents a vector B; to project the wall surface coordinates along the Z axis onto the Y axis, the value data_y = (0, 1, 0) is set, and the transformation matrix trans_matrix is a matrix with 4 rows and 4 columns;
setting the plane equation as Ax + By + Cz + D = 0, the normal vector is (A, B, C); alpha = sqrt(A^2 + B^2 + C^2) is computed to obtain the unit normal normals = array([A/alpha, B/alpha, C/alpha]); data_y and normals are substituted into the three-dimensional projection formula to obtain data_x, and data_x and data_y are substituted into the same formula to obtain data_z, where data_x and data_z represent the projected axes;
data_x, data_y and data_z are placed into the first three rows and columns of the transformation matrix, i.e. trans_matrix[:3, :3] = [data_x, data_y, data_z]; the coordinates (x, y, z) of every point in the cloud are averaged to obtain the point cloud center; the center is multiplied by -1 and element-wise by data_z to obtain a vector (xa, ya, za), and xa, ya and za are added to obtain the value of the variable origin; the first three values of each of the first three rows of trans_matrix are extracted, multiplied by origin and by -1, and summed to obtain the values xb, yb and zb, which are placed in the fourth position of the corresponding rows of trans_matrix, determining the transformation matrix;
The point cloud is transformed with the transformation matrix trans_matrix: the position information on the x and y axes of the three-dimensional coordinate system of the point cloud data is mapped to the h and w coordinates of a picture, and the z-axis value minus the z-axis mean is filled into the corresponding (h, w) position, giving a concave-convex (relief) information matrix of the wall surface; the values in this matrix are normalized and mapped to the integer range 0-255 to obtain a displayable picture format, completing the three-dimensional projection and yielding a picture in which h and w correspond to values on the x and y axes, the value on the z axis is the pixel value, h represents height and w represents width.
The point cloud data is mapped by three-dimensional projection to obtain a two-dimensional picture array, and this array is processed with a region averaging algorithm to reduce its dispersion, lower the complexity of subsequent related calculations, and improve computational efficiency.
In step S5, the process of adaptively highly scaling the sizes of the plurality of base pictures and extracting the features of the base pictures specifically includes:
S51, contrast enhancement is performed on the basic picture based on an adaptive histogram equalization algorithm, features are then extracted, and the result is stored as a new picture. Adaptive histogram equalization is a local histogram equalization method, unlike ordinary histogram equalization: createCLAHE from the opencv open source library is called directly, with clipLimit set to 5-20 and tileGridSize set to 3-11, which effectively enhances local image contrast so that more edge-related information is captured, facilitating segmentation (a sketch of this step follows S52).
And S52, carrying out data annotation on the new picture by using data annotation software to obtain an annotated picture.
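A minimal sketch of the S51 enhancement with opencv follows; the grayscale read, the file-based I/O and the concrete parameter values (within the 5-20 and 3-11 ranges quoted above) are assumptions.

import cv2

def enhance_and_save(path_in, path_out, clip_limit=8.0, tile=7):
    # CLAHE: contrast-limited adaptive (local) histogram equalization.
    img = cv2.imread(path_in, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(tile, tile))
    cv2.imwrite(path_out, clahe.apply(img))      # stored as the new picture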
Specifically, when 4 labeled pictures with extracted features and labels are randomly drawn from the labeled pictures and recombined, the 4 labeled pictures are adaptively unified to a height of 140 px and then randomly ordered and stitched to obtain a picture U, whose effect is shown in figure 2; a picture V carrying the position segmentation labeling information corresponding to picture U is generated, with the effect shown in figure 3. A labeling window 50 to 250 px wide is then generated at a random position, with the 140 px height of picture U as reference, and picture U is cropped to obtain a picture W, with the effect shown in FIG. 4. A sketch of this recombination is given below.
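The following numpy sketch mirrors this recombination with the concrete numbers of the embodiment (4 pictures, 140 px height, a 50-250 px window); the nearest-neighbor resize helper and the function names are illustrative assumptions.

import numpy as np

def recombine(pics, label_masks, rng=None):
    # Unify 4 labeled pictures to height 140, stitch them in random order
    # into picture U (and label picture V), then crop a random window
    # 50-250 px wide to obtain picture W and its actual labels.
    rng = rng or np.random.default_rng()

    def to_h140(img):
        h, w = img.shape[:2]
        new_w = max(1, round(w * 140 / h))
        ys = (np.arange(140) * h / 140).astype(int)
        xs = (np.arange(new_w) * w / new_w).astype(int)
        return img[ys][:, xs]            # nearest-neighbor, keeps label ids intact

    order = rng.permutation(len(pics))
    U = np.concatenate([to_h140(pics[i]) for i in order], axis=1)
    V = np.concatenate([to_h140(label_masks[i]) for i in order], axis=1)
    width = int(rng.integers(50, 251))   # window 50-250 px wide
    x0 = int(rng.integers(0, max(1, U.shape[1] - width)))
    return U[:, x0:x0 + width], V[:, x0:x0 + width]   # picture W and its labels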
Example 2
In this embodiment, the instance segmentation network is described further; fig. 6 shows the input, output and image processing flow of the instance segmentation network proposed in this embodiment. The instance segmentation network uses Darknet53 as its backbone; the training samples are input into the instance segmentation network, and after a series of outputs is obtained the results are post-processed. The values in the heatmap are filtered to find positions whose value is larger than 0.5; the offset values of the corresponding points are then taken from the offset output based on the heatmap positions and added to them to obtain the center point of the detection target. Next, the width and height of the detection frame are read from the size output at the heatmap position; subtracting half of the width and height from the center point gives the top-left corner of the detection frame, and adding half of the width and height gives the bottom-right corner, yielding the position of a 2d detection frame. The data in the shape output is then computed based on the position of the detection frame: the data inside the frame is retained, all data outside the frame is replaced with 0, and finally the processed shape matrix is multiplied with the saliency matrix to obtain the instance segmentation result. An upper limit on the number of iterations is set for training; training ends when this limit is reached, after which the final weight parameters and the trained instance segmentation network are saved. In the image processing flow, point positions are obtained from the Heatmap and Offset outputs and combined with the Shape and Size outputs; the model is finally exported in onnx format to facilitate subsequent deployment and is used for wall segmentation and recognition in real operation.
Example 3
Referring to fig. 7, the present application further provides a wall surface segmentation recognition system based on point cloud data processing, the system includes:
the point cloud data fitting module 101 is used for loading point cloud data of a room, and performing plane fitting on the basis of the point cloud data to obtain a fitting plane of the point cloud data;
the clustering module 102 is configured to eliminate interference data in a fitting plane by using a clustering algorithm, and separate a main plane; repeatedly executing the point cloud data fitting module and the clustering module, separating each vertical wall surface and each horizontal surface of the room, and obtaining position set data of the midpoint of each vertical plane and each horizontal surface;
the three-dimensional projection module 103 is used for mapping the point cloud data in a three-dimensional projection mode to obtain a plurality of basic pictures;
the preprocessing module 104, used for adaptively scaling the heights of the plurality of basic pictures and extracting features from them, then manually generating labeled pictures; several feature-extracted labeled pictures and their labels are randomly drawn and recombined to obtain new labeled pictures and new instance segmentation information as training samples;
the instance segmentation network construction and training module 105, which introduces an instance segmentation network and trains it with the training samples to obtain a trained instance segmentation network;
and the segmentation and recognition module 106, which acquires point cloud data of the room to be measured, inputs it into the trained instance segmentation network, and outputs the wall segmentation and recognition result.
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the claims of the present invention.

Claims (10)

1. A wall surface segmentation and identification method based on point cloud data processing is characterized by comprising the following steps:
s1, loading point cloud data of a room, and performing plane fitting based on the point cloud data to obtain a fitting plane of the point cloud data;
s2, eliminating interference data in the fitting plane by using a clustering algorithm, and separating out a main plane;
s3, repeating the steps S1-S2, separating each vertical wall surface and each horizontal surface of the room, and obtaining position set data of the middle points of each vertical surface and each horizontal surface;
s4, mapping the point cloud data in a three-dimensional projection mode to obtain a plurality of basic pictures;
s5, adaptively scaling the heights of the plurality of basic pictures and extracting features from them, then manually generating labeled pictures; randomly extracting several feature-extracted labeled pictures together with their labels and recombining them to obtain new labeled pictures and new instance segmentation information, which serve as training samples;
s6, introducing an instance segmentation network, and training it with the training samples to obtain a trained instance segmentation network;
and S7, acquiring point cloud data of the room to be measured and processing it as in S1-S4 to obtain basic pictures, performing adaptive scaling and feature extraction on the basic pictures, inputting them into the trained instance segmentation network, and outputting the wall segmentation and recognition result.
2. The wall surface segmentation and identification method based on point cloud data processing according to claim 1, wherein in step S1, the position coordinates of three points in the point cloud data are set as (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3) respectively, and a plane equation is obtained based on the position coordinates of the three points: Ax + By + Cz + D = 0, where A, B and C are unknown coefficients, D is a constant term, and A, B and C respectively satisfy:
A=(y2-y1)*(z3-z1)-(z2-z1)*(y3-y1);
B=(x3-x1)*(z2-z1)-(x2-x1)*(z3-z1);
C=(x2-x1)*(y3-y1)-(x3-x1)*(y2-y1);
The remaining points other than (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3) are substituted into the plane equation to obtain each point's distance to the plane; the plane H for which the sum of the absolute values of the distances of all points is minimal is taken as the fitting plane of the point cloud data.
3. The wall surface segmentation and identification method based on point cloud data processing according to claim 2, wherein in step S2, the clustering algorithm is used to eliminate the interference data in the fitting plane and the main plane is separated as follows:
S21, set the neighborhood minimum distance of the fitting plane to epsilon, the minimum number of elements in a neighborhood to minPts, and the total number of points in the point cloud to num; set omega as the set of cluster labels; introduce an empty matrix distMatrix of shape [num, num]; set gamma as the set of point cloud samples not yet visited, initialized to the index set from 0 to num;
S22, select two points M and N from the point cloud data, calculate the distance MN between them, and put the distance result d into the matrix distMatrix;
S23, let count be the matching number of point M; judge whether d is smaller than epsilon, and if so, add 1 to the matching number of point M; otherwise the matching number count of point M is unchanged;
S24, select the num-1 points other than N from the point cloud data, calculate their distances to point M, and repeat step S23 until every point has been compared with M, obtaining the final matching number count of point M;
S25, judge whether the matching number count of point M is greater than minPts; if so, record the label information value of point M in omega as 1; otherwise record it as 0;
S26, add up the label information values recorded in omega; if the sum is greater than 0, execute step S27, otherwise end;
S27, select from omega any point whose label information value is 1, modify that point's label information in gamma to -1, look up the distances between that point and the other points in the matrix distMatrix, and determine the number q of sample points whose distance is smaller than epsilon;
S28, judge whether q is greater than minPts; if so, modify the position information of point M in omega to 0, and intersect the points in the sample with the unmarked points to generate a cluster;
S29, traverse all points in the neighborhood of the fitting plane to obtain one or more clusters, retain the cluster containing the most points, remove the interference data from the fitting plane, and separate out the main plane.
4. The method for identifying wall segmentation based on point cloud data processing of claim 3, wherein the solving formula of the distance result between the points M and N is represented as:
d=sqrt((x1-x2)^2+(y1-y2)^2+(z1-z2)^2)
where d represents the distance between points M and N, the coordinates of point M are (x1, y1, z1), and the coordinates of point N are (x2, y2, z2).
5. The method for identifying wall surface segmentation based on point cloud data processing according to claim 4, wherein in step S4,
let the three-dimensional projection (cross product) formula be cross = (ay*bz - az*by, az*bx - ax*bz, ax*by - ay*bx), where (ax, ay, az) represents a vector A and (bx, by, bz) represents a vector B; to project the wall surface coordinates along the Z axis onto the Y axis, the value data_y = (0, 1, 0) is set, and the transformation matrix trans_matrix is a matrix with 4 rows and 4 columns;
setting the plane equation as Ax + By + Cz + D = 0, the normal vector is (A, B, C); alpha = sqrt(A^2 + B^2 + C^2) is computed to obtain the unit normal normals = array([A/alpha, B/alpha, C/alpha]); data_y and normals are substituted into the three-dimensional projection formula to obtain data_x, and data_x and data_y are substituted into the same formula to obtain data_z, where data_x and data_z represent the projected axes;
data_x, data_y and data_z are placed into the first three rows and columns of the transformation matrix, i.e. trans_matrix[:3, :3] = [data_x, data_y, data_z]; the coordinates (x, y, z) of every point in the cloud are averaged to obtain the point cloud center; the center is multiplied by -1 and element-wise by data_z to obtain a vector (xa, ya, za), and xa, ya and za are added to obtain the value of the variable origin; the first three values of each of the first three rows of trans_matrix are extracted, multiplied by origin and by -1, and summed to obtain the values xb, yb and zb, which are placed in the fourth position of the corresponding rows of trans_matrix, determining the transformation matrix;
The point cloud is transformed with the transformation matrix trans_matrix: the position information on the x and y axes of the three-dimensional coordinate system of the point cloud data is mapped to the h and w coordinates of a picture, and the z-axis value minus the z-axis mean is filled into the corresponding (h, w) position, giving a concave-convex (relief) information matrix of the wall surface; the values in this matrix are normalized and mapped to the integer range 0-255 to obtain a displayable picture format, completing the three-dimensional projection and yielding a picture in which h and w correspond to values on the x and y axes, the value on the z axis is the pixel value, h represents height and w represents width.
6. The wall surface segmentation and recognition method based on point cloud data processing as claimed in claim 5, wherein the point cloud data is mapped in a three-dimensional projection mode to obtain a two-dimensional picture array, and the picture array is processed with a region averaging algorithm to reduce its dispersion.
7. The method for identifying wall segmentation based on point cloud data processing according to claim 6, wherein the step S5 of performing adaptive height scaling on the sizes of the plurality of basic pictures and performing feature extraction on the basic pictures specifically comprises: s51, performing contrast enhancement on the basic picture based on a self-adaptive histogram equalization algorithm, then performing feature extraction, and storing the picture as a new picture;
and S52, carrying out data annotation on the new picture by using data annotation software to obtain an annotated picture.
8. The method for identifying wall segmentation based on point cloud data processing according to claim 7, wherein in step S5, when a plurality of tagged pictures with extracted features are randomly extracted from the tagged pictures and then tagged and recombined, the tagged pictures are all subjected to adaptive height unification, and then are randomly sorted and integrated to obtain a picture U, and a picture V with position segmentation tagging information corresponding to the picture U is generated; generating a marking window at a random position by taking the height of the picture U as a reference, and cutting the picture U to obtain a picture W; and dividing the marking information from the picture V based on the position of the window to obtain the actual division marking condition of the picture W.
9. The method of claim 8, wherein the instance segmentation network uses Darknet53 as its backbone; the training samples are input into the instance segmentation network, and after a series of outputs is obtained the results are post-processed: the values in the heatmap are filtered to find positions whose value is larger than 0.5, the offset values of the corresponding points are then taken from the offset output based on the heatmap positions and added to them to obtain the center point of the detection target; next, the width and height of the detection frame are read from the size output at the heatmap position, subtracting half of the width and height from the center point gives the top-left corner of the detection frame, and adding half of the width and height gives the bottom-right corner, yielding the position of a 2d detection frame; the data in the shape output is then computed based on the position of the detection frame, the data inside the frame is retained, all data outside the frame is replaced with 0, and finally the processed shape matrix is multiplied with the saliency matrix to obtain the instance segmentation result; an upper limit on the number of iterations is set for training, training ends when this limit is reached, and after training the final weight parameters and the trained instance segmentation network are saved.
10. A wall segmentation and identification system based on point cloud data processing is characterized by comprising:
the point cloud data fitting module is used for loading point cloud data of a room and performing plane fitting on the basis of the point cloud data to obtain a fitting plane of the point cloud data;
the clustering module is used for eliminating interference data in the fitting plane by using a clustering algorithm and separating out a main plane; repeatedly executing the point cloud data fitting module and the clustering module, separating each vertical wall surface and each horizontal surface of the room, and obtaining position set data of the midpoint of each vertical surface and each horizontal surface;
the three-dimensional projection module is used for mapping the point cloud data in a three-dimensional projection mode to obtain a plurality of basic pictures;
the preprocessing module, used for adaptively scaling the heights of the plurality of basic pictures and extracting features from them, then manually generating labeled pictures; several feature-extracted labeled pictures and their labels are randomly drawn and recombined to obtain new labeled pictures and new instance segmentation information as training samples;
the instance segmentation network construction and training module, which introduces an instance segmentation network and trains it with the training samples to obtain a trained instance segmentation network;
and the segmentation and recognition module, which acquires point cloud data of the room to be measured, inputs it into the trained instance segmentation network, and outputs the wall segmentation and recognition result.
CN202210641330.3A 2022-06-08 2022-06-08 Wall surface segmentation recognition method and system based on point cloud data processing Active CN115239951B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210641330.3A CN115239951B (en) 2022-06-08 2022-06-08 Wall surface segmentation recognition method and system based on point cloud data processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210641330.3A CN115239951B (en) 2022-06-08 2022-06-08 Wall surface segmentation recognition method and system based on point cloud data processing

Publications (2)

Publication Number Publication Date
CN115239951A 2022-10-25
CN115239951B CN115239951B (en) 2023-09-15

Family

ID=83670006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210641330.3A Active CN115239951B (en) 2022-06-08 2022-06-08 Wall surface segmentation recognition method and system based on point cloud data processing

Country Status (1)

Country Link
CN (1) CN115239951B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009727A * 2019-03-08 2019-07-12 深圳大学 Indoor three-dimensional model automatic reconstruction method and system with structural semantics
CN110532602A * 2019-07-19 2019-12-03 中国地质大学(武汉) Indoor automatic drawing and modeling method based on floor plan images
CN111932688A * 2020-09-10 2020-11-13 深圳大学 Indoor plane element extraction method, system and equipment based on three-dimensional point cloud
CN112700465A * 2021-01-08 2021-04-23 上海建工四建集团有限公司 Room point cloud extraction and part segmentation method and device oriented to actual measurement of actual quantities
CN112927234A * 2021-02-25 2021-06-08 中国工商银行股份有限公司 Point cloud semantic segmentation method and device, electronic equipment and readable storage medium
CN112907735A * 2021-03-10 2021-06-04 南京理工大学 Flexible cable identification and three-dimensional reconstruction method based on point cloud
CN113128405A * 2021-04-20 2021-07-16 北京航空航天大学 Plant identification and model construction method combining semantic segmentation and point cloud processing
CN113379898A * 2021-06-17 2021-09-10 西安理工大学 Three-dimensional indoor scene reconstruction method based on semantic segmentation
CN113989291A * 2021-10-20 2022-01-28 上海电力大学 Building roof plane segmentation method based on PointNet and RANSAC algorithms
CN113935428A * 2021-10-25 2022-01-14 山东大学 Three-dimensional point cloud clustering identification method and system based on image identification
CN114266891A * 2021-11-17 2022-04-01 京沪高速铁路股份有限公司 Railway operation environment abnormality identification method based on image and laser data fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yuqing Wang et al.: "CenterMask: Single Shot Instance Segmentation With Point Representation", arXiv, pages 1-9 *
Song Jinyang et al.: "Extraction and optimization of indoor planar elements from 3D point clouds", Bulletin of Surveying and Mapping (《测绘通报》), pages 15-20 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115409886A (en) * 2022-11-02 2022-11-29 南京航空航天大学 Part geometric feature measuring method, device and system based on point cloud
CN115409886B (en) * 2022-11-02 2023-02-21 南京航空航天大学 Part geometric feature measuring method, device and system based on point cloud
CN115619963A (en) * 2022-11-14 2023-01-17 吉奥时空信息技术股份有限公司 City building entity modeling method based on content perception
CN115619963B (en) * 2022-11-14 2023-06-02 吉奥时空信息技术股份有限公司 Urban building entity modeling method based on content perception
CN116152306A (en) * 2023-03-07 2023-05-23 北京百度网讯科技有限公司 Method, device, apparatus and medium for determining masonry quality
CN116152306B (en) * 2023-03-07 2023-11-03 北京百度网讯科技有限公司 Method, device, apparatus and medium for determining masonry quality
CN117994486A (en) * 2024-04-03 2024-05-07 广东一幕智能科技有限公司 Mobile house indoor environment control method and system
CN117994486B (en) * 2024-04-03 2024-06-04 广东一幕智能科技有限公司 Mobile house indoor environment control method and system

Also Published As

Publication number Publication date
CN115239951B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN115239951A (en) Wall surface segmentation and identification method and system based on point cloud data processing
CN108492343B (en) Image synthesis method for training data for expanding target recognition
CN104680144B (en) Lip reading recognition method and device based on projection extreme learning machine
CN111695622A (en) Identification model training method, identification method and device for power transformation operation scene
US20140233847A1 (en) Networked capture and 3d display of localized, segmented images
CN111461133B (en) Express waybill item name recognition method, device, equipment and storage medium
CN111274964B (en) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
JP2020135679A (en) Data set creation method, data set creation device, and data set creation program
CN110751097B (en) Semi-supervised three-dimensional point cloud gesture key point detection method
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN102542286A (en) Learning device, learning method, identification device, identification method, and program
CN115908988B (en) Defect detection model generation method, device, equipment and storage medium
CN111325184B (en) Intelligent interpretation and change information detection method for remote sensing image
US7403636B2 (en) Method and apparatus for processing an image
Kaur et al. 2-D geometric shape recognition using canny edge detection technique
CN111476226A (en) Text positioning method and device and model training method
CN108256578B (en) Gray level image identification method, device, equipment and readable storage medium
CN114581536B (en) Image color difference detection method based on feature perception and multi-channel learning
CN115471755A (en) Image target rapid detection method based on segmentation
CN111428565B (en) Point cloud identification point positioning method and device based on deep learning
CN113034420B (en) Industrial product surface defect segmentation method and system based on frequency space domain characteristics
CN113505784A (en) Automatic nail annotation analysis method and device, electronic equipment and storage medium
CN112749713A (en) Big data image recognition system and method based on artificial intelligence
CN111950475A (en) Yalhe histogram enhancement type target recognition algorithm based on yoloV3
Jiang et al. Research on defect detection technology of tablets in aluminum plastic package

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 528000 cnc5021, floor 5, block C, Jiabo City, No. 189, Foshan Avenue, Chancheng District, Foshan City, Guangdong Province (residence declaration)

Applicant after: Guangdong Linghui Digital Space Technology Co.,Ltd.

Address before: CNC5021, Floor 5, Block C, Jiabo City, No. 189, Foshan Avenue Middle, Chancheng District, Foshan City, Guangdong Province, 528041

Applicant before: Guangdong Linghui Construction Technology Co.,Ltd.

GR01 Patent grant