CN113706503A - Whole vehicle point cloud image analysis method, device, equipment and storage medium - Google Patents
- Publication number
- CN113706503A (application CN202110991588.1A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- image
- target
- target structure
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention belongs to the technical field of image analysis, and discloses a method, a device, equipment and a storage medium for analyzing a whole vehicle point cloud image. The method comprises the following steps: acquiring a whole vehicle point cloud image of a target vehicle; intercepting the whole vehicle point cloud image to obtain a corresponding target section image; performing target structure detection through a preset convolutional neural network model based on the target section image to obtain a corresponding target structure detection result; and completing the analysis of the whole vehicle point cloud image according to the target structure detection result. By this method, the corresponding target section image is obtained after the whole vehicle point cloud image is acquired, and the target structure in the target section image is detected by the preset convolutional neural network model to obtain the corresponding detection result, which improves the efficiency of recognizing key structural sections in whole vehicle data.
Description
Technical Field
The invention relates to the technical field of image analysis, in particular to a method, a device, equipment and a storage medium for analyzing a point cloud image of a finished automobile.
Background
With the rapid development of various reality-capture devices such as laser scanners and oblique photography equipment, point clouds have become the third important spatio-temporal data source after vector maps and image data, and play an increasingly important role in scientific research and engineering construction in various fields.
In the process of vehicle design and vehicle body part design, whole vehicle point cloud data are used and the whole vehicle body is modeled in software. In this process, however, the identification of the key structural sections of the whole vehicle is usually carried out manually, which suffers from outstanding problems such as heavy manual intervention, low efficiency, high consumption, long time and inconsistent standards.
The above is only intended to assist understanding of the technical solution of the present invention, and does not constitute an admission that it is prior art.
Disclosure of Invention
The invention mainly aims to provide a method, a device, equipment and a storage medium for analyzing a whole vehicle point cloud image, and aims to solve the technical problems of low efficiency, inconsistent standards and heavy manual intervention that arise in the prior art when the key structural sections of a whole vehicle are identified manually.
In order to achieve the aim, the invention provides a method for analyzing a point cloud image of a whole vehicle, which comprises the following steps:
acquiring a whole vehicle point cloud image of a target vehicle;
intercepting the point cloud image of the whole vehicle to obtain a corresponding target section image;
performing target structure detection through a preset convolutional neural network model based on the target section image to obtain a corresponding target structure detection result;
and completing the analysis of the point cloud image of the whole vehicle according to the target structure detection result.
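The four steps above can be sketched, purely for illustration, as a minimal pipeline; every function name and the dictionary result format below are assumptions made for this sketch, not part of the patent:

```python
def acquire_point_cloud_image(vehicle_id):
    # Placeholder: in practice this would load the scanned
    # whole vehicle point cloud image of the target vehicle.
    return {"vehicle": vehicle_id, "image": "point_cloud_image"}

def intercept_section(point_cloud_image):
    # Intercept (slice) the point cloud image into a target section image.
    return {"section_of": point_cloud_image["vehicle"]}

def detect_target_structure(section_image, model=None):
    # Stand-in for the preset convolutional neural network model:
    # returns a hypothetical target structure detection result.
    return {"structure": "B-pillar", "confidence": 0.9}

def analyse(detection):
    # Complete the analysis from the target structure detection result.
    return detection["confidence"] > 0.5

result = analyse(detect_target_structure(
    intercept_section(acquire_point_cloud_image("V1"))))
```

Each stage consumes the previous stage's output, mirroring the four method steps in order.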
Optionally, the acquiring a complete vehicle point cloud image of the target vehicle includes:
acquiring vehicle point cloud data of a target vehicle;
and extracting image data in the finished automobile point cloud data to obtain a corresponding finished automobile point cloud image.
Optionally, the detecting a target structure based on the target cross-sectional image through a preset convolutional neural network model to obtain a corresponding target structure detection result includes:
obtaining a corresponding preset pixel matrix based on the target section image;
inputting the preset pixel matrix to an input layer in the preset convolutional neural network model;
processing the preset pixel matrix based on the input layer through a backbone network in the preset convolutional neural network model to obtain a corresponding output matrix result;
and carrying out result detection through a detection layer in the preset convolutional neural network model based on the output matrix result to obtain a corresponding target structure detection result.
Optionally, before the target structure detection is performed through a preset convolutional neural network model based on the target cross-sectional image to obtain a corresponding target structure detection result, the method further includes:
acquiring an initial section image in a preset database;
carrying out target structure labeling on the initial section image to obtain a sample section image;
and training the initial convolutional neural network model based on the sample sectional image to obtain a preset convolutional neural network model.
Optionally, the training of the initial convolutional neural network model based on the sample cross-sectional image to obtain a preset convolutional neural network model includes:
performing feature extraction on the sample section image through an initial convolutional neural network model to obtain a preset number of feature maps;
setting a corresponding sample verification frame in the preset number of feature maps;
performing result detection according to the sample verification frames, and outputting a corresponding sample detection result, wherein the sample detection result comprises detection frame coordinates, a detection frame confidence and detection class probabilities;
and carrying out sample classification regression according to the sample detection result to obtain a preset convolutional neural network model.
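A hedged sketch of how such a sample detection result might be decoded from one raw prediction vector; the 4 + 1 + C layout (box coordinates, confidence logit, class logits) is an assumption modelled on common one-stage detectors, not a detail given in the patent:

```python
import math

def decode_prediction(raw, num_classes=3):
    """Decode one raw prediction vector into (box, confidence, class
    probabilities) — the sample detection result described above.
    The 4 + 1 + C layout is an illustrative assumption."""
    x, y, w, h = raw[:4]                      # detection frame coordinates
    conf = 1.0 / (1.0 + math.exp(-raw[4]))    # confidence via sigmoid
    logits = raw[5:5 + num_classes]
    m = max(logits)                           # subtract max for stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    probs = [e / total for e in exps]         # class probabilities via softmax
    return (x, y, w, h), conf, probs

box, conf, probs = decode_prediction([10, 20, 30, 40, 0.0, 2.0, 1.0, 0.0])
```

A zero confidence logit decodes to 0.5, and the class probabilities always sum to one, which is what the subsequent classification regression step consumes.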
Optionally, the target structure comprises: a spatial performance target structure, a passability target structure, and a visual field performance target structure;
the target structure labeling of the initial sectional image comprises the following steps:
identifying, in the initial section images, the section images corresponding to the spatial performance target structure, the passability target structure and the visual field performance target structure;
and marking the target structures, in a preset marking mode, on the section images corresponding to the spatial performance target structure, the passability target structure and the visual field performance target structure.
Optionally, after the point cloud image of the whole vehicle is intercepted and a corresponding target cross-section image is obtained, the method further includes:
acquiring a preset adjustment size;
identifying a blank image in the target cross-sectional image;
cutting the target section image based on the preset adjustment size and the blank image to obtain a cut section image;
and updating the target sectional image according to the cut sectional image.
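The cropping-and-resizing step above can be illustrated with a small NumPy sketch; treating all-zero rows and columns as the blank image and using nearest-neighbour resampling are assumptions made for this example, not details from the patent:

```python
import numpy as np

def crop_blank_and_resize(img, target_size):
    """Remove all-zero (blank) border rows/columns from a section image,
    then resample to the preset adjustment size. The zero == blank
    convention and nearest-neighbour scaling are illustrative assumptions."""
    rows = np.any(img, axis=1)              # rows containing any structure
    cols = np.any(img, axis=0)              # columns containing any structure
    cropped = img[rows][:, cols]            # drop the blank margins
    h, w = cropped.shape
    th, tw = target_size
    ri = np.arange(th) * h // th            # nearest-neighbour row indices
    ci = np.arange(tw) * w // tw            # nearest-neighbour column indices
    return cropped[ri][:, ci]

img = np.zeros((8, 8), dtype=int)
img[2:6, 3:7] = 1                           # a 4x4 structure in a blank frame
out = crop_blank_and_resize(img, (4, 4))
```

After cropping, every section image ends up at the same preset size, which is what allows the later network input to be uniform.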
In addition, in order to achieve the above object, the present invention further provides an entire vehicle point cloud image analysis device, including:
the acquisition module is used for acquiring a whole vehicle point cloud image of the target vehicle;
the intercepting module is used for intercepting the point cloud image of the whole vehicle to obtain a corresponding target section image;
the detection module is used for detecting a target structure through a preset convolutional neural network model based on the target section image to obtain a corresponding target structure detection result;
and the analysis module is used for completing the analysis of the point cloud image of the whole vehicle according to the target structure detection result.
In addition, in order to achieve the above object, the present invention further provides a whole vehicle point cloud image analysis apparatus, including: a memory, a processor, and a whole vehicle point cloud image analysis program stored on the memory and runnable on the processor, wherein the whole vehicle point cloud image analysis program is configured to implement the whole vehicle point cloud image analysis method described above.
In addition, in order to achieve the above object, the present invention further provides a storage medium, where the storage medium stores a complete vehicle point cloud image analysis program, and the complete vehicle point cloud image analysis program, when executed by a processor, implements the complete vehicle point cloud image analysis method as described above.
The method comprises the steps of: acquiring a whole vehicle point cloud image of a target vehicle; intercepting the whole vehicle point cloud image to obtain a corresponding target section image; performing target structure detection through a preset convolutional neural network model based on the target section image to obtain a corresponding target structure detection result; and completing the analysis of the whole vehicle point cloud image according to the target structure detection result. By this method, the corresponding target section image is obtained after the whole vehicle point cloud image is acquired, and the target structure in the target section image is detected by the preset convolutional neural network model to obtain the corresponding detection result. The efficiency of recognizing key structural sections in whole vehicle data is thus improved: the key structures are recognized by the system, which reduces time consumption, unifies the detection standard, ensures objectivity and requires little manual intervention.
Drawings
Fig. 1 is a schematic structural diagram of a complete vehicle point cloud image analysis device of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a first embodiment of a vehicle point cloud image analysis method according to the present invention;
FIG. 3 is a schematic overall flow chart of an embodiment of the vehicle point cloud image analysis method according to the present invention;
FIG. 4 is a schematic flow chart of a second embodiment of a vehicle point cloud image analysis method according to the present invention;
FIG. 5 is a schematic flow chart of a third embodiment of the vehicle point cloud image analysis method according to the present invention;
Fig. 6 is a block diagram of a first embodiment of the vehicle point cloud image analysis apparatus according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a complete vehicle point cloud image analysis device of a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the whole vehicle point cloud image analysis apparatus may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the entire vehicle point cloud image analysis apparatus, and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
As shown in fig. 1, the memory 1005, as a storage medium, may include an operating system, a network communication module, a user interface module, and a whole vehicle point cloud image analysis program.
In the whole vehicle point cloud image analysis apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 are arranged in the whole vehicle point cloud image analysis device, which calls, through the processor 1001, the whole vehicle point cloud image analysis program stored in the memory 1005 and executes the whole vehicle point cloud image analysis method provided by the embodiment of the invention.
The embodiment of the invention provides a complete vehicle point cloud image analysis method, and referring to fig. 2, fig. 2 is a schematic flow diagram of a first embodiment of the complete vehicle point cloud image analysis method.
In this embodiment, the method for analyzing the point cloud image of the whole vehicle includes the following steps:
step S10: and acquiring a whole vehicle point cloud image of the target vehicle.
It should be noted that the execution subject of this embodiment is a system for analyzing the whole vehicle point cloud image of a target vehicle. The system can acquire the whole vehicle point cloud data of the vehicle, identify the point cloud data to obtain the whole vehicle point cloud image, perform section processing on the whole vehicle point cloud image, and input the result into a trained convolutional neural network model for detection to obtain the detection result of the key structures in the whole vehicle point cloud image.
It can be understood that the vehicle point cloud image refers to a point cloud image of a target vehicle, and the vehicle point cloud image may be a vehicle point cloud picture dataset.
In a specific implementation, to obtain a whole vehicle point cloud image, further, the acquiring a whole vehicle point cloud image of the target vehicle includes: acquiring whole vehicle point cloud data of the target vehicle; and extracting image data from the whole vehicle point cloud data to obtain a corresponding whole vehicle point cloud image.
It should be noted that the vehicle point cloud data refers to acquired point cloud data of a target vehicle, and the vehicle point cloud data is extracted and identified to obtain a vehicle point cloud image.
Step S20: and intercepting the point cloud image of the whole vehicle to obtain a corresponding target section image.
It should be noted that intercepting the whole vehicle point cloud image means performing section processing on it; the intercepted images contain the target sections required for the three types of measurement of spatial performance, passability and visual field performance, and the finally obtained section images are the target section images.
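Section interception can be illustrated as slicing the point cloud near a cutting plane and rasterizing the result into a 2-D section image; the plane position, tolerance and resolution parameters below are all illustrative assumptions, not values from the patent:

```python
import numpy as np

def intercept_section(points, axis=0, position=0.0, tol=0.05, res=32):
    """Keep points within `tol` of the cutting plane `axis == position`,
    then rasterize the remaining two coordinates into a res x res binary
    section image. All parameters are illustrative assumptions."""
    mask = np.abs(points[:, axis] - position) <= tol
    sl = np.delete(points[mask], axis, axis=1)   # the 2-D section coordinates
    img = np.zeros((res, res), dtype=np.uint8)
    if len(sl):
        lo, hi = sl.min(axis=0), sl.max(axis=0)
        span = np.where(hi > lo, hi - lo, 1.0)   # avoid division by zero
        ij = ((sl - lo) / span * (res - 1)).astype(int)
        img[ij[:, 0], ij[:, 1]] = 1              # mark occupied pixels
    return img

pts = np.array([[0.0, 0.2, 0.8],    # on the cutting plane
                [0.01, 0.9, 0.1],   # within tolerance of the plane
                [0.5, 0.4, 0.4]])   # far from the plane, discarded
section = intercept_section(pts)
```

Only the two points near the plane survive the slice, so exactly two pixels are set in the resulting section image.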
In a specific implementation, in order to make a detection result more accurate, the method further includes performing corresponding preprocessing on the obtained target cross-sectional image, and further, after the capturing the point cloud image of the entire vehicle and obtaining the corresponding target cross-sectional image, the method further includes: acquiring a preset adjustment size; identifying a blank image in the target cross-sectional image; cutting the target section image based on the preset adjustment size and the blank image to obtain a cut section image; and updating the target sectional image according to the cut sectional image.
It should be noted that the preset resizing refers to a preset size for cropping the target cross-sectional image so that all data are unified.
It is understood that the blank image refers to a cross-sectional image of the target cross-sectional image that does not contain the structure required for measurement.
In a specific implementation, after the blank image and the preset adjustment size are obtained, the original target section image is cropped to remove the unnecessary part, namely the blank image, and all data are adjusted to a uniform size based on the preset adjustment size, thereby obtaining the cropped whole vehicle target section image. Key parts are then marked, wherein the key target parts comprise the key sections required for the measurement of spatial performance, passability and visual field performance. The cropped and marked whole vehicle target section image is updated as the target section image.
Step S30: and carrying out target structure detection through a preset convolutional neural network model based on the target section image to obtain a corresponding target structure detection result.
It should be noted that the preset convolutional neural network model is obtained by training an initial convolutional neural network model on sample labeled image data; in the sample labeled images, the key parts in the whole vehicle point cloud image are labeled and the category of each target is given, wherein the key part targets comprise the key sections required for the three types of measurement of spatial performance, passability and visual field performance.
It can be understood that after the target point cloud image is input to the preset convolutional neural network model, the detection result of the key target structure can be output, so as to obtain a corresponding target structure detection result, where the target structure detection result includes a position of the target structure in the target cross-sectional image, a confidence and a probability of the target structure category.
Step S40: and completing the analysis of the point cloud image of the whole vehicle according to the target structure detection result.
After the target structure detection result is obtained, analyzing the target structure detection result, judging whether the name of the key structure in the target structure detection result can be identified, performing point cloud measurement according to rules, and finally completing the analysis process of the point cloud image of the whole vehicle.
For example, as shown in fig. 3, the system acquires vehicle-wide point cloud data of a vehicle, identifies the point cloud data to acquire a vehicle-wide point cloud image of the vehicle, performs cross-section processing on the vehicle-wide point cloud image, inputs the cross-section processed vehicle-wide point cloud image into a trained convolutional neural network model for detection to obtain a key structure detection result in the vehicle-wide point cloud image, performs structure identification based on the detection result, and performs point cloud measurement if the key structure detection result can be identified.
The method comprises the steps of: acquiring a whole vehicle point cloud image of a target vehicle; intercepting the whole vehicle point cloud image to obtain a corresponding target section image; performing target structure detection through a preset convolutional neural network model based on the target section image to obtain a corresponding target structure detection result; and completing the analysis of the whole vehicle point cloud image according to the target structure detection result. By this method, the corresponding target section image is obtained after the whole vehicle point cloud image is acquired, and the target structure in the target section image is detected by the preset convolutional neural network model to obtain the corresponding detection result. The efficiency of recognizing key structural sections in whole vehicle data is thus improved: the key structures are recognized by the system, which reduces time consumption, unifies the detection standard, ensures objectivity and requires little manual intervention.
Referring to fig. 4, fig. 4 is a schematic flow chart of a point cloud image analysis method for a finished vehicle according to a second embodiment of the present invention.
Based on the first embodiment, the step S30 in the vehicle point cloud image analysis method of this embodiment includes:
step S301: and obtaining a corresponding preset pixel matrix based on the target section image.
It should be noted that the preset convolutional neural network model structure includes an input layer, a backbone network, and 3 target detection branches.
It is to be understood that the preset pixel matrix is a pixel matrix obtained from the target section image for input to the input layer of the preset convolutional neural network model; in this embodiment the preset pixel matrix is a 608 × 608 × 3 pixel matrix.
Step S302: and inputting the preset pixel matrix to an input layer in the preset convolutional neural network model.
Note that, after the 608 × 608 × 3 pixel matrix is obtained, it is input to the input layer of the preset convolutional neural network.
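Preparing such an input matrix can be sketched as follows; nearest-neighbour resizing and [0, 1] normalisation are common-practice assumptions rather than details given in the patent:

```python
import numpy as np

def to_input_matrix(image, size=608):
    """Convert a section image (H x W x 3, uint8) into the size x size x 3
    pixel matrix fed to the input layer. Nearest-neighbour resizing and
    [0, 1] normalisation are illustrative assumptions."""
    h, w = image.shape[:2]
    ri = np.arange(size) * h // size      # nearest-neighbour row indices
    ci = np.arange(size) * w // size      # nearest-neighbour column indices
    resized = image[ri][:, ci]
    return resized.astype(np.float32) / 255.0

img = np.full((300, 400, 3), 255, dtype=np.uint8)   # a dummy section image
x = to_input_matrix(img)
```

Whatever the original section image resolution, the network always receives a fixed 608 × 608 × 3 matrix with values in [0, 1].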
Step S303: and processing the preset pixel matrix based on the input layer through a backbone network in the preset convolutional neural network model to obtain a corresponding output matrix result.
It should be noted that after the preset pixel matrix is input to the input layer, the preset pixel matrix is processed by the first to nineteenth convolution blocks in the backbone network in the preset convolution neural network model, and finally the corresponding output matrix result is obtained.
It can be understood that the first convolution block in the preset convolutional neural network model is sequentially provided with a convolution layer, a normalization layer and a LeakyReLU layer; the convolution kernel size is 3 × 3, the number of convolution kernels is 32, and the output size is 608 × 608 × 32. The second convolution block is sequentially provided with a convolution layer, a normalization layer and a LeakyReLU layer; the kernel size is 3 × 3, the number is 64, and the output size is 304 × 304 × 64. The third convolution block consists of one residual block comprising two convolution layers: the first has 1 × 1 kernels, 64 in number, and the second has 3 × 3 kernels, 128 in number; the output size is 304 × 304 × 128. The fourth convolution block is sequentially provided with a convolution layer, a normalization layer and a LeakyReLU layer; the kernel size is 3 × 3, the number is 128, and the output size is 152 × 152 × 128. The fifth convolution block consists of 2 residual blocks with the same structure, each comprising two convolution layers: the first has 1 × 1 kernels, 64 in number, and the second has 3 × 3 kernels, 128 in number; the output size is 152 × 152 × 128. The sixth convolution block is sequentially provided with a convolution layer, a normalization layer and a LeakyReLU layer; the kernel size is 3 × 3, the number is 256, and the output size is 76 × 76 × 256. The seventh convolution block consists of 8 residual blocks with the same structure, each comprising two convolution layers: the first has 1 × 1 kernels, 128 in number, and the second has 3 × 3 kernels, 256 in number; the output size is 76 × 76 × 256. The eighth convolution block is sequentially provided with a convolution layer, a normalization layer and a LeakyReLU layer; the kernel size is 3 × 3, the number is 512, and the output size is 38 × 38 × 512. The ninth convolution block consists of 8 residual blocks with the same structure, each comprising two convolution layers: the first has 1 × 1 kernels, 256 in number, and the second has 3 × 3 kernels, 512 in number; the output size is 38 × 38 × 512. The tenth convolution block is sequentially provided with a convolution layer, a normalization layer and a LeakyReLU layer; the kernel size is 3 × 3, the number is 1024, and the output size is 19 × 19 × 1024. The eleventh convolution block consists of 4 residual blocks with the same structure, each comprising two convolution layers: the first has 1 × 1 kernels, 512 in number, and the second has 3 × 3 kernels, 1024 in number; the output size is 19 × 19 × 1024. The twelfth convolution set is sequentially provided with 5 convolution layers: the first, third and fifth have 1 × 1 kernels, 512 in number, and the second and fourth have 3 × 3 kernels, 1024 in number. The thirteenth convolution layer is sequentially provided with 2 convolution layers whose kernels are 3 × 3 and 1 × 1 respectively; the output size is 19 × 19 × 1024. The input of the fourteenth convolution layer is the output of the twelfth convolution set; it is sequentially provided with 1 convolution layer with 1 × 1 kernels and 1 upsampling layer. The input of the fifteenth convolution set is the output of the ninth convolution block together with that of the fourteenth convolution layer; it is sequentially provided with 5 convolution layers arranged the same as the twelfth convolution set. The sixteenth convolution layer is sequentially provided with 2 convolution layers arranged the same as the thirteenth convolution layer; its output size is 38 × 38 × 512. The input of the seventeenth convolution layer is the output of the fifteenth convolution set; it is arranged the same as the fourteenth convolution layer. The input of the eighteenth convolution set is the output of the seventh convolution block together with that of the seventeenth convolution layer; it is sequentially provided with 5 convolution layers arranged the same as the twelfth convolution set. The input of the nineteenth convolution layer is the output of the eighteenth convolution set; it is arranged the same as the thirteenth convolution layer, and its output size is 76 × 76 × 256.
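The spatial sizes quoted for the backbone (608 → 304 → 152 → 76 → 38 → 19) are consistent with five stride-2 3 × 3 convolutions; a quick check, assuming padding 1 (an assumption — the patent does not state the padding):

```python
def conv_out(size, kernel=3, stride=2, pad=1):
    """Spatial output size of a convolution: (size + 2*pad - kernel) // stride + 1.
    Stride-2 3x3 convolution with padding 1 is the standard downsampling
    configuration assumed here for the five halving blocks."""
    return (size + 2 * pad - kernel) // stride + 1

sizes = [608]
for _ in range(5):                  # the five stride-2 convolution blocks
    sizes.append(conv_out(sizes[-1]))
```

Each application halves the spatial resolution, reproducing the sequence of feature-map sizes listed for the second, fourth, sixth, eighth and tenth convolution blocks.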
Step S304: and carrying out result detection through a detection layer in the preset convolutional neural network model based on the output matrix result to obtain a corresponding target structure detection result.
It should be noted that the output matrix result, obtained after processing by the backbone network in the preset convolutional neural network model, still needs to undergo result detection by the detection layer in the preset convolutional neural network model in order to obtain the corresponding target structure detection result.
It can be understood that the detection layer performs detection according to the output matrices of the thirteenth, sixteenth and nineteenth convolutional layers in the backbone network of the preset convolutional neural network model, and the target structure detection result is finally obtained.
In this embodiment, a corresponding preset pixel matrix is obtained based on the target section image; the preset pixel matrix is input to an input layer in the preset convolutional neural network model; the preset pixel matrix is processed, based on the input layer, through a backbone network in the preset convolutional neural network model to obtain a corresponding output matrix result; and result detection is performed through a detection layer in the preset convolutional neural network model based on the output matrix result to obtain a corresponding target structure detection result. Target key structure detection is thus performed on the target section image through the preset convolutional neural network model, so that an accurate detection result is finally obtained and the efficiency of detection and identification is improved.
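The preset pixel matrix above is simply the target section image resized to the network input resolution and scaled to numeric form. A minimal numpy sketch, assuming a hypothetical 608 × 608 input size and scaling of pixel values to [0, 1] (neither value is fixed by this embodiment):

```python
import numpy as np

def to_pixel_matrix(image, size=608):
    # Nearest-neighbour resize to the assumed network input size, then
    # scale 8-bit pixel values to [0, 1] to form the pixel matrix.
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows][:, cols]
    return resized.astype(np.float32) / 255.0

# A random stand-in for a captured target section image.
section = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
matrix = to_pixel_matrix(section)
print(matrix.shape)  # (608, 608, 3)
```

The resulting matrix is what would be handed to the input layer of the preset convolutional neural network model.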
Referring to fig. 5, fig. 5 is a schematic flow chart of a point cloud image analysis method for a finished automobile according to a third embodiment of the present invention.
Based on the first embodiment, before the step S30, the method for analyzing a point cloud image of a complete vehicle according to this embodiment further includes:
step S301': and acquiring an initial section image in a preset database.
It should be noted that the preset database is a sample database storing a plurality of point cloud images of the whole vehicle, and the initial sectional image is a sectional image of the whole vehicle including a key structure required for detection.
Step S302': and carrying out target structure labeling on the initial section image to obtain a sample section image.
It should be noted that, in order to train the initial convolutional neural network model well, the cross sections required for point cloud measurement in the initial cross-sectional image need to be labeled; the labeling may be done by outlining with a box or in other manners, which is not limited in this embodiment.
It can be understood that after the initial cross-sectional image is obtained, the target structure is labeled, and a sample cross-sectional image which can be input into the initial convolutional neural network model for training is obtained.
Step S303': and training the initial convolutional neural network model based on the sample sectional image to obtain a preset convolutional neural network model.
It should be noted that, the key structure labeling training is performed on the convolutional neural network model based on the obtained sample sectional image, so that a preset convolutional neural network model capable of performing key structure labeling on the sectional image can be obtained.
In a specific implementation, in order to better train the initial model, further, the training of the initial convolutional neural network model based on the sample cross-sectional image to obtain a preset convolutional neural network model includes: performing feature extraction on the sample section image through the initial convolutional neural network model to obtain a preset number of feature maps; setting corresponding sample verification boxes in the preset number of feature maps; performing result detection according to the sample verification boxes and outputting a corresponding sample detection result, wherein the sample detection result includes detection frame coordinates, detection frame confidence and detection class probability; and performing sample classification and regression according to the sample detection result to obtain the preset convolutional neural network model.
It should be noted that performing feature extraction on the sample cross-sectional image through the initial convolutional neural network model to obtain a preset number of feature maps refers to extracting features from the sample cross-sectional image using the convolution blocks and residual blocks in the initial convolutional neural network model. The preset number of feature maps may be 3 feature maps with different scales or another number of feature maps with different scales; the number is not limited in this embodiment, and 3 is used as the example for explanation.
It can be understood that the sample verification boxes are sample prior boxes. Setting corresponding sample verification boxes in the preset number of feature maps means setting a certain number of prior boxes in each grid cell of each feature map; for example, 3 prior boxes may be set in each grid cell of each feature map.
It can be understood that after the sample verification boxes, i.e., the sample prior boxes, are obtained, result detection is performed according to the sample prior boxes and a corresponding detection result is output. The output content includes the frame coordinates of the target structure, the confidence of the frame, and the probability of each category. Classification and regression are then performed on the targets using the detection output, finally obtaining a trained preset convolutional neural network model.
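The per-cell detection output described above (box coordinates, box confidence, class probabilities, with 3 prior boxes per grid cell) can be decoded as in the following numpy sketch. The layout and the sigmoid activations are conventional assumptions, not details stated in this embodiment:

```python
import numpy as np

def decode_detections(raw, num_classes, conf_thresh=0.5):
    # raw: (S, S, 3 * (5 + num_classes)) -- for each of the 3 prior
    # boxes in each grid cell: 4 box coordinates, 1 box confidence
    # logit, and num_classes class logits.
    S = raw.shape[0]
    preds = raw.reshape(S, S, 3, 5 + num_classes)
    boxes = preds[..., :4]                         # raw box coordinates
    conf = 1.0 / (1.0 + np.exp(-preds[..., 4]))    # sigmoid confidence
    probs = 1.0 / (1.0 + np.exp(-preds[..., 5:]))  # class probabilities
    keep = conf > conf_thresh                      # confidence filtering
    return boxes[keep], conf[keep], probs[keep]

# A random stand-in for the 19 x 19 detection head output, with 3
# target-structure classes assumed for illustration.
raw = np.random.default_rng(1).normal(size=(19, 19, 3 * (5 + 3)))
boxes, conf, probs = decode_detections(raw, num_classes=3)
print(boxes.shape[1], probs.shape[1])  # 4 3
```

The kept boxes, confidences and class probabilities are what the subsequent classification and regression step would consume.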
It should be understood that, in the training process, the Keras framework is used to perform optimized iterative training on the convolutional neural network; the learning rate lr is 0.001, the training batch size is 32, and the number of iterations is 200, finally yielding a well-performing preset convolutional neural network model.
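The stated hyperparameters can be collected as below. The optimizer choice is an assumption (the embodiment names only Keras, lr, batch size and iteration count); in Keras this configuration would map onto `model.compile(optimizer=Adam(learning_rate=0.001))` and `model.fit(..., batch_size=32, epochs=200)`:

```python
# Training configuration for this embodiment; "optimizer" is assumed,
# the other values are stated in the text above.
train_config = {
    "framework": "Keras",
    "optimizer": "Adam",   # assumption: not named in the embodiment
    "learning_rate": 0.001,
    "batch_size": 32,
    "epochs": 200,
}

def steps_per_epoch(num_samples, batch_size):
    # Optimization steps per pass over the sample section images.
    return -(-num_samples // batch_size)  # ceiling division

print(steps_per_epoch(1000, train_config["batch_size"]))  # 32
```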
In a specific implementation, to make the annotation training more accurate, further, the target structure includes: a spatial performance target structure, a passability target structure, and a visual field performance target structure. The target structure labeling of the initial sectional image includes: identifying the section images corresponding to the spatial performance target structure, the passability target structure and the visual field performance target structure in the initial section image; and labeling the target structures in the section images corresponding to the spatial performance target structure, the passability target structure and the visual field performance target structure in a preset labeling manner.
It should be noted that the target structures related to spatial performance, passability and visual field performance in the initial cross-sectional image are identified and labeled, finally obtaining the labeled sample cross-sectional image; the preset labeling manner refers to framing the corresponding target structure with a box.
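A labeled sample might be recorded as follows. This is a hypothetical annotation format for illustration only (the file name, category keys and coordinates are all invented); the embodiment fixes only that each target structure is framed with a box and grouped by performance category:

```python
# Hypothetical annotation record for one sample section image.
annotation = {
    "image": "section_0001.png",   # hypothetical file name
    "boxes": [
        {"category": "spatial_performance", "xyxy": (120, 80, 340, 260)},
        {"category": "passability",         "xyxy": (60, 300, 220, 410)},
        {"category": "visual_field",        "xyxy": (400, 50, 580, 190)},
    ],
}

def valid_box(xyxy):
    # A labeled framing box must have positive width and height.
    x1, y1, x2, y2 = xyxy
    return x2 > x1 and y2 > y1

assert all(valid_box(b["xyxy"]) for b in annotation["boxes"])
print(len(annotation["boxes"]))  # 3
```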
In the embodiment, an initial section image in a preset database is obtained; carrying out target structure labeling on the initial section image to obtain a sample section image; and training the initial convolutional neural network model based on the sample sectional image to obtain a preset convolutional neural network model. The initial convolutional neural network model is trained through the sample sectional image, a preset convolutional neural network model capable of outputting accurate key target structure detection results can be obtained, and the accuracy of identification is improved.
In addition, referring to fig. 6, an embodiment of the present invention further provides an entire vehicle point cloud image analysis apparatus, where the entire vehicle point cloud image analysis apparatus includes:
the acquisition module 10 is used for acquiring a point cloud image of the whole vehicle of the target vehicle.
And the intercepting module 20 is used for intercepting the point cloud image of the whole vehicle to obtain a corresponding target section image.
And the detection module 30 is configured to perform target structure detection through a preset convolutional neural network model based on the target cross-sectional image, so as to obtain a corresponding target structure detection result.
And the analysis module 40 is used for completing the analysis of the whole vehicle point cloud image according to the target structure detection result.
The method comprises the steps of: obtaining a whole vehicle point cloud image of a target vehicle; intercepting the whole vehicle point cloud image to obtain a corresponding target section image; performing target structure detection through a preset convolutional neural network model based on the target section image to obtain a corresponding target structure detection result; and completing the analysis of the whole vehicle point cloud image according to the target structure detection result. In this way, after the whole vehicle point cloud image is obtained, the corresponding target section image is extracted, and target structure detection is performed on it by the preset convolutional neural network model to obtain the corresponding detection result. This improves the efficiency of recognizing key structure sections in whole vehicle data; because the key structures are recognized by the system, time consumption is reduced, the detection standard is unified and objective, and little manual intervention is required.
In an embodiment, the obtaining module 10 is further configured to obtain vehicle-finishing point cloud data of a target vehicle;
and extracting image data in the finished automobile point cloud data to obtain a corresponding finished automobile point cloud image.
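The embodiment does not specify how image data is extracted from the whole vehicle point cloud. One plausible sketch, offered purely as an assumption, slices the cloud near a cutting plane and rasterizes the remaining two coordinates into a section image; the parameters `axis`, `position`, `tol` and `res` are all illustrative:

```python
import numpy as np

def section_to_image(points, axis=0, position=0.0, tol=0.05, res=64):
    # Keep the points within `tol` of the cutting plane, then rasterize
    # their remaining two coordinates into a binary section image.
    near = np.abs(points[:, axis] - position) < tol
    uv = np.delete(points[near], axis, axis=1)      # drop the cut axis
    lo, hi = uv.min(axis=0), uv.max(axis=0)
    ij = ((uv - lo) / (hi - lo + 1e-9) * (res - 1)).astype(int)
    img = np.zeros((res, res), dtype=np.uint8)
    img[ij[:, 1], ij[:, 0]] = 255                   # mark occupied pixels
    return img

# A random stand-in for whole vehicle point cloud data.
pts = np.random.default_rng(2).uniform(-1, 1, (5000, 3))
img = section_to_image(pts)
print(img.shape)  # (64, 64)
```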
In an embodiment, the detection module 30 is further configured to obtain a corresponding preset pixel matrix based on the target cross-sectional image;
inputting the preset pixel matrix to an input layer in the preset convolutional neural network model;
processing the preset pixel matrix based on the input layer through a backbone network in the preset convolutional neural network model to obtain a corresponding output matrix result;
and performing result detection through a detection layer in the preset convolutional neural network model based on the output matrix result to obtain a corresponding target structure detection result.
In an embodiment, the detection module 30 is further configured to obtain an initial cross-sectional image in a preset database;
carrying out target structure labeling on the initial section image to obtain a sample section image;
and training the initial convolutional neural network model based on the sample sectional image to obtain a preset convolutional neural network model.
In an embodiment, the detection module 30 is further configured to perform feature extraction on the sample cross-sectional image through an initial convolutional neural network model to obtain a preset number of feature maps;
setting a corresponding sample verification frame in the preset number of feature maps;
detecting the result according to the sample verification box, and outputting a corresponding sample detection result, wherein the sample detection result comprises a detection frame coordinate, a detection frame confidence coefficient and a detection class probability;
and carrying out sample classification regression according to the sample detection result to obtain a preset convolutional neural network model.
In an embodiment, the detection module 30 is further configured to perform target structure labeling on the initial sectional image, which includes:
identifying the section images corresponding to a spatial performance target structure, a passability target structure and a visual field performance target structure in the initial section image;
and marking the target structure by a preset marking mode on the cross-section images corresponding to the spatial performance target structure, the passability target structure and the visual field performance target structure.
In an embodiment, the intercepting module 20 is further configured to obtain a preset adjustment size;
identifying a blank image in the target cross-sectional image;
cutting the target section image based on the preset adjustment size and the blank image to obtain a cut section image;
and updating the target sectional image according to the cut sectional image.
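The blank-region cropping performed by the intercepting module can be sketched as below; a hedged numpy illustration that assumes blank pixels are pure background (value 255) and simply crops away empty rows and columns:

```python
import numpy as np

def crop_blank(image, blank_value=255):
    # Identify blank (background) rows and columns in the target
    # section image and crop them away, keeping only the region
    # that actually contains section content.
    mask = image < blank_value                 # non-blank pixels
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return image[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

# Section content on an otherwise blank canvas.
img = np.full((100, 100), 255, dtype=np.uint8)
img[20:60, 30:80] = 0
print(crop_blank(img).shape)  # (40, 50)
```

The cropped result would then replace (update) the target sectional image, as the module description above states.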
Since the present apparatus employs all technical solutions of all the above embodiments, at least all the beneficial effects brought by the technical solutions of the above embodiments are achieved, and are not described in detail herein.
In addition, an embodiment of the present invention further provides a storage medium, where a complete vehicle point cloud image analysis program is stored on the storage medium, and when being executed by a processor, the complete vehicle point cloud image analysis program implements the steps of the complete vehicle point cloud image analysis method described above.
Since the storage medium adopts all technical solutions of all the embodiments, at least all the beneficial effects brought by the technical solutions of the embodiments are achieved, and no further description is given here.
It should be noted that the above-described work flows are only exemplary, and do not limit the scope of the present invention, and in practical applications, a person skilled in the art may select some or all of them to achieve the purpose of the solution of the embodiment according to actual needs, and the present invention is not limited herein.
In addition, the technical details that are not described in detail in this embodiment may refer to the entire vehicle point cloud image analysis method provided in any embodiment of the present invention, and are not described herein again.
Further, it is to be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g. Read Only Memory (ROM)/RAM, magnetic disk, optical disk), and includes several instructions for enabling a terminal device (e.g. a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. A whole vehicle point cloud image analysis method, characterized by comprising the following steps:
acquiring a whole vehicle point cloud image of a target vehicle;
intercepting the point cloud image of the whole vehicle to obtain a corresponding target section image;
performing target structure detection through a preset convolutional neural network model based on the target section image to obtain a corresponding target structure detection result;
and completing the analysis of the point cloud image of the whole vehicle according to the target structure detection result.
2. The vehicle point cloud image analysis method according to claim 1, wherein the acquiring of the vehicle point cloud image of the target vehicle comprises:
acquiring vehicle point cloud data of a target vehicle;
and extracting image data in the finished automobile point cloud data to obtain a corresponding finished automobile point cloud image.
3. The vehicle point cloud image analysis method of claim 1, wherein the target structure detection is performed through a preset convolutional neural network model based on the target section image to obtain a corresponding target structure detection result, and the method comprises the following steps:
obtaining a corresponding preset pixel matrix based on the target section image;
inputting the preset pixel matrix to an input layer in the preset convolutional neural network model;
processing the preset pixel matrix based on the input layer through a backbone network in the preset convolutional neural network model to obtain a corresponding output matrix result;
and carrying out result detection through a detection layer in the preset convolutional neural network model based on the output matrix result to obtain a corresponding target structure detection result.
4. The vehicle point cloud image analysis method according to claim 1, wherein before the target structure detection is performed through a preset convolutional neural network model based on the target section image to obtain a corresponding target structure detection result, the method further comprises:
acquiring an initial section image in a preset database;
carrying out target structure labeling on the initial section image to obtain a sample section image;
and training the initial convolutional neural network model based on the sample sectional image to obtain a preset convolutional neural network model.
5. The vehicle point cloud image analysis method of claim 4, wherein the training of the initial convolutional neural network model based on the sample cross-sectional image to obtain a preset convolutional neural network model comprises:
performing feature extraction on the sample section image through an initial convolutional neural network model to obtain a preset number of feature maps;
setting a corresponding sample verification frame in the preset number of feature maps;
performing result detection according to the sample verification boxes, and outputting a corresponding sample detection result, wherein the sample detection result comprises detection frame coordinates, detection frame confidence and detection class probability;
and carrying out sample classification regression according to the sample detection result to obtain a preset convolutional neural network model.
6. The vehicle point cloud image analysis method of claim 4, wherein the target structure comprises: a spatial performance target structure, a passability target structure, and a visual field performance target structure;
the target structure labeling of the initial sectional image comprises the following steps:
identifying section images corresponding to a spatial performance target structure, a passability target structure and a visual field performance target structure in the initial section image;
and marking the target structure by a preset marking mode on the cross-section images corresponding to the spatial performance target structure, the passability target structure and the visual field performance target structure.
7. The vehicle point cloud image analysis method according to any one of claims 1 to 6, wherein after the capturing the vehicle point cloud image to obtain a corresponding target section image, the method further comprises:
acquiring a preset adjustment size;
identifying a blank image in the target cross-sectional image;
cutting the target section image based on the preset adjustment size and the blank image to obtain a cut section image;
and updating the target sectional image according to the cut sectional image.
8. A whole vehicle point cloud image analysis device, characterized in that the whole vehicle point cloud image analysis device comprises:
the acquisition module is used for acquiring a whole vehicle point cloud image of the target vehicle;
the intercepting module is used for intercepting the point cloud image of the whole vehicle to obtain a corresponding target section image;
the detection module is used for detecting a target structure through a preset convolutional neural network model based on the target section image to obtain a corresponding target structure detection result;
and the analysis module is used for completing the analysis of the point cloud image of the whole vehicle according to the target structure detection result.
9. A whole vehicle point cloud image analysis device, characterized by comprising: a memory, a processor, and a whole vehicle point cloud image analysis program stored on the memory and executable on the processor, the whole vehicle point cloud image analysis program being configured to implement the whole vehicle point cloud image analysis method of any one of claims 1 to 7.
10. A storage medium having stored thereon a complete vehicle point cloud image analysis program which, when executed by a processor, implements a complete vehicle point cloud image analysis method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110991588.1A CN113706503B (en) | 2021-08-26 | 2021-08-26 | Whole vehicle point cloud image analysis method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113706503A true CN113706503A (en) | 2021-11-26 |
CN113706503B CN113706503B (en) | 2024-01-30 |
Family
ID=78655551
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110991588.1A Active CN113706503B (en) | 2021-08-26 | 2021-08-26 | Whole vehicle point cloud image analysis method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113706503B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110263725A (en) * | 2019-06-21 | 2019-09-20 | 广州鹰瞰信息科技有限公司 | Fast vehicle detection method based on convolutional neural networks |
CN110458112A (en) * | 2019-08-14 | 2019-11-15 | 上海眼控科技股份有限公司 | Vehicle checking method, device, computer equipment and readable storage medium storing program for executing |
CN111144398A (en) * | 2018-11-02 | 2020-05-12 | 银河水滴科技(北京)有限公司 | Target detection method, target detection device, computer equipment and storage medium |
CN111981936A (en) * | 2020-08-31 | 2020-11-24 | 东风汽车集团有限公司 | Quick measurement record instrument of car body sheet metal structure characteristic |
CN112697066A (en) * | 2020-12-02 | 2021-04-23 | 王刚 | Vehicle part positioning method and device and computer storage medium |
DE102019131863A1 (en) * | 2019-11-25 | 2021-05-27 | Dürr Assembly Products GmbH | Use of a device for photogrammetric measurement of objects to determine the position and / or orientation of parts of a vehicle |
CN112859088A (en) * | 2021-01-04 | 2021-05-28 | 北京科技大学 | Vehicle position information acquisition method and system based on three-dimensional radar |
CN213658265U (en) * | 2020-12-21 | 2021-07-09 | 上海机动车检测中心技术有限公司 | Whole car trafficability characteristic detection device |
Non-Patent Citations (4)
Title |
---|
BO-TAI WU: "3D Environment Detection Using Multi-View Color Images and LiDAR Point Clouds", 2018 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW) *
WU Guanghuai: "Vehicle Point Cloud Processing Method Based on Ergonomics", Automobile Applied Technology *
ZHANG Libin; WU Dao; SHAN Hongying; LIU Qifeng: "Dynamic Measurement Method of Vehicle Outline Dimensions Based on Laser Point Cloud", Journal of South China University of Technology (Natural Science Edition), no. 03 *
CHENG Zhixian; ZHANG Hai; ZHOU Xinjian: "Rapid Die Design for Automobile Panels Based on Reverse Engineering", Machinery Design & Manufacture, no. 02 *
Also Published As
Publication number | Publication date |
---|---|
CN113706503B (en) | 2024-01-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9349076B1 (en) | Template-based target object detection in an image | |
CN110826632B (en) | Image change detection method, device, equipment and computer readable storage medium | |
CN109960742B (en) | Local information searching method and device | |
CN114155244B (en) | Defect detection method, device, equipment and storage medium | |
CN109145956B (en) | Scoring method, scoring device, computer equipment and storage medium | |
CN114463637A (en) | Winter wheat remote sensing identification analysis method and system based on deep learning | |
CN115372877B (en) | Lightning arrester leakage ammeter inspection method of transformer substation based on unmanned aerial vehicle | |
CN112070076A (en) | Text paragraph structure reduction method, device, equipment and computer storage medium | |
CN112308069A (en) | Click test method, device, equipment and storage medium for software interface | |
CN112683169A (en) | Object size measuring method, device, equipment and storage medium | |
CN113221947A (en) | Industrial quality inspection method and system based on image recognition technology | |
CN111598460A (en) | Method, device and equipment for monitoring heavy metal content in soil and storage medium | |
CN116258956A (en) | Unmanned aerial vehicle tree recognition method, unmanned aerial vehicle tree recognition equipment, storage medium and unmanned aerial vehicle tree recognition device | |
CN111768405A (en) | Method, device, equipment and storage medium for processing annotated image | |
CN111768406A (en) | Cell image processing method, device, equipment and storage medium | |
CN113269717A (en) | Building detection method and device based on remote sensing image | |
CN113706503B (en) | Whole vehicle point cloud image analysis method, device, equipment and storage medium | |
CN116758419A (en) | Multi-scale target detection method, device and equipment for remote sensing image | |
CN116310194A (en) | Three-dimensional model reconstruction method, system, equipment and storage medium for power distribution station room | |
CN111368709A (en) | Picture text recognition method, device and equipment and readable storage medium | |
CN111104965A (en) | Vehicle target identification method and device | |
CN113570001B (en) | Classification identification positioning method, device, equipment and computer readable storage medium | |
CN115424153A (en) | Target detection method, device, equipment and storage medium | |
CN112381773B (en) | Key cross section data analysis method, device, equipment and storage medium | |
CN113408571B (en) | Image classification method and device based on model distillation, storage medium and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||