CN117274510A - Vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement - Google Patents
Vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement
- Publication number
- CN117274510A (application number CN202311564740.3A)
- Authority
- CN
- China
- Prior art keywords
- unit
- vehicle body
- image
- algorithm
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/20—Administration of product repair or maintenance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/70—Labelling scene content, e.g. deriving syntactic or semantic representations
Abstract
The invention discloses a vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement, relates to the technical field of measurement, and addresses the problems of incomplete segmentation, inaccurate matching and imprecise modeling that affect three-dimensional modeling in vehicle body fault detection. The detection method comprises the steps of acquiring unordered multi-view range images of the vehicle body; analyzing and processing the acquired image data; performing virtual three-dimensional modeling on the processed image data; performing structural dimension measurement and fault location marking on each part of the vehicle body three-dimensional model; and generating a fault report and a maintenance guide. According to the invention, the multi-dimensional unordered tensor views are matched and calibrated by a multi-view pairwise registration algorithm, the vehicle body three-dimensional model is constructed by a synchronous voxel constraint topology algorithm, and the model is identified and segmented by a region graph theory algorithm, so that the completeness of identification and segmentation is greatly improved, the matching precision is raised, and the details of the modeling process are refined.
Description
Technical Field
The invention relates to the technical field of measurement, in particular to a vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement.
Background
The traditional vehicle body fault detection method relies mainly on manual visual inspection: deformation, cracks and other defects on the vehicle body surface are judged by eye. This approach suffers from low accuracy, low efficiency and high operating difficulty. A vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement uses advanced three-dimensional technology to achieve rapid, accurate detection and quantitative analysis of the vehicle body structure, and can greatly improve detection accuracy and efficiency.
However, three-dimensional modeling still has some drawbacks:
1. Three-dimensional modeling requires a large amount of data acquisition and processing, and modeling errors may exist that require fine adjustment.
2. For vehicle bodies with complex internal structures, such as the engine compartment and the transmission system, conventional three-dimensional modeling methods have difficulty acquiring internal structure information, so internal faults cannot be diagnosed effectively.
Disclosure of Invention
Aiming at the defects of the technology, the invention discloses a vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement.
In view of the above, the invention provides a vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement, comprising the following steps:
step 1, acquiring unordered multi-view range images of the vehicle body;
acquiring a multi-view image of the vehicle body through an image acquisition module;
step 2, analyzing and processing the acquired image data;
the collected disordered images are subjected to preliminary processing through an image processing module;
step 3, performing virtual three-dimensional modeling on the processed image data;
converting the image data of the vehicle body into a virtual three-dimensional model through a three-dimensional virtual module; the three-dimensional virtual module comprises a data conversion unit, a tensor matching unit, a virtual modeling unit and an identification and segmentation unit, wherein the data conversion unit adopts a conversion algorithm to convert the image data into point cloud tensors, the tensor matching unit adopts a multi-view pairwise registration algorithm to perform matching calibration among multi-dimensional unordered tensor views, the virtual modeling unit adopts a synchronous voxel constraint topology algorithm to perform three-dimensional model construction on the matched and calibrated multi-dimensional unordered tensor views, the identification and segmentation unit adopts a region graph theory algorithm to realize three-dimensional model identification and segmentation according to the structure of each system of the vehicle body, the output end of the data conversion unit is connected with the input end of the tensor matching unit, the output end of the tensor matching unit is connected with the input end of the virtual modeling unit, and the output end of the virtual modeling unit is connected with the input end of the identification and segmentation unit;
step 4, measuring the structural size and marking the faults of each part of the segmented vehicle body three-dimensional model;
the method comprises the steps of extracting structural dimension information of a vehicle body and detecting faults through a measurement detection module, wherein the structural dimension information at least comprises length, width, height and convexity;
step 5, presenting the detection result to a user and simultaneously generating a fault report and a maintenance guide;
and checking the vehicle body detection process, the vehicle body structure size information data, the fault report and the maintenance guide at multiple terminals through the intelligent display module.
As a further embodiment of the present invention, the image acquisition module includes an internal scanning unit that acquires a structural diagram of an engine, a transmission system, a suspension system, a brake system, and a steering system inside the vehicle body through a laser scanner, an external image acquisition unit that acquires images of colors, a frame, wheels, a dashboard, and an entertainment system of the vehicle body through an RGB camera, and a motion unit that rotates and translates the vehicle body through a multifunctional platform to acquire an omnibearing comprehensive image, the internal scanning unit and the external image acquisition unit being bidirectionally connected with the motion unit.
As a further embodiment of the present invention, the image processing module includes a flow accelerating unit, a data dividing unit, an image cleaning unit, an image classifying unit, an image sorting unit, an image integrating unit and a data analyzing unit, wherein the flow accelerating unit simplifies a data processing flow by adopting a distributed accelerating algorithm, the data dividing unit is used for dividing input image data into a plurality of identical data blocks according to a sequence code of an acquisition device, the image cleaning unit fills up incomplete parts of the data blocks by adopting an interpolation algorithm and removes redundant, chaotic and invalid parts of the data blocks by adopting a wavelet transformation algorithm, the image classifying unit classifies the cleaned data blocks according to system types of a vehicle body by adopting a hybrid clustering algorithm, the image sorting unit sorts the classified data blocks in order according to time, the image integration unit adopts an optical flow data matching algorithm to place ordered data blocks in the same network dynamic space, the data analysis unit performs summarization analysis according to the duty ratio of the obtained effective image data of each system of the vehicle body in the total collected image data, the output end of the flow acceleration unit is respectively connected with the input ends of the data dividing unit, the image cleaning unit, the image classifying unit, the image ordering unit, the image integration unit and the data analysis unit, the output end of the data dividing unit is connected with the input end of the image cleaning unit, the output end of the image cleaning unit is connected with the input end of the image classifying unit, the output end of the image classifying unit is connected with the input end of the image ordering unit, the output end of the image ordering unit is connected with the input end of the image integration unit, the output end of the image integration unit is connected with the input end of the data analysis unit.
As a further embodiment of the present invention, the measurement detection module includes a dimension measurement unit, a fault determination unit, and a mark storage unit, wherein the dimension measurement unit obtains actual structural dimension information data of the three-dimensional model of the vehicle body through a three-dimensional scanner, the fault determination unit determines deformation, impact and crack degrees of the vehicle body by comparing the actual structural dimension information data of the vehicle body with standard structural dimension information data, the mark storage unit adopts an anchor frame method to find deformation, impact and crack of the surface of the vehicle body according to the determination result, and automatically marks and stores related position information, an output end of the dimension measurement unit is connected with an input end of the fault determination unit, and an output end of the fault determination unit is connected with an input end of the mark storage unit.
As a further embodiment of the present invention, the intelligent display module includes an interaction unit, a feedback unit, a display unit and an early warning unit, where the interaction unit performs detail inspection of the vehicle body detection process at multiple terminals through a 3D virtual ring, the feedback unit adds an undetected fault position according to a fault report, the display unit displays the fault report and a maintenance guide through an LED splicing screen, the early warning unit prompts the user of the longest service life of the vehicle body and the vehicle body replacement reminder in a manner of a buzzer, a short message and a telephone message, an output end of the interaction unit is connected with an input end of the feedback unit, an output end of the feedback unit is connected with an input end of the early warning unit, and an output end of the early warning unit is connected with an input end of the display unit.
As a further embodiment of the present invention, the working method of the multi-viewpoint paired registration algorithm is as follows: firstly, carrying out preliminary registration on a plurality of images with different view angles by using a multi-view registration algorithm to obtain initial transformation parameters of each image, then adopting a mixed registration method of feature-based registration, phase correlation-based registration and image entropy-based registration to carry out further fine registration on each pair of images, reversely transmitting the result of the fine registration to the multi-view registration, thereby recalculating the initial transformation parameters, and finally carrying out the re-registration on all the images by using an optimization adjustment algorithm according to the initial transformation parameters obtained by reverse transmission.
As a further embodiment of the present invention, the working method of the synchronous voxel constraint topology algorithm is as follows: firstly, converting registered image data into a three-dimensional voxel model through a voxelization algorithm to establish a compact discrete space expression data matrix, then carrying out shape adjustment and topology optimization on the voxel model by utilizing a synchronous constraint topology generation algorithm, regenerating a three-dimensional model based on voxels, and finally optimizing the whole model detail by using an interactive modeling mode.
As a further embodiment of the present invention, the working method of the region graph theory algorithm is as follows: firstly, dividing a three-dimensional model into a plurality of nodes by using a region growing algorithm, then, constructing an undirected weighted graph by using the contact area between the adjacent nodes as the distance weight between two points, then, dividing the undirected weighted graph by using a graph theory algorithm, wherein the division result is a plurality of subsets of the three-dimensional model, and finally, carrying out optimization operation on the division result, wherein the optimization operation comprises the removal of the subset of holes, noise and errors.
Compared with the prior art, the invention has the following beneficial effects:
According to the invention, the multi-dimensional unordered tensor views are matched and calibrated by the multi-view pairwise registration algorithm, the vehicle body three-dimensional model is constructed by the synchronous voxel constraint topology algorithm, and the vehicle body three-dimensional model is identified and segmented by the region graph theory algorithm, so that the completeness of identification and segmentation is greatly improved, the matching precision is raised, and the details of the modeling process are refined. The fault detection capability for the vehicle body is thereby improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings which are required in the description of the embodiments or the prior art will be briefly described below, it being obvious that the drawings in the description below are only some embodiments of the invention, and that other drawings may be obtained from these drawings without inventive faculty for a person skilled in the art, wherein,
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a basic architecture diagram of the present invention;
FIG. 3 is a diagram of an image processing module architecture;
FIG. 4 is a three-dimensional virtual module architecture diagram;
fig. 5 is a schematic diagram of a measurement detection module.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. It should be understood that the description is only illustrative and is not intended to limit the scope of the invention. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the present invention.
As shown in fig. 1 to 5, a vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement includes the following steps:
step 1, acquiring unordered multi-view range images of the vehicle body;
acquiring a multi-view image of the vehicle body through an image acquisition module;
step 2, analyzing and processing the acquired image data;
the collected disordered images are subjected to preliminary processing through an image processing module;
step 3, performing virtual three-dimensional modeling on the processed image data;
converting the image data of the vehicle body into a virtual three-dimensional model through a three-dimensional virtual module; the three-dimensional virtual module comprises a data conversion unit, a tensor matching unit, a virtual modeling unit and an identification and segmentation unit, wherein the data conversion unit adopts a conversion algorithm to convert the image data into point cloud tensors, the tensor matching unit adopts a multi-view pairwise registration algorithm to perform matching calibration among multi-dimensional unordered tensor views, the virtual modeling unit adopts a synchronous voxel constraint topology algorithm to perform three-dimensional model construction on the matched and calibrated multi-dimensional unordered tensor views, the identification and segmentation unit adopts a region graph theory algorithm to realize three-dimensional model identification and segmentation according to the structure of each system of the vehicle body, the output end of the data conversion unit is connected with the input end of the tensor matching unit, the output end of the tensor matching unit is connected with the input end of the virtual modeling unit, and the output end of the virtual modeling unit is connected with the input end of the identification and segmentation unit;
step 4, measuring the structural size and marking the faults of each part of the segmented vehicle body three-dimensional model;
the method comprises the steps of extracting structural dimension information of a vehicle body and detecting faults through a measurement detection module, wherein the structural dimension information at least comprises length, width, height and convexity;
step 5, presenting the detection result to a user and simultaneously generating a fault report and a maintenance guide;
and checking the vehicle body detection process, the vehicle body structure size information data, the fault report and the maintenance guide at multiple terminals through the intelligent display module.
The output end of the image acquisition module is connected with the input end of the image processing module, the output end of the image processing module is connected with the input end of the three-dimensional virtual module, the output end of the three-dimensional virtual module is connected with the input end of the measurement detection module, and the output end of the measurement detection module is connected with the input end of the intelligent display module.
Further, the image acquisition module comprises an internal scanning unit, an appearance acquisition unit and a motion unit, wherein the internal scanning unit acquires the structure diagram of an engine, a transmission system, a suspension system, a braking system and a steering system in the vehicle body through a laser scanner, the appearance acquisition unit acquires the images of the colors, the frames, the wheels, the instrument panel and the entertainment system of the vehicle body through an RGB camera, the motion unit rotates and translates the vehicle body through a multifunctional platform to acquire comprehensive images in all directions, and the internal scanning unit and the appearance acquisition unit are in bidirectional connection with the motion unit.
Further, the image processing module comprises a flow accelerating unit, a data dividing unit, an image cleaning unit, an image classifying unit, an image sorting unit, an image integrating unit and a data analyzing unit, the flow accelerating unit adopts a distributed accelerating algorithm to simplify the data processing flow, the data dividing unit is used for dividing the input image data into a plurality of identical data blocks according to the sequence code of the acquisition equipment, the image cleaning unit fills the incomplete part of the data block by adopting an interpolation algorithm and removes the redundant, chaotic and invalid part of the data block by adopting a wavelet transformation algorithm, the image classification unit classifies the cleaned data blocks according to the types of the systems of the vehicle body by adopting a mixed clustering algorithm, the image sorting unit sorts the sorted data blocks in order according to time, the image integrating unit adopts an optical flow data matching algorithm to place the sorted data blocks in the same network dynamic space, the data analysis unit performs summary analysis according to the ratio of the obtained effective image data of each system of the vehicle body in the total collected image data, the output end of the flow accelerating unit is respectively connected with the input ends of the data dividing unit, the image cleaning unit, the image classifying unit, the image sorting unit, the image integrating unit and the data analyzing unit, the output end of the data dividing unit is connected with the input end of the image cleaning unit, the output end of the image cleaning unit is connected with the input end of the image classifying unit, the output end of the image classification unit is connected with the input end of the image sorting unit, the output end of the image sorting unit is connected with the input end of the image integration unit, and the output end of the image integration unit is connected with the input end of the data analysis unit.
In a specific embodiment, the working principle of the image processing module is as follows: the flow accelerating unit processes the original data in parallel and sends the simplified original data to the data dividing unit. The data dividing unit divides the original data into a plurality of data blocks and distributes them to different devices. The image cleaning unit receives the data blocks distributed on the equipment, performs interpolation processing and wavelet transformation on the data blocks, removes incomplete, redundant, chaotic and invalid parts, and then sends the processed data blocks to the image classification unit. The image classification unit classifies the cleaned data blocks by using a hybrid clustering algorithm and sends the classified data blocks to the image ordering unit. The image ordering unit orders the classified data blocks in time order. The image integration unit receives the arranged data blocks and places the data blocks in the same network dynamic space model by utilizing an optical flow data matching algorithm. The data analysis unit receives the network dynamic space model and performs summary analysis in the model according to the duty ratio of the effective image data of each system of the vehicle body in the total acquired image data. Finally, the output of the processing module returns the results to the system for further analysis and application. In the whole processing process, each unit adopts a distributed processing mode to accelerate the data processing flow, thereby realizing higher efficiency and precision.
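As an illustration of the cleaning step described above, the following is a minimal Python sketch of how the interpolation filling and wavelet-transform cleaning of a single data block might look. It assumes the OpenCV and PyWavelets libraries, a grayscale image block and a mask of missing pixels; it is a sketch under these assumptions, not the implementation prescribed by the method.
import cv2
import numpy as np
import pywt

def clean_image_block(block, missing_mask, wavelet="db2", keep_ratio=0.9):
    # block: 2-D uint8 grayscale image block; missing_mask: non-zero where pixels are missing.
    # Fill the incomplete parts of the data block by interpolation (inpainting).
    filled = cv2.inpaint(block, missing_mask.astype(np.uint8), 3, cv2.INPAINT_TELEA)
    # Suppress redundant, chaotic and invalid detail with a wavelet transform:
    # decompose, soft-threshold the small detail coefficients, then reconstruct.
    coeffs = pywt.wavedec2(filled.astype(np.float32), wavelet, level=2)
    approx, details = coeffs[0], coeffs[1:]
    cleaned_details = []
    for level in details:
        all_coeffs = np.concatenate([c.ravel() for c in level])
        thr = np.quantile(np.abs(all_coeffs), keep_ratio)
        cleaned_details.append(tuple(pywt.threshold(c, thr, mode="soft") for c in level))
    cleaned = pywt.waverec2([approx] + cleaned_details, wavelet)
    return np.clip(cleaned, 0, 255).astype(block.dtype)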
Further, the measurement detection module comprises a dimension measurement unit, a fault judgment unit and a mark storage unit, wherein the dimension measurement unit obtains actual structure dimension information data of a three-dimensional model of the vehicle body through a three-dimensional scanner, the fault judgment unit judges the deformation, impact and crack degree of the vehicle body by comparing the actual structure dimension information data of the vehicle body with standard structure dimension information data, the mark storage unit adopts an anchor frame method to find out deformation, impact and crack of the surface of the vehicle body according to a judgment result and automatically marks and stores relevant position information, the output end of the dimension measurement unit is connected with the input end of the fault judgment unit, and the output end of the fault judgment unit is connected with the input end of the mark storage unit.
In a specific embodiment, the working process of the measurement detection module is as follows: the dimension measuring unit acquires actual structure dimension information data of the vehicle body three-dimensional model through the three-dimensional scanner, and provides necessary data support for the fault judging unit; the fault judging unit judges the deformation, impact and crack degree of the vehicle body according to the comparison of the actual structure size information data and the standard structure size information data; the mark storage unit adopts an anchor frame method to automatically mark and store the related position information, thereby providing convenience for subsequent fault repair and maintenance work.
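The comparison performed by the fault judging unit can be illustrated with a short Python sketch: measured dimensions of a body part are compared against standard dimensions under a relative tolerance, and any out-of-tolerance item is reported as a candidate deformation, impact or crack location. The part names, the tolerance value and the report format below are illustrative assumptions, not values fixed by the method.
from dataclasses import dataclass

@dataclass
class DimensionRecord:
    part: str        # e.g. "hood"
    length: float    # mm
    width: float     # mm
    height: float    # mm
    convexity: float # mm, deviation from the nominal surface

def judge_faults(actual, standard, rel_tol=0.01):
    # Return a list of (attribute, actual, standard) tuples exceeding the tolerance.
    faults = []
    for attr in ("length", "width", "height", "convexity"):
        a, s = getattr(actual, attr), getattr(standard, attr)
        if s != 0 and abs(a - s) / abs(s) > rel_tol:
            faults.append((attr, a, s))
    return faults

# Hypothetical example: a hood whose measured convexity deviates from the standard model.
measured = DimensionRecord("hood", 1498.0, 1320.5, 48.0, 6.2)
nominal = DimensionRecord("hood", 1500.0, 1321.0, 48.2, 2.0)
for attr, a, s in judge_faults(measured, nominal):
    print(f"possible fault on {measured.part}: {attr} measured {a} vs standard {s}")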
Further, the intelligent display module comprises an interaction unit, a feedback unit, a display unit and an early warning unit, wherein the interaction unit checks details of the vehicle body detection process at multiple terminals through a 3D virtual ring, the feedback unit adds undetected fault positions according to fault reports, the display unit displays the fault reports and maintenance guidelines through an LED spliced screen, the early warning unit prompts a user of the longest service life of a vehicle body and a vehicle body replacement prompt in a manner of a buzzer, a short message and a telephone message, the output end of the interaction unit is connected with the input end of the feedback unit, the output end of the feedback unit is connected with the input end of the early warning unit, and the output end of the early warning unit is connected with the input end of the display unit.
In a specific embodiment, the working process of the intelligent display module is as follows: the interaction unit is convenient for a user to check the detailed condition of vehicle detection through the 3D virtual finger ring, and the feedback unit perfects the fault report and improves the accuracy of the detection report; the early warning unit prompts the user of the longest service life of the vehicle body and the vehicle body replacement reminding in a plurality of modes, and reminds the user of timely maintaining and replacing the vehicle; the display unit displays the fault report and the maintenance guide in a visual mode, so that the user can conveniently check the fault report and the maintenance guide.
Further, the working method of the multi-view paired registration algorithm comprises the following steps: firstly, carrying out preliminary registration on a plurality of images with different view angles by using a multi-view registration algorithm to obtain initial transformation parameters of each image, then adopting a mixed registration method of feature-based registration, phase correlation-based registration and image entropy-based registration to carry out further fine registration on each pair of images, reversely transmitting the result of the fine registration to the multi-view registration, thereby recalculating the initial transformation parameters, and finally carrying out the re-registration on all the images by using an optimization adjustment algorithm according to the initial transformation parameters obtained by reverse transmission.
In a specific embodiment, the principle of the multi-view pairwise registration algorithm is as follows: key feature points are extracted in each image; these are selected as points that are distinctive and stable relative to other areas. For each key feature point, the algorithm computes an associated descriptor, a numerical representation that uniquely describes the point feature, typically a fixed-length vector. Matching feature points are then searched for between the two images, which typically involves calculating the similarity between the descriptors of the feature points of the first image and those of each feature point in the second image; the higher the similarity, the more likely the points match. The possible range of matches is constrained according to the spatial relationship between the images, which can be achieved using a fundamental matrix or an essential matrix. By back-projecting the corresponding positions of candidate matches, the algorithm determines which feature points are truly matched and which are not. The matching results are then optimized to produce the final image registration result, as shown in Table 1.
Table 1 registration comparison table
According to table 1, the multi-view paired registration algorithm takes the shortest time of all registration algorithms, only 39.4 seconds, about 100 seconds less on average than other algorithms from the time point of view. This illustrates that the multi-view pairwise registration algorithm has a significant advantage in terms of time efficiency. From the registration rate, the registration rate of the multi-view paired registration algorithm reaches 98.6%, which is far higher than the average registration rate of the other two algorithms, namely 67.3% and 73.2%, respectively. This shows that the multi-view pairwise registration algorithm has significant advantages in terms of registration accuracy. From the viewpoint of error rate, the error rate of the multi-view paired registration algorithm is only 1.6%, which is about half of the average error rate of other algorithms, namely 20.5% and 17.3% respectively. This illustrates that the multi-view pairwise registration algorithm has higher reliability and stability.
In summary, the multi-view paired registration algorithm is a more excellent multi-view image registration algorithm from three aspects of time, registration accuracy and error rate.
The pseudo code of the multi-view pairwise registration algorithm is implemented as:
# preliminary multi-view registration
initial_params = multi_view_registration(images)
# feature-based registration
feature_aligned_images = feature_based_registration(initial_params, images)
# phase correlation registration
phase_aligned_images = phase_correlation_registration(feature_aligned_images)
# image entropy registration
entropy_aligned_images = entropy_based_registration(phase_aligned_images)
# reverse transfer of the preliminary registration parameters
final_params = inverse_transform(initial_params)
# re-optimized registration
final_aligned_images = optimised_registration(final_params, entropy_aligned_images)
In this example code, 'images' represents the input set of images, and the 'multi_view_registration()' function performs a preliminary registration of the images using a multi-view registration algorithm, returning the initial transformation parameters of each image. Each pair of images is then registered more finely through the three functions 'feature_based_registration()', 'phase_correlation_registration()' and 'entropy_based_registration()', yielding a further optimized registration result. The 'inverse_transform()' function then transfers the fine registration result back into the initial transformation parameters, so that readjusted initial transformation parameters are obtained. Finally, all images are re-registered using the 'optimised_registration()' function, and the final registration result is returned.
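As a concrete counterpart of the abstract pseudo code above, the following Python sketch shows how the feature-based pairwise matching step could be realized with OpenCV: ORB features stand in for the unspecified feature detector, descriptors are matched by brute force with a ratio test, and candidate matches are constrained with a fundamental matrix estimated by RANSAC. The library choice and parameters are illustrative assumptions, not the patent's exact algorithm.
import cv2
import numpy as np

def pairwise_match(img1, img2, ratio=0.75):
    # Detect keypoints and compute fixed-length binary descriptors in both views.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    # Match descriptors; keep only clearly similar pairs (Lowe ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    # Reject matches that violate the epipolar geometry (fundamental matrix + RANSAC).
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers], F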
Further, the working method of the synchronous voxel constraint topology algorithm comprises the following steps: firstly, converting registered image data into a three-dimensional voxel model through a voxelization algorithm to establish a compact discrete space expression data matrix, then carrying out shape adjustment and topology optimization on the voxel model by utilizing a synchronous constraint topology generation algorithm, regenerating a three-dimensional model based on voxels, and finally optimizing the whole model detail by using an interactive modeling mode.
In a specific embodiment, the working principle of the synchronous voxel constraint topology algorithm is as follows: based on the camera poses, the two-dimensional feature points in each image are projected into three-dimensional space to obtain the three-dimensional point cloud of each image. The point cloud is discretized through voxel gridding to generate a voxel point cloud; a voxel is a small cube used to represent part of the three-dimensional point cloud, which is discretized into many such cubes. For each voxel, its neighboring voxels are computed and mapped into the corresponding images. The corresponding two-dimensional feature points of each voxel and its neighbors are compared across all images, and the positions of the neighboring voxels are updated using a synchronous optimization technique to ensure that they are consistent in all images. The three-dimensional coordinates of the voxel point cloud are then recalculated according to the new positions of the neighboring voxels. Finally, the camera poses are updated using the optimized voxel point cloud, and the final position of each point in the three-dimensional point cloud is calculated from the camera positions, as shown in Table 2.
Table 2 comparison of modeling effects
Table 2 lists the effects of the two different algorithms in the four stages A, B, C and D, together with the time required for the test and the percentage of the final effect.
From a time perspective, the synchronous voxel constraint topology algorithm requires only 24 seconds, on average about 51 seconds less than the voxelization algorithm, and thus significantly reduces the computation time. This illustrates that the synchronous voxel constraint topology algorithm has a significant advantage in terms of time efficiency.
In terms of effect, the synchronous voxel constraint topology algorithm reaches 97.9%, far higher than the 58.2% average effect of the voxelization algorithm. This shows that the synchronous voxel constraint topology algorithm has an obvious advantage in terms of modeling accuracy.
In summary, the synchronous voxel constraint topology algorithm is the better modeling algorithm in terms of both time and effect. The algorithm exploits the constraint relations among voxels while taking the information of multiple images into account, thereby improving modeling accuracy and robustness.
Further, the working method of the optical flow data matching algorithm is as follows: feature points in adjacent data blocks are first extracted with the SURF algorithm and matched using the FLANN algorithm to obtain initial positions; optical flow data between the adjacent data blocks are then calculated, and the feature points are tracked with the KLT optical flow algorithm; the average motion vector of each object is computed, the final position of the object is obtained from the initial position and the motion vector, and the initial and final positions are fused by weighting to obtain the matching result; finally, the adjacent data blocks are registered according to the matching result to realize the virtual three-dimensional modeling.
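A minimal Python sketch of this optical-flow matching step, assuming OpenCV: because SURF is only available in the non-free OpenCV contrib build, ORB is used here as a stand-in detector, descriptors are matched with FLANN, and the matched points are tracked between adjacent data blocks with the pyramidal Lucas-Kanade (KLT) tracker; the equal weighting of initial and tracked positions is an illustrative choice, not the method's prescribed fusion.
import cv2
import numpy as np

def match_adjacent_blocks(frame_a, frame_b, alpha=0.5):
    # 1. Feature extraction (ORB as a stand-in for SURF).
    orb = cv2.ORB_create(nfeatures=1500)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    # 2. FLANN matching (LSH index for binary descriptors) gives the initial positions.
    flann = cv2.FlannBasedMatcher(dict(algorithm=6, table_number=6,
                                       key_size=12, multi_probe_level=1), {})
    matches = flann.match(des_a, des_b)
    start = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    initial = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # 3. KLT optical-flow tracking of the same feature points.
    tracked, status, _ = cv2.calcOpticalFlowPyrLK(frame_a, frame_b, start, None)
    ok = status.ravel() == 1
    # 4. Average motion vector and weighted fusion of initial and tracked positions.
    motion = (tracked[ok] - start[ok]).reshape(-1, 2).mean(axis=0)
    final = alpha * initial[ok] + (1 - alpha) * (start[ok] + motion)
    return start[ok].reshape(-1, 2), final.reshape(-1, 2), motion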
The pseudo code implementation of the synchronous voxel constraint topology algorithm is:
# Convert the registered image data into a voxel model.
# Input: registered image data D, voxel size v_size
voxel_grid = New VoxelGrid(v_size)
for each point p in D:
    voxel_grid.add_point(p)
# Carry out shape adjustment and topology optimization on the voxel model
# with the synchronous voxel constraint topology algorithm.
# Input: voxel model voxel_grid, number of iterations iter_num
for i in range(iter_num):
    for each voxel in voxel_grid:
        for each neighbor in voxel.get_neighbors():
            match_2d_points(voxel, neighbor)
        update_voxel_positions()
update_voxel_positions()
# Regenerate the voxel-based three-dimensional model.
# Input: voxel model voxel_grid
mesh = New Mesh()
for each voxel in voxel_grid:
    mesh.add_voxel_surface(voxel)
# Finally, optimize the details of the whole model in an interactive modeling mode.
Note: in the above code, 'update_voxel_positions()' and 'match_2d_points()' represent the specific implementations of voxel position updating and two-dimensional point matching in the synchronous voxel constraint topology algorithm; the algorithm details can be found in the description of the algorithm principle above.
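To make the voxelization step above more concrete, the following short Python sketch discretizes registered three-dimensional points into a voxel grid, producing the kind of compact discrete representation on which the synchronous constraints operate. The voxel size and the pure NumPy representation are illustrative assumptions.
import numpy as np

def voxelize(points, voxel_size=5.0):
    # points: (N, 3) array of registered three-dimensional points.
    origin = points.min(axis=0)
    # Integer voxel index of every point in the discrete grid.
    indices = np.floor((points - origin) / voxel_size).astype(np.int64)
    # The set of occupied voxels forms the compact discrete space expression of the model.
    occupied = np.unique(indices, axis=0)
    centers = origin + (occupied + 0.5) * voxel_size
    return indices, occupied, centers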
Further, the working method of the region graph theory algorithm comprises the following steps: firstly, dividing a three-dimensional model into a plurality of nodes by using a region growing algorithm, then, constructing an undirected weighted graph by using the contact area between the adjacent nodes as the distance weight between two points, then, dividing the undirected weighted graph by using a graph theory algorithm, wherein the division result is a plurality of subsets of the three-dimensional model, and finally, carrying out optimization operation on the division result, wherein the optimization operation comprises the removal of the subset of holes, noise and errors.
In a specific embodiment, the working principle of the region graph theory algorithm is as follows: the image is divided into a plurality of small areas, and an average color thereof is calculated for each area as a representative color of the area. The similarity of two adjacent regions is calculated to determine whether they belong to the same region. Region similarity is typically measured using color similarity, texture similarity, and the like. Creating a region map according to the region similarity. The region map is represented using a connection matrix, each element in the connection matrix representing connectivity between two adjacent regions. The final segmentation result is constructed by continuously merging neighboring regions. Specifically, in the region graph, the edge with the smallest weight is found from the connection matrix, and the two regions connected by the edge are combined. After merging, the region similarity and the connection matrix are recalculated, and the step is repeatedly executed until a certain stopping condition (such as reaching a specified minimum segmentation number) is met. The label obtained by combining the regions is assigned to the pixels of the original image, forming the final image segmentation result, as shown in table 3.
Table 3 identifies a table of segmentation effects
From Table 3, the region graph theory algorithm has a significant advantage in terms of recognition rate, reaching 98.6%, far higher than the region growing algorithm and the graph theory algorithm. This indicates that the region graph theory algorithm can identify objects in the image more accurately. The region graph theory algorithm also has an obvious advantage in terms of segmentation rate, reaching 99.6%, higher than the region growing algorithm and the graph theory algorithm, which suggests that it can segment the image more efficiently. In terms of completeness the region graph theory algorithm is slightly inferior to the region growing algorithm, but it still reaches 97.9%, higher than the graph theory algorithm, indicating that it preserves the integrity of the target well. In terms of total time, the region graph theory algorithm takes the shortest time, 51 seconds, about 74 seconds and 95 seconds less than the region growing algorithm and the graph theory algorithm respectively. This shows that the region graph theory algorithm has a significant advantage in time efficiency.
In conclusion, the region graph theory algorithm has obvious advantages in recognition rate, segmentation rate and time efficiency, and can complete the image segmentation task more accurately and more rapidly.
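The region-merging process described in the embodiment above can be sketched in Python as follows; the color-difference weight, the boolean connection-matrix input and the stopping criterion (a target number of regions) are illustrative assumptions rather than values fixed by the method.
import numpy as np

def merge_regions(mean_colors, adjacency, target_regions=10):
    # mean_colors: (R, 3) average color of each region;
    # adjacency: (R, R) boolean connection matrix of neighbouring regions.
    labels = list(range(len(mean_colors)))
    colors = {i: c.astype(float) for i, c in enumerate(mean_colors)}
    adj = {i: set(np.nonzero(adjacency[i])[0]) - {i} for i in range(len(mean_colors))}
    while len(colors) > target_regions:
        edges = [(a, b) for a in adj for b in adj[a]]
        if not edges:
            break
        # Find the edge with the smallest weight (the most similar neighbouring regions).
        i, j = min(edges, key=lambda e: np.linalg.norm(colors[e[0]] - colors[e[1]]))
        # Merge region j into region i, then update color, adjacency and labels.
        colors[i] = (colors[i] + colors[j]) / 2.0
        adj[i] |= adj.pop(j) - {i}
        for k in adj:
            if j in adj[k]:
                adj[k].discard(j)
                if k != i:
                    adj[k].add(i)
        del colors[j]
        labels = [i if lab == j else lab for lab in labels]
    return labels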
The pseudo code of the region graph theory algorithm is implemented as:
# Divide the three-dimensional model into a plurality of nodes using a region growing algorithm
seed_nodes = find_seed_nodes(model)
regions = []
visited_nodes = []
for each seed_node in seed_nodes:
    region = grow_region(seed_node)
    regions.append(region)
    visited_nodes.extend(region)

def grow_region(node):
    region = [node]
    queue = [node]
    while queue:
        current_node = queue.pop(0)
        neighbors = find_neighbors(current_node)
        for each neighbor in neighbors:
            if neighbor not in visited_nodes and is_similar(current_node, neighbor):
                visited_nodes.append(neighbor)
                region.append(neighbor)
                queue.append(neighbor)
    return region

# Construct the undirected weighted graph and partition it using a graph theory algorithm
graph = build_graph()
subsets = graph_cut(graph)

# Optimize the segmentation result
optimized_subsets = []
for each subset in subsets:
    if has_holes(subset):
        hole_filled_subset = fill_holes(subset)
        optimized_subsets.append(hole_filled_subset)
    else:
        optimized_subsets.append(subset)
    noise_removed_subset = remove_noise(optimized_subsets[-1])
    optimized_subsets[-1] = noise_removed_subset

# Auxiliary functions
def build_graph():
    nodes = []
    for each region in regions:
        nodes.append(Node(region.id))
    for each node_i in nodes:
        for each node_j in nodes:
            if node_i != node_j:
                weight = calculate_weight(node_i.region, node_j.region)
                node_i.neighbors.append((node_j, weight))
    return Graph(nodes)

def calculate_weight(region_i, region_j):
    shared_area = calculate_shared_area(region_i, region_j)
    return shared_area / (region_i.area + region_j.area - shared_area)

# Determine whether a region has holes
def has_holes(region): ...

# Fill holes
def fill_holes(region): ...

# Remove noise and erroneous subsets
def remove_noise(region): ...

# Data structure definitions
class Node:
    def __init__(self, id):
        self.id = id
        self.region = ...
        self.neighbors = []

class Graph:
    def __init__(self, nodes):
        self.nodes = nodes

# Partition the graph using a graph theory algorithm
def graph_cut(graph): ...
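The 'graph_cut(graph)' step is left abstract in the pseudo code above. One possible realization, shown here as an assumption rather than the patent's prescribed method, is spectral clustering on the weighted connection matrix with scikit-learn, which approximates a normalized graph cut:
from sklearn.cluster import SpectralClustering

def graph_cut(weight_matrix, n_subsets=6):
    # weight_matrix: symmetric (N, N) matrix of contact-area weights between nodes.
    # Returns the subset label assigned to each node of the three-dimensional model.
    clustering = SpectralClustering(n_clusters=n_subsets, affinity="precomputed",
                                    assign_labels="discretize", random_state=0)
    return clustering.fit_predict(weight_matrix)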
While specific embodiments of the present invention have been described above, it will be understood by those skilled in the art that these specific embodiments are by way of example only, and that various omissions, substitutions, and changes in the form and details of the methods and systems described above may be made by those skilled in the art without departing from the spirit and scope of the invention. For example, it is within the scope of the present invention to combine the above-described method steps to perform substantially the same function in substantially the same way to achieve substantially the same result. Accordingly, the scope of the invention is limited only by the following claims.
Claims (8)
1. A vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement, characterized by comprising the following steps:
step 1, acquiring unordered multi-view range images of the vehicle body;
acquiring a multi-view image of the vehicle body through an image acquisition module;
step 2, analyzing and processing the acquired image data;
the collected disordered images are subjected to preliminary processing through an image processing module;
step 3, performing virtual three-dimensional modeling on the processed image data;
converting the image data of the vehicle body into a virtual three-dimensional model through a three-dimensional virtual module; the three-dimensional virtual module comprises a data conversion unit, a tensor matching unit, a virtual modeling unit and an identification and segmentation unit, wherein the data conversion unit adopts a conversion algorithm to convert the image data into point cloud tensors, the tensor matching unit adopts a multi-view pairwise registration algorithm to perform matching calibration among multi-dimensional unordered tensor views, the virtual modeling unit adopts a synchronous voxel constraint topology algorithm to perform three-dimensional model construction on the matched and calibrated multi-dimensional unordered tensor views, the identification and segmentation unit adopts a region graph theory algorithm to realize three-dimensional model identification and segmentation according to the structure of each system of the vehicle body, the output end of the data conversion unit is connected with the input end of the tensor matching unit, the output end of the tensor matching unit is connected with the input end of the virtual modeling unit, and the output end of the virtual modeling unit is connected with the input end of the identification and segmentation unit;
step 4, measuring the structural size and marking the faults of each part of the segmented vehicle body three-dimensional model;
the method comprises the steps of extracting structural dimension information of a vehicle body and detecting faults through a measurement detection module, wherein the structural dimension information at least comprises length, width, height and convexity;
step 5, presenting the detection result to a user and simultaneously generating a fault report and a maintenance guide;
and checking the vehicle body detection process, the vehicle body structure size information data, the fault report and the maintenance guide at multiple terminals through the intelligent display module.
2. The vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement according to claim 1, wherein: the image acquisition module comprises an internal scanning unit, an appearance acquisition unit and a motion unit, wherein the internal scanning unit acquires the structural diagram of an engine, a transmission system, a suspension system, a braking system and a steering system in the vehicle body through a laser scanner, the appearance acquisition unit acquires the images of the colors, the frames, the wheels, the instrument panel and the entertainment system of the vehicle body through an RGB camera, the motion unit rotates and translates the vehicle body through a multifunctional platform to acquire an omnibearing comprehensive image, and the internal scanning unit and the appearance acquisition unit are in bidirectional connection with the motion unit.
3. The vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement according to claim 1, wherein: the image processing module comprises a flow accelerating unit, a data dividing unit, an image cleaning unit, an image classifying unit, an image sorting unit, an image integrating unit and a data analyzing unit, wherein the flow accelerating unit adopts a distributed accelerating algorithm to simplify the data processing flow, the data dividing unit is used for dividing input image data into a plurality of identical data blocks according to the sequence code of an acquisition device, the image cleaning unit adopts an interpolation algorithm to fill up incomplete parts of the data blocks and adopts a wavelet transformation algorithm to remove redundant, disordered and invalid parts of the data blocks, the image classifying unit adopts a hybrid clustering algorithm to classify the cleaned data blocks according to the types of each system of the vehicle body, the image sorting unit is used for sorting the classified data blocks according to the time, the image integrating unit adopts an optical flow data matching algorithm to place the sorted data blocks in the same network dynamic space, the data analyzing unit carries out summarizing analysis according to the occupation ratio of the obtained effective image data of each system of the vehicle body in the total acquired image data, the output end of the flow accelerating unit is respectively connected with the data dividing unit, the image cleaning unit, the image classifying unit and the image classifying unit are connected with the image classifying unit, the image integrating unit and the image integrating unit are connected with the image classifying unit.
4. The vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement according to claim 1, wherein: the measuring and detecting module comprises a dimension measuring unit, a fault judging unit and a mark storage unit, wherein the dimension measuring unit obtains actual structure dimension information data of a three-dimensional model of the vehicle body through a three-dimensional scanner, the fault judging unit judges the deformation, impact and crack degree of the vehicle body by comparing the actual structure dimension information data of the vehicle body with standard structure dimension information data, the mark storage unit adopts an anchor frame method to find out deformation, impact and crack of the surface of the vehicle body according to a judging result and automatically marks and stores relevant position information, the output end of the dimension measuring unit is connected with the input end of the fault judging unit, and the output end of the fault judging unit is connected with the input end of the mark storage unit.
5. The vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement according to claim 1, wherein: the intelligent display module comprises an interaction unit, a feedback unit, a display unit and an early warning unit, wherein the interaction unit checks details of the vehicle body detection process at multiple terminals through a 3D virtual finger ring, the feedback unit adds undetected fault positions according to fault reports, the display unit displays the fault reports and maintenance guidelines through an LED spliced screen, the early warning unit prompts a user of the longest service life of a vehicle body and reminding of vehicle body replacement in a buzzer, short messages and telephone messages mode, the output end of the interaction unit is connected with the input end of the feedback unit, the output end of the feedback unit is connected with the input end of the early warning unit, and the output end of the early warning unit is connected with the input end of the display unit.
6. The vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement according to claim 1, wherein: the working method of the multi-view paired registration algorithm comprises the following steps: firstly, carrying out preliminary registration on a plurality of images with different view angles by using a multi-view registration algorithm to obtain initial transformation parameters of each image, then adopting a mixed registration method of feature-based registration, phase correlation-based registration and image entropy-based registration to carry out further fine registration on each pair of images, reversely transmitting the result of the fine registration to the multi-view registration, thereby recalculating the initial transformation parameters, and finally carrying out the re-registration on all the images by using an optimization adjustment algorithm according to the initial transformation parameters obtained by reverse transmission.
7. The vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement according to claim 1, wherein: the working method of the synchronous voxel constraint topology algorithm comprises the following steps: firstly, converting the registered image data into a three-dimensional voxel model through a voxelization algorithm to establish a compact discrete-space data matrix; then carrying out shape adjustment and topology optimization on the voxel model by using a synchronous constraint topology generation algorithm and regenerating a voxel-based three-dimensional model; and finally optimizing the details of the whole model in an interactive modeling mode.
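The voxelization step that opens claim 7 can be sketched as binning registered points into an occupancy grid. The 10 mm voxel size and the synthetic point cloud are assumptions for illustration, and the synchronous constraint topology generation step is not reproduced here.

```python
import numpy as np

# Minimal voxelization sketch for the first step of claim 7: registered points
# are binned into a compact boolean occupancy grid (the "discrete space matrix").
def voxelize(points, voxel_size=10.0):
    """points: (N, 3) array of x, y, z in mm -> (occupancy grid, grid origin)."""
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    dims = idx.max(axis=0) + 1
    grid = np.zeros(dims, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid, origin

pts = np.random.rand(1000, 3) * 500.0          # synthetic 0.5 m cube of points
grid, origin = voxelize(pts)
print(grid.shape, int(grid.sum()), "occupied voxels")
```

Shape adjustment and topology optimization would then operate on this grid before a surface model is regenerated from the voxels.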
8. The vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement according to claim 1, wherein: the working method of the region graph theory algorithm comprises the following steps: firstly, dividing the three-dimensional model into a plurality of nodes by using a region growing algorithm; then constructing an undirected weighted graph by taking the contact area between adjacent nodes as the distance weight between two points; next, partitioning the undirected weighted graph by using a graph theory algorithm, the partition result being a plurality of subsets of the three-dimensional model; and finally performing an optimization operation on the partition result, the optimization operation comprising removing subsets corresponding to holes, noise and errors.
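The graph construction and partition of claim 8 might look like the following sketch, in which contact areas become edge weights of an undirected graph and modularity-based communities stand in for the unspecified graph-theoretic partition; node names and contact areas are invented for the example.

```python
import networkx as nx

# Sketch of the graph construction and partition in claim 8.
# Node ids and contact areas are illustrative assumptions, and greedy modularity
# communities are a stand-in for the patent's unspecified partitioning algorithm.
contacts = [                         # (node_a, node_b, contact_area)
    ("roof", "pillar_a", 120.0),
    ("pillar_a", "door", 90.0),
    ("door", "sill", 60.0),
    ("roof", "pillar_b", 115.0),
]

G = nx.Graph()
for a, b, area in contacts:
    G.add_edge(a, b, weight=area)    # contact area used as the edge weight

parts = nx.algorithms.community.greedy_modularity_communities(G, weight="weight")
# Optimization step: drop tiny subsets, which most likely come from holes or noise.
parts = [p for p in parts if len(p) > 1]
print([sorted(p) for p in parts])
```

The surviving subsets correspond to the segmented regions of the three-dimensional vehicle body model on which dimension measurement is subsequently performed.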
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311564740.3A CN117274510B (en) | 2023-11-22 | 2023-11-22 | Vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117274510A (en) | 2023-12-22
CN117274510B CN117274510B (en) | 2024-05-24 |
Family
ID=89209122
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311564740.3A Active CN117274510B (en) | 2023-11-22 | 2023-11-22 | Vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117274510B (en) |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040258309A1 (en) * | 2002-12-07 | 2004-12-23 | Patricia Keaton | Method and apparatus for apparatus for generating three-dimensional models from uncalibrated views |
GB201714179D0 (en) * | 2017-09-05 | 2017-10-18 | Nokia Technologies Oy | Cross-source point cloud registration |
US20200074747A1 (en) * | 2018-08-30 | 2020-03-05 | Qualcomm Incorporated | Systems and methods for reconstructing a moving three-dimensional object |
US20220415059A1 (en) * | 2019-11-15 | 2022-12-29 | Nvidia Corporation | Multi-view deep neural network for lidar perception |
CN115147431A (en) * | 2021-03-15 | 2022-10-04 | 辉达公司 | Automatic labeling and segmentation using machine learning models |
US11475766B1 (en) * | 2021-04-16 | 2022-10-18 | Hayden Ai Technologies, Inc. | Systems and methods for user reporting of traffic violations using a mobile application |
CN114708309A (en) * | 2022-02-22 | 2022-07-05 | 广东工业大学 | Vision indoor positioning method and system based on building plan prior information |
WO2023192754A1 (en) * | 2022-04-01 | 2023-10-05 | Nvidia Corporation | Image stitching with an adaptive three-dimensional bowl model of the surrounding environment for surround view visualization |
CN115937404A (en) * | 2022-09-27 | 2023-04-07 | 江苏聚目科技有限公司 | Grating three-dimensional reconstruction system and method based on multi-view shadow segmentation |
CN115984494A (en) * | 2022-12-13 | 2023-04-18 | 辽宁工程技术大学 | Deep learning-based three-dimensional terrain reconstruction method for lunar navigation image |
CN116486287A (en) * | 2023-04-04 | 2023-07-25 | 吉林大学 | Target detection method and system based on environment self-adaptive robot vision system |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117593592A (en) * | 2024-01-18 | 2024-02-23 | 山东华时数字技术有限公司 | Intelligent scanning and identifying system and method for foreign matters at bottom of vehicle |
CN117593592B (en) * | 2024-01-18 | 2024-04-16 | 山东华时数字技术有限公司 | Intelligent scanning and identifying system and method for foreign matters at bottom of vehicle |
CN118134996A (en) * | 2024-05-10 | 2024-06-04 | 金华信园科技有限公司 | Intelligent positioning volume judging system for packaging box |
Also Published As
Publication number | Publication date |
---|---|
CN117274510B (en) | 2024-05-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN117274510B (en) | Vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement | |
Xia et al. | Geometric primitives in LiDAR point clouds: A review | |
Riegler et al. | Octnetfusion: Learning depth fusion from data | |
Chen et al. | Automatic building information model reconstruction in high-density urban areas: Augmenting multi-source data with architectural knowledge | |
Lafarge et al. | A hybrid multiview stereo algorithm for modeling urban scenes | |
CN112801050B (en) | Intelligent luggage tracking and monitoring method and system | |
CN112347550B (en) | Coupling type indoor three-dimensional semantic graph building and modeling method | |
Zhu et al. | Segmentation and classification of range image from an intelligent vehicle in urban environment | |
CN112883820B (en) | Road target 3D detection method and system based on laser radar point cloud | |
CN101976461A (en) | Novel outdoor augmented reality label-free tracking registration algorithm | |
EP4174792A1 (en) | Method for scene understanding and semantic analysis of objects | |
JP4568845B2 (en) | Change area recognition device | |
CN115727854A (en) | VSLAM positioning method based on BIM structure information | |
CN115128628A (en) | Road grid map construction method based on laser SLAM and monocular vision | |
CN116309817A (en) | Tray detection and positioning method based on RGB-D camera | |
CN114648669A (en) | Motor train unit fault detection method and system based on domain-adaptive binocular parallax calculation | |
CN114120095A (en) | Mobile robot autonomous positioning system and method based on aerial three-dimensional model | |
CN114358133B (en) | Method for detecting looped frames based on semantic-assisted binocular vision SLAM | |
Withers et al. | Modelling scene change for large-scale long term laser localisation | |
CN117635488A (en) | Light-weight point cloud completion method combining channel pruning and channel attention | |
JP2023508276A (en) | map containing covariances at multiresolution voxels | |
CN117367404A (en) | Visual positioning mapping method and system based on SLAM (sequential localization and mapping) in dynamic scene | |
CN116977636A (en) | Large-scale point cloud semantic segmentation method for three-dimensional point cloud | |
Dekker et al. | Point Cloud Analysis of Railway Infrastructure: A Systematic Literature Review | |
CN116310552A (en) | Three-dimensional target detection method based on multi-scale feature fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||