CN117392665A - Vehicle type and vehicle part recognition method and system - Google Patents

Vehicle type and vehicle part recognition method and system

Info

Publication number
CN117392665A
CN117392665A (application CN202311399708.4A)
Authority
CN
China
Prior art keywords
fusion
classified
target objects
sensor
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311399708.4A
Other languages
Chinese (zh)
Inventor
张怒涛
易春华
黄菠
白庆平
谭波
唐学敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Kairui Testing Equipment Co ltd
China Automotive Engineering Research Institute Co Ltd
Original Assignee
Chongqing Kairui Testing Equipment Co ltd
China Automotive Engineering Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Kairui Testing Equipment Co ltd and China Automotive Engineering Research Institute Co Ltd
Priority to CN202311399708.4A
Publication of CN117392665A
Legal status: Pending

Classifications

    • G06V 20/64: Image or video recognition or understanding; scenes, scene-specific elements; type of objects; three-dimensional objects
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/60: Image analysis; analysis of geometric attributes
    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06T 2200/08: Indexing scheme for image data processing or generation involving all processing steps from image acquisition to 3D model generation
    • G06T 2207/10012: Image acquisition modality; still/photographic images; stereo images
    • G06T 2207/10024: Image acquisition modality; color image
    • G06T 2207/30164: Subject of image; industrial image inspection; workpiece; machine component

Abstract

The invention relates to the technical field of object recognition, in particular to a vehicle type and automobile part recognition method and system. Three-dimensional coordinates, the fused reflection intensity and RGB image information are combined into a comprehensive storage structure based on the three-dimensional coordinates, so that several kinds of information are fused into the information of each point. To reduce computing power consumption and calibration difficulty, the 3D fusion information is used to generate 2D fusion information for a plurality of projection surfaces; calibration and machine learning are then applied to perform target detection and segmentation of classified target objects, and a selection frame is set for each classified target object. These recognition results on the plurality of 2D projection surfaces are restored into 3D space, the classified three-dimensional target objects are extracted according to the selection frames of the classified target objects on each projection surface, point cloud envelope measurement is performed on the three-dimensional target objects, and a measurement report is generated. The scheme can perform multi-parameter fusion; it not only uses a 3D data source but also reduces the calibration difficulty and the computing power requirements.

Description

Vehicle type and vehicle part recognition method and system
Technical Field
The invention relates to the technical field of object recognition, in particular to a vehicle type and automobile part recognition method and system.
Background
Models built by multi-parameter fusion combine the advantages of each parameter and therefore recognize better, but they suffer from several problems: the computation model is complex, the computing power requirement is large, and manually calibrating several data sources jointly in three-dimensional space is extremely difficult; the labeling effort is not of the same order of magnitude as drawing bounding boxes on a picture. Moreover, most current multi-parameter fusion image recognition algorithms and target detection algorithms are based on planar image data, and their underlying pre-trained models and datasets are likewise built on planar images.
In actual use the effect is poor: detection is first performed on a planar camera photo, the result is then converted inside the camera into 3D coordinates and overlaid on the 3D point cloud; because the 2D and 3D data come from different sources and their characteristics are inconsistent, neither accuracy nor effectiveness can be guaranteed.
Therefore, a vehicle type and automobile part recognition method and system that can perform multi-parameter fusion while reducing the calibration difficulty and computing power requirements are urgently needed.
Disclosure of Invention
The invention aims to provide a vehicle type and automobile part identification method which can perform multi-parameter fusion and reduce the calibration difficulty and computing power requirements.
The basic scheme provided by the invention is as follows: a vehicle type and automobile part identification method comprises the following steps:
setting a 3D sensor and a plane image sensor to acquire three-dimensional coordinates (X, Y, Z), reflection intensity P and RGB image information (R, G, B) of a target object to be identified;
performing calibration, environmental object removal and viewing-cone fusion on the target object to obtain 3D fusion information (X, Y, Z, P, R, G, B) (see the data-structure sketch following these steps);
performing gray level conversion on RGB image information in the fusion information to obtain converted 3D fusion information (X, Y, Z, P, G);
projecting the 3D fusion information to a plurality of projection surfaces to generate 2D fusion data;
performing target detection and segmentation of classified target objects on the generated 2D fusion data of each projection surface, and setting a selection frame for the classified target objects;
extracting classified three-dimensional target objects according to the selection frames of the classified target objects of each projection surface;
and measuring the point cloud envelope according to the three-dimensional target object, and generating a measurement report.
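For illustration only, the following minimal Python/NumPy sketch shows one possible storage layout for the fused per-point record (X, Y, Z, P, R, G, B) used in the above steps; the field names, dtypes and packing function are assumptions and are not specified by the patent.

    import numpy as np

    # Hypothetical layout of one fused point, mirroring the (X, Y, Z, P, R, G, B) record.
    FUSED_POINT = np.dtype([
        ("x", np.float32), ("y", np.float32), ("z", np.float32),  # 3D coordinates from the lidar
        ("p", np.float32),                                         # laser reflection intensity
        ("r", np.uint8), ("g", np.uint8), ("b", np.uint8),         # RGB sampled from the coaxial camera
    ])

    def make_fused_cloud(xyz, intensity, rgb):
        """Pack per-point coordinates, intensity and color into one array of fused records."""
        cloud = np.empty(len(xyz), dtype=FUSED_POINT)
        cloud["x"], cloud["y"], cloud["z"] = xyz.T
        cloud["p"] = intensity
        cloud["r"], cloud["g"], cloud["b"] = rgb.T
        return cloud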
Further, the 3D sensor is a lidar;
the planar image sensor is an RGB camera.
Further, the number of the projection surfaces is three.
Further, the projection surface includes: XY plane, YZ plane, and XZ plane.
Further, target detection and segmentation of classified target objects are performed on the generated 2D fusion data of each projection surface by means of manual calibration and/or a calibration algorithm together with machine learning, and a selection frame is set for the classified target objects.
The beneficial effects of this scheme are as follows: 3D point cloud data (three-dimensional coordinates), the fused reflection intensity and RGB image information are used to form a comprehensive storage structure based on the three-dimensional coordinates, so that several kinds of information are fused into the storage structure of each point. To reduce computing power consumption and calibration difficulty, 2D fusion information for a plurality of projection surfaces is generated from the 3D fusion information; calibration and machine learning are added to perform target detection and segmentation of the classified target objects, and a selection frame is set for each classified target object. These recognition results on the plurality of 2D projection surfaces are then restored into 3D space for segmentation detection, the classified three-dimensional target objects are extracted according to the selection frames of the classified target objects on each projection surface, point cloud envelope measurement is performed on the three-dimensional target objects, and a measurement report is generated.
With this scheme, multi-parameter fusion can be performed, a 3D data source is used, and the calibration difficulty and computing power requirements are reduced; in addition, when processing the 2D fusion information, existing pre-trained models built on 2D datasets can be partially reused for transfer learning to accelerate the machine learning process, further saving computing power.
The second object of the invention is to provide a vehicle type and automobile part recognition system which can perform multi-parameter fusion and reduce the calibration difficulty and computing power requirements.
The second basic scheme provided by the invention is as follows: a vehicle model and vehicle component identification system comprising a sensor, a POE switch and a computing terminal;
the sensor and the computing terminal are connected with the POE switch;
the sensor includes: a 3D sensor and a planar image sensor;
the 3D sensor is used for collecting three-dimensional coordinates (X, Y, Z) and reflection intensity P of a target object to be identified and transmitting the three-dimensional coordinates (X, Y, Z) and the reflection intensity P to the computing terminal through the POE switch;
the plane image sensor is used for collecting RGB image information (R, G, B) of a target object to be identified and transmitting the RGB image information to the computing terminal through the POE switch;
the computing terminal is used for calibrating, removing environmental objects and fusing the viewing cone of the target object to obtain 3D fusion information (X, Y, Z, P, R, G, B);
performing gray level conversion on RGB image information in the fusion information to obtain converted 3D fusion information (X, Y, Z, P, G);
projecting the 3D fusion information to a plurality of projection surfaces to generate 2D fusion data;
performing target detection and segmentation of classified target objects on the generated 2D fusion data of each projection surface, and setting a selection frame for the classified target objects;
extracting classified three-dimensional target objects according to the selection frames of the classified target objects of each projection surface;
and measuring the point cloud envelope according to the three-dimensional target object, and generating a measurement report.
Further, the 3D sensor is a lidar;
the planar image sensor is an RGB camera.
Further, the number of the projection surfaces is three.
Further, the projection surface includes: XY plane, YZ plane, and XZ plane.
Further, target detection and segmentation of classified target objects are performed on the generated 2D fusion data of each projection surface by means of manual calibration and/or a calibration algorithm together with machine learning, and a selection frame is set for the classified target objects.
Drawings
FIG. 1 is a logic block diagram of an embodiment of a vehicle model and vehicle component identification system according to the present invention.
Detailed Description
The following is a further detailed description of the embodiments:
Reference numerals in the drawings of the specification include: a 3D sensor 1, a planar image sensor 2, a POE switch 3, a computing terminal 4 and a target object 5.
Example 1
The embodiment provides a vehicle type and automobile part identification method, which comprises the following steps:
A 3D sensor 1 and a planar image sensor 2 are set up to acquire the three-dimensional coordinates (X, Y, Z), reflection intensity P and RGB image information (R, G, B) of the target object 5 to be identified. The 3D sensor 1 is a laser radar; specifically, the 3D sensor 1, i.e. a 3D laser sensor, is a laser radar based on the rotating-lens working principle: through the refraction of a group of rotating lenses inside the radar, the refractions of two lenses rotating at different speeds are superimposed and laser beams at varying angles are emitted, so that a laser point cloud in which every point carries a reflection intensity P is obtained for the current field of view. The resolution of the laser radar is not lower than 640 x 480. A scanning array is built from a plurality of fixedly installed laser radars; the computing hardware splices and synthesizes the point cloud data of the plurality of radars, manually or automatically, according to their geometric relationship, removes the point clouds of peripheral objects such as the ground and interference information such as dust, and reduces the data to a point cloud file of the object of interest. The object faces the installation direction and is wrapped by the viewing cone as completely as possible. The planar image sensor 2 is an RGB camera; the number of planar image sensors is set as required and can be stacked flexibly, and the more sensors there are, the more details can be measured and the higher the precision.
Calibration, environmental object removal and viewing-cone fusion are performed on the target object 5 to obtain 3D fusion information (X, Y, Z, P, R, G, B).
Because several laser radars and RGB cameras (i.e. RGB industrial cameras) coaxial with the radars are used, the point cloud formed by each laser radar belongs to that radar's local coordinate system, and the radars are installed at different positions (x, y, z) in the environment with different pointing and roll angles. The point clouds obtained in the local coordinate systems of the individual laser radars therefore have to be synthesized into point clouds under one and the same coordinate system, that is, every radar must be calibrated into a unified coordinate system.
Specifically, performing the calibration includes: searching for a ground plane reference object and applying a coordinate transformation to each laser radar based on the same ground plane reference object, so that the resulting radar point clouds share a uniform ground plane; then the x and y placement coordinates of each radar on the ground plane and its horizontal rotation angle α are finely adjusted so that the point clouds overlap as much as possible. This fine adjustment can be carried out manually or by automatic calculation, and yields an integrated overall point cloud image with unified coordinates.
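As a non-authoritative illustration of the ground-plane part of this calibration, the sketch below (Python/NumPy) fits a plane to the lowest slab of one radar's cloud and rotates the cloud so that plane becomes z = 0; the percentile threshold and the least-squares fit are assumptions, and the subsequent fine adjustment of x, y and the horizontal angle α is not shown.

    import numpy as np

    def align_to_ground_plane(points):
        """Rotate/translate one lidar's (N, 3) point cloud so its fitted ground plane becomes z = 0."""
        ground = points[points[:, 2] < np.percentile(points[:, 2], 10)]  # crude ground candidates
        centroid = ground.mean(axis=0)
        # Plane normal = singular vector with the smallest singular value of the centred points.
        _, _, vt = np.linalg.svd(ground - centroid)
        normal = vt[-1]
        if normal[2] < 0:                     # make the normal point upwards
            normal = -normal
        # Rotation taking `normal` onto the +Z axis (Rodrigues' formula).
        z = np.array([0.0, 0.0, 1.0])
        v = np.cross(normal, z)
        s, c = np.linalg.norm(v), float(normal @ z)
        if s < 1e-9:
            rot = np.eye(3)
        else:
            vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
            rot = np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)
        aligned = (points - centroid) @ rot.T
        return aligned                         # x, y and yaw α are fine-tuned afterwards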
Because only the real target object 5 is of interest, and the target object 5 is physically unconnected to and independent of other environmental interferers, the ground plane in the point cloud image is removed, giving several independent point cloud regions for the target object 5 and for non-target environmental objects; the point cloud aggregation area at the centre of the field of view is then searched for, and the non-target objects are segmented away.
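One possible realisation of this segmentation step is sketched below (Python, NumPy and scikit-learn); the DBSCAN parameters and the ground tolerance are illustrative values, not figures from the patent.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def isolate_central_object(points, ground_tol=0.05, eps=0.3, min_samples=20):
        """Drop the ground plane (assumed at z = 0 after calibration), cluster what is left,
        and keep the cluster closest to the centre of the field of view."""
        above = points[points[:, 2] > ground_tol]            # remove ground points
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(above)
        best, best_dist = None, np.inf
        for lab in set(labels) - {-1}:                        # -1 marks DBSCAN noise
            cluster = above[labels == lab]
            d = np.linalg.norm(cluster[:, :2].mean(axis=0))   # centroid distance to view centre
            if d < best_dist:
                best, best_dist = cluster, d
        return best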
Since the RGB camera and the laser radar are coaxially installed, by perspective-projecting the RGB image onto the radar point cloud a corresponding RGB color can be computed for every point of the point cloud, yielding the 3D fusion information (X, Y, Z, P, R, G, B).
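The perspective projection described here can be sketched as follows (Python/NumPy); the pinhole model with an intrinsic matrix K, and the simplification that the coaxial mounting makes the camera and lidar frames coincide, are assumptions introduced only for illustration.

    import numpy as np

    def colorize_points(points, image, K):
        """Attach an RGB color to each lidar point by pinhole projection into the coaxial camera.

        points : (N, 3) array in the shared camera/lidar frame (extrinsics assumed identity).
        image  : (H, W, 3) RGB image from the camera.
        K      : 3x3 camera intrinsic matrix (an assumption of this sketch).
        Returns an (M, 6) array [X, Y, Z, R, G, B] for points that fall inside the image.
        """
        in_front = points[points[:, 2] > 0]                   # keep points in front of the camera
        uvw = in_front @ K.T                                   # perspective projection
        uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
        h, w = image.shape[:2]
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        rgb = image[uv[ok, 1], uv[ok, 0]]                      # sample colors at projected pixels
        return np.hstack([in_front[ok], rgb.astype(np.float32)])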
Gray level conversion is performed on the RGB image information in the fusion information to obtain converted 3D fusion information (X, Y, Z, P, G), where G denotes the gray value.
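A minimal sketch of this conversion, assuming the common BT.601 luma weights (the patent does not name a particular formula):

    import numpy as np

    def rgb_to_gray(fused):
        """Collapse the R, G, B columns of an (N, 7) fused array [X, Y, Z, P, R, G, B]
        into one gray value, giving (N, 5) [X, Y, Z, P, G]."""
        gray = fused[:, 4:7] @ np.array([0.299, 0.587, 0.114])
        return np.column_stack([fused[:, :4], gray])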
The 3D fusion information is projected onto a plurality of projection surfaces to generate 2D fusion data; the number of projection surfaces in this embodiment is three: the XY plane, the YZ plane and the XZ plane.
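The following sketch (Python/NumPy) shows one way the (X, Y, Z, P, G) cloud could be rasterised onto the three coordinate planes to form 2D fusion data; the grid resolution and the max-pooling of P and G per pixel are assumptions, not details given in the patent.

    import numpy as np

    def project_to_plane(fused5, drop_axis, resolution=0.01):
        """Rasterise an (N, 5) [X, Y, Z, P, G] cloud onto one coordinate plane.

        drop_axis = 2 projects onto the XY plane, 0 onto the YZ plane, 1 onto the XZ plane.
        Each pixel keeps the maximum intensity P and gray G of the points that fall into it.
        """
        keep = [a for a in (0, 1, 2) if a != drop_axis]
        uv = fused5[:, keep]
        origin = uv.min(axis=0)
        ij = np.floor((uv - origin) / resolution).astype(int)
        h, w = ij.max(axis=0) + 1
        p_img = np.zeros((h, w), dtype=np.float32)
        g_img = np.zeros((h, w), dtype=np.float32)
        np.maximum.at(p_img, (ij[:, 0], ij[:, 1]), fused5[:, 3])
        np.maximum.at(g_img, (ij[:, 0], ij[:, 1]), fused5[:, 4])
        return np.dstack([p_img, g_img]), origin

    # The three 2D fusion images of this embodiment would then be, for a cloud `c`:
    # xy_img, _ = project_to_plane(c, drop_axis=2)
    # yz_img, _ = project_to_plane(c, drop_axis=0)
    # xz_img, _ = project_to_plane(c, drop_axis=1)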
Target detection and segmentation of the classified target objects 5 are performed on the generated 2D fusion data of each projection surface by means of manual calibration and/or a calibration algorithm together with machine learning, and a selection frame is set for each classified target object 5. The calibration algorithm is based on mathematical morphology: an opening operation (erosion followed by dilation) is applied to the point cloud, and a progressively enlarged filter window gradually separates the ground points from the ground feature points; that is, the laser point cloud is classified with an improved morphological filter that separates out ground points and ground feature points.
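As a hedged illustration of the progressive morphological filtering mentioned above, the sketch below applies a grey-scale opening (erosion followed by dilation, via scipy.ndimage) with growing window sizes to a rasterised minimum-height grid; the window sequence and slope tolerance are illustrative, and the patent's "improved morphological filter" may differ in detail.

    import numpy as np
    from scipy.ndimage import grey_opening

    def progressive_morphological_ground(z_grid, windows=(3, 5, 9, 17), slope_tol=0.15):
        """Separate ground cells from feature cells on a rasterised minimum-height grid.

        z_grid holds the minimum point height per cell; an opening with progressively
        larger windows estimates the ground surface, and cells rising more than an
        allowed elevation difference above it are flagged as non-ground."""
        ground_est = z_grid.copy()
        is_object = np.zeros(z_grid.shape, dtype=bool)
        for win in windows:
            opened = grey_opening(ground_est, size=(win, win))
            dh_max = slope_tol * win                  # allowed height jump grows with the window
            is_object |= (ground_est - opened) > dh_max
            ground_est = opened
        return ~is_object, is_object                   # masks of ground cells and feature cells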
Classified three-dimensional target objects 5 are extracted according to the selection frames of the classified target objects 5 on each projection surface.
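One straightforward reading of this back-projection step is sketched below (Python/NumPy): a point is kept only if its projections fall inside the selection frames of all three planes. The box representation and the pure-intersection rule are assumptions for illustration.

    import numpy as np

    def extract_object_3d(points, xy_box, yz_box, xz_box):
        """Recover the classified 3D object from its 2D selection frames.

        Each box is ((min, max), (min, max)) limits on the two axes of its projection plane,
        expressed in the unified point-cloud coordinates (after mapping the pixel box back
        through the projection grid)."""
        (x0, x1), (y0, y1) = xy_box
        (y2, y3), (z0, z1) = yz_box
        (x2, x3), (z2, z3) = xz_box
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        mask = ((x >= max(x0, x2)) & (x <= min(x1, x3)) &
                (y >= max(y0, y2)) & (y <= min(y1, y3)) &
                (z >= max(z0, z2)) & (z <= min(z1, z3)))
        return points[mask]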
Point cloud envelope measurement is performed according to the three-dimensional target object 5 and a measurement report is generated; the envelope measurement is carried out according to the requirements of the GB 38900-2020 standard.
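A minimal sketch of an envelope measurement and report, assuming a simple axis-aligned bounding box in the unified coordinates (GB 38900-2020 specifies the inspection items, not this code):

    import numpy as np

    def envelope_report(obj_points, label="vehicle"):
        """Measure the axis-aligned envelope of an extracted 3D object and format a small report."""
        mins = obj_points[:, :3].min(axis=0)
        maxs = obj_points[:, :3].max(axis=0)
        length, width, height = (maxs - mins)
        return {
            "object": label,
            "length_m": round(float(length), 3),
            "width_m": round(float(width), 3),
            "height_m": round(float(height), 3),
            "point_count": int(len(obj_points)),
        }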
This scheme identifies, on the basis of 3D scanning, any industrial product with a regular manufactured shape: the object can be projected to 2D for fast target detection and segmentation, so the 2D images carry the depth and intensity characteristics of the 3D data while remaining as easy to train on as ordinary 2D images. Because the 3D object is projected onto standard planes, the segmentation obtained in the projected 2D images can be accurately restored to a segmented 3D object, which makes 3D scenes that require segmentation-based measurement feasible.
For example, the scheme identifies automobile parts associated with road-passing characteristics, such as rearview mirrors, tires, tanks, carriages and guard rails, and it recognizes at least six mainstream vehicle types, such as van, bin-gate, tank truck, dump truck, flatbed and fence types. The vehicle type recognition time is less than 60 seconds, the recognition accuracy for automobile parts exceeds 90%, and laser point cloud models carrying the sensed signal intensity and vehicle color are generated for the different vehicle types.
Example two
This embodiment is substantially as shown in FIG. 1: a vehicle model and vehicle component identification system comprising a sensor, a POE switch 3 and a computing terminal 4;
the sensor and the computing terminal 4 are connected with the POE switch 3, and communicate through the POE switch 3;
the sensor comprises: a 3D sensor 1 and a planar image sensor 2;
the 3D sensor 1 is used for acquiring three-dimensional coordinates (X, Y, Z) and reflection intensity P of a target object 5 to be identified and transmitting the three-dimensional coordinates and the reflection intensity P to the computing terminal 4 through the POE switch 3;
a planar image sensor 2 for acquiring RGB image information (R, G, B) of a target object 5 to be identified and transmitting the RGB image information to a computing terminal 4 through a POE switch 3;
The 3D sensor 1 is a laser radar and the planar image sensor 2 is an RGB camera; the number of planar image sensors is set as required and can be stacked flexibly, and the more sensors there are, the more details can be measured and the higher the precision. In other embodiments, an RGB industrial camera and a surface-scanning (rotating lens) laser radar may be used, mounted on adjustable bases so that they point along the same direction; by writing an internal-parameter adjustment program, the camera and the radar output coaxial data, namely image data and point cloud data respectively, and the two are computed and fused, so that point cloud data with RGB colors and point cloud data with laser reflection intensity can each be output. Specifically, the 3D sensor 1, i.e. a 3D laser sensor, is a laser radar based on the rotating-lens working principle: through the refraction of a group of rotating lenses inside the radar, the refractions of two lenses rotating at different speeds are superimposed and laser beams at varying angles are emitted, so that a laser point cloud in which every point carries a reflection intensity P is obtained for the current field of view; the resolution of the laser radar is not lower than 640 x 480. A scanning array is built from a plurality of fixedly installed laser radars; the computing hardware splices and synthesizes the point cloud data of the plurality of radars, manually or automatically, according to their geometric relationship, removes the point clouds of peripheral objects such as the ground and interference information such as dust, and reduces the data to a point cloud file of the object of interest; the object faces the installation direction and is wrapped by the viewing cone as completely as possible. In other embodiments the sensor and the computing terminal 4 may also perform a self-check. The laser radar self-check includes automatic detection based on the radar ground plane, namely automatic correction of the radar coordinates: the ground composition features in the scene are searched for, a fuzzy calculation is carried out around the Z axis perpendicular to the ground, and the radar is rotated around the Z axis to complete the radar coordinate calibration. Compared with the traditional approach of correcting all three coordinates simultaneously, which faces a larger amount of calculation and a long calculation time and thus slows down model generation, this radar coordinate correction is simpler and faster, saves computation and hardware cost, removes most of the manual workload and still guarantees precision.
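The Z-axis-only correction described here can be illustrated by the following sketch (Python/NumPy), which searches a small yaw range for the rotation about the vertical axis that best overlaps one radar's cloud with a reference cloud; the occupancy-grid overlap score, search range and step size are assumptions chosen for illustration.

    import numpy as np

    def refine_yaw(cloud, reference, grid=0.05, search_deg=10.0, step_deg=0.25):
        """After the ground planes coincide, find the yaw (rotation about Z) that best
        overlaps `cloud` with `reference`, scored on a coarse XY occupancy grid."""
        def occupancy(pts):
            return set(map(tuple, np.floor(pts[:, :2] / grid).astype(int)))
        ref_cells = occupancy(reference)
        best_yaw, best_score = 0.0, -1
        for yaw in np.arange(-search_deg, search_deg + step_deg, step_deg):
            a = np.radians(yaw)
            rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                            [np.sin(a),  np.cos(a), 0.0],
                            [0.0,        0.0,       1.0]])
            score = len(occupancy(cloud @ rot.T) & ref_cells)
            if score > best_score:
                best_yaw, best_score = yaw, score
        return best_yaw                                  # degrees of yaw correction to apply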
The computing terminal 4 is used for performing calibration, environmental object removal and viewing-cone fusion on the target object 5 to obtain 3D fusion information (X, Y, Z, P, R, G, B);
Because several laser radars and RGB cameras (i.e. RGB industrial cameras) coaxial with the radars are used, the point cloud formed by each laser radar belongs to that radar's local coordinate system, and the radars are installed at different positions (x, y, z) in the environment with different pointing and roll angles. The point clouds obtained in the local coordinate systems of the individual laser radars therefore have to be synthesized into point clouds under one and the same coordinate system, that is, every radar must be calibrated into a unified coordinate system.
Specifically, performing the calibration includes: searching for a ground plane reference object and applying a coordinate transformation to each laser radar based on the same ground plane reference object, so that the resulting radar point clouds share a uniform ground plane; then the x and y placement coordinates of each radar on the ground plane and its horizontal rotation angle α are finely adjusted so that the point clouds overlap as much as possible. This fine adjustment can be carried out manually or by automatic calculation, and yields an integrated overall point cloud image with unified coordinates.
Because only the real target object 5 is of interest, and the target object 5 is physically unconnected to and independent of other environmental interferers, the ground plane in the point cloud image is removed, giving several independent point cloud regions for the target object 5 and for non-target environmental objects; the point cloud aggregation area at the centre of the field of view is then searched for, and the non-target objects are segmented away.
Since the RGB camera and the laser radar are coaxially installed, by perspective-projecting the RGB image onto the radar point cloud a corresponding RGB color can be computed for every point of the point cloud, yielding the 3D fusion information (X, Y, Z, P, R, G, B).
Gray level conversion is performed on the RGB image information in the fusion information to obtain converted 3D fusion information (X, Y, Z, P, G), where G denotes the gray value.
The 3D fusion information is projected onto a plurality of projection surfaces to generate 2D fusion data; the number of projection surfaces in this embodiment is three: the XY plane, the YZ plane and the XZ plane.
Target detection and segmentation of the classified target objects 5 are performed on the generated 2D fusion data of each projection surface by means of manual calibration and/or a calibration algorithm together with machine learning, and a selection frame is set for each classified target object 5. The calibration algorithm is based on mathematical morphology: an opening operation (erosion followed by dilation) is applied to the point cloud, and a progressively enlarged filter window gradually separates the ground points from the ground feature points; that is, the laser point cloud is classified with an improved morphological filter that separates out ground points and ground feature points.
Classified three-dimensional target objects 5 are extracted according to the selection frames of the classified target objects 5 on each projection surface.
Point cloud envelope measurement is carried out according to the three-dimensional target object 5 and a measurement report is generated; the envelope measurement is performed according to the requirements of the GB 38900-2020 standard.
This scheme identifies, on the basis of 3D scanning, any industrial product with a regular manufactured shape: the object can be projected to 2D for fast target detection and segmentation, so the 2D images carry the depth and intensity characteristics of the 3D data while remaining as easy to train on as ordinary 2D images. Because the 3D object is projected onto standard planes, the segmentation obtained in the projected 2D images can be accurately restored to a segmented 3D object, which makes 3D scenes that require segmentation-based measurement feasible.
For example, the scheme identifies automobile parts associated with road-passing characteristics, such as rearview mirrors, tires, tanks, carriages and guard rails, and it recognizes at least six mainstream vehicle types, such as van, bin-gate, tank truck, dump truck, flatbed and fence types. The vehicle type recognition time is less than 60 seconds, the recognition accuracy for automobile parts exceeds 90%, and laser point cloud models carrying the sensed signal intensity and vehicle color are generated for the different vehicle types.
The foregoing is merely an embodiment of the present invention. Common knowledge such as specific structures and characteristics that are well known in the art is not described here in detail; a person of ordinary skill in the art knows the common technical knowledge in the field as of the filing date or the priority date, can access all of the prior art in the field, and has the ability to apply routine experimental means, and can therefore, in light of the present application and in combination with his or her own abilities, complete and implement this embodiment; some typical known structures or known methods should not become an obstacle to implementing the present application. It should be noted that those skilled in the art can also make several modifications and improvements without departing from the structure of the present invention, and these shall likewise be regarded as falling within the protection scope of the present invention without affecting the effect of implementing the invention or the practicability of the patent. The protection scope of this application shall be subject to the content of the claims, and the description of the specific embodiments in the specification may be used to interpret the content of the claims.

Claims (10)

1. A vehicle type and automobile part recognition method is characterized by comprising the following steps:
setting a 3D sensor and a plane image sensor to acquire three-dimensional coordinates (X, Y, Z), reflection intensity P and RGB image information (R, G, B) of a target object to be identified;
calibrating, removing environmental objects and fusing the viewing cone of the target object to obtain 3D fusion information (X, Y, Z, P, R, G, B);
performing gray level conversion on RGB image information in the fusion information to obtain converted 3D fusion information (X, Y, Z, P, G);
projecting the 3D fusion information to a plurality of projection surfaces to generate 2D fusion data;
performing target detection and segmentation of classified target objects on the generated 2D fusion data of each projection surface, and setting a selection frame for the classified target objects;
extracting classified three-dimensional target objects according to the selection frames of the classified target objects of each projection surface;
and measuring the point cloud envelope according to the three-dimensional target object, and generating a measurement report.
2. The vehicle model and vehicle component recognition method according to claim 1, wherein the 3D sensor is a lidar;
the planar image sensor is an RGB camera.
3. The method for recognizing vehicle models and automobile parts according to claim 1, wherein the number of the projection surfaces is three.
4. The vehicle model and vehicle component recognition method according to claim 3, wherein the projection surface includes: XY plane, YZ plane, and XZ plane.
5. The method for identifying vehicle models and automobile parts according to claim 1, wherein target detection and segmentation of classified target objects are performed on the generated 2D fusion data of each projection surface by means of manual calibration and/or a calibration algorithm together with machine learning, and a selection frame is set for the classified target objects.
6. A vehicle model and vehicle component identification system, comprising: the system comprises a sensor, a POE switch and a computing terminal;
the sensor and the computing terminal are connected with the POE switch;
the sensor includes: a 3D sensor and a planar image sensor;
the 3D sensor is used for collecting three-dimensional coordinates (X, Y, Z) and reflection intensity P of a target object to be identified and transmitting the three-dimensional coordinates (X, Y, Z) and the reflection intensity P to the computing terminal through the POE switch;
the plane image sensor is used for collecting RGB image information (R, G, B) of a target object to be identified and transmitting the RGB image information to the computing terminal through the POE switch;
the computing terminal is used for calibrating, removing environmental objects and fusing the viewing cone of the target object to obtain 3D fusion information (X, Y, Z, P, R, G, B);
performing gray level conversion on RGB image information in the fusion information to obtain converted 3D fusion information (X, Y, Z, P, G);
projecting the 3D fusion information to a plurality of projection surfaces to generate 2D fusion data;
performing target detection and segmentation of classified target objects on the generated 2D fusion data of each projection surface, and setting a selection frame for the classified target objects;
extracting classified three-dimensional target objects according to the selection frames of the classified target objects of each projection surface;
and measuring the point cloud envelope according to the three-dimensional target object, and generating a measurement report.
7. The vehicle model and vehicle component identification system of claim 6, wherein the 3D sensor is a lidar;
the planar image sensor is an RGB camera.
8. The vehicle model and vehicle component identification system of claim 6, wherein the number of projection surfaces is three.
9. The vehicle model and vehicle component identification system of claim 8, wherein the projection surface comprises: XY plane, YZ plane, and XZ plane.
10. The vehicle model and vehicle component identification system according to claim 6, wherein target detection and segmentation of classified target objects are performed on the generated 2D fusion data of each projection surface by means of manual calibration and/or a calibration algorithm together with machine learning, and a selection frame is set for the classified target objects.
CN202311399708.4A 2023-10-25 2023-10-25 Vehicle type and vehicle part recognition method and system Pending CN117392665A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311399708.4A CN117392665A (en) 2023-10-25 2023-10-25 Vehicle type and vehicle part recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311399708.4A CN117392665A (en) 2023-10-25 2023-10-25 Vehicle type and vehicle part recognition method and system

Publications (1)

Publication Number Publication Date
CN117392665A true CN117392665A (en) 2024-01-12

Family

ID=89438772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311399708.4A Pending CN117392665A (en) 2023-10-25 2023-10-25 Vehicle type and vehicle part recognition method and system

Country Status (1)

Country Link
CN (1) CN117392665A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination