CN115115631B - Hub defect detection method, device, equipment and computer readable medium - Google Patents

Hub defect detection method, device, equipment and computer readable medium

Info

Publication number
CN115115631B
CN115115631B (application CN202211038083.4A)
Authority
CN
China
Prior art keywords
hub
target
dimensional
target hub
wheel type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211038083.4A
Other languages
Chinese (zh)
Other versions
CN115115631A
Inventor
徐佐
于洋
Current Assignee
Shenzhen Xinrun Fulian Digital Technology Co Ltd
Original Assignee
Shenzhen Xinrun Fulian Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Xinrun Fulian Digital Technology Co Ltd filed Critical Shenzhen Xinrun Fulian Digital Technology Co Ltd
Priority to CN202211038083.4A
Publication of CN115115631A
Application granted
Publication of CN115115631B
Legal status: Active

Classifications

    • G06T 7/0004: Industrial image inspection
    • G06N 3/04: Neural networks; architecture, e.g. interconnection topology
    • G06N 3/08: Neural networks; learning methods
    • G06T 7/344: Image registration using feature-based methods involving models
    • G06T 7/75: Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T 7/90: Determination of colour characteristics
    • G06V 10/761: Proximity, similarity or dissimilarity measures
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30108: Industrial image inspection
    • G06T 2207/30164: Workpiece; machine component
    • G06T 2207/30244: Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application relates to a hub defect detection method, device, equipment and computer readable medium. The method comprises the following steps: when a target hub is detected to have reached a preset position, acquiring two-dimensional image data and three-dimensional point cloud data of the target hub; determining the wheel type and current pose of the target hub from the two-dimensional image data and the three-dimensional point cloud data; retrieving the three-dimensional data model corresponding to the target hub's wheel type, and generating, from that model and the current pose, an optimal shooting track and optimal shooting parameters for the target hub in the current pose; configuring the camera group and light source group mounted at the end of a mechanical arm according to the optimal shooting parameters, and controlling the mechanical arm to capture hub surface images of the target hub along the optimal shooting track; and inputting the hub surface images into a defect recognition neural network model to determine the defects of the target hub. The application solves the technical problems of poor hub imaging quality and poor defect identification caused by poor adaptability across different wheel types.

Description

Hub defect detection method, device, equipment and computer readable medium
Technical Field
The present application relates to the field of wheel hub detection technologies, and in particular, to a method, an apparatus, a device, and a computer readable medium for detecting wheel hub defects.
Background
In an automobile assembly production line, hub casting is a particularly critical link: the hub bears directly on vehicle safety, and limits of the production process mean casting inevitably introduces various defects, so detecting those defects in time during production, then repairing the part or scrapping it, is essential. However, there are thousands of hub types, and different types differ widely in shape, color and reflectance characteristics; the high reflectivity, low roughness and severe specular reflection of some hubs make defect detection far from easy.
At present, in the related art, an image acquisition scheme is usually purpose-designed, based on actual demand, for the single hub model produced on a given hub production line, so as to fit that model's defect detection requirements as closely as possible. Such a scheme, however, inevitably fails to adapt to defect detection for other kinds of hubs, leading to poor hub imaging quality and poor defect identification.
No effective solution has yet been proposed for the problems of poor hub imaging quality and poor defect identification caused by poor adaptability across different wheel types.
Disclosure of Invention
The application provides a hub defect detection method, device, equipment and computer readable medium, so as to solve the technical problems of poor hub imaging quality and poor defect identification caused by poor adaptability across different wheel types.
According to an aspect of an embodiment of the present application, there is provided a hub defect detecting method, including:
under the condition that a target hub is detected to reach a preset position, collecting two-dimensional image data and three-dimensional point cloud data of the target hub, wherein the two-dimensional image data is used for recording the color and texture of the surface of the target hub, and the three-dimensional point cloud data is used for recording the depth information of the target hub;
determining a wheel type and a current pose of the target hub based on the two-dimensional image data and the three-dimensional point cloud data;
calling a three-dimensional data model corresponding to the target hub according to the wheel type of the target hub, and generating an optimal shooting track and optimal shooting parameters of the target hub under the current pose by using the three-dimensional data model and the current pose;
configuring a camera set and a light source set arranged at the tail end of a mechanical arm according to the optimal shooting parameters, and controlling the mechanical arm to acquire a hub surface image of the target hub according to the optimal shooting track;
inputting the hub surface image into a defect recognition neural network model to determine the defects existing in the target hub.
Optionally, determining the wheel shape of the target hub based on the two-dimensional image data and the three-dimensional point cloud data comprises:
respectively positioning the central riser position and the valve hole position of the target hub in the two-dimensional image data and the three-dimensional point cloud data;
registering the two-dimensional image data and the three-dimensional point cloud data using the located central riser position and valve hole position, to obtain depth texture data fusing the color, texture and depth information of the surface of the target hub;
inputting the depth texture data into a depth neural network model to extract the wheel type characteristics of the target hub;
inputting the wheel type characteristics into a metric learning model so as to determine, using the metric learning model, the wheel type similarity between the target hub and the hubs in a characteristic database, wherein the characteristic database is dynamically updated with wheel types during the production stage;
and determining the final wheel type of the target wheel hub according to the similarity discrimination result output by the metric learning model.
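The metric-learning comparison above can be illustrated with a minimal sketch: the extracted feature vector is compared against each entry in the feature database by cosine similarity, and every stored wheel type whose similarity reaches the threshold becomes a candidate. All names and the 0.9 threshold are illustrative assumptions, not values from the patent.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def candidate_wheel_types(feature, database, threshold=0.9):
    """Return every wheel type whose stored feature is at least
    `threshold` similar to the target hub's feature."""
    return [name for name, ref in database.items()
            if cosine_similarity(feature, ref) >= threshold]
```

If exactly one candidate survives, it is the final wheel type; the multi-candidate case falls through to the template matching described next.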
Optionally, determining the final wheel type of the target hub according to the similarity determination result output by the metric learning model includes:
determining the number of target wheel types with the wheel type feature similarity reaching a preset threshold value according to the similarity judgment result;
determining the target wheel type as the final wheel type of the target hub in case of only one target wheel type;
in the case of at least two target wheel types, taking the templates of all target wheel types and performing height matching and diameter matching against the target hub one by one;
determining the comprehensive matching degree of the target wheel hub and each target wheel type based on the height matching result and the diameter matching result;
and determining the target wheel type corresponding to the template with the highest comprehensive matching degree as the final wheel type of the target hub.
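The tie-breaking step can be sketched as follows, assuming (the patent does not specify the weighting) that the comprehensive matching degree is an equally weighted combination of height agreement and diameter agreement; the dictionary-based template store is also an illustrative assumption:

```python
def match_degree(hub, template):
    # Relative agreement of height and diameter, each in [0, 1];
    # the 50/50 weighting is an assumption, not the patent's formula.
    h = 1 - abs(hub["height"] - template["height"]) / template["height"]
    d = 1 - abs(hub["diameter"] - template["diameter"]) / template["diameter"]
    return 0.5 * h + 0.5 * d

def resolve_wheel_type(hub, templates):
    """Pick the candidate wheel type whose template matches best."""
    return max(templates, key=lambda name: match_degree(hub, templates[name]))
```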
Optionally, determining the current pose of the target hub based on the two-dimensional image data and the three-dimensional point cloud data comprises determining the current pose of the target hub using the two-dimensional image data as follows:
extracting front view data of the target hub from the two-dimensional image data;
inputting the front view data into a target detection model so as to identify a valve hole and a central riser of the target hub in the front view data by using the target detection model;
determining a first position offset value of the target hub in the x-axis direction and a second position offset value of the target hub in the y-axis direction in a target two-dimensional coordinate system respectively based on the position of the central riser, and determining a first angle offset value of the target hub relative to a preset initial angle based on the position of the valve hole;
determining the current pose of the target hub using the first position offset value, the second position offset value, and the first angle offset value.
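A minimal illustration of the 2D pose computation, assuming the riser center supplies the x/y offsets relative to a reference origin and the bearing from riser to valve hole supplies the rotation; the coordinate conventions and function name are hypothetical, not taken from the patent:

```python
import math

def pose_from_2d(riser_xy, valve_xy, origin_xy=(0.0, 0.0), initial_angle=0.0):
    """Pose of the hub in the image plane from the two located features:
    riser position -> x/y offsets, riser-to-valve-hole bearing -> rotation."""
    dx = riser_xy[0] - origin_xy[0]
    dy = riser_xy[1] - origin_xy[1]
    bearing = math.degrees(math.atan2(valve_xy[1] - riser_xy[1],
                                      valve_xy[0] - riser_xy[0]))
    return dx, dy, bearing - initial_angle
```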
Optionally, determining the current pose of the target hub based on the two-dimensional image data and the three-dimensional point cloud data comprises determining the current pose of the target hub using the three-dimensional point cloud data as follows:
calling a three-dimensional data model corresponding to the target hub according to the wheel type of the target hub;
performing three-dimensional point cloud registration on the three-dimensional point cloud data and the three-dimensional data model;
determining a third position offset value of the target hub relative to the three-dimensional data model in the x-axis direction, a fourth position offset value in the y-axis direction, a fifth position offset value in the z-axis direction and a second angle offset value of the target hub relative to a preset initial angle according to the registration result;
determining the current pose of the target hub using the third, fourth, fifth, and second angular offset values.
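The offsets in this step can be read off the rigid transform returned by a point-cloud registration routine such as ICP. A sketch, assuming the hub axis coincides with the z-axis of the target coordinate system and the registration result is a row-major 4x4 matrix; both assumptions are illustrative:

```python
import math

def pose_offsets_from_transform(T):
    """Extract the x/y/z position offsets and the rotation about the hub
    (z) axis from a 4x4 rigid transform produced by registration."""
    dx, dy, dz = T[0][3], T[1][3], T[2][3]
    # Rotation about z recovered from the upper-left rotation block.
    theta = math.degrees(math.atan2(T[1][0], T[0][0]))
    return dx, dy, dz, theta
```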
Optionally, the calling a three-dimensional data model corresponding to the target hub according to the wheel type of the target hub, and generating an optimal shooting track and optimal shooting parameters of the target hub by using the three-dimensional data model and the current pose includes:
replacing the curved surfaces in the three-dimensional data model with planes to obtain an approximate data model, wherein the replacement connects a plurality of planes one by one following the curvature of the surface being replaced, and the more planes used to replace a curved surface, the smaller the difference between the three-dimensional data model and the approximate data model;
adjusting the number of planes used to replace the curved surfaces until the difference between the three-dimensional data model and the approximate data model is smaller than a distortion threshold and the pose of the approximate data model is consistent with the current pose, then dividing the approximate data model into a plurality of basic units, each comprising at least one plane;
simulating shooting of a target basic unit under the principle that the shooting angle is greater than or equal to a first angle and the lighting angle is less than or equal to a second angle, and determining, during the simulated shooting, the optimal shooting position, optimal shooting angle, optimal lighting angle and optimal camera parameters of the target basic unit, wherein the first angle is greater than the second angle;
calculating the optimal shooting position, the optimal shooting angle, the optimal lighting angle and the optimal camera parameters of other basic units of the approximate data model by using the rotational symmetry of a hub and taking the optimal shooting position, the optimal shooting angle, the optimal lighting angle and the optimal camera parameters of the target basic unit as references;
and generating an optimal shooting track and optimal shooting parameters of the target hub in the current pose by using the optimal shooting positions, optimal shooting angles, optimal lighting angles and optimal camera parameters of all the basic units.
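Exploiting rotational symmetry as described might look like the following sketch, which replicates the reference unit's optimal camera position around the hub (z) axis for an n-fold symmetric hub; the function name and the n-fold interface are illustrative assumptions:

```python
import math

def positions_by_symmetry(ref_xyz, n_units):
    """Rotate the reference basic unit's optimal camera position about the
    hub axis to obtain the positions for all n_units basic units."""
    x, y, z = ref_xyz
    out = []
    for k in range(n_units):
        a = 2 * math.pi * k / n_units
        out.append((x * math.cos(a) - y * math.sin(a),
                    x * math.sin(a) + y * math.cos(a),
                    z))
    return out
```

The same rotation would be applied to the shooting and lighting angles; camera parameters can simply be copied, since the units are geometrically identical.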
Optionally, after determining the wheel type and the current pose of the target hub based on the two-dimensional image data and the three-dimensional point cloud data, the method further comprises:
inquiring whether the wheel type of the target wheel hub has a corresponding customized shooting scheme or not in a wheel type customized database;
when the condition that the wheel type of the target hub has a corresponding customized shooting scheme is inquired, importing the current pose to generate a target customized shooting scheme of the target hub under the current pose;
and controlling the camera group and the light source group to acquire the hub surface image of the target hub according to the target customized shooting scheme.
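The lookup-with-fallback logic of this optional branch can be sketched as below; the callable-based stores are a hypothetical interface, not the patent's actual data structures:

```python
def shooting_scheme(wheel_type, pose, custom_db, generate_adaptive):
    """Prefer a customized scheme for this wheel type, instantiated at the
    current pose; otherwise fall back to the adaptive track generator."""
    build = custom_db.get(wheel_type)
    if build is not None:
        return build(pose)
    return generate_adaptive(wheel_type, pose)
```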
According to another aspect of the embodiments of the present application, there is provided a hub defect detecting apparatus including:
the data acquisition module is used for acquiring two-dimensional image data and three-dimensional point cloud data of a target hub under the condition that the target hub is detected to reach a preset position, wherein the two-dimensional image data is used for recording the color and texture of the surface of the target hub, and the three-dimensional point cloud data is used for recording the depth information of the target hub;
the wheel type and pose identification module is used for determining the wheel type and the current pose of the target hub based on the two-dimensional image data and the three-dimensional point cloud data;
the shooting track and parameter generating module is used for calling a three-dimensional data model corresponding to the target hub according to the wheel type of the target hub and generating an optimal shooting track and optimal shooting parameters of the target hub at the current pose by using the three-dimensional data model and the current pose;
the surface image acquisition module is used for configuring a camera set and a light source set which are arranged at the tail end of the mechanical arm according to the optimal shooting parameters and controlling the mechanical arm to acquire a hub surface image of the target hub according to the optimal shooting track;
and the AI defect identification module is used for inputting the hub surface image into a defect identification neural network model so as to determine the defects of the target hub.
According to another aspect of the embodiments of the present application, there is provided an electronic device, including a memory, a processor, a communication interface, and a communication bus, where the memory stores a computer program executable on the processor, and the memory and the processor communicate with each other through the communication bus and the communication interface, and the processor implements the steps of the method when executing the computer program.
According to another aspect of embodiments of the present application, there is also provided a computer readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the above-mentioned method.
Compared with the related art, the technical scheme provided by the embodiment of the application has the following advantages:
the technical scheme includes that under the condition that a target hub is detected to reach a preset position, two-dimensional image data and three-dimensional point cloud data of the target hub are collected, wherein the two-dimensional image data are used for recording the color and texture of the surface of the target hub, and the three-dimensional point cloud data are used for recording the depth information of the target hub; determining the wheel type and the current pose of the target hub based on the two-dimensional image data and the three-dimensional point cloud data; calling a three-dimensional data model corresponding to the target hub according to the wheel type of the target hub, and generating an optimal shooting track and optimal shooting parameters of the target hub at the current pose by using the three-dimensional data model and the current pose; configuring a camera set and a light source set arranged at the tail end of the mechanical arm according to the optimal shooting parameters, and controlling the mechanical arm to acquire a hub surface image of the target hub according to the optimal shooting track; and inputting the hub surface image into a defect recognition neural network model to determine the defects existing in the target hub. 
According to the method and device, wheel type recognition and pose recognition are performed on the target hub by combining two-dimensional and three-dimensional data, and an adaptive shooting track generation algorithm combines the three-dimensional data model with the target hub's pose information to generate the optimal shooting track for that model of hub at that pose. High-quality hub surface images can thus be collected for hubs of different wheel types, the fit between the shooting scheme and the target hub is improved, and the technical problems of poor hub imaging quality and poor defect identification caused by poor adaptability across wheel types are solved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the technical solutions in the embodiments or related technologies of the present application, the drawings needed in the description of the embodiments or related technologies are briefly introduced below; it is obvious that those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of an alternative wheel hub defect detecting method hardware environment according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of an alternative hub defect detecting method according to an embodiment of the present application;
FIG. 3 is a schematic view of an alternative wheel type identification operation provided in accordance with an embodiment of the present application;
FIG. 4 is a schematic diagram of an alternative wheel type identification algorithm provided in accordance with an embodiment of the present application;
FIG. 5 is an alternative hub elevational view provided in accordance with an embodiment of the present application;
FIG. 6 is a schematic diagram of an alternative three-dimensional data model and three-dimensional point cloud data provided in accordance with an embodiment of the present application;
FIG. 7 is a block diagram of an alternative hub defect detecting apparatus according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an alternative hub defect detecting apparatus according to an embodiment of the present application;
fig. 9 is a hardware schematic diagram of an alternative hub defect detecting apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the following description, suffixes such as "module", "component", or "unit" used to indicate elements are used only for facilitating the description of the present application, and do not have a specific meaning per se. Thus, "module" and "component" may be used interchangeably.
In the related art, an image acquisition scheme is usually purpose-designed, based on actual demand, for the single hub model produced on a hub production line, so as to fit that model's defect detection requirements as closely as possible; such a scheme inevitably fails to adapt to defect detection for other kinds of hubs, leading to poor hub imaging quality and poor defect identification.
In order to solve the problems mentioned in the background, according to an aspect of embodiments of the present application, an embodiment of a method for detecting a hub defect is provided.
Alternatively, in the embodiment of the present application, the hub defect detecting method may be applied to a hardware environment formed by the terminal 101 and the server 103 as shown in fig. 1. As shown in fig. 1, the server 103 is connected to the terminal 101 through a network, which may be used to provide services (such as hub defect identification services) for the terminal or a client installed on the terminal, and the database 105 may be provided on the server or separately from the server, and is used to provide data storage services for the server 103, and the network includes but is not limited to: wide area network, metropolitan area network, or local area network, and the terminal 101 includes but is not limited to a PC, a cell phone, a tablet computer, and the like.
A hub defect detection method in the embodiment of the present application may be executed by the server 103, or may be executed by both the server 103 and the terminal 101, as shown in fig. 2, the method may include the following steps:
step S202, collecting two-dimensional image data and three-dimensional point cloud data of a target hub under the condition that the target hub is detected to reach a preset position, wherein the two-dimensional image data is used for recording the color and texture of the surface of the target hub, and the three-dimensional point cloud data is used for recording the depth information of the target hub;
step S204, determining the wheel type and the current pose of the target hub based on the two-dimensional image data and the three-dimensional point cloud data;
step S206, a three-dimensional data model corresponding to the target hub is called according to the wheel type of the target hub, and an optimal shooting track and optimal shooting parameters of the target hub under the current pose are generated by utilizing the three-dimensional data model and the current pose;
step S208, configuring a camera set and a light source set arranged at the tail end of a mechanical arm according to the optimal shooting parameters, and controlling the mechanical arm to acquire a hub surface image of the target hub according to the optimal shooting track;
step S210, inputting the hub surface image into a defect recognition neural network model to determine the defects of the target hub.
Through the steps S202 to S210, wheel type identification and pose identification are carried out on the target hub in a two-dimensional and three-dimensional combination mode, the self-adaptive shooting track generation algorithm is combined with the three-dimensional data model and the pose information of the target hub to generate the optimal shooting track of the hub of the model in the pose, high-quality hub surface images can be acquired for hubs of different wheel types, the adaptation degree of a shooting scheme and the target hub is improved, and the technical problems that the imaging quality of the hub is poor and the defect identification effect is poor due to the fact that the adaptability of different wheel types is poor are solved.
The technical scheme of the application comprises an in-place identification stage, a wheel type identification stage, a self-adaptive shooting scheme loading or generating stage, a hub surface image acquisition stage and an AI defect detection stage.
The in-place identification stage is a stage of detecting the position of the target hub on the conveyor belt in step S202. In the in-place identification stage, the position of the hub on the conveyor belt is detected mainly in an infrared photoelectric induction mode, and when the hub reaches a defect identification station, a signal is transmitted to a wheel type identification system for wheel type identification.
The wheel type identification stage is a stage of collecting the two-dimensional image data and the three-dimensional point cloud data in step S202 and step S204. In the wheel type identification stage, a 2D color camera, a 3D structured light camera or a binocular camera and the like are adopted to acquire color image data and three-dimensional point cloud data of the front surface (A surface and B surface) of the hub, and the improved wheel type identification algorithm is used for stably and accurately identifying the wheel type and the position and pose of the hub.
And in the hub surface image acquisition stage, a database is inquired according to the wheel type identification result, and if a customized shooting scheme of the wheel type exists, the hub surface image data is directly acquired. If the customized shooting scheme of the wheel type does not exist, the corresponding shooting scheme is generated through the self-adaptive shooting track generation algorithm provided by the application, and then the acquisition of the image data of the surface of the hub is carried out.
And finally, inputting the collected hub surface image into an AI defect detection algorithm for defect detection in the AI defect detection stage.
Through the above process, the identification precision and speed of the wheel type identification algorithm are improved. The proposed adaptive shooting track generation algorithm can generate highly adapted customized shooting schemes for target hubs of different wheel types and different poses, so that high-quality hub surface images can be acquired for hubs of different wheel types, the degree of adaptation between the shooting scheme and the target hub is improved, and the technical problems of poor hub imaging quality and poor defect identification caused by poor adaptability to different wheel types are solved. The detailed process of the present application is explained below.
In step S202, as shown in fig. 3, when the position of the target hub is detected and data is collected, the height of the hub is measured by the lateral infrared device, the position of the upper camera is adjusted according to the height of the hub, and after the position reaches a proper position, two-dimensional image data and three-dimensional point cloud data of the target hub are collected, so that the next wheel type identification can be performed by combining 2D texture features and 3D depth information.
In step S204, determining the wheel type of the target hub based on the two-dimensional image data and the three-dimensional point cloud data includes:
step 1, respectively positioning the central riser position and the valve hole position of the target hub in the two-dimensional image data and the three-dimensional point cloud data;
step 2, registering the two-dimensional image data and the three-dimensional point cloud data by using the located central riser position and valve hole position to obtain depth texture data fusing the color, texture and depth information of the surface of the target hub;
step 3, inputting the depth texture data into a depth neural network model to extract the wheel type characteristics of the target hub;
step 4, inputting the wheel type characteristics into a metric learning model so as to determine the wheel type similarity between the target hub and hubs in a characteristic database by using the metric learning model, wherein the characteristic database is a database in which the wheel types in the production stage are dynamically updated;
and 5, determining the final wheel type of the target hub according to the similarity judgment result output by the metric learning model.
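Steps 4 and 5 above amount to a nearest-feature lookup against the characteristic database. The sketch below is a minimal illustration, not the patented implementation: it assumes the deep neural network has already produced a fixed-length wheel type feature vector, uses cosine similarity with a preset threshold to build the candidate set, and represents the feature database as a plain dictionary; all names are illustrative.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_wheel_type(query_feat: np.ndarray, feature_db: dict, threshold: float = 0.9):
    """Compare the query feature with every stored wheel type feature and
    return the candidate wheel types whose similarity reaches the threshold
    (the input to the final wheel type decision of step 5)."""
    scores = {name: cosine_similarity(query_feat, feat)
              for name, feat in feature_db.items()}
    candidates = [name for name, s in scores.items() if s >= threshold]
    return candidates, scores
```

If exactly one candidate survives the threshold, it is taken as the final wheel type; otherwise the template-matching disambiguation described below is applied.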
In the embodiment of the application, a basis is provided for the subsequent adaptive shooting track generation algorithm, manual intervention and labor cost are reduced, and the accuracy of wheel type recognition is maximized by combining 2D and 3D data. The schematic diagram of the wheel type recognition algorithm is shown in fig. 4. Firstly, a 2D camera and a 3D camera are adopted to collect color, texture and depth data of the A surface and B surface of the hub; then, the 2D data and the 3D data are registered by using the positions of the central riser and the valve hole to synthesize RGB-D data; the RGB-D data is then input into a deep neural network to extract a discriminative feature, namely the wheel type feature, which is compared with the wheel types in the database by metric learning.
In the embodiment of the application, the central riser position and the valve hole position of the target hub are located in the two-dimensional image data and the three-dimensional point cloud data by a target detection model (e.g., Mask R-CNN, SOLOv2).
In the embodiment of the application, the characteristic database is a database for dynamically updating the wheel type in the production stage, namely the wheel type in the production stage in the production line is updated in real time, so that the comparison type is reduced when the characteristic comparison is carried out through measurement learning, and the processing efficiency is improved.
As shown in fig. 4, metric learning can quickly discriminate easy-to-separate wheel types with obvious features; for difficult-to-separate wheel types whose features are not obvious, template matching can further be performed on the recognition result of metric learning, that is, the candidate wheel types output by metric learning are matched against templates to determine the final wheel type of the target hub. Specifically, determining the final wheel type of the target hub according to the similarity discrimination result output by the metric learning model includes:
step 1, determining the number of target wheel types with the wheel type feature similarity reaching a preset threshold value according to the similarity judgment result;
and 2, under the condition that only one target wheel type exists, determining the target wheel type as the final wheel type of the target hub. Alternatively, the first and second electrodes may be,
step 1, determining the number of target wheel types with the wheel type feature similarity reaching a preset threshold value according to the similarity judgment result;
step 2, under the condition that at least two target wheel types exist, taking the templates of all the target wheel types and performing height matching and diameter matching with the target hub one by one;
step 3, determining the comprehensive matching degree of the target wheel hub and each target wheel type based on the height matching result and the diameter matching result;
and 4, determining the target wheel type corresponding to the template with the highest comprehensive matching degree as the final wheel type of the target hub.
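Steps 2 to 4 of the disambiguation branch can be sketched as follows. This is a minimal sketch under assumptions not specified in the text: each template is reduced to a (height, diameter) pair, per-dimension match scores are the relative deviation subtracted from 1, and the comprehensive matching degree is their weighted sum; the weights and the scoring formula are illustrative, not the patented formula.

```python
def comprehensive_match(hub_height: float, hub_diameter: float,
                        templates: dict, w_h: float = 0.5, w_d: float = 0.5):
    """For each candidate wheel type template (height, diameter), compute a
    height score and a diameter score (1.0 = identical, lower = larger
    relative deviation), combine them into a comprehensive matching degree,
    and return the wheel type with the highest degree."""
    best_type, best_score = None, -1.0
    for name, (tpl_h, tpl_d) in templates.items():
        h_score = 1.0 - abs(hub_height - tpl_h) / tpl_h
        d_score = 1.0 - abs(hub_diameter - tpl_d) / tpl_d
        score = w_h * h_score + w_d * d_score
        if score > best_score:
            best_type, best_score = name, score
    return best_type, best_score
```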
In the embodiment of the application, in the easy-to-separate case, if the similarity with one wheel type is obviously high and the similarity with all other wheel types is low, a conclusion is drawn directly; in the difficult-to-separate case, if the similarity with several wheel types is high, template matching is used, that is, the final wheel type is confirmed by further eliminating candidates according to the height and diameter of the wheel type.
According to the wheel type recognition algorithm improved by the technical scheme, data combining 2D and 3D are input into the deep learning network for feature extraction, which improves the discriminability of the features and avoids wheel type recognition errors caused by similar wheel types. The accuracy of template matching is combined with the speed of metric learning: metric learning quickly determines the wheel type, and if some wheel types are too similar (difficult-to-separate wheel types), template matching is used to determine the final wheel type within a small candidate set, greatly improving both the accuracy and the speed of wheel type identification.
The above is a wheel type recognition algorithm, and the pose recognition algorithm is described below.
In step S204, determining the current pose of the target hub based on the two-dimensional image data and the three-dimensional point cloud data includes determining the current pose of the target hub using the two-dimensional image data as follows:
step 1, extracting front view data of the target hub from the two-dimensional image data;
step 2, inputting the front view data into a target detection model so as to identify a valve hole and a central riser of the target hub in the front view data by using the target detection model;
step 3, determining a first position offset value of the target hub in the x-axis direction and a second position offset value of the target hub in the y-axis direction in a target two-dimensional coordinate system based on the position of the central riser, and determining a first angle offset value of the target hub relative to a preset initial angle based on the position of the valve hole;
and 4, determining the current pose of the target hub by using the first position offset value, the second position offset value and the first angle offset value.
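The four steps above reduce to simple plane geometry once the riser and valve hole have been detected. The sketch below is an illustrative assumption of how the offsets might be combined, not the patented computation: the x/y offsets are taken as the riser position relative to the coordinate origin, and the angle offset as the valve hole's bearing around the riser measured against the preset initial angle; the function name and parameters are hypothetical.

```python
import math

def pose_from_2d(riser_xy, valve_xy, origin_xy=(0.0, 0.0), init_angle=0.0):
    """Derive (dx, dy, dtheta) from the detected centre riser and valve
    hole positions in the front-view image.  dx/dy are the first and
    second position offsets; dtheta (degrees, in [0, 360)) is the first
    angle offset relative to the preset initial angle."""
    dx = riser_xy[0] - origin_xy[0]
    dy = riser_xy[1] - origin_xy[1]
    bearing = math.degrees(math.atan2(valve_xy[1] - riser_xy[1],
                                      valve_xy[0] - riser_xy[0]))
    dtheta = (bearing - init_angle) % 360.0
    return dx, dy, dtheta
```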
In the embodiment of the application, the pose of the target hub can be determined based on the two-dimensional image data. Determining the pose from the two-dimensional image data requires identifying the valve hole and the central riser in the front view of the hub. As shown in fig. 5, the valve hole is the hole at the upper left corner of fig. 5, and the central riser is the hole at the center of fig. 5.
The application further provides a method for determining the pose of a target hub based on three-dimensional point cloud data, and specifically, determining the current pose of the target hub based on the two-dimensional image data and the three-dimensional point cloud data comprises determining the current pose of the target hub using the three-dimensional point cloud data as follows:
step 1, calling a three-dimensional data model corresponding to the target hub according to the wheel type of the target hub;
step 2, carrying out three-dimensional point cloud registration on the three-dimensional point cloud data and the three-dimensional data model;
step 3, determining a third position offset value of the target hub relative to the three-dimensional data model in the x-axis direction, a fourth position offset value in the y-axis direction, a fifth position offset value in the z-axis direction and a second angle offset value of the target hub relative to a preset initial angle according to the registration result;
and 4, determining the current pose of the target hub by using the third position offset value, the fourth position offset value, the fifth position offset value and the second angle offset value.
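The offsets of steps 3 and 4 fall directly out of the rigid transform recovered by point cloud registration. The text does not specify the registration method; the sketch below uses the closed-form Kabsch algorithm as a stand-in, and assumes point correspondences are already known (a real pipeline would first establish them, e.g. with ICP). All names are illustrative.

```python
import math
import numpy as np

def register_rigid(scan: np.ndarray, model: np.ndarray):
    """Closed-form rigid registration (Kabsch) of corresponding points,
    assuming scan[i] = R @ model[i] + t.  Returns t (the x/y/z position
    offsets of the target hub relative to the three-dimensional data
    model) and the rotation angle about the z axis in degrees (the
    second angle offset)."""
    mc, sc = model.mean(axis=0), scan.mean(axis=0)
    H = (model - mc).T @ (scan - sc)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = sc - R @ mc
    angle_z = math.degrees(math.atan2(R[1, 0], R[0, 0]))
    return t, angle_z
```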
Fig. 6 shows a three-dimensional data model (left) and a hub scan of a target hub, i.e., an illustration of three-dimensional point cloud data (right) according to an embodiment of the present application.
In the embodiment of the application, a pose identification method based on two-dimensional image data and a pose identification method based on three-dimensional point cloud data can be combined, namely, the two recognition results are compared, if the error of the two recognized poses is larger than an error threshold value, the two-dimensional image data and the three-dimensional point cloud data are collected again for pose identification, and if the error is smaller than the error threshold value, the average value of the two poses is used as the final pose of the target hub. Therefore, the accuracy of pose identification can be further improved.
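The compare-then-average fusion just described can be sketched as follows; a minimal illustration with hypothetical names, treating each pose as a flat tuple of offsets and using the largest per-component disagreement as the error measure (the text does not specify the error metric).

```python
def fuse_poses(pose_2d, pose_3d, err_threshold=2.0):
    """Compare the pose derived from 2D image data with the pose derived
    from 3D point cloud data.  If any component disagrees by more than
    the error threshold, return None to signal re-acquisition of both
    data sets; otherwise return the component-wise average as the final
    pose of the target hub."""
    err = max(abs(a - b) for a, b in zip(pose_2d, pose_3d))
    if err > err_threshold:
        return None
    return tuple((a + b) / 2 for a, b in zip(pose_2d, pose_3d))
```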
In step S206, retrieving a three-dimensional data model corresponding to the target hub according to the wheel type of the target hub, and generating an optimal shooting track and optimal shooting parameters of the target hub by using the three-dimensional data model and the current pose includes:
step 1, replacing curved surfaces in the three-dimensional data model by using planes to obtain an approximate data model of the three-dimensional data model, wherein the replacing mode is that a plurality of planes are connected one by one according to the radian of the curved surfaces to be replaced, and the more planes used for replacing one curved surface, the smaller the difference between the three-dimensional data model and the approximate data model is;
step 2, adjusting the number of planes used to replace the curved surfaces until the difference value between the three-dimensional data model and the approximate data model is smaller than a distortion threshold value, and, with the posture of the approximate data model kept consistent with the current pose, dividing the approximate data model into a plurality of basic units, wherein each basic unit comprises at least one plane;
step 3, simulating to shoot a target basic unit according to the principle that the shooting angle is greater than or equal to a first angle and the lighting angle is less than or equal to a second angle, and determining the optimal shooting position, the optimal shooting angle, the optimal lighting angle and the optimal camera parameter of the target basic unit in the simulated shooting process, wherein the first angle is greater than the second angle;
step 4, calculating the optimal shooting position, the optimal shooting angle, the optimal lighting angle and the optimal camera parameters of other basic units of the approximate data model by using the rotational symmetry of the hub with the optimal shooting position, the optimal shooting angle, the optimal lighting angle and the optimal camera parameters of the target basic unit as reference;
and 5, generating the optimal shooting track and the optimal shooting parameters of the target hub in the current pose by using the optimal shooting positions, the optimal shooting angles, the optimal lighting angles and the optimal camera parameters of all the basic units.
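Step 4's symmetry exploitation can be sketched as a rotation of the solved camera position about the hub axis. This is a minimal sketch under assumptions not stated in the text: the hub axis is taken as the z axis, the hub has n-fold rotational symmetry matching its basic units, and the shooting/lighting angles are invariant under the rotation; all names are illustrative.

```python
import math

def propagate_by_symmetry(ref_position, ref_angles, n_units):
    """Given the optimal camera position (x, y, z) and the (shooting,
    lighting) angle pair solved by simulated shooting for one basic
    unit, rotate the position about the hub's z axis to obtain the
    settings for all n_units basic units."""
    out = []
    x, y, z = ref_position
    for k in range(n_units):
        theta = 2.0 * math.pi * k / n_units
        rx = x * math.cos(theta) - y * math.sin(theta)
        ry = x * math.sin(theta) + y * math.cos(theta)
        out.append(((rx, ry, z), ref_angles))
    return out
```

Only one basic unit is therefore ever optimized by simulation; the rest are obtained by this rotation, which is what yields the resource saving described below.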
There are thousands of wheel types, and a shooting scheme that is effective for one wheel type is rarely equally effective for others. Therefore, an adaptive shooting track generation algorithm is provided: firstly, the wheel type and pose of the hub to be inspected are accurately identified by the wheel type identification algorithm, the 3D digital model of the wheel type is read, and the adaptive shooting track generation algorithm combines the 3D digital model with the pose information of the hub to generate the shooting track for a hub of that model in that pose.
In the embodiment of the application, after the optimal shooting position, optimal shooting angle, optimal lighting angle and optimal camera parameters of one basic unit are calculated, those of the remaining basic units can be calculated by utilizing the rotational symmetry of the hub, which greatly reduces the consumption of computing resources and improves calculation efficiency.
In the embodiment of the application, each basic unit can be shot in a simulation mode according to the principle of high-angle shooting and low-angle lighting, so that the optimal shooting position, the optimal shooting angle, the optimal lighting angle and the optimal camera parameters of each basic unit are determined in the simulation shooting process, and the shooting scheme can be enabled to have higher adaptation degree with the target hub, namely the current pose.
In this embodiment of the present application, after the wheel type and pose of the target hub are determined, if a customized shooting scheme exists for the wheel type, the hub surface image data are acquired directly. Specifically, after determining the wheel type and the current pose of the target hub based on the two-dimensional image data and the three-dimensional point cloud data, the method further includes:
step 1, inquiring whether a wheel type of the target wheel hub has a corresponding customized shooting scheme in a wheel type customized database;
step 2, under the condition that a customized shooting scheme corresponding to the wheel type of the target hub exists, importing the current pose to generate a target customized shooting scheme of the target hub under the current pose;
and 3, controlling the camera group and the light source group to acquire the hub surface image of the target hub according to the target customized shooting scheme.
In the embodiment of the application, the optimal shooting schemes of different wheel types can be enriched continuously, so that after the wheel type and the pose of the target hub are determined, if the customized shooting scheme of the wheel type exists, the corresponding customized shooting scheme can be applied to the current pose directly, and the processing efficiency is greatly improved.
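The load-or-generate decision described above can be sketched as a simple database lookup with a fallback; a minimal illustration with hypothetical names, where the customization database is a plain dictionary and `generate_fn` stands in for the adaptive shooting track generation algorithm.

```python
def load_or_generate_scheme(wheel_type, current_pose, custom_db, generate_fn):
    """Query the wheel type customization database.  If a customized
    shooting scheme exists for this wheel type, apply it to the current
    pose; otherwise fall back to the adaptive trajectory generator."""
    scheme = custom_db.get(wheel_type)
    if scheme is not None:
        return {"scheme": scheme, "pose": current_pose, "source": "customized"}
    generated = generate_fn(wheel_type, current_pose)
    return {"scheme": generated, "pose": current_pose, "source": "generated"}
```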
According to the method, wheel type recognition and pose recognition are carried out on the target hub by combining two-dimensional and three-dimensional data, and the adaptive shooting track generation algorithm combines the three-dimensional data model with the pose information of the target hub to generate the optimal shooting track for a hub of that model in that pose. High-quality hub surface images can thus be collected for hubs of different wheel types, the degree of adaptation between the shooting scheme and the target hub is improved, and the technical problems of poor hub imaging quality and poor defect recognition caused by poor adaptability to different wheel types are solved.
According to still another aspect of an embodiment of the present application, as shown in fig. 7, there is provided a hub defect detecting apparatus including:
the data acquisition module 701 is used for acquiring two-dimensional image data and three-dimensional point cloud data of a target hub under the condition that the target hub is detected to reach a preset position, wherein the two-dimensional image data is used for recording the color and texture of the surface of the target hub, and the three-dimensional point cloud data is used for recording the depth information of the target hub;
a wheel type and pose identification module 703 for determining a wheel type and a current pose of the target wheel hub based on the two-dimensional image data and the three-dimensional point cloud data;
a shooting track and parameter generating module 705, configured to invoke a three-dimensional data model corresponding to the target hub according to the wheel type of the target hub, and generate an optimal shooting track and an optimal shooting parameter of the target hub at the current pose by using the three-dimensional data model and the current pose;
a surface image acquisition module 707, configured to configure a camera group and a light source group arranged at a tail end of the robot arm according to the optimal shooting parameters, and control the robot arm to acquire a hub surface image of the target hub according to the optimal shooting track;
an AI defect recognition module 709, configured to input the hub surface image into a defect recognition neural network model to determine a defect existing in the target hub.
It should be noted that the data acquisition module 701 in this embodiment may be configured to execute step S202 in this embodiment, the wheel type and pose identification module 703 in this embodiment may be configured to execute step S204 in this embodiment, the shooting trajectory and parameter generation module 705 in this embodiment may be configured to execute step S206 in this embodiment, the surface image acquisition module 707 in this embodiment may be configured to execute step S208 in this embodiment, and the AI defect identification module 709 in this embodiment may be configured to execute step S210 in this embodiment.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 1, and may be implemented by software or hardware.
Optionally, the wheel type and pose identification module is specifically configured to:
respectively positioning the central riser position and the valve hole position of the target hub in the two-dimensional image data and the three-dimensional point cloud data;
registering the two-dimensional image data and the three-dimensional point cloud data by using the located central riser position and valve hole position to obtain depth texture data fusing the color, texture and depth information of the surface of the target hub;
inputting the depth texture data into a depth neural network model to extract the wheel type characteristics of the target hub;
inputting the wheel type characteristics into a metric learning model so as to determine the wheel type similarity between the target hub and hubs in a characteristic database by using the metric learning model, wherein the characteristic database is a database in which the wheel types in the production stage are dynamically updated;
and determining the final wheel type of the target wheel hub according to the similarity discrimination result output by the metric learning model.
Optionally, the wheel type and pose recognition module is further configured to:
determining the number of target wheel types with the wheel type feature similarity reaching a preset threshold value according to the similarity judgment result;
determining the target wheel type as the final wheel type of the target hub in case of only one target wheel type;
under the condition that at least two target wheel types exist, taking the templates of all the target wheel types and performing height matching and diameter matching with the target hub one by one;
determining the comprehensive matching degree of the target wheel hub and each target wheel type based on the height matching result and the diameter matching result;
and determining the target wheel type corresponding to the template with the highest comprehensive matching degree as the final wheel type of the target hub.
Optionally, the wheel type and pose recognition module is further configured to:
extracting front view data of the target hub from the two-dimensional image data;
inputting the front view data into a target detection model so as to identify a valve hole and a central riser of the target hub in the front view data by using the target detection model;
determining a first position offset value of the target hub in the x-axis direction and a second position offset value of the target hub in the y-axis direction in a target two-dimensional coordinate system respectively based on the position of the central riser, and determining a first angle offset value of the target hub relative to a preset initial angle based on the position of the valve hole;
determining the current pose of the target hub using the first position offset value, the second position offset value, and the first angle offset value.
Optionally, the wheel type and pose recognition module is further configured to:
calling a three-dimensional data model corresponding to the target hub according to the wheel type of the target hub;
performing three-dimensional point cloud registration on the three-dimensional point cloud data and the three-dimensional data model;
determining a third position offset value of the target hub relative to the three-dimensional data model in the x-axis direction, a fourth position offset value in the y-axis direction, a fifth position offset value in the z-axis direction and a second angle offset value of the target hub relative to a preset initial angle according to the registration result;
determining the current pose of the target hub using the third position offset value, the fourth position offset value, the fifth position offset value, and the second angle offset value.
Optionally, the shooting track and parameter generating module is specifically configured to:
replacing curved surfaces in the three-dimensional data model by using planes to obtain an approximate data model of the three-dimensional data model, wherein the replacing mode is that a plurality of planes are connected one by one according to the radian of the curved surfaces to be replaced, and the more planes used for replacing one curved surface, the smaller the difference between the three-dimensional data model and the approximate data model is;
adjusting the number of planes used to replace the curved surfaces until the difference value between the three-dimensional data model and the approximate data model is smaller than a distortion threshold value, and, with the posture of the approximate data model kept consistent with the current pose, dividing the approximate data model into a plurality of basic units, wherein each basic unit comprises at least one plane;
simulating to shoot a target basic unit according to the principle that the shooting angle is larger than or equal to a first angle and the lighting angle is smaller than or equal to a second angle, and determining the optimal shooting position, the optimal shooting angle, the optimal lighting angle and the optimal camera parameter of the target basic unit in the simulated shooting process, wherein the first angle is larger than the second angle;
calculating the optimal shooting position, the optimal shooting angle, the optimal lighting angle and the optimal camera parameters of other basic units of the approximate data model by using the rotational symmetry of the hub with reference to the optimal shooting position, the optimal shooting angle, the optimal lighting angle and the optimal camera parameters of the target basic unit;
and generating an optimal shooting track and optimal shooting parameters of the target hub in the current pose by using the optimal shooting positions, optimal shooting angles, optimal lighting angles and optimal camera parameters of all the basic units.
Optionally, the apparatus further comprises a customization scheme application module, configured to:
inquiring whether the wheel type of the target wheel hub has a corresponding customized shooting scheme or not in a wheel type customized database;
when the condition that the wheel type of the target hub has a corresponding customized shooting scheme is inquired, importing the current pose to generate a target customized shooting scheme of the target hub under the current pose;
and controlling the camera group and the light source group to acquire the hub surface image of the target hub according to the target customized shooting scheme.
According to another aspect of the embodiments of the present application, there is provided a hub defect detecting apparatus, as shown in fig. 8, including a memory 801, a processor 803, a communication interface 805 and a communication bus 807, wherein the memory 801 stores a computer program that can be executed on the processor 803, the memory 801 and the processor 803 communicate with each other through the communication interface 805 and the communication bus 807, and the steps of the method are implemented when the processor 803 executes the computer program.
The memory and the processor in the electronic equipment are communicated with the communication interface through a communication bus. The communication bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc.
The Memory may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Optionally, the hub defect detecting apparatus further includes:
the two-dimensional image data acquisition device is used for acquiring two-dimensional image data of a target hub under the condition that the target hub reaches a preset position;
the three-dimensional point cloud data acquisition device is used for acquiring the three-dimensional point cloud data of the target hub under the condition that the target hub reaches a preset position;
the end of the mechanical arm is provided with a camera set and a light source set, the mechanical arm is used for moving the camera set and the light source set according to an optimal shooting track, and the camera set and the light source set are used for collecting hub surface images of a target hub on the optimal shooting track according to optimal shooting parameters.
Fig. 9 is a schematic diagram of a hub defect detecting apparatus according to an embodiment of the present application. In order to adapt to the flexible shooting tracks and shooting modes of different wheel types, as shown in fig. 9, the camera set and the light source set are mounted at the end of the mechanical arm, and this arrangement is combined with the flexible shooting scheme of the adaptive shooting track generation algorithm to ensure that high-quality hub surface images are acquired.
There is also provided in accordance with yet another aspect of an embodiment of the present application a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the steps of any of the embodiments described above.
Optionally, in an embodiment of the present application, a computer readable medium is configured to store program code for the processor to perform the following steps:
under the condition that a target hub is detected to reach a preset position, collecting two-dimensional image data and three-dimensional point cloud data of the target hub, wherein the two-dimensional image data is used for recording the color and texture of the surface of the target hub, and the three-dimensional point cloud data is used for recording the depth information of the target hub;
determining the wheel type and the current pose of the target hub based on the two-dimensional image data and the three-dimensional point cloud data;
calling a three-dimensional data model corresponding to the target hub according to the wheel type of the target hub, and generating an optimal shooting track and optimal shooting parameters of the target hub at the current pose by using the three-dimensional data model and the current pose;
configuring a camera set and a light source set which are arranged at the tail end of a mechanical arm according to the optimal shooting parameters, and controlling the mechanical arm to acquire a hub surface image of the target hub according to the optimal shooting track;
inputting the hub surface image into a defect recognition neural network model to determine the defects existing in the target hub.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
For specific implementations of the embodiments of the present application, reference may be made to the above embodiments, which achieve the corresponding technical effects.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented by means of units performing the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program codes, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk. It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (11)

1. A method for detecting a defect of a hub is characterized by comprising the following steps:
under the condition that a target hub is detected to reach a preset position, collecting two-dimensional image data and three-dimensional point cloud data of the target hub, wherein the two-dimensional image data is used for recording the color and texture of the surface of the target hub, and the three-dimensional point cloud data is used for recording the depth information of the target hub;
determining a wheel type and a current pose of the target hub based on the two-dimensional image data and the three-dimensional point cloud data;
calling a three-dimensional data model corresponding to the target hub according to the wheel type of the target hub, and generating an optimal shooting track and optimal shooting parameters of the target hub at the current pose by using the three-dimensional data model and the current pose;
configuring a camera set and a light source set which are arranged at the tail end of a mechanical arm according to the optimal shooting parameters, and controlling the mechanical arm to acquire a hub surface image of the target hub according to the optimal shooting track;
inputting the hub surface image into a defect recognition neural network model to determine the defects existing in the target hub.
2. The method of claim 1, wherein determining the wheel shape of the target hub based on the two-dimensional image data and the three-dimensional point cloud data comprises:
respectively positioning the central riser position and the valve hole position of the target hub in the two-dimensional image data and the three-dimensional point cloud data;
registering the two-dimensional image data and the three-dimensional point cloud data by using the central riser position and the valve hole position obtained by positioning, so as to obtain depth texture data fusing the color, texture and depth information of the surface of the target hub;
inputting the depth texture data into a depth neural network model to extract the wheel type characteristics of the target hub;
inputting the wheel type characteristics into a metric learning model so as to determine the wheel type similarity between the target wheel hub and a wheel hub in a characteristic database by using the metric learning model, wherein the characteristic database is a database for dynamically updating the wheel type in a production stage;
and determining the final wheel type of the target wheel hub according to the similarity discrimination result output by the metric learning model.
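The metric-learning comparison in claim 2 can be illustrated with a minimal cosine-similarity matcher. The raw feature vectors and the database layout below are assumptions; a production system would compare learned embeddings from the deep neural network, not hand-made vectors:

```python
import math

def cosine_similarity(a, b):
    """Plain cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_wheel_type(query_feature, feature_db):
    """Rank the wheel types in the (dynamically updated) feature database
    by similarity to the query hub's wheel-type feature."""
    scored = [(name, cosine_similarity(query_feature, feat))
              for name, feat in feature_db.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```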
3. The method of claim 2, wherein determining the final wheel type of the target hub from the similarity discrimination result output by the metric learning model comprises:
determining the number of target wheel types with the wheel type feature similarity reaching a preset threshold value according to the similarity judgment result;
determining the target wheel type as the final wheel type of the target hub in case of only one target wheel type;
under the condition that there are at least two target wheel types, taking the templates of all the target wheel types and subjecting them to height matching and diameter matching with the target hub one by one;
determining the comprehensive matching degree of the target wheel hub and each target wheel type based on the height matching result and the diameter matching result;
and determining the target wheel type corresponding to the template with the highest comprehensive matching degree as the final wheel type of the target hub.
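The selection logic of claim 3 (a single candidate above the threshold wins outright; several candidates trigger the height/diameter fallback) can be sketched as follows. The threshold value, the template tuples, and the combined-match scoring are illustrative assumptions:

```python
def select_final_wheel_type(similarities, templates, hub_height, hub_diameter,
                            threshold=0.9):
    """similarities: {wheel_type: similarity score}
    templates: {wheel_type: (template_height, template_diameter)}
    threshold and scoring are assumptions; the patent fixes neither."""
    candidates = [t for t, s in similarities.items() if s >= threshold]
    if len(candidates) == 1:
        return candidates[0]
    if not candidates:
        return None  # assumption: nothing above threshold -> unknown type
    # Two or more candidates: height + diameter matching, one by one, and
    # the template with the highest combined agreement wins.
    def combined_match(wheel_type):
        h, d = templates[wheel_type]
        return -(abs(h - hub_height) + abs(d - hub_diameter))
    return max(candidates, key=combined_match)
```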
4. The method of claim 1, wherein determining the current pose of the target hub based on the two-dimensional image data and the three-dimensional point cloud data comprises determining the current pose of the target hub using the two-dimensional image data as follows:
extracting front view data of the target hub from the two-dimensional image data;
inputting the front view data into a target detection model so as to identify a valve hole and a central riser of the target hub in the front view data by using the target detection model;
determining a first position offset value of the target hub in the x-axis direction and a second position offset value of the target hub in the y-axis direction in a target two-dimensional coordinate system respectively based on the position of the central riser, and determining a first angle offset value of the target hub relative to a preset initial angle based on the position of the valve hole;
determining the current pose of the target hub using the first position offset value, the second position offset value, and the first angle offset value.
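The offsets of claim 4 can be illustrated with a small helper: the central riser supplies the x/y position offsets and the valve hole supplies the rotation. The coordinate conventions (riser nominally at the origin, angle measured from riser to valve hole) are assumptions, not part of the claim:

```python
import math

def pose_from_front_view(riser_xy, valve_xy, nominal_center=(0.0, 0.0),
                         initial_angle_deg=0.0):
    """Sketch of claim 4: derive the 2D pose of the hub from the detected
    central riser and valve hole in the front-view image."""
    dx = riser_xy[0] - nominal_center[0]   # first position offset (x axis)
    dy = riser_xy[1] - nominal_center[1]   # second position offset (y axis)
    # Angle of the riser->valve-hole direction, relative to a preset
    # initial angle (assumed convention).
    angle = math.degrees(math.atan2(valve_xy[1] - riser_xy[1],
                                    valve_xy[0] - riser_xy[0]))
    d_theta = angle - initial_angle_deg    # first angle offset
    return dx, dy, d_theta
```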
5. The method of claim 1, wherein determining the current pose of the target hub based on the two-dimensional image data and the three-dimensional point cloud data comprises determining the current pose of the target hub using the three-dimensional point cloud data as follows:
calling a three-dimensional data model corresponding to the target hub according to the wheel type of the target hub;
performing three-dimensional point cloud registration on the three-dimensional point cloud data and the three-dimensional data model;
determining a third position offset value of the target hub relative to the three-dimensional data model in the x-axis direction, a fourth position offset value in the y-axis direction, a fifth position offset value in the z-axis direction and a second angle offset value of the target hub relative to a preset initial angle according to the registration result;
determining the current pose of the target hub using the third, fourth, fifth, and second angular offset values.
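The offset extraction of claim 5 can be sketched by assuming the registration step yields a 4x4 homogeneous transform (for example from an ICP solver, which is not shown here). Reading the rotation from the upper-left block assumes the hub's angular misalignment is purely about the z axis:

```python
import math

def offsets_from_registration(transform):
    """transform: assumed 4x4 homogeneous matrix (row-major nested lists)
    aligning the three-dimensional data model to the measured point cloud.
    Returns the third/fourth/fifth position offsets and second angle offset."""
    dx, dy, dz = transform[0][3], transform[1][3], transform[2][3]
    # Recover rotation about z from the upper-left 3x3 rotation block;
    # assumption: the only rotation is about the hub axis (z).
    d_theta = math.degrees(math.atan2(transform[1][0], transform[0][0]))
    return dx, dy, dz, d_theta
```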
6. The method of claim 1, wherein retrieving a three-dimensional data model corresponding to the target hub according to the wheel type of the target hub, and generating an optimal photographing trajectory and optimal photographing parameters of the target hub using the three-dimensional data model and the current pose comprises:
replacing curved surfaces in the three-dimensional data model with planes to obtain an approximate data model of the three-dimensional data model, wherein the replacement is performed by connecting a plurality of planes one by one according to the arc of the curved surface to be replaced, and the more planes are used to replace one curved surface, the smaller the difference between the three-dimensional data model and the approximate data model;
adjusting the number of planes used for replacing the curved surfaces until the difference between the three-dimensional data model and the approximate data model is smaller than a distortion threshold; and, under the condition that the pose of the approximate data model is consistent with the current pose, dividing the approximate data model into a plurality of basic units, wherein each basic unit comprises at least one plane;
simulating to shoot a target basic unit according to the principle that the shooting angle is larger than or equal to a first angle and the lighting angle is smaller than or equal to a second angle, and determining the optimal shooting position, the optimal shooting angle, the optimal lighting angle and the optimal camera parameter of the target basic unit in the simulated shooting process, wherein the first angle is larger than the second angle;
calculating the optimal shooting position, the optimal shooting angle, the optimal lighting angle and the optimal camera parameters of other basic units of the approximate data model by using the rotational symmetry of the hub with reference to the optimal shooting position, the optimal shooting angle, the optimal lighting angle and the optimal camera parameters of the target basic unit;
and generating an optimal shooting track and optimal shooting parameters of the target hub in the current pose by using the optimal shooting positions, optimal shooting angles, optimal lighting angles and optimal camera parameters of all the basic units.
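The symmetry step of claim 6 can be illustrated by rotating the optimal shot worked out for one basic unit about the hub axis to cover the remaining units. The shot dictionary layout and the assumption of pure n-fold rotational symmetry about z are illustrative:

```python
import math

def replicate_shots(base_shot, n_units):
    """Given the optimal shot for one basic unit of an n-fold rotationally
    symmetric hub, rotate its position about the z axis to obtain the shots
    for all n basic units; angles and camera parameters carry over."""
    x, y, z = base_shot["position"]
    shots = []
    for k in range(n_units):
        a = 2 * math.pi * k / n_units
        shots.append({
            "position": (x * math.cos(a) - y * math.sin(a),
                         x * math.sin(a) + y * math.cos(a),
                         z),
            "camera_angle": base_shot["camera_angle"],
            "light_angle": base_shot["light_angle"],
        })
    return shots
```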
7. The method of claim 1, after determining the wheel type and current pose of the target hub based on the two-dimensional image data and the three-dimensional point cloud data, the method further comprising:
inquiring whether the wheel type of the target wheel hub has a corresponding customized shooting scheme or not in a wheel type customized database;
when it is determined that the wheel type of the target hub has a corresponding customized shooting scheme, importing the current pose to generate a target customized shooting scheme of the target hub at the current pose;
and controlling the camera group and the light source group to acquire the hub surface image of the target hub according to the target customized shooting scheme.
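The lookup-with-fallback of claim 7 can be sketched as a dictionary query: use the customized scheme when one exists for this wheel type, otherwise keep the generated plan. The database shape and the way the pose is "imported" into the scheme are assumptions:

```python
def choose_shooting_scheme(wheel_type, pose, custom_db, default_plan):
    """Prefer a customized shooting scheme for this wheel type when the
    wheel-type customization database has one; otherwise fall back to the
    plan generated from the three-dimensional data model."""
    scheme = custom_db.get(wheel_type)
    if scheme is None:
        return default_plan
    # Importing the current pose (assumption: the customized scheme is a
    # template that is parameterized by the hub's current pose).
    return {"template": scheme, "pose": pose}
```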
8. A hub defect detecting apparatus, comprising:
the data acquisition module is used for acquiring two-dimensional image data and three-dimensional point cloud data of a target hub under the condition that the target hub is detected to reach a preset position, wherein the two-dimensional image data is used for recording the color and texture of the surface of the target hub, and the three-dimensional point cloud data is used for recording the depth information of the target hub;
the wheel type and pose identification module is used for determining the wheel type and the current pose of the target hub based on the two-dimensional image data and the three-dimensional point cloud data;
the shooting track and parameter generating module is used for calling a three-dimensional data model corresponding to the target hub according to the wheel type of the target hub and generating an optimal shooting track and optimal shooting parameters of the target hub at the current pose by using the three-dimensional data model and the current pose;
the surface image acquisition module is used for configuring a camera set and a light source set which are arranged at the tail end of the mechanical arm according to the optimal shooting parameters and controlling the mechanical arm to acquire a hub surface image of the target hub according to the optimal shooting track;
and the AI defect identification module is used for inputting the hub surface image into a defect identification neural network model so as to determine the defects of the target hub.
9. A hub defect detecting device comprising a memory, a processor, a communication interface and a communication bus, wherein the memory has stored therein a computer program operable on the processor, and the memory and the processor communicate via the communication bus and the communication interface, wherein the processor implements the steps of the method according to any of the preceding claims 1 to 7 when executing the computer program.
10. The apparatus of claim 9, further comprising:
the two-dimensional image data acquisition device is used for acquiring two-dimensional image data of a target hub under the condition that the target hub reaches a preset position;
the three-dimensional point cloud data acquisition device is used for acquiring the three-dimensional point cloud data of the target hub under the condition that the target hub reaches a preset position;
the end of the mechanical arm is provided with a camera set and a light source set, the mechanical arm is used for moving the camera set and the light source set according to an optimal shooting track, and the camera set and the light source set are used for collecting hub surface images of a target hub on the optimal shooting track according to optimal shooting parameters.
11. A computer-readable medium having non-volatile program code executable by a processor, wherein the program code causes the processor to perform the method of any of claims 1 to 7.
CN202211038083.4A 2022-08-29 2022-08-29 Hub defect detection method, device, equipment and computer readable medium Active CN115115631B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211038083.4A CN115115631B (en) 2022-08-29 2022-08-29 Hub defect detection method, device, equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211038083.4A CN115115631B (en) 2022-08-29 2022-08-29 Hub defect detection method, device, equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN115115631A CN115115631A (en) 2022-09-27
CN115115631B true CN115115631B (en) 2022-12-09

Family

ID=83336030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211038083.4A Active CN115115631B (en) 2022-08-29 2022-08-29 Hub defect detection method, device, equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN115115631B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1600351A1 (en) * 2004-04-01 2005-11-30 Heuristics GmbH Method and system for detecting defects and hazardous conditions in passing rail vehicles
DE102008055163A1 (en) * 2008-12-29 2010-07-01 Robert Bosch Gmbh Method for chassis measurement and device for measuring the chassis geometry of a vehicle
CN106295936A (en) * 2015-05-29 2017-01-04 深圳镭博万科技有限公司 Wheel hub type identification device and wheel hub mark system for tracing and managing
CN108273763A (en) * 2018-04-04 2018-07-13 苏州优纳科技有限公司 Wheel hub finished appearance detection device
CN208270445U (en) * 2018-06-25 2018-12-21 南昌工程学院 Track component surface defect detection apparatus based on three-dimensional measurement
CN110554704A (en) * 2019-08-15 2019-12-10 成都优艾维智能科技有限责任公司 unmanned aerial vehicle-based fan blade autonomous inspection method
CN112464709A (en) * 2020-10-16 2021-03-09 深圳精匠云创科技有限公司 Hub identification detection method, electronic device and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Surface Defects Recognition of Wheel Hub Based on Improved Faster R-CNN; Xiaohong Sun et al.; Electronics; 2019-04-25; 1-16 *
Spoke Angle Measurement Based on Image Processing; Guan Mingyu et al.; Journal of Data Acquisition and Processing; 2019-01-15; 189-194 *
Research on a Visual Inspection System for Bump Damage on Cast Surfaces of Motorcycle Wheels; Gao Xiaoyu; China Masters' Theses Full-text Database, Engineering Science and Technology II; 2022-04-15; C035-609 *

Also Published As

Publication number Publication date
CN115115631A (en) 2022-09-27

Similar Documents

Publication Publication Date Title
CN106683070B (en) Height measuring method and device based on depth camera
CN106469448B (en) Automated industrial inspection with 3D vision
CN111652292B (en) Similar object real-time detection method and system based on NCS and MS
US11158039B2 (en) Using 3D vision for automated industrial inspection
CN108550166B (en) Spatial target image matching method
KR102073468B1 (en) System and method for scoring color candidate poses against a color image in a vision system
EP3836085B1 (en) Multi-view three-dimensional positioning
CN105718931B (en) System and method for determining clutter in acquired images
CN105865329A (en) Vision-based acquisition system for end surface center coordinates of bundles of round steel and acquisition method thereof
CN107527368B (en) Three-dimensional space attitude positioning method and device based on two-dimensional code
CN111161295B (en) Dish image background stripping method
CN111192326B (en) Method and system for visually identifying direct-current charging socket of electric automobile
CN110084830B (en) Video moving object detection and tracking method
CN113793413A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN113706472A (en) Method, device and equipment for detecting road surface diseases and storage medium
CN114067147A (en) Ship target confirmation method based on local shape matching
CN107895166B (en) Method for realizing target robust recognition based on feature descriptor by geometric hash method
Ryu et al. Feature-based pothole detection in two-dimensional images
CN113313725B (en) Bung hole identification method and system for energetic material medicine barrel
CN116883945B (en) Personnel identification positioning method integrating target edge detection and scale invariant feature transformation
CN115115631B (en) Hub defect detection method, device, equipment and computer readable medium
CN115034577A (en) Electromechanical product neglected loading detection method based on virtual-real edge matching
CN115147541A (en) Ladle detection method and device, electronic equipment and computer readable storage medium
CN112686155A (en) Image recognition method, image recognition device, computer-readable storage medium and processor
CN112529960A (en) Target object positioning method and device, processor and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant