CN114708230B - Vehicle frame quality detection method, device, equipment and medium based on image analysis - Google Patents


Info

Publication number
CN114708230B
CN114708230B (Application CN202210360837.1A)
Authority
CN
China
Prior art keywords
image
dimensional
frame
pixel
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210360837.1A
Other languages
Chinese (zh)
Other versions
CN114708230A (en)
Inventor
吴志强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jingming Inspection Equipment Co ltd
Original Assignee
Shenzhen Jingming Inspection Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jingming Inspection Equipment Co ltd filed Critical Shenzhen Jingming Inspection Equipment Co ltd
Priority to CN202210360837.1A
Publication of CN114708230A
Application granted
Publication of CN114708230B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30116 Casting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Geometry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Graphics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to artificial intelligence technology and discloses a vehicle frame quality detection method based on image analysis, which comprises the following steps: extracting dual-channel features from multi-orientation two-dimensional images of a target frame and fusing them into fusion features; separating the frame pixel area in each two-dimensional image according to the fusion features, calculating the size of each frame pixel area, and calculating a first detection score of the target frame according to the size; constructing a three-dimensional frame model according to the multi-orientation two-dimensional images of the target frame, carrying out three-dimensional measurement on the three-dimensional frame model, and calculating a second detection score of the target frame according to the three-dimensional measurement result; and calculating a comprehensive quality score of the target frame according to the first detection score and the second detection score. The invention also provides a vehicle frame quality detection device based on image analysis, an electronic device and a storage medium. The invention can improve the accuracy of vehicle frame quality detection.

Description

Vehicle frame quality detection method, device, equipment and medium based on image analysis
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a frame quality detection method and device based on image analysis, electronic equipment and a computer-readable storage medium.
Background
In the manufacturing field, having personnel inspect the production quality of molded targets is very labor-intensive, so more and more enterprises monitor targets mainly through image analysis, for example by comparing the sizes of target images against standard images to judge whether a target has quality problems such as deformation.
Traditional image analysis methods mainly segment changed regions directly from the pixel difference between two images. The principle of these methods is simple, but they generate a large amount of noise, which greatly interferes with change identification. In addition, most commonly used change-region identification based on image analysis identifies changed regions through a difference image, divides the boundary of the change according to experience, and relies on visual interpretation. As a result, existing image-analysis-based identification of quality problems such as deformation has low accuracy.
Disclosure of Invention
The invention provides a method and a device for detecting the quality of a vehicle frame based on image analysis and a computer readable storage medium, and mainly aims to solve the problem of low accuracy in vehicle frame quality detection.
In order to achieve the above object, the present invention provides a frame quality detection method based on image analysis, which includes:
acquiring two-dimensional images obtained by shooting a target frame from a plurality of directions;
extracting dual-channel features from the two-dimensional image, and performing feature fusion on the dual-channel features to obtain fusion features;
separating frame pixel areas in the two-dimensional image corresponding to each different direction according to the fusion characteristics, calculating the size of each frame pixel area, and calculating a first detection score of the target frame according to the size;
selecting two-dimensional images in one direction one by one as target images, and calculating the image distortion coefficient of the target images by using a double coordinate method;
constructing a space coordinate system, and mapping the two-dimensional image into the space coordinate system according to the plurality of directions to obtain an image coordinate;
carrying out coordinate correction on the image coordinates according to the image distortion coefficient to obtain corrected coordinates of each two-dimensional image;
constructing a three-dimensional frame model of the target frame according to the corrected coordinates, carrying out three-dimensional measurement on the three-dimensional frame model, and calculating a second detection score of the target frame according to a three-dimensional measurement result;
and calculating the quality comprehensive score of the target frame according to the first detection score and the second detection score.
Optionally, the extracting the two-channel feature from the two-dimensional image includes:
performing pixel enhancement on the two-dimensional image, and selecting pixel points with pixel values larger than a preset pixel threshold value in the two-dimensional image after the pixel enhancement as pixel points to be screened;
determining a connected domain formed by the pixel points to be screened as a characteristic pixel region of the target frame;
carrying out global feature extraction on the feature pixel area to obtain global features;
performing local feature extraction on the feature pixel area to obtain local features;
and collecting the global features and the local features to obtain the dual-channel features.
Optionally, the performing feature fusion on the two-channel features to obtain a fused feature includes:
mapping each feature in the two-channel features to different network layers in a pre-constructed full-connection layer network one by one;
carrying out jump linking on the two-channel characteristics in different network layers to obtain connection characteristics;
and performing composite addition operation on each connection characteristic to obtain a fusion characteristic.
Optionally, the separating the frame pixel region in the two-dimensional image corresponding to each different orientation according to the fusion feature includes:
selecting one of the two-dimensional images from the two-dimensional images corresponding to different directions one by one as an image to be separated;
calculating the pixel size of the image to be separated and calculating the characteristic size of the fusion characteristic corresponding to the image to be separated;
performing up-sampling on the fusion feature corresponding to the image to be separated according to the pixel size and the feature size until the feature size of the fusion feature corresponding to the image to be separated is the same as the pixel size of the image to be separated;
and cutting the image to be separated according to the up-sampled fusion characteristics to obtain a frame pixel area in the image to be separated.
Optionally, the constructing a three-dimensional frame model of the target frame according to the corrected coordinates, and performing three-dimensional measurement on the three-dimensional frame model includes:
counting characteristic coordinates of a frame pixel area in each two-dimensional image in the space coordinate system;
determining the space connected domain of the characteristic coordinates as a three-dimensional frame model of the target frame;
and solving the curved surface integral of the three-dimensional frame model to obtain a three-dimensional measurement result.
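The surface integral of the three-dimensional model can be approximated numerically once the model is discretized. The sketch below sums triangle areas over a mesh; the mesh representation and all names are our own illustrative assumptions, since the patent does not specify a discretization.

```python
import numpy as np

def mesh_surface_area(vertices, faces):
    """Approximate the surface integral (total surface area) of a 3-D
    frame model given as a triangle mesh -- a numerical sketch; the
    patent does not specify how the curved surface is discretized."""
    v = np.asarray(vertices, dtype=float)
    total = 0.0
    for a, b, c in faces:
        # area of one triangle = half the norm of the edge cross product
        total += 0.5 * np.linalg.norm(np.cross(v[b] - v[a], v[c] - v[a]))
    return total

# unit square in the z = 0 plane, split into two triangles
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
tris = [(0, 1, 2), (0, 2, 3)]
print(mesh_surface_area(verts, tris))   # prints 1.0
```

The resulting scalar plays the role of the three-dimensional measurement result that the second detection score is computed from.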
Optionally, the calculating a first detection score of the target frame according to the size includes:
acquiring standard frame dimension data;
calculating a difference between the dimensional measurement and the standard frame dimensional data;
and mapping the difference value to a preset numerical value interval to obtain a first detection score.
Optionally, the calculating an image distortion coefficient of the target image by using a dual coordinate method includes:
establishing a pixel coordinate system by taking any corner point of the target image as an origin, and establishing an image coordinate system by taking a central pixel of the target image as the origin;
calculating lens internal parameters of a camera for shooting the target image according to the coordinates of the target image in the image coordinate system and the pixel coordinate system;
and calculating a distortion coefficient of the target image according to the lens intrinsic parameters.
In order to solve the above problems, the present invention further provides an image analysis-based vehicle frame quality detection apparatus, including:
the image processing module is used for acquiring a two-dimensional image obtained by shooting a target frame from a plurality of directions, extracting dual-channel features from the two-dimensional image, and performing feature fusion on the dual-channel features to obtain fusion features;
the first calculation module is used for separating the frame pixel regions in the two-dimensional image corresponding to different directions according to the fusion characteristics, calculating the size of the frame pixel regions and calculating a first detection score of the target frame according to the size;
the second calculation module is used for selecting two-dimensional images in one direction one by one as target images, calculating an image distortion coefficient of the target images by using a double-coordinate method, constructing a space coordinate system, mapping the two-dimensional images into the space coordinate system according to the directions to obtain image coordinates, carrying out coordinate correction on the image coordinates according to the image distortion coefficient to obtain a corrected coordinate of each two-dimensional image, constructing a three-dimensional frame model of the target frame according to the corrected coordinates, carrying out three-dimensional measurement on the three-dimensional frame model, and calculating a second detection score of the target frame according to a three-dimensional measurement result;
and the comprehensive score analysis module is used for calculating the comprehensive quality score of the target vehicle frame according to the first detection score and the second detection score.
In order to solve the above problem, the present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program being executed by the at least one processor to enable the at least one processor to perform the image analysis based frame quality detection method described above.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, in which at least one computer program is stored, and the at least one computer program is executed by a processor in an electronic device to implement the image analysis-based vehicle frame quality detection method described above.
According to the embodiment of the invention, the two-dimensional images of the target frame in multiple directions are comprehensively analyzed, so that the multi-directional two-dimensional quality detection of the target frame is realized, and the accuracy of the frame quality detection is favorably improved; meanwhile, a three-dimensional frame model of the target frame is constructed according to the two-dimensional images in the plurality of directions, so that the integral three-dimensional quality detection of the target frame is realized; and the comprehensive quality score of the target frame is obtained by combining the results of the two-dimensional quality detection and the three-dimensional quality detection, so that the accuracy of quality detection on the target frame is improved. Therefore, the frame quality detection method and device based on image analysis, the electronic equipment and the computer readable storage medium provided by the invention can solve the problem of low accuracy in frame quality detection.
Drawings
Fig. 1 is a schematic flowchart of a method for detecting vehicle frame quality based on image analysis according to an embodiment of the present invention;
fig. 2 is a schematic flow chart illustrating feature fusion of dual-channel features according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a process of calculating a first detection score according to an embodiment of the present invention;
FIG. 4 is a functional block diagram of an apparatus for detecting vehicle frame quality based on image analysis according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device for implementing the vehicle frame quality detection method based on image analysis according to an embodiment of the present invention.
The implementation, functional features and advantages of the present invention will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the application provides a frame quality detection method based on image analysis. The execution subject of the vehicle frame quality detection method based on image analysis includes, but is not limited to, at least one of electronic devices such as a server and a terminal that can be configured to execute the method provided by the embodiments of the present application. In other words, the vehicle frame quality detection method based on image analysis may be executed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server, a cloud server cluster, and the like. The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.
Referring to fig. 1, a schematic flow chart of a vehicle frame quality detection method based on image analysis according to an embodiment of the present invention is shown. In this embodiment, the method for detecting vehicle frame quality based on image analysis includes:
s1, acquiring two-dimensional images obtained by shooting a target frame from a plurality of directions.
In the embodiment of the invention, the target frame can be any type of automobile frame. In other embodiments of the present invention, the target frame may further include a frame of a vehicle such as a bicycle, a motorcycle, a boat, etc.
In detail, in order to realize accurate analysis of the target frame, the target frame may be photographed in a plurality of preset orientations, so that the target frame may be analyzed from a plurality of orientations, thereby improving accuracy of quality detection of the target frame.
Specifically, the predetermined plurality of orientations include, but are not limited to, a front view orientation, a left view orientation, a right view orientation, a rear view orientation, a top view orientation, and a bottom view orientation.
And S2, extracting the double-channel features from the two-dimensional image, and performing feature fusion on the double-channel features to obtain fusion features.
In the embodiment of the invention, in order to realize the detailed analysis of the two-dimensional images in different directions, the two-channel feature extraction can be carried out on the two-dimensional image shot in each direction.
In detail, the result of the two-channel feature extraction includes, but is not limited to, texture features, pixel features, geometric features, and the like.
In an embodiment of the present invention, the extracting two-channel features from the two-dimensional image includes:
performing pixel enhancement on the two-dimensional image, and selecting pixel points with pixel values larger than a preset pixel threshold value in the two-dimensional image after the pixel enhancement as pixel points to be screened;
determining a connected domain formed by the pixel points to be screened as a characteristic pixel region of the target frame;
carrying out global feature extraction on the feature pixel area to obtain global features;
performing local feature extraction on the feature pixel region to obtain local features;
and collecting the global features and the local features to obtain the dual-channel features.
In detail, the pixel enhancing the two-dimensional image includes: sequentially selecting areas in the two-dimensional image by using an n x n image window to obtain a plurality of image areas, wherein n is a positive integer; calculating a binary code element of the central pixel of each image area by using a preset algorithm according to the central pixel of each image area and the neighborhood pixels of the central pixel; and performing pixel enhancement on the central pixel according to the binary code element.
Optionally, the calculating a binary symbol of the central pixel of each image region by using a preset algorithm according to the central pixel of each image region and the neighborhood pixels of the central pixel includes:
calculating the binary symbol T of the center pixel of the image area using the following algorithm:
T = Σ_{i=1}^{n} s(P_0 − P_e) · 2^(i−1)
s(x) = 1 if x ≥ 0, and s(x) = 0 if x < 0
wherein P_0 is the central pixel of the image area, P_e is the mean value of the neighborhood pixels of the central pixel, n is the number of the neighborhood pixels, and s(P_0 − P_e) is the quantization operation.
The embodiment of the invention performs detail enhancement processing on the converted two-dimensional image, filters noise pixel points in the converted two-dimensional image, and performs local texture deepening on the details of the two-dimensional image, thereby highlighting the detail characteristics in the two-dimensional image and being beneficial to improving the accuracy of analyzing the two-dimensional image.
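The quantization of a center pixel against the mean of its neighborhood can be sketched in a few lines. This is an illustrative, LBP-style reading of the enhancement step described above; the function name, window contents, and threshold convention are our own assumptions, not taken from the patent.

```python
import numpy as np

def binary_symbol(window):
    """Quantize the center pixel P_0 of an n x n window against the mean
    P_e of its neighborhood pixels: s(P_0 - P_e) = 1 if P_0 >= P_e,
    else 0 -- a hypothetical sketch of the patent's enhancement step."""
    n = window.shape[0]
    c = n // 2
    p0 = window[c, c]                       # center pixel P_0
    mask = np.ones_like(window, dtype=bool)
    mask[c, c] = False                      # exclude the center itself
    pe = window[mask].mean()                # neighborhood mean P_e
    return 1 if p0 - pe >= 0 else 0         # quantization s(P_0 - P_e)

patch = np.array([[10, 10, 10],
                  [10, 90, 10],
                  [10, 10, 10]], dtype=float)
print(binary_symbol(patch))   # center far above neighborhood mean: prints 1
```

A bright center against a dark neighborhood yields 1, so local detail such as texture edges is emphasized after enhancement.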
In one embodiment of the present invention, the global features of the feature pixel region may be extracted by using a Histogram of Oriented Gradients (HOG), a Deformable Part Model (DPM), a Local Binary Pattern (LBP), or the like, or may be extracted by using a pre-trained artificial intelligence Model with a specific image feature extraction function, where the artificial intelligence Model includes, but is not limited to, a VGG-net Model and a U-net Model.
Further, the performing local feature extraction on the feature pixel region to obtain a local feature includes: performing frame selection on the characteristic pixel areas one by using a preset sliding window to obtain a pixel window; selecting one pixel point from the pixel window one by one as a target pixel point; judging whether the pixel value of the target pixel point is an extreme value in the pixel window; when the pixel value of the target pixel point is not an extreme value in the pixel window, returning to the step of selecting one pixel point as the target pixel point from the pixel window one by one; when the pixel value of the target pixel point is an extreme value in the pixel window, determining the target pixel point as a key point; vectorizing the pixel values of all key points in all the pixel windows, and collecting the obtained vectors as the local features of the feature pixel areas.
In this embodiment, the sliding window may be a pre-constructed selection box with a certain area, which may be used to frame the pixels in the characteristic pixel region, for example, a square selection box constructed with 10 pixels as height and 10 pixels as width.
In detail, the extreme value includes a maximum value and a minimum value, and when the pixel value of the target pixel point is the maximum value or the minimum value in the pixel window, the target pixel point is determined to be the key point of the pixel window.
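The sliding-window extremum test above can be sketched as follows. The window size and the use of a strict comparison (which keeps flat regions from qualifying) are illustrative simplifications of the text, not details fixed by the patent.

```python
import numpy as np

def keypoints(region, win=3):
    """Return pixels whose value is a strict maximum or minimum inside
    the win x win window centered on them -- a sketch of the key-point
    selection step; win=3 is an illustrative assumption."""
    h, w = region.shape
    r = win // 2
    pts = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            block = region[i - r:i + r + 1, j - r:j + r + 1]
            others = np.ones((win, win), dtype=bool)
            others[r, r] = False            # compare against neighbors only
            neigh = block[others]
            v = region[i, j]
            if v > neigh.max() or v < neigh.min():
                pts.append((i, j, v))
    return pts

region = np.zeros((5, 5))
region[2, 2] = 5.0                          # a single bright key point
print(keypoints(region))
```

The pixel values of the selected key points would then be vectorized and collected as the local features of the feature pixel area.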
In the embodiment of the invention, the two-dimensional image is analyzed through dual-channel feature extraction, so that the loss of specific features in the image processing process in a single feature extraction mode is avoided, more accurate feature extraction is realized, and the accuracy of subsequent analysis of the frame quality is improved.
Further, the two-channel feature extraction can extract various features of the two-dimensional image from different dimensions, so that in order to prevent the problems of redundancy, confusion and the like of the features during subsequent feature analysis, the results of the two-channel feature extraction can be subjected to feature fusion to obtain fusion features.
In the embodiment of the present invention, as shown in fig. 2, the performing feature fusion on the dual-channel feature to obtain a fusion feature includes:
s21, mapping each feature in the two-channel features one by one to different network layers in a pre-constructed full-connection layer network;
s22, jumping and linking the double-channel characteristics in different network layers to obtain connection characteristics;
and S23, performing composite addition operation on each connection feature to obtain a fusion feature.
In detail, the skip link (skip connection) is a direct dimension superposition of different features; the complex addition (add) is to add different features in the complex field, e.g., feature x and feature y complex add as: z = x + iy, wherein z is a fused feature of feature x and feature y.
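The two fusion operations named above can be illustrated in a few lines; the example feature vectors and their pairing are our own illustrative assumptions.

```python
import numpy as np

x = np.array([1.0, 2.0])        # feature from channel one (illustrative)
y = np.array([3.0, 4.0])        # feature from channel two (illustrative)

# skip connection: direct dimension superposition of the two features
skip = np.concatenate([x, y])

# composite addition: add the features in the complex field, z = x + iy
z = x + 1j * y

print(skip)   # prints [1. 2. 3. 4.]
print(z)      # prints [1.+3.j 2.+4.j]
```

Concatenation preserves both features side by side, while the complex-field addition packs the pair into one array of the original length, which is the fused feature described above.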
In the embodiment of the invention, the unification and standardization of multi-dimensional features can be realized by performing feature fusion on the result of the dual-channel feature extraction, so that the accuracy of subsequent quality detection on the target frame is improved.
And S3, separating the frame pixel areas in the two-dimensional image corresponding to each different direction according to the fusion characteristics, measuring the size of the frame pixel areas, and calculating a first detection score of the target frame according to the size measurement result.
In one practical application scenario of the invention, the fusion features are only abstract features of the image, and the two-dimensional images of different directions all contain a large amount of pixel information other than the target frame. If the two-dimensional images are analyzed directly, computing resources are wasted, and the complexity of the image information also reduces the final analysis accuracy.
Therefore, according to the embodiment of the invention, the frame pixel area of the target frame can be separated from the two-dimensional images in each different direction according to the fusion characteristics, and the frame pixel area is subjected to targeted analysis, so that the accuracy of quality detection on the target frame is improved.
In an embodiment of the present invention, the separating the frame pixel region in the two-dimensional image corresponding to each different orientation according to the fusion feature includes:
selecting one of the two-dimensional images from the two-dimensional images corresponding to different directions one by one as an image to be separated;
calculating the pixel size of the image to be separated and calculating the characteristic size of the fusion characteristic corresponding to the image to be separated;
performing up-sampling on the fusion feature corresponding to the image to be separated according to the pixel size and the feature size until the feature size of the fusion feature corresponding to the image to be separated is the same as the pixel size of the image to be separated;
and cutting the image to be separated according to the up-sampled fusion characteristics to obtain a frame pixel area in the image to be separated.
In detail, since the fusion feature is obtained by feature extraction from the image to be separated, the feature map of the fusion feature is smaller than the image to be separated. In order to accurately crop the frame pixel area in the image to be separated, the fusion feature can be up-sampled until its feature size is the same as the pixel size of the image to be separated, and the image to be separated is then cropped according to the up-sampled fusion feature to obtain the frame pixel area in the image to be separated.
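The up-sample-then-crop step can be sketched as follows. Nearest-neighbor interpolation and the treatment of the fused feature as a binary frame mask are illustrative assumptions; the patent does not fix the interpolation scheme or the cropping rule.

```python
import numpy as np

def upsample_nearest(feat, out_h, out_w):
    """Nearest-neighbor up-sampling of a feature map to the pixel size
    of the image to be separated -- an illustrative sketch."""
    h, w = feat.shape
    rows = np.arange(out_h) * h // out_h    # map each output row to a source row
    cols = np.arange(out_w) * w // out_w    # map each output column likewise
    return feat[np.ix_(rows, cols)]

feat = np.array([[0, 1],
                 [1, 0]], dtype=float)      # 2x2 fused feature, read as a mask
mask = upsample_nearest(feat, 4, 4)         # now matches a 4x4 image
image = np.arange(16, dtype=float).reshape(4, 4)
frame_pixels = image * (mask > 0)           # keep only frame-region pixels
```

After up-sampling, each feature cell covers the image pixels it was derived from, so masking the image with it separates the frame pixel area.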
Furthermore, in order to realize accurate analysis of the target frame, size measurement can be performed on the frame pixel area, and then the first detection score of the target frame is analyzed and calculated according to the size measurement result.
In the embodiment of the present invention, referring to fig. 3, the calculating a first detection score of the target frame according to the size includes:
s31, acquiring size data of a standard frame;
s32, calculating a difference value between the size measurement result and the standard frame size data;
and S33, mapping the difference value to a preset numerical value interval to obtain a first detection score.
In detail, the standard frame dimension data is dimension data of a qualified frame acquired in advance.
Specifically, since the calculated range interval of the difference is too large, in order to realize subsequent standardized analysis of the difference, the difference may be mapped to a preset value interval by using a gaussian function, a normalization function, or other functions, so as to obtain a first detection score.
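A Gaussian mapping of the size difference onto a bounded score interval can be sketched as below. The function name and the tolerance parameter sigma are illustrative assumptions; the patent only says a Gaussian, normalization, or other function may be used.

```python
import math

def detection_score(measured, standard, sigma=5.0):
    """Map the difference between a size measurement and the standard
    frame size onto (0, 1] with a Gaussian: an exact match scores 1 and
    the score decays with deviation. sigma is an illustrative tolerance
    parameter, not a value taken from the patent."""
    diff = measured - standard
    return math.exp(-(diff ** 2) / (2 * sigma ** 2))

print(detection_score(100.0, 100.0))   # exact match: prints 1.0
```

Because the output always lies in (0, 1], differences of arbitrary magnitude become directly comparable, which is the standardization the text calls for.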
In the embodiment of the invention, two-dimensional image analysis can accurately detect the size information of the target frame in the planar two-dimensional dimension, and can, to a certain extent, accurately reflect the quality of the target frame.
And S4, selecting two-dimensional images in one direction one by one as target images, and calculating the image distortion coefficient of the target images by using a double-coordinate method.
In one practical application scenario of the present invention, a two-dimensional plane image obtained by shooting the target frame from any single direction contains only the two-dimensional plane data (height and width) of the frame observable from that direction; the three-dimensional data (depth/distance) between frame components cannot be displayed in the image. Therefore, in order to improve the accuracy of analyzing the target frame, images can be selected one direction at a time from the two-dimensional images of different directions as target images, and image calibration is performed on each target image until the two-dimensional images of all directions have been calibrated, so as to obtain the distortion coefficient of each two-dimensional image in three-dimensional space, which facilitates subsequent accurate evaluation of the target frame.
In an embodiment of the present invention, the calculating an image distortion coefficient of the target image by using a dual coordinate method includes:
establishing a pixel coordinate system by taking any corner point of the target image as an origin, and establishing an image coordinate system by taking a central pixel of the target image as the origin;
calculating lens internal parameters of a camera for shooting the target image according to the coordinates of the target image in the image coordinate system and the pixel coordinate system;
and calculating a distortion coefficient of the target image according to the lens intrinsic parameters.
In detail, the image coordinate system (x, y) is constructed by taking the central pixel of the target image as the origin, and can be used for describing the projection relationship of an object from the camera coordinate system to the image plane during imaging of the target image; the pixel coordinate system (u, v) is established by taking a corner point of the target image as the origin, can be used for describing the coordinates of each pixel point on the digital image (photo) after the target image is imaged, and is the coordinate system in which the information actually read from the camera is located.
Specifically, since the pixel coordinate system (u, v) represents only the column number and the row number of a pixel, and does not express the position of the pixel in the image in physical units, the image coordinate system (x, y) expressed in physical units (such as millimeters) is also established, so that the lens parameters and distortion coefficients of the target image can be accurately calculated by combining the two coordinate systems.
In an embodiment of the present invention, the calculating a distortion coefficient of the target image according to the lens intrinsic parameters includes:
calculating a mapping matrix of each pixel point of the target image between the image coordinate system and the pixel coordinate system according to the principal point coordinate and the height and width of each pixel of the target image in the image coordinate system;
mapping the ideal pixel points selected in advance in the target image to a three-dimensional coordinate system constructed in advance according to the mapping matrix to obtain actual coordinates of the ideal pixel points in the three-dimensional coordinate system;
constructing an internal reference matrix according to the camera lens internal reference of the target image, and calculating the ideal coordinates of the ideal pixel points in the three-dimensional coordinate system by using the internal reference matrix;
calculating a radial distortion coefficient according to the radial difference between the actual coordinate and the ideal coordinate, calculating a tangential distortion coefficient according to the tangential difference between the actual coordinate and the ideal coordinate, and collecting the radial distortion coefficient and the tangential distortion coefficient as the distortion coefficient of the target image.
In detail, according to the height and width of the principal point coordinates and each pixel of the target image in the image coordinate system, it can be calculated that each pixel in the target image has the following mapping relationship in the pixel coordinate system and the image coordinate system:
u = x/dx + u0

v = y/dy + v0

wherein u is the horizontal axis coordinate of a pixel point (x, y) in the target image after being mapped to the pixel coordinate system, v is the vertical axis coordinate of the pixel point (x, y) after being mapped to the pixel coordinate system, x is the horizontal axis coordinate of the pixel point in the image coordinate system, y is the vertical axis coordinate of the pixel point in the image coordinate system, dx is the width of each pixel of the target image in the image coordinate system, dy is the height of each pixel of the target image in the image coordinate system, u0 is the abscissa of the principal point coordinate, and v0 is the ordinate of the principal point coordinate.
Specifically, the mapping relationship may be converted into the following matrix form to obtain the mapping matrix:
[u]   [1/dx    0    u0] [x]
[v] = [  0   1/dy   v0] [y]
[1]   [  0     0     1] [1]
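The homogeneous mapping between image coordinates and pixel coordinates described above can be sketched as follows; the numeric values in the usage example (pixel pitch, principal point) are illustrative assumptions:

```python
def image_to_pixel(x, y, dx, dy, u0, v0):
    """Map image-plane coordinates (x, y), in physical units such as
    millimeters, to pixel coordinates (u, v) using the homogeneous
    mapping matrix [[1/dx, 0, u0], [0, 1/dy, v0], [0, 0, 1]].

    dx, dy are the physical width and height of one pixel, and
    (u0, v0) is the principal point in pixel coordinates.
    """
    u = x / dx + u0
    v = y / dy + v0
    return u, v
```

For example, with a 0.01 mm pixel pitch and a principal point at (320, 240), the image point (2.0 mm, 3.0 mm) maps to pixel (520, 540).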
furthermore, one pixel point can be arbitrarily selected from the target image as an ideal pixel point, the mapping matrix is utilized to calculate the ideal pixel point, and the selected ideal pixel point is mapped to a pre-constructed three-dimensional coordinate system according to the mapping matrix to obtain the actual coordinate of the ideal pixel point in the three-dimensional coordinate system.
In the embodiment of the invention, the following internal reference matrix can be constructed according to the lens internal reference of the target image:
    [fx   0   u0]
K = [ 0   fy  v0]
    [ 0   0    1]
wherein fx is the sum of the widths of all pixels of the target image in the horizontal axis direction in the image coordinate system, and fy is the sum of the heights of all pixels of the target image in the vertical axis direction in the image coordinate system.
In detail, the ideal pixel points can be mapped into the three-dimensional coordinate system by using the internal reference matrix to obtain the ideal coordinates of the ideal pixel points in the three-dimensional coordinate system; the radial and tangential differences between the ideal coordinates and the actual coordinates are then calculated to obtain the radial distortion coefficient and the tangential distortion coefficient.
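The source does not give the distortion formula itself, only that radial and tangential coefficients are derived from coordinate differences. As a point of reference, the widely used Brown radial/tangential model relates ideal (undistorted) coordinates to actual (distorted) ones as sketched below; treating this as the model in use is an assumption:

```python
def apply_distortion(x, y, k1, k2, p1, p2):
    """Map ideal normalized coordinates (x, y) to their distorted
    position using the common Brown radial/tangential model.

    k1, k2 are radial distortion coefficients; p1, p2 are tangential
    distortion coefficients.
    """
    r2 = x * x + y * y                      # squared distance from the optical axis
    radial = 1.0 + k1 * r2 + k2 * r2 * r2   # radial scaling factor
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d
```

Fitting k1, k2, p1, p2 so that this model reproduces the observed difference between ideal and actual coordinates yields the distortion coefficients of the target image.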
And S5, constructing a space coordinate system, and mapping the two-dimensional image into the space coordinate system according to the plurality of directions to obtain an image coordinate.
In the embodiment of the invention, in order to perform three-dimensional analysis on two-dimensional images shot at different directions, a space coordinate system needs to be constructed, and each two-dimensional image is mapped into the space coordinate system so as to convert two-dimensional plane data into three-dimensional space data.
In the embodiment of the present invention, the constructing a spatial coordinate system, and mapping the two-dimensional image into the spatial coordinate system according to the plurality of orientations to obtain image coordinates includes:
establishing a coordinate system by taking the camera position corresponding to a two-dimensional image in any direction as the origin, the horizontal direction of the camera as the x axis, the vertical direction of the camera as the y axis, and the direction perpendicular to the plane of the x axis and the y axis as the z axis, and mapping the two-dimensional image into the space coordinate system by using a preset map function to obtain image coordinates.
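The source names a "preset map function" without defining it. One possible sketch, under the assumption that the cameras for the different directions sit on a circle around the frame with their image planes facing the center, is:

```python
import math

def map_to_space(u, v, azimuth_deg, distance):
    """Place an image-plane point (u, v) into a shared spatial
    coordinate system.

    Assumes the camera for each shooting direction lies on a circle
    of radius `distance` around the frame, at azimuth `azimuth_deg`,
    with the image x-axis along the circle's tangent and the image
    y-axis vertical. Both the camera layout and this map function are
    assumptions; the source does not specify them.
    """
    theta = math.radians(azimuth_deg)
    x = distance * math.cos(theta) - u * math.sin(theta)
    y = v
    z = distance * math.sin(theta) + u * math.cos(theta)
    return x, y, z
```

Applying such a function to every pixel coordinate of each two-dimensional image yields the image coordinates in the common spatial coordinate system.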
In the embodiment of the invention, the two-dimensional plane coordinates shot in different directions are mapped to the constructed space coordinate system, so that the data contained in the plane image are converted from two dimensions to three dimensions, and the subsequent three-dimensional space analysis of the target frame is facilitated, and the accuracy of quality detection of the target frame is improved.
And S6, carrying out coordinate correction on the image coordinates according to the image distortion coefficient to obtain the corrected coordinates of each two-dimensional image.
In the embodiment of the invention, because the two-dimensional images mapped into the space coordinate system are obtained by shooting from different directions with a camera, distortion caused during shooting by factors such as the distance and direction between the camera and the target frame can also offset the pixel coordinates in the three-dimensional space.
Therefore, the coordinates of the image mapped into the spatial coordinate system can be corrected according to the image distortion coefficient of the two-dimensional image in each direction calculated in step S4, so as to improve the accuracy of the coordinates of each image in the spatial coordinate system.
In the embodiment of the invention, the image coordinates of each two-dimensional image in the space coordinate system can be respectively counted, and linear operation is performed by using the distortion coefficient and each image coordinate, so that the image coordinate is corrected, and the corrected coordinate of each two-dimensional image is obtained.
In detail, the linear operation includes, but is not limited to, addition, subtraction, multiplication, and division.
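A minimal sketch of such a linear correction is given below. The specific operation (scaling each coordinate by 1/(1 + k)) is an assumption; the source only says that addition, subtraction, multiplication, or division of the distortion coefficient and the image coordinates may be used:

```python
def correct_coordinates(coords, k):
    """Correct mapped spatial coordinates with a simple linear
    operation: scale each coordinate by 1/(1 + k) to undo a uniform
    first-order distortion.

    `coords` is a list of (x, y, z) tuples and `k` is the image
    distortion coefficient for the image those coordinates came from.
    This particular formula is an illustrative assumption.
    """
    s = 1.0 / (1.0 + k)
    return [(x * s, y * s, z * s) for x, y, z in coords]
```

Running this per image, with that image's own distortion coefficient, yields the corrected coordinates of each two-dimensional image.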
In the embodiment of the invention, the image coordinates are subjected to coordinate correction through the image distortion coefficient, so that the coordinate information of the two-dimensional image corresponding to each direction in the space can be restored, the three-dimensional information of the two-dimensional image is added, and the accuracy of detecting the target frame is improved.
S7, a three-dimensional frame model of the target frame is built according to the corrected coordinates, three-dimensional measurement is conducted on the three-dimensional frame model, and a second detection score of the target frame is calculated according to the three-dimensional measurement result.
In an embodiment of the present invention, the constructing a three-dimensional frame model of the target frame according to the corrected coordinates and performing three-dimensional measurement on the three-dimensional frame model includes:
counting characteristic coordinates of a frame pixel region in each two-dimensional image in the space coordinate system;
determining the space connected domain of the characteristic coordinates as a three-dimensional frame model of the target frame;
and solving the curved surface integral of the three-dimensional frame model to obtain a three-dimensional measurement result.
In detail, the surface integral of the three-dimensional frame model can be solved by using Gauss's theorem (the divergence theorem) to obtain a three-dimensional measurement result.
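As one concrete instance of applying Gauss's theorem, the volume enclosed by a closed triangulated surface can be computed as a surface integral: with F = (x, y, z)/3, div F = 1, so the volume equals the integral of F·n over the surface, which reduces to a sum of signed tetrahedron volumes. Representing the three-dimensional frame model as a triangle mesh is an assumption here:

```python
def mesh_volume(vertices, faces):
    """Volume enclosed by a closed, consistently oriented triangle
    mesh, via the divergence theorem: each outward-facing triangle
    (a, b, c) contributes the signed volume of the tetrahedron
    (origin, a, b, c), which is det[a, b, c] / 6."""
    vol = 0.0
    for i, j, k in faces:
        ax, ay, az = vertices[i]
        bx, by, bz = vertices[j]
        cx, cy, cz = vertices[k]
        # scalar triple product a . (b x c), expanded as a determinant
        vol += (ax * (by * cz - bz * cy)
                - ay * (bx * cz - bz * cx)
                + az * (bx * cy - by * cx)) / 6.0
    return abs(vol)
```

For a unit tetrahedron with vertices (0,0,0), (1,0,0), (0,1,0), (0,0,1) and outward-oriented faces, this returns 1/6, the expected volume.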
In the embodiment of the present invention, the step of calculating the second detection score of the target vehicle frame according to the three-dimensional measurement result is the same as the step of calculating the first detection score of the target vehicle frame according to the size measurement result in S3, and details are not repeated here.
S8, calculating a quality comprehensive score of the target frame according to the first detection score and the second detection score.
In an embodiment of the present invention, the calculating a quality comprehensive score of the target frame according to the first detection score and the second detection score includes:
calculating the quality comprehensive score of the target frame by using the following weight algorithm:
G=α*Q+β*P
wherein G is the quality comprehensive score, Q is the first detection score, P is the second detection score, α and β are preset weight coefficients, and α + β = 1.
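The weight algorithm above is a direct weighted sum; the 0.6/0.4 weights in the usage example are illustrative assumptions, since the source leaves the weight coefficients as preset values:

```python
def quality_score(q, p, alpha, beta):
    """Comprehensive quality score G = alpha*Q + beta*P, where Q and P
    are the first and second detection scores and alpha + beta = 1."""
    if abs(alpha + beta - 1.0) > 1e-9:
        raise ValueError("weight coefficients must satisfy alpha + beta = 1")
    return alpha * q + beta * p
```

For example, with Q = 80, P = 60, α = 0.6 and β = 0.4, the comprehensive score G is 72.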
In detail, the quality comprehensive score is used for indicating the quality of the target frame: the larger the quality comprehensive score, the better the quality of the target frame.
In the embodiment of the invention, the comprehensive quality score of the target frame is calculated according to the first detection score and the second detection score, so that the comprehensive analysis of the two-dimensional plane data and the three-dimensional space data of the target frame in different directions is realized, and the improvement of the accuracy of quality detection on the target frame is facilitated.
According to the embodiment of the invention, the two-dimensional images of the target frame in multiple directions are comprehensively analyzed, so that the multi-directional two-dimensional quality detection of the target frame is realized, and the accuracy of the frame quality detection is favorably improved; meanwhile, a three-dimensional frame model of the target frame is constructed according to the two-dimensional images in the plurality of directions, so that the integral three-dimensional quality detection of the target frame is realized; and the comprehensive quality score of the target frame is obtained by combining the results of the two-dimensional quality detection and the three-dimensional quality detection, so that the accuracy of quality detection on the target frame is improved. Therefore, the frame quality detection method based on image analysis provided by the invention can solve the problem of low accuracy in frame quality detection.
Fig. 4 is a functional block diagram of a vehicle frame quality detection apparatus based on image analysis according to an embodiment of the present invention.
The vehicle frame quality detection device 100 based on image analysis can be installed in an electronic device. According to the functions realized, the vehicle frame quality detection device 100 based on image analysis may include an image processing module 101, a first calculating module 102, a second calculating module 103, and a comprehensive score analysis module 104. A module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device to perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the image processing module 101 is configured to acquire two-dimensional images obtained by shooting a target frame from multiple directions, extract dual-channel features from the two-dimensional images, and perform feature fusion on the dual-channel features to obtain fusion features;
the first calculating module 102 is configured to separate, according to the fusion features, frame pixel regions in the two-dimensional image corresponding to each of the different orientations, calculate sizes of the frame pixel regions, and calculate a first detection score of the target frame according to the sizes;
the second calculation module 103 is configured to select two-dimensional images in one of the orientations one by one as a target image, calculate an image distortion coefficient of the target image by using a dual-coordinate method, construct a spatial coordinate system, map the two-dimensional images into the spatial coordinate system according to the plurality of orientations to obtain image coordinates, perform coordinate correction on the image coordinates according to the image distortion coefficient to obtain corrected coordinates of each two-dimensional image, construct a three-dimensional frame model of the target frame according to the corrected coordinates, perform three-dimensional measurement on the three-dimensional frame model, and calculate a second detection score of the target frame according to a three-dimensional measurement result;
and the comprehensive score analysis module 104 is used for calculating a quality comprehensive score of the target frame according to the first detection score and the second detection score.
In detail, when the vehicle frame quality detection device 100 based on image analysis according to the embodiment of the present invention is used, the same technical means as the vehicle frame quality detection method based on image analysis described in fig. 1 to 3 is adopted, and the same technical effects can be produced, which is not described herein again.
Fig. 5 is a schematic structural diagram of an electronic device for implementing the vehicle frame quality detection method based on image analysis according to an embodiment of the present invention.
The electronic device may include a processor 10, a memory 11, a communication bus 12, and a communication interface 13, and may further include a computer program, such as a vehicle frame quality detection program based on image analysis, stored in the memory 11 and executable on the processor 10.
In some embodiments, the processor 10 may be composed of an integrated circuit, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same function or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital processing chips, graphics processors, and combinations of various control chips. The processor 10 is the control unit of the electronic device, connects the various components of the whole electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device by running or executing programs or modules (for example, executing the vehicle frame quality detection program based on image analysis, etc.) stored in the memory 11 and calling data stored in the memory 11.
The memory 11 includes at least one type of readable storage medium including flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), and the like, provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various types of data, such as the code of the vehicle frame quality detection program based on image analysis, but also to temporarily store data that has been output or will be output.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
The communication interface 13 is used for communication between the electronic device and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., WI-FI interface, Bluetooth interface, etc.), which are commonly used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a display (Display) or an input unit such as a keyboard (Keyboard), and may optionally be a standard wired interface or a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
Although the figure shows the electronic device with only certain components, it will be understood by those skilled in the art that the structure shown does not constitute a limitation of the electronic device, which may include fewer or more components than shown, combine some components, or arrange the components differently.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The frame quality detection program based on image analysis stored in the memory 11 of the electronic device is a combination of a plurality of instructions, and when running in the processor 10, can realize that:
acquiring two-dimensional images obtained by shooting a target frame from a plurality of directions;
extracting dual-channel features from the two-dimensional image, and performing feature fusion on the dual-channel features to obtain fusion features;
separating frame pixel areas in the two-dimensional image corresponding to each different position according to the fusion characteristics, calculating the size of each frame pixel area, and calculating a first detection score of the target frame according to the size;
selecting two-dimensional images in one direction one by one as target images, and calculating the image distortion coefficient of the target images by using a double-coordinate method;
constructing a space coordinate system, and mapping the two-dimensional image into the space coordinate system according to the plurality of directions to obtain an image coordinate;
carrying out coordinate correction on the image coordinates according to the image distortion coefficient to obtain corrected coordinates of each two-dimensional image;
constructing a three-dimensional frame model of the target frame according to the corrected coordinates, carrying out three-dimensional measurement on the three-dimensional frame model, and calculating a second detection score of the target frame according to a three-dimensional measurement result;
and calculating the quality comprehensive score of the target frame according to the first detection score and the second detection score.
Specifically, the specific implementation method of the instruction by the processor 10 may refer to the description of the relevant steps in the embodiment corresponding to the drawings, which is not described herein again.
Further, the integrated module/unit of the electronic device, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer-readable storage medium. The computer-readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U-disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, and a Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor of an electronic device, implements:
acquiring two-dimensional images obtained by shooting a target frame from a plurality of directions;
extracting dual-channel features from the two-dimensional image, and performing feature fusion on the dual-channel features to obtain fusion features;
separating frame pixel areas in the two-dimensional image corresponding to each different position according to the fusion characteristics, calculating the size of each frame pixel area, and calculating a first detection score of the target frame according to the size;
selecting two-dimensional images in one direction one by one as target images, and calculating the image distortion coefficient of the target images by using a double-coordinate method;
constructing a space coordinate system, and mapping the two-dimensional image into the space coordinate system according to the plurality of directions to obtain an image coordinate;
carrying out coordinate correction on the image coordinates according to the image distortion coefficient to obtain corrected coordinates of each two-dimensional image;
constructing a three-dimensional frame model of the target frame according to the corrected coordinates, carrying out three-dimensional measurement on the three-dimensional frame model, and calculating a second detection score of the target frame according to a three-dimensional measurement result;
and calculating the quality comprehensive score of the target frame according to the first detection score and the second detection score.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The embodiment of the application can acquire and process related data based on artificial intelligence technology. Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use the knowledge to obtain the best result.
Furthermore, it will be obvious that the term "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (9)

1. A vehicle frame quality detection method based on image analysis is characterized by comprising the following steps:
acquiring two-dimensional images obtained by shooting a target frame from a plurality of directions;
extracting dual-channel features from the two-dimensional image, and performing feature fusion on the dual-channel features to obtain fusion features;
separating frame pixel areas in the two-dimensional image corresponding to each different position according to the fusion characteristics, calculating the size of each frame pixel area, and calculating a first detection score of the target frame according to the size;
selecting two-dimensional images in one direction one by one as target images, establishing a pixel coordinate system by taking any corner point of the target images as an origin, establishing an image coordinate system by taking a central pixel of the target images as the origin, calculating lens intrinsic parameters of a camera for shooting the target images according to coordinates of the target images in the image coordinate system and the pixel coordinate system, and calculating image distortion coefficients of the target images according to the lens intrinsic parameters;
constructing a space coordinate system, and mapping the two-dimensional image into the space coordinate system according to the plurality of directions to obtain an image coordinate;
carrying out coordinate correction on the image coordinates according to the image distortion coefficient to obtain corrected coordinates of each two-dimensional image;
constructing a three-dimensional frame model of the target frame according to the corrected coordinates, carrying out three-dimensional measurement on the three-dimensional frame model, and calculating a second detection score of the target frame according to a three-dimensional measurement result;
and calculating the quality comprehensive score of the target frame according to the first detection score and the second detection score.
2. The image analysis-based vehicle frame quality detection method according to claim 1, wherein the extracting the two-channel features from the two-dimensional image comprises:
performing pixel enhancement on the two-dimensional image, and selecting pixel points with pixel values larger than a preset pixel threshold value in the two-dimensional image after the pixel enhancement as pixel points to be screened;
determining a connected domain formed by the pixel points to be screened as a characteristic pixel region of the target frame;
carrying out global feature extraction on the feature pixel area to obtain global features;
performing local feature extraction on the feature pixel area to obtain local features;
and collecting the global features and the local features to obtain the dual-channel features.
3. The image analysis-based vehicle frame quality detection method according to claim 1, wherein the feature fusion of the dual-channel features to obtain a fusion feature comprises:
mapping each feature in the two-channel features to different network layers in a pre-constructed full-connection layer network one by one;
carrying out jump linking on the two-channel characteristics in different network layers to obtain connection characteristics;
and performing composite addition operation on each connection characteristic to obtain a fusion characteristic.
4. The image analysis-based frame quality detection method according to claim 1, wherein separating the frame pixel regions in the two-dimensional images corresponding to the different orientations according to the fusion features comprises:
selecting, one by one, an image from the two-dimensional images corresponding to the different orientations as an image to be separated;
calculating the pixel size of the image to be separated and the feature size of the fusion feature corresponding to the image to be separated;
up-sampling the fusion feature corresponding to the image to be separated according to the pixel size and the feature size, until the feature size of the fusion feature equals the pixel size of the image to be separated;
and cropping the image to be separated according to the up-sampled fusion feature to obtain the frame pixel region in the image to be separated.
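The up-sample-then-crop step of claim 4 can be sketched as follows. Nearest-neighbour up-sampling, an integer pixel-to-feature size ratio, a single-channel image, and the 0.5 mask threshold are all assumptions made for brevity.

```python
import numpy as np

def separate_frame_region(image: np.ndarray, fused: np.ndarray,
                          thresh: float = 0.5) -> np.ndarray:
    """Sketch of claim 4: up-sample the coarse fusion feature map to the
    image's pixel size, then keep only the pixels the map marks as frame."""
    fy, fx = fused.shape
    iy, ix = image.shape
    # nearest-neighbour up-sampling by the pixel-size / feature-size ratio
    up = np.kron(fused, np.ones((iy // fy, ix // fx)))
    mask = up > thresh
    # crop: zero out everything outside the frame pixel region
    return np.where(mask, image, 0.0)
```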
5. The image analysis-based frame quality detection method according to claim 4, wherein constructing the three-dimensional frame model of the target frame according to the corrected coordinates and performing three-dimensional measurement on the three-dimensional frame model comprises:
counting the feature coordinates, in the space coordinate system, of the frame pixel region in each two-dimensional image;
determining the spatial connected domain of the feature coordinates as the three-dimensional frame model of the target frame;
and computing the surface integral of the three-dimensional frame model to obtain the three-dimensional measurement result.
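The surface integral of claim 5 is commonly approximated numerically once the model surface is triangulated. The sketch below assumes the three-dimensional frame model has already been meshed into triangles (the claim does not specify a representation) and computes the area integral as the sum of triangle areas.

```python
import numpy as np

def surface_area(vertices: np.ndarray, triangles: np.ndarray) -> float:
    """Approximate the surface integral over a triangulated model: each
    triangle contributes half the norm of its edge cross product."""
    v = vertices[triangles]        # shape (n_triangles, 3 corners, 3 coords)
    e1 = v[:, 1] - v[:, 0]         # first edge of every triangle
    e2 = v[:, 2] - v[:, 0]         # second edge of every triangle
    return 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1).sum()
```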
6. The image analysis-based frame quality detection method according to any one of claims 1 to 5, wherein calculating the first detection score of the target frame according to the size comprises:
acquiring standard frame size data;
calculating the difference between the size measurement and the standard frame size data;
and mapping the difference into a preset numerical interval to obtain the first detection score.
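One way to realize claim 6's mapping is a clamped linear map from the absolute size difference into a score interval. The linear form, the 5-unit tolerance, and the [0, 100] interval are assumptions; the claim only requires some mapping into a preset interval.

```python
def first_detection_score(measured: float, standard: float,
                          tolerance: float = 5.0,
                          low: float = 0.0, high: float = 100.0) -> float:
    """Sketch of claim 6: map |measured - standard| into [low, high].
    A zero difference scores `high`; a difference at or beyond the
    tolerance scores `low`. All constants are illustrative."""
    diff = abs(measured - standard)
    ratio = min(diff / tolerance, 1.0)   # 0 = perfect match, 1 = at/over tolerance
    return high - ratio * (high - low)
```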
7. An image analysis-based vehicle frame quality detection device, characterized in that the device comprises:
the image processing module is used for acquiring a two-dimensional image obtained by shooting a target frame from a plurality of directions, extracting dual-channel features from the two-dimensional image, and performing feature fusion on the dual-channel features to obtain fusion features;
the first calculation module is used for separating frame pixel areas in the two-dimensional image corresponding to different directions according to the fusion characteristics, calculating the size of the frame pixel areas and calculating a first detection score of the target frame according to the size;
the second calculation module is used for selecting two-dimensional images in one direction one by one as target images, establishing a pixel coordinate system by taking any corner point of the target images as an origin, establishing an image coordinate system by taking a central pixel of the target images as the origin, calculating lens intrinsic parameters of a camera for shooting the target images according to coordinates of the target images in the image coordinate system and the pixel coordinate system, calculating an image distortion coefficient of the target images according to the lens intrinsic parameters, establishing a space coordinate system, mapping the two-dimensional images into the space coordinate system according to the directions to obtain image coordinates, performing coordinate correction on the image coordinates according to the image distortion coefficient to obtain corrected coordinates of each two-dimensional image, establishing a three-dimensional frame model of the target frame according to the corrected coordinates, performing three-dimensional measurement on the three-dimensional frame model, and calculating a second detection score of the target frame according to a three-dimensional measurement result;
and the comprehensive score analysis module is used for calculating the comprehensive quality score of the target vehicle frame according to the first detection score and the second detection score.
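The coordinate correction performed by the second calculation module (and in claim 1) can be sketched with the standard radial distortion model. The two-coefficient model, the single-step approximate inverse, and all parameter values are assumptions; real calibration code (e.g. OpenCV-style) iterates the inverse.

```python
import numpy as np

def undistort_points(points, k1: float, k2: float,
                     cx: float, cy: float, fx: float, fy: float) -> np.ndarray:
    """Sketch of the image-distortion correction: apply the inverse of the
    radial model x_d = x * (1 + k1*r^2 + k2*r^4) in normalized image
    coordinates. k1, k2 derive from the lens intrinsic parameters; the
    one-step inverse used here is an illustrative simplification."""
    pts = np.asarray(points, dtype=float)
    # pixel coordinate system -> normalized image coordinate system
    x = (pts[:, 0] - cx) / fx
    y = (pts[:, 1] - cy) / fy
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    # approximate inverse of the radial model, then back to pixels
    xu, yu = x / scale, y / scale
    return np.stack([xu * fx + cx, yu * fy + cy], axis=1)
```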
8. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the image analysis-based vehicle frame quality detection method according to any one of claims 1 to 6.
9. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the image analysis-based vehicle frame quality detection method according to any one of claims 1 to 6.
CN202210360837.1A 2022-04-07 2022-04-07 Vehicle frame quality detection method, device, equipment and medium based on image analysis Active CN114708230B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210360837.1A CN114708230B (en) 2022-04-07 2022-04-07 Vehicle frame quality detection method, device, equipment and medium based on image analysis

Publications (2)

Publication Number Publication Date
CN114708230A CN114708230A (en) 2022-07-05
CN114708230B true CN114708230B (en) 2022-12-16

Family

ID=82172075


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109949899A (en) * 2019-02-28 2019-06-28 未艾医疗技术(深圳)有限公司 Image three-dimensional measurement method, electronic equipment, storage medium and program product
CN112102409A (en) * 2020-09-21 2020-12-18 杭州海康威视数字技术股份有限公司 Target detection method, device, equipment and storage medium
WO2021027710A1 (en) * 2019-08-12 2021-02-18 阿里巴巴集团控股有限公司 Method, device, and equipment for object detection
CN113516660A (en) * 2021-09-15 2021-10-19 江苏中车数字科技有限公司 Visual positioning and defect detection method and device suitable for train
CN113870981A (en) * 2021-10-22 2021-12-31 卫宁健康科技集团股份有限公司 Image detection method, device, electronic equipment and system
CN114119992A (en) * 2021-10-28 2022-03-01 清华大学 Multi-mode three-dimensional target detection method and device based on image and point cloud fusion

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101975552A (en) * 2010-08-30 2011-02-16 天津工业大学 Method for measuring key point of car frame based on coding points and computer vision
CN110689008A (en) * 2019-09-17 2020-01-14 大连理工大学 Monocular image-oriented three-dimensional object detection method based on three-dimensional reconstruction
CN112556580B (en) * 2021-03-01 2021-09-03 北京领邦智能装备股份公司 Method, device, system, electronic device and storage medium for measuring three-dimensional size
CN113096094B (en) * 2021-04-12 2024-05-17 吴俊� Three-dimensional object surface defect detection method
CN114241338A (en) * 2022-02-15 2022-03-25 中航建筑工程有限公司 Building measuring method, device, equipment and storage medium based on image recognition


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Rail surface defect detection based on two-dimensional and three-dimensional visual information"; Wang Jingqiang; China Masters' Theses Full-text Database, Information Science and Technology Series; 2018-08-15 (No. 8); full text *


Similar Documents

Publication Publication Date Title
CN107944450B (en) License plate recognition method and device
CN111241989A (en) Image recognition method and device and electronic equipment
CN104166841A (en) Rapid detection identification method for specified pedestrian or vehicle in video monitoring network
CN111695609A (en) Target damage degree determination method, target damage degree determination device, electronic device, and storage medium
CN112699775A (en) Certificate identification method, device and equipment based on deep learning and storage medium
CN116229007B (en) Four-dimensional digital image construction method, device, equipment and medium using BIM modeling
CN113554008B (en) Method and device for detecting static object in area, electronic equipment and storage medium
CN111914939A (en) Method, device and equipment for identifying blurred image and computer readable storage medium
CN116168351B (en) Inspection method and device for power equipment
CN111127516A (en) Target detection and tracking method and system without search box
CN112528908A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN112132812A (en) Certificate checking method and device, electronic equipment and medium
CN114049568A (en) Object shape change detection method, device, equipment and medium based on image comparison
CN112906671B (en) Method and device for identifying false face-examination picture, electronic equipment and storage medium
CN109523570B (en) Motion parameter calculation method and device
CN116137061B (en) Training method and device for quantity statistical model, electronic equipment and storage medium
CN114708230B (en) Vehicle frame quality detection method, device, equipment and medium based on image analysis
CN114882059A (en) Dimension measuring method, device and equipment based on image analysis and storage medium
CN115601684A (en) Emergency early warning method and device, electronic equipment and storage medium
CN113627394B (en) Face extraction method and device, electronic equipment and readable storage medium
CN114783042A (en) Face recognition method, device, equipment and storage medium based on multiple moving targets
CN115757987A (en) Method, device, equipment and medium for determining accompanying object based on trajectory analysis
CN114240924A (en) Power grid equipment quality evaluation method based on digitization technology
CN113792671A (en) Method and device for detecting face synthetic image, electronic equipment and medium
CN112101139B (en) Human shape detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant