CN110705560A - Tire text acquisition method and device and tire specification detection method - Google Patents

Tire text acquisition method and device and tire specification detection method

Info

Publication number
CN110705560A
CN110705560A (application CN201910974900.9A)
Authority
CN
China
Prior art keywords
tire
text
image
tire text
text image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910974900.9A
Other languages
Chinese (zh)
Inventor
周康明
周佳敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd filed Critical Shanghai Eye Control Technology Co Ltd
Priority to CN201910974900.9A
Publication of CN110705560A
Legal status: Pending

Classifications

    • G06V 30/153: Character recognition; Image acquisition; Segmentation of character regions using recognition of characters or words
    • G06F 18/214: Pattern recognition; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N 3/08: Computing arrangements based on biological models; Neural networks; Learning methods
    • G06T 7/0002: Image analysis; Inspection of images, e.g. flaw detection
    • G06T 7/10: Image analysis; Segmentation; Edge detection
    • G06T 2207/20081: Indexing scheme for image analysis or image enhancement; Training; Learning
    • G06T 2207/20084: Indexing scheme for image analysis or image enhancement; Artificial neural networks [ANN]
    • G06V 2201/07: Indexing scheme relating to image or video recognition or understanding; Target detection
    • G06V 30/10: Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a method and a device for acquiring tire texts, a method for detecting tire specifications, computer equipment and a computer-readable storage medium. The method comprises the following steps: acquiring a tire image; detecting the tire image through the target detection model to obtain a tire text image; inputting the tire text image into a depth residual error network trained in advance, segmenting the tire text image through the depth residual error network to obtain a plurality of segmentation results, and combining the segmentation results to obtain a tire text outline, wherein the tire text outline is formed by connecting points around the tire text; and performing text recognition on the tire text outline to obtain tire text information. The tire text outline obtained by the method can be better fitted with the outline of the real tire text, so that the accuracy of the recognized tire text is higher.

Description

Tire text acquisition method and device and tire specification detection method
Technical Field
The present application relates to the field of tire testing technologies, and in particular, to a method and an apparatus for obtaining a tire text, a method for testing a tire specification, a computer device, and a computer-readable storage medium.
Background
As an important component of the vehicle, the vehicle tire bears the vehicle's weight, transmits traction, braking and steering forces, and bears the road-surface reaction force, and its quality is directly related to personal safety. Therefore, in the annual inspection of the vehicle, the performance of the vehicle tires must be inspected.
A vehicle tire is generally engraved with text information representing the tire specification and the tire structure, and whether the tire is qualified can be effectively judged by recognizing this text information. However, the conventional method for obtaining tire text uses a polygonal target detection algorithm to detect a polygonal region of the tire text. The polygon detected by such an algorithm has at most 16 sides, so for tire text with relatively large curvature the polygon still cannot completely fit the actual outline of the tire text, and redundant characters are included that interfere with subsequent character recognition.
Therefore, the tire text information obtained by the conventional method for obtaining tire text is of poor accuracy.
Disclosure of Invention
In view of the above, it is necessary to provide a method and an apparatus for acquiring a tire text, a method for detecting a tire specification, a computer device, and a computer-readable storage medium, which can improve the accuracy of the tire text.
A method of obtaining tire text, the method comprising:
acquiring a tire image;
detecting the tire image through a target detection model to obtain a tire text image;
inputting the tire text image into a depth residual error network trained in advance, segmenting the tire text image through the depth residual error network to obtain a plurality of segmentation results, and combining the segmentation results to obtain a tire text outline, wherein the tire text outline is formed by connecting points around a tire text;
and performing text recognition on the tire text outline to obtain tire text information.
In one embodiment, inputting the tire text image into a depth residual error network trained in advance, segmenting the tire text image through the depth residual error network to obtain a plurality of segmentation results, and merging the segmentation results to obtain a tire text outline, includes:
expanding the tire text image to obtain an expanded tire text image;
inputting the tire text image after edge expansion into a depth residual error network trained in advance, segmenting the tire text image through the depth residual error network to obtain a plurality of segmentation results, and combining the segmentation results to obtain a tire text outline.
In one embodiment, the detecting the tire image through the target detection model to obtain a tire text image includes:
detecting the tire image through a target detection model to obtain a plurality of first candidate regions;
carrying out non-maximum suppression on the plurality of first candidate regions so as to select a plurality of second candidate regions without overlapping relation in the plurality of first candidate regions;
and acquiring the confidence coefficient of each second candidate region, and determining the second candidate region with the confidence coefficient larger than a preset confidence coefficient threshold value in the plurality of second candidate regions as the tire text image.
In one embodiment, inputting the tire text image into a depth residual error network trained in advance, segmenting the tire text image through the depth residual error network to obtain a plurality of segmentation results, and merging the segmentation results to obtain a tire text outline, includes:
inputting the tire text image into a depth residual error network trained in advance, and extracting semantic features in the tire text image through the depth residual error network to obtain semantic features corresponding to each layer of the depth residual error network;
segmenting the tire text image according to the semantic features corresponding to each layer of the network to obtain a plurality of segmentation results;
and combining the segmentation results according to a progressive scale expansion algorithm to obtain the tire text outline.
In one embodiment, merging the segmentation results according to a progressive scale expansion algorithm to obtain a tire text outline, includes:
obtaining the sizes of the plurality of segmentation results, wherein the segmentation results are ordered by size into the minimum segmentation result, the second-smallest segmentation result, the third-smallest segmentation result, and so on up to the maximum segmentation result;
determining the minimum segmentation result as an initialized text outline;
scanning each pixel in the second-smallest segmentation result, and merging the second-smallest segmentation result into the initialized text outline according to the scanning result to obtain a first merged outline;
scanning each pixel in the third-smallest segmentation result, and merging the third-smallest segmentation result into the first merged outline according to the scanning result to obtain a second merged outline;
and so on, until each pixel in the maximum segmentation result is scanned and the maximum segmentation result is merged into the previous merged outline according to the scanning result, so as to obtain the tire text outline.
In one embodiment, the training process of the deep residual network includes:
obtaining a tire text image sample;
inputting the tire text image sample into an initialized depth residual error network, and training the initialized depth residual error network according to the loss function L = λLc + (1 - λ)Ls to obtain a trained depth residual error network, wherein L represents the loss value, Lc represents the loss function of the tire text image, Ls represents the loss function of the shrunken segmentation results, and λ is used to balance the importance between Lc and Ls.
In one embodiment, after obtaining the tire text image sample, the method further includes:
and expanding the edges of the tire text image samples to obtain the expanded tire text image samples.
A method for detecting tire specifications, including the method for obtaining a tire text according to the above embodiment, wherein the tire text information includes tire specification text information;
the tire specification detection method further includes:
acquiring real tire specification information;
and comparing the tire specification text information with the real tire specification information, and determining that the specification of the tire is qualified if the tire specification text information is consistent with the real tire specification information.
An apparatus for obtaining text of a tire, the apparatus comprising:
the image acquisition module is used for acquiring a tire image;
the image detection module is used for detecting the tire image through the target detection model to obtain a tire text image;
the text outline determining module is used for inputting the tire text image into a depth residual error network trained in advance, segmenting the tire text image through the depth residual error network to obtain a plurality of segmentation results, and combining the segmentation results to obtain a tire text outline, wherein the tire text outline is formed by connecting points around a tire text;
and the text recognition module is used for performing text recognition on the tire text outline to obtain tire text information.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a tire image;
detecting the tire image through a target detection model to obtain a tire text image;
inputting the tire text image into a depth residual error network trained in advance, segmenting the tire text image through the depth residual error network to obtain a plurality of segmentation results, and combining the segmentation results to obtain a tire text outline, wherein the tire text outline is formed by connecting points around a tire text;
and performing text recognition on the tire text outline to obtain tire text information.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a tire image;
detecting the tire image through a target detection model to obtain a tire text image;
inputting the tire text image into a depth residual error network trained in advance, segmenting the tire text image through the depth residual error network to obtain a plurality of segmentation results, and combining the segmentation results to obtain a tire text outline, wherein the tire text outline is formed by connecting points around a tire text;
and performing text recognition on the tire text outline to obtain tire text information.
According to the method and the device for obtaining the tire text, the method for detecting the tire specification, the computer device and the computer readable storage medium, the tire image is detected to obtain the tire text image, then the tire text image is segmented and combined through the depth residual error network to obtain the tire text outline, and further the tire text information in the tire text outline is identified. The tire text image can be understood as the approximate position of the tire text in the tire image, and then the tire text outline which can more accurately reflect the tire text position is positioned from the approximate position, so that the positioning accuracy of the tire text is improved. And the tire text outline is determined through the depth residual error network, so that the tire text outline can be more irregular in shape and is not limited to a polygonal text area with at most 16 sides, the tire text outline can be better fitted to the outline of the real tire text, and therefore the accuracy of the recognized tire text is higher.
Drawings
FIG. 1 is a diagram illustrating an exemplary environment in which a method for obtaining tire text may be implemented;
FIG. 2 is a schematic flow chart diagram illustrating a method for obtaining tire text in one embodiment;
FIG. 3 is a schematic flow chart illustrating the process of expanding the tire text image and then obtaining the tire text outline according to the expanded tire text image according to an embodiment;
FIG. 4 is a tire text image referenced in FIG. 3;
FIG. 5 is the edge-expanded tire text image referenced in FIG. 3;
FIG. 6 is a schematic flow chart illustrating training of a depth residual network based on tire text image samples in one embodiment;
FIG. 7 is a schematic flow chart illustrating the process of inputting a tire text image into a pre-trained depth residual error network to obtain a tire text contour according to an embodiment;
FIG. 8 is an image of a minimum segmentation result K1 in one embodiment;
FIG. 9 is an image of the second-smallest segmentation result K2 in one embodiment;
FIG. 10 is an image of the maximum segmentation result K3 in one embodiment;
FIG. 11 is a block diagram showing a configuration of a tire text acquisition device according to an embodiment;
FIG. 12 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The method for acquiring the tire text can be applied to the application environment shown in fig. 1. In which the tire text acquiring apparatus 102 is connected to the image capturing apparatus 104. The image capture device 104 is used to capture tire images of various vehicle tires. The tire text acquiring device 102 is configured to receive the tire image uploaded by the image acquiring device 104, and perform a series of operations such as detection, segmentation, combination, and identification on the tire image to finally obtain tire text information on the tire.
The tire text acquiring device 102 may be, but is not limited to, various servers (such as a local server or a cloud server), a personal computer, a notebook computer, a smart phone, a tablet computer, a portable wearable device, and the like. The image capturing device 104 may be, but is not limited to, various cameras, and the like.
In one embodiment, as shown in fig. 2, a tire text acquisition method is provided, which is described by taking the method as an example applied to the tire text acquisition device 102 in fig. 1, and includes the following steps:
s202, tire images are acquired.
Wherein the tire image is an image containing a vehicle tire. The tire may be a tire for various movable vehicles, such as automobile tires, motorcycle tires, electric vehicle tires, and large truck tires, among others.
Specifically, the image pickup apparatus picks up tire images of various vehicle tires. As an embodiment, when the operator issues an image capture command, the image capture device starts capturing tire images of the vehicle tires. As another embodiment, when the image capturing device detects a vehicle tire, the image capturing device automatically captures a tire image of the vehicle tire, for example, when the vehicle tire is detected within a capturing range of the camera, the camera automatically captures the vehicle tire to obtain a tire image. Then, the image acquisition device uploads the acquired tire image to the tire text acquisition device, so that the tire text acquisition device can acquire the tire image.
And S204, detecting the tire image through the target detection model to obtain a tire text image.
Specifically, the tire text acquisition device detects the tire image through the target detection model, wherein the target detection model determines the approximate position of the tire text in the tire image by using an image detection frame, and the tire text acquisition device determines the tire text image according to that position. Alternatively, the image detection frame may be a frame of an arbitrary shape; for example, it may be a polygonal frame, a circular frame, an elliptical frame, or another irregularly shaped frame. Taking a rectangular detection frame as an example, after obtaining the tire image, the tire text acquisition device may detect a rectangular tire text image as shown in fig. 4 through the target detection model.
Note that the target detection model is trained in advance on tire image samples in which the tire text region is labeled with an image detection frame. The labeling may be done manually or automatically by a machine. It should be noted that the labeled tire text region can be understood as first locating the approximate position of the tire text in the tire image, so that when the specific position of the tire text is further determined from this approximate position, fewer computing resources are occupied and the accuracy is improved. It can further be understood that the tire text image is the approximate location of the tire text first located in the tire image.
And S206, inputting the tire text image into a depth residual error network trained in advance, segmenting the tire text image through the depth residual error network to obtain a plurality of segmentation results, and combining the segmentation results to obtain the tire text outline.
Wherein the tire text outline is formed by connecting points around the tire text. It can be understood that the tire text contour obtained in the embodiment of the present application is not a polygon with a fixed shape such as a quadrangle, a hexagon, or a hexadecimal, but a plurality of points around the tire text are obtained first, and then the points are connected to obtain the tire text contour.
Wherein the depth residual error network is preset in the tire text acquisition device. It should be noted that, the deep residual error network is to divide a series of training tasks into a plurality of blocks for training, and finally achieve the purpose of minimizing the overall error by minimizing the error of each block.
Specifically, before executing S206, the tire text acquiring device trains the initialized depth residual error network based on the labeled tire text image sample, so as to obtain a trained depth residual error network.
The tire text obtaining device inputs the tire text image into a depth residual error network trained in advance after obtaining the tire text image, semantic features in the tire text image are extracted by using the depth residual error network to obtain a feature pyramid, then the tire text image is segmented based on the feature pyramid to obtain a plurality of segmentation results of different scales, then the segmentation results of the different scales are combined by using a progressive scale expansion algorithm, and finally a tire text outline formed by connecting points around the tire text is obtained.
And S208, performing text recognition on the tire text outline to obtain tire text information.
Specifically, the tire text acquisition device performs text recognition on the text information in the tire text outline after obtaining the tire text outline, so as to obtain the tire text information. Alternatively, the tire text information includes tire specification text information (e.g., 700R16), tire structure text information, and the like.
The method for acquiring the tire text comprises the steps of detecting a tire image to obtain a tire text image, segmenting and combining the tire text image through a depth residual error network to obtain a tire text outline, and identifying tire text information in the tire text outline. The tire text image can be understood as the approximate position of the tire text in the tire image, and then the tire text outline which can more accurately reflect the tire text position is positioned from the approximate position, so that the positioning accuracy of the tire text is improved. And the tire text outline is determined through the depth residual error network, so that the tire text outline can be more irregular in shape and is not limited to a polygonal text area with at most 16 sides, the tire text outline can be better fitted to the outline of the real tire text, and therefore the accuracy of the recognized tire text is higher.
In one embodiment, please refer to fig. 3, which relates to a specific process of expanding the tire text image and further obtaining the tire text outline according to the expanded tire text image. On the basis of the above embodiment, S206 includes the steps of:
s2062, carrying out edge expansion on the tire text image to obtain an edge expanded tire text image;
s2064, inputting the tire text image after edge expansion into a depth residual error network trained in advance, segmenting the tire text image through the depth residual error network to obtain a plurality of segmentation results, and combining the segmentation results to obtain the tire text outline.
Alternatively, the tire text acquiring device includes various implementation manners when expanding the tire text image, which are listed as follows:
implementation mode (one): obtaining a sideline of the tire text image; and expanding the side line of the tire text image outwards according to a preset edge expanding value to obtain the tire text image after edge expanding. The extended tire text image includes a tire text region and an extended region, and the tire text region corresponds to a position of the tire text image in the extended tire text image.
Implementation mode (b): obtaining a sideline of the tire text image; according to a preset edge expanding value, outwards expanding the edge line of the tire text image to obtain an initial tire text image after edge expanding, wherein the initial tire text image after edge expanding comprises a tire text region and an expanding region, and the tire text region corresponds to the position of the tire text image in the initial tire text image after edge expanding; and filling the expansion area by using a specified color, and determining the filled expanded initial tire text image as an expanded tire text image.
Illustratively, please refer to fig. 4 and 5 together, wherein fig. 4 is a tire text image and fig. 5 is the tire text image after edge expansion. After obtaining the tire text image, the tire text acquisition device extends the length and width of the tire text image in a preset extension manner so that the tire text image is located in the middle of the extended tire text image, and fills the edge area of the extended tire text image with a specified color, for example black or gray.
For example, assume that the tire text image is a square image with a side length of a. The tire text acquisition equipment expands the side length of the square image by 2 times according to a preset edge expansion mode, so that the tire text image after the final edge expansion is also a square image, and the side length is 3 a. It is understood that the size of the tire text image is unchanged during the edge expanding process, and only the edge area is expanded around the tire text image, so that the edge-expanded tire text image is obtained.
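For illustration only, the following is a minimal Python sketch of the edge-expansion step described above, assuming an OpenCV-based implementation; the function name expand_border, the padding ratio and the fill color are assumptions and are not taken from the patent itself.

```python
import cv2
import numpy as np

def expand_border(text_image: np.ndarray, ratio: float = 1.0,
                  fill_value=(0, 0, 0)) -> np.ndarray:
    """Pad the tire text image on all four sides.

    With ratio=1.0 a square crop of side a becomes a 3a x 3a image, matching
    the worked example above; the original crop stays centered and the new
    border is filled with the specified color (black here).
    """
    h, w = text_image.shape[:2]
    top = bottom = int(round(h * ratio))
    left = right = int(round(w * ratio))
    return cv2.copyMakeBorder(text_image, top, bottom, left, right,
                              borderType=cv2.BORDER_CONSTANT, value=fill_value)

if __name__ == "__main__":
    crop = np.full((64, 64, 3), 255, dtype=np.uint8)  # stand-in for a detected text crop
    expanded = expand_border(crop, ratio=1.0)
    print(crop.shape, "->", expanded.shape)           # (64, 64, 3) -> (192, 192, 3)
```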
In the embodiment of the application, because the text region occupies a smaller proportion of the whole edge-expanded tire text image than of the original tire text image, the text detection in the subsequent text recognition process occupies fewer computing resources and achieves higher accuracy.
In one embodiment, the specific process of detecting a tire image by an object detection model to obtain a tire text image is involved. On the basis of the above embodiment, S204 includes the steps of: the tire text acquisition equipment inputs the tire image into a pre-trained target detection model, and the tire image is detected through the target detection model to obtain a plurality of first candidate areas; carrying out non-maximum suppression on the plurality of first candidate regions, and selecting a plurality of second candidate regions without overlapping relation from the plurality of first candidate regions; and acquiring the confidence coefficient of each second candidate region, and determining the second candidate region with the confidence coefficient larger than a preset confidence coefficient threshold value in the plurality of second candidate regions as the tire text image.
Alternatively, the target detection model may be a deep learning target detection model. Illustratively, the description takes a deep-learning-based rectangular target detection network (SSD) as an example. The tire text acquisition device predicts a plurality of rectangular-frame first candidate regions of the tire image using the trained rectangular target detection network. Since some of these rectangular-frame first candidate regions overlap, in order to obtain candidate regions having no overlapping relationship, non-maximum suppression (NMS) is performed on the plurality of rectangular-frame first candidate regions by the rectangular target detection network, a plurality of rectangular-frame second candidate regions having no overlapping relationship are selected from the overlapping first candidate regions, and the rectangular-frame second candidate regions whose confidence is greater than a preset confidence threshold among the currently predicted second candidate regions are taken as the finally output tire text images.
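As a reading aid, the following is a minimal Python sketch of the post-processing described above (non-maximum suppression over the first candidate regions followed by a confidence threshold over the second candidate regions); it is not the SSD network itself, and the IoU threshold, the confidence threshold and the function names are assumptions.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, boxes given as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def select_text_regions(boxes, scores, iou_thresh=0.5, conf_thresh=0.8):
    """NMS over the first candidate regions, then keep the second candidate
    regions whose confidence exceeds the preset confidence threshold."""
    order = np.argsort(scores)[::-1]          # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_thresh]  # drop overlapping boxes
    keep = [i for i in keep if scores[i] > conf_thresh]
    return boxes[keep], scores[keep]

if __name__ == "__main__":
    boxes = np.array([[10, 10, 60, 30], [12, 12, 62, 32], [100, 40, 180, 70]], dtype=float)
    scores = np.array([0.95, 0.60, 0.90])
    print(select_text_regions(boxes, scores))  # keeps the two non-overlapping, confident boxes
```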
In the embodiment of the application, the deep learning target detection model is adopted to detect the tire images, so that the detection accuracy is higher, meanwhile, the deep learning target detection model has the self-learning capability, and after the self-learning, the detection accuracy is further effectively improved.
In one embodiment, the training process of the deep learning target detection model specifically includes: firstly, tire image samples with different shooting angles, different illumination intensities and different tire models are collected, then areas where tire texts are located are marked out in the tire image samples by adopting an image detection frame (such as a rectangular frame), and then coordinate information of the image detection frame and the tire image samples are input into an initialized deep learning target detection model for training, and finally the trained deep learning target detection model is obtained. Alternatively, the deep learning target detection model may be a rectangular target detection network ssd based on deep learning.
In one embodiment, please refer to fig. 7, which relates to a possible implementation process of inputting a tire text image into a depth residual error network trained in advance, segmenting the tire text image through the depth residual error network to obtain a plurality of segmentation results, and merging the plurality of segmentation results to obtain a tire text outline. On the basis of the above embodiment, S206 includes the steps of:
s206a, inputting the tire text image into a depth residual error network trained in advance, and extracting semantic features in the tire text image through the depth residual error network to obtain semantic features corresponding to each layer of the depth residual error network;
s206b, segmenting the tire text image according to the semantic features corresponding to each layer of the network to obtain a plurality of segmentation results;
s206c, merging the segmentation results according to the progressive scale expansion algorithm to obtain the tire text outline.
The semantic features refer to semantic elements which are specific to a certain subclass of real words, can restrict the syntactic format in which the small subclass of real words is located, and are sufficiently different from other subclass of real words.
Specifically, the tire text acquisition equipment inputs a tire text image into a depth residual error network trained in advance, semantic features in the tire text image are extracted through the depth residual error network, and a feature pyramid is obtained and comprises semantic features corresponding to each layer of the depth residual error network. And then, the tire text acquisition equipment segments the tire text image according to the semantic features corresponding to each layer of the network to obtain a plurality of segmentation results. And then the tire text acquisition equipment merges the multiple segmentation results according to a progressive scale expansion algorithm to obtain a tire text outline. Alternatively, the depth residual network may be a ResNet depth residual network.
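The exact structure of the depth residual error network is not given in the text above, so the following PyTorch sketch only illustrates the general idea: a residual backbone exposes semantic features at each stage, the features are fused into a pyramid-like representation, and a small head outputs one segmentation map per scale. The choice of ResNet-18, the channel widths and the number of kernels are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class KernelSegmenter(nn.Module):
    """Residual backbone + simple feature fusion + n segmentation kernels."""

    def __init__(self, num_kernels: int = 3):
        super().__init__()
        backbone = resnet18(weights=None)  # the patent's exact backbone is not specified
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])
        # 1x1 convs bring every stage to a common channel width before fusion
        self.laterals = nn.ModuleList([nn.Conv2d(c, 64, kernel_size=1)
                                       for c in (64, 128, 256, 512)])
        self.head = nn.Conv2d(64 * 4, num_kernels, kernel_size=1)

    def forward(self, x):
        feats, out = [], self.stem(x)
        for stage, lateral in zip(self.stages, self.laterals):
            out = stage(out)
            feats.append(lateral(out))       # per-stage semantic features
        size = feats[0].shape[-2:]
        fused = torch.cat([F.interpolate(f, size=size, mode="bilinear",
                                         align_corners=False) for f in feats], dim=1)
        return torch.sigmoid(self.head(fused))  # one map per segmentation scale

if __name__ == "__main__":
    kernels = KernelSegmenter(num_kernels=3)(torch.randn(1, 3, 224, 224))
    print(kernels.shape)  # torch.Size([1, 3, 56, 56])
```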
As an embodiment, the specific process of merging the multiple segmentation results according to the progressive scale expansion algorithm is as follows: obtaining the sizes of the plurality of segmentation results, wherein the segmentation results are ordered by size into the minimum segmentation result, the second-smallest segmentation result, the third-smallest segmentation result, and so on up to the maximum segmentation result; determining the minimum segmentation result as an initialized text outline; scanning each pixel in the second-smallest segmentation result, and merging the second-smallest segmentation result into the initialized text outline according to the scanning result to obtain a first merged outline; scanning each pixel in the third-smallest segmentation result, and merging the third-smallest segmentation result into the first merged outline according to the scanning result to obtain a second merged outline; and so on, until each pixel in the maximum segmentation result is scanned and the maximum segmentation result is merged into the previous merged outline according to the scanning result, so as to obtain the tire text outline.
Illustratively, taking three segmentation results as an example: the sizes of the three segmentation results are obtained, wherein the three segmentation results are ordered by size into the minimum segmentation result, the second-smallest segmentation result and the maximum segmentation result; the minimum segmentation result is determined as an initialized text outline; each pixel in the second-smallest segmentation result is scanned, and the second-smallest segmentation result is merged into the initialized text outline according to the scanning result to obtain a first merged outline; each pixel in the maximum segmentation result is then scanned, and the maximum segmentation result is merged into the first merged outline according to the scanning result to obtain the tire text outline.
For example, referring to fig. 8, 9 and 10 together, fig. 8-10 correspond to the minimum segmentation result K1, the second-smallest segmentation result K2 and the maximum segmentation result K3, respectively. Through the progressive scale expansion algorithm, the minimum segmentation result K1 is first used as the initialization of the text outline; then, by scanning each pixel in the second-smallest segmentation result K2, K2 is merged into the initialized text outline to obtain a first merged outline; and then, by scanning each pixel in the maximum segmentation result K3, K3 is merged into the first merged outline to obtain the final tire text outline.
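The following Python sketch illustrates one way to realize the progressive scale expansion merge described above: the connected components of the minimum segmentation result initialize the text labels, which then grow pixel by pixel through each larger segmentation result via breadth-first search. It follows the published progressive scale expansion idea rather than the patent's exact implementation, and the OpenCV helpers and function names are assumptions.

```python
from collections import deque
import cv2
import numpy as np

def progressive_scale_expansion(kernels):
    """Merge binary segmentation maps ordered from smallest to largest.

    `kernels` is a list [K1, ..., Kn] of HxW 0/1 maps. The connected
    components of the minimum result K1 initialize the text labels; each
    label then grows pixel by pixel through every larger map via BFS.
    """
    _, labels = cv2.connectedComponents(kernels[0].astype(np.uint8))
    h, w = labels.shape
    for larger in kernels[1:]:
        queue = deque(zip(*np.nonzero(labels)))          # current frontier
        while queue:
            y, x = queue.popleft()
            lab = labels[y, x]
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0 \
                        and larger[ny, nx]:
                    labels[ny, nx] = lab                  # expand into the larger result
                    queue.append((ny, nx))
    return labels                                         # one label per tire text instance

def label_to_contour(labels, label_id):
    """Points around one text instance: the outline is its set of boundary pixels."""
    mask = (labels == label_id).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours[0] if contours else None
```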
In one embodiment, please refer to fig. 6, which relates to a specific training process of the deep residual network. On the basis of the above embodiment, the training process of the deep residual error network includes the following steps:
s222, obtaining a tire text image sample;
and S224, inputting the tire text image sample into the initialized depth residual error network, and training the initialized depth residual error network according to the loss function L = λLc + (1 - λ)Ls to obtain the trained depth residual error network.
Where L represents the loss value and λ is used to balance the importance between Lc and Ls.
Lc represents the loss function of the tire text image: Lc = 1 - D(Sn, Gn), where Sn represents the network prediction result of the maximum text region and Gn represents the ground truth of the maximum text region.
wherein D(Si, Gi) is the Dice coefficient between the prediction and the label:
D(Si, Gi) = 2 × Σ(x,y) S(i, x, y) × G(i, x, y) / (Σ(x,y) S(i, x, y)² + Σ(x,y) G(i, x, y)²),
where S(i, x, y) and G(i, x, y) are the network prediction result Si and the polygon labeling result Gi at pixel (x, y).
Ls represents the loss function of the shrunken segmentation results; since the network prediction needs to be expanded from smaller segmentation results to larger ones, the network predicts multiple segmentation results ranging from small to large. The ground truth of each smaller segmentation result is obtained by shrinking the largest ground truth of the tire text image, with a shrink coefficient in (0, 1); the larger the coefficient, the greater the degree of shrinkage. The formula is:
p(i) = Area × (1 - r) / Perimeter, where p(i) indicates the number of pixels to shrink by, Area indicates the area of the largest ground truth, Perimeter indicates the perimeter of the largest ground truth, and r indicates the shrink coefficient. Here, the ground truth is the true result used for comparison with the predicted result during deep-learning network training and is commonly used to calculate the loss function.
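For concreteness, the following PyTorch sketch implements the combined loss L = λLc + (1 - λ)Ls and the shrink offset formula as described above, under the assumption that D is the Dice coefficient; how the several shrunken kernels are weighted inside Ls, and the value λ = 0.7, are assumptions not stated in the text.

```python
import torch

def dice_coefficient(pred, gt, eps=1e-6):
    """D(S, G) = 2*sum(S*G) / (sum(S^2) + sum(G^2)), computed per batch element."""
    pred, gt = pred.flatten(1), gt.flatten(1)
    inter = (pred * gt).sum(dim=1)
    return (2 * inter + eps) / ((pred ** 2).sum(dim=1) + (gt ** 2).sum(dim=1) + eps)

def combined_loss(pred_kernels, gt_kernels, lam=0.7):
    """L = lam * Lc + (1 - lam) * Ls.

    pred_kernels / gt_kernels: tensors of shape (B, n, H, W) ordered from the
    smallest (most shrunken) kernel to the largest (complete) text region.
    Lc uses the largest kernel; Ls averages the Dice losses of the shrunken ones.
    """
    lc = 1.0 - dice_coefficient(pred_kernels[:, -1], gt_kernels[:, -1]).mean()
    shrunk = [1.0 - dice_coefficient(pred_kernels[:, i], gt_kernels[:, i]).mean()
              for i in range(pred_kernels.shape[1] - 1)]
    ls = torch.stack(shrunk).mean()
    return lam * lc + (1 - lam) * ls

def shrink_offset(area: float, perimeter: float, r: float) -> float:
    """p(i) = Area * (1 - r) / Perimeter: the number of pixels a ground-truth
    polygon is shrunk by to build the label of a smaller segmentation result."""
    return area * (1.0 - r) / perimeter

if __name__ == "__main__":
    pred = torch.rand(2, 3, 64, 64)
    gt = (torch.rand(2, 3, 64, 64) > 0.5).float()
    print(combined_loss(pred, gt))  # scalar loss tensor
```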
In one embodiment, in order to save the computational resources in the tire text acquisition process, on the basis of the above embodiment, when the tire text image sample is used to train the initialized depth residual error network, the tire text image sample is further subjected to edge expansion. Specifically, the training process of the deep residual error network comprises the following steps:
s232, obtaining a tire text image sample;
s234, expanding the tire text image sample to obtain an expanded tire text image sample;
and S236, inputting the tire text image sample after edge expansion into an initialized depth residual error network, and training the initialized depth residual error network according to the loss function L = λLc + (1 - λ)Ls to obtain a trained depth residual error network, wherein L represents the loss value, Lc represents the loss function of the tire text image, Ls represents the loss function of the shrunken segmentation results, and λ is used to balance the importance between Lc and Ls.
As described above, the method of expanding the image includes various methods, and specifically, the following implementation methods are included:
implementation mode (one): obtaining a sideline of a tire text image sample; and expanding the side line of the tire text image sample outwards according to a preset edge expanding value to obtain the expanded tire text image sample. The extended tire text image sample comprises a tire text area and an extension area, wherein the tire text area corresponds to the position of the tire text image sample in the extended tire text image sample.
Implementation mode (b): obtaining a sideline of a tire text image sample; according to a preset edge expanding value, expanding the edge line of the tire text image sample outwards to obtain an initial tire text image after edge expanding, wherein the initial tire text image sample after edge expanding comprises a tire text area and an expanding area, and the tire text area corresponds to the position of the tire text image sample in the initial tire text image sample after edge expanding; and filling the extended area by using a specified color, and determining the filled extended initial tire text image sample as an extended tire text image sample.
In the embodiment, the tire text image sample is subjected to edge extension, and then the initialized depth residual error network is trained by using the tire text image sample, so that in the actual application stage, when the tire text information is acquired by using the trained depth residual error network, the whole computing resource can be reduced, and meanwhile, the acquisition efficiency of the tire text information is improved.
In one embodiment, the present invention relates to a method for detecting a tire specification, including the method for acquiring a tire text according to the above embodiment, wherein the tire text information includes tire specification text information;
the tire specification detection method further includes:
acquiring real tire specification information;
and comparing the tire specification text information with the real tire specification information, and determining that the specification of the tire is qualified if the tire specification text information is consistent with the real tire specification information.
Specifically, assume that the tire specification text information is a character string A: 700R16, and the real tire specification information is a character string B: 700R16. The character string A is compared with the character string B; if A is equal to B, the tire specification is detected as qualified, and if A is not equal to B, the tire specification is detected as unqualified.
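A minimal Python sketch of this comparison step is given below; the whitespace and case normalization goes slightly beyond the plain string equality described above and is an assumption.

```python
def check_tire_specification(recognized_text: str, real_specification: str) -> bool:
    """Return True (specification qualified) when the recognized tire
    specification string matches the real specification string."""
    def normalize(s: str) -> str:
        # uppercasing and whitespace removal are added assumptions,
        # beyond the plain string comparison described above
        return "".join(s.split()).upper()
    return normalize(recognized_text) == normalize(real_specification)


print(check_tire_specification("700R16", "700R16"))  # True  -> qualified
print(check_tire_specification("700R15", "700R16"))  # False -> unqualified
```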
According to the tire specification detection method, the obtained tire text outline can be better fitted with the real tire text outline, so that the accuracy of the recognized tire text is higher, and further the tire specification detection accuracy is higher.
In one embodiment, the present invention relates to a method for detecting a tire structure, including the method for acquiring a tire text according to the above embodiment, wherein the tire text information includes tire structure text information;
the method for detecting a tire structure further includes:
acquiring real tire structure information;
and comparing the tire structure text information with the real tire structure information, and determining that the structure of the tire is qualified if the tire structure text information is consistent with the real tire structure information.
According to the tire structure detection method, the obtained tire text outline can be better fitted with the real tire text outline, so that the accuracy of the recognized tire text is higher, and further the detection accuracy of the tire structure is higher.
It should be understood that although the various steps in the flow charts of fig. 2-7 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-7 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or in alternation with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 11, there is provided a tire text acquisition apparatus 30 including: an image acquisition module 302, an image detection module 304, a text outline determination module 306, and a text recognition module 308, wherein:
an image acquisition module 302 for acquiring an image of a tire.
And the image detection module 304 is configured to detect the tire image through the target detection model to obtain a tire text image.
The text outline determining module 306 is configured to input the tire text image into a depth residual error network trained in advance, segment the tire text image through the depth residual error network to obtain a plurality of segmentation results, and merge the plurality of segmentation results to obtain a tire text outline, where the tire text outline is formed by connecting points around the tire text.
And the text recognition module 308 is configured to perform text recognition on the tire text outline to obtain tire text information.
The device for acquiring the tire text firstly detects the tire image to obtain the tire text image, then segments and merges the tire text image through the depth residual error network to obtain the tire text outline, and further identifies the tire text information in the tire text outline. The tire text image can be understood as the approximate position of the tire text in the tire image, and then the tire text outline which can more accurately reflect the tire text position is positioned from the approximate position, so that the positioning accuracy of the tire text is improved. And the tire text outline is determined through the depth residual error network, so that the tire text outline can be more irregular in shape and is not limited to a polygonal text area with at most 16 sides, the tire text outline can be better fitted to the outline of the real tire text, and therefore the accuracy of the recognized tire text is higher.
In one embodiment, the text outline determining module 306 is specifically configured to expand the tire text image to obtain an expanded tire text image; inputting the tire text image after edge expansion into a depth residual error network trained in advance, segmenting the tire text image through the depth residual error network to obtain a plurality of segmentation results, and combining the segmentation results to obtain the tire text outline.
In one embodiment, the image detection module 304 is specifically configured to detect the tire image through an object detection model, so as to obtain a plurality of first candidate regions; carrying out non-maximum suppression on the plurality of first candidate regions so as to select a plurality of second candidate regions without overlapping relation in the plurality of first candidate regions; and acquiring the confidence coefficient of each second candidate region, and determining the second candidate region with the confidence coefficient larger than a preset confidence coefficient threshold value in the plurality of second candidate regions as the tire text image.
In one embodiment, the text contour determining module 306 is specifically configured to input the tire text image into a depth residual error network trained in advance, extract semantic features in the tire text image through the depth residual error network, and obtain semantic features corresponding to each layer of the depth residual error network; segmenting the tire text image according to the semantic features corresponding to each layer of the network to obtain a plurality of segmentation results; and combining the multiple segmentation results according to a progressive scale expansion algorithm to obtain the tire text outline.
For specific limitations of the tire text acquisition device, reference may be made to the above limitations of the tire text acquisition method, which are not described herein again. The modules in the tire text acquisition device may be implemented in whole or in part by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 12. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of obtaining tire text.
Those skilled in the art will appreciate that the architecture shown in fig. 12 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a tire image;
detecting the tire image through the target detection model to obtain a tire text image;
inputting the tire text image into a depth residual error network trained in advance, segmenting the tire text image through the depth residual error network to obtain a plurality of segmentation results, and combining the segmentation results to obtain a tire text outline, wherein the tire text outline is formed by connecting points around the tire text;
and performing text recognition on the tire text outline to obtain tire text information.
The computer equipment firstly detects the tire image to obtain a tire text image, then segments and merges the tire text image through the depth residual error network to obtain a tire text outline, and further identifies tire text information in the tire text outline. The tire text image can be understood as the approximate position of the tire text in the tire image, and then the tire text outline which can more accurately reflect the tire text position is positioned from the approximate position, so that the positioning accuracy of the tire text is improved. And the tire text outline is determined through the depth residual error network, so that the tire text outline can be more irregular in shape and is not limited to a polygonal text area with at most 16 sides, the tire text outline can be better fitted to the outline of the real tire text, and therefore the accuracy of the recognized tire text is higher.
In one embodiment, the processor, when executing the computer program, further performs the steps of: expanding the tire text image to obtain an expanded tire text image; inputting the tire text image after edge expansion into a depth residual error network trained in advance, segmenting the tire text image through the depth residual error network to obtain a plurality of segmentation results, and combining the segmentation results to obtain the tire text outline.
In one embodiment, the processor, when executing the computer program, further performs the steps of: detecting the tire image through a target detection model to obtain a plurality of first candidate regions; carrying out non-maximum suppression on the plurality of first candidate regions, and selecting a plurality of second candidate regions without overlapping relation from the plurality of first candidate regions; and acquiring the confidence coefficient of each second candidate region, and determining the second candidate region with the confidence coefficient larger than a preset confidence coefficient threshold value in the plurality of second candidate regions as the tire text image.
In one embodiment, the processor, when executing the computer program, further performs the steps of: inputting the tire text image into a depth residual error network trained in advance, and extracting semantic features in the tire text image through the depth residual error network to obtain semantic features corresponding to each layer of the depth residual error network; segmenting the tire text image according to the semantic features corresponding to each layer of the network to obtain a plurality of segmentation results; and combining the multiple segmentation results according to a progressive scale expansion algorithm to obtain the tire text outline.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a tire image;
detecting the tire image through the target detection model to obtain a tire text image;
inputting the tire text image into a depth residual error network trained in advance, segmenting the tire text image through the depth residual error network to obtain a plurality of segmentation results, and combining the segmentation results to obtain a tire text outline, wherein the tire text outline is formed by connecting points around the tire text;
and performing text recognition on the tire text outline to obtain tire text information.
The computer-readable storage medium detects the tire image to obtain a tire text image, and then segments and merges the tire text image through the depth residual error network to obtain a tire text outline, thereby identifying the tire text information in the tire text outline. The tire text image can be understood as the approximate position of the tire text in the tire image, and then the tire text outline which can more accurately reflect the tire text position is positioned from the approximate position, so that the positioning accuracy of the tire text is improved. And the tire text outline is determined through the depth residual error network, so that the tire text outline can be more irregular in shape and is not limited to a polygonal text area with at most 16 sides, the tire text outline can be better fitted to the outline of the real tire text, and therefore the accuracy of the recognized tire text is higher.
In one embodiment, the computer program when executed by the processor further performs the steps of: expanding the tire text image to obtain an expanded tire text image; inputting the tire text image after edge expansion into a depth residual error network trained in advance, segmenting the tire text image through the depth residual error network to obtain a plurality of segmentation results, and combining the segmentation results to obtain the tire text outline.
In one embodiment, the computer program when executed by the processor further performs the steps of: detecting the tire image through a target detection model to obtain a plurality of first candidate regions; carrying out non-maximum suppression on the plurality of first candidate regions, and selecting a plurality of second candidate regions without overlapping relation from the plurality of first candidate regions; and acquiring the confidence coefficient of each second candidate region, and determining the second candidate region with the confidence coefficient larger than a preset confidence coefficient threshold value in the plurality of second candidate regions as the tire text image.
In one embodiment, the computer program when executed by the processor further performs the steps of: inputting the tire text image into a depth residual error network trained in advance, and extracting semantic features in the tire text image through the depth residual error network to obtain semantic features corresponding to each layer of the depth residual error network; segmenting the tire text image according to the semantic features corresponding to each layer of the network to obtain a plurality of segmentation results; and combining the multiple segmentation results according to a progressive scale expansion algorithm to obtain the tire text outline.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described, but any such combination should be considered to fall within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application and are described in relative detail, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the scope of protection of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (11)

1. A method for obtaining tire text, the method comprising:
acquiring a tire image;
detecting the tire image through a target detection model to obtain a tire text image;
inputting the tire text image into a pre-trained deep residual network, segmenting the tire text image through the deep residual network to obtain a plurality of segmentation results, and merging the segmentation results to obtain a tire text outline, wherein the tire text outline is formed by connecting points around a tire text;
and performing text recognition on the tire text outline to obtain tire text information.
2. The method of claim 1, wherein inputting the tire text image into a pre-trained deep residual network, segmenting the tire text image through the deep residual network to obtain a plurality of segmentation results, and merging the segmentation results to obtain a tire text outline comprises:
expanding the edges of the tire text image to obtain an expanded tire text image;
inputting the expanded tire text image into a pre-trained deep residual network, segmenting the tire text image through the deep residual network to obtain a plurality of segmentation results, and merging the segmentation results to obtain the tire text outline.
3. The method of claim 2, wherein expanding the edges of the tire text image to obtain an expanded tire text image comprises:
obtaining the border of the tire text image;
extending the border of the tire text image outwards by a preset expansion value to obtain an initial expanded tire text image, wherein the initial expanded tire text image comprises a tire text region and an expansion region, and the tire text region corresponds to the position of the tire text image within the initial expanded tire text image;
and filling the expansion region with a specified color, and determining the filled initial expanded tire text image as the expanded tire text image.
4. The method of claim 1, wherein detecting the tire image through a target detection model to obtain a tire text image comprises:
detecting the tire image through the target detection model to obtain a plurality of first candidate regions;
performing non-maximum suppression on the plurality of first candidate regions to select, from them, a plurality of non-overlapping second candidate regions;
and acquiring the confidence of each second candidate region, and determining the second candidate region whose confidence is greater than a preset confidence threshold as the tire text image.
5. The method of claim 1, wherein inputting the tire text image into a pre-trained deep residual network, segmenting the tire text image through the deep residual network to obtain a plurality of segmentation results, and merging the segmentation results to obtain a tire text outline comprises:
inputting the tire text image into a pre-trained deep residual network, and extracting semantic features from the tire text image through the deep residual network to obtain the semantic features corresponding to each layer of the network;
segmenting the tire text image according to the semantic features corresponding to each layer to obtain a plurality of segmentation results;
and merging the plurality of segmentation results according to a progressive scale expansion algorithm to obtain the tire text outline.
6. The method of claim 5, wherein merging the plurality of segmentation results according to a progressive scale expansion algorithm to obtain the tire text outline comprises:
obtaining the sizes of the plurality of segmentation results, and ordering the segmentation results by size as a smallest segmentation result, a second-smallest segmentation result, a third-smallest segmentation result, ..., and a largest segmentation result;
determining the smallest segmentation result as an initialized text outline;
scanning each pixel in the second-smallest segmentation result, and merging the second-smallest segmentation result into the initialized text outline according to the scanning result to obtain a first merged outline;
scanning each pixel in the third-smallest segmentation result, and merging the third-smallest segmentation result into the first merged outline according to the scanning result to obtain a second merged outline;
and so on, until each pixel in the largest segmentation result has been scanned and the largest segmentation result has been merged into the previous merged outline according to the scanning result, to obtain the tire text outline.
7. The method of claim 1, wherein the training process of the deep residual network comprises:
obtaining a tire text image sample;
inputting the tire text image sample into an initialized deep residual network, and training the initialized deep residual network according to the loss function L = λLc + (1 - λ)Ls to obtain the trained deep residual network, wherein L represents the loss value, Lc represents the loss function of the tire text image, Ls represents the loss function of the shrunk segmentation results, and λ is used to balance the importance of Lc and Ls.
8. The method of claim 7, wherein after obtaining the tire text image sample, the method further comprises:
expanding the edges of the tire text image sample to obtain an expanded tire text image sample.
9. A tire specification detection method, comprising the tire text acquisition method according to any one of claims 1 to 8, wherein the tire text information includes tire specification text information;
the tire specification detection method further includes:
acquiring real tire specification information;
and comparing the tire specification text information with the real tire specification information, and determining that the tire specification is qualified if the two are consistent.
10. An apparatus for obtaining a tire text, the apparatus comprising:
the image acquisition module is used for acquiring a tire image;
the image detection module is used for detecting the tire image through a target detection model to obtain a tire text image;
the text outline determining module is used for inputting the tire text image into a pre-trained deep residual network, segmenting the tire text image through the deep residual network to obtain a plurality of segmentation results, and merging the segmentation results to obtain a tire text outline, wherein the tire text outline is formed by connecting points around a tire text;
and the text recognition module is used for performing text recognition on the tire text outline to obtain tire text information.
11. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 8 are implemented when the computer program is executed by the processor.
CN201910974900.9A 2019-10-14 2019-10-14 Tire text acquisition method and device and tire specification detection method Pending CN110705560A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910974900.9A CN110705560A (en) 2019-10-14 2019-10-14 Tire text acquisition method and device and tire specification detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910974900.9A CN110705560A (en) 2019-10-14 2019-10-14 Tire text acquisition method and device and tire specification detection method

Publications (1)

Publication Number Publication Date
CN110705560A true CN110705560A (en) 2020-01-17

Family

ID=69198837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910974900.9A Pending CN110705560A (en) 2019-10-14 2019-10-14 Tire text acquisition method and device and tire specification detection method

Country Status (1)

Country Link
CN (1) CN110705560A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102867185A (en) * 2012-10-31 2013-01-09 江苏大学 Method and system for identifying automobile tire number
CN108093647A (en) * 2014-12-22 2018-05-29 倍耐力轮胎股份公司 The method and apparatus of the defects of for being detected in Tire production process on tire
CN105067638A (en) * 2015-07-22 2015-11-18 广东工业大学 Tire fetal-membrane surface character defect detection method based on machine vision
CN109993040A (en) * 2018-01-03 2019-07-09 北京世纪好未来教育科技有限公司 Text recognition method and device
CN108288037A (en) * 2018-01-19 2018-07-17 深圳禾思众成科技有限公司 A kind of tire coding identifying system
CN109115773A (en) * 2018-07-20 2019-01-01 苏州光图智能科技有限公司 Tire information verification method, device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xiang Li et al., "Shape Robust Text Detection with Progressive Scale Expansion Network", arXiv:1806.02559v1 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260675A (en) * 2020-01-21 2020-06-09 武汉大学 High-precision extraction method and system for image real boundary
CN111539269A (en) * 2020-04-07 2020-08-14 北京达佳互联信息技术有限公司 Text region identification method and device, electronic equipment and storage medium
CN111612009A (en) * 2020-05-21 2020-09-01 腾讯科技(深圳)有限公司 Text recognition method, device, equipment and storage medium
CN112991311A (en) * 2021-03-29 2021-06-18 深圳大学 Vehicle overweight detection method, device and system and terminal equipment
CN112991311B (en) * 2021-03-29 2021-12-10 深圳大学 Vehicle overweight detection method, device and system and terminal equipment
CN116152686A (en) * 2023-04-21 2023-05-23 北京科技大学 Truck tire fire prediction method and system based on unmanned aerial vehicle remote sensing image

Similar Documents

Publication Publication Date Title
CN110705560A (en) Tire text acquisition method and device and tire specification detection method
CN110390666B (en) Road damage detection method, device, computer equipment and storage medium
CN110414507B (en) License plate recognition method and device, computer equipment and storage medium
CN113239874B (en) Behavior gesture detection method, device, equipment and medium based on video image
CN111680746B (en) Vehicle damage detection model training, vehicle damage detection method, device, equipment and medium
CN111507958B (en) Target detection method, training method of detection model and electronic equipment
CN109241842B (en) Fatigue driving detection method, device, computer equipment and storage medium
CN113033604B (en) Vehicle detection method, system and storage medium based on SF-YOLOv4 network model
CN111368758B (en) Face ambiguity detection method, face ambiguity detection device, computer equipment and storage medium
CN110889428A (en) Image recognition method and device, computer equipment and storage medium
CN110516541B (en) Text positioning method and device, computer readable storage medium and computer equipment
CN110796082B (en) Nameplate text detection method and device, computer equipment and storage medium
CN111178245A (en) Lane line detection method, lane line detection device, computer device, and storage medium
CN111242126A (en) Irregular text correction method and device, computer equipment and storage medium
CN111814905A (en) Target detection method, target detection device, computer equipment and storage medium
WO2021217940A1 (en) Vehicle component recognition method and apparatus, computer device, and storage medium
CN111368638A (en) Spreadsheet creation method and device, computer equipment and storage medium
US20160307050A1 (en) Method and system for ground truth determination in lane departure warning
CN111435446A (en) License plate identification method and device based on L eNet
KR20220093187A (en) Positioning method and apparatus, electronic device, computer readable storage medium
CN113034514A (en) Sky region segmentation method and device, computer equipment and storage medium
CN111444911B (en) Training method and device of license plate recognition model and license plate recognition method and device
CN111401421A (en) Image category determination method based on deep learning, electronic device, and medium
CN111488945A (en) Image processing method, image processing device, computer equipment and computer readable storage medium
CN112348116A (en) Target detection method and device using spatial context and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200117)