CN115810194A - Tire character recognition method and device and electronic equipment - Google Patents


Info

Publication number: CN115810194A
Application number: CN202211549111.9A
Authority: CN (China)
Prior art keywords: character, information, pattern, tire, image
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status, dates, or assignees listed)
Other languages: Chinese (zh)
Inventors: 李晶, 刘彦辉, 周璐, 张博
Current Assignee: Zhejiang Huaray Technology Co Ltd
Original Assignee: Zhejiang Huaray Technology Co Ltd
Application filed by Zhejiang Huaray Technology Co Ltd
Priority to CN202211549111.9A

Landscapes

  • Character Discrimination (AREA)

Abstract

The application discloses a tire character recognition method and device and electronic equipment, and relates to the technical field of image processing. The method comprises the following steps: acquiring images of a plurality of regions of a target tire and stitching the resulting images into an annular image; converting the annular image into a horizontal image through polar coordinate conversion; recognizing character information on the horizontal image and determining the category information of the currently recognized characters based on the character information; and comparing the character category with a standard category and outputting character position difference information and character content difference information. With this method, a 3D camera is used to acquire the tire images during character recognition, which improves recognition accuracy and enables recognition of common text, patterns, and special characters.

Description

Tire character recognition method and device and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for recognizing tire characters, and an electronic device.
Background
In the background of the rapid development of smart manufacturing, more and more advanced manufacturing technologies are applied to production, wherein Optical Character Recognition (OCR) technology becomes an important technical means in industrial production, which can convert characters printed on industrial products into text format for analysis and processing in subsequent steps.
Among the application scenarios of OCR technology, tire character recognition is an important one: the character markings on a tire's surface usually reflect certain properties and characteristics of the rubber tire on the vulcanization molding line. At present, most tire character inspection is done manually, but manual inspection alone struggles to meet requirements during large-batch production and in complex working environments.
When OCR technology is applied to tire character recognition, the most common approach is character recognition on the tire mold rather than on the tire itself: the mold offers a simple background, a regular structure, and similar advantages, and works well for large and medium-sized characters. In addition, for character defect detection, the characters on a tire are usually located first and their correctness checked afterwards; in the prior art, template matching is generally used to compare the image to be inspected with a template image and thereby obtain a character matching degree.
Disclosure of Invention
The application provides a tire character recognition method, a tire character recognition device, and electronic equipment. A 3D imaging method is used to collect tire images, realizing text localization and recognition, pattern recognition, and special character recognition, and solving the problem of poor recognition performance when recognizing tire characters.
In a first aspect, the present application provides a method of tire character recognition, the method comprising:
acquiring images of a plurality of areas of a target tire to obtain a plurality of images, and splicing the images to obtain an annular image;
converting the annular image into a horizontal image by polar coordinate conversion;
recognizing character information on the horizontal image, and determining category information of the currently recognized character based on the character information, wherein the category information comprises character position information and character content information of the character in the horizontal image;
and comparing the character type with a standard type and outputting character position difference information and character content difference information.
The method is used for collecting the image of the target tire, further converting the annular image into the horizontal image, then recognizing the characters, and determining the content difference and the position difference between the current characters and the standard type characters.
In an alternative embodiment, the recognizing the character information on the horizontal image includes:
locating a text line in the horizontal image;
dividing the text line into a plurality of contents to be detected;
detecting the plurality of contents to be detected, and combining detection results of the plurality of contents to be detected into one content to be detected;
inputting the content to be detected into an identification network for identification to obtain a character identification result in the text line;
taking the character recognition result as the character information.
By this method, the text line is recognized in a sliding-window manner, and the detection network detects the multiple contents to be detected in parallel, which improves character detection accuracy.
In an alternative embodiment, the recognizing the character information on the horizontal image includes:
performing on-line training on the special characters by adopting a deep learning retrieval network, and establishing a characteristic template in a database;
matching the special characters to be inquired in the horizontal image in the feature template to obtain a detection result of the special characters to be inquired;
and taking the detection result of the special character to be inquired as the character information.
By this method, special characters that have not appeared before can be added to the database through online training, ensuring accuracy when identifying special characters.
In an optional implementation manner, after comparing the category of the character with the standard category and outputting the character position difference information and the character content difference information, the method further includes:
inputting the pattern to be queried and the pattern template in the horizontal image into a shared feature extractor, and respectively extracting query pattern features of the pattern to be queried and pattern template features of the pattern template;
and comparing the query pattern features with the pattern template features, and outputting position difference information of the pattern to be queried if the similarity between the two is greater than a preset threshold value.
In conclusion, a 3D camera is used to acquire the image of the target tire, which improves image accuracy: the tire characters are imaged in three dimensions and the height difference is mapped to a gray-scale value, so the higher the camera precision, the better the character imaging quality. The scheme realizes recognition of common characters, patterns, and special characters on the tire, and during recognition the meta-feature convolution module and the meta-feature update module enable new pattern-class samples to be updated online.
In a second aspect, the present application provides a tire character recognition apparatus, the apparatus comprising:
the processing module is used for acquiring images of a plurality of areas of a target tire to obtain a plurality of images and splicing the images to obtain an annular image;
the conversion module is used for converting the annular image into a horizontal image through polar coordinate conversion;
the recognition module is used for recognizing character information on the horizontal image and determining the category of the currently recognized character based on the character information;
and the output module is used for comparing the type of the character with the standard type and outputting character position difference information and character content difference information.
In an optional embodiment, the identification module is further configured to:
locating lines of text in the horizontal image;
dividing the text line into a plurality of contents to be detected;
detecting the plurality of contents to be detected, and combining detection results of the plurality of contents to be detected into one content to be detected;
inputting the content to be detected into an identification network for identification to obtain a character identification result in the text line;
and taking the character recognition result as the character information.
In an optional embodiment, the identification module is further configured to:
performing on-line training on the special characters by adopting a deep learning retrieval network, and establishing a characteristic template in a database;
matching the special characters to be inquired in the horizontal image in the feature template to obtain a detection result of the special characters to be inquired;
and taking the detection result of the special character to be inquired as the character information.
In an optional embodiment, the identification module is further configured to:
inputting the pattern to be inquired and the pattern template in the horizontal image into a shared feature extractor, and respectively extracting the pattern feature to be inquired of the pattern to be inquired and the pattern template feature of the pattern template;
and comparing the query pattern features with the pattern template features, and outputting position difference information of the pattern to be queried if the similarity between the pattern features to be queried and the pattern template features is greater than a preset threshold value.
In a third aspect, the present application provides an electronic device, comprising:
a memory for storing a computer program;
and the processor is used for realizing the steps of the tire character recognition method when executing the computer program stored in the memory.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program, which when executed by a processor, implements the steps of a method for recognizing tire characters as described above.
For each of the second aspect to the fourth aspect and the possible technical effects achieved by each aspect, please refer to the above description of the possible technical effects achieved by the first aspect and the various possible schemes in the first aspect, and details are not repeated here.
Drawings
FIG. 1 is a flow chart of a method for recognizing tire characters according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a multi-region image acquisition according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a polar coordinate transformation coordinate system provided in an embodiment of the present application;
fig. 4 is a schematic view illustrating identification of a small-sized region of a text line according to an embodiment of the present application;
FIG. 5 is a schematic view of a tire character recognition apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application clearer, the present application will be further described in detail with reference to the accompanying drawings. The particular methods of operation in the method embodiments may also be applied to the apparatus or system embodiments. It should be noted that in the description of the present application, "a plurality" is understood as "at least two". "And/or" describes an association between objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. "A is connected with B" may mean: A and B are directly connected, or A and B are connected through C. In addition, in the description of the present application, the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or order.
The embodiments of the present application will be described in detail below with reference to the accompanying drawings.
In industrial production, OCR technology is an important technical means for converting characters printed on industrial products into text format for subsequent analysis and processing. OCR captures real-time images of products on the production line with industrial cameras, processes the images with vision software, and recognizes the characters in them to extract the information they carry. It plays a valuable role in product quality inspection on production lines, improving product quality to a great extent and avoiding the omissions of manual inspection.
Tire character recognition is an important application scenario of OCR technology: during first-article inspection, defects in the characters on a tire can be found in time, avoiding the same character defects in subsequent products. At present, tire character recognition with OCR mainly targets the characters of the tire mold rather than of the tire itself. When recognizing tire mold characters, large and medium-sized characters are recognized well, but in full-character recognition the results for small characters are poor, owing to low imaging precision, poor gray-scale contrast, and other causes. In addition, because the characters printed in a tire layout are not fixed, character defect detection generalizes poorly, which further degrades its precision.
To solve the above problems, embodiments of the present application provide a tire character recognition method and apparatus and electronic equipment. To address the low character imaging precision and poor character gray scale encountered when recognizing tire characters, a 3D camera images the characters of the target tire in three dimensions and maps the height difference to a gray-scale value. Character imaging quality improves greatly as camera precision increases, and recognition of conventional characters, patterns, and special characters can be realized.
The following further describes the technical solution of the present application by using a specific embodiment, and with reference to fig. 1, an implementation flow of the method for recognizing a tire character provided by the present application is as follows:
s1, acquiring images of a plurality of areas of a target tire to obtain a plurality of images, and splicing the plurality of images to obtain an annular image;
firstly, images of a plurality of areas of the target tire are acquired. In a specific implementation, a 3D camera is used: a 3D camera can acquire three-dimensional information of the target object and realize functions that 2D vision cannot, such as measuring the height, flatness, and volume of a product. Because the field of view of a 3D camera is limited, a single camera cannot capture the whole target tire, so tires of different sizes require different numbers of 3D cameras.
For example, a truck tire may be 20 inches in size; the larger the tire diameter, the more 3D cameras are required for image acquisition, and such a tire may need four 3D cameras. When acquiring images of passenger-car tires, which are smaller than truck tires, fewer 3D cameras may suffice to complete the image acquisition.
It should be noted that the number of 3D cameras is only an exemplary illustration, and in a specific implementation, the number of 3D cameras needs to be adjusted according to the specific size of the tire. As shown in fig. 2, the target tire is divided into a plurality of regions, each region corresponds to one 3D camera, and when image acquisition is performed, an image of the corresponding region is acquired by each 3D camera.
For example, region 1 corresponds to 3D camera 1, region 2 to 3D camera 2, and region 3 to 3D camera 3; after the 3D cameras each acquire an image of their corresponding region, a plurality of images are obtained.
Furthermore, after the images of the multiple regions of the target tire have been acquired, the images captured by the cameras of the individual regions are stitched to obtain a complete annular image. When stitching the acquired images, outliers need to be removed: when a 3D camera scans a physical object, human or environmental factors affecting the measurement process introduce outliers, i.e., small point clouds and discrete points far away from the main point cloud. Outliers affect subsequent modeling quality, so they are removed after the stitching of the multiple images is completed. After outlier removal, the point cloud is projected into a 2D image to obtain the annular image.
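The patent does not detail the outlier-removal step; a minimal sketch, assuming a simple statistical criterion (mean distance to the k nearest neighbours) rather than the method actually used, could look like this:

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbours exceeds mean + std_ratio * std over the
    whole cloud.  Brute-force O(n^2); a real pipeline would use a KD-tree."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # mean distance to the k nearest neighbours (index 0 is the point itself)
    knn_mean = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)
    threshold = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= threshold]

np.random.seed(0)
# a dense cluster plus two stray points far from the main point cloud
cloud = np.vstack([np.random.rand(100, 3),
                   [[50.0, 50.0, 50.0], [-40.0, 0.0, 0.0]]])
filtered = remove_outliers(cloud)
```

The two discrete points far from the main cloud fall well outside the threshold and are discarded, while the dense cluster is kept intact.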
By this method, when the target tire is imaged, it is divided into multiple areas and a 3D camera acquires the image of each corresponding area, so the image acquired for every area has high accuracy. This avoids the poor imaging quality, low resolution, and similar problems that arise when a single 3D camera is used to capture the whole target tire.
S2, converting the annular image into a horizontal image through polar coordinate conversion;
As shown in fig. 3, any point in polar coordinates is uniquely defined by the parameters θ and ρ, and for a circle the radius ρ is constant. After transformation to a rectangular coordinate system, the circle therefore becomes a straight line parallel to the horizontal coordinate axis: the angle maps to the horizontal coordinate, while the constant radius maps to an unchanged vertical coordinate.
Further, when the annular image is converted into the horizontal image, firstly, the radius of the circumscribed circle, the radius of the inscribed circle and the circle center of the annular image are determined, then, the annular region of the annular image is determined according to the radius of the circumscribed circle, the radius of the inscribed circle and the circle center, thresholding processing is carried out on the annular image, and finally, the display region of the horizontal image is obtained through polar coordinate conversion.
By the above method, the annular image can be converted into a horizontal image by polar coordinate conversion.
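The conversion can be sketched as a direct inverse polar mapping over the annular region; the sampling scheme below (nearest-neighbour, one output column per angle step) is an illustrative assumption, not the patent's exact procedure:

```python
import numpy as np

def unwrap_annulus(img, center, r_inner, r_outer, out_w=360):
    """Map the annular region between r_inner and r_outer to a rectangle:
    each output column is one angle step, each row one radius step,
    sampled by nearest neighbour."""
    cx, cy = center
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.arange(r_inner, r_outer)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]

# synthetic ring image: a bright annulus between radii 60 and 90
yy, xx = np.mgrid[:200, :200]
radius = np.sqrt((xx - 100) ** 2 + (yy - 100) ** 2)
ring = np.where((radius >= 60) & (radius < 90), 255, 0).astype(np.uint8)
flat = unwrap_annulus(ring, (100, 100), 60, 90)   # shape (30, 360)
```

In practice the inscribed and circumscribed radii and the circle center determined in the previous step would parameterize this mapping.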
S3, recognizing character information on the horizontal image, and determining the category information of the currently recognized character based on the character information;
in an alternative embodiment, a deep learning text positioning network is used to locate lines of text in a horizontal image when identifying lines of text for regular characters. Further, the text line is divided into a plurality of contents to be detected, then the plurality of contents to be detected are detected respectively, after the detection result of each area to be detected is obtained, the detection results of the plurality of contents to be detected are combined into one content to be detected, and one content to be detected is input into the recognition network to be recognized, so that the character recognition result of the text line is obtained.
Specifically, because the horizontal image is obtained from the annular image by polar coordinate conversion, its resolution is very large, and text-line detection cannot be performed on the entire horizontal image at once. In a specific implementation, a sliding division is used to split the horizontal image into multiple contents to be detected, and a detection network then detects these contents in parallel, obtaining all of their detection results at the same time.
In the present embodiment, the actual physical height of the characters on the tire is small when tire characters are detected. For example, the letters or Chinese characters on a passenger-car tire measure only a few centimeters, so when the image is acquired by a 3D camera and the characters are projected into a 2D image, some of the characters have low accuracy; moreover, the tire surface carries a large amount of shading, which also interferes with character detection to some extent. To avoid these problems, a regression-based detection network is used to detect the characters.
For example, as shown in fig. 4, suppose the content of the text line is "campus"; small-sized text boxes are constructed to improve character detection accuracy. The character string "campus" is divided into multiple regions to be detected, such as region 1, region 2, region 3, up to region N in the figure, where the number of regions N is determined by the length of the text line and the length of a single region to be detected.
During recognition, character detection is performed simultaneously on the divided regions 1, 2, 3 through N to obtain the detection result of each character region; the results of the multiple detection regions are then combined into complete characters, and the complete characters are in turn combined into one content to be detected, i.e., a complete text line.
As shown in fig. 4, after the text line is divided into multiple contents to be detected, each content is recognized at the same time. For example, the contents to be detected in region 1, region 2, region 3, and region 4 are recognized and their results combined to yield the letter C. Similarly, the contents of the other regions are recognized to obtain the remaining letters, and once recognition is finished all letters are combined into the complete text-line content.
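The sliding division and the merging of per-region results can be sketched as follows; the window size, overlap, and the 1-D box representation are illustrative assumptions, not values from the patent:

```python
def split_into_windows(line_width, win, overlap):
    """Cut a text line of `line_width` pixels into overlapping sliding
    windows; returns (start, end) spans, the last one clipped to the line."""
    step = win - overlap
    spans, start = [], 0
    while start < line_width:
        spans.append((start, min(start + win, line_width)))
        if start + win >= line_width:
            break
        start += step
    return spans

def merge_detections(boxes, gap=2):
    """Merge per-window character boxes (x0, x1) that touch or overlap,
    combining the per-region detection results into whole characters."""
    boxes = sorted(boxes)
    merged = [list(boxes[0])]
    for x0, x1 in boxes[1:]:
        if x0 <= merged[-1][1] + gap:
            merged[-1][1] = max(merged[-1][1], x1)
        else:
            merged.append([x0, x1])
    return [tuple(b) for b in merged]
```

For a 100-pixel line with 40-pixel windows and 10 pixels of overlap this yields three spans, and boxes from adjacent windows that cover the same character are merged into one.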
In the embodiment of the present application, before character detection is performed, a character dictionary model is created for each different kind of tire based on the production information provided by the factory. Specifically, each kind of tire carries different characters on its surface: information such as tire size and load differs, and so do the numbers marked on the tire surface, so a character dictionary model needs to be established from the character information of each different tire.
Further, the obtained content to be detected is sent to a lexicon-based Convolutional Recurrent Neural Network (CRNN), which recognizes the text detection boxes to obtain an end-to-end recognition result for the text line. For example, if the content of the text line is "campus", the whole text line can be recognized at once through the CRNN network.
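A CRNN is typically followed by CTC decoding to turn per-time-step class scores into a string; the greedy decoder below is an illustrative sketch, since the patent does not specify the decoding step:

```python
import numpy as np

def ctc_greedy_decode(logits, charset, blank=0):
    """Greedy CTC decoding: take the argmax class at each time step,
    collapse consecutive repeats, then drop blanks."""
    best = logits.argmax(axis=1)
    out, prev = [], blank
    for idx in best:
        if idx != blank and idx != prev:
            out.append(charset[idx - 1])   # charset holds the non-blank classes
        prev = idx
    return "".join(out)

# 6 time steps over 4 classes (blank, D, O, T) encoded as one-hot scores
scores = np.eye(4)[[1, 1, 0, 2, 3, 3]]
decoded = ctc_greedy_decode(scores, "DOT")
```

The repeated "D" steps collapse to one character and the blank separates it from the following letters, giving the string "DOT".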
Further, the category information of the currently recognized character is determined according to the character information on the horizontal image. Specifically, character categories such as DOT, load, and warning exist on the tire.
For example, the DOT information on a certain type of tire is DOT 4B9Z 747R 1822, where the first eight characters represent the manufacturing information and factory code of the tire and the last four digits represent its date of manufacture. As another example, a tire load index of 91 indicates a maximum load-bearing capacity of 615 kg. The position at which each category of information is marked on a tire of a given specification is likewise fixed, so the character position information and character content information on the tire can be obtained from the tire type information.
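The DOT field layout described above can be parsed mechanically; reading "1822" as week 18 of 2022 follows the common DOT date convention and is an assumption beyond what the text states:

```python
def parse_dot(code):
    """Split a DOT string such as 'DOT 4B9Z 747R 1822' into the
    manufacturing/factory code (first eight characters after 'DOT') and
    the date field (interpreted as week-of-year plus two-digit year)."""
    parts = code.split()
    if parts[0] != "DOT":
        raise ValueError("not a DOT marking")
    body = "".join(parts[1:])
    factory, date = body[:8], body[8:12]
    return {"factory_code": factory,
            "week": int(date[:2]),
            "year": 2000 + int(date[2:])}

info = parse_dot("DOT 4B9Z 747R 1822")
```

Real DOT codes vary in length and grouping, so a production parser would validate the field widths per tire specification.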
By the method, a regression-based detection network is adopted to construct a small-size text box to identify the characters, and finally all the identified contents are combined, so that the end-to-end identification of the whole line of characters is realized, and the accuracy of tire character identification is ensured.
In the embodiment of the present application, some special characters, such as the trade name of the tire, are also generally present during tire character recognition. Most special characters on a tire are stylized, decoratively designed characters, which makes them difficult to recognize.
In an alternative embodiment, the deep learning retrieval network is used for on-line training of the special characters, and the feature templates are built in the database.
Specifically, during character recognition, patterns or LOGOs that have not appeared before are handled by training the special characters online through the deep learning retrieval network; by means of online training, the previously unseen special characters are extracted and stored in the database.
Further, the deep learning retrieval module matches the special character to be queried in the horizontal image against the feature templates in the database to obtain the retrieval result for that special character. Finally, the detection result of the special character to be queried is taken as the character information currently retrieved from the horizontal image.
By this method, a feature template for a special character can be established by training the special character online. Further, when a special character is recognized, the features of the image to be queried and of the feature template are extracted separately through a convolutional network to obtain feature maps, from which the position information and category of the special character are derived.
In an alternative embodiment, the pattern to be queried and the pattern template in the horizontal image are input into the shared feature extractor, and the query pattern feature of the pattern to be queried and the pattern template feature of the pattern template are respectively extracted.
For example, a tire carries a number of patterns, such as the CCC mark and the brand LOGO. These patterns are complex and variable, and samples for them are not easy to obtain with an online training approach.
Therefore, pattern recognition is realized by the meta-feature convolution module in a shared feature extractor based on the deep learning network YOLOv5. Specifically, when a certain class of pattern on the tire is to be recognized, the features of the pattern to be queried and of the pattern template are first extracted to obtain feature maps, which determine the class and position information of both. The query pattern features are then compared with the pattern template features, and if their similarity is greater than a preset threshold, for example 95%, the position difference information of the pattern to be queried is output.
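The threshold comparison above can be sketched with cosine similarity between feature vectors; the patent does not name the similarity metric, so cosine is an assumption:

```python
import numpy as np

def pattern_match(query_feat, template_feat, threshold=0.95):
    """Compare query-pattern features with template features by cosine
    similarity and report whether they exceed the preset threshold."""
    q = query_feat / np.linalg.norm(query_feat)
    t = template_feat / np.linalg.norm(template_feat)
    similarity = float(q @ t)
    return similarity > threshold, similarity

same, s1 = pattern_match(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0]))
diff, s2 = pattern_match(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

Identical feature vectors score 1.0 and pass the 95% threshold; orthogonal vectors score 0 and fail it.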
And determining the type and the position information of the pattern to be inquired through feature comparison, and further outputting the position difference information of the pattern to be inquired after the identification is completed.
In the embodiment of the application, the function of updating the small sample pattern and the new class sample on line can be realized through the meta-feature updating module.
Specifically, when pattern recognition is performed, if a new type of pattern template needs to be added, features of the new type of pattern template can be extracted online through the shared feature extractor, and further, the newly extracted features are added into the pattern template through the meta-feature updating module.
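The online update of new pattern classes can be pictured as maintaining a bank of class features; this sketch assumes nearest-template classification by cosine similarity and simple feature blending, neither of which the text specifies:

```python
import numpy as np

class TemplateBank:
    """Pattern templates keyed by class name; new classes are registered
    online and queries return the most similar template above a threshold."""
    def __init__(self):
        self.templates = {}

    def add(self, name, feature):
        # blend into the existing template when the class is already known
        if name in self.templates:
            self.templates[name] = 0.5 * (self.templates[name] + feature)
        else:
            self.templates[name] = feature

    def classify(self, feature, threshold=0.95):
        best_name, best_sim = None, threshold
        for name, tmpl in self.templates.items():
            sim = float(feature @ tmpl /
                        (np.linalg.norm(feature) * np.linalg.norm(tmpl)))
            if sim > best_sim:
                best_name, best_sim = name, sim
        return best_name

bank = TemplateBank()
bank.add("CCC", np.array([1.0, 0.0]))          # register a new class online
label = bank.classify(np.array([1.0, 0.1]))    # close to the CCC template
```

Queries whose similarity to every stored template stays below the threshold return no class, mirroring an unmatched pattern.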
By the method, the small sample patterns can be identified, and the function of updating new class patterns on line can be realized by the meta-feature updating module.
And S4, comparing the character type with the standard type and outputting character position difference information and character content difference information.
After the character is recognized, the character category is determined, and the recognized character category is further compared with the standard category.
For example, suppose the category of the recognized character is determined to be DOT; the recognized DOT content is compared with the DOT content that should appear on the tire, yielding the difference from the standard content, and character position information and content difference information between the recognition result and the standard category are output according to the comparison result. The character difference information is the difference obtained by comparing the recognition result with the standard category: for example, the recognized DOT content is DOT 4B9Z 747R 1822 while the standard content is DOT 4B9Z 747R 1622, so the recognized character content was misprinted. As another example, the standard DOT content should be located in area A on the tire; if the recognized DOT content deviates within area A, the character position difference information in the horizontal image can be output.
By the method, the character type is compared with the standard type to obtain the difference between the current character and the standard type character, and the character position difference information and the character content difference information are obtained, so that whether the current recognized character has the problems of missing printing, wrong printing or printing position error and the like can be judged according to the character position difference information and the character content difference information.
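The comparison in S4 can be sketched as follows, assuming the recognition result and the standard category each carry a content string and a bounding-box position in the horizontal image; the function name, box format, and 5-pixel tolerance are illustrative, not from the disclosure.

```python
def compare_with_standard(recognized, standard, recognized_box, standard_box,
                          tol=5):
    # Character content difference: indices where the two strings disagree,
    # plus a length-mismatch marker if one string is shorter.
    content_diff = [(i, a, b)
                    for i, (a, b) in enumerate(zip(recognized, standard))
                    if a != b]
    if len(recognized) != len(standard):
        content_diff.append(("length", len(recognized), len(standard)))
    # Character position difference: offset of the recognized text from the
    # region where the standard category expects it (top-left corners).
    dx = recognized_box[0] - standard_box[0]
    dy = recognized_box[1] - standard_box[1]
    position_diff = (dx, dy) if abs(dx) > tol or abs(dy) > tol else None
    return content_diff, position_diff

content, pos = compare_with_standard(
    "DOT 4b9z 747R1822", "DOT 4b9z 747R1622", (120, 40), (100, 40))
print(content)  # [(14, '8', '6')] -> the digit at index 14 is misprinted
print(pos)      # (20, 0) -> printed 20 px to the right of the expected area
```

An empty `content_diff` with `position_diff` set would indicate a pure printing-position error; the reverse indicates a pure content misprint.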
Based on the same inventive concept, an embodiment of the application further provides a tire character recognition apparatus for recognizing tire characters, which can improve the accuracy of tire character recognition and realize recognition of common text, patterns, and special characters.
Referring to fig. 5, the apparatus includes:
the processing module 501 is configured to acquire images of a plurality of regions of a target tire to obtain a plurality of images, and to stitch the plurality of images to obtain an annular image;
a conversion module 502 for converting the annular image into a horizontal image by polar coordinate conversion;
a recognition module 503, configured to recognize character information on the horizontal image, and determine category information of a currently recognized character based on the character information;
an output module 504, configured to compare the category of the character with a standard category and output character position difference information and character content difference information.
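The polar coordinate conversion performed by the conversion module 502 can be sketched as an inverse mapping: each column of the horizontal image corresponds to an angle and each row to a radius between the tire's inner and outer rims, sampled from the annular image. The geometry parameters and nearest-neighbor sampling below are illustrative assumptions, not the patented implementation.

```python
import math

def unwrap_annulus(annular, center, r_inner, r_outer, width, height):
    # Polar coordinate conversion: output column -> angle, output row ->
    # radius, sampled from the annular image with nearest-neighbor lookup.
    cx, cy = center
    rows, cols = len(annular), len(annular[0])
    out = [[0] * width for _ in range(height)]
    for y in range(height):
        r = r_inner + (r_outer - r_inner) * y / max(height - 1, 1)
        for x in range(width):
            theta = 2 * math.pi * x / width
            sx = int(round(cx + r * math.cos(theta)))
            sy = int(round(cy + r * math.sin(theta)))
            if 0 <= sy < rows and 0 <= sx < cols:
                out[y][x] = annular[sy][sx]
    return out

# A tiny synthetic annular image: pixels within the ring are set to 255.
size, c = 32, 16
ring = [[255 if 7 <= math.hypot(x - c, y - c) <= 13 else 0
         for x in range(size)] for y in range(size)]
# Sample radii 8..12, safely inside the 7..13 ring, so every output pixel
# lands on a ring pixel.
strip = unwrap_annulus(ring, (c, c), 8, 12, 64, 4)
print(all(v == 255 for row in strip for v in row))  # prints: True
```

Production code would typically use a library remap (e.g. a warp-polar operation) with interpolation rather than this per-pixel loop; the loop is only to make the coordinate mapping explicit.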
In an optional embodiment, the recognition module is further configured to:

locate text lines in the horizontal image;

divide a text line into a plurality of contents to be detected;

detect the plurality of contents to be detected, and combine the detection results of the plurality of contents to be detected into one content to be detected;

input the content to be detected into a recognition network for recognition to obtain a character recognition result in the text line;

and take the character recognition result as the character information.
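The divide-detect-merge flow of the recognition module can be sketched on a toy text line, where a "segment" is a slice of the line and detection simply reports non-blank characters; in the actual method these steps would run detection networks on image crops. All names and the segment/overlap sizes are illustrative.

```python
def split_line(line, seg_len, overlap):
    # Divide the located text line into overlapping segments to be detected.
    segments, x = [], 0
    step = seg_len - overlap
    while x < len(line):
        segments.append((x, line[x:x + seg_len]))
        x += step
    return segments

def detect(segment):
    # Toy per-segment detector: report (absolute position, character) for
    # every non-blank character. A real system runs a detection network here.
    offset, text = segment
    return [(offset + i, ch) for i, ch in enumerate(text) if ch != ' ']

def merge(detections):
    # Merge per-segment detections into one content to be detected,
    # de-duplicating characters found twice in the overlap regions.
    merged = {}
    for det in detections:
        for pos, ch in det:
            merged[pos] = ch
    return ''.join(ch for pos, ch in sorted(merged.items()))

line = "DOT 4B9Z 747R1622"
segments = split_line(line, seg_len=8, overlap=2)
result = merge([detect(s) for s in segments])
print(result)  # prints: DOT4B9Z747R1622
```

The overlap ensures a character cut in half at a segment boundary is still seen whole by at least one segment, which is why the merge step must de-duplicate.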
In an optional embodiment, the recognition module is further configured to:

perform online training on special characters by adopting a deep learning retrieval network, and establish a feature template in a database;

match the special character to be queried in the horizontal image against the feature template to obtain a detection result of the special character to be queried;

and take the detection result of the special character to be queried as the character information.
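A minimal sketch of the feature-template database for special characters, assuming enrollment simply extracts and stores a normalized feature per glyph and matching returns the most similar template; a real system would use the deep learning retrieval network, and all names here are illustrative.

```python
import math

def glyph_feature(bitmap):
    # Toy feature for a special-character glyph: flattened, L2-normalized
    # pixel vector. A deep retrieval network would replace this.
    flat = [float(v) for row in bitmap for v in row]
    norm = math.sqrt(sum(v * v for v in flat)) or 1.0
    return [v / norm for v in flat]

class FeatureTemplateDB:
    """Feature templates for special characters, built 'online' by
    extracting and storing one feature per enrolled glyph."""

    def __init__(self):
        self.db = {}

    def enroll(self, label, bitmap):
        self.db[label] = glyph_feature(bitmap)

    def match(self, bitmap):
        # Return (similarity, label) of the best-matching template.
        q = glyph_feature(bitmap)
        return max((sum(a * b for a, b in zip(q, t)), label)
                   for label, t in self.db.items())

db = FeatureTemplateDB()
db.enroll("triangle", [[0, 1, 0], [1, 1, 1], [0, 0, 0]])
db.enroll("bar",      [[0, 0, 0], [1, 1, 1], [0, 0, 0]])
sim, label = db.match([[0, 1, 0], [1, 1, 1], [0, 0, 0]])
print(label, round(sim, 3))  # prints: triangle 1.0
```

Retrieval-style matching sidesteps the closed character set of an ordinary OCR classifier, which is what makes special (non-alphanumeric) marks tractable.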
In an optional embodiment, the recognition module is further configured to:

input the pattern to be queried and the pattern template in the horizontal image into a shared feature extractor, and respectively extract the query pattern features of the pattern to be queried and the pattern template features of the pattern template;

and compare the query pattern features with the pattern template features, and output position difference information of the pattern to be queried if the similarity between the query pattern features and the pattern template features is greater than a preset threshold value.
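The similarity test against the preset threshold can be sketched as a cosine comparison between the query pattern features and the pattern template features; the feature vectors, positions, and 0.9 threshold below are illustrative assumptions.

```python
import math

def check_pattern(query_feat, template_feat, query_pos, expected_pos,
                  threshold=0.9):
    # Cosine similarity between the query pattern features and the pattern
    # template features (both from the shared feature extractor).
    dot = sum(a * b for a, b in zip(query_feat, template_feat))
    nq = math.sqrt(sum(a * a for a in query_feat))
    nt = math.sqrt(sum(b * b for b in template_feat))
    similarity = dot / (nq * nt) if nq and nt else 0.0
    if similarity > threshold:
        # The pattern matches the template: report its offset from the
        # expected position in the horizontal image.
        return (query_pos[0] - expected_pos[0],
                query_pos[1] - expected_pos[1])
    return None  # similarity too low: not the templated pattern

print(check_pattern([1.0, 0.0, 1.0], [0.9, 0.1, 1.1], (230, 18), (225, 20)))
# prints: (5, -2) -> matched pattern sits 5 px right, 2 px above expectation
```

Using one shared extractor for both inputs keeps the two feature vectors in the same embedding space, which is what makes the threshold comparison meaningful.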
It should be noted that the apparatus provided in the embodiment of the present application can implement all the method steps in the tire character recognition method embodiment, and can achieve the same technical effect, and detailed descriptions of the same parts and beneficial effects as those in the method embodiment are not repeated herein.
Based on the same inventive concept, an embodiment of the present application further provides an electronic device, where the electronic device may implement the function of the foregoing tire character recognition method, and as shown in fig. 6, the electronic device includes:
at least one processor 601 and a memory 602 connected to the at least one processor 601. In this embodiment, the specific connection medium between the processor 601 and the memory 602 is not limited; fig. 6 takes the processor 601 and the memory 602 being connected through a bus 600 as an example. The bus 600 is shown as a thick line in fig. 6, and the connection manner between the other components is merely illustrative and not limiting. The bus 600 may be divided into an address bus, a data bus, a control bus, and the like; for ease of illustration it is drawn as only one thick line in fig. 6, but this does not mean that there is only one bus or one type of bus. Alternatively, the processor 601 may also be referred to as a controller; the name is not limited.
In the present embodiment, the memory 602 stores instructions executable by the at least one processor 601, and the at least one processor 601 may perform the tire character recognition method discussed above by executing the instructions stored by the memory 602. The processor 601 may implement the functions of the various modules in the apparatus shown in fig. 5.
The processor 601 is a control center of the apparatus, and may connect various parts of the entire control device by using various interfaces and lines, and perform various functions of the apparatus and process data by operating or executing instructions stored in the memory 602 and calling data stored in the memory 602, thereby performing overall monitoring of the apparatus.
In one possible design, processor 601 may include one or more processing units and processor 601 may integrate an application processor that handles primarily the operating system, user interfaces, applications, etc., and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 601. In some embodiments, the processor 601 and the memory 602 may be implemented on the same chip, or in some embodiments, they may be implemented separately on separate chips.
The processor 601 may be a general-purpose processor, such as a Central Processing Unit (CPU), a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or any conventional processor, or the like. The steps of the tire character recognition method disclosed in connection with the embodiments of the present application may be directly executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
The memory 602, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 602 may include at least one type of storage medium, for example, a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, and so on. The memory 602 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 602 in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
By programming the processor 601, the code corresponding to the tire character recognition method described in the foregoing embodiments can be solidified into the chip, so that the chip can execute the steps of the tire character recognition method of the embodiment shown in fig. 1 when running. How to program the processor 601 is well known to those skilled in the art and is not described here.
Based on the same inventive concept, the present application also provides a storage medium storing computer instructions, which when run on a computer, cause the computer to execute the tire character recognition method discussed above.
In some possible embodiments, the various aspects of the tire character recognition method provided herein may also be embodied in the form of a program product comprising program code for causing the control apparatus to perform the steps of the tire character recognition method according to various exemplary embodiments of the present disclosure as described above in this specification when the program product is run on a device.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method of tire character recognition, the method comprising:
acquiring images of a plurality of areas of a target tire to obtain a plurality of images, and splicing the images to obtain an annular image;
converting the annular image into a horizontal image by polar coordinate conversion;
recognizing character information on the horizontal image, and determining category information of the currently recognized character based on the character information, wherein the category information comprises character position information and character content information of the character in the horizontal image;
and comparing the character type with a standard type and outputting character position difference information and character content difference information.
2. The method of claim 1, wherein the recognizing character information on the horizontal image comprises:
locating lines of text in the horizontal image;
dividing the text line into a plurality of contents to be detected;
detecting the plurality of contents to be detected, and combining the detection results of the plurality of contents to be detected into one content to be detected;
inputting the content to be detected into a recognition network for recognition to obtain a character recognition result in the text line;
and taking the character recognition result as the character information.
3. The method of claim 1, wherein the recognizing character information on the horizontal image comprises:
performing online training on special characters by adopting a deep learning retrieval network, and establishing a feature template in a database;

matching the special character to be queried in the horizontal image against the feature template to obtain a detection result of the special character to be queried;

and taking the detection result of the special character to be queried as the character information.
4. The method of claim 1, wherein after the comparing the category of the character with a standard category and outputting character position difference information and character content difference information, the method further comprises:
inputting the pattern to be queried and the pattern template in the horizontal image into a shared feature extractor, and respectively extracting query pattern features of the pattern to be queried and pattern template features of the pattern template;
and comparing the query pattern features with the pattern template features, and outputting position difference information of the pattern to be queried if the similarity between the query pattern features and the pattern template features is greater than a preset threshold value.
5. A tire character recognition apparatus, said apparatus comprising:
the processing module is used for acquiring images of a plurality of areas of a target tire to obtain a plurality of images and splicing the images to obtain an annular image;
the conversion module is used for converting the annular image into a horizontal image through polar coordinate conversion;
the recognition module is used for recognizing character information on the horizontal image and determining the category information of the currently recognized character based on the character information;
and the output module is used for comparing the type of the character with the standard type and outputting character position difference information and character content difference information.
6. The apparatus of claim 5, wherein the recognition module is further configured to:
locating lines of text in the horizontal image;
dividing the text line into a plurality of contents to be detected;
detecting the plurality of contents to be detected, and combining detection results of the plurality of contents to be detected into one content to be detected;
inputting the content to be detected into a recognition network for recognition to obtain a character recognition result in the text line;
and taking the character recognition result as the character information.
7. The apparatus of claim 5, wherein the recognition module is further configured to:
performing online training on special characters by adopting a deep learning retrieval network, and establishing a feature template in a database;

matching the special character to be queried in the horizontal image against the feature template to obtain a detection result of the special character to be queried;

and taking the detection result of the special character to be queried as the character information.
8. The apparatus of claim 5, wherein the recognition module is further configured to:
inputting the pattern to be queried and the pattern template in the horizontal image into a shared feature extractor, and respectively extracting the query pattern features of the pattern to be queried and the pattern template features of the pattern template;

and comparing the query pattern features with the pattern template features, and outputting position difference information of the pattern to be queried if the similarity between the query pattern features and the pattern template features is greater than a preset threshold value.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1-4 when executing the computer program stored on the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1-4.
CN202211549111.9A 2022-12-05 2022-12-05 Tire character recognition method and device and electronic equipment Pending CN115810194A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211549111.9A CN115810194A (en) 2022-12-05 2022-12-05 Tire character recognition method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN115810194A true CN115810194A (en) 2023-03-17

Family

ID=85484923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211549111.9A Pending CN115810194A (en) 2022-12-05 2022-12-05 Tire character recognition method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115810194A (en)

Similar Documents

Publication Publication Date Title
CN110826416B (en) Bathroom ceramic surface defect detection method and device based on deep learning
US11410435B2 (en) Ground mark extraction method, model training METHOD, device and storage medium
CN110287873B (en) Non-cooperative target pose measurement method and system based on deep neural network and terminal equipment
CN116188475B (en) Intelligent control method, system and medium for automatic optical detection of appearance defects
CN109886928B (en) Target cell marking method, device, storage medium and terminal equipment
CN110956100A (en) High-precision map generation method and device, electronic equipment and storage medium
US20150043828A1 (en) Method for searching for a similar image in an image database based on a reference image
CN111695609A (en) Target damage degree determination method, target damage degree determination device, electronic device, and storage medium
CN114743259A (en) Pose estimation method, pose estimation system, terminal, storage medium and application
CN111079763A (en) Training sample generation, model training, character recognition method and device thereof
CN115937203A (en) Visual detection method, device, equipment and medium based on template matching
CN109409388A (en) A kind of bimodulus deep learning based on graphic primitive describes sub- building method
CN114219753A (en) Power equipment surface defect detection method based on deep learning and terminal
CN112966719B (en) Method and device for recognizing instrument panel reading and terminal equipment
CN117593420A (en) Plane drawing labeling method, device, medium and equipment based on image processing
CN112434582A (en) Lane line color identification method and system, electronic device and storage medium
CN117237681A (en) Image processing method, device and related equipment
CN112528918A (en) Road element identification method, map marking method and device and vehicle
CN115546219B (en) Detection plate type generation method, plate card defect detection method, device and product
CN111652200A (en) Processing method, device and equipment for distinguishing multiple vehicles from pictures in vehicle insurance case
CN111126286A (en) Vehicle dynamic detection method and device, computer equipment and storage medium
CN115810194A (en) Tire character recognition method and device and electronic equipment
CN116958019A (en) Defect detection method and device based on machine vision and storage medium
CN115841336A (en) Agricultural product transaction circulation information tracing method based on cloud computing
CN103871048A (en) Straight line primitive-based geometric hash method real-time positioning and matching method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination