CN111104934A - Engine label detection method, electronic device and computer readable storage medium - Google Patents

Engine label detection method, electronic device and computer readable storage medium

Info

Publication number
CN111104934A
Authority
CN
China
Prior art keywords
character string
target
region
engine label
engine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911354821.4A
Other languages
Chinese (zh)
Inventor
周康明
罗余洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd filed Critical Shanghai Eye Control Technology Co Ltd
Priority to CN201911354821.4A priority Critical patent/CN111104934A/en
Publication of CN111104934A publication Critical patent/CN111104934A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition
    • G06V30/22 - Character recognition characterised by the type of writing
    • G06V30/224 - Character recognition characterised by the type of writing of printed characters having additional code marks or containing code marks
    • G06V30/2247 - Characters composed of bars, e.g. CMC-7
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an engine label detection method, an electronic device and a computer-readable storage medium. The engine label detection method comprises the following steps: determining a region of interest of an image to be detected corresponding to the engine label; determining a target region according to the region of interest; identifying a target character string in the target region based on standard information of the engine label; and recording the position and/or length of the target character string corresponding to the standard information, and determining whether the engine label is qualified according to the position and/or length of the target character string. The application also provides an electronic device and a computer-readable storage medium. By automatically detecting whether the engine label is qualified, the method can replace conventional manual inspection, saving labor cost and improving both inspection efficiency and the accuracy of inspection results.

Description

Engine label detection method, electronic device and computer readable storage medium
Technical Field
The present disclosure relates to the field of vehicle inspection, and more particularly, to an engine label detection method, an electronic device, and a computer-readable storage medium.
Background
With rapid socio-economic development and rising living standards, the number of motor vehicles in cities has grown quickly, which in turn has significantly increased the vehicle inspection workload in vehicle transactions.
However, the inventors found at least the following problem in the related art: in conventional vehicle inspection, engine labels are checked mainly by hand, and because engine labels (for example, flexible labels) vary widely in form, carry a large amount of information and contain much interfering information, it is difficult to complete the inspection quickly and accurately by manual work alone. How to inspect engine labels accurately and quickly, while avoiding the high cost, fatigue and low accuracy of manual inspection, is therefore a technical problem that urgently needs to be solved.
Disclosure of Invention
An object of the present application is to provide an engine label detection method, an electronic device and a computer-readable storage medium that automatically detect whether an engine label is qualified, so as to replace conventional manual inspection, save labor cost and improve inspection efficiency and the accuracy of inspection results.
According to an aspect of the present application, there is provided an engine label detection method, including: determining a region of interest of an image to be detected corresponding to the engine label; determining a target region according to the region of interest; identifying a target character string in the target region based on the standard information of the engine label; and recording the position and/or length of the target character string corresponding to the standard information, and determining whether the engine label is qualified according to the position and/or length of the target character string.
According to another aspect of the present application, there is also provided an electronic device including: one or more processors; and a memory storing computer readable instructions that, when executed, cause the processor to perform the engine label detection method as described above.
According to another aspect of the present application, there is also provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the engine label detection method described above.
With the engine label detection method of the present application, the region of interest of the image to be detected corresponding to the engine label is determined, and the target region is determined from the region of interest; a target character string in the target region is identified based on the standard information of the engine label; and the position and/or length of the target character string corresponding to the standard information is recorded, so that whether the engine label is qualified is determined from that position and/or length. When engine label information is identified during vehicle inspection, the identified information can thus be compared with the standard information quickly and accurately to judge whether the engine label is qualified, which can replace conventional manual inspection, save labor cost, and improve inspection efficiency and the accuracy of inspection results.
In addition, there are at least two regions of interest, and determining the target region according to the regions of interest specifically includes: combining the regions of interest pairwise and overlapping them to obtain overlap regions; and determining the target region according to the overlap regions. Determining the target region from the overlap regions provides one concrete way of obtaining the target region and allows the determination to be implemented flexibly.
In addition, determining the target region according to the overlap region specifically includes: calculating the ratio of the area of the intersection of the two regions to the area of their union; judging whether the ratio is greater than a preset ratio; and, if so, deleting the region of interest with the smaller area from the two combined regions of interest to obtain the target region. Deleting the smaller region of interest when the ratio exceeds the preset ratio narrows the recognition range of the target region and avoids the influence of small regions of interest on the recognition result, which further improves the accuracy of target region recognition.
In addition, determining the second character string as the target character string specifically includes: obtaining the length of the part of the second character string corresponding to the substring; judging the proportion of that length to the length of the whole second character string; and, when the proportion is greater than a preset proportion, determining the second character string as the target character string. It can be understood that when the proportion is less than or equal to the preset proportion, the matched part of the second character string is very short, so the second character string can be discarded; only second character strings whose proportion exceeds the preset proportion are retained as target character strings.
In addition, determining whether the engine label is qualified according to the position and/or length of the target character string specifically includes: constructing an array with the same length as the standard information and empty content; accumulating the target character string into the array according to the position and/or length; counting, for each position of the array, the number of times the same data is accumulated there; and determining whether the engine label is qualified according to these counts. The more often the same data is accumulated at a position of the array, the more likely it is that this data genuinely belongs at that position, so the actual content of the target character strings in the target region can be determined, which makes it convenient to decide whether the engine label is qualified.
In addition, determining whether the engine label is qualified according to the counts specifically includes: obtaining the accumulated data whose count is greater than a preset count; obtaining the number of array positions holding such data; calculating the ratio of this number to the total length of the array; constructing label information according to the ratio; and, if the construction succeeds, determining that the engine label is qualified, otherwise determining that it is unqualified. Constructing label information according to the ratio provides one concrete way of deciding whether the engine label is qualified and allows the decision to be implemented flexibly.
Description of the drawings:
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not to scale unless otherwise specified.
FIG. 1 is a flow chart of an engine label detection method provided according to an aspect of the present application;
FIG. 2 is a flow chart of an engine label detection method provided according to a practical application scenario of the present application;
FIG. 3 is a schematic diagram of an engine label detection method provided according to a practical application scenario of the present application;
FIG. 4 is a schematic diagram of classification and position regression of candidate boxes of interest by the target detection model according to a practical application scenario of the present application;
FIG. 5 is a schematic diagram of a specific manner of determining whether the engine label is qualified according to a practical application scenario of the present application.
Detailed description of the embodiments:
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in these embodiments in order to provide a better understanding of the present application; the technical solution claimed in the present application can, however, be implemented without these technical details, and with various changes and modifications based on the following embodiments.
Fig. 1 shows a flow chart of a method for engine label detection according to an aspect of the present application, the method comprising steps 101 to 104:
in step 101, a region of interest of an image to be detected corresponding to the engine label is determined. After the image to be detected corresponding to the engine label is acquired, detecting an interested region of the image to be detected by using a target detection model based on deep learning, and if the interested region is detected, extracting the interested region; if the region of interest is not detected, the current image to be detected can be stored, the unqualified detection result of the engine label can be directly output, and related workers can be informed to manually recheck. In addition, it is worth mentioning that the region of interest here is: in machine vision and image processing, a region to be processed is outlined from a processed image in the form of a box, a circle, an ellipse, an irregular polygon, or the like.
In step 102, a target region is determined from the region of interest. The regions of interest may be re-screened to remove the parts that contain invalid information, narrowing them down to obtain the target region. Of course, if a region of interest contains only valid information, that region of interest can itself serve as the target region.
In step 103, a target character string in the target region is identified based on the standard information of the engine label. The standard information is information about the engine label recorded in advance, for example the engine label information recorded at vehicle registration; the target character string is a character string that contains at least one substring of the standard information. For example, a deep-learning-based character recognition model may perform character recognition on the target regions in turn, and the recognized character strings are compared with the character string contained in the standard information to identify the target character strings in the target regions.
In step 104, the position and/or length of the target character string corresponding to the standard information is recorded, and whether the engine label is qualified is determined according to that position and/or length. Combining the specific content of the target character strings with their positions and/or lengths, a matching analysis of the character information can be performed: if character information identical to the standard information can be synthesized, a qualified detection result for the engine label is output; otherwise an unqualified detection result is output. The synthesized character information can also be stored and the relevant staff notified to perform a manual recheck.
In an embodiment of the present application, the number of regions of interest is at least 2, and in step 102 determining the target region according to the regions of interest may include: combining the regions of interest pairwise and overlapping them to obtain overlap regions; and determining the target region according to the overlap regions. Suppose the number of regions of interest is N, where N is a natural number greater than or equal to 2; combining the N regions of interest pairwise gives N*(N-1)/2 combinations in total, and the target region is determined from the overlap region obtained for each combination. Determining the target region from the overlap regions in this way provides one concrete implementation of how to determine the target region and allows the determination to be implemented flexibly.
In this embodiment, determining the target region according to the overlap region may include calculating the ratio of the area of the intersection of the two regions to the area of their union, judging whether the ratio is greater than a preset ratio, and, if so, deleting the region of interest with the smaller area from the pairwise-combined regions of interest to obtain the target region.
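A minimal sketch of this screening step, assuming axis-aligned regions given as (x_center, y_center, width, height) tuples; the parameter name preset_ratio and its default value are assumptions, not taken from the patent.

```python
from itertools import combinations
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_center, y_center, width, height)

def box_area(box: Box) -> float:
    return box[2] * box[3]

def intersection_over_union(a: Box, b: Box) -> float:
    """Ratio of the intersection area to the union area of two boxes."""
    ax1, ay1, ax2, ay2 = a[0] - a[2] / 2, a[1] - a[3] / 2, a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1, bx2, by2 = b[0] - b[2] / 2, b[1] - b[3] / 2, b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = box_area(a) + box_area(b) - inter
    return inter / union if union > 0 else 0.0

def screen_rois(rois: List[Box], preset_ratio: float = 0.5) -> List[Box]:
    """Combine the N regions of interest pairwise (N*(N-1)/2 combinations) and delete
    the smaller region whenever intersection/union exceeds the preset ratio; the
    surviving regions are taken as the target regions."""
    removed = set()
    for i, j in combinations(range(len(rois)), 2):
        if i in removed or j in removed:
            continue
        if intersection_over_union(rois[i], rois[j]) > preset_ratio:
            removed.add(i if box_area(rois[i]) < box_area(rois[j]) else j)
    return [r for k, r in enumerate(rois) if k not in removed]
```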
In an embodiment of the present application, in step 103, identifying a target character string in the target region based on the standard information of the engine label may include: comparing a second character string recognized from the target region with a first character string included in the standard information; judging whether the second character string contains at least one substring of the first character string; and, if so, determining the second character string to be the target character string. For example, suppose the standard information includes the first character string 123abcdefg and the second character string recognized from a target region is 123abdfefg. The part 123ab of the second string is a substring of the first string, so 123ab is a target character string; df does not form a substring of the first string, so that part is discarded; and efg of the second string is again a substring of the first string, so efg is a target character string.
Continuing with the foregoing embodiment, determining the second character string as the target character string may include: obtaining the length of the part of the second character string corresponding to the substring; judging the proportion of that length to the length of the whole second character string; and, when the proportion is greater than the preset proportion, determining the second character string as the target character string. The recognized second character string may be compared with the first character string included in the standard information; assuming the preset proportion has the value a, when the proportion of the length of the matched substring to the length of the whole second character string is greater than a, the starting position and/or length of that substring within the first character string is recorded; otherwise the second character string is discarded.
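A minimal sketch of this matching step, assuming the longest common substring of the two strings plays the role of the matched substring and that a single (start, length) record is produced per recognized string, as in the Fig. 5 example; the name preset_proportion and its default value are assumptions.

```python
from difflib import SequenceMatcher
from typing import Optional, Tuple

def match_target_string(first: str, second: str,
                        preset_proportion: float = 0.5) -> Optional[Tuple[int, int]]:
    """Compare a recognized second string with the first string of the standard information.

    Returns (start position within the standard information, matched length) when the
    longest substring shared with the first string is long enough relative to the
    whole second string, i.e. length / len(second) > preset_proportion; otherwise
    returns None and the second string is discarded.
    """
    m = SequenceMatcher(None, first, second).find_longest_match(0, len(first), 0, len(second))
    if m.size == 0 or m.size / len(second) <= preset_proportion:
        return None
    return m.a, m.size

# Example from the description above: with first = "123abcdefg" and second = "123ab",
# the whole of "123ab" matches at start 0, so (0, 5) is recorded for this target string.
```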
In an embodiment of the present application, in step 104, determining whether the engine label is qualified according to the position and/or length of the target character string may include: constructing an array with the same length as the standard information and empty content; accumulating the target character strings into the array according to their positions and/or lengths; counting, for each position of the array, the number of times the same data has been accumulated there; and determining whether the engine label is qualified according to these counts. The counts can be compared with a preset count: if the count for each position is less than the preset count, the engine label can be determined to be unqualified; if the count for each position is greater than or equal to the preset count, the engine label is determined to be qualified. For example, suppose the constructed array with the same length as the standard information and empty content is a six-position array (bits 0 to 5) of empty cells. The first accumulation of a target character string into the array, according to its position and/or length, writes 1, 2, 3, 4, 5, 6, and the second accumulation writes 1, 0, 0; the 0th position has then been accumulated with the same data 1 time, while positions 1 to 5 have been accumulated with the same data 0 times. It can be understood that the more often the same data is accumulated at a position of the array, the more likely it is that this data genuinely belongs at that position, so the actual content of the target character strings in the target region can be determined, which makes it convenient to decide whether the engine label is qualified.
Continuing with the above embodiment, determining whether the engine label is qualified according to the counts may include: obtaining the accumulated data whose count is greater than the preset count; obtaining the number of array positions holding such data; calculating the ratio of this number to the total length of the array; and constructing label information according to the ratio. If the construction succeeds, that is, at least one complete piece of information identical to the standard information can be constructed, the engine label is determined to be qualified; otherwise it is determined to be unqualified. The ratio can be compared with a preset value: if the ratio is smaller than the preset value, the construction is judged to have failed and the engine label can be determined to be unqualified; if the ratio is greater than or equal to the preset value, the construction can succeed and the engine label is determined to be qualified. Constructing label information according to the ratio in this way provides one concrete implementation of deciding whether the engine label is qualified and allows the decision to be implemented flexibly.
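A minimal sketch of the voting scheme described in the two paragraphs above; the parameter names preset_count and theta3 (the threshold named later in the Fig. 5 discussion) and their defaults are assumptions. Each matched target character string votes its characters into an array of the same length as the standard information, the most frequent character is kept per position once its count reaches the preset count, and the label is judged qualified only when enough positions are covered and the constructed information equals the standard information.

```python
from collections import Counter
from typing import List, Optional, Tuple

def is_label_qualified(standard: str,
                       matches: List[Tuple[str, int]],
                       preset_count: int = 1,
                       theta3: float = 0.8) -> bool:
    """matches: (target character string, its start position within the standard information)."""
    # Array with the same length as the standard information and empty content.
    votes: List[Counter] = [Counter() for _ in standard]

    # Accumulate every target character string into the array by position and length.
    for text, start in matches:
        for offset, ch in enumerate(text):
            pos = start + offset
            if 0 <= pos < len(standard):
                votes[pos][ch] += 1

    # Keep, per position, the data accumulated most often, once its count reaches preset_count.
    constructed: List[Optional[str]] = []
    for counter in votes:
        if counter:
            ch, count = counter.most_common(1)[0]
            constructed.append(ch if count >= preset_count else None)
        else:
            constructed.append(None)

    covered = sum(1 for c in constructed if c is not None)
    if covered / len(standard) <= theta3:
        return False  # construction fails: too few positions could be filled in
    # Qualified only when the constructed label information equals the standard information.
    return "".join(c if c is not None else "?" for c in constructed) == standard
```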
In an embodiment of the present application, after step 101 and before step 103, the method may further include: classifying the regions of interest to obtain a classification result; judging, according to the classification result, whether a target region belongs to a real region of interest; and deleting the target regions that do not belong to real regions of interest. It can be understood that a region of interest detected in step 101 may not be a real region of interest, so the regions of interest can be further classified, for example with a deep-learning-based target classification model, and the target regions that the classification result shows not to be real regions of interest are deleted.
In a practical application scenario of the present application, as shown in fig. 2 and fig. 3 (fig. 2 is a flowchart and fig. 3 a schematic diagram of the engine label detection method in that scenario), the process is as follows. The image to be detected corresponding to the engine label and the standard information of the engine label are acquired. A deep-learning-based target detection model detects regions of interest in the image; if regions of interest exist they are extracted, otherwise the relevant picture is stored, an unqualified detection result is output and a manual recheck is requested. The detected regions of interest are then overlapped and screened according to the region-of-interest screening method described above: when the preset condition is met, the region of interest with the smaller area in a combined pair is deleted, yielding the target regions. The screened target regions are classified with a deep-learning-based target classification model; a target region judged not to be a real region of interest is discarded, and a real region of interest proceeds to the next stage. Character recognition is then performed on the remaining target regions in turn with a deep-learning-based character recognition model, and each recognized character string is compared with the standard information: a string that contains no substring of the standard information is discarded, and for a string that does contain such a substring, the starting position and/or length of that substring in the standard information is recorded. Finally, character information matching analysis is performed according to the positions and/or lengths of the target character strings: if the synthesized label information is the same as the standard information, a qualified detection result for the engine label is output; otherwise an unqualified result is output, the relevant picture is stored, and staff are notified to perform a manual recheck.
Briefly, referring to fig. 3, the image to be detected is first fed into the target detection model, which takes the image as input and outputs the positions of the regions of interest (fig. 4 shows the classification and position regression of candidate boxes inside the target detection model). The regions are then filtered by the region-of-interest screening unit: when the ratio of the intersection area of two regions to their union area is greater than the preset ratio, the region of interest with the smaller area is deleted. The filtered region-of-interest images are fed to the target classification model, which removes the regions that are not real regions of interest. Finally, the character recognition model is applied to the resulting target regions and outputs the character strings contained in each target region. This detect-then-screen-then-recognize order ensures that the target regions finally passed to the character recognition model are more likely to be valid, which improves character recognition efficiency and reduces time consumption.
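To tie the stages of fig. 2 and fig. 3 together, the sketch below chains the helpers sketched earlier in this description (detect_regions_of_interest, screen_rois, match_target_string, is_label_qualified) with two further assumed hooks, classifier for the target classification model and recognizer for the character recognition model; it is an illustrative pipeline under those assumptions, not the patent's implementation.

```python
def inspect_engine_label(image, standard: str, roi_detector, classifier, recognizer,
                         preset_ratio: float = 0.5, preset_proportion: float = 0.5,
                         theta3: float = 0.8) -> bool:
    """End-to-end sketch: detect, screen, classify, recognize, match, then judge."""
    rois = detect_regions_of_interest(image, roi_detector, on_unqualified=lambda img: None)
    if not rois:
        return False                                        # no region of interest: unqualified
    boxes = [roi[1:] for roi in rois]                       # keep the geometry, drop the label
    targets = screen_rois(boxes, preset_ratio)              # overlap screening
    targets = [t for t in targets if classifier(image, t)]  # keep only real regions of interest
    matches = []
    for target in targets:
        recognized = recognizer(image, target)              # the second character string
        hit = match_target_string(standard, recognized, preset_proportion)
        if hit is not None:
            start, length = hit
            matches.append((standard[start:start + length], start))
    return is_label_qualified(standard, matches, theta3=theta3)
```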
In a practical application scenario of the present application, see fig. 5. A second character string recognized from the target region is compared with the first character string included in the standard information; whether the second character string contains at least one substring of the first character string is judged; if so, the length of the part of the second character string corresponding to that substring is obtained, and the proportion of that length to the length of the whole second character string is judged; when the proportion is greater than the preset proportion, the second character string is determined to be the target character string. The preset proportion determines a length threshold theta2 for the matched substring: if the length of the matched substring is greater than theta2, the character string is recorded, otherwise it is discarded. In fig. 5, with theta2 set to 3, a character string whose character information is "AB" has length less than 3 and is therefore discarded.
Continuing with the above example, whether the engine label is qualified may be determined according to the positions and/or lengths of the target character strings by constructing an array with the same length as the standard information and empty content, and matching the character information against it. In the example shown in fig. 5, an empty array of 12 positions (bits 0 to 11) can be constructed, each piece of character information being recorded as [start, length], where start is its starting position and length its character length. In fig. 5, after filtering, the following character information remains: character information 1, "12345", recorded as (start: 0, length: 5); character information 2, "XD1234", recorded as (start: 0, length: 4); and character information 3, "76452A9", recorded as (start: 5, length: 7). The data are accumulated into the empty array according to these starting positions and lengths. After all character information has been accumulated, the number of positions accumulated at least once is counted and its ratio to the length of the array is calculated. If the ratio is greater than a set threshold theta3, the character information is constructed, and when the constructed character information is the same as the standard information the qualified identifier of the engine label is returned directly; if the information cannot be constructed, the engine label is judged unqualified, the unqualified identifier is returned, the picture is stored and the relevant staff perform a manual recheck.
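As a usage check of the is_label_qualified sketch given earlier against the Fig. 5 numbers: the standard string below is inferred from the recorded (start, length) pairs and is an assumption, and the unmatched "XD" prefix of character information 2 is dropped because only its matched 4-character substring was recorded.

```python
standard = "1234576452A9"   # inferred: "12345" occupies bits 0-4 and "76452A9" bits 5-11
matches = [("12345", 0), ("1234", 0), ("76452A9", 5)]
# All 12 positions receive at least one vote, so the covered ratio is 12 / 12 = 1.0 > theta3,
# and the constructed string equals the standard information, so the label is qualified.
print(is_label_qualified(standard, matches, preset_count=1, theta3=0.8))  # True
```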
In an embodiment of the present application, the target detection model in step 101 may be obtained as follows: images of engine labels are acquired under a variety of conditions (such as different illumination, angles, scales, types and image qualities); the regions of interest are marked with rectangular boxes, and the category assigned to each region of interest of the engine label and the specific position information of its rectangular box are recorded; the target detection model is then trained on the recorded categories and rectangular-box positions in combination with the loss function L(x, c, l, g), which is defined by formulas ① to ⑧ in terms of the following quantities: N is the number of prior boxes matched as positive samples; x_ij^k is an indicator parameter, and x_ij^k = 1 indicates that the i-th prior box is matched to the j-th ground truth, whose category is k; c is the predicted category confidence; l is the predicted position of the corresponding bounding box relative to the prior box; g is the position parameter of the ground truth; d is the position parameter of the prior box; cx is the abscissa and cy the ordinate of the prior box center; w is the width of the prior box and h is its height.
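The eight formulas themselves appear only as images here. Because the quantities defined above match the standard SSD multibox loss term for term, the following LaTeX block restates that loss as a plausible reconstruction; it is an assumption based on the published SSD formulation, not a verbatim copy of formulas ① to ⑧.

```latex
% Overall objective (cf. ①): weighted sum of confidence and localization losses,
% averaged over the N positive prior boxes.
L(x, c, l, g) = \frac{1}{N}\Big(L_{conf}(x, c) + \alpha\, L_{loc}(x, l, g)\Big)

% Localization loss (cf. ②, ③): Smooth-L1 between predicted offsets l and
% encoded ground-truth offsets \hat{g}, over the matched (positive) prior boxes.
L_{loc}(x, l, g) = \sum_{i \in Pos}^{N} \sum_{m \in \{cx, cy, w, h\}}
    x_{ij}^{k}\, \operatorname{smooth}_{L1}\!\big(l_i^{m} - \hat{g}_j^{m}\big)

% Ground-truth encoding relative to the prior box d (cf. ④ to ⑦).
\hat{g}_j^{cx} = \frac{g_j^{cx} - d_i^{cx}}{d_i^{w}}, \qquad
\hat{g}_j^{cy} = \frac{g_j^{cy} - d_i^{cy}}{d_i^{h}}, \qquad
\hat{g}_j^{w}  = \log\frac{g_j^{w}}{d_i^{w}}, \qquad
\hat{g}_j^{h}  = \log\frac{g_j^{h}}{d_i^{h}}

% Confidence loss (cf. ⑧): softmax cross-entropy over positives and negatives.
L_{conf}(x, c) = -\sum_{i \in Pos}^{N} x_{ij}^{p}\,\log\hat{c}_i^{p}
                 \;-\; \sum_{i \in Neg} \log\hat{c}_i^{0},
\qquad \hat{c}_i^{p} = \frac{\exp(c_i^{p})}{\sum_{p'} \exp(c_i^{p'})}
```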
In an embodiment of the present application, detection with the target detection model in step 101 may proceed as follows: the image to be detected is input into the target detection model to obtain a plurality of candidate regions of interest, where the information of each candidate region is [label, x_center, y_center, width, height], as shown in fig. 5; label denotes the category of the candidate region of interest, (x_center, y_center) are the abscissa and ordinate of its center point, and (width, height) are its width and height.
In an embodiment of the present application, the target classification model in step 102 may be obtained as follows: images of engine labels are acquired under various conditions (such as different illumination, angles, scales, types and image qualities), the engine label images are detected with the target detection model, and the images of the candidate regions of interest are extracted; the candidate region-of-interest images are then classified and the data are labelled, the classification distinguishing images of candidate regions of interest from images that are not candidate regions of interest; finally, the target classification model is trained on the labelled data.
In an embodiment of the present application, the character recognition model in step 103 may be obtained as follows: the images of the target regions are used as training data for the model; the character information of each target-region image is obtained, a character file corresponding to each region-of-interest image is created, and the data are annotated; the character recognition model is then trained on the annotated data.
Furthermore, in an embodiment of the present application, there is also provided an electronic device, including: one or more processors; and a memory storing computer readable instructions that, when executed, cause the processor to perform the method of engine label detection as in any one of the above.
An embodiment of the present application also provides a computer-readable medium storing a computer program which, when executed by a processor, implements any one of the engine label detection methods described above.
For example, the computer readable instructions, when executed, cause the one or more processors to:
determining an interested area of the image to be detected corresponding to the engine label;
determining a target area according to the region of interest;
identifying a target character string in the target area based on the standard information of the engine label;
and recording the position and/or the length of the target character string corresponding to the standard information, and determining whether the engine label is qualified or not according to the position and/or the length of the target character string.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Program instructions which invoke the methods of the present application may be stored on a fixed or removable recording medium and/or transmitted via a data stream on a broadcast or other signal-bearing medium and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. The terms first, second, etc. are used to denote names, but not any particular order.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
The basic principles and the main features of the solution and the advantages of the solution have been shown and described above. It will be understood by those skilled in the art that the present solution is not limited to the embodiments described above, which are described in the specification and illustrated only to illustrate the principles of the solution, but that various changes and modifications may be made to the solution without departing from the spirit and scope of the solution, and these changes and modifications are intended to be within the scope of the claimed solution. The scope of the present solution is defined by the appended claims and equivalents thereof.

Claims (10)

1. A detection method of an engine label is characterized by comprising the following steps:
determining a region of interest of an image to be detected corresponding to the engine label;
determining a target area according to the region of interest;
identifying a target character string in the target area based on the standard information of the engine label;
and recording the position and/or the length of the target character string corresponding to the standard information, and determining whether the engine label is qualified or not according to the position and/or the length of the target character string.
2. The engine label detection method according to claim 1, wherein
the number of the regions of interest is at least 2;
the determining a target region according to the region of interest specifically includes:
performing pairwise combination overlapping processing on the regions of interest to obtain overlapping regions;
and determining the target area according to the overlapping area.
3. The method for detecting an engine label according to claim 2, wherein the determining the target area according to the overlap area specifically comprises:
calculating the ratio of the area of the intersection region to the area of the union region in the overlap region;
judging whether the ratio is larger than a preset ratio or not;
and if the ratio is judged to be larger than the preset ratio, deleting the region of interest with small area in the two combined regions of interest to obtain the target region.
4. The method for detecting an engine label according to claim 1, wherein the identifying a target character string in the target area based on the standard information of the engine label specifically includes:
comparing a second character string identified based on the target area with a first character string included in the standard information;
judging whether the second character string comprises at least one substring of the first character string;
and if the judgment result is yes, determining the second character string as the target character string.
5. The method for detecting an engine label according to claim 4, wherein the determining the second character string as the target character string specifically includes:
acquiring the length of the character string corresponding to the substring in the second character string;
judging the proportion of the length to the length of the whole second character string;
and when the proportion is judged to be larger than the preset proportion, determining the second character string as the target character string.
6. The method for detecting an engine label according to claim 1, wherein the determining whether the engine label is qualified according to the position and/or the length of the target character string specifically comprises:
constructing an array which has the same length as the standard information and has empty content;
performing data accumulation on the target character string to the array according to the position and/or the length;
counting the times that the accumulated data of each bit in the array are the same data;
and determining whether the engine label is qualified or not according to the times.
7. The method for detecting an engine label according to claim 6, wherein said determining whether the engine label is qualified according to the number of times specifically comprises:
acquiring accumulated data when the times are greater than preset times;
acquiring the number of bits of the accumulated data;
calculating the ratio of the number of bits to the total number of bits of the array;
constructing labeling information according to the ratio;
if the construction is successful, determining that the engine label is qualified; otherwise, determining that the engine label is unqualified.
8. The engine label detecting method according to any one of claims 1 to 7, further comprising, after the determining of the region of interest of the image to be detected corresponding to the engine label, before the identifying of the target character string in the target region based on the standard information of the engine label:
classifying the region of interest to obtain a classification result;
judging whether the target region belongs to a real region of interest or not according to the classification result;
deleting regions of the target region that do not belong to the true region of interest.
9. An electronic device, comprising:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the processor to perform the engine label detection method of any one of claims 1 to 8.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the engine label detection method according to any one of claims 1 to 8.
CN201911354821.4A 2019-12-22 2019-12-22 Engine label detection method, electronic device and computer readable storage medium Pending CN111104934A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911354821.4A CN111104934A (en) 2019-12-22 2019-12-22 Engine label detection method, electronic device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911354821.4A CN111104934A (en) 2019-12-22 2019-12-22 Engine label detection method, electronic device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111104934A true CN111104934A (en) 2020-05-05

Family

ID=70425190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911354821.4A Pending CN111104934A (en) 2019-12-22 2019-12-22 Engine label detection method, electronic device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111104934A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113887688A (en) * 2021-09-30 2022-01-04 广东省交通运输建设工程质量检测中心 Sample spot inspection management system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102008039121A1 (en) * 2008-08-22 2010-02-25 Volkswagen Ag Method for encoding character string of digital signature of manufacturer of vehicle, involves determining digital signature such that preset area of character string is changed so that another digital signature is same as former signature
CN104077556A (en) * 2013-03-26 2014-10-01 现代自动车株式会社 Apparatus and method for recognizing stamped character and system for detecting stamped depth of character using the same
US9036040B1 (en) * 2012-12-20 2015-05-19 United Services Automobile Association (Usaa) Vehicle identification number capture
CN108596177A (en) * 2018-05-09 2018-09-28 大连方盛科技有限公司 A kind of the area of computer aided discriminating method and system of motor vehicle VIN code rubbing films
CN109271967A (en) * 2018-10-16 2019-01-25 腾讯科技(深圳)有限公司 The recognition methods of text and device, electronic equipment, storage medium in image
CN109447076A (en) * 2018-09-20 2019-03-08 上海眼控科技股份有限公司 A kind of vehicle VIN code recognition detection method for vehicle annual test
CN110110715A (en) * 2019-04-30 2019-08-09 北京金山云网络技术有限公司 Text detection model training method, text filed, content determine method and apparatus
CN110598687A (en) * 2019-09-18 2019-12-20 上海眼控科技股份有限公司 Vehicle identification code detection method and device and computer equipment

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102008039121A1 (en) * 2008-08-22 2010-02-25 Volkswagen Ag Method for encoding character string of digital signature of manufacturer of vehicle, involves determining digital signature such that preset area of character string is changed so that another digital signature is same as former signature
US9036040B1 (en) * 2012-12-20 2015-05-19 United Services Automobile Association (Usaa) Vehicle identification number capture
CN104077556A (en) * 2013-03-26 2014-10-01 现代自动车株式会社 Apparatus and method for recognizing stamped character and system for detecting stamped depth of character using the same
CN108596177A (en) * 2018-05-09 2018-09-28 大连方盛科技有限公司 A kind of the area of computer aided discriminating method and system of motor vehicle VIN code rubbing films
CN109447076A (en) * 2018-09-20 2019-03-08 上海眼控科技股份有限公司 A kind of vehicle VIN code recognition detection method for vehicle annual test
CN109271967A (en) * 2018-10-16 2019-01-25 腾讯科技(深圳)有限公司 The recognition methods of text and device, electronic equipment, storage medium in image
CN110110715A (en) * 2019-04-30 2019-08-09 北京金山云网络技术有限公司 Text detection model training method, text filed, content determine method and apparatus
CN110598687A (en) * 2019-09-18 2019-12-20 上海眼控科技股份有限公司 Vehicle identification code detection method and device and computer equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
丁进超; 张伟伟; 吴训成: "License plate recognition algorithm based on a bidirectional long short-term memory network" *
杨亭亭; 曾洁; 刘宾坤; 曾奕哲; 张育华: "Design of a vehicle VIN recognition system for the Android platform" *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113887688A (en) * 2021-09-30 2022-01-04 广东省交通运输建设工程质量检测中心 Sample spot inspection management system
CN113887688B (en) * 2021-09-30 2024-03-08 广东省交通运输建设工程质量事务中心 Sample spot check management system

Similar Documents

Publication Publication Date Title
CN109886928B (en) Target cell marking method, device, storage medium and terminal equipment
CN110378258B (en) Image-based vehicle seat information detection method and device
CN111583180B (en) Image tampering identification method and device, computer equipment and storage medium
CN111274926B (en) Image data screening method, device, computer equipment and storage medium
CN110276295B (en) Vehicle identification number detection and identification method and device
CN111897962A (en) Internet of things asset marking method and device
CN111598827A (en) Appearance flaw detection method, electronic device and storage medium
CN112070135A (en) Power equipment image detection method and device, power equipment and storage medium
CN112651293B (en) Video detection method for road illegal spreading event
CN110765963A (en) Vehicle brake detection method, device, equipment and computer readable storage medium
CN112287884B (en) Examination abnormal behavior detection method and device and computer readable storage medium
CN115830399B (en) Classification model training method, device, equipment, storage medium and program product
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN113836850A (en) Model obtaining method, system and device, medium and product defect detection method
CN111507332A (en) Vehicle VIN code detection method and equipment
CN113240623A (en) Pavement disease detection method and device
CN115273115A (en) Document element labeling method and device, electronic equipment and storage medium
CN110728193B (en) Method and device for detecting richness characteristics of face image
CN111382638B (en) Image detection method, device, equipment and storage medium
CN113780116A (en) Invoice classification method and device, computer equipment and storage medium
CN111104934A (en) Engine label detection method, electronic device and computer readable storage medium
CN111931721B (en) Method and device for detecting color and number of annual inspection label and electronic equipment
CN113283396A (en) Target object class detection method and device, computer equipment and storage medium
CN115512098B (en) Bridge electronic inspection system and inspection method
CN111680691B (en) Text detection method, text detection device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200505