CN113888753A - Industrial quality inspection image character matching method and device based on multi-feature cascade model - Google Patents

Industrial quality inspection image character matching method and device based on multi-feature cascade model Download PDF

Info

Publication number
CN113888753A
CN113888753A (Application No. CN202111137592.8A / CN202111137592A)
Authority
CN
China
Prior art keywords
character
gradient
image
feature
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111137592.8A
Other languages
Chinese (zh)
Inventor
邹子杰
杨玄同
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202111137592.8A priority Critical patent/CN113888753A/en
Publication of CN113888753A publication Critical patent/CN113888753A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Abstract

The invention belongs to the technical field of image processing, and particularly relates to an industrial quality inspection image character matching method and device based on a multi-feature cascade model. The method performs the following steps. Step 1: judge whether a character region exists in the target image; if so, determine the position of the character region in the target image and extract it. Step 2: perform image preprocessing on the extracted character region to obtain a preprocessed character region. Step 3: perform color feature extraction, gradient feature extraction and depth feature extraction on the preprocessed character region to obtain its color features, gradient features and depth features. By introducing multiple kinds of features, the method solves the recognition and matching of diverse fonts and characters, improves matching and recognition accuracy, and achieves high efficiency.

Description

Industrial quality inspection image character matching method and device based on multi-feature cascade model
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an industrial quality inspection image character matching method and device based on a multi-feature cascade model.
Background
Image matching refers to identifying corresponding points between two or more images through a matching algorithm. For example, in two-dimensional image matching, the correlation coefficients of equally sized windows in the target area and the search area are compared, and the center point of the window with the maximum correlation coefficient in the search area is taken as the corresponding point. In essence, it is a best-search problem that applies matching criteria under the condition of primitive similarity.
Image-text matching, as the name implies, measures the similarity between an image and a piece of text, and is a core algorithm of many pattern recognition tasks. For example, in the image-text cross-modal retrieval task, given a query text, images with similar content must be retrieved according to image-text similarity; in the image description generation task, given an image, similar texts must be retrieved according to the image content and used as (or used to further generate) textual descriptions of the image; in the image question-answering task, the image content containing the corresponding answer must be retrieved based on a given text question, and the retrieved visual content in turn retrieves similar text corpora as the predicted answer.
Such matching is widely applied in the field of industrial quality inspection, for example in production line quality inspection, logistics quality inspection and cargo quality inspection. In production line and logistics quality inspection, the goods information on the outer package is often judged and identified during production and transportation for further use, for example to check the same batch of goods for printing errors, or to check whether goods not belonging to the batch are present in logistics transportation. Therefore, the information of the current goods needs to be extracted by image-based visual means and then matched and retrieved against information built into the system.
In the prior art, Publication No. CN110020615A discloses a method for extracting characters from pictures and identifying their content. It improves matching accuracy by separating the picture subject from the character information before matching, but offers no good solution for matching different characters and fonts.
Publication No. CN105825084A discloses a method for matching and detecting an object with an image, which performs image matching through feature extraction to improve matching efficiency and accuracy; however, it extracts only a single feature, so its accuracy remains limited.
Disclosure of Invention
In view of the above, the main objective of the present invention is to provide a method and an apparatus for matching characters in an industrial quality inspection image based on a multi-feature cascade model.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
the industrial quality inspection image character matching method based on the multi-feature cascade model comprises the following steps:
step 1: judging whether a character area exists in the target image, if so, determining the position of the character area in the target image, and extracting the character area;
step 2: carrying out image preprocessing on the extracted character area to obtain a preprocessed character area;
step 3: respectively performing color feature extraction, gradient feature extraction and depth feature extraction on the preprocessed character region to respectively obtain color features, gradient features and depth features of the preprocessed character region;
step 4: performing feature splicing on the obtained color features, gradient features and depth features to obtain character region features, and performing feature matching identification on the character region features to complete image character matching.
Further, the method for image preprocessing of the extracted character region in step 2 performs the following steps:
step 2.1: carrying out histogram equalization processing on the character area to obtain a first preprocessing result;
step 2.2: sequentially performing gray-level binarization on the character region to obtain a first temporary result; performing erosion and dilation on the first temporary result to obtain a second temporary result; and finally performing gradient calculation on the second temporary result to obtain a third temporary result;
step 2.3: taking the third temporary result as a second preprocessing result; and taking the combination of the first temporary result and the third temporary result as a third preprocessing result.
Further, the method for extracting color features of the preprocessed character region in step 3 includes: performing histogram equalization on the preprocessed character region to obtain the histograms of its three RGB channels, denoted H_R, H_G and H_B; these are all one-dimensional features and constitute the first preprocessing result; H_R, H_G and H_B are concatenated to obtain a one-dimensional feature vector F_color, which is taken as the color feature.
Further, the method for extracting gradient features from the preprocessed character region in step 3 includes: performing gradient calculation on each pixel of the preprocessed character region to obtain a two-dimensional gradient vector formed by the gradients of all pixels; the two-dimensional gradient vector is then reduced in dimension to obtain a one-dimensional gradient vector, which is taken as the gradient feature F_grad.
Further, the method for extracting depth features of the preprocessed character regions in step 3 includes: after the gradient is calculated, connecting the gradient map of the preprocessed character region with the original map of the preprocessed character region, specifically: splicing the gray-scale image of one channel and the gradient image of one channel to form a matrix of 2 channels; and then, carrying out depth feature extraction on the connected 2-channel matrix by using a convolutional neural network to obtain depth features.
The invention further provides an industrial quality inspection image character matching device based on the multi-feature cascade model, which implements the above method.
The industrial quality inspection image character matching method and device based on the multi-feature cascade model have the following beneficial effects:
1. Matching of different fonts and characters is realized: the invention eliminates the need to attend to the morphological characteristics of each font and character, and solves recognition and matching across diverse fonts and characters. Unlike many prior-art schemes that require separate treatments to distinguish different character features, character matching is performed directly, which greatly reduces algorithm complexity and improves system efficiency.
2. High precision: the invention extracts multiple kinds of features, which greatly improves matching accuracy; compared with matching on a single extracted feature, multi-feature matching achieves higher accuracy. Meanwhile, since the dominant features of printed matter in industrial quality inspection are colors and font outlines, multi-feature matching can be rapidly deployed for different application scenarios. The method can reliably distinguish whether goods information has printing errors, whether goods batches were placed incorrectly, and so on, thereby improving accuracy.
3. High efficiency: when matching multiple features, the method processes them as one-dimensional vectors, which optimizes processing efficiency and facilitates industrial deployment and cost control.
Drawings
Fig. 1 is a schematic flow chart of a method for matching characters of an industrial quality inspection image based on a multi-feature cascade model according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of step 2 of the industrial quality inspection image text matching method based on the multi-feature cascade model according to the embodiment of the present invention.
Detailed Description
The method of the present invention will be described in further detail below with reference to the accompanying drawings and embodiments of the invention.
Example 1
As shown in fig. 1, the industrial quality inspection image character matching method based on the multi-feature cascade model executes the following steps:
step 1: judging whether a character area exists in the target image, if so, determining the position of the character area in the target image, and extracting the character area;
step 2: carrying out image preprocessing on the extracted character area to obtain a preprocessed character area;
step 3: respectively performing color feature extraction, gradient feature extraction and depth feature extraction on the preprocessed character region to respectively obtain color features, gradient features and depth features of the preprocessed character region;
step 4: performing feature splicing on the obtained color features, gradient features and depth features to obtain character region features, and performing feature matching identification on the character region features to complete image character matching.
Specifically, prior-art image-character matching and identification solves only single-scenario problems and performs poorly in practical industrial applications: it suits only a single font and character type, and when characters and fonts vary its results degrade, making it unsuitable for industrial use. Its processes and process links are complex, so practical efficiency is low; usually only a single feature is considered, so good results cannot be achieved across all scenarios; meanwhile, its computational complexity is high, which hinders industrial deployment and cost control.
To implement the invention, after the image to be processed is input, the character region image is first extracted and irrelevant non-character image regions are excluded. There are two extraction methods. The first uses a fixed template to cut out the text region at the corresponding position according to the application scenario; for example, if the characters are fixed at the upper-left corner, only the corresponding upper-left region needs to be extracted. The second extracts text regions using scene text detection techniques.
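The fixed-template variant of the first extraction method can be sketched as follows. This is an illustrative NumPy sketch, not the patent's implementation; the function name `crop_fixed_region` and the box coordinates are hypothetical choices.

```python
import numpy as np

def crop_fixed_region(img: np.ndarray, box=(0, 0, 32, 64)) -> np.ndarray:
    """Cut out the character region at a known fixed position.

    box = (y, x, height, width); here the top-left corner, as in the
    upper-left-corner example above.
    """
    y, x, h, w = box
    return img[y:y + h, x:x + w]

img = np.zeros((100, 200), dtype=np.uint8)   # toy stand-in for the input image
region = crop_fixed_region(img)              # 32x64 upper-left character region
```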
Second, some basic preprocessing of the image is performed: filtering for noise reduction, and scaling. Filtering and noise reduction eliminate noise in the input picture and enhance the precision of subsequent processing; common median filtering and the like can be used, selected as needed for different application scenarios. Scaling up/down samples the input picture to a fixed scale, which unifies the scale of measurements: because the feature characteristics of images at different scales differ markedly, measurement and judgment are only meaningful at the same scale.
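The basic preprocessing above (median filtering for noise reduction, then scaling to a unified size) can be sketched in NumPy. The 3x3 kernel, nearest-neighbour resampling and helper names are illustrative assumptions; the patent leaves these choices open.

```python
import numpy as np

def median_filter3(img: np.ndarray) -> np.ndarray:
    """3x3 median filter with edge padding (a common noise-reduction choice)."""
    p = np.pad(img, 1, mode="edge")
    windows = [p[y:y + img.shape[0], x:x + img.shape[1]]
               for y in range(3) for x in range(3)]
    return np.median(np.stack(windows, axis=0), axis=0).astype(img.dtype)

def resize_nn(img: np.ndarray, h: int, w: int) -> np.ndarray:
    """Nearest-neighbour up/down sampling to the unified (h, w) scale."""
    ys = (np.arange(h) * img.shape[0] / h).astype(int)
    xs = (np.arange(w) * img.shape[1] / w).astype(int)
    return img[ys][:, xs]

noisy = np.array([[10, 10, 10], [10, 200, 10], [10, 10, 10]], dtype=np.uint8)
smoothed = median_filter3(noisy)     # the 200 outlier is suppressed to 10
unified = resize_nn(smoothed, 4, 4)  # scaled to a common 4x4 measurement size
```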
Example 2
Referring to fig. 2, on the basis of the previous embodiment, the method for image preprocessing the extracted character region in step 2 performs the following steps:
step 2.1: carrying out histogram equalization processing on the character area to obtain a first preprocessing result;
step 2.2: sequentially performing gray-level binarization on the character region to obtain a first temporary result; performing erosion and dilation on the first temporary result to obtain a second temporary result; and finally performing gradient calculation on the second temporary result to obtain a third temporary result;
step 2.3: taking the third temporary result as a second preprocessing result; and taking the combination of the first temporary result and the third temporary result as a third preprocessing result.
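Steps 2.1 to 2.3 can be sketched as a minimal NumPy pipeline. The fixed binarization threshold, the 3x3 structuring element and the gradient-magnitude formulation are illustrative assumptions; the patent does not prescribe them.

```python
import numpy as np

def equalize_hist(gray: np.ndarray) -> np.ndarray:
    """Step 2.1: histogram equalization (first preprocessing result)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    lut = np.round(255 * (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-9))
    return lut.astype(np.uint8)[gray]

def binarize(gray: np.ndarray, thr: int = 128) -> np.ndarray:
    """Step 2.2a: gray-level binarization (first temporary result)."""
    return (gray >= thr).astype(np.uint8)

def erode_dilate(binary: np.ndarray) -> np.ndarray:
    """Step 2.2b: 3x3 erosion then dilation (second temporary result)."""
    h, w = binary.shape
    p = np.pad(binary, 1)
    eroded = np.stack([p[y:y + h, x:x + w]
                       for y in range(3) for x in range(3)]).min(axis=0)
    p = np.pad(eroded, 1)
    return np.stack([p[y:y + h, x:x + w]
                     for y in range(3) for x in range(3)]).max(axis=0)

def gradient_map(img: np.ndarray) -> np.ndarray:
    """Step 2.2c: per-pixel gradient magnitude (third temporary result)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

gray = np.tile(np.arange(0, 256, 32, dtype=np.uint8), (8, 1))  # 8x8 ramp
first = equalize_hist(gray)                 # first preprocessing result
second = erode_dilate(binarize(gray))       # cleaned binary mask
third = gradient_map(second)                # second preprocessing result
```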
Example 3
On the basis of the above embodiment, the method for extracting color features of the preprocessed character region in step 3 includes: performing histogram equalization on the preprocessed character region to obtain the histograms of its three RGB channels, denoted H_R, H_G and H_B; these are all one-dimensional features and constitute the first preprocessing result; H_R, H_G and H_B are concatenated to obtain a one-dimensional feature vector F_color, which is taken as the color feature.
Specifically, after preprocessing is finished, three kinds of features are extracted from the character region picture to be processed. Generally, in industrial quality inspection, the explicit character information comes from different goods being distinguished by different colors, and from the characters or languages of goods in different regions or of different types being inconsistent. Combined with features extracted by a deep-learning neural network, it can then be distinguished whether the goods information has printing errors, whether a goods batch was placed incorrectly, and so on.
The color feature F_color is a one-dimensional feature vector. It is obtained from the histogram features of the three RGB channels of the character image region to be processed: the histograms of the R, G and B channels give three one-dimensional features H_R, H_G and H_B, which are then concatenated in RGB order to obtain the one-dimensional color feature F_color.
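The color feature construction above can be sketched as follows: three 256-bin per-channel histograms (standing in for H_R, H_G, H_B) concatenated into one vector (standing in for F_color). The bin count and function name are illustrative assumptions.

```python
import numpy as np

def color_feature(rgb: np.ndarray) -> np.ndarray:
    """Concatenate the R, G, B channel histograms into the 1-D color feature."""
    return np.concatenate(
        [np.bincount(rgb[..., c].ravel(), minlength=256) for c in range(3)])

img = np.random.default_rng(0).integers(0, 256, (16, 16, 3), dtype=np.uint8)
f_color = color_feature(img)   # shape (768,) = 3 channels x 256 bins
```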
Example 4
On the basis of the above embodiment, the method for performing gradient feature extraction on the preprocessed character region in step 3 includes: performing gradient calculation on each pixel of the preprocessed character region to obtain a two-dimensional gradient vector formed by the gradients of all pixels; the two-dimensional gradient vector is then reduced in dimension to obtain a one-dimensional gradient vector, which is taken as the gradient feature F_grad.
Specifically, the gradient feature F_grad is obtained by calculating the gradient of each pixel of the character image region to be processed. The gradient calculation yields a two-dimensional matrix, so dimension reduction must be performed on this two-dimensional gradient vector to obtain the one-dimensional gradient feature F_grad.
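The gradient feature can be sketched as a per-pixel gradient magnitude flattened into one dimension (flattening being the simplest dimension reduction; the patent does not specify which reduction is used).

```python
import numpy as np

def gradient_feature(gray: np.ndarray) -> np.ndarray:
    """Per-pixel gradient magnitude, flattened into the 1-D feature F_grad."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy).ravel()

gray = np.tile(np.arange(8, dtype=float), (8, 1))  # horizontal intensity ramp
f_grad = gradient_feature(gray)                    # constant unit gradient
```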
Example 5
On the basis of the above embodiment, the method for performing depth feature extraction on the preprocessed character region in step 3 includes: after the gradient is calculated, connecting the gradient map of the preprocessed character region with the original map of the preprocessed character region, specifically: splicing the gray-scale image of one channel and the gradient image of one channel to form a matrix of 2 channels; and then, carrying out depth feature extraction on the connected 2-channel matrix by using a convolutional neural network to obtain depth features.
Specifically, the depth feature F_depth is extracted by a convolutional neural network. First, the gradient map of the character region to be processed must be connected with its original map, i.e. the one-channel gray-scale map and the one-channel gradient map are spliced into a 2-channel matrix. The purpose of this operation is to introduce the gradient feature as guidance, improving the saliency and discriminability of the feature vectors extracted by the convolutional neural network. There is no particular requirement on the convolutional neural network structure here.
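The 2-channel input construction can be sketched as below. Since the patent places no requirement on the CNN structure, a single random-weight 3x3 convolution with ReLU and global average pooling stands in for the backbone; it illustrates only the data flow (gray + gradient stacked into 2 channels, then pooled into a 1-D feature), not a trained model.

```python
import numpy as np

def depth_feature(gray: np.ndarray, n_filters: int = 8, seed: int = 0) -> np.ndarray:
    """Stack gray and gradient maps into 2 channels, then apply one
    illustrative conv layer and global average pooling to get F_depth."""
    gy, gx = np.gradient(gray.astype(float))
    x = np.stack([gray.astype(float), np.hypot(gx, gy)])   # (2, H, W) input
    w = np.random.default_rng(seed).standard_normal((n_filters, 2, 3, 3))
    h, wd = gray.shape
    p = np.pad(x, ((0, 0), (1, 1), (1, 1)))                # same-size padding
    out = np.zeros((n_filters, h, wd))
    for f in range(n_filters):                             # naive 3x3 convolution
        for c in range(2):
            for dy in range(3):
                for dx in range(3):
                    out[f] += w[f, c, dy, dx] * p[c, dy:dy + h, dx:dx + wd]
    out = np.maximum(out, 0)                               # ReLU
    return out.mean(axis=(1, 2))                           # F_depth: (n_filters,)

f_depth = depth_feature(np.arange(64.0).reshape(8, 8))
```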
Finally, after the color feature F_color, the gradient feature F_grad and the depth feature F_depth are obtained, a feature concat (feature splicing) is performed once on the three one-dimensional features, yielding the extracted feature F of the character region image to be processed; the feature F is a one-dimensional vector.
Example 6
An industrial quality inspection image character matching device based on the multi-feature cascade model, implementing the above method.
Specifically, when multi-feature processing is performed, one-dimensional vectors are used throughout to process the features. In the industrial quality inspection domain, character structures have low complexity and the number of characters is small (in contrast to, e.g., text scanning for translation, where the vocabulary is large), so one-dimensional feature vectors already achieve great accuracy; high-dimensional features bring no significant improvement. After the feature F is obtained, it only needs to be compared with the feature template stored in the computer system. For example, in industrial quality inspection, to detect whether the goods information of the same batch has printing errors, only the correct template feature vector F_t needs to be stored. After the feature F is obtained, the feature distance between F and the correct template feature vector F_t is calculated; the distance can be computed as Euclidean distance, cosine distance, etc., as needed. Finally, the calculated distance is compared with the preset threshold to decide whether the match is correct.
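The final matching step can be sketched as a template comparison with a preset threshold; the function name, threshold value and default metric are illustrative assumptions, while Euclidean and cosine distance are the measures the description names.

```python
import numpy as np

def matches_template(f: np.ndarray, f_t: np.ndarray,
                     thr: float = 0.9, metric: str = "cosine") -> bool:
    """Compare extracted feature F with the stored correct template F_t."""
    if metric == "cosine":
        sim = f @ f_t / (np.linalg.norm(f) * np.linalg.norm(f_t) + 1e-12)
        return bool(sim >= thr)            # high similarity => correct match
    return bool(np.linalg.norm(f - f_t) <= thr)  # small distance => correct

f_t = np.array([1.0, 0.0, 1.0])            # stored "correct" template
ok = matches_template(f_t.copy(), f_t)     # identical feature: matches
bad = matches_template(np.array([0.0, 1.0, 0.0]), f_t)  # orthogonal: no match
```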
The model used in depth feature extraction can yield accurate weights only if it is trained in advance. For training, two-class classification is adopted: pictures containing text regions and pictures not containing text regions are prepared, labeled "text" and "no text". In the training process, training pictures are processed in the forward pass, the loss is calculated, network parameters are updated by backpropagation, and updating stops once the loss no longer decreases.
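The two-class training loop above can be sketched with a toy model. Logistic regression on synthetic feature vectors stands in for the CNN (the real model backpropagates through convolutional layers in the same forward/loss/backward pattern); the data, learning rate and stopping tolerance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 16))        # 64 toy "images" as 16-D features
y = (X[:, 0] > 0).astype(float)          # labels: "text" (1) / "no text" (0)

w, b, lr = np.zeros(16), 0.0, 0.1
prev = np.inf
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # forward pass
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    if loss >= prev - 1e-6:              # stop once the loss no longer decreases
        break
    prev = loss
    g = p - y                            # backpropagation (logistic gradient)
    w -= lr * X.T @ g / len(y)
    b -= lr * g.mean()

acc = ((p > 0.5) == y.astype(bool)).mean()
```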
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiments, and will not be described herein again.
It should be noted that the system provided in the foregoing embodiment is only illustrated by its division into functional units; in practical applications, the functions may be allocated to different functional units as needed, i.e. the units or steps in the embodiments of the present invention may be further decomposed or combined. For example, the units of the foregoing embodiment may be combined into one unit, or further decomposed into multiple sub-units, so as to complete all or part of the functions described above. The names of the units and steps involved in the embodiments of the present invention are only for distinguishing them, and are not to be construed as unduly limiting the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Those of skill in the art would appreciate that the various illustrative elements, method steps, described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that programs corresponding to the elements, method steps may be located in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or unit/apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or unit/apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent modifications or substitutions of the related technical features may be made by those skilled in the art without departing from the principle of the present invention, and the technical solutions after such modifications or substitutions will fall within the protective scope of the present invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (6)

1. The industrial quality inspection image character matching method based on the multi-feature cascade model is characterized by comprising the following steps:
step 1: judging whether a character area exists in the target image, if so, determining the position of the character area in the target image, and extracting the character area;
step 2: carrying out image preprocessing on the extracted character area to obtain a preprocessed character area;
step 3: respectively performing color feature extraction, gradient feature extraction and depth feature extraction on the preprocessed character region to respectively obtain color features, gradient features and depth features of the preprocessed character region;
step 4: performing feature splicing on the obtained color features, gradient features and depth features to obtain character region features, and performing feature matching identification on the character region features to complete image character matching.
2. The method of claim 1, wherein the method of image preprocessing the extracted character region in step 2 performs the steps of:
step 2.1: carrying out histogram equalization processing on the character area to obtain a first preprocessing result;
step 2.2: sequentially performing gray-level binarization on the character region to obtain a first temporary result; performing erosion and dilation on the first temporary result to obtain a second temporary result; and finally performing gradient calculation on the second temporary result to obtain a third temporary result;
step 2.3: taking the third temporary result as a second preprocessing result; and taking the combination of the first temporary result and the third temporary result as a third preprocessing result.
3. The method as claimed in claim 2, wherein the step 3 method for extracting color features of the preprocessed character region includes: performing histogram equalization on the preprocessed character region to obtain the histograms of its three RGB channels, denoted H_R, H_G and H_B; these are all one-dimensional features and constitute the first preprocessing result; H_R, H_G and H_B are concatenated to obtain a one-dimensional feature vector F_color, which is taken as the color feature.
4. The method of claim 2, wherein the step 3 method for performing gradient feature extraction on the preprocessed character region includes: performing gradient calculation on each pixel of the preprocessed character region to obtain a two-dimensional gradient vector formed by the gradients of all pixels of the preprocessed character region; the two-dimensional gradient vector is then reduced in dimension to obtain a one-dimensional gradient vector, which is taken as the gradient feature F_grad.
5. The method of claim 2, wherein the step 3 of depth feature extraction of the preprocessed character regions comprises: after the gradient is calculated, connecting the gradient map of the preprocessed character region with the original map of the preprocessed character region, specifically: splicing the gray-scale image of one channel and the gradient image of one channel to form a matrix of 2 channels; and then, carrying out depth feature extraction on the connected 2-channel matrix by using a convolutional neural network to obtain depth features.
6. Industrial quality control image text matching device based on multi-feature cascade model for realizing the method of any one of claims 1 to 5.
CN202111137592.8A 2021-09-27 2021-09-27 Industrial quality inspection image character matching method and device based on multi-feature cascade model Pending CN113888753A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111137592.8A CN113888753A (en) 2021-09-27 2021-09-27 Industrial quality inspection image character matching method and device based on multi-feature cascade model


Publications (1)

Publication Number Publication Date
CN113888753A true CN113888753A (en) 2022-01-04

Family ID: 79007268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111137592.8A Pending CN113888753A (en) 2021-09-27 2021-09-27 Industrial quality inspection image character matching method and device based on multi-feature cascade model

Country Status (1)

Country Link
CN (1) CN113888753A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination