CN112508845A - Deep learning-based automatic OSD menu language detection method and system - Google Patents

Deep learning-based automatic OSD menu language detection method and system

Info

Publication number
CN112508845A
CN112508845A CN202011102734.2A
Authority
CN
China
Prior art keywords
osd menu
image
data set
module
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011102734.2A
Other languages
Chinese (zh)
Inventor
林志贤
滕斌
郭太良
林珊玲
谢斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Mindu Innovation Laboratory
Original Assignee
Fuzhou University
Mindu Innovation Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University, Mindu Innovation Laboratory filed Critical Fuzhou University
Priority to CN202011102734.2A priority Critical patent/CN112508845A/en
Publication of CN112508845A publication Critical patent/CN112508845A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G06T7/0004: Industrial image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/082: Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30108: Industrial image inspection
    • G06T2207/30121: CRT, LCD or plasma display

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Character Discrimination (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an automatic OSD menu language detection method based on deep learning, which comprises the following steps: step S1, acquiring an image data set of the OSD menu; step S2, preprocessing the images in the image data set and expanding the data set through image augmentation; step S3, constructing a deep neural network and performing feature extraction on the image data set; step S4, classifying the extracted features with a classifier to recognize OSD menu characters across different interfaces and shooting environments; step S5, using manually designed features and a template matching method for auxiliary refinement, further classifying the classifier's results to obtain the recognition result of the characters shown in the OSD menu; and step S6, locating places on the OSD menu that do not conform to the standard by using a matching and positioning algorithm and comparing against the standard reference table. The invention can accurately and quickly find and locate places where the menu display is nonstandard, providing a better solution for error detection of the OSD menu in the display production process.

Description

Deep learning-based automatic OSD menu language detection method and system
Technical Field
The invention relates to the field of character recognition, in particular to an automatic OSD menu language detection method and system based on deep learning.
Background
In recent years, owing to defects in manufacturing processes or in software and hardware design, displays inevitably exhibit various potential problems during production and development, so they must undergo strict test verification before shipping. Conventional testing comprises a series of tests such as functional tests and performance tests, and judging from the current state of testing in the industry, manufacturers mainly rely on manual or semi-automatic testing. In a conventional manual test, an operator first manually presses the display keys to set up the test environment of the OSD menu according to the instructions of each test case; then manually operates the test instrument to read data while observing the display with the naked eye; and finally records the test results and compiles a test report. Obviously, this mode of testing not only consumes a great deal of time and labor, but is also prone to missed or false detections caused by human negligence; the test results are difficult to guarantee, and demanding test requirements cannot be met.
OSD (On-Screen Display) is an important facility through which the user configures the display for the best visual experience, and it provides a channel for human-computer interaction. To avoid character errors in the OSD menu in production, the current solution is to manually compare the standard specification provided by the manufacturer with the OSD menu characters of each display leaving the factory, so as to determine whether those characters contain errors. Manual inspection involves a heavy workload and easily causes visual fatigue, which increases the false detection rate. Different scripts vary greatly in grammatical writing and font style. While character recognition systems are industrially mature, character defect detection systems are not, and there is no mature theory or technology for detecting multilingual character anomalies such as defects, missing strokes, and incomplete character display. Research on automating OSD menu character detection has therefore become a problem to be solved urgently.
At present, most existing image recognition methods rely on manually designed features, for example extracting features with HOG (Histogram of Oriented Gradients) or SIFT (Scale-Invariant Feature Transform) and feeding them into a classifier to complete pattern classification. Such methods depend entirely on human prior knowledge, and the design process is time-consuming, labor-intensive, and heavy in workload. Deep learning, by contrast, can learn features automatically through a deep neural network with many hidden layers, extracting from raw pixels features that are more essential, more abstract, and easier for the model to learn; with more training samples, the resulting model generalizes better.
Disclosure of Invention
In view of this, the present invention provides an automatic OSD menu language detection method and system based on deep learning, which can accurately and quickly find and locate places where the menu display is nonstandard, providing a better solution for error detection of the OSD menu in the display production process.
In order to achieve the purpose, the invention adopts the following technical scheme:
an automatic OSD menu language detection method based on deep learning comprises the following steps:
step S1, acquiring an image data set of the OSD menu;
step S2, preprocessing the images in the image data set and expanding the data set through image augmentation;
step S3, constructing a deep neural network and performing feature extraction on the image data set;
step S4, classifying the extracted features with a classifier to recognize OSD menu characters across different interfaces and shooting environments;
step S5, using manually designed features and a template matching method for auxiliary refinement, further classifying the classifier's results to obtain the recognition result of the characters shown in the OSD menu;
and step S6, locating places on the OSD menu that do not conform to the standard by using a matching and positioning algorithm and comparing against the standard reference table.
Further, the step S1 is specifically: photographing displays of different models with a camera to acquire OSD menu images, labeling the glyphs of different languages, and producing classification labels.
Further, the step S2 is specifically:
step S21, performing smoothing, denoising, and binarization on the images in the image data set, while normalizing them to a uniform size;
step S22, performing image augmentation on the preprocessed images to expand the data set.
Further, the image augmentation method comprises horizontally translating, vertically translating, and rotating the images.
Further, the step S3 is specifically: building a neural network through deep learning, adopting the Inception_V3 structural unit to realize parallel compression of the image; adopting a multi-layer pooling unit to realize parallel compression of the image, integrating features in parallel and extracting translation-invariant features to the greatest extent; adopting multi-layer filters in place of a large-size filter; and adopting batch normalization to standardize the internal data so that the output is normalized to a normal distribution with mean 0 and variance 1.
Further, the step S4 is specifically: classifying the extracted features through a classifier to recognize OSD menu characters across different interfaces and shooting environments, using the softmax function as the classifier, whose output model prediction probability is

\hat{p}_k = \sigma(s(x))_k = \frac{\exp(s_k(x))}{\sum_{j=1}^{n} \exp(s_j(x))}

where \hat{p}_k represents the probability that the current instance belongs to class k, n represents the total number of classes, s_k(x) denotes the score of the current instance x for class k, \exp(\cdot) denotes exponentiation of its argument, and \sum_{j=1}^{n} \exp(s_j(x)) is the sum of the exponentiated scores of instance x over all classes from 1 to n, with k and j each ranging from 1 to n.
Further, the step S6 is specifically: outputting the recognized OSD menu result in text form, and locating nonstandard OSD menu display positions by using an optimized matching and positioning algorithm in combination with each display's standard menu reference table.
An automatic OSD menu language detection system based on deep learning comprises: a data input module, an image preprocessing and image augmentation module, an intelligent recognition module, a character probability prediction module, a deep matching module, and a matching and positioning module, connected in sequence;
the data input module is used for acquiring the OSD menu image data set and producing classification labels;
the image preprocessing and image augmentation module is used for preprocessing the images in the image data set and expanding the data set through image augmentation;
the intelligent recognition module is used for extracting features from the preprocessed and augmented image data set to recognize OSD menu characters across different interfaces and shooting environments;
the character probability prediction module is used for classifying the extracted features through a classifier and outputting the model prediction results as probability values in descending order;
the deep matching module is used for performing template-matching-based auxiliary training on the results of the character probability prediction module, using manually designed features within the limited range given by the classification results, to further classify the classifier's results;
and the matching and positioning module is used for locating nonstandard OSD menu display positions in the recognition result by using an optimized matching and positioning algorithm in combination with each display's standard menu reference table.
Compared with the prior art, the invention has the following beneficial effects:
the invention can accurately and quickly search and position the place where the menu is displayed nonstandard, and provides a better solution for error detection of the osd menu in the production process of the display.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of the system architecture of the present invention;
FIG. 3 is a schematic structural diagram of the optimized multi-layer-filter Inception_V3 according to an embodiment of the invention;
FIG. 4 is a schematic diagram of a classification layer according to an embodiment of the invention;
FIG. 5 is a flow chart of matching and positioning.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
Referring to fig. 1, the present invention provides an automatic OSD menu language detection method based on deep learning, which comprises the following steps:
step S1, acquiring an image data set of the OSD menu and making classification labels: photographing displays of different models with a camera to acquire OSD menu images, labeling the glyphs of different languages, and producing classification labels;
step S2, preprocessing the images in the image data set and expanding the data set through image augmentation;
step S3, constructing a deep neural network and performing feature extraction on the image data set;
step S4, classifying the extracted features with a classifier to recognize OSD menu characters across different interfaces and shooting environments;
step S5, using manually designed features and a template matching method for auxiliary refinement, further classifying the classifier's results to obtain the recognition result of the characters shown in the OSD menu;
and step S6, locating places on the OSD menu that do not conform to the standard by using a matching and positioning algorithm and comparing against the standard reference table.
Referring to fig. 2, the automatic OSD menu language detection system based on deep learning of the invention comprises the following modules, connected in sequence: a data input module, an image preprocessing and image augmentation module, an intelligent recognition module, a character probability prediction module, a deep matching module, and a matching and positioning module;
the data input module is used for acquiring the OSD menu image data set and producing classification labels;
the image preprocessing and image augmentation module is used for preprocessing the images in the image data set and expanding the data set through image augmentation;
the intelligent recognition module is used for extracting features from the preprocessed and augmented image data set to recognize OSD menu characters across different interfaces and shooting environments;
the character probability prediction module is used for classifying the extracted features through a classifier and outputting the model prediction results as probability values in descending order;
the deep matching module is used for performing template-matching-based auxiliary training on the results of the character probability prediction module, using manually designed features within the limited range given by the classification results, to further classify the classifier's results;
and the matching and positioning module is used for locating nonstandard OSD menu display positions in the recognition result by using an optimized matching and positioning algorithm in combination with each display's standard menu reference table.
In this embodiment, the method shown in fig. 1 is implemented in the system shown in fig. 2, and the specific implementation process is as follows:
in the data input module, osd images were acquired by taking a photograph, for a total of 300 image data sets. The method comprises 200 images of a training set, 50 images of a verification set and 50 images of a test set. And marking the images according to the shapes of different languages to manufacture classification labels.
In the image preprocessing and image augmentation module, the training and test sets are normalized to a uniform size of 64×64 pixels, and the images are smoothed and binarized. The labels are unified into one-hot code format. The preprocessed images are then augmented through operations such as horizontal translation, vertical translation, and rotation, expanding the data set so as to train a model with stronger generalization ability.
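A minimal numpy-only sketch of the preprocessing and augmentation operations described above (the smoothing kernel, threshold value, and function names are illustrative choices, not taken from the patent):

```python
import numpy as np

def binarize(img, thresh=128):
    """Smooth then threshold: 3x3 box blur via padded neighborhood average, then binarize to {0, 1}."""
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    blur = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return (blur >= thresh).astype(np.uint8)

def translate(img, dy, dx):
    """Shift the image by (dy, dx) pixels, filling vacated pixels with 0 (background)."""
    out = np.zeros_like(img)
    h, w = img.shape
    ys, yd = (slice(dy, h), slice(0, h - dy)) if dy >= 0 else (slice(0, h + dy), slice(-dy, h))
    xs, xd = (slice(dx, w), slice(0, w - dx)) if dx >= 0 else (slice(0, w + dx), slice(-dx, w))
    out[ys, xs] = img[yd, xd]
    return out

def one_hot(labels, n_classes):
    """Encode integer class labels as one-hot vectors."""
    codes = np.zeros((len(labels), n_classes), dtype=np.float32)
    codes[np.arange(len(labels)), labels] = 1.0
    return codes
```

In practice a library such as OpenCV or Pillow would handle resizing, blurring, and rotation; the sketch only shows the shape of the shift-and-relabel pipeline.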
In the intelligent recognition module, automatic feature extraction is performed through a convolutional neural network, mainly adopting an improved structural model based on Inception_V3, shown in fig. 3. The Inception_V3 structural unit realizes parallel compression of the image, gently reducing the size of the feature representation and avoiding the severe compression of traditional convolution structures; the multi-layer pooling unit realizes parallel compression of the image, integrating features in parallel and extracting translation-invariant features to the greatest extent; multi-layer filters replace a large-size filter, avoiding redundant parameters, speeding up training, and reducing the amount of computation; and batch normalization standardizes the internal data so that the output is normalized to a normal distribution with mean 0 and variance 1, ensuring that the network can train at a higher learning rate and preventing gradient explosion or vanishing.
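The stated benefit of replacing a large-size filter with stacked multi-layer filters can be checked with quick parameter arithmetic (the channel width below is a hypothetical example, and biases are ignored):

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k x k convolution layer (biases ignored)."""
    return k * k * c_in * c_out

c = 64  # hypothetical channel width
single_5x5 = conv_params(5, c, c)       # one 5x5 layer: 25*c*c weights
stacked_3x3 = 2 * conv_params(3, c, c)  # two stacked 3x3 layers, same receptive field: 18*c*c
print(single_5x5, stacked_3x3)  # prints 102400 73728
```

Two stacked 3×3 layers cover the same 5×5 receptive field with 28% fewer weights, which is the redundancy reduction the embodiment describes.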
In the character probability prediction module, the softmax function is adopted as the classifier, and the output model prediction probability is

\hat{p}_k = \sigma(s(x))_k = \frac{\exp(s_k(x))}{\sum_{j=1}^{n} \exp(s_j(x))}

where \hat{p}_k represents the probability that the current instance belongs to class k, n represents the total number of classes, s_k(x) denotes the score of the current instance x for class k, \exp(\cdot) denotes exponentiation of its argument, and \sum_{j=1}^{n} \exp(s_j(x)) is the sum of the exponentiated scores of instance x over all classes from 1 to n, with k and j each ranging from 1 to n. Specifically, each image fed into the system for prediction is an instance; the input image passes through the feature extraction of the preceding network to the last layer, the softmax classification layer, which computes the probability of the image belonging to each class. The total number of classes is known once the class labels have been made. Referring to fig. 4, a diagram of the last classification layer, softmax is the activation function, i.e., the block σ in the figure. The score refers to a in the figure; a is obtained by multiplying the previous layer's output by that layer's weights and does not itself represent a probability, so softmax normalizes the scores to values between 0 and 1 that can represent probabilities.
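The softmax computation above can be sketched in a few lines of numpy; subtracting the maximum score before exponentiating is a standard numerical-stability step that leaves the resulting probabilities unchanged:

```python
import numpy as np

def softmax(scores):
    """p_k = exp(s_k(x)) / sum_j exp(s_j(x)), computed stably by shifting the scores."""
    shifted = scores - np.max(scores)  # does not change the ratios of exponentials
    e = np.exp(shifted)
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))  # hypothetical scores a from the last layer
print(probs.argmax())  # prints 0: the class with the highest score gets the highest probability
```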
In the deep matching module, deep matching is carried out with the help of traditionally hand-crafted features. In this embodiment, the conventionally hand-designed HOG feature is used, and cosine similarity performs further deep matching within the small range of the three most probable classes in the prediction result.
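A minimal sketch of this deep-matching step, with the HOG extraction stubbed out as precomputed feature vectors (all vectors, class counts, and probabilities are illustrative):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def deep_match(query_feat, class_templates, probs, top_k=3):
    """Re-rank only the top_k most probable classes from the softmax output
    by cosine similarity between the query's features and each class template."""
    candidates = np.argsort(probs)[::-1][:top_k]  # indices of the top_k probabilities
    return max(candidates, key=lambda c: cosine_similarity(query_feat, class_templates[c]))
```

Restricting the template comparison to the top three predicted classes keeps the matching cheap while letting the hand-crafted features correct near-miss classifications.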
In the matching and positioning module, referring to the flow chart in fig. 5, the contents of the standard reference table and the output recognition result are each sorted into categories, the recognition result is compared with the table contents for similarity, the positioning probability is calculated, the final position information is determined, and the position information together with the final detection result is output.
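One way to sketch the similarity comparison against the standard reference table is with the standard library's difflib; the menu entries and the 0.95 threshold below are made up for illustration:

```python
from difflib import SequenceMatcher

def locate_nonstandard(recognized, standard, threshold=0.95):
    """Compare recognized menu entries to the standard table entry by entry;
    return (index, recognized, expected, similarity) for entries below the threshold."""
    defects = []
    for i, (got, want) in enumerate(zip(recognized, standard)):
        sim = SequenceMatcher(None, got, want).ratio()
        if sim < threshold:
            defects.append((i, got, want, sim))
    return defects

# hypothetical menu entries: "Brightnes" drops a letter, "Langu4ge" garbles one
standard = ["Brightness", "Contrast", "Language"]
recognized = ["Brightnes", "Contrast", "Langu4ge"]
print([d[0] for d in locate_nonstandard(recognized, standard)])  # prints [0, 2]
```

The flagged indices, combined with each entry's on-screen coordinates, would give the position information that the module outputs.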
The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention should be covered by the present invention.

Claims (8)

1. An automatic OSD menu language detection method based on deep learning, characterized by comprising the following steps:
step S1, acquiring an image data set of the OSD menu;
step S2, preprocessing the images in the image data set and expanding the data set through image augmentation;
step S3, constructing a deep neural network and performing feature extraction on the image data set;
step S4, classifying the extracted features with a classifier to recognize OSD menu characters across different interfaces and shooting environments;
step S5, using manually designed features and a template matching method for auxiliary refinement, further classifying the classifier's results to obtain the recognition result of the characters shown in the OSD menu;
and step S6, locating places on the OSD menu that do not conform to the standard by using a matching and positioning algorithm and comparing against the standard reference table.
2. The method for automatically detecting OSD menu language based on deep learning according to claim 1, wherein the step S1 specifically comprises: photographing displays of different models with a camera to acquire OSD menu images, labeling the glyphs of different languages, and producing classification labels.
3. The method for automatically detecting OSD menu language based on deep learning according to claim 1, wherein the step S2 specifically comprises:
step S21, performing smoothing, denoising, and binarization on the images in the image data set, while normalizing them to a uniform size;
step S22, performing image augmentation on the preprocessed images to expand the data set.
4. The method for automated detection of deep learning based osd menu language according to claim 1, wherein the method for image augmentation comprises horizontal translation, vertical translation and rotation of the image.
5. The method for automatically detecting OSD menu language based on deep learning according to claim 1, wherein the step S3 specifically comprises: building a neural network through deep learning, adopting the Inception_V3 structural unit to realize parallel compression of the image; adopting a multi-layer pooling unit to realize parallel compression of the image, integrating features in parallel and extracting translation-invariant features to the greatest extent; adopting multi-layer filters in place of a large-size filter; and adopting batch normalization to standardize the internal data so that the output is normalized to a normal distribution with mean 0 and variance 1.
6. The method for automatically detecting OSD menu language based on deep learning according to claim 1, wherein the step S4 specifically comprises: classifying the extracted features through a classifier to recognize OSD menu characters across different interfaces and shooting environments, using the softmax function as the classifier, whose output model prediction probability is

\hat{p}_k = \sigma(s(x))_k = \frac{\exp(s_k(x))}{\sum_{j=1}^{n} \exp(s_j(x))}

where \hat{p}_k represents the probability that the current instance belongs to class k, n represents the total number of classes, s_k(x) denotes the score of the current instance x for class k, \exp(\cdot) denotes exponentiation of its argument, and \sum_{j=1}^{n} \exp(s_j(x)) is the sum of the exponentiated scores of instance x over all classes from 1 to n, with k and j each ranging from 1 to n.
7. The method for automatically detecting OSD menu language based on deep learning according to claim 1, wherein the step S6 specifically comprises: outputting the recognized OSD menu result in text form, and locating nonstandard OSD menu display positions by using an optimized matching and positioning algorithm in combination with each display's standard menu reference table.
8. An automatic OSD menu language detection system based on deep learning, characterized by comprising: a data input module, an image preprocessing and image augmentation module, an intelligent recognition module, a character probability prediction module, a deep matching module, and a matching and positioning module, connected in sequence;
the data input module is used for acquiring the OSD menu image data set and producing classification labels;
the image preprocessing and image augmentation module is used for preprocessing the images in the image data set and expanding the data set through image augmentation;
the intelligent recognition module is used for extracting features from the preprocessed and augmented image data set to recognize OSD menu characters across different interfaces and shooting environments;
the character probability prediction module is used for classifying the extracted features through a classifier and outputting the model prediction results as probability values in descending order;
the deep matching module is used for performing template-matching-based auxiliary training on the results of the character probability prediction module, using manually designed features within the limited range given by the classification results, to further classify the classifier's results;
and the matching and positioning module is used for locating nonstandard OSD menu display positions in the recognition result by using an optimized matching and positioning algorithm in combination with each display's standard menu reference table.
CN202011102734.2A 2020-10-15 2020-10-15 Deep learning-based automatic OSD menu language detection method and system Pending CN112508845A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011102734.2A CN112508845A (en) Deep learning-based automatic OSD menu language detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011102734.2A CN112508845A (en) Deep learning-based automatic OSD menu language detection method and system

Publications (1)

Publication Number Publication Date
CN112508845A true CN112508845A (en) 2021-03-16

Family

ID=74954123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011102734.2A Pending CN112508845A (en) Deep learning-based automatic OSD menu language detection method and system

Country Status (1)

Country Link
CN (1) CN112508845A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020018069A1 (en) * 2000-08-11 2002-02-14 Kim Byung Han Picture adjustment method and apparatus for video display appliance
CN101030216A (en) * 2007-04-02 2007-09-05 丁光耀 Method for matching text string based on parameter characteristics
CN101035221A (en) * 2006-03-06 2007-09-12 Lg电子株式会社 Method and apparatus for setting language in television receiver
CN101193231A (en) * 2006-11-28 2008-06-04 康佳集团股份有限公司 OSD control module and its OSD menu color setting method
CN101431633A (en) * 2008-12-04 2009-05-13 深圳创维-Rgb电子有限公司 Multi-language supported OSD display method and system
CN101609455A (en) * 2009-07-07 2009-12-23 哈尔滨工程大学 A kind of method of high-speed accurate single-pattern character string coupling
CN106843666A (en) * 2015-12-04 2017-06-13 小米科技有限责任公司 The method and device of display interface adjustment
CN108664996A (en) * 2018-04-19 2018-10-16 厦门大学 A kind of ancient writing recognition methods and system based on deep learning
CN110188750A (en) * 2019-05-16 2019-08-30 杭州电子科技大学 A kind of natural scene picture character recognition method based on deep learning
CN110955806A (en) * 2019-11-29 2020-04-03 国家电网有限公司客户服务中心 Character string matching method for Chinese text
CN111191087A (en) * 2019-12-31 2020-05-22 歌尔股份有限公司 Character matching method, terminal device and computer-readable storage medium

Similar Documents

Publication Publication Date Title
CN108664996B (en) Ancient character recognition method and system based on deep learning
CN108074231B (en) Magnetic sheet surface defect detection method based on convolutional neural network
CN109859164B (en) Method for visual inspection of PCBA (printed circuit board assembly) through rapid convolutional neural network
CN110245657B (en) Pathological image similarity detection method and detection device
CN109064454A (en) Product defects detection method and system
CN108805223B (en) Seal script identification method and system based on Incep-CapsNet network
CN114038037B (en) Expression label correction and identification method based on separable residual error attention network
CN110796131A (en) Chinese character writing evaluation system
CN114862838A (en) Unsupervised learning-based defect detection method and equipment
CN115205521B (en) Kitchen waste detection method based on neural network
CN111368682A (en) Method and system for detecting and identifying station caption based on faster RCNN
CN114998220A (en) Tongue image detection and positioning method based on improved Tiny-YOLO v4 natural environment
CN112381175A (en) Circuit board identification and analysis method based on image processing
CN115019294A (en) Pointer instrument reading identification method and system
CN111680577A (en) Face detection method and device
CN117437647B (en) Oracle character detection method based on deep learning and computer vision
CN111832499A (en) Simple face recognition classification system
CN112508845A (en) Deep learning-based automatic osd menu language detection method and system
Lakshmi et al. A new hybrid algorithm for Telugu word retrieval and recognition
CN116503674B (en) Small sample image classification method, device and medium based on semantic guidance
CN115375954B (en) Chemical experiment solution identification method, device, equipment and readable storage medium
CN115456968A (en) Capacitor appearance detection method and device, electronic equipment and storage medium
Nair et al. A Smarter Way to Collect and Store Data: AI and OCR Solutions for Industry 4.0 Systems
Zhang et al. Identification of Mongolian and Chinese Species in Natural Scenes Based on Convolutional Neural Network
CN114187290A (en) Pathological sample collecting and monitoring system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210316