CN113408517B - Image display method and device and electronic equipment - Google Patents

Image display method and device and electronic equipment

Info

Publication number
CN113408517B
CN113408517B
Authority
CN
China
Prior art keywords
image
text
gray
feature
classification model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110737217.0A
Other languages
Chinese (zh)
Other versions
CN113408517A (en)
Inventor
张培龙
马小航
修平
步晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Mobile Communications Technology Co Ltd
Original Assignee
Hisense Mobile Communications Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Mobile Communications Technology Co Ltd filed Critical Hisense Mobile Communications Technology Co Ltd
Priority to CN202110737217.0A
Publication of CN113408517A
Application granted
Publication of CN113408517B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30176Document

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image display method and apparatus and an electronic device. The method includes: in response to an image display instruction, acquiring an image to be displayed and converting it into a gray-scale image; inputting a first image feature of the gray-scale image into a classification model, where the classification model is used for predicting, from the input first image feature, whether the gray-scale image is a text image or a non-text image; when the gray-scale image is determined to be a text image according to the prediction result of the classification model, improving the contrast of the image and displaying it; when the classification result predicted by the classification model is determined to be a non-text image, matching a second image feature of the gray-scale image one by one against at least one preset text image feature template; and when a successfully matched text image feature template exists, improving the contrast of the image and displaying it. Text images can thus be recognized more accurately, and the pockmarks caused by the dither algorithm are eliminated.

Description

Image display method and device and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image display method and apparatus, and an electronic device.
Background
The ink screen has the characteristics of eye protection and low power consumption and is well suited to users who like reading e-books. Its display modes include 2-gray-scale, 4-gray-scale, 16-gray-scale, and so on. In the 4-gray-scale and 16-gray-scale display modes the display effect is good, but refreshing is slow, flicker during refresh is severe, and, because the frame rate is low, stuttering during sliding operations is noticeable. In the 2-gray-scale display mode, each pixel has only the two colors pure black and pure white with no intermediate gray, and the refresh frame rate is relatively high, reaching 6-7 fps, which suits sliding operations such as web browsing.
At present, in the 2-gray-scale display mode (smooth mode), the image data to be displayed is first binarized. To avoid losing detail in the image data, a dither algorithm based on error diffusion is used, which renders intermediate gray levels in the average sense over the gray values of several pixels. However, in reading applications (Apps) in the related art, the main displayed content is text, for which the dither algorithm does not achieve a good display effect.
In addition, many reading Apps in the related art are designed for mobile phones with liquid crystal displays: on the text reading interface the background is set to a light color such as gray, yellow, or green, or the text is highlighted against a dark background such as dark black or dark green. As a result, the following problems exist when an ink-screen mobile phone displays a reading-App interface:
1) The display contrast is low. Because the ink screen does not emit light but is viewed by reflected ambient light, the brightness difference between the characters and the background color on a tinted reading interface is far smaller than on a liquid crystal screen, so the characters are not clear;
2) The display interface has pockmarks. Because the ink screen uses a dither algorithm to render each gray level in smooth mode, pockmarks appear when the text reading interface is displayed, and the characters are not clear.
Disclosure of Invention
The application aims to provide an image display method and apparatus and an electronic device, to solve the problems that, when an existing ink-screen mobile phone displays a reading App, the characters are not displayed clearly and the display effect is poor (pockmarks and the like).
In a first aspect, an embodiment of the present application provides an image display method, where the method includes:
responding to an image display instruction, acquiring an image to be displayed and converting the image into a gray image;
inputting the first image characteristics of the gray-scale image into a classification model, wherein the classification model is used for predicting the gray-scale image to be a text image or a non-text image according to the input first image characteristics;
when the gray level image is determined to be a text image according to the prediction result of the classification model, the contrast of the image is improved and displayed;
when the classification result predicted by the classification model is determined to be a non-text image, matching the second image characteristics of the gray level image with at least one preset text image characteristic template one by one;
and when the successfully matched text image feature template exists, improving the contrast of the image and displaying the image.
In some possible embodiments, inputting the first image feature of the image into a classification model comprises:
dividing an image to be displayed into a plurality of image blocks, and respectively inputting first image characteristics of the image blocks into a classification model;
determining the image to be a text image according to the prediction result of the classification model, including:
counting the number of image blocks whose classification result predicted by the classification model is a text image, and determining that the image is a text image when the counted number exceeds a set threshold.
In some possible embodiments, the classification model is trained as follows:
acquiring a sample set of a plurality of samples including image samples and corresponding sample labels, wherein the sample labels include text labels and non-text labels;
in a training stage, inputting the image sample of each sample into a network model and adjusting the model parameters with the goal that the model outputs the sample label of that sample;
in the testing stage, inputting the image samples in the samples into a network model, and calculating the classification precision and the recall rate of the network model according to the classification result and the sample labels output by the network model;
and when the classification precision and the recall rate meet the requirements, finishing the training to obtain a classification model.
In some possible embodiments, the enhancing the contrast of the image comprises:
carrying out gray-scale conversion on all pixels of the gray-scale image one by one through a pixel gray value conversion formula to obtain output pixel gray values, wherein the pixel gray value conversion formula is used for deepening the contrast between gray levels of different magnitudes;
and carrying out binarization processing on the output pixel gray value to obtain an image with improved contrast.
In some possible embodiments, the pixel gray value conversion formula includes:
I′(x) = 1/(1 + e^(−K1·I(x) + K2)), where I(x) is the pixel gray value of the gray-scale image, I′(x) is the output pixel gray value of the gray-scale image, K1 > 0, and K1, K2 are both preset real numbers.
In some possible embodiments, the method further comprises:
starting a reading application program, and judging whether the identifier of the reading application program is in a preset list or not;
when the application program is not in a preset list and an image to be displayed is acquired, outputting and displaying the image to be displayed;
when the application program is in a preset list and an image to be displayed is acquired, inputting the first image characteristics of the gray image into a classification model.
In some possible embodiments, the first image feature of the gray-scale image is obtained by feature extraction using a first feature extraction method, the second image feature of the gray-scale image is obtained by feature extraction using a second feature extraction method, and the first feature extraction method/second feature extraction method is any one of the following:
histogram of oriented gradients, local binary pattern features, Haar-like features, histogram statistical feature variance, DCT coefficient features; or:
matching the second image feature of the gray-scale image one by one against at least one preset text image feature template includes:
calculating the similarity of the second image feature of the image to the feature of the at least one preset text image feature template using any one of correlation, chi-square comparison, Bhattacharyya distance, Euclidean distance, Manhattan distance, and Chebyshev distance.
In a second aspect, an embodiment of the present application provides an image display apparatus, including:
the image acquisition module is used for responding to the image display instruction, acquiring an image to be displayed and converting the image into a gray image;
the image model first prediction module is used for inputting the first image feature of the gray-scale image into a classification model, where the classification model is used for predicting, according to the input first image feature, whether the gray-scale image is a text image or a non-text image;
the first contrast improving module is used for improving the contrast of the image and displaying when the gray level image is determined to be a text image according to the prediction result of the classification model;
the image model second prediction module is used for matching the second image characteristics of the gray level image with at least one preset text image characteristic template one by one when the classification result predicted by the classification model is determined to be a non-text image;
and the second contrast improving module is used for improving the contrast of the image and displaying the image when the successfully matched text image feature template exists.
In a third aspect, an embodiment of the present application provides an electronic device, including at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of image display provided by the first aspect.
In a fourth aspect, embodiments of the present application provide a computer storage medium storing a computer program, where the computer program is used to make a computer execute the image display method provided in the first aspect.
In the embodiment of the application, to solve the problems that an ink-screen mobile phone displaying a reading App shows unclear characters and poor display effects such as pockmarks, the first image feature is extracted and a first image recognition pass is performed with the image classification model to judge whether the current image is a text image; an image judged to be a text image has its contrast raised. For an image judged to be a non-text image, a second image recognition pass is performed by matching it one by one against at least one preset text image feature template, to verify whether the image judged non-text by the first pass really is a non-text image.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic flow chart of an image display method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating a method for training a classification model according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a method for enhancing contrast of an image according to an embodiment of the present disclosure;
FIG. 4 is a detailed flowchart of an image display method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an image display device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and in detail with reference to the accompanying drawings. In the description of the embodiments of the present application, "/" indicates "or"; for example, A/B may indicate A or B. "And/or" merely describes an association relationship between associated objects and means that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the description of the embodiments of the present application, unless otherwise specified, the term "plurality" means two or more, and other terms should be understood similarly. The preferred embodiments described herein are only for illustrating and explaining the present application and are not intended to limit it, and features in the embodiments and examples of the present application may be combined with each other without conflict.
To further explain the technical solutions provided by the embodiments of the present application, a detailed description is given below with reference to the accompanying drawings and specific embodiments. Although the embodiments of the present application provide the method steps shown in the following embodiments or figures, more or fewer steps may be included in the method based on conventional or non-inventive effort. In steps where no necessary causal relationship exists logically, the order of execution of these steps is not limited to that provided by the embodiments of the present application. During actual processing, or when executed by a control device, the method may be executed sequentially or in parallel according to the order shown in the embodiments or drawings.
In view of the problems in the related art that an ink-screen mobile phone displaying a reading App suffers from unclear character display, pockmarks, and other poor display effects, the application provides an image display method and apparatus and an electronic device that can raise the contrast of the displayed image and eliminate the pockmarks caused by the dither algorithm.
In view of the above, the inventive concept of the present application is: before image display is carried out, the processor responds to an image display instruction, obtains an image to be displayed and converts the image into a gray image; inputting the first image characteristics of the gray-scale image into a classification model, wherein the classification model is used for predicting the gray-scale image to be a text image or a non-text image according to the input first image characteristics; when the gray level image is determined to be a text image according to the prediction result of the classification model, the contrast of the image is improved and displayed; when the classification result predicted by the classification model is determined to be a non-text image, matching the second image characteristics of the gray level image with at least one preset text image characteristic template one by one; and when the successfully matched text image feature template exists, improving the contrast of the image and displaying the image.
The following describes an image display method in an embodiment of the present application in detail with reference to the drawings.
Example 1
Referring to fig. 1, a schematic flow chart of an image display method according to an embodiment of the present application is shown, including:
step 101, in response to an image display instruction, acquiring an image to be displayed and converting the image into a gray image.
Specifically, the image is a display frame of the mobile terminal and includes the interfaces of the various Apps and the Android system that a user may use, in particular the interfaces of reading Apps. In a reading App the displayed content is generally either a text image or a non-text image. A non-text image is one whose display content includes pictures and usually contains pixels of many gray values. A text image is one whose display content is characters with no pictures; its gray values are usually distributed over only a few levels between 0 and 255, often discontinuously. When the user uses the mobile terminal, the colored text image or non-text image is converted into a gray-scale image.
Step 102, inputting the first image characteristics of the gray-scale image into a classification model, wherein the classification model is used for predicting that the gray-scale image is a model of a text image or a non-text image according to the input first image characteristics.
Specifically, the first image feature of the gray-scale image is obtained by feature extraction using a first feature extraction method, where the first feature extraction method is any one of the following: histogram of oriented gradients, local binary pattern features, Haar-like features, histogram statistical feature variance, Discrete Cosine Transform (DCT) coefficient features.
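As an illustration of how such features can be computed, the following Python sketch implements two of the listed options: the histogram statistical feature variance and the DCT coefficient feature. It is a minimal sketch under assumed choices (NumPy/SciPy, 256 histogram bins, an 8×8 low-frequency DCT block), not the patent's reference implementation:

```python
# Minimal sketch of two of the listed feature extractors; bin count and
# DCT block size are illustrative assumptions, not values from the patent.
import numpy as np
from scipy.fft import dctn

def histogram_variance_feature(gray: np.ndarray) -> np.ndarray:
    # Normalized 256-bin gray-level histogram plus its variance.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    hist = hist.astype(np.float64) / max(hist.sum(), 1)
    return np.append(hist, hist.var())

def dct_coefficient_feature(gray: np.ndarray, k: int = 8) -> np.ndarray:
    # Keep the top-left k x k block of the 2-D DCT (the low frequencies),
    # which concentrates most of the block's energy.
    coeffs = dctn(gray.astype(np.float64), norm="ortho")
    return coeffs[:k, :k].ravel()
```

Text images, whose gray values cluster on a few levels, tend to produce spiky histograms, which is what makes such features separable by a classifier.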
The classification model in this application may be a machine learning classifier such as a Support Vector Machine (SVM), AdaBoost, or a decision tree, and the training process follows the general training method for machine learning classification models.
As an alternative embodiment, referring to fig. 2, the classification model is obtained by training in the following manner:
step 201, obtaining a sample set including a plurality of samples, wherein the samples include image samples and corresponding sample labels, and the sample labels include text labels and non-text labels;
the sample label specifically refers to labeling each image as 0 or 1, the text label class is 1, and the non-text label is 0 after the image sample is obtained.
Step 202, in a training stage, inputting an image sample in a sample into a network model, and performing model parameters by taking a sample label in the output sample as a target;
step 203, in the testing stage, inputting the image samples in the samples into the network model, and calculating the classification precision and the recall rate of the network model according to the classification result and the sample labels output by the network model;
specifically, a sample set is divided into a training set and a verification set, in a training stage, model parameters are adjusted by using samples in the training set, and in a training stage, corresponding model parameters are adjusted according to different network models, so that training optimization of the network models is realized. For example, taking a Radial Basis Function (RBF) SVM model as an example, the penalty coefficient C and the RBF kernel parameter gamma are adjusted, so that in the testing stage, the testing accuracy of the classification model is high and the recall rate can be properly reduced, and overfitting of the training set is avoided, specifically, the accuracy reaches 96% in the patent, and the recall rate is 80%.
Step 204, when the classification precision and the recall rate meet the requirements, finishing the training to obtain the classification model.
Specifically, when the classification precision and the recall are determined to meet the requirement of high precision with a suitably lower recall, the final classification model is obtained.
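The following Python sketch shows what the described training loop could look like for the RBF-SVM example; scikit-learn, the split ratio, and the concrete C/gamma values are assumptions for illustration, and the acceptance thresholds mirror the 96% precision / 80% recall figures mentioned above:

```python
# Minimal training sketch (assumed scikit-learn API); the feature matrix comes
# from an extractor such as the sketch above; labels: 1 = text, 0 = non-text.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

def train_text_image_classifier(features: np.ndarray, labels: np.ndarray) -> SVC:
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.2, random_state=0)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # C/gamma are illustrative
    clf.fit(X_tr, y_tr)
    y_pred = clf.predict(X_te)
    precision = precision_score(y_te, y_pred)
    recall = recall_score(y_te, y_pred)
    # Accept high precision even with a lower recall, keeping false
    # "text image" decisions rare and avoiding overfitting.
    if precision >= 0.96 and recall >= 0.80:
        return clf
    raise RuntimeError(f"model rejected: precision={precision:.2f}, recall={recall:.2f}")
```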
Step 103, when the gray-scale image is determined to be a text image according to the prediction result of the classification model, the contrast of the image is improved and the image is displayed.
Specifically, the classification model obtained by the above method has high precision; therefore, when a gray-scale image is determined to be a text image, a first identifier ID0 corresponding to the text image is output. ID0 identifies the image as a text image recognized by the classification model.
Step 104, when the classification result predicted by the classification model is determined to be a non-text image, matching the second image feature of the gray-scale image one by one against at least one preset text image feature template.
Specifically, since the recall of the classification model is somewhat low, the image needs to be recognized further when the gray-scale image is determined to be a non-text image.
The second image feature of the gray-scale image is obtained by feature extraction using a second feature extraction method, where the second feature extraction method is any one of the following: histogram of oriented gradients, local binary pattern features, Haar-like features, histogram statistical feature variance, DCT coefficient features. The second image feature may be the same as or different from the first image feature.
A text image feature template is one of the N features of text images that are computed in advance and stored in a feature library Lf. As an optional implementation, matching the second image feature of the gray-scale image one by one against the preset at least one text image feature template includes:
and calculating the similarity of the second image characteristic of the gray level image and the characteristic of at least one preset text image characteristic template one by one.
Step 105, when a successfully matched text image feature template exists, improving the contrast of the image and displaying the image.
Specifically, when the highest similarity exceeds a set similarity threshold, it is determined that a successfully matched text image feature template exists, and a second identifier IDn corresponding to the text image is output. IDn identifies the image as a text image that successfully matched the template labeled n among the at least one preset text image feature template, where n ranges from 1 to N and N is the number of preset text image feature templates. For example, with 10 preset text image feature templates labeled 1-10, the similarity between the second image feature and each of the 10 templates is calculated one by one; if the template labeled 9 is determined to have the highest similarity to the second image feature, the second identifier ID9 is output.
By image feature extraction and image classification, whether the current display content is a text image is identified automatically, the image contrast is raised, and the pockmarks caused by the dither algorithm are eliminated.
Referring to fig. 3, improving the contrast of the image includes:
Step 301, carrying out gray-level conversion on all pixels of the gray-scale image one by one through a pixel gray value conversion formula to obtain output pixel gray values, wherein the pixel gray value conversion formula is used for deepening the contrast between gray levels of different magnitudes;
Step 302, carrying out binarization processing on the output pixel gray values to obtain an image with improved contrast.
As an optional implementation, the pixel gray value conversion formula includes:
I′(x) = 1/(1 + e^(−K1·I(x) + K2)), where I(x) is the pixel gray value of the gray-scale image, I′(x) is the output pixel gray value of the gray-scale image, K1 > 0, and K1, K2 are both preset real numbers. To achieve the contrast-raising effect, the value ranges are 0.0039 < K1 < 0.039 and 0 < K2 < 5.
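A minimal Python sketch of this transform, assuming 8-bit gray input, example K1/K2 values inside the stated ranges, and a 0.5 binarization threshold (the threshold is an assumption):

```python
# Sigmoid-style contrast raising followed by binarization; k1/k2 are merely
# examples within 0.0039 < K1 < 0.039 and 0 < K2 < 5.
import numpy as np

def raise_contrast(gray: np.ndarray, k1: float = 0.02, k2: float = 2.5) -> np.ndarray:
    i = gray.astype(np.float64)                # pixel gray values I(x) in [0, 255]
    out = 1.0 / (1.0 + np.exp(-k1 * i + k2))   # output gray values I'(x) in (0, 1)
    # Binarize: pixels brighter than the sigmoid midpoint become pure white.
    return np.where(out >= 0.5, 255, 0).astype(np.uint8)
```

With k1 = 0.02 and k2 = 2.5 the sigmoid midpoint falls at I(x) = 125, so dark characters are pushed to pure black and a light background to pure white.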
Specifically, K1 is a real number preset for ID0, and K2 is the preset real number corresponding to the currently output IDn, chosen from the real numbers preset in advance for the different IDn. All pixels of the gray-scale image are fed into the conversion formula, an output gray value is computed for each pixel value, the output gray values are binarized, and the binarization result is applied to the text image, so that the background and foreground of the text image are separated and the contrast between the foreground characters and the background is increased.
As an alternative embodiment, inputting the first image feature of the image into a classification model includes:
dividing an image to be displayed into a plurality of image blocks, and respectively inputting first image characteristics of each image block into a classification model;
determining the image to be a text image according to the prediction result of the classification model, wherein the method comprises the following steps:
counting the number of image blocks whose classification result predicted by the classification model is a text image, and determining that the image is a text image when the counted number exceeds a set threshold.
In the embodiment of the present invention, the second image feature may be obtained by extracting a whole image feature of an image to be displayed, and as another optional implementation, the second image feature may also be a feature obtained by combining features extracted from image blocks obtained by dividing the image.
Specifically, the image to be displayed is divided into m rows and n columns of image blocks, and image features are extracted from each image block in turn, giving m×n image features. The feature of each image block among the m×n features is used as a first image feature and input into the classification model for classification. The number of the m×n image blocks recognized as text images is counted; if the counted number exceeds a set threshold, the image composed of all the image blocks is judged to be a text image and the first identifier ID0 is output. If the counted number does not exceed the set threshold, the classification result is determined to be a non-text image, and the second image feature of the gray-scale image is matched one by one against the at least one preset text image feature template.
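A minimal sketch of this block-voting scheme (the block grid size and vote threshold are illustrative; extract_feature and clf stand for the feature extractor and the trained classifier sketched above):

```python
# Divide into m x n blocks, classify each block, and vote.
import numpy as np

def classify_by_blocks(gray: np.ndarray, clf, extract_feature,
                       m: int = 4, n: int = 4, vote_threshold: int = 10) -> bool:
    h, w = gray.shape
    votes = 0
    for r in range(m):
        for c in range(n):
            block = gray[r * h // m:(r + 1) * h // m,
                         c * w // n:(c + 1) * w // n]
            feat = extract_feature(block).reshape(1, -1)  # first image feature
            votes += int(clf.predict(feat)[0] == 1)       # 1 = text image
    return votes > vote_threshold  # text image when the count exceeds the threshold
```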
As an optional implementation, the method further includes:
starting a reading application program, and judging whether the identifier of the reading application program is in a preset list or not;
when the application program is not in a preset list and an image to be displayed is acquired, outputting and displaying the image to be displayed;
when the application program is in a preset list and an image to be displayed is acquired, inputting the first image characteristics of the gray image into a classification model.
The identifier of the reading application may be, but is not limited to, the application package name of the currently running application. In addition, a preset list is configured in advance in the application and contains the package names of reading applications. If the package name of the currently running application is detected to be in the preset list, the image to be displayed is acquired and converted into a gray-scale image and the subsequent steps are executed; otherwise the subsequent steps are not executed and the flow ends. Detecting and filtering by the package name of the currently running application excludes programs without a reading function, reduces unnecessary recognition computation, and lowers the power consumption of the terminal.
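A minimal sketch of this filter (the package names are hypothetical placeholders):

```python
# Only run the recognition pipeline for whitelisted reading apps.
READING_APP_WHITELIST = {"com.example.reader", "com.example.ebook"}  # preset list

def should_run_recognition(current_package_name: str) -> bool:
    return current_package_name in READING_APP_WHITELIST
```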
As an optional implementation, matching the second image feature of the gray-scale image one by one against at least one preset text image feature template includes:
calculating the similarity of the second image feature of the image to the feature of the at least one preset text image feature template using any one of correlation, chi-square comparison, Bhattacharyya distance, Euclidean distance, Manhattan distance, and Chebyshev distance.
Specifically, the second image feature is denoted Vt. The feature library Lf stores N feature vectors corresponding one to one with the N text image feature templates, each denoted Vn; the similarity between Vt and each Vn is calculated one by one, with n = 1, 2, 3, …, N.
1. When the similarity is calculated using correlation:
r(Vt, Vn) = cov(Vt, Vn) / sqrt(var[Vt] · var[Vn])
where cov(Vt, Vn) is the covariance of Vt and Vn, var[Vt] is the variance of Vt, and var[Vn] is the variance of Vn. r(Vt, Vn) takes values in [0, 1]: when Vt and Vn are exactly the same the similarity value is 1, and when they are entirely different it is 0.
2. When the similarity is calculated by chi-square comparison:
d(Vt, Vn) = Σi (Vt(i) − Vn(i))² / Vt(i)
When the two features are identical, d(Vt, Vn) is 0.
3. When the similarity is calculated using the Bhattacharyya distance:
ρ(Vt, Vn) = (1 / sqrt(mean(Vt) · mean(Vn) · N²)) · Σi sqrt(Vt(i) · Vn(i))
where mean(Vt) and mean(Vn) are the means of the feature vectors Vt and Vn, and N here is the number of elements of the feature vectors. The Bhattacharyya calculation gives 1 when Vt and Vn match completely and 0 when they do not match at all.
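The three measures can be sketched in Python as follows; the exact chi-square and Bhattacharyya forms follow common histogram-comparison conventions and are assumptions about the precise formulas intended:

```python
# Minimal sketches of the three similarity measures for 1-D feature vectors.
import numpy as np

def correlation_similarity(vt: np.ndarray, vn: np.ndarray) -> float:
    cov = ((vt - vt.mean()) * (vn - vn.mean())).mean()
    return float(cov / np.sqrt(vt.var() * vn.var()))   # 1 when identical

def chi_square_distance(vt: np.ndarray, vn: np.ndarray) -> float:
    eps = 1e-12                                        # guard against division by zero
    return float(np.sum((vt - vn) ** 2 / (vt + eps)))  # 0 when identical

def bhattacharyya_similarity(vt: np.ndarray, vn: np.ndarray) -> float:
    denom = np.sqrt(vt.mean() * vn.mean()) * vt.size
    return float(np.sum(np.sqrt(vt * vn)) / denom)     # 1 on a complete match
```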
Fig. 4 is a flowchart illustrating an image display method according to an embodiment of the present application, including:
step 401, starting a reading application program;
step 402, determining whether the identifier of the reading application program is in a preset list, if yes, executing step 403, and if not, executing step 412;
step 403, when the application program is determined to be in a preset list, acquiring an image to be displayed;
step 404, converting the image into a gray level image;
step 405, inputting a first image feature of the gray level image into a classification model;
step 406, determining whether the classification model predicts the gray-scale image to be a text image or a non-text image; if a text image, performing step 407, and if not, performing step 408;
step 407, when the gray level image is determined to be a text image, increasing the contrast of the image, and continuing to execute step 412;
step 408, when the classification result predicted by the classification model is determined to be a non-text image, matching the second image characteristics of the gray level image with at least one preset text image characteristic template one by one;
step 409, judging whether the matching is successful; if yes, executing step 410, and if not, executing step 411;
step 410, determining that a successfully matched text image feature template exists, and executing step 407;
step 411, determining that no successfully matched text image feature template exists, and executing step 412;
at step 412, the image is displayed.
Example 2
Based on the same inventive concept, the present application also provides an image display apparatus, as shown in fig. 5, including:
an image obtaining module 501, configured to obtain an image to be displayed and convert the image into a grayscale image in response to an image display instruction;
an image model first prediction module 502, configured to input a first image feature of the gray-scale image into a classification model, where the classification model is configured to predict, according to the input first image feature, whether the gray-scale image is a text image or a non-text image;
a first contrast raising module 503, configured to raise the contrast of the image and display the image when the grayscale image is determined to be a text image according to the prediction result of the classification model;
the image model second prediction module 504 is configured to match second image features of the grayscale image with at least one preset text image feature template one by one when it is determined that the classification result predicted by the classification model is a non-text image;
and a second contrast promoting module 505, configured to promote the contrast of the image and display the image when there is a text image feature template that is successfully matched.
Optionally, the image model first prediction module 502 is configured to perform the following steps:
dividing an image to be displayed into a plurality of image blocks, and respectively inputting first image characteristics of the image blocks into a classification model;
optionally, the first contrast-up module 503 is configured to perform:
counting the number of image blocks whose classification result predicted by the classification model is a text image, and determining that the image is a text image when the counted number exceeds a set threshold.
Optionally, the classification model is trained in the following manner:
obtaining a sample set comprising a plurality of samples, wherein the samples comprise image samples and corresponding sample labels, and the sample labels comprise text labels and non-text labels;
in the training stage, inputting the image sample of each sample into the network model and adjusting the model parameters with the goal that the model outputs the sample label of that sample;
in the testing stage, inputting the image samples in the samples into a network model, and calculating the classification precision and the recall rate of the network model according to the classification result and the sample labels output by the network model;
and when the classification precision and the recall rate meet the requirements, finishing the training to obtain a classification model.
Optionally, the first and second contrast-up modules 503 and 505 are configured to perform the following steps:
carrying out gray-level conversion on all pixels of the gray-scale image one by one through a pixel gray value conversion formula to obtain output pixel gray values, wherein the pixel gray value conversion formula is used for deepening the contrast between gray levels of different magnitudes;
and carrying out binarization processing on the output pixel gray value to obtain an image with improved contrast.
Optionally, the pixel gray value conversion formula includes:
I′(x) = 1/(1 + e^(−K1·I(x) + K2)), where I(x) is the pixel gray value of the gray-scale image, I′(x) is the output pixel gray value of the gray-scale image, K1 > 0, and K1, K2 are both preset real numbers, with 0.0039 < K1 < 0.039 and 0 < K2 < 5.
Optionally, the apparatus further comprises a determining reading application identification module 506, wherein the determining reading application identification module 506 is configured to perform the following steps:
starting a reading application program, and judging whether the identifier of the reading application program is in a preset list or not;
when the application program is not in a preset list and an image to be displayed is acquired, outputting and displaying the image to be displayed;
when the application program is in a preset list and an image to be displayed is acquired, inputting the first image characteristics of the gray image into a classification model.
Optionally, the first image feature of the gray-scale image is obtained by feature extraction using a first feature extraction method, the second image feature of the gray-scale image is obtained by feature extraction using a second feature extraction method, and the first feature extraction method/second feature extraction method is any one of the following:
histogram of oriented gradients, local binary pattern features, Haar-like features, histogram statistical feature variance, DCT coefficient features; or
matching the second image feature of the gray-scale image one by one against at least one preset text image feature template includes:
calculating the similarity of the second image feature of the image to the feature of the at least one preset text image feature template using any one of correlation, chi-square comparison, Bhattacharyya distance, Euclidean distance, Manhattan distance, and Chebyshev distance.
Having described the image display method and apparatus according to an exemplary embodiment of the present application, an electronic device according to another exemplary embodiment of the present application is next described.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method, or program product. Accordingly, various aspects of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
In some possible implementations, an electronic device according to the present application may include at least one processor, and at least one memory. Wherein the memory stores program code which, when executed by the processor, causes the processor to perform the steps in the image display method according to various exemplary embodiments of the present application described above in the present specification.
The electronic device 130 according to this embodiment of the present application, i.e., the image display device described above, is described below with reference to fig. 6. The electronic device 130 shown in fig. 6 is only an example and should not impose any limitation on the functions and application range of the embodiments of the present application.
As shown in fig. 6, the electronic device 130 is represented in the form of a general electronic device. The components of the electronic device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 that connects the various system components (including the memory 132 and the processor 131).
Bus 133 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The memory 132 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323.
Memory 132 may also include a program/utility 1325 having a set (at least one) of program modules 1324, such program modules 1324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which or some combination thereof may comprise an implementation of a network environment.
The electronic device 130 may also communicate with one or more external devices 134 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with the electronic device 130, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 130 to communicate with one or more other electronic devices. Such communication may occur via input/output (I/O) interfaces 135. Also, the electronic device 130 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 136. As shown, the network adapter 136 communicates with other modules for the electronic device 130 over the bus 133. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 130, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, to name a few.
In some possible embodiments, various aspects of an image display method provided by the present application may also be implemented in the form of a program product including program code for causing a computer device to perform the steps of an image display method according to various exemplary embodiments of the present application described above in this specification when the program product is run on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product of the embodiments of the present application may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on an electronic device. However, the program product of the present application is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of remote electronic devices, the remote electronic device may be connected to the consumer electronic device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external electronic device (for example, through the internet using an internet service provider).
It should be noted that although in the above detailed description several units or sub-units of the apparatus are mentioned, such a division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and block diagrams, and combinations of flows and blocks in the flowchart illustrations and block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (8)

1. An image display method, characterized in that the method comprises:
responding to an image display instruction, acquiring an image to be displayed and converting the image into a gray image;
inputting the first image characteristics of the gray-scale image into a classification model, wherein the classification model is used for predicting the gray-scale image to be a text image or a non-text image according to the input first image characteristics;
when the gray level image is determined to be a text image according to the prediction result of the classification model, the contrast of the image is improved and displayed;
when the classification result predicted by the classification model is determined to be a non-text image, matching the second image characteristics of the gray level image with at least one preset text image characteristic template one by one;
when the successfully matched text image feature template exists, the contrast of the image is improved and displayed;
wherein promoting the contrast of the image comprises:
carrying out gray-level conversion on all pixels of the gray-scale image one by one through a pixel gray value conversion formula to obtain output pixel gray values, wherein the pixel gray value conversion formula is used for deepening the contrast between gray levels of different magnitudes;
performing binarization processing on the output pixel gray value to obtain an image with improved contrast;
wherein, the pixel gray value conversion formula comprises:
I′(x) = 1/(1 + e^(−K1·I(x) + K2)), where I(x) is the pixel gray value of the gray-scale image, I′(x) is the output pixel gray value of the gray-scale image, K1 > 0, and K1, K2 are both preset real numbers;
wherein the first feature extraction method is any one of the following: histogram of oriented gradients, local binary pattern features, Haar-like features, histogram statistical feature variance, Discrete Cosine Transform (DCT) coefficient features;
the second feature extraction method is any one of the following: histogram of oriented gradients, local binary pattern features, Haar-like features, histogram statistical feature variance, DCT coefficient features.
2. The method of claim 1, wherein inputting the first image feature of the grayscale image into the classification model comprises:
dividing the image to be displayed into a plurality of image blocks, and inputting the first image feature of each image block into the classification model respectively;
and determining that the image is a text image according to the prediction results of the classification model comprises:
counting how many image blocks the classification model predicts to be text images, and determining that the image is a text image when the count exceeds a set threshold.
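A minimal sketch of this block-voting scheme, assuming a scikit-learn-style classifier and a feature extractor as stand-ins; the tile size and the reading of the "set threshold" as a fraction of tiles are assumptions:

```python
# Illustrative sketch of block voting: `extract_features` and `model` are
# assumed stand-ins for the claimed feature extractor and classifier.
import numpy as np

def is_text_image(gray: np.ndarray, model, extract_features,
                  block: int = 64, vote_fraction: float = 0.5) -> bool:
    h, w = gray.shape
    votes = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = gray[y:y + block, x:x + block]
            votes.append(model.predict([extract_features(tile)])[0])  # 1 = text
    # The claim's "set threshold" on the count is modeled as a tile fraction.
    return len(votes) > 0 and float(np.mean(votes)) > vote_fraction
```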
3. The method of claim 1, wherein the classification model is trained by:
obtaining a sample set comprising a plurality of samples, wherein each sample comprises an image sample and a corresponding sample label, and the sample labels comprise text labels and non-text labels;
in the training stage, inputting the image samples into a network model, and adjusting model parameters with the goal that the model output matches the sample labels;
in the testing stage, inputting the image samples into the network model, and computing the classification precision and recall of the network model from the classification results output by the network model and the sample labels;
and when the classification precision and recall meet the requirements, ending the training to obtain the classification model.
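The claim leaves the model type open; the sketch below assumes a linear support vector machine as the "network model" and scikit-learn utilities, with illustrative precision/recall requirements:

```python
# Illustrative sketch of the train/test loop: a linear SVM is assumed as
# the unspecified "network model"; X is a feature matrix, y holds labels
# (1 = text, 0 = non-text); the metric requirements are assumptions.
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def train_classifier(X, y, min_precision=0.95, min_recall=0.95):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = LinearSVC().fit(X_tr, y_tr)        # training stage
    pred = model.predict(X_te)                 # testing stage
    precision = precision_score(y_te, pred)
    recall = recall_score(y_te, pred)
    if precision >= min_precision and recall >= min_recall:
        return model                           # requirements met: training ends
    raise RuntimeError(f"requirements not met: P={precision:.3f}, R={recall:.3f}")
```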
4. The method of claim 1, further comprising:
starting a reading application and determining whether an identifier of the reading application is in a preset list, wherein the preset list comprises package names of reading applications;
when the application is not in the preset list and an image to be displayed is acquired, directly outputting and displaying the image to be displayed;
when the application is in the preset list and an image to be displayed is acquired, inputting the first image feature of the grayscale image into the classification model.
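A sketch of this whitelist gate; the package names below are hypothetical placeholders:

```python
# Illustrative sketch of the whitelist gate; package names are hypothetical.
READING_APPS = {"com.example.reader", "com.example.ebook"}

def should_classify(package_name: str) -> bool:
    """Only images shown by whitelisted reading apps go through the classifier."""
    return package_name in READING_APPS
```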
5. The method according to claim 1, wherein the first image feature of the grayscale image is obtained by feature extraction using a first feature extraction method, the second image feature of the grayscale image is obtained by feature extraction using a second feature extraction method, and each of the first and second feature extraction methods is any one of the following:
histogram of oriented gradients, local binary pattern features, Haar-like features, histogram statistical feature variance, and DCT coefficient features; or
wherein matching the second image feature of the grayscale image one by one against the at least one preset text image feature template comprises:
calculating the similarity between the second image feature of the image and the feature of the at least one preset text image feature template using any one of correlation, chi-square comparison, Bhattacharyya distance, Euclidean distance, Manhattan distance, and Chebyshev distance.
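A sketch of this template-matching step, assuming histogram-like float feature vectors of equal length; Bhattacharyya distance via OpenCV's compareHist is shown, the 0.3 match threshold is an assumption, and the other listed measures would slot in the same way:

```python
# Illustrative sketch of template matching with Bhattacharyya distance via
# OpenCV; feature vectors are assumed histogram-like float32 arrays.
import cv2
import numpy as np

def matches_any_template(feature, templates, max_distance=0.3):
    f = np.asarray(feature, dtype=np.float32)
    for t in templates:
        d = cv2.compareHist(f, np.asarray(t, dtype=np.float32),
                            cv2.HISTCMP_BHATTACHARYYA)
        if d < max_distance:   # smaller distance = more similar
            return True
    return False
```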
6. An image display apparatus, characterized in that the apparatus comprises:
an image acquisition module, configured to acquire an image to be displayed in response to an image display instruction and convert it into a grayscale image;
a first model prediction module, configured to input a first image feature of the grayscale image into a classification model, wherein the classification model is used for predicting, according to the input first image feature, whether the grayscale image is a text image or a non-text image;
a first contrast enhancement module, configured to increase the contrast of the image and display it when the grayscale image is determined to be a text image according to the prediction result of the classification model;
a second model prediction module, configured to match a second image feature of the grayscale image one by one against at least one preset text image feature template when the classification result predicted by the classification model is determined to be a non-text image;
a second contrast enhancement module, configured to increase the contrast of the image and display it when a successfully matched text image feature template exists;
wherein increasing the contrast of the image comprises:
performing gray-level conversion on all pixels of the grayscale image one by one through a pixel gray value conversion formula to obtain output pixel gray values, wherein the pixel gray value conversion formula is used for deepening the contrast between different gray levels;
performing binarization on the output pixel gray values to obtain a contrast-enhanced image;
wherein the pixel gray value conversion formula is:
I'(x) = 1 / (1 + e^(-K1·I(x) + K2)), where I(x) is the pixel gray value of the grayscale image, I'(x) is the corresponding output pixel gray value, K1 > 0, and K1 and K2 are both preset real numbers;
wherein the first feature extraction method is any one of the following: histogram of oriented gradients, local binary pattern features, Haar-like features, histogram statistical feature variance, and discrete cosine transform (DCT) coefficient features;
and the second feature extraction method is any one of the following: histogram of oriented gradients, local binary pattern features, Haar-like features, histogram statistical feature variance, and DCT coefficient features.
7. An electronic device comprising at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
8. A computer storage medium, characterized in that the computer storage medium stores a computer program for causing a computer to perform the method according to any one of claims 1-5.
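Wiring the sketches above together gives an illustrative end-to-end decision flow for claims 1 and 6; every helper here is one of the assumed stand-ins defined earlier, not anything specified by the patent:

```python
# Illustrative end-to-end flow: classify first, fall back to template
# matching, and boost contrast when either path flags a text image.
def display(gray, model, extract_first, extract_second, templates):
    if model.predict([extract_first(gray)])[0] == 1:           # predicted text
        return boost_contrast(gray)
    if matches_any_template(extract_second(gray), templates):  # template fallback
        return boost_contrast(gray)
    return gray                                                # show unchanged
```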
CN202110737217.0A 2021-06-30 2021-06-30 Image display method and device and electronic equipment Active CN113408517B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110737217.0A CN113408517B (en) 2021-06-30 2021-06-30 Image display method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN113408517A CN113408517A (en) 2021-09-17
CN113408517B true CN113408517B (en) 2023-01-17

Family

ID=77680619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110737217.0A Active CN113408517B (en) 2021-06-30 2021-06-30 Image display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113408517B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114582279B (en) * 2022-05-05 2022-07-22 卡莱特云科技股份有限公司 Display screen contrast improving method and device based on error diffusion and electronic equipment

Citations (4)

Publication number Priority date Publication date Assignee Title
US20130335442A1 (en) * 2012-06-18 2013-12-19 Rod G. Fleck Local rendering of text in image
CN109086756A (en) * 2018-06-15 2018-12-25 众安信息技术服务有限公司 A kind of text detection analysis method, device and equipment based on deep neural network
US20190108411A1 (en) * 2017-10-11 2019-04-11 Alibaba Group Holding Limited Image processing method and processing device
CN111274853A (en) * 2018-12-05 2020-06-12 北京京东尚科信息技术有限公司 Image processing method and device

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN101140751A (en) * 2001-05-10 2008-03-12 三星电子株式会社 Method and apparatus for adjusting contrast and sharpness for regions in display device
CN104463103B (en) * 2014-11-10 2018-09-04 小米科技有限责任公司 Image processing method and device
CN105809645A (en) * 2016-03-28 2016-07-27 努比亚技术有限公司 Word display method and device and mobile terminal
CN106096610B (en) * 2016-06-13 2019-04-12 湖北工业大学 A kind of file and picture binary coding method based on support vector machines
CN107086027A (en) * 2017-06-23 2017-08-22 青岛海信移动通信技术股份有限公司 Character displaying method and device, mobile terminal and storage medium
CN107527374A (en) * 2017-08-18 2017-12-29 珠海市君天电子科技有限公司 A kind of method and apparatus of text importing
CN109326264B (en) * 2018-12-20 2021-07-27 深圳大学 Brightness Demura method and system of liquid crystal display module




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 266071 Shandong city of Qingdao province Jiangxi City Road No. 11

Patentee after: Qingdao Hisense Mobile Communication Technology Co.,Ltd.

Address before: 266071 Shandong city of Qingdao province Jiangxi City Road No. 11

Patentee before: HISENSE MOBILE COMMUNICATIONS TECHNOLOGY Co.,Ltd.
