CN115240197A - Image quality evaluation method, image quality evaluation device, electronic apparatus, scanning pen, and storage medium - Google Patents


Info

Publication number
CN115240197A
CN115240197A (application number CN202210716574.3A)
Authority
CN
China
Prior art keywords: image, image frame, character, frame, preset
Prior art date
Legal status (an assumption, not a legal conclusion): Pending
Application number
CN202210716574.3A
Other languages
Chinese (zh)
Inventor
吴爱红
殷兵
吴嘉嘉
胡金水
张银田
Current Assignee (the listed assignee may be inaccurate)
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Application filed by iFlytek Co Ltd filed Critical iFlytek Co Ltd
Priority to CN202210716574.3A priority Critical patent/CN115240197A/en
Publication of CN115240197A publication Critical patent/CN115240197A/en

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition › G06V30/10 Character recognition
    • G06V30/14 Image acquisition › G06V30/141 Image acquisition using multiple overlapping images; Image stitching
    • G06V30/18 Extraction of features or characteristics of the image › G06V30/1801 Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes or intersections
    • G06V30/1801 › G06V30/18076 Detecting partial patterns by analysing connectivity, e.g. edge linking, connected component analysis or slices

Abstract

The application provides an image quality evaluation method, an image quality evaluation device, an electronic device, a scanning pen, and a storage medium. The method includes: detecting the contrast of an acquired image frame, and detecting characters and black image areas in the image frame; and if any one of the following conditions is detected, namely that the contrast of the image frame is lower than a preset contrast threshold, that no character exists in the image frame, or that a periodic black image area exists in the image frame, determining that the image quality of the image frame is unqualified. With this technical solution, image quality evaluation can be performed on each acquired image frame so as to judge the validity of the frame, preventing poor image quality from degrading the effect and accuracy of image recognition.

Description

Image quality evaluation method, image quality evaluation device, electronic apparatus, scanning pen, and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image quality evaluation method and apparatus, an electronic device, a scanning pen, and a storage medium.
Background
With the development of information technology in recent years, image recognition devices of many kinds have emerged, such as dictionary pens and scanning pens. Such a device processes and stitches the images collected by its image scanning component, and finally recognizes the stitched image. The quality of the images collected by the image scanning component therefore directly affects the effect and accuracy of subsequent image recognition. For example, an image without characters yields no recognition result; in an image containing background texture, the texture may be recognized as spurious, irrelevant characters; and an invalid image with too low a contrast may likewise produce irrelevant characters, invalidating the image recognition result. However, during image recognition the user cannot determine the quality of an acquired image frame, and thus cannot determine whether the acquired image frame is valid.
Disclosure of Invention
Based on the defects and shortcomings of the prior art, the application provides an image quality evaluation method, an image quality evaluation device, an electronic device, a scanning pen and a storage medium, which can evaluate the quality of an image frame so as to judge the effectiveness of the image frame.
A first aspect of the present application provides an image quality evaluation method, including:
detecting contrast of the acquired image frame, and detecting characters and black image areas from the image frame;
if any one of the preset conditions is detected, determining that the image quality of the image frame is unqualified;
wherein the preset conditions include that the contrast of the image frame is lower than a preset contrast threshold, no character exists in the image frame, and a periodic black image area exists in the image frame.
Optionally, the detecting the contrast of the acquired image frame includes:
and calculating the contrast of the image frame by using the pixel value of each pixel point in the image frame and a preset contrast calculation formula.
Optionally, detecting characters from the image frame includes:
extracting character edges of the obtained image frames to obtain character edge image frames;
performing foreground expansion on the character edges in the character edge image frame to obtain character connected bodies;
and determining whether characters exist in the image frame according to the number and the size of the character connected bodies.
Optionally, the extracting the character edge of the obtained image frame to obtain a character edge image frame includes:
binarizing the image frame according to the gradient amplitude of each pixel point in the image frame and a predetermined first binarization threshold value to obtain a first binary image frame;
and carrying out non-character edge filtering according to the foreground in the first binary image frame to obtain a character edge image frame.
Optionally, detecting a black image area from the image frame includes:
carrying out binarization on the obtained image frame by using a preset second binarization threshold value to obtain a second binary image frame;
and if it is determined from the second binary image frame that a black image area exists in the image frame, recording the acquisition time of the image frame, and determining whether a periodic black image area exists in the image frames according to the recorded acquisition times of all image frames in which black image areas exist.
Optionally, the image quality evaluation method further includes:
when none of the preset conditions occurs, splicing the image frame with a pre-stored historical spliced image to obtain a target spliced image;
and determining an image quality evaluation result of the target spliced image based on the text content in the target spliced image.
Optionally, determining an image quality evaluation result of the target stitched image based on the text content in the target stitched image, includes:
performing character edge extraction and edge foreground expansion on the text content in the target spliced image to obtain a character edge image;
determining the height ratio of the text content in the target spliced image according to the character connected bodies in the character edge image;
and if the height ratio of the text content in the target spliced image is detected to exceed the preset ratio range, determining that the image quality of the target spliced image is unqualified.
Optionally, the image quality evaluation method further includes:
and if the height ratio of the text content in the target spliced image is detected to exceed the preset ratio range, outputting prompt information indicating that the characters in the target spliced image are too large or too small.
Optionally, the image quality evaluation method further includes:
when the height ratio of the text content in the target spliced image does not exceed the preset ratio range, performing text recognition on the target spliced image;
and determining whether the image quality of the target spliced image is qualified or not based on the text recognition result of the target spliced image.
Optionally, determining whether the image quality of the target stitched image is qualified based on the text recognition result of the target stitched image, includes:
obtaining the posterior probabilities of a preset number of text recognition results from among all the text recognition results of the target spliced image;
calculating the posterior probability variance according to the posterior probabilities of the preset number of text recognition results;
if the posterior probability variance is smaller than a preset variance threshold value, determining that the image quality of the target spliced image is unqualified;
and if the posterior probability variance is not less than a preset variance threshold, determining that the image quality of the target spliced image is qualified.
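The posterior-variance test above can be sketched as follows; the function name and the variance threshold value are assumptions, not taken from the patent. The intuition is that near-identical posteriors across the preset number of recognition results suggest an unreliable recognition, so the stitched image is deemed unqualified.

```python
import numpy as np

def stitched_image_qualified(posteriors, var_threshold: float = 1e-3) -> bool:
    """Qualified iff the variance of the top-N recognition posteriors
    is not less than the preset variance threshold."""
    variance = np.var(np.asarray(posteriors, dtype=float))
    return bool(variance >= var_threshold)
```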
Optionally, the image quality evaluation method further includes:
and if the posterior probability variance is smaller than a preset variance threshold value, outputting the identification reliability corresponding to the posterior probability variance.
Optionally, the image quality evaluation method further includes:
if the contrast of the image frame is detected to be lower than a preset contrast threshold, outputting prompt information indicating that the contrast of the image frame is low;
if no character exists in the image frame, outputting prompt information indicating that no character exists in the image frame;
and if a periodic black image area exists in the image frame, outputting prompt information indicating that the frame rate of the scanning device is inconsistent with the screen refresh rate of the scanned device.
A second aspect of the present application provides an image quality evaluation apparatus including:
the detection module is used for detecting the contrast of the acquired image frame and detecting characters and black image areas from the image frame;
the image frame quality evaluation module is used for determining that the image quality of the image frame is unqualified if any one of preset conditions is detected;
wherein the preset conditions include that the contrast of the image frame is lower than a preset contrast threshold, no character exists in the image frame, and a periodic black image area exists in the image frame.
A third aspect of the present application provides an electronic device comprising: a memory and a processor;
wherein the memory is connected with the processor and used for storing programs;
the processor is used for realizing the image quality evaluation method by operating the program in the memory.
A fourth aspect of the present application provides a scanning pen, comprising:
the system comprises a scanning camera and a processor connected with the scanning camera;
the scanning camera is used for collecting image frames and sending the image frames to the processor;
the processor is used for detecting the contrast of the acquired image frame and detecting characters and black image areas from the image frame;
if any one of the preset conditions is detected, determining that the image quality of the image frame is unqualified;
wherein the preset condition comprises that the contrast of the image frame is lower than a preset contrast threshold, no character exists in the image frame, and a periodic black image area exists in the image frame.
A fifth aspect of the present application provides a storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described image quality evaluation method.
The image quality evaluation method provided by the application detects the contrast of an acquired image frame, and detects characters and black image areas in the image frame; if any one of the conditions that the contrast of the image frame is lower than a preset contrast threshold, that no character exists in the image frame, and that a periodic black image area exists in the image frame is detected, the image quality of the image frame is determined to be unqualified. With this technical solution, image quality evaluation can be performed on each collected image frame so as to judge the validity of the frame, preventing poor image quality from degrading the effect and accuracy of image recognition.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present application; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a schematic flowchart of an image quality evaluation method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a process flow for detecting characters from image frames according to an embodiment of the present application;
fig. 3 is a comparison diagram of an original image frame and a character edge image frame in the case of the presence/absence of characters provided by the embodiment of the present application;
FIG. 4 is a schematic diagram of a process flow for detecting a black image area from an image frame according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another image quality evaluation method provided in an embodiment of the present application;
fig. 6 is a schematic flowchart of another image quality evaluation method provided in the embodiment of the present application;
FIG. 7 is a comparison of character connected bodies before and after merging, provided in the embodiments of the present application;
fig. 8 is a schematic flowchart of another image quality evaluation method provided in the embodiment of the present application;
fig. 9 is a schematic structural diagram of an image quality evaluation apparatus provided in an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical scheme of the embodiment of the application is suitable for application scenes of image quality evaluation, such as quality evaluation of scanned images in the image scanning process. By adopting the technical scheme of the embodiment of the application, the image quality evaluation can be carried out on the collected image frame, so that the effectiveness of the image frame is judged.
In the process of image recognition, the quality of the image to be recognized directly affects the effect and accuracy of recognition. For example, an image scanning recognition device such as a scanning pen or a dictionary pen may wrongly recognize irrelevant characters in an image without characters, an image of background texture, or an image with too low a contrast. Because the device cannot determine the quality of an acquired image frame during scanning, it cannot judge whether the frame is valid, and thus cannot guarantee the image recognition effect or the accuracy of the recognized result.
In view of the above-mentioned deficiencies of the prior art and the problem that it is actually impossible to determine the quality of the acquired image frame and thus to determine whether the image frame is valid, the inventors of the present application have made studies and experiments to provide an image quality evaluation method that can perform quality evaluation on the acquired image frame and determine the validity of the image frame.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
An embodiment of the present application provides an image quality evaluation method, which is shown in fig. 1 and includes:
s101, detecting the contrast of the acquired image frame, and detecting characters and black image areas from the image frame.
Specifically, in the process of image recognition, insufficient brightness at capture time yields a dark, low-contrast image frame in which the characters are not clear enough to be recognized, or are recognized incorrectly. If the acquisition is grayscale acquisition, a "color weakness" condition of lost contrast may occur when the text color or the background color is converted into grayscale; this likewise results in a low-contrast image frame whose characters cannot be recognized, or are recognized incorrectly. For example, a red font on a green background, or a red font on a blue background, may lose contrast after conversion to grayscale, resulting in a low-contrast frame.
If there are no characters in the image frame, background texture or the like in the frame may be recognized as spurious characters. Moreover, the graphic image on the screen of an electronic device is composed of phosphor dots that emit light when struck by an electron beam; because the phosphor in the display tube glows only briefly after each strike, the beam must strike it repeatedly to keep it glowing, so the screen is continuously refreshed. The lower the screen refresh rate, the more the displayed image flickers and jitters; a higher refresh rate gives the eyes a better visual experience in specific scenes and reduces perceived flicker. When an image displayed on such a screen is scanned and the frame rate of the image scan differs from the refresh rate of the scanned screen, some frames are captured while the screen is mid-refresh, so black image areas (that is, black frames or black bars) appear periodically in the scanned image frames. As long as the scan frame rate and the screen refresh rate differ, these periodic black image areas persist and affect image recognition.
In order to ensure the image recognition effect and the accuracy of the recognition result, the validity of the scanned image frame needs to be determined, and the accuracy of the image recognition effect and the recognition result can only be ensured if the validity of the image frame is ensured. Therefore, it is necessary to detect whether the image quality of the scanned image frame is qualified or not to determine whether the image frame is valid, thereby determining the image recognition effect. As can be seen from the above, the image quality of the image frame is related to the contrast of the image frame, the existence of characters in the image frame, and whether a black image region exists in the image frame, so the embodiment needs to calculate the contrast of the image frame according to the pixel values of each pixel point in the image frame; determining whether characters exist in the image frame by detecting character connected bodies in the image frame; whether periodically appearing black image areas exist in the image frame is detected by carrying out black image area detection on the image frame and recording the appearance time of the black image areas.
S102, if any one of the preset conditions is detected, determining that the image quality of the image frame is unqualified.
Specifically, through the above detection, if any one of the conditions that the contrast of the image frame is lower than the preset contrast threshold, no character exists in the image frame, and a periodic black image area exists in the image frame is detected, it indicates that the image quality of the image frame at the time is not qualified. The preset contrast threshold is an image contrast critical value which ensures that the image can realize image recognition, and if the contrast of the image frame is lower than the contrast threshold, the image frame is indicated to have low contrast and the recognition of the image frame is influenced. The present embodiment can determine whether an image frame is valid by evaluating the image quality of the image frame.
Further, if the contrast of the image frame is detected to be lower than the preset contrast threshold, prompt information indicating that the contrast of the image frame is low is output; if no character is detected in the image frame, prompt information indicating that no character exists in the image frame is output; and if a periodic black image area is detected in the image frame, prompt information indicating that the frame rate of the scanning device is inconsistent with the screen refresh rate of the scanned device is output. The prompt information tells the user that the quality of the currently acquired image frame is unqualified, so that the user can take corresponding action. For example, on receiving a prompt that the contrast of the image frame is low, or that there are no characters in the image frame, the user may discard the current frame and adjust the scanning angle or scanning position of the scanning device. On receiving a prompt that the frame rate of the scanning device is inconsistent with the screen refresh rate of the scanned device, the user may adjust the frame rate of the scanning device or forgo scanning that device.
As can be seen from the above description, the image quality evaluation method provided in the embodiment of the present application detects the contrast of an acquired image frame, and detects characters and black image areas from the image frame; and if any one of the conditions that the contrast of the image frame is lower than a preset contrast threshold, no character exists in the image frame and a periodic black image area exists in the image frame is detected, determining that the image quality of the image frame is unqualified. By adopting the technical scheme of the embodiment, the image quality evaluation can be carried out on the collected image frame, so that the effectiveness of the image frame is judged, and the influence of the image quality of the image frame on the image recognition effect and the accuracy is avoided.
As an optional implementation manner, another embodiment of the present application discloses that, in step S101, detecting a contrast of an acquired image frame includes:
and calculating the contrast of the image frame by using the pixel value of each pixel point in the image frame and a preset contrast calculation formula.
Specifically, the contrast ratio represents the contrast ratio of brightness and darkness of the image frame, which generally represents the definition of the image quality, and the embodiment may calculate the contrast ratio of the image frame according to a preset contrast ratio calculation formula by using the pixel value of each pixel point in the image frame. The preset contrast calculation formula is as follows:
C = Σ_δ δ(i, j)² · P_δ(i, j)
wherein C represents the contrast of the image frame; δ(i, j) represents the gray-level difference between adjacent pixels i and j, and P_δ(i, j) represents the probability that the gray-level difference between adjacent pixels is δ. In addition, adjacency is generally determined in one of two ways, 4-adjacency or 8-adjacency: 4-adjacency takes the pixels adjacent in the four directions up, down, left, and right, while 8-adjacency takes the pixels adjacent in the eight directions up, down, left, right, upper left, upper right, lower left, and lower right.
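As an illustrative sketch of the contrast formula above (the function name and NumPy-based implementation are assumptions, not part of the patent), the 4-adjacency case can be computed by averaging the squared gray-level difference over all adjacent pixel pairs, which equals the probability-weighted sum Σ_δ δ² · P_δ:

```python
import numpy as np

def four_neighbor_contrast(gray: np.ndarray) -> float:
    """Contrast C = sum over delta of delta^2 * P(delta), where delta is the
    gray-level difference between 4-adjacent pixels."""
    g = gray.astype(np.int64)
    # Squared differences between horizontally and vertically adjacent pixels
    dh = (g[:, 1:] - g[:, :-1]) ** 2
    dv = (g[1:, :] - g[:-1, :]) ** 2
    total = dh.sum() + dv.sum()   # sum of delta^2 over all adjacent pairs
    count = dh.size + dv.size     # number of adjacent pairs
    return total / count          # equals sum_delta delta^2 * P(delta)

frame = np.array([[0, 255], [255, 0]], dtype=np.uint8)
print(four_neighbor_contrast(frame))  # 65025.0 for this maximum-contrast checkerboard
```

A uniform frame gives a contrast of 0, consistent with the use of a contrast threshold to flag dark or washed-out frames.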
As an alternative implementation, referring to fig. 2, another embodiment of the present application discloses that, in step S101, detecting a character from an image frame includes:
s201, extracting character edges of the obtained image frames to obtain character edge image frames.
Specifically, in order to detect characters in an image frame, the character edges in the frame are extracted first to obtain a character edge image frame. Character edges have special features relative to the edges of other objects: printed characters in particular have double edges, a fixed font size, and a substantially uniform stroke width. Early text detectors used the color or pixel uniformity inside text edges, together with this relatively fixed size and stroke width, to locate text; the present embodiment can likewise determine whether characters exist in an image frame from the character edges in it.
Further, the specific steps are as follows:
firstly, binarization is carried out on an image frame according to the gradient amplitude of each pixel point in the image frame and a first binarization threshold value which is predetermined, so as to obtain a first binary image frame.
In order to extract the character edges in the image frame, a first binarization threshold needs to be determined in advance, so that the image frame can be binarized against it. In this embodiment, the Otsu threshold of the image frame is determined from the gray value of each pixel point, and this Otsu threshold may be used as the first binarization threshold. However, when there are few characters in the image frame, the Otsu threshold may take a large value, and the binary image frame obtained with it then gives poor accuracy for character edge extraction; therefore a fixed constraint threshold may be preset, and the minimum of the Otsu threshold and the constraint threshold taken as the first binarization threshold. For example, when there are few characters in an image frame, the calculated Otsu threshold exceeds the preset constraint threshold, and the constraint threshold is used as the first binarization threshold; when the number of characters in the image frame is normal, the calculated Otsu threshold is smaller than the preset constraint threshold, and the Otsu threshold is used as the first binarization threshold.
In this embodiment, the gradient amplitude of each pixel point in the image frame needs to be calculated, and the image frame is binarized by comparing the gradient amplitude of each pixel point with the first binarization threshold, so as to obtain a first binary image frame corresponding to the image frame. For example, the gray value of the pixel point with the gradient amplitude larger than the first binarization threshold in the image frame is set to 0, and the gray value of the pixel point with the gradient amplitude smaller than the first binarization threshold in the image frame is set to 255, so that the image frame is binarized.
In addition, the gradient amplitude of a pixel point is calculated from its gradients in the x and y directions. For example, for a pixel point with coordinates (x, y), the gradient in the x direction is g_x = f(x+1, y) - f(x, y), and the gradient in the y direction is g_y = f(x, y+1) - f(x, y); the gradient amplitude of the pixel point with coordinates (x, y) is then

g = sqrt(g_x² + g_y²)

wherein f(x+1, y) represents the pixel value of the pixel point with coordinates (x+1, y), f(x, y) represents the pixel value of the pixel point with coordinates (x, y), and f(x, y+1) represents the pixel value of the pixel point with coordinates (x, y+1).
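The first binarization step above can be sketched as follows. This is a non-authoritative sketch: the constraint threshold value, the 256-bin histogram form of the Otsu search, and the function names are assumptions not given by the patent.

```python
import numpy as np

def binarize_by_gradient(gray: np.ndarray, constraint: float = 60.0) -> np.ndarray:
    """Threshold the gradient amplitude of each pixel with
    min(Otsu threshold, fixed constraint threshold)."""
    g = gray.astype(np.float64)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, :-1] = g[:, 1:] - g[:, :-1]   # g_x = f(x+1, y) - f(x, y)
    gy[:-1, :] = g[1:, :] - g[:-1, :]   # g_y = f(x, y+1) - f(x, y)
    mag = np.hypot(gx, gy)              # gradient amplitude

    # Otsu's threshold on the magnitude histogram (exhaustive search)
    hist, edges = np.histogram(mag, bins=256)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = 0.0, -1.0
    for k in range(1, 256):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0
        m1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[k]

    t = min(best_t, constraint)         # cap the Otsu threshold
    # Foreground (gray value 0) where the magnitude exceeds t, background (255) elsewhere
    return np.where(mag > t, 0, 255).astype(np.uint8)
```

On a frame that is flat except for one sharp vertical edge, only the edge column becomes foreground, matching the intent of keeping edge pixels while discarding flat regions.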
And secondly, performing non-character edge filtering according to the foreground in the first binary image frame to obtain a character edge image frame.
After the first binary image frame corresponding to the image frame is obtained, since it contains foreground pixel points with gray value 0 and background pixel points with gray value 255, non-character edges can be filtered out according to the foreground composed of the foreground pixel points, thereby obtaining the character edge image frame. Because character edges have special features relative to the edges of other objects (characters have double edges with a substantially uniform stroke width), edges in the foreground of the first binary image frame that do not conform to character edge characteristics can be filtered out. For example, the average distance between each edge and its nearest edges above, below, left, and right may be counted first and used as the width threshold of the text; if an edge is a character edge, the distance between the two edges of the character is close to the determined width threshold, and if it is not a character edge, the distance between the two edges exceeds the width threshold.
S202, performing foreground expansion on character edges in the character edge image frame to obtain a character connected body.
Specifically, after the character edge image frame is obtained through the above steps, since the character edge is embodied in the character edge image frame through the foreground connected domain, but when the character edge is extracted, the foreground of the extracted character edge may be discontinuous, so that in order to ensure the continuity of the character edge in the character edge image frame, the foreground expansion may be performed on the character edge in the character edge image frame, thereby obtaining a continuous and clear character connected body.
And S203, determining whether characters exist in the image frame according to the number and the size of the character connected bodies.
Specifically, the present embodiment sets a number threshold and a size threshold of character connected bodies in advance. If the number of character connected bodies in the character edge image frame exceeds the number threshold and at least one character connected body exceeds the size threshold, it is determined that characters exist in the image frame. If the size of a character connected body is smaller than the size threshold, the corresponding character may be only half a character, such as the character connected bodies in the upper left and lower left corners of the b2 diagram in fig. 3. In fig. 3, a1 is an image frame without characters, a2 is the character edge image frame of a1, b1 is an image frame with characters, and b2 is the character edge image frame of b1.
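The decision of step S203 can be sketched as a simple predicate; the bounding-box representation and the threshold values here are assumptions for illustration:

```python
def has_characters(components, count_thr=3, size_thr=20):
    """Decide whether characters exist in the frame.

    components: list of (width, height) bounding boxes of character
    connected bodies. count_thr and size_thr are assumed values for
    the preset number and size thresholds.
    """
    # At least one connected body must exceed the size threshold,
    # so that half-characters at the frame border alone do not count.
    big = [c for c in components if max(c) > size_thr]
    return len(components) > count_thr and len(big) > 0
```

Many small fragments alone (for example, noise or clipped half-characters) do not satisfy the predicate; a sufficient count including at least one full-size body does.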
As an alternative implementation, referring to fig. 4, another embodiment of the present application discloses that, in step S101, detecting a black image area from an image frame includes:
S401, carrying out binarization on the obtained image frame by using a preset second binarization threshold value to obtain a second binary image frame.
Specifically, a second binarization threshold may be preset in this embodiment, and the image frame is binarized with it to obtain a second binary image frame. In order to distinguish the binarization used for detecting characters in the image frame from the binarization used for detecting a black image area, different binarization modes or different binarization thresholds can be adopted. For example, when detecting characters, the gray value of a pixel point whose gradient amplitude is larger than the first binarization threshold is set to 0, and that of a pixel point whose gradient amplitude is smaller than the first binarization threshold is set to 255. When detecting a black image area, the gray value of a pixel point whose gradient amplitude is larger than the second binarization threshold may instead be set to 255, and that of a pixel point whose gradient amplitude is smaller than the second binarization threshold set to 0. In this way, the binary image frame of a character-free image frame can be distinguished from the binary image frame of a black frame.
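The two binarization modes can be contrasted in a short sketch; the function names are placeholders, and the opposite output polarities are the point being illustrated:

```python
import numpy as np

def binarize_for_characters(values, first_thr):
    """First mode (character detection): values above the first
    threshold map to 0 (foreground edge), the rest to 255."""
    return np.where(values > first_thr, 0, 255).astype(np.uint8)

def binarize_for_black_area(values, second_thr):
    """Second mode (black area detection): values above the second
    threshold map to 255, the rest to 0, so a black frame yields an
    all-zero binary frame, unlike a character-free frame under the
    first mode (which is all 255)."""
    return np.where(values > second_thr, 255, 0).astype(np.uint8)
```

The same input produces inverted binary frames under the two modes, which is exactly what keeps a "no characters" frame distinguishable from a "black frame".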
S402, if it is determined according to the second binary image frame that a black image area exists in the image frame, recording the acquisition time of the image frame, and determining whether a periodic black image area exists according to the recorded acquisition times of all image frames with black image areas.
Specifically, after the second binary image frame is obtained, whether a black image area exists in it may be analyzed. For example, with the gray value of pixel points whose gradient amplitude is larger than the second binarization threshold set to 255 and that of pixel points whose gradient amplitude is smaller than the second binarization threshold set to 0: if the second binary image frame contains a strip-shaped region of zeros penetrating it in the horizontal or vertical direction, a "black bar" exists in the image frame; if the gray values of all pixel points in the second binary image frame are 0, the image frame is a "black frame". In either case, a black image area is present in the image frame.
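A minimal sketch of the "black bar" / "black frame" test on the second binary frame, assuming the polarity described above where dark content maps to 0:

```python
import numpy as np

def detect_black_area(second_binary):
    """Classify the second binary frame: "black frame" if every pixel
    is 0, "black bar" if a full row or full column of zeros crosses
    the frame, otherwise None (no black image area)."""
    zeros = second_binary == 0
    if zeros.all():
        return "black frame"
    # A horizontal bar is a row of all zeros; a vertical bar, a column.
    if zeros.all(axis=1).any() or zeros.all(axis=0).any():
        return "black bar"
    return None
```

Any non-None result means a black image area is present and the frame's acquisition time should be recorded for the periodicity check.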
If a black image area exists in the image frame, the acquisition time of the image frame is recorded, the acquisition times of all the recorded image frames with black image areas are analyzed, and whether the black image areas appear periodically is judged. If they do, a periodic black image area exists in the image frames.
As an alternative implementation, referring to fig. 5, another embodiment of the present application discloses an image quality evaluation method, further including:
and S503, when any one of the preset conditions does not occur, splicing the image frame with a pre-stored historical spliced image to obtain a target spliced image.
Specifically, if none of the preset conditions occurs, that is, the contrast of the image frame is not lower than the preset contrast threshold, characters exist in the image frame, and no periodic black image area exists in the image frame, it is determined that the image quality of the image frame is qualified. At this time, the image frame may be stitched with a pre-stored historical stitched image to obtain a target stitched image. In this embodiment, the image frame and the historical stitched image may be stitched by using an existing image stitching method, which is not specifically described here.
S504, determining an image quality evaluation result of the target spliced image based on the text content in the target spliced image.
Specifically, in this embodiment, after the quality evaluation is performed on the image frame, quality evaluation is also performed on the stitched target stitched image. The size of the characters of the text content in the target stitched image is determined according to the text content, and the image quality evaluation is performed according to that character size. If the characters are too large, the complete characters cannot be displayed in the target stitched image, causing image recognition errors; if the characters are too small, they may be part of an adjacent line of text that was accidentally captured while the current line was being scanned.
Steps S501 to S502 in fig. 5 are the same as steps S101 to S102 in fig. 1, and steps S501 to S502 are not described in detail in this embodiment.
As an alternative implementation, referring to fig. 6, another embodiment of the present application discloses that, in step S504, determining an image quality evaluation result for the target stitched image based on the text content in the target stitched image, includes:
and S604, performing character edge extraction and edge foreground expansion on the text content in the target spliced image to obtain a character edge image.
Specifically, in order to determine the size of the character of the text content in the target stitched image, firstly, character edge extraction and foreground expansion are performed on the text content in the target stitched image, so as to obtain a character edge image, where a manner of the character edge extraction is the same as that of the character edge extraction of the image frame in step S201, a manner of the foreground expansion of the character edge is the same as that of the foreground expansion of the character edge in the character edge image frame in step S202, and this embodiment is not specifically described again.
S605, determining the height ratio of the text content in the target splicing image according to the character connected body in the character edge image.
Specifically, the character connected bodies may be extracted from the character edge image corresponding to the target stitched image, and connected bodies that are close to each other horizontally or vertically may be merged, since a character with a left-right structure may be split into two side-by-side character connected bodies, and a character with a vertical structure may be split into a plurality of small character connected bodies distributed vertically. For example, the character connected body corresponding to the example character in diagram A of fig. 7 consists of a plurality of vertically distributed connected bodies (the connected bodies framed by the multiple boxes in diagram A); after these are merged, the whole-character connected body in diagram B is obtained (the connected body framed by the single box in diagram B). After the character connected bodies of all characters in the character edge image are merged, the height ratio of the merged character connected bodies in the character edge image is determined.
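The merging of vertically stacked connected bodies and the height-ratio computation can be sketched with bounding boxes; the horizontal-overlap merge criterion and its `x_overlap` fraction are assumptions standing in for the distance-based merge rule described above:

```python
def merge_and_height_ratio(boxes, image_height, x_overlap=0.5):
    """boxes: (x0, y0, x1, y1) bounding boxes of character connected
    bodies. Boxes whose horizontal extents overlap substantially (the
    stacked pieces of a vertically structured character) are merged;
    the ratio of the tallest merged box to the image height is
    returned."""
    boxes = [list(b) for b in boxes]
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                a, b = boxes[i], boxes[j]
                overlap = min(a[2], b[2]) - max(a[0], b[0])
                narrow = min(a[2] - a[0], b[2] - b[0])
                if overlap > x_overlap * narrow:  # same column: one character
                    boxes[i] = [min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3])]
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    tallest = max(b[3] - b[1] for b in boxes)
    return tallest / image_height
```

Two stacked pieces in the same column merge into one tall box before the height ratio is taken, so a split vertical character is measured at its full height.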
And S606, if the height ratio of the text content in the target spliced image is detected to exceed the preset ratio range, determining that the image quality of the target spliced image is unqualified.
Specifically, a preset ratio range is set in this embodiment. If it is detected that the height ratio of the text content in the target stitched image exceeds the preset ratio range, the characters of the text content in the target stitched image are too large or too small, and the image quality of the target stitched image is unqualified. For example, if the height ratio of the characters reaches 100%, the character connected body spans the entire image height, indicating that the characters are too large and the target stitched image may not display complete characters, thereby affecting image recognition. If the height ratio of the characters is smaller than the minimum value of the preset ratio range, the characters of the text content are too small.
Further, if the height ratio of the text content in the target splicing image is detected to exceed the preset ratio range, prompt information indicating that the characters in the target splicing image are too large or too small is output to prompt a user that the size of the characters in the target splicing image spliced by the image frames is unqualified, so that the user can respond in time.
Steps S601 to S603 in fig. 6 are the same as steps S501 to S503 in fig. 5, and the steps S601 to S603 are not specifically described again in this embodiment.
As an alternative implementation, referring to fig. 8, another embodiment of the present application discloses an image quality evaluation method, further including:
and S807, when the height ratio of the text content in the target spliced image does not exceed the preset ratio range, performing text recognition on the target spliced image.
Specifically, if it is detected that the height ratio of the text content in the target stitched image does not exceed the preset ratio range, it indicates that the image quality of the target stitched image is qualified, that is, the size of the characters of the text content in the target stitched image is qualified, and at this time, text recognition needs to be performed on the target stitched image. In this embodiment, the text recognition of the image may be implemented by using an existing image recognition method, and this embodiment is not specifically described.
And S808, determining whether the image quality of the target spliced image is qualified or not based on the text recognition result of the target spliced image.
Specifically, after the text content in the target stitched image is recognized, the reliability that each character in the target stitched image is correctly recognized needs to be evaluated according to the text recognition result, so as to determine whether the image quality of the target stitched image is qualified.
Further, the specific steps are as follows:
Firstly, the posterior probabilities of a preset number of text recognition results are obtained from all the text recognition results of the target stitched image.
In this embodiment, for each character, the posterior probabilities of a preset number of text recognition results are selected from all the text recognition results of the target stitched image. For example, if the preset number is 3, the posterior probabilities of the three recognition results with the highest posterior probabilities are selected for each character.
Secondly, calculating the posterior probability variance according to the posterior probabilities of the preset number of text recognition results.
The posterior probability variance corresponding to each character is calculated from the posterior probabilities of the preset number of text recognition results for that character. For example, if the preset number is 3, to calculate the posterior probability variance of a character, the squared difference between each of its three posterior probabilities and their average value is first calculated, and the average of the three squared values is then taken as the posterior probability variance of the character.
Thirdly, if the posterior probability variance is smaller than a preset variance threshold, determining that the image quality of the target spliced image is unqualified; and if the posterior probability variance is not less than the preset variance threshold, determining that the image quality of the target spliced image is qualified.
In this embodiment, a preset variance threshold is set. If the posterior probability variance of a character is smaller than the preset variance threshold, the reliability that the character is correctly recognized is low, and the image quality of the target stitched image is determined to be unqualified. If the posterior probability variance of the characters is not smaller than the preset variance threshold, the reliability of correct character recognition is high, the image quality of the target stitched image is determined to be qualified, and the text recognition result of the target stitched image can be output.
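The variance computation and threshold decision of the steps above can be sketched directly; the `preset_number` of 3 follows the example in the text, while the `variance_threshold` value is an assumed placeholder:

```python
def posterior_probability_variance(posteriors, preset_number=3):
    """Take the preset number of highest posterior probabilities for a
    character and return their population variance: the mean of the
    squared differences from their mean."""
    top = sorted(posteriors, reverse=True)[:preset_number]
    mean = sum(top) / len(top)
    return sum((p - mean) ** 2 for p in top) / len(top)

def recognition_is_reliable(posteriors, variance_threshold=0.01):
    """A flat top-k distribution (low variance) means no candidate
    clearly dominates, so recognition reliability is low and the
    stitched image is judged unqualified."""
    return posterior_probability_variance(posteriors) >= variance_threshold
```

A peaked distribution (one dominant candidate) yields a high variance and passes; a nearly uniform distribution yields a variance close to zero and fails.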
Further, if it is detected that the posterior probability variance of a character is smaller than the preset variance threshold, the recognition reliability corresponding to the posterior probability variance is output, so that the user can respond according to the recognition reliability of the characters, for example, by re-recognizing the text content, re-stitching the image, or re-collecting image frames.
Steps S801-S806 in fig. 8 are the same as steps S601-S606 in fig. 6, and the steps S801-S806 are not described in detail in this embodiment.
In correspondence with the above-described image quality evaluation method, an embodiment of the present application also proposes an image quality evaluation apparatus, as shown in fig. 9, the apparatus including:
a detection module 100 for detecting contrast of the acquired image frame and detecting characters and black image areas from the image frame;
the image frame quality evaluation module 110 is configured to determine that the image quality of the image frame is not qualified if any one of preset conditions is detected;
the preset conditions comprise that the contrast of the image frame is lower than a preset contrast threshold, no character exists in the image frame, and a periodic black image area exists in the image frame.
The image quality evaluation device provided by the embodiment of the application utilizes the detection module 100 to detect the contrast of the obtained image frame and detect characters and black image areas from the image frame; by using the image frame quality evaluation module 110, if any one of the conditions that the contrast of the image frame is lower than a preset contrast threshold, no character exists in the image frame, and a periodic black image area exists in the image frame is detected, it is determined that the image quality of the image frame is not qualified. By adopting the technical scheme of the embodiment, the image quality evaluation can be carried out on the collected image frame, so that the effectiveness of the image frame is judged, and the influence of the image quality of the image frame on the image recognition effect and the accuracy is avoided.
As an optional implementation manner, another embodiment of the present application further discloses that the detection module 100 includes: and the contrast calculation unit is used for calculating the target contrast of the image frame by using the pixel value of each pixel point in the image frame and a preset contrast calculation formula.
As an optional implementation manner, another embodiment of the present application further discloses that the detection module 100 further includes: the device comprises a character edge extraction unit, a foreground expansion unit and a character detection unit.
The character edge extraction unit is used for extracting character edges of the obtained image frames to obtain character edge image frames;
the foreground expansion unit is used for performing foreground expansion on character edges in the character edge image frame to obtain a character connected body;
and the character detection unit is used for determining whether characters exist in the image frame according to the number and the size of the character communicating bodies.
As an optional implementation manner, another embodiment of the present application further discloses that the character edge extraction unit is specifically configured to:
binarizing the image frame according to the gradient amplitude of each pixel point in the image frame and a predetermined first binarization threshold value to obtain a first binary image frame;
and carrying out non-character edge filtering according to the foreground in the first binary image frame to obtain a character edge image frame.
As an optional implementation manner, another embodiment of the present application further discloses that the detection module 100 further includes: a binarization unit and a determination unit.
The binarization unit is used for carrying out binarization on the obtained image frame by using a preset second binarization threshold value to obtain a second binary image frame;
and the determining unit is used for recording the acquisition time of the image frame if the black image area exists in the image frame according to the second binary image frame, and determining whether the periodic black image area exists in the image frame according to the recorded acquisition time of all the image frames with the black image area.
As an optional implementation manner, another embodiment of the present application further discloses that the apparatus further includes: the device comprises a splicing module and a spliced image quality evaluation module.
The splicing module is used for splicing the image frame with a pre-stored historical spliced image to obtain a target spliced image when any one of preset conditions does not occur;
and the spliced image quality evaluation module is used for determining an image quality evaluation result of the target spliced image based on the text content in the target spliced image.
As an optional implementation manner, another embodiment of the present application further discloses a mosaic image quality evaluation module, which is specifically configured to:
carrying out character edge extraction and edge foreground expansion on text contents in the target spliced image to obtain a character edge image;
determining the height ratio of text content in the target spliced image according to the character connected body in the character edge image;
and if the height ratio of the text content in the target spliced image is detected to exceed the preset ratio range, determining that the image quality of the target spliced image is unqualified.
As an optional implementation manner, another embodiment of the present application further discloses that the apparatus further includes: and the output module is used for outputting prompt information indicating that the characters in the target splicing image are too large or too small if the height ratio of the text content in the target splicing image is detected to exceed the preset ratio range.
As an optional implementation manner, another embodiment of the present application further discloses that the image quality evaluation apparatus further includes: and identifying the module.
The recognition module is used for performing text recognition on the target spliced image when the height ratio of the text content in the target spliced image does not exceed the preset ratio range;
and the spliced image quality evaluation module is also used for determining whether the image quality of the target spliced image is qualified or not based on the text recognition result of the target spliced image.
As an optional implementation manner, another embodiment of the present application further discloses that the stitched image quality evaluation module is specifically further configured to:
obtaining the posterior probabilities of a preset number of text recognition results from the text recognition results of the target stitched image;
calculating the posterior probability variance according to the posterior probabilities of the preset number of text recognition results;
if the posterior probability variance is smaller than a preset variance threshold, determining that the image quality of the target spliced image is unqualified;
and if the posterior probability variance is not less than a preset variance threshold, determining that the image quality of the target spliced image is qualified.
As an optional implementation manner, another embodiment of the present application further discloses that the output module is further configured to output the identification reliability corresponding to the posterior probability variance if it is detected that the posterior probability variance is smaller than a preset variance threshold.
As an optional implementation manner, another embodiment of the present application further discloses that the output module is further configured to output prompt information indicating that the contrast of the image frame is low if it is detected that the contrast of the image frame is lower than the preset contrast threshold; output prompt information indicating that no character exists in the image frame if no character exists in the image frame; and output prompt information indicating that the frame rate of the scanning device is inconsistent with the screen refresh rate of the scanned device if a periodic black image area exists in the image frame.
The image quality evaluation device provided by the embodiment belongs to the same application concept as the image quality evaluation method provided by the embodiment of the present application, can execute the image quality evaluation method provided by any embodiment of the present application, and has corresponding functional modules and beneficial effects for executing the image quality evaluation method. For details of the image quality evaluation method provided in the above embodiments of the present application, reference may be made to specific processing contents of the image quality evaluation method not described in detail in this embodiment.
Another embodiment of the present application further discloses an electronic device, as shown in fig. 10, the electronic device includes:
a memory 200 and a processor 210;
wherein, the memory 200 is connected to the processor 210 for storing programs;
the processor 210 is configured to implement the image quality evaluation method disclosed in any of the above embodiments by running the program stored in the memory 200.
Specifically, the electronic device may further include: a bus, a communication interface 220, an input device 230, and an output device 240.
The processor 210, the memory 200, the communication interface 220, the input device 230, and the output device 240 are connected to each other through a bus. Wherein:
a bus may include a path that transfers information between components of a computer system.
The processor 210 may be a general-purpose processor, such as a general-purpose Central Processing Unit (CPU) or microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs according to the solution of the present invention. It may also be a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components.
The processor 210 may include a main processor and may also include a baseband chip, a modem, and the like.
The memory 200 stores programs for executing the technical solution of the present invention, and may also store an operating system and other key services. In particular, the program may include program code including computer operating instructions. More specifically, the memory 200 may include a read-only memory (ROM), another type of static storage device that may store static information and instructions, a random access memory (RAM), another type of dynamic storage device that may store information and instructions, magnetic disk storage, flash memory, and so on.
The input device 230 may include a means for receiving data and information input by a user, such as a keyboard, mouse, camera, scanner, light pen, voice input device, touch screen, pedometer, or gravity sensor, among others.
Output device 240 may include equipment that allows output of information to a user, such as a display screen, a printer, speakers, and the like.
Communication interface 220 may include any device that uses any transceiver or the like to communicate with other devices or communication networks, such as an ethernet network, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), etc.
The processor 210 executes the programs stored in the memory 200 and invokes the other devices, which may be used to implement the steps of the image quality evaluation method provided by the embodiments of the present application.
Another embodiment of the present application further provides a scanning pen, including: a scanning camera, and a processor connected with the scanning camera. The scanning camera is used for collecting image frames and sending the image frames to the processor. The processor is used for detecting the contrast of the acquired image frame and detecting characters and black image areas from the image frame; if any one of the preset conditions is detected, determining that the image quality of the image frame is unqualified; the preset conditions comprise that the contrast of the image frame is lower than a preset contrast threshold, no character exists in the image frame, and a periodic black image area exists in the image frame.
As an optional implementation manner, the detecting, by the processor, the contrast of the acquired image frame in this embodiment includes: and calculating the contrast of the image frame by using the pixel value of each pixel point in the image frame and a preset contrast calculation formula.
As an alternative implementation manner, the processor in this embodiment detects characters from an image frame, and includes:
extracting character edges of the obtained image frames to obtain character edge image frames;
performing foreground expansion on character edges in the character edge image frame to obtain a character connected body;
and determining whether characters exist in the image frame or not according to the number and the size of the character connected bodies.
As an optional implementation manner, in this embodiment, the performing, by the processor, character edge extraction on the obtained image frame to obtain a character edge image frame includes:
binarizing the image frame according to the gradient amplitude of each pixel point in the image frame and a predetermined first binarization threshold value to obtain a first binary image frame;
and performing non-character edge filtering according to the foreground in the first binary image frame to obtain a character edge image frame.
As an optional implementation manner, the processor in this embodiment detects a black image area from the image frame, including:
carrying out binarization on the obtained image frame by using a preset second binarization threshold value to obtain a second binary image frame;
and if the black image areas exist in the image frame is determined according to the second binary image frame, recording the acquisition time of the image frame, and determining whether the periodic black image areas exist in the image frame according to the recorded acquisition time of all the image frames with the black image areas.
As an optional implementation manner, the processor in this embodiment is further configured to:
when any one of the preset conditions does not occur, splicing the image frame with a pre-stored historical spliced image to obtain a target spliced image;
and determining an image quality evaluation result of the target spliced image based on the text content in the target spliced image.
As an optional implementation manner, the determining, by the processor in this embodiment, an image quality evaluation result of the target stitched image based on text content in the target stitched image includes:
performing character edge extraction and edge foreground expansion on text content in the target spliced image to obtain a character edge image;
determining the height ratio of text content in the target spliced image according to the character connected body in the character edge image;
and if the height ratio of the text content in the target spliced image is detected to exceed the preset ratio range, determining that the image quality of the target spliced image is unqualified.
As an optional implementation manner, the processor in this embodiment is further configured to:
and if the height ratio of the text content in the target spliced image is detected to exceed the preset ratio range, outputting prompt information indicating that the characters in the target spliced image are too large or too small.
As an optional implementation manner, the processor in this embodiment is further configured to: when the height ratio of the text content in the target spliced image does not exceed the preset ratio range, performing text recognition on the target spliced image;
and determining whether the image quality of the target spliced image is qualified or not based on the text recognition result of the target spliced image.
As an optional implementation manner, the determining, by the processor in this embodiment, whether the image quality of the target stitched image is qualified based on the text recognition result of the target stitched image includes:
obtaining the posterior probabilities of a preset number of text recognition results from the text recognition results of the target stitched image;
calculating the posterior probability variance according to the posterior probabilities of the preset number of text recognition results;
if the posterior probability variance is smaller than a preset variance threshold, determining that the image quality of the target spliced image is unqualified;
and if the posterior probability variance is not less than the preset variance threshold, determining that the image quality of the target spliced image is qualified.
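A minimal sketch of the posterior-variance test, under the assumption that the recognizer exposes one posterior probability per recognition result; the variance threshold is an illustrative value, and, following the scheme above, a variance below the threshold marks the image as unqualified:

```python
import numpy as np

def recognition_quality_ok(posteriors, var_thresh: float = 0.01):
    """Judge stitched-image quality from OCR posterior probabilities.

    Takes the posteriors of a preset number of recognition results and
    compares their variance with a preset threshold. Per the scheme,
    variance below the threshold -> unqualified. Threshold is assumed.
    """
    var = float(np.var(posteriors))
    return var >= var_thresh, var
```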
As an optional implementation manner, the processor in this embodiment is further configured to: if the posterior probability variance is detected to be smaller than the preset variance threshold, outputting the recognition confidence corresponding to the posterior probability variance.
As an optional implementation manner, the processor in this embodiment is further configured to:
if the contrast of the image frame is detected to be lower than a preset contrast threshold, outputting prompt information indicating that the contrast of the image frame is low;
if no character exists in the image frame, outputting prompt information indicating that no character exists in the image frame;
and if a periodic black image area exists in the image frame, outputting prompt information indicating that the frame rate of the scanning device is inconsistent with the screen refresh rate of the scanned device.
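The three preset conditions and their prompts might be wired up as below. This is a sketch under stated assumptions: RMS contrast (standard deviation of normalized pixels) stands in for the unspecified "preset contrast calculation formula", periodicity of black-area frames is approximated by near-constant gaps between their recorded acquisition times, and all thresholds are assumed values:

```python
import numpy as np

PROMPTS = {
    "low_contrast": "image frame contrast is low",
    "no_character": "no character exists in the image frame",
    "periodic_black": "scanner frame rate is inconsistent with the screen refresh rate",
}

def frame_prompts(gray, has_character, black_times,
                  contrast_thresh: float = 0.2, period_tol: float = 0.005):
    """Return the prompt messages triggered by the three preset conditions."""
    prompts = []
    # Condition 1: contrast below the preset threshold (RMS contrast here).
    if float(np.std(gray / 255.0)) < contrast_thresh:
        prompts.append(PROMPTS["low_contrast"])
    # Condition 2: no character detected in the frame.
    if not has_character:
        prompts.append(PROMPTS["no_character"])
    # Condition 3: black-area frames recur at a near-constant interval.
    gaps = np.diff(sorted(black_times))
    if len(gaps) >= 2 and np.ptp(gaps) < period_tol:
        prompts.append(PROMPTS["periodic_black"])
    return prompts
```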
The scanning pen provided by this embodiment belongs to the same application concept as the image quality evaluation method provided by the embodiments of the present application; it can execute the image quality evaluation method provided by any embodiment of the present application, and has the corresponding functional modules and beneficial effects for executing the method. For specific processing contents of the image quality evaluation method not described in detail in this embodiment, reference may be made to the image quality evaluation method provided in the above embodiments of the present application.
Another embodiment of the present application further provides a storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps of the image quality evaluation method provided in any of the above embodiments.
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present application is not limited by the order of acts or acts described, as some steps may occur in other orders or concurrently with other steps in accordance with the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The steps in the method of each embodiment of the present application may be sequentially adjusted, combined, and deleted according to actual needs.
The modules and sub-modules in the device and the terminal in the embodiments of the application can be combined, divided and deleted according to actual needs.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal, apparatus and method may be implemented in other manners. For example, the above-described terminal embodiments are merely illustrative, and for example, the division of a module or a sub-module is only one logical division, and there may be other divisions when the terminal is actually implemented, for example, a plurality of sub-modules or modules may be combined or integrated into another module, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules or sub-modules described as separate parts may or may not be physically separate, and parts that are modules or sub-modules may or may not be physical modules or sub-modules, may be located in one place, or may be distributed over a plurality of network modules or sub-modules. Some or all of the modules or sub-modules can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, each functional module or sub-module in the embodiments of the present application may be integrated into one processing module, or each module or sub-module may exist alone physically, or two or more modules or sub-modules may be integrated into one module. The integrated modules or sub-modules may be implemented in the form of hardware, or may be implemented in the form of software functional modules or sub-modules.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software unit may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by "comprising" does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (16)

1. An image quality evaluation method is characterized by comprising:
detecting contrast of the acquired image frame, and detecting characters and black image areas from the image frame;
if any one of the preset conditions is detected, determining that the image quality of the image frame is unqualified;
wherein the preset conditions include that the contrast of the image frame is lower than a preset contrast threshold, no character exists in the image frame, and a periodic black image area exists in the image frame.
2. The method of claim 1, wherein the detecting the contrast of the acquired image frame comprises:
and calculating the contrast of the image frame by using the pixel value of each pixel point in the image frame and a preset contrast calculation formula.
3. The method of claim 1, wherein detecting characters from the image frames comprises:
extracting character edges from the acquired image frame to obtain a character edge image frame;
performing foreground dilation on the character edges in the character edge image frame to obtain character connected components;
and determining whether characters exist in the image frame according to the number and sizes of the character connected components.
4. The method according to claim 3, wherein the performing character edge extraction on the acquired image frame to obtain a character edge image frame comprises:
carrying out binarization on the image frame according to the gradient magnitude of each pixel point in the image frame and a predetermined first binarization threshold value to obtain a first binary image frame;
and carrying out non-character edge filtering according to the foreground in the first binary image frame to obtain a character edge image frame.
5. The method of claim 1, wherein detecting black image regions from the image frame comprises:
carrying out binarization on the obtained image frame by using a preset second binarization threshold value to obtain a second binary image frame;
and if it is determined according to the second binary image frame that a black image area exists in the image frame, recording the acquisition time of the image frame, and determining whether a periodic black image area exists according to the recorded acquisition times of all image frames in which black image areas exist.
6. The method of claim 1, further comprising:
when none of the preset conditions occurs, stitching the image frame with a pre-stored historical stitched image to obtain a target stitched image;
and determining an image quality evaluation result of the target stitched image based on the text content in the target stitched image.
7. The method according to claim 6, wherein determining an image quality evaluation result of the target stitched image based on text content in the target stitched image comprises:
performing character edge extraction and edge foreground dilation on the text content in the target stitched image to obtain a character edge image;
determining the height ratio of the text content in the target stitched image according to the character connected components in the character edge image;
and if the height ratio of the text content in the target stitched image is detected to fall outside the preset ratio range, determining that the image quality of the target stitched image is unqualified.
8. The method of claim 7, further comprising:
and if the height ratio of the text content in the target stitched image is detected to fall outside the preset ratio range, outputting prompt information indicating that the characters in the target stitched image are too large or too small.
9. The method of claim 7, further comprising:
when the height ratio of the text content in the target stitched image is within the preset ratio range, performing text recognition on the target stitched image;
and determining whether the image quality of the target stitched image is qualified based on the text recognition result of the target stitched image.
10. The method of claim 9, wherein determining whether the image quality of the target stitched image is acceptable based on the text recognition result of the target stitched image comprises:
obtaining posterior probabilities of a preset number of text recognition results from the text recognition results of the target stitched image;
calculating a posterior probability variance from the posterior probabilities of the preset number of text recognition results;
if the posterior probability variance is smaller than a preset variance threshold, determining that the image quality of the target stitched image is unqualified;
and if the posterior probability variance is not smaller than the preset variance threshold, determining that the image quality of the target stitched image is qualified.
11. The method of claim 10, further comprising:
and if the posterior probability variance is smaller than the preset variance threshold, outputting the recognition confidence corresponding to the posterior probability variance.
12. The method of claim 1, further comprising:
if the contrast of the image frame is detected to be lower than a preset contrast threshold, outputting prompt information indicating that the contrast of the image frame is low;
if no character exists in the image frame, outputting prompt information indicating that no character exists in the image frame;
and if a periodic black image area exists in the image frame, outputting prompt information indicating that the frame rate of the scanning device is inconsistent with the screen refresh rate of the scanned device.
13. An image quality evaluation apparatus, characterized by comprising:
the detection module is used for detecting the contrast of the acquired image frame and detecting characters and black image areas from the image frame;
the image frame quality evaluation module is used for determining that the image quality of the image frame is unqualified if any one of the preset conditions is detected;
wherein the preset conditions include that the contrast of the image frame is lower than a preset contrast threshold, no character exists in the image frame, and a periodic black image area exists in the image frame.
14. An electronic device, comprising: a memory and a processor;
wherein the memory is connected with the processor and used for storing programs;
the processor is configured to implement the image quality evaluation method according to any one of claims 1 to 12 by executing a program in the memory.
15. A scanning pen, comprising:
the system comprises a scanning camera and a processor connected with the scanning camera;
the scanning camera is used for collecting image frames and sending the image frames to the processor;
the processor is used for detecting the contrast of the acquired image frame and detecting characters and black image areas from the image frame;
if any one of the preset conditions is detected, determining that the image quality of the image frame is unqualified;
wherein the preset conditions include that the contrast of the image frame is lower than a preset contrast threshold, no character exists in the image frame, and a periodic black image area exists in the image frame.
16. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, implements the image quality evaluation method according to any one of claims 1 to 12.
CN202210716574.3A 2022-06-22 2022-06-22 Image quality evaluation method, image quality evaluation device, electronic apparatus, scanning pen, and storage medium Pending CN115240197A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210716574.3A CN115240197A (en) 2022-06-22 2022-06-22 Image quality evaluation method, image quality evaluation device, electronic apparatus, scanning pen, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210716574.3A CN115240197A (en) 2022-06-22 2022-06-22 Image quality evaluation method, image quality evaluation device, electronic apparatus, scanning pen, and storage medium

Publications (1)

Publication Number Publication Date
CN115240197A true CN115240197A (en) 2022-10-25

Family

ID=83668933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210716574.3A Pending CN115240197A (en) 2022-06-22 2022-06-22 Image quality evaluation method, image quality evaluation device, electronic apparatus, scanning pen, and storage medium

Country Status (1)

Country Link
CN (1) CN115240197A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115767280A (en) * 2022-12-30 2023-03-07 安徽淘云科技股份有限公司 Dictionary pen camera parameter adjusting method and device and dictionary pen
CN115767280B (en) * 2022-12-30 2023-08-22 安徽淘云科技股份有限公司 Dictionary pen camera parameter adjusting method and device and dictionary pen
CN117333495A (en) * 2023-12-01 2024-01-02 浙江口碑网络技术有限公司 Image detection method, device, equipment and storage medium
CN117333495B (en) * 2023-12-01 2024-03-19 浙江口碑网络技术有限公司 Image detection method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
US9311533B2 (en) Device and method for detecting the presence of a logo in a picture
WO2018018788A1 (en) Image recognition-based meter reading apparatus and method thereof
US8712188B2 (en) System and method for document orientation detection
TWI655586B (en) Method and device for detecting specific identification image in predetermined area
CN110033471B (en) Frame line detection method based on connected domain analysis and morphological operation
CN115240197A (en) Image quality evaluation method, image quality evaluation device, electronic apparatus, scanning pen, and storage medium
CN103034848B (en) A kind of recognition methods of form types
EP1091320A2 (en) Processing multiple digital images
JP2009535899A (en) Generation of bi-tonal images from scanned color images.
CN103268481A (en) Method for extracting text in complex background image
CN111382704A (en) Vehicle line-pressing violation judgment method and device based on deep learning and storage medium
CN108830133A Contract image recognition method, electronic device, and readable storage medium
CN105205488A (en) Harris angular point and stroke width based text region detection method
CN111259878A (en) Method and equipment for detecting text
CN110598566A (en) Image processing method, device, terminal and computer readable storage medium
CN106709952B (en) A kind of automatic calibration method of display screen
WO2015002719A1 (en) Method of improving contrast for text extraction and recognition applications
CN108615030B (en) Title consistency detection method and device and electronic equipment
KR20150108118A (en) Remote automatic metering system based image recognition
CN113221778B (en) Method and device for detecting and identifying handwritten form
JP2011087144A (en) Telop character area detection method, telop character area detection device, and telop character area detection program
CN111209912A (en) Method for removing long interference lines of Chinese character and picture
CN106845488B (en) License plate image processing method and device
JP3268552B2 (en) Area extraction method, destination area extraction method, destination area extraction apparatus, and image processing apparatus
JP5424694B2 (en) Image recognition apparatus and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination