CN111915550A - Image quality detection method, detection device and storage medium - Google Patents


Info

Publication number
CN111915550A
Authority
CN
China
Prior art keywords
image
detection
detection image
live
scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910385809.3A
Other languages
Chinese (zh)
Other versions
CN111915550B (en)
Inventor
石小威
于越
付煜
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority claimed from CN201910385809.3A
Publication of CN111915550A
Application granted
Publication of CN111915550B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image quality detection method, a detection device, and a storage medium, belonging to the field of display technology. In the image quality detection method provided by the embodiments of the application, a first matching degree between a first live-action detection image and a live-action reference image is determined, and the image quality of the detected device is detected based on the first matching degree, a first gray-scale detection image, and a first color-scale detection image. When this method detects the image quality of the detected device, in addition to examining the first gray-scale detection image and the first color-scale detection image, it compares the first live-action detection image with the live-action reference image, so that the actual effect of the image displayed by the detected device accurately reflects the image quality displayed under actual conditions, improving the reliability of image quality detection.

Description

Image quality detection method, detection device and storage medium
Technical Field
The present application relates to the field of display technology, and more particularly, to an image quality detection method, a detection device, and a storage medium.
Background
With the development of display technology, LCD (Liquid Crystal Display) products can be manufactured and applied in various fields; for example, an LCD product may be a terminal, a monitor, a liquid crystal television, or the like. In order to ensure the quality of the image displayed by an LCD product, the LCD product is subjected to image quality inspection before it leaves the factory.
In the related art, when detecting the quality of the image displayed by an LCD product, a gray-scale sample image and a color-scale sample image are input into the LCD product and displayed on it. The displayed images are then captured to obtain a gray-scale detection image and a color-scale detection image. The gray-scale detection image is compared with a gray-scale reference image, the color-scale detection image is compared with a color-scale reference image, and whether the quality of the image displayed by the LCD product is qualified is determined from the comparison results.
However, in the related art, when the detection images are compared with the reference images, only the gray-scale and color-scale images are compared; the actual effect of the image displayed by the LCD product is not analyzed, so the image quality displayed under actual conditions cannot be accurately reflected and the detection reliability is low.
Disclosure of Invention
The embodiments of the present application provide an image quality detection method, a detection device, and a storage medium, which can solve the problem of low reliability in image quality detection of LCD products. The technical scheme is as follows:
in one aspect, an image quality detection method is provided, and the method includes:
acquiring a first live-action detection image, a first gray-scale detection image, and a first color-scale detection image, where the first live-action detection image, the first gray-scale detection image, and the first color-scale detection image are detection images obtained by capturing, respectively, a live-action sample image, a gray-scale sample image, and a color-scale sample image displayed on the detected device;
determining a first matching degree between the first live-action detection image and a live-action reference image, where the live-action reference image is an image obtained by capturing the live-action sample image displayed on a reference display device, and the reference display device is a display device with qualified image quality;
and detecting the image quality of the detected device based on the first matching degree, the first gray-scale detection image, and the first color-scale detection image.
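The embodiments do not fix a concrete similarity metric for the first matching degree. As an illustrative sketch only (not the patent's specified method), one common choice is normalized cross-correlation over the pixel intensities of the two images; the function name and the flat-list representation are assumptions:

```python
# Hypothetical sketch: "first matching degree" computed as normalized
# cross-correlation between a live-action detection image and a
# live-action reference image, each given as a flat list of pixel values.

def matching_degree(detect, reference):
    """Return a similarity score in [-1, 1] for two equally sized images."""
    n = len(detect)
    mean_d = sum(detect) / n
    mean_r = sum(reference) / n
    num = sum((d - mean_d) * (r - mean_r) for d, r in zip(detect, reference))
    den_d = sum((d - mean_d) ** 2 for d in detect) ** 0.5
    den_r = sum((r - mean_r) ** 2 for r in reference) ** 0.5
    if den_d == 0 or den_r == 0:
        # Degenerate (constant) image: fall back to exact comparison.
        return 1.0 if detect == reference else 0.0
    return num / (den_d * den_r)

# Identical images match perfectly; inverted images anti-correlate.
print(matching_degree([10, 20, 30, 40], [10, 20, 30, 40]))  # 1.0
```

A score close to 1 would then be compared against the first threshold described below.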
In a possible implementation manner, the detecting the image quality of the detected device based on the first matching degree, the first gray-scale detection image, and the first color-scale detection image includes:
determining a first variance difference value between the first gray-scale detection image and the gray-scale reference image based on the first gray-scale detection image and the gray-scale reference image, and determining a second variance difference value between the first color-scale detection image and the color-scale reference image based on the first color-scale detection image and the color-scale reference image;
when the first variance difference value is within a first preset range, the second variance difference value is within a second preset range, and the first matching degree is greater than a first threshold, determining that the image quality of the detected device is qualified;
and when the first variance difference value is not within the first preset range, or the second variance difference value is not within the second preset range, or the first matching degree is not greater than the first threshold, determining that the image quality of the detected device is unqualified.
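The three-condition qualification decision above can be sketched as follows. The preset ranges and the first threshold are illustrative placeholder values chosen for the example, not values specified by the application:

```python
# Sketch of the qualification decision. The ranges and threshold below
# are illustrative defaults only; the patent leaves them unspecified.

def quality_qualified(var_diff_gray, var_diff_color, match_degree,
                      gray_range=(0.0, 5.0), color_range=(0.0, 5.0),
                      match_threshold=0.9):
    in_gray = gray_range[0] <= var_diff_gray <= gray_range[1]
    in_color = color_range[0] <= var_diff_color <= color_range[1]
    # Qualified only when all three conditions hold; any single failure
    # (variance out of range, or matching degree too low) disqualifies.
    return in_gray and in_color and match_degree > match_threshold

print(quality_qualified(2.0, 3.0, 0.95))  # True
print(quality_qualified(2.0, 3.0, 0.50))  # False: matching degree too low
```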
In another possible implementation manner, before the acquiring the first live view detection image and the acquiring the first grayscale detection image and the first color level detection image, the method further includes:
collecting a correction sample image displayed on the detected equipment to obtain a first correction detection image;
performing edge recognition on the first correction detection image to obtain first edge information of the first correction detection image;
and when the first edge information is matched with the second edge information of the correction reference image, executing the steps of acquiring a first live-action detection image, and acquiring a first gray-scale detection image and a first color-scale detection image, wherein the correction reference image is an image obtained by acquiring the correction sample image displayed on the reference display device.
In another possible implementation manner, the method further includes:
when the first edge information is not matched with the second edge information, adjusting the acquisition position of the correction detection image;
re-collecting the corrected sample image displayed on the detected equipment to obtain a second corrected detection image;
performing edge recognition on the second correction detection image to obtain third edge information of the second correction detection image;
when the third edge information is matched with the second edge information, executing the steps of obtaining a first live-action detection image, and obtaining a first gray-scale detection image and a first color-scale detection image;
and when the third edge information is not matched with the second edge information, continuously adjusting the acquisition position of the correction detection image until the fourth edge information of the third correction detection image acquired at the adjusted acquisition position is matched with the second edge information, and executing the steps of acquiring the first real-scene detection image, and acquiring the first gray-scale detection image and the first color-scale detection image.
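The adjust-and-recapture loop described in the steps above can be sketched as follows. `capture`, `adjust`, and `matches` are hypothetical stand-ins for the acquisition-device interface, and the retry budget is an assumption (the patent loops until the edges match):

```python
# Sketch of the correction loop: recapture the correction image and
# adjust the acquisition position until its edge information matches
# the correction reference image's edge information.

def calibrate(capture, adjust, reference_edges, matches, max_tries=10):
    for _ in range(max_tries):
        edges = capture()                  # capture correction detection image
        if matches(edges, reference_edges):
            return True                    # aligned: proceed to detection
        adjust()                           # move the acquisition position
    return False

# Toy simulation: the capture position starts 3 steps off-target.
position = {"offset": 3}
def capture():
    return position["offset"]              # stand-in for edge information
def adjust():
    position["offset"] -= 1
def matches(edges, ref):
    return edges == ref

print(calibrate(capture, adjust, 0, matches))  # True
```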
In another possible implementation manner, the method further includes:
determining an edge deviation between the edge position of the first correction detection image and the edge position of the correction reference image based on the first edge information and the second edge information;
and when the edge deviation is within a preset deviation range, determining that the first edge information is matched with the second edge information.
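The edge-deviation test can be sketched as follows, assuming edge information is reduced to a list of corresponding edge positions; the mean-distance criterion and the preset deviation range are illustrative assumptions, since the patent does not define how the deviation is aggregated:

```python
# Sketch: first edge information "matches" second edge information when
# the mean deviation between corresponding edge positions stays within
# a preset deviation range (here an assumed default of 2.0 pixels).

def edges_match(edge_positions_a, edge_positions_b, max_deviation=2.0):
    deviations = [abs(a - b) for a, b in zip(edge_positions_a, edge_positions_b)]
    mean_dev = sum(deviations) / len(deviations)
    return mean_dev <= max_deviation

print(edges_match([10, 20, 30], [11, 21, 29]))  # True: mean deviation 1.0
print(edges_match([10, 20, 30], [15, 25, 35]))  # False: mean deviation 5.0
```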
In another possible implementation manner, the acquiring the first live-action detection image includes:
acquiring the live-action sample image to obtain a second live-action detection image;
and performing image processing on the second live-action detection image to obtain the first live-action detection image, where the image processing includes at least one of edge recognition, edge cropping, scaling, and noise reduction.
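As an illustrative sketch of the edge-cropping and scaling steps only (edge recognition and noise reduction omitted), treating an image as a list of pixel rows; the function names and the nearest-neighbour scaling choice are assumptions:

```python
# Two of the listed preprocessing steps, sketched on a list-of-rows image.

def crop(image, top, bottom, left, right):
    """Edge cropping: cut away border rows/columns around the display area."""
    return [row[left:len(row) - right] for row in image[top:len(image) - bottom]]

def downscale(image, factor):
    """Scaling (nearest-neighbour): keep every `factor`-th row and column."""
    return [row[::factor] for row in image[::factor]]

img = [[p for p in range(8)] for _ in range(8)]   # 8x8 test image
out = downscale(crop(img, 1, 1, 1, 1), 2)          # crop 1px border, halve
print(len(out), len(out[0]))  # prints: 3 3
```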
In another possible implementation manner, the acquiring the live-action sample image to obtain a second live-action detection image includes:
sending a collecting instruction to collecting equipment, wherein the collecting instruction is used for instructing the collecting equipment to collect the live-action sample image displayed on the detected equipment to obtain the second live-action detection image;
and receiving the second real scene detection image returned by the acquisition equipment.
In another possible implementation manner, before the acquiring the first live-action detection image, the method further includes:
and sending a display instruction to the detected device, where the display instruction carries the image data of the live-action sample image and is used to instruct the detected device to render the image data of the live-action sample image to obtain the first live-action detection image.
In another aspect, an image quality detection apparatus is provided, the apparatus including:
the acquisition module is used for acquiring a first live-action detection image, and acquiring a first gray-scale detection image and a first color-scale detection image, wherein the first live-action detection image, the first gray-scale detection image and the first color-scale detection image are respectively detection images obtained by acquiring a live-action sample image, a gray-scale sample image and a color-scale sample image displayed on the detected equipment;
a first determining module, configured to determine a first matching degree between the first live-action detection image and a live-action reference image, where the live-action reference image is an image obtained by collecting a live-action sample image displayed on a reference display device, and the reference display device is a display device with a qualified image quality;
and the detection module is used for detecting the image quality of the detected equipment based on the first matching degree, the first gray scale detection image and the first color scale detection image.
In a possible implementation manner, the detection module is configured to determine a first variance difference between the first gray-scale detection image and the gray-scale reference image based on the first gray-scale detection image and the gray-scale reference image, and determine a second variance difference between the first color-scale detection image and the color-scale reference image based on the first color-scale detection image and the color-scale reference image; when the first variance difference is within a first preset range, the second variance difference is within a second preset range, and the first matching degree is greater than a first threshold, determine that the image quality of the detected device is qualified; and when the first variance difference is not within the first preset range, or the second variance difference is not within the second preset range, or the first matching degree is not greater than the first threshold, determine that the image quality of the detected device is unqualified.
In another possible implementation manner, the obtaining module is further configured to collect a corrected sample image displayed on the detected device, so as to obtain a first corrected detection image; performing edge recognition on the first correction detection image to obtain first edge information of the first correction detection image; and when the first edge information is matched with the second edge information of the correction reference image, acquiring a first live-action detection image, and acquiring a first gray-scale detection image and a first color-scale detection image, wherein the correction reference image is an image obtained by collecting the correction sample image displayed on the reference display equipment.
In another possible implementation manner, the obtaining module is further configured to adjust an acquisition position of a corrected and detected image when the first edge information does not match the second edge information; re-collecting the corrected sample image displayed on the detected equipment to obtain a second corrected detection image; performing edge recognition on the second correction detection image to obtain third edge information of the second correction detection image; when the third edge information is matched with the second edge information, acquiring a first live-action detection image, and acquiring a first gray-scale detection image and a first color scale detection image; and when the third edge information is not matched with the second edge information, continuously adjusting the acquisition position of the correction detection image until the fourth edge information of the third correction detection image acquired at the adjusted acquisition position is matched with the second edge information, acquiring the first live-action detection image, and acquiring the first gray-scale detection image and the first color scale detection image.
In another possible implementation manner, the apparatus further includes:
a second determination module configured to determine an edge deviation between an edge position of the first correction detection image and an edge position of the correction reference image based on the first edge information and the second edge information; and when the edge deviation is within a preset deviation range, determining that the first edge information is matched with the second edge information.
In another possible implementation manner, the obtaining module is further configured to collect the live-action sample image to obtain a second live-action detection image; and carrying out image processing on the second real scene detection image to obtain the first real scene detection image, wherein the image processing comprises at least one of edge identification, edge cutting, scaling processing and noise reduction processing.
In another possible implementation manner, the acquiring module is further configured to send an acquisition instruction to an acquisition device, where the acquisition instruction is used to instruct the acquisition device to acquire a live-action sample image displayed on the detected device, so as to obtain the second live-action detection image; and receiving the second real scene detection image returned by the acquisition equipment.
In another possible implementation manner, the apparatus further includes:
and the sending module is used for sending a display instruction to the detected equipment, wherein the display instruction carries the image data of the real scene sample image and is used for indicating the detected equipment to render the image data of the real scene sample image so as to obtain the first real scene detection image.
In another aspect, there is provided a detection apparatus, including:
a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, the instruction, the program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the operations performed in the above-described image quality detection method.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which is loaded and executed by a processor to implement the operations performed in the above-mentioned image quality detection method.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
according to the image quality detection method provided by the embodiment of the application, the first matching degree between the first live-action detection image and the live-action reference image is determined, and the image quality of the detected equipment is detected based on the first matching degree, the first gray-scale detection image and the first color-scale detection image. When the method is used for detecting the image quality of the detected equipment, the first gray scale detection image and the first color scale detection image are detected, the first live-action detection image and the live-action reference image are compared, the image quality displayed by the detected equipment under the actual condition is accurately reflected through the actual effect of the image displayed by the detected equipment, and the reliability of image quality detection is improved.
Drawings
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a detection apparatus provided in an embodiment of the present application;
fig. 3 is a flowchart of an image quality detection method provided in an embodiment of the present application;
fig. 4 is a flowchart of an image quality detection method provided in an embodiment of the present application;
fig. 5 is a schematic diagram of an operation performed by a detection apparatus when the image quality detection method provided by the embodiment of the present application is performed;
fig. 6 is a schematic structural diagram of an image quality detection apparatus according to an embodiment of the present application;
fig. 7 is a block diagram of a detection apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions and advantages of the present application more clear, the following describes the embodiments of the present application in further detail.
The embodiment of the present application provides an implementation environment for image quality detection, and referring to fig. 1, the implementation environment includes: a detection device 101, a detected device 102, and an acquisition device 103. The detection device 101 and the detected device 102 may be connected via a serial port, the detection device 101 and the acquisition device 103 may be connected via a USB (Universal Serial Bus) interface, and the acquisition device 103 may send the captured detection images to the detection device 101 via a wireless or wired network.
The detected device 102 is configured to display the live-action sample image, the gray-scale sample image, and the color-scale sample image.
The collecting device 103 is configured to collect the live-action sample image, the gray-scale sample image, and the color-scale sample image displayed on the detected device 102, respectively, to obtain a first live-action detection image, a first gray-scale detection image, and a first color-scale detection image, and send the first live-action detection image, the first gray-scale detection image, and the first color-scale detection image to the detecting device 101, respectively.
The detecting device 101 is configured to receive the first live-action detection image, the first grayscale detection image, and the first color-scale detection image, and perform image quality detection on the detected device 102 based on the first live-action detection image, the first grayscale detection image, and the first color-scale detection image.
In a possible implementation manner, the detection device 101 may further control the detected device 102 to display the live-action sample image, the gray-scale sample image, and the color-scale sample image. For convenience of distinction, the display instruction by which the detection device 101 controls the detected device 102 to display the live-action sample image is referred to as a first display instruction. Accordingly, the steps may be: the detection device 101 is further configured to send a first display instruction to the detected device 102, where the first display instruction carries the image data of the live-action sample image and is used to instruct the detected device 102 to display it. The detected device 102 is further configured to receive the first display instruction and render the image data of the live-action sample image to obtain the first live-action detection image.
In another possible implementation manner, the detection device 101 may further control the acquisition device 103 to acquire a live-action sample image, a gray-scale sample image, and a color-scale sample image. Taking the example that the detection device 101 controls the collection device 103 to collect the live-action sample image, the detection device 101 is further configured to send a collection instruction to the collection device 103, where the collection instruction is used to instruct the collection device 103 to collect the live-action sample image, so as to obtain a first live-action detection image. The collecting device 103 is further configured to receive the collecting instruction, collect the live-action sample image, and obtain a first live-action detection image. The steps of the detecting device 101 controlling the collecting device 103 to collect the grayscale sample image and the color level sample image are similar to the steps of the real scene sample image, and are not described herein again.
In another possible implementation manner, the detection device 101 may also perform preprocessing on the first live view detection image, the first grayscale detection image, and the first color scale detection image. The detection apparatus 101 performs preprocessing on the first live view detection image as an example. For the sake of convenience of distinction, the image captured by the capturing device 103 is referred to as a second live view detection image. Correspondingly, the detection device 101 is further configured to receive the second live-action detection image, and perform image processing on the second live-action detection image to obtain the first live-action detection image. The image processing may include at least one of edge recognition, edge cropping, scaling, and noise reduction.
The detection device 101 may be a master device with image processing capability; the main control device may be a terminal or a control device. The device 102 to be tested is an LCD product, which may be a monitor, a liquid crystal television, etc., and the acquisition device 103 may be a consumer camera or an industrial camera. Moreover, the detection device 101 and the acquisition device 103 may be integrated on one device, or may be two separate devices. In the embodiments of the present application, this is not particularly limited.
In a possible implementation manner, the detection device 101 is configured with a high-performance graphics card, and the detection device 101 may include: the image processing system comprises a main control module, an image processing module, a report generating module and an information source input module, wherein the main control module is used for controlling the image processing module, the information source input module and the report generating module, and the reference is made to fig. 2. The following description will take an example of acquiring a live-action sample image to obtain a first live-action detection image.
The main control module is configured to send a display instruction to the detected device 102, where the display instruction carries image data of the real-scene sample image, and is configured to instruct the detected device 102 to render the image data, so as to obtain a first real-scene detection image.
The main control module is further configured to send a collection instruction to the collection device 103, where the collection instruction is used to instruct the collection device 103 to collect a live-action sample image displayed on the detected device 102, so as to obtain a second live-action detection image.
And the image processing module is used for receiving the second real-scene detection image, carrying out image processing on the second real-scene detection image to obtain a first real-scene detection image, comparing the first real-scene detection image with the real-scene reference image to obtain a detection result, and sending the detection result to the report generating module.
And the report generating module is used for writing data according to the detection result and generating a report.
Related personnel can judge from the detection result shown in the report whether the quality of the image displayed on the detected device 102 is qualified. When it is qualified, the detected device 102 can leave the factory; when it is unqualified, the detected device 102 cannot leave the factory and must be repaired until it is qualified. The method thus realizes automated image quality detection: it can analyze and generate the report automatically without manual operation, improving the efficiency of image quality detection.
It should be noted that, with continued reference to fig. 2, when the detection device 101 and the acquisition device 103 are integrated on the same device, the detection device 101 further includes an image acquisition module, which is configured to capture the live-action sample image displayed on the detected device 102 to obtain the second live-action detection image. The steps by which the detection device 101 obtains the first gray-scale detection image and the first color-scale detection image are similar to the above steps and are not described here again.
The live-action reference image is used for comparing with the first live-action detection image, and the live-action reference image may be an image pre-stored in a standard comparison library, or an image obtained by acquiring, by the detection device 101, a live-action sample image displayed on the reference display device 104 when the image quality detection method provided by the embodiment of the present application is executed, and the reference display device 104 is a display device with qualified image quality. In the embodiments of the present application, this is not particularly limited.
In one possible implementation, when the live-action reference image is an image obtained by capturing the live-action sample image displayed on the reference display device 104, with continued reference to fig. 1, the implementation environment further includes the reference display device 104. Correspondingly, the reference display device 104 is further configured to display the correction sample image, the gray-scale sample image, and the color-scale sample image, and the detection device 101 is further configured to capture the correction sample image, the gray-scale sample image, and the color-scale sample image displayed on the reference display device 104 to obtain the correction reference image, the gray-scale reference image, and the color-scale reference image, respectively.
The corrected reference image is used for performing detection environment correction by the detection device 101 when executing the image quality detection method provided by the embodiment of the application, adjusting the position of the acquisition device 103 for acquiring the detection image, and reducing the detection error. The gray scale reference image and the color scale reference image are respectively used for comparing with the first gray scale detection image and the first color scale detection image.
In another possible implementation manner, when the live-action reference image is an image in a standard comparison library, the standard comparison library further stores a correction reference image, a gray-scale reference image, and a color-scale reference image.
An embodiment of the present application provides an image quality detection method, and with reference to fig. 3, the method includes:
step S301: acquiring a first live-action detection image, and acquiring a first gray-scale detection image and a first color-scale detection image, wherein the first live-action detection image, the first gray-scale detection image and the first color-scale detection image are respectively detection images obtained by acquiring a live-action sample image, a gray-scale sample image and a color-scale sample image displayed on detected equipment;
step S302: determining a first matching degree between a first live-action detection image and a live-action reference image, wherein the live-action reference image is an image obtained by collecting a live-action sample image displayed on a reference display device, and the reference display device is a display device with qualified image quality;
step S303: and based on the first matching degree, the first gray scale detection image and the first color scale detection image, carrying out image quality detection on the detected equipment.
In a possible implementation manner, detecting the image quality of the detected device based on the first matching degree, the first gray-scale detection image, and the first color-scale detection image includes:
determining a first variance difference value between the first gray scale detection image and the gray scale reference image based on the first gray scale detection image and the gray scale reference image, and determining a second variance difference value between the first color scale detection image and the color scale reference image based on the first color scale detection image and the color scale reference image, wherein the gray scale reference image and the color scale reference image are respectively images obtained by collecting a gray scale sample image and a color scale sample image displayed on a reference display device;
when the first variance difference value is within a first preset range, the second variance difference value is within a second preset range, and the first matching degree is greater than a first threshold value, determining that the image quality of the detected equipment is qualified;
and when the first variance difference value is not in a first preset range, or the second variance difference value is not in a second preset range, or the first matching degree is not greater than a first threshold value, determining that the image quality of the detected equipment is unqualified.
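The qualification rule in the steps above can be sketched as follows. This is an illustrative sketch only: the preset ranges and the first threshold are hypothetical placeholder values, since the embodiment leaves their concrete values to the implementation.

```python
def quality_check(first_variance_diff, second_variance_diff, first_matching_degree,
                  first_range=(-5.0, 5.0), second_range=(-5.0, 5.0),
                  first_threshold=0.9):
    """Return True when the detected device's image quality is qualified.

    Qualified only when the first variance difference value is within the
    first preset range, the second variance difference value is within the
    second preset range, AND the first matching degree exceeds the first
    threshold; any single failure makes the device unqualified.
    """
    first_ok = first_range[0] <= first_variance_diff <= first_range[1]
    second_ok = second_range[0] <= second_variance_diff <= second_range[1]
    return first_ok and second_ok and first_matching_degree > first_threshold
```

Any one failing condition is sufficient for an unqualified result, matching the "or" structure of the rule above.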
In another possible implementation manner, before acquiring the first live view detection image and acquiring the first grayscale detection image and the first color scale detection image, the method further includes:
collecting a correction sample image displayed on detected equipment to obtain a first correction detection image;
performing edge recognition on the first correction detection image to obtain first edge information of the first correction detection image;
and when the first edge information is matched with the second edge information of the correction reference image, executing the steps of acquiring a first live-action detection image, and acquiring a first gray-scale detection image and a first color-scale detection image, wherein the correction reference image is an image obtained by collecting a correction sample image displayed on the reference display equipment.
In another possible implementation manner, the method further includes:
when the first edge information is not matched with the second edge information, adjusting the acquisition position of the correction detection image;
re-acquiring the correction sample image displayed on the detected device to obtain a second correction detection image;
performing edge recognition on the second correction detection image to obtain third edge information of the second correction detection image;
when the third edge information is matched with the second edge information, executing the steps of obtaining a first live-action detection image, and obtaining a first gray-scale detection image and a first color scale detection image;
and when the third edge information does not match the second edge information, continuing to adjust the acquisition position of the correction detection image until fourth edge information of a third correction detection image acquired at the adjusted acquisition position matches the second edge information, and then executing the steps of acquiring the first live-action detection image, and acquiring the first gray-scale detection image and the first color-scale detection image.
In another possible implementation manner, the method further includes:
determining an edge deviation between the edge position of the first correction detection image and the edge position of the correction reference image based on the first edge information and the second edge information;
and when the edge deviation is within the preset deviation range, determining that the first edge information is matched with the second edge information.
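The matching test above can be sketched as follows, with edge positions given per side; the single symmetric tolerance stands in for the preset deviation range and is an illustrative assumption (the embodiment also allows a separate preset range per side).

```python
def edges_match(detected_edges, reference_edges, tolerance=2.0):
    """Check whether first edge information matches second edge information.

    Each argument maps a side name to an edge position (e.g. in pixels);
    the per-side deviation may be positive, negative, or zero.
    """
    for side in ("left", "right", "top", "bottom"):
        deviation = detected_edges[side] - reference_edges[side]
        if not -tolerance <= deviation <= tolerance:
            return False
    return True
```

With the default tolerance, a one-pixel shift on every side still matches, while a five-pixel shift on any single side does not.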
In another possible implementation, acquiring a first live-action detection image includes:
acquiring a live-action sample image to obtain a second live-action detection image;
and performing image processing on the second live-action detection image to obtain the first live-action detection image, wherein the image processing comprises at least one of edge recognition, edge cropping, scaling processing and noise reduction processing.
In another possible implementation manner, acquiring a live-action sample image to obtain a second live-action detection image includes:
sending a collection instruction to collection equipment, wherein the collection instruction is used for instructing the collection equipment to collect the live-action sample image displayed on the detected equipment to obtain a second live-action detection image;
and receiving the second live-action detection image returned by the acquisition device.
In another possible implementation, before acquiring the first live-action detection image, the method further includes:
and sending a display instruction to the detected device, wherein the display instruction carries the image data of the live-action sample image and is used for instructing the detected device to render the image data of the live-action sample image to obtain the first live-action detection image.
According to the image quality detection method provided by the embodiment of the application, the first matching degree between the first live-action detection image and the live-action reference image is determined, and the image quality of the detected device is detected based on the first matching degree, the first gray-scale detection image and the first color-scale detection image. In addition to detecting the first gray-scale detection image and the first color-scale detection image, the method compares the first live-action detection image with the live-action reference image, so that the actual effect of the image displayed by the detected device accurately reflects the image quality the device presents in real use, improving the reliability of image quality detection.
The embodiment of the application provides an image quality detection method, which is applied to detection equipment, detected equipment and acquisition equipment, and referring to fig. 4, the method comprises the following steps:
step S401: the detection device performs detection correction based on the detection environment.
The detection environment comprises the detection device, the detected device and the acquisition device. Before performing image quality detection on the detected device, the detection device needs to perform detection correction, so as to reduce data errors and improve the reliability of the detection result. For convenience of distinction, the instruction sent by the detection device to instruct the detected device to display the live-action sample image is referred to as a first display instruction, the instruction to display the gray-scale sample image as a second display instruction, the instruction to display the color-scale sample image as a third display instruction, and the instruction to display the correction sample image as a fourth display instruction. Similarly, the instruction sent by the detection device to instruct the acquisition device to acquire the live-action sample image is referred to as a first acquisition instruction, the instruction to acquire the gray-scale sample image as a second acquisition instruction, the instruction to acquire the color-scale sample image as a third acquisition instruction, and the instruction to acquire the correction sample image as a fourth acquisition instruction.
Accordingly, this step can be realized by the following steps (1) to (7):
(1) The detection device sends a fourth display instruction to the detected device.
The fourth display instruction carries image data of the correction sample image and is used for instructing the detected equipment to render the image data of the correction sample image to obtain a first correction detection image. And the detected equipment receives the fourth display instruction, renders the image data of the correction sample image and displays the correction sample image.
In a possible implementation manner, after the detected device receives the fourth display instruction, a first success instruction may be returned to the detection device, where the first success instruction is used to indicate that the detected device successfully receives the fourth display instruction. The detection device may send a fourth acquisition instruction to the acquisition device when receiving the first success instruction returned by the detected device. In another possible implementation manner, the detection device may send the fourth acquisition instruction to the acquisition device after sending the fourth display instruction to the detected device and when the first preset time duration is reached.
The detection device may be a terminal, the detected device may be an LCD product, and the LCD product may be a monitor, a liquid crystal television, and the like. The first preset time period may be set and changed as needed, and is not particularly limited in the embodiment of the present application.
(2) The detection device sends a fourth acquisition instruction to the acquisition device.
The fourth acquisition instruction is used for instructing the acquisition equipment to acquire the corrected sample image displayed on the detected equipment to obtain a first corrected detection image. And the acquisition equipment receives the fourth acquisition instruction, acquires the correction sample image, obtains a first correction detection image, and sends the first correction detection image to the detection equipment.
The acquisition device may be a civil camera or an industrial camera, which is not particularly limited in the embodiments of the present application. Compared with a conventional civil camera, however, an industrial camera generally has higher image stability, stronger transmission capability and stronger anti-interference capability, so in the embodiment of the application an industrial camera can be used for acquiring the correction sample image displayed on the detected device to obtain the first correction detection image.
(3) The detection device receives the first correction detection image returned by the acquisition device, and performs edge recognition on the first correction detection image to obtain first edge information of the first correction detection image.
When the detection device performs edge recognition on the first corrected and detected image in this step, the contour points of the first corrected and detected image may be roughly detected, and then the detected contour points are connected by a link rule, so as to obtain first edge information of the first corrected and detected image. The first edge information includes at least one of information such as an edge position, a length, and a distance between two opposite edges of each edge of the first corrected and detected image.
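As a rough stand-in for the contour detection and linking described above, the following sketch thresholds the gradient magnitude of a grayscale image and reports the bounding edge positions; the detector and threshold are illustrative assumptions (a production system might instead use a Canny edge detector followed by contour linking).

```python
import numpy as np

def edge_info(image):
    """Extract rough edge positions (left, right, top, bottom) from a
    2-D grayscale image, or return None when no strong edges exist."""
    img = np.asarray(image, dtype=np.float64)
    gy, gx = np.gradient(img)                  # per-axis intensity gradients
    magnitude = np.hypot(gx, gy)               # gradient magnitude
    mask = magnitude > magnitude.mean() + 2 * magnitude.std()
    ys, xs = np.nonzero(mask)                  # strong-edge pixel coordinates
    if xs.size == 0:
        return None
    return {"left": int(xs.min()), "right": int(xs.max()),
            "top": int(ys.min()), "bottom": int(ys.max())}
```

The returned positions correspond to the edge positions in the first edge information; edge lengths and distances between opposite edges follow directly from them.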
(4) The detection device determines an edge deviation between an edge position of the first correction detection image and an edge position of the correction reference image based on the first edge information and the second edge information of the correction reference image.
The correction reference image is an image in the standard comparison library. The detection device also performs edge recognition on the correction reference image to obtain the second edge information of the correction reference image. A typical image is rectangular and has four edges, namely an upper edge, a lower edge, a left edge and a right edge, wherein the upper edge and the lower edge are two opposite edges of the same length, and the left edge and the right edge are two opposite edges of the same length.
The detection device determines an edge deviation between the edge position of each edge of the first correction detection image and the edge position of each edge of the correction reference image with reference to the second edge information. For example, the detection device compares the left edge of the first correction detection image with the left edge of the correction reference image to obtain a first edge deviation; compares the right edge of the first correction detection image with the right edge of the correction reference image to obtain a second edge deviation; compares the upper edge of the first correction detection image with the upper edge of the correction reference image to obtain a third edge deviation; and compares the lower edge of the first correction detection image with the lower edge of the correction reference image to obtain a fourth edge deviation.
In one possible implementation, the detection device may directly sum or weighted-sum the first edge deviation, the second edge deviation, the third edge deviation and the fourth edge deviation to obtain an edge deviation sum, and take the edge deviation sum as the edge deviation between the edge position of the first correction detection image and the edge position of the correction reference image.
In another possible implementation, the detection device may take the above four edge deviations themselves as the edge deviation between the edge position of the first correction detection image and the edge position of the correction reference image. In the embodiments of the present application, this is not particularly limited.
When the detection device takes the sum of the four edge deviations as the edge deviation between the edge position of the first correction detection image and the edge position of the correction reference image, the detection device determines whether the edge deviation is within the preset deviation range; when the edge deviation is within the preset deviation range, step (5) is executed; when the edge deviation is not within the preset deviation range, step (6) is executed.
When the detection device takes the four edge deviations themselves as the edge deviation between the edge position of the first correction detection image and the edge position of the correction reference image, the detection device determines whether each edge deviation is within its corresponding preset deviation range, that is, whether the first edge deviation, the second edge deviation, the third edge deviation and the fourth edge deviation are each within their respective preset deviation ranges. When every edge deviation is within its corresponding preset deviation range, step (5) is executed; when any edge deviation is not within its corresponding preset deviation range, step (6) is executed.
The edge deviation may be a positive value, a negative value, or 0, and the preset deviation range may include the positive value, the negative value, and 0. In the embodiments of the present application, this is not particularly limited.
The preset deviation range may be set and changed as needed, and is not particularly limited in the embodiment of the present application. In addition, the preset deviation range corresponding to each edge deviation may be the same or different, and is not particularly limited in the embodiment of the present application.
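The direct-sum and weighted-sum variants above can be sketched as one helper, where omitting the weights gives the direct sum; the default and example weight values are illustrative assumptions.

```python
def combined_edge_deviation(deviations, weights=None):
    """Combine the four per-side edge deviations (left, right, top,
    bottom) into a single edge deviation sum, directly or weighted."""
    if weights is None:
        weights = (1.0, 1.0, 1.0, 1.0)   # direct (unweighted) sum
    return sum(w * d for w, d in zip(weights, deviations))
```

Because individual deviations may be positive or negative, the sum can cancel; this is one reason the alternative implementation keeps the four deviations separate and checks each against its own preset deviation range.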
(5) When the edge deviation is within a preset deviation range, the detection device determines that the first edge information and the second edge information match.
When the edge deviation is the sum of the four edge deviations, the detection device determines that the first edge information and the second edge information are matched when the sum of the edge deviations is within a preset deviation range.
When the edge deviation is the four edge deviations, the detection device determines that the first edge information and the second edge information are matched when each edge deviation is within the corresponding preset deviation range.
When the first edge information and the second edge information match, the detection device executes step S402; when they do not match, step (6) is executed.
(6) When the edge deviation is not within the preset deviation range, the detection device adjusts the acquisition position of the correction detection image.
That is, when the edge deviation is not within the preset deviation range, the detection device adjusts the position at which the acquisition device acquires the correction detection image. In one possible implementation, the acquisition device may be mounted on a movable acquisition rail, and the acquisition position of the acquisition device is adjusted by changing the height, the front-rear position, and the like of the acquisition rail.
For example, when the edge deviations are the first edge deviation, the second edge deviation, the third edge deviation and the fourth edge deviation, the detection device determines the center position of the first correction detection image and the center position of the correction reference image, keeps the two center positions coincident, and compares each edge of the first correction detection image with the corresponding edge of the correction reference image. If, say, the left edge of the first correction detection image is shifted 0.5 cm to the left, the right edge is shifted 0.5 cm to the right, the upper edge is shifted 1 cm upward, and the lower edge is shifted 1 cm downward, this indicates that the distance between the acquisition device and the detected device is too short, and the acquisition device should be moved in a direction away from the detected device.
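The direction inference in this example can be sketched as follows; the sign convention (positive means the side shifted outward with the centers aligned) and the tolerance are illustrative assumptions.

```python
def suggest_camera_adjustment(deviations, tolerance=0.1):
    """Given per-side shifts (left, right, top, bottom) of the correction
    detection image relative to the correction reference image, suggest
    how to move the acquisition device.

    All sides shifted outward -> image too large -> camera too close;
    all sides shifted inward -> image too small -> camera too far.
    """
    if all(d > tolerance for d in deviations):
        return "move away from the detected device"
    if all(d < -tolerance for d in deviations):
        return "move toward the detected device"
    return "translate or re-center the acquisition device"
```

Mixed signs (one side out, the opposite side in) indicate a lateral offset rather than a distance error, hence the third branch.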
(7) After adjusting the acquisition position, the detection device sends a fifth acquisition instruction to the acquisition device, receives the second correction detection image returned by the acquisition device, and performs edge recognition on the second correction detection image to obtain third edge information of the second correction detection image. When the third edge information matches the second edge information, step S402 is executed; when the third edge information does not match the second edge information, the position at which the acquisition device acquires the correction detection image is adjusted again until fourth edge information of a third correction detection image acquired at the adjusted acquisition position matches the second edge information, and then step S402 is executed.
It should be noted that the process in step (7) is similar to that in steps (1) to (6), and is not described here again. In addition, when the image quality detection method provided by the embodiment of the present application is executed, the correction only needs to be performed once for detected devices in the same batch; when the image quality of detected devices in other batches is detected, the correction needs to be performed again.
Step S402: the detection device sends a display instruction to the detected device.
The display instruction may be a first display instruction, a second display instruction or a third display instruction. When the display instruction is the first display instruction, it carries the image data of the live-action sample image and is used for instructing the detected device to render the image data of the live-action sample image to obtain the first live-action detection image. When the display instruction is the second display instruction, it carries the image data of the gray-scale sample image and is used for instructing the detected device to render the image data of the gray-scale sample image to obtain the first gray-scale detection image. When the display instruction is the third display instruction, it carries the image data of the color-scale sample image and is used for instructing the detected device to render the image data of the color-scale sample image to obtain the first color-scale detection image.
This step is similar to step (1) in step S401, and is not described herein again.
Step S403: the detected equipment receives the display instruction, renders the image data of the sample image and displays the sample image.
The display instruction may be a first display instruction, a second display instruction or a third display instruction. When the display instruction is the first display instruction, the sample image is the live-action sample image; when the display instruction is the second display instruction, the sample image is the gray-scale sample image; when the display instruction is the third display instruction, the sample image is the color-scale sample image.
In a possible implementation manner, when the display instruction is the first display instruction, this step may be: the detected device receives the first display instruction and renders the image data of the live-action sample image, thereby displaying the live-action sample image on the display screen of the detected device. When the display instruction is the second display instruction, this step may be: the detected device receives the second display instruction and renders the image data of the gray-scale sample image, thereby displaying the gray-scale sample image on the display screen of the detected device. When the display instruction is the third display instruction, this step may be: the detected device receives the third display instruction and renders the image data of the color-scale sample image, thereby displaying the color-scale sample image on the display screen of the detected device.
When the detected device displays the sample image, the sample image may be displayed on the entire display screen, or the sample image may be displayed in a partial area of the display screen.
Step S404: the detection equipment sends an acquisition instruction to the acquisition equipment.
The acquisition instruction is used for instructing acquisition equipment to acquire a sample image displayed on detected equipment to obtain a detection image. The acquisition instruction may be a first acquisition instruction, a second acquisition instruction, or a third acquisition instruction.
In a possible implementation manner, when the acquisition instruction is the first acquisition instruction, this step may be: the detection device sends the first acquisition instruction to the acquisition device, where the first acquisition instruction is used for instructing the acquisition device to acquire the live-action sample image displayed on the detected device to obtain the second live-action detection image. When the acquisition instruction is the second acquisition instruction, this step may be: the detection device sends the second acquisition instruction to the acquisition device, where the second acquisition instruction is used for instructing the acquisition device to acquire the gray-scale sample image displayed on the detected device to obtain the second gray-scale detection image. When the acquisition instruction is the third acquisition instruction, this step may be: the detection device sends the third acquisition instruction to the acquisition device, where the third acquisition instruction is used for instructing the acquisition device to acquire the color-scale sample image displayed on the detected device to obtain the second color-scale detection image.
It should be noted that, in step S403, after the detected device receives the display instruction, a success instruction may be returned to the detecting device, where the success instruction is used to indicate that the detected device successfully receives the display instruction. The detection device can execute the step when receiving the success instruction; or, in step S402, after the detection device sends the display instruction to the detected device, the detection device sends the acquisition instruction to the acquisition device when the second preset time length is reached.
The second preset time period may be set and changed as needed, and is not specifically limited in the embodiment of the present application. The second preset time period and the first preset time period may be the same or different, and in the embodiment of the present application, this is not particularly limited.
Step S405: the acquisition equipment receives the acquisition instruction, acquires the sample image displayed on the detected equipment to obtain a second detection image, and sends the second detection image to the detection equipment.
In this step, when the acquisition instruction is the first acquisition instruction, the sample image is a live-action sample image, and the second detection image is a second live-action detection image. Correspondingly, the steps can be as follows: the collecting equipment receives the first collecting instruction, collects the live-action sample image when the live-action sample image is displayed on the detected equipment, obtains a second live-action detection image, and sends the second live-action detection image to the detecting equipment.
When the acquisition instruction is a second acquisition instruction, the sample image is a gray scale sample image, and the second detection image is a second gray scale detection image. Correspondingly, the steps can be as follows: and the acquisition equipment receives the second acquisition instruction, acquires the gray scale sample image when the detected equipment displays the gray scale sample image, obtains a second gray scale detection image, and sends the second gray scale detection image to the detection equipment.
When the acquisition instruction is the third acquisition instruction, the sample image is the color-scale sample image, and the second detection image is the second color-scale detection image. Correspondingly, this step may be: the acquisition device receives the third acquisition instruction, acquires the color-scale sample image when the detected device displays the color-scale sample image to obtain the second color-scale detection image, and sends the second color-scale detection image to the detection device.
Step S406: the detection device receives the second detection image returned by the acquisition device, and performs image processing on the second detection image to obtain a first detection image, where the first detection image is the first live-action detection image, the first gray-scale detection image or the first color-scale detection image.
When the first detection image is the first live-action detection image, the second detection image is the second live-action detection image; when the first detection image is the first gray-scale detection image, the second detection image is the second gray-scale detection image; when the first detection image is the first color-scale detection image, the second detection image is the second color-scale detection image.
The image processing includes at least one of edge recognition, edge cropping, scaling processing and noise reduction processing. In this step, the detection device may perform image processing on the second detection image through a first preset algorithm, thereby reducing the deviation between the detection image and the reference image. The process of performing edge recognition on the second detection image is similar to step (3) in step S401, and is not repeated here. The detection device may also perform noise reduction processing on the second detection image, thereby reducing noise interference from the external environment and the transmission process and reducing detection errors. In addition, when there is a deviation between the edge position of the second detection image and the edge position of the reference image, the edge of the second detection image may be cropped so that the two edge positions match. When the image size of the second detection image differs greatly from that of the reference image, the second detection image may also be scaled so that the two image sizes match.
The reference image corresponds to the second detection image. For example, when the second detection image is the second live-action detection image, the reference image is the live-action reference image; when the second detection image is the second gray-scale detection image, the reference image is the gray-scale reference image; when the second detection image is the second color-scale detection image, the reference image is the color-scale reference image.
The first preset algorithm may be set and changed as needed, and in the embodiment of the present application, the first preset algorithm is not specifically limited. For example, the first preset algorithm may be a MATLAB (matrix laboratory) algorithm.
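A minimal sketch of the scaling and noise-reduction parts of this step is shown below; the 3x3 mean filter and nearest-neighbour resampling are simple stand-ins chosen for illustration, since the embodiment only names MATLAB as one possible first preset algorithm and a real implementation would likely use library routines.

```python
import numpy as np

def preprocess(detection_image, reference_shape):
    """Denoise the second detection image, then resample it to the
    reference image's size so the two can be compared directly."""
    img = np.asarray(detection_image, dtype=np.float64)
    # noise reduction: 3x3 mean filter built from shifted sums
    padded = np.pad(img, 1, mode="edge")
    rows_n, cols_n = img.shape
    smoothed = sum(padded[i:i + rows_n, j:j + cols_n]
                   for i in range(3) for j in range(3)) / 9.0
    # scaling: nearest-neighbour resample to the reference image size
    rows = np.arange(reference_shape[0]) * rows_n // reference_shape[0]
    cols = np.arange(reference_shape[1]) * cols_n // reference_shape[1]
    return smoothed[np.ix_(rows, cols)]
```

Edge cropping, when needed, would slice the array to the matched edge positions before this resampling step.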
Step S407: when the first detection image is the first gray-scale detection image, the detection device determines a first variance difference value between the first gray-scale detection image and the gray-scale reference image based on the first gray-scale detection image and the gray-scale reference image.
The gray-scale reference image is used for comparison with the first gray-scale detection image. The gray-scale reference image can be an image pre-stored in the standard comparison library, or an image obtained by the detection device acquiring a gray-scale sample image displayed on a reference display device, where the reference display device is a display device with qualified image quality. In the embodiments of the present application, this is not particularly limited.
When the grayscale reference image is an image in the standard comparison library, the detection device can directly obtain the grayscale reference image corresponding to the grayscale sample image in the standard comparison library according to the grayscale sample image. The gray scale reference image is an image obtained by acquiring a gray scale sample image displayed on standard equipment in advance. The standard equipment and the detected equipment are the same in equipment type, but the image quality displayed by the standard equipment is higher than that displayed by the detected equipment, so that the image quality displayed on the standard equipment can be used as a reference to be compared with the image quality displayed on the detected equipment, and the quality detection result of the image displayed on the detected equipment is obtained.
The detection equipment determines the occurrence frequency of each gray level pixel in a first gray level detection image based on the first gray level detection image to obtain a gray level histogram of the first gray level detection image; and determining the variance of the gray-scale histogram of the first gray-scale detection image according to the gray-scale histogram of the first gray-scale detection image.
The detection equipment determines the occurrence frequency of each gray level pixel in the gray level reference image based on the gray level reference image to obtain a gray level histogram of the gray level reference image; and determining the variance of the histogram of the gray-scale reference image according to the histogram of the gray-scale reference image.
The detection equipment determines a difference value between the two variances according to the variance of the gray-scale histogram of the first gray-scale detection image and the variance of the histogram of the gray-scale reference image to obtain a first variance difference value.
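The histogram-variance comparison described in the three steps above can be sketched in Python with NumPy. The function names are illustrative; the patent does not specify an implementation, and 8-bit (256-level) images are assumed:

```python
import numpy as np

def histogram_variance(gray_image: np.ndarray) -> float:
    """Variance of the gray-scale histogram of an 8-bit image."""
    # Count how many pixels fall on each of the 256 gray levels
    # (the "occurrence frequency of each gray level pixel").
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
    return float(np.var(hist))

def variance_difference(detection_image: np.ndarray,
                        reference_image: np.ndarray) -> float:
    """Absolute difference between the two histogram variances
    (the "first variance difference value" of step S407)."""
    return abs(histogram_variance(detection_image)
               - histogram_variance(reference_image))
```

The same computation applies unchanged to each channel of the color-scale images in step S408.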
Step S408: when the first detection image is a first color-scale detection image, the detection apparatus determines a second variance difference value between the first color-scale detection image and the color-scale reference image based on the first color-scale detection image and the color-scale reference image.
The color-scale reference image can likewise be an image in the standard comparison library, or an image obtained by collecting a color-scale sample image displayed on a reference display device. This is not specifically limited in the embodiments of the present application. When the color-scale reference image is an image in the standard comparison library, the detection device can directly obtain, from the standard comparison library, the color-scale reference image corresponding to the color-scale sample image. The color-scale reference image is an image obtained in advance by collecting the color-scale sample image displayed on standard equipment.
The detection equipment determines the occurrence frequency of each color level in the first color-scale detection image based on the first color-scale detection image to obtain a color-scale histogram of the first color-scale detection image, and determines the variance of that histogram.

The detection equipment determines the occurrence frequency of each color level in the color-scale reference image based on the color-scale reference image to obtain a color-scale histogram of the color-scale reference image, and determines the variance of that histogram.

The detection equipment then determines the difference between the variance of the color-scale histogram of the first color-scale detection image and the variance of the color-scale histogram of the color-scale reference image, to obtain the second variance difference value.
Step S409: when the first detection image is a first live-action detection image, the detection apparatus determines a first degree of matching between the first live-action detection image and the live-action reference image.
The live-action reference image can be an image in the standard comparison library or an image obtained by collecting a live-action sample image displayed on a reference display device. This is not specifically limited in the embodiments of the present application. For example, the live-action reference image is an image in the standard comparison library. By introducing the live-action image, the embodiments of the present application add a spatial-position comparison between the first live-action detection image and the live-action reference image, so that display abnormalities and the degree of distortion can be detected; the visual quality of the live-action sample image as displayed by the detected equipment is determined from the three aspects of brightness, contrast and structure, improving the reliability of image quality detection.
In this step, the step of determining, by the detection device, the first matching degree between the first live-action detection image and the live-action reference image may be: the detection equipment determines a first mean value, a first covariance and a first variance of the first real-scene detection image based on the first real-scene detection image; determining a second mean, a second covariance, and a second variance of the live-action reference image based on the live-action reference image; and determining the first matching degree according to the first mean value, the first covariance, the first variance, the second mean value, the second covariance and the second variance.
In a possible implementation manner, the detection device may determine the first matching degree through a second preset algorithm according to the first mean, the first covariance, the first variance, the second mean, the second covariance, and the second variance. The second preset algorithm may be set and changed as needed; for example, it may be the SSIM (Structural Similarity Index) algorithm, in which case this step may be: the detection device determines the first matching degree between the first live-action detection image and the live-action reference image through the SSIM algorithm according to the first mean, the first covariance, the first variance, the second mean, the second covariance, and the second variance.
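A minimal single-window SSIM sketch from the six statistics named above is shown below. This is a simplification: the standard SSIM is computed over local sliding windows and then averaged, whereas this sketch uses global image statistics; the stabilizing constants `c1` and `c2` follow the conventional choices and are not specified by the source:

```python
import numpy as np

def ssim_index(x: np.ndarray, y: np.ndarray,
               c1: float = (0.01 * 255) ** 2,
               c2: float = (0.03 * 255) ** 2) -> float:
    """Global SSIM between two equally sized grayscale images."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()            # first mean, second mean
    var_x, var_y = x.var(), y.var()            # first variance, second variance
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()  # covariance term
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den
```

For identical images the index is exactly 1.0, and it decreases as luminance, contrast, or structure diverge, which is why it can serve as the "first matching degree".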
It should be noted that, after step 406 is executed, the detection apparatus may determine the first variance difference between the first gray-scale detection image and the gray-scale reference image, the second variance difference between the first color-scale detection image and the color-scale reference image, and the first matching degree between the first live-action detection image and the live-action reference image in any order. That is, after executing step 406, the detection device may execute step 407 first and then steps 408 and 409; or execute step 408 first and then steps 407 and 409; or execute step 409 first and then steps 407 and 408. This is not specifically limited in the embodiments of the present application.
Step 410: when the first variance difference value is within a first preset range, the second variance difference value is within a second preset range, and the first matching degree is greater than a first threshold value, the detection equipment determines that the image quality of the detected equipment is qualified.

That is, only when all three conditions hold simultaneously does the detection equipment determine that the quality of the image displayed on the detected equipment is qualified.
The first preset range, the second preset range and the first threshold value can be set and changed as required; they are not specifically limited in the embodiments of the present application. For example, suppose the first preset range is 0 to 10, the second preset range is 0 to 8, and the first threshold is 0.8. When the first variance difference is 4, the second variance difference is 6, and the first matching degree is 0.9, then the first variance difference 4 is within the first preset range 0 to 10, the second variance difference 6 is within the second preset range 0 to 8, and the first matching degree 0.9 is greater than the first threshold 0.8, so the detection device determines that the quality of the image displayed on the detected device is qualified and the detected device can leave the factory.
When it determines that the quality of the image displayed on the detected device is qualified, the detection device may generate a quality-qualified report and send it directly to designated personnel. Alternatively, when generating the quality-qualified report, the detection device may send a first notification message to the designated personnel, where the first notification message carries data related to the qualified result and is used to notify the designated personnel that the quality of the detected device is qualified.
Step 411: when the first variance difference is not within the first preset range, or the second variance difference is not within the second preset range, or the first matching degree is not greater than the first threshold value, the detection equipment determines that the image quality of the detected equipment is unqualified.
That is, when the first variance difference is not within the first preset range, or the second variance difference is not within the second preset range, or the first matching degree is not greater than the first threshold value, the detection equipment determines that the quality of the image displayed by the detected equipment is unqualified. For example, suppose the first preset range is 0 to 10, the second preset range is 0 to 8, and the first threshold is 0.8. When the first variance difference is 4, the second variance difference is 9, and the first matching degree is 0.9, then the first variance difference 4 is within the first preset range 0 to 10 and the first matching degree 0.9 is greater than the first threshold 0.8, but the second variance difference 9 is not within the second preset range 0 to 8, so the detection device determines that the quality of the image displayed on the detected device is unqualified; the device cannot leave the factory and needs maintenance until the quality of the displayed image is qualified.
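Steps 410 and 411 amount to a three-way threshold check. A minimal sketch follows; the function name is illustrative, and the default ranges and threshold simply mirror the worked example above rather than any values fixed by the source:

```python
def quality_qualified(first_var_diff: float,
                      second_var_diff: float,
                      match_degree: float,
                      first_range: tuple = (0.0, 10.0),
                      second_range: tuple = (0.0, 8.0),
                      match_threshold: float = 0.8) -> bool:
    """Qualified only if all three criteria of step 410 hold;
    any single failure triggers the step-411 unqualified result."""
    in_first = first_range[0] <= first_var_diff <= first_range[1]
    in_second = second_range[0] <= second_var_diff <= second_range[1]
    return in_first and in_second and match_degree > match_threshold
```

With the example values, `quality_qualified(4, 6, 0.9)` passes while `quality_qualified(4, 9, 0.9)` fails on the second variance difference alone.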
The detection device may also generate a quality-unqualified report when the quality of the image displayed by the detected device is unqualified, and send it directly to the designated personnel. Alternatively, when generating the quality-unqualified report, the detection device may send a second notification message to the designated personnel, where the second notification message carries data related to the unqualified result and is used to notify the designated personnel that the quality of the detected device is unqualified. Referring to fig. 5, fig. 5 is a schematic diagram of operations performed by the detection apparatus when the image quality detection method provided by the embodiments of the present application is performed.
According to the image quality detection method provided by the embodiments of the present application, the first matching degree between the first live-action detection image and the live-action reference image is determined, and the image quality of the detected equipment is detected based on the first matching degree, the first gray-scale detection image and the first color-scale detection image. When detecting the image quality of the detected equipment, the method not only detects the first gray-scale detection image and the first color-scale detection image, but also compares the first live-action detection image with the live-action reference image, so that the actual effect of the image displayed by the detected equipment accurately reflects the image quality it displays under real conditions, improving the reliability of image quality detection.
An embodiment of the present application provides an image quality detection apparatus, and referring to fig. 6, the apparatus includes:
the acquisition module 601 is configured to acquire a first live-action detection image, and acquire a first gray-scale detection image and a first color-scale detection image, where the first live-action detection image, the first gray-scale detection image and the first color-scale detection image are detection images obtained by acquiring a live-action sample image, a gray-scale sample image and a color-scale sample image displayed on the detected device, respectively;
a first determining module 602, configured to determine a first matching degree between a first live-action detection image and a live-action reference image, where the live-action reference image is an image obtained by collecting a live-action sample image displayed on a reference display device, and the reference display device is a display device with qualified image quality;
the detecting module 603 is configured to perform image quality detection on the detected device based on the first matching degree, the first grayscale detection image, and the first color level detection image.
In a possible implementation manner, the detecting module 603 is configured to determine a first variance difference between the first gray-scale detection image and the gray-scale reference image based on the first gray-scale detection image and the gray-scale reference image, and determine a second variance difference between the first color-scale detection image and the color-scale reference image based on the first color-scale detection image and the color-scale reference image, where the gray-scale reference image and the color-scale reference image are images obtained by collecting a gray-scale sample image and a color-scale sample image displayed on the reference display device, respectively; when the first variance difference value is within a first preset range, the second variance difference value is within a second preset range, and the first matching degree is greater than a first threshold value, determine that the image quality of the detected equipment is qualified; and when the first variance difference value is not within the first preset range, or the second variance difference value is not within the second preset range, or the first matching degree is not greater than the first threshold value, determine that the image quality of the detected equipment is unqualified.
In another possible implementation manner, the obtaining module 601 is further configured to collect a corrected sample image displayed on the detected device, so as to obtain a first corrected detection image; performing edge recognition on the first correction detection image to obtain first edge information of the first correction detection image; and when the first edge information is matched with the second edge information of the correction reference image, acquiring a first live-action detection image, and acquiring a first gray-scale detection image and a first color-scale detection image, wherein the correction reference image is an image obtained by collecting a correction sample image displayed on the reference display equipment.
In another possible implementation manner, the obtaining module 601 is further configured to adjust the acquisition position of the correction detection image when the first edge information does not match the second edge information; re-collect the correction sample image displayed on the detected equipment to obtain a second correction detection image; perform edge recognition on the second correction detection image to obtain third edge information of the second correction detection image; when the third edge information matches the second edge information, acquire the first live-action detection image, and acquire the first gray-scale detection image and the first color-scale detection image; and when the third edge information does not match the second edge information, continue adjusting the acquisition position of the correction detection image until the fourth edge information of a third correction detection image acquired at the adjusted acquisition position matches the second edge information, and then acquire the first live-action detection image, the first gray-scale detection image and the first color-scale detection image.
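The adjust-and-recollect loop performed by the obtaining module can be sketched as follows. All four callables are hypothetical stand-ins for the camera control, edge recognition and edge matching described above, and the attempt cap is an added safeguard not specified in the source:

```python
def acquire_aligned_image(capture, recognize_edges, edges_match,
                          adjust_position, max_attempts=10):
    """Re-capture the correction detection image, adjusting the
    acquisition position, until its edge information matches the
    reference edge information. Returns the aligned image, or None."""
    for _ in range(max_attempts):
        correction_image = capture()
        edge_info = recognize_edges(correction_image)
        if edges_match(edge_info):
            # Edges aligned: the live-action, gray-scale and color-scale
            # detection images can now be collected from this position.
            return correction_image
        adjust_position()
    return None  # alignment failed within the attempt budget
```

Only once this loop succeeds does the module proceed to collect the three detection images used in the quality check.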
In another possible implementation manner, the apparatus further includes:
a second determination module for determining an edge deviation between the edge position of the first correction detection image and the edge position of the correction reference image based on the first edge information and the second edge information; and when the edge deviation is within the preset deviation range, determining that the first edge information is matched with the second edge information.
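The edge-deviation test performed by the second determination module might look like the outline below. The point-correspondence representation of edge information and the mean-Euclidean-distance deviation measure are assumptions, since the source does not define how the edge deviation is computed:

```python
import numpy as np

def edges_match(first_edges: np.ndarray,
                reference_edges: np.ndarray,
                max_deviation: float = 5.0) -> bool:
    """Edge positions given as (N, 2) arrays of corresponding
    (row, col) points; matching means the mean point-to-point
    deviation stays within the preset deviation range."""
    deviation = np.linalg.norm(
        first_edges - reference_edges, axis=1).mean()
    return deviation <= max_deviation
```

Identical edge sets yield zero deviation and therefore match; a systematic shift larger than the preset range fails the match and triggers re-acquisition.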
In another possible implementation manner, the obtaining module 601 is further configured to collect a live-action sample image to obtain a second live-action detection image; and carrying out image processing on the second real scene detection image to obtain a first real scene detection image, wherein the image processing comprises at least one of edge identification, edge cutting, scaling processing and noise reduction processing.
In another possible implementation manner, the obtaining module 601 is further configured to send a collecting instruction to the collecting device, where the collecting instruction is used to instruct the collecting device to collect a live-action sample image displayed on the detected device, so as to obtain a second live-action detection image; and receiving a second real scene detection image returned by the acquisition equipment.
In another possible implementation manner, the apparatus further includes:
and the sending module is used for sending a display instruction to the detected equipment, wherein the display instruction carries the image data of the real-scene sample image and is used for indicating the detected equipment to render the image data of the real-scene sample image to obtain a first real-scene detection image.
The image quality detection apparatus provided by the embodiments of the present application determines the first matching degree between the first live-action detection image and the live-action reference image, and performs image quality detection on the detected equipment based on the first matching degree, the first gray-scale detection image and the first color-scale detection image. When detecting the image quality of the detected equipment, the apparatus not only detects the first gray-scale detection image and the first color-scale detection image, but also compares the first live-action detection image with the live-action reference image, so that the actual effect of the image displayed by the detected equipment accurately reflects the image quality it displays under real conditions, improving the reliability of image quality detection.
It should be noted that: the image quality detection apparatus provided in the above embodiment is only illustrated by dividing the above functional modules when detecting the image quality, and in practical applications, the above function allocation may be performed by different functional modules according to needs, that is, the internal structure of the detection device is divided into different functional modules to perform all or part of the above described functions. In addition, the image quality detection apparatus and the image quality detection method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments, and are not described herein again.
Fig. 7 is a block diagram of a detection apparatus 700 according to an embodiment of the present disclosure. For example, the detection apparatus 700 may be used to perform the image quality detection methods provided in the various embodiments described above. Referring to fig. 7, the detection apparatus 700 includes: a processor 701 and a memory 702.
The processor 701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); a coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to implement the image quality detection methods provided by method embodiments herein.
In some embodiments, the detection device 700 may further include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, display 705, camera 706, audio circuitry 707, positioning components 708, and power source 709.
The peripheral interface 703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 701 and the memory 702. In some embodiments, processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 704 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 704 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 704 may communicate with other detection devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, the display screen 705 also has the ability to capture touch signals on or over the surface of the display screen 705. The touch signal may be input to the processor 701 as a control signal for processing. At this point, the display 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 705 may be one, providing the front panel of the detection device 700; in other embodiments, the display 705 may be at least two, respectively disposed on different surfaces of the detection apparatus 700 or in a folded design; in still other embodiments, the display 705 may be a flexible display disposed on a curved surface or on a folded surface of the detection device 700. Even more, the display 705 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The Display 705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 706 is used to capture images or video. Optionally, camera assembly 706 includes a front camera and a rear camera. Generally, a front camera is disposed on a front panel of the inspection apparatus, and a rear camera is disposed on a rear surface of the inspection apparatus. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing or inputting the electric signals to the radio frequency circuit 704 to realize voice communication. For stereo sound acquisition or noise reduction purposes, the microphones may be multiple and disposed at different locations of the detection apparatus 700. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 707 may also include a headphone jack.
The positioning component 708 is used to locate the current geographic location of the detection device 700 to implement navigation or LBS (Location Based Service). The positioning component 708 may be a positioning component based on the GPS of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 709 is used to supply power to various components in the detection device 700. The power source 709 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When the power source 709 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the detection device 700 also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the detection apparatus 700. For example, the acceleration sensor 711 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 701 may control the touch screen 705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 711. The acceleration sensor 711 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 712 may detect a body direction and a rotation angle of the detection apparatus 700, and the gyro sensor 712 may cooperate with the acceleration sensor 711 to acquire a 3D motion of the user with respect to the detection apparatus 700. From the data collected by the gyro sensor 712, the processor 701 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 713 may be disposed on a side bezel of detection device 700 and/or an underlying layer of touch display 705. When the pressure sensor 713 is arranged on the side frame of the detection device 700, the holding signal of the user to the detection device 700 can be detected, and the processor 701 performs left-right hand identification or shortcut operation according to the holding signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at a lower layer of the touch display 705, the processor 701 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 705. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 714 is used to collect a user's fingerprint, and the processor 701 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the user's identity according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 714 may be disposed on the front, back, or side of the detection device 700. When a physical key or vendor logo is provided on the detection device 700, the fingerprint sensor 714 may be integrated with the physical key or vendor logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the touch display 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 705 is increased; when the ambient light intensity is low, the display brightness of the touch display 705 is turned down. In another embodiment, processor 701 may also dynamically adjust the shooting parameters of camera assembly 706 based on the ambient light intensity collected by optical sensor 715.
A proximity sensor 716, also known as a distance sensor, is typically provided on the front panel of the detection device 700. The proximity sensor 716 is used to capture the distance between the user and the front of the detection device 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front surface of the detection device 700 is gradually decreasing, the processor 701 controls the touch display 705 to switch from the bright-screen state to the off-screen state; when the proximity sensor 716 detects that the distance between the user and the front surface of the detection device 700 is gradually increasing, the processor 701 controls the touch display 705 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in FIG. 7 does not constitute a limitation of the detection apparatus 700, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
An embodiment of the present application further provides a computer-readable storage medium applied to a terminal. The computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the operations performed by the detection device in the image quality detection method of the foregoing embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disc.
The above description is only for facilitating the understanding of the technical solutions of the present application by those skilled in the art, and is not intended to limit the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. An image quality detection method, characterized in that the method comprises:
acquiring a first live-action detection image, and acquiring a first gray-scale detection image and a first color-scale detection image, wherein the first live-action detection image, the first gray-scale detection image and the first color-scale detection image are respectively detection images obtained by acquiring a live-action sample image, a gray-scale sample image and a color-scale sample image displayed on detected equipment;
determining a first matching degree between the first live-action detection image and a live-action reference image, wherein the live-action reference image is an image obtained by collecting a live-action sample image displayed on a reference display device, and the reference display device is a display device with qualified image quality;
and detecting the image quality of the detected equipment based on the first matching degree, the first gray scale detection image and the first color scale detection image.
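Claim 1 does not fix a particular "matching degree" metric; normalized cross-correlation is one common choice for comparing a detection image against its reference. A minimal sketch under that assumption, with images flattened to equal-length lists of grayscale pixel values:

```python
import math

def matching_degree(img_a, img_b):
    """Normalized cross-correlation of two equal-size grayscale images
    (flat pixel lists); 1.0 is a perfect match, -1.0 a perfect inverse."""
    mean_a = sum(img_a) / len(img_a)
    mean_b = sum(img_b) / len(img_b)
    num = sum((a - mean_a) * (b - mean_b) for a, b in zip(img_a, img_b))
    den = math.sqrt(sum((a - mean_a) ** 2 for a in img_a) *
                    sum((b - mean_b) ** 2 for b in img_b))
    return num / den if den else 0.0
```

In practice the detection image would first be aligned and resized to the reference image's resolution, as the dependent claims describe.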
2. The method of claim 1, wherein the detecting the image quality of the detected equipment based on the first matching degree, the first gray-scale detection image, and the first color-scale detection image comprises:
determining a first variance difference value between the first gray-scale detection image and a gray-scale reference image based on those two images, and determining a second variance difference value between the first color-scale detection image and a color-scale reference image based on those two images, wherein the gray-scale reference image and the color-scale reference image are images obtained by respectively collecting the gray-scale sample image and the color-scale sample image displayed on the reference display device;
when the first variance difference value is within a first preset range, the second variance difference value is within a second preset range, and the first matching degree is greater than a first threshold value, determining that the image quality of the detected equipment is qualified;
and when the first variance difference is not in the first preset range, or the second variance difference is not in the second preset range, or the first matching degree is not greater than the first threshold, determining that the image quality of the detected equipment is unqualified.
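The pass/fail decision of claim 2 reduces to comparing the two variance differences and the matching degree against thresholds. The sketch below illustrates this with a population-variance helper; the concrete tolerance and threshold values are hypothetical, since the claim leaves them open.

```python
def variance(img):
    """Population variance of a flat list of pixel values."""
    mean = sum(img) / len(img)
    return sum((p - mean) ** 2 for p in img) / len(img)

def quality_ok(gray_det, gray_ref, color_det, color_ref,
               match_degree, var_tolerance=50.0, match_threshold=0.9):
    """Qualified iff both variance differences fall within tolerance
    AND the live-action matching degree exceeds the threshold."""
    d1 = abs(variance(gray_det) - variance(gray_ref))
    d2 = abs(variance(color_det) - variance(color_ref))
    return (d1 <= var_tolerance and d2 <= var_tolerance
            and match_degree > match_threshold)
```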
3. The method of claim 1, wherein, prior to the acquiring the first live-action detection image and acquiring the first gray-scale detection image and the first color-scale detection image, the method further comprises:
collecting a correction sample image displayed on the detected equipment to obtain a first correction detection image;
performing edge recognition on the first correction detection image to obtain first edge information of the first correction detection image;
and when the first edge information is matched with the second edge information of the correction reference image, executing the steps of acquiring a first live-action detection image, and acquiring a first gray-scale detection image and a first color-scale detection image, wherein the correction reference image is an image obtained by acquiring the correction sample image displayed on the reference display device.
4. The method of claim 3, further comprising:
when the first edge information is not matched with the second edge information, adjusting the acquisition position of the correction detection image;
re-collecting the correction sample image displayed on the detected equipment to obtain a second correction detection image;
performing edge recognition on the second correction detection image to obtain third edge information of the second correction detection image;
when the third edge information is matched with the second edge information, executing the steps of obtaining a first live-action detection image, and obtaining a first gray-scale detection image and a first color-scale detection image;
and when the third edge information does not match the second edge information, continuing to adjust the acquisition position of the correction detection image until fourth edge information of a third correction detection image acquired at the adjusted acquisition position matches the second edge information, and executing the steps of acquiring the first live-action detection image, and acquiring the first gray-scale detection image and the first color-scale detection image.
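Claims 3 and 4 describe a capture-check-adjust loop that repeats until the correction image's edges line up with the reference. Sketched abstractly below, with hypothetical callables standing in for the camera, the edge-matching check, and the position adjustment (none of these names come from the patent):

```python
def align_and_capture(capture, edges_match_ref, adjust_position, max_tries=10):
    """Capture a correction detection image, test its edge information
    against the reference, and nudge the acquisition position until the
    edges match (or give up after max_tries attempts)."""
    for _ in range(max_tries):
        detection_image = capture()
        if edges_match_ref(detection_image):
            return detection_image   # aligned: proceed to live-action capture
        adjust_position()
    return None                      # never aligned within max_tries
```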
5. The method of claim 3, further comprising:
determining an edge deviation between the edge position of the first correction detection image and the edge position of the correction reference image based on the first edge information and the second edge information;
and when the edge deviation is within a preset deviation range, determining that the first edge information is matched with the second edge information.
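Claim 5's edge matching can be read as a bounded per-point deviation test. An illustrative sketch, assuming edge information is a list of (x, y) edge positions in corresponding order; the deviation bound is a hypothetical value:

```python
def edges_match(edge_info_a, edge_info_b, max_deviation=3):
    """True when every corresponding pair of edge positions deviates by
    at most max_deviation pixels in both axes (illustrative criterion)."""
    if len(edge_info_a) != len(edge_info_b):
        return False
    return all(abs(xa - xb) <= max_deviation and abs(ya - yb) <= max_deviation
               for (xa, ya), (xb, yb) in zip(edge_info_a, edge_info_b))
```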
6. The method of claim 1, wherein the acquiring a first live-action detection image comprises:
acquiring the live-action sample image to obtain a second live-action detection image;
and performing image processing on the second live-action detection image to obtain the first live-action detection image, wherein the image processing comprises at least one of edge identification, edge cutting, scaling processing, and noise reduction processing.
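The edge-cutting and scaling steps of claim 6 amount to cropping the captured frame to the screen region and resampling it to the reference resolution. A minimal pure-Python sketch using nearest-neighbor scaling (the patent does not prescribe an interpolation method):

```python
def crop(img, top, left, height, width):
    """Cut a height x width window out of a 2-D pixel grid."""
    return [row[left:left + width] for row in img[top:top + height]]

def scale_nearest(img, out_h, out_w):
    """Nearest-neighbor resize of a 2-D pixel grid to out_h x out_w."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)] for r in range(out_h)]
```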
7. The method of claim 6, wherein the acquiring the live-action sample image to obtain the second live-action detection image comprises:
sending a collection instruction to a collection device, wherein the collection instruction is used to instruct the collection device to collect the live-action sample image displayed on the detected equipment to obtain the second live-action detection image;
and receiving the second live-action detection image returned by the collection device.
8. The method according to any of claims 1-7, wherein, prior to the acquiring the first live-action detection image, the method further comprises:
and sending a display instruction to the detected equipment, wherein the display instruction carries the image data of the live-action sample image and is used to instruct the detected equipment to render the image data of the live-action sample image so that the first live-action detection image can be acquired.
9. A detection device, characterized in that the detection device comprises:
a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the instruction, the program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the operations performed in the image quality detection method of any of claims 1-8.
10. A computer-readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the operations performed in the image quality detection method according to any one of claims 1 to 8.
CN201910385809.3A 2019-05-09 2019-05-09 Image quality detection method, detection apparatus, and storage medium Active CN111915550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910385809.3A CN111915550B (en) 2019-05-09 2019-05-09 Image quality detection method, detection apparatus, and storage medium


Publications (2)

Publication Number Publication Date
CN111915550A true CN111915550A (en) 2020-11-10
CN111915550B CN111915550B (en) 2024-03-29

Family

ID=73242833

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910385809.3A Active CN111915550B (en) 2019-05-09 2019-05-09 Image quality detection method, detection apparatus, and storage medium

Country Status (1)

Country Link
CN (1) CN111915550B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201104241A (en) * 2009-07-17 2011-02-01 Usun Technology Co Ltd Automatic spotting method of white-light interferometer
CN102937816A (en) * 2012-11-22 2013-02-20 四川华雁信息产业股份有限公司 Method and device for calibrating preset position deviation of camera
JP2013242491A (en) * 2012-05-22 2013-12-05 Sharp Corp Display device, gray-scale value correction method, television receiver, program, and recording medium
CN104216147A (en) * 2014-09-17 2014-12-17 中华人民共和国四川出入境检验检疫局 Image quality assessment based LCD (Liquid Crystal Display) display screen motion blur detection method
WO2017054442A1 (en) * 2015-09-30 2017-04-06 腾讯科技(深圳)有限公司 Image information recognition processing method and device, and computer storage medium
US20170186147A1 (en) * 2015-12-23 2017-06-29 Vmware, Inc. Quantitative visual perception quality measurement for virtual desktops
CN107590802A (en) * 2017-09-11 2018-01-16 康佳集团股份有限公司 A kind of television set display consistency detection method, storage medium and detection device
CN107845087A (en) * 2017-10-09 2018-03-27 深圳市华星光电半导体显示技术有限公司 The detection method and system of the uneven defect of liquid crystal panel lightness
CA3048094A1 (en) * 2016-12-23 2018-06-28 Wipotec Gmbh Testing and/or calibrating of a camera, in particular a digital camera, by means of an optical test standard
WO2018192662A1 (en) * 2017-04-20 2018-10-25 Hp Indigo B.V. Defect classification in an image or printed output
CN108710478A (en) * 2018-03-27 2018-10-26 广东欧珀移动通信有限公司 Control method, device, storage medium and the intelligent terminal of display screen
CN109471276A (en) * 2018-11-07 2019-03-15 凌云光技术集团有限责任公司 A kind of liquid crystal display colour cast defect inspection method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU HUANYU: "Research on a Micro-Display Subsystem for Head-Mounted Displays", China Master's Theses Electronic Journal, Information Science and Technology, no. 1, pages 48-53 *
QIU JUNENG; LI HUI; YAN LELE; LIANG PING: "LCD Motion Blur Detection Method Based on Image Quality Assessment", Chinese Journal of Liquid Crystals and Displays, no. 03, 15 June 2015 (2015-06-15), pages 163-169 *

Also Published As

Publication number Publication date
CN111915550B (en) 2024-03-29

Similar Documents

Publication Publication Date Title
CN110163833B (en) Method and device for determining opening and closing state of disconnecting link
CN112270718B (en) Camera calibration method, device, system and storage medium
CN109302632B (en) Method, device, terminal and storage medium for acquiring live video picture
CN109886208B (en) Object detection method and device, computer equipment and storage medium
CN110442521B (en) Control unit detection method and device
CN111028144A (en) Video face changing method and device and storage medium
CN111565309B (en) Display device and distortion parameter determination method, device and system thereof, and storage medium
CN110647881A (en) Method, device, equipment and storage medium for determining card type corresponding to image
CN112396076A (en) License plate image generation method and device and computer storage medium
CN109754439B (en) Calibration method, calibration device, electronic equipment and medium
CN116871982A (en) Device and method for detecting spindle of numerical control machine tool and terminal equipment
CN111127541B (en) Method and device for determining vehicle size and storage medium
CN111586279A (en) Method, device and equipment for determining shooting state and storage medium
CN112184802A (en) Calibration frame adjusting method and device and storage medium
CN111669611B (en) Image processing method, device, terminal and storage medium
CN112243083B (en) Snapshot method and device and computer storage medium
CN111915550B (en) Image quality detection method, detection apparatus, and storage medium
CN114241055A (en) Improved fisheye lens internal reference calibration method, system, terminal and storage medium
CN113824902A (en) Method, device, system, equipment and medium for determining time delay of infrared camera system
CN110443841B (en) Method, device and system for measuring ground depth
CN112399080A (en) Video processing method, device, terminal and computer readable storage medium
CN111723615A (en) Method and device for carrying out detection object matching judgment on detection object image
CN110889391A (en) Method and device for processing face image, computing equipment and storage medium
CN110660031A (en) Image sharpening method and device and storage medium
CN112308104A (en) Abnormity identification method and device and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant