CN113850115A - Method and system for automatically detecting chromatic aberration, distortion and uniformity of camera - Google Patents

Method and system for automatically detecting chromatic aberration, distortion and uniformity of camera

Info

Publication number
CN113850115A
Authority
CN
China
Prior art keywords
sample
information
captured
color
identification
Prior art date
Legal status
Pending
Application number
CN202110625412.4A
Other languages
Chinese (zh)
Inventor
Huo Feilong (霍飞龙)
Hang Yun (杭云)
Guo Ning (郭宁)
Shi Weijia (施唯佳)
Current Assignee
Tianyi Digital Life Technology Co Ltd
Original Assignee
Tianyi Smart Family Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Tianyi Smart Family Technology Co Ltd filed Critical Tianyi Smart Family Technology Co Ltd
Priority to CN202110625412.4A priority Critical patent/CN113850115A/en
Publication of CN113850115A publication Critical patent/CN113850115A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a system for automatically detecting the performance of a camera. The method comprises the following steps: capturing a visual material sample by an object to be detected, wherein the visual material sample is automatically generated according to a detection item; analyzing the captured sample to obtain actual sample information about the detection item; obtaining a unique sample identification associated with the captured sample, wherein the sample identification indicates reference sample information about the detection item; and performing matching analysis on the actual sample information and the reference sample information to detect the performance of the object to be detected. In addition, the method can also be used to verify a machine-vision-based AI model.

Description

Method and system for automatically detecting chromatic aberration, distortion and uniformity of camera
Technical Field
The present invention relates to the field of image recognition and machine vision, and more particularly to a method and system for automatically detecting camera chromatic aberration, distortion, and uniformity.
Background
In today's era of rapid technological development, cameras play an extremely important role. In security, cameras deter crime and serve as a tool for remote monitoring in place of on-site personnel. In artificial intelligence, machine vision has become one of the basic pillars: face recognition, object recognition, AI medical applications, and the like all rely on images captured by a camera.
The importance of cameras is self-evident, but camera types are diverse and their performance is uneven, so camera performance detection is essential. Currently, most camera performance detection is performed by photographing various test charts, color cards, and the like as sample materials. Detection precision is naturally limited by the manufacture of these charts and cards and by the time consumed by manual participation. For example, color reproduction testing cannot cover all colors: computers can display more than 16 million colors, the time needed to manufacture 16 million color cards is immeasurable, and manually swapping color cards would take even longer. This approach is therefore clumsy, slow, and low in detection accuracy, and it cannot scale to tens of thousands or millions of detection items.
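The "more than 16 million" figure follows from simple arithmetic: with 8 bits per RGB channel, the number of distinct colors is 256 cubed. A quick check (a sketch, not part of the patent text):

```python
# Number of distinct 24-bit RGB colors a camera would need to cover
# for exhaustive color-reproduction testing: 8 bits per channel.
levels_per_channel = 256
total_colors = levels_per_channel ** 3
print(total_colors)  # 16777216, i.e. more than 16 million
```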
Accordingly, in order to greatly improve the accuracy and efficiency of camera performance detection, it is desirable to provide an improved method and system for camera performance detection.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
The invention provides a method and a system for automatically detecting the performance of a camera by automatically generating a known sample with a unique identifier, recognizing the identifier in the sample captured by the camera, and comparing the reference information indicated by the identifier against the captured sample content.
According to an aspect of the present invention, there is provided an automated detection method, the method comprising:
capturing a visual material sample by an object to be detected, wherein the visual material sample is automatically generated according to a detection item;
analyzing the captured sample to obtain actual sample information about the test item;
obtaining a unique sample identification associated with the captured sample, wherein the sample identification is indicative of reference sample information regarding the test item; and
performing matching analysis on the actual sample information and the reference sample information to detect the performance of the object to be detected.
According to an embodiment of the invention, the detection item includes one or more of chromatic aberration, distortion, uniformity, or gray scale, and the sample information includes one or more of color information, color profile information, distribution state information, or gray scale information.
According to a further embodiment of the present invention, the sample identification is added to the sample in the form of image text, and the obtaining the sample identification associated with the captured sample further comprises: the sample identification is identified from the captured sample by image text recognition.
According to a further embodiment of the invention, the object to be detected is an AI model based on machine vision, wherein the captured sample is recognized by the AI model to obtain a recognition result about the detection item, and the recognition result is subjected to matching analysis with the reference sample information indicated by the sample identification to detect the recognition capability of the AI model.
According to a further embodiment of the present invention, analyzing the captured sample for actual sample information further comprises:
converting the captured samples into a multi-dimensional array matrix;
reading the RGB value of each pixel point in the multidimensional array matrix; and
determining one or more of color information, color profile information, or distribution state information of the captured sample based on the read RGB values.
According to a further embodiment of the invention, the determining one or more of color information, color profile information, or distribution state information of the captured sample further comprises:
determining color profile information of the captured sample by picture processing; or
Color profile information or color distribution state information of the captured sample is determined by mathematical morphology composed of the read RGB values.
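As an illustration of determining color profile information directly from the read RGB values, without a full picture-processing framework, the following sketch locates the colored region's bounding box in a NumPy array. The function name and the background/tolerance parameters are assumptions for illustration, not part of the patent:

```python
import numpy as np

def color_region_profile(img, background=(255, 255, 255), tol=30):
    """Return the bounding box (top, left, bottom, right) of the colored
    region in an H x W x 3 RGB array: a minimal stand-in for the contour
    extraction the patent attributes to picture processing (e.g. an
    OpenCV pipeline) or to mathematical morphology over RGB values."""
    diff = np.abs(img.astype(int) - np.array(background)).sum(axis=2)
    ys, xs = np.nonzero(diff > tol)  # pixels that differ from background
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())

# A 6x8 white canvas with a red 2x3 rectangle at rows 2-3, cols 3-5.
canvas = np.full((6, 8, 3), 255, dtype=np.uint8)
canvas[2:4, 3:6] = (255, 0, 0)
print(color_region_profile(canvas))  # (2, 3, 3, 5)
```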
According to another aspect of the present invention, there is provided an automated detection system, the system comprising:
a sample generation module configured to automatically generate a visual material sample from a test item;
a sample identification module configured to set a unique sample identification for each sample, wherein the sample identification indicates reference sample information on the test item;
a storage and playback module configured to store and play back the visual material sample;
a sample processing module configured to:
analyzing the visual material sample captured by the object to be detected and played by the storage and play module to obtain actual sample information about the test item; and
obtaining a unique sample identification associated with the captured sample to obtain the reference sample information; and
a matching analysis module configured to perform matching analysis on the actual sample information and the reference sample information to detect performance of the object to be detected.
According to an embodiment of the invention, the detection item includes one or more of chromatic aberration, distortion, uniformity, or gray scale, and the sample information includes one or more of color information, color profile information, distribution state information, or gray scale information.
According to a further embodiment of the present invention, the sample identification is added to the sample in the form of image text, and the obtaining the sample identification associated with the captured sample further comprises: the sample identification is identified from the captured sample by image text recognition.
According to a further embodiment of the present invention, analyzing the captured sample for actual sample information further comprises:
converting the captured samples into a multi-dimensional array matrix;
reading the RGB value of each pixel point in the multidimensional array matrix; and
determining one or more of color information, color profile information, or distribution state information of the captured sample based on the read RGB values.
Compared with prior art schemes, the method and system for automatically detecting camera performance have the following advantages:
(1) the testing range is wide: there is no limit on the shape, size, or type of camera, such as mobile phone cameras, watch cameras, household cameras, security cameras, and network cameras;
(2) testing is convenient: no extensive physical or hardware environment support is needed, such as manufacturing and swapping color charts during testing, and the test can run automatically;
(3) testing speed is greatly improved: the picture playback speed, i.e., the dwell time of each test picture, can be adjusted to the camera's performance while still guaranteeing a complete capture, so an ordinary camera can be tested at a rate of ten or even dozens of test pictures per second;
(4) color detection uses a smaller, more accurate test unit: there are more than 16 million RGB color combinations, and in theory a camera should be tested against all of them, but current general test schemes cannot achieve this, since chart manufacture and manual participation limit full coverage; this method and system provide a feasible full-color test scheme;
(5) the testing scheme is flexible: when making a sample picture or video, only the sample content and its identification information need to change according to the camera test items; and
(6) testing cost is reduced: personnel time and the cost of preparing environments such as color cards are greatly reduced.
These and other features and advantages will become apparent upon reading the following detailed description and upon reference to the accompanying drawings. It is to be understood that both the foregoing general description and the following detailed description are explanatory only and are not restrictive of aspects as claimed.
Drawings
So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only some typical aspects of this invention and are therefore not to be considered limiting of its scope, for the description may admit to other equally effective aspects.
FIG. 1 illustrates an exemplary architecture diagram of a system for automatically detecting camera performance according to one embodiment of the invention.
Fig. 2 illustrates a scenario for automatically detecting camera performance with red (one of the three primary colors) and distortion as camera detection items according to an embodiment of the present invention.
Figure 3a shows a diagram of camera distortion according to one embodiment of the present invention.
Fig. 3b shows a diagram of an AI model test according to an embodiment of the invention.
Fig. 4 shows a flow diagram of a method for automatically detecting camera performance according to one embodiment of the invention.
Detailed Description
The present invention will be described in detail below with reference to the attached drawings, and the features of the present invention will be further apparent from the following detailed description.
Fig. 1 is an exemplary architecture diagram of a system 100 for automatically detecting camera performance according to one embodiment of the invention. As shown in fig. 1, the system 100 of the present invention comprises: a sample generation module 101, a sample identification module 102, a storage and playback module 103, a sample processing module 104, and a matching analysis module 107, wherein the sample processing module 104 includes a sample analysis module 105 and an identification acquisition module 106.
The sample generation module 101 may automatically generate a picture or video sample according to the detection item. In one example, where the detection item is chromatic aberration, the sample generation module 101 may automatically generate a color picture, i.e., a multidimensional array matrix (a three-dimensional matrix in the color case), and fill all elements in RGB format. Further, the sample generation module 101 may generate a pure-color picture in which all element values in the matrix are identical, thereby guaranteeing a pure color and reducing picture data processing. In another example, where the detection item is distortion, the sample generation module 101 may generate a matrix in which the filled pixels compose a "rectangle" whose 4 edges are straight lines. When the detection item combines chromatic aberration and distortion, it suffices to set the rectangular form when generating the pure-color picture, so that distortion detection is completed together with chromatic aberration detection.
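A minimal sketch of the pure-color sample generation described above, using NumPy as an assumed implementation substrate (the patent does not prescribe a library): fill an H x W x 3 matrix so that every element carries the same RGB triplet.

```python
import numpy as np

def make_solid_sample(rgb, height=480, width=640):
    """Generate a pure-color picture sample as an H x W x 3 array with
    every pixel filled with the same RGB triplet, as the sample
    generation module does for chromatic aberration detection."""
    sample = np.empty((height, width, 3), dtype=np.uint8)
    sample[..., :] = rgb  # broadcast the triplet to every pixel
    return sample

red = make_solid_sample((255, 0, 0))
# Pure color: the whole matrix contains exactly one distinct RGB value.
assert len(np.unique(red.reshape(-1, 3), axis=0)) == 1
```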
The sample identification module 102 may set a sample identification for each sample generated by the sample generation module 101, wherein the sample identification indicates reference sample information associated with the detection item. For example, when the detection item is chromatic aberration, the sample identification indicates color information; when the detection item is distortion, it indicates color profile information. The sample identification must be visibly recognizable (e.g., by image text recognition) and must not repeat. For example, the sample identification may be output in the sample in the form of text or an image. Alternatively, the sample identification may take other forms, for example a unique identification with a timestamp. In one example of combined chromatic aberration and distortion detection, the sample identification is the text "(255,0,0 rectangle)", indicating that the reference sample is a red (255,0,0) rectangle.
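The identifier format below follows the "(255,0,0 rectangle)" example, with a timestamp suffix for uniqueness; the exact format and the helper names are assumptions for illustration, not prescribed by the patent:

```python
import re
from datetime import datetime, timezone

def make_sample_id(rgb, shape):
    """Compose a unique, text-recognizable sample identifier in the
    '(255,0,0 rectangle)' style, plus a timestamp for uniqueness."""
    ts = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S%f")
    return f"({rgb[0]},{rgb[1]},{rgb[2]} {shape}) #{ts}"

def parse_sample_id(text):
    """Recover the reference RGB triplet and shape from identifier text
    recognized in a captured picture (e.g. via image text recognition)."""
    m = re.search(r"\((\d+),(\d+),(\d+)\s+(\w+)\)", text)
    if m is None:
        return None
    r, g, b, shape = m.groups()
    return (int(r), int(g), int(b)), shape

rgb, shape = parse_sample_id(make_sample_id((255, 0, 0), "rectangle"))
print(rgb, shape)  # (255, 0, 0) rectangle
```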
The storage and playback module 103 may store the generated picture or video samples after their identifications are set, and then play the stored samples in turn, showing each sample for N seconds to ensure that the camera can capture it completely. Preferably, the storage and playback module 103 can adjust the detection rate according to the camera frame rate for fast detection. For example, where the detection item is chromatic aberration, the storage and playback module 103 may sequentially play the more than 16 million automatically generated pure-color pictures at a rate matched to the camera frame rate to perform color reproduction detection quickly, which greatly reduces time and labor cost compared with manually setting up and swapping color charts.
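One way the playback dwell time might be tied to the camera frame rate (the patent only states that the rate is adjustable to the camera; this particular policy, and the parameter values, are assumptions):

```python
def dwell_time_seconds(camera_fps, frames_needed=3, safety=2.0):
    """How long each sample picture should stay on screen so the camera
    captures it completely: time for a few full frames, scaled by a
    safety factor."""
    return frames_needed / camera_fps * safety

# A 30 fps camera needs each test picture shown for about 0.2 s, i.e.
# roughly 5 pictures per second; reducing frames_needed and safety for
# a faster camera approaches the tens-of-pictures-per-second rate the
# patent describes.
print(round(dwell_time_seconds(30), 2))  # 0.2
```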
The device to be tested can take and store the played pictures or videos when the storage and playing module 103 plays. The device to be detected may be a camera or an AI model based on machine vision. The sample analysis module 105 may analyze the captured picture or video sample for actual sample information, where the actual sample information may include color information, color profile information, or distribution status information of the captured picture or video. In one example, the sample analysis module 105 may convert the captured picture or video into a multidimensional array matrix, read RGB information of each pixel point in the matrix, obtain color profile information through picture processing (e.g., OpenCV framework), or obtain color profile, distribution status information, etc. through a mathematical morphology composed of RGB values of each pixel point in the matrix.
The identification acquisition module 106 may acquire a sample identification associated with the captured picture or video sample to obtain reference sample information (e.g., RGB color information, color profile information, etc.) indicated by the sample identification. For example, in a case where the sample identifier is a text (255,0,0 rectangle) and the sample identifier is added to the picture sample, the identifier obtaining module 106 may identify the text identifier in the currently captured picture through image text recognition to obtain that the original RGB color of the picture is red (255,0,0) and the color outline is a rectangle.
The matching analysis module 107 can perform matching analysis on the actual sample information obtained from the sample analysis module 105 and the reference sample information obtained from the identification acquisition module 106 to detect the performance of the camera. In one example of detecting camera chromatic aberration, the picture obtained from the sample analysis module 105 is converted into RGB matrix information (a matrix of per-pixel RGB triplets) and compared against the RGB color information (e.g., red (255,0,0)) obtained from the identification acquisition module 106 to calculate the difference in RGB values between corresponding pixels; if the difference is greater than a preset threshold, the camera is determined to have a chromatic aberration problem.
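A sketch of the per-pixel matching analysis, assuming a NumPy representation and an example threshold; the patent specifies only that a difference above a preset threshold indicates chromatic aberration, so the threshold value here is illustrative:

```python
import numpy as np

def has_color_difference(captured, reference_rgb, threshold=10):
    """Compare each captured pixel's RGB value against the reference
    color recovered from the sample identifier; flag a chromatic
    aberration problem when any per-channel deviation exceeds the
    preset threshold (threshold value is an assumed example)."""
    diff = np.abs(captured.astype(int) - np.array(reference_rgb)).max(axis=2)
    return bool((diff > threshold).any())

frame = np.full((4, 4, 3), (255, 0, 0), dtype=np.uint8)
assert not has_color_difference(frame, (255, 0, 0))   # perfect capture
frame[1, 2] = (240, 10, 1)  # the deviant pixel from the patent's example
assert has_color_difference(frame, (255, 0, 0))       # 15 > threshold 10
```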
Those skilled in the art will appreciate that the system of the present invention and its various modules may be implemented in either hardware or software, and that the modules may be combined or combined in any suitable manner.
Fig. 2 illustrates a scenario of automatic detection of camera performance with red (one of the three primary colors) and distortion as camera detection items according to an embodiment of the present invention. The procedure for the performance test is as follows:
step S1: the sample generation module 101 may generate a picture matrix from the detection items (red and distortion), and fill each element value in the matrix to (255,0,0), whereby the matrix may represent red of one of the three primary colors, and further, the pattern of color composition is a rectangle, as shown in picture a in fig. 2.
Step S2: the sample identification module 102 may add a sample identification (255,0,0 rectangle) to picture a that is to be visibly recognizable and not repeated. Of course, the sample mark may be set in other forms as long as it can represent the red RGB color and the "rectangle" information. It should be noted that when the logo is added to the picture, the color of the logo is not close to the color of the picture, so that the color is prevented from being too close to cause recognition errors.
Step S3: the picture a can be stored in the storing and playing module 103 after adding the identifier, wherein the stored name is unique, and the picture storage format is uniform.
Step S4: the storage and playing module 103 can play the stored pictures in sequence and display the pictures for N seconds, so as to ensure that the camera can completely shoot the pictures. As shown in fig. 2, picture a shows N seconds.
Step S5: the camera may take picture a played by the storage and play module 103 and store it as picture a-a. As shown in fig. 3a, fig. 3a shows the distortion effect of picture a-a taken by the camera.
Step S6: the logo acquisition module 106 in the sample processing module 104 may use image text recognition to recognize the logo (255,0,0 rectangle) in picture a-a to get the original RGB of the currently taken picture to be red (255,0,0) and the original shape to be a rectangle.
Step S7: the sample analysis module 105 in the sample processing module 104 can convert the picture A-A into a multi-dimensional array matrix and read the RGB information of each pixel point in the matrix (in FIG. 2, the RGB information is
Figure BDA0003101973060000071
) The sample analysis module 105 may then obtain the color profile in the picture through image processing (e.g., OpenCV), or obtain the color profile through the values and distribution states of the pixel points in the matrix.
Step S8: the matching analysis module 107 may compare the sample information (255,0,0 rectangle) indicated by the identification obtained from step S6 with the actual sample information obtained from step S7 (b)
Figure BDA0003101973060000072
Irregular rectangle), wherein, because the image A-A matrix information includes a pixel point with RGB value (240,10,1), and the difference between the pixel point and the reference RGB value (255,0,0) of the corresponding pixel point exceeds a preset threshold, it is determined that the image shot by the camera has color difference. In addition, since the color profile analysis in step S7 shows that the captured picture a-a is not a regular rectangle and the edge thereof is curved, it is determined that the camera is distorted, and the degree of distortion is determinedAnd the judgment is carried out by the curve degree analysis of the straight line. In addition, when the uniformity of the camera is detected, if the picture A is a pure-color image, the RGB value of the pixel point of the shot picture A-A is not unique, and the more the peripheral pixels RGB change to gray, the poor uniformity of the camera can be judged.
Fig. 4 shows a flow diagram of a method 400 for automatically detecting camera performance according to one embodiment of the invention. The method begins at step 401 by capturing a visual material sample with a camera, wherein the visual material sample is automatically generated based on a detection item and may comprise a picture or a video sample. The detection items may include one or more of chromatic aberration, distortion, uniformity, or gray scale.
At step 402, the sample analysis module 105 analyzes the captured sample for actual sample information about the test item, wherein the actual sample information may include color information, color profile information, or distribution status information of the captured sample. For example, the sample analysis module 105 may obtain the color information of the captured picture by converting the picture into a multidimensional array matrix and reading the RGB information of each pixel point in the matrix.
In step 403, the identification acquisition module 106 acquires a unique sample identification associated with the captured sample, wherein the sample identification indicates reference sample information (i.e., sample attributes) about the test item. For example, when the detection item is a red color difference and the sample identifier is a text identifier (255,0,0) added to the sample, the reference sample information indicated by the sample identifier is a red RGB color (255,0, 0).
In step 404, the matching analysis module 107 performs matching analysis on the actual sample information and the reference sample information to detect the camera performance. For example, where the actual sample information indicates that the shape of the captured picture is an irregular rectangle while the reference sample information indicates that the original shape is a rectangle, the camera is determined to have distortion.
Those skilled in the art will appreciate that the method illustrated in fig. 4 may also be used to automatically verify AI models. In this case, a machine-vision-based AI model derives a recognition result by recognizing the captured visual material sample, which is automatically generated according to the test item. Fig. 3b shows a picture sample generated to verify the AI model's ability to recognize distorted pictures, in which two rows of the number "111" are displayed in different fonts, both white and of the same size. In this example, the AI model may recognize the upper "111" but fail to recognize the lower one. The identification acquisition module 106 acquires the unique sample identification associated with the captured sample, which indicates the reference sample information, here "two rows of the number 111". The matching analysis module 107 then performs matching analysis on the recognition result and the reference sample information to verify the AI model. Since the AI model does not recognize the lower "111", its recognition result ("number 111") does not match the reference sample information ("two rows of the number 111"), so the AI model can be judged to have poor recognition capability for distorted pictures. The method can therefore be used not only to test camera performance but also to test and verify machine-vision-related AI systems, automatically generating large numbers of test samples and testing automatically.
What has been described above includes examples of aspects of the claimed subject matter. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the claimed subject matter, but one of ordinary skill in the art may recognize that many further combinations and permutations of the claimed subject matter are possible. Accordingly, the disclosed subject matter is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims.

Claims (10)

1. An automated detection method, the method comprising:
capturing a visual material sample by an object to be detected, wherein the visual material sample is automatically generated according to a detection item;
analyzing the captured sample to obtain actual sample information about the test item;
obtaining a unique sample identification associated with the captured sample, wherein the sample identification is indicative of reference sample information regarding the test item; and
performing matching analysis on the actual sample information and the reference sample information to detect the performance of the object to be detected.
2. The method of claim 1, wherein the detection items include one or more of chromatic aberration, distortion, uniformity, or gray scale, and the sample information includes one or more of color information, color profile information, distribution state information, or gray scale information.
3. The method of claim 1, wherein the sample identification is added to a sample in the form of image text, and the obtaining the sample identification associated with the captured sample further comprises: the sample identification is identified from the captured sample by image text recognition.
4. The method of claim 2, wherein the object to be detected is an AI model based on machine vision, wherein the captured sample is recognized by the AI model to obtain a recognition result about the detection item, and the recognition result is subjected to matching analysis with the reference sample information indicated by the sample identification to detect a recognition capability of the AI model.
5. The method of claim 1, wherein analyzing the captured sample for actual sample information further comprises:
converting the captured samples into a multi-dimensional array matrix;
reading the RGB value of each pixel point in the multidimensional array matrix; and
determining one or more of color information, color profile information, or distribution state information of the captured sample based on the read RGB values.
6. The method of claim 5, wherein the determining one or more of color information, color profile information, or distribution state information of the captured sample further comprises:
determining color profile information of the captured sample by picture processing; or
Color profile information or color distribution state information of the captured sample is determined by mathematical morphology composed of the read RGB values.
7. An automated inspection system, the system comprising:
a sample generation module configured to automatically generate a visual material sample according to a detection item;
a sample identification module configured to set a unique sample identification for each sample, wherein the sample identification indicates reference sample information about the detection item;
a storage and playback module configured to store and play back the visual material sample;
a sample processing module configured to:
analyze the visual material sample played back by the storage and playback module and captured by the object to be detected, to obtain actual sample information about the detection item; and
obtain the unique sample identification associated with the captured sample so as to obtain the reference sample information; and
a matching analysis module configured to perform matching analysis on the actual sample information and the reference sample information to detect performance of the object to be detected.
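The matching analysis module's comparison could be sketched as follows. This is an assumption-laden illustration: the patent does not specify a tolerance scheme, so the per-channel RGB tolerance, the dictionary shapes, and the pass/fail verdict structure are all hypothetical:

```python
def match_analysis(actual, reference, tolerance=10):
    """Compare actual sample information against reference sample
    information.

    Illustrative rule: a detection item passes when every RGB channel
    of the actual color is within `tolerance` of the reference color.
    """
    results = {}
    for item, ref_color in reference.items():
        act_color = actual.get(item)
        if act_color is None:
            results[item] = False  # item missing from the capture
            continue
        results[item] = all(abs(a - r) <= tolerance
                            for a, r in zip(act_color, ref_color))
    return results

# Reference info indicated by the sample identification vs. actual
# info analyzed from the captured frame (hypothetical values):
reference = {"color_block_1": (255, 0, 0), "color_block_2": (0, 255, 0)}
actual    = {"color_block_1": (250, 4, 3), "color_block_2": (0, 200, 0)}
verdict = match_analysis(actual, reference)
```

Here `color_block_1` passes (all channels within 10) while `color_block_2` fails, which would flag a chromatic-aberration or uniformity defect in the object under test.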
8. The system of claim 7, wherein the detection items include one or more of chromatic aberration, distortion, uniformity, or gray scale, and the sample information includes one or more of color information, color profile information, distribution state information, or gray scale information.
9. The system of claim 7, wherein the sample identification is added to the sample in the form of image text, and obtaining the sample identification associated with the captured sample further comprises: recognizing the sample identification from the captured sample by image text recognition.
10. The system of claim 7, wherein analyzing the captured sample to obtain actual sample information further comprises:
converting the captured sample into a multi-dimensional array;
reading the RGB values of each pixel in the multi-dimensional array; and
determining one or more of color information, color profile information, or distribution state information of the captured sample based on the read RGB values.
CN202110625412.4A 2021-06-04 2021-06-04 Method and system for automatically detecting chromatic aberration, distortion and uniformity of camera Pending CN113850115A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110625412.4A CN113850115A (en) 2021-06-04 2021-06-04 Method and system for automatically detecting chromatic aberration, distortion and uniformity of camera


Publications (1)

Publication Number Publication Date
CN113850115A true CN113850115A (en) 2021-12-28

Family

ID=78972982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110625412.4A Pending CN113850115A (en) 2021-06-04 2021-06-04 Method and system for automatically detecting chromatic aberration, distortion and uniformity of camera

Country Status (1)

Country Link
CN (1) CN113850115A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220210

Address after: Room 1423, No. 1256 and 1258, Wanrong Road, Jing'an District, Shanghai 200072

Applicant after: Tianyi Digital Life Technology Co.,Ltd.

Address before: 201702 3rd floor, 158 Shuanglian Road, Qingpu District, Shanghai

Applicant before: Tianyi Smart Family Technology Co.,Ltd.