CN118154899A - Panel edge recognition method and device, electronic equipment and storage medium - Google Patents

Panel edge recognition method and device, electronic equipment and storage medium

Info

Publication number
CN118154899A
CN118154899A
Authority
CN
China
Prior art keywords
image
panel
edge
sample
result
Prior art date
Legal status
Granted
Application number
CN202410565205.8A
Other languages
Chinese (zh)
Other versions
CN118154899B (en)
Inventor
Name withheld at the inventor's request
Current Assignee
Chengdu Shuzhilian Technology Co Ltd
Original Assignee
Chengdu Shuzhilian Technology Co Ltd
Priority date
Application filed by Chengdu Shuzhilian Technology Co Ltd
Priority to CN202410565205.8A
Publication of application CN118154899A
Application granted; publication of CN118154899B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks


Abstract

The application provides a panel edge identification method and device, an electronic device and a storage medium, and relates to the technical field of image processing. In the application, edge recognition processing is first performed on an acquired panel image to be recognized by using a target neural network formed through network optimization, to obtain an edge recognition result corresponding to the panel image to be recognized; next, edge straight line determination processing is performed on the panel image to be recognized based on the edge recognition result, to obtain an edge straight line determination result of the target panel in the panel image to be recognized; then, the defect recognition result of the target panel can be analyzed based on the edge straight line determination result. On this basis, the problem of relatively low reliability of panel edge recognition in the prior art can be alleviated.

Description

Panel edge recognition method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and apparatus for identifying a panel edge, an electronic device, and a storage medium.
Background
Edge recognition of a panel (such as a glass substrate) is an important technique in industrial automation, machine vision and computer vision; for example, edge defects of a panel can be analyzed and located based on edge recognition. However, the inventors have found that conventional panel edge recognition techniques suffer from relatively low recognition reliability.
Disclosure of Invention
In view of the above, the present application provides a panel edge recognition method and device, an electronic device and a storage medium, so as to address the relatively low reliability of panel edge recognition in the prior art.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical scheme:
a panel edge identification method, comprising:
performing edge recognition processing on an acquired panel image to be recognized by using a target neural network formed through network optimization to obtain an edge recognition result corresponding to the panel image to be recognized, wherein the panel image to be recognized is formed by performing image acquisition on a target panel, and the edge recognition result is used for reflecting edge distribution of the target panel in the panel image to be recognized;
based on the edge recognition result, carrying out edge straight line determination processing on the panel image to be recognized to obtain an edge straight line determination result of the target panel in the panel image to be recognized;
and analyzing a defect recognition result of the target panel based on the edge straight line determination result, wherein the defect recognition result reflects whether the edge of the target panel has a defect.
In a preferred option of the embodiment of the present application, in the above panel edge recognition method, the step of performing edge line determination processing on the panel image to be recognized based on the edge recognition result to obtain an edge line determination result of the target panel in the panel image to be recognized includes:
in the panel image to be identified, performing scanning processing on the edge line of the target panel reflected by the edge identification result, to obtain at least one scanning result corresponding to the edge identification result, wherein each scanning result reflects the pixel points scanned on the edge line;
for each scanning result, performing edge straight line determination processing on the panel image to be identified based on the pixel points scanned on the edge line reflected by the scanning result, to obtain an edge straight line local determination result of the target panel in the panel image to be identified, wherein the edge straight line local determination result reflects an edge straight line determined based on the corresponding scanning result;
and determining an edge straight line determination result of the target panel in the panel image to be identified based on the edge straight line local determination result corresponding to each scanning result.
In a preferred option of the embodiment of the present application, in the above panel edge recognition method, the step of performing a scanning process on an edge line of the target panel reflected by the edge recognition result in the panel image to be recognized to obtain at least one scanning result corresponding to the edge recognition result includes:
Scanning along the column direction of pixel distribution in the panel image to be identified, determining a first crossed pixel point between each column scanning line and the edge line of the target panel reflected by the edge identification result, and constructing and forming a crossed pixel point set corresponding to the column direction based on the first crossed pixel point corresponding to each column scanning line;
Scanning along the row direction of pixel distribution in the panel image to be identified, determining a first crossed pixel point between each row scanning line and the edge line of the target panel reflected by the edge identification result, and constructing and forming a crossed pixel point set corresponding to the row direction based on the first crossed pixel point corresponding to each row scanning line;
Determining an intersection between the crossed pixel point set corresponding to the column direction and the crossed pixel point set corresponding to the row direction to obtain a crossed pixel point intersection;
and determining the crossed pixel point intersection, the crossed pixel points other than the intersection in the crossed pixel point set corresponding to the column direction, and the crossed pixel points other than the intersection in the crossed pixel point set corresponding to the row direction, respectively, as scanning results corresponding to the edge recognition result, so as to obtain at least three corresponding scanning results.
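The scanning step above can be sketched as follows, assuming the edge recognition result is available as a binary edge mask (a representation the patent does not mandate): each column scanning line and each row scanning line keeps only the first edge pixel it crosses, and the two sets are then split around their intersection to form the three scanning results.

```python
def scan_edge_mask(mask):
    """Scan a binary edge mask (list of rows of 0/1) and return three
    scan results: the column-direction set and the row-direction set
    (each minus the shared pixels), plus their intersection.

    Sketch only: assumes the edge recognition result is a binary mask.
    """
    rows, cols = len(mask), len(mask[0])
    # First crossed pixel per column scanning line (top to bottom).
    col_hits = set()
    for c in range(cols):
        for r in range(rows):
            if mask[r][c]:
                col_hits.add((r, c))
                break
    # First crossed pixel per row scanning line (left to right).
    row_hits = set()
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                row_hits.add((r, c))
                break
    shared = col_hits & row_hits
    return col_hits - shared, row_hits - shared, shared
```

For an L-shaped edge, the intersection holds the corner pixel, while the two remaining sets hold the horizontal and vertical edge pixels, matching the "at least three scanning results" of the claim.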
In a preferred option of the embodiment of the present application, in the above panel edge recognition method, the step of performing edge line determination processing on the panel image to be recognized to obtain a local edge line determination result of the target panel in the panel image to be recognized, for each of the scan results, based on the pixel points scanned on the edge line reflected by the scan result includes:
for each scanning result, performing outlier screening processing on the pixel points scanned on the edge line reflected by the scanning result, to obtain a target pixel point set corresponding to the scanning result;
and performing straight line fitting processing on the pixel points in the target pixel point set corresponding to each scanning result respectively, to obtain each edge straight line local determination result of the target panel in the panel image to be identified, wherein the edge straight line local determination results, the scanning results and the target pixel point sets are in one-to-one correspondence, and each edge straight line local determination result reflects one fitted edge straight line.
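The patent specifies neither the outlier screen nor the fitting method; one plausible sketch iteratively trims the worst-fitting pixel and refits by ordinary least squares (the `tol` threshold is an assumed parameter):

```python
from statistics import mean

def fit_edge_line(points, tol=1.0):
    """Fit y = a*x + b to (x, y) edge pixels by least squares,
    iteratively discarding the worst-fitting point until every
    residual is within `tol` pixels.

    Sketch only: the patent leaves the outlier screening and the
    fitting method unspecified.
    """
    pts = list(points)
    while True:
        xs, ys = [p[0] for p in pts], [p[1] for p in pts]
        mx, my = mean(xs), mean(ys)
        denom = sum((x - mx) ** 2 for x in xs)
        a = sum((x - mx) * (y - my) for x, y in pts) / denom
        b = my - a * mx
        worst = max(pts, key=lambda p: abs(p[1] - (a * p[0] + b)))
        if abs(worst[1] - (a * worst[0] + b)) <= tol or len(pts) <= 2:
            return a, b
        pts.remove(worst)
```

Feeding in five collinear pixels plus one stray pixel recovers the underlying line exactly after the stray pixel is screened out.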
In a preferred option of the embodiment of the present application, in the above panel edge identification method, the step of analyzing the defect identification result of the target panel based on the edge straight line determination result includes:
Comparing and analyzing the edge straight line determination result with an edge recognition sub-result corresponding to the edge straight line determination result included in the edge recognition result aiming at each edge straight line determination result belonging to the target type to obtain a defect recognition sub-result corresponding to the edge straight line determination result, wherein the edge recognition result comprises at least one edge recognition sub-result, any one edge recognition sub-result is used for reflecting one edge distribution of the target panel in the panel image to be recognized, and the edge straight line reflected by each edge straight line determination result belonging to the target type is distributed along the row direction or the column direction in the panel image to be recognized;
and for each edge straight line determination result not belonging to the target type, determining the size information of the edge straight line reflected by that edge straight line determination result, and analyzing the defect recognition result of the target panel based on the size information.
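For the target-type (row- or column-aligned) edge straight lines, the comparison with the matching edge recognition sub-result might, under assumptions the patent leaves open, measure how far the recognized edge pixels stray from the fitted line; `max_dev` below is a hypothetical tolerance:

```python
def compare_line_to_edge(line_y, edge_pixels, max_dev=2.0):
    """Compare a fitted horizontal edge line (constant row `line_y`)
    with the (row, col) pixels of the matching edge recognition
    sub-result; report a defect when any pixel deviates by more
    than `max_dev` rows.

    Sketch: the patent does not define the comparison metric.
    """
    deviation = max(abs(r - line_y) for r, _ in edge_pixels)
    return {"deviation": deviation, "defective": deviation > max_dev}
```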
In a preferred option of the embodiment of the present application, in the above panel edge identification method, the step of determining, for each edge straight line determination result not belonging to the target type, size information of an edge straight line reflected by the edge straight line determination result, and analyzing, based on the size information, a defect identification result of the target panel includes:
determining the size information of the edge straight line reflected by the edge straight line determining result aiming at each edge straight line determining result which does not belong to the target type;
matching the size information corresponding to each edge straight line determination result with the reference size information pre-configured for that size information, respectively, to obtain corresponding matching data;
and for each edge straight line determination result not belonging to the target type: if the corresponding matching data reflects that the size information does not match the reference size information, generating a defect recognition result, corresponding to that edge straight line determination result, reflecting that the target panel has a defect; if the corresponding matching data reflects that the size information matches the reference size information, generating a defect recognition result, corresponding to that edge straight line determination result, reflecting that the target panel has no defect.
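The size-matching rule itself is left open by the patent; a minimal sketch with an assumed absolute tolerance:

```python
def match_size(size_mm, reference_mm, tolerance_mm=0.5):
    """Match a measured edge-line size against its pre-configured
    reference size; a mismatch yields a defect recognition result.

    `tolerance_mm` is an assumed parameter; the patent only requires
    some matching rule between the size and the reference size.
    """
    matched = abs(size_mm - reference_mm) <= tolerance_mm
    return {"matched": matched, "defect": not matched}
```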
In a preferred option of the embodiment of the present application, in the above panel edge recognition method, the step of performing edge recognition processing on the obtained panel image to be recognized by using a target neural network formed through network optimization to obtain an edge recognition result corresponding to the panel image to be recognized includes:
Acquiring a first sample panel image and an initial neural network, wherein the initial neural network comprises a first image coding model and a first image decoding model;
Performing unsupervised network optimization processing on the initial neural network based on the first sample panel image to obtain an intermediate neural network corresponding to the initial neural network, wherein in the process of performing unsupervised network optimization processing, the basis of network optimization processing at least comprises errors between the first sample panel image and a decoded image output by the first image decoding model;
constructing a candidate neural network based on a second image decoding model and a first image coding model in the intermediate neural network, wherein the candidate neural network comprises the second image coding model and the second image decoding model, and model parameters of the second image coding model are the same as those of the first image coding model in the intermediate neural network;
acquiring a second sample panel image and image tag information corresponding to the second sample panel image, wherein the image tag information is used for identifying panel edges in the second sample panel image;
Performing supervised network optimization processing on the candidate neural network based on the second sample panel image and image label information corresponding to the second sample panel image to obtain a target neural network corresponding to the candidate neural network, wherein in the process of performing the supervised network optimization processing, the basis of the network optimization processing at least comprises errors between the image label information and a sample edge recognition result output by the second image decoding model;
and performing edge recognition processing on the acquired panel image to be recognized by using the target neural network, to obtain an edge recognition result corresponding to the panel image to be recognized.
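The two-stage optimization above (unsupervised pretraining of the first encoder and decoder, then reusing the optimized encoder parameters verbatim in a candidate network paired with a second decoder) can be sketched schematically. The dict-based models and the `pretrain`/`finetune` callables below are placeholders standing in for the actual optimization loops, not part of the patent:

```python
def build_target_network(pretrain, finetune):
    """Workflow skeleton for the two-stage network optimization.

    `pretrain` performs the unsupervised optimization against the
    reconstruction error; `finetune` performs the supervised
    optimization against the edge label error. Models are plain dicts
    so the encoder weight transfer is explicit.
    """
    initial = {"encoder": {"w": 0.0}, "decoder": {"w": 0.0}}
    # Stage 1: unsupervised optimization of the initial network.
    intermediate = pretrain(initial)
    # Stage 2: copy the optimized first-encoder parameters into the
    # second encoder, pair it with a fresh second decoder, then
    # supervise with image tag (edge label) information.
    candidate = {"encoder": dict(intermediate["encoder"]),
                 "decoder": {"w": 0.0}}
    return finetune(candidate)
```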
In a preferred option of the embodiment of the present application, in the above panel edge recognition method, the step of performing unsupervised network optimization processing on the initial neural network based on the first sample panel image to obtain an intermediate neural network corresponding to the initial neural network includes:
performing segmentation serialization processing on the first sample panel image to obtain a sample image segmentation sequence corresponding to the first sample panel image, wherein the sample image segmentation sequence comprises a plurality of local sample panel images arranged in sequence, the arrangement relation among the plurality of local sample panel images is related to their position relation in the first sample panel image, and all of the local sample panel images have the same image size;
utilizing a first image coding model included in the initial neural network, and mining candidate integral image coding features of the first sample panel image based on the sample image segmentation sequence;
Determining a first sample image segmentation sequence and a second sample image segmentation sequence in the sample image segmentation sequence, wherein the first sample image segmentation sequence and the second sample image segmentation sequence belong to subsequences in the sample image segmentation sequence, and a correlation is formed between the first sample image segmentation sequence and the second sample image segmentation sequence;
decoding, by using the first image coding model and the first image decoding model included in the initial neural network, a decoded sample image segmentation sequence corresponding to the first sample image segmentation sequence based on the first sample image segmentation sequence and the candidate integral image coding feature, wherein the decoding object of the first image decoding model comprises the candidate integral image coding feature and the coding result of the first image coding model for the first sample image segmentation sequence, and the decoded sample image segmentation sequence is a restoration result of the second sample image segmentation sequence obtained by decoding processing based on the first sample image segmentation sequence;
updating the candidate integral image coding feature based on the decoded sample image segmentation sequence and the second sample image segmentation sequence, and marking the updated candidate integral image coding feature as the corresponding target integral image coding feature;
And decoding the target overall image coding characteristic by using a first image decoding model included in the initial neural network to obtain a decoded sample panel image corresponding to the first sample panel image, performing error calculation on the initial neural network based on the first sample panel image and the decoded sample panel image to obtain a corresponding target decoding error, and updating and optimizing network parameters of the initial neural network along the direction of reducing the target decoding error to obtain an intermediate neural network corresponding to the initial neural network.
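The segmentation serialization step can be sketched as follows; the raster (left-to-right, top-to-bottom) ordering is an assumption, since the patent only requires equal patch sizes and an ordering tied to the patches' positions in the original image:

```python
def segment_serialize(image, patch_h, patch_w):
    """Split an image (list of pixel rows) into equally sized local
    images, ordered left-to-right, top-to-bottom so that the sequence
    order mirrors the patches' positions in the original image.

    Sketch: the specific ordering is an assumed convention.
    """
    rows, cols = len(image), len(image[0])
    patches = []
    for r0 in range(0, rows, patch_h):
        for c0 in range(0, cols, patch_w):
            patches.append([row[c0:c0 + patch_w]
                            for row in image[r0:r0 + patch_h]])
    return patches
```

A 4x4 image split into 2x2 patches yields four local images whose sequence positions follow their spatial positions.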
In a preferred option of the embodiment of the present application, in the above panel edge recognition method, the step of mining candidate global image coding features of the first sample panel image based on the sample image segmentation sequence by using a first image coding model included in the initial neural network includes:
aiming at each local sample panel image in the sample image segmentation sequence, carrying out coding processing on the local sample panel image by using a first image coding model included in the initial neural network to obtain local image coding characteristics corresponding to the local sample panel image;
Performing feature compression processing on local image coding features corresponding to each local sample panel image respectively to obtain compressed image coding features corresponding to each local image coding feature, wherein feature sizes between every two compressed image coding features are consistent, and each compressed image coding feature comprises at least one feature parameter;
And combining the compressed image coding features corresponding to each local image coding feature to form candidate integral image coding features of the first sample panel image.
In a preferred option of the embodiment of the present application, in the above panel edge recognition method, the step of performing feature compression processing on the local image coding feature corresponding to each local sample panel image to obtain a compressed image coding feature corresponding to each local image coding feature includes:
for each local sample panel image in the sample image segmentation sequence, performing feature screening processing on the local image coding feature corresponding to the local sample panel image, so as to select the feature parameter with the maximum value in the local image coding feature;
and constructing the compressed image coding feature corresponding to each local image coding feature based on the maximum-valued feature parameter in that local image coding feature.
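The max-value screening above amounts to keeping a single parameter per local feature (a form of global max pooling) and concatenating the results; a minimal sketch:

```python
def compress_features(local_features):
    """Compress each local image coding feature (a list of feature
    parameters) to its single largest parameter and combine the
    results into one whole-image coding feature.
    """
    return [max(feature) for feature in local_features]
```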
In a preferred option of the embodiment of the present application, in the above panel edge identification method, the step of updating the candidate integral image coding feature based on the decoded sample image segmentation sequence and the second sample image segmentation sequence, and marking the updated candidate integral image coding feature to be a corresponding target integral image coding feature includes:
performing error calculation on the initial neural network based on the decoded sample image segmentation sequence and the second sample image segmentation sequence to obtain a corresponding local decoding error;
updating and optimizing network parameters of the initial neural network along the direction of reducing the local decoding error, to obtain a pending neural network corresponding to the initial neural network;
mining, by using the first image coding model included in the pending neural network, new candidate integral image coding features of the first sample panel image based on the sample image segmentation sequence;
and updating the candidate integral image coding features based on the new candidate integral image coding features, and marking the updated candidate integral image coding features as the corresponding target integral image coding features.
In a preferred option of the embodiment of the present application, in the above panel edge recognition method, the step of performing supervised network optimization processing on the candidate neural network based on the second sample panel image and the image tag information corresponding to the second sample panel image to obtain a target neural network corresponding to the candidate neural network includes:
Carrying out segmentation serialization processing on the second sample panel image to obtain an image segmentation sequence to be mined corresponding to the second sample panel image, wherein the image segmentation sequence to be mined comprises a plurality of local panel images to be mined, which are arranged in sequence, the arrangement relation among the plurality of local panel images to be mined is related to the position relation of the plurality of local panel images to be mined in the second sample panel image, and the image sizes among every two local panel images to be mined are consistent;
mining the integral image coding feature of the second sample panel image based on the image segmentation sequence to be mined, by using the second image coding model included in the candidate neural network;
Decoding the whole image coding features of the second sample panel image by using a second image decoding model included in the candidate neural network to obtain a decoding labeling sample panel image corresponding to the second sample panel image, wherein the decoding labeling sample panel image has labeling information of a corresponding panel edge;
and determining an edge recognition error of the candidate neural network based on the label information of the corresponding panel edge in the decoded and labeled sample panel image and the image label information corresponding to the second sample panel image, and updating and optimizing network parameters of the candidate neural network along the direction of reducing the edge recognition error to obtain a corresponding target neural network.
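Updating parameters "along the direction of reducing the edge recognition error" is ordinary gradient descent on the error; a one-parameter stand-in for the candidate network illustrates the rule (the squared-error loss and the learning rate are assumptions, not mandated by the patent):

```python
def supervised_step(w, x, label, lr=0.1):
    """One supervised update: the network's edge prediction is modeled
    as w * x, the edge recognition error as the squared difference
    from the label, and w is moved along the error-reducing direction.
    """
    pred = w * x
    grad = 2 * (pred - label) * x  # d(error)/dw
    return w - lr * grad
```

A single step from w = 0 toward label 1 moves w to 0.2 and strictly reduces the squared error, which is all the claim's update direction requires.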
The embodiment of the application also provides a panel edge recognition device, which comprises:
The edge recognition module is used for carrying out edge recognition processing on the acquired panel image to be recognized by utilizing a target neural network formed through network optimization to obtain an edge recognition result corresponding to the panel image to be recognized, wherein the panel image to be recognized is formed by carrying out image acquisition on a target panel, and the edge recognition result is used for reflecting edge distribution of the target panel in the panel image to be recognized;
The edge straight line determining module is used for performing edge straight line determination processing on the panel image to be identified based on the edge identification result, to obtain an edge straight line determination result of the target panel in the panel image to be identified;
The defect identification module is used for analyzing a defect identification result of the target panel based on the edge straight line determination result, wherein the defect identification result reflects whether the edge of the target panel has a defect.
On the basis of the above, the embodiment of the application also provides an electronic device, which comprises:
A memory for storing a computer program;
and the processor is connected with the memory and is used for executing the computer program stored in the memory so as to realize the panel edge identification method.
On the basis of the above, the embodiment of the application also provides a computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and the computer program executes the steps of the panel edge identification method.
According to the panel edge recognition method and device, the electronic device and the storage medium, edge recognition processing is first performed on the acquired panel image to be recognized by using a target neural network formed through network optimization, to obtain an edge recognition result corresponding to the panel image to be recognized; next, edge straight line determination processing is performed on the panel image to be recognized based on the edge recognition result, to obtain an edge straight line determination result of the target panel in the panel image to be recognized; then, the defect recognition result of the target panel is analyzed based on the edge straight line determination result. Because a neural network generally has good generalization capability (for example, it can handle differences between panel images caused by factors such as illumination and defects), using the target neural network for edge recognition makes the edge recognition result relatively reliable. In addition, compared with a scheme that uses a neural network directly to recognize edge defects, the target neural network need not attend to edge defects and can focus on edge recognition itself, so the edge recognition result is more reliable, and the subsequent defect analysis based on this more reliable edge recognition result is in turn more reliable. The problem of relatively low reliability of panel edge recognition in the prior art can therefore be effectively alleviated.
In addition, by performing edge straight line determination processing after the neural-network-based edge recognition processing, the final defect recognition result offers better interpretability than a scheme that uses a neural network directly for defect recognition.
Drawings
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present application.
Fig. 2 is a flowchart of a panel edge recognition method according to an embodiment of the present application.
Fig. 3 is a schematic view of a panel edge according to an embodiment of the present application.
Fig. 4 is a schematic diagram of scanning according to an embodiment of the present application.
Fig. 5 is a schematic diagram of an edge straight line determination result and an edge recognition sub-result according to an embodiment of the present application.
Fig. 6 is a schematic block diagram of a panel edge recognition device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments of the present application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
As shown in fig. 1, an embodiment of the present application provides an electronic device. The electronic device may include a memory, a processor, and a panel edge recognition device.
In detail, the memory and the processor are electrically connected, directly or indirectly, to realize transmission or interaction of data. For example, the memory and the processor may be electrically connected by one or more communication buses or signal lines. The panel edge recognition device comprises at least one software functional module stored in the memory in the form of software or firmware. The processor is configured to execute an executable computer program stored in the memory, for example, the software functional modules and computer programs included in the panel edge recognition device, so as to implement the panel edge recognition method provided by the embodiment of the application.
Alternatively, the memory may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), etc.
And, the processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), a System on Chip (SoC), etc.; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
It will be appreciated that the architecture shown in fig. 1 is merely illustrative, and that the electronic device may further comprise more or fewer components than shown in fig. 1, or may have a different configuration than shown in fig. 1, e.g., may further comprise a communication unit for information interaction with other devices, such as image acquisition devices, etc.
With reference to fig. 2, an embodiment of the present application further provides a panel edge recognition method applicable to the above electronic device. The method steps defined by the flow of the panel edge recognition method can be implemented by the electronic device. The specific flow shown in fig. 2 will be described in detail below.
Step S110, performing edge recognition processing on the acquired panel image to be recognized by utilizing a target neural network formed through network optimization, and obtaining an edge recognition result corresponding to the panel image to be recognized.
In the embodiment of the application, the electronic device can perform edge recognition processing on the acquired panel image to be recognized by using a target neural network (an image recognition network) formed through network optimization, to obtain an edge recognition result corresponding to the panel image to be recognized. The panel image to be recognized is formed by performing image acquisition on a target panel; for example, after an image acquisition device collects information on the target panel, the formed image may be sent to the electronic device, so that the electronic device can acquire the image and use it as the panel image to be recognized. The edge recognition result is used to reflect the edge distribution of the target panel in the panel image to be recognized; for example, the edge of the target panel is marked in the panel image to be recognized, or the pixel points belonging to the edge of the target panel are marked in the panel image to be recognized.
And step S120, based on the edge recognition result, carrying out edge straight line determination processing on the panel image to be recognized to obtain an edge straight line determination result of the target panel in the panel image to be recognized.
In the embodiment of the application, after the edge recognition result is obtained, the electronic device may perform edge line determination processing on the panel image to be recognized based on the edge recognition result, so as to obtain an edge line determination result of the target panel in the panel image to be recognized. It should be noted that at least one straight line edge is provided in the edges of the target panel, so, in order to identify the defects of the straight line edge, edge straight line determination processing may be performed on the panel image to be identified, so as to obtain a corresponding edge straight line determination result.
And step S130, analyzing a defect identification result of the target panel based on the edge straight line determination result.
In the embodiment of the application, after the edge straight line determination result is obtained, the electronic device may analyze the defect identification result of the target panel based on the edge straight line determination result. The defect recognition result is used for reflecting whether the edge of the target panel has a defect or not.
Based on the above, since a neural network generally has good generalization capability (for example, it can maintain good recognition performance despite differences between panel images caused by factors such as illumination and defects), performing edge recognition processing on the panel image to be recognized with the target neural network makes the edge recognition result relatively more reliable. In addition, compared with a scheme that directly uses a neural network to identify edge defects, the target neural network does not need to attend to the edge defects themselves, but only to edge recognition, so the reliability of the edge recognition result can be higher, and the subsequent defect analysis based on this more reliable edge recognition result can in turn be more reliable. Therefore, the problem of relatively low reliability of panel edge recognition in the prior art can be effectively solved. In addition, by performing edge straight line determination processing and other technical means after the neural-network-based edge recognition processing, the final defect recognition result has better interpretability (compared with a scheme that directly uses a neural network for defect recognition).
In the first aspect, it should be noted that, in step S110, a specific manner of performing the edge recognition processing on the panel image to be recognized by using the target neural network is not limited, and may be selected according to actual requirements.
For example, in an alternative embodiment, in order to improve the efficiency of performing edge recognition processing on the panel image to be recognized, an existing neural network (a neural network with edge recognition capability) may be used to perform edge recognition processing on the panel image to be recognized, so that an edge recognition result corresponding to the panel image to be recognized may be obtained.
For another example, in another alternative embodiment, in order to ensure the reliability of the edge recognition processing for the panel image to be recognized, the step S110 may further include a step S111, a step S112, a step S113, a step S114, a step S115, and a step S116, where the specific contents of each step are as follows.
Step S111, acquiring a first sample panel image and an initial neural network.
In the embodiment of the application, the first sample panel image and the initial neural network can be acquired respectively. The initial neural network may include a first image encoding model and a first image decoding model. Illustratively, the first image encoding model may include a plurality of convolution layers and a plurality of pooling layers, which may be alternately distributed, such as a first convolution layer (e.g., convolution kernel size: 3x3, channel number: 3, corresponding to the R, G, B channels of the image), a first pooling layer (e.g., pooling size: 2x2), a second convolution layer (e.g., convolution kernel size: 3x3, channel number: 64), a second pooling layer (e.g., pooling size: 2x2), and so on. Illustratively, the first image decoding model may include a plurality of upsampling layers and a plurality of convolution layers, which may be alternately distributed, such as a first upsampling layer (e.g., upsampling size: 2x2), a first convolution layer (e.g., convolution kernel size: 3x3, channel number: 128), a second upsampling layer (e.g., upsampling size: 2x2), a second convolution layer (e.g., convolution kernel size: 3x3, channel number: 64), and so on. Illustratively, the output of a pooling layer of the first image encoding model may also be linked to the input of an upsampling layer of the first image decoding model.
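To make the effect of the alternating layers above concrete, the following sketch traces how the spatial size of a feature map changes through such an encoder and decoder. The 256x256 input size, the assumption that the 3x3 convolutions are size-preserving ('same' padding), and the exact layer counts are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch (not from the patent): tracing how the spatial size of a
# feature map changes through the alternating layers described above, assuming
# 3x3 convolutions are size-preserving ('same' padding, stride 1), 2x2 pooling
# halves each dimension, and 2x2 upsampling doubles it.

def trace_spatial_size(height, width, layers):
    """Return the (height, width) after each layer in `layers`.

    Each layer is ("conv", k), ("pool", s), or ("upsample", s).
    """
    sizes = []
    for kind, param in layers:
        if kind == "conv":        # size-preserving 'same' convolution
            pass
        elif kind == "pool":      # pooling divides the resolution
            height //= param
            width //= param
        elif kind == "upsample":  # upsampling multiplies the resolution
            height *= param
            width *= param
        sizes.append((height, width))
    return sizes

# Encoder: conv(3x3) -> pool(2x2) -> conv(3x3) -> pool(2x2)
encoder = [("conv", 3), ("pool", 2), ("conv", 3), ("pool", 2)]
# Decoder: upsample(2x2) -> conv(3x3) -> upsample(2x2) -> conv(3x3)
decoder = [("upsample", 2), ("conv", 3), ("upsample", 2), ("conv", 3)]

sizes = trace_spatial_size(256, 256, encoder + decoder)
# After the encoder the map is 64x64; the decoder restores 256x256,
# which is what lets the decoding model output a full-size decoded image.
```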
And step S112, performing unsupervised network optimization processing on the initial neural network based on the first sample panel image to obtain an intermediate neural network corresponding to the initial neural network.
In the embodiment of the present application, after the first sample panel image and the initial neural network are acquired, an unsupervised network optimization process may be performed on the initial neural network based on the first sample panel image to obtain an intermediate neural network corresponding to the initial neural network, that is, the first sample panel image may not have label information, for example, pixel points belonging to a panel edge in the first sample panel image may not be marked. And in the process of performing unsupervised network optimization processing, the basis of the network optimization processing at least comprises errors between the first sample panel image and the decoded image output by the first image decoding model. That is, the initial neural network may be an image reconstruction network or an image restoration network, so that an error between the first sample panel image and the corresponding decoded image (reconstructed image, restored image) may be used as a basis for performing an unsupervised network optimization process on the initial neural network, for example, parameters of the initial neural network may be adjusted in a direction of reducing the error. In addition, in order to ensure the reliability of the network optimization processing performed on the initial neural network, that is, the image restoration or reconstruction capability of the intermediate neural network is better, a larger number of first sample panel images may be provided.
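The basis of this unsupervised optimization is the error between the first sample panel image and the decoded image output by the first image decoding model. The patent does not fix the form of this error; as one plausible choice, the sketch below uses pixel-wise mean squared error, with toy 2x2 images as illustrative data.

```python
# Hypothetical sketch: one plausible reconstruction error between the first
# sample panel image and the decoded (reconstructed) image. The patent only
# requires *an* error; mean squared error over pixels is a common choice.

def reconstruction_error(sample_image, decoded_image):
    """Mean squared error between two images of identical size (lists of rows)."""
    total, count = 0.0, 0
    for sample_row, decoded_row in zip(sample_image, decoded_image):
        for s, d in zip(sample_row, decoded_row):
            total += (s - d) ** 2
            count += 1
    return total / count

sample = [[0.0, 1.0], [1.0, 0.0]]
good = reconstruction_error(sample, [[0.0, 1.0], [1.0, 0.0]])  # perfect: 0.0
bad = reconstruction_error(sample, [[1.0, 0.0], [0.0, 1.0]])   # inverted: 1.0
# The network parameters would then be adjusted in the direction that reduces
# this error, yielding the intermediate neural network.
```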
And step S113, constructing a candidate neural network based on the second image decoding model and the first image coding model in the intermediate neural network.
In the embodiment of the application, after the intermediate neural network is obtained, a candidate neural network can be constructed based on a second image decoding model and the first image coding model in the intermediate neural network. The candidate neural network may include a second image coding model and the second image decoding model, where the model parameters of the second image coding model are the same as those of the first image coding model in the intermediate neural network. For example, the second image coding model may be constructed based on the model parameters of the first image coding model in the intermediate neural network, or the first image coding model in the intermediate neural network may be directly used as the second image coding model. In this way, the first image coding model, which has already learned the panel edge features, can be reused directly, so that in the subsequent network optimization process, on the one hand, the network optimization can be accelerated, and on the other hand, fewer second sample panel images may be needed, reducing the workload of the image labeling work. The second image decoding model may also include a plurality of upsampling layers and a plurality of convolution layers, which may be alternately distributed (as described in the foregoing related description). In addition, in the second image decoding model, an output layer may be connected after the last convolution layer; the output layer may include a convolution kernel and an activation function such as sigmoid, so that an edge probability value (e.g., a probability value of 0-1) of each pixel point in the corresponding image may be generated by the output layer, and pixel points whose edge probability values exceed a preset probability value may then be regarded as the edge of the corresponding panel.
For example, the specific value of the preset probability value may be configured according to actual requirements, such as values of 0.6, 0.7, and the like.
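The thresholding step just described can be sketched as follows. The probability map values are illustrative; only the preset probability value of 0.6 comes from the text above.

```python
# Hypothetical sketch: turning the per-pixel edge probability map produced by
# the output layer into a set of edge pixels, by comparing each probability
# against the preset probability value (0.6 here, per the example above).

def edge_pixels(prob_map, preset_probability=0.6):
    """Return (row, col) coordinates whose edge probability exceeds the preset value."""
    return [
        (r, c)
        for r, row in enumerate(prob_map)
        for c, p in enumerate(row)
        if p > preset_probability
    ]

# A tiny 3x3 probability map: only the middle column looks like an edge.
prob_map = [
    [0.05, 0.92, 0.10],
    [0.08, 0.88, 0.12],
    [0.04, 0.95, 0.07],
]
print(edge_pixels(prob_map))  # -> [(0, 1), (1, 1), (2, 1)]
```

Raising the preset probability value (e.g., to 0.9) keeps only the most confident edge pixels, which is why the value is left configurable according to actual requirements.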
Step S114, acquiring a second sample panel image and image tag information corresponding to the second sample panel image.
In the embodiment of the application, after the candidate neural network is constructed, a second sample panel image and image label information corresponding to the second sample panel image may be acquired, where the image label information is used to identify the panel edge in the second sample panel image, for example, by marking the pixel points belonging to the edge in the second sample panel image. The step of acquiring the second sample panel image and its image label information may also be performed before, or in parallel with, the step of constructing the candidate neural network; the specific execution sequence may be selected according to actual requirements. In addition, the specific number of second sample panel images is not limited; to ensure that the candidate neural network can undergo reliable supervised network optimization processing, the number of second sample panel images may also be multiple.
Step S115, performing supervised network optimization processing on the candidate neural network based on the second sample panel image and the image label information corresponding to the second sample panel image, to obtain a target neural network corresponding to the candidate neural network.
In the embodiment of the present application, after the second sample panel image and the image tag information corresponding to the second sample panel image are obtained, the candidate neural network may be subjected to supervised network optimization processing based on the second sample panel image and the image tag information corresponding to the second sample panel image, so as to obtain the target neural network corresponding to the candidate neural network. And in the process of performing the supervised network optimization processing, the basis of the network optimization processing at least comprises an error between the image tag information and a sample edge recognition result output by the second image decoding model. For example, an error between the image tag information and the sample edge recognition result output by the second image decoding model may be calculated first, and then, the network parameters of the candidate neural network may be updated and optimized in a direction to reduce the error.
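The error between the image tag information and the sample edge recognition result can take many forms; the patent does not fix one. As one plausible choice for per-pixel binary labels against per-pixel edge probabilities, the sketch below uses pixel-wise binary cross-entropy, with illustrative 1x2 masks.

```python
import math

# Hypothetical sketch: one plausible error between the image label information
# (a binary edge mask) and the sample edge recognition result (per-pixel edge
# probabilities). Pixel-wise binary cross-entropy is a common choice for this
# kind of supervision; the patent itself only requires *an* error.

def pixelwise_bce(label_mask, prob_map, eps=1e-7):
    """Mean binary cross-entropy over all pixels."""
    total, count = 0.0, 0
    for label_row, prob_row in zip(label_mask, prob_map):
        for y, p in zip(label_row, prob_row):
            p = min(max(p, eps), 1.0 - eps)  # clamp for numerical safety
            total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
            count += 1
    return total / count

perfect = pixelwise_bce([[1, 0]], [[1.0, 0.0]])  # near-zero error
poor = pixelwise_bce([[1, 0]], [[0.1, 0.9]])     # large error
# The network parameters of the candidate neural network would then be updated
# and optimized in the direction that reduces this error.
```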
And step S116, performing edge recognition processing on the acquired panel image to be recognized by using the target neural network to obtain an edge recognition result corresponding to the panel image to be recognized.
In the embodiment of the application, after the target neural network is obtained, the target neural network can be utilized to perform edge recognition processing on the obtained panel image to be recognized, so as to obtain an edge recognition result corresponding to the panel image to be recognized. For example, each pixel point belonging to the panel edge in the panel image to be identified may be reflected in the edge identification result.
It will be appreciated that in the step S112, the specific manner of performing the unsupervised network optimization process on the initial neural network is not limited, and may be selected according to actual requirements.
For example, in an alternative embodiment, in order to ensure the reliability of the network optimization process performed on the initial neural network, the step S112 may further include a step S112a, a step S112b, a step S112c, a step S112d, a step S112e, and a step S112f, where the specific contents of each step are as follows.
Step S112a, performing segmentation serialization processing on the first sample panel image to obtain a sample image segmentation sequence corresponding to the first sample panel image.
In the embodiment of the application, after the first sample panel image is acquired, the first sample panel image may be subjected to segmentation serialization processing, so that a sample image segmentation sequence corresponding to the first sample panel image may be obtained. The sample image segmentation sequence comprises a plurality of local sample panel images arranged in sequence; the arrangement relation among the plurality of local sample panel images is related to their positions in the first sample panel image, and every two local sample panel images have a consistent image size. That is, the first sample panel image may first be subjected to segmentation processing (for example, sliding window segmentation; the specific window size may be configured according to actual needs, such as 2×2, 3×3, 9×9, 16×16, etc.), so that a corresponding plurality of local sample panel images may be obtained, such as a local sample panel image 1, a local sample panel image 2, a local sample panel image 3, a local sample panel image 4, etc. Then, the plurality of local sample panel images may be ordered based on the relationship among them (for example, the sliding window order) to implement serialization, so that the sample image segmentation sequence may be obtained; for example, the sample image segmentation sequence may contain, in order, the local sample panel image 1, the local sample panel image 2, the local sample panel image 3, the local sample panel image 4, etc.
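The segmentation serialization step can be sketched as follows. The patent mentions sliding window segmentation but does not specify the stride; a non-overlapping window and left-to-right, top-to-bottom ordering are illustrative assumptions here.

```python
# Hypothetical sketch: segmenting a sample panel image into equally sized
# local sample panel images with a non-overlapping sliding window, then
# serializing them in window order so the sequence order reflects their
# positions in the original image. Window size and stride are assumptions.

def segment_serialize(image, window=2):
    """Split an H x W image (list of rows) into window x window local images,
    ordered left-to-right, top-to-bottom (the sliding window order)."""
    h, w = len(image), len(image[0])
    sequence = []
    for top in range(0, h, window):
        for left in range(0, w, window):
            patch = [row[left:left + window] for row in image[top:top + window]]
            sequence.append(patch)
    return sequence

image = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
    [13, 14, 15, 16],
]
sequence = segment_serialize(image, window=2)
# sequence[0] is the top-left 2x2 local image [[1, 2], [5, 6]], and so on;
# every two local images have the same (consistent) size, as required.
```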
Step S112b, using a first image coding model included in the initial neural network, and mining candidate integral image coding features of the first sample panel image based on the sample image segmentation sequence.
In the embodiment of the present application, after the sample image segmentation sequence is obtained, a first image coding model included in the initial neural network may be used to mine out candidate integral image coding features of the first sample panel image based on the sample image segmentation sequence. That is, the first image coding model may be used to perform overall (global) feature mining, i.e., encoding, on the sample image segmentation sequence, so that the candidate integral image coding features of the first sample panel image may be obtained; the candidate integral image coding features can thus be used to reflect the overall or global semantic information of the first sample panel image. The candidate integral image coding features may be represented as a vector, and in embodiments of the present application, the various other coding features may likewise be represented as vectors.
Step S112c, determining a first sample image segmentation sequence and a second sample image segmentation sequence in the sample image segmentation sequence.
In the embodiment of the present application, after the sample image segmentation sequence is obtained, a first sample image segmentation sequence and a second sample image segmentation sequence may be further determined from the sample image segmentation sequence. Both the first sample image segmentation sequence and the second sample image segmentation sequence are sub-sequences of the sample image segmentation sequence, and a correlation exists between them. Illustratively, the sample image segmentation sequence may include, in order, a local sample panel image 1, a local sample panel image 2, a local sample panel image 3, ..., a local sample panel image 99, and a local sample panel image 100, such that the local sample panel images 1 through 99 may be regarded as the first sample image segmentation sequence, and the local sample panel image 100 may be regarded as the second sample image segmentation sequence.
And step S112d, decoding to obtain a decoded sample image segmentation sequence corresponding to the first sample image segmentation sequence based on the first sample image segmentation sequence and the candidate integral image coding characteristics by using a first image coding model and a first image decoding model which are included in the initial neural network.
In the embodiment of the present application, after the first sample image segmentation sequence and the second sample image segmentation sequence are determined, a first image coding model and a first image decoding model included in the initial neural network may be used to decode to obtain a decoded sample image segmentation sequence corresponding to the first sample image segmentation sequence based on the first sample image segmentation sequence and the candidate integral image coding feature. The decoding object of the first image decoding model comprises the candidate integral image coding feature and a coding result of the first image coding model on the first sample image segmentation sequence, and the decoding sample image segmentation sequence belongs to a restoring result of decoding processing on the second sample image segmentation sequence based on the first sample image segmentation sequence. That is, the first sample image segmentation sequence may be subjected to feature mining (encoding) by using the first image encoding model to obtain a corresponding encoding result (such as an encoding feature), then the encoding result and the candidate integral image encoding feature may be fused (such as fusion based on an attention mechanism, or processing such as splicing, etc.) to obtain a corresponding fusion feature, and then the fusion feature may be decoded (predicted) by using the first image decoding model, so that a decoded sample image segmentation sequence corresponding to the first sample image segmentation sequence may be obtained, and a related second sample image segmentation sequence may be predicted based on the first sample image segmentation sequence. In this way, the candidate integral image coding features are fused in the prediction process, and have global semantic information, so that the prediction accuracy can be higher.
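The text above allows the fusion of the coding result with the candidate integral image coding feature to be done by attention or by splicing. The sketch below shows the splicing variant, with illustrative feature vectors; the real features would be the outputs of the coding model.

```python
# Hypothetical sketch of the fusion step: splicing the coding result of the
# first sample image segmentation sequence with the candidate integral image
# coding feature before decoding. Attention-based fusion is also allowed by
# the text; simple splicing is shown here for brevity. Vectors are illustrative.

def fuse_by_splicing(sequence_coding, integral_feature):
    """Concatenate the sequence coding result with the integral image feature."""
    return list(sequence_coding) + list(integral_feature)

sequence_coding = [0.2, 0.7]        # coding result of the first sub-sequence
integral_feature = [0.8, 0.9, 0.4]  # candidate integral image coding feature
fused = fuse_by_splicing(sequence_coding, integral_feature)
# The first image decoding model would then decode (predict) from `fused`,
# so the prediction of the second sub-sequence also sees global semantics.
```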
And step S112e, updating the candidate integral image coding feature based on the decoded sample image segmentation sequence and the second sample image segmentation sequence, and marking the updated candidate integral image coding feature as a corresponding target integral image coding feature.
In the embodiment of the present application, after the decoded sample image segmentation sequence is obtained, the candidate integral image coding feature may be updated based on the decoded sample image segmentation sequence and the second sample image segmentation sequence, and the updated candidate integral image coding feature may be marked to be a corresponding target integral image coding feature. For example, the candidate global image coding feature may be updated based on an error between the decoded sample image segmentation sequence and the second sample image segmentation sequence (i.e. an error between prediction and actual), such that global semantic information of the target global image coding feature characterization may be more accurate and reliable.
Step S112f, performing decoding processing on the target overall image coding feature by using a first image decoding model included in the initial neural network, to obtain a decoded sample panel image corresponding to the first sample panel image, performing error calculation on the initial neural network based on the first sample panel image and the decoded sample panel image, to obtain a corresponding target decoding error, and performing update optimization on network parameters of the initial neural network along a direction of reducing the target decoding error, to obtain an intermediate neural network corresponding to the initial neural network.
In the embodiment of the application, after the target integral image coding feature is obtained, a first image decoding model included in the initial neural network may be utilized to perform decoding processing on the target integral image coding feature to obtain a decoded sample panel image corresponding to the first sample panel image, then, based on the first sample panel image and the decoded sample panel image, error calculation may be performed on the initial neural network to obtain a corresponding target decoding error (i.e., an error between actual and predicted), and network parameters of the initial neural network may be updated and optimized along a direction of reducing the target decoding error to obtain an intermediate neural network corresponding to the initial neural network. That is, the output layer in the first image decoding model may have at least two branches, such as a first branch and a second branch, wherein the first branch may be used to decode (predict) the fusion feature, so that a decoded sample image segmentation sequence corresponding to the first sample image segmentation sequence may be obtained. The second branch may be used to perform decoding processing on the target overall image coding feature to obtain a decoded sample panel image corresponding to the first sample panel image.
Alternatively, in the step S112b, the specific manner of mining the candidate integral image coding features of the first sample panel image is not limited, and may be selected according to actual requirements.
For example, in an alternative embodiment, in order to enable the candidate global image coding feature to fully characterize the semantic information in the first sample panel image, the step S112b may further include a step b1, a step b2, and a step b3, where the details of each step are as follows.
And b1, aiming at each local sample panel image in the sample image segmentation sequence, carrying out coding processing on the local sample panel image by using a first image coding model included in the initial neural network to obtain local image coding characteristics corresponding to the local sample panel image.
In the embodiment of the present application, after the sample image segmentation sequence is obtained, for each local sample panel image in the sample image segmentation sequence, the local sample panel image may be subjected to encoding processing (including convolution and pooling as described above, so as to implement the corresponding feature mining) by using a first image coding model included in the initial neural network, so as to obtain a local image coding feature corresponding to the local sample panel image. The local image coding feature may be used to reflect the local semantic information of the local sample panel image.
And b2, respectively carrying out feature compression processing on the local image coding features corresponding to each local sample panel image to obtain compressed image coding features corresponding to each local image coding feature.
In the embodiment of the present application, after obtaining the local image coding feature corresponding to each local sample panel image, feature compression processing may be performed on the local image coding feature corresponding to each local sample panel image, so as to obtain a compressed image coding feature corresponding to each local image coding feature. The feature sizes of every two compressed image coding features are consistent, and each compressed image coding feature comprises at least one feature parameter. Illustratively, the feature compression processing may be performed on the local image coding feature corresponding to the local sample panel image 1 to obtain a corresponding compressed image coding feature 1, the feature compression processing may be performed on the local image coding feature corresponding to the local sample panel image 2 to obtain a corresponding compressed image coding feature 2, and the feature compression processing may be performed on the local image coding feature corresponding to the local sample panel image 3 to obtain a corresponding compressed image coding feature 3. In addition, the mode of performing feature compression processing on the local image coding features corresponding to each local sample panel image can be consistent, so that the attention points can be consistent. Based on the method, the feature size of the compressed image coding feature can be smaller by carrying out feature compression processing on the local image coding feature, and the data volume, complexity and the like of subsequent processing can be reduced under the condition that the local significance feature of the local sample panel image is reserved.
And b3, combining the compressed image coding features corresponding to each local image coding feature to form candidate integral image coding features of the first sample panel image.
In the embodiment of the present application, after obtaining the compressed image coding feature corresponding to each local image coding feature, the compressed image coding features corresponding to the local image coding features may be combined to form the candidate integral image coding feature of the first sample panel image. Illustratively, in an alternative embodiment, the compressed image coding features corresponding to the local image coding features may be directly spliced to obtain the candidate integral image coding feature of the first sample panel image, such as {compressed image coding feature 1, compressed image coding feature 2, compressed image coding feature 3, ...}. In another alternative embodiment, for each of the compressed image coding features, attention processing may be performed on the compressed image coding feature based on related compressed image coding features (for example, the compressed image coding features corresponding to local sample panel images adjacent to it in the sample image segmentation sequence) to obtain an attention feature corresponding to the compressed image coding feature, and then the attention features may be fused (for example, spliced) to obtain the candidate integral image coding feature.
It should be further noted that, in the above embodiment, the specific manner of performing the feature compression processing on the local image coding feature may be selected according to actual requirements.
For example, in an alternative embodiment, the local image coding features may be subjected to a process such as mean pooling to obtain corresponding compressed image coding features.
For another example, in another alternative embodiment, in order to ensure that the obtained compressed image coding feature effectively reflects the salient feature of the local image coding feature, the step b2 may further include the following specific implementation matters:
Firstly, for each local sample panel image in the sample image segmentation sequence, carrying out feature screening processing on the local image coding feature corresponding to the local sample panel image, so as to select the feature parameter with the maximum value in the local image coding feature;
secondly, constructing the compressed image coding feature corresponding to each local image coding feature based on the feature parameter with the maximum value in that local image coding feature; that is, for each local image coding feature, the feature parameter with the maximum value may be directly used as the corresponding compressed image coding feature.
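The max-value compression described in step b2 amounts to taking a global max over each local feature vector. A minimal sketch, assuming the feature shapes and the direct-stitching combination (the patent leaves both open):

```python
import numpy as np

def compress_local_features(local_features):
    """Keep, for each local image coding feature, only the single
    feature parameter with the maximum value (the screening rule of step b2)."""
    return [float(np.max(f)) for f in local_features]

def build_candidate_feature(local_features):
    """Directly stitch the compressed values into one candidate integral
    image coding feature (one of the combination options mentioned above)."""
    return np.array(compress_local_features(local_features))

# Hypothetical local image coding features for three local sample panel images.
local_feats = [np.array([0.1, 0.9, 0.3]),
               np.array([2.0, 1.0, 0.5]),
               np.array([0.2, 0.4, 0.8])]
candidate = build_candidate_feature(local_feats)
```

This is equivalent to global max pooling over each local feature, which preserves the most salient activation while discarding the rest.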
Alternatively, in the step S112e, the specific manner of updating the candidate whole image coding feature is not limited, and may be selected according to actual requirements.
For example, in an alternative embodiment, in order to enable the updated candidate global image coding feature to have a better characterizing capability on the image information in the first sample panel image, the step S112e may further include a step e1, a step e2, a step e3, and a step e4, where the specific content of each step is as follows.
And e1, performing error calculation on the initial neural network based on the decoding sample image segmentation sequence and the second sample image segmentation sequence to obtain a corresponding local decoding error.
In the embodiment of the present application, the error calculation may be performed on the initial neural network based on the decoded sample image segmentation sequence and the second sample image segmentation sequence, so as to obtain a corresponding local decoding error (which may also be understood as a prediction error), where the local decoding error is used to reflect the difference between the decoded sample image segmentation sequence and the second sample image segmentation sequence.
And e2, updating and optimizing the network parameters of the initial neural network along the direction of reducing the local decoding error to obtain the undetermined neural network corresponding to the initial neural network.
In the embodiment of the application, after the local decoding error is obtained, the network parameters of the initial neural network can be updated and optimized along the direction of reducing the local decoding error until the local decoding error converges or the update and optimization times are larger than the preset times, and the undetermined neural network corresponding to the initial neural network is obtained.
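The update-until-convergence loop in step e2 can be sketched generically. The optimizer, learning rate, and stopping thresholds below are all assumptions; the patent only requires updating along the error-reducing direction until the error converges or a preset iteration count is exceeded:

```python
import numpy as np

def optimize_until_converged(params, grad_fn, loss_fn, lr=0.01,
                             tol=1e-6, max_updates=1000):
    """Update parameters in the direction that reduces the local decoding
    error, stopping when the error change falls below `tol` (convergence)
    or the number of updates reaches `max_updates` (preset count)."""
    prev_loss = loss_fn(params)
    for _ in range(max_updates):
        params = params - lr * grad_fn(params)   # step along the error-reducing direction
        loss = loss_fn(params)
        if abs(prev_loss - loss) < tol:          # error has converged
            break
        prev_loss = loss
    return params

# Toy stand-in for the network: minimize (w - 3)^2, optimum at w = 3.
w = optimize_until_converged(np.array(0.0),
                             grad_fn=lambda w: 2 * (w - 3),
                             loss_fn=lambda w: (w - 3) ** 2)
```

In practice this loop is realized by a framework optimizer (e.g. SGD or Adam) over the full network parameters; the sketch only illustrates the two stopping criteria named in the text.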
And e3, utilizing a first image coding model included in the undetermined neural network, and mining out new candidate integral image coding features of the first sample panel image based on the sample image segmentation sequence.
In the embodiment of the application, after the undetermined neural network is obtained, a first image coding model in the undetermined neural network can be utilized to dig out new candidate integral image coding features of the first sample panel image based on the sample image segmentation sequence. That is, the feature mining may be performed again on the sample image segmentation sequence using the updated first image coding model, thereby obtaining new candidate overall image coding features.
And e4, updating the candidate integral image coding features based on the new candidate integral image coding features, and marking the updated candidate integral image coding features to form corresponding target integral image coding features.
In the embodiment of the application, after the new candidate integral image coding feature is mined, the candidate integral image coding feature can be updated based on the new candidate integral image coding feature, and the updated candidate integral image coding feature is marked to form a corresponding target integral image coding feature. For example, the new candidate integral image coding feature may be directly used as the target integral image coding feature, or the new candidate integral image coding feature and the candidate integral image coding feature may be subjected to processes such as superposition or weighted superposition, so as to obtain the target integral image coding feature.
It will be appreciated that in the above step S115, the specific manner of performing the supervised network optimization process on the candidate neural network is not limited, and may be selected according to actual requirements.
For example, in an alternative embodiment, in order to enable the target neural network to have a better information capturing capability (such as the mined feature can have a better characterizing capability), the step S115 may further include a step S115a, a step S115b, a step S115c, and a step S115d, where details of each step are as follows.
Step S115a, performing segmentation serialization processing on the second sample panel image to obtain a to-be-mined image segmentation sequence corresponding to the second sample panel image.
In the embodiment of the application, after the second sample panel image is obtained, the second sample panel image may be subjected to segmentation serialization processing to obtain the image segmentation sequence to be mined corresponding to the second sample panel image. The image segmentation sequence to be mined comprises a plurality of local panel images to be mined arranged in sequence, where the arrangement relation among the local panel images to be mined corresponds to their positions in the second sample panel image, and all the local panel images to be mined have the same image size. For other details of the image segmentation sequence to be mined, reference may be made to the description of the sample image segmentation sequence.
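The segmentation serialization step can be illustrated with a simple row-major patch split. Equal patch sizes and exact divisibility of the image dimensions are assumptions made for this sketch:

```python
import numpy as np

def segment_serialize(image, patch_h, patch_w):
    """Split a panel image into equally sized local images, ordered
    row-major so that the sequence order reflects each patch's position
    in the original image (assumes the image size is an exact multiple
    of the patch size)."""
    h, w = image.shape[:2]
    assert h % patch_h == 0 and w % patch_w == 0
    sequence = []
    for top in range(0, h, patch_h):
        for left in range(0, w, patch_w):
            sequence.append(image[top:top + patch_h, left:left + patch_w])
    return sequence

# A 4x4 toy "panel image" split into four 2x2 local images.
patches = segment_serialize(np.arange(16).reshape(4, 4), 2, 2)
```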
And step S115b, mining the whole image coding characteristics of the second sample panel image based on the image segmentation sequence to be mined by using a second image coding model included in the candidate neural network.
In the embodiment of the present application, after the image segmentation sequence to be mined is obtained, the integral image coding feature of the second sample panel image may be mined based on the image segmentation sequence to be mined by using a second image coding model included in the candidate neural network. For the specific processing, reference may be made to the foregoing explanation of using the first image coding model to mine the sample image segmentation sequence to obtain the candidate integral image coding feature of the first sample panel image.
And step S115c, performing decoding processing on the whole image coding features of the second sample panel image by using a second image decoding model included in the candidate neural network to obtain a decoded and labeled sample panel image corresponding to the second sample panel image.
In the embodiment of the application, after the integral image coding feature is obtained, the integral image coding feature of the second sample panel image can be decoded by using a second image decoding model included in the candidate neural network, so as to obtain a decoded and labeled sample panel image corresponding to the second sample panel image. The decoded and labeled sample panel image has corresponding labeling information of the panel edge. Illustratively, by performing the decoding processing, a probability value that each pixel point in the second sample panel image belongs to a panel edge may be obtained, for example: {(100, 150, 0.1), (101, 150, 0.3), (102, 150, 0.6), (103, 150, 0.7), (104, 150, 0.1), (105, 150, 0.2), (106, 150, 0.3), ...}, where, in each group of parameters, the first two parameters characterize the coordinates of the pixel point and the third parameter characterizes the probability value.
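The per-pixel probabilities above can be turned into edge labels with a simple threshold. The 0.5 cut-off is an assumption made for this sketch; the patent only states that a probability value is produced for each pixel point:

```python
def edge_pixels_from_probs(pixel_probs, threshold=0.5):
    """Turn decoded (x, y, probability) triples into the list of pixel
    points labelled as panel edge (threshold value is illustrative)."""
    return [(x, y) for x, y, p in pixel_probs if p >= threshold]

# A slice of the example triples from the text.
probs = [(100, 150, 0.1), (101, 150, 0.3), (102, 150, 0.6), (103, 150, 0.7)]
edges = edge_pixels_from_probs(probs)
```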
And step S115d, determining an edge recognition error of the candidate neural network based on the label information of the corresponding panel edge in the decoded and labeled sample panel image and the image label information corresponding to the second sample panel image, and updating and optimizing the network parameters of the candidate neural network along the direction of reducing the edge recognition error to obtain the corresponding target neural network.
In the embodiment of the present application, when the decoded and labeled sample panel image is obtained, an edge recognition error (for example, the edge recognition error may be determined based on a cross entropy loss function) of the candidate neural network may be determined based on label information of a corresponding panel edge in the decoded and labeled sample panel image and image label information corresponding to the second sample panel image, and network parameters of the candidate neural network may be updated and optimized along a direction of reducing the edge recognition error, so as to obtain a corresponding target neural network.
In the above embodiments, when network optimization is performed based on an error, the parameters of the network may be adjusted in the direction of reducing the error until the error is reduced to a target error, the reduction amplitude of the error is smaller than a target amplitude, or the number of optimization iterations is larger than a preset number. In addition, in the above embodiments, the specific manner of calculating the error is not limited and may be selected according to actual requirements, such as mean squared error (MSE).
In the second aspect, it should be noted that, in step S120, a specific manner of performing the edge line determination processing on the panel image to be identified is not limited, and may be selected according to actual requirements.
For example, in an alternative embodiment, the edge recognition result includes each pixel point belonging to the panel edge, so that straight line fitting processing may be directly performed on each pixel point belonging to the panel edge to obtain an edge straight line determination result of the target panel in the panel image to be recognized, that is, a fitted straight line (line segment) is obtained.
For another example, in another alternative embodiment, in order to ensure the reliability of the obtained edge straight line determination result, the step S120 may further include a step S121, a step S122, and a step S123, and the specific contents of each step are as follows.
Step S121, in the panel image to be identified, performing a scanning process on an edge line of the target panel reflected by the edge identification result, to obtain at least one scanning result corresponding to the edge identification result.
In the embodiment of the application, after the edge recognition result is obtained, the edge line of the target panel reflected by the edge recognition result can be scanned in the panel image to be recognized, so as to obtain at least one scanning result corresponding to the edge recognition result. Each scanning result is used for reflecting the scanned pixel points on the edge line. Illustratively, any of the at least one scan result may be used to reflect an edge line.
Step S122, for each scanning result, performing edge line determination processing on the panel image to be identified based on the scanned pixel points on the edge line reflected by the scanning result, to obtain a local edge line determination result of the target panel in the panel image to be identified.
In the embodiment of the present application, after the at least one scanning result is obtained, for each scanning result, based on the pixel points scanned on the edge line reflected by the scanning result, edge straight line determination processing may be performed on the panel image to be identified, so as to obtain a local edge straight line determination result of the target panel in the panel image to be identified. Wherein the edge straight line local determination result is used for reflecting the edge straight line determined based on the corresponding scanning result. For example, a straight line fitting process may be performed on the pixel points scanned on the edge line reflected by the scanning result, so as to obtain a corresponding edge straight line local determination result.
Step S123, determining an edge straight line determination result of the target panel in the panel image to be identified based on the edge straight line local determination result corresponding to each scanning result.
In the embodiment of the present application, after obtaining the edge line local determination result corresponding to each scanning result, each edge line local determination result may be respectively used as an edge line determination result, so that at least one edge line determination result may also be obtained. As shown in fig. 3, an exemplary panel without defects may have 8 edge lines, and accordingly, 8 edge line determination results may be obtained. The fifth edge line, the sixth edge line, the seventh edge line and the eighth edge line may also be curved (e.g. edge chamfer).
It is to be understood that, in the above-described step S121, the specific manner of performing the scanning process on the edge line of the target panel reflected by the edge recognition result is not limited, and may be selected according to the need.
Illustratively, in an alternative embodiment, in order to enable the at least one scan result to sufficiently reflect the distribution of the panel edges in the panel image to be identified, the step S121 may further include a step S121a, a step S121b, a step S121c, and a step S121d, where the specific contents of each step are as follows.
Step S121a, in the panel image to be identified, performing scanning along a column direction of the pixel distribution, determining a first intersecting pixel point between each column scanning line and an edge line of the target panel reflected by the edge identification result, and constructing and forming an intersecting pixel point set corresponding to the column direction based on the first intersecting pixel point corresponding to each column scanning line.
In the embodiment of the present application, after the edge recognition result is obtained, scanning processing may be performed in the to-be-recognized panel image along the column direction of the pixel distribution (as shown in fig. 4), so as to determine a first intersecting pixel point between each column scanning line and the edge line of the target panel reflected by the edge recognition result, and based on the first intersecting pixel point corresponding to each column scanning line, a set of intersecting pixel points corresponding to the column direction is formed. For example, as shown in fig. 3, for the first edge line, the fifth edge line, and the eighth edge line, a corresponding set of intersecting pixels may be obtained, and for the third edge line, the sixth edge line, and the seventh edge line, a corresponding set of intersecting pixels may be obtained.
Step S121b, performing scanning along a row direction of the pixel distribution in the panel image to be identified, determining a first intersecting pixel point between each row scanning line and an edge line of the target panel reflected by the edge identification result, and constructing and forming an intersecting pixel point set corresponding to the row direction based on the first intersecting pixel point corresponding to each row scanning line.
In the embodiment of the present application, after the edge recognition result is obtained, scanning processing may be performed in the panel image to be recognized along the row direction in which the pixels are distributed (as shown in fig. 4), so as to determine a first intersecting pixel point between each row scanning line and the edge line of the target panel reflected by the edge recognition result, and based on the first intersecting pixel point corresponding to each row scanning line, a set of intersecting pixel points corresponding to the row direction is formed. For example, as shown in fig. 3, for the second edge line, the fifth edge line, and the sixth edge line, a corresponding set of intersecting pixels may be obtained, and for the fourth edge line, the seventh edge line, and the eighth edge line, a corresponding set of intersecting pixels may be obtained.
Step S121c, determining an intersection between the intersecting pixel point set corresponding to the column direction and the intersecting pixel point set corresponding to the row direction, to obtain an intersecting pixel point intersection.
In the embodiment of the present application, after the intersection pixel point set corresponding to the column direction and the intersection pixel point set corresponding to the row direction are obtained, an intersection between the intersection pixel point set corresponding to the column direction and the intersection pixel point set corresponding to the row direction may be further determined, so as to obtain an intersection of intersection pixels, for example, further referring to fig. 3, an intersection between one intersection pixel point set corresponding to a first edge line, a fifth edge line, and an eighth edge line and one intersection pixel point set corresponding to a second edge line, a fifth edge line, and a sixth edge line, which actually refers to an intersection pixel point intersection corresponding to a fifth edge line; for another example, the intersection between one set of intersecting pixels corresponding to the first, fifth and eighth edge lines and one set of intersecting pixels corresponding to the fourth, seventh and eighth edge lines actually refers to the intersection of intersecting pixels corresponding to the eighth edge line; for another example, the intersection between one set of intersecting pixels corresponding to the third, sixth and seventh edge lines and one set of intersecting pixels corresponding to the fourth, seventh and eighth edge lines actually refers to the intersection of intersecting pixels corresponding to the seventh edge line; for another example, the intersection between one set of intersecting pixels corresponding to the third, sixth, and seventh edge lines and one set of intersecting pixels corresponding to the second, fifth, and sixth edge lines actually refers to the intersection of intersecting pixels corresponding to the sixth edge line.
Step S121d, determining the intersection of the intersecting pixel points, other intersecting pixel points other than the intersection of the intersecting pixel points in the intersecting pixel point set corresponding to the column direction, and other intersecting pixel points other than the intersection of the intersecting pixel points in the intersecting pixel point set corresponding to the row direction as scanning results corresponding to the edge recognition results, so as to obtain at least three corresponding scanning results.
In the embodiment of the present application, after the intersection of the intersecting pixel points is obtained, other intersecting pixel points other than the intersection of the intersecting pixel points in the intersecting pixel point set corresponding to the column direction, and other intersecting pixel points other than the intersection of the intersecting pixel points in the intersecting pixel point set corresponding to the row direction are determined as the scan results corresponding to the edge recognition result, so as to obtain at least three corresponding scan results, for example, a scan result corresponding to a first edge line, a scan result corresponding to a second edge line, a scan result corresponding to a third edge line, a scan result corresponding to a fourth edge line, a scan result corresponding to a fifth edge line, a scan result corresponding to a sixth edge line, a scan result corresponding to a seventh edge line, and a scan result corresponding to an eighth edge line.
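Steps S121a through S121c can be sketched on a boolean edge mask. Interpreting "first intersecting pixel point" as the first edge pixel each scan line meets is an assumption about the scanning order made for this sketch:

```python
import numpy as np

def first_hits_per_column(edge_mask):
    """Scan along the column direction: for each column scan line, record
    the first pixel point that intersects the recognized edge
    (edge_mask is a boolean H x W array)."""
    hits = set()
    h, w = edge_mask.shape
    for col in range(w):
        rows = np.nonzero(edge_mask[:, col])[0]
        if rows.size:
            hits.add((int(rows[0]), col))
    return hits

def first_hits_per_row(edge_mask):
    """Same scan along the row direction, one hit per row scan line."""
    hits = set()
    h, w = edge_mask.shape
    for row in range(h):
        cols = np.nonzero(edge_mask[row, :])[0]
        if cols.size:
            hits.add((row, int(cols[0])))
    return hits

mask = np.zeros((3, 3), dtype=bool)
mask[0, :] = True                      # a horizontal edge along the top row
col_hits = first_hits_per_column(mask)
row_hits = first_hits_per_row(mask)
shared = col_hits & row_hits           # the intersecting-pixel intersection of step S121c
```

Pixels in `shared` are seen by both scan directions (e.g. a chamfer edge), while pixels unique to one set belong to edges roughly parallel to the other scan direction, matching the partition described in step S121d.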
It may be appreciated that, in the step S122, the specific manner of performing the edge straight line determination processing on the panel image to be identified is not limited, and may be selected according to actual requirements.
For example, in an alternative embodiment, in order to ensure the reliability of the obtained edge straight line local determination result, the step S122 may further include a step S122a and a step S122b, where the details of each step are as follows.
Step S122a, for each scanning result, performing outlier screening processing on the scanned pixels on the edge line reflected by the scanning result, to obtain a target pixel set corresponding to the scanning result.
In the embodiment of the application, for each scanning result, abnormal point screening processing may be performed on the scanned pixel points on the edge line reflected by the scanning result, so as to obtain a target pixel point set corresponding to the scanning result. For example, outlying pixel points may be screened out: for each pixel point, the minimum distance between the pixel point and every other pixel point may be calculated, and the pixel point may be screened out when that minimum distance is greater than a preset distance. It should be noted that, in other embodiments, outlying pixel points may be determined in other manners and then screened out.
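The minimum-distance screening rule of step S122a can be sketched as follows; the preset distance of 5.0 pixels is an illustrative assumption:

```python
import math

def screen_outliers(points, max_min_distance=5.0):
    """Drop pixel points whose minimum distance to every other pixel point
    exceeds a preset distance (the screening rule described in step S122a;
    the threshold value is an assumption)."""
    kept = []
    for i, (x1, y1) in enumerate(points):
        d_min = min(math.hypot(x1 - x2, y1 - y2)
                    for j, (x2, y2) in enumerate(points) if j != i)
        if d_min <= max_min_distance:
            kept.append((x1, y1))
    return kept

# Three clustered edge pixels and one distant outlier.
pts = [(0, 0), (1, 0), (2, 0), (100, 100)]
filtered = screen_outliers(pts)
```

This brute-force version is O(n^2); for large pixel sets a k-d tree nearest-neighbor query would serve the same purpose.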
Step S122b, respectively performing a straight line fitting process on the pixel points in the target pixel point set corresponding to each scanning result, to obtain a local determination result of the straight line of each edge of the target panel in the panel image to be identified.
In the embodiment of the present application, after obtaining the target pixel point set corresponding to each scanning result, a line fitting process may be performed on the pixel points in the target pixel point set corresponding to each scanning result (a specific manner of the line fitting process may refer to the related prior art, and no specific limitation is made here), so as to obtain a local determination result of the line of each edge of the target panel in the panel image to be identified. The edge straight line local determination results, the scanning results and the target pixel point set have a one-to-one correspondence relationship, and each edge straight line local determination result is used for reflecting one fitted edge straight line. As shown in fig. 3, there are eight edge lines, and there may be eight scan results, corresponding to eight edge line local determination results.
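A least-squares fit is one common choice for the straight line fitting that the text leaves to the prior art; a minimal sketch using `np.polyfit`, assuming the edge is not vertical so a y = k*x + b parameterization applies:

```python
import numpy as np

def fit_edge_line(points):
    """Least-squares straight-line fit y = k*x + b over the screened
    target pixel point set of one scan result."""
    xs = np.array([p[0] for p in points], dtype=float)
    ys = np.array([p[1] for p in points], dtype=float)
    k, b = np.polyfit(xs, ys, 1)   # degree-1 polynomial: slope and intercept
    return float(k), float(b)

# Pixels lying exactly on y = 2x + 1.
k, b = fit_edge_line([(0, 1), (1, 3), (2, 5)])
```

For near-vertical edges one would instead fit x as a function of y (or use a total-least-squares method such as `cv2.fitLine`) to avoid an unbounded slope.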
In the third aspect, in step S130, it should be noted that, a specific manner of analyzing the defect recognition result of the target panel based on the edge straight line determination result is not limited, and may be selected according to actual requirements.
For example, in an alternative embodiment, the edge straight line determination result and the edge recognition result may be directly compared and analyzed to determine whether the corresponding defect recognition result, such as whether the edge has a defect such as a protrusion or a depression, is determined.
For another example, in another alternative embodiment, in order to make the defect recognition result have higher reliability, the step S130 may further include a step S131 and a step S132, where the specific content of each step is as follows.
Step S131, for each edge line determination result belonging to the target type, comparing and analyzing the edge line determination result with an edge recognition sub-result corresponding to the edge line determination result included in the edge recognition result, to obtain a defect recognition sub-result corresponding to the edge line determination result.
In the embodiment of the present application, after the edge straight line determination results are obtained, they may be further classified. For each edge straight line determination result belonging to the target type, the edge straight line determination result may be compared with the edge recognition sub-result corresponding to it in the edge recognition result, so as to obtain a defect recognition sub-result corresponding to the edge straight line determination result. The edge recognition result comprises at least one edge recognition sub-result, any one of which is used to reflect the distribution of one edge of the target panel in the panel image to be recognized, and each edge straight line reflected by an edge straight line determination result belonging to the target type is distributed in the panel image to be recognized along the row direction or the column direction. As shown in fig. 3, the edge straight line determination results belonging to the target type may refer to, for example, those corresponding to the first edge line, the second edge line, the third edge line and the fourth edge line. The edge straight line determination result corresponding to the first edge line may thus be compared with its edge recognition sub-result to determine whether the first edge line has a defect such as a protrusion or a depression. For example, a defect such as a protrusion or a depression may be considered to exist as long as there is a pixel point in the edge recognition sub-result that does not lie on the straight line corresponding to the edge straight line determination result; alternatively, such a defect may be considered to exist when there is a pixel point in the edge recognition sub-result whose distance from the straight line corresponding to the edge straight line determination result is greater than a reference distance (which may be configured according to practical requirements). Reference may specifically be made to fig. 5.
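The reference-distance comparison described above reduces to a point-to-line distance test. The y = k*x + b line parameterization and the threshold value below are assumptions made for this sketch:

```python
import math

def has_bump_or_dent(edge_pixels, k, b, reference_distance=2.0):
    """Compare recognized edge pixels with a fitted line y = k*x + b:
    if any pixel lies farther from the line than a reference distance,
    report a protrusion/depression defect (threshold is configurable,
    as the text notes)."""
    denom = math.sqrt(k * k + 1.0)
    for x, y in edge_pixels:
        if abs(k * x - y + b) / denom > reference_distance:
            return True
    return False

straight = [(0, 0), (1, 0), (2, 0)]            # pixels on the line y = 0
bumped = [(0, 0), (1, 5), (2, 0)]              # (1, 5) deviates by 5 px
flat_defective = has_bump_or_dent(straight, k=0.0, b=0.0)
bump_defective = has_bump_or_dent(bumped, k=0.0, b=0.0)
```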
Step S132, for each edge straight line determination result not belonging to the target type, determining the size information of the edge straight line reflected by the edge straight line determination result, and analyzing the defect recognition result of the target panel based on the size information.
In the embodiment of the application, for each edge straight line determination result which does not belong to the target type, the size information of the edge straight line reflected by the edge straight line determination result can be determined, and the defect identification result of the target panel can be analyzed based on the size information. As illustrated in fig. 3, the edge straight line determination results not belonging to the target type may refer to, for example, those corresponding to the fifth edge line, the sixth edge line, the seventh edge line and the eighth edge line. Considering that these edge lines are generally short (e.g., belonging to an edge chamfer or the like), it is generally difficult for a protruding or recessed defect to occur on them or to be effectively identified, so whether a corresponding defect exists may instead be determined by a comparative analysis of the size information.
It is to be understood that, in the step S132, a specific manner of analyzing the defect recognition result of the target panel based on the size information is not limited, and may be selected according to the requirement.
Illustratively, in an alternative embodiment, in order to ensure reliable defect recognition based on each edge straight line determination result not belonging to the target type, the step S132 may further include a step S132a, a step S132b, and a step S132c, where the specific contents of each step are as follows.
Step S132a, for each edge straight line determination result not belonging to the target type, determines the size information of the edge straight line reflected by the edge straight line determination result.
In the embodiment of the present application, for each edge straight line determination result that does not belong to the target type, the size information of the edge straight line reflected by the edge straight line determination result may be determined, such as the length information in the row direction of the pixel distribution or the length information in the column direction of the pixel distribution; the size information may also be referred to as the length information of the edge straight line (line segment).
Step S132b, respectively performing matching processing on the size information corresponding to each edge straight line determination result and the reference size information configured for the size information in advance, to obtain corresponding matching data.
In the embodiment of the application, after the size information of the edge straight line is determined, the size information corresponding to each edge straight line determination result and the reference size information configured for the size information in advance can be subjected to matching processing to obtain corresponding matching data. For example, the reference size information may be a size interval, such as a size interval in a row direction and a size interval in a column direction, so that it may be determined whether the length information in the row direction belongs to the size interval in the row direction and the length information in the column direction belongs to the size interval in the column direction in the size information corresponding to the edge straight line determination result, respectively, to obtain the corresponding matching data.
Step S132c, for each edge line determination result not belonging to the target type, generating a defect recognition result corresponding to the edge line determination result for reflecting that the target panel has a defect if the matching data corresponding to the edge line determination result reflects that the size information does not match the reference size information, and generating a defect recognition result corresponding to the edge line determination result for reflecting that the target panel has no defect if the matching data corresponding to the edge line determination result reflects that the size information matches the reference size information.
In the embodiment of the application, after the matching data are obtained, for each edge straight line determination result which does not belong to the target type: on one hand, if the matching data corresponding to the edge straight line determination result reflect that the size information does not match the reference size information, a defect identification result corresponding to the edge straight line determination result and used for reflecting that the target panel has a defect is generated; on the other hand, if the matching data reflect that the size information matches the reference size information, a defect recognition result used for reflecting that the target panel has no defect is generated. For example, if, in the size information corresponding to the edge straight line determination result, the length information in the row direction does not belong to the size interval in the row direction, or the length information in the column direction does not belong to the size interval in the column direction, a defect recognition result for reflecting that the target panel has a defect may be generated corresponding to the edge straight line determination result.
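The interval matching of steps S132b and S132c can be sketched directly. The reference intervals below are illustrative placeholders, since the patent configures them per actual requirement:

```python
def size_defect_result(row_length, col_length,
                       row_interval=(8.0, 12.0), col_interval=(8.0, 12.0)):
    """Match an edge segment's row-direction and column-direction lengths
    against preset reference size intervals; a mismatch in either
    direction yields a 'defect' result (interval bounds are illustrative)."""
    row_ok = row_interval[0] <= row_length <= row_interval[1]
    col_ok = col_interval[0] <= col_length <= col_interval[1]
    return "no_defect" if (row_ok and col_ok) else "defect"
```

For example, an edge chamfer measuring 10 px by 10 px matches both intervals, while one measuring 20 px in the row direction falls outside its interval and is flagged as defective.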
With reference to fig. 6, an embodiment of the present application further provides a panel edge recognition device applicable to the above electronic device. The panel edge recognition device can comprise an edge recognition module, an edge straight line determination module and a defect recognition module.
In detail, the edge recognition module may be configured to perform edge recognition processing on an acquired panel image to be recognized by using a target neural network formed through network optimization, so as to obtain an edge recognition result corresponding to the panel image to be recognized, where the panel image to be recognized is formed by performing image acquisition on a target panel, and the edge recognition result is used to reflect the edge distribution of the target panel in the panel image to be recognized. In the embodiment of the present application, the edge recognition module may be used to perform step S110 shown in fig. 2; for the relevant content of the edge recognition module, reference may be made to the foregoing description of step S110.
In detail, the edge straight line determining module may be configured to perform edge straight line determination processing on the panel image to be recognized based on the edge recognition result, so as to obtain an edge straight line determination result of the target panel in the panel image to be recognized. In the embodiment of the present application, the edge straight line determining module may be used to perform step S120 shown in fig. 2; for the relevant content of the edge straight line determining module, reference may be made to the foregoing description of step S120.
In detail, the defect recognition module may be configured to analyze a defect recognition result of the target panel based on the edge straight line determination result, where the defect recognition result is used to reflect whether the edge of the target panel has a defect. In the embodiment of the present application, the defect recognition module may be used to perform step S130 shown in fig. 2; for the relevant content of the defect recognition module, reference may be made to the foregoing description of step S130.
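The three-module device above forms a simple pipeline (S110 → S120 → S130). The following sketch shows one way such a device could be wired together; the class and callable interfaces are hypothetical, as the application does not prescribe an API:

```python
class PanelEdgeRecognitionDevice:
    """Sketch of the three-module panel edge recognition device.
    Each module is modeled as a callable stand-in (assumed interface)."""

    def __init__(self, edge_recognizer, line_determiner, defect_analyzer):
        self.edge_recognizer = edge_recognizer   # step S110: neural-network edge recognition
        self.line_determiner = line_determiner   # step S120: edge straight line determination
        self.defect_analyzer = defect_analyzer   # step S130: defect analysis

    def run(self, panel_image):
        edge_result = self.edge_recognizer(panel_image)
        line_result = self.line_determiner(panel_image, edge_result)
        return self.defect_analyzer(line_result)
```

Keeping the three stages as separate modules mirrors the method steps and lets each stage be tested or replaced independently.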
In an embodiment of the present application, corresponding to the above-mentioned panel edge recognition method applied to the electronic device, a computer readable storage medium is further provided, where a computer program is stored in the computer readable storage medium, and the computer program executes each step of the panel edge recognition method when running.
The steps executed when the computer program runs are not described in detail herein, and reference may be made to the explanation of the panel edge recognition method.
In summary, according to the panel edge recognition method and device, the electronic device, and the storage medium provided by the application, the acquired panel image to be recognized is first subjected to edge recognition processing by using a target neural network formed through network optimization, so as to obtain an edge recognition result corresponding to the panel image to be recognized; next, edge straight line determination processing is performed on the panel image to be recognized based on the edge recognition result, so as to obtain an edge straight line determination result of the target panel in the panel image to be recognized; the defect recognition result of the target panel may then be analyzed based on the edge straight line determination result. Since a neural network generally has good generalization capability (for example, it can maintain good recognition performance across differences between panel images caused by factors such as illumination and defects), performing edge recognition processing on the panel image to be recognized with the target neural network makes the edge recognition result relatively reliable. In addition, compared with a scheme that directly uses a neural network to identify edge defects, the target neural network does not need to attend to edge defects and can focus on edge recognition itself, so the reliability of the edge recognition result can be higher, and the subsequent defect analysis based on this more reliable edge recognition result is in turn more reliable. Therefore, the problem of relatively low reliability of panel edge recognition in the prior art can be effectively alleviated.
In addition, by adopting technical means such as edge straight line determination processing after the neural-network-based edge recognition processing, the final defect recognition result offers better interpretability (compared with a scheme that directly uses a neural network for defect recognition).
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus and method embodiments described above are merely illustrative, for example, flow diagrams and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, an electronic device, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method of identifying a panel edge, comprising:
performing edge recognition processing on an acquired panel image to be recognized by using a target neural network formed through network optimization to obtain an edge recognition result corresponding to the panel image to be recognized, wherein the panel image to be recognized is formed by performing image acquisition on a target panel, and the edge recognition result is used for reflecting edge distribution of the target panel in the panel image to be recognized;
based on the edge recognition result, carrying out edge straight line determination processing on the panel image to be recognized to obtain an edge straight line determination result of the target panel in the panel image to be recognized;
And analyzing a defect recognition result of the target panel based on the edge straight line determination result, wherein the defect recognition result is used for reflecting whether the edge of the target panel has a defect or not.
2. The method for recognizing the edge of the panel according to claim 1, wherein the step of performing edge line determination processing on the panel image to be recognized based on the edge recognition result to obtain an edge line determination result of the target panel in the panel image to be recognized includes:
In the panel image to be identified, scanning the edge line of the target panel reflected by the edge identification result to obtain at least one scanning result corresponding to the edge identification result, wherein each scanning result is used for reflecting the scanned pixel point on the edge line;
For each scanning result, performing edge line determination processing on the panel image to be identified based on the scanned pixel points on the edge line reflected by the scanning result to obtain an edge line local determination result of the target panel in the panel image to be identified, wherein the edge line local determination result is used for reflecting the edge line determined based on the corresponding scanning result;
And determining an edge straight line determination result of the target panel in the panel image to be identified based on the edge straight line local determination result corresponding to each scanning result.
3. The method for recognizing the edge of the panel according to claim 2, wherein the step of scanning the edge line of the target panel reflected by the edge recognition result in the panel image to be recognized to obtain at least one scanning result corresponding to the edge recognition result comprises the steps of:
Scanning along the column direction of pixel distribution in the panel image to be identified, determining a first crossed pixel point between each column scanning line and the edge line of the target panel reflected by the edge identification result, and constructing and forming a crossed pixel point set corresponding to the column direction based on the first crossed pixel point corresponding to each column scanning line;
Scanning along the row direction of pixel distribution in the panel image to be identified, determining a first crossed pixel point between each row scanning line and the edge line of the target panel reflected by the edge identification result, and constructing and forming a crossed pixel point set corresponding to the row direction based on the first crossed pixel point corresponding to each row scanning line;
Determining an intersection between the crossed pixel point set corresponding to the column direction and the crossed pixel point set corresponding to the row direction to obtain a crossed pixel point intersection;
And determining the intersection of the crossed pixel points, other crossed pixel points except the intersection of the crossed pixel points in the crossed pixel point set corresponding to the column direction and other crossed pixel points except the intersection of the crossed pixel points in the crossed pixel point set corresponding to the row direction as scanning results corresponding to the edge recognition results respectively so as to obtain at least three corresponding scanning results.
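The row/column scanning of claim 3 can be sketched on a binary edge mask: each column scanning line records the first edge pixel it crosses, each row scanning line does the same, and the hits are split into the shared intersection plus the column-only and row-only remainders. This is an illustrative sketch; the claim does not fix the scan direction, so top-down and left-to-right scanning is an assumption here:

```python
def scan_edge_mask(mask):
    """mask: 2-D list of 0/1 values, 1 marking an edge-line pixel.
    Returns (both, col_only, row_only) sets of (row, col) pixels:
    the intersection of column-scan and row-scan first hits, and the
    remaining hits unique to each scan direction (claim 3's three
    scanning results)."""
    rows, cols = len(mask), len(mask[0])
    col_hits = set()
    for c in range(cols):              # one scanning line per column
        for r in range(rows):          # scan top-down (assumed direction)
            if mask[r][c]:
                col_hits.add((r, c))   # first crossed pixel on this line
                break
    row_hits = set()
    for r in range(rows):              # one scanning line per row
        for c in range(cols):          # scan left-to-right (assumed direction)
            if mask[r][c]:
                row_hits.add((r, c))
                break
    both = col_hits & row_hits         # crossed pixel point intersection
    return both, col_hits - both, row_hits - both
```

Each returned set can then be handled as one scanning result for the subsequent edge straight line determination.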
4. The method for recognizing an edge of a panel according to claim 2, wherein the step of performing edge line determination processing on the panel image to be recognized based on the pixel points scanned on the edge line reflected by the scanning result for each of the scanning results to obtain a result of edge line local determination of the target panel in the panel image to be recognized includes:
For each scanning result, performing outlier screening treatment on the scanned pixel points on the edge line reflected by the scanning result to obtain a target pixel point set corresponding to the scanning result;
And respectively carrying out straight line fitting processing on pixel points in a target pixel point set corresponding to each scanning result to obtain each edge straight line local determination result of the target panel in the panel image to be identified, wherein the edge straight line local determination result, the scanning result and the target pixel point set have a one-to-one correspondence, and each edge straight line local determination result is used for reflecting one fitted edge straight line.
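Claim 4's two steps, outlier screening followed by straight line fitting, can be sketched with an ordinary least-squares fit. The claim does not prescribe a particular screening statistic; the residual-based rule below (discard points whose residual from a provisional fit exceeds the mean residual by a chosen number of standard deviations) is one simple choice among many:

```python
def fit_edge_line(points, thresh_sigmas=1.0):
    """Outlier screening + least-squares line fit.
    points: [(x, y), ...]  ->  (slope, intercept) of the refit line."""
    def lsq(pts):
        # Closed-form least-squares fit of y = k*x + b.
        n = len(pts)
        sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
        k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        return k, (sy - k * sx) / n

    k, b = lsq(points)                                   # provisional fit
    resid = [abs(y - (k * x + b)) for x, y in points]    # residuals
    mean = sum(resid) / len(resid)
    std = (sum((r - mean) ** 2 for r in resid) / len(resid)) ** 0.5
    kept = [p for p, r in zip(points, resid)
            if r <= mean + thresh_sigmas * std]          # screen outliers
    return lsq(kept)                                     # refit on target pixel set
```

With collinear scan points plus one stray pixel, the screened refit recovers the underlying edge line even though the provisional fit is skewed by the outlier.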
5. The method of claim 1, wherein the step of analyzing the defect recognition result of the target panel based on the edge straight line determination result includes:
Comparing and analyzing the edge straight line determination result with an edge recognition sub-result corresponding to the edge straight line determination result included in the edge recognition result aiming at each edge straight line determination result belonging to the target type to obtain a defect recognition sub-result corresponding to the edge straight line determination result, wherein the edge recognition result comprises at least one edge recognition sub-result, any one edge recognition sub-result is used for reflecting one edge distribution of the target panel in the panel image to be recognized, and the edge straight line reflected by each edge straight line determination result belonging to the target type is distributed along the row direction or the column direction in the panel image to be recognized;
And determining the size information of the edge straight line reflected by the edge straight line determining result aiming at each edge straight line determining result which does not belong to the target type, and analyzing the defect identifying result of the target panel based on the size information.
6. The method of claim 5, wherein the step of determining, for each edge line determination result not belonging to the target type, size information of an edge line reflected by the edge line determination result, and analyzing a defect recognition result possessed by the target panel based on the size information, includes:
determining the size information of the edge straight line reflected by the edge straight line determining result aiming at each edge straight line determining result which does not belong to the target type;
Respectively carrying out matching processing on the size information corresponding to each edge straight line determination result and the reference size information configured for the size information in advance to obtain corresponding matching data;
And aiming at each edge straight line determining result which does not belong to the target type, if the matching data corresponding to the edge straight line determining result reflects that the size information is not matched with the reference size information, generating a defect identifying result which corresponds to the edge straight line determining result and is used for reflecting that the target panel has defects, and if the matching data corresponding to the edge straight line determining result reflects that the size information is matched with the reference size information, generating a defect identifying result which corresponds to the edge straight line determining result and is used for reflecting that the target panel has no defects.
7. The method for recognizing the edges of the panel according to claim 1, wherein the step of performing edge recognition processing on the obtained panel image to be recognized by using the target neural network formed through network optimization to obtain the edge recognition result corresponding to the panel image to be recognized comprises the following steps:
Acquiring a first sample panel image and an initial neural network, wherein the initial neural network comprises a first image coding model and a first image decoding model;
Performing unsupervised network optimization processing on the initial neural network based on the first sample panel image to obtain an intermediate neural network corresponding to the initial neural network, wherein in the process of performing unsupervised network optimization processing, the basis of network optimization processing at least comprises errors between the first sample panel image and a decoded image output by the first image decoding model;
constructing a candidate neural network based on a second image decoding model and a first image coding model in the intermediate neural network, wherein the candidate neural network comprises the second image coding model and the second image decoding model, and model parameters of the second image coding model are the same as those of the first image coding model in the intermediate neural network;
acquiring a second sample panel image and image tag information corresponding to the second sample panel image, wherein the image tag information is used for identifying panel edges in the second sample panel image;
Performing supervised network optimization processing on the candidate neural network based on the second sample panel image and image label information corresponding to the second sample panel image to obtain a target neural network corresponding to the candidate neural network, wherein in the process of performing the supervised network optimization processing, the basis of the network optimization processing at least comprises errors between the image label information and a sample edge recognition result output by the second image decoding model;
And performing edge recognition processing on the acquired panel image to be recognized by using the target neural network to obtain an edge recognition result corresponding to the panel image to be recognized.
8. The method for recognizing a panel edge according to claim 7, wherein the step of performing an unsupervised network optimization process on the initial neural network based on the first sample panel image to obtain an intermediate neural network corresponding to the initial neural network comprises:
Carrying out segmentation serialization processing on the first sample panel image to obtain a sample image segmentation sequence corresponding to the first sample panel image, wherein the sample image segmentation sequence comprises a plurality of local sample panel images which are arranged in sequence, the arrangement relation among the plurality of local sample panel images is related to the position relation of the plurality of local sample panel images in the first sample panel image, and the image sizes among every two local sample panel images are consistent;
utilizing a first image coding model included in the initial neural network, and mining candidate integral image coding features of the first sample panel image based on the sample image segmentation sequence;
Determining a first sample image segmentation sequence and a second sample image segmentation sequence in the sample image segmentation sequence, wherein the first sample image segmentation sequence and the second sample image segmentation sequence belong to subsequences in the sample image segmentation sequence, and a correlation is formed between the first sample image segmentation sequence and the second sample image segmentation sequence;
Decoding to obtain a decoded sample image segmentation sequence corresponding to the first sample image segmentation sequence based on the first sample image segmentation sequence and the candidate integral image coding feature by using a first image coding model and a first image decoding model which are included in the initial neural network, wherein a decoding object of the first image decoding model comprises the candidate integral image coding feature and a coding result of the first image coding model on the first sample image segmentation sequence, and the decoded sample image segmentation sequence belongs to a reduction result of decoding processing on the second sample image segmentation sequence based on the first sample image segmentation sequence;
Updating the candidate integral image coding features based on the decoding sample image segmentation sequence and the second sample image segmentation sequence, and marking the updated candidate integral image coding features as corresponding target integral image coding features;
And decoding the target overall image coding characteristic by using a first image decoding model included in the initial neural network to obtain a decoded sample panel image corresponding to the first sample panel image, performing error calculation on the initial neural network based on the first sample panel image and the decoded sample panel image to obtain a corresponding target decoding error, and updating and optimizing network parameters of the initial neural network along the direction of reducing the target decoding error to obtain an intermediate neural network corresponding to the initial neural network.
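The segmentation serialization step of claim 8 splits the sample panel image into equally sized local images whose sequence order follows their positions in the original image. A minimal sketch on a 2-D list image follows; row-major traversal is an assumption, since the claim only requires the arrangement to be related to patch position:

```python
def segment_serialize(image, patch_h, patch_w):
    """Split a 2-D image (list of equal-length rows) into a sequence of
    patch_h x patch_w local images, ordered row-major so that sequence
    order tracks patch position.  All patches have identical size, as
    the claim requires."""
    h, w = len(image), len(image[0])
    assert h % patch_h == 0 and w % patch_w == 0, "patch size must divide image size"
    seq = []
    for top in range(0, h, patch_h):
        for left in range(0, w, patch_w):
            patch = [row[left:left + patch_w] for row in image[top:top + patch_h]]
            seq.append(patch)
    return seq
```

The resulting sequence is what the first image coding model consumes, and its sub-sequences serve as the first and second sample image segmentation sequences of the claim.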
9. The method of claim 8, wherein the step of mining candidate global image coding features of the first sample panel image based on the sample image segmentation sequence using a first image coding model included in the initial neural network comprises:
aiming at each local sample panel image in the sample image segmentation sequence, carrying out coding processing on the local sample panel image by using a first image coding model included in the initial neural network to obtain local image coding characteristics corresponding to the local sample panel image;
Performing feature compression processing on local image coding features corresponding to each local sample panel image respectively to obtain compressed image coding features corresponding to each local image coding feature, wherein feature sizes between every two compressed image coding features are consistent, and each compressed image coding feature comprises at least one feature parameter;
And combining the compressed image coding features corresponding to each local image coding feature to form candidate integral image coding features of the first sample panel image.
10. The method for recognizing a panel edge according to claim 9, wherein the step of performing feature compression processing on the local image coding feature corresponding to each of the local sample panel images to obtain a compressed image coding feature corresponding to each of the local image coding features includes:
for each local sample panel image in the sample image segmentation sequence, carrying out feature screening processing on local image coding features corresponding to the local sample panel image so as to screen out one feature parameter with the maximum value in the local image coding features;
And constructing compressed image coding features corresponding to each local image coding feature based on one feature parameter with the maximum value in each local image coding feature.
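The feature compression of claims 9 and 10 keeps, for each local image coding feature, the single parameter with the maximum value, so every patch contributes one parameter of equal size to the candidate integral feature. This is effectively a global max over each patch's feature vector, sketched below with plain lists standing in for the coding features:

```python
def compress_features(local_features):
    """Compress each local image coding feature (a list of feature
    parameters) to its maximum-valued parameter, then combine the
    results into the candidate integral image coding feature.
    Feature sizes after compression are trivially consistent (one
    parameter each)."""
    return [max(feat) for feat in local_features]
```

Because every compressed feature has the same size regardless of the patch content, the combined integral feature has a fixed, predictable length equal to the number of patches.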
11. The panel edge identification method according to claim 8, wherein the step of updating the candidate whole-image coding feature based on the decoded sample image segmentation sequence and the second sample image segmentation sequence, and marking the updated candidate whole-image coding feature to be a corresponding target whole-image coding feature, comprises:
performing error calculation on the initial neural network based on the decoding sample image segmentation sequence and the second sample image segmentation sequence to obtain a corresponding local decoding error;
Updating and optimizing network parameters of the initial neural network along the direction of reducing the local decoding error to obtain a pending neural network corresponding to the initial neural network;
Using a first image coding model included in the undetermined neural network, and mining new candidate integral image coding features of the first sample panel image based on the sample image segmentation sequence;
And updating the candidate integral image coding features based on the new candidate integral image coding features, and marking the updated candidate integral image coding features to form corresponding target integral image coding features.
12. The method for identifying a panel edge according to claim 7, wherein the step of performing supervised network optimization processing on the candidate neural network based on the second sample panel image and the image tag information corresponding to the second sample panel image to obtain the target neural network corresponding to the candidate neural network includes:
Carrying out segmentation serialization processing on the second sample panel image to obtain an image segmentation sequence to be mined corresponding to the second sample panel image, wherein the image segmentation sequence to be mined comprises a plurality of local panel images to be mined, which are arranged in sequence, the arrangement relation among the plurality of local panel images to be mined is related to the position relation of the plurality of local panel images to be mined in the second sample panel image, and the image sizes among every two local panel images to be mined are consistent;
digging out the integral image coding feature of the second sample panel image based on the image segmentation sequence to be mined by using a second image coding model included in the candidate neural network;
Decoding the whole image coding features of the second sample panel image by using a second image decoding model included in the candidate neural network to obtain a decoding labeling sample panel image corresponding to the second sample panel image, wherein the decoding labeling sample panel image has labeling information of a corresponding panel edge;
and determining an edge recognition error of the candidate neural network based on the label information of the corresponding panel edge in the decoded and labeled sample panel image and the image label information corresponding to the second sample panel image, and updating and optimizing network parameters of the candidate neural network along the direction of reducing the edge recognition error to obtain a corresponding target neural network.
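Both the unsupervised stage (claim 8) and the supervised stage (claim 12) end with the same pattern: compute an error and update the network parameters "along the direction of reducing" it. The toy sketch below illustrates that pattern on a single scalar parameter with a numeric gradient; it is not the application's network, merely the update rule in miniature:

```python
def optimize_toward_labels(param, forward, loss, steps=100, lr=0.1, eps=1e-6):
    """Repeatedly update a scalar parameter along the direction that
    reduces loss(forward(param)), using a central-difference gradient.
    forward: toy stand-in for encode+decode; loss: error against labels."""
    for _ in range(steps):
        grad = (loss(forward(param + eps)) - loss(forward(param - eps))) / (2 * eps)
        param -= lr * grad          # step in the error-reducing direction
    return param
```

In the patented method the parameter vector, forward pass, and error are those of the candidate neural network and its edge recognition error; a real implementation would use backpropagation rather than numeric differentiation.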
13. A panel edge identification device, comprising:
The edge recognition module is used for carrying out edge recognition processing on the acquired panel image to be recognized by utilizing a target neural network formed through network optimization to obtain an edge recognition result corresponding to the panel image to be recognized, wherein the panel image to be recognized is formed by carrying out image acquisition on a target panel, and the edge recognition result is used for reflecting edge distribution of the target panel in the panel image to be recognized;
the edge straight line determining module is used for determining the edge straight line of the panel image to be identified based on the edge identification result to obtain an edge straight line determining result of the target panel in the panel image to be identified;
And the defect identification module is used for analyzing a defect identification result of the target panel based on the edge straight line determination result, wherein the defect identification result is used for reflecting whether the edge of the target panel has a defect or not.
14. An electronic device, comprising:
A memory for storing a computer program;
a processor coupled to the memory for executing a computer program stored in the memory for implementing the panel edge identification method of any one of claims 1-12.
15. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when run, performs the panel edge identification method of any one of claims 1-12.
CN202410565205.8A 2024-05-09 2024-05-09 Panel edge recognition method and device, electronic equipment and storage medium Active CN118154899B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410565205.8A CN118154899B (en) 2024-05-09 2024-05-09 Panel edge recognition method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN118154899A true CN118154899A (en) 2024-06-07
CN118154899B CN118154899B (en) 2024-09-06

Family

ID=91293318

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120229618A1 (en) * 2009-09-28 2012-09-13 Takahiro Urano Defect inspection device and defect inspection method
US20190165309A1 (en) * 2017-11-29 2019-05-30 Samsung Display Co., Ltd. Display panel and manufacturing method thereof
CN110047113A (en) * 2017-12-29 2019-07-23 清华大学 Neural network training method and equipment, image processing method and equipment and storage medium
CN112052839A (en) * 2020-10-10 2020-12-08 腾讯科技(深圳)有限公司 Image data processing method, apparatus, device and medium
CN115578331A (en) * 2022-09-28 2023-01-06 武汉精立电子技术有限公司 Display panel edge line defect detection method and application
CN116309563A (en) * 2023-05-17 2023-06-23 成都数之联科技股份有限公司 Method, device, medium, equipment and program product for detecting defect of panel edge

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FENG Yixiong; LI Kangjie; GAO Yicong; ZHENG Hao; TAN Jianrong: "Surface defect detection method for shaft parts based on feature and morphology reconstruction", Journal of Zhejiang University (Engineering Science), no. 03, 31 March 2020 (2020-03-31), pages 8-15 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant