CN110827244A - Method and equipment for detecting appearance flaws of electronic equipment


Info

Publication number
CN110827244A
Authority
CN
China
Prior art keywords
detection area
target
pixel
area
region
Prior art date
Legal status
Pending
Application number
CN201911032859.XA
Other languages
Chinese (zh)
Inventor
徐鹏 (Xu Peng)
沈圣远 (Shen Shengyuan)
常树林 (Chang Shulin)
姚巨虎 (Yao Juhu)
Current Assignee
Shanghai Yueyi Network Information Technology Co Ltd
Original Assignee
Shanghai Yueyi Network Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Yueyi Network Information Technology Co Ltd filed Critical Shanghai Yueyi Network Information Technology Co Ltd
Priority to CN201911032859.XA
Publication of CN110827244A

Classifications

    • G01N 21/8851: Investigating the presence of flaws or contamination by optical means; scan or image signal processing specially adapted therefor, e.g. for detecting different kinds of defects
    • G01N 2021/8887: Scan or image signal processing based on image processing techniques
    • G06F 18/2321: Pattern recognition; non-hierarchical clustering techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06N 3/045: Neural networks; combinations of networks
    • G06T 7/0004: Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06T 7/11: Image analysis; segmentation; region-based segmentation
    • G06T 2207/10004: Image acquisition modality: still image; photographic image
    • G06T 2207/20081: Special algorithmic details: training; learning
    • G06T 2207/20084: Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/30108: Subject of image: industrial image inspection
    • G06V 2201/07: Image or video recognition or understanding: target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Molecular Biology (AREA)
  • Immunology (AREA)
  • Biochemistry (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Pathology (AREA)
  • Computing Systems (AREA)
  • Signal Processing (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The application aims to provide a method and a device for detecting appearance flaws of electronic equipment. Compared with the prior art, the method acquires an appearance image of the electronic equipment to be detected, extracts a target detection area from the appearance image, inputs the target detection area into a trained neural network model, and receives the flaw detection result that the model outputs for that area, the result comprising the flaw type of the target detection area, the position of the flaw within the area, and the confidence of the detection result, so that appearance flaws on the screens of second-hand electronic equipment such as mobile phones can be identified accurately.

Description

Method and equipment for detecting appearance flaws of electronic equipment
Technical Field
The application relates to the technical field of computers, in particular to a technology for detecting appearance flaws of electronic equipment.
Background
Conventional image processing methods depend heavily on the choice of thresholds. Because the screen appearance of handheld electronic devices such as mobile phones varies in color, shape and degree of aging, no single threshold suits all devices, and conventional image processing methods are therefore poorly suited to detecting screen appearance flaws.
Disclosure of Invention
The application aims to provide a method and equipment for detecting appearance defects of electronic equipment.
According to an aspect of the present application, there is provided a method for electronic device appearance flaw detection, wherein the method comprises:
acquiring an appearance image of the electronic equipment to be detected;
extracting a target detection area based on the electronic equipment appearance image;
inputting the target detection area into a trained neural network model;
receiving a flaw detection result of the target detection area output by the neural network model, wherein the flaw detection result comprises: the flaw type of the target detection area, the position of the flaw within the area, and the confidence of the flaw detection result.
Further, after receiving the flaw detection result of the target detection area output by the neural network model, the method further comprises:
identifying whether the confidence of the flaw detection result is greater than a first preset threshold;
and if it is greater than the first preset threshold, outputting result information comprising the flaw type of the target detection area and the position of the flaw within the area.
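Stated as code, the claimed flow is a thin inference wrapper around a region extractor and a trained model. The following is a minimal sketch, assuming both are supplied as callables; the names FlawResult, extract_region and detect_flaws and the 0.5 default threshold are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

import numpy as np

@dataclass
class FlawResult:
    flaw_type: str                   # flaw type of the target detection area
    box: Tuple[int, int, int, int]   # position of the flaw within the area
    confidence: float                # confidence of the flaw detection result

def detect_flaws(image: np.ndarray,
                 extract_region: Callable[[np.ndarray], np.ndarray],
                 model: Callable[[np.ndarray], FlawResult],
                 first_threshold: float = 0.5) -> Optional[FlawResult]:
    target = extract_region(image)   # extract the target detection area
    result = model(target)           # trained neural network inference
    # output the result only if its confidence clears the first preset threshold
    return result if result.confidence > first_threshold else None
```

Passing the extractor and model as callables keeps the sketch independent of the particular region (screen, frame, back plate) and of the network architecture described later.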
Further, before inputting the target detection area into the trained neural network model, the method further comprises:
step one, presetting a neural network model and its initial model parameters;
step two, inputting a sample target detection area into the neural network model with the current model parameters to obtain a flaw prediction result for the sample area, wherein the flaw prediction result comprises: the flaw type of the sample target detection area, the position of the flaw within the sample area, and the confidence of the flaw detection result;
step three, calculating a difference value between the flaw prediction result and the real flaw result of the sample target detection area based on a preset objective function, and identifying whether the difference value is greater than a second preset threshold;
step four, if the difference value is greater than the second preset threshold, updating the neural network model parameters based on the difference value and repeating from step two;
step five, if the difference value is less than or equal to the second preset threshold, taking the neural network model with the current model parameters as the trained neural network model.
Further, the neural network model combines a feature pyramid network (FPN) with a backbone network.
Further, the first two layers of the backbone network adopt a residual (res) structure, and the last two layers adopt an inception structure.
Further, the target detection area comprises at least one of: a screen display area; a horizontal frame area; a vertical frame area; a non-screen outline area; a back plate area.
Further, where the target detection area comprises a horizontal frame area or a vertical frame area, extracting the target detection area based on the electronic device appearance image comprises:
performing frame detection on the electronic device appearance image to obtain a frame detection area;
sequentially performing pixel expansion and pixel segmentation on the frame detection area to obtain a segmented frame detection area;
and performing pixel clustering on the segmented frame detection area to obtain the frame detection area of the electronic device frame.
Further, sequentially performing pixel expansion and pixel segmentation on the frame detection area to obtain the segmented frame detection area comprises:
expanding the frame detection area outward by a preset number of pixels on each side, and performing pixel segmentation on the expanded frame detection area with the convolutional neural network U-net to obtain the segmented frame detection area.
Further, performing pixel clustering on the segmented frame detection area to obtain the frame detection area of the electronic device frame comprises:
performing pixel clustering on all pixel points in the segmented frame detection area, and connecting all clustered points corresponding to the frame to obtain the frame area of the electronic device;
and intercepting the maximum circumscribed rectangle of the frame area to extract the frame detection area of the electronic device.
Further, performing pixel clustering on all pixel points in the segmented frame detection area comprises:
judging whether each pixel point of the segmented frame detection area lies within the frame of the electronic device;
if so, retaining the corresponding pixel point and its pixel value in the segmented frame detection area;
if not, setting the corresponding pixel point in the segmented frame detection area to black.
Further, where the target detection area comprises a screen display area, extracting the target detection area based on the electronic device appearance image comprises:
counting a picture color histogram of the electronic device appearance image;
clustering all pixel values of the image based on the picture color histogram, and determining the cluster regions;
and judging the neighborhood of each cluster region, and taking the largest connected domain as the screen display area of the electronic device.
Further, the method further comprises:
calculating the minimum bounding rectangle of the screen area with OpenCV to extract the screen display area.
Further, clustering all pixel values of the image based on the picture color histogram and determining the cluster regions comprises:
determining a cluster center based on the picture color histogram;
and determining the cluster regions based on the relation between all pixel values of the image and the cluster center.
Further, judging the neighborhood of each cluster region and taking the largest connected domain as the screen display area of the electronic device comprises:
judging the 8-neighborhood of each region, and treating neighbors whose pixel difference is smaller than the pixel threshold as the same connected domain;
and determining the region with the largest connected domain as the screen area of the electronic device.
Further, where the target detection area comprises a screen display area and the flaw category includes a broken line in the screen display, the method further comprises:
acquiring information of a target point within the broken line by using the neural network model;
performing region extension according to the information of the target point to obtain a target extension map;
and judging the target extension map, and determining the position of the broken line according to the judgment result.
Further, acquiring information of the target point within the broken line by using the neural network model comprises:
acquiring the pixel value of the target point within the broken line by using the neural network model;
and acquiring the coordinate information of the target point within the broken line by using the neural network model.
Further, performing region extension according to the information of the target point to obtain the target extension map comprises:
determining a first adjacent pixel according to the coordinate information of the target point, and acquiring the pixel value of the first adjacent pixel;
and judging whether the difference between the pixel value of the target point and the pixel value of the first adjacent pixel is smaller than a first preset threshold, and if so, determining the target extension map from the first adjacent pixel and the target point.
Further, judging the target extension map and determining the position of the broken line according to the judgment result comprises:
judging whether the target extension map is allowed to extend further; if so, continuing the region extension of the target extension map; if not, determining the position of the broken line from the target extension map.
Further, judging whether the target extension map is allowed to extend further, continuing the region extension if so, and otherwise determining the position of the broken line from the target extension map, comprises:
determining the pixel average of all pixel points when the target extension map has extended horizontally to the current last horizontal position;
and acquiring the pixel value of the adjacent pixel at the current last horizontal position, and judging from that pixel value and the pixel average whether the target extension map is allowed to extend further; if so, continuing the region extension of the target extension map; if not, taking the target extension map as the position of the broken line.
Further, judging whether the target extension map is allowed to extend further, continuing the region extension if so, and otherwise determining the position of the broken line from the target extension map, also comprises:
determining the pixel average of all pixel points when the target extension map has extended vertically to the current last vertical position;
and acquiring the pixel value of the adjacent pixel at the current last vertical position, and judging from that pixel value and the pixel average whether the target extension map is allowed to extend further; if so, continuing the region extension of the target extension map; if not, taking the target extension map as the position of the broken line.
According to another aspect of the present application, there is also provided a computer readable medium having computer readable instructions stored thereon, the computer readable instructions being executable by a processor to implement the operations of the method as described above.
According to still another aspect of the present application, there is also provided an apparatus for electronic device appearance flaw detection, wherein the apparatus includes:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the processor to perform operations of the method as previously described.
Compared with the prior art, the method acquires an appearance image of the electronic equipment to be detected, extracts a target detection area from the appearance image, inputs the target detection area into a trained neural network model, and receives the flaw detection result that the model outputs for that area, the result comprising the flaw type of the target detection area, the position of the flaw within the area, and the confidence of the detection result, so that appearance flaws on the screens of second-hand electronic equipment such as mobile phones can be identified accurately.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
FIG. 1 illustrates a flow diagram of a method for cosmetic defect detection of an electronic device in accordance with an aspect of the subject application;
FIG. 2 illustrates an appearance image of a mobile phone to be tested in a method for detecting appearance defects of an electronic device according to an aspect of the present application;
FIG. 3 illustrates a schematic diagram of a vertical frame detection area in accordance with an aspect of the subject application;
FIG. 4 illustrates a schematic diagram of a pixel-expanded vertical frame detection area in accordance with an aspect of the subject application;
FIG. 5 illustrates a schematic diagram of a segmented vertical frame detection area in accordance with an aspect of the subject application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present invention is described in further detail below with reference to the attached drawing figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM), among computer readable media. Memory is an example of a computer readable medium.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may store information by any method or technology. The information may be computer readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can store information accessible by a computing device. As defined herein, computer readable media does not include transitory media, such as modulated data signals and carrier waves.
To further explain the technical means and effects adopted by the present application, the technical solution of the present application is described below clearly and completely with reference to the accompanying drawings and preferred embodiments.
FIG. 1 illustrates a method for electronic device appearance flaw detection provided by an aspect of the present application, wherein the method comprises:
S11, acquiring an appearance image of the electronic device to be detected;
S12, extracting a target detection area based on the electronic device appearance image;
S13, inputting the target detection area into a trained neural network model;
S14, receiving a flaw detection result of the target detection area output by the neural network model, the flaw detection result comprising: the flaw type of the target detection area, the position of the flaw within the area, and the confidence of the flaw detection result.
In the present application, the method is performed by a device 1, where the device 1 is a computer device and/or a cloud. The computer device includes, but is not limited to, a personal computer, a notebook computer, an industrial computer, a network host, a single network server, or a set of network servers. The cloud consists of a large number of computers or network servers based on cloud computing, where cloud computing is a type of distributed computing: a virtual supercomputer composed of a collection of loosely coupled computers.
The computer device and/or cloud are merely examples, and other existing or future devices and/or resource sharing platforms, as applicable to the present application, are also intended to be included within the scope of the present application and are hereby incorporated by reference.
In this embodiment, in step S11, the device 1 acquires an appearance image of the electronic device to be detected, where the electronic device includes, but is not limited to, terminal devices such as mobile phones, tablets and smart watches. For example, the method of the present application can be used for appearance flaw detection of second-hand mobile phones.
Continuing in this embodiment, in step S12, a target detection area is extracted based on the electronic device appearance image. Here, the target detection area is a part of the appearance of the electronic device; preferably, it comprises at least one of: a screen display area; a horizontal frame area; a vertical frame area; a non-screen outline area; a back plate area.
FIG. 2 illustrates an appearance image of a mobile phone to be detected in the electronic device appearance flaw detection method according to an aspect of the present application. The upper and lower frames of the horizontal part are the horizontal frames, the left and right frames are the vertical frames, and the screen display area is the screen of the mobile phone; the non-screen outline area is the part of the front of the mobile phone outside the screen, and the back plate area, not shown in the figure, is the back of the mobile phone.
In this embodiment, if flaws of the vertical frame area need to be detected, the vertical frame area is determined to be the target detection area, and it needs to be extracted and detected.
In this embodiment, in step S13, the target detection area is input into the trained neural network model. Preferably, the neural network model combines an FPN network with a backbone network. The invention uses a deep learning model combining an improved feature pyramid network (FPN) with a backbone network to accurately identify appearance flaw differences of second-hand electronic equipment such as mobile phones.
Preferably, the first two layers of the backbone network adopt a residual (res) structure, and the last two layers adopt an inception structure.
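The description fixes only this coarse layout: residual blocks in the early stages, inception-style blocks in the late stages, with one feature map per stage handed to the FPN. A minimal PyTorch sketch under that reading follows; channel widths, block counts and the FPN wiring are assumptions for illustration, not the patented architecture.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual (res) block for the first two backbone stages."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))

    def forward(self, x):
        return torch.relu(x + self.body(x))

class InceptionBlock(nn.Module):
    """Multi-branch (inception) block for the last two backbone stages."""
    def __init__(self, ch: int):
        super().__init__()
        self.b1 = nn.Conv2d(ch, ch // 2, 1)
        self.b3 = nn.Conv2d(ch, ch // 4, 3, padding=1)
        self.b5 = nn.Conv2d(ch, ch // 4, 5, padding=2)

    def forward(self, x):
        # concatenating the branches restores the input channel count
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)

class Backbone(nn.Module):
    """Four stages: res, res, inception, inception. One feature map per
    stage is collected for top-down fusion by a feature pyramid network
    (e.g. torchvision.ops.FeaturePyramidNetwork)."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.stem = nn.Conv2d(3, ch, 7, stride=2, padding=3)
        self.stages = nn.ModuleList(
            [ResBlock(ch), ResBlock(ch), InceptionBlock(ch), InceptionBlock(ch)])
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        x = self.stem(x)
        feats = []
        for stage in self.stages:
            x = self.pool(stage(x))
            feats.append(x)      # one map per stage, at decreasing resolution
        return feats
```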
Preferably, before inputting the target detection area into the trained neural network model, the method further comprises:
step one, presetting a neural network model and its initial model parameters;
step two, inputting a sample target detection area into the neural network model with the current model parameters to obtain a flaw prediction result for the sample area, wherein the flaw prediction result comprises: the flaw type of the sample target detection area, the position of the flaw within the sample area, and the confidence of the flaw detection result;
step three, calculating a difference value between the flaw prediction result and the real flaw result of the sample target detection area based on a preset objective function, and identifying whether the difference value is greater than a second preset threshold;
step four, if the difference value is greater than the second preset threshold, updating the neural network model parameters based on the difference value and repeating from step two;
step five, if the difference value is less than or equal to the second preset threshold, taking the neural network model with the current model parameters as the trained neural network model.
In this embodiment, the neural network model is trained on sample target detection areas, each labeled with its corresponding flaws. For example, flaws of the back plate area include: cracks, bracket-screen separation, deformation, missing broken pieces, large-area paint loss, small-area paint loss or deformation, dents exposing a different color, dents without discoloration, deep scratches differing in color from their surroundings, small spots differing in color from their surroundings, breakage, and the like. As another example, flaws of the screen display area include: delamination, character bleed-through, liquid leakage, broken lines, bright spots, and color spots (yellowing and bluing), and the like. These flaw types are merely examples and are not specifically limited in the present application.
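Steps one to five above amount to an ordinary supervised training loop. A sketch under common assumptions follows: the "preset objective function" is taken to be a standard classification loss and the parameter update to be gradient descent with Adam, neither of which the text pins down.

```python
import torch
import torch.nn as nn

def train_flaw_model(model: nn.Module, loader,
                     objective=nn.CrossEntropyLoss(),   # assumed objective function
                     second_threshold: float = 0.05,
                     max_epochs: int = 100) -> nn.Module:
    # step one: the model arrives with its preset initial parameters
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(max_epochs):
        total = 0.0
        for sample_region, true_flaw in loader:    # step two: sample target areas
            prediction = model(sample_region)      # flaw prediction result
            difference = objective(prediction, true_flaw)  # step three: difference
            optimizer.zero_grad()
            difference.backward()                  # step four: update parameters
            optimizer.step()
            total += difference.item()
        if total / len(loader) <= second_threshold:
            break                                  # step five: training finished
    return model
```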
Continuing in this embodiment, in step S14, the flaw detection result of the target detection area output by the neural network model is received, the flaw detection result comprising: the flaw type of the target detection area, the position of the flaw within the area, and the confidence of the flaw detection result.
Preferably, after receiving the flaw detection result of the target detection area output by the neural network model, the method further comprises: identifying whether the confidence of the flaw detection result is greater than a first preset threshold, and if so, outputting result information comprising the flaw type of the target detection area and the position of the flaw within the area. In this embodiment, by identifying the confidence of the flaw detection result, reliable results can be screened out of the flaw detection results and output.
In another embodiment of the present application, where the target detection area comprises a horizontal frame area or a vertical frame area, extracting the target detection area based on the electronic device appearance image comprises:
S21 (not shown), performing frame detection on the electronic device appearance image to obtain a frame detection area;
S22 (not shown), sequentially performing pixel expansion and pixel segmentation on the frame detection area to obtain a segmented frame detection area;
S23 (not shown), performing pixel clustering on the segmented frame detection area to obtain the frame detection area of the electronic device frame.
In this embodiment, the target detection area is exemplified by the vertical frame area. In step S21, vertical frame detection is performed on the electronic device appearance image to obtain a vertical frame detection area. The purpose of vertical frame detection is to find the approximate area where the vertical frame is located; the detection yields the coordinates of the pixel points corresponding to the vertical frame detection area, the confidence corresponding to that area, and the like.
In this embodiment, in step S22, pixel expansion and pixel segmentation are performed sequentially on the vertical frame detection area to obtain the segmented vertical frame detection area. First, pixel expansion is applied to the vertical frame detection area to obtain a pixel-expanded vertical frame detection area suitable for pixel segmentation; then, the pixel-expanded area is pixel-segmented to obtain the segmented vertical frame detection area. After pixel segmentation, it can be judged intuitively whether each pixel in the vertical frame detection area belongs to the vertical frame area.
In this embodiment, in step S23, pixel clustering is performed on the segmented vertical frame detection area to obtain the frame position of the vertical frame of the mobile phone.
Through steps S21 to S23, the vertical frame of the mobile phone is detected, segmented and clustered in a deep learning manner, and image information outside the vertical frame is filtered out, enabling accurate positioning of the vertical frame and helping reduce the false recognition rate of flaws in the vertical frame area.
For example, first, the appearance image A of the mobile phone to be detected shown in FIG. 2 is acquired; the mobile phone may be a second-hand mobile phone with appearance flaws. Then, vertical frame detection is performed on the appearance image to obtain the vertical frame detection area A1 shown in FIG. 3. The purpose of vertical frame detection is to find the approximate area A1 where the vertical frame is located, and the detection result includes the coordinates of the pixel points corresponding to the vertical frame detection area, the confidence score corresponding to that area, and the like.
Then, pixel expansion is applied to the vertical frame detection area A1 to obtain the pixel-expanded vertical frame detection area A2 shown in FIG. 4, in preparation for the subsequent pixel segmentation; the pixel-expanded area A2 is then pixel-segmented to obtain the segmented vertical frame detection area A3 shown in FIG. 5. After the pixel-expanded area A2 is segmented, it can be determined whether each pixel in the vertical frame detection area belongs to the vertical frame area.
Finally, pixel clustering is performed on the segmented vertical frame detection area A3 to obtain the frame position D of the vertical frame of the mobile phone, whose four corresponding coordinates are (x1, y1), (x1, y2), (x2, y1) and (x2, y2); the vertical frame is thus accurately positioned, which helps reduce the false recognition rate of flaws in the vertical frame area.
Following the foregoing embodiment, step S21 of performing vertical frame detection on the electronic device appearance image to obtain the vertical frame detection area comprises:
obtaining a vertical frame detection model, wherein the vertical frame detection model is determined based on the residual network resnet50;
and performing vertical frame detection on the appearance image based on the vertical frame detection model to obtain the vertical frame detection area.
For example, obtaining the vertical frame detection model established based on resnet50 may specifically proceed as follows. First, resnet50 is improved: after pruning resnet50, one convolution kernel inside each residual network block is replaced by two convolution kernels. Second, training appearance images of recycled mobile phones are acquired, i.e., manually labeled training appearance images of 1000 mobile phones. Then, vertical frame detection is predicted for each mobile phone with the improved resnet50, obtaining the prediction result T of the vertical frame detection area indicated by each training appearance image; the real result S of that area is obtained at the same time, and the difference V between the prediction result T and the real result S is calculated. Next, the difference is fed into the vertical frame detection model M established on the improved resnet50 and the parameters of M are adjusted, realizing continuous training and optimization of the model; the improved detection model M is better able to capture deeper features when performing vertical frame detection on the appearance image of a mobile phone. Finally, vertical frame detection is performed on the appearance image based on the detection model M to obtain the vertical frame detection area A1 shown in FIG. 3, which favors accurate positioning of the vertical frame.
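The text only states that one convolution kernel inside the residual block is replaced by two. One plausible reading is a bottleneck whose single 3×3 convolution is split into two stacked 3×3 convolutions; the sketch below is that assumption only, with channel sizes chosen for illustration.

```python
import torch
import torch.nn as nn

class SplitBottleneck(nn.Module):
    """resnet50-style bottleneck where the single 3x3 convolution is
    replaced by two stacked 3x3 convolutions (one reading of the
    described improvement; all sizes here are assumptions)."""
    def __init__(self, ch_in: int = 256, ch_mid: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch_in, ch_mid, 1), nn.ReLU(),
            nn.Conv2d(ch_mid, ch_mid, 3, padding=1), nn.ReLU(),  # was one 3x3 ...
            nn.Conv2d(ch_mid, ch_mid, 3, padding=1), nn.ReLU(),  # ... now two
            nn.Conv2d(ch_mid, ch_in, 1))

    def forward(self, x):
        return torch.relu(x + self.body(x))
```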
Preferably, sequentially performing pixel expansion and pixel segmentation on the frame detection area to obtain the segmented frame detection area comprises:
expanding the frame detection area outward by a preset number of pixels on each side, and performing pixel segmentation on the expanded frame detection area with the convolutional neural network U-net to obtain the segmented frame detection area.
Here, the preset number of pixels may be any number; in a preferred embodiment of an aspect of the present application, it is 100 pixels.
For example, with a preset expansion of 100 pixels, the vertical frame detection area A1 shown in FIG. 3 is expanded outward by 100 pixels on each side to obtain the pixel-expanded vertical frame detection area A2 shown in FIG. 4, and A2 is pixel-segmented by the convolutional neural network U-net to obtain the segmented vertical frame detection area A3 shown in FIG. 5, which favors accurate positioning of the vertical frame afterwards. An image pixel-segmented by U-net has the same size as the image before segmentation, and each pixel point of the segmented image can be classified as belonging to the target detection object or not: if it does, its pixel value is retained; if not, its pixel value is set to zero, i.e., the pixel is set to black. For example, for an input original image of 128 × 128, multi-layer convolutions produce feature maps of decreasing resolution, e.g. 64 × 64, 32 × 32 and 16 × 16; successive upsampling then restores 32 × 32, 64 × 64 and 128 × 128 maps, and the network is iterated through a loss function, so that the finally obtained 128 × 128 image distinguishes the position of the vertical frame well (non-frame positions are black). Because the 128 × 128 input and the 128 × 128 output are consistent in size, U-net is mainly used to confirm whether each pixel point in the image to be processed belongs to the target detection object (for example, the vertical frame position).
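The expansion-plus-segmentation step can be sketched as follows. Here `unet` stands for any per-pixel segmenter returning a probability map; the 100-pixel pad comes from the text, while the 0.5 cutoff is an assumption.

```python
import numpy as np

def expand_and_segment(image: np.ndarray, box, unet, pad: int = 100):
    """Expand the detected frame box by `pad` pixels on every side
    (clamped to the image), then keep only pixels the U-net classifies
    as frame; all other pixel values are set to zero (black)."""
    h, w = image.shape[:2]
    x1, y1, x2, y2 = box
    x1, y1 = max(x1 - pad, 0), max(y1 - pad, 0)
    x2, y2 = min(x2 + pad, w), min(y2 + pad, h)
    crop = image[y1:y2, x1:x2]
    mask = unet(crop) > 0.5            # per-pixel: frame vs. not frame
    segmented = crop.copy()
    segmented[~mask] = 0               # non-frame pixels set to black
    return segmented, (x1, y1, x2, y2)
```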
Following the foregoing embodiment, step S23 of performing pixel clustering on the segmented vertical frame detection area to obtain the frame position of the vertical frame of the mobile phone, i.e., the frame detection area, comprises:
performing pixel clustering on all pixel points in the segmented vertical frame detection area, connecting all clustered points corresponding to the vertical frame to obtain the area of the vertical frame of the mobile phone, and intercepting the maximum circumscribed rectangle of that area to obtain the frame position of the vertical frame, thereby accurately positioning the vertical frame of the mobile phone and helping reduce the false recognition rate of flaws in the vertical frame area.
For example, let all pixel points in the segmented vertical frame detection area A3 shown in FIG. 5 be a1, a2, a3, a4, a5, ..., an. Pixel clustering is performed on these points, and all clustered points corresponding to the vertical frame are connected together to obtain the area A4 of the vertical frame of the mobile phone; the maximum circumscribed rectangle of A4 is then intercepted to obtain the frame position D of the vertical frame, thereby accurately positioning the vertical frame and helping reduce the false recognition rate of flaws in the vertical frame area.
Further, performing pixel clustering on all pixel points in the segmented vertical frame detection area comprises:
judging whether each pixel point of the segmented vertical frame detection area lies within the vertical frame of the mobile phone;
if so, retaining the corresponding pixel point and its pixel value in the segmented vertical frame detection area;
if not, setting the corresponding pixel point in the segmented vertical frame detection area to black, i.e., setting the pixel values of pixel points outside the vertical frame to 0.
For example, when pixel clustering is performed on all pixel points a1, a2, a3, a4, a5, ..., an in the segmented vertical frame detection area A3 shown in FIG. 5, each point is judged as to whether it lies within the vertical frame of the mobile phone. If so, the corresponding pixel point and its pixel value are retained; if not, the corresponding pixel point is set to black, i.e., its pixel value is set to 0. The pixel points a2, a4, a5 and a7 outside the vertical frame are thus set to black. Then, all clustered points corresponding to the vertical frame in the segmented detection area A3 are connected together to obtain the area A4 of the vertical frame. Finally, the maximum circumscribed rectangle of A4 is intercepted by calling the relevant function of the Open Source Computer Vision Library (OpenCV), yielding the frame position D of the vertical frame; the four corners of the intercepted rectangle are defined by x1, x2, y1 and y2, so the four coordinates corresponding to the frame position D are (x1, y1), (x1, y2), (x2, y1) and (x2, y2), thereby accurately positioning the vertical frame and helping reduce the false recognition rate of flaws in the vertical frame area.
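A sketch of this clustering-and-rectangle step with OpenCV follows. The text names OpenCV but not the specific call, so cv2.findContours plus cv2.boundingRect below is an assumed realisation of the "maximum circumscribed rectangle".

```python
import cv2
import numpy as np

def frame_position(segmented: np.ndarray):
    """Connect the surviving (non-black) frame pixels into one region and
    take its maximum circumscribed rectangle, yielding the four corners
    (x1, y1), (x1, y2), (x2, y1), (x2, y2) used in the example."""
    gray = cv2.cvtColor(segmented, cv2.COLOR_BGR2GRAY)
    binary = (gray > 0).astype(np.uint8)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)   # connected frame region A4
    x, y, w, h = cv2.boundingRect(largest)         # maximum circumscribed rectangle
    x1, y1, x2, y2 = x, y, x + w, y + h
    return (x1, y1), (x1, y2), (x2, y1), (x2, y2)  # frame position D
```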
In another embodiment of the present application, where the target detection area comprises a screen display area, extracting the target detection area based on the electronic device appearance image comprises:
S31 (not shown), counting a picture color histogram of the electronic device appearance image;
S32 (not shown), clustering all pixel values of the image based on the picture color histogram and determining the cluster regions;
S33 (not shown), judging the neighborhood of each cluster region, and taking the largest connected domain as the screen display area of the electronic device.
In step S31, the device 1 counts the picture color histogram of the appearance image; specifically, counting the picture color histogram of the appearance image containing the screen region can be implemented with existing statistical methods.
Continuing in this embodiment, in step S32, the device 1 clusters all pixel values of the image based on the picture color histogram and determines the cluster regions. Since the front appearance image of an electronic device usually comprises a screen region and a non-screen region, two cluster regions are usually determined.
Preferably, step S32 comprises: S321 (not shown), determining a cluster center based on the picture color histogram; and S322 (not shown), determining the cluster regions based on the relation between all pixel values of the image and the cluster center.
Specifically, in step S321, the device 1 determines a cluster center based on the picture color histogram; for example, the mean of all pixels can be determined from the histogram and used as the cluster center. In step S322, the cluster regions are then determined by a clustering algorithm, including but not limited to k-means clustering. Preferably, the cluster center is determined by the median of all pixels in the color histogram.
Continuing in this embodiment, in step S33, the device 1 judges the neighborhood of each cluster region and takes the largest connected domain as the screen region of the electronic device. The neighborhood of each cluster region includes, but is not limited to, a 4-neighborhood or an 8-neighborhood.
Preferably, step S33 comprises: judging the 8-neighborhood of each region, treating neighbors whose pixel difference is smaller than the pixel threshold as the same connected domain, and determining the region with the largest connected domain as the screen area of the electronic device. The pixel threshold may be preset or statistically derived and is not limited here.
Preferably, wherein the method further comprises: s34 (not shown) calculates a minimum bounding rectangle of the screen region based on the opencv implementation to extract the screen region.
In this embodiment, the screen area may be extracted in a manner of a minimum bounding rectangle, so as to perform corresponding processing on the screen area. Here, the area corresponding to the minimum circumscribed rectangle is a screen area.
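Steps S31 to S34 can be sketched end to end as below. Clustering grey levels with k = 2, treating the brighter cluster as the screen, and the specific OpenCV calls are all assumptions layered on the described histogram / clustering / connected-domain / minimum-bounding-rectangle flow.

```python
import cv2
import numpy as np

def extract_screen(image: np.ndarray, k: int = 2):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    pixels = gray.reshape(-1, 1).astype(np.float32)
    # S31/S32: cluster all pixel values; the text suggests seeding the
    # centers from the picture color histogram (e.g. its mean or median),
    # k-means++ initialisation is used here as a stand-in
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)
    screen = int(np.argmax(centers))   # assumption: brighter cluster is the screen
    mask = (labels.reshape(gray.shape) == screen).astype(np.uint8)
    # S33: 8-neighborhood connected domains; keep the largest one
    _, comp, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    points = np.argwhere(comp == largest)[:, ::-1].astype(np.float32)  # (x, y)
    # S34: minimum bounding rectangle of the screen region
    return cv2.boxPoints(cv2.minAreaRect(points))
```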
In another embodiment of the present application, where the target detection area comprises a screen display area and the flaw category includes a broken line in the screen display, the method further comprises:
S41 (not shown), acquiring information of a target point within the broken line by using the neural network model;
S42 (not shown), performing region extension according to the information of the target point to obtain a target extension map;
S43 (not shown), judging the target extension map, and determining the position of the broken line according to the judgment result.
In this embodiment, in step S41, information of the target point within the broken line is acquired by using the neural network model. The trained neural network model, obtained by training with labeled broken-line data, detects the broken line and yields information about several points within it, such as their position coordinates and corresponding pixel values. The information of the target point, including but not limited to its position coordinates and pixel value, is then computed from the information of these points. Obtaining the target point from the neural network model in this way is simple, convenient and fast; the result is intuitive, no complex verification process is needed, and the accuracy is high.
In step S42, region extension is performed according to the information of the target point to obtain the target extension map. The extension proceeds in the horizontal and vertical directions from the position coordinates and pixel value of the target point; preferably, the region extension absorbs surrounding adjacent pixel points whose values differ by no more than a preset threshold, and the target extension map is determined together with the target point.
In step S43, the target extension map is judged and the position of the broken line is determined from the judgment result. It is judged whether the obtained target extension map can be extended further; if so, the extension continues; if not, the target extension map is the broken line, and the position coordinates of the broken line are obtained from the position coordinates corresponding to the target extension map.
Preferably, in step S41, the neural network model is used to acquire the pixel value of the target point within the broken line, and the coordinate information of the target point within the broken line. The information of the target point includes, but is not limited to, its pixel value and coordinate information, and can be acquired accurately through the convolutional neural network.
Preferably, in step S41, the neural network model is used to acquire the pixel values of several pixel points within the broken line, and the pixel value of the target point is determined from the number of these pixel points and their pixel values: the ratio of the sum of the pixel values to the number of pixel points gives the pixel value of the target point.
Preferably, in step S41, the coordinate information corresponding to several pixel points within the broken line is acquired by the neural network model; the number of these pixel points is determined and the cumulative sum of their coordinates is calculated; and the coordinate information of the target point is determined as the ratio of that cumulative sum to the number of pixel points. The neural network model can thus automatically and accurately acquire the coordinates of several pixel points within the broken line, and hence the coordinates of the target point. This manner of determining the target point is only an example; other manners are possible, for example, determining the coordinates of the target point as a weighted average over the coordinates of the pixel points.
Preferably, in step S42, a first adjacent pixel is determined according to the coordinate information of the target point, and its pixel value is acquired; it is then judged whether the difference between the pixel value of the target point and that of the first adjacent pixel is smaller than a first preset threshold, and if so, the target extension map is determined from the first adjacent pixel and the target point. Using the coordinates of the target point, the directly adjacent surrounding pixels are captured, yielding several first adjacent pixels and their pixel values. A first preset threshold is set; when the absolute difference between the pixel value of the target point and that of a first adjacent pixel is smaller than this threshold, the first adjacent pixel is added to the target point in a preset manner to form the target extension map. Preferably, the preset manner includes, but is not limited to, marking the first adjacent pixel and the target point in a preset way and determining the target extension map from the marked pixel points.
Preferably, in step S43, it is judged whether the target extension map is allowed to extend further; if so, the region extension continues; if not, the position of the broken line is determined from the target extension map. The region extension includes extension in the horizontal direction and in the vertical direction; when the target extension map can no longer extend, it is the broken line, and the position of the broken line is determined from the position of the target extension map.
In the horizontal direction, the pixel average of all pixel points is determined when the target extension map has extended to the current last horizontal position. The pixel value of the adjacent pixel at that position, which is not yet in the target extension map, is acquired and compared with the pixel average. Preferably, a second preset threshold is set: when the difference between the adjacent pixel's value and the pixel average is smaller than the second preset threshold, the extension is allowed, the adjacent pixel is included in the target extension map, and the horizontal extension continues; when the difference is greater than or equal to the second preset threshold, the extension to that position is not allowed, and the horizontal position of the target extension map is the horizontal position of the broken line, which is thereby confirmed accurately.
In the vertical direction, likewise, the pixel average of all pixel points is determined when the target extension map has extended to the current last vertical position. The pixel value of the adjacent pixel at that position, which is not yet in the target extension map, is acquired and compared with the pixel average. Preferably, a third preset threshold is set: when the difference between the adjacent pixel's value and the pixel average is smaller than the third preset threshold, the extension is allowed, the adjacent pixel is included in the target extension map, and the vertical extension continues; when the difference is greater than or equal to the third preset threshold, the extension to that position is not allowed, and the vertical position of the target extension map is the vertical position of the broken line, which is thereby confirmed accurately.
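Both the horizontal and the vertical cases are instances of region growing from the target point. The sketch below merges them into one 4-neighbor grow over a greyscale crop; the single tolerance value and the neighborhood choice are assumptions, and `seed_points` stands in for the pixel points the detection model returns.

```python
import numpy as np

def locate_broken_line(gray: np.ndarray, seed_points, tol: float = 10.0):
    """Start from the target point (the coordinate mean of the model's
    seed pixels) and absorb neighbors whose value stays within `tol`
    of the running pixel average of the region; when no neighbor
    qualifies, the grown region is taken as the broken-line position."""
    h, w = gray.shape
    ys = [p[0] for p in seed_points]
    xs = [p[1] for p in seed_points]
    target = (int(round(sum(ys) / len(ys))), int(round(sum(xs) / len(xs))))
    region = {target}
    frontier = [target]
    total = float(gray[target])
    while frontier:
        y, x = frontier.pop()
        mean = total / len(region)                  # current pixel average
        for ny, nx in ((y, x - 1), (y, x + 1), (y - 1, x), (y + 1, x)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(float(gray[ny, nx]) - mean) < tol):
                region.add((ny, nx))                # extension still allowed
                frontier.append((ny, nx))
                total += float(gray[ny, nx])
    rows, cols = zip(*region)                       # extension no longer allowed:
    return min(cols), min(rows), max(cols), max(rows)  # the region is the line
```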
Preferably, the convolutional neural network is the segmentation network U-Net. In this preferred embodiment, broken-line regions and normal display regions of broken-line pictures are marked manually on screen, the marked pictures are used to train the U-Net segmentation network to obtain a broken-line detection model, and this model is then applied to the picture to be detected. When a broken line is detected, a plurality of pixel points within the broken line are obtained, from which the target point can be calculated accurately in the manner described above.
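The application names U-Net but fixes no particular depth, framework, or training recipe. The toy model below, written in PyTorch (the framework choice is an assumption), only illustrates the encoder-decoder-with-skip-connection shape of such a segmentation network and how candidate broken-line pixels would be read off its output mask; TinyUNet is a stand-in, not the model actually trained.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Toy U-Net: one downsampling step, one upsampling step, one skip."""
    def __init__(self):
        super().__init__()
        self.enc = conv_block(1, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)    # per-pixel broken-line logit

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        return self.head(self.dec(torch.cat([e, u], dim=1)))

# Inference on one grayscale screen crop (training on manually marked
# broken-line pictures is omitted here).
model = TinyUNet().eval()
crop = torch.rand(1, 1, 64, 64)
with torch.no_grad():
    mask = torch.sigmoid(model(crop))[0, 0]
points = (mask > 0.5).nonzero()            # pixel points inside the broken line
```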
Compared with the prior art, the present application acquires the appearance image of the electronic device to be detected, extracts a target detection area based on that image, inputs the target detection area into a neural network model after training is finished, and then receives the flaw detection result of the target detection area output by the model, the result comprising: the flaw type of the target detection area, the position of the flaw in the target detection area, and the confidence of the flaw detection result. In this way, appearance flaws on the screens of second-hand electronic devices such as mobile phones can be identified accurately.
According to yet another aspect of the present application, there is also provided a computer readable medium having stored thereon computer readable instructions executable by a processor to implement the foregoing method.
According to another aspect of the present application, there is also provided an apparatus for electronic device appearance flaw detection, wherein the apparatus includes:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the one or more processors to perform the operations of the method described above.
For example, the computer readable instructions, when executed, cause the one or more processors to: acquire an appearance image of the electronic device to be detected; extract a target detection area based on the electronic device appearance image; input the target detection area into a neural network model after training is finished; and receive a flaw detection result of the target detection area output by the neural network model, wherein the flaw detection result comprises: the flaw type of the target detection area, the position of the flaw in the target detection area and the confidence of the flaw detection result.
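Pulling those steps together, the sketch below shows one plausible shape for such instructions in Python with OpenCV. Everything here is illustrative: detect_flaws, extract_target_region, and the result-dictionary layout are hypothetical names, and 0.8 is an assumed value for the first preset confidence threshold.

```python
import cv2

def detect_flaws(image_path, model, extract_target_region, conf_thresh=0.8):
    """Hypothetical end-to-end flow: acquire image -> extract target
    detection area -> run the trained model -> keep confident results."""
    image = cv2.imread(image_path)             # appearance image to detect
    region = extract_target_region(image)      # e.g. screen display area
    results = model(region)                    # trained neural network model
    accepted = []
    for flaw in results:
        # assumed layout: each result carries type, position, confidence
        if flaw["confidence"] > conf_thresh:   # first preset threshold
            accepted.append((flaw["type"], flaw["position"], flaw["confidence"]))
    return accepted
```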
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (22)

1. A method for electronic device appearance flaw detection, wherein the method comprises:
acquiring an appearance image of the electronic equipment to be detected;
extracting a target detection area based on the electronic equipment appearance image;
inputting the target detection area into a neural network model after training is finished;
receiving a flaw detection result of the target detection area output by the neural network model, wherein the flaw detection result comprises: the flaw type of the target detection area, the position of the flaw in the target detection area and the confidence of the flaw detection result.
2. The method of claim 1, wherein after receiving the flaw detection result of the target detection area output by the neural network model, the method further comprises:
identifying whether the confidence of the flaw detection result is greater than a first preset threshold;
and if it is greater than the first preset threshold, outputting result information comprising the flaw type of the target detection area and the position of the flaw in the target detection area.
3. The method of claim 1, wherein before inputting the target detection area into the neural network model after training is finished, the method further comprises:
step one, presetting a neural network model and its initial model parameters;
step two, inputting a sample target detection area into the neural network model with the current model parameters to obtain a flaw prediction result of the sample target detection area, wherein the flaw prediction result comprises: the flaw type of the sample target detection area, the position of the flaw in the sample target detection area and the confidence of the flaw detection result;
step three, calculating a difference value between the flaw prediction result and the real flaw result of the sample target detection area based on a preset objective function, and identifying whether the difference value is greater than a second preset threshold;
step four, if the difference value is greater than the second preset threshold, updating the neural network model parameters based on the difference value and repeating from step two;
step five, if the difference value is less than or equal to the second preset threshold, taking the neural network model with the current model parameters as the neural network model after training is finished.
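As an illustration of claim 3's loop (not the applicant's actual code), the PyTorch sketch below uses mean squared error as a stand-in for the preset objective function and a fixed loss threshold for the second preset threshold; all names and values are assumptions.

```python
import torch
import torch.nn as nn

def train_until_threshold(model, samples, targets,
                          diff_thresh=0.05, lr=1e-3, max_steps=10000):
    """Predict on sample target detection areas, compare the difference
    value with the second preset threshold, then stop (step five) or
    update the parameters and repeat (step four)."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    objective = nn.MSELoss()                    # stand-in objective function
    for _ in range(max_steps):
        prediction = model(samples)             # step two: flaw prediction
        diff = objective(prediction, targets)   # step three: vs. real result
        if diff.item() <= diff_thresh:
            break                               # step five: training finished
        optimizer.zero_grad()
        diff.backward()                         # step four: update parameters
        optimizer.step()
    return model
```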
4. The method of any one of claims 1 to 3, wherein the neural network model comprises a model of an FPN network in combination with a backbone network.
5. The method of claim 4, wherein the first two layers of the backbone network adopt a res (residual) structure, and the last two layers adopt an inception structure.
6. The method of any of claims 1 to 5, wherein the target detection region comprises at least any one of:
a screen display area;
a horizontal frame area;
a vertical frame area;
a non-screen outline region;
a backplane region.
7. The method of claim 6, wherein the target detection area comprises a horizontal frame area or a vertical frame area, and wherein extracting the target detection area based on the electronic device appearance image comprises:
performing frame detection on the electronic equipment appearance image to obtain a frame detection area;
sequentially carrying out pixel expansion and pixel segmentation on the frame detection area to obtain a segmented frame detection area;
and performing pixel clustering on the segmented frame detection area to obtain the frame detection area of the electronic device frame.
8. The method of claim 7, wherein sequentially performing pixel expansion and pixel segmentation on the frame detection area to obtain the segmented frame detection area comprises:
and expanding a preset number of pixels outwards around the frame detection area, and performing pixel segmentation on the expanded frame detection area through a convolutional neural network U-net to obtain the segmented frame detection area.
9. The method of claim 8, wherein performing pixel clustering on the segmented frame detection area to obtain the frame detection area of the electronic device frame comprises:
performing pixel clustering on all pixel points in the segmented frame detection area, and connecting all clustered points corresponding to the frame to obtain the frame detection area of the electronic device;
and cropping the maximum circumscribed rectangle of that area to extract the frame detection area of the electronic device.
10. The method of claim 9, wherein performing pixel clustering on all pixel points within the segmented frame detection area comprises:
judging whether each pixel point of the segmented frame detection area lies within the frame of the electronic device;
if yes, retaining corresponding pixel points and pixel values thereof in the frame detection area after segmentation;
if not, setting the corresponding pixel points in the segmented frame detection area to be black.
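Claims 7 to 10 can be pictured with the following Python/OpenCV sketch. It is a non-authoritative illustration: the segment argument stands in for the U-net pixel segmentation of claim 8 and must return a binary mask of frame pixels for the padded crop, while extract_frame_region, the box layout, and the 20-pixel pad are hypothetical.

```python
import cv2
import numpy as np

def extract_frame_region(image, box, segment, pad=20):
    """Pad the detected frame area, segment it, black out non-frame
    pixels, and crop the maximum circumscribed rectangle of the
    remaining frame points (claims 7-10)."""
    x, y, w, h = box                               # frame detection area
    H, W = image.shape[:2]
    x0, y0 = max(x - pad, 0), max(y - pad, 0)      # expand a preset number
    x1, y1 = min(x + w + pad, W), min(y + h + pad, H)  # of pixels outwards
    crop = image[y0:y1, x0:x1]
    mask = segment(crop)                           # stand-in U-net: 1 = frame pixel
    clustered = crop.copy()
    clustered[mask == 0] = 0                       # non-frame pixels set to black
    pts = np.column_stack(np.nonzero(mask)[::-1]).astype(np.int32)  # (x, y) points
    bx, by, bw, bh = cv2.boundingRect(pts)         # maximum circumscribed rectangle
    return clustered[by:by + bh, bx:bx + bw]
```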
11. The method of claim 6, wherein the target detection area comprises a screen display area, the extracting a target detection area based on the electronic device appearance image comprising:
computing a color histogram of the electronic device appearance image;
clustering all pixel values on the image based on the color histogram, and determining clustering regions;
and judging the neighborhood of each clustering region, and taking the largest connected domain as the screen display area of the electronic device.
12. The method of claim 11, wherein the method further comprises:
and calculating the minimum bounding rectangle of the screen area using OpenCV, so as to extract the screen display area.
13. The method of claim 11 or 12, wherein clustering all pixel values on the image based on the color histogram and determining clustering regions comprises:
determining a clustering center based on the color histogram;
and determining clustering regions based on the relation between all pixel values on the image and the clustering center.
14. The method of any one of claims 11 to 13, wherein judging the neighborhood of each clustering region and taking the largest connected domain as the screen display area of the electronic device comprises:
examining the 8-neighborhood of each region, and assigning neighboring pixels whose difference is smaller than a pixel threshold to the same connected domain;
and determining the largest connected domain as the screen display area of the electronic device.
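Claims 11 to 14, together with the OpenCV rectangle of claim 12, admit a compact illustration. In this hedged sketch, k-means over gray values stands in for the histogram-seeded clustering, and the assumption that the screen is the brightest cluster is mine, not the application's.

```python
import cv2
import numpy as np

def extract_screen_area(image, k=4):
    """Cluster pixel values, keep the largest 8-connected domain as the
    screen display area, and return its minimum bounding rectangle."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    pixels = gray.reshape(-1, 1).astype(np.float32)
    # k-means is a stand-in; histogram peaks would seed the cluster centers
    _, labels, centers = cv2.kmeans(
        pixels, k, None,
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0),
        3, cv2.KMEANS_PP_CENTERS)
    screen_label = int(np.argmax(centers))        # assume: lit screen = brightest
    mask = (labels.reshape(gray.shape) == screen_label).astype(np.uint8)
    # 8-neighborhood connected domains; keep the largest one (claim 14)
    n, comp, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    screen_mask = (comp == largest).astype(np.uint8)
    # minimum bounding rectangle of the screen area via OpenCV (claim 12)
    rect = cv2.minAreaRect(cv2.findNonZero(screen_mask))
    return screen_mask, rect
```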
15. The method of claim 6, wherein the target detection area comprises a screen display area and the flaw type comprises a screen display broken line, the method further comprising:
acquiring information of a target point in the broken line by using the neural network model;
performing region extension according to the information of the target point to obtain a target extension map;
and judging the target extension map, and determining the position of the broken line according to the judgment result.
16. The method of claim 15, wherein the obtaining information of target points in a broken line using the neural network model comprises:
acquiring a pixel value of a target point in a broken line by using the neural network model;
and acquiring coordinate information of a target point in the broken line by using the neural network model.
17. The method of claim 16, wherein performing region extension according to the information of the target point to obtain the target extension map comprises:
determining a first adjacent pixel according to the coordinate information of the target point, and acquiring a pixel value of the first adjacent pixel;
and judging whether the difference between the pixel value of the target point and the pixel value of the first adjacent pixel is smaller than a first preset threshold, and if so, determining the target extension map from the first adjacent pixel and the target point.
18. The method of claim 15, wherein judging the target extension map and determining the position of the broken line according to the judgment result comprises:
judging whether the target extension map is allowed to continue region extension; if so, continuing the region extension of the target extension map; if not, determining the position of the broken line from the target extension map.
19. The method of claim 18, wherein judging whether the target extension map is allowed to continue region extension, continuing the region extension if so, and determining the position of the broken line from the target extension map if not, comprises:
determining the pixel average value of all pixel points when the target extension map has extended horizontally to the current last horizontal position;
and acquiring the pixel value of the adjacent pixel at the current last horizontal position, judging from that pixel value and the pixel average value whether the target extension map is allowed to continue region extension; if so, continuing the region extension of the target extension map; if not, taking the position of the target extension map as the position of the broken line.
20. The method of claim 18, wherein judging whether the target extension map is allowed to continue region extension, continuing the region extension if so, and determining the position of the broken line from the target extension map if not, comprises:
determining the pixel average value of all pixel points when the target extension map has extended vertically to the current last vertical position;
and acquiring the pixel value of the adjacent pixel at the current last vertical position, judging from that pixel value and the pixel average value whether the target extension map is allowed to continue region extension; if so, continuing the region extension of the target extension map; if not, taking the position of the target extension map as the position of the broken line.
21. A computer readable medium having computer readable instructions stored thereon which are executable by a processor to implement the method of any one of claims 1 to 20.
22. An apparatus for electronic device appearance flaw detection, wherein the apparatus comprises:
one or more processors; and
memory storing computer readable instructions that, when executed, cause the one or more processors to perform the operations of the method of any one of claims 1 to 20.
CN201911032859.XA 2019-10-28 2019-10-28 Method and equipment for detecting appearance flaws of electronic equipment Pending CN110827244A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911032859.XA CN110827244A (en) 2019-10-28 2019-10-28 Method and equipment for detecting appearance flaws of electronic equipment

Publications (1)

Publication Number Publication Date
CN110827244A (en) 2020-02-21

Family

ID=69550941

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911032859.XA Pending CN110827244A (en) 2019-10-28 2019-10-28 Method and equipment for detecting appearance flaws of electronic equipment

Country Status (1)

Country Link
CN (1) CN110827244A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180342050A1 (en) * 2016-04-28 2018-11-29 Yougetitback Limited System and method for detection of mobile device fault conditions
CN108846841A (en) * 2018-07-02 2018-11-20 北京百度网讯科技有限公司 Display screen quality determining method, device, electronic equipment and storage medium
CN109711474A (en) * 2018-12-24 2019-05-03 中山大学 A kind of aluminium material surface defects detection algorithm based on deep learning
CN109859190A (en) * 2019-01-31 2019-06-07 北京工业大学 A kind of target area detection method based on deep learning
CN110378420A (en) * 2019-07-19 2019-10-25 Oppo广东移动通信有限公司 A kind of image detecting method, device and computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Bin Xiao et al.: "Simple Baselines for Human Pose Estimation and Tracking", arXiv.org *
Hao Shijia et al.: "Research on Mobile Phone Screen Defect Detection Method Based on Machine Vision", Information & Computer *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11989710B2 (en) 2018-12-19 2024-05-21 Ecoatm, Llc Systems and methods for vending and/or purchasing mobile phones and other electronic devices
US11843206B2 (en) 2019-02-12 2023-12-12 Ecoatm, Llc Connector carrier for electronic device kiosk
US11798250B2 (en) 2019-02-18 2023-10-24 Ecoatm, Llc Neural network based physical condition evaluation of electronic devices, and associated systems and methods
WO2021082918A1 (en) * 2019-10-28 2021-05-06 上海悦易网络信息技术有限公司 Screen appearance defect detection method and device
CN111476759A (en) * 2020-03-13 2020-07-31 深圳市鑫信腾机器人科技有限公司 Screen surface detection method and device, terminal and storage medium
CN111830052A (en) * 2020-06-01 2020-10-27 涡阳县沪涡多孔矸石砖有限公司 Flaw detection system for hollow brick
CN111680750A (en) * 2020-06-09 2020-09-18 创新奇智(合肥)科技有限公司 Image recognition method, device and equipment
CN111680750B (en) * 2020-06-09 2022-12-06 创新奇智(合肥)科技有限公司 Image recognition method, device and equipment
US11922467B2 (en) 2020-08-17 2024-03-05 ecoATM, Inc. Evaluating an electronic device using optical character recognition
TWI803824B (en) * 2020-12-09 2023-06-01 大陸商艾聚達信息技術(蘇州)有限公司 An artificial intelligence model automatic upgrading training system and method
CN113012103B (en) * 2021-02-07 2022-09-06 电子科技大学 Quantitative detection method for surface defects of large-caliber telescope lens
CN113012103A (en) * 2021-02-07 2021-06-22 电子科技大学 Quantitative detection method for surface defects of large-aperture telescope lens
CN113298809B (en) * 2021-06-25 2022-04-08 成都飞机工业(集团)有限责任公司 Composite material ultrasonic image defect detection method based on deep learning and superpixel segmentation
CN113298809A (en) * 2021-06-25 2021-08-24 成都飞机工业(集团)有限责任公司 Composite material ultrasonic image defect detection method based on deep learning and superpixel segmentation

Similar Documents

Publication Publication Date Title
CN110827244A (en) Method and equipment for detecting appearance flaws of electronic equipment
WO2021082923A1 (en) Electronic device screen area defect detection method and device
CN112348815B (en) Image processing method, image processing apparatus, and non-transitory storage medium
CN110705583B (en) Cell detection model training method, device, computer equipment and storage medium
CN111340752A (en) Screen detection method and device, electronic equipment and computer readable storage medium
TWI655586B (en) Method and device for detecting specific identification image in predetermined area
CN110517246B (en) Image processing method and device, electronic equipment and storage medium
CN111027504A (en) Face key point detection method, device, equipment and storage medium
CN111612781A (en) Screen defect detection method and device and head-mounted display equipment
JP2022539912A (en) Electronic device backplane appearance defect inspection method and apparatus
CN110796669A (en) Vertical frame positioning method and equipment
CN115908269B (en) Visual defect detection method, visual defect detection device, storage medium and computer equipment
CN111259878A (en) Method and equipment for detecting text
CN113781406B (en) Scratch detection method and device for electronic component and computer equipment
CN111325717B (en) Mobile phone defect position identification method and equipment
CN110827246A (en) Electronic equipment frame appearance flaw detection method and equipment
CN111898610B (en) Card unfilled corner detection method, device, computer equipment and storage medium
CN115880288B (en) Detection method, system and computer equipment for electronic element welding
CN111028276A (en) Image alignment method and device, storage medium and electronic equipment
CN114298985B (en) Defect detection method, device, equipment and storage medium
CN112419207A (en) Image correction method, device and system
CN113065454B (en) High-altitude parabolic target identification and comparison method and device
CN116503414B (en) Screen defect detection method, device, computer equipment and storage medium
CN111935480B (en) Detection method for image acquisition device and related device
US11508143B2 (en) Automated salience assessment of pixel anomalies

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: Room 1101-1103, No. 433, Songhu Road, Yangpu District, Shanghai
Applicant after: Shanghai wanwansheng Environmental Protection Technology Group Co.,Ltd.
Address before: Room 1101-1103, No. 433, Songhu Road, Yangpu District, Shanghai
Applicant before: SHANGHAI YUEYI NETWORK INFORMATION TECHNOLOGY Co.,Ltd.
RJ01 Rejection of invention patent application after publication (application publication date: 20200221)