WO2023191312A1 - Method for evaluating the external condition and value of an electronic device, and electronic device value evaluation apparatus - Google Patents

Method for evaluating the external condition and value of an electronic device, and electronic device value evaluation apparatus

Info

Publication number
WO2023191312A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
images
screen
deep learning
evaluation
Prior art date
Application number
PCT/KR2023/002425
Other languages
English (en)
Korean (ko)
Inventor
지창환
유도형
Original Assignee
민팃(주)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020220098681A (KR20230140325A)
Application filed by 민팃(주)
Publication of WO2023191312A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • the examples below relate to a method for evaluating the appearance condition and value of electronic devices and an electronic device value evaluation device.
  • the artificial intelligence analysis method can evaluate the external condition of a used electronic device by photographing its exterior using lighting and cameras and analyzing the obtained images with an artificial intelligence analysis algorithm.
  • a method of evaluating the appearance state of an electronic device, performed by an electronic device value evaluation apparatus, may include: determining whether an exception case exists in a plurality of images obtained by photographing the electronic device; when it is determined that the exception case does not exist in the images, evaluating the external state of the electronic device using deep learning evaluation models and the images; when it is determined that there is a target image in which the exception case exists among the images, determining whether the exception case can be processed; if it is determined that the exception case can be processed, processing the exception case in the target image; and performing the appearance state evaluation using the remaining images excluding the target image, the target image in which the exception case has been processed, and the deep learning evaluation models.
  • the step of determining whether the exception case exists may include determining whether there is a first object corresponding to a floating icon in a screen image obtained by photographing the screen of the electronic device.
  • Determining whether the exception case exists may include determining whether there is a second object corresponding to an attachment of the electronic device in one or more of the images.
  • Determining whether the exception case exists may include determining whether there is a first case in which a shooting condition is not satisfied in one or more of the images.
  • the shooting conditions may include at least one of camera focus and lighting brightness.
  • the step of determining whether the exception case exists includes performing color conversion on a screen image obtained by photographing the screen of the electronic device among the images; determining whether the screen is a designated screen using saturation information and brightness information of the color-converted screen image; And when the screen is not the designated screen, it may include determining that a second case corresponding to a situation in which an unspecified screen is turned on exists.
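The saturation/brightness test described in this step can be sketched in a few lines. This is a minimal pure-Python version assuming 8-bit RGB pixels; the thresholds are illustrative assumptions, since the text specifies neither them nor the color space details (a production system would more likely use an image library's HSV conversion):

```python
import colorsys

def is_designated_screen(pixels, sat_thresh=0.25, val_thresh=0.85):
    """Decide whether a captured screen image shows the designated
    monochromatic (here: white) screen, using the mean saturation and
    brightness of the color-converted pixels.

    pixels: iterable of (r, g, b) tuples with components in 0..255.
    Thresholds are illustrative assumptions, not values from the text.
    """
    sats, vals = [], []
    for r, g, b in pixels:
        # Color conversion: RGB -> HSV yields saturation and value (brightness).
        _, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        sats.append(s)
        vals.append(v)
    mean_s = sum(sats) / len(sats)
    mean_v = sum(vals) / len(vals)
    # A designated white screen has low saturation and high brightness;
    # a colored, unspecified screen fails this check.
    return mean_s < sat_thresh and mean_v > val_thresh
```

When the check fails, the step above treats the situation as the second exception case (an unspecified screen is turned on).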
  • the step of determining whether the exception case can be processed may include: when a first object corresponding to a floating icon in a screen image obtained by photographing the screen of the electronic device, or a second object corresponding to an attachment of the electronic device in one or more of the images, is determined to be the exception case, determining that the first object or the second object can be processed; and when it is determined that there is a first case in which a shooting condition is not satisfied in one or more of the images, or a second case corresponding to a situation in which an unspecified screen is turned on in one or more of the images, determining that the first case or the second case cannot be processed by the electronic device value evaluation device and that operator processing is necessary.
  • the step of processing the exception case may include performing masking processing on the first object or the second object so that at least one of the deep learning evaluation models does not mistake the first object or the second object for a defect.
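Masking processing of this kind can be illustrated with a short sketch. The in-place rectangular fill and the fill value are assumptions for illustration only; the text does not say how the masked region is represented:

```python
def mask_region(image, box, fill=0):
    """Overwrite the pixels of an exception-case object (e.g. a floating
    icon or an attachment) so a downstream evaluation model does not
    score them as defects.

    image: 2D list of pixel values (row-major), modified in place.
    box: (top, left, bottom, right), bottom/right exclusive.
    """
    top, left, bottom, right = box
    for y in range(top, bottom):
        for x in range(left, right):
            image[y][x] = fill
    return image
```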
  • the step of performing the appearance state evaluation using the deep learning evaluation models and the images may include: generating, through the deep learning evaluation models, masks predicting the defect state of each of the evaluation areas of the electronic device from the images; determining a grade for a defect in each of the evaluation areas based on the generated masks; and determining a final grade for the external condition of the electronic device based on each determined grade.
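As one possible reading of this step, the per-area grades could be combined into a final grade by taking the worst area grade. The text only says the final grade is based on each determined grade, so this worst-case rule is an assumption; the A/B/C score mapping of 1/3/5 appears later in the description:

```python
# Assumed score mapping; scores 1/3/5 for grades A/B/C appear later in the text.
GRADE_SCORE = {"A": 1, "B": 3, "C": 5}

def final_grade(area_grades):
    """Combine per-area grades (e.g. screen, front, side, rear) into a
    final appearance grade by taking the worst (highest-score) grade."""
    return max(area_grades, key=lambda g: GRADE_SCORE[g])
```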
  • an electronic device value evaluation method performed by an electronic device value evaluation device may include: performing an external condition evaluation of the electronic device based on a plurality of images obtained by photographing the electronic device and a plurality of deep learning evaluation models; and determining the value of the electronic device based on the result of the external condition evaluation and the result of an internal state evaluation of the electronic device.
  • the step of performing the appearance state evaluation may include: determining whether an exception case exists in the images; when it is determined that the exception case does not exist in the images, evaluating the external state of the electronic device using the deep learning evaluation models and the images; when it is determined that there is a target image in which the exception case exists among the images, determining whether the exception case can be processed; if it is determined that the exception case can be processed, processing the exception case in the target image; and performing the appearance state evaluation using the remaining images excluding the target image, the target image in which the exception case has been processed, and the deep learning evaluation models.
  • the step of determining whether the exception case exists may include determining whether there is a first object corresponding to a floating icon in a screen image obtained by photographing the screen of the electronic device.
  • Determining whether the exception case exists may include determining whether there is a second object corresponding to an attachment of the electronic device in one or more of the images.
  • Determining whether the exception case exists may include determining whether there is a first case in which a shooting condition is not satisfied in one or more of the images.
  • the shooting conditions may include at least one of camera focus and lighting brightness.
  • the step of determining whether the exception case exists includes performing color conversion on a screen image obtained by photographing the screen of the electronic device among the images; determining whether the screen is a designated screen using saturation information and brightness information of the color-converted screen image; If the screen is not the designated screen, it may include determining that a second case corresponding to a situation in which an unspecified screen is turned on exists.
  • the step of determining whether the exception case can be processed may include: when a first object corresponding to a floating icon in a screen image obtained by photographing the screen of the electronic device, or a second object corresponding to an attachment of the electronic device in one or more of the images, is determined to be the exception case, determining that the first object or the second object can be processed; and when it is determined that there is a first case in which a shooting condition is not satisfied in one or more of the images, or a second case corresponding to a situation in which an unspecified screen is turned on in one or more of the images, determining that the first case or the second case cannot be processed by the electronic device value evaluation device and that operator processing is necessary.
  • the step of processing the exception case may include performing masking processing on the first object or the second object so that at least one of the deep learning evaluation models does not mistake the first object or the second object for a defect.
  • the step of performing the appearance state evaluation using the deep learning evaluation models and the images may include: generating, through the deep learning evaluation models, masks predicting the defect state of each of the evaluation areas of the electronic device from the images; determining a grade for a defect in each of the evaluation areas based on the generated masks; and determining a final grade for the external condition of the electronic device based on each determined grade.
  • an electronic device value evaluation device may include a memory that stores a plurality of deep learning evaluation models; and an appearance condition evaluation module that determines whether an exception case exists in a plurality of images obtained by photographing an electronic device, performs an external state evaluation of the electronic device using the deep learning evaluation models and the images when it is determined that the exception case does not exist in the images, determines whether the exception case can be processed when it is determined that there is a target image in which the exception case exists among the images, processes the exception case in the target image if it is determined that the exception case can be processed, and performs the appearance state evaluation using the remaining images excluding the target image, the target image in which the exception case has been processed, and the deep learning evaluation models.
  • one embodiment detects exception cases in the input image of the artificial intelligence analysis algorithm (or deep learning evaluation model) and image-processes the detected exception cases, so that the algorithm can derive accurate analysis results from the given input image, improving the accuracy of evaluating the external condition of electronic devices.
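The flow summarized above (detect an exception case, handle it when possible, otherwise defer to an operator, then run the evaluation models) can be sketched as follows; all callables here are hypothetical placeholders, not functions named in the text:

```python
def evaluate_appearance(images, detect_exception, can_handle, handle, models):
    """Run the exception-case flow over captured images, then apply one
    deep learning evaluation model per processed image. Returns None
    when an exception case requires operator processing."""
    processed = []
    for img in images:
        exc = detect_exception(img)             # e.g. floating icon, attachment
        if exc is None:
            processed.append(img)
        elif can_handle(exc):
            processed.append(handle(img, exc))  # e.g. masking processing
        else:
            return None  # shooting-condition / unspecified-screen cases
    return [model(img) for model, img in zip(models, processed)]
```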
  • FIGS. 1 and 2 are diagrams explaining an unmanned purchase device and a server according to an embodiment.
  • FIGS. 3 to 6 are diagrams illustrating the operation of an electronic device value evaluation device according to an embodiment.
  • FIG. 7 is a flowchart explaining a method of evaluating the value of an electronic device according to an embodiment.
  • FIGS. 8 to 13 are diagrams explaining an exception case processing method according to an embodiment.
  • FIG. 14 is a block diagram illustrating the configuration of a computing device for training a deep learning model according to an embodiment.
  • FIGS. 15A to 15C are diagrams explaining a target mask and a prediction mask according to an embodiment.
  • FIG. 16 is a flowchart explaining a deep learning model training method of a computing device according to an embodiment.
  • first or second may be used to describe various components, but these terms should be interpreted only for the purpose of distinguishing one component from another component.
  • a first component may be named a second component, and similarly, the second component may also be named a first component.
  • FIGS. 1 and 2 are diagrams explaining an unmanned purchase device and a server according to an embodiment.
  • an unmanned purchase device 110 and a server 120 are shown.
  • the unmanned purchase device 110 may purchase electronic devices (or used electronic devices) (e.g., smartphones, tablet PCs, wearable devices, etc.) from users and/or sell electronic devices (or used electronic devices) to users.
  • the type of electronic device may be classified into a bar type, rollable type, or foldable type depending on its shape.
  • the unmanned purchase device 110 may be, for example, in the form of a kiosk, but is not limited thereto.
  • the unmanned purchase device 110 may include a photographing box in which an electronic device is placed and photographed, a system control unit, and a display.
  • the system control unit may control the overall operation of the unmanned purchase device 110.
  • the system control unit may display guidelines and/or precautions for selling electronic devices on the display.
  • the system control unit may display a user interface for receiving various information from the user (e.g., operating system (OS) information of an electronic device to be sold by the user) on the display.
  • a first application installed on the electronic device may display a serial number on the display of the electronic device.
  • the first application may be an application for inspecting the internal state of an electronic device and collecting information about the electronic device (e.g., model name, serial number, operating system version, etc.).
  • the user may input the serial number through the display of the unmanned purchase device 110.
  • the system control unit may verify the serial number entered through the display or request verification from the server 120, and can open the door of the shooting box when the entered serial number is verified.
  • the user can connect the cable (e.g., USB Type-C cable, Lightning cable, etc.) of the unmanned purchase device 110 to the electronic device, and the electronic device can be placed in the shooting box.
  • the electronic device may be connected to the system control unit of the unmanned purchase device 110 through the cable.
  • the electronic device may be connected to the system control unit of the unmanned purchase device 110 wirelessly (e.g., Bluetooth, Bluetooth Low Energy (BLE), etc.).
  • the shooting box can align the placed electronic device in a designated position. If the electronic device is not aligned in the designated position, the cameras inside the shooting box may not be able to properly capture the exterior of the electronic device.
  • the first application can collect information about the electronic device and evaluate (or analyze) the internal state of the electronic device (e.g., hardware operating state, etc.).
  • the hardware operation state may indicate whether the hardware (eg, sensor, camera, etc.) of the electronic device is operating normally.
  • the first application can evaluate (or determine) whether the hardware of the electronic device operates normally.
  • a plurality of cameras and a plurality of lights may be located in the shooting box.
  • the first camera in the photographing box may acquire one or more front images of the electronic device by photographing the front of the electronic device.
  • the second camera in the shooting box may acquire one or more rear images of the electronic device by photographing the rear of the electronic device.
  • Each of the plurality of third cameras in the shooting box may acquire one or more side images (or corner images) by photographing each side (or corner) of the electronic device.
  • the first camera may acquire one or more images (hereinafter referred to as “screen images”) by photographing the screen of the electronic device.
  • the first application may display a single-color (e.g., white, black, red, blue, green, etc.) screen on the electronic device.
  • when a monochromatic screen is displayed on the electronic device, the first camera may acquire an image (hereinafter referred to as a “monochromatic screen image”) by photographing the monochromatic screen of the electronic device.
  • while a white screen is displayed on the electronic device, the first camera may acquire a first monochromatic screen image by photographing the white screen of the electronic device.
  • while a black screen is displayed, the first camera may acquire a second monochromatic screen image by photographing the black screen of the electronic device. While the electronic device displays a screen of a single color other than white and black (e.g., red, blue, green, etc.), the first camera may acquire a third monochromatic screen image by photographing that single-color screen of the electronic device.
  • the electronic device value evaluation device 130 may perform an appearance condition evaluation of the electronic device based on images acquired by photographing the electronic device (e.g., one or more front images, one or more rear images, one or more side images, and one or more monochromatic screen images) and deep learning evaluation models.
  • the electronic device valuation device 130 may be included in the server 120 .
  • the server 120 may receive images obtained by photographing an electronic device from the unmanned purchasing device 110, and may transmit the received images to the electronic device value evaluation device 130.
  • the first application in the electronic device may perform an evaluation of the internal state of the electronic device and transmit the result of the internal state evaluation to the server 120 through the unmanned purchase device 110.
  • the first application may allow the electronic device to connect to the server 120 and transmit the result of evaluating the internal state of the electronic device to the server 120 through the electronic device.
  • the electronic device value evaluation device 130 may determine the value (e.g., price) of the electronic device based on the result of the external state evaluation of the electronic device and the result of the internal state evaluation of the electronic device (e.g., the result of the first application's internal state evaluation).
  • the electronic device value evaluation device 130 may transmit the value of the electronic device to the unmanned purchase device 110, and the unmanned purchase device 110 may transmit the value of the electronic device to the user.
  • the user can accept the value (e.g., price) of the electronic device and convey to the unmanned purchase device 110 that the electronic device will be sold; when the user decides to sell, the unmanned purchase device 110 can move the electronic device placed in the shooting box to a recovery box (or storage box). Depending on the embodiment, the recovery box may be located inside or outside the unmanned purchase device 110.
  • the electronic device valuation device 130 may be included in the unmanned purchase device 110 .
  • the electronic device value evaluation device 130 may receive images obtained by photographing the electronic device from cameras in a photographing box.
  • the electronic device value evaluation device 130 may receive the result of evaluating the internal state of the electronic device from the first application.
  • the electronic device value evaluation device 130 may determine the value (e.g., price) of the electronic device based on the result of evaluating the external condition of the electronic device and the result of evaluating the internal state of the electronic device.
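The text does not disclose the pricing rule that combines the two evaluation results, so the following is only a hypothetical sketch; `base_price`, the per-point deduction rate, and the internal-failure discount are invented for illustration:

```python
def device_value(base_price, external_score, internal_ok):
    """Hypothetical value determination combining the external appearance
    score (sum of per-area scores, 1 best .. 5 worst per area, 4 areas)
    with the internal inspection result. The actual pricing rule is not
    specified in the text; this deduction scheme is an assumption."""
    if not internal_ok:
        base_price *= 0.5           # assumed discount for internal defects
    deduction = 0.05 * (external_score - 4)  # 4 = all four areas grade A
    return round(base_price * max(0.0, 1.0 - deduction), 2)
```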
  • the electronic device value evaluation device 130 can convey the value of the electronic device to the user.
  • the user can accept the value (e.g., price) of the electronic device and convey to the unmanned purchase device 110 that the electronic device will be sold; when the user decides to sell, the unmanned purchase device 110 can move the electronic device placed in the shooting box to a recovery box (or storage box).
  • FIGS. 3 to 6 are diagrams illustrating the operation of an electronic device value evaluation device according to an embodiment.
  • the electronic device value evaluation device 130 may include a memory 310, an appearance condition evaluation module 320, and a value determination module 330.
  • the appearance condition evaluation module 320 and the value determination module 330 may be implemented with one processor.
  • the appearance condition evaluation module 320 and the value determination module 330 may each be implemented with separate processors.
  • a first processor may implement the appearance condition evaluation module 320 and a second processor may implement the value determination module 330.
  • the memory 310 may store a plurality of deep learning evaluation models. For example, the memory 310 may store a first deep learning evaluation model that detects a defect in a first evaluation area (e.g., the front) of the electronic device and determines the grade of the detected defect (or of the first evaluation area); a second deep learning evaluation model that detects a defect in a second evaluation area (e.g., the back side) and determines the grade of the detected defect (or of the second evaluation area); a third deep learning evaluation model that detects a defect in a third evaluation area (e.g., the side or corner) and determines the grade of the detected defect (or of the third evaluation area); and a fourth deep learning evaluation model that detects a defect in a fourth evaluation area (e.g., the screen) of the electronic device and determines the grade of the detected defect (or of the fourth evaluation area). Table 1 below shows examples of defect types and grades for each of the evaluation areas (e.g., screen, front, side (or corner), rear).
  • a medium afterimage may represent, for example, a phenomenon in which the electronic device displays a white screen but the user sees certain areas of the screen (e.g., the status display area at the top of the screen) as non-white and sees icons in those areas.
  • a strong afterimage, for example, may indicate a phenomenon in which the electronic device displays a white screen but the user sees a color other than white across the entire screen.
  • an LCD-level afterimage is a condition in which the afterimage is worse than a strong afterimage; for example, the electronic device displays a white screen, but the user may see a color other than white across the entire screen and icons may appear on the screen.
  • Each of the first to fourth deep learning evaluation models can perform image segmentation on a given input image.
  • Figure 4 shows the schematic structure of a deep neural network that is the basis for each deep learning evaluation model.
  • the structure of a deep neural network will be described as an example, but it is not necessarily limited to this and neural networks of various structures can be used in deep learning evaluation models.
  • a deep neural network is a method of implementing a neural network and includes multiple layers.
  • a deep neural network, for example, may include an input layer 410 to which input data is applied, an output layer 440 that outputs a result value predicted from the input data based on training, and multiple hidden layers 420 and 430 between the input layer and the output layer.
  • Deep neural networks can be classified into convolutional neural networks, recurrent neural networks, etc., depending on the algorithm used to process information.
  • the input layer may be called the lowest layer and the output layer the highest layer, and the layers may be named by ranking them sequentially from the output layer, which is the highest layer, to the input layer, which is the lowest layer.
  • hidden layer 2 is a layer higher than hidden layer 1 and the input layer, and may correspond to a lower layer than the output layer.
  • a relatively higher layer can output a predetermined calculation result by multiplying the output values of the relatively lower layer by weights and adding a bias value. The output calculation result is then applied in a similar manner to the next higher layer adjacent to that layer.
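The weighted-sum-plus-bias computation described above corresponds to a dense (fully connected) layer; a minimal sketch:

```python
def dense_layer(inputs, weights, biases):
    """One layer of the described computation: each higher-layer node
    multiplies the lower layer's output values by its weights and adds
    a bias to produce its own output value.

    inputs: outputs of the lower layer.
    weights: one weight list per higher-layer node.
    biases: one bias per higher-layer node.
    """
    return [
        sum(w * x for w, x in zip(node_w, inputs)) + b
        for node_w, b in zip(weights, biases)
    ]
```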
  • a method of training a neural network is called deep learning, and as described above, various algorithms such as convolutional neural networks and recurrent neural networks can be used in deep learning.
  • training a neural network can be understood as encompassing determining and updating the weight(s) and bias(es) between layers, and/or the weight(s) and bias(es) between the plurality of neurons belonging to different adjacent layers.
  • the plurality of layers, the hierarchical structure between the plurality of layers, and the weights and biases between neurons can all be collectively expressed as the “connectivity” of a neural network. Accordingly, “training a neural network” can also be understood as building and training connectivity.
  • each of a plurality of layers may include a plurality of nodes.
  • Nodes may correspond to neurons in a neural network.
  • the term “neuron” may be used interchangeably with the term “node.”
  • node 3-1 of hidden layer 2 (430) shown in FIG. 4 is connected to all nodes of hidden layer 1 (420), that is, nodes 2-1 to 2-4, and may receive as input the output value of each of those nodes multiplied by a predetermined weight.
  • Data input to the input layer 410 is processed through a plurality of hidden layers 420 and 430, so that an output value may be output through the output layer 440.
  • a larger weight multiplied by the output value of each node may mean that the connectivity between the two corresponding nodes is strengthened, and a smaller weight may mean that the connectivity between the two nodes is weakened. If the weight is 0, it may mean that there is no connectivity between the two nodes.
  • the appearance condition evaluation module 320 may perform an appearance condition evaluation on the electronic device based on a plurality of images obtained by photographing the electronic device and deep learning evaluation models. For example, the appearance condition evaluation module 320 may generate a mask predicting the defect state of each of the first to fourth evaluation areas of the electronic device from images through deep learning evaluation models. The appearance condition evaluation module 320 may determine the grade of defects in each of the first to fourth evaluation areas based on each generated mask. The exterior condition evaluation module 320 may determine the final grade of the exterior condition of the electronic device through each determined grade.
  • the first deep learning evaluation model 510 may receive a front image as input.
  • the first deep learning evaluation model 510 may generate a first mask predicting the defect state of the front of the electronic device (e.g., at least one of the location of the defect, the type of the defect, and the degree of the defect) through the front image.
  • the degree of defect may be related to the defect type.
  • the first deep learning evaluation model 510 may perform image segmentation on the front image to classify each pixel of the front image into one of the first classes, and may generate a first mask according to this classification. Table 2 below shows examples of the first classes.
  • Class 1-1: e.g., front scratch, front major scratch, etc.
  • Class 1-2: e.g., front breakage, floating liquid crystal, etc.
  • Class 1-3: e.g., not an electronic device (background)
  • Class 1-4: e.g., front of the electronic device
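A predicted first mask is simply a per-pixel class map. The following small sketch summarizes such a mask; the numeric class ids 0..3 are an assumed encoding of classes 1-1 to 1-4, which the text does not specify:

```python
# Assumed encoding of Table 2: 0 = scratch (class 1-1),
# 1 = breakage / floating liquid crystal (class 1-2),
# 2 = non-device background (class 1-3), 3 = clean device front (class 1-4)
def defect_summary(mask):
    """Count per-class pixels in a predicted segmentation mask
    (2D list of class ids, one per pixel)."""
    counts = {0: 0, 1: 0, 2: 0, 3: 0}
    for row in mask:
        for class_id in row:
            counts[class_id] += 1
    return counts
```

Pixel counts per defect class could then feed the grade decision described below Table 1.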
  • the first camera in the shooting box can photograph not only the front of the electronic device but also the surroundings of the front, so the front image may include parts that are not the electronic device.
  • the first deep learning evaluation model 510 can classify some pixels of the front image into the 1-1 class, and classify each of the remaining pixels into the 1-2 class, the 1-3 class, or the 1-4 class. Through this classification, the first deep learning evaluation model 510 can generate a first mask.
  • an example of an image visually representing the first mask is shown in FIG. 6.
  • the black areas 610-1, 610-2, 610-3, and 610-4 may represent the result of the first deep learning evaluation model 510 classifying some pixels of the front image into the 1-3 class (or the result of the first deep learning evaluation model 510 predicting that some pixels in the front image do not correspond to the electronic device).
  • the area 620 may represent the result of the first deep learning evaluation model 510 classifying some pixels of the front image into the 1-2 class (or the result of the first deep learning evaluation model 510 predicting from the front image that there is breakage on the front of the electronic device).
  • the area 630 may represent the result of the first deep learning evaluation model 510 classifying some pixels of the front image into the 1-1 class (or the result of the first deep learning evaluation model 510 predicting from the front image that there is a scratch on the front of the electronic device).
  • the area 640 may represent the result of the first deep learning evaluation model 510 classifying some pixels of the front image into the 1-4 class (or the result of the first deep learning evaluation model 510 predicting that those pixels correspond to the front of the electronic device).
  • the first deep learning evaluation model 510 may determine a grade for a defect on the front surface based on the first mask. For example, when the first deep learning evaluation model 510 predicts through the front image that there is at least one of breakage and floating liquid crystal on the front of the electronic device, it can determine the grade for the defect on the front of the electronic device as a C grade (e.g., the C grade in Table 1 above). The first deep learning evaluation model 510 may output a score of 5 corresponding to the C grade. When the first deep learning evaluation model 510 predicts through the front image that there are both breakage and scratches on the front of the electronic device, it can likewise determine the grade for the defect on the front of the electronic device as a C grade (e.g., the C grade in Table 1 above).
  • the first deep learning evaluation model 510 may output a score of 5 corresponding to the C grade. If the first deep learning evaluation model 510 predicts through the front image that there is a scratch on the front of the electronic device, it can determine the grade for the defect on the front of the electronic device as a B grade (e.g., the B grade in Table 1 above). The first deep learning evaluation model 510 may output a score of 3 corresponding to the B grade. If the first deep learning evaluation model 510 predicts through the front image that the front of the electronic device is clean (or has no defects on the front), it can determine the grade for the defect on the front of the electronic device as an A grade (e.g., the A grade in Table 1 above). The first deep learning evaluation model 510 may output a score of 1 corresponding to the A grade.
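The front-grade mapping just described (breakage or floating liquid crystal → C with score 5, scratch → B with score 3, clean → A with score 1) can be sketched as a small decision function. This is an illustrative sketch, not the disclosed implementation; the defect-class names are hypothetical stand-ins for the Table 2 classes.

```python
def grade_front(detected: set) -> tuple:
    """Map defect classes predicted on the front to a (grade, score) pair,
    following the C=5, B=3, A=1 mapping described above."""
    if detected & {"breakage", "floating_liquid_crystal"}:
        return ("C", 5)
    if "scratch" in detected:
        return ("B", 3)
    return ("A", 1)  # clean front, no predicted defects

print(grade_front({"breakage"}))  # ('C', 5)
print(grade_front({"scratch"}))   # ('B', 3)
print(grade_front(set()))         # ('A', 1)
```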
  • the second deep learning evaluation model 520 can receive a rear image as input.
  • the second deep learning evaluation model 520 may generate a second mask predicting the defect state (e.g., at least one of the location of the defect, the type of the defect, and the degree of the defect) of the back of the electronic device through the back image.
  • the second deep learning evaluation model 520 may perform image segmentation on the back image and classify each pixel of the back image into one of the second classes. Through this classification, the second deep learning evaluation model 520 can generate the second mask. Table 3 below shows examples of the second classes.
  • Class 2-1 e.g. breakage, back lifting, camera lens breakage, etc.
  • Class 2-2 e.g. non-electronic devices
  • Class 2-3 e.g. rear of electronic devices
  • the second deep learning evaluation model 520 may determine the grade of the defect on the back side based on the second mask. For example, if the second deep learning evaluation model 520 predicts that there is at least one of damage, back lifting, and camera lens damage on the back of the electronic device through the back image, the grade of the defect on the back of the electronic device can be determined as C grade (e.g., C grade in Table 1 above). The second deep learning evaluation model 520 may output a score of 5 corresponding to a C grade. If the second deep learning evaluation model 520 predicts that the back of the electronic device is clean through the back image, the grade of the defect on the back of the electronic device may be determined as Grade A (e.g., Grade A in Table 1 above). . The second deep learning evaluation model 520 may output a score of 1 corresponding to grade A.
  • the third deep learning evaluation model 530 may receive side images (or corner images) as input.
  • the third deep learning evaluation model 530 may generate a third mask predicting the defect state (e.g., at least one of the location of the defect, the type of the defect, and the degree of the defect) of the sides (or corners) of the electronic device through the side images (or corner images).
  • the third deep learning evaluation model 530 may perform image segmentation on the side images (or corner images) and classify each pixel of each side image into one of the third classes. Through this classification, the third deep learning evaluation model 530 can generate the third mask. Table 4 below shows examples of the third classes.
  • Class 3-1 e.g. scratches
  • Class 3-2 e.g. non-electronic devices
  • Class 3-3 e.g. the side (or corner) of an electronic device
  • the third deep learning evaluation model 530 may determine the grade of defects in the sides (or corners) based on the third mask. For example, when the third deep learning evaluation model 530 predicts through the side images (or corner images) that there is a scratch on the first side (or first corner) of the electronic device, it can determine the grade for the defect in the sides (or corners) of the electronic device as a B+ grade (e.g., the B+ grade in Table 1 above).
  • the third deep learning evaluation model 530 may output a score of 2 corresponding to a B+ grade.
  • If the third deep learning evaluation model 530 predicts through the side images (or corner images) that the sides (or corners) of the electronic device are clean, the grade for the defect in the sides (or corners) can be determined as an A grade (e.g., the A grade in Table 1 above).
  • the third deep learning evaluation model 530 may output a score of 1 corresponding to grade A.
  • the fourth deep learning evaluation model 540 can receive a screen image (e.g., a single-color screen image) of the electronic device as input.
  • the fourth deep learning evaluation model 540 can generate a fourth mask that predicts the defect state (e.g., at least one of the location of the defect, the type of the defect, and the degree of the defect) of the screen of the electronic device through the screen image.
  • the fourth deep learning evaluation model 540 can perform image segmentation on the screen image and classify each pixel of the screen image into one of the fourth classes. Through this classification, the fourth deep learning evaluation model 540 can generate the fourth mask. Table 5 below shows examples of the fourth classes.
  • Class 4-1 e.g. 3 or more white spots, screen lines, stains, black spots, bullet damage, etc.
  • Class 4-2 e.g. DL-grade afterimage, DL-grade whitening, etc.
  • Class 4-3 e.g. strong afterimage, 2 or fewer white spots, etc.
  • Class 4-4 e.g. medium afterimage, etc.
  • Class 4-5 e.g. non-electronic devices
  • Class 4-6 e.g. screens of electronic devices
  • the fourth deep learning evaluation model 540 may determine a grade for a defect on the screen of the electronic device based on the fourth mask. For example, if the fourth deep learning evaluation model 540 predicts through the screen image that the screen of the electronic device has at least one of three or more white spots, a screen line, a black spot, and bullet-mark damage, it can determine the grade for the defect on the screen of the electronic device as a D grade (e.g., the D grade in Table 1 above).
  • the fourth deep learning evaluation model 540 can output a score of 7 corresponding to the D grade.
  • If the fourth deep learning evaluation model 540 predicts through the screen image that the screen of the electronic device has a defect belonging to the 4-2 class, the grade for the defect on the screen of the electronic device can be determined as a DL grade (e.g., the DL grade in Table 1 above).
  • the fourth deep learning evaluation model 540 can output a score of 6 corresponding to the DL grade. If the fourth deep learning evaluation model 540 predicts through the screen image that the screen of the electronic device has at least one of a strong afterimage and two or fewer white spots, it can determine the grade for the defect on the screen of the electronic device as a CL grade (e.g., the CL grade in Table 1 above).
  • the fourth deep learning evaluation model 540 can output a score of 4 corresponding to the CL grade. If the fourth deep learning evaluation model 540 predicts through the screen image that there is a medium afterimage on the screen of the electronic device, it can determine the grade for the defect on the screen as a B grade (e.g., the B grade in Table 1 above). The fourth deep learning evaluation model 540 can output a score of 3 corresponding to the B grade. If the fourth deep learning evaluation model 540 predicts through the screen image that the screen of the electronic device is clean, it can determine the grade for the defect on the screen as an A grade (e.g., the A grade in Table 1 above). The fourth deep learning evaluation model 540 may output a score of 1 corresponding to the A grade.
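Taken together, the screen-grade rules above amount to a severity-ordered lookup: the worst applicable grade wins. The sketch below is illustrative only; the class names are hypothetical stand-ins for the Table 5 classes, and the (grade, score) pairs follow the D=7, DL=6, CL=4, B=3, A=1 mapping described above.

```python
# Severity-ordered rules mapping screen defect classes to (grade, score);
# the first matching rule wins.
SCREEN_RULES = [
    ({"white_spots_3_or_more", "screen_line", "black_spot", "bullet_mark"}, ("D", 7)),
    ({"dl_grade_afterimage", "dl_grade_whitening"}, ("DL", 6)),
    ({"strong_afterimage", "white_spots_2_or_fewer"}, ("CL", 4)),
    ({"medium_afterimage"}, ("B", 3)),
]

def grade_screen(detected: set) -> tuple:
    """Return the worst applicable (grade, score) for the screen."""
    for classes, grade_score in SCREEN_RULES:
        if detected & classes:
            return grade_score
    return ("A", 1)  # clean screen

print(grade_screen({"screen_line"}))        # ('D', 7)
print(grade_screen({"medium_afterimage"}))  # ('B', 3)
print(grade_screen(set()))                  # ('A', 1)
```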
  • the value determination module 330 may determine the value of the electronic device based on the result of evaluating the external condition of the electronic device and/or the result of evaluating the internal state of the electronic device.
  • the value determination module 330 may determine the minimum grade among the grades determined by each of the first to fourth deep learning evaluation models 510 to 540 as the final grade for the external condition of the electronic device.
  • Grade A is the highest
  • Grade B+ can be lower than Grade A and higher than Grade B.
  • Grade CL may be lower than Grade B and higher than Grade C.
  • Grade D may be the lowest.
  • For example, suppose that the grade determined by the first deep learning evaluation model 510 is a C grade, the grade determined by the second deep learning evaluation model 520 is a B+ grade, the grade determined by the third deep learning evaluation model 530 is a B+ grade, and the grade determined by the fourth deep learning evaluation model 540 is a CL grade.
  • In this example, the C grade determined by the first deep learning evaluation model 510 is the minimum grade, so the value determination module 330 can determine the final grade for the external condition of the electronic device as a C grade.
  • The lower the grade, the higher the score output by each of the first to fourth deep learning evaluation models 510 to 540.
  • the value determination module 330 may determine the maximum score among the scores output by each of the first to fourth deep learning evaluation models 510 to 540 as the final score for the appearance evaluation of the electronic device.
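Because a lower grade corresponds to a higher score, taking the minimum grade and taking the maximum score are equivalent aggregations. A minimal sketch, using the grade ordering listed above (the placement of DL between C and D matches the score mapping described in this document):

```python
# Grade order from highest (A) to lowest (D), consistent with the
# scores A=1, B+=2, B=3, CL=4, C=5, DL=6, D=7 used above.
GRADE_ORDER = ["A", "B+", "B", "CL", "C", "DL", "D"]

def final_grade(grades):
    """The final grade is the minimum (worst) of the per-model grades;
    max over GRADE_ORDER.index picks the grade furthest down the list."""
    return max(grades, key=GRADE_ORDER.index)

def final_score(scores):
    """Equivalently, the final score is the maximum per-model score."""
    return max(scores)

print(final_grade(["C", "B+", "B+", "CL"]))  # C
print(final_score([5, 2, 2, 4]))             # 5
```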
  • the value determination module 330 may apply weights to the grades (or scores) determined by each of the first to fourth deep learning evaluation models 510 to 540, and may determine the final grade (or final score) for the external condition of the electronic device using the weighted grades (or scores). For example, the value determination module 330 may apply a first weight to the grade (or score) determined by the first deep learning evaluation model 510, a second weight to the grade (or score) determined by the second deep learning evaluation model 520, a third weight to the grade (or score) determined by the third deep learning evaluation model 530, and a fourth weight to the grade (or score) determined by the fourth deep learning evaluation model 540.
  • each of the first to fourth weights may be greater than 0 and less than 1.
  • the value determination module 330 may determine the final grade (or final score) for the external condition of the electronic device by adding up the grades (or scores) to which each of the first to fourth weights are applied.
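The weighted aggregation described above can be sketched as a weighted sum of the per-model scores; this is illustrative only, and the weight values below are hypothetical (each in the open interval (0, 1), as stated above):

```python
def weighted_final_score(scores, weights):
    """Combine per-model scores with per-model weights by a weighted sum."""
    assert len(scores) == len(weights)
    assert all(0 < w < 1 for w in weights)  # each weight in (0, 1)
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical weights for the first to fourth evaluation models
print(weighted_final_score([5, 2, 2, 4], [0.4, 0.1, 0.1, 0.4]))  # 4.0
```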
  • the value determination module 330 may determine a first amount based on the result of evaluating the external condition of the electronic device (e.g., the final grade (or final score) for the external condition of the electronic device), and may determine a second amount based on the result of evaluating the internal state of the electronic device.
  • the value determination module 330 may calculate the price of the electronic device by subtracting the first amount and the second amount from the reference price of the electronic device (e.g., the highest used price of the same type of electronic device). For example, the value determination module 330 may obtain a standard price of an electronic device by linking it with a used market price database.
  • the value determination module 330 may obtain, from a first table in which grades of the external condition and amounts are mapped to each other, the first amount mapped to the final grade for the external condition of the electronic device.
  • the value determination module 330 may obtain a second amount of money mapped to the result of evaluating the internal state of the electronic device from a second table in which the level of the internal state and the amount are mapped to each other.
  • the value determination module 330 may calculate the price of the electronic device by subtracting the first amount and the second amount from the standard amount.
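The price calculation just described (reference price minus the first and second amounts, each looked up from a mapping table) might be sketched as follows. The table contents and currency units are hypothetical, not taken from the disclosure:

```python
# Hypothetical first table: exterior grade -> first amount (deduction)
EXTERIOR_DEDUCTION = {"A": 0, "B+": 20, "B": 40, "CL": 60, "C": 100, "DL": 140, "D": 180}
# Hypothetical second table: internal-state level -> second amount
INTERNAL_DEDUCTION = {"good": 0, "fair": 50, "poor": 120}

def device_price(reference_price, exterior_grade, internal_level):
    """Price = reference price (e.g., highest used price of the same
    model) minus the exterior and internal deductions."""
    first_amount = EXTERIOR_DEDUCTION[exterior_grade]
    second_amount = INTERNAL_DEDUCTION[internal_level]
    return reference_price - first_amount - second_amount

print(device_price(500, "C", "fair"))  # 350
```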
  • the value determination module 330 may transmit the value (eg, price) of the electronic device to the unmanned purchase device 110.
  • the unmanned purchase device 110 may show the value (eg, price) of the electronic device to the user through a display.
  • the value determination module 330 may display the value (eg, price) of the electronic device on the display of the unmanned purchase device 110 .
  • the appearance condition evaluation module 320 may determine whether each of the images (e.g., the front image, back image, side images, and screen image) includes one or more objects that could be mistaken for a defect.
  • the object may include at least one of a first object corresponding to a floating icon on the screen of the electronic device and a second object corresponding to an attachment (eg, protective film, sticker, etc.) of the electronic device.
  • the first object corresponding to the floating icon may represent an object included in the image by photographing the floating icon on the screen of the electronic device.
  • Floating icons may include, but are not limited to, for example, a floating icon for assistive touch, a floating icon for triggering a specific task, etc.
  • the second object corresponding to the attachment of the electronic device may represent an object included in the image by photographing the attachment of the electronic device.
  • the appearance condition evaluation module 320 may perform processing on the object. For example, the appearance condition evaluation module 320 may perform masking processing on the object, but is not limited thereto. The appearance condition evaluation module 320 may perform the appearance condition evaluation based on the image in which the object has been processed, the remaining images that do not include the object, and the deep learning evaluation models 510 to 540. Through the deep learning evaluation models 510 to 540, the appearance condition evaluation module 320 can generate masks predicting the defect state of each evaluation area of the electronic device from the processed image and the remaining images, determine a grade for a defect in each of the evaluation areas based on each generated mask, and determine a final grade for the external appearance of the electronic device through the determined grades.
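Masking processing of a detected object can be sketched as overwriting the object's region before the image is passed to the evaluation models. This is an illustrative sketch; the bounding-box convention (top, left, bottom, right) and the fill value are assumptions:

```python
import numpy as np

def mask_object(image: np.ndarray, bbox: tuple, fill: int = 0) -> np.ndarray:
    """Overwrite the bounding box of a detected object (e.g., a floating
    icon or sticker) so the evaluation models do not see it as a defect."""
    top, left, bottom, right = bbox
    out = image.copy()  # leave the original image untouched
    out[top:bottom, left:right] = fill
    return out

img = np.full((4, 4), 255, dtype=np.uint8)  # toy all-white "screen image"
masked = mask_object(img, (1, 1, 3, 3))
print(masked[1, 1], masked[0, 0])  # 0 255
```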
  • the appearance condition evaluation module 320 may determine that there is no image including the above-described object among images obtained by photographing an electronic device. In this case, as described above, the appearance condition evaluation module 320 may perform appearance condition evaluation based on images and deep learning evaluation models 510 to 540.
  • the appearance condition evaluation module 320 may determine whether, among the images obtained by photographing the electronic device, there are images that cannot be analyzed by one or more of the deep learning evaluation models (hereinafter referred to as "model-unanalyzable images"). For example, the appearance condition evaluation module 320 may determine images in which light reflection exists above a certain level, images in which the camera is out of focus, etc. as model-unanalyzable images. If there is a model-unanalyzable image, the appearance condition evaluation module 320 may request an operator to evaluate the external condition of the electronic device.
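As an illustration of how such images might be flagged, the sketch below combines a focus measure (variance of a 4-neighbour Laplacian) with a simple light-reflection check (fraction of near-saturated pixels). The thresholds and the specific heuristics are hypothetical and not part of the disclosure:

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Focus measure: variance of a 4-neighbour Laplacian over the
    interior pixels. Low values suggest the camera was out of focus."""
    g = gray.astype(float)
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def is_model_unanalyzable(gray, blur_thresh=5.0, glare_frac=0.3):
    """Flag images that are too blurry or contain too much light
    reflection (a large fraction of near-saturated pixels)."""
    too_blurry = laplacian_variance(gray) < blur_thresh
    too_glary = (gray >= 250).mean() > glare_frac
    return too_blurry or too_glary

flat = np.full((16, 16), 128, dtype=np.uint8)        # featureless -> "blurry"
sharp = np.indices((16, 16)).sum(axis=0) % 2 * 128   # 0/128 checkerboard
print(is_model_unanalyzable(flat))   # True
print(is_model_unanalyzable(sharp))  # False
```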
  • the electronic device value evaluation device 130 may evaluate the value of a bar-type electronic device.
  • the electronic device value evaluation device 130 (or the appearance condition evaluation module 320) can evaluate the external condition of the bar-type electronic device based on a plurality of images obtained by photographing the bar-type electronic device and the first to fourth deep learning evaluation models 510 to 540.
  • the electronic device value evaluation device 130 can evaluate the value of electronic devices whose shape can be changed (eg, foldable, rollable, etc.).
  • An electronic device that can change its shape may be changed by manipulation from a first form (e.g., an unfolded form or a collapsed form) to a second form (e.g., a folded form or an expanded form).
  • a foldable electronic device may be in an unfolded form, and its shape may be changed to a folded form through manipulation.
  • the rollable electronic device may be in a collapsed form, and the shape may be changed to an expanded form by manipulation.
  • the collapsed form may represent a state in which the rollable display is rolled in into the device, and the expanded form may represent a state in which the rollable display is rolled out from the device.
  • the electronic device value evaluation device 130 can determine the grade of defects in each evaluation area of the foldable electronic device in the unfolded form based on a plurality of images obtained by photographing the foldable electronic device in the unfolded form and the first to fourth deep learning evaluation models 510 to 540.
  • the unmanned purchase device 110 can change the foldable electronic device in the shooting box from the unfolded form to the folded form. Alternatively, the unmanned purchase device 110 may request the user to change the foldable electronic device from the unfolded form to the folded form and then reinsert the electronic device in the folded form into the unmanned purchase device 110.
  • a foldable electronic device changes from an unfolded form to a folded form
  • the folded portion may form a side surface, and the sub-screen of the foldable electronic device may be activated.
  • the unmanned purchase device 110 may obtain an image (hereinafter referred to as an image of the folded side) by photographing the side surface corresponding to the folded portion of the foldable electronic device through one or more of the plurality of third cameras in the shooting box.
  • the unmanned purchase device 110 may acquire an image (hereinafter referred to as a sub-screen image) by photographing a sub-screen of the foldable electronic device through the first camera in the shooting box.
  • the electronic device value evaluation device 130 (or the appearance condition evaluation module 320) can evaluate the fifth evaluation area (e.g., the side corresponding to the folded portion) of the foldable electronic device based on the image of the folded side and the fifth deep learning evaluation model.
  • the fifth deep learning evaluation model may be a deep learning evaluation model that detects defects in the fifth evaluation area of the foldable electronic device and determines the grade of the detected defect (or fifth evaluation area).
  • the electronic device value evaluation device 130 (or the external condition evaluation module 320) may input an image of the folded side into the fifth deep learning evaluation model.
  • the fifth deep learning evaluation model can generate a fifth mask that predicts the defect state (e.g., at least one of the location of the defect, the type of the defect, and the degree of the defect) of the fifth evaluation area of the foldable electronic device through the image of the folded side.
  • the fifth deep learning evaluation model can determine the grade for defects in the fifth evaluation area of the foldable electronic device based on the fifth mask.
  • the electronic device value evaluation device 130 (or the appearance condition evaluation module 320) can evaluate the sixth evaluation area (e.g., sub-screen) of the foldable electronic device based on the sub-screen image and the sixth deep learning evaluation model.
  • the sixth deep learning evaluation model may be a deep learning evaluation model that detects defects in the sixth evaluation area of the foldable electronic device and determines the grade of the detected defect (or sixth evaluation area).
  • the electronic device value evaluation device 130 (or the external condition evaluation module 320) may input a sub-screen image into the sixth deep learning evaluation model.
  • the sixth deep learning evaluation model can generate a sixth mask that predicts the defect state (e.g., at least one of the location of the defect, the type of the defect, and the degree of the defect) of the sixth evaluation area of the foldable electronic device through the sub-screen image.
  • the sixth deep learning evaluation model can determine the grade for defects in the sixth evaluation area of the foldable electronic device based on the sixth mask.
  • the electronic device value evaluation device 130 (or the appearance condition evaluation module 320) can determine the grade of the defect in the sixth evaluation area (e.g., the sub-screen).
  • the electronic device value evaluation device 130 (or the value determination module 330) can determine the value of the foldable electronic device based on the result of evaluating the external condition of the foldable electronic device (e.g., the grades determined by each of the first to sixth deep learning evaluation models) and/or the result of evaluating the internal state of the foldable electronic device.
  • the electronic device value evaluation device 130 can determine the grade of defects in each evaluation area of the rollable electronic device in the collapsed form based on a plurality of images obtained by photographing the rollable electronic device in the collapsed form and the first to fourth deep learning evaluation models 510 to 540.
  • the unmanned purchase device 110 can change the rollable electronic device in the shooting box from the collapsed form to the expanded form. Alternatively, the unmanned purchase device 110 may request the user to change the rollable electronic device from the collapsed form to the expanded form and then reinsert the electronic device in the expanded form into the unmanned purchase device 110.
  • When a rollable electronic device changes from the collapsed form to the expanded form, the screen and sides can be expanded.
  • the unmanned purchase device 110 may acquire an image (hereinafter referred to as an image of the expanded side) by photographing the expanded side through one or more of the plurality of third cameras in the shooting box.
  • the unmanned purchase device 110 may acquire an image (hereinafter referred to as an image of the expanded screen) by photographing the expanded screen of the electronic device through the first camera in the shooting box.
  • the electronic device value evaluation device 130 (or the appearance condition evaluation module 320) can evaluate the seventh evaluation area (e.g., the expanded side) of the rollable electronic device based on the image of the expanded side and the seventh deep learning evaluation model.
  • the seventh deep learning evaluation model may be a deep learning evaluation model that detects defects in the seventh evaluation area of the rollable electronic device and determines the grade of the detected defect (or seventh evaluation area).
  • the electronic device value evaluation device 130 (or the external condition evaluation module 320) may input an image of the expanded side surface into the seventh deep learning evaluation model.
  • the seventh deep learning evaluation model can generate a seventh mask that predicts the defect state (e.g., at least one of the location of the defect, the type of the defect, and the degree of the defect) of the seventh evaluation area of the rollable electronic device through the image of the expanded side.
  • the seventh deep learning evaluation model can determine the grade of a defect in the seventh evaluation area of the rollable electronic device based on the seventh mask.
  • Alternatively, the electronic device value evaluation device 130 (or the external condition evaluation module 320) can evaluate the seventh evaluation area (e.g., the expanded side) of the rollable electronic device based on the image of the expanded side and the third deep learning evaluation model 530.
  • the electronic device value evaluation device 130 (or the appearance condition evaluation module 320) can evaluate the eighth evaluation area (e.g., the expanded screen) of the rollable electronic device based on the image of the expanded screen and the fourth deep learning evaluation model 540.
  • the electronic device value evaluation device 130 (or the external condition evaluation module 320) may input an image of the expanded screen into the fourth deep learning evaluation model 540.
  • the fourth deep learning evaluation model 540 may be a deep learning evaluation model that generates a mask predicting the defect state of the screen from a given screen image and determines the grade of the screen defect based on the generated mask.
  • the fourth deep learning evaluation model 540 can generate an eighth mask that predicts the defect state (e.g., at least one of the location of the defect, the type of the defect, and the degree of the defect) of the eighth evaluation area of the rollable electronic device through the image of the expanded screen.
  • the fourth deep learning evaluation model 540 may determine the grade of a defect in the eighth evaluation area of the rollable electronic device based on the eighth mask.
  • the electronic device value evaluation device 130 may determine the value of the rollable electronic device based on the result of evaluating the external state of the rollable electronic device (e.g., the grades determined by each of the first to fourth deep learning evaluation models and the seventh deep learning evaluation model) and/or the result of evaluating the internal condition of the rollable electronic device.
  • the unmanned purchase device 110 may receive a wearable device (eg, a smart watch) from a user.
  • the electronic device value evaluation device 130 may store deep learning evaluation models that can evaluate the appearance (e.g., front, back, side, screen) of the wearable device.
  • the electronic device value evaluation device 130 may perform an external condition evaluation of the wearable device based on images obtained by photographing the wearable device and deep learning evaluation models.
  • the electronic device value evaluation device 130 may determine the value of the wearable device based on the result of evaluating the external state of the wearable device and the result of evaluating the internal state of the wearable device.
  • Figure 7 is a flowchart explaining a method of evaluating the value of an electronic device according to an embodiment.
  • the electronic device value evaluation device 130 can perform an external condition evaluation of the electronic device based on a plurality of images obtained by photographing the electronic device and a plurality of deep learning evaluation models.
  • the electronic device value evaluation device 130 may generate masks that predict the defect state of each evaluation area of the electronic device from the images through the deep learning evaluation models 510 to 540, and may determine the grade of defects in each of the evaluation areas of the electronic device based on each generated mask.
  • the electronic device value evaluation device 130 may determine the final grade of the external condition of the electronic device through each determined grade.
  • the shape of the electronic device may change.
  • the electronic device valuation device 130 may change the electronic device from a first form (eg, unfolded form or collapsed form) to a second form (eg, folded form or expanded form).
  • the electronic device value evaluation device 130 may request the user to change the electronic device from the first form to the second form and then reinsert the electronic device in the second form into the unmanned purchase device 110.
  • the electronic device value evaluation device 130 may use additional deep learning evaluation models other than the deep learning evaluation models 510 to 540 (e.g., at least one of the fifth deep learning evaluation model, the sixth deep learning evaluation model, and the seventh deep learning evaluation model described above) to generate, from the image obtained by photographing the electronic device in the changed form, a mask predicting the defect state of the evaluation area of the electronic device in the changed form (e.g., at least one of the fifth mask, the sixth mask, and the seventh mask described above).
  • the electronic device value evaluation apparatus 130 may determine the grade of a defect in the evaluation area of the electronic device in the changed form based on the mask that predicts the defect state of that evaluation area.
  • Each of the fifth to seventh deep learning evaluation models described above can perform image segmentation on a given input image.
  • the electronic device value evaluation device 130 may determine the value of the electronic device based on the results of the external state evaluation of the electronic device and the results of the internal state evaluation of the electronic device.
  • Matters described with reference to FIGS. 1 to 6 may be applied to the electronic device value evaluation method of FIG. 7 .
  • FIGS. 8 to 13 are diagrams explaining an exception case processing method according to an embodiment.
  • At least some or all of the steps shown in FIG. 8 may be included in step 710 of FIG. 7.
  • the steps shown in FIG. 8 may be performed by the electronic device value evaluation device 130 (or the appearance condition evaluation module 320).
  • the electronic device value evaluation device 130 may determine whether an exception case exists in the images.
  • the exception case may include at least one of a first object corresponding to a floating icon of the electronic device, a second object corresponding to an attachment of the electronic device, a first case in which shooting conditions (e.g., camera focus and/or lighting brightness) are not satisfied, and a second case corresponding to a situation in which a screen not specified for the electronic device is turned on, or a combination thereof.
  • the image 900 shown in FIG. 9 is an example of a screen image of an electronic device (e.g., iPhone®), and the image 900 includes a first object 910 corresponding to a floating icon (e.g., assistive touch).
  • the first camera in the shooting box of the unmanned purchase device 110 may acquire the image 900 by photographing a monochromatic screen (e.g., a white screen) of an electronic device with a floating icon (e.g., assistive touch).
  • Floating icons do not have a fixed location on the screen and can exist in various locations.
  • If there is no special processing for the first object 910, the electronic device value evaluation device 130 (or the fourth deep learning evaluation model 540) may recognize (or mistake) the first object 910 as a screen defect (or screen breakage).
  • the electronic device value evaluation device 130 may detect the first object 910 in the image 900 before inputting the image 900 into the fourth deep learning evaluation model 540.
  • the electronic device value evaluation device 130 may detect the first object 910 in the image 900 using a template matching algorithm. The method of detecting the first object 910 is not limited to the template matching algorithm.
  • the electronic device value evaluation device 130 may perform masking processing (or filtering processing) on the first object 910, as will be described later.
  • the electronic device value evaluation device 130 may prevent the fourth deep learning evaluation model 540 from recognizing the first object 910 as a screen defect.
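As an illustration of the detection-and-masking step described above, the sketch below implements a brute-force template match (equivalent in spirit to OpenCV's `cv2.matchTemplate` with the `TM_SQDIFF` measure) and then fills the matched region with the monochromatic screen colour. The function names and the white fill value are illustrative assumptions, not part of the patent.

```python
import numpy as np

def find_template(image, template):
    """Slide the template over the image and return the top-left corner
    of the best match (minimum sum of squared differences)."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = float("inf"), (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            score = np.sum((patch - template) ** 2)
            if score < best:
                best, best_pos = score, (y, x)
    return best_pos

def mask_region(image, top_left, shape, fill=255):
    """Mask the detected region so the screen-defect model ignores it,
    filling with the designated monochromatic screen colour."""
    y, x = top_left
    h, w = shape
    out = image.copy()
    out[y:y + h, x:x + w] = fill
    return out
```

In practice the template would be a stored reference image of the floating icon (e.g., assistive touch), and the fill value would match the designated monochromatic screen.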
  • the image 1000 shown in FIG. 10 is an example of a screen image of an electronic device (e.g., Samsung Electronics' Galaxy smartphone), and a first object 1010 corresponding to a floating icon may exist in the image 1000.
  • the first camera in the photographing box of the unmanned embedded device 110 may acquire the image 1000 by photographing a monochromatic screen (e.g., a white screen) of an electronic device with a floating icon. Accordingly, a first object 1010 corresponding to a floating icon may exist in the image 1000. Without special processing for the first object 1010, the electronic device value evaluation device 130 (or the fourth deep learning evaluation model 540) may misrecognize the first object 1010 as a screen defect (or screen breakage).
  • the electronic device value evaluation device 130 may detect the first object 1010 in the image 1000 using a template matching algorithm before inputting the image 1000 into the fourth deep learning evaluation model 540, and may perform masking processing (or filtering processing) on the first object 1010. In this way, the first object 1010 can be prevented from being recognized as a screen defect.
  • the image 1100 shown in FIG. 11 is an example of a rear image of an electronic device, and a second object 1110 corresponding to a sticker attached to the electronic device may exist in the image 1100.
  • the second camera in the photographing box of the unmanned embedded device 110 can acquire the image 1100 by photographing the rear of the electronic device to which the sticker is attached. Accordingly, a second object 1110 corresponding to a sticker may exist in the image 1100. Without any special processing on the second object 1110, it may be difficult for the electronic device value evaluation device 130 (or the second deep learning evaluation model 520) to accurately evaluate the back of the electronic device.
  • the electronic device value evaluation device 130 may detect the second object 1110 in the image 1100 before inputting the image 1100 into the second deep learning evaluation model 520.
  • the electronic device value evaluation device 130 may perform blur processing on the image 1100, extract (or use) pixel values within a certain threshold from the blurred image 1100, and determine the location of the second object 1110 by finding the contour of the extracted region.
  • the electronic device value evaluation device 130 may perform masking processing (or filtering processing) on the second object 1110.
  • the electronic device value evaluation device 130 may prevent the second object 1110 from being recognized as a defect on the back of the electronic device.
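The blur/threshold/contour localisation of the sticker, followed by masking, can be sketched as below. Here the "contour" step is reduced to a bounding box of the thresholded pixels, and the 3×3 box blur and the threshold value are illustrative assumptions (the patent does not give concrete values for this step).

```python
import numpy as np

def locate_sticker(back_image, thresh=128):
    """Blur the rear image, keep pixels darker than `thresh` (the sticker
    against a brighter casing), and return the bounding box
    (y0, x0, y1, x1) of that region, or None if nothing is found."""
    h, w = back_image.shape
    # simple 3x3 box blur to suppress texture/noise before thresholding
    padded = np.pad(back_image.astype(float), 1, mode="edge")
    blurred = sum(
        padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)
    ) / 9.0
    ys, xs = np.where(blurred < thresh)
    if ys.size == 0:
        return None  # no candidate sticker region
    return ys.min(), xs.min(), ys.max() + 1, xs.max() + 1

def mask_box(image, box, fill):
    """Fill the detected box so the rear-evaluation model ignores it."""
    y0, x0, y1, x1 = box
    out = image.copy()
    out[y0:y1, x0:x1] = fill
    return out
```

The same masking helper can then be applied before the back image is passed to the second deep learning evaluation model.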
  • the image 1200 shown in FIG. 12 and the image 1300 shown in FIG. 13 are examples of screen images of an electronic device.
  • A screen not designated for the electronic device (e.g., the electronic device's home screen, a screen displaying text, etc.) may be turned on on the electronic device.
  • If the first camera captures the screen of the electronic device while a screen not designated for the electronic device is turned on, for example, the image 1200 of FIG. 12 may be obtained.
  • In the image 1300 of FIG. 13, a screen designated for the electronic device (e.g., a monochromatic screen) is turned on.
  • If the first camera captures the screen of the electronic device while the designated screen is turned on, for example, the image 1300 of FIG. 13 may be obtained. It may be difficult for the electronic device value evaluation device 130 (or the fourth deep learning evaluation model 540) to accurately determine whether there is a defect in the screen of the electronic device from the image 1200 of FIG. 12, whereas it can accurately determine whether there is a defect from the image 1300 of FIG. 13.
  • the electronic device value evaluation device 130 may crop a partial area 1210 of the screen of the electronic device in the image 1200 to determine whether the screen designated for the electronic device is turned on.
  • the electronic device value evaluation device 130 may convert the color of the cropped area 1210 from a first color (e.g., RGB) to a second color (e.g., HSV).
  • the electronic device value evaluation device 130 can check the saturation information and brightness information of the color-converted area, and if the combined value of the saturation information and brightness information is equal to or greater than a certain value (e.g., 70000), may determine that the second case, corresponding to a situation in which a screen not designated for the electronic device is turned on, has occurred.
  • In the example shown in FIG. 13, the electronic device value evaluation device 130 may crop a partial area 1310 of the screen of the electronic device in the image 1300, convert the color of the cropped area 1310 from the first color to the second color, and determine that the second case has not occurred if the sum of the saturation information and brightness information in the color-converted area is less than the certain value.
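One plausible reading of the saturation-plus-brightness check is sketched below. The RGB-to-HSV conversion uses the usual 0-255 channel scaling, and the way the two channels are combined into a single score compared against the example threshold 70000 is an assumption; the patent gives only the threshold, not the exact aggregation.

```python
import numpy as np

def undesignated_screen_on(crop_rgb, threshold=70000):
    """Convert a cropped screen region from RGB to HSV and flag the
    'undesignated screen' exception case when the summed saturation
    plus brightness over the crop exceeds `threshold`."""
    rgb = crop_rgb.astype(float) / 255.0
    cmax = rgb.max(axis=-1)
    cmin = rgb.min(axis=-1)
    value = cmax  # HSV V channel
    # HSV S channel: (max - min) / max, with S = 0 where max == 0
    sat = np.where(cmax > 0, (cmax - cmin) / np.where(cmax > 0, cmax, 1), 0.0)
    score = float(np.sum(sat * 255) + np.sum(value * 255))
    return score > threshold
```

A saturated, colourful crop (a home screen) drives the score up, while a designated monochromatic white screen contributes zero saturation; the crop size interacts with the fixed threshold, which is one reason this should be read as a sketch rather than the exact decision rule.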
  • the electronic device value evaluation device 130 may determine whether there is, among the images, a first case (e.g., out-of-focus) in which the shooting conditions (e.g., camera focus) are not satisfied.
  • the electronic device value evaluation device 130 can crop an area containing the electronic device in a specific image, detect edges in the cropped area, and determine whether the specific image is out of focus through the edge detection result.
  • the electronic device value evaluation device 130 may calculate the variance of the Laplacian of the cropped area, and if the calculated variance is less than a threshold (e.g., 250), may determine that there is a first case (e.g., out-of-focus) in which the shooting conditions are not satisfied.
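The variance-of-Laplacian focus measure described above (in OpenCV terms, `cv2.Laplacian(img, cv2.CV_64F).var()`) might look like the following sketch. The 4-neighbour kernel and grayscale input are assumptions; the example threshold 250 comes from the text.

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the Laplacian response, a standard sharpness measure:
    sharp edges produce large second-derivative responses."""
    img = gray.astype(float)
    # 4-neighbour Laplacian kernel [[0,1,0],[1,-4,1],[0,1,0]]
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def is_out_of_focus(cropped_gray, threshold=250.0):
    """Flag the shooting-condition exception case when the Laplacian
    variance of the cropped device region falls below the threshold."""
    return laplacian_variance(cropped_gray) < threshold
```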
  • At least one of the images obtained by photographing an electronic device may be obtained by photographing while the lighting brightness (or lighting intensity) is above a certain level.
  • the electronic device value evaluation device 130 may determine whether there is, among the images, a first case (e.g., a case where the lighting brightness exceeds a certain level) in which the shooting conditions (e.g., lighting brightness) are not satisfied. For example, the electronic device value evaluation device 130 may calculate the average of the pixel values of a specific image. If the calculated average exceeds a certain value (e.g., 120), the electronic device value evaluation device 130 may determine that there is a first case (e.g., a case in which the lighting brightness exceeds a certain level) in which the shooting conditions are not satisfied.
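The lighting check reduces to a mean over pixel values compared against the example value 120; a minimal sketch, with the grayscale input as an assumption:

```python
import numpy as np

def lighting_too_bright(gray, threshold=120.0):
    """Flag the lighting exception case when the mean pixel value of
    the image exceeds the example threshold of 120 from the text."""
    return float(gray.mean()) > threshold
```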
  • the electronic device value evaluation device 130 may determine whether at least one of, or a combination of, the following exists in the images: the first object, the second object, the first case in which shooting conditions (e.g., camera focus and/or lighting brightness) are not satisfied, and the second case corresponding to a situation in which a screen not designated for the electronic device is turned on.
  • the electronic device value evaluation device 130 may perform appearance condition evaluation in step 803. Regarding the appearance condition evaluation in step 803, the appearance condition evaluation described above can be applied, so detailed description is omitted.
  • the electronic device value evaluation device 130 may determine whether the exception case can be processed in step 805. For example, the electronic device value evaluation device 130 may determine that image processing of the first object and the second object is possible, and that image processing of the first case and the second case is not possible.
  • the electronic device value evaluation device 130 may process the exception case in step 807. For example, when the first object exists in the screen image of the electronic device, the electronic device value evaluation device 130 may perform masking processing (or filtering processing) on the first object. When a first object exists in the screen image of the electronic device and a second object exists in the back image of the electronic device, the electronic device value evaluation device 130 may perform masking processing (or filtering processing) on each of the first and second objects.
  • the electronic device value evaluation device 130 may perform an appearance condition evaluation using the remaining images in which no exception cases exist among the images, one or more images in which the exception cases have been processed, and deep learning evaluation models.
  • the fourth deep learning evaluation model 540 may perform image segmentation on the screen image with the first object masked.
  • the fourth deep learning evaluation model 540 can classify each pixel of the screen image in which the first object has been masked into one of the fourth classes in Table 5 above, and can generate a fourth mask through this classification.
  • Table 5 above may further include the 4-7 class (e.g., exception screen of the electronic device), and the fourth deep learning evaluation model 540 may classify the masked pixels into the 4-7 class.
  • the fourth deep learning evaluation model 540 may determine a grade for a defect on the screen of an electronic device based on the fourth mask. If a second object exists in the back image of the electronic device and the second object is masked, the second deep learning evaluation model 520 may perform image segmentation on the back image with the second object masked.
  • the second deep learning evaluation model 520 can classify each pixel of the back image in which the second object is masked into one of the second classes in Table 3 above, and can generate a second mask through this classification.
  • Table 3 above may further include the 2-4 class (e.g., exception rear of the electronic device), and the second deep learning evaluation model 520 may classify the masked pixels into the 2-4 class.
  • the second deep learning evaluation model 520 may determine the grade of defects on the back of the electronic device based on the second mask.
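The way both evaluation models can fold masked pixels into a dedicated exception class (the 4-7 screen class and 2-4 rear class mentioned above) can be sketched as a per-pixel argmax with an override. The array shapes and the concrete class indices below are illustrative, not the patent's tables.

```python
import numpy as np

def build_mask(logits, masked_region, exception_class):
    """Per-pixel argmax segmentation, with pixels inside the masked
    (filtered) region forced to the dedicated exception class so they
    are never counted as defects."""
    labels = logits.argmax(axis=-1)        # (H, W) class labels
    labels[masked_region] = exception_class
    return labels
```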
  • When there is an exception case that cannot be processed through image processing (e.g., at least one of the first case and the second case), the electronic device value evaluation device 130 may request an operator to handle the exception case.
  • the electronic device value evaluation device 130 may determine that operator processing is necessary when at least one of the first case and the second case exists. If there is a first case in which the shooting conditions are not satisfied (e.g., a case where the lighting brightness exceeds a certain level) and/or a second case in which a screen not designated for the electronic device is turned on, the operator may be instructed to rephotograph the electronic device using the unmanned embedded device 110, or the operator may directly evaluate the external condition of the electronic device.
  • FIG. 14 is a block diagram illustrating the configuration of a computing device for training a deep learning model according to an embodiment.
  • a computing device 1400 that trains a deep learning model may include a memory 1410 and a processor 1420.
  • Memory 1410 may store one or more deep learning models.
  • the deep learning model may be based on the deep neural network described in FIG. 4. Deep learning models can perform image segmentation on given input images.
  • the processor 1420 can train a deep learning model.
  • the processor 1420 can input a learning image for a defect into a deep learning model and generate a mask predicting the state of the defect from the learning image through the deep learning model.
  • the processor 1420 may calculate the similarity between the generated mask and the labeled mask for the defect.
  • FIGS. 15A to 15C show examples of the generated mask and the label mask, respectively.
  • the target mask may represent a label mask and the prediction mask may represent a mask generated by a deep learning model.
  • the processor 1420 may update at least one parameter in the deep learning model when the calculated similarity is less than a threshold. If the calculated similarity is greater than or equal to a threshold, the processor 1420 may end training for the deep learning model.
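The predict/compare/update loop described above can be sketched as follows. The patent does not name the similarity metric or the parameter-update rule, so the Dice coefficient and the abstract `model_step` callback (a stand-in for one forward pass plus update, assumed to improve as training proceeds) are assumptions.

```python
import numpy as np

def dice_similarity(pred_mask, label_mask):
    """One common mask-similarity measure (Dice coefficient) between
    two boolean masks; 1.0 means identical."""
    inter = np.logical_and(pred_mask, label_mask).sum()
    total = pred_mask.sum() + label_mask.sum()
    return 1.0 if total == 0 else 2.0 * inter / total

def train_until_similar(model_step, label_mask, threshold=0.9, max_iters=100):
    """Repeat: predict a mask and compare it with the label mask,
    continuing to update parameters while similarity < threshold;
    stop (end training) once the similarity reaches the threshold.
    Returns the iteration at which training ended."""
    for it in range(max_iters):
        pred = model_step(it)
        if dice_similarity(pred, label_mask) >= threshold:
            return it  # similarity >= threshold: training ends
    return max_iters
```

The same loop applies unchanged to each of the first through seventh models; only the learning images and label masks differ.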
  • the processor 1420 may use the first deep learning model to generate, from the first learning image, a first mask that predicts the defect state of the front of the electronic device.
  • the first deep learning model may perform image segmentation on the first learning image to generate a first mask that predicts the defect state of the front of the electronic device from the first learning image.
  • the processor 1420 may calculate a first similarity between the first mask and the label mask for the first defect.
  • the processor 1420 may update at least one parameter in the first deep learning model when the calculated first similarity is less than a threshold. If the calculated first similarity is greater than or equal to the threshold, the processor 1420 may end training for the first deep learning model.
  • the first deep learning model for which training has been completed can be mounted on the electronic device value evaluation device 130 as the first deep learning evaluation model 510.
  • the processor 1420 may use the second deep learning model to generate, from the second learning image, a second mask that predicts the defect state of the back of the electronic device.
  • the second deep learning model may perform image segmentation on the second learning image to generate a second mask that predicts the defect state of the back of the electronic device from the second learning image.
  • the processor 1420 may calculate a second similarity between the second mask and the label mask for the second defect.
  • the processor 1420 may update at least one parameter in the second deep learning model when the calculated second similarity is less than the threshold. If the calculated second similarity is greater than or equal to the threshold, the processor 1420 may end training for the second deep learning model.
  • the second deep learning model for which training has been completed can be mounted on the electronic device value evaluation device 130 as the second deep learning evaluation model 520.
  • the processor 1420 may use the third deep learning model to generate, from the third learning image, a third mask that predicts the defect state of the side (or corner) of the electronic device.
  • the third deep learning model may perform image segmentation on the third learning image to generate a third mask that predicts the defect state of the side (or corner) of the electronic device from the third learning image.
  • the processor 1420 may calculate a third similarity between the third mask and the label mask for the third defect.
  • the processor 1420 may update at least one parameter in the third deep learning model when the calculated third similarity is less than the threshold. If the calculated third similarity is greater than or equal to the threshold, the processor 1420 may end training for the third deep learning model.
  • the third deep learning model for which training has been completed can be mounted on the electronic device value evaluation device 130 as the third deep learning evaluation model 530.
  • the processor 1420 may use the fourth deep learning model to generate, from the fourth learning image, a fourth mask that predicts the defect state of the screen of the electronic device.
  • the fourth deep learning model may perform image segmentation on the fourth learning image to generate a fourth mask that predicts the defect state of the screen of the electronic device from the fourth learning image.
  • the processor 1420 may calculate a fourth similarity between the fourth mask and the label mask for the fourth defect.
  • the processor 1420 may update at least one parameter in the fourth deep learning model when the calculated fourth similarity is less than the threshold. If the calculated fourth similarity is greater than or equal to the threshold, the processor 1420 may end training for the fourth deep learning model.
  • the fourth deep learning model for which training has been completed can be mounted on the electronic device value evaluation device 130 as the fourth deep learning evaluation model 540.
  • when the processor 1420 inputs the fifth learning image (e.g., an image in which the folded side with the fifth defect is photographed) to the fifth deep learning model, the processor 1420 may use the fifth deep learning model to generate, from the fifth learning image, a fifth mask that predicts the defect state of the side corresponding to the folded portion of the foldable electronic device.
  • the processor 1420 may calculate a fifth similarity between the fifth mask and the label mask for the fifth defect.
  • the processor 1420 may update at least one parameter in the fifth deep learning model when the calculated fifth similarity is less than the threshold. If the calculated fifth similarity is greater than or equal to the threshold, the processor 1420 may end training for the fifth deep learning model.
  • the fifth deep learning model for which training has been completed can be mounted on the electronic device value evaluation device 130 as a fifth deep learning evaluation model.
  • the processor 1420 may use the sixth deep learning model to generate, from the sixth learning image, a sixth mask that predicts the defect state of the sub-screen of the foldable electronic device.
  • the processor 1420 may calculate a sixth similarity between the sixth mask and the label mask for the sixth defect.
  • the processor 1420 may update at least one parameter in the sixth deep learning model when the calculated sixth similarity is less than the threshold. If the calculated sixth similarity is greater than or equal to the threshold, the processor 1420 may end training for the sixth deep learning model.
  • the sixth deep learning model for which training has been completed can be mounted on the electronic device value evaluation device 130 as a sixth deep learning evaluation model.
  • the processor 1420 may use the seventh deep learning model to generate, from the seventh learning image, a seventh mask that predicts the defect state of the extended side of the rollable electronic device.
  • the processor 1420 may calculate a seventh similarity between the seventh mask and the label mask for the seventh defect.
  • the processor 1420 may update at least one parameter in the seventh deep learning model when the calculated seventh similarity is less than the threshold. If the calculated seventh similarity is greater than or equal to the threshold, the processor 1420 may end training for the seventh deep learning model.
  • the seventh deep learning model for which training has been completed can be mounted on the electronic device value evaluation device 130 as the seventh deep learning evaluation model.
  • the processor 1420 may train the third deep learning model using the seventh learning image and allow the third deep learning model to generate the seventh mask.
  • the processor 1420 may train each of the deep learning models based on learning images taken of a wearable device with defects in its exterior (e.g., front, back, side, and screen), thereby creating deep learning evaluation models for the wearable device.
  • FIG. 16 is a flowchart explaining a deep learning model training method of a computing device according to an embodiment.
  • the computing device 1400 may input a training image for a defect into a deep learning model.
  • the computing device 1400 may generate a mask predicting the state of a defect from a learning image through a deep learning model.
  • the computing device 1400 may calculate the similarity between the generated mask and the label mask for the defect.
  • the computing device 1400 may determine whether the calculated similarity is less than a threshold.
  • the computing device 1400 may update at least one parameter in the deep learning model in step 1650.
  • the computing device 1400 may repeatedly perform steps 1610 to 1640.
  • the computing device 1400 may end training for the deep learning model in step 1660.
  • Contents described with reference to FIGS. 1 to 15 can be applied to the deep learning model training method of FIG. 16.
  • the embodiments described above may be implemented with hardware components, software components, and/or a combination of hardware components and software components.
  • The devices, methods, and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as, for example, a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions.
  • The processing device may execute an operating system (OS) and software applications running on the operating system. Additionally, the processing device may access, store, manipulate, process, and generate data in response to the execution of software.
  • For convenience of description, a single processing device may be described as being used; however, those skilled in the art will understand that a processing device may include multiple processing elements and/or multiple types of processing elements.
  • a processing device may include multiple processors or one processor and one controller. Additionally, other processing configurations, such as parallel processors, are possible.
  • Software may include a computer program, code, instructions, or a combination of one or more of these, and may configure a processing device to operate as desired or may command a processing device independently or collectively.
  • Software and/or data may be permanently or temporarily embodied in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or in a transmitted signal wave, so as to be interpreted by a processing device or to provide instructions or data to a processing device.
  • Software may be distributed over networked computer systems and stored or executed in a distributed manner.
  • Software and data may be stored on a computer-readable recording medium.
  • the method according to the embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium.
  • a computer-readable medium may store program instructions, data files, data structures, etc., singly or in combination, and the program instructions recorded on the medium may be specially designed and constructed for the embodiment or may be known and available to those skilled in the art of computer software.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tapes, optical media such as CD-ROMs and DVDs, and magneto-optical media such as floptical disks.
  • Examples of program instructions include machine language code, such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter, etc.
  • the hardware devices described above may be configured to operate as one or multiple software modules to perform the operations of the embodiments, and vice versa.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Accounting & Taxation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Evolutionary Computation (AREA)
  • Game Theory and Decision Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)

Abstract

A method for evaluating the external condition and value of an electronic device is disclosed. According to one embodiment, it is determined whether an exception case is present in a plurality of images obtained by photographing an electronic device; the external condition of the electronic device is evaluated using deep learning evaluation models and the images if it is determined that the exception case is not present in the images; it is determined whether the exception case can be processed if it is determined that a target image in which the exception case is present exists among the images; the exception case in the target image is processed if it is determined that the exception case can be processed; and the external condition may be evaluated using the remaining images excluding the target image from the images, the target image in which the exception case has been processed, and the deep learning evaluation models.
PCT/KR2023/002425 2022-03-29 2023-02-21 Procédé d'évaluation d'état extérieur et de valeur de dispositif électronique, et appareil d'évaluation de valeur de dispositif électronique WO2023191312A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2022-0038459 2022-03-29
KR20220038459 2022-03-29
KR1020220098681A KR20230140325A (ko) 2022-03-29 2022-08-08 전자 기기의 외관 상태 평가 및 가치 평가 방법과 전자 기기 가치 평가 장치
KR10-2022-0098681 2022-08-08

Publications (1)

Publication Number Publication Date
WO2023191312A1 true WO2023191312A1 (fr) 2023-10-05

Family

ID=88203094

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/002425 WO2023191312A1 (fr) 2022-03-29 2023-02-21 Procédé d'évaluation d'état extérieur et de valeur de dispositif électronique, et appareil d'évaluation de valeur de dispositif électronique

Country Status (1)

Country Link
WO (1) WO2023191312A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190107594A (ko) * 2018-03-12 2019-09-20 (주)금강시스템즈 중고 단말기의 가치평가를 위한 외관 촬영 분석 시스템
KR20190116876A (ko) * 2018-04-05 2019-10-15 지엘에스이 주식회사 휴대폰 자동 매입장치
KR20200115308A (ko) * 2019-03-29 2020-10-07 민팃(주) 전자 기기 가치 평가 시스템
US20210192484A1 (en) * 2019-12-18 2021-06-24 Ecoatm, Llc Systems and methods for vending and/or purchasing mobile phones and other electronic devices
KR20210127199A (ko) * 2019-02-18 2021-10-21 에코에이티엠, 엘엘씨 전자 디바이스의 신경망 기반의 물리적 상태 평가, 및 관련된 시스템 및 방법

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190107594A (ko) * 2018-03-12 2019-09-20 (주)금강시스템즈 중고 단말기의 가치평가를 위한 외관 촬영 분석 시스템
KR20190116876A (ko) * 2018-04-05 2019-10-15 지엘에스이 주식회사 휴대폰 자동 매입장치
KR20210127199A (ko) * 2019-02-18 2021-10-21 에코에이티엠, 엘엘씨 전자 디바이스의 신경망 기반의 물리적 상태 평가, 및 관련된 시스템 및 방법
KR20200115308A (ko) * 2019-03-29 2020-10-07 민팃(주) 전자 기기 가치 평가 시스템
US20210192484A1 (en) * 2019-12-18 2021-06-24 Ecoatm, Llc Systems and methods for vending and/or purchasing mobile phones and other electronic devices

Similar Documents

Publication Publication Date Title
TWI787296B (zh) 光學檢測方法、光學檢測裝置及光學檢測系統
WO2018143550A1 (fr) Appareil de notification de date d'expiration d'aliments stockés par une intelligence artificielle de lecture de caractères dans un réfrigérateur et procédé associé
WO2020238256A1 (fr) Dispositif et procédé de détection d'endommagement basé sur une segmentation insuffisante
WO2014058248A1 (fr) Appareil de contrôle d'images pour estimer la pente d'un singleton, et procédé à cet effet
US11947345B2 (en) System and method for intelligently monitoring a production line
WO2021215730A1 (fr) Programme informatique, procédé et dispositif pour générer une image de défaut virtuel à l'aide d'un modèle d'intelligence artificielle généré sur la base d'une entrée d'utilisateur
WO2019132131A1 (fr) Système électro-optique d'analyse d'image à longueurs d'onde multiples permettant de détecter une victime et un vaisseau d'accident
WO2019132566A1 (fr) Procédé de génération automatique d'image à profondeurs multiples
WO2019009664A1 (fr) Appareil pour optimiser l'inspection de l'extérieur d'un objet cible et procédé associé
WO2020246655A1 (fr) Procédé de reconnaissance de situation et dispositif permettant de le mettre en œuvre
WO2020004749A1 (fr) Appareil et procédé permettant à un équipement d'apprendre, à l'aide d'un fichier vidéo
WO2023120831A1 (fr) Procédé de désidentification et programme informatique enregistré sur un support d'enregistrement en vue de son exécution
WO2020091337A1 (fr) Appareil et procédé d'analyse d'image
WO2023022537A1 (fr) Système de détection de défaut de disques pour véhicules basé sur l'ia
WO2020032506A1 (fr) Système de détection de vision et procédé de détection de vision l'utilisant
WO2018131737A1 (fr) Dispositif d'inspection de panneau défectueux
WO2023191312A1 (fr) Procédé d'évaluation d'état extérieur et de valeur de dispositif électronique, et appareil d'évaluation de valeur de dispositif électronique
WO2022158628A1 (fr) Système de détermination de défaut dans un panneau d'affichage en fonction d'un modèle d'apprentissage automatique
WO2019160325A1 (fr) Dispositif électronique et son procédé de commande
WO2024076223A1 (fr) Dispositif d'évaluation de valeur d'appareil électronique et son procédé de fonctionnement
WO2023163476A1 (fr) Procédé et dispositif d'évaluation de la valeur de dispositif électronique, et procédé de formation du modèle d'apprentissage profond
KR20230127121A (ko) 전자 기기 가치 평가 방법 및 장치와 딥러닝 모델 트레이닝 방법
WO2023153812A1 (fr) Dispositif électronique de détection d'objet et son procédé de commande
WO2023177105A1 (fr) Boîte de capture servant à capturer un dispositif électronique, et dispositif d'achat libre service la comprenant
WO2024096435A1 (fr) Procédé d'évaluation et d'estimation de condition d'aspect de dispositif électronique, et dispositif d'estimation de dispositif électronique

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23781171

Country of ref document: EP

Kind code of ref document: A1