CN114155260A - Appearance image matting model training method and matting method for recycling detection - Google Patents


Info

Publication number
CN114155260A
CN114155260A
Authority
CN
China
Prior art keywords
matting, image, training, model, detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111493304.2A
Other languages
Chinese (zh)
Inventor
田寨兴
许锦屏
余卫宇
廖伟权
刘嘉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Epbox Information Technology Co ltd
Original Assignee
Guangzhou Epbox Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Epbox Information Technology Co ltd filed Critical Guangzhou Epbox Information Technology Co ltd
Priority to CN202111493304.2A priority Critical patent/CN114155260A/en
Publication of CN114155260A publication Critical patent/CN114155260A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR SUCH PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/20: Administration of product repair or maintenance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a matting model training method and a matting method for appearance images in recycling detection. After a training image sample is acquired, matting processing is performed on it to obtain a matting result. A convolutional neural network is then established, the corresponding channel image is determined from the matting result, and finally the matting model is trained on the channel image. On this basis, the trained matting model can output an optimal channel image, completing the matting operation in a non-standard environment, removing image interference introduced by the environment, and improving the efficiency and accuracy of recycling detection.

Description

Appearance image matting model training method and matting method for recycling detection
Technical Field
The invention relates to the technical field of recycling detection, and in particular to an appearance image matting model training method and a matting method for recycling detection.
Background
With the development of electronic product technology, a wide range of intelligent devices has emerged, such as smartphones, notebook computers, and tablet computers. With the rapid development of the economy and of technology, the popularization and replacement of intelligent devices is also accelerating. Taking the smartphone as an example, the arrival of the 5G era has sped up its generational turnover. In this iterative process, effective recycling is one of the main means of utilizing the residual value of intelligent devices, and it reduces both chemical pollution of the environment and waste.
Accordingly, various recycling channels, such as recycling machines or after-sales service, have been built around the device recycling process. Within that process, appearance detection is an extremely important step. Taking mobile phone recycling as an example, the environment may introduce considerable interference when device appearance images are collected for appearance detection. When a phone is photographed in a non-standard environment, many external factors interfere, and background images, background colors, or background objects cause large errors in the subsequent image damage-detection algorithm.
Disclosure of Invention
In view of the above, it is necessary to provide an appearance image matting model training method and a matting method for recycling detection, so as to overcome the shortcomings of appearance detection performed in a non-standard environment during recycling detection.
An appearance image matting model training method for recycling detection comprises the following steps:
acquiring a training image sample;
performing matting processing on the training image sample to obtain a matting result;
establishing a convolutional neural network, and determining the corresponding channel image from the matting result;
training a matting model on the channel image, where the matting model is used to output the optimal channel image.
With this appearance image matting model training method for recycling detection, after a training image sample is acquired, matting processing is performed on it to obtain a matting result. A convolutional neural network is then established, the corresponding channel image is determined from the matting result, and finally the matting model is trained on the channel image. On this basis, the trained matting model can output an optimal channel image, completing the matting operation in a non-standard environment, removing image interference introduced by the environment, and improving the efficiency and accuracy of recycling detection.
In one embodiment, the process of performing matting processing on the training image sample to obtain a matting result comprises the steps of:
processing the training image sample into a mask image;
converting the mask image into a trimap as the matting result.
In one embodiment, the process of processing the training image sample into a mask image comprises the step of:
processing the training image sample into a black-and-white mask image with the object segmentation algorithm of YOLOv5.
In one embodiment, the trimap contains white, black, and gray regions.
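As an illustrative sketch only (the patent does not specify the conversion), a trimap can be derived from a black-and-white mask by keeping pixels well inside the foreground white, pixels well outside black, and marking a band around the mask boundary gray (unknown). The band width and the 0/128/255 encoding below are assumptions, not taken from the patent:

```python
def make_trimap(mask, band=1):
    """mask: 2D list of 0/1 values. Returns a trimap with
    255 = sure foreground, 0 = sure background, 128 = unknown."""
    h, w = len(mask), len(mask[0])

    def near_boundary(y, x):
        # A pixel is "unknown" if any pixel within `band` steps
        # (Chebyshev distance) has a different mask value.
        for dy in range(-band, band + 1):
            for dx in range(-band, band + 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] != mask[y][x]:
                    return True
        return False

    return [[128 if near_boundary(y, x) else (255 if mask[y][x] else 0)
             for x in range(w)] for y in range(h)]
```

A production implementation would typically use morphological erosion and dilation on the mask instead of this naive per-pixel scan.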
In one embodiment, the process of establishing a convolutional neural network and determining the corresponding channel image from the matting result comprises the steps of:
extracting features from the matting result with the encoder of the convolutional neural network;
outputting the channel image from those features with the decoder of the convolutional neural network.
In one embodiment, the process of establishing a convolutional neural network and determining the corresponding channel image from the matting result further comprises the steps of:
performing an initial optimization of the channel image through the convolution and activation functions of the convolutional neural network;
establishing a loss function, performing a secondary optimization on the initial result, and outputting the channel image after the secondary optimization.
In one embodiment, the loss function is as follows:

$\mathcal{L}_{\alpha} = \sqrt{(\alpha_p - \alpha_t)^2 + \epsilon^2}$

$\mathcal{L}_{c} = \sqrt{(I_p - I_t)^2 + \epsilon^2}$

where $\alpha_p$ denotes the predicted value, $\alpha_t$ the reference value, $\epsilon^2$ a small term that prevents overfitting, $I_p$ the matting result, and $I_t$ the training image sample before matting.
In one embodiment, the loss function serves as the constraint of the matting model.
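The two per-pixel loss terms above can be sketched as follows; the epsilon value of 1e-6 is an illustrative assumption, not taken from the patent:

```python
import math

EPS = 1e-6  # small smoothing constant; illustrative assumption

def alpha_loss(alpha_p, alpha_t):
    """Per-pixel alpha-prediction loss: sqrt((alpha_p - alpha_t)^2 + eps^2)."""
    return math.sqrt((alpha_p - alpha_t) ** 2 + EPS ** 2)

def composition_loss(i_p, i_t):
    """Per-pixel compositional loss between the matting result I_p and the
    training image sample I_t before matting: sqrt((I_p - I_t)^2 + eps^2)."""
    return math.sqrt((i_p - i_t) ** 2 + EPS ** 2)
```

The epsilon term keeps the square root differentiable at zero error, which is why it is described as preventing overfitting to exact matches.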
An appearance image matting method for recycling detection comprises the following steps:
acquiring the appearance image to be matted;
inputting the appearance image to be matted into the matting model, and taking the output of the matting model as the matting result.
With this appearance image matting method for recycling detection, after the appearance image to be matted is acquired, it is input into the matting model, and the output of the matting model is taken as the matting result. On this basis, the matting operation in a non-standard environment is completed, image interference introduced by the environment is removed, and the efficiency and accuracy of recycling detection are improved.
In one embodiment, the method further comprises the step of:
compositing the output with a preset background picture to form the matting result.
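The compositing step can be sketched with standard alpha blending, C = alpha * F + (1 - alpha) * B; the per-pixel tuple representation is an assumption for illustration:

```python
def composite(alpha, fg, bg):
    """Blend a matted foreground pixel over a preset background pixel:
    C = alpha * F + (1 - alpha) * B.
    fg/bg are (r, g, b) tuples in [0, 255]; alpha is in [0, 1]."""
    return tuple(alpha * f + (1 - alpha) * b for f, b in zip(fg, bg))
```

Applied per pixel with the model's alpha channel, this replaces the original (possibly cluttered) background with the preset one.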
An appearance image matting model training apparatus for recycling detection comprises:
a sample acquisition module for acquiring a training image sample;
a matting processing module for performing matting processing on the training image sample to obtain a matting result;
a network processing module for establishing a convolutional neural network and determining the corresponding channel image from the matting result;
a model training module for training a matting model on the channel image, where the matting model is used to output the optimal channel image.
With this appearance image matting model training apparatus for recycling detection, a training image sample is acquired and matting processing is performed on it to obtain a matting result. A convolutional neural network is then established, the corresponding channel image is determined from the matting result, and finally the matting model is trained on the channel image. On this basis, the trained matting model can output an optimal channel image, completing the matting operation in a non-standard environment, removing image interference introduced by the environment, and improving the efficiency and accuracy of recycling detection.
An appearance image matting apparatus for recycling detection comprises:
an image acquisition module for acquiring the appearance image to be matted;
a model output module for inputting the appearance image to be matted into the matting model and taking the output of the matting model as the matting result.
With this appearance image matting apparatus for recycling detection, the appearance image to be matted is acquired and input into the matting model, and the output of the matting model is taken as the matting result. On this basis, the matting operation in a non-standard environment is completed, image interference introduced by the environment is removed, and the efficiency and accuracy of recycling detection are improved.
A computer storage medium stores computer instructions that, when executed by a processor, implement the appearance image matting model training method or the appearance image matting method for recycling detection of any of the above embodiments.
With this computer storage medium, after a training image sample is acquired, matting processing is performed on it to obtain a matting result. A convolutional neural network is then established, the corresponding channel image is determined from the matting result, and finally the matting model is trained on the channel image. On this basis, the trained matting model can output an optimal channel image, completing the matting operation in a non-standard environment, removing image interference introduced by the environment, and improving the efficiency and accuracy of recycling detection.
A computer device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the appearance image matting model training method or the appearance image matting method for recycling detection of any of the above embodiments.
With this computer device, after a training image sample is acquired, matting processing is performed on it to obtain a matting result. A convolutional neural network is then established, the corresponding channel image is determined from the matting result, and finally the matting model is trained on the channel image. On this basis, the trained matting model can output an optimal channel image, completing the matting operation in a non-standard environment, removing image interference introduced by the environment, and improving the efficiency and accuracy of recycling detection.
Drawings
FIG. 1 is a flowchart of an appearance image matting model training method for recycling detection according to an embodiment;
FIG. 2 is a flowchart of an appearance image matting model training method for recycling detection according to another embodiment;
FIG. 3 is a schematic diagram of the convolutional neural network processing;
FIG. 4 is a flowchart of an appearance image matting method for recycling detection according to an embodiment;
FIG. 5 is a block diagram of an appearance image matting model training apparatus for recycling detection according to an embodiment;
FIG. 6 is a block diagram of an appearance image matting apparatus for recycling detection according to an embodiment;
FIG. 7 is a schematic diagram of the internal structure of a computer according to an embodiment.
Detailed Description
For a better understanding of the objects, technical solutions, and effects of the present invention, the invention is further explained below with reference to the accompanying drawings and embodiments. The embodiments described below serve only to explain the invention and are not intended to limit it.
The embodiment of the invention provides an equipment recovery system.
The equipment recovery system of an embodiment comprises a detection server and an equipment recovery terminal.
In one embodiment, the device recycling system of an embodiment further includes a third party server or a payment server.
Before recycling, the user side of the equipment to be recovered comprises a terminal device capable of communicating with the servers or with terminal devices of the other sides; this terminal device can be the equipment to be recovered itself or another intelligent device.
The staff side includes terminal equipment capable of communicating with each server or other side terminal equipment.
In one embodiment, the terminal device performing communication includes a smart device such as a mobile phone or a computer. As a preferred embodiment, the terminal device performing the communication has a network communication capability.
The equipment to be recovered comprises intelligent equipment or non-intelligent equipment such as a mobile phone, a computer, a watch, a television, furniture and the like.
The user side may communicate with the detection server, the third party server or the payment server, and the staff side may communicate with the detection server or the payment server.
The payment server is in communication interaction with the user side or the staff side and is used for completing collection and payment of a user account or a staff account.
The third-party server is communicated with one side of the user, comprises a shopping platform, a recovery platform or a communication platform and the like, and can be used for capturing a recovery request of the equipment to be recovered, which is removed by the user.
The equipment recovery terminal establishes communication with the detection server, and the detection server executes detection analysis on detection data obtained by executing recovery detection on the equipment to be recovered.
In one embodiment, an apparatus recycling terminal of an embodiment includes:
the detection module is used for detecting the equipment to be recovered to obtain detection data;
the display module is used for displaying the detection result of the detection server;
the human-computer interaction module is used for collecting display feedback of the detection result;
and the communication module is respectively connected with the detection module, the display module and the human-computer interaction module and is used for communicating with each server.
The detection module is used for detecting the equipment to be recovered, obtaining detection data and sending the detection data to the detection server through the communication module.
In one embodiment, the detection data comprises appearance images or interaction data of the equipment to be recovered. A worker can operate the equipment recovery terminal to photograph the appearance of the equipment to be recovered and upload the resulting video or pictures to the detection server, or establish a communication connection with the equipment to be recovered through the equipment recovery terminal to obtain interaction data.
In one embodiment, the detection module comprises a camera unit or a data interaction unit.
The camera shooting unit is used for shooting the appearance of the equipment to be recovered. The data interaction unit is used for establishing communication connection with the equipment to be recovered and acquiring interaction data.
In one embodiment, the data interaction unit comprises a wireless interaction unit or a wired interaction unit.
The wireless interaction unit comprises a WIFI interaction unit, a 4G interaction unit, an infrared interaction unit or a ZIGBEE interaction unit and the like. The wired interaction unit comprises a USB interaction unit or a bus interaction unit.
As a preferred embodiment, the data interaction unit comprises a USB interaction unit. During recovery detection, the equipment recovery terminal establishes a USB connection with the equipment to be recovered through the integrated USB interaction unit, and acquires the USB interaction data that the detection server needs to produce the detection result.
The display module is used for displaying the detection result, and comprises display, acousto-optic display or voice display and the like.
In one embodiment, the display module includes a display unit. Wherein, the display unit comprises a display screen or a nixie tube display and the like. And the detection result is displayed to a user or staff and the like through a display unit.
The human-computer interaction module is used for realizing human-computer interaction between related personnel and the equipment recovery terminal and can be used for collecting display feedback or adjusting detection result display.
As a preferred embodiment, the human-computer interaction module and the display module are the same touch display screen.
The communication module has the capability of communicating with each server, and comprises a plurality of communication modes such as 4G communication, WLAN communication, local area network communication and the like.
To better explain the deployment characteristics of the equipment recovery terminal according to the embodiment of the present invention, its development and deployment are described below with a specific application example. In this example, the equipment recovery terminal is built from an ordinary tablet computer, which has network communication and photographing capabilities and can photograph the equipment to be recovered and upload the appearance images to the detection server. The tablet is further extended, through software or hardware integration, with external USB data cables (possibly covering several external interface types) that are connected to the equipment to be recovered during recycling. Through software integration, the required interaction data is quickly read or captured and uploaded to the detection server. When a detection result sent by the detection server is received, it can be shown on the display screen or announced by voice through the loudspeaker. Related personnel such as users or workers can also operate the tablet to provide display feedback or to adjust how the detection result is shown.
The equipment recovery terminal of any of the above embodiments comprises a detection module, a display module, a human-computer interaction module, and a communication module. On this basis, while the recovery process is completed together with the detection server, recovery detection is carried out by the detection server in the cloud, which reduces the cost of the equipment recovery terminal and improves its stability and portability.
Based on this, the embodiment of the invention provides a device recovery method on the side of a detection server.
The apparatus recovery method of an embodiment includes the steps of:
acquiring a device recovery request;
according to the equipment recovery request, sending recovery information to the worker side; the recovery information instructs a worker to obtain an equipment recovery terminal from the terminal address and to carry it to the recovery address to perform recovery detection on the equipment to be recovered;
receiving the detection data sent by the equipment recovery terminal performing recovery detection, analyzing the detection data, and feeding the detection result back to the equipment recovery terminal, which displays it;
and executing corresponding equipment recovery operation according to the display feedback of the detection result.
The device recycle request is used to trigger the start of device recycle. The equipment recovery request can be sent by a user side, a worker side and various server sides, and is acquired by the detection server. Or the detection server executes final acquisition through indirect transmission from a user side, a worker side or multiple sides of various servers and the like.
In one embodiment, the process of obtaining a device recycle request includes the steps of:
and receiving request information sent by a user of the equipment to be recovered, and determining the request as an equipment recovery request.
The request information sent by the user side comprises the request information directly sent to the detection server or indirectly sent to the detection server through other sides. The user communicates with the detection server by operating the intelligent device (including the device to be recovered) to send the request information.
In one embodiment, the process of obtaining a device recycle request includes the steps of:
and acquiring communication interaction data between the user of the equipment to be recovered and the third-party server as an equipment recovery request.
The initiation of the device recycle request is not limited to the direct communication between the user and the detection server. In the communication interaction data of the user and the third-party server, a device recovery request exists, and the communication interaction data serves as the recovery request.
To better explain the embodiment, the device to be recycled is taken as a mobile phone as an example. A user purchases a new mobile phone on an online shopping platform (a third-party server) to replace an original old mobile phone, determines that the old mobile phone can be recycled in communication interaction of the online shopping platform, and sends data to a detection server by the online shopping platform as an equipment recycling request based on the communication interaction data.
The detection server may be implemented through an application, such as an APP or various applets. The application funnels sales or users, providing online services related to equipment recovery requests: purchasing new equipment, trading in old equipment, or equipment recycling. The user operates the application directly, or the third-party server communicates with the application's interface, to trigger the equipment recovery request.
In one embodiment, before the process of sending the recycling information to the staff side according to the device recycling request, the method further comprises the following steps:
and determining corresponding staff according to the communication interaction data.
The communication interaction data of the user and the third-party server correspond to logistics services such as new equipment purchase or distribution, and based on the logistics services, the interaction process of the user and the third-party server is determined according to the communication interaction data, and corresponding staff are determined. For example, if the interaction between the user and the third-party server is online shopping, the corresponding logistics distribution personnel for online shopping goods is determined as staff.
In one embodiment, the method for recycling plant of the further embodiment further comprises the steps of:
and sending delivery information to the staff according to the communication interaction data so as to instruct the staff to deliver the equipment to be delivered to the recycling address according to the delivery information.
The delivery information serves to fulfill the arrangement made in the communication interaction between the user and the third-party server. In this embodiment, the third-party server delivers the delivery information by interacting with the detection server, reducing communication cost. It also reduces the number of servers a worker must communicate with, so that a worker only needs to communicate with the detection server to complete the corresponding work, improving efficiency and reducing the error rate.
And after the equipment recovery request is determined to be initiated, sending recovery information to one side of a worker, and instructing the worker to execute a series of operations such as equipment recovery terminal acquisition, recovery address confirmation, recovery detection and the like.
In one embodiment, the reclamation information is used to determine a terminal address and a reclamation address. The intelligent equipment on one side of the staff can be used for displaying the terminal address and the recovery address corresponding to the recovery information to the staff.
As a preferred embodiment, the terminal address is associated with a recycle address.
The recycling address may be predetermined during the obtaining process of the device recycling request, for example, a receiving address of the user or address information to be filled in. The terminal address is associated with the recycling address, such as determining the terminal address with the minimum distance according to the recycling address.
In a specific application example, each device recycling terminal is placed in a corresponding store or warehouse, and the address of the store or warehouse is the terminal address. And the staff acquires the equipment recovery terminal from the terminal address with the minimum distance to the recovery address according to the recovery information, and the store or warehouse updates the inventory information of the equipment recovery terminal according to the acquisition information.
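The minimum-distance terminal selection described above can be sketched as follows; the coordinate representation and function name are illustrative assumptions, not from the patent:

```python
import math

def nearest_terminal(recycle_addr, terminal_addrs):
    """Pick the terminal address closest to the recovery address.
    recycle_addr: (x, y) coordinates; terminal_addrs: dict name -> (x, y).
    Returns the name of the closest terminal (store/warehouse) address."""
    return min(terminal_addrs,
               key=lambda name: math.dist(recycle_addr, terminal_addrs[name]))
```

In practice, straight-line distance would likely be replaced by travel time from a routing service, but the selection logic is the same.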
After the device to be recovered is recovered, the worker can return the device recovery terminal to the corresponding terminal address, or execute recovery detection to the next recovery address according to the recovery information.
In one embodiment, the recycling information is further used for instructing a worker to operate the device recycling terminal. The recovery information is sent to the intelligent equipment on one side of the worker, and operation display is provided for the worker. The operation display comprises character display, picture display or video display.
In one embodiment, the recycle information includes a boot code. The boot code is used for unlocking the corresponding equipment recovery terminal.
In one embodiment, after the device recycling terminal is acquired, the method further includes the steps of:
and the updating equipment recovers the acquired information corresponding to the terminal.
Taking the device recycling terminal attached to the store or the warehouse as an example, the relevant personnel can log in the detection server to check whether the device recycling terminal of each terminal address is acquired or used, and know the state of the device recycling terminal in time. For example, the detection server can attach corresponding small programs, APPs and the like, and the staff can check the number of idle equipment recovery terminals of a store or a warehouse, whether the equipment recovery terminals are damaged or not by logging in the corresponding small programs. After the staff acquire the equipment recovery terminal, updating the corresponding equipment recovery terminal to be acquired; and after the staff returns the equipment recovery terminal, updating the corresponding equipment recovery terminal to be not acquired.
The staff holds the equipment recovery terminal to the target address, collects the detection data of the equipment to be recovered through the equipment recovery terminal, and sends the detection data to the detection server to obtain the detection result.
And the equipment recovery terminal displays the detection result, including character display, picture display or video display and the like.
As a preferred embodiment, when the detection data is a picture of the equipment to be recovered, the detection result includes marks of flaws or damage traces on the picture, which prompt the user. When the detection data is interaction data between the equipment to be recovered and the equipment recovery terminal, data anomaly marks or data processing results are shown to the user in text form.
In one embodiment, the detection result comprises a recovery price besides the problem detection of the equipment to be recovered, and the detection and quotation process is performed by advancing to the target address through showing the recovery price for the user.
And after the detection result is displayed, executing corresponding equipment recovery operation according to the display feedback.
In one embodiment, at the data communication level, the display feedback may be sent to the detection server from the user side, the staff side, or a server side. At the human-computer interaction level, the display feedback may be given by the user operating a smart device on the user side, the user operating the device recycling terminal, a staff member operating a smart device on the staff side, a staff member operating the device recycling terminal, and so on.
According to the display feedback, the corresponding device recovery operation is executed, such as confirming recovery, declining recovery, or adjusting the recovery price. In one embodiment, the detection server completes the device recovery operation through interaction with a third-party server, a payment server, or the like. For example, in a trade-in scenario, when recovery of the device to be recovered is confirmed, the payment server completes the corresponding operations of charging the user, adjusting the user's payment, paying the staff member's remuneration, and so on.
Based on this, in one embodiment, the process of executing the corresponding device recovery operation according to the display feedback on the detection result includes the step of:
instructing the staff member, according to the display feedback, to carry the device to be recovered to the target address.
The target address information is issued to the staff side directly or indirectly by the detection server, and the staff member carries the device to be recovered to the target address to complete recovery.
In one embodiment, the process of executing the corresponding device recovery operation according to the display feedback on the detection result includes the step of:
adjusting the recovery price of the device to be recovered according to the display feedback.
The detection server and the payment server interact based on the display feedback, and the recovery price of the device to be recovered is adjusted.
In one embodiment, the process of executing the corresponding device recovery operation according to the display feedback on the detection result includes the step of:
generating logistics information according to the display feedback, and instructing the staff member to ship the device to be recovered according to the logistics information.
The detection server can issue the logistics information to the staff side directly, or a third-party server can issue it indirectly, instructing the staff member to ship the device to be recovered, which further improves device recovery efficiency.
In one embodiment, after the process of executing the corresponding device recovery operation according to the display feedback on the detection result, the method further includes the step of:
sending a payment instruction to the payment server, instructing the payment server to complete receipt and payment for the target person's account.
After recovery is completed, the detection server sends a payment instruction to the payment server, instructing it to complete receipt and payment for the target person's account.
In one embodiment, the target person includes a user, a staff member, a third-party merchant, or the like, so that all parties in the device recovery process are settled promptly and payments are handled intelligently.
In the device recovery method of any of the above embodiments, after a device recovery request is obtained, recovery information is sent to the staff side, instructing the staff member to pick up a device recycling terminal at the terminal address and carry it to the recovery address to perform recovery detection on the device to be recovered. Detection data sent by the device recycling terminal is received, detection is performed on the data, and the detection result is fed back to the device recycling terminal; the terminal displays the result, and the corresponding device recovery operation is executed according to the display feedback. On this basis, remote feedback of detection results lowers the professional requirements on staff and the hardware requirements on the device recycling terminal, reducing recovery costs on all fronts. Meanwhile, presenting the detection result up front at the recovery address reduces disputes and communication costs during recovery.
On this basis, for the appearance images contained in the detection data collected when a staff member carries the device recycling terminal to perform recovery detection on the device to be recovered, an appearance image matting model training method for recovery detection is provided.
Fig. 1 is a flowchart of an embodiment of the appearance image matting model training method for recovery detection. As shown in Fig. 1, the method includes steps S100 to S103:
S100, acquiring a training image sample;
S101, performing matting processing on the training image sample to obtain a matting processing result;
S102, establishing a convolutional neural network, and determining a corresponding channel image according to the matting processing result;
S103, training a matting model according to the channel image, wherein the matting model is used for outputting an optimal channel image.
The training image samples are acquired in the same or a similar way as the appearance images in the detection data: the same or a similar photographed object is shot by the same or similar shooting equipment in the same or a similar non-standard environment. Likewise, the image parameters of the training image samples match or approximate those of the appearance images. In one embodiment, a training image sample is a video or a picture.
Matting processing is performed on the training image sample to obtain a matting processing result, eliminating the interfering background environment; the matting processing result after matting is the feature region on which observation is concentrated.
In one embodiment, Fig. 2 is a flowchart of the appearance image matting model training method for recovery detection according to another embodiment. As shown in Fig. 2, the process of performing matting processing on the training image sample in step S101 to obtain a matting processing result includes steps S200 and S201:
S200, processing the training image sample into a mask image;
S201, converting the mask image into a trimap as the matting processing result.
The training image sample is processed into a mask image by an image segmentation or object segmentation algorithm.
In one embodiment, a black-and-white mask image (mask map) is generated from the training image sample by the object segmentation algorithm of YOLOv5.
In one embodiment, the mask image is processed into a trimap by multiple dilation and erosion operations. As a preferred embodiment, the trimap consists of a white region, a black region, and a gray region.
The beneficial effects of steps S200 and S201 are explained below with a specific application example:
Taking the AlphaMatting algorithm as an example, its principle is to divide the picture into three parts: the foreground, the background, and a blended transition between the two. The relationship among them is as follows:
I_i = α_i F_i + (1 − α_i) B_i
where α_i is the transparency, in the range [0, 1], indicating the proportion of foreground to background; F is the foreground and B is the background. Since only the RGB color of each pixel of the training image sample can be determined in advance, while each pixel of a single channel contains three unknowns (F, B, α), the equation cannot be solved from a mathematical standpoint; a constraint must therefore be added to the function, and the trimap is taken as that constraint. The trimap roughly marks out in advance the foreground (white), the background (black), and the blurred contour between them (gray); in other words, a reference foreground F, a reference background B, and a reference α are known beforehand, so that the subsequent matting model performs finer matting on top of this constraint and determines the matte that most closely fits the detected picture.
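The compositing relationship above can be sketched in a few lines of NumPy (a minimal illustration; the function name `composite` and the toy images are our own, not from the patent):

```python
import numpy as np

def composite(alpha, foreground, background):
    """Composite per the alpha-matting equation I = alpha*F + (1 - alpha)*B.

    alpha: HxW array of transparencies in [0, 1];
    foreground/background: HxWx3 color images.
    """
    a = alpha[..., np.newaxis]  # broadcast alpha over the 3 color channels
    return a * foreground + (1.0 - a) * background

# alpha = 1 keeps the pure foreground, alpha = 0 the pure background,
# and intermediate values blend the two.
fg = np.full((2, 2, 3), 200.0)
bg = np.full((2, 2, 3), 50.0)
alpha = np.array([[1.0, 0.0],
                  [0.5, 0.5]])
img = composite(alpha, fg, bg)
```

Matting is the inverse problem: given only I, recover α (together with F and B), which is why the trimap constraint is needed.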
In traditional interactive matting, green-screen matting, or machine-learning matting, the trimap is mostly cut out roughly by hand in advance. In this embodiment, because the staff member travels to the user's side to perform door-to-door recovery detection on the device to be recovered, manual matting would severely affect detection time and user experience. To save labor and time, the training image sample is first roughly segmented into black and white using the object segmentation principle of YOLOv5; a portrait mode is adopted with the training image sample or appearance image in focus, which improves segmentation accuracy and yields a mask picture containing only black and white.
After the mask picture is obtained, it is converted into the trimap that serves as the constraint. In one embodiment, the mask image is dilated and eroded with the dilate and erode functions within OpenCV and thereby transformed into a trimap.
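The mask-to-trimap conversion can be illustrated with a self-contained NumPy sketch (the patent uses OpenCV's dilate/erode; here the morphology is written out by hand so the example has no OpenCV dependency, and the helper names are our own):

```python
import numpy as np

def binary_dilate(mask, k=3):
    """Naive binary dilation with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant", constant_values=0)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]])
    return out

def binary_erode(mask, k=3):
    """Naive binary erosion: eroding the mask equals dilating its complement."""
    return 1 - binary_dilate(1 - mask, k)

def mask_to_trimap(mask, k=3):
    """Eroded core stays white (255, confident foreground); the band between
    the dilated and eroded masks becomes the gray unknown region (128);
    everything else is black (0, confident background)."""
    dil, ero = binary_dilate(mask, k), binary_erode(mask, k)
    trimap = np.zeros(mask.shape, dtype=np.uint8)
    trimap[dil == 1] = 128
    trimap[ero == 1] = 255
    return trimap

# Toy black-and-white mask: a 3x3 foreground square in a 7x7 image.
mask = np.zeros((7, 7), dtype=np.uint8)
mask[2:5, 2:5] = 1
trimap = mask_to_trimap(mask)
```

The gray band is exactly the region the subsequent matting model is asked to refine.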
A convolutional neural network is established, and channel images of different colors are determined from the matting processing result.
In one embodiment, as shown in Fig. 2, the process of establishing a convolutional neural network in step S102 and determining a corresponding channel image according to the matting processing result includes steps S202 and S203:
S202, extracting features of the matting processing result through an encoder of the convolutional neural network;
S203, outputting a channel image according to the features through a decoder of the convolutional neural network.
Fig. 3 is a schematic diagram of the convolutional neural network processing. As shown in Fig. 3, the training image sample and the trimap serve as the encoder's input data; the input enters the encoder pipeline, where features are extracted through multiple rounds of convolution and pooling.
As shown in Fig. 3, on entering the decoder, the features obtained by the encoder are unpacked, the gap between foreground and background is enlarged, and multiple rounds of deconvolution and upsampling are performed to obtain a relatively rough α value. A rough matte image can thus be obtained, in which the foreground is white, the background is black, and blended regions are semi-transparent.
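The encoder-decoder shape bookkeeping can be sketched with a toy PyTorch network (a minimal illustration only; the layer sizes and the class name `TinyMattingNet` are our own assumptions, not the patent's actual architecture):

```python
import torch
import torch.nn as nn

class TinyMattingNet(nn.Module):
    """Toy encoder-decoder: input is RGB + trimap (4 channels), output a rough alpha map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # convolution + pooling, repeated
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(            # upsample back to input resolution
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
            nn.Sigmoid(),                        # constrain alpha to [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

net = TinyMattingNet()
x = torch.randn(1, 4, 64, 64)                    # a batch of one RGB+trimap input
alpha = net(x)                                   # rough alpha map, same H and W as input
```

The sigmoid at the end keeps every pixel of the predicted matte a valid transparency value.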
In one embodiment, as shown in Fig. 2, the process of establishing a convolutional neural network in step S102 and determining a corresponding channel image according to the matting processing result further includes steps S204 and S205:
S204, performing preliminary optimization on the channel image through convolution and the activation function of the convolutional neural network;
S205, establishing a loss function, performing secondary optimization on the preliminary optimization result, and outputting the channel image after secondary optimization.
As shown in Fig. 3, the α value is then optimized with a small model. In one embodiment, a lambda expression is first used to slice the feature matrix and extract the important features; the decoder's convolution + ReLU operations are then applied again to the extracted features together with the matte image obtained above, producing a more refined α value.
In one embodiment, the loss functions are as follows:

L_α = √((α_p − α_t)² + ε²)

L_c = √((I_p − I_t)² + ε²)

where α_p denotes the predicted value, α_t denotes the reference value, ε² denotes a variable preventing overfitting, I_p denotes the matting processing result, and I_t denotes the training image sample before matting processing.
As shown in Fig. 3, a loss function is established, comprising two parts. One is the prediction loss on the α value, as follows:

L_α = √((α_p − α_t)² + ε²)

The other is the difference between the original image and the image predicted by compositing the predicted α with the reference foreground and reference background, according to the following formula:

L_c = √((I_p − I_t)² + ε²)
In one embodiment, the loss functions are combined into a total loss function that serves as the constraint of the matting model.
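The two losses and their combination can be rendered in NumPy as follows (a minimal sketch; the weighting `w` and the `EPS2` value are illustrative assumptions, as the patent does not specify them):

```python
import numpy as np

EPS2 = 1e-6   # the small epsilon^2 term that keeps the loss smooth near zero

def alpha_loss(alpha_pred, alpha_ref):
    """Prediction loss on alpha: sqrt((alpha_p - alpha_t)^2 + eps^2), averaged over pixels."""
    return np.mean(np.sqrt((alpha_pred - alpha_ref) ** 2 + EPS2))

def compositional_loss(img_pred, img_ref):
    """Loss between the image composited with the predicted alpha and the original image."""
    return np.mean(np.sqrt((img_pred - img_ref) ** 2 + EPS2))

def total_loss(alpha_pred, alpha_ref, img_pred, img_ref, w=0.5):
    """Weighted combination of the two losses used as the model's constraint."""
    return w * alpha_loss(alpha_pred, alpha_ref) + (1 - w) * compositional_loss(img_pred, img_ref)
```

A perfect prediction drives both terms down to roughly ε, so the total loss never reaches exactly zero; that is the intended effect of the ε² term.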
In one embodiment, after the matting model is constructed, the model parameters are set and the matting model is iteratively trained many times with the loss function as the constraint, obtaining the α value that is optimal for nearly all images, i.e., the optimal channel image, i.e., the matte.
In the appearance image matting model training method for recovery detection of any of the above embodiments, after the training image sample is obtained, matting processing is performed on it to obtain a matting processing result. A convolutional neural network is then established, the corresponding channel image is determined from the matting processing result, and finally the matting model is trained on the channel image. On this basis, the trained matting model can output the optimal channel image, completing matting in a non-standard environment, eliminating image interference introduced by the environment, and improving recovery detection efficiency and accuracy.
After training of the matting model is completed, and after the staff member carries the device recycling terminal to perform recovery detection on the device to be recovered and the appearance images in the detection data are obtained, an appearance image matting method for recovery detection is provided.
Fig. 4 is a flowchart of an embodiment of the appearance image matting method for recovery detection. As shown in Fig. 4, the method includes steps S300 and S301:
S300, acquiring an appearance image to be matted;
S301, inputting the appearance image to be matted into the matting model, and taking the output of the matting model as the matting result.
In one embodiment, as shown in Fig. 4, the appearance image matting method for recovery detection further includes step S302:
S302, integrating the output result with a preset background picture as the matting result.
The output result is integrated with a preset background picture prepared in advance, i.e., the output result is placed at the center of the preset background picture. The four corners of the output result are then taken (when the device is rectangular) to cut out the rectangle that fits it most closely. The center point of the output result is determined, and with the center as the fixed point the result is rotated counterclockwise until the x-coordinates of the upper and lower corners coincide, after which the size is adjusted. The resulting appearance image of the device to be recovered is regular and conforms to the picture format expected by image-detection input, which facilitates subsequent flaw and damage detection.
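The deskewing step, rotating about the center until the x-coordinates of the top and bottom corners coincide, amounts to computing the edge's angle from vertical. A minimal sketch (the function name and coordinate convention are our own assumptions):

```python
import math

def deskew_angle(top_corner, bottom_corner):
    """Counterclockwise rotation (degrees) that aligns the x-coordinates of the
    top and bottom corners of the cut-out device, i.e. makes its edge vertical."""
    (x1, y1), (x2, y2) = top_corner, bottom_corner
    # angle of the edge relative to the vertical axis
    return math.degrees(math.atan2(x2 - x1, y2 - y1))

# An edge that is already vertical needs no rotation.
assert deskew_angle((10, 0), (10, 100)) == 0.0
```

The returned angle can then be fed to any image-rotation routine that takes the output's center point as the fixed point.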
In the appearance image matting method for recovery detection above, after the appearance image to be matted is obtained, it is input into the matting model, and the output of the matting model is taken as the matting result. On this basis, matting in a non-standard environment is completed, image interference introduced by the environment is eliminated, and recovery detection efficiency and accuracy are improved.
The embodiment of the invention also provides an appearance image matting model training device for recovery detection.
Fig. 5 is a block diagram of an embodiment of an appearance image matting model training device for recycling detection, and as shown in fig. 5, the embodiment of the appearance image matting model training device for recycling detection includes:
a sample obtaining module 100, configured to obtain a training image sample;
the matting processing module 101 is configured to perform matting processing on a training image sample to obtain a matting processing result;
the network processing module 102 is configured to establish a convolutional neural network, and determine a corresponding channel image according to a matting processing result;
the model training module 103 is used for training a cutout model according to the channel image; wherein, the cutout model is used for outputting the optimal channel image.
The appearance image matting model training device for recycling detection obtains a training image sample, and then carries out matting processing on the training image sample to obtain a matting processing result. And further, establishing a convolutional neural network, determining a corresponding channel image according to the cutout processing result, and finally training a cutout model according to the channel image. Based on the method, the trained matting model can be used for outputting the optimal channel image, matting operation under a non-standard environment is completed, image interference caused by the environment is eliminated, and the recovery detection efficiency and accuracy are improved.
The embodiment of the invention also provides an appearance image matting device for recovery detection.
Fig. 6 is a block diagram of an embodiment of an appearance image matting device for recycling detection, and as shown in fig. 6, the embodiment of the appearance image matting device for recycling detection includes:
an image obtaining module 200, configured to obtain an appearance image to be matted;
and a model output module 201, configured to input the appearance image to be matted into the matting model, and take the output result of the matting model as the matting result.
The appearance image matting device for recycling detection obtains an appearance image to be scratched, inputs the appearance image to be scratched into a matting model, and takes an output result of the matting model as a matting result. Based on the method, the image matting operation under the non-standard environment is completed, the image interference introduced by the environment is eliminated, and the recovery detection efficiency and accuracy are improved.
The embodiment of the present invention further provides a computer storage medium, on which computer instructions are stored, and when the instructions are executed by a processor, the method for training the appearance image matting model for recovery detection or the method for matting the appearance image for recovery detection in any of the above embodiments is implemented.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, the computer program can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus DRAM (RDRAM), and Direct Rambus DRAM (DRDRAM).
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to execute all or part of the methods of the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a RAM, a ROM, a magnetic or optical disk, or various other media that can store program code.
Corresponding to the computer storage medium, in an embodiment, there is also provided a computer device including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement any one of the above-mentioned methods for training a recycling detected appearance image matting model or recycling detected appearance image matting methods.
The computer device may be a terminal, and its internal structure diagram may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a recovery detected appearance image matting model training method or a recovery detected appearance image matting method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
After the training image sample is obtained, the computer equipment carries out cutout processing on the training image sample to obtain a cutout processing result. And further, establishing a convolutional neural network, determining a corresponding channel image according to the cutout processing result, and finally training a cutout model according to the channel image. Based on the method, the trained matting model can be used for outputting the optimal channel image, matting operation under a non-standard environment is completed, image interference caused by the environment is eliminated, and the recovery detection efficiency and accuracy are improved.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above examples only show some embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A recovery detection appearance image cutout model training method is characterized by comprising the following steps:
acquiring a training image sample;
performing cutout processing on the training image sample to obtain a cutout processing result;
establishing a convolutional neural network, and determining a corresponding channel image according to the matting processing result;
training a cutout model according to the channel image; wherein, the cutout model is used for outputting the optimal channel image.
2. The method for training the appearance image cutout model for recycling detection according to claim 1, wherein the process of carrying out cutout processing on the training image sample to obtain the cutout processing result comprises the following steps:
processing the training image sample into a mask image;
and converting the mask image into a trisection image as the matting processing result.
3. The method for training the appearance image matting model for recycling detection according to claim 2, wherein the process of processing the training image sample into a mask image comprises the steps of:
the training image sample is processed into a black-and-white mask image by the object segmentation algorithm of YOLOv5.
4. The method of claim 2, wherein the tri-segment map comprises a white map, a black map, and a gray map.
5. The method for training the appearance image matting model for recycling detection according to claim 1, wherein the process of establishing a convolutional neural network and determining the corresponding channel image according to the matting processing result comprises the steps of:
extracting features of the matting processing result through an encoder of the convolutional neural network;
and outputting a channel image according to the characteristics through a decoder of the convolutional neural network.
6. The method for training the appearance image matting model for recycling detection according to claim 5, wherein the process of establishing a convolutional neural network and determining the corresponding channel image according to the matting processing result further comprises the steps of:
performing preliminary optimization on the channel image through convolution and an activation function of the convolutional neural network;
and establishing a loss function, carrying out secondary optimization on the primary optimization result, and outputting the channel image after the secondary optimization.
7. The method for training the appearance image matting model for recycling detection according to claim 6, wherein the loss function is as follows:
L_α = √((α_p − α_t)² + ε²)
L_c = √((I_p − I_t)² + ε²)
wherein α_p denotes the predicted value, α_t denotes the reference value, ε² denotes a variable preventing overfitting, I_p denotes the matting processing result, and I_t denotes the training image sample before matting processing.
8. The method for training a recycling detected appearance image matting model according to claim 7, characterized in that the constraint condition of the matting model is the loss function.
9. A method for recovering and detecting appearance image matting is characterized by comprising the following steps:
acquiring an appearance image to be scratched;
and inputting the appearance image to be subjected to matting into a matting model, and taking an output result of the matting model as a matting result.
10. The method of recycling detected appearance image matting according to claim 9, characterized by further comprising the steps of:
and integrating the output result with a preset background picture to serve as the matting result.
CN202111493304.2A 2021-12-08 2021-12-08 Appearance image matting model training method and matting method for recycling detection Pending CN114155260A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111493304.2A CN114155260A (en) 2021-12-08 2021-12-08 Appearance image matting model training method and matting method for recycling detection


Publications (1)

Publication Number Publication Date
CN114155260A true CN114155260A (en) 2022-03-08

Family

ID=80454007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111493304.2A Pending CN114155260A (en) 2021-12-08 2021-12-08 Appearance image matting model training method and matting method for recycling detection

Country Status (1)

Country Link
CN (1) CN114155260A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11989701B2 (en) 2014-10-03 2024-05-21 Ecoatm, Llc System for electrically testing mobile devices at a consumer-operated kiosk, and associated devices and methods

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109461167A (en) * 2018-11-02 2019-03-12 Oppo广东移动通信有限公司 The training method of image processing model scratches drawing method, device, medium and terminal
CN109712145A (en) * 2018-11-28 2019-05-03 山东师范大学 A kind of image matting method and system
CN112541927A (en) * 2020-12-18 2021-03-23 Oppo广东移动通信有限公司 Method, device, equipment and storage medium for training and matting model
CN113379786A (en) * 2021-06-30 2021-09-10 深圳市斯博科技有限公司 Image matting method and device, computer equipment and storage medium
CN113409224A (en) * 2021-07-09 2021-09-17 浙江大学 Image target pertinence enhancing method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
US11170248B2 (en) Video capture in data capture scenario
CN111985306B (en) OCR and information extraction method applied to medical field document
CN110569874A (en) Garbage classification method and device, intelligent terminal and storage medium
CN114170435A (en) Method and device for screening appearance images for recovery detection
CN110751149A (en) Target object labeling method and device, computer equipment and storage medium
CN111160288A (en) Gesture key point detection method and device, computer equipment and storage medium
CN113298078A (en) Equipment screen fragmentation detection model training method and equipment screen fragmentation detection method
CN110659884A (en) Electronic visa application method and device
CN111832561B (en) Character sequence recognition method, device, equipment and medium based on computer vision
CN113807342A (en) Method and related device for acquiring equipment information based on image
CN112183296A (en) Simulated bill image generation and bill image recognition method and device
CN114219105A (en) Equipment recovery method, terminal and system
CN111311022A (en) Power generation amount prediction method, device, equipment and computer readable storage medium
CN114550051A (en) Vehicle loss detection method and device, computer equipment and storage medium
CN115311676A (en) Picture examination method and device, computer equipment and storage medium
CN114155260A (en) Appearance image matting model training method and matting method for recycling detection
CN114049631A (en) Data labeling method and device, computer equipment and storage medium
CN107316131A (en) Electric power meter installation process quality detection system based on image recognition
CN112532884B (en) Identification method and device and electronic equipment
CN114283416A (en) Processing method and device for vehicle insurance claim settlement pictures
CN114186702A (en) Method and device for processing appearance image of recovery detection
CN113538291A (en) Card image tilt correction method and device, computer equipment and storage medium
CN114170419A (en) Equipment region image extraction method and device under natural scene
CN113128244A (en) Scanning method and device and electronic equipment
US10095714B2 (en) Mobile device capable of offline and online synchronous image identifying, an image identifying system, and a storage medium for the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination