CN113901996A - Equipment screen perspective detection model training method and equipment screen perspective detection method - Google Patents

Equipment screen perspective detection model training method and equipment screen perspective detection method

Info

Publication number
CN113901996A
CN113901996A
Authority
CN
China
Prior art keywords
perspective
equipment
screen
feature data
attribute information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111148190.8A
Other languages
Chinese (zh)
Inventor
田寨兴
许锦屏
余卫宇
廖伟权
刘嘉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Epbox Information Technology Co ltd
Original Assignee
Guangzhou Epbox Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Epbox Information Technology Co ltd filed Critical Guangzhou Epbox Information Technology Co ltd
Priority to CN202111148190.8A priority Critical patent/CN113901996A/en
Publication of CN113901996A publication Critical patent/CN113901996A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155Bayesian classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for training a device screen perspective detection model and a device screen perspective detection method. After the device attribute information of each smart device is acquired, a perspective label is added to each piece of device attribute information. The device attribute information and the perspective labels are then converted into attribute feature data, a feature data set is built from the attribute feature data, and a classification model for detecting whether a device screen exhibits perspective is trained on the feature data set. In this way, the classification model learns from historical data, namely the device attribute information and its predetermined perspective labels, to detect whether a device screen exhibits perspective. This avoids the interference factors encountered by image-recognition-based detection and ensures stable perspective detection. Moreover, the classification model can be continuously retrained as data accumulates from more smart devices, so the accuracy of perspective detection improves over time while the hardware cost of perspective detection is effectively reduced.

Description

Equipment screen perspective detection model training method and equipment screen perspective detection method
Technical Field
The invention relates to the technical field of electronic products, and in particular to a method for training a device screen perspective detection model and a device screen perspective detection method.
Background
With the development of electronic product technology, a wide variety of smart devices have emerged, such as smartphones, notebook computers, and tablet computers. With the rapid development of the economy and of technology, smart devices are being adopted and replaced at an ever faster pace. Taking smartphones as an example, the arrival of the 5G era has accelerated the generational turnover of handsets. In this iterative process, effective recycling is one means of making use of the residual value of smart devices, and it reduces both waste and chemical pollution of the environment.
The screen, as the display and human-machine interaction component of a smart device, has a significant influence on the device's recycling valuation. In particular, whether a device screen exhibits perspective seriously affects the experience of subsequent users and therefore the recycling value. Accordingly, when recycling a smart device it is necessary to detect whether the perspective phenomenon exists on its screen.
The conventional way of detecting whether a smart device screen exhibits perspective is mainly image recognition: a two-dimensional code displayed on the device screen is photographed, and perspective features are identified in the picture for detection and judgment, with gamma transformation added to enhance the details of the perspective features. However, in recycling scenarios such as self-service recycling, photographing the smart device is often affected by the operator and cannot meet the operating requirements of conventional perspective detection. Interference such as an overly dark device screen, lighting conditions, or shooting angle can prevent the camera from capturing the perspective features of the device screen, so that even with detail enhancement it cannot be determined whether the screen exhibits perspective. At the same time, the camera that photographs the device screen must meet high requirements, which raises the recycling cost of smart devices.
In summary, the conventional way of detecting whether a smart device screen exhibits perspective suffers from the above disadvantages.
Disclosure of Invention
Therefore, to overcome the defects of the conventional way of detecting whether a smart device screen exhibits perspective, it is necessary to provide a device screen perspective detection model training method and a device screen perspective detection method.
A method for training a device screen perspective detection model comprises the following steps:
acquiring device attribute information of each smart device;
adding a perspective label to each piece of device attribute information, the perspective label indicating that the screen of the corresponding smart device either exhibits or does not exhibit perspective;
converting the device attribute information and the perspective labels into attribute feature data; and
building a feature data set from the attribute feature data, and training, on the feature data set, a classification model for detecting whether a device screen exhibits perspective.
According to this device screen perspective detection model training method, after the device attribute information of each smart device is acquired, a perspective label is added to each piece of device attribute information. The device attribute information and the perspective labels are then converted into attribute feature data, a feature data set is built from the attribute feature data, and a classification model for detecting whether a device screen exhibits perspective is trained on the feature data set. In this way, the classification model learns from historical data, namely the device attribute information and its predetermined perspective labels, to detect whether a device screen exhibits perspective. This avoids the interference factors encountered by image-recognition-based detection and ensures stable perspective detection. Moreover, the classification model can be continuously retrained as data accumulates from more smart devices, so the accuracy of perspective detection improves over time while the hardware cost of perspective detection is effectively reduced.
In one embodiment, the process of converting the device attribute information and the perspective labels into attribute feature data comprises the following step:
discretizing the device attribute information and the perspective labels to obtain the attribute feature data.
In one embodiment, before the process of converting the device attribute information and the perspective labels into attribute feature data, the method further comprises the following step:
preprocessing the device attribute information.
In one embodiment, the process of preprocessing the device attribute information comprises the following step:
performing missing value processing and abnormal value processing on the device attribute information.
In one embodiment, the classification model comprises a naive Bayes model.
In one embodiment, the process of building a feature data set from the attribute feature data and training, on the feature data set, a classification model for detecting whether a device screen exhibits perspective comprises the following steps:
taking the attribute feature data corresponding to the device attribute information as the sample data feature attribute set, and taking the attribute feature data corresponding to the perspective labels as the class variable;
determining the prior probability of the class variable;
obtaining a posterior probability computation model from the sample data feature attribute set and the prior probability; and
obtaining, based on the posterior probability computation model, a classification model that outputs the perspective label corresponding to the class with the maximum posterior probability.
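The prior and maximum-posterior steps above can be sketched as a categorical naive Bayes classifier over the discretized attribute feature data. The following is a minimal illustration only; the Laplace smoothing, attribute encoding, and example values are assumptions and are not taken from the patent.

```python
from collections import Counter, defaultdict

def train_naive_bayes(samples, labels):
    """Estimate class priors and per-attribute conditional probabilities
    from discretized attribute vectors (Laplace smoothing assumed)."""
    n = len(samples)
    n_attrs = len(samples[0])
    priors = {c: cnt / n for c, cnt in Counter(labels).items()}
    counts = defaultdict(Counter)          # (attr index, class) -> value counts
    values = [set() for _ in range(n_attrs)]  # observed domain of each attribute
    for x, c in zip(samples, labels):
        for j, v in enumerate(x):
            counts[(j, c)][v] += 1
            values[j].add(v)
    def cond(j, v, c):
        # P(attribute j = v | class c) with add-one smoothing
        total = sum(counts[(j, c)].values())
        return (counts[(j, c)][v] + 1) / (total + len(values[j]))
    return priors, cond

def predict(priors, cond, x):
    """Return the perspective label whose posterior (proportional to
    prior times the product of conditionals) is maximal."""
    best, best_p = None, -1.0
    for c, p in priors.items():
        for j, v in enumerate(x):
            p *= cond(j, v, c)
        if p > best_p:
            best, best_p = c, p
    return best
```

For example, trained on four hypothetical orders with label 1 for screen perspective and 0 for no perspective, the classifier outputs the label of the class with the maximum posterior for a new attribute vector.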
In one embodiment, the process of building a feature data set from the attribute feature data and training, on the feature data set, a classification model for detecting whether a device screen exhibits perspective further comprises the following step:
optimizing the output result through a loss function.
In one embodiment, the output result is optimized through a loss function given by:
Loss = (1/n) · Σᵢ₌₁ⁿ (yᵢ − Yᵢ)² + ε
wherein n represents the total number of samples, i indexes the samples, Yᵢ represents the actual perspective label, yᵢ represents the predicted perspective label, and ε represents an error term.
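On discretized 0/1 perspective labels, a loss of this kind reduces to the misclassification rate plus the error term. A minimal sketch, assuming the loss is the per-sample average of squared label differences plus ε (the default ε value is an illustrative assumption):

```python
def perspective_loss(actual, predicted, epsilon=0.0):
    """Average of (y_i - Y_i)^2 over n samples plus an error term; with
    0/1 perspective labels this equals the misclassification rate + epsilon."""
    n = len(actual)
    return sum((y - Y) ** 2 for Y, y in zip(actual, predicted)) / n + epsilon
```

With one wrong prediction out of four, the loss evaluates to 0.25 plus the error term.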
In one embodiment, the device attribute information includes the device brand type, the device screen type, the device factory time, the device battery usage, the device holder's gender, and/or the device holder's age.
A device screen perspective detection model training apparatus comprises:
a first information acquisition module for acquiring device attribute information of each smart device;
a label adding module for adding a perspective label to each piece of device attribute information, the perspective label indicating that the screen of the corresponding smart device either exhibits or does not exhibit perspective;
a first data conversion module for converting the device attribute information and the perspective labels into attribute feature data; and
a model training module for building a feature data set from the attribute feature data and training, on the feature data set, a classification model for detecting whether a device screen exhibits perspective.
According to this device screen perspective detection model training apparatus, after the device attribute information of each smart device is acquired, a perspective label is added to each piece of device attribute information. The device attribute information and the perspective labels are then converted into attribute feature data, a feature data set is built from the attribute feature data, and a classification model for detecting whether a device screen exhibits perspective is trained on the feature data set. In this way, the classification model learns from historical data, namely the device attribute information and its predetermined perspective labels, to detect whether a device screen exhibits perspective. This avoids the interference factors encountered by image-recognition-based detection and ensures stable perspective detection. Moreover, the classification model can be continuously retrained as data accumulates from more smart devices, so the accuracy of perspective detection improves over time while the hardware cost of perspective detection is effectively reduced.
A computer storage medium stores computer instructions that, when executed by a processor, implement the device screen perspective detection model training method of any of the above embodiments.
With this computer storage medium, after the device attribute information of each smart device is acquired, a perspective label is added to each piece of device attribute information. The device attribute information and the perspective labels are then converted into attribute feature data, a feature data set is built from the attribute feature data, and a classification model for detecting whether a device screen exhibits perspective is trained on the feature data set. In this way, the classification model learns from historical data, namely the device attribute information and its predetermined perspective labels, to detect whether a device screen exhibits perspective. This avoids the interference factors encountered by image-recognition-based detection and ensures stable perspective detection. Moreover, the classification model can be continuously retrained as data accumulates from more smart devices, so the accuracy of perspective detection improves over time while the hardware cost of perspective detection is effectively reduced.
A computer device comprises a memory, a processor, and a computer program stored on the memory and executable on the processor; when executing the computer program, the processor implements the device screen perspective detection model training method of any of the above embodiments.
With this computer device, after the device attribute information of each smart device is acquired, a perspective label is added to each piece of device attribute information. The device attribute information and the perspective labels are then converted into attribute feature data, a feature data set is built from the attribute feature data, and a classification model for detecting whether a device screen exhibits perspective is trained on the feature data set. In this way, the classification model learns from historical data, namely the device attribute information and its predetermined perspective labels, to detect whether a device screen exhibits perspective. This avoids the interference factors encountered by image-recognition-based detection and ensures stable perspective detection. Moreover, the classification model can be continuously retrained as data accumulates from more smart devices, so the accuracy of perspective detection improves over time while the hardware cost of perspective detection is effectively reduced.
A device screen perspective detection method comprises the following steps:
acquiring device attribute information of a smart device under test;
converting the device attribute information of the smart device under test into attribute feature data; and
inputting the attribute feature data into the classification model to obtain a device screen perspective detection result.
According to this device screen perspective detection method, after the device attribute information of the smart device under test is acquired, the device attribute information is converted into attribute feature data, and the attribute feature data is input into the classification model to obtain a device screen perspective detection result. In this way, whether a device screen exhibits perspective is detected through the pre-trained classification model, which avoids the interference factors encountered by image-recognition-based detection and ensures stable perspective detection. Moreover, the classification model can be continuously retrained as data accumulates from more smart devices, so the accuracy of perspective detection improves over time while the hardware cost of perspective detection is effectively reduced.
A device screen perspective detection apparatus comprises:
a second information acquisition module for acquiring device attribute information of the smart device under test;
a second data conversion module for converting the device attribute information of the smart device under test into attribute feature data; and
a result output module for inputting the attribute feature data into the classification model to obtain a device screen perspective detection result.
With this apparatus, after the device attribute information of the smart device under test is acquired, the device attribute information is converted into attribute feature data, and the attribute feature data is input into the classification model to obtain a device screen perspective detection result. In this way, whether a device screen exhibits perspective is detected through the pre-trained classification model, which avoids the interference factors encountered by image-recognition-based detection and ensures stable perspective detection. Moreover, the classification model can be continuously retrained as data accumulates from more smart devices, so the accuracy of perspective detection improves over time while the hardware cost of perspective detection is effectively reduced.
A computer storage medium stores computer instructions that, when executed by a processor, implement the device screen perspective detection method of any of the above embodiments.
With this computer storage medium, after the device attribute information of the smart device under test is acquired, the device attribute information is converted into attribute feature data, and the attribute feature data is input into the classification model to obtain a device screen perspective detection result. In this way, whether a device screen exhibits perspective is detected through the pre-trained classification model, which avoids the interference factors encountered by image-recognition-based detection and ensures stable perspective detection. Moreover, the classification model can be continuously retrained as data accumulates from more smart devices, so the accuracy of perspective detection improves over time while the hardware cost of perspective detection is effectively reduced.
A computer device comprises a memory, a processor, and a computer program stored on the memory and executable on the processor; when executing the computer program, the processor implements the device screen perspective detection method of any of the above embodiments.
With this computer device, after the device attribute information of the smart device under test is acquired, the device attribute information is converted into attribute feature data, and the attribute feature data is input into the classification model to obtain a device screen perspective detection result. In this way, whether a device screen exhibits perspective is detected through the pre-trained classification model, which avoids the interference factors encountered by image-recognition-based detection and ensures stable perspective detection. Moreover, the classification model can be continuously retrained as data accumulates from more smart devices, so the accuracy of perspective detection improves over time while the hardware cost of perspective detection is effectively reduced.
Drawings
FIG. 1 is a flowchart of a device screen perspective detection model training method according to an embodiment;
FIG. 2 is a flowchart of a device screen perspective detection model training method according to another embodiment;
FIG. 3 is a block diagram of a device screen perspective detection model training apparatus according to an embodiment;
FIG. 4 is a flowchart of a device screen perspective detection method according to an embodiment;
FIG. 5 is a block diagram of a device screen perspective detection apparatus according to an embodiment;
FIG. 6 is a schematic diagram of the internal structure of a computer device according to an embodiment.
Detailed Description
For a better understanding of the objects, technical solutions, and effects of the present invention, the invention is further explained below with reference to the accompanying drawings and embodiments. The embodiments described below serve only to explain the present invention and are not intended to limit it.
An embodiment of the invention provides a method for training a device screen perspective detection model.
Fig. 1 is a flowchart of a device screen perspective detection model training method according to an embodiment. As shown in Fig. 1, the method comprises steps S100 to S103:
S100, acquiring device attribute information of each smart device;
S101, adding a perspective label to each piece of device attribute information, the perspective label indicating that the screen of the corresponding smart device either exhibits or does not exhibit perspective;
S102, converting the device attribute information and the perspective labels into attribute feature data;
S103, building a feature data set from the attribute feature data, and training, on the feature data set, a classification model for detecting whether a device screen exhibits perspective.
The device attribute information is associated with the corresponding smart device and includes the device's inherent attribute information, production information, usage information, user information, and the like. From this information, the portion related to screen use or screen perspective is selected as device attribute information according to the screen characteristics of the smart device. Note that when implementing the device screen perspective detection model training method, the relevant personnel may adjust the selection of device attribute information according to the device characteristics of the smart devices concerned.
As a preferred embodiment, the device attribute information includes the device brand type, the device screen type, the device factory time, the device battery usage, the device holder's gender, and/or the device holder's age.
Recycling history data reflects various subjective and objective factors related to screen perspective, such as the influence of brand, user, and so on, on the probability of screen perspective among recycled devices. For example, an OLED device screen is most prone to perspective when the battery runs hot and the screen stays bright for long periods; a male device holder's usage behavior (such as gaming) may keep the battery hot for long periods; and the device holder's age may affect how the screen brightness is adjusted. Selecting device attribute information on this basis captures the probabilistic influence of the associations among these factors on screen perspective and improves the reference value of the device attribute information.
Table 1 below, the device attribute information table, shows device attribute information collected during recycling. Note that in Table 1 the device attribute information is organized per order, with one smart device corresponding to one order number.
TABLE 1 Device attribute information table
[Table 1 is reproduced as an image in the original publication; for each order number it records the device brand type, device screen type, device factory time, device battery usage, device holder gender, and device holder age.]
As shown in Table 1, device attribute information is recorded for the device brand type, device screen type, device factory time, device battery usage, device holder gender, and/or device holder age. Note that the above device attribute information is only an example and does not limit the types of device attribute information; provided the association with the device perspective probability is preserved, the relevant personnel may also select other types of information, such as the device recycling time.
As shown in Table 1, after the device attribute information is determined, a perspective label is added to each piece of device attribute information. The perspective labels correspond one-to-one with the smart devices to which the device attribute information belongs, and each indicates that the screen of the corresponding smart device either exhibits or does not exhibit perspective.
Accordingly, there are two kinds of perspective label, a screen-perspective label and a screen-non-perspective label, and in the subsequent classification model the output result produced by the classification model is one of the two.
In one embodiment, the data standard of the device attribute information is unified in advance to reduce the training computation and difficulty of the subsequent classification model. Fig. 2 is a flowchart of a device screen perspective detection model training method according to another embodiment. As shown in Fig. 2, the process of converting the device attribute information and the perspective labels into attribute feature data in step S102 comprises step S201:
S201, discretizing the device attribute information and the perspective labels to obtain the attribute feature data.
Discretizing the device attribute information and the perspective labels unifies their data formats on the one hand and makes it easy to adjust the type characteristics of the device attribute information on the other. At the same time, discretization facilitates the training computation and update iteration of the subsequent classification model. Discretization intervals are set separately for the device attribute information and the perspective labels, and both are processed into purely numerical form.
Taking the device factory time as an example, the factory time is discretized as the number of months (to month precision) between the factory date and the recycling date, which gives the actual service life of the smart device and is then normalized into the interval (0, 10).
Taking the device holder's age as an example, the age may be divided into five intervals: 0 (children: 3 to 10 years old), 1 (teenagers: 11 to 18 years old), 2 (young adults: 19 to 35 years old), 3 (middle-aged: 36 to 50 years old), and 4 (elderly: 51 years old and above).
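The two discretizations above can be sketched as follows. The bin boundaries follow the description, but the 120-month cap used for the (0, 10) normalization is an illustrative assumption not stated in the patent.

```python
def discretize_age(age):
    """Map a device holder's age onto the five intervals described above."""
    if age <= 10:
        return 0   # children: 3 to 10
    if age <= 18:
        return 1   # teenagers: 11 to 18
    if age <= 35:
        return 2   # young adults: 19 to 35
    if age <= 50:
        return 3   # middle-aged: 36 to 50
    return 4       # elderly: 51 and above

def discretize_factory_time(months_in_use, max_months=120):
    """Normalize service life in months into the (0, 10) interval;
    the 120-month (10-year) cap is an assumed maximum service life."""
    return round(10 * min(months_in_use, max_months) / max_months, 2)
```

For instance, a 25-year-old holder falls into interval 2, and a device in service for 24 months maps to 2.0 on the (0, 10) scale under the assumed cap.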
On this basis, each piece of device attribute information is discretized according to its own information features or interval features. The discretized device attribute information is explained below taking Table 2, the discrete data table, as an example.
TABLE 2 Discrete data table
[Table 2 is reproduced as an image in the original publication; it lists, per order number, the discretized numeric values of the device attribute information and the perspective label.]
On this basis, the pieces of device attribute information are discretized into the data of Table 2.
Note that the discretization in step S201 can be chosen flexibly according to the characteristics of the device attribute information; the above embodiments do not uniquely limit the discretization approach.
In one embodiment, as shown in Fig. 2, before the process of converting the device attribute information and the perspective labels into attribute feature data in step S102, the method further comprises step S200:
S200, preprocessing the device attribute information.
The preprocessing of the device attribute information includes missing value processing, abnormal value processing, weighted averaging, variance computation, and the like; it removes interference information or invalid information from the device attribute information and reduces the data volume of subsequent data processing.
On this basis, after the attribute feature data is obtained, a feature data set is built. Each smart device corresponds to one group of feature data, and the feature data set comprises multiple groups of feature data.
In one embodiment, the groups within the feature data set are divided in a 7:1:2 ratio into a training set, a validation set, and a test set. The training set is used to train the classification model, the validation set to validate it, and the test set to evaluate the classification model's output results.
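The 7:1:2 split can be sketched as follows; the seeded shuffle is an illustrative assumption for reproducibility, not a detail from the patent.

```python
import random

def split_feature_dataset(rows, seed=42):
    """Shuffle the feature-data groups and split them 7:1:2 into
    training, validation, and test sets."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n = len(rows)
    n_train, n_val = int(n * 0.7), int(n * 0.1)
    return rows[:n_train], rows[n_train:n_train + n_val], rows[n_train + n_val:]
```

With 100 groups of feature data this yields 70 training, 10 validation, and 20 test groups.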
Feature-association training between the device attribute information and the output result is performed through the classification model, where the output result takes the form of a perspective label, completing the two-class classification of perspective versus non-perspective screens. Each group of feature data in the feature data set comprises the attribute feature data corresponding to the device attribute information and the attribute feature data corresponding to the perspective label.
In one embodiment, the classification model includes a decision tree model and a naive Bayes model. As a preferred implementation, the classification model adopts the naive Bayes model, which updates and iterates well on the discretized attribute feature data, so the detection accuracy of the naive Bayes model gradually improves with subsequent updates.
In one embodiment, as shown in Fig. 2, the process of establishing a feature data set according to the attribute feature data in step S103 and training, according to the feature data set, a classification model for detecting whether the device screen is perspective includes steps S300 to S303:
S300, taking the attribute feature data corresponding to the device attribute information as a sample data characteristic attribute set, and taking the attribute feature data corresponding to the perspective labels as a class variable;
S301, determining the prior probability of the class variable;
S302, obtaining a computation model of the posterior probability according to the sample data characteristic attribute set and the prior probability;
and S303, obtaining, based on the computation model of the posterior probability, a classification model that takes the perspective label corresponding to the category with the maximum posterior probability in the computation model as the output result.
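Steps S300 to S302 can be sketched in Python as follows; the function and variable names are illustrative, assuming the discretized feature tuples and label codes described above.

```python
from collections import Counter, defaultdict

def train_naive_bayes(samples, labels):
    """Estimate the prior P(c) and the per-attribute conditionals P(a_j | c)
    from discretized feature tuples (steps S300 to S302)."""
    n = len(samples)
    class_counts = Counter(labels)
    prior = {c: cnt / n for c, cnt in class_counts.items()}   # S301: prior P(c)
    cond = defaultdict(lambda: defaultdict(Counter))
    for feats, label in zip(samples, labels):
        for j, value in enumerate(feats):
            cond[label][j][value] += 1
    # S302: conditional probabilities P(a_j = v | c) by relative frequency.
    likelihood = {
        c: {j: {v: cnt / class_counts[c] for v, cnt in vc.items()}
            for j, vc in attrs.items()}
        for c, attrs in cond.items()
    }
    return prior, likelihood

# Toy discretized data: two perspective orders (label 1), two non-perspective (0).
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [1, 1, 0, 0]
prior, likelihood = train_naive_bayes(samples, labels)
```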
As shown in Table 1 and Table 2 above, the sample data set corresponding to the attribute feature data of the plurality of orders, indexed by order number, is determined as D = {d1, d2, …, dn}. The sample data characteristic attribute set under each order is then determined from the sample data set. Taking the above device attribute information as an example, the sample data characteristic attribute set corresponding to an order is A = {a1, a2, a3, a4, a5, a6} = {device brand type, device screen type, device factory time, device battery usage, device holder gender, device holder age}, where a1 to a6 are mutually independent and random.
Meanwhile, the class variable is determined from the perspective labels, which are converted into the attribute feature data "0" and "1". Thus the class variable is C = {c1, c2} = {1, 0}, and the prior probability P(C) can be calculated by combining Table 1 and Table 2.
the posterior probability P (C | a) is derived from probability theory as:
Figure BDA0003286235800000121
Since the device characteristic attributes are mutually independent given the class variable C converted from the perspective labels, the term P(A|C) in the above equation can be further expressed as:

P(A|C) = P(a1|C) × P(a2|C) × … × P(a6|C)
In summary, the posterior probability of category ci is:

P(ci|A) = P(ci) × P(a1|ci) × … × P(a6|ci) / P(A)
based on this, P (c) of the data characteristic attribute set A of each order is calculated1I A) and P (c)2And | A), comparing the sizes of the orders, wherein the perspective labels with high probability are the categories of the orders, namely, the classification model detects and outputs a result Y whether the orders are perspective or not. In other words, in the use of the classification model, the device characteristic attribute of the order to be detected is input into the classification model in the form of a sample data characteristic attribute set, and the output result is obtained to determine the perspective view condition of the order to be detected. Whether the intelligent equipment corresponding to the order is transparent or not is detected through the equipment characteristic attribute, so that the use cost of the camera equipment is saved, and the difficulty and cost of transparent detection are reduced.
In one embodiment, the attribute feature data of the intelligent devices used for training are divided, by order, into a training set and a check set; the classification model is trained through steps S300 to S303 on the training set, and its accuracy is verified on the check set. That is, the attribute feature data of the device attribute information in the check set is input into the classification model for detection, and the output result is compared with the perspective labels of the check set. When the output result is inconsistent with the perspective labels of the check set, a training set with more orders is configured to continue optimizing the classification model.
As a preferred embodiment, the ratio of the training set to the check set is 7:1.
In one embodiment, as shown in Fig. 2, the process of establishing a feature data set according to the attribute feature data in step S103 and training, according to the feature data set, a classification model for detecting whether the device screen is perspective further includes step S400:
S400, optimizing the output result through a loss function.
The loss function is determined by constructing a training-optimization objective function for the output result. The loss value of the loss function reflects how close the detection result is to the real result. In one embodiment, the output result is optimized through the loss function in step S400 as follows:
(The loss-function formula is provided as an image in the original publication.)
Here n represents the total number of samples, A represents the samples, y represents the actual perspective label, Y represents the predicted perspective label, and ε represents the error term. As a preferred embodiment, the error term is 0.15.
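The published loss formula appears only as an image, so the following sketch assumes a mean-squared-error objective with an additive error term ε, using the variables defined above; the assumed form is illustrative, not the patent's exact formula.

```python
def loss(y_actual, y_pred, epsilon=0.15):
    """Assumed form: mean squared error over the n samples plus an
    additive error term epsilon (the patent's preferred value is 0.15)."""
    n = len(y_actual)
    return sum((a - p) ** 2 for a, p in zip(y_actual, y_pred)) / n + epsilon

# One of four predicted perspective labels disagrees with the actual labels.
val = loss([1, 0, 1, 1], [1, 0, 0, 1])
```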
In the device screen perspective detection model training method of any of the above embodiments, after the device attribute information of each intelligent device is obtained, a perspective label is added to each piece of device attribute information. The device attribute information and the perspective labels are then converted into attribute feature data, a feature data set is established from the attribute feature data, and a classification model for detecting whether the device screen is perspective is trained from the feature data set. Based on this, the classification model is trained with the device attribute information and its predetermined perspective labels as historical data, which avoids the interference factors encountered in image-recognition-based detection and ensures the stability of perspective detection. Meanwhile, the classification model can be continuously trained as the volume of intelligent-device data grows, so the accuracy of perspective detection gradually improves while the hardware cost of perspective detection is effectively reduced.
The embodiment of the invention also provides a device screen perspective detection model training device.
Fig. 3 is a block diagram of a device screen perspective detection model training device according to an embodiment. As shown in Fig. 3, the device screen perspective detection model training device of an embodiment includes a first information obtaining module 100, a label adding module 101, a first data conversion module 102, and a model training module 103:
the first information obtaining module 100, configured to obtain device attribute information of each intelligent device;
the label adding module 101, configured to add a perspective label to each piece of device attribute information, the perspective label representing that the screen of the corresponding intelligent device is perspective or not perspective;
the first data conversion module 102, configured to convert the device attribute information and the perspective labels into attribute feature data;
and the model training module 103, configured to establish a feature data set according to the attribute feature data and train, according to the feature data set, a classification model for detecting whether the device screen is perspective.
With the above device screen perspective detection model training device, after the device attribute information of each intelligent device is obtained, a perspective label is added to each piece of device attribute information. The device attribute information and the perspective labels are then converted into attribute feature data, a feature data set is established from the attribute feature data, and a classification model for detecting whether the device screen is perspective is trained from the feature data set. Based on this, the classification model is trained with the device attribute information and its predetermined perspective labels as historical data, which avoids the interference factors encountered in image-recognition-based detection and ensures the stability of perspective detection. Meanwhile, the classification model can be continuously trained as the volume of intelligent-device data grows, so the accuracy of perspective detection gradually improves while the hardware cost of perspective detection is effectively reduced.
The embodiment of the invention also provides a device screen perspective detection method.
Fig. 4 is a flowchart of a device screen perspective detection method according to an embodiment. As shown in Fig. 4, the device screen perspective detection method of an embodiment includes steps S500 to S502:
S500, acquiring device attribute information of the intelligent device to be tested;
S501, converting the device attribute information of the intelligent device to be tested into attribute feature data;
and S502, inputting the attribute feature data into the classification model to obtain the device screen perspective detection result.
The process of converting the device attribute information of the to-be-tested smart device into the attribute feature data in step S501 is the same as the conversion process in step S102.
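Steps S500 to S502 can be sketched as follows; the discretization function, the model object, and its predict() interface are hypothetical stand-ins for the trained classification model.

```python
def detect_screen_perspective(device_info, discretize, model):
    """S501: convert the attributes with the same discretization as step S102;
    S502: feed the feature tuple to the trained model (1 = perspective, 0 = not)."""
    feats = discretize(device_info)
    return model.predict([feats])[0]

class StubModel:
    """Hypothetical stand-in for the trained naive Bayes classification model."""
    def predict(self, feature_tuples):
        return [1 for _ in feature_tuples]   # always predicts "perspective" here

result = detect_screen_perspective({"brand": "brand_a"},
                                   lambda info: (0,),
                                   StubModel())
```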
According to the above device screen perspective detection method, after the device attribute information of the intelligent device to be tested is obtained, it is converted into attribute feature data, and the attribute feature data is input into the classification model to obtain the device screen perspective detection result. Based on this, whether the device screen is perspective is detected through the pre-trained classification model, which avoids the interference factors encountered in image-recognition-based detection and ensures the stability of perspective detection. Meanwhile, the classification model can be continuously trained as the volume of intelligent-device data grows, so the accuracy of perspective detection gradually improves while the hardware cost of perspective detection is effectively reduced.
The embodiment of the invention also provides a device for detecting the perspective view of the screen of the equipment.
Fig. 5 is a block diagram of a device screen perspective detection device according to an embodiment. As shown in Fig. 5, the device screen perspective detection device of an embodiment includes a second information obtaining module 200, a second data conversion module 201, and a result output module 202:
the second information obtaining module 200 is configured to obtain device attribute information of the to-be-tested intelligent device;
the second data conversion module 201 is configured to convert the device attribute information of the to-be-tested intelligent device into attribute feature data;
and the result output module 202, configured to input the attribute feature data into the classification model to obtain the device screen perspective detection result.
With the above device, after the device attribute information of the intelligent device to be tested is obtained, it is converted into attribute feature data, and the attribute feature data is input into the classification model to obtain the device screen perspective detection result. Based on this, whether the device screen is perspective is detected through the pre-trained classification model, which avoids the interference factors encountered in image-recognition-based detection and ensures the stability of perspective detection. Meanwhile, the classification model can be continuously trained as the volume of intelligent-device data grows, so the accuracy of perspective detection gradually improves while the hardware cost of perspective detection is effectively reduced.
The embodiment of the invention also provides a computer storage medium on which computer instructions are stored; when the instructions are executed by a processor, the device screen perspective detection model training method or the device screen perspective detection method of any of the above embodiments is implemented.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to execute all or part of the methods of the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a RAM, a ROM, a magnetic or optical disk, or various other media that can store program code.
Corresponding to the computer storage medium, in an embodiment, there is also provided a computer device including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement any one of the device perspective detection model training method and the device perspective detection method in the embodiments.
The computer device may be a terminal, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a device perspective detection model training method or a device perspective detection method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
After the computer device obtains the device attribute information of each intelligent device, a perspective label is added to each piece of device attribute information. The device attribute information and the perspective labels are then converted into attribute feature data, a feature data set is established from the attribute feature data, and a classification model for detecting whether the device screen is perspective is trained from the feature data set. Based on this, the classification model is trained with the device attribute information and its predetermined perspective labels as historical data, which avoids the interference factors encountered in image-recognition-based detection and ensures the stability of perspective detection. Meanwhile, the classification model can be continuously trained as the volume of intelligent-device data grows, so the accuracy of perspective detection gradually improves while the hardware cost of perspective detection is effectively reduced.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above examples express only several embodiments of the present invention, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present invention, and these all fall within its protection scope. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A device screen perspective detection model training method, characterized by comprising the following steps:
acquiring device attribute information of each intelligent device;
adding a perspective label to each piece of device attribute information, the perspective label representing that the screen of the corresponding intelligent device is perspective or not perspective;
converting the device attribute information and the perspective labels into attribute feature data;
and establishing a feature data set according to the attribute feature data, and training, according to the feature data set, a classification model for detecting whether the device screen is perspective.
2. The device screen perspective detection model training method according to claim 1, wherein the process of converting the device attribute information and the perspective labels into attribute feature data comprises the step of:
performing discretization processing on the device attribute information and the perspective labels to obtain the attribute feature data.
3. The device screen perspective detection model training method according to claim 1, further comprising, before the process of converting the device attribute information and the perspective labels into attribute feature data, the step of:
performing information preprocessing on the device attribute information.
4. The device screen perspective detection model training method according to claim 3, wherein the process of preprocessing the device attribute information comprises the step of:
performing missing-value processing and outlier processing on the device attribute information.
5. The device screen perspective detection model training method according to claim 1, wherein the classification model comprises a naive Bayes model.
6. The device screen perspective detection model training method according to any one of claims 1 to 5, wherein the process of establishing a feature data set according to the attribute feature data and training, according to the feature data set, a classification model for detecting whether the device screen is perspective comprises the steps of:
taking the attribute feature data corresponding to the device attribute information as a sample data characteristic attribute set, and taking the attribute feature data corresponding to the perspective labels as a class variable;
determining a prior probability of the class variable;
obtaining a computation model of the posterior probability according to the sample data characteristic attribute set and the prior probability;
and obtaining, based on the computation model of the posterior probability, a classification model that takes the perspective label corresponding to the category with the maximum posterior probability in the computation model as the output result.
7. The device screen perspective detection model training method according to claim 6, wherein the process of establishing a feature data set according to the attribute feature data and training, according to the feature data set, a classification model for detecting whether the device screen is perspective further comprises the step of:
optimizing the output result through a loss function.
8. The device screen perspective detection model training method according to claim 7, wherein the output result is optimized through the loss function as follows:
(The loss-function formula is provided as an image in the original publication.)
where n represents the total number of samples, A represents the samples, y represents the actual perspective label, Y represents the predicted perspective label, and ε represents the error term.
9. The device screen perspective detection model training method according to any one of claims 1 to 5, wherein the device attribute information comprises a device brand type, a device screen type, a device factory time, a device battery usage, a device holder gender, and/or a device holder age.
10. A device screen perspective detection method, characterized by comprising the following steps:
acquiring device attribute information of the intelligent device to be tested;
converting the device attribute information of the intelligent device to be tested into attribute feature data;
and inputting the attribute feature data into a classification model to obtain a device screen perspective detection result.
CN202111148190.8A 2021-09-29 2021-09-29 Equipment screen perspective detection model training method and equipment screen perspective detection method Pending CN113901996A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111148190.8A CN113901996A (en) 2021-09-29 2021-09-29 Equipment screen perspective detection model training method and equipment screen perspective detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111148190.8A CN113901996A (en) 2021-09-29 2021-09-29 Equipment screen perspective detection model training method and equipment screen perspective detection method

Publications (1)

Publication Number Publication Date
CN113901996A true CN113901996A (en) 2022-01-07

Family

ID=79189241

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111148190.8A Pending CN113901996A (en) 2021-09-29 2021-09-29 Equipment screen perspective detection model training method and equipment screen perspective detection method

Country Status (1)

Country Link
CN (1) CN113901996A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11989701B2 (en) 2014-10-03 2024-05-21 Ecoatm, Llc System for electrically testing mobile devices at a consumer-operated kiosk, and associated devices and methods


Similar Documents

Publication Publication Date Title
Srinivasan et al. Biases in AI systems
CN107633265B (en) Data processing method and device for optimizing credit evaluation model
US11282000B2 (en) Systems and methods for predictive coding
US20190294921A1 (en) Field identification in an image using artificial intelligence
Yan et al. Characterizing and identifying reverted commits
CN112035846A (en) Unknown vulnerability risk assessment method based on text analysis
WO2022193753A1 (en) Continuous learning method and apparatus, and terminal and storage medium
CN112308069A (en) Click test method, device, equipment and storage medium for software interface
CN111767192B (en) Business data detection method, device, equipment and medium based on artificial intelligence
CN113987182A (en) Fraud entity identification method, device and related equipment based on security intelligence
CN114036531A (en) Multi-scale code measurement-based software security vulnerability detection method
CN113298078A (en) Equipment screen fragmentation detection model training method and equipment screen fragmentation detection method
CN114330533A (en) Equipment screen aging two-classification model training method and equipment screen aging detection method
CN113901996A (en) Equipment screen perspective detection model training method and equipment screen perspective detection method
CN114118526A (en) Enterprise risk prediction method, device, equipment and storage medium
CN112131475B (en) Interpretable and interactive user portrayal method and device
CN111738290B (en) Image detection method, model construction and training method, device, equipment and medium
RU2715024C1 (en) Method of trained recurrent neural network debugging
CN114330534A (en) Equipment screen perspective detection model training method and equipment screen perspective detection method
CN114298204A (en) Equipment screen scratch detection model training method and equipment screen scratch detection method
CN111597936A (en) Face data set labeling method, system, terminal and medium based on deep learning
CN116225956A (en) Automated testing method, apparatus, computer device and storage medium
Klosterman Data Science Projects with Python: A case study approach to gaining valuable insights from real data with machine learning
CN114494856A (en) Equipment model detection model training method and equipment model detection method
CN113887609A (en) Equipment screen aging detection model training method and equipment screen aging detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination