CN116501977B - Method and system for constructing user portraits in online detection commission - Google Patents

Method and system for constructing user portraits in online detection commission

Info

Publication number
CN116501977B
CN116501977B CN202310752715.1A
Authority
CN
China
Prior art keywords
detection
portrait
feature
user
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310752715.1A
Other languages
Chinese (zh)
Other versions
CN116501977A (en)
Inventor
王新祥
王亚平
单良
郑靓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Construction Project Quality Safety Inspection Station Co ltd
Original Assignee
Guangdong Construction Project Quality Safety Inspection Station Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Construction Project Quality Safety Inspection Station Co ltd filed Critical Guangdong Construction Project Quality Safety Inspection Station Co ltd
Priority to CN202310752715.1A priority Critical patent/CN116501977B/en
Publication of CN116501977A publication Critical patent/CN116501977A/en
Application granted granted Critical
Publication of CN116501977B publication Critical patent/CN116501977B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24317Piecewise classification, i.e. whereby each classification requires several discriminant rules
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a system for constructing a user portrait in an online detection commission, relating to the technical field of intelligent recognition. The method comprises: based on a detection commission request of a user, acquiring a user commission record set by using the user information in the request; performing data type identification on the record set and tagging the identification result according to preset data type identification information; inputting the tagged record set into a multi-layer recognition model to obtain identification features, which serve as portrait features; performing coverage analysis on the portrait features to determine portrait feature coverage information; and, when the coverage information meets the coverage requirement, constructing the user portrait from the portrait features. This solves the technical problem in the prior art that the constructed user portrait has low accuracy owing to the lack of management and control over user detection, enables analysis of the user's detection requirements for different detection items, detection frequency, and detection scheme preference, and improves the accuracy of the user portrait.

Description

Method and system for constructing user portraits in online detection commission
Technical Field
The invention relates to the technical field of intelligent recognition, and in particular to a method and a system for constructing a user portrait in an online detection commission.
Background
With the development of science and technology, particularly in the field of user profiling, the user portrait has become an important tool: a user portrait is a user tag system built from the real information of a large target user group, describing the characteristics of the target audience of a product or service. By collecting a user's preference information, behavior information, and the like, a user portrait can be constructed so that the user is better understood and a detection scheme suited to that user can be designed. However, the prior art lacks management and control over user detection, which causes the technical problem that the constructed user portrait has low accuracy.
Disclosure of Invention
The application provides a method and a system for constructing a user portrait in an online detection commission, which are used for solving the technical problem of low accuracy of the constructed user portrait caused by lack of control over user detection in the prior art.
In view of the above problems, the present application provides a method and a system for constructing a user portrait in an online detection commission.
In a first aspect, the present application provides a method for constructing a user portrait in an online detection commission, where the method includes: based on a detection commission request of a user, acquiring a user commission record set by using the user information in the detection commission request, wherein the user commission record set comprises all record data of the user's commissioned detections within a preset time period; performing data type identification on the user commission record set, and tagging the data type identification result according to preset data type identification information; inputting the tagged user commission record set into a multi-layer recognition model to obtain identification features; using the identification features as portrait features; performing portrait feature coverage analysis on the portrait features to determine portrait feature coverage information; and judging whether the portrait feature coverage information meets the coverage requirement, and, when it does, constructing the user portrait from the portrait features.
Preferably, performing data type identification on the user commission record set and tagging the data type identification result according to preset data type identification information includes: constructing a preset data type identification library, wherein the preset data type identification library comprises detection objects, detection parameters, detection time, detection cost, and detection evaluation results; setting data type identification information based on each data type in the preset data type identification library; constructing a mapping relation between each data type in the preset data type identification library and its data type identification information to obtain the preset data type identification information; identifying each data feature of the preset data type identification library against the user commission record set to obtain a data type identification result; and performing traversal matching between the data type identification result and the preset data type identification information, and tagging the data type identification result with the matching identifier.
Preferably, before inputting the tagged user commission record set into the multi-layer recognition model, the method comprises: constructing a multi-layer network framework, wherein the multi-layer network framework comprises a detection category identification layer, a feature identification layer, and a feature comparison-fusion layer; based on the processing and identification requirements of the detection category identification layer, the feature identification layer, and the feature comparison-fusion layer, extracting historical record data to construct a training data set for each network layer; training each network layer of the multi-layer network framework with the training data set to obtain the model parameters of each network layer; and integrating and connecting the layers of the multi-layer network framework based on the model parameters of each network layer to obtain the multi-layer recognition model.
Preferably, training each network layer of the multi-layer network framework with the training data set includes: randomly extracting N groups of training data from the training data set, wherein N is a positive integer greater than 2; performing network layer training on the N groups of training data respectively to obtain N classification models; computing recognition results based on the N classification models to obtain N classification error rates; and updating the model parameters of the N groups of training data according to the N classification error rates, repeatedly extracting the next group of training data and iterating, fusing the iterated classification models to generate an optimal classification model, and obtaining the model parameters of each network layer based on the optimal classification model.
Preferably, performing portrait feature coverage analysis on the portrait features and determining the portrait feature coverage information includes: determining detection target information according to the detection commission request of the user; performing detection evaluation parameter analysis on the detection target information to obtain detection evaluation parameters, and performing detection target parameter analysis to obtain detection target parameters; extracting an evaluation grade and grading standard based on the detection evaluation parameters; determining evaluation portrait features according to the detection evaluation parameters, the evaluation grade, and the grading standard; extracting detection difference indexes according to the detection target parameters, wherein the detection difference indexes are the difference information of the detection target results corresponding to different detection means and detection modes, and determining detection target portrait features based on the detection target parameters and the detection difference indexes; constructing a portrait feature coverage radar chart based on the evaluation portrait features and the detection target portrait features to obtain a portrait feature coverage evaluation list; and performing coverage analysis on the portrait features based on the portrait feature coverage evaluation list, obtaining the portrait feature coverage information from the comparison result between the portrait features and the list, wherein the portrait feature coverage information is used to evaluate the coverage degree of the portrait features.
Preferably, after judging whether the portrait feature coverage information meets the coverage requirement, the method includes: when the requirement is not met, performing association analysis between the missing features and the portrait features to obtain associated portrait features, wherein the associated portrait features are portrait features that have an association with the missing features; performing feature prediction according to the associated portrait features and their association with the missing features to obtain missing predicted features; and supplementing the portrait features with the missing predicted features.
Preferably, performing feature prediction according to the associated portrait features and their association with the missing features comprises the following steps: determining conflicting associated portrait features and mutually promoting associated portrait features from the associated portrait features; performing weight distribution over the conflicting associated portrait features, the mutually promoting associated portrait features, and the missing features using the Delphi method to obtain feature weights; constructing a feature fitness function based on the associations of the conflicting associated portrait features and the mutually promoting associated portrait features and on the feature weights; and performing random feature assignment for the missing features based on the feature fitness function, iteratively optimizing via the feature fitness calculation results, and taking the optimal missing features as the feature prediction result.
In a second aspect, the present application provides a system for constructing a user portrait in an online detection commission, the system comprising: a record set acquisition module for acquiring, based on a detection commission request of a user, a user commission record set using the user information in the request, wherein the user commission record set comprises all record data of the user's commissioned detections within a preset time period; a data type identification module for performing data type identification on the user commission record set and tagging the data type identification result according to preset data type identification information; a first input module for inputting the tagged user commission record set into the multi-layer recognition model to obtain identification features; a portrait feature module for using the identification features as portrait features; an analysis module for performing portrait feature coverage analysis on the portrait features and determining portrait feature coverage information; and a first judging module for judging whether the portrait feature coverage information meets the coverage requirement and, when it does, constructing the user portrait from the portrait features.
One or more technical solutions provided by the application have at least the following technical effects or advantages:
The application provides a method and a system for constructing user portraits in an online detection commission, relating to the technical field of intelligent recognition. It solves the technical problem of low accuracy of the constructed user portrait caused by the lack of management and control over user detection in the prior art, enables analysis of the user's detection requirements for different detection items, detection frequency, and detection scheme preference in order to construct the user portrait, and improves the accuracy of the user portrait.
Drawings
FIG. 1 is a flow chart of the method for constructing a user portrait in an online detection commission;
FIG. 2 is a schematic flow chart of tagging the data type identification result in the method for constructing a user portrait in an online detection commission;
FIG. 3 is a schematic flow chart of obtaining the multi-layer recognition model in the method for constructing a user portrait in an online detection commission;
FIG. 4 is a schematic flow chart of determining the portrait feature coverage information in the method for constructing a user portrait in an online detection commission;
FIG. 5 is a schematic flow chart of supplementing the portrait features in the method for constructing a user portrait in an online detection commission;
FIG. 6 is a schematic diagram of the system for constructing a user portrait in an online detection commission according to the present application.
Reference numerals: record set acquisition module 1, data type identification module 2, first input module 3, portrait feature module 4, analysis module 5, first judging module 6.
Detailed Description
The application provides a method and a system for constructing a user portrait in an online detection commission, which are used for solving the technical problem of low accuracy of the constructed user portrait caused by lack of control over user detection in the prior art.
Embodiment one: as shown in FIG. 1, the embodiment of the application provides a method for constructing a user portrait in an online detection commission, which comprises the following steps:
step S100: based on a detection entrusting request of a user, acquiring a user entrusting record set by utilizing user information in the detection entrusting request, wherein the user entrusting record set comprises all record data of user entrusting detection in a preset time period;
specifically, the method for constructing a user portrait in online detection commission provided by the embodiment of the application is applied to a system for constructing a user portrait in online detection commission, when a user performs online detection commission, according to data of third party detection commission in past time of the user, in order to ensure that the user can be accurately analyzed according to detection requirements of different detection items, detection frequencies of different items, preference of a detection scheme and the like in the later period, the purpose of constructing the user portrait is achieved, therefore, firstly, a detection commission request of the current online user needs to be extracted, the detection commission request contains personal information, commission request information, detection requirement information, detection scheme information and the like of the user, and meanwhile, user information contained in the extracted detection commission request is utilized to collect a user commission record set corresponding to the user, and all record data commission detected by the user in a preset time period can be set as a month, namely, the record data of the user commission detection in the previous month is extracted by taking the current moment as a reference, and important reference is carried out on the user in online detection commission.
Step S200: carrying out data type identification according to the user entrusting record set, and identifying a data type identification result according to preset data type identification information;
further, as shown in fig. 2, step S200 of the present application further includes:
step S210: constructing a preset data type identification library, wherein the preset data type identification library comprises detection objects, detection parameters, detection time, detection cost and detection evaluation results;
step S220: setting data type identification information based on each data type in the preset data type identification library;
step S230: constructing a mapping relation between each data type in the preset data type identification library and the data type identification information to obtain the preset data type identification information;
step S240: identifying each data characteristic of the preset data type identification library according to the user entrusting record set to obtain a data type identification result;
step S250: performing traversal matching by using the data type identification result and the preset data type identification information, and identifying the data type identification result by using a matching identification.
Specifically, to improve the accuracy of the later user portrait, the data contained in the user commission record set must be identified, i.e., data type identification is performed on the record set according to the different data types present, which may include integer, floating point, logical, and character types, yielding the data type identification result corresponding to the user commission record set. The data types in that result are then tagged according to the preset data type identification information. To obtain the preset data type identification information, a preset data type identification library is first constructed; it includes, but is not limited to, the detection object, detection parameters, detection time, detection cost, and detection evaluation results. The detection object refers to the object the user commissions for detection in the record set; the detection parameters refer to the parameter data generated during the detection process; the detection time refers to the length of time the detection of the object required; the detection cost refers to the expense the detection of the object incurred; and the detection evaluation results refer to the assessments produced for the detection. Data type identification information is then set for each data type in the library, and a mapping relation between each data type and its identification information is constructed. The construction of the mapping relation may proceed by taking one value in each data type and one value in the data type identification information, with each identification value corresponding to exactly one data type, while several data values may map to the same identification value; the preset data type identification information is obtained from the constructed mapping relation. Further, on the basis of the acquired user commission record set, the data types in the record set are identified according to the features contained in each data item of the preset data type identification library, which may include data capacity features, data type features, data speed features, data variability features, data authenticity features, and the like; the result is recorded as the data type identification result. Finally, the data type identification result is traversed in turn against each information node contained in the preset data type identification information, and on a successful match the data type identification result is tagged with the matching identifier. Tagging the user commission record set in this way lays the groundwork for the subsequent construction of the user portrait in the online detection commission.
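As a rough illustration of the identification-library mapping and traversal matching described above, the following Python sketch tags record fields against a small identifier library; the library contents and tag strings are invented for the example, not taken from the patent.

```python
# Hypothetical identifier library: one tag per data type named in the text.
TYPE_TAGS = {
    "detection_object": "OBJ",
    "detection_parameter": "PAR",
    "detection_time": "TIME",
    "detection_cost": "COST",
    "detection_evaluation": "EVAL",
}

def identify_record(record):
    """Traverse the record's fields, match each against the identifier
    library, and attach the matching tag; unmatched fields get tag None."""
    tagged = {}
    for field, value in record.items():
        tagged[field] = {"value": value, "tag": TYPE_TAGS.get(field)}
    return tagged

tagged = identify_record({"detection_object": "concrete", "detection_cost": 1200})
```

Each field that matches a node in the library receives its identifier, mirroring the traversal-matching step; fields outside the library are simply left untagged.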
Step S300: inputting the identified user entrusting record set into a multi-layer identification model to obtain identification characteristics;
further, as shown in fig. 3, step S300 of the present application further includes:
step S310: constructing a multi-layer network framework, wherein the multi-layer network framework comprises a detection category identification layer, a feature identification layer and a feature comparison fusion layer;
step S320: based on the processing identification requirements of the detection category identification layer, the feature identification layer and the feature comparison fusion layer, extracting historical record data to construct a training data set of each network layer;
step S330: training each network layer of the multi-layer network framework by utilizing the training data set to obtain model parameters of each network layer;
step S340: and integrating and connecting the multi-layer network frames based on the model parameters of each network layer to obtain a multi-layer identification model.
Further, step S330 of the present application includes:
step S331: randomly extracting N groups of training data from the training data set, wherein N is a positive integer greater than 2;
step S332: respectively carrying out network layer training on the N groups of training data to obtain N classification models;
step S333: calculating recognition results based on the N classification models to obtain N classification error rates;
Step S334: updating model parameters of the N groups of training data according to N classification error rates, repeatedly extracting the next group of training data, iterating, fusing the iterated classification models to generate an optimal classification model, and obtaining model parameters of each network layer based on the optimal classification model.
Specifically, to ensure the accuracy of user portrait construction, the tagged user commission record set is input into a multi-layer recognition model, which is constructed through the following steps. First, a multi-layer network framework is built, comprising a detection category identification layer, a feature identification layer, and a feature comparison-fusion layer: the detection category identification layer identifies the detection categories to be carried out in the tagged user commission record set; the feature identification layer identifies the data features contained in the tagged record set; and the feature comparison-fusion layer compares the different data features present in the tagged record set and fuses data features of high similarity. Further, historical record data is extracted according to the data processing and identification requirements of the three layers, and training data sets for each network layer are constructed on this basis. Each network layer of the multi-layer network framework is then trained with the training data sets using the AdaBoost algorithm, which, via the weights of the training data and of the network layers, screens the trained network layers for those with the smallest error to form the final multi-layer network framework. Concretely, N groups of training data are first randomly extracted from the training data set, where N is a positive integer greater than 2, and network layer training is performed on each of the extracted N groups. The training process may be to define an initialization function and training data, input each group of the N groups into the network layer, and supervise and adjust the layer's output via the supervision data corresponding to that group, where the supervision data corresponds one-to-one with the N groups of training data. When the output of the network layer is consistent with the supervision data, training of the current group ends; once all N groups have been trained, N classification models are obtained. Further, the recognition results output by the N classification models are computed and compared against the data classifications to obtain N classification error rates: the greater the discrepancy in the comparison, the higher the classification error rate. Finally, the model parameters are updated according to the N classification error rates, the next group of training data is repeatedly extracted and iterated on, and the iterated classification models are fused. Fusion is used because a single model is prone to overfitting and limited prediction accuracy, and its predictions for a data set may often skew too large or too small; by partitioning the data set and sampling with replacement, different data subsets are generated, different classifiers are trained on these subsets, and the classifiers are finally combined into an optimal classification model. The model parameters of each network layer are extracted on the basis of the optimal classification model, and the multi-layer recognition model is obtained after the layers of the multi-layer network framework are integrated and connected. Accordingly, after the tagged user commission record set is input into the multi-layer recognition model, the identification features of the record set are output, laying a solid foundation for the subsequent construction of the user portrait in the online detection commission.
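The AdaBoost-style loop described above — per-round weighted error rates, sample re-weighting, and fusion of the round models by weighted vote — can be sketched as follows. The decision stumps and the toy one-dimensional data are illustrative assumptions, not the embodiment's actual network layers.

```python
import math

# Toy 1-D training data: (feature value, label in {-1, +1}).
DATA = [(0.1, -1), (0.3, -1), (0.45, -1), (0.6, 1), (0.8, 1), (0.9, 1)]

def stump(threshold):
    """Weak learner: a single-threshold decision stump."""
    return lambda x: 1 if x >= threshold else -1

def train_boosted_ensemble(data, thresholds):
    """Each round picks the stump with the lowest weighted classification
    error, derives its vote weight (alpha) from that error rate, and
    re-weights the samples so later rounds focus on past mistakes."""
    n = len(data)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(len(thresholds)):
        best = None
        for t in thresholds:
            h = stump(t)
            err = sum(wi for wi, (x, y) in zip(w, data) if h(x) != y)
            if best is None or err < best[1]:
                best = (h, err)
        h, err = best
        err = min(max(err, 1e-10), 1 - 1e-10)       # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)     # vote weight from error rate
        ensemble.append((alpha, h))
        w = [wi * math.exp(-alpha * y * h(x)) for wi, (x, y) in zip(w, data)]
        total = sum(w)
        w = [wi / total for wi in w]                # renormalize sample weights
    return ensemble

def predict(ensemble, x):
    """Fuse the round models by weighted vote (the 'optimal' combined model)."""
    score = sum(alpha * h(x) for alpha, h in ensemble)
    return 1 if score >= 0 else -1

model = train_boosted_ensemble(DATA, thresholds=[0.2, 0.5, 0.7])
```

The weighted-vote fusion at the end corresponds to combining the N iterated classification models into the optimal classification model described in the text.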
Step S400: utilizing the identification feature as an portrait feature;
specifically, the identified user request record set is input into the identification features output by the multi-layer identification model to be used as a basis, the portrait features of the user are determined, the identification features output by the multi-layer identification model can comprise the identification features such as the continuity identification features of the user, the variability identification features of the user and the correlation identification features of the user, and the determined portrait features of the user can be the attribute portrait features of the user, the preference portrait features of the user, the behavior portrait features of the user and the like, so that the online detection request of the user is realized.
Step S500: performing portrait feature coverage analysis on the portrait features to determine portrait feature coverage information;
further, as shown in fig. 4, step S500 of the present application further includes:
step S510: determining detection target information according to a detection entrusting request of a user;
step S520: performing detection evaluation parameter analysis on the detection target information to obtain detection evaluation parameters, and performing detection target parameter analysis to obtain detection target parameters;
Step S530: extracting an evaluation grade and a grade division standard based on the detection evaluation parameter;
step S540: determining evaluation portrait features according to the detection evaluation parameters, the evaluation grades and the grade division standards;
step S550: extracting detection difference indexes according to the detection target parameters, wherein the detection difference indexes are difference information of detection target results corresponding to different detection means and detection modes, and determining detection target portrait features based on the detection target parameters and the detection difference indexes;
step S560: constructing a portrait feature coverage radar chart based on the evaluation portrait features and the detection target portrait features to obtain a portrait feature coverage evaluation list;
step S570: and carrying out coverage analysis on the portrait features based on the portrait feature coverage evaluation list, and obtaining portrait feature coverage information according to the comparison result of the portrait features and the portrait feature coverage evaluation list, wherein the portrait feature coverage information is used for evaluating the coverage degree of the portrait features.
Specifically, since the accuracy of the user portrait is determined by the coverage degree of the portrait features, that is, the greater the coverage degree of the portrait features, the higher the accuracy of the user portrait, the coverage of the obtained portrait features needs to be analyzed. Firstly, on the basis of the detection entrusting request of the user, the detection target that the user requires to be detected is extracted from the request, so as to determine the detection target information. The determined detection target information is then analyzed for detection evaluation parameters and detection target parameters respectively. The detection evaluation parameters are preset by relevant technicians according to standard detection parameters, that is, the higher the matching degree between the detection target information and the preset standard detection parameters, the higher its evaluation; in this way the detection evaluation parameters are obtained. The detection target parameters are obtained by analyzing the detection target to be detected, and may include detection concentration parameters, detection content parameters and detection quality parameters.
Further, the detection target information is evaluated on the basis of the detection evaluation parameters, the evaluation grades and the grade division standards, and the detection target information whose evaluation grade is at the first level, namely whose matching degree is greater than or equal to 80%, is determined as the evaluation portrait features. Detection difference indexes are extracted according to the detection target parameters, the detection difference indexes being the difference information of the detection target results corresponding to different detection means and detection modes, and the detection target portrait features are determined on this basis. On the basis of the determined evaluation portrait features and detection target portrait features, the portrait feature data are displayed in the form of a two-dimensional chart in which the evaluation portrait features and the detection target portrait features are expressed on axes starting from the same point, so as to complete the construction of the portrait feature coverage radar chart. Taking the evaluation portrait features as the horizontal header and the detection target portrait features as the vertical header, the associated data are filled in to obtain the portrait feature coverage evaluation list. On this basis, coverage analysis is performed on the portrait features against the portrait feature coverage evaluation list, that is, matched feature data are recorded as covered according to the comparison result, so as to obtain the portrait feature coverage information; the more of the evaluation list the user's portrait features cover, the higher the coverage degree, which supports subsequently constructing a more accurate user portrait.
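The coverage comparison of steps S560 to S570 can be sketched as a simple set match between the user's portrait features and the rows of the portrait feature coverage evaluation list. The feature names below are illustrative assumptions; the 85% threshold is the coverage requirement the embodiment later describes:

```python
def coverage_info(portrait_features, evaluation_list):
    """Compare a user's portrait features with the rows of the
    portrait feature coverage evaluation list."""
    covered = portrait_features & evaluation_list
    missing = evaluation_list - portrait_features
    rate = len(covered) / len(evaluation_list) if evaluation_list else 0.0
    return covered, missing, rate

# illustrative feature names; the real list is built from the evaluation
# portrait features and detection target portrait features
evaluation_list = {"attribute", "preference", "behavior",
                   "detection frequency", "detection scheme"}
portrait_features = {"attribute", "preference", "behavior",
                     "detection frequency"}

covered, missing, rate = coverage_info(portrait_features, evaluation_list)
meets_requirement = rate >= 0.85   # the embodiment's coverage requirement
print(sorted(missing), rate, meets_requirement)
```

Here one row of the evaluation list is uncovered, so the coverage rate falls below the requirement and the missing-feature supplementation of step S600 would be triggered.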
Step S600: judging whether the portrait feature coverage information meets the coverage requirement, and constructing a user portrait by using the portrait feature when the portrait feature coverage information meets the coverage requirement.
Further, as shown in fig. 5, step S600 of the present application further includes:
step S610: when the coverage requirement is not satisfied, carrying out association analysis on the missing features and the portrait features to obtain associated portrait features, wherein the associated portrait features are portrait features having an association with the missing features;
step S620: carrying out feature prediction according to the associated portrait features and the association between the associated portrait features and the missing features to obtain missing prediction features;
step S630: and supplementing the portrait features with the missing prediction features.
Further, step S620 of the present application includes:
step S621: determining conflicting associated portrait features and mutually promoting associated portrait features according to the associated portrait features;
step S622: carrying out weight distribution on the conflicting associated portrait features, the mutually promoting associated portrait features and the missing features by using the Delphi method to obtain feature weights;
step S623: constructing a feature fitness function based on the associations of the conflicting associated portrait features and the mutually promoting associated portrait features and the feature weights;
Step S624: and carrying out random feature distribution on the missing features based on the feature fitness function, carrying out iterative optimization through a feature fitness calculation result, and determining the optimal missing features as feature prediction results.
Specifically, the obtained portrait feature coverage information is compared against a coverage requirement, which may be set to 85%. When the portrait feature coverage information meets the coverage requirement, that is, when the coverage rate of the portrait features is greater than or equal to 85%, the information covered by the portrait features is considered comprehensive and of high accuracy, and the user portrait is constructed using the portrait features. When the portrait feature coverage information does not meet the coverage requirement, that is, when the coverage rate is less than 85%, the information covered by the portrait features is considered incomplete and of low accuracy. In that case, association analysis is performed on the missing features and the portrait features: frequent patterns, associations or causal structures between the missing features and the portrait features are searched for in the portrait feature coverage information, so as to obtain the associated portrait features, which are portrait features having an association with the missing features. The features missing from the portrait feature coverage information are then predicted according to the associated portrait features and their association with the missing features; the missing features are estimated by linear regression and rules on the basis of the existing portrait features, so as to obtain the missing prediction features in advance. Further, on the basis of the associated portrait features, associated portrait features with weak association between portrait features, namely an association degree of less than 20%, are regarded as conflicting associated portrait features, and associated portrait features with strong association between portrait features, namely an association degree of more than 80%, are regarded as mutually promoting associated portrait features, meaning that two or more portrait features promote one another. Then weight distribution is performed on the conflicting associated portrait features, the mutually promoting associated portrait features and the missing features by means of the Delphi method: the weight opinions of the expert group members are collected in a back-to-back communication mode, and after several rounds of inquiry and feedback the opinions of the group members gradually converge, finally yielding a collective weight distribution judgment of high accuracy, so that the weight of each feature is obtained.
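The linear-regression estimate of a missing feature mentioned above can be sketched as an ordinary least-squares fit of the missing feature against an associated portrait feature over historical users; all numbers below are made up for illustration:

```python
def fit_line(xs, ys):
    # ordinary least squares for y = a * x + b
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    return a, mean_y - a * mean_x

# historical users: value of an associated portrait feature vs. the value
# of the feature that is missing for the current user (illustrative data)
assoc_values   = [1.0, 2.0, 3.0, 4.0]
missing_values = [2.1, 3.9, 6.0, 8.1]

a, b = fit_line(assoc_values, missing_values)
# predict the missing feature from the current user's associated feature
missing_prediction = a * 2.5 + b
print(missing_prediction)
```

In practice several associated portrait features could be combined in a multivariate regression, but the one-variable case shows the idea.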
Then, a feature fitness function is constructed based on the associations of the conflicting associated portrait features and the mutually promoting associated portrait features and the feature weights; that is, the correspondence between these associations and weights on the one hand and the portrait feature fitness on the other may be a real-valued function. Meanwhile, the missing features are randomly distributed through the constructed feature fitness function, that is, the predicted missing features are randomly extracted, distributed and supplemented, and iterative optimization is carried out on the basis of the feature fitness calculation result, which is determined by the weighted average of the portrait features. The optimal missing features determined in this way serve as the feature prediction result, and the feature prediction result contained in the missing prediction features is used to supplement the portrait features, thereby completing the user portrait in the online detection entrusting and improving the accuracy of user portrait construction.
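The fitness-driven search of steps S622 to S624 can be sketched as follows: candidate values for a missing feature are randomly assigned, scored by a weighted-average fitness, and the best-scoring assignment is kept. The weights, feature names, candidate values and iteration count are all illustrative assumptions:

```python
import random

def fitness(features, weights):
    # weighted average of feature values, as the embodiment describes
    total_weight = sum(weights[name] for name in features)
    return sum(weights[name] * value
               for name, value in features.items()) / total_weight

def optimize_missing(known, weights, missing_name, candidates, iters, rng):
    """Randomly assign candidate values to the missing feature and keep
    the assignment with the best fitness (simple iterative optimization)."""
    best_value, best_fit = None, float("-inf")
    for _ in range(iters):
        trial = dict(known)
        trial[missing_name] = rng.choice(candidates)   # random assignment
        trial_fit = fitness(trial, weights)
        if trial_fit > best_fit:
            best_value, best_fit = trial[missing_name], trial_fit
    return best_value

rng = random.Random(1)
# weights as produced e.g. by a Delphi-style expert process (made up here)
weights = {"mutual": 0.5, "conflict": 0.2, "missing": 0.3}
known = {"mutual": 0.8, "conflict": 0.4}
best = optimize_missing(known, weights, "missing",
                        candidates=[0.1, 0.5, 0.9], iters=50, rng=rng)
print(best)
```

In the embodiment the fitness would be evaluated over all conflicting and mutually promoting associated portrait features rather than this three-feature toy, but the assign-score-keep-best loop is the same.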
In summary, the method for constructing a user portrait in an online detection commission provided by the embodiment of the application achieves at least the following technical effect: a user portrait is constructed by analyzing the detection requirements of users for different detection items, their detection frequency and their preference among detection schemes, and the accuracy of the user portrait is thereby improved.
Embodiment two: based on the same inventive concept as the method for constructing a user portrait in an online detection commission in the foregoing embodiment, as shown in fig. 6, the present application provides a system for constructing a user portrait in an online detection commission, the system comprising:
the record set acquisition module 1 is used for acquiring a user entrusting record set by utilizing user information in a detection entrusting request based on the detection entrusting request of a user, wherein the user entrusting record set comprises all record data of detection entrusted by the user within a preset time period;
the data type identification module 2 is used for carrying out data type identification according to the user entrusting record set and identifying a data type identification result according to preset data type identification information;
the first input module 3 is used for inputting the identified user entrusting record set into the multi-layer identification model to obtain identification characteristics;
an portrait feature module 4, wherein the portrait feature module 4 is used for using the identification feature as a portrait feature;
the analysis module 5 is used for carrying out portrait feature coverage analysis on the portrait features and determining portrait feature coverage information;
The first judging module 6 is used for judging whether the portrait feature coverage information meets the coverage requirement, and when the portrait feature coverage information meets the coverage requirement, the portrait feature of the user is utilized to construct a portrait of the user.
Further, the system further comprises:
the identification library module is used for constructing a preset data type identification library, wherein the identification library comprises detection objects, detection parameters, detection time, detection cost and detection evaluation results;
the data type module is used for setting data type identification information based on each data type in the preset data type identification library;
the mapping relation module is used for constructing the mapping relation between each data type in the preset data type identification library and the data type identification information to obtain the preset data type identification information;
the identification module is used for identifying each data characteristic of the preset data type identification library according to the user entrusting record set and obtaining a data type identification result;
and the traversal matching module is used for carrying out traversal matching by utilizing the data type identification result and the preset data type identification information, and identifying the data type identification result by utilizing a matching identification.
Further, the system further comprises:
the framework construction module is used for constructing a multi-layer network framework, wherein the multi-layer network framework comprises a detection category identification layer, a feature identification layer and a feature comparison fusion layer;
the recognition requirement module is used for extracting historical record data to construct training data sets of all network layers based on the processing recognition requirements of the detection category recognition layer, the feature recognition layer and the feature comparison fusion layer;
the training module is used for training each network layer of the multi-layer network frame by utilizing the training data set to obtain model parameters of each network layer;
and the integration connection module is used for integrating and connecting the multi-layer network frames based on the model parameters of each network layer to obtain a multi-layer identification model.
Further, the system further comprises:
the training data module is used for randomly extracting N groups of training data from the training data set, wherein N is a positive integer greater than 2;
the network layer training module is used for respectively carrying out network layer training on the N groups of training data to obtain N classification models;
The calculation and identification module is used for calculating and identifying results based on the N classification models to obtain N classification error rates;
and the iteration module is used for updating the model parameters of the N groups of training data according to the N classification error rates, repeatedly extracting the next group of training data, iterating, fusing the iterated classification models to generate an optimal classification model, and acquiring the model parameters of each network layer based on the optimal classification model.
Further, the system further comprises:
the information determining module is used for determining detection target information according to a detection entrusting request of a user;
the parameter analysis module is used for carrying out detection evaluation parameter analysis on the detection target information to obtain detection evaluation parameters and carrying out detection target parameter analysis to obtain detection target parameters;
the dividing standard module is used for extracting an evaluation grade and a grade dividing standard based on the detection evaluation parameter;
the evaluation feature module is used for determining the evaluation portrait features according to the detection evaluation parameters, the evaluation grades and the grade division standards;
The first feature determining module is used for extracting detection difference indexes according to the detection target parameters, wherein the detection difference indexes are difference information of detection target results corresponding to different detection means and detection modes, and determining detection target portrait features based on the detection target parameters and the detection difference indexes;
the coverage radar chart module is used for constructing a portrait feature coverage radar chart based on the evaluation portrait features and the detection target portrait features to obtain a portrait feature coverage evaluation list;
the coverage analysis module is used for carrying out coverage analysis on the portrait features based on the portrait feature coverage evaluation list, obtaining portrait feature coverage information according to the comparison result of the portrait features and the portrait feature coverage evaluation list, and the portrait feature coverage information is used for evaluating the coverage degree of the portrait features.
Further, the system further comprises:
the relevance analysis module is used for, when the coverage requirement is not satisfied, carrying out relevance analysis on the missing features and the portrait features to obtain associated portrait features, wherein the associated portrait features are portrait features having an association with the missing features;
The feature prediction module is used for performing feature prediction according to the associated portrait features and the association between the associated portrait features and the missing features to obtain missing prediction features;
and the supplementing module is used for supplementing the portrait features by utilizing the missing predicted features.
Further, the system further comprises:
the second feature determining module is used for determining conflicting associated portrait features and mutually promoting associated portrait features according to the associated portrait features;
the weight distribution module is used for carrying out weight distribution on the conflicting associated portrait features, the mutually promoting associated portrait features and the missing features by using the Delphi method to obtain feature weights;
the function construction module is used for constructing a feature fitness function based on the associations of the conflicting associated portrait features and the mutually promoting associated portrait features and the feature weights;
and the iterative optimization module is used for carrying out random feature distribution on the missing features based on the feature fitness function, carrying out iterative optimization through a feature fitness calculation result, and determining the optimal missing features as feature prediction results.
Since the method for constructing a user portrait in an online detection commission has been described in detail above, those skilled in the art will clearly understand it; the system disclosed in this embodiment is therefore described relatively simply, and for relevant details reference may be made to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A method for constructing a user portrait in an online detection commission, characterized by comprising the following steps:
based on a detection entrusting request of a user, acquiring a user entrusting record set by utilizing user information in the detection entrusting request, wherein the user entrusting record set comprises all record data of user entrusting detection in a preset time period;
Carrying out data type identification according to the user entrusting record set, and identifying a data type identification result according to preset data type identification information;
inputting the identified user entrusting record set into a multi-layer identification model to obtain identification characteristics;
utilizing the identification features as portrait features;
performing portrait feature coverage analysis on the portrait features to determine portrait feature coverage information;
judging whether the portrait feature coverage information meets a coverage requirement, and constructing a user portrait by using the portrait features when the portrait feature coverage information meets the coverage requirement;
wherein performing portrait feature coverage analysis on the portrait features to determine the portrait feature coverage information comprises:
determining detection target information according to a detection entrusting request of a user;
performing detection evaluation parameter analysis on the detection target information to obtain detection evaluation parameters, and performing detection target parameter analysis to obtain detection target parameters;
extracting an evaluation grade and a grade division standard based on the detection evaluation parameter;
determining evaluation portrait features according to the detection evaluation parameters, the evaluation grades and the grade division standards;
extracting detection difference indexes according to the detection target parameters, wherein the detection difference indexes are difference information of detection target results corresponding to different detection means and detection modes, and determining detection target portrait features based on the detection target parameters and the detection difference indexes;
constructing a portrait feature coverage radar chart based on the evaluation portrait features and the detection target portrait features to obtain a portrait feature coverage evaluation list;
and carrying out coverage analysis on the portrait features based on the portrait feature coverage evaluation list, and obtaining portrait feature coverage information according to the comparison result of the portrait features and the portrait feature coverage evaluation list, wherein the portrait feature coverage information is used for evaluating the coverage degree of the portrait features.
2. The method of claim 1, wherein performing data type identification according to the user entrusting record set and identifying a data type identification result according to preset data type identification information comprises:
constructing a preset data type identification library, wherein the preset data type identification library comprises detection objects, detection parameters, detection time, detection cost and detection evaluation results;
setting data type identification information based on each data type in the preset data type identification library;
constructing a mapping relation between each data type in the preset data type identification library and the data type identification information to obtain the preset data type identification information;
identifying each data characteristic of the preset data type identification library according to the user entrusting record set to obtain a data type identification result;
Performing traversal matching by using the data type identification result and the preset data type identification information, and identifying the data type identification result by using a matching identification.
3. The method of claim 1, wherein before inputting the identified user entrusting record set into the multi-layer identification model, the method comprises:
constructing a multi-layer network framework, wherein the multi-layer network framework comprises a detection category identification layer, a feature identification layer and a feature comparison fusion layer;
based on the processing identification requirements of the detection category identification layer, the feature identification layer and the feature comparison fusion layer, extracting historical record data to construct a training data set of each network layer;
training each network layer of the multi-layer network framework by utilizing the training data set to obtain model parameters of each network layer;
and integrating and connecting the multi-layer network frames based on the model parameters of each network layer to obtain a multi-layer identification model.
4. The method of claim 3, wherein training each network layer of the multi-layer network framework by utilizing the training data set comprises:
randomly extracting N groups of training data from the training data set, wherein N is a positive integer greater than 2;
Respectively carrying out network layer training on the N groups of training data to obtain N classification models;
calculating recognition results based on the N classification models to obtain N classification error rates;
updating model parameters of the N groups of training data according to N classification error rates, repeatedly extracting the next group of training data, iterating, fusing the iterated classification models to generate an optimal classification model, and obtaining model parameters of each network layer based on the optimal classification model.
5. The method of claim 1, wherein after judging whether the portrait feature coverage information meets the coverage requirement, the method comprises:
when the coverage requirement is not satisfied, carrying out association analysis on the missing features and the portrait features to obtain associated portrait features, wherein the associated portrait features are portrait features having an association with the missing features;
carrying out feature prediction according to the associated portrait features and the association between the associated portrait features and the missing features to obtain missing prediction features;
and supplementing the portrait features with the missing prediction features.
6. The method of claim 5, wherein performing feature prediction based on the associated portrait features and their association with missing features comprises:
determining conflicting associated portrait features and mutually promoting associated portrait features according to the associated portrait features;
carrying out weight distribution on the conflicting associated portrait features, the mutually promoting associated portrait features and the missing features by using the Delphi method to obtain feature weights;
constructing a feature fitness function based on the associations of the conflicting associated portrait features and the mutually promoting associated portrait features and the feature weights;
and carrying out random feature distribution on the missing features based on the feature fitness function, carrying out iterative optimization through a feature fitness calculation result, and determining the optimal missing features as feature prediction results.
7. A system for constructing a user portrait in an online detection commission, characterized by comprising:
the record set acquisition module is used for acquiring a user entrusting record set by utilizing user information in a detection entrusting request based on the detection entrusting request of a user, wherein the user entrusting record set comprises all record data detected by the user entrusting in a preset time period;
the data type identification module is used for carrying out data type identification according to the user entrusting record set and identifying a data type identification result according to preset data type identification information;
The first input module is used for inputting the identified user entrusting record set into the multi-layer identification model to obtain identification characteristics;
the portrait feature module is used for utilizing the identification feature as a portrait feature;
the analysis module is used for carrying out portrait feature coverage analysis on the portrait features and determining portrait feature coverage information;
the first judging module is used for judging whether the portrait feature coverage information meets the coverage requirement or not, and when the portrait feature coverage information meets the coverage requirement, constructing a user portrait by utilizing the portrait feature;
the information determining module is used for determining detection target information according to a detection entrusting request of a user;
the parameter analysis module is used for carrying out detection evaluation parameter analysis on the detection target information to obtain detection evaluation parameters and carrying out detection target parameter analysis to obtain detection target parameters;
the dividing standard module is used for extracting an evaluation grade and a grade dividing standard based on the detection evaluation parameter;
the evaluation feature module is used for determining the evaluation portrait features according to the detection evaluation parameters, the evaluation grades and the grade division standards;
The first feature determining module is used for extracting detection difference indexes according to the detection target parameters, wherein the detection difference indexes are difference information of detection target results corresponding to different detection means and detection modes, and determining detection target portrait features based on the detection target parameters and the detection difference indexes;
the coverage radar chart module is used for constructing a portrait feature coverage radar chart based on the evaluation portrait features and the detection target portrait features to obtain a portrait feature coverage evaluation list;
the coverage analysis module is used for carrying out coverage analysis on the portrait features based on the portrait feature coverage evaluation list, obtaining portrait feature coverage information according to the comparison result of the portrait features and the portrait feature coverage evaluation list, and the portrait feature coverage information is used for evaluating the coverage degree of the portrait features.
CN202310752715.1A 2023-06-26 2023-06-26 Method and system for constructing user portraits in online detection commission Active CN116501977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310752715.1A CN116501977B (en) 2023-06-26 2023-06-26 Method and system for constructing user portraits in online detection commission


Publications (2)

Publication Number Publication Date
CN116501977A (en) 2023-07-28
CN116501977B (en) 2023-09-01

Family

ID=87325016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310752715.1A Active CN116501977B (en) 2023-06-26 2023-06-26 Method and system for constructing user portraits in online detection commission

Country Status (1)

Country Link
CN (1) CN116501977B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109711874A (en) * 2018-12-17 2019-05-03 平安科技(深圳)有限公司 User's portrait generation method, device, computer equipment and storage medium
CN113836195A (en) * 2021-09-02 2021-12-24 国家电网有限公司客户服务中心 Knowledge recommendation method and system based on user portrait
CN114201516A (en) * 2020-09-03 2022-03-18 腾讯科技(深圳)有限公司 User portrait construction method, information recommendation method and related device
CN115936758A (en) * 2022-12-30 2023-04-07 企知道网络技术有限公司 Intelligent customer-extending method based on big data and related device
CN115994259A (en) * 2021-10-20 2023-04-21 上海点掌文化科技股份有限公司 User portrait generation method and device, storage medium and terminal



Similar Documents

Publication Publication Date Title
CN109345302B (en) Machine learning model training method and device, storage medium and computer equipment
CN109062962B (en) Weather information fused gated cyclic neural network interest point recommendation method
CN106951471B (en) SVM-based label development trend prediction model construction method
CN110889450B (en) Super-parameter tuning and model construction method and device
CN109471982B (en) Web service recommendation method based on QoS (quality of service) perception of user and service clustering
CN112990972A (en) Recommendation method based on heterogeneous graph neural network
CN112131480A (en) Personalized commodity recommendation method and system based on multilayer heterogeneous attribute network representation learning
CN108897750B (en) Personalized place recommendation method and device integrating multiple contextual information
CN114549046A (en) Sales prediction method, system, device and storage medium based on fusion model
CN111582538A (en) Community value prediction method and system based on graph neural network
CN107016416B (en) Data classification prediction method based on neighborhood rough set and PCA fusion
CN112149352B (en) Prediction method for marketing activity clicking by combining GBDT automatic characteristic engineering
CN111931043B (en) Recommending method and system for science and technology resources
CN115358809A (en) Multi-intention recommendation method and device based on graph comparison learning
CN111898860A (en) Site selection and operation strategy generation method for digital audio-visual place and storage medium
CN115456707A (en) Method and device for providing commodity recommendation information and electronic equipment
CN117271905B (en) Crowd image-based lateral demand analysis method and system
CN109597944B (en) Single-classification microblog rumor detection model based on deep belief network
CN116501977B (en) Method and system for constructing user portraits in online detection commission
CN113111256A (en) Production work order recommendation method based on depth knowledge map
CN113032688B (en) Method for predicting access position of social network user at given future time
CN112084415B (en) Recommendation method based on analysis of long-term and short-term time coupling relation between user and project
CN113689234A (en) Platform-related advertisement click rate prediction method based on deep learning
JP7487587B2 (en) Operation prediction device, model learning method thereof, and operation prediction method
CN116701772B (en) Data recommendation method and device, computer readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant