CN117113201A - Method, device, equipment and storage medium for determining portrait - Google Patents

Method, device, equipment and storage medium for determining portrait

Info

Publication number
CN117113201A
Authority
CN
China
Prior art keywords
target
portrait
user
neural network
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210518355.4A
Other languages
Chinese (zh)
Inventor
周徐
方东旭
方义成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Group Chongqing Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Chongqing Co Ltd
Application filed by China Mobile Communications Group Co Ltd and China Mobile Group Chongqing Co Ltd
Priority to CN202210518355.4A
Publication of CN117113201A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • G06N 20/20 - Ensemble learning
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/08 - Learning methods


Abstract

The embodiment of the application discloses a method, a device, equipment and a storage medium for determining a portrait. The method includes: acquiring demand information of a target operator and feature data of a different-network user; determining a target service index corresponding to the demand information according to a preset correspondence between demand information and service indexes; extracting target feature data corresponding to the target service index from the feature data; and inputting the target feature data corresponding to the target service index into a portrait model to determine the portrait of the different-network user. The portrait model is trained with the target service index, a sample portrait of each of a plurality of users of the target operator, and the feature data of each user corresponding to the target service index. The method provided by the embodiment of the application can improve the accuracy of the portrait determined for the different-network user.

Description

Method, device, equipment and storage medium for determining portrait
Technical Field
The application belongs to the technical field of user portraits, and particularly relates to a method, a device, equipment and a storage medium for determining portraits.
Background
A portrait can reflect a user's service requirements. For example, if a user's portrait indicates a high-value user, that user needs high-value services, and if an operator formulates high-value services for users whose portraits indicate high-value users, user satisfaction improves. To improve user satisfaction, an operator needs to formulate services according to users' service requirements, so the users' portraits must be determined before the services are formulated.
For an operator's different-network users (users on another operator's network), the portraits are generally estimated from the resident population of an area, the area scene, measured inter-frequency signals, and the like.
Because the indexes used in determining the portrait of a different-network user are complex, some of them interfere with the portrait, and the accuracy of the determined portrait of the different-network user is poor.
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment and a storage medium for determining a portrait, which can improve the accuracy of the portraits of different-network users.
In a first aspect, an embodiment of the present application provides a method for determining a portrait, the method including:
acquiring demand information of a target operator and feature data of a different-network user;
determining a target service index corresponding to the demand information according to a preset correspondence between demand information and service indexes;
extracting target feature data corresponding to the target service index from the feature data;
inputting the target feature data corresponding to the target service index into a portrait model to determine the portrait of the different-network user;
wherein the portrait model is trained with the target service index, a sample portrait of each of a plurality of users of the target operator, and the feature data of each user corresponding to the target service index.
In one possible implementation, the portrait model includes a first neural network and a second neural network; the first neural network includes a plurality of first sub-neural networks, each of a different type. Inputting the target feature data corresponding to the target service index into the portrait model to determine the portrait of the different-network user includes:
inputting the target feature data into each of the plurality of first sub-neural networks to obtain the portrait probability corresponding to the portrait feature output by each first sub-neural network;
inputting the portrait probabilities corresponding to the portrait features output by the first sub-neural networks into the second neural network to obtain the portrait of the different-network user.
In one possible implementation, the target service index includes a plurality of influencing factors, and the target feature data corresponding to the target service index include data corresponding to each of the influencing factors; the method further includes:
acquiring a coefficient corresponding to each first sub-neural network;
determining, according to the portrait probability and the coefficient of each first sub-neural network, the first sub-neural network with the largest first influence weight on the portrait among the plurality of first sub-neural networks, and taking that first sub-neural network as a target neural network;
acquiring the weight corresponding to each influencing factor from the target neural network;
determining at least one influencing factor that satisfies a preset condition according to the weight of each influencing factor and the data corresponding to each influencing factor.
In one possible implementation, determining, according to the portrait probability and the coefficient of each first sub-neural network, the first sub-neural network with the largest first influence weight on the portrait among the plurality of first sub-neural networks includes:
determining the first influence weight of each first sub-neural network on the portrait according to the portrait probability and the coefficient of that first sub-neural network;
determining, according to the first influence weight of each first sub-neural network on the portrait, the first sub-neural network with the largest first influence weight among the plurality of first sub-neural networks.
In one possible implementation, determining at least one influencing factor that satisfies the preset condition according to the weight of each influencing factor and the data corresponding to each influencing factor includes:
determining a second influence weight of each influencing factor on the portrait probability output by the target neural network according to the weight of the influencing factor and the data corresponding to the influencing factor;
determining at least one influencing factor that satisfies the preset condition according to the second influence weight of each influencing factor on the portrait probability output by the target neural network.
In one possible implementation, before inputting the target feature data corresponding to the target service index into the portrait model to obtain the portrait of the different-network user, the method further includes:
acquiring a sample portrait of each of a plurality of users of the target operator and the feature data of each user corresponding to the target service index;
training the portrait model with the target service index, the sample portrait of each of the plurality of users of the target operator, and the feature data of each user corresponding to the target service index.
In a second aspect, an embodiment of the present application provides an apparatus for determining a portrait, the apparatus including:
an acquisition module, configured to acquire demand information of a target operator and feature data of a different-network user;
a determining module, configured to determine a target service index corresponding to the demand information according to a preset correspondence between demand information and service indexes;
an extraction module, configured to extract target feature data corresponding to the target service index from the feature data;
the determining module being further configured to input the target feature data corresponding to the target service index into a portrait model and determine the portrait of the different-network user;
wherein the portrait model is trained with the target service index, a sample portrait of each of a plurality of users of the target operator, and the feature data of each user corresponding to the target service index.
In one possible implementation, the portrait model includes a first neural network and a second neural network; the first neural network includes a plurality of first sub-neural networks, each of a different type;
the determining module is specifically configured to:
input the target feature data into each of the plurality of first sub-neural networks to obtain the portrait probability corresponding to the portrait feature output by each first sub-neural network; and
input the portrait probabilities corresponding to the portrait features output by the first sub-neural networks into the second neural network to obtain the portrait of the different-network user.
In one possible implementation, the target service index includes a plurality of influencing factors, and the target feature data corresponding to the target service index include data corresponding to each of the influencing factors;
the acquisition module is further configured to acquire a coefficient corresponding to each first sub-neural network;
the determining module is further configured to determine, according to the portrait probability and the coefficient of each first sub-neural network, the first sub-neural network with the largest first influence weight on the portrait among the plurality of first sub-neural networks, and to take that first sub-neural network as a target neural network;
the acquisition module is further configured to acquire the weight corresponding to each influencing factor from the target neural network;
the determining module is further configured to determine at least one influencing factor that satisfies a preset condition according to the weight of each influencing factor and the data corresponding to each influencing factor.
In one possible implementation, the determining module is specifically configured to:
determine the first influence weight of each first sub-neural network on the portrait according to the portrait probability and the coefficient of that first sub-neural network; and
determine, according to the first influence weight of each first sub-neural network on the portrait, the first sub-neural network with the largest first influence weight among the plurality of first sub-neural networks.
In one possible implementation, the determining module is specifically configured to:
determine a second influence weight of each influencing factor on the portrait probability output by the target neural network according to the weight of the influencing factor and the data corresponding to the influencing factor; and
determine at least one influencing factor that satisfies the preset condition according to the second influence weight of each influencing factor on the portrait probability output by the target neural network and the data corresponding to each influencing factor.
In one possible implementation, the acquisition module is further configured to acquire a sample portrait of each of a plurality of users of the target operator and the feature data of each user corresponding to the target service index;
the apparatus further includes a training module, configured to train the portrait model with the target service index, the sample portrait of each of the plurality of users of the target operator, and the feature data of each user corresponding to the target service index.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory storing computer program instructions; the processor, when executing the computer program instructions, implements the method as in the first aspect or any of the possible implementations of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement a method as in the first aspect or any of the possible implementations of the first aspect.
In a fifth aspect, embodiments of the application provide a computer program product, instructions in which, when executed by a processor of an electronic device, cause the electronic device to perform a method as in the first aspect or any of the possible implementations of the first aspect.
The embodiment of the application provides a method, a device, equipment and a storage medium for determining a portrait. When a target operator needs the portrait of a different-network user, the demand information of the target operator and the feature data of the different-network user are acquired; a target service index corresponding to the demand information is determined according to a preset correspondence between demand information and service indexes; target feature data corresponding to the target service index are extracted from the feature data; and the target feature data corresponding to the target service index are input into a portrait model to determine the portrait of the different-network user. Because the target service index is selected based on the demand information, and the portrait model used to determine the portrait is trained with the target service index, sample portraits of the target operator's own users and the feature data of those users corresponding to the target service index, the portrait determined by the portrait model from the different-network user's target feature data can accurately reflect the attribute features of the different-network user that the target operator cares about, even when the acquired feature data of the different-network user cannot directly reflect those attribute features. The accuracy of the portrait of the different-network user is thereby improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the embodiments are briefly described below; a person skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a flow chart of a method for determining a portrait according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an apparatus for determining a portrait according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application are described in detail below. To make the objects, technical solutions and advantages of the present application clearer, the present application is described in further detail with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are intended only to illustrate the application, not to limit it. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the application by showing examples of the application.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
A portrait can reflect a user's service requirements. For example, if a user's portrait indicates a high-value user, that user needs high-value services, and if an operator formulates high-value services for users whose portraits indicate high-value users, user satisfaction improves. To improve user satisfaction, an operator needs to formulate services according to users' service requirements, so the users' portraits must be determined before the services are formulated. For an operator's different-network users (users on another operator's network), the portraits are generally estimated from the resident population of an area, the area scene, measured inter-frequency signals, and the like. Because the indexes used in determining the portrait of a different-network user are complex, some of them interfere with the portrait, and the accuracy of the determined portrait of the different-network user is poor.
The embodiment of the application provides a method, a device, equipment and a storage medium for determining a portrait. When a target operator needs the portrait of a different-network user, the demand information of the target operator and the feature data of the different-network user are acquired; a target service index corresponding to the demand information is determined according to a preset correspondence between demand information and service indexes; target feature data corresponding to the target service index are extracted from the feature data; and the target feature data corresponding to the target service index are input into a portrait model to determine the portrait of the different-network user. Because the target service index is selected based on the demand information, and the portrait model used to determine the portrait is trained with the target service index, sample portraits of the target operator's own users and the feature data of those users corresponding to the target service index, the portrait determined by the portrait model from the different-network user's target feature data can accurately reflect the attribute features of the different-network user that the target operator cares about, even when the acquired feature data of the different-network user cannot directly reflect those attribute features. The accuracy of the portrait of the different-network user is thereby improved.
The method provided by the embodiment of the application is executed by a terminal with data acquisition and data processing functions, such as a server or a computer.
A method for determining a portrait according to an embodiment of the present application is described in detail below with reference to FIG. 1.
As shown in FIG. 1, the method may include the following steps.
s110, obtaining the demand information of the target operator and the characteristic data of the heterogeneous network user.
When the portrait of the heterogeneous network user is determined for the target operator, the requirement information of the target operator and the characteristic data of the heterogeneous network user are acquired.
The requirement information of the target operator can be information such as a heterogeneous network user whose portrait is a high-value user, or a heterogeneous network user whose portrait is a perception difference.
In one example, the feature data of the different-network user are obtained from an internet company, and may include feature data corresponding to at least one of the following indexes:
the identifier of the cell occupied by the user, the area code identifier of the cell occupied by the user, the neighboring cell identifier set, the operator identifier, the physical cell identifier (PCI), the timestamp, the reported original longitude, the reported original latitude, the altitude, an indoor/outdoor indication, whether the mobile phone uses wireless broadband (WiFi), the received signal strength indication, the dynamic network type, the reference signal received power, the signal-to-interference-plus-noise ratio, the reference signal received quality, the cell identifier, the base station identifier, the neighboring reference signal received strength set, the terminal temporary network identifier, the user terminal brand, the user terminal model, the name of the connected WiFi, the media access control (MAC) address of the connected WiFi, the WiFi signal strength, the broadband operator, the package name of the application (App) in use, the synchronization signal strength of the fifth-generation mobile communication technology (5G), the 5G channel state information (CSI) signal strength, the 5G networking mode, and the 5G signal-to-interference-plus-noise ratio.
S120, determining the target service index corresponding to the demand information according to the preset correspondence between demand information and service indexes.
A correspondence between demand information and service indexes is preset; the preset correspondence is searched to obtain the target service index that corresponds to the demand information.
In one example, the demand information is to identify different-network users whose portraits are high-value users, and the corresponding target service index may include at least one of the following:
a user attribute index, a user service index, and a user perception index.
The feature data corresponding to the user attribute index are updated once a month, and the user attribute index may include indexes such as the user terminal brand, the user terminal model, the operator identifier and the broadband operator. The feature data corresponding to the user service index are updated once a week, and the user service index may include indexes such as the number of daily streaming media sampling points, the number of daily World Wide Web (Web) sampling points, the number of daily instant messaging sampling points, the number of daily peer-to-peer (P2P) sampling points, the ratio of daily WiFi sampling points and the total number of daily sampling points. The feature data corresponding to the user perception index are updated once a week, and the user perception index may include indexes such as the average daily signal strength, the average daily signal-to-interference-plus-noise ratio (SINR), the average daily proportion of weak-coverage sampling points and the average number of consecutive poor-quality occurrences.
S130, extracting the target feature data corresponding to the target service index from the feature data.
The feature data corresponding to the target service index are extracted from the feature data of the different-network user, and the extracted feature data are taken as the target feature data.
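A minimal sketch of steps S110-S130 is given below: the preset correspondence maps demand information to target service indexes, and only the feature fields covered by those indexes are extracted. All dictionary keys, index names and field names here are illustrative assumptions, not names defined by the patent.

```python
# Sketch of S120 (look up target service indexes) and S130 (extract matching features).
# Indicator and column names are hypothetical.
DEMAND_TO_INDEXES = {
    "high_value_user": ["user_attribute", "user_service", "user_perception"],
    "poor_perception_user": ["user_perception"],
}

INDEX_TO_FIELDS = {
    "user_attribute": ["terminal_brand", "terminal_model", "operator_id", "broadband_operator"],
    "user_service": ["daily_streaming_samples", "daily_web_samples", "daily_wifi_ratio"],
    "user_perception": ["avg_daily_rsrp", "avg_daily_sinr", "weak_coverage_ratio"],
}

def extract_target_features(demand: str, feature_data: dict) -> dict:
    """Return only the feature fields that belong to the target service indexes."""
    target_indexes = DEMAND_TO_INDEXES[demand]                      # S120
    wanted = {f for idx in target_indexes for f in INDEX_TO_FIELDS[idx]}
    return {k: v for k, v in feature_data.items() if k in wanted}   # S130

# Example: raw feature data of one different-network user (values are made up).
raw = {"terminal_brand": "X", "avg_daily_sinr": 12.3, "unrelated_field": 0.0}
print(extract_target_features("high_value_user", raw))
```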
S140, inputting the target feature data corresponding to the target service index into the portrait model to determine the portrait of the different-network user.
The target feature data corresponding to the target service index are input into the portrait model, and the portrait model outputs the portrait of the different-network user.
In one example, the demand information of the target operator is to identify different-network users whose portraits are high-value users; the output portrait of a different-network user is either "high-value user" or "non-high-value user", and the portraits and identifiers of the different-network users whose portraits are high-value users are provided to the target operator.
In some embodiments, in each period the target feature data corresponding to the target service index of the previous period are extracted once and input into the portrait model, yielding the portrait of the different-network user determined in the current period.
The method provided by the embodiment of the application thus outputs the portraits of different-network users periodically, can update them as the feature data of the different-network users change, and improves the timeliness and accuracy of the determined portraits.
The portrait model is trained with the target service index, a sample portrait of each of a plurality of users of the target operator, and the feature data of each user corresponding to the target service index.
When a target operator needs the portrait of a different-network user, the method provided by the embodiment of the application acquires the demand information of the target operator and the feature data of the different-network user; determines the target service index corresponding to the demand information according to the preset correspondence between demand information and service indexes; extracts the target feature data corresponding to the target service index from the feature data; and inputs the target feature data corresponding to the target service index into the portrait model to determine the portrait of the different-network user. Because the target service index is selected based on the demand information, and the portrait model is trained with the target service index, sample portraits of the target operator's own users and the feature data of those users corresponding to the target service index, the portrait determined from the different-network user's target feature data can accurately reflect the attribute features of the different-network user that the target operator cares about, even when the acquired feature data of the different-network user cannot directly reflect those attribute features. The accuracy of the portrait of the different-network user is thereby improved.
In some embodiments, the portrait model includes a first neural network and a second neural network; the first neural network includes a plurality of first sub-neural networks, each of a different type.
In this case, S140, inputting the target feature data corresponding to the target service index into the portrait model to determine the portrait of the different-network user, may include the following steps.
First, the target feature data are input into each of the plurality of first sub-neural networks to obtain the portrait probability corresponding to the portrait feature output by each first sub-neural network.
The target feature data are input into the plurality of first sub-neural networks separately, and each first sub-neural network determines the portrait probability corresponding to the portrait feature.
The portrait feature represents the probability that the different-network user satisfies the demand information, and the portrait probability is the corresponding probability value.
In one example, the demand information is to identify different-network users whose portraits are high-value users; the portrait feature is the probability that the different-network user is a high-value user, and the portrait probability is the probability value that the different-network user is a high-value user. For example, one first sub-neural network outputs a probability of 0.9 that different-network user A is a high-value user.
In one example, each first sub-neural network is a random forest model, an XGBoost model or a support vector machine model, and the first neural network includes at least two of the random forest model, the XGBoost model and the support vector machine model.
Then, the portrait probabilities corresponding to the portrait features output by the first sub-neural networks are input into the second neural network to obtain the portrait of the different-network user.
The portrait probability corresponding to the portrait feature output by each first sub-neural network is input into the second neural network, and the second neural network outputs the portrait of the different-network user.
In one example, the second neural network is a logistic regression model, which outputs the portrait of the different-network user according to the portrait probabilities corresponding to the portrait features output by the first sub-neural networks. For example, the output portrait of different-network user A is "high-value user".
Compared with a portrait determined by a single neural network, determining the portrait with multiple layers of neural networks improves the accuracy of the determined portrait.
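A sketch of this two-level portrait model follows, assuming scikit-learn and xgboost are available. The text names random forest, XGBoost and support vector machine sub-models combined by a logistic regression; the class structure, hyperparameters and library choices below are illustrative assumptions rather than the patent's concrete implementation.

```python
# Two-level portrait model: three first-level sub-models each output a portrait
# probability, and a logistic regression combines them into the final portrait.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

class PortraitModel:
    def __init__(self):
        # First-level sub-models, all of different types (as named in the text).
        self.sub_models = [
            RandomForestClassifier(n_estimators=100),
            XGBClassifier(n_estimators=100),
            SVC(probability=True),
        ]
        # Second-level model combining the sub-model portrait probabilities.
        self.combiner = LogisticRegression()

    def _sub_probs(self, X):
        # Portrait probability of the positive class from each sub-model.
        return np.column_stack([m.predict_proba(X)[:, 1] for m in self.sub_models])

    def fit(self, X, y):
        for m in self.sub_models:
            m.fit(X, y)
        self.combiner.fit(self._sub_probs(X), y)
        return self

    def predict_portrait(self, X):
        # 1 = portrait satisfying the demand (e.g. high-value user), 0 = not.
        return self.combiner.predict(self._sub_probs(X))
```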
In some embodiments, the target service index includes a plurality of influencing factors, and the target feature data corresponding to the target service index include data corresponding to each of the influencing factors. After the portrait of the different-network user is determined, the method may further include the following steps.
First, the coefficient corresponding to each first sub-neural network is acquired.
When the first neural network and the second neural network are trained and adjusted, the coefficient corresponding to each first sub-neural network is recorded after each round of training and adjustment; after the portrait model is obtained, the coefficient corresponding to each first sub-neural network after the last round of training and adjustment is extracted from the recorded coefficients.
In one example, the second neural network is a logistic regression model, which may be represented by the following equation:
P(y=1|x) = 1 / (1 + exp(-θ·x))
where y=1 represents the portrait, among the portraits of the different-network users, that satisfies the demand information. For example, when the demand information is to identify different-network users whose portraits are high-value users, y=1 represents the "high-value user" portrait. x represents the portrait probabilities corresponding to the portrait features input into the second neural network, and θ represents the set of coefficients corresponding to the plurality of first sub-neural networks. When the first neural network includes three first sub-neural networks, namely a random forest model, an XGBoost model and a support vector machine model, θ = (θ1, θ2, θ3), where θ1 is the coefficient corresponding to the random forest model, θ2 the coefficient corresponding to the XGBoost model, and θ3 the coefficient corresponding to the support vector machine model.
Second, according to the portrait probability and the coefficient of each first sub-neural network, the first sub-neural network with the largest first influence weight on the portrait among the plurality of first sub-neural networks is determined and taken as the target neural network.
The first influence weight of each first sub-neural network on the portrait is calculated from that sub-network's portrait probability and coefficient; the first influence weights are compared, and the first sub-neural network with the largest first influence weight on the portrait is taken as the target neural network.
The target neural network, having the largest first influence weight on the portrait, is the first sub-neural network with the greatest influence on the portrait.
Third, the weight corresponding to each influencing factor is acquired from the target neural network.
The influencing factors are the indexes within the target service index, and the weight of each index affects the portrait probability corresponding to the portrait feature output by the target neural network. To determine the influencing factors with a large influence on the portrait, the weight corresponding to each influencing factor is extracted from the target neural network.
Finally, at least one influencing factor that satisfies the preset condition is determined according to the weight of each influencing factor and the data corresponding to each influencing factor.
The preset condition may be ranking in the top N by degree of influence, where N is a positive integer.
The degree of influence of each influencing factor is determined from its weight and its corresponding data, and the influencing factors are sorted by degree of influence from largest to smallest to obtain the top-N influencing factors.
The method provided by the embodiment of the application first selects, from the plurality of first sub-neural networks, the target neural network with the greatest influence on the portrait based on the coefficients and portrait probabilities of the first sub-neural networks; it then selects, from the plurality of influencing factors, the influencing factors with a larger influence on the portrait probability, that is, the influencing factors with a larger influence on the portrait, based on the weight of each influencing factor in the target neural network and the data corresponding to each influencing factor. This provides a basis for optimizing the portrait model, and the accuracy of the portrait model is further improved after it is optimized based on the influencing factors with a larger influence on the portrait.
In some embodiments, determining the first sub-neural network with the largest first influence weight on the portrait among the plurality of first sub-neural networks according to the portrait probability and the coefficient of each first sub-neural network may include the following steps.
First, the first influence weight of each first sub-neural network on the portrait is determined according to the portrait probability and the coefficient of that first sub-neural network.
For each first sub-neural network, the product of its portrait probability and its coefficient is calculated and taken as the first influence weight of that first sub-neural network on the portrait.
Then, the first sub-neural network with the largest first influence weight among the plurality of first sub-neural networks is determined according to the first influence weight of each first sub-neural network on the portrait.
The first influence weights of the plurality of first sub-neural networks on the portrait are compared, and the first sub-neural network with the largest first influence weight is determined.
The first influence weight represents the degree of influence of a first sub-neural network on the portrait.
The method provided by the embodiment of the application obtains the first sub-neural network with the greatest influence on the portrait among the plurality of first sub-neural networks, which provides a basis for optimizing the portrait model; the accuracy of the portrait model is further improved after it is optimized based on the first sub-neural network with a large influence on the portrait.
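A small sketch of this step, under the reading above: the first influence weight is the product of a sub-model's portrait probability and its coefficient in the second-level logistic regression, and the sub-model with the largest weight is taken as the target neural network. Function and variable names are illustrative assumptions.

```python
# Select the target (most influential) first sub-model.
import numpy as np

def select_target_sub_model(portrait_probs, coefficients):
    """portrait_probs: portrait probability output by each first sub-model.
    coefficients: the corresponding logistic-regression coefficients (theta)."""
    first_influence = np.asarray(portrait_probs) * np.asarray(coefficients)
    return int(np.argmax(first_influence))  # index of the target sub-model

# Example with three sub-models (random forest, XGBoost, SVM); values are made up.
idx = select_target_sub_model([0.9, 0.7, 0.6], [0.8, 1.2, 0.5])
print(idx)
```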
In some embodiments, determining at least one influencing factor that satisfies the preset condition according to the weight of each influencing factor and the data corresponding to each influencing factor may include the following steps.
First, the second influence weight of each influencing factor on the portrait probability output by the target neural network is determined according to the weight of the influencing factor and the data corresponding to the influencing factor.
For each influencing factor, the absolute value of the product of its weight and its corresponding data is calculated and taken as the second influence weight of that influencing factor on the portrait probability output by the target neural network.
The second influence weight represents the degree of influence of an influencing factor on the portrait probability output by the target neural network.
Then, at least one influencing factor that satisfies the preset condition is determined according to the second influence weight of each influencing factor on the portrait probability output by the target neural network.
The preset condition may be ranking in the top N by degree of influence, where N is a positive integer.
The influencing factors are sorted by second influence weight from largest to smallest to obtain the top-N influencing factors.
The method provided by the embodiment of the application obtains the influencing factors with a larger influence on the portrait probability output by the target neural network, that is, the influencing factors with a larger influence on the portrait, which provides a basis for optimizing the portrait model; the accuracy of the portrait model is further improved after it is optimized based on the influencing factors with a larger influence on the portrait.
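A sketch of the factor-ranking step, under the reading above: the second influence weight is the absolute value of each factor's weight in the target sub-model multiplied by that factor's data, and the top-N factors are kept. Names are illustrative assumptions.

```python
# Rank influencing factors by second influence weight and keep the top N.
import numpy as np

def top_n_factors(factor_weights, factor_values, n):
    """factor_weights: weight of each influencing factor in the target sub-model.
    factor_values: the user's data for each factor. Returns indices of the top N."""
    second_influence = np.abs(np.asarray(factor_weights) * np.asarray(factor_values))
    order = np.argsort(second_influence)[::-1]   # sort from largest to smallest
    return order[:n].tolist()

# Example: keep the two factors with the greatest influence on the portrait.
print(top_n_factors([0.4, -1.1, 0.2], [10.0, 1.5, 30.0], n=2))
```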
In some embodiments, before S140, inputting the target feature data corresponding to the target service index into the portrait model to obtain the portrait of the different-network user, the method may further include the following steps.
First, a sample portrait of each of a plurality of users of the target operator and the feature data of each user corresponding to the target service index are acquired.
The sample portrait of each user is determined according to the service data of each of the plurality of users of the target operator, and the feature data of each user corresponding to the target service index are extracted.
The service data are data that directly reflect the attribute features of the user corresponding to the demand information.
In one example, where the demand information of the target operator is to identify different-network users whose portraits are high-value users, the service data include the user's package information, which directly reflects the value of the user.
In some embodiments, the service data of a user correspond to that user's feature data for the target service index; the sample portrait of each user is determined according to the service data of each of the plurality of users of the target operator, so that the feature data of the user corresponding to the target service index correspond to the sample portrait, and one user's feature data for the target service index together with the sample portrait form one sample.
In some embodiments, the plurality of users include users who satisfy the demand information and users who do not; the sample portrait and the corresponding target-service-index feature data of a user who satisfies the demand information form a positive sample, and those of a user who does not satisfy the demand information form a negative sample.
In one example, all positive samples and part of the negative samples are extracted for training the portrait model. To obtain a better model, the balance between positive and negative samples needs to be optimized:
The optimization target is the positive sample ratio P = m / (m + n), where m is the number of positive samples and n is the number of extracted negative samples.
If P < Q, where Q is a preset threshold, then since the positive samples have already been extracted in full, only up-sampling of the positive samples can be performed: some positive samples are synthesized, and the number of synthesized samples is ceiling(Q×n/(1-Q)) - m, where ceiling denotes rounding up.
The strategy for synthesizing positive samples is, for each minority-class sample a, to randomly select one sample b from its nearest neighbors, and then to randomly select one point on the line segment between a and b as a newly synthesized minority-class sample.
If P > Q, additional negative samples are extracted from the negative samples that have not yet been extracted; the number of negative samples extracted is ceiling((m - Q×m)/Q).
Then, the portrait model is obtained by training with the target service index, the sample portrait of each of the plurality of users of the target operator, and the feature data of each user corresponding to the target service index.
The target service index and each user's feature data corresponding to the target service index are input into the portrait model to be trained to obtain a first portrait; the first portrait of each user is compared with the sample portrait, and the accuracy of the portrait model to be trained is calculated. When the accuracy does not reach the preset accuracy, the portrait model to be trained is adjusted; when the accuracy reaches the preset accuracy, the portrait model is obtained.
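A schematic sketch of this training loop, assuming the PortraitModel sketch given earlier: the model's outputs are compared with the sample portraits and training is repeated with adjustments until the accuracy reaches the preset value. The adjustment step (make_model) and the round limit are assumptions; the patent does not specify how the model is adjusted.

```python
# Train-and-adjust loop against a preset accuracy threshold.
from sklearn.metrics import accuracy_score

def train_portrait_model(make_model, X, sample_portraits, target_accuracy, max_rounds=10):
    for round_idx in range(max_rounds):
        model = make_model(round_idx)              # e.g. adjusted hyperparameters per round
        model.fit(X, sample_portraits)
        first_portraits = model.predict_portrait(X)
        acc = accuracy_score(sample_portraits, first_portraits)
        if acc >= target_accuracy:                 # accuracy meets the preset value
            return model
    return model                                   # best effort after max_rounds
```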
According to the method provided by the embodiment of the application, the portrait model is trained with sample portraits of the target operator's own users and the feature data of those users corresponding to the target service index. Even when the acquired feature data of the different-network user cannot directly reflect the attribute features of the different-network user, the portrait determined by the portrait model from the different-network user's target feature data can accurately reflect the attribute features of the different-network user that the target operator cares about, so the accuracy of the portrait of the different-network user is improved.
The embodiment of the application also provides an apparatus for determining a portrait. As shown in FIG. 2, the apparatus 200 may include an acquisition module 210, a determining module 220 and an extraction module 230.
The acquisition module 210 is configured to acquire the demand information of the target operator and the feature data of the different-network user.
The determining module 220 is configured to determine the target service index corresponding to the demand information according to the preset correspondence between demand information and service indexes.
The extraction module 230 is configured to extract the target feature data corresponding to the target service index from the feature data.
The determining module 220 is further configured to input the target feature data corresponding to the target service index into the portrait model and determine the portrait of the different-network user.
The portrait model is trained with the target service index, a sample portrait of each of a plurality of users of the target operator, and the feature data of each user corresponding to the target service index.
When the target operator needs the portrait of a different-network user, the apparatus provided by the embodiment of the application acquires the demand information of the target operator and the feature data of the different-network user; determines the target service index corresponding to the demand information according to the preset correspondence between demand information and service indexes; extracts the target feature data corresponding to the target service index from the feature data; and inputs the target feature data corresponding to the target service index into the portrait model to determine the portrait of the different-network user. Because the target service index is selected based on the demand information, and the portrait model is trained with the target service index, sample portraits of the target operator's own users and the feature data of those users corresponding to the target service index, the portrait determined from the different-network user's target feature data can accurately reflect the attribute features of the different-network user that the target operator cares about, even when the acquired feature data of the different-network user cannot directly reflect those attribute features. The accuracy of the portrait of the different-network user is thereby improved.
In some embodiments, the portrait model includes a first neural network and a second neural network; the first neural network includes a plurality of first sub-neural networks, each of a different type.
The determining module 220 may specifically be configured to:
input the target feature data into each of the plurality of first sub-neural networks to obtain the portrait probability corresponding to the portrait feature output by each first sub-neural network; and
input the portrait probabilities corresponding to the portrait features output by the first sub-neural networks into the second neural network to obtain the portrait of the different-network user.
The apparatus provided by the embodiment of the application uses the first neural network and the second neural network to determine the portrait; compared with a portrait determined by a single neural network, determining the portrait with multiple layers of neural networks is more accurate, thereby improving the accuracy of the determined portrait.
In some embodiments, the target service index includes a plurality of influencing factors, and the target feature data corresponding to the target service index include data corresponding to each of the influencing factors.
The acquisition module 210 is further configured to acquire the coefficient corresponding to each first sub-neural network.
The determining module 220 is further configured to determine, according to the portrait probability and the coefficient of each first sub-neural network, the first sub-neural network with the largest first influence weight on the portrait among the plurality of first sub-neural networks, and to take that first sub-neural network as the target neural network.
The acquisition module 210 is further configured to acquire the weight corresponding to each influencing factor from the target neural network.
The determining module 220 is further configured to determine at least one influencing factor that satisfies the preset condition according to the weight of each influencing factor and the data corresponding to each influencing factor.
The apparatus provided by the embodiment of the application first selects, from the plurality of first sub-neural networks, the target neural network with the greatest influence on the portrait based on the coefficients and portrait probabilities of the first sub-neural networks; it then selects, from the plurality of influencing factors, the influencing factors with a larger influence on the portrait probability, that is, the influencing factors with a larger influence on the portrait, based on the weight of each influencing factor in the target neural network and the data corresponding to each influencing factor. This provides a basis for optimizing the portrait model, and the accuracy of the portrait model is further improved after it is optimized based on the influencing factors with a larger influence on the portrait.
In some embodiments, the determining module 220 may further specifically be configured to:
determine the first influence weight of each first sub-neural network on the portrait according to the portrait probability and the coefficient of that first sub-neural network; and
determine, according to the first influence weight of each first sub-neural network on the portrait, the first sub-neural network with the largest first influence weight among the plurality of first sub-neural networks.
The apparatus provided by the embodiment of the application obtains the first sub-neural network with the greatest influence on the portrait among the plurality of first sub-neural networks, which provides a basis for optimizing the portrait model; the accuracy of the portrait model is further improved after it is optimized based on the first sub-neural network with a large influence on the portrait.
In some embodiments, the determining module 220 may further specifically be configured to:
determine the second influence weight of each influencing factor on the portrait probability output by the target neural network according to the weight of the influencing factor and the data corresponding to the influencing factor; and
determine at least one influencing factor that satisfies the preset condition according to the second influence weight of each influencing factor on the portrait probability output by the target neural network and the data corresponding to each influencing factor.
The apparatus provided by the embodiment of the application obtains the influencing factors with a larger influence on the portrait probability output by the target neural network, that is, the influencing factors with a larger influence on the portrait, which provides a basis for optimizing the portrait model; the accuracy of the portrait model is further improved after it is optimized based on the influencing factors with a larger influence on the portrait.
In some embodiments, the acquisition module 210 is further configured to acquire a sample portrait of each of the plurality of users of the target operator and the feature data of each user corresponding to the target service index.
The apparatus 200 may further include a training module 240.
The training module 240 is configured to train the portrait model with the target service index, the sample portrait of each of the plurality of users of the target operator, and the feature data of each user corresponding to the target service index.
The apparatus provided by the embodiment of the application trains the portrait model with sample portraits of the target operator's own users and the feature data of those users corresponding to the target service index. Even when the acquired feature data of the different-network user cannot directly reflect the attribute features of the different-network user, the portrait determined by the portrait model from the different-network user's target feature data can accurately reflect the attribute features of the different-network user that the target operator cares about, and the accuracy of the portrait of the different-network user is improved.
The apparatus for determining a portrait provided by the embodiment of the application performs each step of the method shown in FIG. 1 and can achieve the technical effect of improving the accuracy of the portraits of different-network users; for brevity of description, details are not repeated here.
FIG. 3 shows a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
The electronic device may include a processor 301 and a memory 302 storing computer program instructions.
In particular, the processor 301 may include a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present application.
Memory 302 may include mass storage for data or instructions. By way of example, and not limitation, memory 302 may comprise a Hard Disk Drive (HDD), floppy Disk Drive, flash memory, optical Disk, magneto-optical Disk, magnetic tape, or universal serial bus (Universal Serial Bus, USB) Drive, or a combination of two or more of the foregoing. Memory 302 may include removable or non-removable (or fixed) media, where appropriate. Memory 302 may be internal or external to the integrated gateway disaster recovery device, where appropriate. In a particular embodiment, the memory 302 is a non-volatile solid-state memory.
The memory may include Read Only Memory (ROM), random Access Memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions and when the software is executed (e.g., by one or more processors) it is operable to perform the operations described with reference to methods in accordance with aspects of the present disclosure.
The processor 301 implements the method of determining a portrait in any of the above embodiments by reading and executing the computer program instructions stored in the memory 302.
In one example, the electronic device may also include a communication interface 303 and a bus 310. As shown in fig. 3, the processor 301, the memory 302, and the communication interface 303 are connected to each other by a bus 310 and perform communication with each other.
The communication interface 303 is mainly used to implement communication between each module, device, unit and/or apparatus in the embodiment of the present application.
Bus 310 includes hardware, software, or both that couple the components of the electronic device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), or another suitable bus, or a combination of two or more of these. Bus 310 may include one or more buses, where appropriate. Although embodiments of the application have been described and illustrated with respect to a particular bus, the application contemplates any suitable bus or interconnect.
The electronic device may perform the method of determining a portrait in the embodiments of the present application, thereby implementing the method of determining a portrait described in connection with fig. 1.
In addition, in combination with the method of determining a portrait in the above embodiments, an embodiment of the present application may provide a computer readable storage medium. The computer readable storage medium stores computer program instructions; when executed by a processor, the computer program instructions implement the method of determining a portrait in any of the above embodiments.
In combination with the method of determining a portrait in the above embodiments, an embodiment of the present application may provide a computer program product. When instructions in the computer program product are executed by a processor of an electronic device, the method of determining a portrait in any of the above embodiments is implemented.
It should be understood that the application is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and shown, and those skilled in the art can make various changes, modifications and additions, or change the order between steps, after appreciating the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present application, and they should be included in the scope of the present application.

Claims (10)

1. A method of determining a portrait, the method comprising:
acquiring demand information of a target operator and feature data of a different network user;
determining a target service index corresponding to the demand information according to a preset correspondence between demand information and service indexes;
extracting target feature data corresponding to the target service index from the feature data;
inputting the target feature data corresponding to the target service index into a portrait model, and determining a portrait of the different network user;
wherein the portrait model is trained by using the target service index, a sample portrait of each user among a plurality of users of the target operator, and feature data of each user corresponding to the target service index.
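Purely as an illustration of the flow recited in claim 1, the sketch below maps the operator's demand information to a target service index through a preset correspondence, extracts the corresponding target feature data, and feeds it to a trained portrait model. The demand labels, index names, and feature column names are hypothetical and not drawn from the application.

# Hypothetical sketch of the claimed flow; all names below are illustrative.
INDEX_BY_DEMAND = {
    "win_back": "churn_propensity_index",
    "broadband_upsell": "broadband_interest_index",
}

FEATURES_BY_INDEX = {
    "churn_propensity_index": ["monthly_fee", "call_minutes", "complaint_count"],
    "broadband_interest_index": ["home_wifi_hours", "video_traffic_gb"],
}

def determine_portrait(demand_info, feature_table, portrait_model):
    # feature_table: tabular feature data of the different network user,
    # e.g. a pandas.DataFrame indexable by a list of column names.
    # 1. determine the target service index from the preset correspondence
    target_index = INDEX_BY_DEMAND[demand_info]
    # 2. extract the target feature data corresponding to that index
    target_features = feature_table[FEATURES_BY_INDEX[target_index]]
    # 3. input the target feature data into the portrait model
    return portrait_model.predict(target_features)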
2. The method of claim 1, wherein the portrait model comprises a first neural network and a second neural network; the first neural network comprises a plurality of first sub-neural networks, and the first sub-neural networks are of mutually different types; and the inputting the target feature data corresponding to the target service index into the portrait model and determining the portrait of the different network user comprises:
inputting the target feature data into the plurality of first sub-neural networks respectively, to obtain a portrait probability corresponding to the portrait feature output by each first sub-neural network;
and inputting the portrait probability corresponding to the portrait feature output by each first sub-neural network into the second neural network, to obtain the portrait of the different network user.
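The two-stage structure recited in claim 2 could look, in a simplified and purely illustrative form, like the sketch below: each first sub-neural network outputs a portrait probability, and the concatenated probabilities are passed to the second neural network. The assumption that the models expose scikit-learn-style predict_proba and predict methods belongs to this sketch, not to the application.

import numpy as np

def ensemble_portrait(first_sub_networks, second_network, target_features):
    # Step 1: each first sub-neural network outputs portrait probabilities
    # for the target feature data.
    probabilities = [net.predict_proba(target_features) for net in first_sub_networks]
    # Step 2: the probabilities are concatenated and fed to the second
    # neural network, which outputs the portrait of the different network user.
    stacked = np.hstack(probabilities)
    return second_network.predict(stacked)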
3. The method of claim 2, wherein the target service index comprises a plurality of influence factors, and the target feature data corresponding to the target service index comprises data corresponding to each influence factor of the plurality of influence factors; and the method further comprises:
acquiring a coefficient corresponding to each first sub-neural network;
determining, from the plurality of first sub-neural networks, a first sub-neural network with the largest first influence weight on the portrait according to the portrait probability and the coefficient of each first sub-neural network, and taking the first sub-neural network with the largest first influence weight on the portrait as a target neural network;
acquiring a weight corresponding to each influence factor from the target neural network;
and determining at least one influence factor meeting a preset condition according to the weight of each influence factor and the data corresponding to each influence factor.
4. The method of claim 3, wherein the determining, from the plurality of first sub-neural networks, a first sub-neural network with the largest first influence weight on the portrait according to the portrait probability and the coefficient of each first sub-neural network comprises:
determining a first influence weight of each first sub-neural network on the portrait according to the portrait probability and the coefficient of each first sub-neural network;
and determining, according to the first influence weight of each first sub-neural network on the portrait, the first sub-neural network with the largest first influence weight among the plurality of first sub-neural networks.
5. The method of claim 3, wherein the determining at least one influence factor meeting a preset condition according to the weight of each influence factor and the data corresponding to each influence factor comprises:
determining a second influence weight of each influence factor on the portrait probability output by the target neural network according to the weight of each influence factor and the data corresponding to each influence factor;
and determining at least one influence factor meeting the preset condition according to the second influence weight of each influence factor on the portrait probability output by the target neural network.
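Claims 3 to 5 describe how the most influential factors are identified. A compact, purely hypothetical reading of those steps is sketched below; the way the coefficients and per-factor weights are supplied, and the threshold used as the preset condition, are assumptions of the sketch rather than details given by the application.

import numpy as np

def key_influence_factors(sub_probabilities, sub_coefficients,
                          factor_weights_per_network, factor_values,
                          factor_names, threshold=0.1):
    # First influence weight of each first sub-neural network on the portrait:
    # its portrait probability multiplied by its coefficient (claim 4).
    first_weights = np.asarray(sub_probabilities) * np.asarray(sub_coefficients)
    target_index = int(np.argmax(first_weights))  # the target neural network
    # Second influence weight of each influence factor on the portrait
    # probability output by the target neural network: the factor's weight in
    # that network multiplied by the factor's data (claim 5).
    weights = np.asarray(factor_weights_per_network[target_index])
    second_weights = weights * np.asarray(factor_values)
    # Keep the influence factors meeting the preset condition, here
    # (illustratively) a minimum share of the total absolute contribution.
    shares = np.abs(second_weights) / np.abs(second_weights).sum()
    return [name for name, share in zip(factor_names, shares) if share >= threshold]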
6. The method of claim 1, wherein before the inputting the target feature data corresponding to the target service index into the portrait model and determining the portrait of the different network user, the method further comprises:
acquiring a sample portrait of each user among a plurality of users of the target operator, and feature data of each user corresponding to the target service index;
and training the portrait model by using the target service index, the sample portrait of each user among the plurality of users of the target operator, and the feature data of each user corresponding to the target service index.
7. An apparatus for determining a portrait, the apparatus comprising:
an acquisition module, configured to acquire demand information of a target operator and feature data of a different network user;
a determining module, configured to determine a target service index corresponding to the demand information according to a preset correspondence between demand information and service indexes;
an extraction module, configured to extract target feature data corresponding to the target service index from the feature data;
wherein the determining module is further configured to input the target feature data corresponding to the target service index into a portrait model and determine a portrait of the different network user;
and the portrait model is trained by using the target service index, a sample portrait of each user among a plurality of users of the target operator, and feature data of each user corresponding to the target service index.
8. An electronic device, comprising: a processor and a memory storing computer program instructions; wherein the processor, when executing the computer program instructions, implements the method of determining a portrait according to any one of claims 1-6.
9. A computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method of determining a portrait according to any one of claims 1-6.
10. A computer program product, wherein instructions in the computer program product, when executed by a processor of an electronic device, cause the electronic device to perform the method of determining a portrait according to any one of claims 1-6.
CN202210518355.4A 2022-05-13 2022-05-13 Method, device, equipment and storage medium for determining portrait Pending CN117113201A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210518355.4A CN117113201A (en) 2022-05-13 2022-05-13 Method, device, equipment and storage medium for determining portrait

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210518355.4A CN117113201A (en) 2022-05-13 2022-05-13 Method, device, equipment and storage medium for determining portrait

Publications (1)

Publication Number Publication Date
CN117113201A true CN117113201A (en) 2023-11-24

Family

ID=88811535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210518355.4A Pending CN117113201A (en) 2022-05-13 2022-05-13 Method, device, equipment and storage medium for determining portrait

Country Status (1)

Country Link
CN (1) CN117113201A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination