WO2022048515A1 - Evaluation implementation method, apparatus and storage medium - Google Patents

Evaluation implementation method, apparatus and storage medium

Info

Publication number
WO2022048515A1
Authority
WO
WIPO (PCT)
Prior art keywords
questionnaire
user
target
topic
feature
Prior art date
Application number
PCT/CN2021/115330
Other languages
English (en)
French (fr)
Inventor
程印超
郭叶
颜红燕
Original Assignee
中国移动通信有限公司研究院
中国移动通信集团有限公司
Priority date
Filing date
Publication date
Application filed by 中国移动通信有限公司研究院 (China Mobile Research Institute) and 中国移动通信集团有限公司 (China Mobile Communications Group Co., Ltd.)
Publication of WO2022048515A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0201: Market modelling; Market analysis; Collecting market data
    • G06Q 10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/0639: Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q 10/06393: Score-carding, benchmarking or key performance indicator [KPI] analysis

Definitions

  • the present application relates to the field of business support, and in particular, to an evaluation implementation method, device and storage medium.
  • the main purpose of the present application is to provide an evaluation implementation method, device and storage medium.
  • the embodiment of the present application provides a method for realizing evaluation, which is applied to a server, and the method includes:
  • the first questionnaire is determined based on the first label set; the first label set includes: user preference labels; the user preference labels are determined based on historical user data;
  • the parameters of the target feature are determined based on the environmental state and the behavioral state;
  • a response result for the first questionnaire sent by the first device is received.
  • the determining of the first questionnaire based on the first label set includes:
  • the recommendation degree of each question is determined based on the correlation between the user preference labels in the first label set and each question in the questionnaire question bank; the first target question is determined based on the recommendation degree of each question; the questionnaire question bank includes at least one question;
  • for each candidate question, a first comprehensive score is determined based on its recommendation degree and its relevance to the first target question; the next target question is determined based on the first comprehensive score corresponding to each candidate question;
  • This cycle is repeated until a preset number of target questions is determined, and the first questionnaire is obtained based on the preset number of target questions.
  • the probability of at least one candidate device is obtained; the probability represents the possibility that a response result can be obtained by performing questionnaire evaluation on the corresponding candidate device;
  • a target device that meets the preset probability requirement is determined; meeting the preset probability requirement means that the probability exceeds a preset threshold.
  • the number of the target features is at least one
  • a probability for at least one candidate device is obtained.
  • the first time period represents the time when the reply result is generated
  • the response result is adjusted to obtain a target response result.
  • the user behavior data includes at least one of the following: voice, body movement, and facial expression;
  • the determining the reliability of the response result based on the user behavior data includes:
  • the credibility is determined according to the at least one credibility value and the weight corresponding to each user behavior data.
  • the user behavior data includes: user behavior sub-data for each topic;
  • the determining the reliability of the response result based on the user behavior data includes:
  • adjusting the response result based on the reliability includes:
  • the score for each item in the reply result is adjusted.
  • the embodiment of the present application provides an evaluation implementation method, which is applied to a device, and the method includes:
  • the first questionnaire is determined based on a first tag set;
  • the first tag set includes: a user preference tag;
  • the user preference tag is determined based on historical user data;
  • the reply result is sent to the server.
  • the collected parameters of the at least one first feature are sent to the server.
  • the method also includes:
  • the corresponding user behavior data includes at least one of the following: voice, body movement, and facial expression; the user behavior data is used to adjust the response result;
  • the collection of user behavior data includes:
  • the sending the user behavior data to the server includes: sending the user behavior sub-data and corresponding topics collected in each collection time period in the at least one collection time period to the server.
  • An embodiment of the present application provides an evaluation implementation device, which is applied to a server, and the device includes:
  • a first processing module configured to determine a first questionnaire based on a first label set; the first label set includes: a user preference label; the user preference label is determined based on historical user data;
  • the parameters of the target feature are determined based on the environmental state and the behavioral state;
  • a first communication module configured to send the first questionnaire to a first device; the first questionnaire is presented by the first device;
  • the first processing module is configured to determine the recommendation degree of each question based on the correlation between the user preference tags in the first tag set and each question in the questionnaire question bank; the first target question is determined based on the recommendation degree of each question; the questionnaire question bank includes at least one question;
  • for each candidate question, a first comprehensive score is determined based on its recommendation degree and its relevance to the first target question; the next target question is determined based on the first comprehensive score corresponding to each candidate question;
  • This cycle is repeated until a preset number of target questions is determined, and the first questionnaire is obtained based on the preset number of target questions.
  • the first processing module is configured to obtain parameters of at least one first feature;
  • the probability of at least one candidate device is obtained; the probability represents the possibility that a response result can be obtained by performing questionnaire evaluation on the corresponding candidate device;
  • a target device that meets the preset probability requirement is determined; meeting the preset probability requirement means that the probability exceeds a preset threshold.
  • the number of the target features is at least one
  • the first processing module is configured to calculate, for each target feature in the at least one target feature, the product of the influence factor corresponding to that target feature and the parameter of that target feature;
  • a probability for at least one candidate device is obtained.
  • the first communication module is further configured to acquire user behavior data within a first time period; the first time period represents the time when the reply result is generated;
  • the first processing module is further configured to determine the reliability of the reply result based on the user behavior data
  • the response result is adjusted to obtain a target response result.
  • the user behavior data includes at least one of the following: voice, body movement, and facial expression;
  • the first processing module is further configured to use a preset behavior analysis model to analyze at least one of the voice, the body movement, and the expression to obtain at least one credible value;
  • the user behavior data includes: user behavior sub-data for each topic;
  • the first processing module is further configured to analyze the user behavior sub-data corresponding to each topic by using a preset behavior analysis model, and obtain at least one credible value corresponding to each topic;
  • the first processing module is further configured to adjust the score for each question in the reply result according to the reliability corresponding to each question.
  • the embodiment of the present application provides an evaluation implementation apparatus, which is applied to a device, and the apparatus includes:
  • a second communication module configured to receive and present a first questionnaire; the first questionnaire is determined based on a first tag set; the first tag set includes: user preference tags; the user preference tags are determined based on historical user data;
  • a second processing module configured to obtain a response result for the first questionnaire
  • the second communication module is further configured to send the reply result to the server.
  • the second communication module is further configured to send the collected parameter of the at least one first feature to the server.
  • the collection module is further configured to collect user behavior data;
  • the corresponding user behavior data includes at least one of the following: voice, body movement, and facial expression; the user behavior data is used to adjust the response result;
  • the second communication module is further configured to send the user behavior data to a server.
  • the collection module is configured to determine a topic corresponding to at least one collection time period, and user behavior sub-data collected in each collection time period in the at least one collection time period;
  • the second communication module is configured to send the user behavior sub-data and corresponding topics collected in each collection time period of the at least one collection time period to the server.
  • An embodiment of the present application provides a device for implementing evaluation, including a memory, a processor, and a computer program stored in the memory and running on the processor.
  • when the processor executes the program, the steps of any one of the methods executed on the server side are implemented; or the steps of any one of the methods executed on the device side are implemented.
  • Embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of any one of the methods executed on the server side are implemented, or the steps of any one of the methods executed on the device side are implemented.
  • An evaluation implementation method, device, and storage medium are provided by an embodiment of the present application. The method includes: a server determines a first questionnaire based on a first label set; the first label set includes: a user preference label; the user preference label is determined based on historical user data; parameters of the target feature are acquired, and the first device is determined based on the parameters of the target feature; the parameters of the target feature are determined based on the environmental state and the behavioral state; the first questionnaire is sent to the first device; the first questionnaire is presented by the first device; a response result for the first questionnaire sent by the first device is received; in this way, the user participation rate is improved and the evaluation quality is improved;
  • another evaluation implementation method, device, and storage medium provided by the embodiments of the present application include: receiving and presenting a first questionnaire by a device; the first questionnaire is determined based on a first label set; the first label set includes: user preference labels; the user preference labels are determined based on historical user data; a response result for the first questionnaire is obtained; and the response result is sent to the server;
  • FIG. 1 is a schematic flowchart of an evaluation implementation method provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of another evaluation implementation method provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of still another evaluation implementation method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a collaborative cross-analysis of response results and multi-dimensional data provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an evaluation implementation device provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of another evaluation implementation device provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of still another evaluation implementation device provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of still another evaluation implementation apparatus provided by an embodiment of the present application.
  • User survey evaluation is a method of data collection and aggregation using scientific methods to provide basic data support for enterprises to monitor service status, business quality, overall satisfaction, and market analysis.
  • the current mainstream survey method mainly pushes the questionnaire to the user's mobile phone or computer by email or text message for invitation and evaluation, treating every user the same. Because the user's own condition and the surrounding environment are not considered, users tend to answer perfunctorily. In addition, the user information collected by current general survey methods is mainly text information fed back by the user; this single type of information, combined with the fact that the survey relies on the user's subjective feedback, can distort the user's true attitude.
  • the server determines the first questionnaire based on the first label set; the first label set includes: user preference labels; the user preference labels are determined based on historical user data; the parameters of the target feature are acquired, and the first device is determined based on the parameters of the target feature; the parameters of the target feature are determined based on the environmental state and the behavioral state; the first questionnaire is sent to the first device; the first questionnaire is presented by the first device; a response result for the first questionnaire sent by the first device is received. Correspondingly, the device receives and presents the first questionnaire; the first questionnaire is determined based on a first label set; the first label set includes: a user preference label; the user preference label is determined based on historical user data; a response result to the first questionnaire is obtained; and the response result is sent to the server.
  • FIG. 1 is a schematic flowchart of an evaluation implementation method provided by an embodiment of the present application; as shown in FIG. 1 , the method is applied to a server, and the method includes:
  • Step 101 Determine a first questionnaire based on a first label set; the first label set includes: a user preference label; the user preference label is determined based on historical user data;
  • Step 102 Obtain parameters of the target feature, and determine the first device based on the parameters of the target feature; the parameters of the target feature are determined based on the environmental state and the behavioral state;
  • Step 103 Send the first questionnaire to a first device; the first questionnaire is presented by the first device;
  • Step 104 Receive a response result for the first questionnaire sent by the first device.
  • the method further includes determining a first set of tags for each user.
  • the determining of the first label set for each user includes:
  • the historical user data includes at least one of the following: basic information, communication consumption, device information, service usage behavior, Internet behavior, location, business complaints, and historical survey evaluation data;
  • the first label set includes: at least one user preference label.
  • Min-max normalization refers to the linear transformation of the original data
  • minA and maxA denote the minimum and maximum values of attribute A, respectively; an original value x of A is mapped to the interval [0, 1] by min-max normalization:
  • new value = (original value - minimum value) / (maximum value - minimum value), i.e. x' = (x - minA) / (maxA - minA).
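  • As a non-authoritative illustration, the min-max normalization above can be sketched as follows (the attribute and the sample values are hypothetical):

```python
def min_max_normalize(values):
    """Map each original value x of an attribute A to [0, 1] via
    x' = (x - minA) / (maxA - minA)."""
    min_a, max_a = min(values), max(values)
    if max_a == min_a:  # constant attribute: avoid division by zero
        return [0.0 for _ in values]
    return [(x - min_a) / (max_a - min_a) for x in values]

# e.g. monthly communication consumption (hypothetical sample, in yuan)
print(min_max_normalize([20, 50, 80, 200]))  # first value -> 0.0, last -> 1.0
```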
  • the first label set corresponding to each user is determined, including:
  • At least one user preference tag corresponding to each user is determined, and based on the determined at least one user preference tag, a first tag set corresponding to the user is determined.
  • the standardized data includes the following information after standardized processing: basic information, communication consumption, equipment information, business usage behavior, online behavior, location, business complaints and historical survey and evaluation data;
  • Basic information, which can include: gender, age, region, occupation, etc.;
  • Device information, which can include: smart TVs, smart speakers, smart refrigerators, smart projectors, mobile phones, computers, tablet computers (Pad, Portable Android device), smart photo albums, smart wearable devices, etc.;
  • Business usage behavior, including: which services have been activated (such as 4G service, 5G service, call service, VR service, etc.);
  • Location, which can include: the locations of different devices, the location of the base station that provides signal for the device, and the cell served by that base station; from this it can further be determined whether the device is indoors or outdoors, and, if indoors, whether it is in the bedroom, living room, etc.; this can be realized based on the positioning function of the device and is not detailed here;
  • Business complaints, which can include: services that users have complained about (such as complaints of dissatisfaction with 4G network quality);
  • User preference tags can be determined based on the above content, for example: 4G users, virtual reality (VR) users, 4G quality dissatisfaction, 4G self-pay, wireless Internet (WiFi) users, etc.
  • the determining the first questionnaire based on the first label set includes:
  • the recommendation degree of each question is determined, and the first target question is determined based on the recommendation degree of each question;
  • the questionnaire question bank includes at least one question;
  • for each candidate question, a first comprehensive score is determined based on its recommendation degree and its relevance to the first target question; the next target question is determined based on the first comprehensive score corresponding to each candidate question;
  • This cycle is repeated until a preset number of target questions is determined, and the first questionnaire is obtained based on the preset number of target questions.
  • the questionnaire question bank includes: at least one preset question.
  • the first label set includes: user preference labels for corresponding users generated according to historical user data.
  • the determining the first questionnaire based on the first label set includes:
  • the recommendation degree of each question is determined, and the first target question is determined based on the recommendation degree of each question;
  • the questionnaire question bank includes at least one question;
  • the first target question and the first preset number of second target questions together constitute a second preset number of target questions (equivalent to obtaining the preset number of target questions);
  • the first questionnaire is obtained based on the second preset number of target questions.
  • the determining of the first questionnaire based on the first label set includes:
  • the correlation between at least one question in the questionnaire question bank and the user preference label is calculated one by one according to preset rules, and the recommendation degree for each question is determined based on the correlation, including:
  • the correlation between the stem content of each question and the user preference tag is calculated one by one according to the preset rules.
  • the preset rule may be preset, and may be any correlation calculation method.
  • Step 002 Sort the candidate questions by recommendation degree in descending order, take the candidate question with the greatest recommendation degree as the first question (that is, the above-mentioned first target question), and add it to the survey questionnaire to be sent, that is, the first questionnaire;
  • the step 002 includes:
  • when at least two candidate questions share the greatest recommendation degree, the at least two candidate questions are further screened in combination with other user preference tags to obtain the first question.
  • for example, the user preference tags include: 4G user and VR user. Based on the correlation between the "4G user" tag and each question, and between the "VR user" tag and each question, question one corresponding to "4G user" and question two corresponding to "VR user" are obtained respectively. At this time, other tags related to 4G users (such as 4G-related service usage time) and other tags related to VR users (such as VR-related service usage time) can be combined to calculate the recommendation degree of question one and the recommendation degree of question two.
  • Step 003 Calculate, one by one, the relevance between the stem content of the first question (that is, the first target question above) and each candidate question not yet placed in the questionnaire (that is, the questions in the question bank other than the first question); based on a comprehensive evaluation and scoring of the recommendation degree and relevance of each candidate question, a comprehensive score is obtained;
  • Step 004 Take the candidate question with the highest comprehensive score as the next question most appropriate to the user, and add the determined next question to the survey questionnaire to be sent;
  • Step 005 Repeat the above steps 003-004.
  • when the preset number of questions is reached, the recommendation of candidate questions is terminated, and the survey questionnaire is obtained based on the determined questions.
  • Table 1 is a table of comprehensive scores of questions in the questionnaire. Among them, the recommendation value of question 1 is the largest, so it is the first question in the first questionnaire; the comprehensive score of question 5 is the largest, so it is the second question in the first questionnaire; the comprehensive score of question 3 is the second largest, so it is the third question in the first questionnaire; and so on, question 2 ... question N follow in turn. In this way, the final first questionnaire is obtained.
  • the following relevance degree represents the relevance degree between the corresponding question and the first question (i.e., question 1).
  • the first questionnaire generated by the above method contains questions related to the user's preference labels; the user pays more attention to related questions, and the survey results the user provides are also more authentic. In this way, the quality and credibility of the questionnaire survey can be improved.
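  • The question-selection loop of steps 001-005 can be sketched as a greedy procedure. This is a minimal, non-authoritative illustration: the recommendation degrees and relevance values are hypothetical, and the comprehensive score is assumed to be the sum of a candidate's recommendation degree and its relevance to the most recently chosen question:

```python
def build_questionnaire(recommendation, relevance, n_questions):
    """Greedy selection: seed with the most recommended question (step 002),
    then repeatedly add the candidate with the highest comprehensive score
    (steps 003-005) until the preset number of questions is reached.

    recommendation: dict question -> recommendation degree
    relevance: dict (chosen_question, candidate) -> relevance degree
    """
    questionnaire = [max(recommendation, key=recommendation.get)]
    while len(questionnaire) < n_questions:
        last = questionnaire[-1]
        candidates = [q for q in recommendation if q not in questionnaire]
        best = max(candidates,
                   key=lambda q: recommendation[q] + relevance.get((last, q), 0.0))
        questionnaire.append(best)
    return questionnaire

recommendation = {"q1": 0.9, "q2": 0.3, "q3": 0.5, "q5": 0.4}
relevance = {("q1", "q5"): 0.6, ("q1", "q3"): 0.4, ("q1", "q2"): 0.1,
             ("q5", "q3"): 0.5, ("q5", "q2"): 0.2}
print(build_questionnaire(recommendation, relevance, 4))  # -> ['q1', 'q5', 'q3', 'q2']
```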
  • the obtaining parameters of the target feature and determining the first device based on the parameters of the target feature include:
  • the probability of at least one candidate device is obtained; the probability represents the possibility that a response result can be obtained by performing questionnaire evaluation on the corresponding candidate device;
  • a target device that meets the preset probability requirement is determined; meeting the preset probability requirement means that the probability exceeds a preset threshold.
  • the number of the first features may be one or more;
  • the number of target features obtained by screening can be one or more.
  • the preset feature requirements include: at least one reference feature; the reference feature is a feature that has an important influence on the success of the investigation.
  • the first feature that meets the preset feature requirement is a feature that belongs to the reference feature.
  • the obtaining the probability for at least one candidate device according to the parameters of the target feature includes:
  • a probability for at least one candidate device is obtained.
  • the following formula can be used to calculate the probability for at least one candidate device:
  • p(u_i, t_j) represents the probability that the questionnaire is successfully sent to device j of user i;
  • the target features include: device type (determined based on the multiple devices the user has and the user's favorite device), so that the corresponding probability of selecting the corresponding device is determined;
  • the target feature may also include: push time (determined based on the desired push time), so that the probability corresponding to the corresponding time is determined;
  • the target features may also include other features, for example: user location (such as living room, kitchen, bedroom, etc.), busy/idle state (a user state feature), activity state (a user state feature), user behavior preferences (such as the user's favorite device, usage time, frequency, etc.), date and time characteristics (such as whether it is currently a working day, and the current time period), and device information (such as device type, device status, interaction method (voice, touch screen, gesture, etc.), device location, etc.).
  • the above-mentioned first features include at least the content included in the above-mentioned target features, which will not be repeated here.
  • For the time feature, time-related information can be used, such as the usage time of different devices and the current time (such as working day or rest day); the 24 hours of a day can be divided into time periods, with different time periods corresponding to different values. For example, the evening time period (such as 20:00-22:00) has a value of 40%, and the midday rest period (such as 12:30-13:00) has a value of 30%; in this way, a value can be configured for each time period, and the total value of the multiple time periods may be 1.
  • For the location feature, it can be determined by combining the relationship between the device location and the user location. In addition, the value of the corresponding location relationship differs for different device types; for example, the requirements on the location relationship for mobile phones and smart speakers are different. That is, for each device to be predicted, different values of the location information can be configured. For example, for mobile phones, the value is 60% when the distance is less than 1 meter and 10% when the distance exceeds 10 meters; for smart speakers, the value is 60% when the distance is within 1-5 meters and 10% when the distance is more than 5 meters.
  • the device may be a smart home device, including: smart TVs, smart speakers, smart refrigerators, smart projectors, mobile phones, computers, tablet computers (Pad), smart photo albums, smart wearable devices, and other devices that can interact with users; different devices correspond to different interaction methods, examples of which are shown in Table 2 below.
  • the devices are scheduled by the server.
  • one device needs to be selected from the user's devices to send the questionnaire; the selected device is the device through which the user is most likely to receive and complete the survey.
  • Each device has multiple interaction modes, as shown in Table 2 below:
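  • As a sketch of the device-selection step, the snippet below assumes (as one possible reading of the formula referenced above) that a candidate device's probability is the product, over the target features, of each feature's influence factor and parameter value; the feature names, influence factors, and configured values are illustrative assumptions, not the patent's fixed choices:

```python
def push_probability(features, influence):
    """p(u_i, t_j): multiply, over the target features, each feature's
    influence factor by its parameter value (assumed combination rule)."""
    p = 1.0
    for name, value in features.items():
        p *= influence[name] * value
    return p

def select_device(candidates, influence, threshold):
    """Keep candidate devices whose probability exceeds the preset threshold,
    then pick the one with the highest probability (the target device)."""
    scored = {dev: push_probability(feats, influence)
              for dev, feats in candidates.items()}
    eligible = {dev: p for dev, p in scored.items() if p > threshold}
    return max(eligible, key=eligible.get) if eligible else None

influence = {"time": 1.0, "location": 1.0}
candidates = {
    "phone":   {"time": 0.40, "location": 0.60},  # evening slot, within 1 m
    "speaker": {"time": 0.40, "location": 0.10},  # evening slot, too far away
}
print(select_device(candidates, influence, threshold=0.1))  # -> phone
```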
  • the method further includes:
  • the first time period represents the time when the response result is generated
  • the response result is adjusted to obtain a target response result.
  • the user behavior data includes at least one of the following: voice, body movement, and facial expression;
  • the determining the reliability of the response result based on the user behavior data includes:
  • the credibility is determined according to the at least one credibility value and the weight corresponding to each user behavior data.
  • the user behavior data includes: user behavior sub-data for each topic;
  • the determining the reliability of the response result based on the user behavior data includes:
  • the score for each item in the reply result is adjusted.
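  • The credibility computation and score adjustment described above can be sketched as follows. This is a non-authoritative illustration: the preset behavior analysis model is replaced by precomputed credibility values, and a simple scaling is assumed as the adjustment rule:

```python
def credibility(cred_values, weights):
    """Combine the credibility value obtained for each kind of user behavior
    data (e.g. voice, body movement, facial expression) with its weight."""
    total = sum(weights[k] for k in cred_values)
    return sum(cred_values[k] * weights[k] for k in cred_values) / total

def adjust_scores(scores, per_question_credibility):
    """Adjust the score for each question in the reply result according to
    the credibility determined for that question (scaling assumed)."""
    return {q: s * per_question_credibility[q] for q, s in scores.items()}

weights = {"voice": 0.3, "body": 0.3, "face": 0.4}
# hypothetical behavior-analysis outputs for one question's answering period
c = credibility({"voice": 0.9, "body": 0.8, "face": 0.7}, weights)
print(round(c, 2))  # -> 0.79
print(adjust_scores({"question_1": 5.0}, {"question_1": c}))
```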
  • the server collects the current user environment and behavior status, determines the parameters of the target features, and, combined with the information of smart home devices, intelligently determines a push strategy for the evaluation task: it selects the best time, device, and interaction method (that is, it selects the first device and the push time) and pushes the first questionnaire to the user as a survey invitation. This improves the success rate of the survey; pushing on various devices also adds interest, thereby improving the user experience.
  • the method further includes: determining at least one reference feature and determining an influence factor corresponding to each reference feature; specifically:
  • the reference data sets include: at least one reference feature and a parameter corresponding to each reference feature;
  • models such as decision trees and logistic regression are used for model training, and based on the training results, the influence factor of each reference feature on the success of the user survey is determined; the influence factor represents the importance of each reference feature's effect on the result;
  • the N most important reference features are selected as the target features.
  • the preset number and the N are preset and saved by the developer based on requirements.
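The training-and-screening step above can be sketched in plain Python. This is a minimal illustration, not the patent's implementation: the toy features (`user_idle`, `is_weekend`, `device_in_use`), the gradient-descent training loop, and using the absolute weight magnitude as the influence factor are all assumptions.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Plain gradient-descent logistic regression on the reference data set;
    the learned weights play the role of the influence factors."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    m = len(X)
    for _ in range(epochs):
        gw = [0.0] * n_feat
        gb = 0.0
        for xi, yi in zip(X, y):
            z = b + sum(wk * xk for wk, xk in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted success probability
            err = p - yi
            for k in range(n_feat):
                gw[k] += err * xi[k]
            gb += err
        w = [wk - lr * gwk / m for wk, gwk in zip(w, gw)]
        b -= lr * gb / m
    return w, b

def top_n_features(weights, names, n):
    """Screen the N most important reference features by |influence factor|."""
    ranked = sorted(zip(names, weights), key=lambda t: abs(t[1]), reverse=True)
    return [name for name, _ in ranked[:n]]

# Toy reference data set: each row is one historical survey attempt,
# y = 1 means the user successfully completed the survey.
X = [[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 0]]
y = [1, 1, 0, 0, 1, 0]
w, b = train_logistic(X, y)
print(top_n_features(w, ["user_idle", "is_weekend", "device_in_use"], 2))
```

In this toy data the `user_idle` feature perfectly predicts survey success, so it receives the largest learned weight and ranks first among the screened target features.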
  • the reference user is a user for whom a survey was successfully completed.
  • the reference data set represents the data set corresponding to questionnaires that were successfully completed.
  • Step 011: obtain a reference data set of users who successfully participated in the smart home survey, and construct corresponding user features from the historical user data in the reference data set;
  • the user characteristics may include at least one of the following:
  • User location feature: user location information detectable by the smart home, such as living room, kitchen, bedroom;
  • User status feature: the user's busy/idle status, dynamic/static status, etc., collected by the smart home, which can be obtained by analyzing video and image information captured by devices such as smart cameras and smart security sensors;
  • Smart home device usage preference: smart home device category, usage time, frequency, etc.;
  • Date and time feature: whether it is a working day, and the current time period;
  • Smart home device information: device type, device status, interaction method (voice, touch screen, gesture, etc.), location, etc.;
  • Step 012 feature importance analysis and model training
  • Step 012 specifically includes:
  • models such as decision trees and logistic regression are used for model training, and based on the training results, the influence factor of each reference feature on the success of the user survey is determined; the influence factor represents the importance of each reference feature's effect on the result;
  • indicators such as accuracy and the area under the ROC curve (AUC, Area Under Curve) can be used for evaluation, and an optimized model can be obtained.
  • the probability of successful investigation can be determined by the following formula:
  • x_ik is the k-th feature variable corresponding to user i;
  • w_ik is the influence factor corresponding to the feature variable x_ik of user i, which can be determined from the model training results;
  • p(u_i, t_j) represents the probability that the questionnaire is successfully sent to device j of user i.
  • Step 013: use the model trained in step 012 to predict, for the users to be surveyed, the probability of the user successfully participating in the survey on each smart home device, and select the smart home device with the greatest probability of success to push the evaluation task.
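The patent's probability formula itself is given only as an image, so its exact functional form is not reproduced here; a form consistent with the logistic-regression training described above is the sigmoid of the weighted feature sum. The sketch below makes that assumption, with illustrative influence factors and per-device feature parameters.

```python
import math

def success_probability(weights, features):
    """Assumed logistic form: p(u_i, t_j) = sigma(sum_k w_ik * x_ik).
    The patent shows the formula only as an image, so the sigmoid
    link function here is an assumption."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def pick_best_device(weights, device_features):
    """Score every candidate device and return the one with the highest
    predicted probability of a successful survey."""
    scored = {dev: success_probability(weights, feats)
              for dev, feats in device_features.items()}
    best = max(scored, key=scored.get)
    return best, scored

weights = [1.2, 0.8, -0.5]   # influence factors w_ik (illustrative)
device_features = {          # target-feature parameters x_ik per device
    "smart_tv":      [1.0, 0.0, 1.0],
    "smart_speaker": [1.0, 1.0, 0.0],
    "mobile_phone":  [0.0, 1.0, 1.0],
}
best, probs = pick_best_device(weights, device_features)
print(best)  # device with the greatest probability of success
```

A preset probability threshold, as described for the target-device selection, could be applied to `probs` before taking the maximum.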
  • FIG. 2 is a schematic flowchart of another evaluation implementation method provided by an embodiment of the present application; as shown in FIG. 2, the method is applied to a device, the device specifically being any of the above smart home devices; the device can interact with the server; the method includes:
  • Step 201 Receive and present a first questionnaire; the first questionnaire is determined based on a first tag set; the first tag set includes: user preference tags; the user preference tags are determined based on historical user data;
  • Step 202 obtaining a response result for the first questionnaire
  • Step 203 Send the reply result to the server.
  • the method further includes:
  • the collected parameters of the at least one first feature are sent to the server.
  • the first feature includes at least: user location (such as living room, kitchen, bedroom, etc.), busy and idle state (a user state feature), dynamic and static state (a user state feature), and the like. The details have been described in the method shown in FIG. 1 and will not be repeated here.
  • the method further includes:
  • corresponding user behavior data including at least one of the following: voice, body movement, facial expression; the user behavior data is used to adjust the response result;
  • the collecting user behavior includes:
  • the sending the user behavior data to the server includes: sending the user behavior sub-data and corresponding topics collected in each collection time period in the at least one collection time period to the server.
  • FIG. 3 is a schematic flowchart of yet another evaluation implementation method provided by an embodiment of the present application; as shown in FIG. 3, the method is applied to a server, and the server includes: a research and evaluation platform and a smart home control center; the method includes:
  • Step 301 The research and evaluation platform performs data collection and preprocessing on user-related data
  • user-related data including: historical user data, historical survey evaluation data;
  • the historical user data includes: basic information, communication consumption, equipment information, service usage behavior, online behavior, location, and service complaints;
  • the step 301 specifically includes:
  • Step 302 the research and evaluation platform generates a customized evaluation task for the user
  • Step 302 includes: generating a corresponding evaluation task according to historical user data and historical survey evaluation data of a certain user.
  • the evaluation task includes: a questionnaire.
  • the smart home control center interacts with at least one smart home device; the questionnaire can be sent to the corresponding smart home device through the smart home control center.
  • Step 304 the research and evaluation platform calculates the best time and equipment for the research, and pushes the evaluation task to the user;
  • Step 304 specifically includes: the smart home control center collects the current user environment and behavioral state, and sends the collected current user environment and behavioral state to the research and evaluation platform;
  • the research and evaluation platform uses intelligent push strategies to select the best time, equipment, and interaction method to push the evaluation tasks to users for research invitations.
  • the smart push strategy is trained on historical data of users who successfully participated in smart home surveys.
  • the training process includes: constructing user features based on historical user data; performing feature importance analysis and model training based on user features; calculating the influence factors of each user feature on the success of user research. The specific process has been described in the method shown in FIG. 1 and will not be repeated here.
  • the intelligent push strategy is used to select the best time, equipment, and interaction method, including:
  • predicting, for the users to be surveyed, the probability of the user successfully participating in the survey on each smart home device, and selecting the smart home device with the highest probability of success to push the evaluation task and issue the survey invitation.
  • the probability calculation can use the following formula:
  • x_ik is the k-th feature variable corresponding to user i; w_ik is the influence factor corresponding to the feature variable x_ik of user i; p(u_i, t_j) represents the probability that the questionnaire is successfully sent to device j of user i.
  • Step 305 When the user conducts research, the smart home control center synchronously collects data such as user expressions, voices, and actions, and sends the collected data to the research and evaluation platform;
  • the smart home control center can synchronously collect data such as user expressions, voices, and actions through the smart home devices (a device with a camera, a device with a microphone) that communicate with it.
  • Step 306 determine whether the user research is completed; if it is determined that the user research is completed, then go to step 308; if it is determined that the user research is not completed, then go to step 307;
  • step 306 specifically includes: the research and evaluation platform determines whether a reply result of the evaluation task is received, and if the reply result is received, then proceeds to step 308 , and if it is determined that the reply result is not received, then proceeds to step 307 .
  • step 306 specifically includes: the smart home control center judges in real time whether the reply result of the evaluation task has been received; if it determines that the reply result has been received, it sends the reply result to the research and evaluation platform; if it determines that the reply result has not been received, then go to step 307.
  • Step 307: coordinate with other smart home devices and continue the survey invitation;
  • step 304 may be specifically performed again to determine other smart home devices for investigation.
  • Step 308 the research and evaluation platform conducts a multi-dimensional cross-analysis on the research result data.
  • the user survey results are cross-analyzed based on the collected multi-dimensional user data; data collected during the survey, such as voice, motion, and facial expressions, are combined to conduct a collaborative cross-analysis of the survey results.
  • Step 3081 Determine the user's emotion when conducting research.
  • Analyzing the action data (which can be collected video data or image data), obtaining corresponding action information, and determining the user's second emotion according to the action information;
  • the first emotion, the second emotion, and the third emotion can be processed according to a preset processing strategy to obtain a target emotion; based on the target emotion, the response result can be adjusted according to a preset adjustment strategy (for example, if it is determined that the user feels annoyed or the user's mood is negative, the score can be reduced by one grade accordingly).
  • the preset processing strategy is set by the developer according to requirements; for example, the emotions can be weighted. The specific preset adjustment strategy is likewise set by the developer according to requirements; for example, the score can be adjusted accordingly for a certain emotion, or the result can be marked as unreliable.
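One possible realization of the preset processing strategy is a weighted vote over the three emotion sources (speech, action, facial expression); the weights, the emotion labels, and the one-grade downgrade below are illustrative assumptions, since the patent leaves both strategies to the developer.

```python
def fuse_emotions(emotions, weights):
    """Weighted vote over the first (speech), second (action), and third
    (facial) emotions; one example of the 'preset processing strategy'."""
    scores = {}
    for source, emotion in emotions.items():
        scores[emotion] = scores.get(emotion, 0.0) + weights[source]
    return max(scores, key=scores.get)

def apply_adjustment(score, target_emotion):
    """Example preset adjustment strategy: reduce the score by one grade
    when the target emotion is negative."""
    return score - 1 if target_emotion in {"annoyed", "negative"} else score

emotions = {"speech": "annoyed", "action": "annoyed", "face": "neutral"}
weights = {"speech": 0.4, "action": 0.3, "face": 0.3}
target = fuse_emotions(emotions, weights)
print(target, apply_adjustment(8, target))
```

Here two of the three sources report a negative emotion, so the fused target emotion is negative and the answer score is downgraded by one grade.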
  • FIG. 4 is a schematic diagram of a collaborative cross-analysis of response results and multi-dimensional data provided by an embodiment of the present application; as shown in FIG. 4, the action information may include: shrugging shoulders, hands on hips, etc.; speech semantics may include: troublesome, slow, etc.; facial information may include: frowning, staring, etc.
  • the user's emotion is determined based on the above information, and the answering result can be adjusted according to the user's emotion.
  • FIG. 5 is a schematic structural diagram of an evaluation implementation device provided by an embodiment of the present application; as shown in FIG. 5 , the device is applied to a server, and the device includes: a research evaluation platform and a smart home control center;
  • the smart home control center mainly includes: a smart device feature management module, a user behavior and state recognition module, an intelligent push decision module, an intelligent interactive decision module, an investigation progress monitoring module, and a smart device collaboration module;
  • the research and evaluation platform includes: a questionnaire design module, a questionnaire release module, a stored questionnaire database module, a multi-dimensional data analysis module, a quota setting module, a short link module, a sample module, an anti-cheating module, and an incentive module;
  • the above-mentioned method implemented by the server is completed through the cooperative processing of the above-mentioned modules.
  • the anti-cheating module detects the user's response results and retains results with higher authenticity;
  • the short link module sends a questionnaire to the device through a short link, requesting the user on the device side to respond to the questionnaire;
  • Modules can be divided according to different functions, but it should be noted that the module division in FIG. 5 is only an example. In practical applications, the above processing can be allocated to different program modules as required; that is, the internal structure can be divided into different program modules to perform all or part of the processing described above. In addition, the apparatus provided in the above-mentioned embodiment and the embodiment of the corresponding method belong to the same concept; its specific implementation process is detailed in the method embodiment and will not be repeated here.
  • FIG. 6 is a schematic structural diagram of another evaluation implementation device provided by an embodiment of the present application; the device is applied to a server. As shown in FIG. 6 , the device includes:
  • a first processing module configured to determine a first questionnaire based on a first label set; the first label set includes: a user preference label; the user preference label is determined based on historical user data;
  • the parameters of the target feature are determined based on the environmental state and the behavioral state;
  • a first communication module configured to send the first questionnaire to a first device; the first questionnaire is presented by the first device;
  • the first processing module is configured to determine the recommendation degree of each item based on the correlation between the user preference tag in the first tag set and each item in the questionnaire item bank;
  • the first target topic is determined based on the recommendation degree of each topic;
  • the questionnaire question bank includes at least one topic;
  • the second target topic is determined based on the first composite score corresponding to each of the first topics;
  • This cycle is repeated until a preset number of target questions is determined, and the first questionnaire is obtained based on the preset number of target questions.
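The greedy topic-selection loop described above can be sketched as follows; the additive composite score and the sample topics are assumptions, since the patent does not specify how relevance and recommendation degree are combined.

```python
def build_questionnaire(recommendation, relevance, num_topics):
    """Greedy composition sketch of the first questionnaire.
    recommendation: topic -> recommendation degree (from the correlation
    between user preference tags and topics); relevance: (topic, topic) ->
    relatedness between two topics."""
    remaining = dict(recommendation)
    # First target topic: the topic with the highest recommendation degree.
    target = max(remaining, key=remaining.get)
    selected = [target]
    del remaining[target]
    # Repeat: score remaining topics against the last selected target topic.
    while remaining and len(selected) < num_topics:
        last = selected[-1]
        def composite(t):
            rel = relevance.get((last, t), relevance.get((t, last), 0.0))
            return rel + remaining[t]   # assumed composite score
        target = max(remaining, key=composite)
        selected.append(target)
        del remaining[target]
    return selected

recommendation = {"coverage": 0.9, "billing": 0.6, "speed": 0.8, "support": 0.4}
relevance = {("coverage", "speed"): 0.7, ("coverage", "billing"): 0.2,
             ("speed", "support"): 0.5, ("billing", "support"): 0.6}
print(build_questionnaire(recommendation, relevance, 3))
```

Starting from the best-recommended topic, each subsequent target topic maximizes the composite of its relevance to the previous target topic and its own recommendation degree, until the preset number of topics is reached.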
  • the first processing module is configured to acquire parameters of at least one first feature;
  • the probability of at least one candidate device is obtained; the probability represents the possibility that a response result can be obtained by performing questionnaire evaluation on the corresponding candidate device;
  • a target device that meets the preset probability requirement is determined; the meeting the preset probability requirement indicates that the probability exceeds a preset threshold.
  • the number of the target features is at least one
  • the first processing module is configured to calculate the product of the influence factor corresponding to each of the target features and the parameter according to the parameter of each of the target features in the at least one target feature;
  • a probability for at least one candidate device is obtained.
  • the first communication module is further configured to acquire user behavior data within a first time period; the first time period represents the time when the reply result is generated;
  • the first processing module is further configured to determine the reliability of the reply result based on the user behavior data
  • the response result is adjusted to obtain a target response result.
  • the user behavior data includes at least one of the following: voice, body movement, and expression;
  • the first processing module is further configured to use a preset behavior analysis model to analyze at least one of the voice, the body movement, and the expression to obtain at least one credible value;
  • the credibility is determined according to the at least one credibility value and the weight corresponding to each user behavior data.
  • the user behavior data includes: user behavior sub-data for each topic;
  • the first processing module is further configured to analyze the user behavior sub-data corresponding to each topic by using a preset behavior analysis model, and obtain at least one credible value corresponding to each topic;
  • the first processing module is further configured to adjust the score for each question in the reply result according to the reliability corresponding to each question.
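A minimal sketch of the per-topic credibility weighting and score adjustment described above; the 0.5 threshold and the one-grade reduction are illustrative assumptions, as the patent leaves the specific adjustment strategy to the developer.

```python
def topic_credibility(credible_values, weights):
    """Weighted combination of the per-behavior credibility values
    (voice / body movement / expression) for one topic."""
    total_w = sum(weights[k] for k in credible_values)
    return sum(credible_values[k] * weights[k] for k in credible_values) / total_w

def adjust_scores(answers, behavior_values, weights, threshold=0.5):
    """Reduce a topic's score by one grade when its credibility falls
    below the threshold; threshold and grade step are illustrative."""
    adjusted = {}
    for topic, score in answers.items():
        cred = topic_credibility(behavior_values[topic], weights)
        adjusted[topic] = score - 1 if cred < threshold else score
    return adjusted

weights = {"voice": 0.4, "body": 0.3, "expression": 0.3}
answers = {"q1": 9, "q2": 8}
behavior_values = {
    "q1": {"voice": 0.9, "body": 0.8, "expression": 0.85},  # consistent, credible
    "q2": {"voice": 0.2, "body": 0.4, "expression": 0.3},   # negative signals
}
print(adjust_scores(answers, behavior_values, weights))
```

Topic q2's low weighted credibility (about 0.29) pulls its score down one grade, while q1's high credibility leaves its score untouched.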
  • the evaluation implementation device provided in the above embodiment implements the corresponding evaluation implementation method
  • only the division of the above program modules is used as an example for illustration.
  • the above processing can be allocated to different program modules as required; that is, the internal structure of the server can be divided into different program modules to complete all or part of the above-described processing.
  • the apparatus provided in the above-mentioned embodiment and the embodiment of the corresponding method belong to the same concept, and the specific implementation process thereof is detailed in the method embodiment, which will not be repeated here.
  • FIG. 7 is a schematic structural diagram of another evaluation implementation device provided by an embodiment of the present application; the device is applied to equipment. As shown in FIG. 7 , the device includes:
  • a second communication module configured to receive and present a first questionnaire; the first questionnaire is determined based on a first tag set; the first tag set includes: user preference tags; the user preference tags are determined based on historical user data;
  • a second processing module configured to obtain a response result for the first questionnaire
  • the second communication module is further configured to send the reply result to the server.
  • the device further includes: a collection module configured to collect parameters of at least one first feature;
  • the second communication module is further configured to send the collected parameter of the at least one first feature to the server.
  • the collection module is further configured to collect user behavior data;
  • the corresponding user behavior data includes at least one of the following: voice, body movement, and facial expression; the user behavior data is used to adjust the response result;
  • the second communication module is further configured to send the user behavior data to a server.
  • the collection module is configured to determine a topic corresponding to at least one collection time period, and user behavior sub-data collected in each collection time period in the at least one collection time period;
  • the second communication module is configured to send the user behavior sub-data and corresponding topics collected in each collection time period of the at least one collection time period to the server.
  • the evaluation implementation device provided in the above embodiment implements the corresponding evaluation implementation method
  • only the division of the above program modules is used as an example for illustration.
  • the above processing can be allocated to different program modules as required; that is, the internal structure of the corresponding device can be divided into different program modules to complete all or part of the above-described processing.
  • the apparatus provided in the above-mentioned embodiment and the embodiment of the corresponding method belong to the same concept, and the specific implementation process thereof is detailed in the method embodiment, which will not be repeated here.
  • FIG. 8 is a schematic structural diagram of an evaluation implementation apparatus provided by an embodiment of the present application; as shown in FIG. 8, the apparatus 80 includes: a processor 801 and a memory 802 configured to store a computer program that can run on the processor;
  • when the apparatus is applied to a server, the processor 801 is configured to, when running the computer program, execute: determining a first questionnaire based on a first label set; the first label set includes: a user preference label; the user preference label is determined based on historical user data; acquiring the parameters of the target feature, and determining the first device based on the parameters of the target feature; the parameters of the target feature are determined based on the environmental state and the behavioral state; sending the first questionnaire to the first device; the first questionnaire is presented by the first device; and receiving a response result for the first questionnaire sent by the first device.
  • when the apparatus is applied to a device, the processor 801 is configured to, when running the computer program, execute: receiving and presenting a first questionnaire; the first questionnaire is determined based on a first tag set; the first tag set includes: a user preference tag; the user preference tag is determined based on historical user data; and obtaining a response result for the first questionnaire.
  • the apparatus 80 may further include: at least one network interface 803 .
  • the various components in the device 80 are coupled together by a bus system 804 .
  • the bus system 804 is used to implement connection communication between these components.
  • the bus system 804 also includes a power bus, a control bus, and a status signal bus.
  • the various buses are labeled as bus system 804 in FIG. 8 .
  • the number of the processors 801 may be at least one.
  • the network interface 803 is used for wired or wireless communication between the apparatus 80 and other devices.
  • the memory 802 in this embodiment of the present application is used to store various types of data to support the operation of the apparatus 80 .
  • the methods disclosed in the above embodiments of the present application may be applied to the processor 801 or implemented by the processor 801 .
  • the processor 801 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above-mentioned method can be completed by an integrated logic circuit of hardware in the processor 801 or an instruction in the form of software.
  • the above-mentioned processor 801 may be a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the processor 801 may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application.
  • a general purpose processor may be a microprocessor or any conventional processor or the like.
  • the software module may be located in a storage medium, and the storage medium is located in the memory 802, and the processor 801 reads the information in the memory 802, and completes the steps of the foregoing method in combination with its hardware.
  • apparatus 80 may be implemented by one or more Application Specific Integrated Circuits (ASIC, Application Specific Integrated Circuit), DSPs, Programmable Logic Devices (PLD, Programmable Logic Device), Complex Programmable Logic Devices (CPLD, Complex Programmable Logic Device), Field-Programmable Gate Arrays (FPGA, Field-Programmable Gate Array), general-purpose processors, controllers, microcontrollers (MCU, Micro Controller Unit), microprocessors (Microprocessor), or other electronic components, for performing the aforementioned method.
  • Embodiments of the present application also provide a computer-readable storage medium, on which a computer program is stored;
  • when the computer-readable storage medium is applied to the server and the computer program is run by the processor, the program executes: determining the first questionnaire based on the first label set; the first label set includes: a user preference label; the user preference label is determined based on historical user data; acquiring the parameters of the target feature, and determining the first device based on the parameters of the target feature; the parameters of the target feature are determined based on the environmental state and the behavioral state; sending the first questionnaire to the first device; the first questionnaire is presented by the first device; and receiving a response result for the first questionnaire sent by the first device.
  • the computer program is run by the processor, the corresponding processes implemented by the server in each method of the embodiments of the present application are implemented, which are not repeated here for brevity.
  • when the computer-readable storage medium is applied to a device and the computer program is run by the processor, the program executes: receiving and presenting a first questionnaire; the first questionnaire is determined based on a first label set; the first label set includes: a user preference tag; the user preference tag is determined based on historical user data; and obtaining a response result for the first questionnaire.
  • when the computer program is run by the processor, the corresponding processes implemented by the device in each method of the embodiments of the present application are implemented, which will not be repeated here for brevity.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical, mechanical, or other forms.
  • the units described above as separate components may or may not be physically separated, and a component displayed as a unit may or may not be a physical unit; that is, it may be located in one place or distributed across multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may serve as a separate unit, or two or more units may be integrated into one unit;
  • the above integrated unit can be implemented either in the form of hardware or in the form of hardware plus software functional units.
  • the aforementioned program can be stored in a computer-readable storage medium, and when the program is executed, it performs the steps of the above method embodiments; the aforementioned storage medium includes: a removable storage device, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disk, or various other media on which program code can be stored.
  • when the above-mentioned integrated units of the present application are implemented in the form of software function modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
  • the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic disk or an optical disk and other mediums that can store program codes.

Abstract

An evaluation implementation method, apparatus, and storage medium, the method including: determining a first questionnaire based on a first tag set, where the first tag set includes a user preference tag, and the user preference tag is determined based on historical user data (101); acquiring parameters of a target feature and determining a first device based on the parameters of the target feature, where the parameters of the target feature are determined based on an environmental state and a behavioral state (102); sending the first questionnaire to the first device, where the first questionnaire is presented by the first device (103); and receiving a response result for the first questionnaire sent by the first device (104).

Description

Evaluation Implementation Method, Apparatus, and Storage Medium
Cross-Reference to Related Applications
This application is based on and claims priority to Chinese patent application No. 202010902906.8, filed on September 1, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of business support, and in particular to an evaluation implementation method, apparatus, and storage medium.
Background
With social progress and technological development, users place ever higher demands on networks, businesses, and services. Telecom operators and industries such as e-commerce, retail, finance, and insurance typically use user surveys to conduct business satisfaction evaluations and Net Promoter Score (NPS) monitoring, listen to user feedback, uncover business problems, adjust user operation strategies in a timely manner, and improve company operations.
How to intelligently select the best survey content, invitation timing, channel device, and survey interaction method according to differences in users' environments, behaviors, and device characteristics, and how to more comprehensively capture users' real state and feedback during the survey, are key factors in improving user participation and evaluation quality.
Summary
In view of this, the main purpose of this application is to provide an evaluation implementation method, apparatus, and storage medium.
To achieve the above purpose, the technical solution of this application is implemented as follows:
An embodiment of this application provides an evaluation implementation method applied to a server, the method including:
determining a first questionnaire based on a first tag set; the first tag set includes: a user preference tag; the user preference tag is determined based on historical user data;
acquiring parameters of a target feature, and determining a first device based on the parameters of the target feature; the parameters of the target feature are determined based on an environmental state and a behavioral state;
sending the first questionnaire to the first device; the first questionnaire is presented by the first device;
receiving a response result for the first questionnaire sent by the first device.
Preferably, the determining a first questionnaire based on a first tag set includes:
determining a recommendation degree for each topic based on the correlation between the user preference tag in the first tag set and each topic in a questionnaire topic bank; determining a first target topic based on the recommendation degree of each topic; the questionnaire topic bank includes at least one topic;
determining the relevance between the first target topic and at least one first topic in the questionnaire topic bank other than the first target topic; determining a first composite score for each first topic based on the relevance and recommendation degree corresponding to each first topic; determining a second target topic based on the first composite score corresponding to each first topic;
determining the relevance between the second target topic and at least one second topic in the questionnaire topic bank other than the first target topic and the second target topic; determining a second composite score corresponding to each second topic based on the relevance and recommendation degree corresponding to each second topic; determining a third target topic based on the second composite score corresponding to each second topic;
repeating this cycle until a preset number of target topics is determined, and obtaining the first questionnaire based on the preset number of target topics.
Preferably, the acquiring parameters of a target feature and determining a first device based on the parameters of the target feature includes:
acquiring parameters of at least one first feature;
selecting, from the at least one first feature, a first feature that meets a preset feature requirement as the target feature;
obtaining a probability for at least one candidate device according to the parameters of the target feature; the probability represents the likelihood that a response result can be obtained by conducting the questionnaire evaluation on the corresponding candidate device;
determining, based on the probability corresponding to each candidate device in the at least one candidate device, a target device that meets a preset probability requirement; meeting the preset probability requirement indicates that the probability exceeds a preset threshold.
Preferably, the number of target features is at least one;
the obtaining a probability for at least one candidate device according to the parameters of the target feature includes:
calculating, according to the parameters of each target feature in the at least one target feature, the product of the influence factor corresponding to each target feature and the parameter;
obtaining the probability for the at least one candidate device based on the product of the influence factor corresponding to each target feature and the parameter of the target feature.
Preferably, the method further includes:
acquiring user behavior data within a first time period; the first time period represents the time when the response result is generated;
determining the credibility of the response result based on the user behavior data;
adjusting the response result based on the credibility to obtain a target response result.
Preferably, the user behavior data includes at least one of the following: voice, body movement, and facial expression;
the determining the credibility of the response result based on the user behavior data includes:
analyzing at least one of the voice, the body movement, and the facial expression using a preset behavior analysis model to obtain at least one credibility value;
determining the credibility according to the at least one credibility value and the weight corresponding to each type of user behavior data.
Preferably, the user behavior data includes: user behavior sub-data for each topic;
the determining the credibility of the response result based on the user behavior data includes:
analyzing the user behavior sub-data corresponding to each topic using a preset behavior analysis model to obtain at least one credibility value corresponding to each topic;
determining the credibility for each topic according to the at least one credibility value corresponding to each topic and the weight corresponding to each type of user behavior data;
correspondingly, the adjusting the response result based on the credibility includes:
adjusting the score for each topic in the response result according to the credibility corresponding to each topic.
An embodiment of this application provides an evaluation implementation method applied to a device, the method including:
receiving and presenting a first questionnaire; the first questionnaire is determined based on a first tag set; the first tag set includes: a user preference tag; the user preference tag is determined based on historical user data;
obtaining a response result for the first questionnaire;
sending the response result to a server.
Preferably, the method further includes:
collecting parameters of at least one first feature;
sending the collected parameters of the at least one first feature to the server.
Preferably, the method further includes:
collecting user behavior data; the user behavior data includes at least one of the following: voice, body movement, facial expression; the user behavior data is used to adjust the response result;
sending the user behavior data to the server.
Preferably, the collecting user behavior data includes:
determining a topic corresponding to at least one collection time period, and user behavior sub-data collected in each collection time period of the at least one collection time period;
the sending the user behavior data to the server includes: sending the user behavior sub-data collected in each collection time period of the at least one collection time period, together with the corresponding topic, to the server.
本申请实施例提供了一种测评实现装置,应用于服务器,所述装置包括:
第一处理模块,配置为基于第一标签集确定第一问卷;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;
以及,获取目标特征的参数,基于所述目标特征的参数确定第一设备;所述目标特征的参数基于环境状态和行为状态确定;
第一通信模块,配置为将所述第一问卷发送到第一设备;所述第一问卷由所述第一设备呈现;
以及,接收所述第一设备发送的针对第一问卷的答复结果。
较佳地,所述第一处理模块,配置为基于所述第一标签集中的用户偏好标签与问卷题库中的每个题目之间的相关性,确定每个所述题目的推荐度;基于每个所述题目的推荐度,确定第一目标题目;所述问卷题库包括至少一个题目;
确定所述第一目标题目和所述问卷题库中除第一目标题目外的至少一个第一题目的关联度;基于每个所述第一题目对应的关联度和推荐度,确定每个所述第一题目的第一综合分值;基于每个所述第一题目对应的第一综合分值,确定第二目标题目;
确定所述第二目标题目和所述问卷题库中除第一目标题目及第二目标题目外的至少一个第二题目的关联度;基于每个所述第二题目对应的关联度和推荐度,确定每个所述第二题目对应的第二综合分值;基于每个所述第二题目对应的第二综合分值,确定第三目标题目;
依此循环,直至确定预设数量的目标题目,基于所述预设数量的目标题目得到所述第一问卷。
较佳地,所述第一处理模块,配置为获取至少一个第一特征的参数;
从所述至少一个第一特征中选择符合预设特征要求的第一特征,作为目标特征;
根据所述目标特征的参数,得到针对至少一个候选设备的概率;所述概率表征通过相应候选设备进行问卷测评可得到答复结果的可能性;
基于所述至少一个候选设备中每个候选设备对应的概率,确定符合预设概率要求的目标设备;所述符合预设概率要求表征概率超过预设阈值。
较佳地,所述目标特征的数量为至少一个;
所述第一处理模块,配置为根据至少一个目标特征中每个所述目标特征的参数,计算每个所述目标特征对应的影响因子和参数的乘积;
基于每个所述目标特征对应的影响因子和所述目标特征的参数的乘积,得到针对至少一个候选设备的概率。
较佳地,所述第一通信模块,还配置为获取第一时间段内的用户行为数据;所述第一时间段表征生成答复结果的时间;
所述第一处理模块,还配置为基于所述用户行为数据,确定针对所述答复结果的可信度;
基于所述可信度,调整所述答复结果,得到目标答复结果。
较佳地,所述用户行为数据,包括以下至少之一:声音、肢体动作、表情;
所述第一处理模块,还配置为运用预设的行为分析模型分析所述声音、所述肢体动作、所述表情中的至少之一,得到至少一个可信值;
根据所述至少一个可信值和各用户行为数据对应的权重,确定可信度。
较佳地,所述用户行为数据,包括:针对每个题目的用户行为子数据;
所述第一处理模块,还配置为运用预设的行为分析模型分析每个题目对应的用户行为子数据,得到每个题目对应的至少一个可信值;
根据每个题目对应的所述至少一个可信值和各用户行为数据对应的权重,确定针对每个题目的可信度;
相应的,所述第一处理模块,还配置为根据每个题目对应的可信度,调整所述答复结果中针对每个题目的分值。
本申请实施例提供了一种测评实现装置,应用于设备,所述装置包括:
第二通信模块,配置为接收并呈现第一问卷;所述第一问卷基于第一标签集确定;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;
第二处理模块,配置为获得针对第一问卷的答复结果;
所述第二通信模块,还配置为将所述答复结果发送给服务器。
较佳地,所述装置还包括:采集模块,配置为采集至少一个第一特征的参数;
所述第二通信模块,还配置为将采集的至少一个第一特征的参数发送给服务器。
较佳地,所述采集模块,还配置为采集用户行为数据;相应用户行为数据,包括以下至少之一:声音、肢体动作、表情;所述用户行为数据用于对所述答复结果进行调整;
所述第二通信模块,还配置为将所述用户行为数据发送给服务器。
较佳地,所述采集模块,配置为确定至少一个采集时间段对应的题目,及至少一个采集时间段中每个采集时间段内采集的用户行为子数据;
相应的,所述第二通信模块,配置为将所述至少一个采集时间段中每个采集时间段内采集的用户行为子数据及对应的题目,发送给服务器。
本申请实施例提供了一种测评实现装置,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时实现由上述服务器侧执行的任一项所述方法的步骤;或者,
所述处理器执行所述程序时实现由上述设备侧执行的任一项所述方法的步骤。
本申请实施例还提供了一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现由上述服务器侧执行的任一项所述方法的步骤;或者,
所述计算机程序被处理器执行时实现由上述设备侧执行的任一项所述方法的步骤。
本申请实施例所提供的一种测评实现方法、装置和存储介质,所述方法包括:服务器基于第一标签集确定第一问卷;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;获取目标特征的参数,基于所述目标特征的参数确定第一设备;所述目标特征的参数基于环境状态和行为状态确定;将所述第一问卷发送到第一设备;所述第一问卷由所述第一设备呈现;接收所述第一设备发送的针对第一问卷的答复结果;如此,提高用户参与率,提升测评质量;
相应的,本申请实施例所提供的另一种测评实现方法、装置和存储介质,所述方法包括:设备接收并呈现第一问卷;所述第一问卷基于第一标签集确定;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;获得针对第一问卷的答复结果;将所述答复结果发送给服务器;如此,提高用户参与率,提升测评质量。
附图说明
图1为本申请实施例提供的一种测评实现方法的流程示意图;
图2为本申请实施例提供的另一种测评实现方法的流程示意图;
图3为本申请实施例提供的再一种测评实现方法的流程示意图;
图4为本申请实施例提供的一种答复结果和多维数据协同交叉分析的示意图;
图5为本申请实施例提供的一种测评实现装置的结构示意图;
图6为本申请实施例提供的另一种测评实现装置的结构示意图;
图7为本申请实施例提供的再一种测评实现装置的结构示意图;
图8为本申请实施例提供的还一种测评实现装置的结构示意图。
具体实施方式
在结合实施例对本申请作进一步详细说明之前,先对相关技术进行说明。
用户调查测评是运用科学方法进行数据采集和汇聚的手段,为企业监测服务现状、业务质量、整体满意度、市场分析等提供基础数据支撑。
当前主流的调查方法主要是通过邮件或者短信等方式将调查问卷推送给用户手机或者电脑进行邀约和测评,对任何用户都无差别对待;由于没有考虑用户自身和周边环境状态,导致用户敷衍了事、拒答率较高等问题;同时,当前通用的调查方法采集的用户信息主要是用户反馈的文本信息,信息种类单一,而调查依赖用户的主观反馈,单一信息会导致用户真实态度失真。
现在的用户调查主要实现方式是把测评任务无差别地推送给广大用户,用户通过万维网(WEB,World Wide Web)页面进行信息反馈,没有考虑用户环境、用户状态、便捷性、信息虚假等因素,造成用户敷衍、受访体验差、拒绝率高、信息不全面等问题,导致整体调研结果失真和不准确,进而影响企业决策和业务服务改进。
随着社会进步和科技发展,可接入网络的设备种类和形态呈多样化,尤其近几年随着智能家居的兴起,可与用户接触和交互的设备呈现智能化和普遍化,这给多种形态的调查带来了便利;但同时单一的设备也成了用户信息采集的瓶颈,因为之前触达用户最直接的是手机终端,但随着终端的泛化,当前手机终端只是网络接入的一种,这也是造成调研回复率下降的一个原因。
如何根据用户自身环境、行为、设备特征等差异,智能选择最佳的调查内容、邀约时机、渠道设备和调查交互方式,并更全面地采集用户调查过程中的真实状态和反馈,是提升用户参与度及测评质量的关键因素。
基于此,本申请实施例提供的方法,服务器基于第一标签集确定第一问卷;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;获取目标特征的参数,基于所述目标特征的参数确定第一设备;所述目标特征的参数基于环境状态和行为状态确定;将所述第一问卷发送到第一设备;所述第一问卷由所述第一设备呈现;接收所述第一设备发送的针对第一问卷的答复结果;相应的,设备接收并呈现第一问卷;所述第一问卷基于第一标签集确定;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;获得针对第一问卷的答复结果;将所述答复结果发送给服务器。
图1为本申请实施例提供的一种测评实现方法的流程示意图;如图1所示,所述方法应用于服务器,所述方法包括:
步骤101、基于第一标签集确定第一问卷;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;
步骤102、获取目标特征的参数,基于所述目标特征的参数确定第一设备;所述目标特征的参数基于环境状态和行为状态确定;
步骤103、将所述第一问卷发送到第一设备;所述第一问卷由所述第一设备呈现;
步骤104、接收所述第一设备发送的针对第一问卷的答复结果。
在一些实施例中,所述方法还包括:确定针对每个用户的第一标签集。
具体地,所述确定针对每个用户的第一标签集,包括:
获取预设历史时间段内多个用户(可以是运营商提供服务的所有用户)的历史用户数据;所述历史用户数据,包括以下至少之一:基本信息、通信消费、设备信息、业务使用行为、上网行为、所处位置、业务投诉和历史调查测评数据;
对多个用户的历史用户数据进行标准化处理,得到多个用户的标准化处理后的数据;
针对每个用户对应的标准化处理后的数据,确定每个用户对应的第一标签集。所述第一标签集,包括:至少一个用户偏好标签。
这里,标准化处理可以采用任意标准化规则进行,对于数值类,例如:Min-max标准化、z-score标准化等。其中,Min-max标准化指对原始数据进行线性变换,可以设minA和maxA分别为属性A的最小值和最大值,将A的一个原始值x通过min-max标准化映射成在区间[0,1]中的值x',其公式为:新数据=(原数据-最小值)/(最大值-最小值)。z-score标准化指基于原始数据的均值(mean)和标准差(standard deviation)进行数据的标准化;新数据=(原数据-均值)/标准差。
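上述两种标准化方法可以用如下 Python 片段示意;其中函数名与样例数据均为假设的说明性示例,实际标准化规则可按需配置:

```python
def min_max_normalize(values):
    """Min-max 标准化:新数据 = (原数据 - 最小值) / (最大值 - 最小值),映射到 [0, 1]。"""
    lo, hi = min(values), max(values)
    if hi == lo:                      # 所有取值相同时避免除零,统一映射为 0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]


def z_score_normalize(values):
    """z-score 标准化:新数据 = (原数据 - 均值) / 标准差。"""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    if std == 0:
        return [0.0 for _ in values]
    return [(v - mean) / std for v in values]


# 示例:对某一数值类特征(如每月消费费用)做标准化
fees = [50, 100, 150, 200]
print(min_max_normalize(fees))  # 依次线性映射到 [0, 1] 区间
print(z_score_normalize(fees))  # 标准化后均值为 0、标准差为 1
```

两种方法均只适用于数值类数据;类别类数据(如性别、地域)需另行编码后再参与标签计算。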
这里,针对多个用户中每个用户对应的标准化处理后的数据,确定每个用户对应的第一标签集,包括:
针对每个用户对应的标准化处理后的数据,确定每个用户对应的至少一个用户偏好标签,基于确定的至少一个用户偏好标签,确定用户对应的第一标签集。
例如,所述标准化处理后的数据,包括经过标准化处理后的以下信息:基本信息、通信消费、设备信息、业务使用行为、上网行为、所处位置、业务投诉和历史调查测评数据;
基本信息,可以包括:性别、年龄、地域、职业等;
通信消费,可以包括:消费费用(如每月消费的费用)、不同业务订购的额度(如订购的第五代移动通信技术(5g,5th-Generation)流量、第四代移动通信技术(4g,the 4th generation mobile communication technology)流量、短信条数、通话时长等)等;
设备信息,可以包括:智能电视、智能音箱、智能冰箱、智能投影仪、手机、电脑、平板电脑(Pad,Portable android device)、智能相册、智能穿戴设备等;
业务使用行为,包括:开通业务的情况(如开通4g服务、开通5g服务、开通通话服务、开通vr业务等);
上网行为,可以包括:移动数据、无线网络(WIFI)、上网时长、不同类型应用程序(APP,Application)分别对应的时长(如视频类APP使用时长、社交类APP使用时长、游戏类APP使用时长)等;
所处位置,可以包括:不同设备所处的位置,可以是为设备提供信号的基站的位置、提供服务的基站支持的小区;还可以进一步确定设备在室内或室外等;如果确定设备在室内,还可以进一步确定卧室、客厅等作为所处位置;具体可以基于设备的定位功能实现,这里不多赘述;
业务投诉,可以包括:用户投诉过的业务(如4g网络质量不满等投诉);
历史调查测评数据,可以包括:用户曾经处理过的调查测评问卷、对应的答复结果、及对应的特征信息(特征信息包括:答复所采用的方式、答复时间、答复时的状态等)。
基于上述内容可以确定用户偏好标签,例如:4g用户、虚拟现实技术(VR,Virtual Reality)用户、4g质量不满、4g自费、无线上网(WIFI)用户等。
在一些实施例中,所述基于第一标签集确定第一问卷,包括:
基于所述第一标签集中的用户偏好标签与问卷题库中的每个题目之间的相关性,确定每个所述题目的推荐度,基于每个所述题目的推荐度,确定第一目标题目;所述问卷题库包括至少一个题目;
确定所述第一目标题目和所述问卷题库中除第一目标题目外的至少一个第一题目的关联度;基于每个所述第一题目对应的关联度和推荐度,确定每个所述第一题目的第一综合分值;基于每个所述第一题目对应的第一综合分值,确定第二目标题目;
确定所述第二目标题目和所述问卷题库中除第一目标题目及第二目标题目外的至少一个第二题目的关联度;基于每个所述第二题目对应的关联度和推荐度,确定每个所述第二题目对应的第二综合分值;基于每个所述第二题目对应的第二综合分值,确定第三目标题目;
依此循环,直至确定预设数量的目标题目,基于所述预设数量的目标题目得到所述第一问卷。
其中,所述问卷题库,包括:预设的至少一个题目。所述第一标签集,包括:根据历史用户数据生成的针对相应用户的用户偏好标签。
在另一实施例中,所述基于第一标签集确定第一问卷,包括:
基于所述第一标签集中的用户偏好标签与问卷题库中的每个题目之间的相关性,确定每个所述题目的推荐度,基于每个所述题目的推荐度,确定第一目标题目;所述问卷题库包括至少一个题目;
确定所述第一目标题目和所述问卷题库中除第一目标题目外的至少一个第一题目的关联度;基于每个所述第一题目对应的关联度和推荐度,确定每个所述第一题目的第一综合分值;
基于每个所述第一题目对应的第一综合分值进行排序,确定第一预设数量的第二目标题目;
根据第一目标题目和第一预设数量的第二目标题目,得到第二预设数量的目标题目(相当于得到预设数量的目标题目);
基于所述第二预设数量的目标题目得到所述第一问卷。
以下提供一种具体示例说明,所述基于第一标签集确定第一问卷,包括:
步骤001、将问卷题库中的至少一个题目与用户偏好标签按照预设规则逐一计算相关性(或贴切度),基于相关性确定针对每个题目的推荐度;
这里,所述将问卷题库中的至少一个题目与用户偏好标签按照预设规则逐一计算相关性,基于相关性确定针对每个题目的推荐度,包括:
将每个题目的题干内容与用户偏好标签按预设规则逐一计算相关性。
所述预设规则可以预先设定,可以是任意相关性计算方法。
步骤002、按照推荐值由大到小排列,将推荐度最大的候选题目作为首条题目(即上述第一目标题目),添加到待发送的调研问卷、即所述第一问卷中;
所述步骤002,包括:
将不同用户偏好标签与每个题目计算相关性,确定与用户偏好标签相关性最高的题目,作为候选题目;
相应于所述候选题目有且仅有一个时,将确定的题目作为首条题目;
相应于所述候选题目的数量为至少两个时,结合其他用户偏好标签对至少两个候选题目进行筛选,得到首条题目。
例如,用户偏好标签包括:4g用户、vr用户,基于4g用户、vr用户与每个题目计算相关度,分别得到对应于“4g用户”的题目一、对应于“vr用户”的题目二,两者的相关度相同且最高;此时,可以结合4g用户相关的其他标签(如4g相关业务使用时长等)、vr用户相关的其他标签(如vr相关业务使用时长等),计算题目一的推荐度、题目二的推荐度。
假设,4g相关业务使用时长大于vr相关业务使用时长,提高题目一的推荐度,则将题目一作为首条题目。
步骤003、将首条题目(即上述第一目标题目)的题干内容与调查问卷题库中未显示的候选题目(也就是说,除所述首条题目外的其他题目)逐一进行关联度计算;基于各个候选题目的推荐度和关联度综合评估打分,得到综合分值;
步骤004、按照综合分值由大到小排列,将综合分值最大的候选题目作为与用户最贴切的下一道题目,添加到所述待发送的调研问卷;
步骤005、重复上述步骤003-004,当已经达到预设的问卷最大题目数量,终止候选题目推荐,基于确定的候选题目得到调研问卷。
如下表1所示,表1为一种问卷题目综合分值的表格;其中,题目1的推荐值最大,作为第一问卷中的首道题目;题目5的综合分值最大,作为第一问卷中的第二道题目;题目3的综合分值第二,作为第一问卷中的第三道题目;以此类推,题目2……题目N,依次作为下一道题目。如此可以得到最终的第一问卷。
以下推荐值表征相应题目与各用户偏好标签的相关性最高的值;
以下关联度表征相应题目与首条题目(即题目1)的关联度。
题目编号 | 推荐值 | 关联度 | 综合分值
题目5 | 第五大推荐值 | 最大关联度 | 最大综合分值
题目3 | 第三大推荐值 | 次大关联度 | 第二大综合分值
题目2 | 次大推荐值 | 第五大关联度 | 第三大综合分值
…… | …… | …… | ……
题目N | 最小推荐值 | 最小关联度 | 最小综合分值
题目1 | 最大推荐值 | 已显示,不再计算 | 已显示,不再计算
表1
通过上述方式生成的第一问卷,其包含的题目与用户的用户偏好标签相关;用户也更为关注相关题目,其提供的问卷调查结果也更真实,如此,可以提高问卷调查的质量、可信度。
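上述步骤001-005描述的贪心选题过程可以用如下 Python 片段示意;其中 recommend、relevance 为假设的已计算好的推荐度与关联度表,综合分值在此简化为两者之和,且关联度以最近一道已选题目为基准(对应权利要求中逐题迭代的方式),实际打分与基准选择由预设规则确定:

```python
def build_questionnaire(recommend, relevance, max_len):
    """recommend: {题目: 推荐度}; relevance: {(已选题目, 候选题目): 关联度}。

    返回按贪心策略选出的题目序列,长度不超过 max_len(即预设的问卷最大题目数量)。
    """
    # 步骤 002:推荐度最大的题目作为首条题目(第一目标题目)
    first = max(recommend, key=recommend.get)
    selected = [first]
    candidates = set(recommend) - {first}
    # 步骤 003-005:按综合分值(此处示意为 推荐度 + 与上一道已选题目的关联度)循环选题
    while candidates and len(selected) < max_len:
        def score(q):
            return recommend[q] + relevance.get((selected[-1], q), 0.0)
        nxt = max(candidates, key=score)
        selected.append(nxt)
        candidates.remove(nxt)
    return selected


# 假设的分值表,数值仅作演示
recommend = {"题目1": 0.9, "题目2": 0.5, "题目3": 0.6, "题目5": 0.4}
relevance = {("题目1", "题目5"): 0.8, ("题目1", "题目3"): 0.5, ("题目1", "题目2"): 0.1}
print(build_questionnaire(recommend, relevance, 3))  # ['题目1', '题目5', '题目3']
```

其中题目1推荐度最高被选为首条题目,题目5因与首条题目关联度高而综合分值最大,与表1的排序逻辑一致。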
在一些实施例中,所述获取目标特征的参数,基于所述目标特征的参数确定第一设备,包括:
获取至少一个第一特征的参数;
从所述至少一个第一特征中选择符合预设特征要求的第一特征,作为目标特征;
根据所述目标特征的参数,得到针对至少一个候选设备的概率;所述概率表征通过相应候选设备进行问卷测评可得到答复结果的可能性;
基于所述至少一个候选设备中每个候选设备对应的概率,确定符合预设概率要求的目标设备;所述符合预设概率要求表征概率超过预设阈值。
这里,所述第一特征的数量可以为一个或多个;
筛选得到的目标特征的数量可以为一个或多个。
所述预设特征要求,包括:至少一个参考特征;所述参考特征为对调研成功与否有重要影响的特征。
所述符合预设特征要求的第一特征为属于所述参考特征的特征。
在一些实施例中,所述目标特征的数量为至少一个;
所述根据所述目标特征的参数,得到针对至少一个候选设备的概率,包括:
根据至少一个目标特征中每个所述目标特征的参数,计算每个所述目标特征对应的影响因子和参数的乘积;
基于每个所述目标特征对应的影响因子和所述目标特征的参数的乘积,得到针对至少一个候选设备的概率。
具体来说,可以采用以下公式,计算针对至少一个候选设备的概率:
p(u_i,t_j)=Σ_k w_ik·x_ik
其中,p(u_i,t_j)表示向用户i的设备j发送调查问卷成功的概率;
w_ik=f(x_ik)表示用户i的目标特征x_ik对应的影响因子为w_ik;所述影响因子(w_ik)为权重;所述x_ik为目标特征的参数,即具体取值;
目标特征包括:设备类型(基于用户具有的多个设备、用户喜好设备确定),如此,确定选择相应设备对应的概率;
目标特征还可以包括:推送时间(基于希望推送的时间确定),如此,确定选择相应时间对应的概率;
目标特征还可以包括:其他特征,例如,用户位置(如客厅、厨房、卧室等)、忙闲状态(一种用户状态特征)、动静状态(一种用户状态特征)、用户行为偏好(如用户喜好设备、使用时间、频次等)、日期时间特征(如当前是否工作日、当前所处时间段)、设备信息(如设备类型、设备状态、交互方式(语音、触屏、手势等)、设备位置等)。上述第一特征,至少包括以上目标特征所包含的内容,这里不再赘述。
以下对不同目标特征的参数的取值进行说明;
针对设备类型,可以结合设备相关信息确定(如用户具有的多个设备和用户行为偏好中的用户喜好设备),为多个设备分别进行配置取值,例如:第一个设备(如:手机,喜好度最高)对应50%、第二个设备(如:智能电视,喜好度其次)对应30%,依次类推,可以为多个设备配置取值;上述多个设备的取值总值可以为1。
针对推送时间,可以结合时间相关信息确定(如针对不同设备的使用时间、当前时间(如工作日、休息日)等),可以对一天24小时进行划分,不同时间段对应不同取值;例如:晚上时间段(如20:00-22:00)取值为40%,休息时间段(如12:30-13:00)取值为30%,依次类推,可以为不同时间段配置取值;上述多个不同时间段的取值总值可以为1。
针对位置特征,可以结合设备位置和用户位置的关系确定;另外,针对不同设备类型其对应的位置关系的取值也不同,例如:针对手机、智能音箱对应位置关系的要求不同;也就是说,对于待预测的不同设备,可以配置不同的位置信息对应的取值,如针对手机,距离在1米以内的取值为60%,而距离超过10米时取值为10%;针对智能音箱,距离在1-5米内的取值为60%,距离超过5米的取值为10%;
针对忙闲状态,对于忙的状态和闲的状态配置不同取值,如忙的状态配置10%,闲的状态配置90%;
针对动静状态,对于动的状态和静的状态配置不同取值,如动的状态配置10%,静的状态配置90%。
需要说明的是,上述取值仅仅是一种示例,实际应用时可以由开发人员基于经验、需求、调研成功的问卷相关的数据等进行配置。这里不做限定。
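基于上述取值配置,按公式 p(u_i,t_j)=Σ_k w_ik·x_ik 计算各候选设备的成功概率、并选出超过预设阈值的目标设备的过程,可用如下 Python 片段示意;其中影响因子、特征取值与阈值均为假设数值,实际由模型训练结果与采集数据确定:

```python
def device_probability(weights, features):
    """按 p = Σ_k w_ik · x_ik 计算某一候选设备的调研成功概率。

    weights: 各目标特征的影响因子 w_ik;features: 对应的特征取值 x_ik。
    """
    return sum(w * x for w, x in zip(weights, features))


def pick_device(candidates, threshold=0.3):
    """candidates: {设备名: (影响因子列表, 特征取值列表)}。

    返回 (概率最高且超过阈值的设备, 其概率);无设备超过阈值时设备为 None。
    """
    probs = {d: device_probability(w, x) for d, (w, x) in candidates.items()}
    best = max(probs, key=probs.get)
    return (best, probs[best]) if probs[best] > threshold else (None, probs[best])


# 假设三个目标特征依次为:设备类型、忙闲状态、动静状态
candidates = {
    "手机":   ([0.5, 0.25, 0.25], [0.5, 1.0, 0.5]),
    "智能电视": ([0.5, 0.25, 0.25], [0.25, 1.0, 0.5]),
}
print(pick_device(candidates))  # ('手机', 0.625)
```

示例中手机的成功概率为 0.5×0.5+0.25×1.0+0.25×0.5=0.625,高于智能电视的 0.5,故选择手机作为第一设备。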
以上用户忙闲状态、动静状态、用户位置等用户相关信息,可以通过设备检测后发送给服务器。
所述设备可以为智能家居设备,包括:智能电视、智能音箱、智能冰箱、智能投影仪、手机、电脑、平板电脑(PAD)、智能相册、智能穿戴设备等可与用户交互的设备;不同设备对应不同交互方式,示例如下表2。所述设备通过服务器进行调度。
本申请实施例中,需要从各个设备中选择一个设备发送调查问卷,选择的设备为用户接收并进行调研的可能性最高的设备。每个设备有多种交互方式,如下表2所示:
设备 | 交互方式一 | 交互方式二 | 交互方式三
智能电视 | 声音交互 | 触摸交互 | 动作交互
智能投影仪 | 声音交互 | 动作交互 | ……
智能音箱 | 语音交互 | 触摸交互 | ……
智能穿衣镜 | 声音交互 | 触摸交互 | ……
表2
在一些实施例中,所述方法还包括:
获取第一时间段内的用户行为数据;所述第一时间段表征生成答复结果的时间;
基于所述用户行为数据,确定针对所述答复结果的可信度;
基于所述可信度,调整所述答复结果,得到目标答复结果。
在一些实施例中,所述用户行为数据,包括以下至少之一:声音、肢体动作、表情;
所述基于所述用户行为数据,确定针对答复结果的可信度,包括:
运用预设的行为分析模型分析所述声音、所述肢体动作、所述表情中的至少之一,得到至少一个可信值;
根据所述至少一个可信值和各用户行为数据对应的权重,确定可信度。
在一些实施例中,所述用户行为数据,包括:针对每个题目的用户行为子数据;
所述基于所述用户行为数据,确定针对答复结果的可信度,包括:
运用预设的行为分析模型分析每个题目对应的用户行为子数据,得到每个题目对应的至少一个可信值;
根据每个题目对应的所述至少一个可信值和各用户行为数据对应的权重,确定针对每个题目的可信度;
相应的,所述基于所述可信度,调整所述答复结果,包括:
根据每个题目对应的可信度,调整所述答复结果中针对每个题目的分值。
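上述"按各类行为数据的可信值及其权重确定可信度,再据此调整题目分值"的过程,可用如下 Python 片段示意;其中行为分析模型输出的可信值、各类行为数据的权重以及"低可信度减一分"的调整策略均为假设数值与示例策略:

```python
def credibility(trust_values, weights):
    """按权重对各类用户行为数据(如声音、肢体动作、表情)的可信值加权平均。"""
    total = sum(weights)
    return sum(t * w for t, w in zip(trust_values, weights)) / total


def adjust_score(score, cred, threshold=0.5, delta=1):
    """示例调整策略:某题可信度低于阈值时,将该题分值下调 delta。"""
    return score - delta if cred < threshold else score


# 假设某题目三类行为数据的可信值与权重(依次为声音、肢体动作、表情)
cred = credibility([0.9, 0.5, 0.2], [0.5, 0.25, 0.25])
print(cred, adjust_score(5, cred))
```

实际系统中,可信值由预设的行为分析模型针对每个题目对应的用户行为子数据给出,权重与调整幅度由开发人员配置。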
通过上述方法,服务器采集当前用户环境及行为状态,确定目标特征的参数,并结合智能家居设备信息,根据测评任务智能推送策略,选择最佳时机、设备、交互方式(即选择第一设备和推送时间),将第一问卷推送给用户进行调研邀约,从而提高调研成功率,并且,多种设备进行推送,也可以提高趣味性,从而提高用户体验度。
在一些实施例中,所述方法还包括:确定至少一个参考特征并确定每个参考特征对应的影响因子;具体包括:
获取预设数量的参考用户的参考数据集;所述参考数据集包括:至少一个参考特征和每个参考特征对应的参数;
根据每个用户的所述参考特征对应的参数,采用决策树、逻辑回归等模型进行模型训练,基于模型训练结果确定各个参考特征对用户调研成功与否的影响因子;所述影响因子表征各个参考特征对结果影响的重要度;
根据各个参考特征对应的影响因子,筛选最重要的N个参考特征,作为所述目标特征。
这里,所述预设数量、所述N由开发人员基于需求预先设定并保存。所述参考用户为调研成功的用户。所述参考数据集表征调研成功的问卷对应的相关数据集。
对上述确定至少一个参考特征并确定每个参考特征对应的影响因子,具体说明如下:
步骤011、获取成功参与智能家居调研的参考数据集,根据参考数据集中的历史用户数据构建相应的用户特征;
这里,用户特征可包括以下至少之一:
用户基本信息特征:年龄、性别、地域等;
用户位置特征:智能家居能检测到的用户位置信息,如客厅、厨房、卧室等;
用户状态特征:智能家居采集的用户忙闲状态、动静状态等,可通过智能摄像头、智能安防传感器等设备采集视频图像信息分析得到;
智能家居设备用户行为偏好:智能家居设备类别、使用时间、频次等;
日期时间特征:是否工作日、当前所处时间段;
智能家居设备信息:设备类型、设备状态、交互方式(语音、触屏、手势等)、所在位置等;
对用户数据进行特征化,表示如下:user_i=[x_i1,x_i2,x_i3,…,x_ik,…,x_in];其中,x_ik为用户i对应的第k个特征变量;
步骤012、特征重要度分析及模型训练;
步骤012具体包括:
基于历史成功邀约调查的历史用户数据,采用决策树、逻辑回归等模型进行模型训练,基于模型训练结果确定各个参考特征对用户调研成功与否的影响因子;所述影响因子表征各个参考特征对结果影响的重要度;
影响因子表示如下:w_ik=f(x_ik),表示用户i的特征变量x_ik对应的影响因子为w_ik;
筛选出最重要的n个特征,则用户i对应的所有特征变量的影响因子表示为:w_i={w_i1,w_i2,w_i3,…,w_ik,…,w_in};
利用以上特征,对模型进行调优,可采用准确率、ROC曲线下面积(AUC,Area Under Curve)等指标进行评估,得到优化后的模型。
对于用户i,将测评任务推送至设备j,调研成功的概率可以采用下式确定:
p(u_i,t_j)=Σ_k w_ik·x_ik
其中,x_ik为用户i对应的第k个特征变量;w_ik为用户i的特征变量x_ik对应的影响因子,可以基于模型训练结果确定;p(u_i,t_j)表示向用户i的设备j发送调查问卷成功的概率。
其中,利用步骤012中训练得到的模型,对将要调研的用户进行智能家居设备预测,得到每个智能家居设备下用户成功参与调研的概率,选择成功概率最大的智能家居设备进行测评任务的推送和调研邀约。
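其中"根据影响因子筛选最重要的N个特征"一步可以用如下 Python 片段示意;影响因子数值为假设,实际由决策树、逻辑回归等模型的训练结果(如特征重要度、回归系数)给出:

```python
def select_top_features(factors, n):
    """factors: {特征名: 影响因子 w_ik};返回影响因子最大的 n 个特征名,作为目标特征。"""
    return sorted(factors, key=factors.get, reverse=True)[:n]


# 假设各参考特征训练得到的影响因子如下(数值仅作演示)
factors = {"设备类型": 0.35, "推送时间": 0.25, "忙闲状态": 0.2,
           "动静状态": 0.12, "用户位置": 0.08}
print(select_top_features(factors, 3))  # ['设备类型', '推送时间', '忙闲状态']
```

筛选出的目标特征随后参与上式 p(u_i,t_j)=Σ_k w_ik·x_ik 的概率计算,未入选的参考特征不再参与。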
相应的,本申请实施例还提供了一种应用于设备侧的测评实现方法,具体结合图2进行说明如下。
图2为本申请实施例提供的另一种测评实现方法的流程示意图;如图2所示,所述方法应用于设备,所述设备具体为以上任意一种智能家居设备;所述设备可以与服务器进行交互;所述方法包括:
步骤201、接收并呈现第一问卷;所述第一问卷基于第一标签集确定;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;
步骤202、获得针对第一问卷的答复结果;
步骤203、将所述答复结果发送给服务器。
在一些实施例中,所述方法还包括:
采集至少一个第一特征的参数;
将采集的至少一个第一特征的参数发送给服务器。
所述第一特征至少包括:用户位置(如客厅、厨房、卧室等)、忙闲状态(一种用户状态特征)、动静状态(一种用户状态特征)等。具体已在图1所示方法中描述,这里不再赘述。
在一些实施例中,所述方法还包括:
采集用户行为数据;相应用户行为数据,包括以下至少之一:声音、肢体动作、表情;所述用户行为数据用于对所述答复结果进行调整;
将所述用户行为数据发送给服务器。
在一些实施例中,所述采集用户行为数据,包括:
确定至少一个采集时间段对应的题目,及至少一个采集时间段中每个采集时间段内采集的用户行为子数据;
所述将所述用户行为数据发送给服务器,包括:将所述至少一个采集时间段中每个采集时间段内采集的用户行为子数据及对应的题目,发送给服务器。
以上相关数据的使用由服务器侧执行,具体已在图1所示方法中说明, 这里不再赘述。
图3为本申请实施例提供的再一种测评实现方法的流程示意图;如图3所示,所述方法应用于服务器,所述服务器包括:调研测评平台、智能家居控制中心;所述方法包括:
步骤301、调研测评平台对用户相关数据进行数据采集与预处理;
其中,用户相关数据,包括:历史用户数据、历史调查测评数据;
所述历史用户数据,包括:基本信息、通信消费、设备信息、业务使用行为、上网行为、所处位置、业务投诉;
所述步骤301具体包括:
获取某一历史时间段内所有用户的历史用户数据(包括上述基本信息、通信消费、设备信息、业务使用行为、上网行为、所处位置、业务投诉)和历史调查测评数据,并进行标准化处理。标准化处理方法已在图1所示方法中说明,这里不做限定。
步骤302、调研测评平台为用户生成定制化测评任务;
步骤302,包括:根据某一用户的历史用户数据、历史调查测评数据生成相应的测评任务。所述测评任务包括:调查问卷。
所述调查问卷相当于图1所述的第一问卷,具体生成方法已在图1所示方法中详细说明,这里不再赘述。
步骤303、将测评任务推送到智能家居控制中心;
所述智能家居控制中心与至少一个智能家居设备进行交互;通过所述智能家居控制中心可以将调查问卷发送给相应的智能家居设备。
步骤304、调研测评平台计算调研最佳时机与设备,推送测评任务给用户;
步骤304具体包括:智能家居控制中心采集当前用户环境及行为状态,将采集的当前用户环境及行为状态发送给调研测评平台;
调研测评平台根据上述当前用户环境及行为状态,并结合智能家居设备信息,运用智能推送策略,选择最佳时机、设备、交互方式,将测评任务推送给用户进行调研邀约。
具体来说,智能推送策略根据成功参与智能家居调研的历史用户数据进行训练得到。训练过程包括:根据历史用户数据构建用户特征;根据用户特征进行特征重要度分析和模型训练;计算得到各个用户特征对用户调研成功与否的影响因子。具体过程已在图1所示方法中说明,这里不再赘述。
所述运用智能推送策略,选择最佳时机、设备、交互方式,包括:
对将要调研的用户进行智能家居设备预测,得到每个智能家居设备下用户成功参与调研的概率,选择成功概率最大的智能家居设备进行测评任务的推送和调研邀约。
所述概率计算可以采用下式:
p(u_i,t_j)=Σ_k w_ik·x_ik
其中,x_ik为用户i对应的第k个特征变量;w_ik为用户i的特征变量x_ik对应的影响因子;p(u_i,t_j)表示向用户i的设备j发送调查问卷成功的概率。
上述特征变量相当于图1所示方法中的目标特征的参数;关于目标特征的参数的取值和影响因子的确定已在图1所示方法中说明,这里不再赘述。
步骤305、用户执行调研时,智能家居控制中心同步采集用户表情、声音、动作等数据,将采集的数据发送给调研测评平台;
这里,智能家居控制中心可以通过与其通信的智能家居设备(某一具有摄像头的设备、某一具有麦克风的设备),同步采集用户表情、声音、动作等数据。
步骤306、判断用户调研是否完成;确定用户调研完成,则进入步骤308,确定用户调研未完成,则进入步骤307;
在一实施例中,步骤306具体包括:调研测评平台判断是否接收到测评任务的答复结果,确定收到所述答复结果,则进入步骤308,确定未收到答复结果,则进入步骤307。
另一实施例中,步骤306具体包括:智能家居控制中心实时判断是否接收到测评任务的答复结果,确定收到所述答复结果,则将答复结果发送到调研测评平台;确定未收到答复结果,则进入步骤307。
步骤307、协同其他智能家居设备,继续调研邀约;
这里,具体可以重新执行步骤304,确定其他智能家居设备进行调研。
步骤308、调研测评平台对调研结果数据多维交叉分析。
这里,依据采集的用户多维数据对用户调研结果进行交叉分析,并综合调研过程中采集到的语音、动作、表情等数据,对调研结果进行协同交叉分析。
步骤3081、确定用户进行调研时的情绪。
具体可以包括执行以下至少之一:
分析语音数据,得到相应的语音语义,根据语音语义确定用户的第一情绪;
分析动作数据(可以为采集的视频数据或图像数据),得到相应的动作信息,根据动作信息确定用户的第二情绪;
分析表情数据(可以为采集的视频数据或图像数据),得到相应的面部信息,根据面部信息确定用户的第三情绪。
步骤3082、根据用户情绪对答题作进一步调整。
具体来说,可以根据上述第一情绪、第二情绪、第三情绪按预设处理策略进行处理,得到目标情绪;根据目标情绪按照预设调整策略对答题结果进行调整(如确定用户觉得麻烦或情绪是负面的,则可以相应对分值减一等)。
具体的预设处理策略由开发人员根据需要设定,例如可以对情绪进行加权处理;具体的预设调整策略由开发人员根据需要设定,例如可以针对某一情绪相应对分值进行调整,或者,标记此结果可参考性不高等。
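上述预设处理策略与预设调整策略的一个极简示意如下;其中情绪合并采用简单多数投票、调整策略为"目标情绪为负面时减一分",均为假设的示例策略,实际可替换为加权处理等其他规则:

```python
from collections import Counter

def merge_emotions(emotions):
    """预设处理策略示例:对语音、动作、表情识别出的情绪做简单多数合并,得到目标情绪。"""
    return Counter(emotions).most_common(1)[0][0]


def adjust_answer(score, target_emotion):
    """预设调整策略示例:目标情绪为负面时,对该题分值减一。"""
    return score - 1 if target_emotion == "负面" else score


# 假设三路识别结果依次为第一情绪(语音)、第二情绪(动作)、第三情绪(表情)
target = merge_emotions(["负面", "负面", "中性"])
print(target, adjust_answer(4, target))  # 负面 3
```

例如用户答题时出现"麻烦""慢"等语义或皱眉等表情,三路情绪多数为负面,则该题分值由4调整为3。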
图4为本申请实施例提供的一种答复结果和多维数据协同交叉分析的示意图;如图4所示,动作信息可以包括:耸肩、掐腰等;语音语义可以包括:麻烦、慢等;面部信息可以包括:皱眉、瞪眼等。
基于上述信息确定用户情绪,根据用户情绪可以对答题结果进行调整。
图5为本申请实施例提供的一种测评实现装置的结构示意图;如图5所示,所述装置应用于服务器,所述装置包括:调研测评平台和智能家居控制中心;
所述智能家居控制中心,主要包括:智能设备特征管理模块、用户行为和状态识别模块、智能推送决策模块、智能交互决策模块、调查进展监控模块及智能设备协同模块;
所述调研测评平台,包括:问卷设计模块、问卷发布模块、存储问卷库模块、多维数据分析模块、配额设置模块、短链接模块、样本模块、防作弊模块、激励模块;
通过上述模块协同处理以完成上述服务器实现的方法。
并且,还可以通过激励模块,提供给用户奖励,以提高用户答复概率;
通过防作弊模块,检测用户回复的答复结果,得到真实性更高的结果;
通过短链接模块,以短链接形式向设备发送调查问卷,以请求设备侧的用户进行问卷答复;
通过配额设置模块,管理调查问卷的发送数量,得到相应数量要求的调查问卷的答复结果。
针对不同的功能可以进行模块划分,但需要说明的是,图5中的模块划分仅仅是一种示例,实际应用中,可以根据需要而将上述处理分配由不同的程序模块完成,即将服务器的内部结构划分成不同的程序模块,以完成以上描述的全部或者部分处理。另外,上述实施例提供的装置与相应方法的实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
图6为本申请实施例提供的另一种测评实现装置的结构示意图;所述装置应用于服务器,如图6所示,所述装置包括:
第一处理模块,配置为基于第一标签集确定第一问卷;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;
以及,获取目标特征的参数,基于所述目标特征的参数确定第一设备;所述目标特征的参数基于环境状态和行为状态确定;
第一通信模块,配置为将所述第一问卷发送到第一设备;所述第一问卷由所述第一设备呈现;
以及,接收所述第一设备发送的针对第一问卷的答复结果。
具体地,所述第一处理模块,配置为基于所述第一标签集中的用户偏好标签与问卷题库中的每个题目之间的相关性,确定每个所述题目的推荐度;基于每个所述题目的推荐度,确定第一目标题目;所述问卷题库包括至少一个题目;
确定所述第一目标题目和所述问卷题库中除第一目标题目外的至少一个第一题目的关联度;基于每个所述第一题目对应的关联度和推荐度,确定每个所述第一题目的第一综合分值;基于每个所述第一题目对应的第一综合分值,确定第二目标题目;
确定所述第二目标题目和所述问卷题库中除第一目标题目及第二目标题目外的至少一个第二题目的关联度;基于每个所述第二题目对应的关联 度和推荐度,确定每个所述第二题目对应的第二综合分值;基于每个所述第二题目对应的第二综合分值,确定第三目标题目;
依此循环,直至确定预设数量的目标题目,基于所述预设数量的目标题目得到所述第一问卷。
具体地,所述第一处理模块,配置为获取至少一个第一特征的参数;
从所述至少一个第一特征中选择符合预设特征要求的第一特征,作为目标特征;
根据所述目标特征的参数,得到针对至少一个候选设备的概率;所述概率表征通过相应候选设备进行问卷测评可得到答复结果的可能性;
基于所述至少一个候选设备中每个候选设备对应的概率,确定符合预设概率要求的目标设备;所述符合预设概率要求表征概率超过预设阈值。
具体地,所述目标特征的数量为至少一个;
所述第一处理模块,配置为根据至少一个目标特征中每个所述目标特征的参数,计算每个所述目标特征对应的影响因子和参数的乘积;
基于每个所述目标特征对应的影响因子和所述目标特征的参数的乘积,得到针对至少一个候选设备的概率。
具体地,所述第一通信模块,还配置为获取第一时间段内的用户行为数据;所述第一时间段表征生成答复结果的时间;
所述第一处理模块,还配置为基于所述用户行为数据,确定针对所述答复结果的可信度;
基于所述可信度,调整所述答复结果,得到目标答复结果。
具体地,所述用户行为数据,包括以下至少之一:声音、肢体动作、表情;
所述第一处理模块,还配置为运用预设的行为分析模型分析所述声音、所述肢体动作、所述表情中的至少之一,得到至少一个可信值;
根据所述至少一个可信值和各用户行为数据对应的权重,确定可信度。
具体地,所述用户行为数据,包括:针对每个题目的用户行为子数据;
所述第一处理模块,还配置为运用预设的行为分析模型分析每个题目对应的用户行为子数据,得到每个题目对应的至少一个可信值;
根据每个题目对应的所述至少一个可信值和各用户行为数据对应的权重,确定针对每个题目的可信度;
相应的,所述第一处理模块,还配置为根据每个题目对应的可信度,调整所述答复结果中针对每个题目的分值。
需要说明的是:上述实施例提供的测评实现装置在实现相应测评实现方法时,仅以上述各程序模块的划分进行举例说明,实际应用中,可以根据需要而将上述处理分配由不同的程序模块完成,即将服务器的内部结构划分成不同的程序模块,以完成以上描述的全部或者部分处理。另外,上述实施例提供的装置与相应方法的实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
图7为本申请实施例提供的另一种测评实现装置的结构示意图;所述装置应用于设备,如图7所示,所述装置包括:
第二通信模块,配置为接收并呈现第一问卷;所述第一问卷基于第一标签集确定;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;
第二处理模块,配置为获得针对第一问卷的答复结果;
所述第二通信模块,还配置为将所述答复结果发送给服务器。
具体地,所述装置还包括:采集模块,配置为采集至少一个第一特征的参数;
所述第二通信模块,还配置为将采集的至少一个第一特征的参数发送给服务器。
具体地,所述采集模块,还配置为采集用户行为数据;相应用户行为数据,包括以下至少之一:声音、肢体动作、表情;所述用户行为数据用于对所述答复结果进行调整;
所述第二通信模块,还配置为将所述用户行为数据发送给服务器。
具体地,所述采集模块,配置为确定至少一个采集时间段对应的题目,及至少一个采集时间段中每个采集时间段内采集的用户行为子数据;
相应的,所述第二通信模块,配置为将所述至少一个采集时间段中每个采集时间段内采集的用户行为子数据及对应的题目,发送给服务器。
需要说明的是:上述实施例提供的测评实现装置在实现相应测评实现方法时,仅以上述各程序模块的划分进行举例说明,实际应用中,可以根据需要而将上述处理分配由不同的程序模块完成,即将相应设备的内部结构划分成不同的程序模块,以完成以上描述的全部或者部分处理。另外,上述实施例提供的装置与相应方法的实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
图8为本申请实施例提供的一种测评实现装置的结构示意图;如图8所示,所述装置80包括:处理器801和配置为存储能够在所述处理器上运行的计算机程序的存储器802;
其中,所述装置可以应用于服务器时,所述处理器801配置为运行所述计算机程序时,执行:基于第一标签集确定第一问卷;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;获取目标特征的参数,基于所述目标特征的参数确定第一设备;所述目标特征的参数基于环境状态和行为状态确定;将所述第一问卷发送到第一设备;所述第一问卷由所述第一设备呈现;接收所述第一设备发送的针对第一问卷的答复结果。
所述处理器运行所述计算机程序时实现本申请实施例的各个方法中由 服务器的相应流程,为了简洁,在此不再赘述。
其中,所述装置可以应用于设备时,所述处理器801配置为运行所述计算机程序时,执行:接收并呈现第一问卷;所述第一问卷基于第一标签集确定;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;获得针对第一问卷的答复结果。
所述处理器运行所述计算机程序时实现本申请实施例的各个方法中由设备的相应流程,为了简洁,在此不再赘述。
实际应用时,所述装置80还可以包括:至少一个网络接口803。所述装置80中的各个组件通过总线系统804耦合在一起。可理解,总线系统804用于实现这些组件之间的连接通信。总线系统804除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图8中将各种总线都标为总线系统804。其中,所述处理器801的个数可以为至少一个。网络接口803用于装置80与其他设备之间有线或无线方式的通信。
本申请实施例中的存储器802用于存储各种类型的数据以支持装置80的操作。
上述本申请实施例揭示的方法可以应用于处理器801中,或者由处理器801实现。处理器801可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器801中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器801可以是通用处理器、数字信号处理器(DSP,DiGital Signal Processor),或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。处理器801可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本申请实施例所公开的方法的步骤,可以直接体现为硬件译码处理器执行完成,或者用译码处理器 中的硬件及软件模块组合执行完成。软件模块可以位于存储介质中,该存储介质位于存储器802,处理器801读取存储器802中的信息,结合其硬件完成前述方法的步骤。
在示例性实施例中,装置80可以被一个或多个应用专用集成电路(ASIC,Application Specific Integrated Circuit)、DSP、可编程逻辑器件(PLD,Programmable Logic Device)、复杂可编程逻辑器件(CPLD,Complex Programmable Logic Device)、现场可编程门阵列(FPGA,Field-Programmable Gate Array)、通用处理器、控制器、微控制器(MCU,Micro Controller Unit)、微处理器(Microprocessor)、或其他电子元件实现,用于执行前述方法。
本申请实施例还提供了一种计算机可读存储介质,其上存储有计算机程序;
其中,所述计算机可读存储介质应用于服务器时,所述计算机程序被处理器运行时,执行:基于第一标签集确定第一问卷;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;获取目标特征的参数,基于所述目标特征的参数确定第一设备;所述目标特征的参数基于环境状态和行为状态确定;将所述第一问卷发送到第一设备;所述第一问卷由所述第一设备呈现;接收所述第一设备发送的针对第一问卷的答复结果。所述计算机程序被处理器运行时实现本申请实施例的各个方法中由服务器实现的相应流程,为了简洁,在此不再赘述。
所述计算机可读存储介质应用于设备时,所述计算机程序被处理器运行时,执行:接收并呈现第一问卷;所述第一问卷基于第一标签集确定;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;获得针对第一问卷的答复结果。其中,所述计算机程序被处理器运行时实现本申请实施例的各个方法中由设备实现的相应流程,为了简洁,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个单元或组件可以结合,或可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或单元的间接耦合或通信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为单元显示的部件可以是、或也可以不是物理单元,即可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本申请各实施例中的各功能单元可以全部集成在一个处理单元中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本申请上述集成的单元如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一 个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机、服务器、或者网络设备等)执行本申请各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以所述权利要求的保护范围为准。

Claims (15)

  1. 一种测评实现方法,应用于服务器,所述方法包括:
    基于第一标签集确定第一问卷;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;
    获取目标特征的参数,基于所述目标特征的参数确定第一设备;所述目标特征的参数基于环境状态和行为状态确定;
    将所述第一问卷发送到第一设备;所述第一问卷由所述第一设备呈现;
    接收所述第一设备发送的针对第一问卷的答复结果。
  2. 根据权利要求1所述的方法,其中,所述基于第一标签集确定第一问卷,包括:
    基于所述第一标签集中的用户偏好标签与问卷题库中的每个题目之间的相关性,确定每个所述题目的推荐度;基于每个所述题目的推荐度,确定第一目标题目;所述问卷题库包括至少一个题目;
    确定所述第一目标题目和所述问卷题库中除第一目标题目外的至少一个第一题目的关联度;基于每个所述第一题目对应的关联度和推荐度,确定每个所述第一题目的第一综合分值;基于每个所述第一题目对应的第一综合分值,确定第二目标题目;
    确定所述第二目标题目和所述问卷题库中除第一目标题目及第二目标题目外的至少一个第二题目的关联度;基于每个所述第二题目对应的关联度和推荐度,确定每个所述第二题目对应的第二综合分值;基于每个所述第二题目对应的第二综合分值,确定第三目标题目;
    依此循环,直至确定预设数量的目标题目,基于所述预设数量的目标题目得到所述第一问卷。
  3. 根据权利要求1或2所述的方法,其中,所述获取目标特征的参数,基于所述目标特征的参数确定第一设备,包括:
    获取至少一个第一特征的参数;
    从所述至少一个第一特征中选择符合预设特征要求的第一特征,作为目标特征;
    根据所述目标特征的参数,得到针对至少一个候选设备的概率;所述概率表征通过相应候选设备进行问卷测评可得到答复结果的可能性;
    基于所述至少一个候选设备中每个候选设备对应的概率,确定符合预设概率要求的目标设备;所述符合预设概率要求表征概率超过预设阈值。
  4. 根据权利要求3所述的方法,其中,所述目标特征的数量为至少一个;
    所述根据所述目标特征的参数,得到针对至少一个候选设备的概率,包括:
    根据至少一个目标特征中每个所述目标特征的参数,计算每个所述目标特征对应的影响因子和参数的乘积;
    基于每个所述目标特征对应的影响因子和所述目标特征的参数的乘积,得到针对至少一个候选设备的概率。
  5. 根据权利要求1所述的方法,其中,所述方法还包括:
    获取第一时间段内的用户行为数据;所述第一时间段表征生成答复结果的时间;
    基于所述用户行为数据,确定针对所述答复结果的可信度;
    基于所述可信度,调整所述答复结果,得到目标答复结果。
  6. 根据权利要求5所述的方法,其中,所述用户行为数据,包括以下至少之一:声音、肢体动作、表情;
    所述基于所述用户行为数据,确定针对答复结果的可信度,包括:
    运用预设的行为分析模型分析所述声音、所述肢体动作、所述表情中的至少之一,得到至少一个可信值;
    根据所述至少一个可信值和各用户行为数据对应的权重,确定可信度。
  7. 根据权利要求6所述的方法,其中,所述用户行为数据,包括:针对每个题目的用户行为子数据;
    所述基于所述用户行为数据,确定针对答复结果的可信度,包括:
    运用预设的行为分析模型分析每个题目对应的用户行为子数据,得到每个题目对应的至少一个可信值;
    根据每个题目对应的所述至少一个可信值和各用户行为数据对应的权重,确定针对每个题目的可信度;
    相应的,所述基于所述可信度,调整所述答复结果,包括:
    根据每个题目对应的可信度,调整所述答复结果中针对每个题目的分值。
  8. 一种测评实现方法,应用于设备,所述方法包括:
    接收并呈现第一问卷;所述第一问卷基于第一标签集确定;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;
    获得针对第一问卷的答复结果;
    将所述答复结果发送给服务器。
  9. 根据权利要求8所述的方法,其中,所述方法还包括:
    采集至少一个第一特征的参数;
    将采集的至少一个第一特征的参数发送给服务器。
  10. 根据权利要求8所述的方法,其中,所述方法还包括:
    采集用户行为数据;相应用户行为数据,包括以下至少之一:声音、肢体动作、表情;所述用户行为数据用于对所述答复结果进行调整;
    将所述用户行为数据发送给服务器。
  11. 根据权利要求10所述的方法,其中,所述采集用户行为数据,包括:
    确定至少一个采集时间段对应的题目,及至少一个采集时间段中每个采集时间段内采集的用户行为子数据;
    所述将所述用户行为数据发送给服务器,包括:将所述至少一个采集时间段中每个采集时间段内采集的用户行为子数据及对应的题目,发送给服务器。
  12. 一种测评实现装置,应用于服务器,所述装置包括:
    第一处理模块,配置为基于第一标签集确定第一问卷;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;
    以及,获取目标特征的参数,基于所述目标特征的参数确定第一设备;所述目标特征的参数基于环境状态和行为状态确定;
    第一通信模块,配置为将所述第一问卷发送到第一设备;所述第一问卷由所述第一设备呈现;
    以及,接收所述第一设备发送的针对第一问卷的答复结果。
  13. 一种测评实现装置,应用于设备,所述装置包括:
    第二通信模块,配置为接收并呈现第一问卷;所述第一问卷基于第一标签集确定;所述第一标签集,包括:用户偏好标签;所述用户偏好标签基于历史用户数据确定;
    第二处理模块,配置为获得针对第一问卷的答复结果;
    所述第二通信模块,还配置为将所述答复结果发送给服务器。
  14. 一种测评实现装置,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时实现权利要求1至7任一项所述方法的步骤;或者,
    所述处理器执行所述程序时实现权利要求8至11任一项所述方法的步骤。
  15. 一种计算机可读存储介质,其上存储有计算机程序,所述计算机 程序被处理器执行时实现权利要求1至7任一项所述方法的步骤;或者,
    所述计算机程序被处理器执行时实现权利要求8至11任一项所述方法的步骤。
PCT/CN2021/115330 2020-09-01 2021-08-30 一种测评实现方法、装置和存储介质 WO2022048515A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010902906.8A CN114119055A (zh) 2020-09-01 2020-09-01 一种测评实现方法、装置和存储介质
CN202010902906.8 2020-09-01

Publications (1)

Publication Number Publication Date
WO2022048515A1 true WO2022048515A1 (zh) 2022-03-10

Family

ID=80360655

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/115330 WO2022048515A1 (zh) 2020-09-01 2021-08-30 一种测评实现方法、装置和存储介质

Country Status (2)

Country Link
CN (1) CN114119055A (zh)
WO (1) WO2022048515A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116151694A (zh) * 2023-04-19 2023-05-23 中国建筑科学研究院有限公司 数据管理系统、方法及计算设备
CN117557005A (zh) * 2024-01-08 2024-02-13 北京沃东天骏信息技术有限公司 调研数据处理方法、装置和存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002312645A (ja) * 2001-04-17 2002-10-25 Fujitsu Ltd 情報処理装置、プログラム、および商品配達データを処理する方法
CN107368724A (zh) * 2017-06-14 2017-11-21 广东数相智能科技有限公司 基于声纹识别的防作弊网络调研方法、电子设备及存储介质
CN108197305A (zh) * 2018-01-30 2018-06-22 深圳壹账通智能科技有限公司 调查问卷测评处理方法、装置、计算机设备和存储介质
CN109166386A (zh) * 2018-10-25 2019-01-08 重庆鲁班机器人技术研究院有限公司 儿童逻辑思维辅助训练方法、装置及机器人
CN109784654A (zh) * 2018-12-17 2019-05-21 平安国际融资租赁有限公司 任务生成方法、装置、计算机设备和存储介质
CN109902250A (zh) * 2019-01-14 2019-06-18 平安科技(深圳)有限公司 问卷调查的共享方法、共享装置、计算机设备及存储介质


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116151694A (zh) * 2023-04-19 2023-05-23 中国建筑科学研究院有限公司 数据管理系统、方法及计算设备
CN116151694B (zh) * 2023-04-19 2023-06-20 中国建筑科学研究院有限公司 数据管理系统、方法及计算设备
CN117557005A (zh) * 2024-01-08 2024-02-13 北京沃东天骏信息技术有限公司 调研数据处理方法、装置和存储介质

Also Published As

Publication number Publication date
CN114119055A (zh) 2022-03-01

Similar Documents

Publication Publication Date Title
CN110245213B (zh) 调查问卷生成方法、装置、设备和存储介质
US9881062B2 (en) Operation and method for prediction and management of the validity of subject reported data
US20190102652A1 (en) Information pushing method, storage medium and server
US8577716B2 (en) System and method of ongoing evaluation reporting and analysis
WO2022048515A1 (zh) 一种测评实现方法、装置和存储介质
CN110235154A (zh) 使用特征关键词将会议与项目进行关联
US20200118168A1 (en) Advertising method, device and system, and computer-readable storage medium
CN108681970A (zh) 基于大数据的理财产品推送方法、系统及计算机存储介质
TW202133005A (zh) 在線資料採集的方法及系統
US20200273050A1 (en) Systems and methods for predicting subscriber churn in renewals of subscription products and for automatically supporting subscriber-subscription provider relationship development to avoid subscriber churn
WO2019137485A1 (zh) 一种业务分值的确定方法、装置及存储介质
WO2013119280A1 (en) Tools and methods for determining relationship values
WO2021155691A1 (zh) 用户画像生成方法、装置、存储介质及设备
EP3594864A1 (en) Factor inference device, factor inference system, and factor inference method
CA2822077A1 (en) Method and apparatus for segmenting a customer base and interacting with customers of the segmented customer base
US20120221345A1 (en) Helping people with their health
CN109785000A (zh) 客户资源分配方法、装置、存储介质和终端
US11704705B2 (en) Systems and methods for an intelligent sourcing engine for study participants
US20210312000A1 (en) Live bi-directional video/audio feed generation between a consumer and a service provider
CN113163063B (zh) 智能外呼系统及方法
US20230325944A1 (en) Adaptive wellness collaborative media system
GB2517358A (en) Recommendation creation system
US20210173823A1 (en) Network-based content submission and contest management
CN113706298A (zh) 一种延期业务处理方法及装置
CN113127755A (zh) 一种人工智能虚拟形象信息推荐算法系统及方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21863580

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29.06.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21863580

Country of ref document: EP

Kind code of ref document: A1