WO2022111071A1 - Method, apparatus, server and storage medium for generating a user portrait - Google Patents


Info

Publication number
WO2022111071A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
portrait
portraits
data
user data
Application number
PCT/CN2021/122906
Other languages
English (en)
French (fr)
Inventor
金越
李亚乾
郭彦东
杨林
Original Assignee
Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司
Publication of WO2022111071A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/95 - Retrieval from the web
    • G06F 16/953 - Querying, e.g. by the use of web search engines
    • G06F 16/9535 - Search customisation based on user profiles and personalisation

Definitions

  • the present application relates to the field of computer technologies, and more particularly, to a method, device, server, and storage medium for generating a user portrait.
  • relevant recommendation systems can push information based on user portraits.
  • the present application proposes a method, device, server and storage medium for generating a user portrait to address the above problems.
  • the present application provides a user portrait generation method, the method including: responding to a user portrait generation instruction; acquiring user data separately collected by multiple devices bound to the same user ID, to obtain multiple types of user data; and generating, based on the multiple types of user data, a user portrait corresponding to the user ID.
  • the present application provides a user portrait generation device, the device including: a portrait generation scene trigger unit, configured to respond to a user portrait generation instruction; a data acquisition unit, configured to acquire user data separately collected by multiple devices bound to the same user ID, to obtain multiple types of user data; and a portrait generation unit, configured to generate a user portrait corresponding to the user ID based on the multiple types of user data.
  • the present application provides a server including a processor and a memory; one or more programs are stored in the memory and configured to be executed by the processor to implement the above method.
  • the present application provides a computer-readable storage medium storing a program code, where the above-mentioned method is executed when the program code is run by a processor.
  • FIG. 1 shows a schematic diagram of an application environment of the user portrait generation method proposed in the embodiment of the present application
  • FIG. 2 is a schematic diagram of another application environment of the user portrait generation method proposed in the embodiment of the present application.
  • FIG. 3 shows a schematic diagram of yet another application environment of the user portrait generation method proposed in the embodiment of the present application
  • FIG. 4 shows a flowchart of a method for generating a user portrait proposed by an embodiment of the present application
  • FIG. 5 shows a flowchart of a method for generating a user portrait proposed by another embodiment of the present application
  • FIG. 6 shows a flowchart of a method for generating a user portrait proposed by another embodiment of the present application
  • FIG. 7 shows a flowchart of a method for generating a user portrait proposed by another embodiment of the present application.
  • FIG. 8 shows a structural block diagram of a user portrait generation device proposed by an embodiment of the present application.
  • FIG. 9 shows a structural block diagram of a user portrait generation device proposed by another embodiment of the present application.
  • FIG. 10 shows a structural block diagram of an electronic device of the present application for executing the method for generating a user portrait according to an embodiment of the present application
  • FIG. 11 shows a storage unit of an embodiment of the present application for storing or carrying a program code that implements the method for generating a user portrait according to the embodiments of the present application.
  • User portraits are a very popular research direction in related fields.
  • After collecting a user's behavior data, a user portrait tag (i.e., a user portrait of that user) of the related user can be generated. After the user portrait tag is generated, more targeted content can be pushed to the user, which can reduce push operation costs.
  • the generated user portrait label may represent that the user likes sports, the user likes to eat rice, and the user likes to watch sports games.
  • a message matching the user's portrait can be pushed.
  • if the user is using a food-ordering app, more information about rice dishes can be pushed.
  • similarly, more video content about sports games can be pushed to the user.
  • the inventor found that in the related user portrait generation method, there is still a problem that the generated user portrait is not precise enough, which will also cause the information pushed based on the user portrait to be inaccurate, resulting in a certain waste of resources.
  • the user data on which the relevant user portrait is generated is generally collected based on a single platform.
  • the user data of the user is usually obtained only through a shopping platform or a software download platform.
  • In view of this, the inventor proposes the user portrait generation method, device, server and storage medium provided by the present application: after responding to a user portrait generation instruction, different types of user data separately collected by multiple devices bound to the same user ID are obtained, so as to obtain multiple types of user data, and a user portrait corresponding to the user ID is then generated based on the multiple types of user data. Therefore, through the aforementioned method, for the same user, different types of user data can be collected from the multiple devices bound to that user, and a user portrait of the user can then be generated according to the different types of user data collected from different devices, which helps make the generated user portrait more detailed and accurate. Furthermore, a more accurate user portrait helps the pushed information match the user better when information is pushed based on the user portrait.
  • the application environment shown in FIG. 1 includes a device 100 and a first server 110 , where the first server 110 can be used to run a business system.
  • the method for generating a user portrait provided by the embodiment of the present application may be executed in the first server 110 .
  • the device 100 can perform data interaction with the first server 110 through the network.
  • the device 100 may send the collected user data to the first server 110 .
  • a user portrait is generated according to user data collected by multiple devices.
  • the user data collected by the device is uploaded to the first server 110 .
  • the multiple devices 100 may each directly communicate with the first server 110 .
  • alternatively, the multiple devices 100 may first communicate with the gateway 120 , and the gateway 120 then centrally uploads the user data collected by the multiple devices to the first server 110 .
  • the method may also be run jointly by the first server 110 and the second server 130 .
  • the first server 110 may be responsible for obtaining the user data separately collected by the multiple devices bound to the same user ID, to obtain the multiple types of user data; the second server 130 then performs the subsequent generation of the user portrait corresponding to the user ID based on the multiple types of user data.
  • each step in the method for generating a user portrait provided in this embodiment may also be configured to be executed by a separate server.
  • the first server 110 and the second server 130 may each be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms.
  • the electronic device where the device 100 is located may be a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart watch, etc., but is not limited thereto.
  • a method for generating a user portrait provided by an embodiment of the present application includes:
  • the user portrait generation scene may be entered in response to the user portrait generation instruction, where the user portrait generation scene represents the scene in which the user portrait is generated. After entering the user portrait generation scene, acquisition of user data starts for the subsequent user-portrait-generation steps.
  • the user portrait generation instruction may be triggered periodically, or may be triggered in response to a received generation instruction.
  • S120 Acquire user data separately collected by multiple devices bound to the same user ID, and obtain multiple types of user data.
  • multiple devices may be bound to the same user ID, and then subsequent user portrait generation may be performed by using user data collected by multiple devices bound to the same user ID.
  • the types of user data collected by at least two devices are different, so that multiple types of user data can be acquired.
  • multiple devices bound to the same user ID include smart phones, wearable devices, and smart TVs.
  • the user data collected by the smart phone represents the running status of the application program in the smart phone.
  • the smartphone usually executes the corresponding function through the installed application program.
  • the smartphone may collect statistics on the running time and the running time period of each application as user data.
  • the user data collected by the wearable device represents the user's health and movement.
  • the wearable device can collect data representing the user's health and exercise conditions, such as the user's heart rate, number of exercise steps, and exercise distance, and use it as user data.
  • the user data collected by the smart TV represents the user's video viewing preference.
  • the user can browse video programs through the smart TV, and then the smart TV can collect the user's preference for video programs as user data.
  • the types of user data collected by different devices may be partly the same, or may be completely different. Partly the same can be understood as: among the different types of user data collected by different devices, there is user data of the same type. Exemplarily, both a smart phone and a smart TV can collect user data of the type "user preference for video programs". Completely different can be understood as: there is no overlap between the types of user data collected by different devices. Exemplarily, if the multiple devices bound to the same user ID include device A, device B, and device C, then device A may collect user data of type a, device B may collect user data of type b, and device C may collect user data of type c, where type a, type b, and type c are completely different types.
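The patent itself gives no code; the following Python sketch merely illustrates the data-aggregation idea above, merging per-device reports into a per-type collection for one user ID. All device names, type labels, and record fields are hypothetical examples, and types are allowed to overlap across devices or be completely disjoint.

```python
def merge_user_data(device_reports):
    """Merge per-device reports into {data_type: [records]} for one user ID.

    Types may overlap across devices (two devices both reporting
    'video_preference') or be completely disjoint.
    """
    merged = {}
    for device, report in device_reports.items():
        for data_type, records in report.items():
            # Tag each record with the device that collected it.
            merged.setdefault(data_type, []).extend(
                {"device": device, **r} for r in records
            )
    return merged

# Hypothetical reports from three devices bound to the same user ID.
reports = {
    "smartphone": {
        "app_usage": [{"app": "sports_app", "minutes": 95}],
        "video_preference": [{"genre": "news"}],       # overlaps with smart_tv
    },
    "wearable": {"health": [{"heart_rate": 88, "steps": 9000}]},
    "smart_tv": {"video_preference": [{"genre": "sports"}]},
}
merged = merge_user_data(reports)
```

Keying the merged data by type rather than by device makes the later per-type portrait generation straightforward, since each reference portrait is derived from one data type regardless of which device supplied it.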
  • the user data includes data extracted from locally collected images or network images.
  • the data may be extracted from locally collected images or network images based on CV (Computer Vision) technology.
  • the locally collected images may include pictures or videos collected by the device through its own image collection apparatus.
  • the device may collect pictures or videos in response to an instruction triggered by the user, or may periodically collect pictures or videos based on a specified event.
  • the network image includes images downloaded by the device over the network. Exemplarily, when a user browses his own web album through the device, the device can download the image of the web album to the local, so that user data can be extracted from it later.
  • S130 Generate a user portrait corresponding to the user identifier based on the multi-type user data.
  • multiple types of user data can represent user preferences from different dimensions, so a user portrait corresponding to the user ID generated based on the multiple types of user data can reflect user preferences more comprehensively and in finer detail.
  • for example, if the use time of a sports application program is greater than a set time threshold, it can be determined that the user to which the user ID belongs is a user of the sports-preference category, and the generated user portrait may then include a portrait of sports preference.
  • correspondingly, if the user data collected by the wearable device bound to the same user ID indicates that the user's heart rate is always in a high state in the evening, it can be determined that the user to which the user ID belongs prefers to exercise in the evening.
  • the generated user portrait can then include a portrait of exercising in the evening; combining the portrait of sports preference with the portrait of exercising in the evening, the user portrait corresponding to the user ID is generated as a user who likes to exercise and likes to exercise in the evening.
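The rule-based derivation just described can be sketched in a few lines of Python. This is not the patent's implementation: the 60-minute usage threshold, the 100 bpm "high heart rate" cutoff, and the label strings are all assumptions chosen only to make the example concrete.

```python
USAGE_THRESHOLD_MIN = 60  # assumed threshold for a "sports preference" label

def portraits_from_data(app_usage_min, evening_heart_rates):
    """Derive portrait labels from two data types collected by two devices."""
    labels = []
    # Smartphone data: long sports-app usage implies a sports preference.
    if app_usage_min > USAGE_THRESHOLD_MIN:
        labels.append("likes exercise")
    # Wearable data: a consistently high evening heart rate (taken here as an
    # average above 100 bpm, an assumption) implies evening exercise.
    if evening_heart_rates and sum(evening_heart_rates) / len(evening_heart_rates) > 100:
        labels.append("prefers exercising in the evening")
    return labels

labels = portraits_from_data(app_usage_min=95, evening_heart_rates=[110, 120, 105])
```

With both conditions met, the combined labels describe a user who likes to exercise and prefers exercising in the evening, matching the example above.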
  • In the method for generating user portraits provided by this embodiment, after a user portrait generation instruction is responded to, different types of user data separately collected by multiple devices bound to the same user ID are obtained, so as to obtain multiple types of user data, and a user portrait corresponding to the user ID is generated. Therefore, through the aforementioned method, for the same user, different types of user data can be collected from the multiple devices bound to that user, and a user portrait of the user can then be generated according to the different types of user data collected from different devices, which helps make the generated user portrait more detailed and accurate. Furthermore, a more accurate user portrait helps the pushed information match the user better when information is pushed based on the user portrait.
  • a method for generating a user portrait provided by an embodiment of the present application includes:
  • S220 Acquire user data separately collected by multiple devices bound to the same user ID, and obtain multiple types of user data.
  • S230 Generate respective reference user portraits corresponding to the multiple types of user data, and obtain multiple reference user portraits.
  • a corresponding user portrait may be separately generated as a reference user portrait.
  • the reference user portrait may be a user portrait reflecting the user's preference in a specified aspect.
  • the multiple devices bound to the user ID may include a smart phone, a wireless headset, a smart bracelet, and a smart TV.
  • the reference user portrait generated based on the user data collected by the smartphone may be a user portrait reflecting the user's preference in the aspect of using the application.
  • the reference user portrait generated based on the user data collected by the wireless headset may be a user portrait reflecting the user's preference in terms of listening habits and usage.
  • the reference user portrait generated based on the user data collected by the smart bracelet can be a user portrait reflecting the user's health.
  • the reference user portrait generated based on the user data collected by the smart TV may be a user portrait reflecting the user's preference in watching video programs.
  • S240 Generate a user portrait corresponding to the user identifier based on the multiple reference user portraits.
  • the reference user portrait may represent the user portrait in a certain aspect, and then the user portrait corresponding to the user ID generated based on multiple reference user portraits may be more comprehensive and refined.
  • there may be various ways of generating the user portrait corresponding to the user ID based on the multiple reference user portraits.
  • the user portrait includes multiple portrait description parameters
  • the generating the user portrait corresponding to the user ID based on the multiple reference user portraits includes: matching the multiple reference user portraits one by one with the multiple portrait description parameters, and obtaining the reference user portrait matched by each of the multiple portrait description parameters, so as to generate the user portrait corresponding to the user ID.
  • the portrait description parameters can be understood as description content that further defines the user portrait; when the user portrait includes multiple portrait description parameters, the multiple portrait description parameters can be used to generate the user portrait more comprehensively and finely.
  • the portrait description parameters may include exercise time information, exercise location information, listening habit information, and video program hobby information, and then the user portraits can be generated more comprehensively and finely through these portrait description parameters.
  • the reference user portrait is also a user portrait used to represent the user's preference in a certain aspect. By matching the multiple reference user portraits one by one with the multiple portrait description parameters, more portrait description parameters can conveniently be matched with corresponding content. It can be understood that, in this embodiment, the reference user portrait that matches a portrait description parameter is used as the content corresponding to that portrait description parameter.
  • each of the multiple reference user portraits is matched with the multiple portrait description parameters.
  • the multiple reference user portraits include reference user portrait A, reference user portrait B, reference user portrait C, and reference user portrait D.
  • the plurality of profile description parameters include profile description parameter a, profile description parameter b, profile description parameter c, profile description parameter d, profile description parameter e, and profile description parameter f.
  • the reference user profile A will be matched with profile description parameter a, profile description parameter b, profile description parameter c, profile description parameter d, profile description parameter e, and profile description parameter f.
  • subsequently, reference user portrait B, reference user portrait C, and reference user portrait D will each be matched in the same way as reference user portrait A.
  • the matching may be based on semantic content.
  • a semantic content template can be pre-corresponded to each portrait description parameter.
  • the content of the reference user portrait can be semantically matched against the semantic content template corresponding to each portrait description parameter, so as to determine the reference user portrait that matches each portrait description parameter.
  • the exercise time information represents when the user prefers to exercise, then the semantic content template corresponding to the exercise time information can be "the user likes to exercise at t1" or "t1".
  • if the content of a reference user portrait is "the user likes to exercise at t2", that semantic content can be matched with the semantic content template corresponding to the exercise time information, and it can then be determined that the content corresponding to the exercise time information is "the user likes to exercise at t2".
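One very simple way to realize the template matching described above is pattern matching over the portrait text. The patent does not specify a mechanism, so the sketch below stands in with regular expressions; the parameter names, template patterns, and portrait strings are all illustrative assumptions.

```python
import re

# Hypothetical semantic content templates, one per portrait description
# parameter, expressed as regular expressions over the portrait text.
TEMPLATES = {
    "exercise_time": re.compile(r"likes to exercise (in|at) \w+"),
    "video_hobby":   re.compile(r"likes to watch \w+"),
}

def match_portraits(reference_portraits, templates):
    """Assign to each parameter the first reference portrait fitting its template."""
    matched = {}
    for param, pattern in templates.items():
        for portrait in reference_portraits:
            if pattern.search(portrait):
                matched[param] = portrait
                break  # one matched reference portrait per parameter
    return matched

result = match_portraits(
    ["the user likes to exercise in the evening",
     "the user likes to watch sports"],
    TEMPLATES,
)
```

A production system would presumably use a semantic similarity model rather than literal patterns, but the control flow, matching every reference portrait against every parameter's template, is the same.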
  • in some embodiments, matching the multiple reference user portraits one by one with the multiple portrait description parameters, and obtaining the reference user portraits matched by the multiple portrait description parameters, so as to generate the user portrait corresponding to the user identification, may include:
  • the abnormal portraits in the multiple reference user portraits are deleted to obtain the remaining reference user portraits, wherein the abnormal portraits are portraits selected by the user for deletion.
  • a device bound with the user ID may not be used by the device-bound user at all times, and the user data collected while the device is not used by the device-bound user may not reflect the device-bound user's preferences.
  • the multiple reference user portraits can be pushed to the device-bound user, so that the device-bound user can confirm them.
  • the device-bound user can choose the portrait that needs to be deleted, and then determine the portrait that the user chooses to delete as an abnormal portrait.
  • the remaining reference user portraits are matched with the multiple portrait description parameters one by one, and the reference user portraits matched with each of the multiple portrait description parameters are obtained, so as to generate a user portrait corresponding to the user ID.
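The abnormal-portrait filtering step above amounts to removing user-flagged portraits before the parameter matching runs. A minimal sketch, with illustrative portrait strings (not taken from the patent):

```python
def remaining_portraits(reference_portraits, user_flagged_abnormal):
    """Drop the portraits the device-bound user selected for deletion."""
    abnormal = set(user_flagged_abnormal)
    return [p for p in reference_portraits if p not in abnormal]

remaining = remaining_portraits(
    ["likes hip-hop", "watches cartoons", "exercises in the evening"],
    ["watches cartoons"],  # e.g. collected while a relative used the smart TV
)
```

Only the remaining portraits would then be matched one by one with the portrait description parameters.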
  • the generating the user portraits corresponding to the user IDs based on the multiple reference user portraits includes: detecting whether there is a pair of user portraits that do not match each other among the multiple reference user portraits; if there is a pair of user portraits that do not match each other, obtaining a first user portrait, where the first user portrait is a user portrait other than the pair of user portraits that do not match each other among the multiple reference user portraits; updating, based on the first user portrait, the portrait content represented by the pair of user portraits that do not match each other, to obtain a second user portrait; and generating, based on the first user portrait and the second user portrait, the user portrait corresponding to the user ID.
  • although the types of user data collected by different devices may be different in the embodiments of the present application, the types of reference user portraits obtained based on the different types of user data may be the same.
  • for example, based on the user data collected by the smartphone, it may be determined that the user is a female user, yielding one reference user portrait.
  • meanwhile, in the user data about video program types collected by the smart TV, the viewing time of military programs or other political programs is greater than the set time threshold, so based on the user data collected by the smart TV it may be determined that the user is a male user, yielding another reference user portrait.
  • the reference user portrait whose content is "male" and the reference user portrait whose content is "female" are the same type of portrait, but the content they express is contradictory; this pair of user portraits can be understood as a pair of user portraits that do not match each other.
  • the mismatched user portraits can be updated so that the mismatched content is eliminated, further improving the accuracy of the finally generated user portrait.
  • the updating, based on the first user portrait, of the portrait content represented by the pair of user portraits that do not match each other, to obtain a second user portrait, includes: if the pair of user portraits that do not match each other are contradictory, using the user portrait that matches the first user portrait among the pair of user portraits that do not match each other as the second user portrait.
  • among the multiple reference user portraits, in addition to the pair of user portraits that do not match each other, there may also be other reference user portraits, and those other user portraits may be used as the first user portrait in this embodiment.
  • the first user portrait can also represent the user's preference in some aspects.
  • the inventor found that the portraits of different aspects of the same user can be related to a certain extent. For example, female users, when listening to music, may prefer lyrical songs, while male users may prefer rock or hip-hop songs. In this manner, the first user portrait can be used to detect which portrait among the mutually unmatched user portraits matches the user.
  • if the first user portrait includes a portrait generated from user data representing listening habits collected by the wireless earphones, and that user data indicates that the user prefers songs of the rock or hip-hop genre, it can be determined that the previously determined portrait of the user being female is wrong with high probability, and the portrait representing the gender male can then be used as the second user portrait.
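The categorical case of this resolution rule, keeping whichever member of a contradictory pair is supported by the first user portrait, can be sketched as below. The association table is a hypothetical stand-in for whatever correlation model (such as the listening-habit correlation just described) an implementation would actually use.

```python
# Assumed associations between first-user-portrait evidence and candidates.
SUPPORTS = {
    "prefers rock or hip-hop": "male",
    "prefers lyrical songs": "female",
}

def resolve_pair(pair, first_portraits, supports):
    """Return the member of `pair` backed by the first user portraits, if any."""
    for evidence in first_portraits:
        candidate = supports.get(evidence)
        if candidate in pair:
            return candidate  # this becomes the second user portrait
    return None  # no evidence either way; the pair stays unresolved

second = resolve_pair(("female", "male"), ["prefers rock or hip-hop"], SUPPORTS)
```

Here the listening-habit evidence backs "male", so that portrait is kept as the second user portrait and the contradictory "female" portrait is discarded.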
  • besides the aforementioned situation in which the represented contents are contradictory, there are other situations of user portraits that do not match each other; for example, when the portraits are numerical portraits and the contents they represent are not the same, they will also be determined to be user portraits that do not match each other.
  • for example, the multiple devices include a smart phone, a smart watch, and a smart TV. Based on the user data collected by the smartphone (which may be user data about the running time of application programs), it is determined that the user is in the age group of 20 to 30 years old, so the reference user portrait determined based on the user data collected by the smartphone represents that the user is between 20 and 30 years old.
  • meanwhile, based on the video program data collected by the smart TV, it is determined that the user is between 6 and 12 years old. It is clearly unreasonable for the same user to be both in the age group of 20 to 30 and in the age group of 6 to 12.
  • in some embodiments, the updating, based on the first user portrait, of the portrait content represented by the pair of user portraits that do not match each other, to obtain a second user portrait, includes: if the pair of user portraits that do not match each other are numerical portraits, obtaining the respective weights of the pair of user portraits that do not match each other based on the first user portrait; and performing, based on the weights, a weighted summation on the portrait contents represented by the pair of user portraits that do not match each other, to obtain the second user portrait.
  • the first user portrait may not directly reflect the age of the user, but it may still reflect the age of the user to a certain extent.
  • the user data collected through the smart watch reflecting the user's health state and exercise state can also represent the age group of the user.
  • the first user portrait can be used to determine which age group the user is actually more likely to belong to.
  • multiple weight ratios can be pre-configured, where a weight ratio represents the respective weights of the conflicting user portraits; based on the first user portrait, it is then determined which of the multiple weight ratios is currently selected as the respective weights of the conflicting user portraits.
  • not all reference user portraits can be used to represent the user's age.
  • the user portraits in the first user portrait that can represent the age of the user can be obtained first, and then the number of matches between those user portraits and each of the aforementioned user portraits that do not match each other is used to determine the weights of the conflicting user portraits.
  • the user portraits that do not match each other include user portraits representing the age group of users from 20 to 30 years old, and user portraits representing the age group of users from 6 to 12 years old.
  • the weight ratio matched by the ratio of the first quantity (the number of first user portraits matching one of the pair) to the second quantity (the number matching the other) may be used as the respective weights occupied by the user portraits that do not match each other.
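The weighted-summation step for numerical portraits can be sketched as follows. The patent does not fix the arithmetic, so two choices here are assumptions made only for illustration: weights are derived directly from the support counts, and each age range is summarized by its midpoint before the weighted sum.

```python
def weighted_age(portrait_a, portrait_b, matches_a, matches_b):
    """Combine two conflicting age-range portraits into one number.

    portrait_a / portrait_b are (low, high) age ranges; matches_a / matches_b
    count how many age-relevant first user portraits support each side.
    """
    total = matches_a + matches_b
    w_a, w_b = matches_a / total, matches_b / total
    mid = lambda r: (r[0] + r[1]) / 2  # summarize a range by its midpoint
    return w_a * mid(portrait_a) + w_b * mid(portrait_b)

# Portrait A says 20-30, portrait B says 6-12; three first user portraits
# match B and one matches A, so B carries the larger weight.
age = weighted_age((20, 30), (6, 12), matches_a=1, matches_b=3)
```

With midpoints 25 and 9 and weights 0.25 and 0.75, the second portrait lands at 13, closer to the better-supported range, which is the intended effect of the weighting.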
  • the multiple reference user portraits collected based on multiple devices include reference user portrait A, reference user portrait B, reference user portrait C, reference user portrait D, and reference user portrait E.
  • the reference user portrait A and the reference user portrait B are the same type of user portrait (for example, both are portraits representing the age of the user), but the portrait content represented by the reference user portrait A and the portrait content represented by the reference user portrait B different.
  • the reference user portrait A and the reference user portrait B are user portraits that do not match each other.
  • correspondingly, reference user portrait C, reference user portrait D, and reference user portrait E are the first user portraits.
  • reference user portrait C, reference user portrait D, and reference user portrait E can all reflect age, and each of them will be further matched with reference user portrait A and reference user portrait B.
  • if reference user portrait C matches reference user portrait A (the matching may mean that the content represented by reference user portrait C is the same as the content represented by reference user portrait A), while reference user portrait D and reference user portrait E match reference user portrait B, then it can be determined that the weight corresponding to reference user portrait B is greater than the weight of reference user portrait A.
  • as a user owns more and more devices, there may be more and more devices bound to the same user ID.
  • the multiple devices bound to the same user ID may not all be used by the same user, which may cause the user data collected by the aforementioned devices bound with the same user ID to generate mismatched user portraits.
  • for example, a user's user ID may be bound to a smartphone, a smart watch, and a smart TV at the same time.
  • since smart phones and smart watches are carried around, they are likely used by the user himself, while the smart TV may be used by other relatives of the user, which will cause the portraits represented by the user data actually collected by the smart TV to mismatch the user himself.
  • in some embodiments, the method further includes: acquiring target user data, where among the pair of user portraits that do not match each other, the user portrait that does not match the first user portrait is generated based on the target user data; and controlling the target device to stop collecting the target user data, where the target device is the device that collects the target user data.
  • the user portrait cannot accurately reflect the user's preferences, so the user data based on which the user portrait that does not match the first user portrait is generated may not actually be generated based on the behavior of the user to which the user ID belongs.
  • the target device By controlling the target device to stop collecting the target user data, it is beneficial to prevent the collected target user data from interfering with the finally generated user portrait, so as to improve the accuracy of the finally generated portrait.
  • In addition to stopping the collection of target user data, some devices may also be removed from the multiple devices corresponding to the user identifier. As one approach, if all the types of user data collected by a device are target user data, the device can be removed. Removal here means removal within the user portrait generation scenario; the device itself remains bound to the user ID, but when the user data separately collected by the multiple devices bound to the same user ID is acquired, the user data collected by the removed device is no longer obtained. For example, if the multiple devices include a smartphone, a smart watch, and a smart TV, and all the types of user data collected by the smart TV are determined to be target user data, the smart TV is removed in the user portrait generation scenario, so that only the user data collected by the smartphone and the smart watch is acquired.
  • With the user portrait generation method described above, for the same user, different types of user data can be collected from the multiple devices bound to that user, and the user's portrait can then be generated from the different types of user data collected on different devices, which helps make the generated user portrait more fine-grained and accurate. In turn, a more accurate user portrait helps ensure that information pushed on the basis of the portrait better matches the user.
  • Referring to FIG. 6, a user portrait generation method provided by an embodiment of the present application includes:
  • S320 Acquire user data separately collected by multiple devices bound to the same user ID, where each piece of user data corresponds to an identity identifier, and the identity identifier is used to indicate whether the corresponding user data belongs to the user corresponding to the user ID.
  • In the process of collecting user data, the device can first perform identity recognition and then, based on the recognition result, determine whether the currently collected user data was actually generated by the user bound to the device. An identity identifier can also be added to the currently collected user data according to the recognition result, so that the identifier can be used to check whether the corresponding user data belongs to the user corresponding to the user identifier (i.e., the user bound to the device).
  • For example, suppose the user identifier bound to the multiple devices is user_a, and the user data collected by the multiple devices includes user data A, user data B, and user data C, where the identity identifier corresponding to user data A is user_a, that of user data B is false, and that of user data C is user_a. After comparison, the identity identifiers corresponding to user data A and user data C are both user_a, the same as the aforementioned user identifier, so it can be determined that user data A and user data C were actually generated by the user bound to the device; the identity identifier corresponding to user data B differs from the user identifier, so it can be determined that user data B was not actually generated by the user bound to the device.
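The identifier comparison in the example above, together with the subsequent deletion of non-matching data, can be sketched as follows. Representing each record as an (identity identifier, payload) pair is an assumption made for illustration; the patent does not prescribe a data layout.

```python
def filter_by_identity(user_data, bound_user_id):
    """Split collected records into kept data and data to be deleted.

    Each record is assumed to be an (identity_identifier, payload) pair;
    a record is kept only when its identity identifier equals the user
    identifier the devices are bound to.
    """
    kept, to_delete = [], []
    for identity, payload in user_data:
        (kept if identity == bound_user_id else to_delete).append(payload)
    return kept, to_delete

# The example from the text: A and C carry user_a, B carries "false".
records = [
    ("user_a", "user data A"),
    ("false", "user data B"),   # identity recognition failed
    ("user_a", "user data C"),
]
remaining, removed = filter_by_identity(records, "user_a")
```

The `remaining` list then feeds the portrait generation step, while `removed` corresponds to the user data to be deleted.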
  • As one approach, the identity identifier is generated through identity recognition based on images collected by the device's image acquisition component during the collection of user data. If identity recognition fails, the generated identity identifier indicates that the corresponding user data does not belong to the user corresponding to the user identifier; if identity recognition succeeds, the generated identity identifier indicates that the corresponding user data belongs to the user corresponding to the user identifier.
  • The user corresponding to the user identifier bound to the device is the user bound to the device, and identity recognition can be understood as recognizing whether the current user is the user bound to the device. Optionally, the user bound to the device can enroll their own face image in the device in advance. When the device starts collecting user data, it can capture images through its own image acquisition component and check whether the captured images contain the face of the user bound to the device. If so, identity recognition is determined to have succeeded, and the identity identifier configured for the currently collected user data is the user identifier bound to the device; if not, identity recognition is determined to have failed, and the identity identifier configured for the currently collected user data is a specified character string (for example, the aforementioned false) that differs from the user identifier bound to the device.
  • As another approach, the identity identifier is generated based on a comparison between the locations of the device and a reference device during the collection of user data, where the reference device may be a mobile phone, a smart tablet, a smart band, a smart watch, or the like. If the locations are inconsistent, the generated identity identifier indicates that the corresponding user data does not belong to the user corresponding to the user identifier; if they are consistent, the generated identity identifier indicates that the corresponding user data belongs to that user. Optionally, while collecting user data, the device may send a location request instruction to the reference device to request its location and then receive the location the reference device returns in response. If the device finds through comparison that the location returned by the reference device is inconsistent with its own location, it can determine that the user bound to the device is not currently nearby, and therefore that the current user data was not actually generated by that user. Correspondingly, if the returned location is consistent with the device's own location, it can determine that the user bound to the device is actually nearby, and therefore that the current user data was actually generated by that user.
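The location-comparison approach can be sketched as follows. The use of planar coordinates in metres and the 50 m tolerance are illustrative assumptions; the patent only requires a consistency check between the device's location and the reference device's location.

```python
import math

def same_location(device_pos, reference_pos, tolerance_m=50.0):
    """Compare the device's position with the reference device's position.

    Positions are assumed to be (x, y) coordinates in metres on a local
    plane; the 50 m tolerance is an illustrative threshold, not a value
    taken from the patent.
    """
    dx = device_pos[0] - reference_pos[0]
    dy = device_pos[1] - reference_pos[1]
    return math.hypot(dx, dy) <= tolerance_m

def identity_for_sample(device_pos, reference_pos, bound_user_id):
    # Consistent locations -> tag the sample with the bound user ID;
    # inconsistent -> tag it with a sentinel string that cannot match.
    return bound_user_id if same_location(device_pos, reference_pos) else "false"
```

A sample tagged `"false"` will later be picked up as user data to be deleted, since the sentinel never equals a bound user identifier.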
  • S330 Acquire user data to be deleted, where the user data to be deleted is data whose corresponding identity identifier in the user data is different from the user identifier.
  • S340 Delete the user data to be deleted from the user data collected respectively by the multiple devices to obtain the remaining user data.
  • S350 Obtain multiple types of user data based on the remaining user data.
  • S360 Based on the multi-type user data, generate a user portrait corresponding to the user identifier.
  • In the user portrait generation method of this embodiment, by configuring a corresponding identity identifier for each piece of user data, after the user data collected by multiple devices is acquired, the identity identifiers can be compared against the user identifier jointly bound to the multiple devices, and user data that does not actually belong to the user corresponding to the user identifier can be eliminated based on the comparison result, so that the user portrait of that user can be generated more accurately.
  • Referring to FIG. 7, a user portrait generation method provided by an embodiment of the present application includes:
  • S420 Acquire user data separately collected by multiple devices bound to the same user ID, and obtain multiple types of user data.
  • S430 Generate a user portrait corresponding to the user identifier based on the multi-type user data.
  • S440 Acquire multiple users who all have a target behavior.
  • S450 Generate a group portrait corresponding to the target behavior based on the respective user portraits of the multiple users.
  • the target behavior may be an item purchase behavior, a use behavior of an application program, or a content browsing behavior.
  • After the users who all have the target behavior have been identified through statistics, they can be regarded as a user group, and the group portrait of that user group can then be obtained.
  • For example, if the target behavior is the purchase of an electronic device of a specified model, the portrait of the user group that has purchased the electronic device of the specified model can be obtained through statistics.
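The group-portrait statistics described above can be sketched as follows. The portrait layout (a set of recorded behaviors plus preference tags) and the frequency-count aggregation are illustrative assumptions; the patent only requires that the portraits of users sharing the target behavior be combined into a group portrait.

```python
from collections import Counter

def group_portrait(user_portraits, target_behavior):
    """Aggregate the portraits of all users who share a target behavior.

    Each user portrait is assumed to be a dict with a set of recorded
    behaviors and a list of preference tags; the group portrait here is
    simply the tag frequencies over the matching user group.
    """
    group = [p for p in user_portraits if target_behavior in p["behaviors"]]
    tag_counts = Counter(tag for p in group for tag in p["tags"])
    return {"size": len(group), "tag_counts": tag_counts}

# Two users bought the specified model; one did not.
portraits = [
    {"behaviors": {"bought_model_x"}, "tags": ["sports", "evening_runner"]},
    {"behaviors": {"bought_model_x"}, "tags": ["sports", "rock_music"]},
    {"behaviors": set(), "tags": ["cooking"]},
]
result = group_portrait(portraits, "bought_model_x")
```

The resulting tag frequencies can then inform product positioning and information pushing for that user group.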
  • With the user portrait generation method described above, for the same user, different types of user data can be collected from the multiple devices bound to that user, and the user's portrait can then be generated from the different types of user data collected on different devices, which helps make the generated user portrait more fine-grained and accurate, and in turn helps information pushed on the basis of the portrait better match the user. Moreover, in this embodiment, multiple users with the same target behavior can be acquired based on the user portraits of multiple users, and a group portrait corresponding to the target behavior can then be generated from their respective user portraits, so that product positioning and information pushing can be carried out based on the group portrait.
  • an apparatus 400 for generating a user portrait provided by an embodiment of the present application, the apparatus 400 includes:
  • the portrait generation scene triggering unit 410 is used to respond to the user portrait generation instruction.
  • the data acquisition unit 420 is configured to acquire user data separately collected by multiple devices bound to the same user ID, and obtain multiple types of user data.
  • the portrait generating unit 430 is configured to generate a user portrait corresponding to the user ID based on the multi-type user data.
  • As one approach, the portrait generation unit 430 is specifically configured to generate reference user portraits corresponding to each of the multiple types of user data to obtain multiple reference user portraits, and to generate a user portrait corresponding to the user ID based on the multiple reference user portraits.
  • the user portrait includes multiple portrait description parameters.
  • As one approach, the portrait generation unit 430 is specifically configured to match the multiple reference user portraits with the multiple portrait description parameters one by one, and to obtain the reference user portrait matched by each of the multiple portrait description parameters, so as to generate the user portrait corresponding to the user identifier.
  • As one approach, the portrait generation unit 430 is specifically configured to: detect whether there are user portraits that do not match each other among the multiple reference user portraits; if a pair of user portraits that do not match each other is detected, obtain a first user portrait, the first user portrait being the user portraits in the multiple reference user portraits other than the pair that do not match each other; update, based on the first user portrait, the portrait content represented by the pair of user portraits that do not match each other to obtain a second user portrait; and generate a user portrait corresponding to the user ID based on the first user portrait and the second user portrait.
  • As one approach, the portrait generation unit 430 is specifically configured to, if the pair of user portraits that do not match each other contradict each other, take the user portrait in that pair that matches the first user portrait as the second user portrait.
  • As one approach, the portrait generation unit 430 is specifically configured to acquire target user data, where, in the pair of user portraits that do not match each other, the user portrait that does not match the first user portrait is generated based on the target user data, and to control a target device to stop collecting the target user data, where the target device is the device that collects the target user data.
  • As one approach, the portrait generation unit 430 is specifically configured to, if the pair of user portraits that do not match each other are numerical portraits, obtain weights corresponding to each of the pair based on the first user portrait, and perform a weighted summation of the portrait contents represented by the pair based on the weights to obtain the second user portrait.
  • the apparatus 400 further includes:
  • the group portrait generating unit 440 is configured to acquire multiple users with target behaviors; and generate group portraits corresponding to the target behaviors based on respective user portraits of the multiple users.
  • The user portrait generation apparatus of this embodiment, after responding to a user portrait generation instruction, acquires user data of different types collected by multiple devices bound to the same user ID to obtain multiple types of user data, and then generates a user portrait corresponding to the user ID based on the multiple types of user data. Thus, for the same user, different types of user data can be collected from the multiple devices bound to that user, and the user's portrait can then be generated from the different types of user data collected on different devices, which helps make the generated user portrait more detailed and accurate. In turn, a more accurate user portrait helps ensure that information pushed on the basis of the portrait better matches the user.
  • a server provided by the present application will be described below with reference to FIG. 10 .
  • Another embodiment of the present application further provides a server 200 including a processor 102 that can execute the foregoing user portrait generation method. The server 200 also includes a memory 104 and a network module 106. The memory 104 stores a program that can execute the content of the foregoing embodiments, and the processor 102 can execute the program stored in the memory 104.
  • the internal structure of the processor 102 may be as shown in FIG. 1 .
  • The processor 102 may include one or more processing cores. The processor 102 uses various interfaces and lines to connect the various parts of the entire server 200, and performs the various functions of the server 200 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 104, and by calling data stored in the memory 104.
  • the processor 102 may adopt at least one of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA).
  • the processor 102 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like.
  • The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is used for rendering and drawing display content; and the modem is used to handle wireless communication. It can be understood that the modem may not be integrated into the processor 102 and may instead be implemented by a separate communication chip.
  • the memory 104 may include a random access memory (Random Access Memory, RAM), or may include a read-only memory (Read-Only Memory, ROM). Memory 104 may be used to store instructions, programs, codes, sets of codes, or sets of instructions.
  • the memory 104 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, an image playback function, etc.) , instructions for implementing the following method embodiments, and the like.
  • the memory 104 may store a device for generating user portraits. Wherein, the device for generating the user portrait may be the aforementioned device 400 .
  • The data storage area may also store data created by the server 200 in use (e.g., phone book, audio and video data, chat records) and the like.
  • the network module 106 is used for receiving and sending electromagnetic waves, realizing mutual conversion between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices, for example, communicate with an audio playback device.
  • The network module 106 may include various existing circuit elements for performing these functions, e.g., an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card, memory, and so on.
  • the network module 106 can communicate with various networks such as the Internet, an intranet, a wireless network, or communicate with other devices through a wireless network.
  • the aforementioned wireless network may include a cellular telephone network, a wireless local area network, or a metropolitan area network.
  • the network module 106 may interact with the base station for information.
  • FIG. 11 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • the computer-readable storage medium 1100 stores program codes, and the program codes can be invoked by the processor to execute the methods described in the above method embodiments.
  • the computer-readable storage medium 1100 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the computer-readable storage medium 1100 includes a non-transitory computer-readable storage medium.
  • the computer-readable storage medium 1100 has storage space for program code 1110 that performs any of the method steps in the above-described methods.
  • The program codes can be read from or written into one or more computer program products. The program code 1110 may, for example, be compressed in a suitable form.
  • With the user portrait generation method, apparatus, server, and storage medium provided by this application, after a user portrait generation instruction is responded to, user data of different types collected by multiple devices bound to the same user ID is acquired to obtain multiple types of user data, and a user portrait corresponding to the user ID is then generated based on the multiple types of user data. Thus, for the same user, different types of user data can be collected from the multiple devices bound to that user, and the user's portrait can then be generated from the different types of user data collected on different devices, which helps make the generated user portrait more detailed and accurate. In turn, a more accurate user portrait helps ensure that information pushed on the basis of the portrait better matches the user.


Abstract

A user portrait generation method, apparatus, server, and storage medium. The method includes: responding to a user portrait generation instruction (S110); acquiring user data separately collected by multiple devices bound to the same user identifier to obtain multiple types of user data (S120), where at least two of the devices collect user data of different types; and generating a user portrait corresponding to the user identifier based on the multiple types of user data (S130). For the same user, different types of user data can be collected from the multiple devices bound to that user, and the user's portrait can then be generated from the different types of user data collected on different devices, which helps make the generated user portrait more fine-grained and accurate. In turn, a more accurate user portrait helps ensure that information pushed on the basis of the portrait better matches the user.

Description

User portrait generation method, apparatus, server, and storage medium

Cross-reference to related applications

This application claims priority to Chinese application No. 202011339892.X filed on November 25, 2020, which is hereby incorporated by reference in its entirety for all purposes.

Technical field

The present application relates to the field of computer technologies, and more particularly, to a user portrait generation method, apparatus, server, and storage medium.

Background

With the development of Internet services, data is growing explosively, and the demand for operating on data is increasing in all walks of life; for example, recommendation systems can push information based on user portraits.

Summary

In view of the above problems, the present application proposes a user portrait generation method, apparatus, server, and storage medium to improve on the above problems.

In a first aspect, the present application provides a user portrait generation method, the method including: responding to a user portrait generation instruction; acquiring user data separately collected by multiple devices bound to the same user identifier to obtain multiple types of user data; and generating, based on the multiple types of user data, a user portrait corresponding to the user identifier.

In a second aspect, the present application provides a user portrait generation apparatus, the apparatus including: a portrait generation scene triggering unit configured to respond to a user portrait generation instruction; a data acquisition unit configured to acquire user data separately collected by multiple devices bound to the same user identifier to obtain multiple types of user data; and a portrait generation unit configured to generate, based on the multiple types of user data, a user portrait corresponding to the user identifier.

In a third aspect, the present application provides a server including a processor and a memory, where one or more programs are stored in the memory and configured to be executed by the processor to implement the above method.

In a fourth aspect, the present application provides a computer-readable storage medium storing program code, where the above method is performed when the program code is run by a processor.
Brief description of the drawings

To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present application, and those skilled in the art may obtain other drawings from these drawings without creative effort.

FIG. 1 is a schematic diagram of an application environment of the user portrait generation method proposed in an embodiment of the present application;

FIG. 2 is a schematic diagram of another application environment of the user portrait generation method proposed in an embodiment of the present application;

FIG. 3 is a schematic diagram of yet another application environment of the user portrait generation method proposed in an embodiment of the present application;

FIG. 4 is a flowchart of a user portrait generation method proposed in an embodiment of the present application;

FIG. 5 is a flowchart of a user portrait generation method proposed in another embodiment of the present application;

FIG. 6 is a flowchart of a user portrait generation method proposed in yet another embodiment of the present application;

FIG. 7 is a flowchart of a user portrait generation method proposed in still another embodiment of the present application;

FIG. 8 is a structural block diagram of a user portrait generation apparatus proposed in an embodiment of the present application;

FIG. 9 is a structural block diagram of a user portrait generation apparatus proposed in another embodiment of the present application;

FIG. 10 is a structural block diagram of another electronic device of the present application for executing the user portrait generation method according to an embodiment of the present application;

FIG. 11 shows a storage unit of an embodiment of the present application for saving or carrying program code implementing the user portrait generation method according to an embodiment of the present application.
Detailed description

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Evidently, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.

User portraits are a very popular research direction in related fields. By processing, mining, and characterizing information such as a user's activity behavior and consumption behavior, a user portrait (i.e., the user's user portrait) can be formed, and user portrait labels for the user can then be generated. Once the user portrait labels have been generated, suitable content can be pushed to the user in a more targeted way, which can reduce push operation costs.

For example, after the user portrait labels of a certain user are generated, the generated labels may indicate that the user likes sports, likes eating rice, and likes watching sports matches. In that case, when pushing messages to the user, messages matching the user portrait can be pushed. For example, when the user is using a food-ordering application, more information about rice dishes can be pushed; when the user is using a video application, more video content about sports matches can be pushed.

The inventors found in research that related user portrait generation methods still have the problem that the generated user portrait is not fine-grained enough, which in turn makes the information pushed based on the user portrait less accurate and causes a certain waste of resources. For example, for a certain user, the user data on which related user portrait generation is based is generally collected from a single platform; for example, the user's user data is typically acquired through a shopping platform or a software download platform.

Therefore, the inventors propose the user portrait generation method, apparatus, server, and storage medium provided by this application, whereby, after a user portrait generation instruction is responded to, user data of different types separately collected by multiple devices bound to the same user identifier is acquired to obtain multiple types of user data, and a user portrait corresponding to the user identifier is then generated based on the multiple types of user data. In this way, for the same user, different types of user data can be collected from the multiple devices bound to that user, and the user's portrait can then be generated from the different types of user data collected on different devices, which helps make the generated user portrait more fine-grained and accurate. In turn, a more accurate user portrait helps ensure that information pushed on the basis of the portrait better matches the user.
The application environment involved in the embodiments of the present application is introduced below with reference to the drawings.

Referring to FIG. 1, the application environment shown in FIG. 1 includes a device 100 and a first server 110, where the first server 110 can be used to run a business system. In the scenario shown in FIG. 1, the user portrait generation method provided by the embodiments of the present application can run in the first server 110. In this mode, the device 100 can exchange data with the first server 110 over a network, and the device 100 can send the user data it collects to the first server 110. In the embodiments of the present application, a user portrait is generated from user data collected by multiple devices; optionally, there may be multiple devices 100, which may each run in different equipment and can upload the user data collected by the different equipment to the first server 110. Optionally, each of the devices 100 running in different equipment can communicate directly with the first server 110.

In addition, in the application scenario shown in FIG. 2, when there are multiple devices 100, the multiple devices 100 can first communicate with a gateway 120, and the gateway 120 then uploads the user data collected by the various devices to the first server 110 in a centralized manner.

Besides having a single server execute the user portrait generation method provided by the embodiments of the present application as described above, multiple servers may also execute it jointly. For example, as shown in FIG. 3, the method may be run jointly by the first server 110 and a second server 130. In this mode, the first server 110 can be responsible for acquiring the user data separately collected by the multiple devices bound to the same user identifier to obtain multiple types of user data, and the second server 130 then performs the subsequent generation of the user portrait corresponding to the user identifier based on the multiple types of user data. Alternatively, each step of the user portrait generation method provided by this embodiment can be configured to be executed by a separate server.

The first server 110 and the second server 130 may each be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. In addition to a smartphone, the electronic device on which the device 100 is located may also be a tablet computer, a laptop, a desktop computer, a smart watch, or the like, but is not limited thereto.

Embodiments of the present application will be described in detail below with reference to the drawings.
Referring to FIG. 4, an embodiment of the present application provides a user portrait generation method, the method including:

S110: Respond to a user portrait generation instruction.

A user portrait generation scene can be entered in response to the user portrait generation instruction, where the user portrait generation scene represents a scene in which user portrait generation is performed. After the user portrait generation scene is entered, user data starts to be acquired so that a user portrait can be generated in subsequent steps. In this embodiment, the user portrait generation instruction may be triggered and responded to periodically, or may be responded to in response to a generation instruction.

S120: Acquire user data separately collected by multiple devices bound to the same user identifier to obtain multiple types of user data.

In the embodiments of the present application, multiple devices can be bound to the same user identifier, and subsequent user portrait generation can then be performed using the user data separately collected by the multiple devices bound to the same user identifier. Furthermore, in this embodiment, at least two of the devices collect user data of different types, so that multiple types of user data can be obtained. For example, the multiple devices bound to the same user identifier include a smartphone, a wearable device, and a smart TV, where the user data collected by the smartphone represents how the applications in the smartphone run.

It should be noted that a smartphone typically performs its functions through the applications it has installed; optionally, the smartphone can record the running duration and running periods of each application as user data. The user data collected by the wearable device represents the user's health and exercise conditions; the wearable device can collect data such as the user's heart rate, step count, and exercise distance, which represent the user's health and exercise conditions, as user data.

The user data collected by the smart TV represents the user's video-watching preferences. The user can browse video programs through the smart TV, and the smart TV can then collect the user's preferences for video programs as user data.

It should be noted that the types of user data collected by different devices may partially overlap or may be completely different. The types partially overlapping can be understood as different devices being able to collect user data of the same type; for example, both a smartphone and a smart TV can collect user data of the type representing the user's preferences for video programs. The types being completely different can be understood as each device collecting user data of types that all differ; for example, if the multiple devices bound to the same user identifier include device A, device B, and device C, then device A may collect user data of type a, device B of type b, and device C of type c, where type a, type b, and type c are all completely different types.

It should be noted that the user data includes data extracted from locally captured images or network images. Optionally, data can be extracted from locally captured images or network images based on computer vision (CV) technology. The locally captured images may include pictures or videos captured by the device through its own image acquisition component; the device may capture pictures or videos in response to a user-triggered instruction, or may capture them periodically based on a specified event. The network images include images downloaded by the device over the network; for example, when the user browses their online photo album through the device, the device can download the album's images locally so that user data can later be extracted from them.

S130: Generate a user portrait corresponding to the user identifier based on the multiple types of user data.

In this embodiment, the multiple types of user data can represent the user's preferences from different dimensions, so generating the user portrait corresponding to the user identifier based on the multiple types of user data allows the generated portrait to reflect the user's preferences more comprehensively and precisely.

For example, if, in the user data collected by the smartphone that includes application running durations, the usage duration of sports applications exceeds a set time threshold, it can be determined that the user to whom the user identifier belongs is a sports-preferring user, and the generated user portrait can include a sports-preference portrait. Furthermore, if the user data collected by a wearable device bound to the same user identifier shows that the user's heart rate stays relatively high during the evening, it can be determined that the user prefers to exercise in the evening, and the generated user portrait can include an evening-exercise portrait. Combining the sports-preference portrait with the evening-exercise portrait, the generated user portrait corresponding to the user identifier is that the user likes sports and likes exercising in the evening.
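The combination step in the example above, where smartphone app-usage data and wearable heart-rate data are merged into one portrait, can be sketched as follows. The 10-hour threshold and the label names are illustrative assumptions; the patent only requires that multiple types of data from different devices be combined into one portrait.

```python
def build_portrait(app_usage_hours, evening_heart_rate_high):
    """Combine two types of user data into portrait labels.

    app_usage_hours maps app categories to usage hours (smartphone data);
    evening_heart_rate_high flags sustained high evening heart rate
    (wearable data). Threshold and labels are illustrative assumptions.
    """
    labels = []
    if app_usage_hours.get("sports", 0) > 10:   # exceeds the set threshold
        labels.append("likes sports")
    if evening_heart_rate_high:                 # wearable-derived signal
        labels.append("exercises in the evening")
    return labels

portrait = build_portrait({"sports": 14, "shopping": 2}, True)
```

Each device contributes one dimension, and the merged label list is the portrait corresponding to the user identifier.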
In the user portrait generation method provided by this embodiment, after a user portrait generation instruction is responded to, user data of different types separately collected by multiple devices bound to the same user identifier is acquired to obtain multiple types of user data, and a user portrait corresponding to the user identifier is then generated based on the multiple types of user data. In this way, for the same user, different types of user data can be collected from the multiple devices bound to that user, and the user's portrait can then be generated from the different types of user data collected on different devices, which helps make the generated user portrait more fine-grained and accurate. In turn, a more accurate user portrait helps ensure that information pushed on the basis of the portrait better matches the user.
Referring to FIG. 5, an embodiment of the present application provides a user portrait generation method, the method including:

S210: Respond to a user portrait generation instruction.

S220: Acquire user data separately collected by multiple devices bound to the same user identifier to obtain multiple types of user data.

S230: Generate reference user portraits corresponding to each of the multiple types of user data to obtain multiple reference user portraits.

As one approach, in the embodiments of the present application, a corresponding user portrait can be generated separately for each type of user data as a reference user portrait, where a reference user portrait can be a user portrait reflecting the user's preference in a specified aspect. For example, the multiple devices bound to the user identifier may include a smartphone, wireless earphones, a smart band, and a smart TV. The reference user portrait generated from the user data collected by the smartphone can reflect the user's preferences in application usage; the reference user portrait generated from the user data collected by the wireless earphones can reflect the user's listening habits; the reference user portrait generated from the user data collected by the smart band can reflect the user's health; and the reference user portrait generated from the user data collected by the smart TV can reflect the user's preferences in watching video programs.

S240: Generate the user portrait corresponding to the user identifier based on the multiple reference user portraits.

It should be noted that, in the embodiments of the present application, a reference user portrait may represent the user's portrait in a certain aspect, so the user portrait generated from multiple reference user portraits can be a more comprehensive and fine-grained portrait. In the embodiments of the present application, there can be multiple ways of generating the user portrait corresponding to the user identifier based on the multiple reference user portraits.

As one approach, the user portrait includes multiple portrait description parameters, and generating the user portrait corresponding to the user identifier based on the multiple reference user portraits includes: matching the multiple reference user portraits with the multiple portrait description parameters one by one, and obtaining the reference user portrait matched by each of the multiple portrait description parameters, so as to generate the user portrait corresponding to the user identifier.

It should be noted that a portrait description parameter can be understood as descriptive content that further defines the user portrait; when the user portrait includes multiple portrait description parameters, the portrait can be generated more comprehensively and precisely through these parameters. Optionally, the portrait description parameters may include exercise time information, exercise location information, listening habit information, and video program preference information, through which a more comprehensive and fine-grained user portrait can be generated.

A reference user portrait is also a user portrait representing the user's preference in a certain aspect, so the multiple reference user portraits are matched with the multiple portrait description parameters one by one so that more portrait description parameters can be matched to corresponding content. It can be understood that, in this embodiment, the reference user portrait matching a portrait description parameter is taken as the content corresponding to that parameter.

It should be noted that matching the multiple reference user portraits with the multiple portrait description parameters one by one can be understood as matching each of the multiple reference user portraits against each of the multiple portrait description parameters. For example, suppose the multiple reference user portraits include reference user portraits A, B, C, and D, and the multiple portrait description parameters include parameters a, b, c, d, e, and f. During matching, reference user portrait A is matched against parameters a, b, c, d, e, and f in turn; similarly, reference user portraits B, C, and D are subsequently matched in the same way as reference user portrait A.

Optionally, in the process of matching a reference user portrait with a portrait description parameter, matching can be based on semantic content. As one approach, each portrait description parameter can correspond to a semantic content template in advance; during matching, the content of a reference user portrait can be semantically matched against the template corresponding to each parameter to determine which reference user portrait matches the parameter. For example, taking exercise time information as an example: exercise time information represents when the user prefers to exercise, so its semantic content template may be "the user likes to exercise at t1" or "t1 is the user's favorite exercise time". Correspondingly, if the content of a certain reference user portrait is "the user likes to exercise at t2", semantic matching will find that this portrait matches the semantic content template corresponding to the exercise time information, so the content corresponding to the exercise time information can be determined to be "the user likes to exercise at t2".
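The template-based matching described above can be sketched as follows. A real system would use semantic similarity; this sketch approximates it with simple keyword containment, which, along with all the names and example strings, is an illustrative simplification rather than the patent's prescribed matching method.

```python
def match_parameter(keywords, reference_portraits):
    """Return the first reference portrait matching a description parameter.

    The patent matches portrait content against per-parameter semantic
    templates; here each parameter is approximated by key phrases, and a
    portrait matches when it contains any of them.
    """
    for portrait in reference_portraits:
        if any(kw in portrait for kw in keywords):
            return portrait
    return None

# "Exercise time" parameter, approximated by its key phrases.
exercise_time_keywords = ["exercise at", "exercise time"]
portraits = ["user prefers rock music", "user likes to exercise at dusk"]
matched = match_parameter(exercise_time_keywords, portraits)
```

Each description parameter that finds a match takes the matched portrait as its content; parameters with no match are simply left unfilled.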
It should be noted that, in this approach, matching the multiple reference user portraits with the multiple portrait description parameters one by one and obtaining the reference user portrait matched by each parameter to generate the user portrait corresponding to the user identifier may include:

deleting abnormal portraits from the multiple reference user portraits to obtain the remaining reference user portraits, where an abnormal portrait is a portrait the user chooses to delete.

It should be noted that a device bound to a user identifier may not always be in use by the bound user, and when it is not, the user data collected by the device may not reflect the bound user's preferences. As an improvement, after the multiple reference user portraits are obtained, they can be pushed to the bound user for confirmation; during confirmation, the bound user can select portraits to delete, and the portraits the user selects for deletion are determined to be abnormal portraits. Then, the remaining reference user portraits are matched with the multiple portrait description parameters one by one, and the reference user portrait matched by each parameter is obtained so as to generate the user portrait corresponding to the user identifier.
As another approach, generating the user portrait corresponding to the user identifier based on the multiple reference user portraits includes: detecting whether there are user portraits that do not match each other among the multiple reference user portraits; if a pair of user portraits that do not match each other is detected, acquiring a first user portrait, the first user portrait being the user portraits in the multiple reference user portraits other than the pair that do not match each other; updating, based on the first user portrait, the portrait content represented by the pair of user portraits that do not match each other to obtain a second user portrait; and generating the user portrait corresponding to the user identifier based on the first user portrait and the second user portrait.

It should be noted that, although the types of user data collected by different devices in the embodiments of the present application may differ, reference user portraits of the same type may be obtained from different types of user data. For example, in the user data about application time collected by a smartphone, if the usage duration of shopping applications or applications mainly used by women exceeds a set time threshold, the user data collected by the smartphone may yield a reference user portrait determining the user to be a female user. In the user data about video program types collected by a smart TV, if the watching time of military programs or other political programs exceeds a set time threshold, the user data collected by the smart TV may yield a reference user portrait determining the user to be a male user. It can be understood that, for the same user identifier, a reference user portrait whose content is "male" and a reference user portrait whose content is "female" have the same portrait type, but the contents they express contradict each other, so this pair can be understood as user portraits that do not match each other.

When user portraits that do not match each other are detected among the multiple reference user portraits, the mismatched portraits can be updated so that the mismatched content can be eliminated, further improving the accuracy of the finally generated user portrait.

As one update approach, updating, based on the first user portrait, the portrait content represented by the pair of user portraits that do not match each other to obtain the second user portrait includes: if the pair of user portraits that do not match each other contradict each other, taking the portrait in the pair that matches the first user portrait as the second user portrait.

It should be noted that, among the multiple reference user portraits, there can be other reference user portraits besides the mismatched pair, and in this embodiment those other portraits can serve as the first user portrait.

It can be understood that the first user portrait can also represent the user's preferences in some aspects. The inventors found in research that portraits of different aspects of the same user can be correlated to a certain extent; for example, female users may prefer lyrical songs when listening to music, while male users prefer rock or hip-hop songs. In this way, the first user portrait can be used to detect which portrait in the mismatched pair actually matches the user. Optionally, if the first user portrait includes listening-habit user data collected by wireless earphones, and that data indicates the user prefers rock or hip-hop songs, it can be determined that the previously derived portrait of the user being female is most likely wrong, and the portrait representing the gender as male can be taken as the second user portrait.

In this embodiment, besides the aforementioned case where the represented contents contradict each other, there is another case of mismatched portraits: if the portraits are numerical portraits and the contents they represent differ, they are also determined to be user portraits that do not match each other. For example, suppose the multiple devices include a smartphone, a smart watch, and a smart TV. Based on the user data collected by the smartphone (which can be user data about application running durations), the user's age is determined to be in the 20-30 range, so the reference user portrait determined from the smartphone data indicates the user is between 20 and 30, while the video program data collected by the smart TV indicates the user is between 6 and 12. From the foregoing, for the same user, being in both the 20-30 age range and the 6-12 age range is not very reasonable.

Optionally, as a way of updating mismatched numerical user portraits, updating, based on the first user portrait, the portrait content represented by the pair of user portraits that do not match each other to obtain the second user portrait includes: if the pair of mismatched portraits are numerical portraits, obtaining, based on the first user portrait, the weights corresponding to each of the pair; and performing a weighted summation of the portrait contents represented by the pair based on the weights to obtain the second user portrait. It should be noted that the first user portrait may not directly indicate the user's age but may reflect it to a certain extent; for example, user data collected by a smart watch reflecting the user's health and exercise status can also indicate the user's age range.

In this way, the first user portrait can be used to determine which age range the user actually more likely belongs to. Optionally, multiple weight ratios can be pre-configured, each weight ratio representing the respective weights of the contradictory portraits, and the first user portrait is then used to determine which of the pre-configured weight ratios to select as the weights of the contradictory portraits.

It should be noted that not all reference user portraits can necessarily represent the user's age. Optionally, the user portraits in the first user portrait that can represent the user's age can be acquired first, and the respective weights of the contradictory portraits are then determined from the number of those age-representing portraits that match each of the aforementioned mismatched portraits. Suppose the mismatched portraits include a portrait indicating the user's age range is 20-30 and a portrait indicating the user's age range is 6-12. If the first number, of age-representing portraits in the first user portrait that match the 20-30 portrait, is greater than the second number, of age-representing portraits in the first user portrait that match the 6-12 portrait, then, in the determined weight ratio, the weight of the 20-30 portrait is greater than the weight of the 6-12 portrait, so that the determined actual age of the user is closer to the 20-30 range. Specifically, among the pre-configured weight ratios, the ratio matching the ratio of the first number to the second number can be taken as the respective weights of the mismatched portraits.
Determining the second user portrait based on weights is further explained below through an example. Suppose the multiple reference user portraits collected from multiple devices include reference user portraits A, B, C, D, and E, where A and B are portraits of the same type (for example, both represent the user's age) but the content represented by A differs from that represented by B. Then A and B are user portraits that do not match each other; correspondingly, C, D, and E constitute the first user portrait, and since C, D, and E can all reflect age, they are further matched against A and B, respectively. If C matches A (matching may mean that the content represented by C is the same as that represented by A) while D and E match B, it can be determined that the weight corresponding to B is greater than the weight corresponding to A.

It should be noted that, as users own more and more devices, more and more devices may be bound to the same user identifier. However, the multiple devices bound to the same user identifier may not all be used by the same user, which may cause the user data collected by these devices to generate mismatched user portraits. For example, a certain user's identifier may be bound to a smartphone, a smart watch, and a smart TV at the same time; since the smartphone and the smart watch are carried around, they may be used by the user personally, while the smart TV may be used by the user's other family members, so the user portrait actually represented by the user data collected by the smart TV does not match the user. Moreover, the same device may collect user data of multiple types.

As one approach, after taking the portrait in the mismatched pair that matches the first user portrait as the second user portrait when the pair contradict each other, the method further includes: acquiring target user data, where, in the mismatched pair, the portrait that does not match the first user portrait is generated based on the target user data; and controlling a target device to stop collecting the target user data, where the target device is the device that collects the target user data. It should be noted that at least one portrait in the aforementioned mismatched pair cannot accurately reflect the user's preferences, namely the portrait that does not match the first user portrait, so the user data from which that portrait was generated may not actually have been produced by the behavior of the user to whom the user identifier belongs. Controlling the target device to stop collecting the target user data helps prevent the collected target user data from interfering with the finally generated user portrait, thereby improving its accuracy.

Furthermore, besides controlling the collection of target user data to stop as described above, some devices can also be removed from the multiple devices corresponding to the user identifier. As one approach, if all the types of user data collected by a device are target user data, the device can be removed. It should be noted that removal here means removal within the user portrait generation scenario; the device itself remains bound to the user identifier, but in the process of acquiring the user data separately collected by the multiple devices bound to the same user identifier, the user data collected by the removed device is no longer obtained. For example, if the multiple devices include a smartphone, a smart watch, and a smart TV, and all the types of user data collected by the smart TV are determined to be target user data, the smart TV is removed in the user portrait generation scenario, so that only the user data collected by the smartphone and the smart watch is acquired.

In the user portrait generation method provided by this embodiment, through the foregoing approach, for the same user, different types of user data can be collected from the multiple devices bound to that user, and the user's portrait can then be generated from the different types of user data collected on different devices, which helps make the generated user portrait more fine-grained and accurate, and in turn helps information pushed on the basis of the portrait better match the user. Moreover, in this embodiment, for the generated multiple reference user portraits, whether there are mismatched portraits among them is first determined, which helps improve the accuracy of the finally generated user portrait.
请参阅图6,本申请实施例提供的一种用户画像生成方法,所述方法包括:
S310:响应用户画像生成指令。
S320:获取绑定同一用户标识的多个设备分别采集的用户数据,用户数据对应有身份标识,所述身份标识用于表征对应用户数据是否属于所述用户标识对应的用户。
需要说明的是,设备在采集用户数据的过程中可以先进行身份识别,进而根据身份识别的结果确定当前所采集的用户数据是否是由设备所绑定用户实际产生的。并且,还可以根据身份识别结果为当前采集的用户数据添加身份标识,以便使得可以通过该身份标识来检测对应用户数据是否属于用户标识对应的用户(即设备所绑定用户)。
示例性的,多个设备所绑定的用户标识为user_a,多个设备采集的用户数据包括用户数据A、用户数据B以及用户数据C,其中,用户数据A对应的身份标识为user_a,用户数据B对应的身份标识为false,用户数据C对应有身份标识为user_a。经过比对后可以发现,用户数据A和用户数据C所对应的身份标识与前述的用户标识均为user_a,那么则可以确定用户数据A和用户数据C为设备所绑定用户实际产生的,而其中的用户数据B对应的身份标识则与前述的用户标识不同,则可以确定确定用户数据B不是设备所绑定用户实际产生的。
In this embodiment, the identity tag can be generated in multiple ways.
In one implementation, the identity tag is generated by the device during data collection through identity recognition based on images captured by the device's image acquisition component; if the identity recognition fails, the generated tag indicates that the corresponding user data does not belong to the user corresponding to the user ID, and if it succeeds, the tag indicates that the corresponding user data does belong to that user.
Here, the user corresponding to the user ID bound to the device is the bound user, and identity recognition can be understood as recognizing whether the current user is that bound user. Optionally, the bound user can pre-register a face image on the device; when the device starts collecting user data, it captures images through its own image acquisition component and checks whether the captured images contain the bound user's face. If so, identity recognition succeeds, and the identity tag configured for the currently collected user data is the bound user ID; correspondingly, if not, identity recognition fails, and the tag configured for the currently collected user data is a designated string (for example, the aforementioned false) that differs from the bound user ID.
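The face-based tagging rule can be sketched as follows. This is a hypothetical illustration: set membership stands in for an actual face-recognition model, and the function name and the "false" sentinel string follow the example above rather than any specified API.

```python
def tag_collected_data(captured_faces, enrolled_face, bound_user_id):
    """Assign an identity tag to data collected alongside a camera capture:
    the bound user's ID when the pre-registered (enrolled) face appears in
    the captured frame, otherwise the designated string "false"."""
    if enrolled_face in captured_faces:  # stand-in for real face matching
        return bound_user_id
    return "false"

# The bound user's enrolled face is present in the frame:
tag_ok = tag_collected_data({"face_user_a", "face_guest"},
                            "face_user_a", "user_a")
# Only another person's face is present:
tag_fail = tag_collected_data({"face_guest"}, "face_user_a", "user_a")
```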
In another implementation, the identity tag is generated by the device during data collection based on a position comparison result between the device and a reference device, where the reference device may be a mobile phone, a smart tablet, a smart band, a smart watch, or the like; if the position comparison is inconsistent, the generated tag indicates that the corresponding user data does not belong to the user corresponding to the user ID, and if it is consistent, the tag indicates that the corresponding user data does belong to that user.
Optionally, while collecting user data, the device can send a position request instruction to the reference device so as to request the reference device's position, and then receive the position returned by the reference device in response to the instruction. If the device finds through comparison that the returned position is inconsistent with its own position, it can be determined that the bound user is not actually nearby, and therefore that the current user data was not actually produced by the bound user. Correspondingly, if the returned position is consistent with the device's own position, it can be determined that the bound user is actually nearby, and therefore that the current user data was produced by the bound user.
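A minimal sketch of the position-consistency check, under stated assumptions: the patent only requires a consistency comparison, so the planar-coordinate representation and the 50-meter tolerance threshold here are illustrative choices, not part of the described method.

```python
def tag_by_position(device_pos, reference_pos, bound_user_id,
                    tolerance_m=50.0):
    """Tag collected data by comparing the collecting device's position
    (x, y in meters) with the reference device's (e.g. the user's phone
    or watch).  Positions within the tolerance count as consistent."""
    dx = device_pos[0] - reference_pos[0]
    dy = device_pos[1] - reference_pos[1]
    consistent = (dx * dx + dy * dy) ** 0.5 <= tolerance_m
    return bound_user_id if consistent else "false"

near = tag_by_position((0.0, 0.0), (10.0, 10.0), "user_a")   # user nearby
far = tag_by_position((0.0, 0.0), (1000.0, 0.0), "user_a")   # user away
```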
S330: Obtaining to-be-deleted user data, the to-be-deleted user data being the data whose corresponding identity tag differs from the user ID.
S340: Deleting the to-be-deleted user data from the data separately collected by the multiple devices to obtain the remaining user data.
S350: Obtaining multiple types of user data based on the remaining user data.
S360: Generating the user portrait corresponding to the user ID based on the multiple types of user data.
In the user portrait generation method provided by this embodiment, by configuring corresponding identity tags for user data, after the data collected by the multiple devices is obtained, the data that does not actually belong to the user corresponding to the user ID can be eliminated based on the comparison between each datum's identity tag and the user ID jointly bound to the devices, so that the portrait of the user corresponding to the user ID can be generated more accurately.
Referring to FIG. 7, an embodiment of the present application provides a user portrait generation method, including:
S410: Responding to a user portrait generation instruction.
S420: Obtaining user data separately collected by multiple devices bound to the same user ID to obtain multiple types of user data.
S430: Generating the user portrait corresponding to the user ID based on the multiple types of user data.
S440: Obtaining multiple users who all have a target behavior.
S450: Generating a group portrait corresponding to the target behavior based on the multiple users' respective user portraits.
It should be noted that in this embodiment the target behavior may be an item purchase behavior, an application usage behavior, or a content browsing behavior. After the users who all have the target behavior are counted, they can be treated as a user group, and the group portrait of that user group can then be obtained. For example, if the target behavior is the purchase of a specified model of electronic device, the method of this embodiment can derive the portrait of the group of users who purchased that model.
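The group-portrait step can be sketched as an aggregation over individual portraits. A hypothetical minimal sketch: the per-user data layout (`behaviors`/`tags`) and summarizing the group as tag frequencies are assumptions, since the patent does not fix a group-portrait representation.

```python
from collections import Counter

def group_portrait(user_portraits, target_behavior):
    """user_portraits: user_id -> {"behaviors": set, "tags": list}.
    Collect the users who all exhibit the target behavior and summarize
    their portrait tags into a group portrait (tag frequency ranking)."""
    group = [p for p in user_portraits.values()
             if target_behavior in p["behaviors"]]
    tally = Counter(tag for p in group for tag in p["tags"])
    return {"size": len(group), "top_tags": tally.most_common()}

portraits = {
    "u1": {"behaviors": {"bought_model_x"}, "tags": ["sports", "music"]},
    "u2": {"behaviors": {"bought_model_x"}, "tags": ["sports"]},
    "u3": {"behaviors": {"browsed_news"},   "tags": ["finance"]},
}
gp = group_portrait(portraits, "bought_model_x")
```

For the target behavior "bought_model_x", only u1 and u2 enter the group, and the group portrait surfaces their shared tags.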
In the user portrait generation method provided by this embodiment, different types of user data can be collected for the same user from the multiple devices bound to that user, and the user's portrait is generated from those different types of data collected from different devices, making the generated portrait finer-grained and more accurate, which in turn helps make the pushed information better match the user when information is pushed based on the portrait. Moreover, in this embodiment, after the user portraits are generated, multiple users having the same target behavior can be obtained based on multiple users' portraits, and a group portrait corresponding to the target behavior is then generated from those users' respective portraits, so that product positioning and information pushing can be carried out based on the group portrait.
Referring to FIG. 8, an embodiment of the present application provides a user portrait generation device 400, including:
a portrait generation scenario triggering unit 410, configured to respond to a user portrait generation instruction;
a data acquisition unit 420, configured to obtain user data separately collected by multiple devices bound to the same user ID to obtain multiple types of user data; and
a portrait generation unit 430, configured to generate, based on the multiple types of user data, a user portrait corresponding to the user ID.
In one implementation, the portrait generation unit 430 is specifically configured to generate reference user portraits respectively corresponding to the multiple types of user data to obtain multiple reference user portraits, and to generate the user portrait corresponding to the user ID based on the multiple reference user portraits.
Optionally, the user portrait includes multiple portrait description parameters. The portrait generation unit 430 is specifically configured to match the multiple reference user portraits one by one against the multiple portrait description parameters and obtain the reference user portrait matched by each parameter, so as to generate the user portrait corresponding to the user ID.
Optionally, the portrait generation unit 430 is specifically configured to: detect whether there are mutually mismatched user portraits among the multiple reference user portraits; if a mutually mismatched pair is detected, obtain a first user portrait, the first user portrait being a reference user portrait among the multiple reference user portraits other than the mismatched pair; update the portrait content characterized by the mismatched pair based on the first user portrait to obtain a second user portrait; and generate the user portrait corresponding to the user ID based on the first user portrait and the second user portrait.
Optionally, the portrait generation unit 430 is specifically configured to, if the mismatched pair of user portraits contradict each other, take the portrait in the pair that matches the first user portrait as the second user portrait.
In one implementation, the portrait generation unit 430 is specifically configured to obtain target user data, where the portrait in the mismatched pair that does not match the first user portrait is generated based on the target user data, and to control a target device to stop collecting the target user data, the target device being the device that collects the target user data.
In one implementation, the portrait generation unit 430 is specifically configured to, if the mismatched pair of user portraits are numerical portraits, obtain the weights respectively corresponding to the pair based on the first user portrait, and to perform, based on the weights, a weighted sum of the portrait content characterized by the pair to obtain the second user portrait.
Optionally, as shown in FIG. 9, the device 400 further includes:
a group portrait generation unit 440, configured to obtain multiple users who all have a target behavior and to generate a group portrait corresponding to the target behavior based on the users' respective user portraits.
In the user portrait generation device provided by this embodiment, after a user portrait generation instruction is responded to, user data of different types separately collected by multiple devices bound to the same user ID is obtained to get multiple types of user data, and the user portrait corresponding to the user ID is then generated based on them. In this way, different types of user data can be collected for the same user from the user's multiple bound devices, and the user's portrait is generated from those different types of data, making the generated portrait finer-grained and more accurate, which in turn helps make the pushed information better match the user when information is pushed based on the portrait.
It should be noted that the device embodiments in the present application correspond to the foregoing method embodiments; for the specific principles of the device embodiments, reference may be made to the content of the foregoing method embodiments, which will not be repeated here.
A server provided by the present application is described below with reference to FIG. 10.
Referring to FIG. 10, based on the foregoing user portrait generation method, an embodiment of the present application further provides a server 200 that includes a processor 102 capable of executing the foregoing user portrait generation method. The server 200 further includes a memory 104 and a network module 106. The memory 104 stores a program that can execute the content of the foregoing embodiments, and the processor 102 can execute the program stored in the memory 104. The internal structure of the processor 102 may be as shown in FIG. 1.
The processor 102 may include one or more cores for processing data. The processor 102 connects all parts of the server 200 using various interfaces and lines, and performs the various functions of the server 200 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 104 and by invoking the data stored in the memory 104. Optionally, the processor 102 may be implemented in at least one of the hardware forms of a digital signal processor (DSP), a field-programmable gate array (FPGA), and a programmable logic array (PLA). The processor 102 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like, where the CPU mainly handles the operating system, the user interface, applications, and so on; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may alternatively be implemented separately by a communication chip instead of being integrated into the processor 102.
The memory 104 may include random access memory (RAM) or read-only memory (ROM), and may be used to store instructions, programs, code, code sets, or instruction sets. The memory 104 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, an audio playback function, and an image playback function), instructions for implementing the following method embodiments, and so on. For example, the memory 104 may store a user portrait generation device, which may be the aforementioned device 400. The data storage area may also store data created by the server 200 during use (such as a phone book, audio and video data, and chat records).
The network module 106 is configured to receive and transmit electromagnetic waves and to perform mutual conversion between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices, for example with an audio playback device. The network module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio-frequency transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card, and memory. The network module 106 can communicate with various networks such as the Internet, an intranet, and wireless networks, or communicate with other devices over a wireless network, where the wireless network may include a cellular telephone network, a wireless local area network, or a metropolitan area network. For example, the network module 106 can exchange information with a base station.
Referring to FIG. 11, a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application is shown. The computer-readable storage medium 1100 stores program code that can be invoked by a processor to execute the methods described in the foregoing method embodiments.
The computer-readable storage medium 1100 may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk, or ROM. Optionally, the computer-readable storage medium 1100 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 1100 has storage space for program code 1110 that performs any of the method steps above. The program code can be read from or written into one or more computer program products, and may, for example, be compressed in an appropriate form.
In summary, in the user portrait generation method, device, server, and storage medium provided by the present application, after a user portrait generation instruction is responded to, user data of different types separately collected by multiple devices bound to the same user ID is obtained to get multiple types of user data, and the user portrait corresponding to the user ID is then generated based on them. In this way, different types of user data can be collected for the same user from the user's multiple bound devices, and the user's portrait is generated from those different types of data collected from different devices, making the generated portrait finer-grained and more accurate, which in turn helps make the pushed information better match the user when information is pushed based on the portrait.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present application rather than to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or equivalently replace some of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (20)

  1. A user portrait generation method, characterized in that the method comprises:
    responding to a user portrait generation instruction;
    obtaining user data separately collected by multiple devices bound to the same user ID to obtain multiple types of user data; and
    generating, based on the multiple types of user data, a user portrait corresponding to the user ID.
  2. The method according to claim 1, wherein generating the user portrait corresponding to the user ID based on the multiple types of user data comprises:
    generating reference user portraits respectively corresponding to the multiple types of user data to obtain multiple reference user portraits; and
    generating the user portrait corresponding to the user ID based on the multiple reference user portraits.
  3. The method according to claim 2, wherein generating the user portrait corresponding to the user ID based on the multiple reference user portraits comprises:
    detecting whether there are mutually mismatched user portraits among the multiple reference user portraits;
    if a mutually mismatched pair of user portraits is detected, obtaining a first user portrait, the first user portrait being a reference user portrait among the multiple reference user portraits other than the mutually mismatched pair;
    updating, based on the first user portrait, the portrait content characterized by the mutually mismatched pair to obtain a second user portrait; and
    generating the user portrait corresponding to the user ID based on the first user portrait and the second user portrait.
  4. The method according to claim 3, wherein updating, based on the first user portrait, the portrait content characterized by the mutually mismatched pair to obtain the second user portrait comprises:
    if the mutually mismatched pair of user portraits contradict each other, taking the portrait in the mutually mismatched pair that matches the first user portrait as the second user portrait.
  5. The method according to claim 4, further comprising, after taking the portrait in the mutually mismatched pair that matches the first user portrait as the second user portrait:
    obtaining target user data, wherein the portrait in the mutually mismatched pair that does not match the first user portrait is generated based on the target user data; and
    controlling a target device to stop collecting the target user data, the target device being the device that collects the target user data.
  6. The method according to claim 3, wherein updating, based on the first user portrait, the portrait content characterized by the mutually mismatched pair to obtain the second user portrait comprises:
    if the mutually mismatched pair of user portraits are numerical portraits, obtaining, based on the first user portrait, weights respectively corresponding to the mutually mismatched pair; and
    performing, based on the weights, a weighted sum of the portrait content characterized by the mutually mismatched pair to obtain the second user portrait.
  7. The method according to claim 2, wherein the user portrait comprises multiple portrait description parameters, and generating the user portrait corresponding to the user ID based on the multiple reference user portraits comprises:
    matching the multiple reference user portraits one by one against the multiple portrait description parameters, and obtaining the reference user portrait matched by each portrait description parameter, so as to generate the user portrait corresponding to the user ID.
  8. The method according to claim 7, wherein the matching and obtaining comprises:
    semantically matching the semantic content of the multiple reference user portraits against semantic content templates corresponding to the multiple portrait description parameters, and obtaining the reference user portrait matched by each portrait description parameter, so as to generate the user portrait corresponding to the user ID.
  9. The method according to claim 7, wherein the matching and obtaining comprises:
    deleting abnormal portraits from the multiple reference user portraits to obtain remaining reference user portraits, wherein an abnormal portrait is a portrait the user has chosen to delete; and
    matching the remaining reference user portraits one by one against the multiple portrait description parameters, and obtaining the reference user portrait matched by each portrait description parameter, so as to generate the user portrait corresponding to the user ID.
  10. The method according to any one of claims 7-9, wherein the portrait description parameters comprise at least one of exercise time information, exercise location information, listening habit information, and video program preference information.
  11. The method according to claim 1, wherein the user data has a corresponding identity tag used to indicate whether the corresponding user data belongs to the user corresponding to the user ID, and obtaining the user data separately collected by the multiple devices bound to the same user ID to obtain the multiple types of user data comprises:
    obtaining user data separately collected by multiple devices bound to the same user ID;
    obtaining to-be-deleted user data, the to-be-deleted user data being the data whose corresponding identity tag differs from the user ID;
    deleting the to-be-deleted user data from the data separately collected by the multiple devices to obtain remaining user data; and
    obtaining the multiple types of user data based on the remaining user data.
  12. The method according to claim 11, wherein the identity tag is generated by the device during data collection through identity recognition based on images captured by the device's image acquisition component;
    wherein, if the identity recognition fails, the generated identity tag indicates that the corresponding user data does not belong to the user corresponding to the user ID, and if the identity recognition succeeds, the generated identity tag indicates that the corresponding user data belongs to the user corresponding to the user ID.
  13. The method according to claim 11, wherein the identity tag is generated by the device during data collection based on a position comparison result between the device and a reference device;
    wherein, if the position comparison is inconsistent, the generated identity tag indicates that the corresponding user data does not belong to the user corresponding to the user ID, and if the position comparison is consistent, the generated identity tag indicates that the corresponding user data belongs to the user corresponding to the user ID.
  14. The method according to claim 1, wherein the user data comprises data extracted from locally captured images or network images.
  15. The method according to claim 14, wherein the locally captured images comprise pictures or videos captured by the device through its own image acquisition component, and the network images comprise images downloaded by the device over a network.
  16. The method according to any one of claims 1-15, further comprising, after generating the user portrait corresponding to the user ID based on the multiple types of user data:
    obtaining multiple users who all have a target behavior, wherein the multiple users correspond to multiple different user IDs; and
    generating a group portrait corresponding to the target behavior based on the multiple users' respective user portraits.
  17. The method according to claim 16, wherein the target behavior comprises at least one of an item purchase behavior, an application usage behavior, and a content browsing behavior.
  18. A user portrait generation device, characterized in that the device comprises:
    a portrait generation scenario triggering unit, configured to respond to a user portrait generation instruction;
    a data acquisition unit, configured to obtain user data separately collected by multiple devices bound to the same user ID to obtain multiple types of user data; and
    a portrait generation unit, configured to generate, based on the multiple types of user data, a user portrait corresponding to the user ID.
  19. A server, characterized by comprising a processor and a memory, wherein one or more programs are stored in the memory and configured to be executed by the processor to implement the method according to any one of claims 1-17.
  20. A computer-readable storage medium, characterized in that program code is stored in the computer-readable storage medium, wherein the program code, when run by a processor, executes the method according to any one of claims 1-17.
PCT/CN2021/122906 2020-11-25 2021-10-09 User portrait generation method and apparatus, server, and storage medium WO2022111071A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011339892.XA CN112328895A (zh) 2020-11-25 2020-11-25 User portrait generation method and apparatus, server, and storage medium
CN202011339892.X 2020-11-25

Publications (1)

Publication Number Publication Date
WO2022111071A1
