CN113836326A - User preference intelligent extraction method based on self-media platform - Google Patents

User preference intelligent extraction method based on self-media platform

Info

Publication number
CN113836326A
CN113836326A (application CN202111080027.2A)
Authority
CN
China
Prior art keywords
user
video
preference
self
historical
Prior art date
Legal status
Pending
Application number
CN202111080027.2A
Other languages
Chinese (zh)
Inventor
杜平
Current Assignee
Chongqing Balinghou Technology Co ltd
Original Assignee
Chongqing Balinghou Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Balinghou Technology Co ltd filed Critical Chongqing Balinghou Technology Co ltd
Priority to CN202111080027.2A priority Critical patent/CN113836326A/en
Publication of CN113836326A publication Critical patent/CN113836326A/en
Pending legal-status Critical Current

Classifications

    • G06F16/435 — Multimedia retrieval; filtering based on additional data, e.g. user or group profiles
    • G06F16/45 — Multimedia retrieval; clustering; classification
    • G06F16/483 — Multimedia retrieval using metadata automatically derived from the content
    • G06F16/735 — Video retrieval; filtering based on additional data, e.g. user or group profiles
    • G06F16/75 — Video retrieval; clustering; classification
    • G06F16/7844 — Video retrieval using metadata automatically derived from original textual content or text extracted from visual content or transcripts of audio data
    • G06F16/7847 — Video retrieval using metadata automatically derived from low-level visual features of the video content

Abstract

The invention discloses a user preference intelligent extraction method based on a self-media platform, which comprises the following steps: S1, user registration; a. the user downloads the corresponding self-media software through an application store; b. the user enters a mobile phone number and receives a verification code to register and log in; c. the user sets a login password for the self-media software. The method uses the self-media platform's own data collection and data analysis to turn each user's historical video features into a historical behavior feature vector, from which several historical preference modes are obtained. The current behavior feature vector of a specified user is then determined from that user's historical video feature vector and real-time video feature vector, and the current preference mode of the specified user is determined in combination with the previously obtained historical preference modes. In this way the current preference of each user can be determined online in real time, achieving the technical effect of intelligent preference extraction and effectively helping the self-media platform recommend preferred videos to each user.

Description

User preference intelligent extraction method based on self-media platform
Technical Field
The invention discloses a user preference intelligent extraction method based on a self-media platform, and belongs to the technical field of intelligent analysis.
Background
Self-media ("We Media" in English) refers to the way in which the general public publishes its own facts and news through networks and similar channels: after connecting to the global knowledge system through digital technology, ordinary people provide and share their own facts and news. It is a privatized, popularized, generalized and autonomous class of transmitters, and a general name for new media that deliver normative and non-normative information, by modern and electronic means, to large unspecified audiences or to specified individuals. Meanwhile, the number of mobile users keeps growing and has even reached twice the number of PC users, and people's demand for simple, fast and interesting content keeps rising; from fragmented reading to short-video watching, Chinese self-media has developed rapidly. However, existing self-media platforms cannot grasp user preferences well during operation, so certain video recommendation errors occur, which hinders the development of self-media.
Disclosure of Invention
The invention aims to overcome the above defects and provide an intelligent user preference extraction method based on a self-media platform.
A user preference intelligent extraction method based on a self-media platform comprises the following steps:
S1, user registration:
a. the user downloads corresponding self-media software through an application store;
b. the user inputs the mobile phone number and receives the verification code to register and log in;
c. the user sets the login password of the self-media software;
d. the user logs in to the self-media system software by receiving a mobile phone verification code or by re-entering the login password;
e. the user fills in personal information and selects personal interests and hobbies, and the background of the self-media system collects and stores these data.
S2, user retrieval:
a. the user performs video retrieval through the search box of the self-media platform; after the retrieval is finished, the platform displays the retrieved related videos or text;
b. the self-media platform server collects data information on the content retrieved by the user in step S2, a;
c. video data analysis is performed on the video data information collected in step S2, b;
d. the video analysis data obtained in step S2, c are classified;
S3, video analysis:
a. each recommended video retrieved by the user in step S2, a is crawled, and the user's dwell time on one or more videos is collected;
b. the data acquired in step S3, a are recorded; at the same time, the number of videos or text items on which the user's dwell time is not less than 5 s is recorded, and those videos are logged;
c. the video or text information obtained in step S3, b is analyzed;
d. the user's historical video information over one month or half a year is recorded, thereby obtaining the historical video information, i.e. the crawled context information data;
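The dwell-time screening in steps S3, a-b can be sketched as follows; the record layout, function name and sample data are illustrative assumptions, not part of the patent text:

```python
DWELL_THRESHOLD_S = 5  # minimum stay time named in step S3, b

def filter_views(views):
    """views: list of (video_id, dwell_seconds) pairs collected in step S3, a.

    Returns the views whose dwell time is at least the threshold, plus their
    count (the 'number of videos ... not less than 5 s' recorded in S3, b).
    """
    kept = [(vid, t) for vid, t in views if t >= DWELL_THRESHOLD_S]
    return kept, len(kept)

views = [("v1", 2.4), ("v2", 17.0), ("v3", 5.0), ("v4", 64.2)]
kept, count = filter_views(views)
# "v1" is dropped (dwell < 5 s); the other three views are recorded
```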
S4, preference extraction:
a. the data information crawled in steps S1, c, S2, d and S3, d is uniformly sorted and classified;
b. feature comparison is performed on the data information classified in step S4, a;
c. preference extraction is performed on the compared feature data obtained in step S4, b, so as to obtain the user's specific preferences;
d. the user preference data extracted in step S4, c are stored in the terminal so that the user can conveniently know the specific situation.
Preferably, in step S1, c, the login password for the self-media software is a combination of letters and digits and is not fewer than eight characters.
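A minimal check of this password rule can be sketched in Python; the function name and regular expression are our assumptions, the patent only states the rule (letters and digits mixed, at least eight characters):

```python
import re

# At least one letter, at least one digit, letters/digits only, length >= 8.
PASSWORD_RE = re.compile(r"^(?=.*[A-Za-z])(?=.*\d)[A-Za-z\d]{8,}$")

def is_valid_password(pw: str) -> bool:
    """Return True if pw satisfies the preferred rule of step S1, c."""
    return PASSWORD_RE.fullmatch(pw) is not None
```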
Preferably, the personal information filled in by the user in step S1, e includes, but is not limited to, year and month of birth, gender, and education level.
Preferably, in step S2, b, the data information collection tasks include, but are not limited to, keyword extraction, closed-caption decoding, video feature extraction, music type extraction, and text information.
Preferably, the video information analyzed in step S3, c includes, but is not limited to, video type, video duration, video name, video BGM features, and video music type.
Preferably, in step S3, d, the user's historical video features are extracted from the user's historical video information to form the user's historical behavior feature vector, where the historical video information is the video information of a predetermined historical period, and the historical video feature vector is determined based on the word segments related to the user's historical videos and the number of the user's historical behaviors corresponding to each word segment.
Preferably, in step S4, c, the user's specific preferences correspond to historical preference modes: the probability of occurrence of the current video feature vector under each historical preference mode is calculated, and a determination sub-module determines each historical preference mode whose occurrence probability is greater than a threshold as a current preference mode of the specified user.
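The threshold test in this preferred step can be sketched as follows, assuming (our assumption, not stated in the patent) that each historical preference mode is represented by per-word-segment behavior counts and that the "occurrence probability" is the share of the mode's mass covered by the current feature tokens:

```python
def mode_probability(mode_counts, current_tokens):
    """Probability mass the mode assigns to the current feature tokens."""
    total = sum(mode_counts.values())
    if total == 0:
        return 0.0
    return sum(mode_counts.get(t, 0) for t in current_tokens) / total

def current_preference_modes(history_modes, current_tokens, threshold=0.5):
    """Keep every historical preference mode whose probability exceeds the threshold."""
    return [name for name, counts in history_modes.items()
            if mode_probability(counts, current_tokens) > threshold]

history_modes = {
    "gaming": {"game": 8, "esports": 2},
    "cooking": {"recipe": 5, "baking": 5},
}
modes = current_preference_modes(history_modes, {"game", "esports"})
# only "gaming" clears the 0.5 threshold for these tokens
```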
Preferably, the terminal storage in step S4, d is a storage-and-receiving unit module of the self-media platform server itself.
Compared with the prior art, the invention has the following beneficial effects:
the method comprises the steps of utilizing self-running data collection and data analysis of a self-media platform to form historical behavior characteristic vectors of users according to historical video characteristics of the users, obtaining one or more historical preference modes, determining the current behavior characteristic vectors of the appointed users according to the historical video characteristic vectors and the real-time video characteristic vectors of the appointed users, and determining the current preference modes of the appointed users according to the historical preference modes obtained in advance, so that the current preference of each user can be determined on line in real time, the technical effect of intelligent preference extraction is achieved, preference video recommendation of each user is effectively facilitated by the self-media platform, more user flow is facilitated to be obtained by a media platform company, and more economic profits are obtained according to a certain market value.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A user preference intelligent extraction method based on a self-media platform comprises the following steps:
S1, user registration:
a. the user downloads corresponding self-media software through an application store;
b. the user inputs the mobile phone number and receives the verification code to register and log in;
c. the user sets the login password of the self-media software;
d. the user logs in to the self-media system software by receiving a mobile phone verification code or by re-entering the login password;
e. the user fills in personal information and selects personal interests and hobbies, and the background of the self-media system collects and stores these data.
S2, user retrieval:
a. the user performs video retrieval through the search box of the self-media platform; after the retrieval is finished, the platform displays the retrieved related videos or text;
b. the self-media platform server collects data information on the content retrieved by the user in step S2, a;
c. video data analysis is performed on the video data information collected in step S2, b;
d. the video analysis data obtained in step S2, c are classified;
S3, video analysis:
a. each recommended video retrieved by the user in step S2, a is crawled, and the user's dwell time on one or more videos is collected;
b. the data acquired in step S3, a are recorded; at the same time, the number of videos or text items on which the user's dwell time is not less than 5 s is recorded, and those videos are logged;
c. the video or text information obtained in step S3, b is analyzed;
d. the user's historical video information over one month or half a year is recorded, thereby obtaining the historical video information, i.e. the crawled context information data;
S4, preference extraction:
a. the data information crawled in steps S1, c, S2, d and S3, d is uniformly sorted and classified;
b. feature comparison is performed on the data information classified in step S4, a;
c. preference extraction is performed on the compared feature data obtained in step S4, b, so as to obtain the user's specific preferences;
d. the user preference data extracted in step S4, c are stored in the terminal so that the user can conveniently know the specific situation.
Preferably, in step S1, c, the login password for the self-media software is a combination of letters and digits and is not fewer than eight characters.
Preferably, the personal information filled in by the user in step S1, e includes, but is not limited to, year and month of birth, gender, and education level.
Preferably, in step S2, b, the data information collection tasks include, but are not limited to, keyword extraction, closed-caption decoding, video feature extraction, music type extraction, and text information.
Preferably, the video information analyzed in step S3, c includes, but is not limited to, video type, video duration, video name, video BGM features, and video music type.
Preferably, in step S3, d, the user's historical video features are extracted from the user's historical video information to form the user's historical behavior feature vector, where the historical video information is the video information of a predetermined historical period, and the historical video feature vector is determined based on the word segments related to the user's historical videos and the number of the user's historical behaviors corresponding to each word segment.
Preferably, in step S4, c, the user's specific preferences correspond to historical preference modes: the probability of occurrence of the current video feature vector under each historical preference mode is calculated, and a determination sub-module determines each historical preference mode whose occurrence probability is greater than a threshold as a current preference mode of the specified user.
Preferably, the terminal storage in step S4, d is a storage-and-receiving unit module of the self-media platform server itself.
Embodiment one:
a user preference intelligent extraction method based on a self-media platform comprises the following steps:
S1, user registration:
a. the user downloads corresponding self-media software through an application store;
b. the user inputs the mobile phone number and receives the verification code to register and log in;
c. the user sets the login password of the self-media software;
d. the user logs in to the self-media system software by receiving a mobile phone verification code or by re-entering the login password;
e. the user fills in personal information and selects personal interests and hobbies, and the background of the self-media system collects and stores these data.
S2, user retrieval:
a. the user performs video retrieval through the search box of the self-media platform; after the retrieval is finished, the platform displays the retrieved related videos or text;
b. the self-media platform server collects data information on the content retrieved by the user in step S2, a;
c. video data analysis is performed on the video data information collected in step S2, b;
d. the video analysis data obtained in step S2, c are classified;
S3, video analysis:
a. each recommended video retrieved by the user in step S2, a is crawled, and the user's dwell time on one or more videos is collected;
b. the data acquired in step S3, a are recorded; at the same time, the number of videos or text items on which the user's dwell time is not less than 5 s is recorded, and those videos are logged;
c. the video or text information obtained in step S3, b is analyzed;
d. the user's historical video information over one month or half a year is recorded, thereby obtaining the historical video information, i.e. the crawled context information data;
Further, the mathematical description of the context-computation-based user preference acquisition model is preliminarily: M = {U, I, C, P}, U × I × C → P, where U represents user information, I represents object resource information, C represents context information, and P represents user preference. The model computation is mainly based on the user's historical behaviors and the contexts of those behaviors, acquired by the data acquisition layer at the bottom of the model. The user's historical behavior describes the user's use of an object resource (the current model takes the mobile network service as the object); the user's historical behavior context describes the context the user is in when using the object resource.
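The model M = {U, I, C, P}, U × I × C → P can be given a minimal data-model sketch; the `Behavior` record and the empirical preference estimator below are our illustrative assumptions, not the patent's implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Behavior:
    user: str      # U: user information
    item: str      # I: object resource information
    context: str   # C: context information, e.g. time of day

def preference(history, user, item, context):
    """P: empirical preference, estimated as the share of the user's past
    behaviors in this context that used the given object resource."""
    matching = [b for b in history if b.user == user and b.context == context]
    if not matching:
        return 0.0
    return sum(1 for b in matching if b.item == item) / len(matching)

history = [Behavior("u1", "sports", "evening"),
           Behavior("u1", "sports", "evening"),
           Behavior("u1", "news", "evening")]
p = preference(history, "u1", "sports", "evening")  # 2 of 3 evening behaviors
```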
S4, preference extraction:
a. the data information crawled in steps S1, c, S2, d and S3, d is uniformly sorted and classified;
b. feature comparison is performed on the data information classified in step S4, a;
c. preference extraction is performed on the compared feature data obtained in step S4, b, so as to obtain the user's specific preferences;
d. the user preference data extracted in step S4, c are stored in the terminal so that the user can conveniently know the specific situation.
Preferably, in step S1, c, the login password for the self-media software is a combination of letters and digits and is not fewer than eight characters.
Preferably, the personal information filled in by the user in step S1, e includes, but is not limited to, year and month of birth, gender, and education level.
Preferably, in step S2, b, the data information collection tasks include, but are not limited to, keyword extraction, closed-caption decoding, video feature extraction, music type extraction, and text information;
Further, when data information is collected, the titles and/or key attributes of the objects of each user's click information, collection information and successful-interaction information are extracted from each user's historical behavior information within a preset historical time period. Semantic word segmentation is then performed on this information: for example, according to the type of a self-media video, the object's keywords, modifiers, category words, model words and the like are retained, and other meaningless words are filtered out, yielding the word segments related to a user at a certain past behavior time point, i.e. the historical behavior features.
Preferably, the video information analyzed in step S3, c includes, but is not limited to, video type, video duration, video name, video BGM features, and video music type;
Considering that a video generally has a certain duration and includes multiple consecutive frames, the image sequences may be one group or several groups, determined according to the duration of the video segments. Each group of image sequences is taken from a different segment of the video, and the number of images in a sequence is fixed and can be set according to the actual situation. In addition, an animation or a moving picture containing no fewer than N frames can also be treated as a video. Specifically, for a single picture the number of images N it contains is 1; for a moving picture, the number of images it contains is generally smaller than the number of images a group of image sequences should contain. For example, if a group of image sequences contains 30 images and a moving picture contains 5 still images, a group of 30 images can be constructed by copying, frame interpolation, or similar means;
in addition, when image information loss occurs during the copying process, the copied image with image information loss may be further subjected to image restoration, or the copied image with problems may be discarded and copied again, so as to ensure the integrity of the image information as much as possible and reduce the influence of errors.
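The copy-based construction of a fixed-length image sequence described above (e.g. padding a 5-frame moving picture to a 30-image group) can be sketched as follows; the function name and cyclic-repetition strategy are our assumptions, and frame interpolation is the alternative the text mentions:

```python
def pad_sequence(frames, group_size=30):
    """Build a fixed-length image sequence from fewer frames by copying.

    'frames' stands in for decoded images; frames are repeated cyclically
    until the group contains group_size images.
    """
    if not frames:
        raise ValueError("need at least one frame")
    return [frames[i % len(frames)] for i in range(group_size)]

# A 5-frame moving picture padded to a 30-image group:
seq = pad_sequence(["f0", "f1", "f2", "f3", "f4"], group_size=30)
```

A single picture (N = 1) is handled by the same routine: its one frame is simply copied group_size times.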
Preferably, in step S3, d, the user's historical video features are extracted from the user's historical video information to form the user's historical behavior feature vector, where the historical video information is the video information of a predetermined historical period, and the historical video feature vector is determined based on the word segments related to the user's historical videos and the number of the user's historical behaviors corresponding to each word segment.
Preferably, in step S4, c, the user's specific preferences correspond to historical preference modes: the probability of occurrence of the current video feature vector under each historical preference mode is calculated, and a determination sub-module determines each historical preference mode whose occurrence probability is greater than a threshold as a current preference mode of the specified user.
Preferably, the terminal storage in step S4, d is a storage-and-receiving unit module of the self-media platform server itself.
Embodiment two:
a user preference intelligent extraction method based on a self-media platform comprises the following steps:
S1, user registration:
a. the user downloads corresponding self-media software through an application store;
b. the user inputs the mobile phone number and receives the verification code to register and log in;
c. the user sets the login password of the self-media software;
d. the user logs in to the self-media system software by receiving a mobile phone verification code or by re-entering the login password;
e. the user fills in personal information and selects personal interests and hobbies, and the background of the self-media system collects and stores these data.
S2, user retrieval:
a. the user performs video retrieval through the search box of the self-media platform; after the retrieval is finished, the platform displays the retrieved related videos or text;
b. the self-media platform server collects data information on the content retrieved by the user in step S2, a;
c. video data analysis is performed on the video data information collected in step S2, b;
d. the video analysis data obtained in step S2, c are classified;
S3, video analysis:
a. each recommended video retrieved by the user in step S2, a is crawled, and the user's dwell time on one or more videos is collected;
b. the data acquired in step S3, a are recorded; at the same time, the number of videos or text items on which the user's dwell time is not less than 5 s is recorded, and those videos are logged;
c. the video or text information obtained in step S3, b is analyzed;
d. the user's historical video information over one month or half a year is recorded, thereby obtaining the historical video information, i.e. the crawled context information data;
Further, the mathematical description of the context-computation-based user preference acquisition model is preliminarily: M = {U, I, C, P}, U × I × C → P, where U represents user information, I represents object resource information, C represents context information, and P represents user preference. The model computation is mainly based on the user's historical behaviors and the contexts of those behaviors, acquired by the data acquisition layer at the bottom of the model. The user's historical behavior describes the user's use of an object resource (the current model takes the mobile network service as the object); the user's historical behavior context describes the context the user is in when using the object resource.
Furthermore, the individual user's interest degree is extracted by computing the user's historical behavior context, and finally the two kinds of data are fused to extract a more accurate individual user preference, with the following formulas:
Geo' = Geo - (Geo - (Geo ∩ Pre));
Pre' = Pre - (Pre - (Geo ∩ Pre));
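Note that both formulas algebraically reduce to the intersection Geo ∩ Pre, i.e. each signal retains only the elements supported by the other. Treating Geo and Pre as sets (our assumption; the patent does not fix their representation), a literal implementation is:

```python
def fuse(geo: set, pre: set):
    """Literal implementation of Geo' and Pre' from the formulas above."""
    geo_fused = geo - (geo - (geo & pre))
    pre_fused = pre - (pre - (geo & pre))
    return geo_fused, pre_fused

geo = {"hiking", "food", "music"}   # illustrative context-derived interests
pre = {"food", "music", "movies"}   # illustrative behavior-derived preferences
geo2, pre2 = fuse(geo, pre)
# both reduce to the common elements {"food", "music"}
```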
S4, preference extraction:
a. the data information crawled in steps S1, c, S2, d and S3, d is uniformly sorted and classified;
b. feature comparison is performed on the data information classified in step S4, a;
c. preference extraction is performed on the compared feature data obtained in step S4, b, so as to obtain the user's specific preferences;
d. the user preference data extracted in step S4, c are stored in the terminal so that the user can conveniently know the specific situation.
Preferably, in step S1, c, the login password for the self-media software is a combination of letters and digits and is not fewer than eight characters.
Preferably, the personal information filled in by the user in step S1, e includes, but is not limited to, year and month of birth, gender, and education level.
Preferably, in step S2, b, the data information collection tasks include, but are not limited to, keyword extraction, closed-caption decoding, video feature extraction, music type extraction, and text information;
Further, when data information is collected, the titles and/or key attributes of the objects of each user's click information, collection information and successful-interaction information are extracted from each user's historical behavior information within a preset historical time period. Semantic word segmentation is then performed on this information: for example, according to the type of a self-media video, the object's keywords, modifiers, category words, model words and the like are retained, and other meaningless words are filtered out, yielding the word segments related to a user at a certain past behavior time point, i.e. the historical behavior features;
For example: the user's historical behavior information within 360 days is extracted, where the user's historical behaviors can include, but are not limited to, video behaviors such as double-clicking and collecting. All behavior information in the historical behavior information is converted into clicks; for example, collecting a video once is converted into 40 clicks, and double-clicking a video once is converted into 20 clicks. The clicks corresponding to the word segments related to each user behavior are then counted and recorded as wi = (ti, numi), with i a natural number, where ti is the i-th word segment related to the user behavior and numi is the user click count corresponding to that segment. Taking one day as the calculation period of user behavior time points, the user's historical behavior on a given day is recorded as h, and the historical behavior K days earlier as hk, each including the word segments related to the user's behavior on that day and the clicks corresponding to each segment, which can be expressed as: h = {w1, w2, w3, ..., wi}.
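The click conversion in this example (collecting = 40 clicks, double-clicking = 20 clicks) can be sketched as follows; the input record layout and function name are illustrative assumptions:

```python
from collections import Counter

# Click weights taken from the example above; a plain click counts as 1.
CLICK_WEIGHTS = {"collect": 40, "double_click": 20, "click": 1}

def daily_history(behaviors):
    """behaviors: list of (word_segment, behavior_type) pairs for one day.

    Returns h = {w1, ..., wi} as sorted (ti, numi) pairs, where numi is the
    total click count converted from that day's behaviors for segment ti.
    """
    counts = Counter()
    for token, kind in behaviors:
        counts[token] += CLICK_WEIGHTS[kind]
    return sorted(counts.items())

day = [("football", "collect"), ("football", "double_click"), ("cooking", "click")]
h = daily_history(day)  # football: 40 + 20 = 60 clicks; cooking: 1 click
```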
Preferably, the video information analyzed in step S3, c includes, but is not limited to, video type, video duration, video name, video BGM feature, video music type;
considering that a video generally has a certain duration and comprises multiple consecutive image frames, the image sequences may consist of one group or several groups, determined according to the duration of each video segment. Each group of image sequences is taken from a different segment of the video; the number of images per sequence is fixed and can be set according to the actual situation. In addition, an animation or animated picture containing no fewer than N frames of images may also be regarded as a video. Specifically, for a single still picture, the number of images N it contains is 1. An animated picture generally contains fewer images than a group of image sequences requires; for example, if a group of image sequences comprises 30 images and the animated picture contains a group of 5 still images, a group of 30 images can be constructed by copying, frame interpolation, and similar techniques;
in addition, if image information is lost during copying, the defective copy may be repaired through image restoration, or discarded and copied again, so as to preserve the integrity of the image information as far as possible and reduce the influence of errors.
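The padding-by-copying step (expanding, say, 5 animated-picture frames to a 30-image sequence) can be sketched as follows. Frames are represented here as plain lists of pixel values purely for illustration; a real system would hold decoded image arrays and might use frame interpolation instead of cyclic copying.

```python
SEQUENCE_LENGTH = 30  # images per group of image sequences, per the example

def pad_by_copying(frames, target=SEQUENCE_LENGTH):
    """Repeat the source frames cyclically, copying each one,
    until the sequence holds `target` images."""
    return [list(frames[i % len(frames)]) for i in range(target)]

# 5 source frames from an animated picture, expanded to a 30-image sequence
seq = pad_by_copying([[10], [20], [30], [40], [50]])
print(len(seq))  # 30
```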
Preferably, in step S3, d, historical video features of the user are extracted from the user's historical video information to form the user's historical behavior feature vector, where the historical video information is the video information of a predetermined historical period, and the historical video feature vector is determined from the segmented words related to the user's historical videos and the number of historical user behaviors corresponding to each word.
Preferably, in step S4, c, the user's specific preferences correspond to one or more historical preference modes; the occurrence probability of the current video feature vector under each historical preference mode is calculated, and a determining submodule determines each historical preference mode whose occurrence probability is greater than a threshold as a current preference mode of the specified user.
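The threshold-based selection of current preference modes can be sketched as below. The patent does not specify how the "occurrence probability" is computed, so cosine similarity is used here as an assumed stand-in score; the mode names, vectors, and threshold are likewise illustrative.

```python
import math

def cosine(a, b):
    """Cosine similarity, used here as a stand-in for the occurrence
    probability of the current feature vector under a preference mode."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def current_preference_modes(current_vec, modes, threshold=0.8):
    """Return every historical preference mode whose score for the
    current video feature vector exceeds the threshold."""
    return [name for name, vec in modes.items()
            if cosine(current_vec, vec) > threshold]

modes = {"gaming": [1.0, 0.1, 0.0], "cooking": [0.0, 1.0, 0.2]}
picked = current_preference_modes([0.9, 0.2, 0.0], modes)
print(picked)  # only the mode scoring above 0.8 is kept
```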
Preferably, the terminal data in step S4, d are stored in a storage-and-receiving unit module of the self-media platform server.
The method uses the self-media platform's own data collection and data analysis: historical behavior feature vectors are formed from each user's historical video features, one or more historical preference modes are obtained, the specified user's current behavior feature vector is determined from the user's historical video feature vector and real-time video feature vector, and the current preference mode is determined from the pre-obtained historical preference modes. The current preference of each user can thus be determined online in real time, achieving the technical effect of intelligent preference extraction; this effectively helps the self-media platform recommend preferred videos to each user, attract more user traffic, and earn greater economic returns in line with market value.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (8)

1. A user preference intelligent extraction method based on a self-media platform, characterized by comprising the following steps:
s1, registering the user;
a. the user downloads corresponding self-media software through an application store;
b. the user inputs the mobile phone number and receives the verification code to register and log in;
c. the user sets the login password of the self-media software;
d. the user logs in to the self-media system software by receiving a mobile phone verification code or re-entering the login password.
e. The user fills in personal information and selects personal interests and hobbies, and data acquisition and storage of a background are carried out from the media system.
S2, the user searches:
a. the user carries out video retrieval through a retrieval item frame of the media platform, and after the retrieval is finished, relevant retrieved videos or characters are displayed by the media platform;
b. and collecting data information from the media platform server for the data retrieved by the user in the step S2, a.
c. Performing video data analysis on the video data information collected in the step S2, b;
d. classifying the video analysis data obtained in the step S2, c;
s3, video analysis;
a. crawling each recommended video retrieved by the user in step S2, a, and collecting the user's dwell time on one or more videos;
b. recording the data acquired in step S3, a, and recording the number of videos or texts on which the user's dwell time is not less than 5 s, together with those videos;
c. analyzing the video or text information obtained in the step S3, b;
d. recording the user's historical video information over one month or half a year to obtain the historical video information, namely the crawled context information data;
s4, preference extraction;
a. uniformly sorting and classifying the data information crawled in steps S1, c, S2, d and S3, d;
b. comparing the characteristics of the data information classified in the step S4, a;
c. performing preference extraction on the comparative feature data obtained in the step S4, b so as to obtain the specific preference and preference of the user;
d. storing the user preference data extracted in step S4, c in the terminal, so that the user can review the details.
2. The intelligent extraction method for the user preference based on the self-media platform as claimed in claim 1, wherein: in steps S1, c, the self-media software login password is a combination of English letters and digits and is no fewer than eight characters long.
3. The intelligent extraction method for the user preference based on the self-media platform as claimed in claim 1, wherein: the personal information filled in by the user in steps S1, e includes, but is not limited to, year and month of birth, gender, and education level.
4. The intelligent extraction method for the user preference based on the self-media platform as claimed in claim 1, wherein: in step S2, b, the data information collection operation includes, but is not limited to, keyword extraction, closed caption decoding, video feature extraction, music type extraction, and text information.
5. The intelligent extraction method for the user preference based on the self-media platform as claimed in claim 1, wherein: the video information analyzed in step S3, c includes, but is not limited to, video type, video duration, video name, video BGM features, and video music type.
6. The intelligent extraction method for the user preference based on the self-media platform as claimed in claim 1, wherein: in step S3, d, historical video features of the user are extracted from the user's historical video information to form the user's historical behavior feature vector, where the historical video information is the video information of a predetermined historical period, and the historical video feature vector is determined from the segmented words related to the user's historical videos and the number of historical user behaviors corresponding to each word.
7. The intelligent extraction method for the user preference based on the self-media platform as claimed in claim 1, wherein: in step S4, c, the user's specific preferences correspond to one or more historical preference modes; the occurrence probability of the current video feature vector under each historical preference mode is calculated, and a determining submodule determines each historical preference mode whose occurrence probability is greater than a threshold as a current preference mode of the specified user.
8. The intelligent extraction method for the user preference based on the self-media platform as claimed in claim 1, wherein: the terminal data in step S4, d are stored in a storage-and-receiving unit module of the self-media platform server.
CN202111080027.2A 2021-09-15 2021-09-15 User preference intelligent extraction method based on self-media platform Pending CN113836326A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111080027.2A CN113836326A (en) 2021-09-15 2021-09-15 User preference intelligent extraction method based on self-media platform


Publications (1)

Publication Number Publication Date
CN113836326A true CN113836326A (en) 2021-12-24

Family

ID=78959418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111080027.2A Pending CN113836326A (en) 2021-09-15 2021-09-15 User preference intelligent extraction method based on self-media platform

Country Status (1)

Country Link
CN (1) CN113836326A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination