CN111368209B - Information recommendation method and device, electronic equipment and computer-readable storage medium


Info

Publication number
CN111368209B
CN111368209B
Authority
CN
China
Prior art keywords
image
category
determining
preset
categories
Prior art date
Legal status
Active
Application number
CN202010219926.5A
Other languages
Chinese (zh)
Other versions
CN111368209A (en)
Inventor
Guo Guanjun (郭冠军)
Current Assignee
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202010219926.5A
Publication of CN111368209A
Application granted
Publication of CN111368209B
Legal status: Active

Classifications

    • G06F16/9535 Search customisation based on user profiles and personalisation (retrieval from the web)
    • G06F16/535 Filtering based on additional data, e.g. user or group profiles (still image retrieval)
    • G06F16/55 Clustering; Classification (still image retrieval)
    • G06F18/24 Classification techniques (pattern recognition)
    • G06N3/08 Learning methods (neural networks)

Abstract

The embodiments of the disclosure relate to the technical field of information processing and disclose an information recommendation method, an information recommendation apparatus, an electronic device, and a computer-readable storage medium. The information recommendation method includes: acquiring the first predetermined image categories currently included in the terminal device and the ratio of each first predetermined image category, where each first predetermined image category is preset by the terminal device according to a category setting instruction of a user and/or preconfigured by the terminal device; determining preference information of the user according to the first predetermined image categories and their ratios; and determining a target object according to the preference information and recommending object information of the target object to the user. The method of the embodiments of the disclosure can quickly and accurately find target objects that interest the user, recommend information effectively, improve the accuracy of information recommendation, ensure that the recommended information has high reference value for the user, and improve user satisfaction.

Description

Information recommendation method and device, electronic equipment and computer-readable storage medium
Technical Field
The disclosed embodiments relate to the technical field of information processing, and in particular, to an information recommendation method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the continuing development of information technology, information recommendation has become an important part of current network applications. To spare users the time and effort of searching for information themselves, recommendations are usually generated so that users can quickly find interesting content in a large amount of information. For example, potentially interesting content is recommended while a user browses news, videos, or advertisements, and goods the user is likely to purchase are recommended while the user shops.
However, in the course of implementation, the inventor of the present disclosure found that although the related art provides many kinds of recommendation schemes, they mine the information a user may be interested in from the user's historical behavior alone and do not consider the influence of the pictures and/or videos in the user's album on the recommendation. As a result, the recommended information is relatively blind, the accuracy of information recommendation is low, and personalized recommendation is limited.
Disclosure of Invention
The purpose of the disclosed embodiments is to address at least one of the above-mentioned deficiencies, and it is intended to provide a summary in order to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In one aspect, an information recommendation method is provided, including:
acquiring the first predetermined image categories currently included in the terminal device and the ratio of each first predetermined image category, where each first predetermined image category is preset by the terminal device according to a category setting instruction of a user and/or preconfigured by the terminal device;
determining preference information of the user according to the first predetermined image categories and the ratios of the first predetermined image categories;
and determining the target object according to the preference information, and recommending the object information of the target object to the user.
In one aspect, an information recommendation apparatus is provided, including:
an acquisition module, configured to acquire the first predetermined image categories currently included in the terminal device and the ratio of each first predetermined image category, where each first predetermined image category is preset by the terminal device according to a category setting instruction of a user and/or preconfigured by the terminal device;
a first determining module, configured to determine preference information of the user according to the first predetermined image categories and their ratios;
and a second determining module, configured to determine a target object according to the preference information and recommend object information of the target object to the user.
In one aspect, an electronic device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the information recommendation method is implemented.
In one aspect, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the information recommendation method described above.
The information recommendation method provided by the embodiments of the disclosure determines the preference information of the user according to the first predetermined image categories currently included in the terminal device and the ratio of each category, and recommends object information of a target object to the user according to that preference information. It not only makes full use of the pictures and/or videos stored in the terminal device, but does so without acquiring their specific image content, so the value of the pictures and/or videos is fully realized while the user's image privacy is protected. Target objects that interest the user can thus be found quickly and accurately, information can be recommended effectively, the accuracy of information recommendation improves, the recommended information has high reference value for the user, the probability that the user views the corresponding information rises, and user satisfaction improves.
Additional aspects and advantages of embodiments of the disclosure will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of an information recommendation method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a basic structure of an information recommendation device according to an embodiment of the disclosure;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are used only to distinguish between devices, modules, or units; they neither require those devices, modules, or units to be different ones nor limit the order or interdependence of the functions they perform.
It is noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the embodiments of the present disclosure will be described in further detail below with reference to the accompanying drawings.
The information recommendation method, device, electronic device and computer storage medium provided by the embodiments of the present disclosure aim to solve the above technical problems in the prior art.
The following describes in detail the technical solutions of the embodiments of the present disclosure and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
One embodiment of the present disclosure provides an information recommendation method, which is performed by a computer device, where the computer device may be a terminal or a server. The terminal may be a desktop device or a mobile terminal. The servers may be individual physical servers, clusters of physical servers, or virtual servers.
As shown in fig. 1, the method includes:
step S110, obtaining each first predetermined image category and a ratio of each first predetermined image category currently included by the terminal device, where each first predetermined image category is preset by the terminal device according to a category setting instruction of a user and/or is preconfigured by the terminal device.
Specifically, the image categories preconfigured by the terminal device (i.e., the first predetermined image categories) may be built into the terminal device itself; that is, the terminal device may ship with a certain number of image categories. When the terminal device stores images through an application such as an album or gallery, the preconfigured first predetermined image categories are carried by that album or gallery application.
Specifically, the terminal device's built-in image categories may be preconfigured in advance by the developer of the terminal device, or by the developer responsible for an image-classification application on the terminal device; such preconfigured image categories are generally set by the developer according to the needs of most users. For example, the developer may preconfigure image categories Y1, Y2, and Y3 for the terminal device, i.e., the image categories carried by the terminal device are image categories Y1, Y2, and Y3.
Specifically, if the user finds the preconfigured first predetermined image categories unsatisfactory, or considers them unable to meet personalized needs while using the terminal device, the user may set corresponding image categories (i.e., the first predetermined image categories) in the terminal device through a category setting instruction, according to personal preference or need. In doing so, the user may keep some categories preconfigured by the terminal device, such as retaining one or several preconfigured categories the user likes (e.g., retaining image category Y2), or may choose not to use any preconfigured category, such as deleting all of them.
Correspondingly, the terminal device may receive the user's category setting instruction and set the corresponding image categories (i.e., the first predetermined image categories) according to it. In one example, if the image categories preconfigured on terminal device 1 are image category Y1, image category Y2, and image category Y3, user A may, while using terminal device 1, choose to retain image category Y2, delete image categories Y1 and Y3, and set image categories A, B, C, and D according to personal preference or need; terminal device 1 of user A then includes image categories Y2, A, B, C, and D.
In another example, if user A dislikes all of the preconfigured image categories Y1, Y2, and Y3, user A may choose to delete all three and set image categories A, B, C, and D according to personal preference or need; terminal device 1 of user A then includes image categories A, B, C, and D.
Specifically, the terminal device may classify the images it contains (e.g., pictures, videos) into the corresponding first predetermined image categories according to predetermined image classification rules, or classify newly acquired images as they arrive. After the terminal device has classified the images, a server or a plug-in in the terminal device can obtain, through information interaction with the terminal device, the first predetermined image categories currently included and the ratio of each category, laying the necessary foundation for the subsequent information recommendation to the user.
It should be noted that the first predetermined image categories include, but are not limited to, a clothing category, a lifestyle category, a landscape category, a gourmet food category, a home category, a culture and art category, a science category, a pet category, a children category, and so on.
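To make the interaction in step S110 concrete, the following illustrative sketch (not part of the disclosure) shows one plausible shape for the aggregate data a server or plug-in might receive; the JSON field names and the Python representation are assumptions. Only category names and ratios are transferred, never the images themselves:

import json

# Hypothetical example of the aggregate data acquired in step S110.
# Field names are illustrative assumptions, not the patent's protocol.
reported = json.loads("""
{
  "first_predetermined_image_categories": ["home", "food", "children", "pets"],
  "ratios": {"home": 0.08, "food": 0.20, "children": 0.32, "pets": 0.40}
}
""")

# The ratios of all first predetermined image categories sum to 1.
assert abs(sum(reported["ratios"].values()) - 1.0) < 1e-9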
Step S120, determining preference information of the user according to each first predetermined image category and the ratio of each first predetermined image category.
Specifically, the first predetermined image categories, whether preconfigured by the terminal device or preset according to the user's category setting instruction, are image categories that matter to the user and are often categories the user favors. In particular, the first predetermined image categories the user sets personally through category setting instructions reflect the user's preferences well, have very high reference value, can assist well in analyzing the user's preference information, and are very important reference information for that analysis.
Specifically, after the first predetermined image categories currently included in the terminal device and their ratios are acquired, the preference information of the terminal device's user (i.e., the user mentioned above) is analyzed from those categories and ratios. No specific image content of the pictures and/or videos stored in the terminal device needs to be acquired, so the user's image privacy is protected while the preference information can still be analyzed from the first predetermined image categories and their ratios. The pictures and/or videos stored in the terminal device are thus fully utilized and their value realized to the greatest extent, guaranteeing the subsequent recommendation of object information of target objects to the user.
And step S130, determining the target object according to the preference information, and recommending the object information of the target object to the user.
Specifically, after the preference information of the user is determined, the corresponding target object to be recommended may be determined according to that preference information, and its object information may then be recommended to the user. Target objects include, but are not limited to, articles, goods, news, videos, advertisements, and the like. For example, when the user's preference information is determined to be gourmet food, the target object to be recommended may be determined as a certain food or food material, whose object information is then recommended to the user; for another example, when the user's preference information is determined to be home furnishing, the target object may be determined as a home-related article (e.g., a sofa or a cabinet), whose object information is then recommended to the user.
The information recommendation method provided by the embodiments of the disclosure determines the preference information of the user according to the first predetermined image categories currently included in the terminal device and the ratio of each category, and recommends object information of a target object to the user according to that preference information. It not only makes full use of the pictures and/or videos stored in the terminal device, but does so without acquiring their specific image content, so the value of the pictures and/or videos is fully realized while the user's image privacy is protected. Target objects that interest the user can thus be found quickly and accurately, information can be recommended effectively, the accuracy of information recommendation improves, the recommended information has high reference value for the user, the probability that the user views the corresponding information rises, and user satisfaction improves.
The following describes the method of the embodiments of the present disclosure:
In one possible implementation, obtaining the ratio of each first predetermined image category includes: for each first predetermined image category, determining the ratio of a first number to a second number and taking that ratio as the category's share, where the first number is the number of images included in that first predetermined image category and the second number is the sum of the numbers of images included in all the first predetermined image categories.
Specifically, the terminal device may classify the images it contains (e.g., pictures, videos) into the corresponding first predetermined image categories according to predetermined image classification rules, or classify newly acquired images as they arrive. In one example, if the first predetermined image categories in the terminal device are image category A (e.g., home), image category B (e.g., food), image category C (e.g., children), and image category D (e.g., pets), and the terminal device currently contains pictures 1 through 25, the terminal device may, according to the predetermined image classification rule, classify pictures 1 and 2 (2 pictures in total) into image category A, pictures 3 through 7 (5 pictures in total) into image category B, pictures 8 through 15 (8 pictures in total) into image category C, and pictures 16 through 25 (10 pictures in total) into image category D.
Specifically, after classifying the images (e.g., pictures, videos) into the corresponding first predetermined image categories, the terminal device may calculate the ratio of each category according to the number of images it includes. The ratio of each first predetermined image category is the ratio between the number of images included in that category (the first number) and the sum of the numbers of images included in all the first predetermined image categories (the second number).
Based on the above example, the sum of the numbers of images included in the first predetermined image categories is 2 + 5 + 8 + 10 = 25. Image category A includes 2 images, so its ratio is 2/25 = 8%; image category B includes 5 images, so its ratio is 5/25 = 20%; image category C includes 8 images, so its ratio is 8/25 = 32%; and image category D includes 10 images, so its ratio is 10/25 = 40%.
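The ratio computation just described can be sketched in a few lines of Python (an illustrative sketch, not part of the disclosure; the function name is an assumption), using the worked numbers from the example:

from typing import Dict

def category_ratios(counts: Dict[str, int]) -> Dict[str, float]:
    # First number: images in one first predetermined image category.
    # Second number: sum of images across all first predetermined image categories.
    second_number = sum(counts.values())
    return {name: first_number / second_number
            for name, first_number in counts.items()}

print(category_ratios({"A": 2, "B": 5, "C": 8, "D": 10}))
# -> {'A': 0.08, 'B': 0.2, 'C': 0.32, 'D': 0.4}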
In one possible implementation, determining the preference information of the user according to the first predetermined image categories and their ratios includes: determining the maximum ratio among the ratios of the first predetermined image categories and the first predetermined image category corresponding to it; obtaining first regression information from the maximum ratio and its corresponding first predetermined image category through a pre-trained neural network regression model; and determining the preference information of the user according to the first regression information.
Specifically, after the terminal device classifies the images it includes into the corresponding first predetermined image categories according to the predetermined image classification rule and calculates the ratio of each category, the preference information of the user may be determined according to the maximum of those ratios. In doing so, the following operations may be performed:
first, a maximum ratio is determined from ratios of the first predetermined image categories, and the first predetermined image category corresponding to the maximum ratio is determined. Based on the above example, the image class a has a duty of 8%, the image class B has a duty of 5/25-20%, the image class C has a duty of 8/25-32%, and the image class D has a duty of 10/25-40%, it can be seen that: the maximum value (i.e., the maximum ratio) of the ratios of the first predetermined image categories is 40%, and the first predetermined image category corresponding to the ratio of 40% is the image category D.
Then, after the maximum occupation ratio and the first predetermined image category corresponding to the maximum occupation ratio are determined, the preference information of the user can be determined according to the maximum occupation ratio and the first predetermined image category corresponding to the maximum occupation ratio. Based on the above example, the maximum ratio is 40%, and the first predetermined image category corresponding to the maximum ratio of 40% is the image category D, so the preference information of the user can be determined according to the maximum ratio of 40% and the image category D.
During the process of determining the preference information of the user according to the maximum occupation ratio and the first predetermined image category corresponding to the maximum occupation ratio, corresponding regression information (recorded as first regression information) can be obtained through a pre-trained neural network regression model according to the maximum occupation ratio and the first predetermined image category corresponding to the maximum occupation ratio. In practical application, the maximum ratio and the first predetermined image category corresponding to the maximum ratio may be input to a pre-trained neural network regression model, so as to perform regression analysis through the pre-trained neural network regression model, and regress corresponding regression information. In the above example, the maximum ratio is 40%, and the first predetermined image category corresponding to the maximum ratio of 40% is the image category D, so that the maximum ratio of 40% and the image category D can be input into the pre-trained neural network regression model, and regression information related to the image category D can be regressed by performing regression analysis through the pre-trained neural network regression model.
Then, after the first regression information is obtained, the preference information of the user may be determined according to it; for example, the user's consumption preferences, news reading preferences, video watching preferences, and so on may be determined from the first regression information. In one example, when image category D is the children category, the regression may indicate that the user is a consumer inclined toward children, i.e., the user's consumption preferences are items related to children (e.g., apparel, food, toys); in another example, when image category D is the pet category, the regression may indicate that the user is a consumer inclined toward pets, i.e., the user's consumption preferences are items related to pets (e.g., pet food, pet accessories, pet footwear).
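Since the patent does not fix the architecture of the neural network regression model or how its inputs are encoded, the following is only a minimal sketch under assumed choices (a one-hot category encoding concatenated with the ratio, and a small two-layer network):

import torch
import torch.nn as nn

CATEGORIES = ["A", "B", "C", "D"]   # first predetermined image categories
NUM_PREFERENCES = 8                 # assumed size of the preference space

class PreferenceRegressor(nn.Module):
    # Toy stand-in for the pre-trained neural network regression model.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(len(CATEGORIES) + 1, 16),  # one-hot category + its ratio
            nn.ReLU(),
            nn.Linear(16, NUM_PREFERENCES),
        )

    def forward(self, x):
        return self.net(x)

ratios = {"A": 0.08, "B": 0.20, "C": 0.32, "D": 0.40}
top_category = max(ratios, key=ratios.get)          # image category D
one_hot = torch.zeros(len(CATEGORIES))
one_hot[CATEGORIES.index(top_category)] = 1.0
features = torch.cat([one_hot, torch.tensor([ratios[top_category]])])

model = PreferenceRegressor()               # pre-trained offline in practice
first_regression_info = model(features)     # basis for the user's preference information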
In one possible implementation manner, determining preference information of a user according to each first predetermined image category and a ratio of each first predetermined image category includes: determining the maximum N ratios from the ratios of the first preset image categories, and determining the first preset image categories corresponding to the maximum N ratios respectively, wherein N is an integer greater than 1; obtaining second regression information according to the maximum N ratios and the first preset image categories respectively corresponding to the maximum N ratios through a pre-trained neural network regression model; and determining the preference information of the user according to the second regression information.
Specifically, after the terminal device classifies the images it includes into the corresponding first predetermined image categories according to the predetermined image classification rule and calculates the ratio of each category, the largest N ratios (for example, N = 2, N = 3, N = 5, and so on) may be determined, and the preference information of the user may then be determined from them. In determining the preference information of the user from the largest N ratios, the following operations may be performed:
First, the largest N ratios among the ratios of the first predetermined image categories are determined, together with their corresponding first predetermined image categories. If N = 2, then based on the ratios of image categories A, B, C, and D in the above example, the largest N ratios are 40% and 32%, with 40% corresponding to image category D and 32% corresponding to image category C.
Then, the preference information of the user is determined according to the largest N ratios and their corresponding first predetermined image categories. Based on the above example, the preference information of the user may be determined from the 32% ratio, the 40% ratio, image category C, and image category D.
In the process of determining the preference information of the user according to the maximum N ratios and the first predetermined image categories respectively corresponding to the maximum N ratios, the first predetermined image categories respectively corresponding to the maximum N ratios and the maximum N ratios can be input into the pre-trained neural network regression model, so that regression analysis is performed through the pre-trained neural network regression model, and the corresponding regression information is regressed. Based on the above example, the ratio of 40%, the ratio of 32%, the image type C, and the image type D may be input to a pre-trained neural network regression model, and regression analysis may be performed by the pre-trained neural network regression model to obtain regression information related to the image type C and the image type D.
Then, after the second regression information is obtained, the preference information of the user can be determined according to it. In one example, when image category C is the pet category and image category D is the children category, it can be concluded that the user is inclined toward both children and pets, i.e., the user's consumption preferences include items related to children (e.g., apparel, food, toys), items related to pets (e.g., pet food, pet accessories, pet footwear), and items combining children and pets (e.g., items that prevent pets from biting children).
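The top-N variant differs only in how the regression model's input is assembled; extending the previous sketch (the concatenated encoding is again an assumption):

import heapq
import torch

CATEGORIES = ["A", "B", "C", "D"]
ratios = {"A": 0.08, "B": 0.20, "C": 0.32, "D": 0.40}
N = 2

# Largest N ratios and their first predetermined image categories.
top_n = heapq.nlargest(N, ratios.items(), key=lambda kv: kv[1])
# -> [('D', 0.4), ('C', 0.32)]

# Concatenate a (one-hot, ratio) pair for each of the N categories before
# handing the result to the regression model from the previous sketch.
parts = []
for name, ratio in top_n:
    one_hot = torch.zeros(len(CATEGORIES))
    one_hot[CATEGORIES.index(name)] = 1.0
    parts.append(torch.cat([one_hot, torch.tensor([ratio])]))
features = torch.cat(parts)   # length N * (len(CATEGORIES) + 1)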
In a possible implementation, before the first predetermined image categories currently included in the terminal device and their ratios are obtained, the method further includes: determining the first predetermined image category corresponding to each image in the terminal device. This determination includes: determining the first image features corresponding to each image; determining, from those first image features, the first category weights of each image for each first predetermined image category; and determining the first predetermined image category corresponding to each image according to the first category weights.
Specifically, before acquiring the first predetermined image categories currently included in the terminal device and the ratios respectively corresponding to the first predetermined image categories, the first predetermined image categories respectively corresponding to the images in the terminal device need to be determined. The images in the terminal device are to-be-classified images, and the to-be-classified images may be one or more images acquired by the terminal device through an image acquisition device carried by the terminal device, may also be one or more images acquired by the terminal device accessing the internet or social media, may also be one or more images acquired by performing information interaction with other terminal devices, and certainly may also be one or more images acquired through other channels, which is not limited in the embodiment of the present disclosure.
In general, images taken in different shooting scenes often have different image features, and images containing different target objects also often have different image features, so one image may have multiple image features (i.e., the first image features described above); for example, image 1 has image features 1 and 2, and image 2 has image features 3, 4, and 5.
The multiple image features of an image can serve as the basis for classifying it; that is, the image is classified according to its image features. It is therefore necessary to determine the image features (i.e., the first image features) corresponding to each image to be classified, providing the precondition for accurate subsequent classification.
Specifically, after the first image features corresponding to the respective images are determined, the category weights of the respective images corresponding to the respective first predetermined image categories may be determined according to the respective first image features corresponding to the respective images. Wherein the sum of the class weights of each image corresponding to the first predetermined image classes is a predetermined value (e.g., 1, 2, 3, etc.).
In one example, if an image 1 has image features 1 and 2, and the terminal device 1 includes an image type a, an image type B, an image type C, and an image type D, a category weight of the image 1 corresponding to the image type a (e.g., a1), a category weight of the image 1 corresponding to the image type B (e.g., B1), a category weight of the image 1 corresponding to the image type C (e.g., C1), and a category weight of the image 1 corresponding to the image type D (e.g., D1) may be determined according to the image features 1 and 2, wherein the sum of a1, B1, C1, and D1 is a predetermined value (e.g., 1, 2, 3, etc.).
Specifically, after the class weights of the images corresponding to the first predetermined image classes are determined, the first predetermined image classes corresponding to the images can be determined according to the class weights of the images corresponding to the first predetermined image classes, so as to classify the images.
In one example, after determining that image 1 has class weight A1 for image category A, class weight B1 for image category B, class weight C1 for image category C, and class weight D1 for image category D, the image category corresponding to image 1 may be determined from A1, B1, C1, and D1, specifically according to their maximum. For example, if A1 is the maximum of A1, B1, C1, and D1, image category A is determined as the image category of image 1, i.e., image 1 is classified into image category A.
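As an illustrative sketch of this per-image step (not the patent's implementation), a classifier can be made to produce class weights that sum to the predetermined value 1 by applying a softmax, after which the category with the maximum weight is chosen:

import torch

CATEGORIES = ["A", "B", "C", "D"]

def classify(first_image_features: torch.Tensor, classifier: torch.nn.Module) -> str:
    # Softmax makes the class weights (A1, B1, C1, D1) sum to 1, matching
    # the predetermined-value constraint in the text; an assumed choice.
    class_weights = torch.softmax(classifier(first_image_features), dim=-1)
    return CATEGORIES[int(class_weights.argmax())]

# Usage with a toy linear classifier over 128-dimensional image features:
clf = torch.nn.Linear(128, len(CATEGORIES))
print(classify(torch.randn(128), clf))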
In a possible implementation manner, the process of determining the first predetermined image categories corresponding to the images in the terminal device is implemented by a classification model, where the classification model includes a classification network and a pre-trained feature extraction network, the pre-trained feature extraction network is used to determine the first image features corresponding to the images, and the classification network is used to determine the first predetermined image categories corresponding to the images according to the first category weights corresponding to the first predetermined image categories.
Specifically, since the first predetermined image categories preconfigured by the terminal device are set in advance by the developer, a classification model for classifying images into them can be trained in advance. When the terminal device classifies images into the preconfigured first predetermined image categories through the classification model, it can therefore directly use the offline-trained classification network in the model, without training that classification network again.
Specifically, when classifying images into user-defined first predetermined image categories through the classification model, the terminal device needs to train the classification network in the model. The training process may be: first, obtain a first predetermined number of first sample images and determine the first predetermined image category corresponding to each, which amounts to screening a certain number of sample images for each first predetermined image category; second, determine the second image features corresponding to each first sample image through the pre-trained feature extraction network; then, train the classification network based on the second image features and the corresponding first predetermined image categories of the first sample images until the classification network satisfies a first predetermined condition.
Specifically, the pre-trained feature extraction network may be pre-trained offline in a server or other devices, wherein the feature extraction network may be trained offline through a certain number of sample images belonging to different image categories.
In one example, to ensure that the feature extraction network has strong feature extraction capability and extracts image features accurately, a large number of image categories (for example, 2000) may be preset, and a certain number of sample images (for example, tens of thousands) screened out for each category. The screened sample images are then input into the feature extraction network, which extracts the effective features of each sample image, and the image category of each sample image is determined based on the extracted effective features. When the image category of a sample image cannot be correctly determined from its extracted features, the extracted features have little or no reference value, and the feature extraction network must be trained further, until the image category of every sample image can be correctly identified from the features the network extracts.
It should be noted that the above-mentioned feature extraction network may be an intermediate layer of a convolutional neural network, and the image class corresponding to each sample image may be an output result of an output layer of the convolutional neural network.
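One common way to realize such a feature extraction network, consistent with the note above but only an assumed realization, is to take a convolutional neural network trained offline and drop its output layer, so that an intermediate layer's activations serve as the image features:

import torch
import torchvision.models as models

# Assumed realization: a ResNet-18 backbone whose final classification layer
# is removed; the penultimate layer acts as the feature extraction network.
# (The `weights=None` argument is for recent torchvision versions; in
# practice the offline-trained weights would be loaded instead.)
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()          # keep the 512-dim intermediate features
backbone.eval()

with torch.no_grad():
    image = torch.randn(1, 3, 224, 224)    # one preprocessed input image
    second_image_features = backbone(image)  # shape (1, 512)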
Specifically, the classification network is trained by the terminal device based on a first predetermined number of sample images (denoted first sample images) and the user-defined first predetermined image categories, where a certain number of sample images may be screened for each first predetermined image category. For example, when one first predetermined image category is cats, a certain number of pictures containing various cats (for example, 100) may be screened as sample images to train the classification network; when another is dogs, a certain number of pictures containing various dogs (for example, 110) may be screened as sample images to train the classification network. The other first predetermined image categories are handled in the same way, which is not repeated here.
The screened certain number of pictures including various cats and the screened certain number of pictures including various dogs are the first predetermined number of sample images, in other words, the first predetermined number is the sum of the number of first sample images screened respectively for each first predetermined image category, and based on the above example, the first predetermined number is 210 (i.e., the sum of 100 and 110).
In a specific example, suppose the first predetermined image categories are cats and dogs. The training process of the classification network may be: first, obtain a certain number of sample images about cats (for example, 100) and label their first predetermined image category as cat, i.e., the 100 cat sample images are labeled cat; and obtain a certain number of sample images about dogs (for example, 110) and label their first predetermined image category as dog, i.e., the 110 dog sample images are labeled dog. Then, determine through the pre-trained feature extraction network the image features (denoted second image features) corresponding to the 100 cat sample images and to the 110 dog sample images. Then, train the classification network on the image features and category labels of the 100 cat sample images, and on the image features and category labels of the 110 dog sample images, until the classification network satisfies the first predetermined condition.
Specifically, when the classification network is trained based on the second image features and the first predetermined image categories of the first sample images, the second image features of each first sample image are input into the classification network, and the weights of that image for each first predetermined image category are trained, so that the maximum of the class weights produced by the trained classification network is the weight of the first predetermined image category that the first sample image is labeled with.
Generally, after the image features of a sample image are input into the classification network, a class weight for each first predetermined image category is obtained, and the classification network assigns the sample image to the category with the maximum class weight: for example, when the category with the maximum weight is dog, the sample image is classified into the dog category, and when it is cat, into the cat category. When the category the classification network assigns to a sample image differs from the sample image's category label, the classification result is in error, and the classification network must be trained further, for example by adjusting its parameters, until the maximum class weight produced by the trained network is the weight of the category matching the sample image's label.
In one example, if the first predetermined image categories are cat, dog, cup, and shoe, respectively, the sample image is a picture of the cup (denoted as image S1), the category label of image S1 is the cup, and image S1 is one of the first sample images, then the image features of image S1 (denoted as second image features) may be input to the classification network to train the classification network through image S1. Wherein the classification network derives class weights for the image S1 corresponding to four first predetermined image classes of cat, dog, cup and shoe, and the classification network classifies the image S1 based on the maximum of the derived class weights, i.e., the image S1 is classified into the first predetermined image class corresponding to the maximum of the class weights, the image S1 is classified as the first predetermined image class of cat when the class weight of the image S1 corresponding to the first predetermined image class of cat is determined to be the maximum, and the image S1 is classified as the first predetermined image class of shoe when the class weight of the image S1 corresponding to the first predetermined image class of shoe is determined to be the maximum.
When the classification network wrongly classifies the image S1 as the first predetermined image category of cats, it indicates that the classification result of the classification network is extremely unreliable, and the training of the classification network is not finished, and the training of the classification network needs to be continued until the classification network correctly classifies the image S1 as the first predetermined image category of cups. In other words, during the training of the classification network through the image S1, the respective parameters of the classification network need to be continuously adjusted so that the maximum value among the weights obtained by the trained classification network is the class weight of the image S1 corresponding to the first predetermined image class of the water cup, for example, the class weight of the image S1 corresponding to the first predetermined image class of the water cup is 0.9, the class weight of the image corresponding to the first predetermined image class of the cat is 0.05, the class weight of the image corresponding to the first predetermined image class of the dog is 0.04, and the class weight of the image corresponding to the first predetermined image class of the shoe is 0.01.
For each of the first sample images, the classification network is likewise trained by inputting that image's second image features, until the maximum class weight produced by the trained classification network is the weight of the first predetermined image category corresponding to that image.
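A minimal training-loop sketch for the classification network over the frozen second image features; cross-entropy is an assumed loss, since the disclosure only requires that the labeled category end up with the largest class weight:

import torch
import torch.nn as nn

def train_classification_network(classifier: nn.Module,
                                 features: torch.Tensor,  # second image features
                                 labels: torch.Tensor,    # labeled categories
                                 epochs: int = 100) -> None:
    # Train so the labeled first predetermined image category receives
    # the maximum class weight for each first sample image.
    optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()   # assumed loss function
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(classifier(features), labels)
        loss.backward()
        optimizer.step()

# Toy usage: 210 first sample images (100 cats + 110 dogs), 512-dim features.
features = torch.randn(210, 512)
labels = torch.cat([torch.zeros(100, dtype=torch.long),   # 0 = cat
                    torch.ones(110, dtype=torch.long)])   # 1 = dog
train_classification_network(nn.Linear(512, 2), features, labels)

In practice the loop would run not for a fixed number of epochs but until the first predetermined condition discussed next is satisfied.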
Specifically, the classification network satisfying the first predetermined condition may be that the classification accuracy of the classification network is greater than or equal to a predetermined threshold, where the predetermined threshold may be 95%, 98%, or the like, or may be 0.95, 0.98, or the like, or may be in other feasible numerical forms, and the embodiments of the present disclosure do not limit this. For example, when 100 sample images of a cat are input to the classification model, if 95 or 98 images can be correctly classified as a cat in the first predetermined image category, the classification network may be considered to be trained.
In addition, the classification network satisfying the first predetermined condition may also be that the number of iterative training of the classification network is greater than or equal to a predetermined number, and the predetermined number may be 1000, 1500, 3000, and so on. When the predetermined number of times is 1000, if 100 sample images about a cat are input to the classification model, when the number of times of iterative training of the classification network in the classification model is greater than or equal to 1000, the classification network may be considered to have been trained.
In addition, the first predetermined condition may be that the loss function of the classification network converges, where the value of the loss function characterizes the difference between the category of a first sample image output by the classification network and the first predetermined image category labeled for that sample image. For example, when 100 sample images of cats are input into the classification model, if the loss obtained by the classification network over those 100 images has stabilized, the loss function may be considered to have converged.
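The three alternative forms of the first predetermined condition can be checked as follows (an illustrative sketch, treating the three as alternatives any one of which ends training, with the example values from the text):

from typing import List

def first_predetermined_condition(accuracy: float,
                                  iterations: int,
                                  recent_losses: List[float],
                                  acc_threshold: float = 0.95,   # e.g. 95 of 100 correct
                                  max_iterations: int = 1000,
                                  tolerance: float = 1e-4) -> bool:
    accuracy_met = accuracy >= acc_threshold
    iterations_met = iterations >= max_iterations
    # Loss convergence: the loss has stabilized over a recent window.
    loss_converged = (len(recent_losses) >= 10 and
                      max(recent_losses[-10:]) - min(recent_losses[-10:]) < tolerance)
    return accuracy_met or iterations_met or loss_converged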
In one possible implementation manner, when it is detected that the first predetermined image categories are respectively updated to the second predetermined image categories according to the category updating instruction of the user, the classification network is retrained, and the classification network is obtained by retraining in the following manner: acquiring a second predetermined number of second sample images, and determining a second predetermined image category corresponding to each second sample image; secondly, determining second image characteristics corresponding to the second sample images through a pre-trained characteristic extraction network; and then, training the classification network based on the second image characteristics corresponding to the second sample images and the second preset image categories corresponding to the second sample images until the classification network meets the first preset condition.
Specifically, when, while the acquired images are being classified by the trained classification network based on the first predetermined image categories, the user becomes dissatisfied with one or more of the originally set first predetermined image categories, or those categories no longer meet the user's needs, the user may reset a certain number of new image categories (denoted second predetermined image categories) according to current personalized needs. The user updates the first predetermined image categories in the terminal device to the second predetermined image categories through a corresponding category update instruction, and the terminal device performs the update accordingly.
After the terminal device updates each first predetermined image category to each second predetermined image category, the classification network needs to be retrained based on each updated second predetermined image category, so that the acquired images can be subsequently attributed to the corresponding second predetermined image categories based on the retrained classification network, and the acquired images are classified based on each second predetermined image category.
Specifically, the category update instruction includes any one of a category addition instruction, a category replacement instruction, and a category deletion instruction. In one case, when the category updating instruction is a category adding instruction, the first predetermined image categories are respectively updated to the second predetermined image categories according to the category updating instruction of the user, where the second predetermined image categories include the first predetermined image categories, and the second predetermined image categories are determined according to the category adding instruction of the user; that is, one or more predetermined image categories are newly added on the basis of the original first predetermined image categories, and the original first predetermined image categories and the newly added one or more predetermined image categories are used as second predetermined image categories, that is, the second predetermined image categories include the original first predetermined image categories.
In one example, if the original first predetermined image categories are image category A, image category B, image category C and image category D, and the newly added image categories are image category E and image category F, the updated image categories are image category A, image category B, image category C, image category D, image category E and image category F; that is, the image categories in the terminal device are updated from the original A, B, C and D to A, B, C, D, E and F.
In another case, when the category update instruction is the category replacement instruction, the first predetermined image categories are updated to the second predetermined image categories according to the category replacement instruction of the user, and the second predetermined image categories do not include the first predetermined image categories; that is, a certain number of second predetermined image categories, each different from every first predetermined image category, are set and used in place of the original first predetermined image categories.
In one example, if the original first predetermined image categories are image category A, image category B, image category C and image category D, and the second predetermined image categories determined by the user's category replacement instruction are image category E, image category F and image category G, then the original image categories A, B, C and D are replaced by image categories E, F and G; that is, the image categories in the terminal device are updated from the original A, B, C and D to E, F and G.
In another case, when the category update instruction is the category deletion instruction, the first predetermined image categories are updated to the second predetermined image categories according to the category deletion instruction of the user, and the first predetermined image categories include the second predetermined image categories; that is, one or more first predetermined image categories are deleted from the original first predetermined image categories, and the remaining first predetermined image categories serve as the second predetermined image categories, so the original first predetermined image categories include the second predetermined image categories.
In one example, if the original first predetermined image categories are image category A, image category B, image category C and image category D, and the category deletion instruction of the user specifies that image category C is to be deleted, then the remaining first predetermined image categories are image category A, image category B and image category D, which serve as the second predetermined image categories; that is, the image categories in the terminal device are updated from the original A, B, C and D to A, B and D.
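The three update modes can be pictured as simple list operations over the category set. The following sketch is illustrative only; the function name and instruction strings are hypothetical, not taken from the patent:

    def apply_category_update(first_categories, instruction, payload):
        # "add": second categories include the first categories.
        if instruction == "add":
            return list(first_categories) + [c for c in payload if c not in first_categories]
        # "replace": second categories share no member with the first categories.
        if instruction == "replace":
            return list(payload)
        # "delete": first categories include the second categories.
        if instruction == "delete":
            return [c for c in first_categories if c not in payload]
        raise ValueError("unknown category update instruction")

    # Example mirroring the text: deleting category C from [A, B, C, D] leaves [A, B, D].
    print(apply_category_update(["A", "B", "C", "D"], "delete", ["C"]))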
Specifically, since the predetermined image categories in the terminal device have been updated, the classification network in the classification model of the terminal device needs to be retrained. For this purpose, a batch of sample images matching the updated predetermined image categories (the second predetermined image categories) needs to be selected anew: for each second predetermined image category, a certain number of matching sample images are selected for subsequently retraining the classification network. For convenience of description, the sum of the sample images selected for all the second predetermined image categories is referred to as a second predetermined number of second sample images.
After the terminal device obtains a second predetermined number of second sample images, the classification network is retrained based on the second predetermined number of second sample images, and the retrained classification network is obtained.
Specifically, the process of the terminal device retraining the classification network based on the second predetermined number of second sample images may be: determining the second image features corresponding to the second sample images through the pre-trained feature extraction network, and training the classification network based on the second image features corresponding to the second sample images and the second predetermined image categories corresponding to the second sample images. The process is the same as the above-described process of training the classification network based on the first predetermined number of first sample images, and is not repeated here.
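As a sketch of this retraining under stated assumptions (PyTorch, a stand-in feature extractor, and illustrative dimensions and hyperparameters; the patent specifies none of these), only the classification network is trained while the pre-trained feature extraction network stays frozen:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    feature_dim, num_second_categories = 128, 6        # illustrative sizes

    # Stand-in for the pre-trained feature extraction network; its weights stay frozen.
    feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feature_dim))
    for p in feature_extractor.parameters():
        p.requires_grad = False

    # New classification network sized for the updated (second) predetermined categories.
    classifier = nn.Linear(feature_dim, num_second_categories)
    optimizer = torch.optim.SGD(classifier.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    images = torch.randn(16, 3, 32, 32)                      # dummy second sample images
    labels = torch.randint(0, num_second_categories, (16,))  # their second predetermined categories

    for _ in range(100):  # in practice, stop when the first predetermined condition is met
        logits = classifier(feature_extractor(images))       # second image features -> category weights
        loss = loss_fn(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()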
Specifically, after obtaining the retrained classification network in any of the above three situations, the terminal device classifies subsequently acquired images according to the retrained classification network. That is, after obtaining an image, the terminal device processes it through the classification model (comprising the pre-trained feature extraction network and the retrained classification network) as follows: determining the first image features corresponding to the image, determining second category weights of the image for each second predetermined image category according to those first image features, and then determining the second predetermined image category corresponding to the image according to the second category weights.
Specifically, after the second predetermined image categories corresponding to the images are determined, when information recommendation is performed for the user, the second predetermined image categories currently included in the terminal device and their proportions are obtained anew; the preference information of the user is then determined according to the second predetermined image categories and their proportions, the target object is determined according to the preference information, and the object information of the target object is recommended to the user.
Fig. 2 is a schematic structural diagram of an information recommendation apparatus according to another embodiment of the disclosure, as shown in fig. 2, the apparatus 200 may include an obtaining module 201, a first determining module 202, and a second determining module 203, where:
an obtaining module 201, configured to obtain each first predetermined image category and a ratio of each first predetermined image category currently included in a terminal device, where each first predetermined image category is preset by the terminal device according to a category setting instruction of a user and/or is preconfigured by the terminal device;
a first determining module 202, configured to determine preference information of a user according to each first predetermined image category and a ratio of each first predetermined image category;
and the second determining module 203 is configured to determine the target object according to the preference information, and recommend the object information of the target object to the user.
In a possible implementation manner, when obtaining the proportion of each first predetermined image category, the obtaining module is configured to determine, for each first predetermined image category, a ratio of a first number to a second number, and determine the ratio as the proportion of that first predetermined image category, where the first number is the number of images included in that first predetermined image category, and the second number is the sum of the numbers of images included in all the first predetermined image categories.
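A minimal sketch of this ratio computation (the function and label names are illustrative assumptions):

    from collections import Counter

    def category_ratios(image_categories):
        # image_categories: one predetermined category label per image in the device.
        counts = Counter(image_categories)   # first number, per category
        total = sum(counts.values())         # second number: sum over all categories
        return {cat: n / total for cat, n in counts.items()}

    print(category_ratios(["pets", "pets", "food", "scenery"]))
    # {'pets': 0.5, 'food': 0.25, 'scenery': 0.25}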
In a possible implementation manner, the first determining module is specifically configured to:
determining the maximum ratio from the ratios of the first preset image categories, and determining the first preset image category corresponding to the maximum ratio;
obtaining first regression information according to the maximum occupation ratio and a first preset image category corresponding to the maximum occupation ratio through a pre-trained neural network regression model;
and determining the preference information of the user according to the first regression information.
In a possible implementation manner, the first determining module is specifically configured to:
determining the maximum N ratios from the ratios of the first preset image categories, and determining the first preset image categories corresponding to the maximum N ratios respectively, wherein N is an integer greater than 1;
obtaining second regression information according to the maximum N ratios and the first preset image categories respectively corresponding to the maximum N ratios through a pre-trained neural network regression model;
and determining the preference information of the user according to the second regression information.
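The selection of the largest N ratios can be sketched as follows; the regression model here is a hypothetical stand-in, since the patent does not specify its internal form:

    def top_n_preferences(ratios, n, regression_model):
        # Pick the N categories with the largest proportions and feed them,
        # together with those proportions, to the regression model.
        top = sorted(ratios.items(), key=lambda kv: kv[1], reverse=True)[:n]
        categories = [cat for cat, _ in top]
        proportions = [p for _, p in top]
        return regression_model(categories, proportions)  # second regression information

    # Hypothetical stand-in for the pre-trained neural network regression model.
    dummy_model = lambda cats, props: dict(zip(cats, props))
    print(top_n_preferences({"pets": 0.5, "food": 0.3, "scenery": 0.2}, 2, dummy_model))

With n = 1, the same sketch degenerates to the single-largest-ratio variant described above.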
In a possible implementation manner, the method further includes a third determining module;
the third determining module is used for determining the first preset image category corresponding to each image in the terminal equipment;
when determining the first predetermined image category corresponding to each image in the terminal device, the third determining module is configured to:
determining first image characteristics corresponding to the images respectively;
determining first class weights of the images corresponding to the first preset image classes respectively according to the first image characteristics corresponding to the images respectively;
and determining the first preset image category corresponding to each image according to each first category weight.
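Under the same illustrative assumptions as the earlier training sketch, the per-image classification path (first image features, then first category weights, then the category with the maximum weight) can be written as:

    import torch

    def classify(image, feature_extractor, classifier, categories):
        # image: a single tensor of shape (1, C, H, W); categories: ordered label list.
        features = feature_extractor(image)               # first image features
        weights = classifier(features).softmax(dim=-1)    # first category weights
        return categories[weights.argmax(dim=-1).item()]  # category with the largest weight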
In a possible implementation manner, determining the first predetermined image categories corresponding to the images in the terminal device is implemented by a classification model, where the classification model includes a classification network and a pre-trained feature extraction network, the pre-trained feature extraction network is used to determine the first image features corresponding to the images, and the classification network is used to determine the first predetermined image categories corresponding to the images according to the first category weights corresponding to the first predetermined image categories;
when each first predetermined image category is preset by the terminal device according to a category setting instruction of the user, the classification network is obtained through a first training module, and the first training module is specifically configured to:
acquiring a first preset number of first sample images, and determining first preset image categories corresponding to the first sample images respectively;
determining second image characteristics corresponding to the first sample images respectively through a pre-trained characteristic extraction network;
and training the classification network based on the second image characteristics corresponding to the first sample images and the first preset image categories corresponding to the first sample images until the classification network meets a first preset condition.
In a possible implementation manner, when training the classification network based on the second image features respectively corresponding to the first sample images and the first predetermined image categories respectively corresponding to the first sample images, the first training module is configured to: for each first sample image, input the second image features of the first sample image into the classification network, and train, through the classification network, the category weights of the first sample image for the first predetermined image categories, so that the maximum value of the category weights obtained through the trained classification network is the weight of the first predetermined image category corresponding to that first sample image.
In a possible implementation manner, the system further includes a second training module, where the second training module is specifically configured to:
when detecting that each first preset image category is respectively updated to each second preset image category according to a category updating instruction of a user, retraining the classification network, wherein the classification network is obtained by retraining in the following way:
acquiring a second predetermined number of second sample images, and determining a second predetermined image category corresponding to each second sample image;
determining second image characteristics corresponding to the second sample images respectively through a pre-trained characteristic extraction network;
and training the classification network based on the second image characteristics corresponding to the second sample images and the second preset image categories corresponding to the second sample images until the classification network meets the first preset condition.
In a possible implementation manner, the category updating instruction includes any one of a category adding instruction, a category replacing instruction and a category deleting instruction; when the second training module respectively updates each first preset image category into each second preset image category according to the category updating instruction of the user, executing any one of the following modes:
determining second preset image categories according to category adding instructions of users, wherein the second preset image categories comprise first preset image categories;
determining second predetermined image categories according to the category replacement instruction of the user, wherein the second predetermined image categories do not comprise the first predetermined image categories;
and determining second predetermined image categories according to the category deleting instruction of the user, wherein the first predetermined image categories comprise the second predetermined image categories.
In one possible implementation, the classification network satisfies a first predetermined condition, including any one of:
the classification accuracy of the classification network is greater than or equal to a predetermined threshold;
the iterative training times of the classification network are greater than or equal to the preset times;
the loss function of the classification network converges, where the value of the loss function characterizes the difference between the category output by the classification network for a sample image and the predetermined image category corresponding to that sample image.
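Putting the three alternatives together, the first predetermined condition can be sketched as a simple disjunction, reusing the has_converged helper from the earlier sketch; the numeric thresholds are illustrative assumptions:

    def first_predetermined_condition(accuracy, iterations, loss_history,
                                      acc_threshold=0.95, max_iterations=10_000):
        # Any one criterion suffices: accuracy threshold, iteration budget,
        # or convergence of the loss function.
        return (accuracy >= acc_threshold
                or iterations >= max_iterations
                or has_converged(loss_history))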
The device provided by the embodiment of the disclosure determines the preference information of the user according to the first predetermined image categories currently included in the terminal device and their proportions, and recommends the object information of the target object to the user according to the preference information. This not only makes full use of the pictures and/or videos stored in the terminal device without acquiring their specific image content, so that their value is fully exploited while the user's image privacy is protected, but also allows the target object of interest to the user to be found quickly and accurately, making information recommendation effective and accurate. The recommendations therefore have great reference value for the user, increasing the probability that the user views the corresponding information and improving user satisfaction.
It should be noted that this embodiment is an apparatus embodiment corresponding to the method embodiment described above, and can be implemented in cooperation with it. The related technical details mentioned in the method embodiment remain valid in this embodiment and are not repeated here in order to reduce repetition. Correspondingly, the related technical details mentioned in this embodiment can also be applied to the method embodiment described above.
Referring now to FIG. 3, a block diagram of an electronic device 300 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The electronic device comprises a memory and a processor, where the processor may be referred to as the processing device 301 described below, and the memory comprises at least one of a Read Only Memory (ROM) 302, a Random Access Memory (RAM) 303, and a storage device 308, which are described below:
as shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While fig. 3 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 309, or installed from the storage means 308, or installed from the ROM 302. The computer program, when executed by the processing device 301, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire the first preset image categories currently included in the terminal device and the ratios of the first preset image categories, where the first preset image categories are preset by the terminal device according to a category setting instruction of a user and/or are pre-configured by the terminal device; then determine preference information of the user according to each first preset image category and the proportion of each first preset image category; and then determine the target object according to the preference information and recommend object information of the target object to the user.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units described in the embodiments of the present disclosure may be implemented by software or hardware. For example, the obtaining module may also be described as a module that obtains the first predetermined image categories currently included in the terminal device and the proportion of each first predetermined image category.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an information recommendation method including:
acquiring the first predetermined image categories currently included in the terminal device and the ratios of the first predetermined image categories, wherein the first predetermined image categories are preset by the terminal device according to a category setting instruction of a user and/or are pre-configured by the terminal device;
determining preference information of the user according to the first predetermined image categories and the ratios of the first predetermined image categories;
and determining the target object according to the preference information, and recommending the object information of the target object to the user.
In one possible implementation, obtaining the proportion of each first predetermined image category includes:
for each first predetermined image category, determining a ratio of a first number to a second number, and determining the ratio as the proportion of that first predetermined image category, where the first number is the number of images included in that first predetermined image category, and the second number is the sum of the numbers of images included in all the first predetermined image categories.
In one possible implementation manner, determining preference information of a user according to each first predetermined image category and a ratio of each first predetermined image category includes:
determining the maximum ratio from the ratios of the first preset image categories, and determining the first preset image category corresponding to the maximum ratio;
obtaining first regression information according to the maximum occupation ratio and a first preset image category corresponding to the maximum occupation ratio through a pre-trained neural network regression model;
and determining the preference information of the user according to the first regression information.
In one possible implementation manner, determining preference information of a user according to each first predetermined image category and a ratio of each first predetermined image category includes:
determining the maximum N ratios from the ratios of the first preset image categories, and determining the first preset image categories corresponding to the maximum N ratios respectively, wherein N is an integer greater than 1;
obtaining second regression information according to the maximum N ratios and the first preset image categories respectively corresponding to the maximum N ratios through a pre-trained neural network regression model;
and determining the preference information of the user according to the second regression information.
In a possible implementation manner, before obtaining the first predetermined image categories currently included in the terminal device and the ratios corresponding to the first predetermined image categories, the method further includes:
determining a first preset image category corresponding to each image in the terminal equipment;
the determining a first predetermined image category corresponding to each image in the terminal device includes:
determining first image characteristics corresponding to the images respectively;
determining first class weights of the images corresponding to the first preset image classes respectively according to the first image characteristics corresponding to the images respectively;
and determining the first preset image category corresponding to each image according to each first category weight.
In a possible implementation manner, determining the first predetermined image categories corresponding to the images in the terminal device is implemented by a classification model, where the classification model includes a classification network and a pre-trained feature extraction network, the pre-trained feature extraction network is used to determine the first image features corresponding to the images, and the classification network is used to determine the first predetermined image categories corresponding to the images according to the first category weights corresponding to the first predetermined image categories;
when each first preset image category is preset by the terminal equipment according to a category setting instruction of a user, the classification network is obtained by training in the following mode:
acquiring a first preset number of first sample images, and determining first preset image categories corresponding to the first sample images respectively;
determining second image characteristics corresponding to the first sample images respectively through a pre-trained characteristic extraction network;
and training the classification network based on the second image characteristics corresponding to the first sample images and the first preset image categories corresponding to the first sample images until the classification network meets a first preset condition.
In one possible implementation manner, training the classification network based on each second image feature corresponding to each first sample image and each first predetermined image category corresponding to each first sample image includes:
and for each first sample image, inputting the second image features of the first sample image into the classification network, and training, through the classification network, the category weights of the first sample image for the first predetermined image categories, so that the maximum value of the category weights obtained through the trained classification network is the weight of the first predetermined image category corresponding to that first sample image.
In one possible implementation, the method further includes:
when detecting that each first preset image category is respectively updated to each second preset image category according to a category updating instruction of a user, retraining the classification network, wherein the classification network is obtained by retraining in the following way:
acquiring a second predetermined number of second sample images, and determining a second predetermined image category corresponding to each second sample image;
determining second image characteristics corresponding to the second sample images respectively through a pre-trained characteristic extraction network;
and training the classification network based on the second image characteristics corresponding to the second sample images and the second preset image categories corresponding to the second sample images until the classification network meets the first preset condition.
In a possible implementation manner, the category updating instruction includes any one of a category adding instruction, a category replacing instruction and a category deleting instruction; the mode of updating each first preset image category into each second preset image category according to the category updating instruction of the user comprises any one of the following modes:
determining second preset image categories according to category adding instructions of users, wherein the second preset image categories comprise first preset image categories;
determining second predetermined image categories according to the category replacement instruction of the user, wherein the second predetermined image categories do not comprise the first predetermined image categories;
and determining second predetermined image categories according to the category deleting instruction of the user, wherein the first predetermined image categories comprise the second predetermined image categories.
In one possible implementation, the classification network satisfies a first predetermined condition, including any one of:
the classification accuracy of the classification network is greater than or equal to a predetermined threshold;
the iterative training times of the classification network are greater than or equal to the preset times;
the loss function of the classification network converges, where the value of the loss function characterizes the difference between the category output by the classification network for a sample image and the predetermined image category corresponding to that sample image.
According to one or more embodiments of the present disclosure, there is provided an information recommendation apparatus including:
the acquisition module is used for acquiring the first preset image categories and the ratios of the first preset image categories currently included in the terminal device, wherein the first preset image categories are preset by the terminal device according to a category setting instruction of a user and/or are pre-configured by the terminal device;
the first determining module is used for determining preference information of the user according to the first preset image categories and the ratios of the first preset image categories;
and the second determining module is used for determining the target object according to the preference information and recommending the object information of the target object to the user.
In a possible implementation manner, when obtaining the proportion of each first predetermined image category, the obtaining module is configured to determine, for each first predetermined image category, a ratio of a first number to a second number, and determine the ratio as the proportion of that first predetermined image category, where the first number is the number of images included in that first predetermined image category, and the second number is the sum of the numbers of images included in all the first predetermined image categories.
In a possible implementation manner, the first determining module is specifically configured to:
determining the maximum ratio from the ratios of the first preset image categories, and determining the first preset image category corresponding to the maximum ratio;
obtaining first regression information according to the maximum occupation ratio and a first preset image category corresponding to the maximum occupation ratio through a pre-trained neural network regression model;
and determining the preference information of the user according to the first regression information.
In a possible implementation manner, the first determining module is specifically configured to:
determining the maximum N ratios from the ratios of the first preset image categories, and determining the first preset image categories corresponding to the maximum N ratios respectively, wherein N is an integer greater than 1;
obtaining second regression information according to the maximum N ratios and the first preset image categories respectively corresponding to the maximum N ratios through a pre-trained neural network regression model;
and determining the preference information of the user according to the second regression information.
In a possible implementation manner, the method further includes a third determining module;
the third determining module is used for determining the first preset image category corresponding to each image in the terminal equipment;
when determining the first predetermined image category corresponding to each image in the terminal device, the third determining module is configured to:
determining first image characteristics corresponding to the images respectively;
determining first class weights of the images corresponding to the first preset image classes respectively according to the first image characteristics corresponding to the images respectively;
and determining the first preset image category corresponding to each image according to each first category weight.
In a possible implementation manner, determining the first predetermined image categories corresponding to the images in the terminal device is implemented by a classification model, where the classification model includes a classification network and a pre-trained feature extraction network, the pre-trained feature extraction network is used to determine the first image features corresponding to the images, and the classification network is used to determine the first predetermined image categories corresponding to the images according to the first category weights corresponding to the first predetermined image categories;
when each first predetermined image category is preset by the terminal device according to a category setting instruction of the user, the classification network is obtained through a first training module, and the first training module is specifically configured to:
acquiring a first preset number of first sample images, and determining first preset image categories corresponding to the first sample images respectively;
determining second image characteristics corresponding to the first sample images respectively through a pre-trained characteristic extraction network;
and training the classification network based on the second image characteristics corresponding to the first sample images and the first preset image categories corresponding to the first sample images until the classification network meets a first preset condition.
In a possible implementation manner, when training the classification network based on the second image features respectively corresponding to the first sample images and the first predetermined image categories respectively corresponding to the first sample images, the first training module is configured to: for each first sample image, input the second image features of the first sample image into the classification network, and train, through the classification network, the category weights of the first sample image for the first predetermined image categories, so that the maximum value of the category weights obtained through the trained classification network is the weight of the first predetermined image category corresponding to that first sample image.
In a possible implementation manner, the system further includes a second training module, where the second training module is specifically configured to:
when detecting that each first preset image category is respectively updated to each second preset image category according to a category updating instruction of a user, retraining the classification network, wherein the classification network is obtained by retraining in the following way:
acquiring a second predetermined number of second sample images, and determining a second predetermined image category corresponding to each second sample image;
determining second image characteristics corresponding to the second sample images respectively through a pre-trained characteristic extraction network;
and training the classification network based on the second image characteristics corresponding to the second sample images and the second preset image categories corresponding to the second sample images until the classification network meets the first preset condition.
In a possible implementation manner, the category updating instruction includes any one of a category adding instruction, a category replacing instruction and a category deleting instruction; when the second training module respectively updates each first preset image category into each second preset image category according to the category updating instruction of the user, executing any one of the following modes:
determining second preset image categories according to category adding instructions of users, wherein the second preset image categories comprise first preset image categories;
determining second predetermined image categories according to the category replacement instruction of the user, wherein the second predetermined image categories do not comprise the first predetermined image categories;
and determining second predetermined image categories according to the category deleting instruction of the user, wherein the first predetermined image categories comprise the second predetermined image categories.
In one possible implementation, the classification network satisfies a first predetermined condition, including any one of:
the classification accuracy of the classification network is greater than or equal to a predetermined threshold;
the iterative training times of the classification network are greater than or equal to the preset times;
the loss function of the classification network converges, where the value of the loss function characterizes the difference between the category output by the classification network for a sample image and the predetermined image category corresponding to that sample image.
The foregoing description is only a description of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions in which the above features are replaced with (but not limited to) technical features with similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (7)

1. An information recommendation method, comprising:
determining a first preset image category corresponding to each image in the terminal equipment;
acquiring the first preset image categories currently included in the terminal equipment and the proportion of each first preset image category, wherein the first preset image categories are preset by the terminal equipment according to a category setting instruction of a user and/or are pre-configured by the terminal equipment;
determining the largest N ratios from the ratios of the first preset image categories, and determining the first preset image categories corresponding to the largest N ratios respectively, wherein N is an integer larger than 1;
obtaining second regression information according to the maximum N ratios and first preset image categories respectively corresponding to the maximum N ratios through a pre-trained neural network regression model;
determining preference information of the user according to the second regression information;
determining a target object according to the preference information, and recommending the object information of the target object to the user;
the determining a first predetermined image category corresponding to each image in the terminal device includes:
determining first image characteristics corresponding to the images respectively;
determining first class weights of the images corresponding to the first preset image classes respectively according to the first image characteristics corresponding to the images respectively;
determining a first preset image category corresponding to each image according to each first category weight;
the determination of the first predetermined image categories respectively corresponding to the images in the terminal device is achieved through a classification model, the classification model includes a classification network and a pre-trained feature extraction network, the pre-trained feature extraction network is used for determining the first image features respectively corresponding to the images, and the classification network is used for determining the first predetermined image categories respectively corresponding to the images according to the first category weights respectively corresponding to the first predetermined image categories.
2. The method of claim 1, wherein obtaining the fraction of each first predetermined image category comprises:
for each first predetermined image category, determining a ratio of a first number to a second number, and determining the ratio as the proportion of that first predetermined image category, wherein the first number is the number of images included in that first predetermined image category, and the second number is the sum of the numbers of images included in all the first predetermined image categories.
3. The method according to claim 1, wherein when the first predetermined image categories are preset by the terminal device according to a category setting instruction of a user, the classification network is trained by:
acquiring a first preset number of first sample images, and determining first preset image categories corresponding to the first sample images respectively;
determining second image characteristics corresponding to the first sample images respectively through the pre-trained characteristic extraction network;
and training the classification network based on the second image characteristics corresponding to the first sample images and the first preset image categories corresponding to the first sample images until the classification network meets a first preset condition.
4. The method of claim 3, wherein training the classification network based on the second image features corresponding to the first sample images and the first predetermined image classes corresponding to the first sample images comprises:
for each first sample image, inputting the second image features of the first sample image into the classification network, and training, through the classification network, the category weights of the first sample image for the first predetermined image categories, so that the maximum value of the category weights obtained through the trained classification network is the weight of the first predetermined image category corresponding to the first sample image.
5. An information recommendation apparatus, comprising:
the acquisition module is used for determining a first preset image category corresponding to each image in the terminal equipment, and for acquiring the first preset image categories currently included in the terminal equipment and the proportion of each first preset image category, wherein the first preset image categories are preset by the terminal equipment according to a category setting instruction of a user and/or are pre-configured by the terminal equipment;
the determining a first predetermined image category corresponding to each image in the terminal device includes: determining first image characteristics corresponding to the images respectively; determining first class weights of the images corresponding to the first preset image classes respectively according to the first image characteristics corresponding to the images respectively; determining a first preset image category corresponding to each image according to each first category weight;
the determining of the first predetermined image categories respectively corresponding to the images in the terminal device is realized through a classification model, the classification model comprises a classification network and a pre-trained feature extraction network, the pre-trained feature extraction network is used for determining the first image features respectively corresponding to the images, and the classification network is used for determining the first predetermined image categories respectively corresponding to the images according to the first category weights respectively corresponding to the first predetermined image categories;
a first determining module, configured to determine the largest N ratios from the ratios of the first predetermined image categories, and determine first predetermined image categories corresponding to the largest N ratios, where N is an integer greater than 1; obtaining second regression information according to the maximum N ratios and first preset image categories respectively corresponding to the maximum N ratios through a pre-trained neural network regression model; determining preference information of the user according to the second regression information;
and the second determining module is used for determining a target object according to the preference information and recommending the object information of the target object to the user.
6. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1-4 when executing the program.
7. A non-transitory computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, which when executed by a processor implements the method of any one of claims 1-4.
CN202010219926.5A 2020-03-25 2020-03-25 Information recommendation method and device, electronic equipment and computer-readable storage medium Active CN111368209B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010219926.5A CN111368209B (en) 2020-03-25 2020-03-25 Information recommendation method and device, electronic equipment and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010219926.5A CN111368209B (en) 2020-03-25 2020-03-25 Information recommendation method and device, electronic equipment and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN111368209A CN111368209A (en) 2020-07-03
CN111368209B true CN111368209B (en) 2022-04-12

Family

ID=71209235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010219926.5A Active CN111368209B (en) 2020-03-25 2020-03-25 Information recommendation method and device, electronic equipment and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN111368209B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699311A (en) * 2020-12-31 2021-04-23 上海博泰悦臻网络技术服务有限公司 Information pushing method, storage medium and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108038161A (en) * 2017-12-06 2018-05-15 北京奇虎科技有限公司 Information recommendation method, device and computing device based on photograph album
CN109978610A (en) * 2019-03-13 2019-07-05 努比亚技术有限公司 Information processing method, mobile terminal and computer readable storage medium
CN110418200A (en) * 2018-04-27 2019-11-05 Tcl集团股份有限公司 A kind of video recommendation method, device and terminal device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10296525B2 (en) * 2016-04-15 2019-05-21 Google Llc Providing geographic locations related to user interests
KR20180026155A (en) * 2016-09-02 2018-03-12 에스케이플래닛 주식회사 Apparatus for automatically analyzing pregerence of rental item using user image and method using the same
US11093839B2 (en) * 2018-04-13 2021-08-17 Fujifilm Business Innovation Corp. Media object grouping and classification for predictive enhancement
CN108984657B (en) * 2018-06-28 2020-12-01 Oppo广东移动通信有限公司 Image recommendation method and device, terminal and readable storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108038161A (en) * 2017-12-06 2018-05-15 北京奇虎科技有限公司 Information recommendation method, device and computing device based on photograph album
CN110418200A (en) * 2018-04-27 2019-11-05 Tcl集团股份有限公司 A kind of video recommendation method, device and terminal device
CN109978610A (en) * 2019-03-13 2019-07-05 努比亚技术有限公司 Information processing method, mobile terminal and computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Using Viewing Time to Infer User Preference in Recommender Systems; Jeffrey Parsons; AAAI Workshop on Semantic Web Personalization; 2004-07-31; full text *
A Survey of Collaborative Filtering Recommendation Techniques; Leng Yajun; Pattern Recognition and Artificial Intelligence; 2014-12-31; Vol. 27, No. 08; pp. 720-734 *

Also Published As

Publication number Publication date
CN111368209A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
CN110297848B (en) Recommendation model training method, terminal and storage medium based on federal learning
CN108776676B (en) Information recommendation method and device, computer readable medium and electronic device
WO2017181612A1 (en) Personalized video recommendation method and device
CN105635824A (en) Personalized channel recommendation method and system
CN109688479B (en) Bullet screen display method, bullet screen display device and bullet screen display server
CN111178970B (en) Advertisement putting method and device, electronic equipment and computer readable storage medium
WO2022016522A1 (en) Recommendation model training method and apparatus, recommendation method and apparatus, and computer-readable medium
CN112100489B (en) Object recommendation method, device and computer storage medium
CN111818370A (en) Information recommendation method and device, electronic equipment and computer-readable storage medium
CN111783810B (en) Method and device for determining attribute information of user
WO2023231542A1 (en) Representation information determination method and apparatus, and device and storage medium
US20210201146A1 (en) Computing device and operation method thereof
CN115203539B (en) Media content recommendation method, device, equipment and storage medium
WO2023160555A1 (en) Encyclopedic information display method and apparatus, device, and medium
CN111368209B (en) Information recommendation method and device, electronic equipment and computer-readable storage medium
US8971645B1 (en) Video categorization using heterogeneous signals
CN111414966B (en) Classification method, classification device, electronic equipment and computer storage medium
CN106815285A (en) The method of the video recommendations based on video website, device and electronic equipment
CN114912039A (en) Search special effect display method, device, equipment and medium
CN111401464B (en) Classification method, classification device, electronic equipment and computer-readable storage medium
CN115439770A (en) Content recall method, device, equipment and storage medium
CN113704596A (en) Method and apparatus for generating a set of recall information
CN110909206B (en) Method and device for outputting information
US20230153664A1 (en) Stochastic Multi-Modal Recommendation and Information Retrieval System
US10685182B2 (en) Identifying novel information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.