CN111125501A - Method and apparatus for processing information - Google Patents

Method and apparatus for processing information

Info

Publication number
CN111125501A
Authority
CN
China
Prior art keywords
presentation
image
product
product image
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811287919.8A
Other languages
Chinese (zh)
Other versions
CN111125501B (en)
Inventor
龙睿
薛潇剑
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201811287919.8A
Publication of CN111125501A
Application granted
Publication of CN111125501B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/55 Push-based network services

Abstract

The embodiments of the present application disclose a method and apparatus for processing information. One embodiment of the method comprises: acquiring user information of a target user and a set of product images to be presented; for each product image to be presented in the set, inputting that product image and the user information into a pre-trained first evaluation model to obtain a first evaluation result, where the first evaluation result characterizes the degree of interest of the user corresponding to the input user information in the product indicated by the input product image to be presented; and selecting, based on the obtained first evaluation results, a product image to be presented from the set as the product image for presentation. The embodiment improves the pertinence and diversity of information processing.

Description

Method and apparatus for processing information
Technical Field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for processing information.
Background
Currently, product providers can recommend products to users by pushing product images to terminals (e.g., cell phones, computers, etc.) used by the users.
Often, different users have different preferences. Thus, a user viewing the pushed product image may or may not be interested in the product indicated by the product image.
Disclosure of Invention
The embodiment of the application provides a method and a device for processing information.
In a first aspect, an embodiment of the present application provides a method for processing information, the method comprising: acquiring user information of a target user and a set of product images to be presented; for each product image to be presented in the set, inputting that product image and the user information into a pre-trained first evaluation model to obtain a first evaluation result, where the first evaluation result characterizes the degree of interest of the user corresponding to the input user information in the product indicated by the input product image to be presented; and selecting, based on the obtained first evaluation results, a product image to be presented from the set as the product image for presentation.
In some embodiments, each product image to be presented in the set corresponds to at least one background image; and after a product image to be presented is selected from the set as a presentation product image, the method further comprises, for each selected presentation product image, executing the following steps: acquiring the at least one background image corresponding to the presentation product image; adding the presentation product image to each of the acquired background images to obtain presentation images corresponding to the presentation product image; and selecting, from the obtained presentation images, a presentation image as the target presentation image corresponding to the presentation product image.
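The background-compositing step above can be sketched schematically. In this minimal sketch (not the patent's implementation), images are modeled as 2-D lists of pixel values, and the hypothetical helper `add_product_to_background` pastes a product image onto a copy of each candidate background at a given offset; a real system would operate on actual image data via an image-processing library.

```python
def add_product_to_background(product, background, top, left):
    """Paste a product image onto a copy of a background image.

    Images are modeled here as 2-D lists of pixel values; a real
    implementation would operate on actual image data.
    """
    composite = [row[:] for row in background]  # copy, keep the original intact
    for i, row in enumerate(product):
        for j, pixel in enumerate(row):
            composite[top + i][left + j] = pixel
    return composite

# One presentation product image pasted onto two candidate backgrounds
# yields two presentation images, from which a target presentation image
# is later chosen.
product = [[9, 9], [9, 9]]
backgrounds = [[[0] * 4 for _ in range(4)], [[1] * 4 for _ in range(4)]]
presentation_images = [
    add_product_to_background(product, bg, top=1, left=1) for bg in backgrounds
]
```

One presentation image per background is produced, which is exactly the set from which the target presentation image is then selected.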
In some embodiments, after a presentation image is selected as the target presentation image corresponding to the presentation product image, the method further comprises: outputting the selected target presentation image to a terminal used by the target user.
In some embodiments, selecting a presentation image as the target presentation image corresponding to the presentation product image includes: for each presentation image corresponding to the presentation product image, inputting that presentation image and the user information into a pre-trained second evaluation model to obtain a second evaluation result, where the second evaluation result characterizes the degree of interest of the user corresponding to the input user information in the input presentation image; and selecting, based on the obtained second evaluation results, a presentation image from those corresponding to the presentation product image as the target presentation image.
In some embodiments, the first evaluation model is trained by: obtaining a plurality of sample presentation product images; for each of the plurality of sample presentation product images, performing the following steps: acquiring, as sample user information, the user information of the user corresponding to the terminal on which the sample presentation product image was presented; determining, based on the acquired sample user information, a sample first evaluation result characterizing the degree of interest of that user in the product indicated by the sample presentation product image; and composing a training sample from the sample presentation product image, the acquired sample user information, and the determined sample first evaluation result; and, using a machine learning method, taking the sample user information and sample presentation product image included in each composed training sample as input and the corresponding sample first evaluation result as the expected output, training to obtain the first evaluation model.
In some embodiments, the user information comprises at least one of: attribute information, historical behavior information.
In a second aspect, an embodiment of the present application provides an apparatus for processing information, the apparatus comprising: an acquisition unit configured to acquire user information of a target user and a set of product images to be presented; an input unit configured to, for each product image to be presented in the set, input that product image and the user information into a pre-trained first evaluation model to obtain a first evaluation result, where the first evaluation result characterizes the degree of interest of the user corresponding to the input user information in the product indicated by the input product image to be presented; and a selection unit configured to select, based on the obtained first evaluation results, a product image to be presented from the set as the product image for presentation.
In some embodiments, each product image to be presented in the set corresponds to at least one background image; and the apparatus further comprises: an adding unit configured to execute the following steps for each selected presentation product image: acquiring the at least one background image corresponding to the presentation product image; adding the presentation product image to each of the acquired background images to obtain presentation images corresponding to the presentation product image; and selecting, from the obtained presentation images, a presentation image as the target presentation image corresponding to the presentation product image.
In some embodiments, the apparatus further comprises: an output unit configured to output the selected image for target presentation to a terminal used by the target user.
In some embodiments, the adding unit is further configured to: for each presentation image corresponding to the presentation product image, input that presentation image and the user information into a pre-trained second evaluation model to obtain a second evaluation result, where the second evaluation result characterizes the degree of interest of the user corresponding to the input user information in the input presentation image; and select, based on the obtained second evaluation results, a presentation image from those corresponding to the presentation product image as the target presentation image.
In some embodiments, the first evaluation model is trained by: obtaining a plurality of sample presentation product images; for each of the plurality of sample presentation product images, performing the following steps: acquiring, as sample user information, the user information of the user corresponding to the terminal on which the sample presentation product image was presented; determining, based on the acquired sample user information, a sample first evaluation result characterizing the degree of interest of that user in the product indicated by the sample presentation product image; and composing a training sample from the sample presentation product image, the acquired sample user information, and the determined sample first evaluation result; and, using a machine learning method, taking the sample user information and sample presentation product image included in each composed training sample as input and the corresponding sample first evaluation result as the expected output, training to obtain the first evaluation model.
In some embodiments, the user information comprises at least one of: attribute information, historical behavior information.
In a third aspect, an embodiment of the present application provides a server, including: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement the method of any of the embodiments of the method for processing information described above.
In a fourth aspect, the present application provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the method of any of the above embodiments of the method for processing information.
According to the method and apparatus for processing information provided by the embodiments of the present application, user information of a target user and a set of product images to be presented are acquired; each product image to be presented in the set, together with the user information, is then input into a pre-trained first evaluation model to obtain a first evaluation result characterizing the degree of interest of the user corresponding to the input user information in the product indicated by the input product image; and finally, based on the obtained first evaluation results, a product image to be presented is selected from the set as the product image for presentation. The first evaluation model is thereby used to effectively evaluate the target user's degree of interest in each product image to be presented, which helps select from the set, based on the evaluation results, a product image in which the target user is interested as the image finally presented to the user, improving the pertinence and diversity of information processing.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram for one embodiment of a method for processing information according to the present application;
FIG. 3 is a schematic diagram of an application scenario of a method for processing information according to an embodiment of the present application;
FIG. 4 is a flow diagram of yet another embodiment of a method for processing information according to the present application;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for processing information according to the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing a server according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for processing information or the apparatus for processing information of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices having a display screen and supporting information transmission, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
The server 105 may be a server that provides various services, such as a background server that provides support for presentation product images displayed on the terminal devices 101, 102, 103. The background server may obtain a set of images of a product to be presented, analyze and otherwise process data such as the set of images of the product to be presented, and feed back a processing result (e.g., a product image for presentation) to the terminal device.
It should be noted that the method for processing information provided in the embodiment of the present application is generally performed by the server 105, and accordingly, the apparatus for processing information is generally disposed in the server 105.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In the case where data used in obtaining an image of a product for presentation does not need to be acquired from a remote location, the system architecture described above may include no network and terminal devices, but only a server.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for processing information in accordance with the present application is shown. The method for processing information comprises the following steps:
step 201, acquiring user information of a target user and a product image set to be presented.
In this embodiment, an execution subject of the method for processing information (e.g., the server shown in fig. 1) may acquire user information of a target user and a set of product images to be presented through a wired or wireless connection. The target user is a user for whom the corresponding presentation product image is to be determined, and the presentation product image corresponding to the target user is a product image for presentation to that user. A product image may indicate a product; as an example, a product image may be an image obtained by photographing the product. The user information of the target user may be used to characterize the target user's characteristics and may include, but is not limited to, at least one of: characters, numerical values, symbols, images.
In some optional implementations of this embodiment, the user information may include, but is not limited to, at least one of the following: attribute information, historical behavior information. The attribute information may be used to characterize attributes of the user, such as gender attribute, age attribute, and the like. Historical behavior information may be used to indicate historical behavior of the user, for example, the historical behavior information may include product images historically viewed by the user and historical times at which the product images were viewed.
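As an illustration of what such user information might look like, the sketch below models the two optional components (attribute information and historical behavior information) with a hypothetical `UserInfo` container; the field names are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class UserInfo:
    # Attribute information: characterizes the user, e.g. gender and age.
    gender: str = ""
    age: int = 0
    # Historical behavior information: product images the user has viewed
    # historically, and the times at which they were viewed.
    viewed_product_images: list = field(default_factory=list)
    view_times: list = field(default_factory=list)

# Hypothetical target user combining both kinds of information.
target_user = UserInfo(
    gender="female",
    age=30,
    viewed_product_images=["skin_care.jpg", "handbag.jpg"],
    view_times=["2018-10-01T12:00", "2018-10-02T09:30"],
)
```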
Specifically, the execution subject may acquire user information of the target user stored locally in advance, or may acquire user information of the target user transmitted by a communicatively connected electronic device (e.g., a terminal device shown in fig. 1).
In this embodiment, a product image to be presented may be a predetermined product image to be presented to a user, and the set of product images to be presented comprises at least one product image to be presented. Specifically, the execution subject may obtain at least one locally pre-stored product image to be presented to form the set; alternatively, it may acquire at least one product image to be presented transmitted by a communicatively connected electronic device to form the set.
Step 202, inputting the product image to be presented and the user information into a pre-trained first evaluation model for the product image to be presented in the product image set to be presented, and obtaining a first evaluation result.
In this embodiment, for the to-be-presented product image in the to-be-presented product image set obtained in step 201, the executing body may input the to-be-presented product image and the user information into a first evaluation model trained in advance, so as to obtain a first evaluation result. The first evaluation result is used for characterizing the degree of interest of a user corresponding to the input user information on a product indicated by the input image of the product to be presented, and may include, but is not limited to, at least one of the following: characters, numerical values, symbols. For example, the first evaluation result may include a value "0" or a value "1", where the value "0" may be used to characterize that the user corresponding to the input user information is not interested in the product indicated by the input image of the product to be presented; the value "1" may be used to characterize that the user corresponding to the input user information is interested in the product indicated by the input image of the product to be presented.
In this embodiment, the first evaluation model may be used to represent a correspondence between the image of the product to be presented and the user information and the first evaluation result corresponding to the image of the product to be presented and the user information. Specifically, as an example, the first evaluation model may be a correspondence table in which a plurality of product images to be presented, user information of a user, and first evaluation results corresponding to the product images to be presented and the user information are stored, the correspondence table being prepared in advance by a technician based on statistics of a large number of product images to be presented, the user information of the user, and the first evaluation results.
Here, the first evaluation result in the correspondence table may be obtained by manual labeling by a technician, or may be generated based on a preset rule. The preset rule may be a rule set in advance by a technician for a product image to be presented, used for determining, based on user information, the first evaluation result corresponding to that product image and the user information. For example, suppose the product indicated by a product image to be presented is a skin care product. For that product image, the preset rule may be: if the user information indicates that the user is female, the first evaluation result is "interested"; if the user information indicates that the user is male, the first evaluation result is "uninterested".
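The skin-care example above can be written out as a tiny rule function. This is only a sketch of that single hypothetical preset rule; `preset_rule_first_evaluation` and its category string are illustrative names, not the patent's API.

```python
def preset_rule_first_evaluation(product_category, user_info):
    """Apply the skin-care preset rule from the example: a female user is
    labeled "interested", a male user "uninterested"."""
    if product_category == "skin care":
        if user_info.get("gender") == "female":
            return "interested"
        return "uninterested"
    # Other product categories would have their own preset rules;
    # this sketch defaults to "uninterested".
    return "uninterested"

result = preset_rule_first_evaluation("skin care", {"gender": "female"})
```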
The first evaluation model may be a model obtained by training an initial model (for example, a neural network, an FM (Factorization Machine) model, or the like) by a Machine learning method based on a preset training sample.
In some optional implementations of this embodiment, the first evaluation model may be trained by:
step 2021, acquire a plurality of sample presentation product images.
Here, a sample presentation product image is a product image that was determined, from predetermined product images to be presented, for presentation to a user. The plurality of sample presentation product images may be product images presented to a single user, or product images presented to a plurality of users.
Specifically, a plurality of previously stored presentation product images may be acquired as the sample presentation product images, or a plurality of presentation product images transmitted by a communicatively connected electronic device may be acquired as the sample presentation product images.
Step 2022, for each sample presentation product image among the plurality of sample presentation product images, perform the following steps: acquire, as sample user information, the user information of the user corresponding to the terminal on which the sample presentation product image was presented; determine, based on the acquired sample user information, a sample first evaluation result characterizing the degree of interest of that user in the product indicated by the sample presentation product image; and compose a training sample from the sample presentation product image, the acquired sample user information, and the determined sample first evaluation result.
Here, for a sample presentation product image among the plurality of sample presentation product images, the following steps may be performed:
first, user information of a user corresponding to a terminal on which the sample presentation product image is presented is acquired as sample user information.
The user corresponding to the terminal presenting the image of the product for sample presentation may be a user using the terminal. Specifically, the user information of the user corresponding to the terminal presenting the product image for sample presentation, which is stored in advance, may be acquired as the sample user information, or the user information transmitted from the terminal presenting the product image for sample presentation may be acquired as the sample user information.
Then, based on the obtained sample user information, a sample first evaluation result for characterizing a degree of interest of the user in presenting the product indicated by the product image for the sample is determined.
Specifically, based on the acquired sample user information, various methods may be employed to determine the sample first evaluation result. For example, a technician may manually label the sample first evaluation result corresponding to the acquired sample user information and the sample presentation product image; alternatively, the result may be generated based on a preset rule of the kind described above.
And finally, forming a training sample by using the sample presentation product image, the acquired sample user information and the determined sample first evaluation result.
It will be appreciated that with multiple sample presentation product images, multiple training samples may be obtained.
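The sample-assembly loop of steps 2021 and 2022 can be sketched as follows; `build_training_samples`, the terminal-keyed lookup, and `label_fn` are illustrative assumptions, with the labeling function standing in for manual labeling or a preset rule.

```python
def build_training_samples(presented, user_info_by_terminal, label_fn):
    """For each (image id, terminal id) pair: look up the user information of
    the terminal that presented the image, determine the sample first
    evaluation result via label_fn, and bundle all three into one sample."""
    samples = []
    for image_id, terminal_id in presented:
        sample_user_info = user_info_by_terminal[terminal_id]
        sample_label = label_fn(image_id, sample_user_info)
        samples.append((image_id, sample_user_info, sample_label))
    return samples

# Toy data: two sample presentation product images shown on two terminals.
presented = [("img_a", "t1"), ("img_b", "t2")]
user_info_by_terminal = {"t1": {"gender": "female"}, "t2": {"gender": "male"}}

def label_fn(image_id, info):
    # Stand-in for manual labeling or a preset rule: 1 = interested.
    return 1 if info["gender"] == "female" else 0

training_samples = build_training_samples(presented, user_info_by_terminal, label_fn)
```

Each element of `training_samples` is one training sample: the image, its sample user information, and the sample first evaluation result.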
Step 2023, using a machine learning method, take the sample user information and the sample presentation product image included in each composed training sample as input, take the corresponding sample first evaluation result as the expected output, and train to obtain the first evaluation model.
Specifically, a predetermined initial model (e.g., a neural network, an FM model, or the like) may be trained with the machine learning method, using the sample user information and sample presentation product image of each training sample as input and the corresponding sample first evaluation result as the expected output; the trained model serves as the first evaluation model.
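As a simplified stand-in for the training of step 2023, the sketch below fits a plain logistic-regression scorer by stochastic gradient descent over joint (user, image) feature vectors. The description names a neural network or FM model as the initial model, so this is a substitute for illustration only, not the patent's method.

```python
import math

def train_first_evaluation_model(samples, epochs=500, lr=0.5):
    """Fit a logistic-regression scorer by SGD; each sample is
    (feature_vector, label) with label 1 = interested, 0 = not."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted interest probability
            g = p - y                        # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def first_evaluation(model, x):
    """First evaluation result as an interest probability in [0, 1]."""
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy joint features: [user is female, product is skin care]; the user
# is labeled interested only when both hold.
samples = [([1, 1], 1), ([1, 0], 0), ([0, 1], 0), ([0, 0], 0)]
model = train_first_evaluation_model(samples)
```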
It should be noted that, in practice, the execution subject of the step for generating the first evaluation model may be the same as or different from the execution subject of the method for processing information. If the two are the same, the execution subject of the generating step may store the trained first evaluation model locally after training. If they are different, the execution subject of the generating step may send the trained first evaluation model to the execution subject of the method for processing information after training.
And step 203, selecting a product image to be presented from the product image set to be presented as a product image for presentation based on the obtained first evaluation result.
In this embodiment, based on the first evaluation result obtained in step 202, the executing body may select a product image to be presented as a product image for presentation from a set of product images to be presented. The selected product image for presentation is the product image for presentation to the target user.
Specifically, the executing body may adopt various methods to select a product image to be presented from a set of product images to be presented as a product image for presentation based on the interest degree indicated by the obtained first evaluation result. For example, the product image to be presented with the highest interest level indicated by the corresponding first evaluation result may be selected as the product image for presentation.
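The selection rule just described, taking the product image whose first evaluation result indicates the highest degree of interest, can be sketched as a one-line helper (`select_presentation_image` is an illustrative name):

```python
def select_presentation_image(scored):
    """scored: list of (image id, first evaluation result) pairs, where a
    larger result characterizes a greater degree of interest."""
    return max(scored, key=lambda pair: pair[1])[0]

# In the style of the Fig. 3 scenario: image 3041 scores 9, image 3042
# scores 7, so image 3041 is selected as the presentation product image.
chosen = select_presentation_image([("3041", 9), ("3042", 7)])
```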
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for processing information according to the present embodiment. In the application scenario of fig. 3, the server 301 may first obtain user information 303 of a target user using the terminal device 302 from the communicatively connected terminal device 302, and obtain a pre-stored set of images 304 of a product to be presented. The to-be-presented product image set 304 includes a to-be-presented product image 3041 and a to-be-presented product image 3042. Then, for the product image to be presented 3041, the server 301 may input the product image to be presented 3041 and the user information 303 into the pre-trained first evaluation model 305, obtaining a first evaluation result (e.g., a value "9") 3061. Here, the first evaluation result may be used to characterize the degree of interest of the target user in the product indicated by the product image to be presented 3041 (e.g., the greater the numerical value, the greater the degree of interest). Similarly, for the product image to be presented 3042, the server 301 may input the product image to be presented 3042 and the user information 303 into the first evaluation model 305, obtaining a first evaluation result (e.g., a value of "7") 3062. Finally, the server 301 may select a product image to be presented from the set of product images to be presented 304 as a product image for presentation 307 based on the obtained first evaluation result. For example, the server 301 may select, from the to-be-presented product image set 304, a to-be-presented product image with a larger numerical value in the corresponding first evaluation result as the to-be-presented product image 307, that is, select the to-be-presented product image 3041 as the to-be-presented product image 307.
The method provided by this embodiment of the application effectively uses the first evaluation model to evaluate the target user's degree of interest in each product image to be presented. This helps select, based on the evaluation results, a product image to be presented that the target user is interested in from the set of product images to be presented as the product image finally presented to the user, improving the pertinence and diversity of information processing.
With further reference to FIG. 4, a flow 400 of yet another embodiment of a method for processing information is shown. The flow 400 of the method for processing information includes the steps of:
step 401, acquiring user information of a target user and a product image set to be presented.
In this embodiment, an execution body (e.g., the server shown in fig. 1) of the method for processing information may acquire user information of a target user and a set of product images to be presented through a wired or wireless connection. The target user is a user for whom the corresponding product image for presentation is to be determined, and the product image for presentation corresponding to the target user is a product image to be presented to the target user. A product image may indicate a product. As an example, the product image may be an image obtained by photographing the product. The user information of the target user may be used to characterize features of the target user, and may include, but is not limited to, at least one of the following: characters, numerical values, symbols, images. A product image to be presented may be a predetermined product image to be presented to the user. The set of product images to be presented includes at least one product image to be presented.
Step 402, inputting the product image to be presented and the user information into a pre-trained first evaluation model for the product image to be presented in the product image set to be presented, and obtaining a first evaluation result.
In this embodiment, for the to-be-presented product image in the to-be-presented product image set obtained in step 401, the execution subject may input the to-be-presented product image and the user information into a first evaluation model trained in advance, so as to obtain a first evaluation result. The first evaluation result is used for characterizing the degree of interest of a user corresponding to the input user information on a product indicated by the input image of the product to be presented, and may include, but is not limited to, at least one of the following: characters, numerical values, symbols. The first evaluation model may be used to represent a correspondence between the image of the product to be presented and the user information and a first evaluation result corresponding to the input image of the product to be presented and the user information.
And step 403, selecting a product image to be presented from the product image set to be presented as a product image for presentation based on the obtained first evaluation result.
In this embodiment, based on the first evaluation result obtained in step 402, the executing body may select a product image to be presented as a product image for presentation from a set of product images to be presented. The selected product image for presentation is the product image for presentation to the target user.
In step 404, for each product image for presentation among the selected product images for presentation, the following steps are performed: acquiring at least one background image corresponding to the product image for presentation; adding the product image for presentation to each background image in the acquired at least one background image to obtain a presentation image corresponding to the product image for presentation; and selecting a presentation image from the obtained presentation images corresponding to the product image for presentation as a target presentation image corresponding to the product image for presentation.
In this embodiment, the product image to be presented in the product image set to be presented corresponds to at least one background image. Specifically, the at least one background image corresponding to the product image to be presented may be a background image predetermined by a technician based on the product image to be presented. Further, the execution main body may execute, for a presentation product image among the presentation product images selected in step 403, the steps of:
step 4041, at least one background image corresponding to the image of the product for presentation is obtained.
Specifically, the execution body may acquire at least one pre-stored background image corresponding to the product image for presentation, or may acquire at least one background image corresponding to the product image for presentation sent by a communicatively connected electronic device.
Step 4042, for the background image in the at least one acquired background image, add the product image for presentation to the background image, and obtain an image for presentation corresponding to the product image for presentation.
A presentation image is an image containing both the product image for presentation and a background image corresponding to it. Specifically, the execution body may add the product image for presentation to the background image by various methods to obtain a presentation image corresponding to the product image for presentation. For example, the execution body may superimpose the product image for presentation at a preset position of the background image and determine the superimposed image as the presentation image corresponding to the product image for presentation; alternatively, the execution body may fuse the product image for presentation with the background image using an image fusion method and determine the fused image as the presentation image corresponding to the product image for presentation.
It should be noted that the method of image fusion is a well-known technique that is currently widely studied and applied, and is not described herein again.
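As a minimal illustration of the superposition option above (not the image-fusion path), the sketch below pastes a small product image onto a background at a preset position. Images are modeled as plain 2-D lists of pixel values so that no imaging library is needed; all names and the pixel representation are illustrative assumptions.

```python
# Assumed sketch: superimpose a product image onto a background image at a
# preset (top, left) position, producing the presentation image.

def superimpose(background, product, top, left):
    """Paste `product` onto a copy of `background` at (top, left)."""
    result = [row[:] for row in background]  # copy rows so the background is unchanged
    for r, row in enumerate(product):
        for c, pixel in enumerate(row):
            result[top + r][left + c] = pixel
    return result

background = [[0] * 4 for _ in range(4)]   # 4x4 blank background
product = [[1, 1], [1, 1]]                 # 2x2 product image
presentation = superimpose(background, product, top=1, left=1)
# The product pixels now occupy rows 1-2, columns 1-2 of the presentation image.
```

A production system would instead use an image library (e.g., paste with an alpha mask) or one of the image fusion methods the description mentions.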
Step 4043, selecting a presentation image from the obtained presentation images corresponding to the presentation product image as a target presentation image corresponding to the presentation product image.
The target presentation image may be the image finally presented to the target user. It is to be understood that, since the product image for presentation corresponds to at least one background image, at least one presentation image corresponding to the product image for presentation is obtained.
Specifically, the execution body may select, by various methods, a presentation image from the obtained at least one presentation image corresponding to the product image for presentation as the target presentation image corresponding to the product image for presentation. As an example, when the product image for presentation corresponds to only one presentation image, the execution body may directly determine that presentation image as the target presentation image corresponding to the product image for presentation; when the product image for presentation corresponds to at least two presentation images, the execution body may randomly select a presentation image from the at least two presentation images corresponding to the product image for presentation as the target presentation image corresponding to the product image for presentation.
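The per-image flow of step 404 — compose one presentation image per background, then pick a target — can be sketched as follows. The data structures and function names are illustrative assumptions, not the patent's implementation: composition is abbreviated to pairing, and the random path of step 4043 is used (the optional second-evaluation-model path is omitted).

```python
import random

def build_presentation_images(product_image, backgrounds):
    # Stand-in for steps 4041-4042: one presentation image per background.
    # Here "composition" is represented as a (product, background) pair.
    return [(product_image, bg) for bg in backgrounds]

def pick_target(presentation_images):
    """Step 4043, base behavior: random selection among candidates."""
    if len(presentation_images) == 1:
        return presentation_images[0]
    return random.choice(presentation_images)

images = build_presentation_images("product_A", ["bg1", "bg2", "bg3"])
target = pick_target(images)
# target is one of the three composed presentation images
```

In the optional implementation described next, `pick_target` would instead score each candidate with the second evaluation model and keep the highest-scoring one.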
In some optional implementation manners of this embodiment, the executing body may further select a presentation image from presentation images corresponding to the presentation product image as a target presentation image corresponding to the presentation product image by:
first, the execution agent may input the presentation image and user information of the target user into a second evaluation model trained in advance for a presentation image in the presentation image corresponding to the presentation product image, and obtain a second evaluation result.
The second evaluation result may be used to characterize the degree of interest of the user corresponding to the input user information in the input presentation image, and may include, but is not limited to, at least one of the following: characters, numerical values, symbols. For example, the second evaluation result may include a value "0" or a value "1", where the value "0" may be used to characterize that the user corresponding to the input user information is not interested in the input presentation image; the value "1" may be used to characterize that the user corresponding to the input user information is interested in the input presentation image.
In this implementation, the second evaluation model may be used to characterize a correspondence between the presentation image and the user information and a second evaluation result corresponding to the input presentation image and the user information. Specifically, as an example, the second evaluation model may be a correspondence table in which a plurality of presentation images, user information, and second evaluation results corresponding to the presentation images and the user information are stored, the correspondence table being previously prepared by a technician based on statistics of a large number of presentation images, user information of users, and the second evaluation results.
Here, the second evaluation results in the correspondence table may be obtained by labeling by a technician, or may be generated based on a preset rule. The preset rule may be a rule set in advance by the technician for determining, based on the user information, the second evaluation result corresponding to a presentation image and that user information. For example, a presentation image may be available for the user to click, and the preset rule may be: if the user information indicates that the user clicked the presentation image, the second evaluation result is "interested"; if the user information indicates that the user did not click the presentation image, the second evaluation result is "uninterested".
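The click-based preset rule above can be sketched as building the correspondence table directly from impression logs. The log schema and names are assumptions for illustration; the resulting `(image, user) -> result` mapping is one possible form of the correspondence-table second evaluation model.

```python
# Assumed sketch: derive second evaluation results from click behavior,
# per the preset rule (clicked -> "interested", not clicked -> "uninterested").

def label_from_clicks(impressions):
    """impressions: iterable of (presentation_image_id, user_id, clicked)."""
    table = {}
    for image_id, user_id, clicked in impressions:
        table[(image_id, user_id)] = "interested" if clicked else "uninterested"
    return table

logs = [
    ("img_1", "user_a", True),   # user_a clicked img_1
    ("img_2", "user_a", False),  # user_a did not click img_2
]
table = label_from_clicks(logs)
# table[("img_1", "user_a")] == "interested"
```

The same labels could equally serve as training targets for a learned second evaluation model, as the next paragraph notes.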
The second evaluation model may be a model obtained by training an initial model (for example, a neural network, an FM model, or the like) by a machine learning method based on a preset training sample. It should be noted that the training process of the second evaluation model is substantially the same as the training process of the first evaluation model in the embodiment corresponding to fig. 2, and details are not repeated here.
Then, the execution subject may select, based on the obtained second evaluation result, a presentation image from presentation images corresponding to the presentation product image as a target presentation image corresponding to the presentation product image.
Specifically, the execution body may select, by various methods, a presentation image from the presentation images corresponding to the product image for presentation as the target presentation image corresponding to the product image for presentation, based on the degree of interest indicated by the obtained second evaluation results. For example, the presentation image whose corresponding second evaluation result indicates the highest degree of interest may be selected as the target presentation image.
In some optional implementation manners of this embodiment, after selecting a presentation image from presentation images corresponding to the presentation product images as a target presentation image corresponding to the presentation product images, the execution main body may further output the selected target presentation image to a terminal used by a target user.
Step 401, step 402, and step 403 are respectively the same as step 201, step 202, and step 203 in the foregoing embodiment, and the above description for step 201, step 202, and step 203 also applies to step 401, step 402, and step 403, which is not described herein again.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the method for processing information in the present embodiment highlights a step of obtaining a target presentation image corresponding to a presentation product image based on a background image corresponding to the presentation product image after obtaining the presentation product image. Therefore, the scheme described in the embodiment can further determine the display background of the product image for presentation, and generate the target image for presentation which is finally presented to the user, thereby improving the comprehensiveness of information processing.
With further reference to fig. 5, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for processing information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for processing information of the present embodiment includes: an acquisition unit 501, an input unit 502, and a selection unit 503. The obtaining unit 501 is configured to obtain user information of a target user and a set of images of a product to be presented; the input unit 502 is configured to, for a product image to be presented in a product image set to be presented, input the product image to be presented and user information into a pre-trained first evaluation model to obtain a first evaluation result, where the first evaluation result is used to represent a degree of interest of a user corresponding to the input user information in a product indicated by the input product image to be presented; the selecting unit 503 is configured to select a product image to be presented from the set of product images to be presented as a product image for presentation based on the obtained first evaluation result.
In this embodiment, the obtaining unit 501 of the apparatus for processing information may obtain the user information of the target user and the set of product images to be presented through a wired or wireless connection. The target user is a user for whom the corresponding product image for presentation is to be determined, and the product image for presentation corresponding to the target user is a product image to be presented to the target user. A product image may indicate a product. As an example, the product image may be an image obtained by photographing the product. The user information of the target user may be used to characterize features of the target user, and may include, but is not limited to, at least one of the following: characters, numerical values, symbols, images.
In this embodiment, the product image to be presented may be a predetermined product image to be presented to the user. The product image set to be presented comprises at least one product image to be presented.
In this embodiment, for a product image to be presented in the product image set to be presented obtained by the obtaining unit 501, the input unit 502 may input the product image to be presented and user information into a first evaluation model trained in advance, so as to obtain a first evaluation result. The first evaluation result is used for characterizing the degree of interest of a user corresponding to the input user information on a product indicated by the input image of the product to be presented, and may include, but is not limited to, at least one of the following: characters, numerical values, symbols.
In this embodiment, the first evaluation model may be used to represent a correspondence between the image of the product to be presented and the user information and the first evaluation result corresponding to the input image of the product to be presented and the user information.
In the present embodiment, based on the first evaluation result obtained by the input unit 502, the selection unit 503 may select a product image to be presented as a product image for presentation from a set of product images to be presented. The selected product image for presentation is the product image for presentation to the target user.
In some optional implementation manners of the embodiment, the to-be-presented product image in the to-be-presented product image set corresponds to at least one background image; and the apparatus 500 may further comprise: an adding unit (not shown in the figure) configured to execute the following steps for a presentation product image of the selected presentation product images: acquiring at least one background image corresponding to the product image for presentation; adding the product image for presentation to the background image in the obtained at least one background image to obtain an image for presentation corresponding to the product image for presentation; and selecting a presentation image from the presentation images corresponding to the obtained presentation product images as a target presentation image corresponding to the presentation product image.
In some optional implementations of this embodiment, the apparatus 500 may further include: an output unit (not shown in the figure) configured to output the selected image for target presentation to a terminal used by the target user.
In some optional implementations of this embodiment, the adding unit may be further configured to: for each presentation image corresponding to the product image for presentation, input the presentation image and the user information into a pre-trained second evaluation model to obtain a second evaluation result, where the second evaluation result is used to characterize the degree of interest of the user corresponding to the input user information in the input presentation image; and select, based on the obtained second evaluation results, a presentation image from the presentation images corresponding to the product image for presentation as the target presentation image corresponding to the product image for presentation.
In some optional implementations of this embodiment, the first evaluation model may be trained by: obtaining a plurality of sample presentation product images; for each sample presentation product image among the plurality of sample presentation product images, performing the following steps: acquiring, as sample user information, user information of a user corresponding to a terminal that presented the sample presentation product image; determining, based on the obtained sample user information, a sample first evaluation result characterizing the user's degree of interest in the product indicated by the sample presentation product image; and composing a training sample from the sample presentation product image, the obtained sample user information, and the determined sample first evaluation result. Then, using a machine learning method, the first evaluation model is trained by taking the sample user information and the sample presentation product image included in each composed training sample as input, and taking the sample first evaluation result corresponding to the input sample user information and sample presentation product image as expected output.
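The training procedure above can be sketched, under heavy simplification, as fitting a scorer on (user info + image features, interest label) pairs. Everything here is an assumption for illustration: features are hand-made numeric vectors and the model is a linear scorer trained by stochastic gradient descent, whereas the description contemplates a neural network or FM model.

```python
# Simplified, assumption-laden sketch of training the first evaluation model.

def train_first_evaluation_model(samples, lr=0.1, epochs=200):
    """samples: list of (feature_vector, label), label in {0.0, 1.0}
    (1.0 = interested). Returns the learned weight vector."""
    n = len(samples[0][0])
    weights = [0.0] * n
    for _ in range(epochs):
        for features, label in samples:
            pred = sum(w * x for w, x in zip(weights, features))
            err = pred - label
            # Per-sample gradient step on squared error.
            weights = [w - lr * err * x for w, x in zip(weights, features)]
    return weights

# Hypothetical features: [user matches product category, image quality flag].
samples = [([1.0, 1.0], 1.0), ([0.0, 1.0], 0.0), ([1.0, 0.0], 1.0)]
weights = train_first_evaluation_model(samples)
score = sum(w * x for w, x in zip(weights, [1.0, 1.0]))
# score approaches 1.0 for a sample the user was interested in
```

At serving time, the learned scorer plays the role of the first evaluation model: the candidate with the highest score is selected as the product image for presentation.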
In some optional implementations of this embodiment, the user information may include, but is not limited to, at least one of: attribute information, historical behavior information.
It will be understood that the elements described in the apparatus 500 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 500 and the units included therein, and are not described herein again.
The apparatus 500 provided in the foregoing embodiment of the present application effectively utilizes the first evaluation model to evaluate the interest degree of the target user with respect to the product image to be presented, which is helpful for selecting the product image to be presented, which is interested by the target user, from the set of product images to be presented as the product image to be finally presented to the user based on the evaluation result, thereby improving the pertinence and diversity of information processing.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use in implementing a server according to embodiments of the present application. The server shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. The driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 610 as necessary, so that a computer program read out therefrom is mounted in the storage section 608 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, an input unit, and a selection unit. Where the names of these units do not in some cases constitute a limitation of the unit itself, for example, the acquiring unit may also be described as a "unit that acquires user information of a target user and a set of images of a product to be presented".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the server described in the above embodiments; or may exist separately and not be assembled into the server. The computer readable medium carries one or more programs which, when executed by the server, cause the server to: acquiring user information of a target user and a product image set to be presented; for a product image to be presented in a product image set to be presented, inputting the product image to be presented and user information into a pre-trained first evaluation model to obtain a first evaluation result, wherein the first evaluation result is used for representing the interest degree of a user corresponding to the input user information on a product indicated by the input product image to be presented; and selecting the product image to be presented from the product image set to be presented as a product image for presentation based on the obtained first evaluation result.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (14)

1. A method for processing information, comprising:
acquiring user information of a target user and a product image set to be presented;
for a product image to be presented in the product image set to be presented, inputting the product image to be presented and the user information into a pre-trained first evaluation model to obtain a first evaluation result, wherein the first evaluation result is used for representing the interest degree of a user corresponding to the input user information on a product indicated by the input product image to be presented;
and selecting the product image to be presented from the product image set to be presented as a product image for presentation based on the obtained first evaluation result.
2. The method according to claim 1, wherein the product image to be presented in the set of product images to be presented corresponds to at least one background image; and
after the to-be-presented product image is selected from the to-be-presented product image set as a product image for presentation, the method further includes:
for each product image for presentation among the selected product images for presentation, performing the following steps: acquiring at least one background image corresponding to the product image for presentation; adding the product image for presentation to each background image in the acquired at least one background image to obtain a presentation image corresponding to the product image for presentation; and selecting a presentation image from the obtained presentation images corresponding to the product image for presentation as a target presentation image corresponding to the product image for presentation.
3. The method of claim 2, wherein after selecting the presentation image from the presentation images corresponding to the presentation product images as the target presentation image corresponding to the presentation product image, the method further comprises:
and outputting the image for the selected target presentation to a terminal used by the target user.
4. The method of claim 2, wherein said selecting a presentation image from the presentation images corresponding to the presentation product image as a target presentation image corresponding to the presentation product image comprises:
for each presentation image among the presentation images corresponding to the product image for presentation, inputting the presentation image and the user information into a pre-trained second evaluation model to obtain a second evaluation result, wherein the second evaluation result is used for characterizing the degree of interest of the user corresponding to the input user information in the input presentation image;
and selecting a presentation image from presentation images corresponding to the presentation product image as a target presentation image corresponding to the presentation product image based on the obtained second evaluation result.
5. The method of claim 1, wherein the first evaluation model is trained by:
obtaining a plurality of product images for sample presentation;
for a sample presentation product image of a plurality of sample presentation product images, performing the steps of: acquiring user information of a user corresponding to the terminal presenting the product image for sample presentation as sample user information; based on the obtained sample user information, determining a sample first evaluation result for characterizing the degree of interest of the user in presenting the product indicated by the product image for the sample; forming a training sample by using the sample presentation product image, the obtained sample user information and the determined sample first evaluation result;
and training, using a machine learning method, a first evaluation model by taking the sample user information and the sample presentation product image included in each of the composed training samples as input and taking the sample first evaluation result corresponding to the input sample user information and sample presentation product image as expected output.
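The training procedure of claim 5 can be sketched as follows. The numeric feature encodings and the minimal logistic-regression learner are hypothetical stand-ins: the claim specifies only that training samples pair (sample user information, sample presentation product image) with a sample first evaluation result, and leaves the machine learning method open.

```python
import math

# Sketch of claim 5: assemble training samples, then fit a scorer whose
# output stands in for the "first evaluation result" (degree of interest).

def make_training_samples(records):
    """records: (user_features, image_features, interest_label) triples."""
    return [(u + img, label) for u, img, label in records]

def train_first_eval_model(samples, epochs=200, lr=0.5):
    """Fit a tiny logistic-regression scorer by per-sample gradient descent."""
    dim = len(samples[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the logistic loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g

    def model(user_features, image_features):
        x = user_features + image_features
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        return 1.0 / (1.0 + math.exp(-z))  # first evaluation result in [0, 1]

    return model

# Toy data: interest was recorded only when user and image features matched.
records = [([1.0], [1.0], 1), ([1.0], [0.0], 0), ([0.0], [1.0], 0), ([0.0], [0.0], 0)]
model = train_first_eval_model(make_training_samples(records))
print(model([1.0], [1.0]) > model([1.0], [0.0]))  # True
```

A production system would replace the hand-rolled learner with a standard framework and richer user/image feature extractors; the sample-assembly step mirrors the claim directly.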
6. The method according to one of claims 1-5, wherein the user information comprises at least one of: attribute information, historical behavior information.
7. An apparatus for processing information, comprising:
an acquisition unit configured to acquire user information of a target user and a set of product images to be presented;
an input unit configured to, for each product image to be presented in the set of product images to be presented, input the product image to be presented and the user information into a pre-trained first evaluation model to obtain a first evaluation result, wherein the first evaluation result characterizes the degree of interest of the user corresponding to the input user information in the product indicated by the input product image to be presented;
and a selecting unit configured to select, based on the obtained first evaluation results, a product image to be presented from the set of product images to be presented as a presentation product image.
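The unit structure of claim 7 can be sketched as a small pipeline. The class, the lambda-based model, and the top-k selection rule are illustrative assumptions: the claim only requires selection "based on" the first evaluation results, without fixing how many images are kept.

```python
class InformationProcessingApparatus:
    """Sketch of claim 7: acquisition, input, and selecting units."""

    def __init__(self, first_eval_model, k=1):
        self.first_eval_model = first_eval_model  # hypothetical pre-trained scorer
        self.k = k  # how many presentation product images to keep (assumption)

    def select_presentation_images(self, user_info, candidate_images):
        # Input unit: score every product image to be presented.
        results = [
            (self.first_eval_model(user_info, image), image)
            for image in candidate_images
        ]
        # Selecting unit: keep the k highest-scoring images as
        # presentation product images.
        results.sort(key=lambda pair: pair[0], reverse=True)
        return [image for _, image in results[: self.k]]

apparatus = InformationProcessingApparatus(
    first_eval_model=lambda user, image: image["relevance"], k=2
)
chosen = apparatus.select_presentation_images(
    {"user_id": 7},
    [{"id": "a", "relevance": 0.2}, {"id": "b", "relevance": 0.9}, {"id": "c", "relevance": 0.5}],
)
print([img["id"] for img in chosen])  # ['b', 'c']
```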
8. The device of claim 7, wherein each product image to be presented in the set of product images to be presented corresponds to at least one background image; and
the device further comprises:
an adding unit configured to perform the following steps for each presentation product image among the selected presentation product images: acquiring the at least one background image corresponding to the presentation product image; adding the presentation product image to each background image among the acquired at least one background image to obtain a presentation image corresponding to the presentation product image; and selecting a presentation image from the obtained presentation images corresponding to the presentation product image as a target presentation image corresponding to the presentation product image.
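The adding unit of claim 8 composites a presentation product image onto each of its background images. A dependency-free sketch, representing images as 2-D lists of pixel values; a real implementation would more likely use an image library (e.g. Pillow's `Image.paste`), and the fixed paste offset here is an assumption:

```python
def add_product_to_background(product, background, top=0, left=0):
    """Paste `product` (2-D pixel grid) onto a copy of `background`
    at (top, left), yielding one presentation image."""
    composed = [row[:] for row in background]
    for r, row in enumerate(product):
        for c, pixel in enumerate(row):
            composed[top + r][left + c] = pixel
    return composed

def presentation_images_for(product, backgrounds):
    """One presentation image per corresponding background image (claim 8)."""
    return [add_product_to_background(product, bg, top=1, left=1) for bg in backgrounds]

product = [[9, 9], [9, 9]]  # a 2x2 product image
backgrounds = [[[0] * 4 for _ in range(4)], [[1] * 4 for _ in range(4)]]
images = presentation_images_for(product, backgrounds)
print(images[0][1][1], images[0][0][0])  # 9 0
```

The resulting presentation images would then be scored by the second evaluation model of claim 10 to pick the target presentation image.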
9. The apparatus of claim 8, wherein the apparatus further comprises:
an output unit configured to output the selected target presentation image to a terminal used by the target user.
10. The apparatus of claim 8, wherein the adding unit is further configured to:
for each presentation image among the presentation images corresponding to the presentation product image, inputting the presentation image and the user information into a pre-trained second evaluation model to obtain a second evaluation result, wherein the second evaluation result characterizes the degree of interest of the user corresponding to the input user information in the input presentation image;
and selecting, based on the obtained second evaluation results, a presentation image from the presentation images corresponding to the presentation product image as the target presentation image corresponding to the presentation product image.
11. The apparatus of claim 7, wherein the first evaluation model is trained by:
acquiring a plurality of sample presentation product images;
for each sample presentation product image among the plurality of sample presentation product images, performing the following steps: acquiring, as sample user information, user information of a user corresponding to a terminal on which the sample presentation product image was presented; determining, based on the obtained sample user information, a sample first evaluation result characterizing the degree of interest of the user in the product indicated by the sample presentation product image; and composing a training sample from the sample presentation product image, the obtained sample user information, and the determined sample first evaluation result;
and training, using a machine learning method, a first evaluation model by taking the sample user information and the sample presentation product image included in each of the composed training samples as input and taking the sample first evaluation result corresponding to the input sample user information and sample presentation product image as expected output.
12. The apparatus according to one of claims 7-11, wherein the user information comprises at least one of: attribute information, historical behavior information.
13. A server, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
14. A computer-readable medium, on which a computer program is stored which, when executed by a processor, implements the method according to any one of claims 1-6.
CN201811287919.8A 2018-10-31 2018-10-31 Method and device for processing information Active CN111125501B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811287919.8A CN111125501B (en) 2018-10-31 2018-10-31 Method and device for processing information

Publications (2)

Publication Number Publication Date
CN111125501A true CN111125501A (en) 2020-05-08
CN111125501B CN111125501B (en) 2023-07-25

Family

ID=70485471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811287919.8A Active CN111125501B (en) 2018-10-31 2018-10-31 Method and device for processing information

Country Status (1)

Country Link
CN (1) CN111125501B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140019484A1 (en) * 2012-07-13 2014-01-16 Deepmind Technologies Limited Method and Apparatus for Image Searching
US20140072227A1 (en) * 2012-09-13 2014-03-13 International Business Machines Corporation Searching and Sorting Image Files
US20150286898A1 (en) * 2014-04-04 2015-10-08 Wei Di Image evaluation
CN106295832A (en) * 2015-05-12 2017-01-04 阿里巴巴集团控股有限公司 Product information method for pushing and device
CN106407425A (en) * 2016-09-27 2017-02-15 北京百度网讯科技有限公司 A method and a device for information push based on artificial intelligence
CN107292713A (en) * 2017-06-19 2017-10-24 武汉科技大学 A kind of rule-based individual character merged with level recommends method
CN107862058A (en) * 2017-11-10 2018-03-30 北京百度网讯科技有限公司 Method and apparatus for generating information
WO2018166288A1 (en) * 2017-03-15 2018-09-20 北京京东尚科信息技术有限公司 Information presentation method and device
CN108573054A (en) * 2018-04-24 2018-09-25 百度在线网络技术(北京)有限公司 Method and apparatus for pushed information
US20180278674A1 (en) * 2016-05-05 2018-09-27 Tencent Technology (Shenzhen) Company Limited Media information presentation method, client, and server

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Song Le; Lin Yuchi; Liu Qihai; Zhao Meirong; Feng Weichang: "A novel quality evaluation model for heterogeneous-source image fusion", Laser & Infrared, no. 01 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907317A (en) * 2021-01-27 2021-06-04 北京百度网讯科技有限公司 Information pushing method, device, equipment, storage medium and program product
CN112907317B (en) * 2021-01-27 2023-08-04 北京百度网讯科技有限公司 Information pushing method, device, equipment, storage medium and program product


Similar Documents

Publication Publication Date Title
CN108830235B (en) Method and apparatus for generating information
CN110708346B (en) Information processing system and method
CN111125574B (en) Method and device for generating information
CN109446442B (en) Method and apparatus for processing information
CN109981787B (en) Method and device for displaying information
CN108536867B (en) Method and apparatus for generating information
US20200322570A1 (en) Method and apparatus for aligning paragraph and video
CN109873756B (en) Method and apparatus for transmitting information
CN109413056B (en) Method and apparatus for processing information
CN111061956A (en) Method and apparatus for generating information
CN112306793A (en) Method and device for monitoring webpage
CN110866040A (en) User portrait generation method, device and system
CN111897950A (en) Method and apparatus for generating information
CN108600780B (en) Method for pushing information, electronic device and computer readable medium
CN107885872B (en) Method and device for generating information
CN107330087B (en) Page file generation method and device
CN108509442B (en) Search method and apparatus, server, and computer-readable storage medium
CN110673886A (en) Method and device for generating thermodynamic diagram
CN112308648A (en) Information processing method and device
CN111125502B (en) Method and device for generating information
CN109408647B (en) Method and apparatus for processing information
CN109472028B (en) Method and device for generating information
CN111125501B (en) Method and device for processing information
CN109034085B (en) Method and apparatus for generating information
CN109584012B (en) Method and device for generating item push information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant