WO2019233260A1 - Advertisement information pushing method and device, storage medium, and electronic device - Google Patents

Advertisement information pushing method and device, storage medium, and electronic device

Info

Publication number
WO2019233260A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
category
image
scene category
advertisement information
Prior art date
Application number
PCT/CN2019/087351
Other languages
English (en)
French (fr)
Inventor
陈岩 (Chen Yan)
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司)
Publication of WO2019233260A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 — Commerce
    • G06Q30/02 — Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 — Advertisements
    • G06Q30/0251 — Targeted advertisements
    • G06Q30/0252 — Targeted advertisements based on events or environment, e.g. weather or festivals
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/24 — Classification techniques
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements

Definitions

  • the present application relates to the field of computer technology, and in particular, to a method and device for pushing advertisement information, a storage medium, and an electronic device.
  • the traditional advertisement pushing method generally infers the content that the user is interested in from the applications and content that the user has recently used, so as to recommend advertisements related to the user's interests.
  • however, users use applications in limited ways, so such methods cannot capture the user's points of interest comprehensively enough to achieve accurate advertisement pushing.
  • the embodiments of the present application provide a method and a device for pushing advertisement information, a storage medium, and an electronic device, which can push advertisement information more accurately.
  • a method for pushing advertisement information includes: acquiring an image captured in a first preset time period; performing scene recognition on the image to obtain a scene category to which the image belongs; and pushing advertisement information corresponding to the scene category according to the scene category.
  • An advertisement information pushing device includes:
  • An image acquisition module configured to acquire an image captured in a first preset time period
  • a scene recognition module configured to perform scene recognition on the image to obtain a scene category to which the image belongs
  • An advertisement information pushing module is configured to push advertisement information corresponding to a scene category according to a scene category.
  • a computer-readable storage medium has stored thereon a computer program that, when executed by a processor, implements the operations of the advertising information push method described above.
  • An electronic device includes a memory, a processor, and a computer program stored on the memory and executable on the processor.
  • when the processor executes the computer program, the operations of the advertisement information pushing method described above are performed.
  • the above advertisement information pushing method and device, storage medium, and electronic device obtain images captured in a first preset time period, perform scene recognition on the images, and obtain scene categories to which the images belong. Advertisement information corresponding to the scene category is pushed according to the scene category.
  • FIG. 1 is an internal structural diagram of an electronic device in an embodiment
  • FIG. 2 is a flowchart of a method for pushing advertisement information in an embodiment
  • FIG. 3 is a schematic structural diagram of a neural network model in an embodiment
  • FIG. 4 is a flowchart of a method for scene recognition in FIG. 2 to obtain a scene category to which the image belongs;
  • FIG. 5 is a flowchart of a method for pushing advertisement information in another embodiment
  • FIG. 6 is a flowchart of a method for pushing advertisement information corresponding to a scene category according to a scene category in FIG. 2;
  • FIG. 7 is a schematic structural diagram of an advertisement information pushing device according to an embodiment
  • FIG. 8 is a schematic structural diagram of an advertisement information pushing device according to another embodiment
  • FIG. 9 is a block diagram of a partial structure of a mobile phone related to an electronic device according to an embodiment.
  • FIG. 1 is a schematic diagram of an internal structure of an electronic device in an embodiment.
  • the electronic device includes a processor, a memory, and a network interface connected through a system bus.
  • the processor is used to provide computing and control capabilities to support the operation of the entire electronic device.
  • the memory is used to store data, programs, and the like. At least one computer program is stored on the memory, and the computer program can be executed by a processor to implement the advertising information pushing method applicable to electronic devices provided in the embodiments of the present application.
  • the memory may include a non-volatile storage medium such as a magnetic disk, an optical disc, a read-only memory (ROM), or a random-access memory (RAM).
  • the memory includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the computer program can be executed by a processor to implement a method for pushing advertisement information provided by each of the following embodiments.
  • the internal memory provides a cached operating environment for the operating system and computer programs in the non-volatile storage medium.
  • the network interface may be an Ethernet card or a wireless network card, and is used to communicate with external electronic devices.
  • the electronic device may be a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
  • a method for pushing advertisement information is provided.
  • the method is applied to the electronic device in FIG. 1 as an example, and includes:
  • Operation 220 Acquire an image captured in a first preset time period.
  • the first preset time period may be defined by the number of photos taken by the user; for example, the period during which the user took the last 100 photos (or any other number of photos) counting back from the current moment may be set as the first preset time period. A fixed time period may also be set directly as the first preset time period, for example the week before the current time, or any other duration. All images whose shooting time falls within the first preset time period are obtained from the electronic device on which the user captured them; images include both photos and videos.
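Neither window definition is spelled out as code in the specification; a hypothetical helper covering both variants (the count-based window and the fixed-duration window) might look like this sketch:

```python
from datetime import datetime, timedelta

def images_in_window(images, now, days=7, max_count=None):
    """Return the images captured in the first preset time period.

    `images` is a list of (capture_time, path) tuples. The window is either
    the last `days` days before `now`, or, when `max_count` is given, the
    period over which the most recent `max_count` captures were taken.
    (Hypothetical helper; the specification describes the window only in prose.)
    """
    if max_count is not None:
        # Count-based window: the period during which the last N photos were taken.
        recent = sorted(images, key=lambda t: t[0], reverse=True)[:max_count]
        return list(reversed(recent))  # restore chronological order
    start = now - timedelta(days=days)
    return [img for img in images if start <= img[0] <= now]
```

For example, with ten daily photos, `days=7` keeps the last eight capture days (endpoints inclusive), while `max_count=3` keeps only the three most recent shots.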
  • Operation 240 Perform scene recognition on the image to obtain a scene category to which the image belongs.
  • Scene recognition is performed on all the images in the first preset time period obtained above to obtain a scene recognition result for each image.
  • a neural network model is used to perform scene recognition on the image.
  • the specific training process of the neural network model is as follows: a training image containing a background training target and a foreground training target is input into the neural network, and a first loss function and a second loss function are obtained; the first loss function reflects, for each pixel of the background region in the training image, the difference between a first predicted confidence and a first true confidence, and the second loss function reflects, for each pixel of the foreground region in the training image, the difference between a second predicted confidence and a second true confidence.
  • the first predicted confidence is the confidence, predicted by the neural network, that a pixel of the background region in the training image belongs to the background training target, and the first true confidence is the confidence, marked in advance in the training image, that the pixel belongs to the background training target; the second predicted confidence and the second true confidence are defined analogously for pixels of the foreground region and the foreground training target.
  • the first loss function and the second loss function are weighted and summed to obtain a target loss function, and the parameters of the neural network are adjusted according to the target loss function to train the neural network.
  • a neural network model is trained, and scene recognition is performed on the image according to the neural network model to obtain a scene category to which the image belongs.
  • FIG. 3 is a schematic structural diagram of a neural network model in an embodiment.
  • the input layer of the neural network receives training images carrying image category labels and performs feature extraction through a base network (such as a CNN), outputting the extracted image features to the feature layer; based on these image features, the first loss function is obtained by performing category detection on the background training target
  • the second loss function is obtained by performing category detection on the foreground training target based on image features.
  • the position loss function is obtained by performing position detection on the foreground training target based on the foreground area.
  • the weighted sum of the first loss function, the second loss function, and the position loss function is used to obtain the target loss function.
  • the neural network may be a convolutional neural network.
  • Convolutional neural networks include a data input layer, a convolutional calculation layer, an activation layer, a pooling layer, and a fully connected layer.
  • the data input layer is used to pre-process the original image data.
  • the pre-processing may include de-averaging, normalization, dimensionality reduction, and whitening processes.
  • De-averaging refers to centering all dimensions of the input data to 0 in order to pull the center of the sample back to the origin of the coordinate system.
  • Normalization is normalizing the amplitude to the same range.
  • Whitening refers to normalizing the amplitude on each feature axis of the data.
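The de-averaging and normalization steps above can be sketched as follows; the column-wise (per-dimension) layout and the [-1, 1] target range are assumptions chosen for illustration:

```python
def de_mean(data):
    """De-averaging: center every dimension at 0, pulling the sample center
    back to the origin of the coordinate system."""
    means = [sum(col) / len(col) for col in zip(*data)]
    return [[x - m for x, m in zip(row, means)] for row in data]

def normalize(data):
    """Normalization: scale every dimension to the same [-1, 1] amplitude
    range (max-absolute-value scaling, assumed here for simplicity)."""
    maxabs = [max(abs(x) for x in col) or 1.0 for col in zip(*data)]
    return [[x / m for x, m in zip(row, maxabs)] for row in data]
```

For instance, two samples `[1.0, 200.0]` and `[3.0, 400.0]` de-mean to `[-1, -100]` and `[1, 100]`, and normalization then brings both dimensions to the same [-1, 1] amplitude.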
  • the convolution calculation layer performs local correlation by sliding a window over the data; the weights of each filter connected to the data window in the convolution calculation layer are fixed.
  • Each filter focuses on an image feature, such as vertical edges, horizontal edges, colors, textures, etc., and these filters are combined to obtain the entire image.
  • a filter is a weight matrix.
  • a weight matrix can be used to convolve with data in different windows.
  • the activation layer is used to non-linearly map the output of the convolution layer.
  • the activation function used by the activation layer may be ReLU (Rectified Linear Unit).
  • the pooling layer can be sandwiched between consecutive convolutional layers to compress the amount of data and parameters and reduce overfitting.
  • the pooling layer can use the maximum method or average method to reduce the dimensionality of the data.
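As an illustration of the maximum-method dimensionality reduction, a plain 2x2 max-pooling pass (window size and stride chosen for this sketch; the specification does not fix them) might look like:

```python
def max_pool_2x2(grid):
    """2x2 max pooling with stride 2: keep the strongest response per window,
    compressing the amount of data as the pooling layer described above does."""
    pooled = []
    for r in range(0, len(grid) - 1, 2):
        row = []
        for c in range(0, len(grid[r]) - 1, 2):
            row.append(max(grid[r][c], grid[r][c + 1],
                           grid[r + 1][c], grid[r + 1][c + 1]))
        pooled.append(row)
    return pooled
```

A 4x4 feature map is reduced to 2x2, quartering the data volume while retaining the dominant activation in each window; replacing `max` with an average would give the average method.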
  • the fully connected layer is located at the tail of the convolutional neural network, where every neuron in one layer is connected by weights to every neuron in the adjacent layer.
  • a part of the convolutional layers of the convolutional neural network is cascaded to a first confidence output node, a part is cascaded to a second confidence output node, and a part is cascaded to a position output node.
  • the category of the background of the image can be detected according to the first confidence output node, the category of the foreground object of the image can be detected according to the second confidence output node, and the position corresponding to the foreground object can be detected according to the position output node.
  • the scene recognition results of all the images in the first preset time period are classified according to a preset standard, and the scene categories corresponding to all the images are obtained.
  • Scene categories are divided according to preset standards. For example, scene recognition results can be divided into landscape categories, gourmet categories, and portrait categories.
  • Operation 260 Push advertisement information corresponding to the scene category according to the scene category.
  • corresponding advertisement information is set for each scene category in advance. For example, when the scene category is landscape, tourism and hotel advertisement information is set; when the scene category is gourmet, restaurant and hotel advertisement information can be set; when the scene category is portrait, beauty salon advertisement information can be set; and when the scene category is pet, pet feeding advertisement information can be set.
  • an image captured in a first preset time period is acquired, scene recognition is performed on the image, and a scene category to which the image belongs is obtained. Advertisement information corresponding to the scene category is pushed according to the scene category. Because users generally take pictures of things of interest, by acquiring images taken during the first preset time period, and then performing scene recognition on the images, the scene category to which the images belong is obtained. According to the scene category, an advertisement corresponding to the scene category is pushed. It is easy to accurately grasp the user's points of interest, so as to accurately push advertising information.
  • operation 240, performing scene recognition on the image to obtain a scene category to which the image belongs, includes:
  • Operation 242 Perform scene recognition on the images taken in the first preset time period to obtain a scene recognition result corresponding to each image.
  • the scene recognition result is a result of scene recognition performed on the subject elements included in the image.
  • the scene recognition results in the image include beach, blue sky, green grass, snow, night scene, backlight, sunrise / sunset, fireworks, spotlight, indoor, text document, portrait, baby, cat, dog, food, etc.
  • Scene recognition is performed on the images taken during the first preset time period one by one to obtain the scene recognition result corresponding to each image.
  • the scene recognition result corresponding to an image can be one or more.
  • for example, the scene recognition result obtained after performing scene recognition on a selfie image containing only a person is portrait, while performing scene recognition on an image that includes both a beach and a blue sky yields two results, beach and blue sky.
  • Operation 244 Classify the scene recognition result of the image according to a preset classification rule to obtain a scene category to which the image belongs.
  • the preset classification rules are specifically as follows: landscape refers to natural scenery and sights viewed for enjoyment, including both natural and cultural landscapes, so scene recognition results such as beach, blue sky, green grass, snow, sunrise/sunset, and fireworks are classified into the landscape category. Gourmet, as the name implies, refers to good food, whether expensive delicacies or cheap street snacks; food need not be expensive, so long as the user likes it. Therefore, scene recognition results for food (staple foods, meat, fruits, vegetables, etc.) are classified into the gourmet category. A portrait scene recognition result is classified into the portrait category, and cat, dog, or other pet results are classified into the pet category.
  • when the scene recognition results obtained after performing scene recognition on an image all belong to the same scene category, that scene category is determined to be the scene category to which the image belongs.
  • when the scene recognition results obtained after performing scene recognition on an image do not all belong to the same scene category, it is necessary to determine which scene category carries the higher weight in the image, and to use the scene category with the higher weight as the scene category of the image.
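A minimal sketch of these classification rules follows; the result-to-category mapping and the tie-breaking `weights` dict are hypothetical, since the specification describes them only in prose:

```python
# Hypothetical mapping from scene recognition results to scene categories,
# following the preset classification rules described in the text.
RESULT_TO_CATEGORY = {
    "beach": "landscape", "blue sky": "landscape", "green grass": "landscape",
    "snow": "landscape", "sunrise/sunset": "landscape", "fireworks": "landscape",
    "food": "gourmet", "portrait": "portrait", "cat": "pet", "dog": "pet",
}

def scene_category(results, weights):
    """Map an image's scene recognition results to a single scene category.

    When the results span several categories, the category with the higher
    weight wins; the weighting scheme itself is not specified, so `weights`
    is a hypothetical {category: weight} dict."""
    cats = {RESULT_TO_CATEGORY[r] for r in results if r in RESULT_TO_CATEGORY}
    if not cats:
        return None
    return max(cats, key=lambda c: weights.get(c, 0))
```

So an image recognized as beach and blue sky falls wholly in the landscape category, while an image recognized as beach and portrait is assigned to whichever of the two categories carries the higher weight.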
  • Operation 246 Count the number of images included in each scene category.
  • each image corresponds to only one scene category. After all the divisions, count the number of images contained in each scene category.
  • the preset classification rule can be used to divide an image into a scene category according to its scene recognition result, thereby realizing scene classification of the images, so that advertisement information corresponding to each scene category can subsequently be pushed.
  • before operation 220 of acquiring an image captured in a first preset time period, the method further includes:
  • an advertisement category corresponding to a push is set for each scene category in advance, and each scene category may correspond to one or more advertisement categories to be pushed.
  • for each scene category, the corresponding advertisement category is set in advance. For example, when the scene category is landscape, the corresponding advertisements are set as the tourism and hotel categories; when the scene category is gourmet, the corresponding advertisements can be set as the restaurant and hotel categories; when the scene category is portrait, the corresponding advertisements can be set as the beauty salon category; and when the scene category is pet, the corresponding advertisements can be set as the pet feeding category.
  • a push advertisement category is set for each scene category in advance, and each scene category may correspond to one or more push advertisement categories, enabling more comprehensive advertisement pushing; the accuracy of the category and frequency of the finally calculated push advertisement information is thereby also greatly improved.
  • operation 260, pushing advertisement information corresponding to the scene category according to the scene category, includes:
  • Operation 262 Set a corresponding weight for the scene category according to the counted number of images included in each scene category. The greater the number of images contained in the scene category, the greater the corresponding weight.
  • the foregoing weight rule may also be set according to the number of all captured images in the first preset time period.
  • Operation 264 Calculate the number of pushes of the advertisement information corresponding to the scene category in the second preset time period according to the weight of the scene category.
  • the weight of each advertisement category is the sum of the weights of the scene categories corresponding to it.
  • each scene category corresponds to specific advertisement categories; therefore, the number of pushes of each advertisement category corresponding to a scene category is calculated according to the weight of the scene category, and this number is the number of recommendations during the second preset time period.
  • for example, the weight of tourism advertisements is 4, the weight of restaurant advertisements is 5, and the weight of pet feeding advertisements is 1.
  • if the total number of advertisements pushed is 10, they can include 4 tourism advertisements, 5 restaurant advertisements, and 1 pet feeding advertisement.
  • Operation 266 Push the advertisement information according to the number of times the advertisement information is pushed within the second preset time period.
  • the second preset time period may be the time period of the same duration immediately following the first preset time period. For example, when the first preset time period is one week, the second preset time period is the week adjacent to it. In the second preset time period, the advertisement information of each category is pushed according to the calculated number of pushes for the different advertisement categories.
  • a corresponding weight value is set for a scene category according to the number of images included in each scene category. Then calculate the weight of the advertisement category corresponding to the scene category according to the weight of the scene category, so as to obtain the weight of each type of advertisement.
  • the number of pushes of different advertising categories is allocated according to the weight of different advertising categories, that is, the higher the weight, the more pushes. Because the weight of the scene category obtained above can reflect the interests of the user, the weight of the advertisement category obtained by the weight of the scene category can also reflect the interests of the user to a certain extent. Therefore, the pushed advertisement information can more accurately predict the user's interests.
  • calculating the number of pushes of the advertisement information corresponding to the scene category in the second preset time period according to the weight of the scene category includes:
  • the number of pushes of the advertisement information corresponding to each advertisement category is correspondingly allocated within the second preset time period according to the total weight of the advertisement category.
  • the weight of each advertisement category is set to the weight of the scene category corresponding to it; if multiple scene categories correspond to the same advertisement category, the weight of that advertisement category is the sum of the weights of the scene categories corresponding to it, the weights of the same advertisement category being accumulated to obtain its total weight. For example, when the scene category is landscape with a weight of 4, the corresponding tourism advertisements are given a weight of 4 and the corresponding hotel advertisements a weight of 4; when the scene category is gourmet with a weight of 5, the corresponding restaurant and hotel advertisements are each given a weight of 5; and when the scene category is pet with a weight of 1, the corresponding pet feeding advertisements are given a weight of 1.
  • the weight of tourism advertising is 4;
  • the weight of hotel advertising is 9 (4 from the landscape category plus 5 from the gourmet category);
  • the weight of restaurant advertising is 5;
  • the weight of pet feeding advertising is 1.
  • the number of pushes of the advertisement information corresponding to the advertisement category is correspondingly allocated within the second preset time period. Assuming that a total of 19 advertisements are pushed in the second preset time period, among them 9 hotel advertisements, 5 restaurant advertisements, 4 tourism advertisements, and 1 pet feeding advertisement will be pushed.
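The worked example above (landscape 4, gourmet 5, pet 1, giving 19 pushes in total) can be reproduced with a short sketch; the category names and mappings are taken from the text, while the function itself is illustrative:

```python
from collections import Counter

# Scene-category weights taken from the example in the text:
# landscape 4, gourmet 5, pet 1.
SCENE_WEIGHTS = {"landscape": 4, "gourmet": 5, "pet": 1}

# Each scene category maps to one or more advertisement categories,
# as in the example (landscape -> tourism and hotel, etc.).
SCENE_TO_ADS = {
    "landscape": ["tourism", "hotel"],
    "gourmet": ["restaurant", "hotel"],
    "pet": ["pet feeding"],
}

def ad_push_counts(scene_weights, scene_to_ads):
    """Accumulate scene-category weights into advertisement-category weights;
    following the worked example, each total weight is used directly as that
    category's push count within the second preset time period."""
    ad_weights = Counter()
    for scene, w in scene_weights.items():
        for ad in scene_to_ads[scene]:
            ad_weights[ad] += w  # sum weights of ad categories shared by scenes
    return dict(ad_weights)
```

Running this reproduces the allocation in the text: tourism 4, hotel 9, restaurant 5, pet feeding 1, for 19 advertisement pushes in total.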
  • the weight of the scene category is set to the weight of the advertisement category to be pushed corresponding to the scene category.
  • the same scene category corresponds to multiple advertisement categories, and the weights of the same advertisement category are accumulated to obtain the total weight of the advertisement category.
  • Setting the same scene category can correspond to multiple advertisement categories, which solves the problem that the same scene category corresponds to only one type of advertisement category, which is too simple and not accurate enough. Therefore, according to the size of the total weight of the advertisement category, the number of pushes of the advertisement information corresponding to the advertisement category is correspondingly allocated within the second preset time period.
  • the content of the advertisement information includes information obtained from the image.
  • the number of pushes of each advertisement category within the second preset time period is calculated as in the foregoing embodiment, and the content pushed for each advertisement category can be obtained by analyzing the images acquired during the first preset time period.
  • analysis of the images acquired during the first preset time period yields, for the images belonging to each scene category, the shooting location information, the specific shooting time information, and landmark information in the images; therefore, when pushing the advertisement information, its content can be enriched and detailed according to the obtained shooting location information, specific shooting time information, and landmark information.
  • a method for pushing advertisement information is provided.
  • the method is applied to the electronic device in FIG. 1 as an example, and includes:
  • Operation one Divide images into different scene categories according to a unified standard, and set a push advertisement category for each scene category in advance; each scene category can correspond to one or more advertisement categories to be pushed.
  • For example, when the scene category is landscape, the corresponding advertisements can be set as the tourism and hotel categories; when the scene category is gourmet, as the restaurant and hotel categories; when the scene category is portrait, as the beauty salon category; and when the scene category is pet, as the pet feeding category;
  • Operation two Perform scene recognition on the images taken during the first preset time period to obtain the scene recognition result corresponding to each image;
  • Operation three classify the scene recognition result of the image according to a preset classification rule to obtain the scene category to which the image belongs;
  • Operation four Count the number of images included in each scene category.
  • Operation five Set a corresponding weight value for the scene category according to the counted number of images included in each scene category. The greater the number of images included in the scene category, the greater the corresponding weight value.
  • Operation six Calculate the number of pushes of the advertisement information corresponding to the scene category in the second preset time period according to the weight of the scene category.
  • Operation seven Push the advertisement information according to the number of times the advertisement information is pushed within the second preset time period.
  • a corresponding weight value is set for a scene category according to the number of images included in each scene category. Then calculate the weight of the advertisement category corresponding to the scene category according to the weight of the scene category, so as to obtain the weight of each type of advertisement.
  • the number of pushes of different advertising categories is allocated according to the weight of different advertising categories, that is, the higher the weight, the more pushes. Because the weight of the scene category obtained above can reflect the interests of the user, the weight of the advertisement category obtained by the weight of the scene category can also reflect the interests of the user to a certain extent. Therefore, the pushed advertisement information can more accurately predict the user's interests.
  • an advertisement information pushing device 700 includes an image obtaining module 702, a scene recognition module 704, and an advertisement information pushing module 706, wherein:
  • An image acquisition module 702 configured to acquire an image captured in a first preset time period
  • a scene recognition module 704 is configured to perform scene recognition on an image to obtain a scene category to which the image belongs;
  • the advertisement information pushing module 706 is configured to push advertisement information corresponding to the scene category according to the scene category.
  • the scene recognition module is further configured to perform scene recognition on the images taken in the first preset time period to obtain the scene recognition result corresponding to each image, to classify the scene recognition results of the images according to preset classification rules to obtain the scene category to which each image belongs, and to count the number of images contained in each scene category.
  • an advertisement information pushing device 700 is provided.
  • the device further includes: an advertisement category presetting module 708, which is used to set an advertisement category to be pushed for each scene category in advance.
  • a scene category may correspond to one or more advertisement categories to be pushed.
  • the advertisement information pushing module is further configured to set a corresponding weight for each scene category according to the number of occurrences of that scene category, the greater the number of occurrences, the greater the corresponding weight; to calculate the number of pushes of the advertisement information corresponding to the scene category in the second preset time period according to the weight of the scene category; and to push the advertisement information according to that number of pushes within the second preset time period.
  • the advertisement information pushing module is further configured to set the weight of the scene category as the weight of the advertisement category to be pushed corresponding to the scene category, to accumulate the weights of the same advertisement category to obtain the total weight of that advertisement category, and, according to the size of the total weight, to correspondingly allocate the number of pushes of the advertisement information corresponding to the advertisement category within the second preset time period.
  • the advertisement information pushing device may be divided into different modules according to requirements to complete all or part of the functions of the above-mentioned advertisement information pushing device.
  • Each module in the above advertisement information pushing device may be implemented in whole or in part by software, hardware, and a combination thereof.
  • the network interface may be an Ethernet card or a wireless network card.
  • the above modules may be embedded in, or independent of, the processor of the server in the form of hardware, or may be stored in the memory of the server in the form of software, so that the processor can call and perform the operations corresponding to the above modules.
  • a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by a processor, the operations of the advertisement information pushing methods provided by the foregoing embodiments are implemented.
  • an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor.
  • when the processor executes the computer program, the operations of the advertisement information pushing method provided by the foregoing embodiments are implemented.
  • the embodiments of the present application also provide a computer program product, which when executed on a computer, causes the computer to execute the operations of the advertisement information pushing methods provided by the foregoing embodiments.
  • An embodiment of the present application further provides an electronic device.
  • the above electronic device includes an image processing circuit.
  • the image processing circuit may be implemented by hardware and / or software components, and may include various processing units that define an ISP (Image Signal Processing) pipeline.
  • FIG. 9 is a schematic diagram of an image processing circuit in one embodiment. As shown in FIG. 9, for ease of description, only aspects of the image processing technology related to the embodiments of the present application are shown.
  • the image processing circuit includes an ISP processor 940 and a control logic 950.
  • the image data captured by the imaging device 910 is first processed by the ISP processor 940, which analyzes the image data to capture image statistical information that can be used to determine one or more control parameters of the imaging device 910.
  • the imaging device 910 may include a camera having one or more lenses 912 and an image sensor 914.
  • the image sensor 914 may include a color filter array (such as a Bayer filter), may obtain the light intensity and wavelength information captured by each of its imaging pixels, and may provide a set of raw image data that can be processed by the ISP processor 940.
  • the sensor 920 (such as a gyroscope) may provide acquired image-processing parameters (such as image stabilization parameters) to the ISP processor 940 based on the interface type of the sensor 920.
  • the sensor 920 interface may use a SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the foregoing interfaces.
  • the image sensor 914 may also send the original image data to the sensor 920, and the sensor 920 may provide the original image data to the ISP processor 940 based on the interface type of the sensor 920, or the sensor 920 stores the original image data in the image memory 930.
  • the ISP processor 940 processes the original image data pixel by pixel in a variety of formats.
  • each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 940 may perform one or more image processing operations on the original image data and collect statistical information about the image data.
  • the image processing operations may be performed with the same or different bit depth accuracy.
  • the ISP processor 940 may also receive image data from the image memory 930.
  • the sensor 920 interface sends the original image data to the image memory 930, and the original image data in the image memory 930 is then provided to the ISP processor 940 for processing.
  • the image memory 930 may be a part of a memory device, a storage device, or a separate dedicated memory in an electronic device, and may include a DMA (Direct Memory Access) feature.
  • upon receiving raw image data from the image sensor 914 interface, the sensor 920 interface, or the image memory 930, the ISP processor 940 may perform one or more image processing operations, such as temporal filtering.
  • the processed image data may be sent to the image memory 930 for further processing before being displayed.
  • the ISP processor 940 receives the processed data from the image memory 930 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces.
  • the image data processed by the ISP processor 940 may be output to the display 970 for viewing by the user and / or further processed by a graphics engine or a GPU (Graphics Processing Unit).
  • the output of the ISP processor 940 can also be sent to the image memory 930, and the display 970 can read image data from the image memory 930.
  • the image memory 930 may be configured to implement one or more frame buffers.
  • the output of the ISP processor 940 may be sent to an encoder / decoder 960 to encode / decode image data.
  • the encoded image data can be saved and decompressed before being displayed on the display 970 device.
  • the encoder / decoder 960 may be implemented by a CPU or a GPU or a coprocessor.
  • the statistical data determined by the ISP processor 940 may be sent to the control logic 950 unit.
  • the statistical data may include image sensor 914 statistics such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, and lens 912 shading correction.
  • the control logic 950 may include a processor and/or a microcontroller that executes one or more routines (such as firmware), and the one or more routines may determine the control parameters of the imaging device 910 and the control parameters of the ISP processor 940 according to the received statistical data.
  • the control parameters of the imaging device 910 may include sensor 920 control parameters (such as gain and integration time for exposure control, and image stabilization parameters), camera flash control parameters, lens 912 control parameters (such as focal length for focusing or zooming), or a combination of these parameters.
  • ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (eg, during RGB processing), and lens 912 shading correction parameters.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM), which is used as external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Accounting & Taxation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Environmental & Geological Engineering (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Artificial Intelligence (AREA)
  • Game Theory and Decision Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present application relates to an advertisement information pushing method and apparatus, an electronic device, and a computer-readable storage medium. Images captured within a first preset time period are acquired, and scene recognition is performed on the images to obtain the scene category to which the images belong. Advertisement information corresponding to the scene category is then pushed according to the scene category.

Description

Advertisement information pushing method and apparatus, storage medium, and electronic device
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 201810587687.1, filed with the Chinese Patent Office on June 8, 2018 and entitled "Advertisement information pushing method and apparatus, storage medium, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer technology, and in particular to an advertisement information pushing method and apparatus, a storage medium, and an electronic device.
Background
With the rapid development of the mobile Internet and smart terminal technology, smart terminals have become increasingly popular among the general public, and more and more advertisers have therefore begun to push advertisements on smart terminals. Traditional advertisement pushing methods generally infer the content a user is interested in from the applications the user has recently used and the content the user has accessed, and then recommend advertisements related to those interests. However, application usage reflects a user's interests only to a limited extent, so such methods cannot capture the user's points of interest comprehensively enough to push advertisements accurately.
Summary
Embodiments of the present application provide an advertisement information pushing method and apparatus, a storage medium, and an electronic device, which can push advertisement information more accurately.
An advertisement information pushing method includes:
acquiring images captured within a first preset time period;
performing scene recognition on the images to obtain the scene category to which the images belong; and
pushing, according to the scene category, advertisement information corresponding to the scene category.
An advertisement information pushing apparatus, the apparatus including:
an image acquisition module configured to acquire images captured within a first preset time period;
a scene recognition module configured to perform scene recognition on the images to obtain the scene category to which the images belong; and
an advertisement information pushing module configured to push, according to the scene category, advertisement information corresponding to the scene category.
A computer-readable storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the operations of the advertisement information pushing method described above are implemented.
An electronic device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor, when executing the computer program, performs the operations of the advertisement information pushing method described above.
With the advertisement information pushing method and apparatus, storage medium, and electronic device described above, images captured within a first preset time period are acquired, scene recognition is performed on the images to obtain the scene category to which the images belong, and advertisement information corresponding to the scene category is pushed according to the scene category.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application or the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a diagram of the internal structure of an electronic device in one embodiment;
FIG. 2 is a flowchart of an advertisement information pushing method in one embodiment;
FIG. 3 is a schematic diagram of the architecture of a neural network model in one embodiment;
FIG. 4 is a flowchart of the method in FIG. 2 for performing scene recognition on an image to obtain the scene category to which the image belongs;
FIG. 5 is a flowchart of an advertisement information pushing method in another embodiment;
FIG. 6 is a flowchart of the method in FIG. 2 for pushing, according to the scene category, advertisement information corresponding to the scene category;
FIG. 7 is a schematic structural diagram of an advertisement information pushing apparatus in one embodiment;
FIG. 8 is a schematic structural diagram of an advertisement information pushing apparatus in another embodiment;
FIG. 9 is a block diagram of a partial structure of a mobile phone related to the electronic device provided in one embodiment.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present application and are not intended to limit it.
FIG. 1 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in FIG. 1, the electronic device includes a processor, a memory, and a network interface connected via a system bus. The processor provides computing and control capabilities and supports the operation of the entire electronic device. The memory stores data, programs, and at least one computer program that can be executed by the processor to implement the advertisement information pushing method suitable for an electronic device provided in the embodiments of the present application. The memory may include a non-volatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (ROM), or a random-access memory (RAM). For example, in one embodiment the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, which can be executed by the processor to implement the advertisement information pushing method provided in the following embodiments. The internal memory provides a cached runtime environment for the operating system and the computer program in the non-volatile storage medium. The network interface may be an Ethernet card or a wireless network card, and is used to communicate with external electronic devices. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
In one embodiment, as shown in FIG. 2, an advertisement information pushing method is provided. Taking the method applied to the electronic device in FIG. 1 as an example, the method includes:
Operation 220: acquire images captured within a first preset time period.
The first preset time period may be defined by the number of photos the user has taken; for example, the period during which the user took the 100 photos closest to the current moment may be set as the first preset time period, although any other photo count may also be used. A fixed time period may also be set directly as the first preset time period; for example, the week preceding the current moment may be used, although periods of other lengths may also be set. All images whose capture time falls within the first preset time period are acquired from the electronic device on which the user captured them. The images include captured photos, videos, and the like.
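As a concrete sketch (the function name, tuple layout, and default window length are our own illustrative assumptions, not part of the application), selecting the images of the first preset time period could combine the two definitions above, a fixed window and a most-recent-N-photos limit:

```python
from datetime import datetime, timedelta

def images_in_first_period(images, now, days=7, max_count=None):
    """Select the images captured within the first preset time period.

    `images` is a list of (capture_time, path) pairs. The period is either a
    fixed window (the last `days` days) or, if `max_count` is given, further
    limited to the `max_count` most recent photos, mirroring the two ways the
    first preset time period can be defined above.
    """
    recent = [img for img in images if img[0] >= now - timedelta(days=days)]
    recent.sort(key=lambda img: img[0], reverse=True)  # newest first
    if max_count is not None:
        recent = recent[:max_count]
    return recent
```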
Operation 240: perform scene recognition on the images to obtain the scene category to which each image belongs.
Scene recognition is performed on each of the images acquired within the first preset time period to obtain a recognition result for every image. Specifically, a neural network model is used for scene recognition, and the model is trained as follows. A training image containing a background training target and a foreground training target is input into the neural network to obtain a first loss function reflecting the difference between a first predicted confidence and a first true confidence for each pixel in the background region of the training image, and a second loss function reflecting the difference between a second predicted confidence and a second true confidence for each pixel in the foreground region. The first predicted confidence is the confidence, predicted by the neural network, that a pixel in the background region of the training image belongs to the background training target, and the first true confidence is the pre-annotated confidence that the pixel belongs to the background training target. The second predicted confidence is the confidence, predicted by the neural network, that a pixel in the foreground region belongs to the foreground training target, and the second true confidence is the pre-annotated confidence that the pixel belongs to the foreground training target. A weighted sum of the first loss function and the second loss function gives a target loss function, and the parameters of the neural network are adjusted according to the target loss function to train the network. Scene recognition is then performed on the images with the trained model to obtain the scene category to which each image belongs.
FIG. 3 is a schematic diagram of the architecture of a neural network model in one embodiment. As shown in FIG. 3, the input layer of the neural network receives training images with image category labels; features are extracted through a base network (such as a CNN) and output to a feature layer. The feature layer performs category detection on the background training target to obtain the first loss function, performs category detection on the foreground training target according to the image features to obtain the second loss function, and performs position detection on the foreground training target according to the foreground region to obtain a position loss function. A weighted sum of the first loss function, the second loss function, and the position loss function gives the target loss function. The neural network may be a convolutional neural network, which includes a data input layer, convolution layers, activation layers, pooling layers, and fully connected layers. The data input layer preprocesses the raw image data; the preprocessing may include mean subtraction, normalization, dimensionality reduction, and whitening. Mean subtraction centers each dimension of the input data at 0, pulling the center of the samples back to the origin of the coordinate system. Normalization scales the amplitudes to the same range. Whitening normalizes the amplitude on each feature axis of the data. The convolution layers perform local association and window sliding: the weights connecting each filter to the data window are fixed, and each filter attends to one image feature, such as vertical edges, horizontal edges, color, or texture; together, the filters form a set of feature extractors for the whole image. A filter is a weight matrix, which can be convolved with the data in different windows. The activation layers apply a non-linear mapping to the convolution output; the activation function may be the ReLU (Rectified Linear Unit). Pooling layers, interleaved between successive convolution layers, compress the amount of data and parameters and reduce overfitting; they may downsample the data using the max or average method. The fully connected layers sit at the tail of the convolutional neural network, with weighted connections between all neurons of adjacent layers. Some convolution layers of the network are cascaded to a first confidence output node, some to a second confidence output node, and some to a position output node; the background classification of an image can be detected from the first confidence output node, the category of the foreground target from the second confidence output node, and the position of the foreground target from the position output node.
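The weighted combination of losses used for training can be sketched as follows. The binary cross-entropy form of the per-pixel confidence loss and the weight values are illustrative assumptions on our part; the application specifies only that the target loss is a weighted sum of the component losses:

```python
import math

def confidence_loss(predicted, true):
    """Mean binary cross-entropy between predicted and pre-annotated (true)
    per-pixel confidences that each pixel belongs to a training target."""
    eps = 1e-7
    total = 0.0
    for p, t in zip(predicted, true):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(predicted)

def target_loss(loss_background, loss_foreground, loss_position,
                w=(1.0, 1.0, 1.0)):
    """Weighted sum of the background, foreground, and position losses; the
    weights are free hyperparameters, not values fixed by the application."""
    return w[0] * loss_background + w[1] * loss_foreground + w[2] * loss_position
```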
The scene recognition results of all images within the first preset time period are classified according to a preset standard to obtain the scene category of each image. The scene categories are divided according to the preset standard; for example, the scene recognition results may be divided into a scenery category, a gourmet (food) category, a portrait category, and so on.
Operation 260: push, according to the scene category, advertisement information corresponding to the scene category.
Corresponding advertisement information is set in advance for each scene category. For example, when the scene category is scenery, the corresponding advertisement information may be travel and hotel advertisements; when the scene category is gourmet food, restaurant and hotel advertisements; when the scene category is portrait, beauty and hairdressing advertisements; and when the scene category is pet, pet-care advertisements.
In the embodiments of the present application, images captured within a first preset time period are acquired, scene recognition is performed on the images to obtain the scene category of each image, and advertisements corresponding to the scene category are pushed according to the scene category. Because users generally photograph the things they are interested in, acquiring the images captured within the first preset time period and performing scene recognition on them makes it easy to grasp the user's points of interest accurately and thus to push advertisement information precisely.
In one embodiment, as shown in FIG. 4, operation 240 of performing scene recognition on the images to obtain the scene category to which each image belongs includes:
Operation 242: perform scene recognition on the images captured within the first preset time period to obtain the scene recognition result corresponding to each image.
The scene recognition result is the result of scene recognition performed on the subject elements contained in the image. In general, scene recognition results include beach, blue sky, green grass, snow, night scene, backlight, sunrise/sunset, fireworks, spotlight, indoor, text document, portrait, baby, cat, dog, gourmet food, and so on; this list is, of course, not exhaustive. Scene recognition is performed on each image captured within the first preset time period to obtain the recognition result for each image. One image may have one or more scene recognition results. For example, a selfie containing only a person yields a single result, portrait, whereas an image containing a beach and a blue sky yields two results: beach and blue sky.
Operation 244: classify the scene recognition results of the images according to a preset classification rule to obtain the scene category to which each image belongs.
The preset classification rule is, specifically, as follows. Scenery refers to natural scenery and sights for viewing, including natural and cultural landscapes, so recognition results such as beach, blue sky, green grass, snow, sunrise/sunset, and fireworks are classified into the scenery category. Gourmet food is, as the name suggests, delicious food, whether expensive delicacies or cheap street snacks; food is not ranked by price, and anything one likes can be called gourmet food. Therefore, recognition results of food (anything edible: carbohydrates, meat, fruit, vegetables, etc.) are classified into the gourmet category. Recognition results of portraits are classified into the portrait category, and recognition results of cats, dogs, or other pets into the pet category.
When all the scene recognition results of an image belong to the same scene category, that category is determined to be the scene category of the image. When they do not belong to the same category, it is necessary to determine which scene category in the image has the higher weight, and the category with the higher weight is taken as the scene category of the image.
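A minimal sketch of this classification rule follows. The label strings are illustrative, and the majority vote over labels is a simple stand-in for the weight comparison described above, which the application does not specify in detail:

```python
from collections import Counter

# Illustrative mapping from scene recognition labels to scene categories,
# following the examples in the description (not an exhaustive list).
LABEL_TO_CATEGORY = {
    "beach": "scenery", "blue_sky": "scenery", "green_grass": "scenery",
    "snow": "scenery", "sunrise_sunset": "scenery", "fireworks": "scenery",
    "food": "gourmet",
    "portrait": "portrait", "baby": "portrait",
    "cat": "pet", "dog": "pet",
}

def image_category(recognition_results):
    """Assign an image to one scene category.

    Each recognition result is mapped to its category; when the results fall
    into several categories, the category supported by the most labels wins.
    Unknown labels are ignored.
    """
    votes = Counter(LABEL_TO_CATEGORY[r] for r in recognition_results
                    if r in LABEL_TO_CATEGORY)
    return votes.most_common(1)[0][0]
```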
Operation 246: count the number of images contained in each scene category.
After all images within the first preset time period have been assigned scene categories, each image corresponds to exactly one scene category. Once the assignment is complete, the number of images contained in each scene category is counted.
In the embodiments of the present application, the preset classification rule makes it possible to assign images to different scene categories according to their scene recognition results, so that the images are divided by scene category, and advertisement information corresponding to the scene category can subsequently be pushed according to the scene category.
In one embodiment, as shown in FIG. 5, before operation 220 of acquiring the images captured within the first preset time period, the method includes:
Operation 210: set in advance, for each scene category, the advertisement category or categories to be pushed; each scene category may correspond to one or more advertisement categories to be pushed.
A corresponding advertisement category is set in advance for each scene category. For example, when the scene category is scenery, the corresponding advertisements may be set to the travel and hotel categories; when it is gourmet food, the restaurant and hotel categories; when it is portrait, the beauty and hairdressing category; and when it is pet, the pet-care category.
In the embodiments of the present application, the advertisement categories to be pushed are set in advance for every scene category, and each scene category may correspond to one or more advertisement categories. This allows advertisements to be pushed more comprehensively, and the accuracy of the finally computed categories and frequencies of the pushed advertisement information is also greatly improved.
In one embodiment, as shown in FIG. 6, operation 260 of pushing, according to the scene category, advertisement information corresponding to the scene category includes:
Operation 262: set a corresponding weight for each scene category according to the counted number of images it contains; the more images a scene category contains, the larger the corresponding weight.
For example, it may be specified that when the number of images in a scene category is in [0, 10), the category's weight is 1; in [10, 20), the weight is 2; in [20, 30), the weight is 3; in [30, 40), the weight is 4; and in [40, ∞), the weight is 5. The more images a scene category contains, the larger the corresponding weight. The weighting rule may, of course, also be set according to the total number of images captured within the first preset time period.
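The example bucketing rule above maps directly onto a small helper. This is a sketch of the worked example only; the application allows other bucket boundaries:

```python
def scene_weight(image_count):
    """Weight rule from the worked example: [0, 10) -> 1, [10, 20) -> 2,
    [20, 30) -> 3, [30, 40) -> 4, [40, infinity) -> 5."""
    return min(image_count // 10 + 1, 5)
```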
Operation 264: calculate, according to the weight of each scene category, the number of times advertisement information corresponding to the scene category is pushed within a second preset time period.
The weight of each advertisement category is set to the weight of the scene recognition corresponding to it; if several scene categories correspond to the same advertisement category, the weight of that advertisement category is the sum of the weights of those scene categories. In the above embodiment, every scene category corresponds to specific advertisement categories, so the number of pushes of each advertisement category within the second preset time period is calculated from the scene category weights. For example, suppose the travel category has a weight of 4, the restaurant category a weight of 5, and the pet-care category a weight of 1; if a total of 10 advertisements are pushed within the second preset time period, they can comprise 4 travel pushes, 5 restaurant pushes, and 1 pet-care push.
Operation 266: push the advertisement information within the second preset time period according to the calculated numbers of pushes.
The second preset time period may be the time period of the same length immediately following the first preset time period. For example, when the first preset time period is one week, the second preset time period is the week adjacent to it. Within the second preset time period, advertisement information of each category is pushed according to the number of pushes assigned to that category.
In the embodiments of the present application, a corresponding weight is set for each scene category according to the counted number of images it contains, and the weight of each advertisement category corresponding to the scene category is then computed from the scene category weights, giving a weight for every advertisement category. Within the second preset time period, push counts are allocated to the different advertisement categories according to their weights: the higher the weight, the more pushes. Because the scene category weights reflect the user's interests, the advertisement category weights derived from them also reflect the user's interests to a certain extent, so the pushed advertisement information predicts the user's interests more accurately.
In one embodiment, calculating, according to the weight of the scene category, the number of pushes of the advertisement information corresponding to the scene category within the second preset time period includes:
setting the weight of the scene category as the weight of the advertisement category to be pushed corresponding to the scene category;
accumulating the weights of identical advertisement categories to obtain the total weight of each advertisement category; and
allocating, according to the magnitude of the total weight of each advertisement category, the number of pushes of the advertisement information corresponding to that category within the second preset time period.
Specifically, the weight of each advertisement category is set to the weight of the scene recognition corresponding to it; if several scene categories correspond to the same advertisement category, the weight of that advertisement category is the sum of the weights of the corresponding scene categories. The weights of identical advertisement categories are accumulated to obtain the category's total weight. For example, when the scenery category has a weight of 4, the corresponding travel advertisements have a weight of 4 and the corresponding hotel advertisements also have a weight of 4; when the gourmet category has a weight of 5, the corresponding restaurant and hotel advertisements can be given a weight of 5; and when the pet category has a weight of 1, the corresponding pet-care advertisements can be given a weight of 1.
Therefore, after the weights of identical advertisement categories are accumulated, the above example gives a weight of 4 for travel advertisements, 9 for hotel advertisements, 5 for restaurant advertisements, and 1 for pet-care advertisements. The push counts are allocated within the second preset time period according to the total weights: if a total of 19 advertisements are pushed in the second preset time period, 9 of them will be hotel advertisements, 5 restaurant advertisements, 4 travel advertisements, and 1 a pet-care advertisement.
In the embodiments of the present application, the weight of each scene category is set as the weight of its corresponding advertisement categories, and the case where one scene category corresponds to several advertisement categories is covered in detail: the weights of identical advertisement categories are accumulated to obtain a total weight per advertisement category. Allowing one scene category to correspond to several advertisement categories solves the problem that a one-to-one mapping is too narrow and insufficiently accurate. The number of pushes of the advertisement information corresponding to each advertisement category within the second preset time period is then allocated according to the magnitudes of the total weights.
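The worked example can be reproduced with a short sketch. The scene-to-advertisement mapping and the category names are illustrative, and, as in the example, the total weight is used directly as the push count for the second preset time period:

```python
# One scene category may map to several advertisement categories,
# following the examples in the description.
SCENE_TO_AD_CATEGORIES = {
    "scenery": ["travel", "hotel"],
    "gourmet": ["restaurant", "hotel"],
    "pet": ["pet_care"],
}

def push_counts(scene_weights):
    """Accumulate scene-category weights into per-advertisement-category
    totals; identical advertisement categories reached from different scene
    categories have their weights summed."""
    totals = {}
    for scene, weight in scene_weights.items():
        for ad in SCENE_TO_AD_CATEGORIES.get(scene, []):
            totals[ad] = totals.get(ad, 0) + weight
    return totals

# The worked example: scenery=4, gourmet=5, pet=1.
counts = push_counts({"scenery": 4, "gourmet": 5, "pet": 1})
```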
In one embodiment, the content of the advertisement information includes information obtained from the images.
In the embodiments of the present application, the foregoing embodiments compute the number of pushes for each advertisement category within the second preset time period, and the content pushed for each advertisement category can be obtained by analyzing the images acquired within the first preset time period. Analyzing those images yields the capture location, the specific capture time, landmark information in the images, and so on for the images belonging to each scene category. When pushing advertisement information, this information can therefore be used to enrich and refine the advertisement content.
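A hypothetical sketch of enriching advertisement content with such image-derived information follows; the template, field names, and fallback strings are all invented for illustration and do not appear in the application:

```python
def enrich_ad(template, metadata):
    """Fill an advertisement template with capture-location and landmark
    information extracted from the user's images (field names are
    illustrative assumptions)."""
    return template.format(
        place=metadata.get("location", "your area"),
        landmark=metadata.get("landmark", "local sights"),
    )
```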
In a specific embodiment, an advertisement information pushing method is provided. Taking the method applied to the electronic device in FIG. 1 as an example, the method includes:
Operation 1: images can be assigned to different scene categories according to a unified standard. The advertisement categories to be pushed are set in advance for each scene category, and each scene category may correspond to one or more advertisement categories. For example, when the scene category is scenery, the corresponding advertisements may be set to the travel and hotel categories; when it is gourmet food, the restaurant and hotel categories; when it is portrait, the beauty and hairdressing category; and when it is pet, the pet-care category.
Operation 2: perform scene recognition on the images captured within the first preset time period to obtain the scene recognition result corresponding to each image.
Operation 3: classify the scene recognition results of the images according to the preset classification rule to obtain the scene category to which each image belongs.
Operation 4: count the number of images contained in each scene category.
Operation 5: set a corresponding weight for each scene category according to the counted number of images it contains; the more images a scene category contains, the larger the corresponding weight.
Operation 6: calculate, according to the weight of each scene category, the number of pushes of the advertisement information corresponding to the scene category within the second preset time period.
Operation 7: push the advertisement information within the second preset time period according to the calculated numbers of pushes.
In the embodiments of the present application, a corresponding weight is set for each scene category according to the counted number of images it contains; the weight of each corresponding advertisement category is then computed from the scene category weights, giving a weight for every advertisement category. Within the second preset time period, push counts are allocated to the different advertisement categories according to their weights: the higher the weight, the more pushes. Because the scene category weights reflect the user's interests, the advertisement category weights derived from them also reflect those interests to a certain extent, so the pushed advertisement information predicts the user's interests more accurately.
It should be understood that although the operations in the flowcharts above are displayed sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict order restriction on their execution, and they may be executed in other orders. Moreover, at least some of the operations in the figures above may include multiple sub-operations or stages, which are not necessarily completed at the same moment but may be executed at different moments, and whose execution order is not necessarily sequential; they may be executed in turn or alternately with other operations or with at least some of the sub-operations or stages of other operations.
In one embodiment, as shown in FIG. 7, an advertisement information pushing apparatus 700 is provided. The apparatus includes an image acquisition module 702, a scene recognition module 704, and an advertisement information pushing module 706, wherein:
the image acquisition module 702 is configured to acquire images captured within a first preset time period;
the scene recognition module 704 is configured to perform scene recognition on the images to obtain the scene category to which the images belong; and
the advertisement information pushing module 706 is configured to push, according to the scene category, advertisement information corresponding to the scene category.
In one embodiment, the scene recognition module is further configured to perform scene recognition on the images captured within the first preset time period to obtain the scene recognition result of each image, classify the scene recognition results of the images according to a preset classification rule to obtain the scene category of each image, and count the number of images contained in each scene category.
In one embodiment, as shown in FIG. 8, an advertisement information pushing apparatus 700 is provided that further includes an advertisement category presetting module 708 configured to set in advance, for each scene category, the advertisement category or categories to be pushed, where each scene category may correspond to one or more advertisement categories to be pushed.
In one embodiment, the advertisement information pushing module is further configured to set a corresponding weight for each scene category according to its counted number of occurrences, where a higher number of occurrences corresponds to a larger weight; to calculate, according to the scene category weights, the number of pushes of the corresponding advertisement information within the second preset time period; and to push the advertisement information within the second preset time period according to that number of pushes.
In one embodiment, the advertisement information pushing module is further configured to set the weight of each scene category as the weight of its corresponding advertisement category to be pushed, to accumulate the weights of identical advertisement categories to obtain the total weight of each advertisement category, and to allocate, according to the magnitude of the total weight, the number of pushes of the corresponding advertisement information within the second preset time period.
The division of the modules in the advertisement information pushing apparatus above is for illustration only; in other embodiments, the apparatus may be divided into different modules as needed to accomplish all or some of its functions.
Each module in the advertisement information pushing apparatus above may be implemented wholly or partly by software, hardware, or a combination thereof. The network interface may be an Ethernet card or a wireless network card. The modules may be embedded in hardware form in a processor in the server, be independent of that processor, or be stored in software form in the memory of the server so that the processor can invoke them to perform the operations corresponding to the modules.
In one embodiment, a computer-readable storage medium is provided on which a computer program is stored; when the computer program is executed by a processor, the operations of the advertisement information pushing method provided in the embodiments above are implemented.
In one embodiment, an electronic device is provided that includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the operations of the advertisement information pushing method provided in the embodiments above are implemented.
An embodiment of the present application further provides a computer program product which, when run on a computer, causes the computer to execute the operations of the advertisement information pushing method provided in the embodiments above.
An embodiment of the present application further provides an electronic device. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 9 is a schematic diagram of the image processing circuit in one embodiment. As shown in FIG. 9, for ease of description only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in FIG. 9, the image processing circuit includes an ISP processor 940 and control logic 950. Image data captured by an imaging device 910 is first processed by the ISP processor 940, which analyzes the image data to capture image statistics that can be used to determine one or more control parameters of the imaging device 910. The imaging device 910 may include a camera having one or more lenses 912 and an image sensor 914. The image sensor 914 may include a color filter array (such as a Bayer filter), may obtain the light intensity and wavelength information captured by each of its imaging pixels, and provides a set of raw image data that can be processed by the ISP processor 940. A sensor 920 (such as a gyroscope) may provide acquired image-processing parameters (such as image stabilization parameters) to the ISP processor 940 based on the sensor 920 interface type. The sensor 920 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of these interfaces.
In addition, the image sensor 914 may also send the raw image data to the sensor 920, which may provide it to the ISP processor 940 based on the sensor 920 interface type or store it in an image memory 930.
The ISP processor 940 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 940 may perform one or more image processing operations on the raw image data and collect statistics about the image data. The image processing operations may be performed at the same or different bit-depth precisions.
The ISP processor 940 may also receive image data from the image memory 930. For example, the sensor 920 interface sends the raw image data to the image memory 930, and the raw image data in the image memory 930 is then provided to the ISP processor 940 for processing. The image memory 930 may be part of a memory device, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 914 interface, the sensor 920 interface, or the image memory 930, the ISP processor 940 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 930 for additional processing before being displayed. The ISP processor 940 receives the processed data from the image memory 930 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data processed by the ISP processor 940 may be output to a display 970 for viewing by the user and/or further processed by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 940 may be sent to the image memory 930, and the display 970 may read image data from the image memory 930. In one embodiment, the image memory 930 may be configured to implement one or more frame buffers. The output of the ISP processor 940 may also be sent to an encoder/decoder 960 to encode/decode the image data; the encoded image data may be saved and decompressed before being displayed on the display 970. The encoder/decoder 960 may be implemented by a CPU, a GPU, or a coprocessor.
The statistics determined by the ISP processor 940 may be sent to the control logic 950 unit. For example, the statistics may include image sensor 914 statistics such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, and lens 912 shading correction. The control logic 950 may include a processor and/or microcontroller that executes one or more routines (such as firmware), which may determine, according to the received statistics, control parameters of the imaging device 910 and control parameters of the ISP processor 940. For example, the control parameters of the imaging device 910 may include sensor 920 control parameters (such as gain and integration time for exposure control, and image stabilization parameters), camera flash control parameters, lens 912 control parameters (such as focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for auto white balance and color adjustment (for example, during RGB processing), as well as lens 912 shading correction parameters.
Any reference to memory, storage, a database, or other media used in this application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random-access memory (RAM), which serves as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The embodiments described above express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the scope of protection of this patent shall be subject to the appended claims.

Claims (16)

  1. An advertisement information pushing method, comprising:
    acquiring images captured within a first preset time period;
    performing scene recognition on the images to obtain a scene category to which the images belong; and
    pushing, according to the scene category, advertisement information corresponding to the scene category.
  2. The method according to claim 1, wherein performing scene recognition on the images to obtain the scene category to which the images belong comprises:
    performing scene recognition on the images captured within the first preset time period to obtain a scene recognition result corresponding to each image;
    classifying the scene recognition results of the images according to a preset classification rule to obtain the scene category to which each image belongs; and
    counting the number of images contained in each scene category.
  3. The method according to claim 2, wherein the scene recognition result is a result of scene recognition performed on subject elements contained in the image, and the scene category is a category obtained by classifying the scene recognition result.
  4. The method according to claim 1, wherein before acquiring the images captured within the first preset time period, the method comprises:
    setting in advance, for each scene category, a corresponding advertisement category to be pushed, wherein each scene category may correspond to one or more advertisement categories to be pushed.
  5. The method according to claim 2, wherein pushing, according to the scene category, the advertisement information corresponding to the scene category comprises:
    setting a corresponding weight for the scene category according to the counted number of images contained in each scene category, wherein the more images a scene category contains, the larger the corresponding weight;
    calculating, according to the weight of the scene category, the number of pushes of the advertisement information corresponding to the scene category within a second preset time period; and
    pushing the advertisement information within the second preset time period according to the number of pushes of the advertisement information.
  6. The method according to claim 5, wherein calculating, according to the weight of the scene category, the number of pushes of the advertisement information corresponding to the scene category within the second preset time period comprises:
    setting the weight of the scene category as the weight of the advertisement category to be pushed corresponding to the scene category;
    accumulating the weights of identical advertisement categories to obtain a total weight of the advertisement category; and
    allocating, according to the magnitude of the total weight of the advertisement category, the number of pushes of the advertisement information corresponding to the advertisement category within the second preset time period.
  7. The method according to claim 1, wherein the content of the advertisement information comprises information obtained from the images.
  8. An advertisement information pushing apparatus, comprising:
    an image acquisition module configured to acquire images captured within a first preset time period;
    a scene recognition module configured to perform scene recognition on the images to obtain a scene category to which the images belong; and
    an advertisement information pushing module configured to push, according to the scene category, advertisement information corresponding to the scene category.
  9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the advertisement information pushing method according to any one of claims 1 to 7.
  10. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, performs the following operations:
    acquiring images captured within a first preset time period;
    performing scene recognition on the images to obtain a scene category to which the images belong; and
    pushing, according to the scene category, advertisement information corresponding to the scene category.
  11. The electronic device according to claim 10, wherein the processor, when executing the computer program, performs the following operations: performing scene recognition on the images to obtain the scene category to which the images belong comprises:
    performing scene recognition on the images captured within the first preset time period to obtain a scene recognition result corresponding to each image;
    classifying the scene recognition results of the images according to a preset classification rule to obtain the scene category to which each image belongs; and
    counting the number of images contained in each scene category.
  12. The electronic device according to claim 11, wherein the processor, when executing the computer program, performs the following operations: the scene recognition result is a result of scene recognition performed on subject elements contained in the image, and the scene category is a category obtained by classifying the scene recognition result.
  13. The electronic device according to claim 10, wherein the processor, when executing the computer program, performs the following operations: before acquiring the images captured within the first preset time period, the method comprises:
    setting in advance, for each scene category, a corresponding advertisement category to be pushed, wherein each scene category may correspond to one or more advertisement categories to be pushed.
  14. The electronic device according to claim 11, wherein the processor, when executing the computer program, performs the following operations: pushing, according to the scene category, the advertisement information corresponding to the scene category comprises:
    setting a corresponding weight for the scene category according to the counted number of images contained in each scene category, wherein the more images a scene category contains, the larger the corresponding weight;
    calculating, according to the weight of the scene category, the number of pushes of the advertisement information corresponding to the scene category within a second preset time period; and
    pushing the advertisement information within the second preset time period according to the number of pushes of the advertisement information.
  15. The electronic device according to claim 14, wherein the processor, when executing the computer program, performs the following operations: calculating, according to the weight of the scene category, the number of pushes of the advertisement information corresponding to the scene category within the second preset time period comprises:
    setting the weight of the scene category as the weight of the advertisement category to be pushed corresponding to the scene category;
    accumulating the weights of identical advertisement categories to obtain a total weight of the advertisement category; and
    allocating, according to the magnitude of the total weight of the advertisement category, the number of pushes of the advertisement information corresponding to the advertisement category within the second preset time period.
  16. The electronic device according to claim 10, wherein the processor, when executing the computer program, performs the following operations: the content of the advertisement information comprises information obtained from the images.
PCT/CN2019/087351 2018-06-08 2019-05-17 Advertisement information pushing method and apparatus, storage medium, and electronic device WO2019233260A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810587687.1 2018-06-08
CN201810587687.1A CN108765033B (zh) 2018-06-08 2018-06-08 Advertisement information pushing method and apparatus, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2019233260A1 true WO2019233260A1 (zh) 2019-12-12

Family

ID=64000707

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/087351 WO2019233260A1 (zh) 2019-05-17 Advertisement information pushing method and apparatus, storage medium, and electronic device

Country Status (2)

Country Link
CN (1) CN108765033B (zh)
WO (1) WO2019233260A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111666014A (zh) * 2020-07-06 2020-09-15 腾讯科技(深圳)有限公司 Message pushing method, apparatus, and device, and computer-readable storage medium
CN111694983A (zh) * 2020-06-12 2020-09-22 百度在线网络技术(北京)有限公司 Information display method and apparatus, electronic device, and storage medium

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765033B (zh) * 2018-06-08 2021-01-12 Oppo广东移动通信有限公司 Advertisement information pushing method and apparatus, storage medium, and electronic device
CN111800445B (zh) * 2019-04-09 2023-02-28 Oppo广东移动通信有限公司 Message pushing method and apparatus, storage medium, and electronic device
CN111798259A (zh) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 Application recommendation method and apparatus, storage medium, and electronic device
CN111340557B (zh) * 2020-02-28 2024-02-06 京东科技控股股份有限公司 Interactive advertisement processing method and apparatus, terminal, and storage medium
CN112330371A (zh) * 2020-11-26 2021-02-05 深圳创维-Rgb电子有限公司 AI-based intelligent advertisement pushing method, apparatus, system, and storage medium
CN116614673B (zh) * 2023-07-21 2023-10-20 山东宝盛鑫信息科技有限公司 Short-video pushing system based on special populations

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107295362A (zh) * 2017-08-10 2017-10-24 上海六界信息技术有限公司 Image-based live content screening method, apparatus, and device, and storage medium
CN107622281A (zh) * 2017-09-20 2018-01-23 广东欧珀移动通信有限公司 Image classification method and apparatus, storage medium, and mobile terminal
CN107864225A (zh) * 2017-12-21 2018-03-30 北京小米移动软件有限公司 AR-based information pushing method and apparatus, and electronic device
CN108765033A (zh) * 2018-06-08 2018-11-06 Oppo广东移动通信有限公司 Advertisement information pushing method and apparatus, storage medium, and electronic device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104618446A (zh) * 2014-12-31 2015-05-13 百度在线网络技术(北京)有限公司 Method and apparatus for implementing multimedia pushing
CN105160550A (zh) * 2015-08-21 2015-12-16 浙江视科文化传播有限公司 Intelligent advertisement placement method and apparatus
CN106878355A (zh) * 2015-12-11 2017-06-20 腾讯科技(深圳)有限公司 Information recommendation method and apparatus
CN105608609B (zh) * 2016-02-17 2018-02-16 北京金山安全软件有限公司 Method, apparatus, and electronic device for pushing travel information
CN106530008B (zh) * 2016-11-10 2022-01-07 广州市沃希信息科技有限公司 Scene-picture-based advertising method and system
CN106792004B (zh) * 2016-12-30 2020-09-15 北京小米移动软件有限公司 Content item pushing method, apparatus, and system
CN107194318B (zh) * 2017-04-24 2020-06-12 北京航空航天大学 Scene recognition method assisted by object detection
CN107402964A (zh) * 2017-06-22 2017-11-28 深圳市金立通信设备有限公司 Information recommendation method, server, and terminal
CN107609602A (zh) * 2017-09-28 2018-01-19 吉林大学 Driving scene classification method based on a convolutional neural network
CN107944386B (zh) * 2017-11-22 2019-11-22 天津大学 Visual scene recognition method based on a convolutional neural network
CN108108751B (zh) * 2017-12-08 2021-11-12 浙江师范大学 Scene recognition method based on convolutional multi-features and deep random forests

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107295362A (zh) * 2017-08-10 2017-10-24 上海六界信息技术有限公司 Image-based live content screening method, apparatus, and device, and storage medium
CN107622281A (zh) * 2017-09-20 2018-01-23 广东欧珀移动通信有限公司 Image classification method and apparatus, storage medium, and mobile terminal
CN107864225A (zh) * 2017-12-21 2018-03-30 北京小米移动软件有限公司 AR-based information pushing method and apparatus, and electronic device
CN108765033A (zh) * 2018-06-08 2018-11-06 Oppo广东移动通信有限公司 Advertisement information pushing method and apparatus, storage medium, and electronic device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111694983A (zh) * 2020-06-12 2020-09-22 百度在线网络技术(北京)有限公司 Information display method and apparatus, electronic device, and storage medium
CN111694983B (zh) * 2020-06-12 2023-12-19 百度在线网络技术(北京)有限公司 Information display method and apparatus, electronic device, and storage medium
CN111666014A (zh) * 2020-07-06 2020-09-15 腾讯科技(深圳)有限公司 Message pushing method, apparatus, and device, and computer-readable storage medium
CN111666014B (zh) * 2020-07-06 2024-02-02 腾讯科技(深圳)有限公司 Message pushing method, apparatus, and device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN108765033A (zh) 2018-11-06
CN108765033B (zh) 2021-01-12

Similar Documents

Publication Publication Date Title
WO2019233260A1 (zh) Advertisement information pushing method and apparatus, storage medium, and electronic device
WO2019233394A1 (zh) Image processing method and apparatus, storage medium, and electronic device
WO2019233393A1 (zh) Image processing method and apparatus, storage medium, and electronic device
CN108764370B (zh) Image processing method and apparatus, computer-readable storage medium, and computer device
CN108777815B (zh) Video processing method and apparatus, electronic device, and computer-readable storage medium
WO2019233266A1 (zh) Image processing method, computer-readable storage medium, and electronic device
US10896323B2 (en) Method and device for image processing, computer readable storage medium, and electronic device
US11138478B2 (en) Method and apparatus for training, classification model, mobile terminal, and readable storage medium
CN108805103B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
WO2020259179A1 (zh) Focusing method, electronic device, and computer-readable storage medium
CN108984657B (zh) Image recommendation method and apparatus, terminal, and readable storage medium
CN108810418B (zh) Image processing method and apparatus, mobile terminal, and computer-readable storage medium
CN108961302B (zh) Image processing method and apparatus, mobile terminal, and computer-readable storage medium
WO2019233262A1 (zh) Video processing method, electronic device, and computer-readable storage medium
CN108810413B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
WO2019233392A1 (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN108897786B (zh) Application recommendation method and apparatus, storage medium, and mobile terminal
CN108875619B (zh) Video processing method and apparatus, electronic device, and computer-readable storage medium
CN108805198B (zh) Image processing method and apparatus, computer-readable storage medium, and electronic device
CN109712177B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN108717530B (zh) Image processing method and apparatus, computer-readable storage medium, and electronic device
WO2019223513A1 (zh) Image recognition method, electronic device, and storage medium
CN109002843A (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN110956679B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN108848306B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 19816084

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 19816084

Country of ref document: EP

Kind code of ref document: A1