CN115187307B - Advertisement delivery processing method and device for a virtual world - Google Patents

Advertisement delivery processing method and device for a virtual world

Info

Publication number
CN115187307B
CN115187307B (application CN202210868330.7A)
Authority
CN
China
Prior art keywords
advertisement
user
eye
virtual world
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210868330.7A
Other languages
Chinese (zh)
Other versions
CN115187307A (en)
Inventor
曹佳炯
丁菁汀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202210868330.7A priority Critical patent/CN115187307B/en
Publication of CN115187307A publication Critical patent/CN115187307A/en
Application granted granted Critical
Publication of CN115187307B publication Critical patent/CN115187307B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0242Determining effectiveness of advertisements
    • G06Q30/0244Optimization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0277Online advertisement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of this specification provide an advertisement delivery processing method and apparatus for a virtual world. The method comprises: performing view angle estimation based on eye images of a user to obtain the user's view angle direction, the eye images being collected by an image sensor configured on the user's access device for the virtual world; inputting the user's view angle direction and image data of advertisements delivered in the virtual world into a detection model for advertisement attention region detection, obtaining an advertisement attention region; constructing a virtual environment feature carrying the attention duration of the advertisement attention region; and inputting the virtual environment feature and candidate advertisements of the virtual world into an advertisement scoring model for advertisement score calculation, and determining a target candidate advertisement among the candidate advertisements according to the resulting advertisement scores.

Description

Advertisement delivery processing method and device for a virtual world
Technical Field
The present document relates to the field of virtualization technologies, and in particular, to a method and an apparatus for processing advertisement delivery in a virtual world.
Background
The virtual world simulates the real world and can even provide scenes that are difficult to realize in reality, so it is being applied to an increasing variety of scenarios. In a virtual world scenario, a user logs into the three-dimensional virtual world under a specific ID and acts through a virtual user character; typically, the virtual world contains many different user characters, each engaged in different activities.
Disclosure of Invention
One or more embodiments of the present specification provide an advertisement delivery processing method for a virtual world, comprising: performing view angle estimation based on eye images of a user to obtain the user's view angle direction, the eye images being collected by an image sensor configured on the user's access device for the virtual world; inputting the user's view angle direction and image data of advertisements delivered in the virtual world into a detection model for advertisement attention region detection, obtaining an advertisement attention region; constructing a virtual environment feature carrying the attention duration of the advertisement attention region; and inputting the virtual environment feature and candidate advertisements of the virtual world into an advertisement scoring model for advertisement score calculation, and determining a target candidate advertisement among the candidate advertisements according to the resulting advertisement scores.
One or more embodiments of the present specification provide an advertisement delivery processing apparatus for a virtual world, comprising: a view angle estimation module configured to perform view angle estimation based on eye images of a user to obtain the user's view angle direction, the eye images being collected by an image sensor configured on the user's access device for the virtual world; a region detection module configured to input the user's view angle direction and image data of advertisements delivered in the virtual world into a detection model for advertisement attention region detection, obtaining an advertisement attention region; a feature construction module configured to construct a virtual environment feature carrying the attention duration of the advertisement attention region; and a score calculation module configured to input the virtual environment feature and candidate advertisements of the virtual world into an advertisement scoring model for advertisement score calculation and determine a target candidate advertisement among the candidate advertisements according to the resulting advertisement scores.
One or more embodiments of the present specification provide an advertisement delivery processing device for a virtual world, comprising: a processor; and a memory configured to store computer-executable instructions that, when executed, cause the processor to: perform view angle estimation based on eye images of a user to obtain the user's view angle direction, the eye images being collected by an image sensor configured on the user's access device for the virtual world; input the user's view angle direction and image data of advertisements delivered in the virtual world into a detection model for advertisement attention region detection, obtaining an advertisement attention region; construct a virtual environment feature carrying the attention duration of the advertisement attention region; and input the virtual environment feature and candidate advertisements of the virtual world into an advertisement scoring model for advertisement score calculation, and determine a target candidate advertisement among the candidate advertisements according to the resulting advertisement scores.
One or more embodiments of the present specification provide a storage medium storing computer-executable instructions that, when executed by a processor, implement the following: performing view angle estimation based on eye images of a user to obtain the user's view angle direction, the eye images being collected by an image sensor configured on the user's access device for the virtual world; inputting the user's view angle direction and image data of advertisements delivered in the virtual world into a detection model for advertisement attention region detection, obtaining an advertisement attention region; constructing a virtual environment feature carrying the attention duration of the advertisement attention region; and inputting the virtual environment feature and candidate advertisements of the virtual world into an advertisement scoring model for advertisement score calculation, and determining a target candidate advertisement among the candidate advertisements according to the resulting advertisement scores.
Drawings
To describe the technical solutions of one or more embodiments of this specification (or of the prior art) more clearly, the drawings needed for the embodiments are briefly introduced below. The drawings described below are evidently only some of the embodiments of this specification; other drawings can be derived from them by a person skilled in the art without inventive effort.
FIG. 1 is a process flow diagram of an advertisement delivery processing method for a virtual world according to one or more embodiments of the present disclosure;
FIG. 2 is a process flow diagram of an advertisement delivery processing method applied to a virtual world scenario according to one or more embodiments of the present disclosure;
FIG. 3 is a schematic diagram of an advertisement delivery processing apparatus for a virtual world according to one or more embodiments of the present disclosure;
FIG. 4 is a schematic structural diagram of an advertisement delivery processing device for a virtual world according to one or more embodiments of the present disclosure.
Detailed Description
To enable those skilled in the art to better understand the technical solutions in one or more embodiments of this specification, these solutions are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of this specification; all other embodiments obtainable from them by a person skilled in the art without inventive effort are intended to fall within the scope of the present disclosure.
This specification provides embodiments of an advertisement delivery processing method for a virtual world:
In the advertisement delivery processing method for a virtual world provided here, during advertisement delivery in the virtual world, view angle estimation is performed based on the user's eye images to obtain the user's view angle direction, and the advertisement attention region is determined from the user's view angle direction and the image data of the advertisements delivered in the virtual world, so that feedback on advertisement delivery is collected from the user's eye images. The constructed virtual environment feature and the candidate advertisements to be delivered in the virtual world are then input into the advertisement scoring model for advertisement score calculation, and the target candidate advertisement to be delivered is determined according to the resulting scores of the candidate advertisements. In this way, advertisement delivery in the virtual world becomes more accurate and targeted on the basis of the collected feedback, improving the delivery effect and, in turn, the conversion effect of advertisement delivery in the virtual world.
Referring to FIG. 1, the advertisement delivery processing method for a virtual world provided in this embodiment specifically includes steps S102 to S108.
Step S102: perform view angle estimation based on the user's eye images to obtain the user's view angle direction.
The user eye image in this embodiment is an image containing the user's eye features collected by an image sensor. Optionally, in a scenario where the user accesses the virtual world, the eye image is acquired by an image sensor configured on the access device of the virtual world. The virtual world here is a virtual, simulated world realized through decentralized collaboration and possessing an open economic system; optionally, decentralized transactions are conducted in the virtual world by generating non-fungible tokens, and ownership of virtual assets is acquired through those transactions. The access device of the virtual world may be a VR (Virtual Reality) device, an AR (Augmented Reality) device, or similar equipment connected to the virtual world, such as a head-mounted VR device.
Optionally, the user eye image includes an eye color image (RGB image) and/or an eye infrared image, and correspondingly the image sensors configured on the access device include an image sensor for acquiring eye color images (one supporting RGB image acquisition) and/or an infrared sensor for acquiring eye infrared images. Collecting multi-modal user eye images in this way improves the accuracy of view angle estimation, and thereby the accuracy of collecting user feedback on advertisement delivery in the virtual world.
In practice, to improve acquisition quality and thus the quality of the collected images, the acquisition mode can be adjusted while the image sensor collects the eye images. Specifically, in the non-delivery state, the acquisition parameters of the image sensor can be adjusted at a preset time interval, with a preset number of user eye images collected after each adjustment; for example, the orientation and/or focal length of the image sensor and the infrared sensor are adjusted every 1 s, and 30 frames of user eye images are collected after each adjustment. In the advertisement delivery state, the user eye images can instead be collected at a preset image acquisition frequency, for example 30 images per second.
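The capture schedule just described can be sketched as follows; the interval, burst size, and frame rate are the example values given in the text, while the function and field names are illustrative:

```python
from dataclasses import dataclass


@dataclass
class CaptureConfig:
    adjust_interval_s: float = 1.0  # re-tune sensor parameters every 1 s (non-delivery state)
    frames_per_burst: int = 30      # frames captured after each adjustment
    delivery_fps: int = 30          # fixed capture rate while an ad is shown


def frames_to_capture(elapsed_s: float, ad_active: bool, cfg: CaptureConfig) -> int:
    """Number of eye-image frames to request over the elapsed time window."""
    if ad_active:
        # Delivery state: capture at a fixed frequency (e.g. 30 frames per second).
        return int(elapsed_s * cfg.delivery_fps)
    # Non-delivery state: one burst of frames per completed adjustment interval.
    bursts = int(elapsed_s // cfg.adjust_interval_s)
    return bursts * cfg.frames_per_burst
```

With the defaults, two seconds of delivery-state capture yields 60 frames, and 2.5 s of non-delivery capture yields two 30-frame bursts.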
Specifically, to improve the efficiency of view angle estimation based on user eye images, a view angle estimation model can be used: during estimation, the user eye image is input into the model and the user's view angle direction is output. The user's view angle direction is the direction in which the user gazes when viewing advertisements or other objects in the virtual world through the access device.
The view angle estimation model can be trained in advance, for example on a cloud server. During training, the training samples can be produced by a data generation network: random Gaussian noise and corresponding angle data are input into the data generation network, which generates and outputs an image of the corresponding angle. In other words, alongside the view angle estimation network, the model is given a data generation network that produces images of specified angles from random Gaussian noise and angle data, and the view angle estimation network estimates and outputs view angle directions from the generated images. A loss function can further be constructed for training, for example by introducing a content loss on the generated image, an adversarial loss, and an angle estimation loss, assigning each of the three losses a corresponding weight, and building the loss function from the three losses and their weights. Training then proceeds on this network structure and loss function until the network converges, yielding the view angle estimation model.
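The three-term weighted loss described above can be sketched as follows. The patent names the three losses but not their exact forms, so the L1 content term, the log-probability adversarial term, the MSE angle term, and the weight values are all illustrative assumptions:

```python
import numpy as np


def combined_loss(gen_img, ref_img, disc_prob_real, angle_pred, angle_true,
                  w_content=1.0, w_adv=0.1, w_angle=1.0):
    """Weighted sum of the three losses named in the description (a sketch).

    gen_img / ref_img:  generated and reference images (arrays)
    disc_prob_real:     discriminator's probability that gen_img is real
    angle_pred/true:    estimated vs. conditioning view angles
    """
    l_content = np.mean(np.abs(gen_img - ref_img))       # content loss (L1, assumed)
    l_adv = -np.mean(np.log(disc_prob_real + 1e-8))      # adversarial loss (assumed form)
    l_angle = np.mean((angle_pred - angle_true) ** 2)    # angle-estimation loss (MSE, assumed)
    return w_content * l_content + w_adv * l_adv + w_angle * l_angle
```

When the generated image matches the reference, the discriminator is fully fooled, and the angle is predicted exactly, the loss is approximately zero, as expected of a converged network.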
Besides inputting the user eye image into the view angle estimation model, the view angle estimation can alternatively be performed as follows: establish a mapping relation between the user eye image and the world coordinates of the virtual world, and based on this mapping relation determine the direction onto which the user eye image maps in the virtual world as the view angle direction.
As noted above, when the user eye image includes both an eye color image and an eye infrared image, the conversion rate of advertisement delivery in the virtual world can be improved by improving the quality of those images. In an optional implementation of this embodiment, before the view angle estimation is performed, the eye color image and the eye infrared image are aligned, and a multi-modal quality score is calculated from the aligned feature points of the resulting aligned images;
if the multi-modal quality score is below a preset quality score threshold, the eye color image and the eye infrared image are re-acquired;
if the multi-modal quality score is greater than or equal to the preset quality score threshold, the view angle estimation based on the user eye images proceeds.
Optionally, the multi-modal quality score is calculated as follows:
calculate the feature similarity of the aligned eye color image and the aligned eye infrared image based on their aligned feature points;
calculate the multi-modal quality score from the feature similarity and a preset constant, where the quality score equals the ratio of the preset constant to the square of the feature similarity. The feature similarity can be obtained with a similarity algorithm; for example, a cosine similarity algorithm can be used to compute the similarity of the aligned eye color image and the aligned eye infrared image.
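A minimal sketch of the quality-score formula, using cosine similarity as the similarity algorithm mentioned above (the preset constant `c` defaults to an illustrative value of 1.0):

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two aligned feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def multimodal_quality_score(rgb_feat: np.ndarray, ir_feat: np.ndarray,
                             c: float = 1.0) -> float:
    """Quality score = preset constant / (feature similarity)^2, per the description."""
    sim = cosine_similarity(rgb_feat, ir_feat)
    return c / (sim ** 2)
```

With identical feature vectors the similarity is 1, so the score equals the preset constant.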
When determining the aligned feature points of the aligned eye color image and the aligned eye infrared image, an eye key point detection model can be trained to detect the key eye feature points of the two images, so that the aligned feature points are determined from those key points, improving processing efficiency.
Alignment can be mutual: both the eye color image and the eye infrared image are aligned toward each other, in which case the feature similarity is computed between the two aligned images. Alternatively, only one image is aligned to the other: either only the eye color image is aligned to the eye infrared image, in which case the similarity is computed between the eye infrared image and the aligned eye color image, or only the eye infrared image is aligned to the eye color image, in which case the similarity is computed between the eye color image and the aligned eye infrared image.
On this basis, to further improve the quality of the eye color and infrared images and thus the conversion rate of advertisement delivery in the virtual world, the time difference between the acquisition times of the eye color image and the eye infrared image can be checked before alignment and quality-score calculation. If the time difference is below a preset time threshold, the alignment and multi-modal quality score calculation are performed, or the user eye image is input into the view angle estimation model; if the time difference is greater than or equal to the preset time threshold, the eye color image and the eye infrared image are re-acquired.
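The two gating checks (timestamp difference first, then the quality score) can be sketched together as follows; the threshold values and function name are illustrative, not taken from the text:

```python
def gate_image_pair(t_rgb: float, t_ir: float, quality_score: float,
                    max_skew_s: float = 0.02, min_quality: float = 1.0) -> str:
    """Decide what to do with a color/infrared eye-image pair.

    Returns 'recapture' if the acquisition timestamps are too far apart or
    the multi-modal quality score falls below the threshold, else 'estimate'
    (i.e. proceed to view angle estimation). Thresholds are illustrative.
    """
    if abs(t_rgb - t_ir) >= max_skew_s:
        return "recapture"  # frames not close enough in time to be aligned
    if quality_score < min_quality:
        return "recapture"  # aligned pair is of insufficient quality
    return "estimate"
```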
Step S104: input the user's view angle direction and the image data of the advertisements delivered in the virtual world into a detection model for advertisement attention region detection, obtaining an advertisement attention region.
In this step, given the user's view angle direction, the image data of the advertisements delivered in the virtual world and the view angle direction are input into a detection model to detect the advertisement attention region, namely the region of the delivered advertisements that the user gazes at through the access device. This determines where the user's gaze falls while accessing the virtual world, realizing the collection of delivery feedback from the user's eye images. Here the advertisements are those currently delivered in the virtual world, and their image data are the advertisement images required for delivering and displaying them. Optionally, the advertisements delivered in the virtual world include asset advertisements for virtual assets identified by non-fungible tokens.
In this embodiment, the detection model can be trained in advance. Specifically, a pre-constructed model to be trained, built on the Faster R-CNN architecture, can be trained on samples consisting of user view angle directions and the corresponding advertisement data delivered in the virtual world; the detection model is obtained when training completes.
In addition, in an optional implementation of this embodiment, the advertisement attention region detection performed by the detection model proceeds as follows: determine the three-dimensional view position onto which the user's view angle direction maps in the virtual world; then detect the image region corresponding to that three-dimensional view position in the advertisement image data as the advertisement attention region.
It should be noted that inputting the user's view angle direction and the advertisement image data into the detection model may be replaced by directly determining the advertisement attention region from the user's view angle direction and the advertisement image data, forming a new implementation together with the other processing steps of this embodiment. Specifically, the three-dimensional view position onto which the user's view angle direction maps in the virtual world is determined first, and then the image region corresponding to that position in the advertisement image data is detected as the advertisement attention region.
Likewise, the detection-model implementation may be replaced by detecting the user's attention region in the virtual world from the user's view angle direction and the advertisements delivered there, again forming a new implementation with the other processing steps of this embodiment. Specifically, the view angle region range onto which the user's view angle direction maps in the virtual world is determined first; the attended advertisement subject is determined from the user's view angle direction and the advertisement image data, for example by inputting both into a corresponding subject detection model; and the advertisement attention region is then determined from the view angle region range and the subject's region range in the virtual world, as the intersection of the two ranges.
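The intersection of the view angle region range and the subject region range can be sketched with axis-aligned boxes; the box representation is an illustrative assumption, since the patent does not fix a geometry:

```python
from typing import Optional, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)


def attention_region(view_box: Box, subject_box: Box) -> Optional[Box]:
    """Advertisement attention region as the intersection of the view-angle
    region range and the attended subject's region range."""
    x1 = max(view_box[0], subject_box[0])
    y1 = max(view_box[1], subject_box[1])
    x2 = min(view_box[2], subject_box[2])
    y2 = min(view_box[3], subject_box[3])
    if x1 >= x2 or y1 >= y2:
        return None  # empty intersection: the gaze does not fall on the subject
    return (x1, y1, x2, y2)
```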
In an optional implementation of this embodiment, before the user's view angle direction and the advertisement image data are input into the detection model, the advertisements are delivered in the virtual world. The delivery process specifically includes: rendering the advertisement images based on the advertisement image data and outputting the rendering result to the access device. The advertisements to deliver are selected at random from an advertisement pool and/or a sequential advertisement, or are matched from the advertisement pool and/or sequential advertisement based on user interest data. A sequential advertisement is an advertisement set composed of several advertisements with a certain content or delivery correlation; a delivery order can be specified for the content-related advertisements in the set.
Step S106: construct a virtual environment feature carrying the attention duration of the advertisement attention region.
After the advertisement attention region is obtained, this step constructs a virtual environment feature carrying the region's attention duration. Optionally, the attention duration is calculated as follows: from the acquisition times of the advertisement images associated with the advertisement attention region, compute the time span of acquisition as the attention duration.
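The attention-duration rule above (the time span of the acquisition times of the associated advertisement images) is a one-liner:

```python
from typing import List


def attention_duration(capture_times: List[float]) -> float:
    """Attention duration = span of the acquisition timestamps (seconds) of the
    advertisement images associated with the attention region."""
    if not capture_times:
        return 0.0
    return max(capture_times) - min(capture_times)
```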
Further, to make the virtual environment feature more comprehensive and thereby improve the accuracy of the advertisement scores calculated for the candidate advertisements in the virtual world, the virtual environment feature may carry a user attention degree in addition to the attention duration of the advertisement attention region. In an optional implementation of this embodiment, the user attention degree is calculated as follows:
extract, via an image segmentation algorithm, a first eye feature from the user eye images in the advertisement delivery state and a second eye feature from the user eye images in the non-delivery state;
calculate the user attention degree from the first eye feature and the second eye feature.
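The text specifies the inputs (the first and second eye features) but not the formula for the user attention degree; one plausible sketch measures the relative change of the segmented eye features between the delivery and non-delivery states:

```python
import numpy as np


def user_attention_degree(feat_ad: np.ndarray, feat_idle: np.ndarray) -> float:
    """Illustrative attention measure: relative change of the eye features
    (e.g. segmented pupil region descriptors) between the advertisement
    delivery state and the non-delivery state. The formula is an assumption;
    the patent only names the two inputs.
    """
    base = np.linalg.norm(feat_idle) + 1e-8  # avoid division by zero
    return float(np.linalg.norm(feat_ad - feat_idle) / base)
```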
Alternatively, in addition to the attention duration of the advertisement attention region, the virtual environment feature may carry advertisement subject features. Optionally, these include subject features obtained by feature extraction from the delivered advertisement image, and/or attended-subject features obtained by feature extraction from the region sub-image corresponding to the advertisement attention region.
Furthermore, to make the virtual environment feature still more comprehensive and thereby improve scoring accuracy, the virtual environment feature may carry the advertisement subject features on top of the attention duration and the user attention degree; optionally, the subject features include those obtained by feature extraction from the delivered advertisement image, and/or attended-subject features obtained by feature extraction from the region sub-image corresponding to the advertisement attention region. In either extraction, a pre-trained feature extraction model (for example, an ImageNet-trained model) can be used.
In a specific implementation, when the virtual environment feature carries only the attention duration of the advertisement attention area, the virtual environment feature is constructed by vectorizing the attention duration and using the resulting attention-duration vector as the virtual environment feature.
In addition, when both the attention duration of the advertisement attention area and the user attention are carried, the virtual environment feature may optionally be constructed as follows: vectorize the attention duration and the user attention to obtain an attention-duration vector and an attention-degree vector, concatenate the two vectors, and use the resulting concatenated vector as the virtual environment feature.
Alternatively, when the attention duration of the advertisement attention area and the advertisement subject feature are carried, the virtual environment feature may optionally be constructed as follows: vectorize the attention duration and the advertisement subject feature to obtain an attention-duration vector and a subject feature vector, concatenate the two vectors, and use the resulting concatenated vector as the virtual environment feature.
Alternatively, when the attention duration of the advertisement attention area, the user attention, and the advertisement subject feature are all carried, the virtual environment feature may optionally be constructed based on all three, as follows:
vectorize the attention duration, the user attention, and the advertisement subject feature to obtain an attention-duration vector, an attention-degree vector, and a subject feature vector;
concatenate the attention-duration vector, the attention-degree vector, and the subject feature vector, and use the resulting concatenated vector as the virtual environment feature.
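The vectorize-and-concatenate construction above can be sketched as follows; representing the two scalar inputs as one-element vectors is an illustrative choice, not something the text prescribes:

```python
def build_virtual_env_feature(attention_duration, user_attention, subject_feature):
    """Vectorize each component and concatenate them into the
    virtual environment feature."""
    duration_vec = [float(attention_duration)]          # attention-duration vector
    attention_vec = [float(user_attention)]             # attention-degree vector
    subject_vec = [float(x) for x in subject_feature]   # subject feature vector
    # Vector concatenation ("stitching") of the three vectors.
    return duration_vec + attention_vec + subject_vec

env = build_virtual_env_feature(3.2, 1.5, [0.1, 0.9, 0.0])
print(env)  # [3.2, 1.5, 0.1, 0.9, 0.0]
```

Dropping the user-attention or subject-feature component recovers the two-component and single-component variants described earlier.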
Step S108: input the virtual environment feature and the candidate advertisements of the virtual world into an advertisement scoring model for advertisement score calculation, and determine a target candidate advertisement among the candidate advertisements according to the obtained advertisement scores.
Optionally, the advertisement scoring model includes a reinforcement learning model constructed based on a reinforcement learning algorithm. The input in the model training process of the reinforcement learning model includes the virtual environment feature obtained by vectorizing and concatenating the attention duration, the user attention, and the advertisement subject feature of the advertisement samples serving as training samples. During training, a corresponding loss function can be configured and used to calculate the training loss, so that the model is trained more efficiently under the constraint of that loss.
In a specific implementation, when advertisement scores are calculated through the advertisement scoring model, the virtual environment feature and each candidate advertisement are input into the advertisement scoring model in turn, one score calculation per candidate, so that an advertisement score is obtained for every candidate after all of them have been processed. Alternatively, if the advertisement scoring model supports it, the virtual environment feature and all candidate advertisements may be input into the model together, and the model outputs the advertisement score of every candidate in one pass. After the advertisement scores of the candidate advertisements are obtained, the target candidate advertisement is determined among them according to the scores, for example by selecting the candidate advertisement with the highest score.
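The per-candidate scoring loop can be sketched as below. The dot-product stand-in model and the vector encoding of candidates are assumptions for illustration; the actual system would use the trained advertisement scoring model:

```python
def score_candidates(env_feature, candidates, scoring_model):
    """Score each candidate advertisement by feeding the virtual
    environment feature together with that candidate into the
    scoring model, then pick the highest-scoring one as the target."""
    scores = {ad_id: scoring_model(env_feature, ad_vec)
              for ad_id, ad_vec in candidates.items()}
    target = max(scores, key=scores.get)
    return target, scores

# Hypothetical stand-in model: dot product of the two vectors.
def toy_model(env, ad):
    return sum(e * a for e, a in zip(env, ad))

env = [3.2, 1.5, 0.1]
ads = {"ad_a": [1.0, 0.0, 0.0], "ad_b": [0.0, 1.0, 1.0]}
target, scores = score_candidates(env, ads, toy_model)
print(target)  # ad_a
```

The batched variant described above would replace the loop with a single model call that returns all scores at once; the argmax selection step is unchanged.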
In practical applications, after the target candidate advertisement for delivery in the virtual world is determined, a delivery time point for advertisement delivery in the virtual world can further be determined, and the target candidate advertisement is delivered at that time point.
The advertisement delivery processing method of the virtual world provided in this embodiment is further described below, taking its application to a virtual world scene as an example. Referring to fig. 2, when applied to a virtual world scene, the method specifically includes the following steps.
Step S202: align the eye color image and the eye infrared image, and calculate a multi-modal quality score based on the aligned feature points of the eye color image and the eye infrared image obtained after alignment.
If the multi-modal quality score is greater than or equal to a preset quality score threshold, execute step S204;
if the multi-modal quality score is smaller than the preset quality score threshold, re-acquire the eye color image and the eye infrared image.
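Claim 4 below specifies the score as the ratio of a preset constant to the square of the feature similarity. A minimal sketch of this quality gate under that formula, with the Euclidean distance between aligned feature points standing in for the unspecified "feature similarity" measure, is:

```python
def multimodal_quality_score(color_feats, ir_feats, constant=1.0):
    """Quality score per claim 4: a preset constant divided by the
    square of the feature similarity between the aligned eye color
    image and the aligned eye infrared image. Using the Euclidean
    distance between aligned feature points as the 'similarity' is
    an assumption (a smaller distance then yields a higher score)."""
    similarity = sum((c - i) ** 2 for c, i in zip(color_feats, ir_feats)) ** 0.5
    if similarity == 0:
        return float("inf")  # perfectly aligned modalities
    return constant / similarity ** 2

def pass_quality_gate(score, threshold=0.5):
    """Proceed to view angle estimation only when the score reaches
    the preset threshold; otherwise both images are re-acquired."""
    return score >= threshold

score = multimodal_quality_score([0.2, 0.4], [0.2, 0.3], constant=0.01)
print(pass_quality_gate(score))  # True
```

Both the constant and the threshold are tunable parameters left open by the text.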
Step S204: input the eye color image and the eye infrared image into a view angle estimation model for view angle estimation, obtaining the user view angle direction.
Step S206: input the user view angle direction and the image data of the virtual world in which the advertisement is delivered into a detection model for advertisement attention area detection, obtaining the advertisement attention area.
Step S208: calculate the attention duration and the user attention of the advertisement attention area, and extract the advertisement subject feature from the image data.
Step S210: vectorize the attention duration, the user attention, and the advertisement subject feature to obtain an attention-duration vector, an attention-degree vector, and a subject feature vector.
Step S212: concatenate the attention-duration vector, the attention-degree vector, and the subject feature vector, and use the resulting concatenated vector as the virtual environment feature.
Step S214: input the virtual environment feature and the candidate advertisements of the virtual world into an advertisement scoring model for advertisement score calculation, and determine the target candidate advertisement among the candidate advertisements according to the obtained advertisement scores.
Step S216: determine a delivery time point for advertisement delivery in the virtual world, and deliver the target candidate advertisement at that time point.
An embodiment of the advertisement delivery processing apparatus of the virtual world provided in this specification is as follows:
The foregoing embodiments provide an advertisement delivery processing method of the virtual world; based on it, a corresponding advertisement delivery processing apparatus of the virtual world is also provided and is described below with reference to the accompanying drawings.
Referring to fig. 3, a schematic diagram of an advertisement delivery processing apparatus for a virtual world according to the present embodiment is shown.
Since the apparatus embodiments correspond to the method embodiments, their description is relatively simple; for relevant details, refer to the corresponding descriptions of the method embodiments provided above. The apparatus embodiments described below are merely illustrative.
This embodiment provides an advertisement delivery processing apparatus of the virtual world, comprising:
a view angle estimation module 302 configured to perform view angle estimation based on the user eye image to obtain the user view angle direction, the user eye image being collected through an image sensor configured on an access device of the virtual world;
an area detection module 304 configured to input the user view angle direction and the image data of the virtual world in which the advertisement is delivered into a detection model for advertisement attention area detection, obtaining the advertisement attention area;
a feature construction module 306 configured to construct a virtual environment feature carrying the attention duration of the advertisement attention area;
a score calculation module 308 configured to input the virtual environment feature and the candidate advertisements of the virtual world into an advertisement scoring model for advertisement score calculation, and to determine a target candidate advertisement among the candidate advertisements according to the obtained advertisement scores.
The embodiment of the advertisement delivery processing device of the virtual world provided in the specification is as follows:
Based on the same technical concept as the advertisement delivery processing method of the virtual world described above, one or more embodiments of this specification further provide an advertisement delivery processing device of the virtual world, configured to execute the method provided above. Fig. 4 is a schematic structural diagram of the advertisement delivery processing device of the virtual world provided by one or more embodiments of this specification.
The advertisement delivery processing device of the virtual world provided in this embodiment includes:
As shown in fig. 4, the advertisement delivery processing device of the virtual world may vary considerably with configuration or performance, and may include one or more processors 401 and a memory 402, where the memory 402 may store one or more application programs or data. The memory 402 may be transient storage or persistent storage. An application program stored in the memory 402 may include one or more modules (not shown in the figure), each of which may include a series of computer-executable instructions for the advertisement delivery processing device of the virtual world. Still further, the processor 401 may be configured to communicate with the memory 402 and to execute, on the advertisement delivery processing device of the virtual world, the series of computer-executable instructions in the memory 402. The device may also include one or more power supplies 403, one or more wired or wireless network interfaces 404, one or more input/output interfaces 405, one or more keyboards 406, and the like.
In a particular embodiment, the advertisement delivery processing device of the virtual world includes a memory and one or more programs, wherein the one or more programs are stored in the memory and may include one or more modules, each module possibly including a series of computer-executable instructions for the advertisement delivery processing device of the virtual world; the one or more processors are configured to execute the one or more programs, which include computer-executable instructions for:
performing view angle estimation based on the user eye image to obtain the user view angle direction, the user eye image being collected through an image sensor configured on an access device of the virtual world;
inputting the user view angle direction and the image data of the virtual world in which the advertisement is delivered into a detection model for advertisement attention area detection, obtaining the advertisement attention area;
constructing a virtual environment feature carrying the attention duration of the advertisement attention area;
inputting the virtual environment feature and the candidate advertisements of the virtual world into an advertisement scoring model for advertisement score calculation, and determining a target candidate advertisement among the candidate advertisements according to the obtained advertisement scores.
An embodiment of a storage medium provided in the present specification is as follows:
Based on the same technical concept as the advertisement delivery processing method of the virtual world described above, one or more embodiments of this specification further provide a storage medium.
The storage medium provided in this embodiment is configured to store computer-executable instructions that, when executed by a processor, implement the following process:
performing view angle estimation based on the user eye image to obtain the user view angle direction, the user eye image being collected through an image sensor configured on an access device of the virtual world;
inputting the user view angle direction and the image data of the virtual world in which the advertisement is delivered into a detection model for advertisement attention area detection, obtaining the advertisement attention area;
constructing a virtual environment feature carrying the attention duration of the advertisement attention area;
inputting the virtual environment feature and the candidate advertisements of the virtual world into an advertisement scoring model for advertisement score calculation, and determining a target candidate advertisement among the candidate advertisements according to the obtained advertisement scores.
It should be noted that the embodiment concerning the storage medium and the embodiment concerning the advertisement delivery processing method of the virtual world in this specification are based on the same inventive concept, so the specific implementation of this embodiment may refer to the implementation of the corresponding method described above; repeated details are omitted.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In the 1930s, an improvement to a technology could clearly be distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, improvements to many of today's method flows can be regarded as direct improvements to hardware circuit structures: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (PLD), such as a field-programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually manufacturing integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development; the source code to be compiled is written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can readily be obtained merely by briefly programming the method flow into an integrated circuit using one of the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, application-specific integrated circuits (ASICs), programmable logic controllers, or embedded microcontrollers. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely in computer-readable program code, it is entirely possible to logically program the method steps so that the controller achieves the same functionality in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for performing the various functions may also be regarded as structures within the hardware component. Indeed, means for performing the various functions may even be regarded simultaneously as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each unit may be implemented in the same piece or pieces of software and/or hardware when implementing the embodiments of the present specification.
One skilled in the relevant art will recognize that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-permanent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media) such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," and any other variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
One or more embodiments of the present specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, the embodiments are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively simple; for relevant details, see the corresponding parts of the method embodiments.
The foregoing description is by way of example only and is not intended to limit the present disclosure. Various modifications and changes may occur to those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. that fall within the spirit and principles of the present document are intended to be included within the scope of the claims of the present document.

Claims (15)

1. An advertisement delivery processing method of a virtual world, comprising:
performing view angle estimation based on a user eye image to obtain a user view angle direction, wherein the user eye image is collected through an image sensor configured on an access device of the virtual world;
inputting the user view angle direction and image data of the virtual world in which an advertisement is delivered into a detection model for advertisement attention area detection, to obtain an advertisement attention area;
constructing a virtual environment feature carrying an attention duration of the advertisement attention area, wherein the virtual environment feature further carries a user attention, and the user attention is calculated as follows: extracting, through an image segmentation algorithm, a first eye feature from the user eye image in an advertisement delivery state and a second eye feature from the user eye image in a non-delivery state; and calculating the user attention based on the first eye feature and the second eye feature; and
inputting the virtual environment feature and candidate advertisements of the virtual world into an advertisement scoring model for advertisement score calculation, and determining a target candidate advertisement among the candidate advertisements according to the obtained advertisement scores.
2. The advertisement delivery processing method of the virtual world according to claim 1, wherein the user eye image comprises: an eye color image and/or an eye infrared image;
correspondingly, the image sensor configured on the access device comprises: an image sensor for acquiring the eye color image and/or an infrared sensor for acquiring the eye infrared image.
3. The advertisement delivery processing method of the virtual world according to claim 2, further comprising, before the step of performing view angle estimation based on the user eye image to obtain the user view angle direction:
aligning the eye color image and the eye infrared image, and calculating a multi-modal quality score based on aligned feature points of the eye color image and the eye infrared image obtained after alignment;
if the multi-modal quality score is smaller than a preset quality score threshold, re-acquiring the eye color image and the eye infrared image;
wherein the multi-modal quality score is calculated as follows:
calculating a feature similarity of the aligned eye color image and the aligned eye infrared image based on the aligned feature points of the two images;
calculating the multi-modal quality score according to the feature similarity and a preset constant.
4. The advertisement delivery processing method of the virtual world according to claim 3, wherein the multi-modal quality score is equal to the ratio of the preset constant to the square of the feature similarity.
5. The advertisement delivery processing method of the virtual world according to claim 1, wherein the advertisement attention area detection comprises:
determining a three-dimensional view position to which the user view angle direction is mapped in the virtual world;
detecting, in the image data of the delivered advertisement, the image area corresponding to the three-dimensional view position as the advertisement attention area.
6. The advertisement delivery processing method of the virtual world according to claim 1, further comprising, before the step of inputting the user view angle direction and the image data of the virtual world in which the advertisement is delivered into the detection model for advertisement attention area detection to obtain the advertisement attention area:
rendering an advertisement image based on the image data of the advertisement, and outputting the rendering result to the access device;
wherein the delivered advertisement is determined by random selection from an advertisement pool and/or sequenced advertisements, or is obtained by matching in the advertisement pool and/or the sequenced advertisements based on user interest data.
7. The advertisement delivery processing method of the virtual world according to claim 1, wherein the attention duration of the advertisement attention area is calculated as follows:
calculating, according to the acquisition times of the advertisement images of the advertisement attention area, the acquisition time span of those advertisement images as the attention duration.
8. The advertisement delivery processing method of the virtual world according to claim 1, wherein the virtual environment feature further carries an advertisement subject feature;
wherein the advertisement subject feature comprises: an advertisement subject feature obtained by feature extraction from the advertisement image of the delivered advertisement, and/or an attention subject feature obtained by feature extraction from the region sub-image corresponding to the advertisement attention area.
9. The advertisement delivery processing method of the virtual world according to claim 1, wherein constructing the virtual environment feature carrying the attention duration of the advertisement attention area comprises:
vectorizing the attention duration, the user attention, and the advertisement subject feature to obtain an attention-duration vector, an attention-degree vector, and a subject feature vector;
concatenating the attention-duration vector, the attention-degree vector, and the subject feature vector, and using the resulting concatenated vector as the virtual environment feature.
10. The advertisement delivery processing method of the virtual world according to claim 1, wherein the advertisement scoring model comprises: a reinforcement learning model constructed based on a reinforcement learning algorithm;
wherein the input during model training of the reinforcement learning model comprises: a virtual environment feature obtained by vectorizing and concatenating the attention duration, the user attention degree and the advertisement subject feature of an advertisement sample serving as a training sample.
11. The advertisement delivery processing method of the virtual world according to claim 1, wherein after the step of inputting the virtual environment feature and the candidate advertisements of the virtual world into an advertisement scoring model for advertisement score calculation and determining a target candidate advertisement among the candidate advertisements according to the obtained advertisement scores, the method further comprises:
determining a delivery time point for advertisement delivery in the virtual world, and delivering the target candidate advertisement at the delivery time point.
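The score-and-select step (scoring each candidate advertisement against the virtual environment feature and taking the highest-scoring one as the target) can be sketched as below. The patent specifies a reinforcement-learning model (claim 10); a linear scorer is substituted here only to keep the sketch self-contained, and all names and weights are hypothetical.

```python
import numpy as np

def score_and_select(env_feature, candidates, weights):
    """Score each candidate advertisement with a linear stand-in for the
    advertisement scoring model and return the scores together with the
    index of the target candidate (argmax of the scores)."""
    scores = np.array([float(weights @ np.concatenate([env_feature, c]))
                       for c in candidates])
    return scores, int(np.argmax(scores))

env = np.array([5.0, 0.8])                      # e.g. [duration, attention]
cands = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # toy candidate embeddings
w = np.array([0.1, 0.2, 1.0, 0.5])              # hypothetical weights
scores, target = score_and_select(env, cands, w)
print(target)  # 0
```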
12. The advertisement delivery processing method of the virtual world according to any one of claims 1 to 11, wherein decentralized transactions are conducted in the virtual world by generating non-fungible identifiers, and the transactions confer ownership of virtual assets.
13. An advertisement delivery processing apparatus of a virtual world, comprising:
a view angle estimation module configured to perform view angle estimation based on eye images of a user to obtain a user view angle direction, wherein the eye images of the user are collected by an image sensor configured on an access device of the virtual world;
a region detection module configured to input the user view angle direction and image data of the virtual world in which an advertisement is delivered into a detection model for advertisement attention region detection, to obtain an advertisement attention region;
a feature construction module configured to construct a virtual environment feature carrying the attention duration of the advertisement attention region, wherein the virtual environment feature further carries a user attention degree calculated as follows: extracting, by an image segmentation algorithm, a first eye feature from the eye image of the user in the advertisement delivery state and a second eye feature from the eye image of the user in the non-delivery state, and calculating the user attention degree based on the first eye feature and the second eye feature; and
a score calculation module configured to input the virtual environment feature and candidate advertisements of the virtual world into an advertisement scoring model for advertisement score calculation, and to determine a target candidate advertisement among the candidate advertisements according to the obtained advertisement scores.
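The user-attention calculation recited in claims 1 and 13 (comparing an eye feature extracted during advertisement delivery with one extracted outside delivery) can be illustrated as follows. The claims do not specify the eye feature or the comparison; using the eye-pixel fraction of a segmentation mask and a simple ratio is an assumption made only for illustration.

```python
import numpy as np

def eye_feature(mask):
    """A stand-in eye feature: the fraction of pixels labeled as eye in
    a binary segmentation mask of the eye region."""
    return np.asarray(mask, dtype=bool).mean()

def user_attention(first_eye_mask, second_eye_mask, eps=1e-6):
    """Attention degree as the ratio of the eye feature in the
    advertisement delivery state (first) to the non-delivery state
    (second); a wider-open eye during delivery yields a higher score."""
    return eye_feature(first_eye_mask) / (eye_feature(second_eye_mask) + eps)

on_ad = np.ones((4, 4))                              # fully open eye during delivery
off_ad = np.pad(np.ones((2, 4)), ((1, 1), (0, 0)))   # half-open eye otherwise
print(user_attention(on_ad, off_ad))  # ~2.0
```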
14. An advertisement delivery processing device of a virtual world, comprising:
a processor; and a memory configured to store computer-executable instructions that, when executed, cause the processor to:
perform view angle estimation based on eye images of a user to obtain a user view angle direction, wherein the eye images of the user are collected by an image sensor configured on an access device of the virtual world;
input the user view angle direction and image data of the virtual world in which an advertisement is delivered into a detection model for advertisement attention region detection, to obtain an advertisement attention region;
construct a virtual environment feature carrying the attention duration of the advertisement attention region, wherein the virtual environment feature further carries a user attention degree calculated as follows: extracting, by an image segmentation algorithm, a first eye feature from the eye image of the user in the advertisement delivery state and a second eye feature from the eye image of the user in the non-delivery state, and calculating the user attention degree based on the first eye feature and the second eye feature; and
input the virtual environment feature and candidate advertisements of the virtual world into an advertisement scoring model for advertisement score calculation, and determine a target candidate advertisement among the candidate advertisements according to the obtained advertisement scores.
15. A storage medium storing computer-executable instructions that, when executed by a processor, implement the following:
performing view angle estimation based on eye images of a user to obtain a user view angle direction, wherein the eye images of the user are collected by an image sensor configured on an access device of the virtual world;
inputting the user view angle direction and image data of the virtual world in which an advertisement is delivered into a detection model for advertisement attention region detection, to obtain an advertisement attention region;
constructing a virtual environment feature carrying the attention duration of the advertisement attention region, wherein the virtual environment feature further carries a user attention degree calculated as follows: extracting, by an image segmentation algorithm, a first eye feature from the eye image of the user in the advertisement delivery state and a second eye feature from the eye image of the user in the non-delivery state, and calculating the user attention degree based on the first eye feature and the second eye feature; and
inputting the virtual environment feature and candidate advertisements of the virtual world into an advertisement scoring model for advertisement score calculation, and determining a target candidate advertisement among the candidate advertisements according to the obtained advertisement scores.
CN202210868330.7A 2022-07-22 2022-07-22 Advertisement putting processing method and device for virtual world Active CN115187307B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210868330.7A CN115187307B (en) 2022-07-22 2022-07-22 Advertisement putting processing method and device for virtual world

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210868330.7A CN115187307B (en) 2022-07-22 2022-07-22 Advertisement putting processing method and device for virtual world

Publications (2)

Publication Number Publication Date
CN115187307A CN115187307A (en) 2022-10-14
CN115187307B true CN115187307B (en) 2024-06-07

Family

ID=83521788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210868330.7A Active CN115187307B (en) 2022-07-22 2022-07-22 Advertisement putting processing method and device for virtual world

Country Status (1)

Country Link
CN (1) CN115187307B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115731375B (en) * 2022-12-09 2024-05-10 支付宝(杭州)信息技术有限公司 Method and device for updating virtual image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960937A (en) * 2018-08-10 2018-12-07 陈涛 Advertisement push method based on eye-tracking technology in AR smart glasses
CN111612576A (en) * 2020-05-09 2020-09-01 向培红 Commodity recommendation method and device and electronic equipment
CN112181152A (en) * 2020-11-13 2021-01-05 幻蝎科技(武汉)有限公司 Advertisement push management method, equipment and application based on MR glasses
CN112507799A (en) * 2020-11-13 2021-03-16 幻蝎科技(武汉)有限公司 Image identification method based on eye movement fixation point guidance, MR glasses and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100997873B1 (en) * 2008-03-31 2010-12-02 팅크웨어(주) Advertisement method and system of map using virtual point of interest
US10598929B2 (en) * 2011-11-09 2020-03-24 Google Llc Measurement method and system

Also Published As

Publication number Publication date
CN115187307A (en) 2022-10-14

Similar Documents

Publication Publication Date Title
TWI773189B (en) Method of detecting object based on artificial intelligence, device, equipment and computer-readable storage medium
CN111563502A (en) Image text recognition method and device, electronic equipment and computer storage medium
CN112016475B (en) Human body detection and identification method and device
CN111491187B (en) Video recommendation method, device, equipment and storage medium
CN110427915B (en) Method and apparatus for outputting information
CN112085120B (en) Multimedia data processing method and device, electronic equipment and storage medium
US20230067934A1 (en) Action Recognition Method, Apparatus and Device, Storage Medium and Computer Program Product
JP2023527615A (en) Target object detection model training method, target object detection method, device, electronic device, storage medium and computer program
CN117078790B (en) Image generation method, device, computer equipment and storage medium
CN115187307B (en) Advertisement putting processing method and device for virtual world
CN114495916B (en) Method, device, equipment and storage medium for determining insertion time point of background music
CN115600157A (en) Data processing method and device, storage medium and electronic equipment
CN114332484A (en) Key point detection method and device, computer equipment and storage medium
CN113537187A (en) Text recognition method and device, electronic equipment and readable storage medium
CN117336526A (en) Video generation method and device, storage medium and electronic equipment
CN115862130B (en) Behavior recognition method based on human body posture and trunk sports field thereof
CN111611941A (en) Special effect processing method and related equipment
CN115358777A (en) Advertisement putting processing method and device of virtual world
EP4394690A1 (en) Image processing method and apparatus, computer device, computer-readable storage medium, and computer program product
CN115546908A (en) Living body detection method, device and equipment
CN115346028A (en) Virtual environment theme processing method and device
CN113378774A (en) Gesture recognition method, device, equipment, storage medium and program product
CN115830196B (en) Virtual image processing method and device
CN115374298B (en) Index-based virtual image data processing method and device
CN115495712B (en) Digital work processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant