CN117541321A - Advertisement making and publishing method and system based on virtual digital person - Google Patents

Advertisement making and publishing method and system based on virtual digital person

Info

Publication number
CN117541321A
CN117541321A
Authority
CN
China
Prior art keywords
advertisement
virtual digital
data
matching
adopting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410020352.7A
Other languages
Chinese (zh)
Other versions
CN117541321B (en)
Inventor
刘治宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Fenghuo Wanjia Technology Co ltd
Original Assignee
Beijing Fenghuo Wanjia Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Fenghuo Wanjia Technology Co ltd filed Critical Beijing Fenghuo Wanjia Technology Co ltd
Priority to CN202410020352.7A priority Critical patent/CN117541321B/en
Publication of CN117541321A publication Critical patent/CN117541321A/en
Application granted granted Critical
Publication of CN117541321B publication Critical patent/CN117541321B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06Q30/0276 Advertisement creation
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/08 Learning methods
    • G06Q30/0277 Online advertisement
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T15/005 General purpose rendering architectures
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V30/19007 Matching; Proximity measures
    • G06T2200/04 Indexing scheme for image data processing or generation involving 3D image data
    • Y02P90/30 Computing systems specially adapted for manufacturing


Abstract

The invention discloses an advertisement production and publishing method and system based on a virtual digital person, relating to the field of advertisement publishing, display and production. The method comprises the following steps: acquiring a user portrait; determining character appearance information from the user portrait; constructing a virtual digital person from the appearance information by means of appearance reconstruction and appearance rendering; performing information matching with a synthesis model according to the advertisement scene information and the virtual digital person to obtain a matching animation, the synthesis model being built from a convolutional neural network and a recurrent neural network; reviewing the matching animation with a recognition detection model to obtain a detected matching animation, the recognition detection model being built on a deep learning method and a bidirectional recurrent neural network fused with a self-attention mechanism; and publishing the detected matching animation to obtain a published advertisement. The invention can realize advertisement delivery and publishing efficiently and flexibly.

Description

Advertisement making and publishing method and system based on virtual digital person
Technical Field
The invention relates to the field of advertisement publishing, display and production, and in particular to an advertisement production and publishing method and system based on a virtual digital person.
Background
With the development of the internet and artificial intelligence technology, internet advertising has received extensive attention and has developed rapidly. Compared with traditional television, newspaper and outdoor advertising, internet advertising is flexible in its means of publication, can provide users with strong sensory effects, can be delivered relatively accurately to different user groups according to the advertisement content, and has a relatively wide reach at a relatively low price. Currently, internet advertisements usually sell goods or provide advertising services, directly or indirectly, in the form of text, pictures, audio and video through internet media such as websites, web pages and internet applications.
A virtual digital person is a virtual character with a digital appearance that depends on a display device to exist. On the one hand, its appearance has specific features such as looks, gender and personality; on the other hand, its behavior can be expressed through language, facial expressions and body movements. With the addition of artificial intelligence technology, the virtual digital person can also carry human-like thought and gain the ability to recognize the external environment and interact with people. A virtual digital person is a virtual character image designed and produced by comprehensively applying technologies such as computer graphics, graphic rendering, motion capture, deep learning and speech synthesis. Driven by the wave of interest in the metaverse, it has attracted wide attention since its emergence. Current virtual digital person technology focuses, on the one hand, on content production and tool design, so that the virtual digital person has more realistic appearance, motion characteristics, language ability and emotional expression; on the other hand, its application fields keep deepening and expanding, from cultural entertainment and news anchoring to many industries such as finance, medical care, education and communication.
The content production technology of internet advertising currently lacks innovative thinking and imagination; network media are filled with stale forms such as button advertisements, banner advertisements and pop-up advertisements, and virtual digital characters are so far absent from internet advertisements.
The content publishing of internet advertising lacks timely, effective and intelligent review support, so the publishing and review efficiency of internet advertisements is low, and illegal content still appears in published material.
Although internet advertising provides an interaction channel with users that traditional media advertising lacks, the current interaction mode is still limited and mainly text-based; the flexibility and real-time performance of interaction are greatly constrained, and the interaction effect remains unsatisfactory.
Meanwhile, although real-person endorsement advertisements remain the main form of internet brand advertising, such advertisements are prone to backfire when the endorser falls into disrepute; they put the center of gravity on the real person, so the advertisement content lacks novelty and the advertisement quality is poor.
Therefore, how to realize advertisement publishing efficiently and flexibly is of great importance.
Disclosure of Invention
The invention aims to provide an advertisement production and publishing method and system based on a virtual digital person, which can realize advertisement delivery and publishing efficiently and flexibly.
In order to achieve the above object, the present invention provides the following solutions. An advertisement production and publishing method based on a virtual digital person comprises: acquiring a user portrait; the user portrait is three-dimensional attribute data determined from the user's authorization data; the three-dimensional attribute data includes gender, interests, and geographic location.
Determining character appearance information from the user portrait; and constructing a virtual digital person from the appearance information by means of appearance reconstruction and appearance rendering.
Performing information matching with a synthesis model according to the advertisement scene information and the virtual digital person to obtain a matching animation; the information matching includes speech synthesis and action matching; the synthesis model is built from a convolutional neural network and a recurrent neural network; the advertisement scene information includes the advertisement script content, the action gestures required by the advertisement, and the timbre of the advertisement language.
Reviewing the matching animation with a recognition detection model to obtain a detected matching animation; the recognition detection model is built on a deep learning method and a bidirectional recurrent neural network fused with a self-attention mechanism; and delivering and publishing the detected matching animation to obtain a published advertisement.
Optionally, acquiring the user portrait specifically includes: acquiring the user's authorization data; preprocessing the authorization data to obtain processed authorization data; determining three-dimensional attribute data from the processed authorization data; and taking the three-dimensional attribute data as the user portrait.
Optionally, performing information matching with the synthesis model according to the advertisement scene information and the virtual digital person to obtain the matching animation specifically includes: determining an information sequence from the advertisement scene information with a three-dimensional reconstruction algorithm, the information sequence including a speech sequence and an action sequence.

Extracting feature data from the information sequence with a convolutional neural network; determining the mapping relation between the feature data with a recurrent neural network; and performing matching synthesis according to the mapping relation and the virtual digital person to obtain the matching animation.
Optionally, reviewing the matching animation with the recognition detection model to obtain the detected matching animation specifically includes: performing speech recognition on the matching animation with a deep learning method to obtain recognized text data; comparing the recognized text data with a preset sensitive-word lexicon to obtain a sensitivity similarity; filtering the recognized text data according to the sensitivity similarity to obtain filtered text data; segmenting the matching animation with a sliding-window method to obtain segmented animation; performing image text recognition on the segmented animation with a bidirectional recurrent neural network fused with a self-attention mechanism to obtain a text feature sequence; performing recognition detection based on the sensitivity similarity according to the filtered text data and the text feature sequence to obtain a detection result; and determining the detected matching animation according to the detection result and the matching animation.
Optionally, performing speech recognition on the matching animation with a deep learning method to obtain the recognized text data specifically includes: processing the matching animation with dynamic transcoding and extracting signal features to obtain a speech signal; and performing speech recognition on the speech signal with a deep learning method based on a hidden Markov acoustic model to obtain the recognized text data.
Optionally, the method further comprises: acquiring the user's voice data; performing voice conversion and word segmentation on the voice data to obtain question-processing data; searching and matching in a preset knowledge base according to the question-processing data to obtain a matching result; and having the virtual digital person interact with the user according to the matching result.
Optionally, the method further comprises: acquiring response data of the published advertisement; the response data includes delivery data, operation data, behavior data, and monitoring data.
Performing cleaning, conversion, integration and reduction on the response data to obtain processed response data; performing importance comparison according to the processed response data and the set levels to determine a weight judgment matrix; and performing a consistency test on the weight judgment matrix, and comparing the consistency test result with a set threshold to obtain a comparison result.

Determining a response result according to the comparison result; the response result is used to characterize the spreading influence of the published advertisement.
An advertisement production and publishing system based on a virtual digital person, the system comprising: an acquisition module, used for acquiring the user portrait; the user portrait is three-dimensional attribute data determined from the user's authorization data; the three-dimensional attribute data includes gender, interests, and geographic location.
And an information determining module, used for determining the character appearance information from the user portrait.
And a construction module, used for constructing the virtual digital person from the appearance information by means of appearance reconstruction and appearance rendering.
A matching module, used for performing information matching with the synthesis model according to the advertisement scene information and the virtual digital person to obtain the matching animation; the information matching includes speech synthesis and action matching; the synthesis model is built from a convolutional neural network and a recurrent neural network; the advertisement scene information includes the advertisement script content, the action gestures required by the advertisement, and the timbre of the advertisement language.
A detection module, used for reviewing the matching animation with the recognition detection model to obtain the detected matching animation; the recognition detection model is built on a deep learning method and a bidirectional recurrent neural network fused with a self-attention mechanism.
And a publishing module, used for publishing the detected matching animation to obtain the published advertisement.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects. The invention provides an advertisement production and publishing method and system based on a virtual digital person, which acquire a user portrait; determine character appearance information from the user portrait; construct a virtual digital person from the appearance information by means of appearance reconstruction and appearance rendering; perform information matching with a synthesis model according to the advertisement scene information and the virtual digital person to obtain a matching animation; review the matching animation with a recognition detection model to obtain a detected matching animation; and publish the detected matching animation to obtain a published advertisement. Because the synthesis model is built from a convolutional neural network and a recurrent neural network, and the recognition detection model is built on a deep learning method and a bidirectional recurrent neural network fused with a self-attention mechanism, the matching animation is completed through the virtual digital person on this basis, and advertisement publishing is finally realized.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below are only some embodiments of the present invention; for a person skilled in the art, other drawings may be obtained from them without inventive effort.
Fig. 1 is a flowchart of an advertisement making and publishing method based on a virtual digital person according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of modeling and generating a virtual digital person according to an embodiment of the present invention.
Fig. 3 is a flow chart of virtual digital person speech synthesis provided in an embodiment of the present invention.
Fig. 4 is a schematic diagram of the working process and principle of virtual digital person advertisement content production according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of the virtual digital person advertisement interaction process and principle provided by an embodiment of the present invention.
Fig. 6 is a schematic diagram of an advertisement delivery and distribution management process and principle according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of an intelligent advertisement content identification and review process and principle according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of an intelligent comprehensive evaluation working process and a principle of an advertisement effect according to an embodiment of the present invention.
FIG. 9 is a diagram illustrating the interrelationship of the various parts of the virtual digital person advertisement production and intelligent publishing system according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
By developing virtual digital person technology and applying it to the content production, delivery and publishing of internet advertisements, and by building a virtual digital person internet-advertisement production and intelligent publishing system, advertising creativity can be improved, advertisement content can be purified, and publishing quality can be raised. At the same time, the virtual digital person can interact with the public in real time, helping the public better understand and experience the advertised products, overcoming the shortcomings of real-person endorsement advertisements, and providing a brand-new content form and user experience for internet advertising.
The invention aims to provide an advertisement production and publishing method and system based on a virtual digital person, which can realize advertisement delivery and publishing efficiently and flexibly.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Example 1: as shown in fig. 1, an embodiment of the present invention provides an advertisement production and publishing method based on a virtual digital person, which includes: Step 100: acquiring the user portrait. The user portrait is three-dimensional attribute data determined from the user's authorization data; the three-dimensional attribute data includes gender, interests, and geographic location.
Acquiring the user portrait specifically includes: acquiring the user's authorization data; preprocessing the authorization data to obtain processed authorization data; determining three-dimensional attribute data from the processed authorization data; and taking the three-dimensional attribute data as the user portrait.
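As a minimal sketch of this user-portrait step, the snippet below preprocesses a user's authorization data and reduces it to the three attributes the embodiment names (gender, interests, geographic location). The field names and cleaning rules are illustrative assumptions, not taken from the patent.

```python
def build_user_portrait(authorization_data):
    """Preprocess raw authorization data and keep the three portrait attributes."""
    # Preprocessing: drop empty fields and normalize string values.
    cleaned = {k: v.strip().lower() if isinstance(v, str) else v
               for k, v in authorization_data.items() if v not in (None, "")}
    # The three-dimensional attribute data of the user portrait.
    return {
        "gender": cleaned.get("gender", "unknown"),
        "interests": sorted(set(cleaned.get("interests", []))),
        "location": cleaned.get("location", "unknown"),
    }

raw = {"gender": " Female ", "interests": ["sports", "music", "sports"],
       "location": "Beijing", "email": ""}   # hypothetical authorization data
portrait = build_user_portrait(raw)
```

In a real system the preprocessing would of course cover far more fields and consent checks; the point here is only the reduction to the three-attribute portrait.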
Step 200: character outline information is determined based on the user representation.
Step 300: and constructing a virtual digital person according to the shape information by adopting a shape reconstruction and shape rendering mode.
Step 400: and carrying out information matching by adopting a synthetic model according to the advertisement scene information and the virtual digital person to obtain a matching animation. The information matching includes: speech synthesis and action matching; the synthetic model is constructed by adopting a convolutional neural network and a cyclic neural network; the advertisement scene information includes: advertisement scenario content, advertisement required action gestures and advertisement language timbre.
Performing information matching with the synthesis model according to the advertisement scene information and the virtual digital person to obtain the matching animation specifically includes: determining an information sequence from the advertisement scene information with a three-dimensional reconstruction algorithm, the information sequence including a speech sequence and an action sequence; extracting feature data from the information sequence with a convolutional neural network; determining the mapping relation between the feature data with a recurrent neural network; and performing matching synthesis according to the mapping relation and the virtual digital person to obtain the matching animation.
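The two stages of the synthesis model can be sketched in miniature: a convolutional pass extracting features from the information sequence, followed by a recurrent pass mapping those features into an output sequence. A real implementation would use a deep-learning framework with learned weights; the toy kernel, weights, and input sequence below are illustrative assumptions only.

```python
import math

def conv1d(sequence, kernel):
    """Feature extraction: valid 1-D convolution over the information sequence."""
    k = len(kernel)
    return [sum(sequence[i + j] * kernel[j] for j in range(k))
            for i in range(len(sequence) - k + 1)]

def rnn(features, w_in=0.5, w_rec=0.3):
    """Mapping: a single-unit recurrent cell with tanh activation."""
    h, outputs = 0.0, []
    for x in features:
        h = math.tanh(w_in * x + w_rec * h)  # hidden state carries context
        outputs.append(h)
    return outputs

speech_sequence = [0.1, 0.4, 0.2, 0.8, 0.6]            # toy "information sequence"
features = conv1d(speech_sequence, kernel=[0.5, 0.5])  # assumed smoothing kernel
mapped = rnn(features)                                 # mapping between features
```

The design point the patent relies on is the division of labor: the convolution captures local structure of the speech/action sequence, while the recurrence models the order-dependent mapping used for matching synthesis.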
Step 500: adopting an identification detection model to carry out examination detection on the matching animation to obtain a detected matching animation; the recognition detection model is constructed based on a deep learning method and a bidirectional recurrent neural network fused with a self-attention mechanism.
Reviewing the matching animation with the recognition detection model to obtain the detected matching animation specifically includes: performing speech recognition on the matching animation with a deep learning method to obtain recognized text data; comparing the recognized text data with a preset sensitive-word lexicon to obtain a sensitivity similarity; filtering the recognized text data according to the sensitivity similarity to obtain filtered text data; and segmenting the matching animation with a sliding-window method to obtain segmented animation.

Image text recognition is performed on the segmented animation with a bidirectional recurrent neural network fused with a self-attention mechanism to obtain a text feature sequence; recognition detection is performed based on the sensitivity similarity according to the filtered text data and the text feature sequence to obtain a detection result; and the detected matching animation is determined according to the detection result and the matching animation.
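Two steps of this review pipeline lend themselves to a short sketch: scoring the recognized text against a sensitive-word lexicon, and segmenting the animation's frame sequence with a sliding window. The lexicon, the 0.8 threshold, and the window parameters below are assumptions for illustration; the patent does not specify them.

```python
from difflib import SequenceMatcher

def sensitivity_similarity(text, lexicon):
    """Highest similarity between any word of the text and any lexicon entry."""
    return max((SequenceMatcher(None, w, s).ratio()
                for w in text.split() for s in lexicon), default=0.0)

def sliding_windows(frames, size, step):
    """Segment a frame sequence into overlapping windows for per-window OCR."""
    return [frames[i:i + size] for i in range(0, len(frames) - size + 1, step)]

lexicon = {"forbidden", "banned"}                       # toy sensitive-word lexicon
score = sensitivity_similarity("this ad is forbidden content", lexicon)
flagged = score >= 0.8                                  # assumed review threshold
segments = sliding_windows(list(range(10)), size=4, step=2)
```

Fuzzy matching (rather than exact lookup) is what lets a similarity score catch near-miss spellings of sensitive terms before the filtered text is handed to the downstream detector.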
Performing speech recognition on the matching animation with a deep learning method to obtain the recognized text data specifically includes: processing the matching animation with dynamic transcoding and extracting signal features to obtain a speech signal; and performing speech recognition on the speech signal with a deep learning method based on a hidden Markov acoustic model to obtain the recognized text data.
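Since the embodiment bases its acoustic model on a hidden Markov model, the standard Viterbi decode is the natural illustration: given observed signal features, it recovers the most probable hidden state path. The two states, probabilities, and observations below are invented for the example, not taken from the patent.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden-state path for an observation sequence (Viterbi)."""
    v = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        v.append({})
        new_path = {}
        for s in states:
            # Best predecessor for state s at this step.
            prob, prev = max((v[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            v[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: v[-1][s])
    return path[best]

states = ("speech", "silence")
obs = ("loud", "loud", "quiet")                 # toy acoustic observations
start_p = {"speech": 0.6, "silence": 0.4}
trans_p = {"speech": {"speech": 0.7, "silence": 0.3},
           "silence": {"speech": 0.4, "silence": 0.6}}
emit_p = {"speech": {"loud": 0.9, "quiet": 0.1},
          "silence": {"loud": 0.2, "quiet": 0.8}}
best_path = viterbi(obs, states, start_p, trans_p, emit_p)
```

A production recognizer would combine such HMM decoding with deep-learning acoustic scores over phoneme states rather than two toy states, but the dynamic-programming backbone is the same.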
Step 600: and putting and releasing the detected matching animation to obtain a release advertisement.
In one embodiment, the method further comprises: acquiring the user's voice data; performing voice conversion and word segmentation on the voice data to obtain question-processing data; searching and matching in a preset knowledge base according to the question-processing data to obtain a matching result; and having the virtual digital person interact with the user according to the matching result.
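The knowledge-base search-and-match step can be sketched as follows: segment the user's question into words, then answer with the stored entry whose question overlaps it most. The knowledge-base entries and the Jaccard-overlap scoring are assumptions for the example; a real system would use the patent's voice conversion and a proper segmenter.

```python
def segment(text):
    """Toy word segmentation: lowercase and split on whitespace."""
    return set(text.lower().split())

def best_match(question, knowledge_base):
    """Return the stored answer whose question overlaps most with the query."""
    q = segment(question)
    scored = [(len(q & segment(k)) / max(len(q | segment(k)), 1), a)
              for k, a in knowledge_base.items()]
    score, answer = max(scored)
    return answer if score > 0 else "Sorry, I don't know yet."

kb = {"what is the product price": "It costs 99 yuan.",   # hypothetical entries
      "where can I buy it": "In our online store."}
reply = best_match("what is the price", kb)
```

The virtual digital person would then speak `reply` back to the user, closing the interaction loop described above.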
As an alternative embodiment, the method further comprises: acquiring response data of the published advertisement; the response data includes delivery data, operation data, behavior data, and monitoring data.
Performing cleaning, conversion, integration and reduction on the response data to obtain processed response data; performing importance comparison according to the processed response data and the set levels to determine a weight judgment matrix; and performing a consistency test on the weight judgment matrix, and comparing the consistency test result with a set threshold to obtain a comparison result.

Determining a response result according to the comparison result; the response result is used to characterize the spreading influence of the published advertisement.
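The weight judgment matrix with a consistency test matches the analytic hierarchy process (AHP). As a sketch under that assumption, the snippet computes the consistency ratio CR = CI / RI with CI = (lambda_max - n) / (n - 1); the pairwise matrix, the random-index table, and the 0.1 threshold are the usual AHP conventions, assumed here rather than quoted from the patent.

```python
def consistency_ratio(matrix):
    """Consistency ratio of a pairwise-comparison (weight judgment) matrix."""
    n = len(matrix)
    # Approximate the principal eigenvector by power iteration.
    w = [1.0] * n
    for _ in range(50):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n      # lambda_max estimate
    ci = (lam - n) / (n - 1)                           # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]                # random-index table
    return ci / ri

# A perfectly consistent 3x3 pairwise-comparison matrix (ratios of 4:2:1).
m = [[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]]
cr = consistency_ratio(m)
acceptable = cr < 0.1   # conventional AHP acceptance threshold
```

A matrix passing this test yields weights that can be trusted to aggregate the delivery, operation, behavior, and monitoring data into one spreading-influence score.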
In short, in practical application, the technical idea of the method of the present invention can be summarized as follows.
1. Advertisement content design and production based on the virtual digital person. Taking a real person as the prototype, a virtual digital person model is created according to the advertising creative and the image demands of the advertisement scene; through deep learning of the real person's actions, expressions and voice, the model can imitate the real person's movements, expressions, speech habits and emotional characteristics. With the virtual digital person as the main body of the advertisement content, users are given viewing angles, movement routes and ways of advancing through the scene; they perceive the changes of space, brightness, temperature and sound in the advertisement scene, and the human senses' responses to objects and the environment, so that advertisement content closer to user experience and to real, vivid life is generated.
(1) Modeling and generation of the advertisement-oriented virtual digital person.
Advertising places high demands on creativity, the aesthetics of the picture, the connotation of the advertisement and its commercial value, and requires that the virtual digital person can be flexibly configured during advertisement design, from image design and action gestures to scene rendering, so as to meet the needs of complex and changeable advertisement scenes. The working principle and process of modeling and generating the advertisement-oriented virtual digital person are shown in figure 2.
Virtual digital person image modeling: according to the virtual digital person style customized for the advertising creative, the collection and modeling of the virtual digital person's appearance are completed.
The visual modeling of the virtual digital person is described by a triple, as given below.
Collection of the virtual digital person's appearance information: a camera is used to collect the real person's appearance element information, such as face shape, five sense organs, figure, skin color and texture, hair style, clothing material, clothing pattern and color, tie, headwear, glasses and rings.
Reconstructing the appearance of the virtual digital person: and adopting methods such as camera array scanning static reconstruction, high visual fidelity dynamic light field reconstruction and the like to reconstruct the appearance of the virtual person, and realizing detail production or restoration of the appearance of the virtual digital person.
Rendering of the virtual digital person's appearance: novel rendering technologies such as physically based rendering and relighting are adopted to fine-tune the precision of the virtual person's appearance and to construct the virtual person's environmental expression and effects.
Virtual digital human speech synthesis: the analysis of the advertisement script is completed by carrying out text regularity, text segmentation and word sound conversion on the advertisement script, the control prediction is carried out on the rhythm of the advertisement script such as text accent, duration, pause and the like, corresponding voice unit fragments are selected from a large-scale voice library recorded in advance, then the unit fragments are spliced, acoustic feature parameter extraction is completed according to the emotion demand characteristics such as happiness, sadness, angry and the like which are supposed to be embodied in the advertisement language script in the advertisement creative, the statistical modeling and model parameter prediction are carried out on acoustic feature parameters extracted by a vocoder by utilizing the methods such as statistical parameter synthesis and deep learning voice synthesis based on a hidden Markov model, and the acoustic feature parameters obtained by model prediction are input into the vocoder to complete voice synthesis, so that voice generation with emotion for language related texts in the advertisement script is realized. The specific process is shown in fig. 3.
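For illustration only, the unit-selection-and-splicing step described above can be sketched as follows. The unit library, unit names and "waveforms" are all invented toy stand-ins; a real system would add prosody prediction and a neural vocoder as the text describes.

```python
# Toy "pre-recorded speech library": phone-like units mapped to waveforms
# (short lists of samples standing in for audio frames; values are made up).
UNIT_LIBRARY = {
    "ni":  [0.1, 0.2, 0.1],
    "hao": [0.3, 0.4, 0.2],
    "sil": [0.0, 0.0],
}

def text_to_units(script_text):
    """Stand-in for text regularization + segmentation + word-to-sound
    conversion: keep only tokens that exist as units in the library."""
    return [u for u in script_text.split() if u in UNIT_LIBRARY]

def synthesize(script_text):
    """Select matching unit fragments and splice them into one waveform,
    inserting a short pause ("sil") after each unit."""
    waveform = []
    for unit in text_to_units(script_text):
        waveform.extend(UNIT_LIBRARY[unit])
        waveform.extend(UNIT_LIBRARY["sil"])
    return waveform

wave = synthesize("ni hao")
```

The splice boundaries here are naive; the statistical-parametric and vocoder stages in the patented flow exist precisely to smooth such joins.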
Virtual digital human action capture and driving: motion position sensors such as gyroscopes, magnetometers and accelerometers, together with cameras, are used to complete inertial and optical capture of real actors' mouth shapes, expressions, and limb actions such as running, jumping and fighting. Traditional multi-view-geometry three-dimensional reconstruction algorithms are combined with deep-learning-based reconstruction algorithms to record, process and calculate parameters such as the three-dimensional spatial pose, speed and acceleration of marker points, and a visual language model of virtual digital person actions is constructed. A multi-modal action-driving engine supporting sensors, text, speech, video, controllers and scripts is generated; with the aid of high-precision skeleton binding and full-data-set training, the virtual digital person is driven to complete rich actions such as running, jumping and flipping, as well as facial expressions under various moods such as happiness, sadness, anger and surprise.
Virtual digital human voice animation synthesis: a mapping model from the virtual digital person's voice sequence to its action sequence is established; the model's input is a sequence of voice features and its output is key action and expression feature parameters. Convolutional and recurrent neural network methods are used to learn and train the mapping between voice features and action and expression parameters, yielding an accurate voice-to-action mapping model. Based on this model, the virtual digital person's voice is consistently matched to mouth shapes, lips, facial expressions and body actions, realizing voice-animation synthesis and driving the virtual digital person to make the relevant actions and expressions according to a given voice tone.
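A minimal sketch of the voice-to-mouth-shape mapping idea, with the CNN/RNN model replaced by a single hand-set recurrent unit (all weights and feature values are invented, not the claimed model):

```python
def speech_to_mouth(frames, w=0.8, recur=0.2):
    """Map a sequence of per-frame speech energy features to mouth-opening
    parameters with one recurrent smoothing term:
        out[t] = w * frames[t] + recur * out[t-1]
    This is a one-unit stand-in for the trained CNN/RNN mapping model."""
    out, prev = [], 0.0
    for f in frames:
        cur = w * f + recur * prev
        out.append(cur)
        prev = cur
    return out

# Invented energy features: silence, a loud frame, a quieter frame.
params = speech_to_mouth([0.0, 1.0, 0.5])
```

The recurrence illustrates why a recurrent model suits this task: the mouth shape at time t depends on both the current sound and the preceding pose, giving temporally smooth animation.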
(2) Virtual digital personal advertising content production.
Based on the advertisement scene script and the virtual digital person role setting, on the basis of completing virtual digital person modeling, the virtual digital person advertisement content video generation and output are completed based on the technologies of virtual reality, augmented reality, artificial intelligence, computer graphics and the like, and the main working principle and process are as shown in figure 4.
Virtual digital human speech synthesis with emotion: and (3) completing speech synthesis from advertisement language text to emotion language by utilizing the virtual digital person modeling process described in the step (1) based on the advertisement scenario language script according to the advertisement creative, the content scenario and the set roles and emotion characteristics of the virtual digital person, and generating virtual digital person computer audio.
Virtual digital person animation generation with emotion: according to the advertising creative and advertising scenario script and setting the character characteristics and emotion characteristics of the virtual digital person, based on the virtual digital person action capturing and driving program which is established and realized by the virtual digital person model development in the step (1), a series of designs of the gestures and actions of the virtual digital person which meet the requirement of advertising scenario development and embody the emotion characteristics are completed, and the computer animation is generated.
Advertisement scene and environment generation: according to the scene design in the development of advertising creatives and advertising dramas, the three-dimensional computer graph animation design tool is utilized to complete the computer modeling and scene deduction design generation of advertising backgrounds, scenes and environments related to advertising brand contents.
Virtual digital human advertisement content synthesis and output: according to the advertising creative, advertisement script setting and content scenario requirements, and based on the voice-animation synthesis program realized in the virtual digital person model development of step (1), the consistent synthesis of the virtual digital person's voice and animation is completed; on this basis, video editing, splicing and optimization are completed manually, and finally the virtual digital person advertisement content main body is generated, providing the user with a viewing angle, a movement route and a travelling mode, letting the user feel the changes of space, brightness, temperature and sound in the advertisement scene as well as the human body's perception of objects and the environment, and producing advertisement video output closer to the user's feeling, more lifelike and more vivid.
2. Virtual digital person advertisement intelligent interaction. Interactive interaction between the user and the virtual digital person advertisement is realized within the advertisement: on one hand, the user can interact with the virtual digital person in real time by voice and text and obtain information about the quality, functional performance and product service of the advertised product; on the other hand, a virtual digital person of the user can be generated using virtual reality and augmented reality technology, so that the user's virtual digital person experiences the use effect of the advertised product in the advertisement scene. The user can thus participate in the advertisement interactive experience autonomously, enhancing the advertisement effect.
(1) Virtual digital human advertisement interaction.
In the playing process of the virtual digital person advertisement, the interactive interaction between the virtual digital person and the advertisement user can be realized, and the virtual digital person advertisement is mainly divided into two types of services, including the consultation interactive service about products between the virtual digital person and the advertisement user and the interactive service about product experience of clients, so that the users can clearly know the characteristics of the products, autonomously participate in the advertisement interactive experience, and the advertisement effect and the playing quality are enhanced. The operation and principle are shown in fig. 5.
Product consultation intelligent service: the method comprises the steps of summarizing and carding the knowledge of the use function, the product performance, the operation use, the after-sales service, the technical support and the like of the advertised product, classifying and tagging the knowledge according to the type, the theme, the keywords and the like of the product knowledge to form an advertised product knowledge base, and providing consultation services about the content such as the product quality, the product operation use, the product service and the like to an advertisement user in a voice dialogue and multimedia interaction mode in the advertisement broadcasting process based on the knowledge base.
The specific process is as follows: the system receives the user's voice input, converts the user's speech into text based on a hidden Markov speech recognition model, and performs word segmentation, keyword extraction, synonym expansion, sentence vector calculation and other processing on the text, realizing speech recognition and content understanding. Based on the processing result, retrieval matching is carried out in the knowledge base: the question set in the knowledge base is ranked using rules, machine learning and deep learning, the most similar question is picked out, and the answer corresponding to that question in the knowledge base is returned, completing the intelligent response to the user's question. Based on the question result retrieved from the knowledge base, the speech synthesis program implemented by the virtual digital person model is invoked to convert the retrieved answer text into a friendly spoken reply for the user.
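The retrieval-matching step can be sketched as below. The stored questions, answers and the similarity measure (word-set Jaccard overlap) are illustrative stand-ins for the rule/machine-learning ranking the text describes:

```python
# Hypothetical product knowledge base: stored question -> stored answer.
KNOWLEDGE_BASE = {
    "how long is the product warranty": "The warranty lasts two years.",
    "how do I clean the device": "Wipe it with a dry soft cloth.",
}

def jaccard(a, b):
    """Word-set overlap similarity between two questions."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def answer(user_question):
    """Pick the most similar stored question and return its answer."""
    best_q = max(KNOWLEDGE_BASE, key=lambda q: jaccard(q, user_question))
    return KNOWLEDGE_BASE[best_q]

reply = answer("what is the warranty period of the product")
```

In use, `reply` would then be passed to the speech-synthesis program to produce the spoken answer; a production system would replace Jaccard overlap with the sentence-vector similarity mentioned above.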
Product experience service: during advertisement broadcasting, product use experience is provided to users through advertisement interaction. Through interaction with the virtual digital person, the user adjusts the virtual digital person's parameter settings so that it takes on the appearance characteristics of the user's own image; then, based on virtual reality, augmented reality and human-computer interaction technology, the user can intuitively experience the use effect of the advertised product. For example, after the user's parameters are set and the virtual digital person has the user's appearance characteristics, the user can experience the wearing effect of clothing products, or the use and operation of household appliance products, and the like.
Generating a user virtual digital human model: and calling a virtual digital human model program, collecting facial form and five sense organs appearance information of the user by using a mobile phone camera, inputting skin complexion, skin quality, height, body shape and other information of the user, and generating a virtual digital human appearance model of the user. Based on the appearance image model, the voice synthesis and the action capture are completed.
User virtual digital human interactive experience: according to the characteristics of the product, the user inputs the manner of using the experience. And calling a virtual digital person advertisement content making program, adding a virtual digital person model of a user into a virtual digital person product advertisement scene, providing scenario development selection for the user based on the original product advertisement scenario, and providing vivid product use experience for the user according to the scenario development mode selected by the user. For example, the user may select different sizes, different colors of the advertised clothing product to view the dressing effect on the user's virtual digital persona. For equipment product advertisements, a user can input operation instructions in sequence according to system prompts according to scenario setting, and experience the product using effect.
3. Accurate advertisement delivery and release in the Internet environment. According to the advertiser's needs and the advertisement content, characteristics of advertisement audiences such as gender, geographic location, behavior habits and interests are intelligently analyzed; users' preferences are accurately matched with advertisement content, and suitable virtual digital person advertisement content is pushed to the right crowd at the right time and in the right scene, realizing accurate advertisement release. Meanwhile, the system can automatically adjust and optimize according to the delivery effect, so that the traffic effect of advertisement delivery is optimal.
Internet advertisement delivery and release management: according to the advertiser's needs and advertisement content, the habits and interest characteristics of advertisement audiences are intelligently analyzed, realizing accurate delivery and release management; the main working process is shown in figure 6.
1) And (5) collecting big data of Internet users.
The data acquisition tool is used to collect publicly available Internet user data and data authorized by the user, such as the user's basic information (age, gender, etc.), device information, related location area information, browsing habits, access duration, usage frequency, consumption records, favorites and preferences, and behavior tracks, forming the user's original database.
2) Raw data preprocessing.
The collected original data is cleaned, converted, integrated and reduced using data processing tools combined with manual work, improving data quality. Through data cleaning, missing data is handled and abnormal data is identified, deleted, replaced, interpolated and corrected. Through data integration, data from multiple data sources is merged and stored according to data keywords, eliminating data redundancy. Through data transformation, simple function transformation and normalization are applied to the data to meet subsequent analysis requirements. Simple function transformations include squaring, logarithm and difference operations on the data, and the conversion of non-stationary sequences. Normalization covers dimension, decimal-scaling, dispersion (min-max) and standard-deviation normalization of the data. Data reduction discovers and retains the useful features of the data, reduces the influence of invalid or erroneous data on subsequent analysis and modeling while preserving the original character of the data as much as possible, reduces storage space, and lays the foundation for subsequent data analysis.
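As a sketch, two of the normalization methods named above (dispersion/min-max normalization and standard-deviation/z-score normalization) look like this; the sample data is invented:

```python
def min_max(xs):
    """Dispersion (min-max) normalization: rescale values into [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def z_score(xs):
    """Standard-deviation normalization: zero mean, unit variance."""
    n = len(xs)
    mean = sum(xs) / n
    std = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return [(x - mean) / std for x in xs]

scaled = min_max([10, 20, 30])        # -> [0.0, 0.5, 1.0]
standardized = z_score([10, 20, 30])  # symmetric around 0
```

Normalizing features this way keeps attributes measured in different units (e.g. age vs. consumption amount) comparable in the downstream portrait models.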
3) And (5) generating a user portrait. The user representation is built from three dimensions.
Static attribute dimension portrait. From the dimension of the user's static attributes, the user's basic attribute information is static data, including age, gender, occupation, education, consumption level, industry, etc.; these attribute labels are basically stable and, once constructed, need not be updated for a long time. Users who have filled in their information are used as samples; feature training is performed with machine-learning models such as logistic regression (Logistic Regression, LR), factorization machines (Factorization Machine, FM) and gradient boosting decision trees (Gradient Boosting Decision Tree, GBDT); static attributes are predicted for unlabeled users, and labels are propagated to similar users.
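A toy sketch of the logistic-regression variant of this step: labelled users train a model that then predicts the attribute for an unlabelled user. Features, labels and hyper-parameters are invented for illustration and stand in for the LR/FM/GBDT training described above:

```python
import math

def train_lr(X, y, lr=0.5, epochs=2000):
    """Fit logistic regression by stochastic gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - yi                        # gradient of log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) > 0.5 else 0

# Hypothetical feature: normalized consumption level; label: "high-end user".
X, y = [[0.1], [0.2], [0.8], [0.9]], [0, 0, 1, 1]
w, b = train_lr(X, y)
label = predict(w, b, [0.85])   # predict for an unlabelled user
```

In practice the sample users' filled-in profiles supply `y`, and the predicted label is what gets "propagated" to similar unlabeled users.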
User interest dimension portrait. First, content modeling is performed: core information is extracted, labeled and counted from the user's massive behavior logs, and a hierarchical interest label system is constructed across three layers (classification, topic and keyword); labels of several granularities make matching convenient in use, ensuring both the accuracy and the generalization of the labels. Second, based on network behavior data such as access time periods, access devices, traffic sources, visited page content, click-through rate and dwell time, the user's interest in classifications, topics and keywords is calculated using a Bayesian clustering method, yielding the weight of the user's interest labels at each layer and forming the user's interest dimension portrait.
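The weight computation can be illustrated minimally as follows, with dwell time as the only behavior signal; the log records and label names are hypothetical and the full method would combine the other signals (click-through rate, traffic source, etc.) listed above:

```python
from collections import defaultdict

def interest_weights(visits):
    """visits: list of (interest_label, dwell_seconds) log records.
    Each visit contributes its dwell time to its label; weights are
    the per-label totals normalized so they sum to 1 for the user."""
    totals = defaultdict(float)
    for label, secs in visits:
        totals[label] += secs
    grand = sum(totals.values())
    return {label: t / grand for label, t in totals.items()}

weights = interest_weights([("sports", 120), ("finance", 60), ("sports", 20)])
```

The resulting dictionary is one layer of the interest portrait; the same computation repeated at classification, topic and keyword granularity gives the three-layer label weights.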
User geographic location portrait. The corresponding city-level location is obtained from user IP address resolution, and a more accurate location may be constructed from user GPS data.
4) And (5) accurately putting advertisements into strategies and releasing advertisements.
According to the advertiser's needs, the product's advertisement content and the related release demands, the user groups of the advertised products are identified; for example, blood pressure meter products mainly face middle-aged and elderly users, while social software mainly faces young groups. Product advertisement features are matched against user groups and user portrait features to obtain the active online time periods, location areas and interest preferences of the specific user group; the optimal time period, release position and release group for the product's advertisement are determined in turn, forming an advertisement release strategy; and the virtual digital person advertisement content is pushed to the appropriate group at the appropriate time and in the appropriate scene, completing accurate, audience-by-audience advertisement release. The release strategy may also be adjusted and optimized based on the advertisement delivery effect evaluation.
4. Intelligent identification and review of advertising content. The Internet advertisement content to be released is intelligently identified: using multi-modal recognition capability, characters, gestures, postures, images, text and speech in advertisement fragments are automatically processed and intelligently examined, giving a risk judgment on whether the content is illegal together with action suggestions, thereby purifying advertisement content, avoiding the risk of advertising violations, and improving advertisement quality.
Intelligent identification and examination of internet advertisement content: the virtual digital person Internet advertisement making and intelligent publishing system performs intelligent recognition and filtering processing on audio and video information in the Internet advertisement, performs risk recognition and judgment on advertisement content, and completes computer intelligent examination before the Internet advertisement content is published. The operation is shown in fig. 7.
1) And constructing a sensitive word stock.
The construction of the sensitive word library is the basic work for detecting and filtering sensitive words in Internet advertisement audio and video information. Words with unhealthy connotations, uncivilized words, and illegal or non-compliant words are collected and sorted to form a sensitive word library, and the relations between sensitive words are stored using a directed graph.
2) Internet advertisement audio intelligent recognition and filtration.
Dynamic transcoding, deep-learning-based speech recognition and intelligent semantic feature analysis are applied to Internet advertisement audio information, so that the scene audio content of the advertisement can be detected and recognized and illegal content in the advertisement's audio identified.
Advertisement audio information dynamic transcoding: the advertisement audio information is subjected to sampling and A/D conversion, pre-emphasis, framing, windowing, endpoint detection, noise filtering and other preprocessing on the voice digital signal, and voice signal characteristic parameters are extracted to form a voice signal characteristic sequence.
Deep learning-based speech recognition: based on the hidden Markov acoustic model, the corresponding characteristic parameters of the voice model are obtained through deep learning training in a multi-scene and multi-dimensional advertisement scene and stored in a voice template library. And performing pattern matching on the advertisement audio frequency voice signal characteristic sequence and a model in a template library so as to obtain voice recognition text output of multi-scene and multi-dimensional advertisement audio frequency information.
Illegal sensitive word detection: based on a deterministic finite automaton model, words in the speech-recognition text to be detected are compared against the sensitive word directed graph and the sensitivity similarity is calculated. According to the set sensitivity threshold, two-layer sensitive word detection is designed: the first-layer filter performs prefix detection for preliminary detection and filtering of sensitive text, and the second-layer filter judges semantics by means of an SVM classifier. If the sensitivity degree is greater than the set threshold, the word is regarded as a sensitive word; sensitive words are identified and filtered out, completing the recognition and examination of illegal advertisement audio speech.
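The first-layer prefix detection can be sketched as a character trie (one concrete form of the "directed graph of sensitive words") matched in a left-to-right scan. The word list is invented; the second-layer SVM semantic check is not sketched here:

```python
def build_trie(words):
    """Store sensitive words as a directed graph of characters."""
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["#end"] = True  # mark a complete sensitive word
    return root

def find_sensitive(text, trie):
    """Scan the text once; at each position, walk the trie as far as the
    characters match (prefix detection) and record completed words."""
    hits = []
    for i in range(len(text)):
        node, j = trie, i
        while j < len(text) and text[j] in node:
            node = node[text[j]]
            j += 1
            if node.get("#end"):
                hits.append(text[i:j])
    return hits

trie = build_trie(["scam", "fake cure"])
found = find_sensitive("this scam ad sells a fake cure", trie)
```

Text that passes this fast prefix filter cleanly needs no further work; only flagged spans would be handed to the second-layer semantic classifier.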
3) And intelligent identification and filtration of Internet advertisement videos.
Shot segmentation and key video frame extraction are carried out from the Internet advertisement video stream, picture and text positioning is carried out aiming at the characteristics of video texts, and picture and text recognition and sensitive word meaning detection are carried out, so that illegal and illegal contents in the Internet advertisement video are identified. The main working process is as follows.
Video shot segmentation and key frame extraction: a video is spliced from multiple shot clips, and different transition effects are used to join shots during video production. For both gradual and abrupt shot changes, a sliding-window-based shot segmentation method is adopted to cut the video; the first, middle and last frames are selected from each cut shot's frame sequence, and key frames are determined by comparing inter-frame differences.
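A toy frame-difference boundary detector illustrates the idea: a cut is declared where the difference between consecutive frames exceeds a threshold. Frames are reduced to short intensity vectors and both the data and the threshold are invented; a real detector would use the sliding window over histogram differences described above:

```python
def frame_diff(a, b):
    """Mean absolute difference between two frame intensity vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def shot_boundaries(frames, threshold=0.5):
    """Indices i where a new shot starts (abrupt change before frame i)."""
    return [i for i in range(1, len(frames))
            if frame_diff(frames[i - 1], frames[i]) > threshold]

# Two near-constant shots with an abrupt cut between frame 2 and frame 3.
frames = [[0.1, 0.1], [0.1, 0.2], [0.1, 0.1], [0.9, 0.9], [0.9, 0.8]]
cuts = shot_boundaries(frames)
```

Given `cuts`, each segment between boundaries is one shot, from which the first, middle and last frames become key-frame candidates.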
Picture text region detection and positioning: for the characteristics of advertisement video text, based on a YOLO-series detection algorithm, the image is evenly partitioned with a grid, the partitioned image is input into a convolutional neural network to generate target detection boxes, a non-maximum suppression algorithm is used to keep the detection boxes with higher confidence, candidate box regions in the image are obtained, and picture-text positioning is achieved.
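The non-maximum suppression step is standard and can be shown concretely; boxes here are `(x1, y1, x2, y2, score)` tuples with invented values, standing in for the network's raw detections:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, iou_thresh=0.5):
    """Keep highest-confidence boxes, dropping overlapping duplicates."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b, k) <= iou_thresh for k in kept):
            kept.append(b)
    return kept

# Two detections of the same text region plus one distinct region.
boxes = [(0, 0, 10, 10, 0.9), (1, 1, 10, 10, 0.8), (20, 20, 30, 30, 0.7)]
kept = nms(boxes)
```

The surviving boxes are the candidate text regions passed on to the image-text recognition stage.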
Image text recognition based on deep learning: and extracting text sequence characteristics based on a bidirectional cyclic neural network integrating an attention mechanism, and carrying out image text recognition.
Illegal sensitive word detection: based on a deterministic finite automaton model, words in the recognized text to be detected are compared against the sensitive word directed graph and the sensitivity similarity is calculated. A two-layer sensitive word detection filter is designed: the first layer performs prefix detection for preliminary detection and filtering of sensitive text, and the second layer judges semantics by means of an SVM classifier, achieving sensitive word detection.
5. And comprehensively evaluating advertisement Internet delivery effect. And carrying out big data analysis and evaluation on advertisement putting effect based on advertisement putting data, advertisement effect data, product operation data, third party monitoring data and user behavior data, analyzing propagation effect and conversion effect of advertisements, and carrying out comprehensive analysis and evaluation on short-term influence and long-term value effect of virtual digital person advertisements on influence on user psychology, emotion and product brand cognition.
The intelligent comprehensive evaluation of the advertisement effect is realized by constructing an advertisement effect evaluation index system, collecting big data of the advertisement putting effect, and comprehensively analyzing and evaluating the advertisement putting effect by applying a big data intelligent analysis technology, wherein the main working process and principle are shown in figure 8.
1) And (5) collecting and preprocessing big data of advertisement effect evaluation.
And collecting virtual digital person Internet advertisement putting data of an Internet advertisement platform, product operation data of an advertiser, behavior data of an Internet user for browsing advertisements, monitoring data of the virtual digital person Internet advertisements and the like by a third party organization to form an original large data set for evaluating the virtual digital person Internet advertisement effect. And cleaning, converting, integrating and stipulating the collected advertisement effect evaluation original data, improving the data quality, and forming data which can be used by an advertisement effect evaluation algorithm.
2) And constructing an advertisement effect evaluation index system.
A hierarchical advertisement effect evaluation index system is constructed from the aspects of measuring the propagation effect, the appeal effect, the duration effect, the behavior effect on an audience and the like of the virtual digital person Internet advertisement.
First-level indexes: the primary index comprises perception of the advertisement, interaction of the advertisement, conversion of the advertisement, sharing of the advertisement, persistence of the advertisement and the like.
Second-level index: the perception indexes of the advertisement comprise secondary indexes such as advertisement exposure, advertisement click-through rate, user access amount, website jump rate and the like. The interactive force index of the advertisement comprises secondary indexes such as interaction mode and time with virtual digital people, user praise and comment quantity, search quantity and the like. The advertisement conversion power index comprises secondary indexes such as the pre-purchase rate, the instant transaction amount, the transaction amount in a period of time when the advertisement is put in, the repeated purchase rate and the like of the product. The sharing force of the advertisement comprises secondary indexes such as the sharing forwarding times of the user on the advertisement through a social tool. The persistence of the advertisement comprises secondary indexes such as emotion recognition degree of the user on the advertisement, memorization degree of the advertisement, stability of audience, stability of products and the like.
3) Advertisement effectiveness evaluation model and calculation.
Based on the virtual digital person Internet advertisement effect evaluation index system, an advertisement effectiveness evaluation model is constructed.
The method comprises the following specific steps: firstly, analyzing the relation among factors in a two-stage advertisement effect evaluation index system, comparing the importance of the factors in the same layer relative to the factors of the upper layer in pairs, and constructing a weight judgment matrix.
Second, the Delphi method is used to convert the relative comparison of multiple indexes into pairwise comparisons; values are assigned with a scale assignment method, a judgment matrix is established, and the index weights are calculated iteratively by the power method.
And carrying out consistency test on the calculated index weight. If the obtained consistency test result is smaller than the specified threshold, the judgment matrix has satisfactory consistency, otherwise, the judgment matrix needs to be adjusted until a satisfactory calculation result is obtained.
And according to the index weight, finishing the evaluation and calculation of the virtual digital human advertisement effect.
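The weight calculation and consistency test of the steps above can be sketched as follows. The 3x3 judgment matrix and the random consistency index (RI) values are standard illustrative choices, not data from the patent:

```python
def power_method(A, iters=100):
    """Power iteration for the principal eigenvector of a judgment
    matrix; the normalized eigenvector gives the index weights and the
    eigenvalue estimate is used for the consistency test."""
    n = len(A)
    v = [1.0 / n] * n        # start vector, normalized to sum 1
    lam = float(n)
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(w)         # eigenvalue estimate (since sum(v) == 1)
        v = [x / lam for x in w]
    return v, lam

def consistency_ratio(A):
    """CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    n = len(A)
    _, lam = power_method(A)
    ci = (lam - n) / (n - 1)
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]
    return ci / ri if ri else 0.0

# Pairwise importance comparisons of three first-level indexes (invented,
# perfectly consistent: index 1 is 2x index 2 and 4x index 3).
A = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
weights, _ = power_method(A)
cr = consistency_ratio(A)    # near 0: the matrix passes the test
```

A CR below the usual 0.1 threshold corresponds to the "satisfactory consistency" condition in the step above; otherwise the judgment matrix must be revised.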
Through advertisement effect evaluation calculation, virtual digital person advertisement propagation effect and conversion effect evaluation can be obtained, influence of virtual digital person internet advertisements on aspects of user mind, emotion and product brand cognition can be reflected, short-term influence and long-term value effect on internet advertisements are mastered, and a reference basis is provided for perfecting and optimizing and adjusting internet advertisement strategies for virtual digital person internet advertisement content.
Example 2: the embodiment of the invention provides an advertisement making and publishing system based on a virtual digital person, which comprises the following steps: the system comprises an acquisition module, an information determination module, a construction module, a matching module, a detection module and a release module.
The acquisition module is used for acquiring the user portrait; the user representation is three-dimensional attribute data determined based on authorization data of the user; the three-dimensional attribute data includes: gender, interests, and geographic location.
And the information determining module is used for determining the figure appearance information according to the user portrait.
The building module is used for building the virtual digital person according to the appearance information by adopting an appearance reconstruction and appearance rendering mode.
The matching module is used for carrying out information matching by adopting a synthetic model according to the advertisement scene information and the virtual digital person to obtain a matching animation; the information matching includes: speech synthesis and action matching; the synthetic model is constructed by adopting a convolutional neural network and a cyclic neural network; the advertisement scene information includes: advertisement scenario content, advertisement required action gestures and advertisement language timbre.
The detection module is used for adopting an identification detection model to carry out examination detection on the matching animation to obtain a detected matching animation; the recognition detection model is constructed based on a deep learning method and a bidirectional recurrent neural network fused with a self-attention mechanism.
And the release module is used for releasing the detected matching animation to obtain release advertisements.
In practical applications, the system of the present invention may also be structured as shown in fig. 9. The system mainly comprises a virtual digital person engine, namely a virtual digital person generation subsystem, together with a virtual digital person advertisement content production subsystem, a virtual digital person advertisement interaction subsystem, an advertisement content intelligent identification and review subsystem, an advertisement putting and release management subsystem, and an advertisement effect intelligent comprehensive evaluation subsystem. The main functions of the subsystems are as follows.
(1) A virtual digital person generation subsystem.
The virtual digital person generation subsystem completes the image design and generation of the virtual digital person. According to the advertising creative and the digital image requirements of the advertisement scene, a virtual digital human model is generated from a real human prototype, realizing the appearance, action, voice and emotion characteristics of the virtual digital person. For the appearance, personalized elements of the virtual digital person such as hairstyle, clothing style, clothing material, clothing pattern and color, tie, headwear, glasses and rings can be provided at up to 4K resolution. Deep learning training on the actions and expressions of real persons is supported, so that actions and expressions can be produced according to the requirements of the advertising creative; voice broadcasting and emotional voice dialogue with the user can be carried out, and voice characteristics of different genders, different individuals and different types of people can be selected and simulated.
(2) A virtual digital person advertisement content production subsystem.
The virtual digital person advertisement content production subsystem selects a virtual digital person character model suited to the advertisement content according to the advertising creative, the advertisement script setting and the content scenario requirements, covering the appearance, action, gesture, voice and emotion characteristics of the virtual digital person, and performs rendering and video synthesis to generate advertisement video output containing the virtual digital person. The subsystem supports editing and conversion of the virtual digital person's appearance, voice dialogue, gesture actions and emotion, and supports multi-track mixed editing and intelligent splicing of video and audio. With the virtual digital person as the main body of the generated advertisement content, the subsystem provides the user's viewing angle, movement route and travelling mode, senses changes of space, brightness, temperature and sound in the advertisement scene as well as the human body's perception of objects and the environment, and generates advertisement video output that is closer to the user's senses and more lifelike and vivid.
(3) A virtual digital person advertisement interaction subsystem.
The virtual digital person advertisement interaction subsystem realizes interaction between the virtual digital person and the user during advertisement broadcasting. While the advertisement is playing, the virtual digital person can carry out real-time voice dialogue and multimedia interaction with the user about the advertised product's functional performance, product quality, operation and use, and product service. Product experience interaction is also supported: for example, the virtual digital person's appearance (face, height and body form) can be converted to match the user's image, so that the user experiences the wearing effect of clothes and hats of different models through the virtual digital person. Through such interaction during advertisement broadcasting, the user can participate autonomously in the advertisement interactive experience, enhancing the advertisement effect and broadcasting quality.
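A minimal sketch of the consultation side of this interaction, assuming the user's speech has already been transcribed to text; the knowledge-base entries and the whitespace word segmentation are hypothetical simplifications of the retrieval-and-matching step:

```python
def kb_answer(question: str, knowledge_base: dict) -> str:
    # Word segmentation (naive whitespace split), then overlap-based retrieval
    q_words = set(question.lower().split())

    def overlap(entry):
        return len(q_words & set(entry.lower().split()))

    best = max(knowledge_base, key=overlap)
    # Fall back to a clarifying prompt when nothing matches
    if overlap(best) == 0:
        return "Could you rephrase the question?"
    return knowledge_base[best]

kb = {
    "how to wash the jacket": "Machine wash cold, hang dry.",
    "what sizes are available": "Sizes S through XXL are in stock.",
}
```

The matching result is then spoken back by the virtual digital person, completing one turn of the product consultation dialogue.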
(4) An advertisement content intelligent identification and review subsystem.
The advertisement content intelligent identification and review subsystem completes intelligent multi-modal identification of internet advertisement content: it identifies and processes figures, pictures, actions, gestures, words, voices and videos in advertisement segments, and judges the risk of the advertisement content, including whether illegal information, sensitive information, or information violating social public order and good customs exists. It thereby completes the computer intelligent review that serves as a guardrail before internet advertisement content is published, gives review results, points out existing problems and suggestions for improvement, raises the review efficiency of internet advertisement publishing content, and plays an important role in purifying advertisement content and guaranteeing advertisement publishing quality.
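The sensitive-word comparison underlying this review can be sketched as follows; the word list is hypothetical, and `difflib.SequenceMatcher` stands in for whatever similarity measure a production reviewer would use:

```python
from difflib import SequenceMatcher

# Hypothetical sensitive-word library; a production system would load
# a maintained lexicon instead
SENSITIVE_WORDS = {"gamble", "counterfeit"}

def review_tokens(tokens, threshold=0.8):
    # Flag each recognized token whose similarity to a sensitive word
    # meets the threshold; return (token, matched_word, similarity) triples
    flagged = []
    for token in tokens:
        for word in SENSITIVE_WORDS:
            ratio = SequenceMatcher(None, token.lower(), word).ratio()
            if ratio >= threshold:
                flagged.append((token, word, round(ratio, 2)))
    return flagged
```

Because the comparison uses a similarity ratio rather than exact equality, lightly obfuscated variants of a sensitive word can still be caught.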
(5) An advertisement putting and release management subsystem.
According to the advertiser's needs and the advertisement content, and based on collected information about the crowds to whom internet advertisements have been released and the corresponding advertisement users, the advertisement putting and release management subsystem uses big data intelligent analysis means such as cluster analysis and deep learning to model user characteristics such as gender, age, behavior habits, hobbies and interests, and geographic location, and to accurately match them with advertisement content. This breaks through the time, space and audience selection limits of traditional advertisements, pushes suitable virtual digital person advertisement content to suitable crowds at suitable times and in suitable scenes, completes the accurate putting and release of advertisement content, and realizes "a thousand faces for a thousand people" in internet advertisement putting. Furthermore, the target crowd, time period, content and traffic of the advertisement can be automatically adjusted and optimized according to feedback on the putting effect, so that advertisement putting reaches the optimal effect, advertisement operation cost is saved, and advertisement release quality and operation effect are improved.
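The cluster-analysis step can be illustrated with a self-contained k-means sketch over hypothetical two-dimensional user features; a production system would cluster much richer attribute vectors:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    # Cluster user feature vectors (e.g. age, activity level) so that
    # each cluster can be matched with suitable advertisement content
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

users = [(20, 0.9), (22, 1.0), (60, 0.1), (63, 0.2)]  # (age, activity)
centers, groups = kmeans(users, k=2)
```

Each resulting cluster center summarizes one audience segment, to which matching advertisement content can then be pushed.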
(6) An advertisement effect intelligent comprehensive evaluation subsystem.
Using big data intelligent analysis technology and based on the collection and summarization of advertisement release data, advertisement effect data, product operation data, third-party monitoring data and user behavior data, the advertisement effect intelligent comprehensive evaluation subsystem builds an advertisement release effect index evaluation system, comprehensively analyzes and evaluates the advertisement release effect, and analyzes the advertisement propagation effect, the conversion effect, and the influence on user psychology, emotion and product brand cognition. It thereby grasps the short-term influence and long-term value effect of internet advertisements, and provides a reference basis for perfecting virtual digital person internet advertisement content and for optimizing and adjusting internet advertisement strategies.
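One conventional way to derive index weights for such an evaluation system — in line with the weight judgment matrix and consistency test recited in the claims — is the analytic hierarchy process (AHP). A minimal sketch, using the standard random-index table and a hypothetical judgment matrix:

```python
# Random index values for the AHP consistency test (standard table, n <= 5)
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_weights(judgment):
    # Column-normalize the judgment matrix, then average each row
    n = len(judgment)
    col_sums = [sum(judgment[i][j] for i in range(n)) for j in range(n)]
    return [sum(judgment[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

def consistency_ratio(judgment):
    # CR = CI / RI with CI = (lambda_max - n) / (n - 1); a matrix with
    # CR < 0.1 is conventionally accepted as consistent
    n = len(judgment)
    w = ahp_weights(judgment)
    aw = [sum(judgment[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1) if n > 1 else 0.0
    ri = RANDOM_INDEX[n]
    return ci / ri if ri else 0.0
```

If the consistency ratio exceeds the set threshold, the pairwise importance comparisons are revised before the weights are used to score the release effect.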
The virtual digital person generation subsystem is the basis of the whole system and provides a virtual digital person model and related parameter support for the virtual digital person advertisement content production subsystem and the virtual digital person advertisement interaction subsystem.
The virtual digital person advertisement content making subsystem generates advertisement content output containing the virtual digital person according to the advertisement creative and the advertisement script setting based on the virtual digital person model provided by the virtual digital person generating subsystem.
The advertisement has man-machine interaction capability during playing: the user can carry out voice interaction with the virtual digital person and adjust the relevant parameter settings and information input of the edited virtual digital person, so as to obtain a use experience of the advertised product.
Through intelligent identification and review of advertisement content and interactive content, the advertisement content intelligent identification and review subsystem provides content identification and review results for the virtual digital person advertisement content production subsystem and the virtual digital person advertisement interaction subsystem, ensuring the legal compliance and quality of the advertisement content and the interactive content.
The advertisement putting and releasing management subsystem receives the advertisement content of the virtual digital person advertisement content making subsystem and the interactive content of the virtual digital person advertisement interactive subsystem, and generates an advertisement putting strategy by utilizing big data analysis, so that the accurate putting and releasing of the virtual digital person internet advertisement is realized.
The advertisement effect intelligent comprehensive evaluation subsystem provides effect evaluation results for the advertisement putting and release management subsystem, helping to continuously adjust and optimize advertisement putting and release strategies; it also provides advertisement effect evaluation results for the virtual digital person advertisement content production subsystem and the virtual digital person advertisement interaction subsystem, promoting continuous optimization of advertisement content production and interaction modes and improving advertisement quality.
Based on virtual digital person image production and generation, the invention designs and develops the virtual digital person image, completes virtual digital person voice synthesis, motion-capture virtual driving, and consistent synthesis of voice and animation, and can flexibly configure the image parameters of the virtual digital person.
Based on the virtual digital person model, and according to the advertisement scene script and the virtual digital person role setting, virtual digital person advertisement content video generation and output are realized by utilizing technologies such as virtual reality, augmented reality, artificial intelligence and computer graphics.
The interaction between the virtual digital person and the advertisement user is mainly divided into two types of service: product consultation interaction between the virtual digital person and the advertisement user, and product experience interaction for customers, so that users clearly understand product characteristics and participate autonomously in the advertisement interactive experience, enhancing the advertisement effect and broadcasting quality.
Intelligent recognition and filtering are performed on the audio and video information in internet advertisements, risk recognition and judgment are carried out on the advertisement content, and the computer intelligent review before internet advertisement content release is completed.
According to the advertiser's needs and the advertisement content, the habit and interest characteristics of advertisement users are intelligently analyzed, realizing accurate release management.
And constructing an advertisement effect evaluation index system, collecting big data of advertisement putting effects, and comprehensively analyzing and evaluating the advertisement putting effects by using a big data intelligent analysis technology.
The beneficial effects of the invention are as follows: it enriches internet advertisement design creativity, innovates the design modes and means of internet advertisement content, changes the current situation in which internet advertisement design lacks innovative thinking and imagination and content forms are single, and improves the innovation capability of internet advertisement design; it realizes multimedia real-time interaction between the virtual digital person and the user during internet advertisement playing, innovates the way users obtain product consultation service and carry out product experience, changes the current situation in which the internet advertisement interaction mode is single, interaction effects are poor and product experience is impossible, and improves internet advertisement quality.
The virtual digital person internet advertisement content is intelligently identified and reviewed, and illegal content in internet advertisements is automatically risk-judged, identified and filtered, improving the efficiency of internet advertisement review and purifying internet advertisement content.
Virtual digital people are introduced into internet advertisements, avoiding situations in which the advertisement brand is affected by the drawbacks of real-person endorsement advertisements, and improving advertisement quality.
The invention innovates the implementation of internet advertisement putting and release strategies and the mode of advertisement effect evaluation, analyzes the propagation effect and conversion effect of advertisements and the influence on user psychology, emotion and product brand cognition, and comprehensively analyzes and evaluates the short-term influence and long-term value effect of virtual digital person advertisements.
In the present specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts between the embodiments may be referred to each other. Since the system disclosed in the embodiment corresponds to the method disclosed in the embodiment, its description is relatively brief, and relevant points can be found in the description of the method section.
The principles and embodiments of the present invention have been described herein with reference to specific examples; the above description is intended only to help understand the method of the present invention and its core ideas. A person of ordinary skill in the art may make modifications in light of the idea of the present invention, and such modifications fall within the scope of the present invention. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (8)

1. An advertisement making and publishing method based on virtual digital people is characterized by comprising the following steps:
Acquiring a user portrait; the user portrait is three-dimensional attribute data determined based on authorization data of a user; the three-dimensional attribute data includes: gender, interests, and geographic location;
determining character appearance information according to the user portrait;
adopting an appearance reconstruction and appearance rendering mode, and constructing a virtual digital person according to the appearance information;
according to the advertisement scene information and the virtual digital person, adopting a synthesis model to carry out information matching to obtain a matching animation; the information matching includes: speech synthesis and action matching; the synthesis model is constructed by adopting a convolutional neural network and a recurrent neural network; the advertisement scene information includes: advertisement scenario content, advertisement required action gesture and advertisement language tone;
adopting a recognition detection model to review and detect the matching animation to obtain a detected matching animation; the recognition detection model is constructed based on a deep learning method and a bidirectional recurrent neural network fused with a self-attention mechanism;
and putting and releasing the detected matching animation to obtain a release advertisement.
2. The advertisement creation and distribution method based on virtual digital person according to claim 1, wherein the obtaining of the user portraits specifically comprises:
Acquiring authorization data of a user;
preprocessing the authorization data to obtain authorization processing data;
determining three-dimensional attribute data according to the authorization processing data;
and determining the three-dimensional attribute data as the user portrait.
3. The advertisement making and publishing method based on the virtual digital person according to claim 1, wherein the information matching is performed by adopting a synthetic model according to advertisement scene information and the virtual digital person to obtain a matching animation, and the method specifically comprises the following steps:
determining an information sequence by adopting a three-dimensional reconstruction algorithm according to the advertisement scene information; the information sequence includes: a speech sequence and an action sequence;
extracting characteristic data of the information sequence by adopting a convolutional neural network;
determining the mapping relation among the characteristic data by adopting a recurrent neural network;
and carrying out matching synthesis processing according to the mapping relation and the virtual digital person to obtain a matching animation.
4. The advertisement making and publishing method based on the virtual digital person according to claim 1, wherein the matching animation is reviewed and detected by adopting the recognition detection model to obtain the detected matching animation, specifically comprising the following steps:
performing voice recognition on the matching animation by adopting a deep learning method to obtain recognition text data;
comparing the identification text data with a set sensitive word library to obtain a sensitive similarity;
filtering the identification text data according to the sensitive similarity to obtain filtered text data;
dividing the matched animation by adopting a sliding window method to obtain a division animation;
adopting a bidirectional recurrent neural network fused with a self-attention mechanism to perform image text recognition on the segmentation animation to obtain a text feature sequence;
according to the filtered text data and the text feature sequence, identifying and detecting based on the sensitivity similarity to obtain a detection result;
and determining the detected matching animation according to the detection result and the matching animation.
5. The advertisement making and publishing method based on virtual digital person according to claim 4, wherein the matching animation is subjected to voice recognition by adopting a deep learning method to obtain recognition text data, and the method specifically comprises the following steps:
processing the matching animation by adopting a dynamic transcoding method, and extracting signal characteristics to obtain a voice signal;
and carrying out voice recognition on the voice signal by adopting a deep learning method based on the hidden Markov acoustic model to obtain recognition text data.
6. The advertisement making and publishing method based on the virtual digital person according to claim 1, further comprising:
acquiring voice data of a user;
performing voice conversion and word segmentation processing on the voice data to obtain problem processing data;
searching and matching are carried out in a set knowledge base according to the problem processing data, and a matching result is obtained;
and the virtual digital person performs interaction processing with the user according to the matching result.
7. The advertisement making and publishing method based on the virtual digital person according to claim 1, further comprising:
acquiring response data of the published advertisement; the response data includes: put in data, operation data, behavior data and monitoring data;
performing cleaning conversion and integrated specification processing on the response data to obtain response processing data;
performing importance comparison according to the response processing data and the set level, and determining a weight judgment matrix;
carrying out a consistency test according to the weight judgment matrix, and obtaining a comparison result based on the consistency test result and a set threshold value;
determining a response result according to the comparison result; the response results are used to characterize the spreading impact of the published advertisement.
8. An advertisement making and publishing system based on a virtual digital person, the system comprising:
the acquisition module is used for acquiring a user portrait; the user portrait is three-dimensional attribute data determined based on authorization data of a user; the three-dimensional attribute data includes: gender, interests, and geographic location;
the information determining module is used for determining figure appearance information according to the user portrait;
the building module is used for building a virtual digital person according to the appearance information by adopting an appearance reconstruction and appearance rendering mode;
the matching module is used for carrying out information matching by adopting a synthesis model according to advertisement scene information and the virtual digital person to obtain a matching animation; the information matching includes: speech synthesis and action matching; the synthesis model is constructed by adopting a convolutional neural network and a recurrent neural network; the advertisement scene information includes: advertisement scenario content, advertisement required action gesture and advertisement language tone;
the detection module is used for reviewing and detecting the matching animation by adopting a recognition detection model to obtain a detected matching animation; the recognition detection model is constructed based on a deep learning method and a bidirectional recurrent neural network fused with a self-attention mechanism;
And the release module is used for releasing the detected matching animation to obtain release advertisements.
CN202410020352.7A 2024-01-08 2024-01-08 Advertisement making and publishing method and system based on virtual digital person Active CN117541321B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410020352.7A CN117541321B (en) 2024-01-08 2024-01-08 Advertisement making and publishing method and system based on virtual digital person

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410020352.7A CN117541321B (en) 2024-01-08 2024-01-08 Advertisement making and publishing method and system based on virtual digital person

Publications (2)

Publication Number Publication Date
CN117541321A true CN117541321A (en) 2024-02-09
CN117541321B CN117541321B (en) 2024-04-12

Family

ID=89794059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410020352.7A Active CN117541321B (en) 2024-01-08 2024-01-08 Advertisement making and publishing method and system based on virtual digital person

Country Status (1)

Country Link
CN (1) CN117541321B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117932165A (en) * 2024-03-22 2024-04-26 湖南快乐阳光互动娱乐传媒有限公司 Personalized social method, system, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112906546A (en) * 2021-02-09 2021-06-04 中国工商银行股份有限公司 Personalized generation method for virtual digital human figure, sound effect and service model
WO2023124933A1 (en) * 2021-12-31 2023-07-06 魔珐(上海)信息科技有限公司 Virtual digital person video generation method and device, storage medium, and terminal
CN116402556A (en) * 2023-04-12 2023-07-07 碳丝路文化传播(成都)有限公司 Meta universe advertisement implantation method, system and storage medium based on event driving
CN116415017A (en) * 2023-03-17 2023-07-11 湖北巨字传媒有限公司 Advertisement sensitive content auditing method and system based on artificial intelligence
US20230267665A1 (en) * 2020-09-01 2023-08-24 Mofa (Shanghai) Information Technology Co., Ltd. End-to-end virtual object animation generation method and apparatus, storage medium, and terminal


Also Published As

Publication number Publication date
CN117541321B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
CN110688911B (en) Video processing method, device, system, terminal equipment and storage medium
US11858118B2 (en) Robot, server, and human-machine interaction method
Stappen et al. The multimodal sentiment analysis in car reviews (muse-car) dataset: Collection, insights and improvements
CN105895087B (en) Voice recognition method and device
Fanelli et al. A 3-d audio-visual corpus of affective communication
WO2018045553A1 (en) Man-machine interaction system and method
CN110868635B (en) Video processing method and device, electronic equipment and storage medium
CN117541321B (en) Advertisement making and publishing method and system based on virtual digital person
CN106663095A (en) Facet recommendations from sentiment-bearing content
JP2018014094A (en) Virtual robot interaction method, system, and robot
Camurri et al. The MEGA project: Analysis and synthesis of multisensory expressive gesture in performing art applications
Aslan et al. Multimodal video-based apparent personality recognition using long short-term memory and convolutional neural networks
WO2022242706A1 (en) Multimodal based reactive response generation
CN116958342A (en) Method for generating actions of virtual image, method and device for constructing action library
Volpe Computational models of expressive gesture in multimedia systems.
CN116910302A (en) Multi-mode video content effectiveness feedback visual analysis method and system
Tao et al. Emotional Chinese talking head system
JP6222465B2 (en) Animation generating apparatus, animation generating method and program
Cui et al. Virtual Human: A Comprehensive Survey on Academic and Applications
Kathiravan et al. Efficient Intensity Bedded Sonata Wiles System using IoT
CN112069836A (en) Rumor recognition method, device, equipment and storage medium
CN108334806B (en) Image processing method and device and electronic equipment
Knoppel et al. Trackside DEIRA: A Dynamic Engaging Intelligent Reporter Agent (Full paper)
Chang et al. Real-time emotion retrieval scheme in video with image sequence features
CN115951787B (en) Interaction method of near-eye display device, storage medium and near-eye display device

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant