CN116720890A - Advertisement delivery clue cleaning method and related device - Google Patents

Advertisement delivery clue cleaning method and related device

Info

Publication number: CN116720890A
Application number: CN202210189507.0A
Authority: CN (China)
Prior art keywords: screened, intention, audio stream, text, stream data
Legal status: Pending (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 陈星宇 (Chen Xingyu)
Current assignee: Tencent Technology (Shenzhen) Co., Ltd.
Original assignee: Tencent Technology (Shenzhen) Co., Ltd.
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN202210189507.0A
Publication of CN116720890A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0241 Advertisements
    • G06Q 30/0251 Targeted advertisements
    • G06Q 30/0269 Targeted advertisements based on user profile or attribute
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/26 Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Development Economics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Strategic Management (AREA)
  • Human Computer Interaction (AREA)
  • Molecular Biology (AREA)
  • Finance (AREA)
  • Evolutionary Computation (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Biophysics (AREA)
  • Accounting & Taxation (AREA)
  • Biomedical Technology (AREA)
  • General Business, Economics & Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a method for cleaning advertisement delivery clues, which can be applied to the field of cloud computing. The method comprises the following steps: acquiring audio stream data of an object to be screened; obtaining a session text of the object to be screened according to the audio stream data; determining the intention of the object to be screened by using a semantic neural network model according to the session text, wherein the intention of the object to be screened is its intention toward the target advertisement; and adjusting the target advertisement delivery proportion of the object to be screened according to that intention. In this way, the advertisement delivery effect can be optimized, labor cost can be reduced, and advertisement delivery efficiency can be improved.

Description

Advertisement delivery clue cleaning method and related device
Technical Field
The application relates to the technical field of Internet, in particular to a method for cleaning advertisement delivery clues and a related device.
Background
Advertising is one of the most common ways of disseminating information over the internet, and more and more advertisers tend to present advertisements to users via network media platforms.
Advertisers can use digital marketing to collect, over a period of time, consumers who share the same potential needs. Currently, advertisers mainly acquire customers through broad, untargeted delivery or simple crowd targeting. After an advertiser collects a large number of sales leads through advertisement-based customer acquisition, customer service personnel call the leads back at random and in no particular order to clean the leads and follow up on sales.
However, acquiring customers through broad delivery or simple crowd targeting is inefficient. In many cases a large volume of advertising does not bring a correspondingly large number of high-quality sales leads; even though many people leave personal information as a lead, they have no real intention of purchasing the merchandise. Meanwhile, cleaning leads by random manual call-backs requires a great deal of labor cost, which reduces the practicability of the scheme.
Disclosure of Invention
The embodiment of the application provides a method for cleaning advertisement delivery clues, which comprises the steps of obtaining audio stream data of an object to be screened and then converting the audio stream data to obtain a session text of the object to be screened. The intention of the object to be screened is then determined by using a semantic neural network model according to the session text, wherein the intention of the object to be screened is its intention toward the target advertisement. In this way, the advertisement delivery effect can be optimized, labor cost can be reduced, and advertisement delivery efficiency can be improved.
In view of this, the present application provides a method for cleaning advertisement delivery clues, comprising: acquiring audio stream data of an object to be screened; obtaining a session text of the object to be screened according to the audio stream data; determining the intention of the object to be screened by using a semantic neural network model according to the session text, wherein the intention of the object to be screened is its intention toward the target advertisement; and determining a processing mode of the object to be screened according to the intention of the object to be screened, wherein the processing mode includes: when the intention of the object to be screened is high intention, increasing the target advertisement delivery proportion of the object to be screened; and when the intention of the object to be screened is low intention, reducing the target advertisement delivery proportion of the object to be screened, wherein high intention includes affirmative intention, and low intention includes negative intention, busy intention of the object, or unrecognized voice intention of the object.
In the embodiment of the application, after the advertisement delivery clue cleaning device acquires the audio stream data of the object to be screened, the audio stream data is converted to obtain the session text of the object to be screened. The intention of the object to be screened is then determined by using a semantic neural network model according to the session text, wherein the intention of the object to be screened is its intention toward the target advertisement (or the target product corresponding to the target advertisement). After the intention of the object to be screened is determined, the processing mode of the object to be screened is determined according to that intention. When the intention of the object to be screened is high intention, the advertisement delivery proportion of the object to be screened is increased; when the intention of the object to be screened is low intention, the advertisement delivery proportion of the object to be screened is reduced. In this way, the advertisement delivery effect can be optimized, labor cost can be reduced, and advertisement delivery efficiency can be improved.
Another aspect of the present application provides an advertisement delivery clue cleaning apparatus, comprising:
a transceiver module, configured to acquire audio stream data of an object to be screened;
a processing module, configured to obtain a session text of the object to be screened according to the audio stream data of the object to be screened;
the processing module is further configured to determine the intention of the object to be screened by using a semantic neural network model according to the session text of the object to be screened, wherein the intention of the object to be screened is its intention toward the target advertisement;
the processing module is further configured to determine a processing mode of the object to be screened according to the intention of the object to be screened, wherein the processing mode includes: when the intention of the object to be screened is high intention, increasing the target advertisement delivery proportion of the object to be screened; and when the intention of the object to be screened is low intention, reducing the target advertisement delivery proportion of the object to be screened, wherein high intention includes affirmative intention, and low intention includes negative intention, busy intention of the object, or unrecognized voice intention of the object.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the processing module is further configured to, when the intention of the object to be screened does not belong to any of affirmative intention, negative intention, busy intention of the object, or unrecognized voice intention of the object, process the session text of the object to be screened by using the semantic neural network model and a knowledge base,
wherein the knowledge base is a semantic-information knowledge base associated with the semantic neural network model, and the knowledge points are one or more pieces of semantic information included in the knowledge base;
the processing module is further configured to determine that the intention of the object to be screened is high intention when a knowledge point matching the session text of the object to be screened exists in the knowledge base;
the processing module is further configured to determine that the intention of the object to be screened is low intention when no knowledge point matching the session text of the object to be screened exists in the knowledge base;
and the processing module is further configured to determine the processing mode of the object to be screened according to the intention of the object to be screened.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the processing module is further configured to calculate an embedded (embedding) feature of the session text of the object to be screened;
the transceiver module is further configured to acquire embedded features of a plurality of knowledge points in the knowledge base;
the processing module is further configured to calculate similarities between the embedded feature of the session text of the object to be screened and the embedded features of the knowledge points;
and the processing module is further configured to determine, according to the similarities, whether a knowledge point matching the session text of the object to be screened exists in the knowledge base, wherein the matching knowledge point is the knowledge point whose similarity is greater than a first threshold and is the highest among the plurality of knowledge points.
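For illustration, the following is a minimal sketch, in Python, of the matching logic described in this design, assuming the embedding features are already available as numeric vectors; the function names, the cosine-similarity measure, and the example threshold value are assumptions for illustration and are not specified by the application.

```python
import numpy as np

def match_knowledge_point(text_emb, kp_embs, kp_ids, first_threshold=0.8):
    """Return the best-matching knowledge point id, or None if no match exists.

    text_emb:        embedding feature of the session text, shape (d,)
    kp_embs:         embedding features of the knowledge points, shape (n, d)
    kp_ids:          identifiers of the n knowledge points
    first_threshold: the "first threshold" on similarity (illustrative value)
    """
    # similarity between the session text and every knowledge point (cosine similarity assumed)
    sims = kp_embs @ text_emb / (
        np.linalg.norm(kp_embs, axis=1) * np.linalg.norm(text_emb) + 1e-12)
    best = int(np.argmax(sims))
    # a matching knowledge point exists only if the highest similarity exceeds the threshold
    return kp_ids[best] if sims[best] > first_threshold else None

# toy usage
kp_ids = ["lighting", "households_per_elevator"]
kp_embs = np.random.rand(2, 8)
print(match_knowledge_point(kp_embs[0], kp_embs, kp_ids))   # "lighting"
```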
In one possible design, in another implementation of another aspect of the embodiments of the present application,
one or more industry templates are included in the knowledge base, each industry template including one or more knowledge points.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the embedded features of the knowledge points in the knowledge base are obtained through offline calculation.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the transceiver module is further configured to send a recognition request to a target recognition engine, wherein the recognition request carries the audio stream data of the object to be screened, and the audio stream data of the object to be screened comes from the call module;
and the processing module is further configured to process the audio stream data of the object to be screened by using the target recognition engine to obtain the session text of the object to be screened.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the recognition request body further includes a first control parameter, and the purpose of the first control parameter includes one or more of the following: indicating the sensitivity with which the target recognition engine performs the recognition operation on the audio stream data, starting the recognition operation, or pausing the recognition operation.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the recognition request header further includes a second control parameter, and the purpose of the second control parameter includes one or more of the following: controlling the timbre of the audio stream data in the recognition operation, controlling the volume of the audio stream data in the recognition operation, and controlling the playback speed of the audio stream data in the recognition operation.
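To make the structure of the recognition request concrete, here is a minimal sketch assuming an HTTP transport; the endpoint URL, header names, and field names are hypothetical placeholders, since the application only specifies that the first control parameter travels in the request body and the second control parameter in the request header.

```python
import requests

# second control parameter (request header): timbre, volume, playback speed (hypothetical header names)
recognition_headers = {
    "X-Audio-Timbre": "neutral",
    "X-Audio-Volume": "80",
    "X-Audio-Speed": "1.0",
}

# request body: audio stream data reference plus the first control parameter
# (sensitivity / start / pause of the recognition operation); field names are hypothetical
recognition_body = {
    "audio_stream_id": "call-20220301-0001",
    "control": {"sensitivity": 0.6, "action": "start"},
}

# placeholder endpoint standing in for the target recognition engine
resp = requests.post("https://asr.example.com/recognize",
                     headers=recognition_headers,
                     json=recognition_body,
                     timeout=10)
session_text = resp.json().get("text", "")
```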
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the processing module is also used for carrying out noise reduction processing on the audio stream data of the object to be screened to obtain the audio stream data of the object to be screened after noise reduction;
the processing module is also used for converting the noise-reduced audio stream data of the object to be screened into a session text of the object to be screened.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the transceiver module is further configured to acquire a first session text, wherein the first session text includes X sentences, and X is a positive integer;
the processing module is further used for generating a second conversation text according to the first conversation text, wherein the second conversation text comprises N sentences, and N is a positive integer;
The processing module is also used for extracting the feature vectors of the N sentences according to the second conversation text;
and the processing module is also used for training the semantic neural network model according to the feature vectors of the N sentences.
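As a rough illustration of this training design, the sketch below stands in for the real pipeline with toy components: a trivial text-derivation step, a character-hashing feature extractor, and a logistic-regression-style update in place of the semantic neural network. All names, the feature dimension, and the sample labels are assumptions for illustration only.

```python
import numpy as np

def derive_second_text(first_text):
    """Toy stand-in for generating the second session text (N sentences)
    from the first session text (X sentences), e.g. cleaning / expansion."""
    return [s.strip() for s in first_text if s.strip()]

def sentence_features(sentences, dim=64):
    """Toy feature extractor: hash characters into fixed-size, L2-normalised vectors."""
    feats = np.zeros((len(sentences), dim))
    for i, s in enumerate(sentences):
        for ch in s:
            feats[i, hash(ch) % dim] += 1.0
    return feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)

def train_semantic_model(feats, labels, epochs=200, lr=0.5):
    """Minimal logistic-regression update standing in for training the semantic neural network."""
    w = np.zeros(feats.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-feats @ w))
        w -= lr * feats.T @ (p - labels) / len(labels)
    return w

first_text = ["好的，周日下午我到店", "不需要，谢谢"]   # first session text (X sentences)
second_text = derive_second_text(first_text)            # second session text (N sentences)
X = sentence_features(second_text)                      # feature vectors of the N sentences
y = np.array([1.0, 0.0])                                # assumed labels: 1 = high intention, 0 = low intention
weights = train_semantic_model(X, y)
```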
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the transceiver module is further configured to acquire client information of a plurality of objects, wherein the client information includes one or more of the following: the name of the object, the contact information of the object, the address information of the object, the text data between the object and customer service, or the willingness level of the object;
and the processing module is also used for screening the client information of the objects and determining the objects to be screened.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the transceiver module is further configured to obtain configuration information, where the configuration information includes: source of session configuration information, or customer profile information;
the processing module is further used for screening the plurality of objects based on the configuration information and the calling time period of the plurality of objects and determining the objects to be screened.
Another aspect of the present application provides a computer apparatus comprising: a memory, a processor, and a bus system;
Wherein the memory is used for storing programs;
the processor is configured to execute the program in the memory and to perform the methods of the above aspects according to the instructions in the program code;
the bus system is configured to connect the memory and the processor so that the memory and the processor can communicate.
Another aspect of the application provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the methods of the above aspects.
In another aspect of the application, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the methods provided in the above aspects.
Drawings
FIG. 1 is a schematic diagram of an application scenario involved in an advertisement delivery cue cleaning method according to an embodiment of the present application;
FIG. 2 is a schematic functional diagram of an advertisement delivery clue cleaning apparatus according to an embodiment of the present application;
FIG. 3 is a schematic diagram of component layers according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an embodiment of a method for cleaning advertisement delivery cues in accordance with an embodiment of the present application;
FIG. 5 is a schematic diagram of another embodiment of a method for cleaning advertisement delivery cues according to an embodiment of the present application;
FIG. 6 is a schematic diagram of another embodiment of a method for cleaning advertisement delivery cues in accordance with an embodiment of the present application;
FIG. 7 is a schematic diagram of another embodiment of a method for cleaning advertisement delivery cues in an embodiment of the present application;
FIG. 8 is a schematic diagram of a structure for identifying a request according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a training process of a semantic neural network model according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a configuration interface according to an embodiment of the present application;
FIG. 11 is a schematic illustration of call details involved in an embodiment of the present application;
FIG. 12 is a schematic diagram of an advertisement delivery clue cleaning apparatus according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a server structure according to an embodiment of the present application;
FIG. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "includes" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, some terms or concepts related to embodiments of the present application are explained first.
1. Cost-per-click (CPC) advertisement
CPC represents the cost per advertisement click. In this mode, the advertiser pays for each click on the target object (such as the advertisement) by a user and no longer pays for mere exposure of the advertisement, which removes, for the advertiser, the risk of paying for exposure without clicks; CPC is one of the current mainstream advertisement charging modes.
2. Advertiser
Advertisers refer to objects or service providers who pay to deliver advertisements, and the advertiser expects that each paid advertisement click is a valid click of a real user, rather than a cheating click.
3. Traffic owner
A traffic owner is a carrier that provides user traffic, typically media, a web site, or software. On the advertising platform, a traffic owner is, for example, an official account with a certain number of followers. Traffic owners share in advertising revenue, and for the same advertisement exposure a higher click-through rate means a larger share, so traffic owners have a strong incentive to cheat in order to inflate the click-through rate.
4. Advertisement cheating
In the exposure, click, and conversion stages of advertising, a party may artificially generate advertisement exposures, clicks, or conversions for some malicious purpose; such malicious behavior, which does not reflect the true intention of the target object, is called advertisement cheating.
5. Advertisement anti-cheating
Checking stages such as advertisement exposure, click, and conversion, and judging whether the exposures, clicks, and conversions are normal.
6. Advertisement delivery
Advertisement delivery refers to the process by which the advertising platform displays advertisements on the traffic owner's pages. Which advertisement is presented on which page is generally determined by the platform's algorithm and is generally decided according to the interests of the basic object (i.e., the user): whatever the user browsing the page is interested in determines what kind of advertisement is presented.
7. Advertisement position
The advertisement slot refers to the media position identifier of advertisement delivery. For example, advertisements in articles displayed on a platform are divided into a top advertisement slot, a middle advertisement slot, and a bottom advertisement slot according to their positions in the article, and advertisements in mini programs can be divided into banner advertisements, rewarded video advertisements, or interstitial advertisements according to their positions.
8. Advertisement conversion rate
Advertisement conversion rate refers to the ratio of the number of conversions an advertisement brings to the number of clicks on the advertisement during delivery. Different promotion targets correspond to different conversion targets. For example, for an e-commerce advertisement the conversion target is orders placed through the advertisement, and the corresponding conversion rate is the ratio of orders brought by the advertisement to clicks; for Android/iOS download-type advertisements, conversion refers to application activations brought by the advertisement, and the corresponding conversion rate is the ratio of application activations brought by the advertisement to clicks.
9. Click rate of advertisement
The advertisement click-through rate is the ratio of clicks to exposures. If the exposure count of an advertisement is M and the click count is N, the click-through rate CTR = N/M.
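The two ratios above can be computed directly; the following short sketch shows the arithmetic with illustrative numbers (the figures are examples, not data from the application).

```python
def click_through_rate(clicks, exposures):
    """CTR = N / M, where M is the exposure count and N is the click count."""
    return clicks / exposures if exposures else 0.0

def conversion_rate(conversions, clicks):
    """Advertisement conversion rate = conversions (e.g. orders or app activations) / clicks."""
    return conversions / clicks if clicks else 0.0

# illustrative figures: 10,000 exposures, 300 clicks, 60 conversions
print(click_through_rate(300, 10_000))   # 0.03
print(conversion_rate(60, 300))          # 0.2
```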
10. Heterogeneous graph
Heterogeneity refers to the property of containing different components; in the field of information technology it is commonly used to describe the inclusion of a plurality of different types of entities, and a heterogeneous graph is a graph containing different types of nodes.
It will be appreciated that the present application relates specifically to cloud technology, which is further described below. Cloud technology refers to a hosting technology that unifies resources such as hardware, software, and networks in a wide area network or a local area network to realize calculation, storage, processing, and sharing of data. Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology, and the like applied based on the cloud computing business model; it can form a resource pool that is used on demand and is flexible and convenient. Cloud computing technology will become an important support. Background services of technical network systems, such as video websites, picture websites, and more and more portal websites, require a large amount of computing and storage resources. With the rapid development and application of the internet industry, each item may have its own identification mark in the future, which will need to be transmitted to a background system for logical processing; data of different levels will be processed separately, and all kinds of industry data need strong backing from systems, which can only be realized through cloud computing.
Cloud computing is a computing model that distributes computing tasks over a resource pool formed by a large number of computers, enabling various application systems to acquire computing power, storage space, and information services as needed. The network that provides the resources is referred to as the "cloud". From the user's point of view, resources in the cloud can be expanded infinitely and can be acquired at any time, used on demand, expanded at any time, and paid for according to use.
As a basic capability provider of cloud computing, a cloud computing resource pool (cloud platform for short, generally referred to as IaaS (Infrastructure as a Service, infrastructure as a service) platform) is established, in which multiple types of virtual resources are deployed for external clients to select for use.
According to the logic function division, a PaaS (Platform as a Service ) layer can be deployed on an IaaS (Infrastructure as a Service ) layer, and a SaaS (Software as a Service, software as a service) layer can be deployed above the PaaS layer, or the SaaS can be directly deployed on the IaaS. PaaS is a platform on which software runs, such as a database, web container, etc. SaaS is a wide variety of transactional software such as web portals, text message mass senders, etc. Generally, saaS and PaaS are upper layers relative to IaaS.
Secondly, cloud storage is a new concept extended and developed from the concept of cloud computing. A distributed cloud storage system (hereinafter referred to as a storage system) refers to a storage system that, through functions such as cluster application, grid technology, and a distributed storage file system, makes a large number of storage devices of different types in a network (storage devices are also called storage nodes) work together through application software or application interfaces to jointly provide data storage and business access functions.
At present, the storage method of the storage system is as follows: when creating logical volumes, each logical volume is allocated a physical storage space, which may be a disk composition of a certain storage device or of several storage devices. The client stores data on a certain logical volume, that is, the data is stored on a file system, the file system divides the data into a plurality of parts, each part is an object, the object not only contains the data but also contains additional information such as a data Identification (ID) and the like, the file system writes each object into a physical storage space of the logical volume, and the file system records storage position information of each object, so that when the client requests to access the data, the file system can enable the client to access the data according to the storage position information of each object.
The process by which the storage system allocates physical storage space for a logical volume specifically includes: the physical storage space is divided into stripes in advance according to the set of capacity measures of the objects to be stored on the logical volume (these measures tend to have a large margin with respect to the capacity of the objects actually to be stored) and the redundant array of independent disks (Redundant Array of Independent Disks, RAID) configuration, and a logical volume can be understood as a stripe; in this way physical storage space is allocated for the logical volume.
The current advertisement delivery system usually relies on manual customer service, which has the drawback of high cost. For example: of all the customer leads collected through forms, about 5% of the leads have an invalid (empty) phone number; of the remaining leads, only about 60% answer the call, and the proportion of answered calls that show intent is only about 20%. After a traditional advertiser collects customer information, the customer service team needs to follow up on and clean the leads, which incurs a large labor cost; piled-up customer leads cannot be cleaned in time, and the best opportunity to follow up with customers is missed.
On this basis, the application introduces a semantic neural network model: the session text of the object to be screened is obtained by detecting the audio stream data of the object to be screened, and the intention of the object to be screened is then determined according to the session text. The intention of the object to be screened is its intention toward the target advertisement, for example the payment intention for the product corresponding to the target advertisement. A processing mode for the object to be screened is then determined: when the intention of the object to be screened is high intention, the target advertisement delivery proportion of the object to be screened is increased; when the intention of the object to be screened is low intention, the target advertisement delivery proportion of the object to be screened is reduced. In this way, the advertisement delivery effect can be optimized, labor cost can be reduced, and advertisement delivery efficiency can be improved. It should be understood that the method for cleaning advertisement delivery clues provided by the application can be applied to fields such as cloud technology, artificial intelligence, and intelligent transportation.
It can be appreciated that the intelligent transportation system (Intelligent Traffic System, ITS), also called the Intelligent Transportation System, applied in the intelligent transportation field is an integrated transportation system that effectively and comprehensively applies advanced science and technology (information technology, computer technology, data communication technology, sensor technology, electronic control technology, automatic control theory, operations research, artificial intelligence, and the like) to transportation, traffic management and control, and vehicle manufacturing, and strengthens the connection among vehicles, roads, and users, thereby ensuring safety, improving efficiency, improving the environment, and saving energy.
For ease of understanding, please refer to FIG. 1, which is a schematic diagram of an application scenario involved in the advertisement delivery clue cleaning method according to an embodiment of the present application. The application scenario includes: an advertisement delivery system, an advertisement delivery clue cleaning device, and a customer relationship management (CRM) system. The advertisement delivery clue cleaning device is used to execute the advertisement delivery clue cleaning method provided by the application; specifically, it collects clues from the advertisement delivery system, for example collecting audio stream data of an object to be screened, determines the intention of the object to be screened based on the audio stream data, and then determines the processing mode of the object to be screened. The advertisement delivery system may be a data management platform (Data Management Platform, DMP). The advertisement delivery clue cleaning device feeds the processing mode of the object to be screened back to the advertisement delivery system as a result, and notifies the customer CRM of the cleaning result of the advertisement delivery clues, so that the customer CRM can adjust its business strategy according to the cleaning result. The advertisement delivery system is used to deliver advertisements to target objects; specifically, the target advertisement delivery proportion of the object to be screened is adjusted according to the result fed back by the advertisement delivery clue cleaning device (the processing mode of the object to be screened). The customer CRM can activate the advertisement delivery clue cleaning device and can adjust the business strategy according to the cleaning result to optimize the advertisement delivery effect.
Only one type of computer device is shown in FIG. 1. The computer device may be a server or a terminal device. In an actual scenario, more kinds of terminal devices may participate in the data processing process, including but not limited to mobile phones, computers, intelligent voice interaction devices, smart home appliances, vehicle-mounted terminals, and the like; the specific number and kind depend on the actual scenario and are not limited here. In addition, in an actual scenario there may be a plurality of servers involved, particularly in multi-model training interaction scenarios; the number of servers depends on the actual scenario and is not limited here.
It should be noted that in this embodiment, the server may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery network (content delivery network, CDN), and basic cloud computing services such as big data and an artificial intelligence platform. The terminal device and the server may be directly or indirectly connected through wired or wireless communication, and the terminal device and the server may be connected to form a blockchain network, which is not limited herein.
Next, the advertisement delivery clue cleaning device in the embodiment of the present application is further described. Referring to FIG. 2, FIG. 2 is a functional schematic diagram of the advertisement delivery clue cleaning device in the embodiment of the present application. By functional division, the advertisement delivery clue cleaning device in the embodiment of the application may specifically include: an application layer, a task processing layer, a storage service, an algorithm layer, a component layer, a voice task processing module, and a call module. The call module may use FreeSWITCH. Voice task processing covers call management and the like, for example transferring a call to a human customer service agent. The call module is used to place calls to customers.
The application layer is used for: ticket management, task management, merchant management, statistical analysis, billing management, ticket/recording management, and the like. One possible implementation is: PHP (Hypertext Preprocessor) is adopted as the front end and a Java Spring Boot project as the back end, jointly forming two layers. The application layer is mainly used for interface display, for configuring pages for functions such as dialogue scripts, tasks, statistical analysis, and tickets, and for calling application programming interfaces (Application Programming Interface, API).
The task processing layer is used for scheduling tasks. One possible implementation is: it is built with ZooKeeper (ZK), Java Spring Boot, and Lua scripts. The task processing layer is mainly responsible for scheduling and pulling call tasks. For example: after an advertiser creates a task, a task queue is generated in the task processing layer, and after user form information is received and submitted, it is automatically added to the task queue.
The storage layer is used for storing task states, users' dialogue script configurations, tickets, and the like. For example, a task state includes: not started, in progress, paused, ended, or expired.
Dialogue script: the complete session flow configured by the user. For example: the first sentence is "AAA"; when the target client answers "BBB", the second sentence "CCC" is replied, and so on until the session with the target client is hung up. This complete flow is called the dialogue script.
User session configuration: during the session it is possible, for example, to configure that when XXX is triggered, a short message is sent to the target client; when the target client says "QQQ", the call is transferred to a human customer service agent; and when the target client says "PPP", the call is hung up.
Ticket: after the robot communicates with the client, the record that is kept is called a ticket. The ticket includes: the calling number, the called number, the call duration, the recording, the conversation text between the robot and the user, the intention level, word-slot extraction information, and the like.
Algorithm layer: the functions of the algorithm layer include, but are not limited to: dialogue script templates, word-slot extraction, robot control, session management, synonym expansion, and the knowledge base. The knowledge base includes one or more industry templates, and each industry template includes one or more knowledge points. A knowledge point is one or more pieces of semantic information included in the knowledge base. For example: "the user does not speak", "the user does not speak three times in succession", "keyword interruption", "no-answer handling", or "transfer failure", etc.
The specific flow of the algorithm layer is as follows: first, the audio stream data (dialogue) is preprocessed: identifying synonyms and homophones, filtering empty characters, and so on. Then question-and-answer interception is executed, which specifically includes: managing the context of the audio stream data, intercepting audio stream data that is difficult to understand, intercepting silent audio stream data, etc. The core of the task is then executed, including but not limited to: question understanding, intelligent dialogue scripts, and the like. Finally, result processing is executed, including: rendering the reply, transferring to a human customer service agent, hanging up or sending a short message, and identifying whether the playback can be interrupted.
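To make the flow above easier to follow, here is a compressed, self-contained sketch of the four stages (preprocessing, question-and-answer interception, the task core, and result processing); the helper names and the toy keyword-lookup rule are assumptions and do not reflect the actual implementation.

```python
def preprocess(text):
    # synonym / homophone normalisation and empty-character filtering (toy version)
    return " ".join(text.split())

def should_intercept(text):
    # intercept silent or hard-to-understand input (toy rule: empty input only)
    return len(text) == 0

def algorithm_layer(session_text, knowledge_base):
    """Illustrative end-to-end flow of the algorithm layer."""
    text = preprocess(session_text)
    if should_intercept(text):
        return {"action": "intercept"}
    # question understanding / intelligent dialogue script (toy: keyword lookup in the knowledge base)
    intent = "high" if any(k in text for k in knowledge_base) else "low"
    # result processing: render the reply, transfer to an agent, hang up, or send a short message
    return {"action": "reply", "intent": intent}

print(algorithm_layer("房子的采光怎么样", {"采光", "户型"}))   # {'action': 'reply', 'intent': 'high'}
```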
Referring to FIG. 3, FIG. 3 is a schematic diagram of the component layer according to an embodiment of the application. The component layer comprises: component routing, a rules engine, a content management system, Kafka, a consumption thread pool, and so on. The component layer mainly abstracts some general functional components, such as ASR (Automatic Speech Recognition), TTS (Text-To-Speech) synthesis, short messages, a SIP (Session Initiation Protocol) gateway, and the like. For example: functions such as ASR and TTS are integrated into a target recognition engine, which may use the UniMRCP recognition engine. The UniMRCP recognition engine can interface with all ASR and TTS services that support the standard MRCP protocol. The SMS component provides a standard Hypertext Transfer Protocol (HTTP) interface so that any SMS vendor that follows the interface protocol can provide the SMS component service; the SIP gateway uses the standard SIP protocol for interconnection and can interconnect with all gateways that support the protocol.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating an embodiment of a method for cleaning advertisement delivery clues according to an embodiment of the present application. The method for cleaning the advertisement delivery clues provided by the embodiment of the application comprises the following steps:
401. and acquiring audio stream data of the object to be screened.
In this embodiment, the advertisement delivery cue cleaning device first obtains audio stream data of an object to be screened. The object to be screened may be understood as a customer facing the advertisement delivery system. The audio stream data may be call audio between a robot (bot) and a client. The advertisement putting clue cleaning device acquires the audio stream data from the call module and performs real-time analysis; the advertisement delivery cue cleaning device may also obtain the audio stream data from the storage service and then analyze the intent of the object to be screened based on the audio stream data, which is not limited in the embodiment of the present application.
Optionally, the advertisement delivery clue cleaning device obtains client information of a plurality of objects, the client information including one or more of: the name of the object, the contact information of the object, the address information of the object, the text data between the object and customer service, or the willingness level of the object; the client information of the objects is then screened to determine the objects to be screened. For example: if the client information contains information for 100 clients, the advertisement delivery clue cleaning device filters out the clients whose mobile phone number is an empty number, and the remaining 80 clients are the objects to be screened. The advertisement delivery clue cleaning device dials calls to these 80 clients through the call module, and the call recordings serve as the audio stream data of the objects to be screened. For example, referring to FIG. 10, FIG. 10 is a schematic diagram of a configuration interface according to an embodiment of the present application. The interface shows how to configure the screening method for customers. Illustratively, the configuration is performed in an advertising clue management platform. The system configuration menu can be accessed through a "system setup" control, and the menu is then configured; for example, the screening method the current user needs is, for a car sales service, to identify whether a customer intends to visit the store by having a robot dial a phone call to the customer. Therefore, in the system configuration menu, the rule name is selected as the "car store-visit intention intelligent cleaning" rule. The configuration information under the rule then includes a dialogue script configuration, for example for the "automotive" field, specifically "automotive industry standard dialogue script (including store-visit intention)". After this option is selected, "yes" is selected in the "whether to enable" control, and the rule is started. The configuration information of the rule is then obtained, the configuration information including: the source of dialogue script configuration information or customer profile information. The dialogue script configuration information is, for example: "automotive industry standard, cosmetics industry standard, insurance industry standard, or real estate industry standard", etc. The customer information sources include: audio stream files from application software, audio stream files from telephone calls, short message text, mail, or dialogue text from application software, etc. Once the configuration information is configured, the advertisement delivery clue cleaning device can place calls to a plurality of objects (the objects can be understood as clients) through a call module (e.g. a FreeSWITCH module) to acquire audio stream data of the objects.
After the audio stream data of a plurality of objects is acquired, the advertisement delivery clue cleaning device can determine the objects to be screened in several ways. One possible implementation is to screen the customer information of the plurality of objects based on the configuration information and the time period in which the calls are made, and so determine the objects to be screened. For example, for the "car store-visit intention intelligent cleaning" rule, the rule sets that customers who answer the call within the 17:00-20:00 time period have a higher intention to visit the store; therefore, customers called in other time periods are excluded, and the customers who answer calls within 17:00-20:00 are retained as the objects to be screened. The audio stream data of these clients is used as the audio stream data of the objects to be screened.
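A minimal sketch of this screening step is shown below, assuming a simple client record with a phone number and a call time; the empty-number rule and the 17:00-20:00 window are taken from the example above, while the field names and everything else are illustrative assumptions.

```python
from datetime import time

def has_valid_number(client):
    """Drop clients whose mobile phone number is an empty (invalid) number."""
    return bool(client.get("phone"))

def in_call_window(call_time, start=time(17, 0), end=time(20, 0)):
    """Keep clients whose answered call falls inside the 17:00-20:00 window of the rule."""
    return start <= call_time <= end

clients = [
    {"name": "A", "phone": "1380000xxxx", "call_time": time(18, 30)},
    {"name": "B", "phone": "",            "call_time": time(18, 45)},
    {"name": "C", "phone": "1390000xxxx", "call_time": time(9, 10)},
]
objects_to_screen = [c for c in clients
                     if has_valid_number(c) and in_call_window(c["call_time"])]
print([c["name"] for c in objects_to_screen])   # ['A']
```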
402. And obtaining the session text of the object to be screened according to the audio stream data of the object to be screened.
In this embodiment, after the advertisement delivery clue cleaning device obtains the audio stream data of the object to be screened, it converts the audio stream data to obtain the session text of the object to be screened. In one possible implementation, the advertisement delivery clue cleaning device processes the audio stream data using automatic speech recognition (ASR) technology to obtain the session text. For ease of understanding, please refer to FIG. 11, which is a schematic diagram of call details related to an embodiment of the present application. FIG. 11 illustrates the session text of the object to be screened obtained after the advertisement delivery clue cleaning device processes the audio stream data of the object to be screened. After the session text is obtained, step 403 is entered.
In another possible implementation, the advertisement delivery cue cleaning device may directly obtain the session text of the object to be screened, and then determine the intent of the object to be screened using a semantic neural network model based on the session text.
403. And determining the intention of the object to be screened by using the semantic neural network model according to the conversation text of the object to be screened.
In this embodiment, after the advertisement delivery clue cleaning device obtains the session text of the object to be screened, the semantic neural network model is used to analyze the session text, and the intention of the object to be screened is determined. For ease of understanding, the intent refers to the intent of the object to be screened for the targeted advertisement (or the targeted product to which the targeted advertisement corresponds). For example: to store intention, purchase intention, payment intention, or collection intention, etc.
The intention may include: high intention or low intention, where high intention is an intention indicating strong demand from the client (object); for example, when an object has high intention, the object's payment intention for the target product is high. Similarly, low intention is an intention indicating weak demand from the client; for example, when an object has low intention, the object's payment intention for the target product is low. For example, high intention includes affirmative intention, e.g.: if the dialogue text of the object to be screened includes "OK, I will come to the store on Sunday afternoon", the intention of the object to be screened is determined to be affirmative intention. Low intention includes: negative intention, busy intention of the object, or unrecognized voice intention of the object, etc. For example: if the session text of the object to be screened includes "reject", the intention of the object to be screened is determined to be negative intention. For another example: if the session text of the object to be screened includes "I am not free on weekday afternoons", the intention of the object to be screened is determined to be the busy intention of the object. For another example: if the session text of the object to be screened is empty, the intention of the object to be screened is determined to be unrecognized voice intention of the object.
404. And determining the processing mode of the object to be screened according to the intention of the object to be screened.
In this embodiment, after determining the intention of the object to be screened, the advertisement delivery clue cleaning device determines the processing mode of the object to be screened. Specifically, the processing mode of the object to be screened includes: and when the intention of the object to be screened is high intention, increasing the target advertisement putting proportion of the object to be screened. And when the intention of the object to be screened is low intention, reducing the target advertisement putting proportion of the object to be screened. Optionally, when the intent of the object to be screened is low intent, the object to be screened is excluded from the advertisement delivery object set.
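The adjustment itself can be sketched as a small rule; the 20% step size and the option of dropping low-intention objects entirely are illustrative assumptions, since the application only states that the delivery proportion is increased for high intention and reduced (or the object excluded) for low intention.

```python
def adjust_delivery_proportion(current_ratio, intention, exclude_low=False):
    """Adjust the target-advertisement delivery proportion of an object to be screened."""
    if intention == "high":
        return min(1.0, current_ratio * 1.2)      # increase the delivery proportion
    if intention == "low":
        if exclude_low:
            return 0.0                            # optionally exclude the object entirely
        return max(0.0, current_ratio * 0.8)      # otherwise just reduce it
    return current_ratio

print(adjust_delivery_proportion(0.5, "high"))    # 0.6
print(adjust_delivery_proportion(0.5, "low"))     # 0.4
```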
In the embodiment of the application, after the advertisement delivery clue cleaning device acquires the audio stream data of the object to be screened, the audio stream data is converted to obtain the session text of the object to be screened. The intention of the object to be screened is then determined by using a semantic neural network model according to the session text, wherein the intention of the object to be screened is its intention toward the target advertisement (or the target product corresponding to the target advertisement). After the intention of the object to be screened is determined, the processing mode of the object to be screened is determined according to that intention. When the intention of the object to be screened is high intention, the advertisement delivery proportion of the object to be screened is increased; when the intention of the object to be screened is low intention, the advertisement delivery proportion of the object to be screened is reduced. In this way, the advertisement delivery effect can be optimized, labor cost can be reduced, and advertisement delivery efficiency can be improved.
In combination with the foregoing embodiment, when the intention of the object to be screened does not belong to any one of a positive intention, a negative intention, an object busy intention, or an object voice unrecognized intention, the advertisement delivery cue cleaning apparatus processes the conversation text of the object to be screened using the semantic neural network model and the knowledge base. Specifically, referring to fig. 5, fig. 5 is a schematic diagram illustrating another embodiment of a method for cleaning advertisement delivery clues according to an embodiment of the present application. The method for cleaning the advertisement delivery clues provided by the embodiment of the application further comprises the following steps:
501. Detect whether the intention of the object to be screened belongs to any of affirmative intention, negative intention, busy intention of the object, or unrecognized voice intention of the object.
In this embodiment, the advertisement delivery clue cleaning device first processes the session text of the object to be screened using the semantic neural network model to detect whether the intention of the object to be screened belongs to any of affirmative intention, negative intention, busy intention of the object, or unrecognized voice intention of the object. Specifically, the session text of the object to be screened is, for example, a customer's dialogue with the robot. The semantic neural network model performs the corresponding scene analysis and determines, for example, that the dialogue is: the user consulting about buying a car. The semantic neural network model then determines the intention corresponding to the session text of the object to be screened according to records of conversations between car buyers and customer service personnel; such records can be understood as a template for the automotive industry. For example: the session text of the object to be screened includes questions about what the performance of the vehicle is like and what the driving experience is like, and the semantic neural network model determines the intention of the object to be screened to be affirmative intention. In one possible implementation, the semantic neural network model uses BERT (Bidirectional Encoder Representations from Transformers).
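As a rough sketch of this step, the snippet below assumes a BERT model that has already been fine-tuned on labelled call transcripts with the intention classes used above and is loadable through the Hugging Face transformers pipeline; the model path is a placeholder, not a real artifact.

```python
from transformers import pipeline

# placeholder path to a BERT classifier fine-tuned on call transcripts
# labelled as affirmative / negative / busy / unrecognized (assumption)
intent_classifier = pipeline("text-classification", model="./bert-intent-finetuned")

result = intent_classifier("车子的性能怎么样，驾驶感受如何")[0]
print(result["label"], result["score"])   # e.g. "affirmative" 0.93
```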
When the intention of the object to be screened does not belong to any of the above (affirmative intention, negative intention, busy intention of the object, or unrecognized voice intention of the object), step 502 is entered.
502. Process the session text of the object to be screened using the semantic neural network model and the knowledge base.
In this embodiment, the advertisement delivery clue cleaning device performs further semantic analysis on the session text of the object to be screened to determine the intention of the object to be screened. In step 502, a knowledge base is used together with the semantic neural network model. The knowledge base includes one or more knowledge points, a knowledge point being a piece of semantic information contained in the knowledge base. Specifically, the knowledge base contains knowledge for a particular scenario or industry, such as expertise of the housing agency (intermediary) industry; such specific knowledge items are referred to as knowledge points, for example: how the lighting of a unit is, how many households share one elevator, and similar property-related questions.
The advertisement delivery clue cleaning device can enter knowledge points and their corresponding questions into the knowledge base, perform semantic analysis on the session text with the semantic neural network model, and then locate the corresponding question and knowledge point in the knowledge base. The knowledge base may be updated literally, for example by storing the questions and their corresponding knowledge points directly into the knowledge base. The knowledge base may also use storage based on semantic matching, which is not described in detail in this embodiment of the application.
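A minimal sketch of the two storage strategies just mentioned, assuming a simple in-memory structure; the class and method names are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    exact: dict = field(default_factory=dict)      # question text -> knowledge point
    semantic: list = field(default_factory=list)   # (embedding, knowledge point) pairs

    def add_exact(self, question: str, knowledge_point: str) -> None:
        # Literal update: store the question and its knowledge point directly.
        self.exact[question] = knowledge_point

    def add_semantic(self, embedding: list, knowledge_point: str) -> None:
        # Semantic update: store an embedding so lookup can use similarity matching.
        self.semantic.append((embedding, knowledge_point))
```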
503. When a knowledge point matching the session text of the object to be screened exists in the knowledge base, determine that the intention of the object to be screened is a high intention.
504. When no knowledge point matching the session text of the object to be screened exists in the knowledge base, determine that the intention of the object to be screened is a low intention.
After steps 503-504, step 505 is performed.
505. Determine the processing mode of the object to be screened according to the intention of the object to be screened.
Step 505 is similar to step 404 described above and will not be described in detail herein.
In the embodiment of the application, for session texts that are difficult to recognize, the knowledge base can be used to perform further semantic analysis so as to determine the intention of the objects to be screened corresponding to those session texts. This improves the accuracy of the semantic analysis, further optimizes the advertisement delivery effect, and improves advertisement delivery efficiency.
Referring to fig. 6 in combination with the foregoing embodiments, fig. 6 is a schematic diagram illustrating another embodiment of a method for cleaning advertisement delivery clues according to an embodiment of the present application. In the method for cleaning advertisement delivery clues provided by the embodiment of the application, the advertisement delivery clue cleaning device processes the session text of the object to be screened using the semantic neural network model and determines whether a knowledge point matching that session text exists in the knowledge base; this specifically includes the following steps:
601. Calculate the embedding feature of the session text of the object to be screened.
In this embodiment, the advertisement delivery clue cleaning device calculates an embedding feature of the session text of the object to be screened. Specifically, the session text is segmented into one or more word units; features are then extracted from these word units, and the embedding feature of the session text is computed. The embedding feature can be understood as a vector representation of the session text.
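A minimal sketch of step 601, assuming jieba for the word-segmentation step and a generic sentence-embedding model as the feature extractor; both the library choice and the model name are assumptions, not components specified by the embodiment.

```python
import jieba
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # placeholder model

def embed_session_text(session_text: str):
    tokens = jieba.lcut(session_text)          # split the text into word units
    normalized = " ".join(tokens)              # rejoin tokens before encoding
    return encoder.encode(normalized)          # vector representation of the session text
```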
602. Obtain the embedding features of a plurality of knowledge points in the knowledge base.
In this embodiment, the advertisement delivery clue cleaning device obtains the embedding features of a plurality of knowledge points in the knowledge base. In one possible implementation, the device obtains the knowledge points and then computes their embedding features in real time. In another possible implementation, to improve the efficiency of semantic matching, the embedding features of all knowledge points in the knowledge base are computed offline; when the knowledge points are needed, the pre-computed embedding features are fetched from the cache, which greatly reduces the amount of online real-time computation.
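A sketch of the offline pre-computation and caching described above, reusing the hypothetical embed_session_text helper from the previous sketch; the pickle-file cache is only one possible store.

```python
import pickle

def precompute_knowledge_embeddings(knowledge_points: list[str], cache_path: str) -> None:
    # Embed every knowledge point once, offline, and persist the results.
    cache = {kp: embed_session_text(kp) for kp in knowledge_points}
    with open(cache_path, "wb") as f:
        pickle.dump(cache, f)

def load_knowledge_embeddings(cache_path: str) -> dict:
    # Online path: read the pre-computed embeddings, no re-encoding needed.
    with open(cache_path, "rb") as f:
        return pickle.load(f)
```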
It should be noted that the execution order of steps 601-602 is not limited in the embodiment of the present application.
603. Calculate the similarity between the embedding feature of the session text of the object to be screened and the embedding features of the plurality of knowledge points.
In this embodiment, the advertisement delivery clue cleaning device calculates the similarity between the embedding feature of the session text of the object to be screened and the embedding features of the plurality of knowledge points. Specifically, it computes the distance between the embedding feature of the session text and the embedding feature of each knowledge point, where the distance may be a Euclidean distance or a cosine distance, and then derives the similarity from that distance.
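A sketch of step 603; mapping the distance to a similarity score in [0, 1] is one common convention and is assumed here rather than taken from the embodiment.

```python
import numpy as np

def similarity(text_emb: np.ndarray, kp_emb: np.ndarray, metric: str = "cosine") -> float:
    if metric == "cosine":
        cos = float(np.dot(text_emb, kp_emb) /
                    (np.linalg.norm(text_emb) * np.linalg.norm(kp_emb)))
        return (cos + 1.0) / 2.0               # map cosine in [-1, 1] to [0, 1]
    dist = float(np.linalg.norm(text_emb - kp_emb))
    return 1.0 / (1.0 + dist)                  # Euclidean distance: smaller -> more similar
```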
604. Determine, according to the similarity, whether a knowledge point matching the session text of the object to be screened exists in the knowledge base.
In this embodiment, the advertisement delivery clue cleaning device determines, according to the similarities between the session text of the object to be screened and the plurality of knowledge points in the knowledge base, the knowledge point that matches the session text; the matched knowledge point is the knowledge point with the highest similarity among the plurality of knowledge points, provided that this similarity is greater than a first threshold. For example, the similarity between the session text of the object to be screened and knowledge point A is calculated to be 20%, the similarity with knowledge point B is 13%, and the similarity with knowledge point C is 75%. With the first threshold set to 50%, the knowledge point matching the session text of the object to be screened is determined to be knowledge point C.
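The matching rule of step 604, reproduced with the worked numbers above (a 50% first threshold picks knowledge point C):

```python
def match_knowledge_point(similarities: dict[str, float], threshold: float = 0.5):
    # Pick the knowledge point with the highest similarity, if it exceeds the threshold.
    best_kp, best_sim = max(similarities.items(), key=lambda item: item[1])
    return best_kp if best_sim > threshold else None   # None -> no match, low intention

print(match_knowledge_point({"A": 0.20, "B": 0.13, "C": 0.75}))  # -> "C"
```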
In the embodiment of the application, this method can effectively improve the accuracy of intention recognition and thus further improve advertisement delivery efficiency.
Referring to fig. 7 in combination with the foregoing embodiments, fig. 7 is a schematic diagram illustrating another embodiment of a method for cleaning advertisement delivery clues according to an embodiment of the present application. In the method for cleaning advertisement delivery clues provided by the embodiment of the application, the advertisement delivery clue cleaning device obtains the session text of the object to be screened according to the audio stream data of the object to be screened; this specifically includes the following steps:
701. The advertisement delivery clue cleaning device acquires the audio stream data of the object to be screened from the call module.
In this embodiment, one possible implementation is that the advertisement delivery clue cleaning device actively acquires the audio stream data of the object to be screened from the call module (for example, a FreeSWITCH module). Another possible implementation is that each time the call module (e.g., the FreeSWITCH module) collects audio stream data, it actively sends the data to the advertisement delivery clue cleaning device.
702. The advertisement delivery clue cleaning device sends an identification request to the target recognition engine, where the identification request carries the audio stream data of the object to be screened.
In this embodiment, after the advertisement delivery clue cleaning device obtains the audio stream data of the object to be screened, it may send an identification request to the target recognition engine so that the target recognition engine processes the audio stream data into the corresponding session text. The call module may be a sub-module of the advertisement delivery clue cleaning device; in that case, step 702 may consist of the FreeSWITCH module sending the audio stream data of the object to be screened to the unimrcp engine, specifically by carrying it in the identification request.
Optionally, the identification request may transparently carry custom parameters. For ease of understanding, please refer to fig. 8; fig. 8 is a schematic diagram illustrating the structure of an identification request according to an embodiment of the present application. Optionally, the packet body of the identification request further includes a first control parameter, and the use of the first control parameter includes one or more of the following: indicating the sensitivity with which the target recognition engine performs the recognition operation on the audio stream data, starting the recognition operation, or suspending the recognition operation.
Optionally, the packet header of the identification request further includes a second control parameter, and the use of the second control parameter includes one or more of the following: controlling the tone color of the audio stream data in the recognition operation, controlling the volume of the audio stream data in the recognition operation, and controlling the playing speed of the audio stream data in the recognition operation.
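Purely to illustrate where the two optional control parameters could sit, the sketch below models an identification request as a plain data object; the field names are hypothetical and are not actual FreeSWITCH or UniMRCP headers.

```python
from dataclasses import dataclass

@dataclass
class IdentificationRequest:
    audio_stream: bytes                 # audio stream data of the object to be screened
    first_control: dict = None          # packet body: sensitivity / start / pause
    second_control: dict = None         # packet header: tone color, volume, playing speed

request = IdentificationRequest(
    audio_stream=b"...",
    first_control={"sensitivity": 0.8, "action": "start"},
    second_control={"tone_color": "default", "volume": 0.7, "speed": 1.0},
)
```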
Optionally, the unimrcp engine may further include a noise reduction module, which performs noise reduction processing on the audio stream data of the object to be screened to improve recognition accuracy. Specifically, noise reduction is performed on the audio stream data of the object to be screened to obtain noise-reduced audio stream data, and the noise-reduced audio stream data is then converted into the session text of the object to be screened.
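The embodiment does not specify the noise-reduction algorithm; as a stand-in, the sketch below applies a simple band-pass filter over the telephone speech band before recognition.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def denoise(audio: np.ndarray, sample_rate: int = 8000) -> np.ndarray:
    nyquist = sample_rate / 2.0
    low, high = 300.0 / nyquist, 3400.0 / nyquist     # keep the ~300-3400 Hz speech band
    b, a = butter(4, [low, high], btype="band")       # 4th-order Butterworth band-pass
    return filtfilt(b, a, audio)                      # zero-phase filtering of the signal
```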
Optionally, the unimrcp engine may be adapted so that the tone color, volume, and speed of Text-To-Speech (TTS) playback can be controlled through lua script parameters.
The target recognition engine processes the audio stream data of the object to be screened using automatic speech recognition (ASR) technology to obtain the session text of the object to be screened.
703. The advertisement delivery clue cleaning device receives the session text of the object to be screened from the target recognition engine.
In the embodiment of the application, the target recognition engine, which may be a unimrcp engine, can process the audio stream data of the object to be screened to obtain the session text of the object to be screened, improving recognition accuracy.
In combination with the foregoing embodiments, a training process of the semantic neural network model in the embodiments of the present application is described below. Referring to fig. 9, fig. 9 is a schematic diagram illustrating a training process of a semantic neural network model according to an embodiment of the present application.
Specifically, the description below takes the case where the semantic neural network model is a Bidirectional Encoder Representations from Transformers (BERT) model as an example.
First, the BERT model obtains a first conversation text that includes X sentences, X being a positive integer. To facilitate training, the input of the BERT model is fixed to a conversation text containing N sentences. The BERT model therefore pads or truncates the first conversation text to generate a second conversation text containing N sentences, N being a positive integer (for example, N is 20). Then, the BERT model extracts the feature vectors of the N sentences from the second conversation text. Next, the BERT model is trained based on the feature vectors of the N sentences. In one possible implementation, the training includes: applying softmax to the feature vectors of the N sentences to obtain the single-sentence intention of each of the N sentences; calculating a loss function for each intention from the dialogue intention formed by the N sentences; and finally computing the overall loss function of the BERT model from the loss functions corresponding to the individual intentions, with which the BERT model is trained.
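A hedged training sketch of this procedure; the checkpoint name, N = 20, the number of intention classes, and the use of the pooled sentence vector are all illustrative assumptions rather than details fixed by the embodiment.

```python
import torch
import torch.nn as nn
from transformers import BertTokenizer, BertModel

N, NUM_INTENTS = 20, 5
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")
head = nn.Linear(bert.config.hidden_size, NUM_INTENTS)
loss_fn = nn.CrossEntropyLoss()

def training_step(sentences: list[str], labels: torch.Tensor) -> torch.Tensor:
    # Pad with empty sentences or truncate so the dialogue has exactly N sentences.
    sentences = (sentences + [""] * N)[:N]
    enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    features = bert(**enc).pooler_output            # one feature vector per sentence
    logits = head(features)                         # softmax is folded into the loss below
    return loss_fn(logits, labels)                  # combined loss over the N intentions
```

Here `labels` is assumed to be a length-N tensor of per-sentence intention class indices; the returned loss would be backpropagated by a standard optimizer loop.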
Referring to fig. 12, fig. 12 is a schematic structural diagram of an advertisement delivery clue cleaning device according to an embodiment of the present application. The advertisement delivery clue cleaning device 1200 comprises:
A transceiver module 1201, configured to obtain audio stream data of an object to be screened;
a processing module 1202, configured to obtain a session text of the object to be screened according to the audio stream data of the object to be screened;
the processing module 1202 is further configured to determine, according to a session text of the object to be screened, an intention of the object to be screened, which is an intention of the object to be screened to the target advertisement, using a semantic neural network model;
the processing module 1202 is further configured to determine a processing manner of the object to be screened according to the intention of the object to be screened, where the processing manner of the object to be screened includes: when the intention of the object to be screened is high intention, increasing the target advertisement putting proportion of the object to be screened; when the intention of the object to be screened is low intention, reducing the target advertisement putting proportion of the object to be screened, wherein the high intention comprises the following steps: affirmative intent, low intent includes: negative intent, busy intent, or unrecognized intent of the subject voice.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the processing module 1202 is further configured to process the session text of the object to be screened using the semantic neural network model and the knowledge base when the intention of the object to be screened does not belong to any one of a positive intention, a negative intention, an object busy intention, or an object voice unrecognized intention,
the knowledge base is a semantic information knowledge base associated with the semantic neural network model, and a knowledge point is one or more pieces of semantic information included in the knowledge base;
the processing module 1202 is further configured to determine that the intention of the object to be screened is a high intention when there are knowledge points in the knowledge base that match the conversation text of the object to be screened,
the processing module 1202 is further configured to determine that the intention of the object to be screened is a low intention when there is no knowledge point in the knowledge base that matches the conversation text of the object to be screened;
the processing module 1202 is further configured to determine a processing manner of the object to be screened according to the intention of the object to be screened.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the processing module 1202 is further configured to calculate an embedded embedding feature of the session text of the object to be screened;
the transceiver module 1201 is further configured to obtain embedded features of a plurality of knowledge points in the knowledge base;
the processing module 1202 is further configured to calculate similarity between the embedded feature of the session text of the object to be screened and the embedded feature of the plurality of knowledge points according to the embedded feature of the session text of the object to be screened and the embedded feature of the plurality of knowledge points;
The processing module 1202 is further configured to determine, according to the similarity, whether there are matching knowledge points in the knowledge base in the session text of the object to be screened, where the matching knowledge points are knowledge points with a similarity greater than a first threshold and a highest similarity among the plurality of knowledge points.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
one or more industry templates are included in the knowledge base, each industry template including one or more knowledge points.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the embedded features of the knowledge points in the knowledge base are obtained through offline calculation.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the transceiver module 1201 is further configured to send an identification request to the target identification engine, where the identification request carries audio stream data of the object to be screened, where the audio stream data of the object to be screened comes from the call module;
the processing module 1202 is further configured to process the audio stream data of the object to be screened by using the target recognition engine to obtain a session text of the object to be screened.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the packet body of the identification request further comprises a first control parameter, and the purpose of the first control parameter comprises one or more of the following: indicating the sensitivity with which the target recognition engine performs the recognition operation on the audio stream data, starting the recognition operation, or suspending the recognition operation.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the packet header of the identification request further comprises a second control parameter, and the purpose of the second control parameter comprises one or more of the following: controlling the tone color of the audio stream data in the recognition operation, controlling the volume of the audio stream data in the recognition operation, and controlling the playing speed of the audio stream data in the recognition operation.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the processing module 1202 is further configured to perform noise reduction processing on the audio stream data of the object to be screened, so as to obtain noise-reduced audio stream data of the object to be screened;
the processing module 1202 is further configured to convert the audio stream data of the object to be screened after noise reduction into a session text of the object to be screened.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the transceiver module 1201 is further configured to obtain a first session text, where the first session text includes X sentences, and X is a positive integer;
the processing module 1202 is further configured to generate a second conversation text according to the first conversation text, where the second conversation text includes N sentences, and N is a positive integer;
the processing module 1202 is further configured to extract feature vectors of the N sentences according to the second dialog text;
the processing module 1202 is further configured to train the semantic neural network model according to the feature vectors of the N sentences.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
the transceiver module 1201 is further configured to obtain client profile information of a plurality of objects, where the client profile information includes one or more of the following: the name of the object, the contact way of the object, the address information of the object, the text data of the object and customer service, or the willingness level of the object;
the processing module 1202 is further configured to screen the client profile information of the plurality of objects and determine the object to be screened.
In one possible design, in another implementation of another aspect of the embodiments of the present application,
The transceiver module 1201 is further configured to obtain configuration information, where the configuration information includes: source of session configuration information, or customer profile information;
the processing module 1202 is further configured to screen the plurality of objects based on the configuration information and a time period for making a call to the plurality of objects, and determine an object to be screened.
Fig. 13 is a schematic diagram of a server structure according to an embodiment of the present application. The server 700 may vary considerably in configuration or performance and may include one or more central processing units (central processing unit, CPU) 722 (e.g., one or more processors), a memory 732, and one or more storage media 730 (e.g., one or more mass storage devices) storing application programs 742 or data 744. The memory 732 and the storage medium 730 may be transitory or persistent storage. The program stored in the storage medium 730 may include one or more modules (not shown), and each module may include a series of instruction operations on the server. Further, the central processing unit 722 may be configured to communicate with the storage medium 730 and execute, on the server 700, the series of instruction operations in the storage medium 730.
The server 700 may also include one or more power supplies 726, one or more wired or wireless network interfaces 750, one or more input/output interfaces 758, and/or one or more operating systems 741, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps performed by the server in the above embodiments may be based on the server structure shown in fig. 13.
Fig. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 14, for convenience of explanation, only the portions related to the embodiment of the present application are shown; for specific technical details that are not disclosed, please refer to the method portion of the embodiment of the present application. The terminal device, also called a user terminal, may be any terminal device including a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), a point of sale (Point of Sales, POS) terminal, a vehicle-mounted computer, and the like; user terminals include, but are not limited to, mobile phones, computers, intelligent voice interaction devices, smart home appliances, vehicle-mounted terminals, aircraft, and the like. The following description takes the terminal device being a mobile phone as an example:
Fig. 14 is a block diagram showing part of the structure of the mobile phone related to the terminal device provided by the embodiment of the present application. Referring to fig. 14, the mobile phone includes: a radio frequency (Radio Frequency, RF) circuit 810, a memory 820, an input unit 830, a display unit 840, a sensor 850, an audio circuit 860, a wireless fidelity (wireless fidelity, WiFi) module 870, a processor 880, a power supply 890, and other components. Those skilled in the art will appreciate that the mobile phone structure shown in fig. 14 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The following describes the components of the mobile phone in detail with reference to fig. 14:
The RF circuit 810 may be used for receiving and transmitting signals during a message or a call; in particular, downlink information from a base station is received and handed to the processor 880 for processing, and uplink data is sent to the base station. Typically, the RF circuitry 810 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (Low Noise Amplifier, LNA), a duplexer, and the like. In addition, the RF circuitry 810 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including, but not limited to, Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 820 may be used to store software programs and modules, and the processor 880 performs the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 820. The memory 820 may mainly include a program storage area and a data storage area; the program storage area may store an operating system and application programs required for at least one function (such as a sound playing function or an image playing function), and the data storage area may store data (such as audio data or a phonebook) created according to the use of the mobile phone. In addition, the memory 820 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 830 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. In particular, the input unit 830 may include a touch panel 831 and other input devices 832. The touch panel 831, also referred to as a touch screen, may collect touch operations by a user on or near it (such as operations performed on or near the touch panel 831 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 831 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 880; it can also receive and execute commands sent by the processor 880. In addition, the touch panel 831 may be implemented as a resistive, capacitive, infrared, or surface acoustic wave type, among others. Besides the touch panel 831, the input unit 830 may include other input devices 832. In particular, other input devices 832 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick.
The display unit 840 may be used to display information input by the user or information provided to the user, as well as various menus of the mobile phone. The display unit 840 may include a display panel 841; optionally, the display panel 841 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like. Further, the touch panel 831 may cover the display panel 841; when the touch panel 831 detects a touch operation on or near it, the operation is transferred to the processor 880 to determine the type of the touch event, and the processor 880 then provides a corresponding visual output on the display panel 841 according to the type of the touch event. Although in fig. 14 the touch panel 831 and the display panel 841 are shown as two separate components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 831 and the display panel 841 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 850, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 841 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 841 and/or the backlight when the mobile phone moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for applications of recognizing the gesture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; other sensors such as gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc. that may also be configured with the handset are not described in detail herein.
The audio circuit 860, the speaker 861, and the microphone 862 may provide an audio interface between the user and the mobile phone. The audio circuit 860 may convert received audio data into an electrical signal and transmit it to the speaker 861, which converts it into a sound signal for output; on the other hand, the microphone 862 converts collected sound signals into electrical signals, which the audio circuit 860 receives and converts into audio data; the audio data is processed by the audio data output processor 880 and then sent, for example, to another mobile phone via the RF circuit 810, or output to the memory 820 for further processing.
WiFi belongs to a short-distance wireless transmission technology, and a mobile phone can help a user to send and receive emails, browse webpages, access streaming media and the like through a WiFi module 870, so that wireless broadband Internet access is provided for the user. Although fig. 14 shows a WiFi module 870, it is understood that it does not belong to the necessary constitution of the handset, and can be omitted entirely as needed within the scope of not changing the essence of the invention.
The processor 880 is the control center of the mobile phone; it connects the various parts of the entire mobile phone using various interfaces and lines, and performs the various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 820 and calling the data stored in the memory 820, thereby monitoring the mobile phone as a whole. Optionally, the processor 880 may include one or more processing units; optionally, the processor 880 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 880.
The mobile phone further includes a power supply 890 (e.g., a battery) for powering the various components; optionally, the power supply may be logically connected to the processor 880 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which will not be described herein.
The steps performed by the terminal device in the above-described embodiments may be based on the terminal device structure shown in fig. 14.
Embodiments of the present application also provide a computer-readable storage medium having a computer program stored therein, which when run on a computer, causes the computer to perform the method as described in the foregoing embodiments.
Embodiments of the present application also provide a computer program product comprising a program which, when run on a computer, causes the computer to perform the method described in the previous embodiments.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (16)

1. A method of cleaning an advertising cue, comprising:
acquiring audio stream data of an object to be screened;
according to the audio stream data of the object to be screened, obtaining a session text of the object to be screened;
determining the intention of the object to be screened by using a semantic neural network model according to the session text of the object to be screened, wherein the intention of the object to be screened is the intention of the object to be screened to a target advertisement;
determining a processing mode of the object to be screened according to the intention of the object to be screened, wherein the processing mode of the object to be screened comprises the following steps: when the intention of the object to be screened is high intention, increasing the target advertisement putting proportion of the object to be screened; when the intention of the object to be screened is low intention, reducing the target advertisement putting proportion of the object to be screened, wherein the high intention comprises the following steps: affirmative intent, the low intent including: negative intent, busy intent, or unrecognized intent of the subject voice.
2. The method of claim 1, wherein determining the intention of the object to be screened using the semantic neural network model according to the session text of the object to be screened comprises:
when the intention of the object to be screened does not belong to any one of the positive intention, the negative intention, the object busy intention, or the object voice unrecognized intention, processing the session text of the object to be screened using the semantic neural network model and a knowledge base,
wherein the knowledge base is a semantic information knowledge base associated with the semantic neural network model, and a knowledge point is one or more pieces of semantic information included in the knowledge base;
when a knowledge point matching the session text of the object to be screened exists in the knowledge base, determining that the intention of the object to be screened is the high intention,
when no knowledge point matching the session text of the object to be screened exists in the knowledge base, determining that the intention of the object to be screened is the low intention;
and determining the processing mode of the object to be screened according to the intention of the object to be screened.
3. The method of claim 2, wherein processing the session text of the object to be screened using the semantic neural network model to determine whether the matching knowledge point for the session text of the object to be screened exists in the knowledge base comprises:
calculating an embedded embedding feature of the session text of the object to be screened;
acquiring embedded features of a plurality of knowledge points in the knowledge base;
calculating the similarity between the embedded features of the session text of the object to be screened and the embedded features of the knowledge points according to the embedded features of the session text of the object to be screened and the embedded features of the knowledge points;
and determining, according to the similarity, whether a matched knowledge point for the session text of the object to be screened exists in the knowledge base, wherein the matched knowledge point is the knowledge point, among the plurality of knowledge points, whose similarity is greater than a first threshold and is the highest.
4. A method according to any one of claims 2-3, wherein one or more industry templates are included in the knowledge base, each industry template including one or more of the knowledge points.
5. The method of any of claims 3-4, wherein the embedded features of the plurality of knowledge points in the knowledge base are calculated offline.
6. The method according to any one of claims 1-5, wherein obtaining the session text of the object to be screened according to the audio stream data of the object to be screened comprises:
sending an identification request to a target recognition engine, wherein the identification request carries the audio stream data of the object to be screened, and the audio stream data of the object to be screened comes from a call module;
and processing the audio stream data of the object to be screened by using the target recognition engine to obtain the session text of the object to be screened.
7. The method of claim 6, wherein the identification request further comprises a first control parameter, and the use of the first control parameter comprises one or more of the following: indicating the sensitivity with which the target recognition engine performs a recognition operation on the audio stream data, starting the recognition operation, or suspending the recognition operation.
8. The method according to any of claims 6-7, wherein the header of the identification request further comprises a second control parameter, the use of the second control parameter comprising one or more of the following: and controlling the tone color of the audio stream data in the identification operation, controlling the volume of the audio stream data in the identification operation, and controlling the playing speed of the audio stream data in the identification operation.
9. The method according to any one of claims 1-8, wherein obtaining the session text of the object to be screened according to the audio stream data of the object to be screened comprises:
Carrying out noise reduction treatment on the audio stream data of the object to be screened to obtain the audio stream data of the object to be screened after noise reduction;
and converting the noise-reduced audio stream data of the object to be screened into a conversation text of the object to be screened.
10. The method according to any one of claims 1-9, wherein the semantic neural network model is a Bidirectional Encoder Representations from Transformers (BERT) model, the method further comprising:
acquiring a first conversation text, wherein the first conversation text comprises X sentences, and X is a positive integer;
generating a second conversation text according to the first conversation text, wherein the second conversation text comprises N sentences, and N is a positive integer;
extracting feature vectors of the N sentences according to the second conversation text;
and training the semantic neural network model according to the feature vectors of the N sentences.
11. The method according to any one of claims 1-10, wherein prior to obtaining the audio stream data of the object to be screened, the method further comprises:
acquiring customer profile information for a plurality of objects, the customer profile information comprising one or more of: the name of the object, the contact way of the object, the address information of the object, the text data of the object and customer service, or the willingness level of the object;
And screening the client information of the objects to determine the object to be screened.
12. The method of claim 11, wherein screening the customer profile information for the plurality of objects to determine the object to be screened comprises:
acquiring configuration information, wherein the configuration information comprises: source of session configuration information, or customer profile information;
and screening the plurality of objects based on the configuration information and the time period of making calls to the plurality of objects, and determining the object to be screened.
13. An advertising cue cleaning device, comprising:
the receiving and transmitting module is used for acquiring audio stream data of the object to be screened;
the processing module is used for obtaining the session text of the object to be screened according to the audio stream data of the object to be screened;
the processing module is further configured to determine, according to a session text of the object to be screened, an intention of the object to be screened, which is an intention of the object to be screened to a target advertisement, using a semantic neural network model;
the processing module is further configured to determine a processing manner of the object to be screened according to the intention of the object to be screened, where the processing manner of the object to be screened includes: when the intention of the object to be screened is high intention, increasing the target advertisement putting proportion of the object to be screened; when the intention of the object to be screened is low intention, reducing the target advertisement putting proportion of the object to be screened, wherein the high intention comprises the following steps: affirmative intent, the low intent including: negative intent, busy intent, or unrecognized intent of the subject voice.
14. A computer device, comprising: a memory, a processor, and a bus system;
wherein the memory is used for storing programs;
the processor being for executing a program in the memory, the processor being for executing the method of any one of claims 1 to 12 according to instructions in program code;
the bus system is used for connecting the memory and the processor so as to enable the memory and the processor to communicate.
15. A computer readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 12.
16. A computer program product comprising a computer program and instructions which, when executed by a processor, implement the method of any one of claims 1 to 12.
CN202210189507.0A 2022-02-28 2022-02-28 Advertisement delivery clue cleaning method and related device Pending CN116720890A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210189507.0A CN116720890A (en) 2022-02-28 2022-02-28 Advertisement delivery clue cleaning method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210189507.0A CN116720890A (en) 2022-02-28 2022-02-28 Advertisement delivery clue cleaning method and related device

Publications (1)

Publication Number Publication Date
CN116720890A true CN116720890A (en) 2023-09-08

Family

ID=87873916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210189507.0A Pending CN116720890A (en) 2022-02-28 2022-02-28 Advertisement delivery clue cleaning method and related device

Country Status (1)

Country Link
CN (1) CN116720890A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117314082A (en) * 2023-09-26 2023-12-29 选房宝(珠海横琴)数字科技有限公司 Client follow-up method, system, equipment and medium based on client clues

Similar Documents

Publication Publication Date Title
US11683279B2 (en) System and method of using conversational agent to collect information and trigger actions
RU2467394C2 (en) Set of actions and icons for advertising in mobile devices
US9959547B2 (en) Platform for mobile advertising and persistent microtargeting of promotions
KR101217045B1 (en) Critical mass billboard
US20100086107A1 (en) Voice-Recognition Based Advertising
CN109347722B (en) Interaction system, method, client and background server
US20090198579A1 (en) Keyword tracking for microtargeting of mobile advertising
CN109302338A (en) Intelligent indicating risk method, mobile terminal and computer readable storage medium
CN107657007B (en) Information pushing method, device, terminal, readable storage medium and system
CN110033294A (en) A kind of determination method of business score value, business score value determining device and medium
CN105096154A (en) Active providing method of advertising
CN110111153A (en) A kind of bid advertisement placement method, system, medium and electronic equipment
CN103582897A (en) Displaying phone number on the landing page based on keywords
US11509610B2 (en) Real-time messaging platform with enhanced privacy
KR20170101416A (en) Method for providing funding and consulting information related with entertainment by crowd funding system
CN116720890A (en) Advertisement delivery clue cleaning method and related device
CN111787042B (en) Method and device for pushing information
CN109522543B (en) Information processing method and terminal equipment
KR20130116646A (en) System and method for operating of sponser talk service
US20150142568A1 (en) Method for enabling a mobile device to generate message feedback, and advertising server implementing the same cross-reference to related application
CN113413590A (en) Information verification method and device, computer equipment and storage medium
CN118193884A (en) Page processing method, device and system, computing device, computer storage medium and computer program product
CN114078023A (en) Data processing method and related equipment
CN116227814A (en) Information processing method, intelligent terminal and storage medium
CN115330206A (en) Service scale-based order dispatching method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination