CN107786896A - Information pushing method, apparatus, terminal device and storage medium - Google Patents
Information pushing method, apparatus, terminal device and storage medium
- Publication number
- CN107786896A CN107786896A CN201711033003.5A CN201711033003A CN107786896A CN 107786896 A CN107786896 A CN 107786896A CN 201711033003 A CN201711033003 A CN 201711033003A CN 107786896 A CN107786896 A CN 107786896A
- Authority
- CN
- China
- Prior art keywords
- viewer
- facial image
- live broadcast
- target information
- subjective attitude
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0207—Discounts or incentives, e.g. coupons or rebates
- G06Q30/0239—Online discounts or incentives
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0277—Online advertisement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/47815—Electronic shopping
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4788—Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
Abstract
The embodiments of the present application disclose an information pushing method, apparatus, terminal device and storage medium. The method includes: acquiring a facial image of a viewer during a live broadcast; inputting the facial image into a subjective-attitude determination model to obtain the viewer's subjective attitude toward the current live content; and obtaining target information determined according to the subjective attitude and the current live content, and pushing the target information to the viewer. With the information pushing method provided by the embodiments of the present application, the viewer's subjective attitude toward the live content is obtained from the facial image and target information is pushed according to that attitude, improving the convenience of pushing information during a live broadcast.
Description
Technical field
The embodiments of the present application relate to Internet technology, and in particular to an information pushing method, apparatus, terminal device and storage medium.
Background
With the rapid development of network technology, mobile live streaming has become one of the current social hotspots. Using live-streaming software on a terminal device, a user can broadcast live to introduce a product, share the tourist attraction they are currently visiting, show the cooking process of a dish, and so on.
In the related art, while broadcasting with a terminal device the user has to read the comments in the comment area to learn the viewers' subjective attitude toward the live content. Checking the comments prevents the user from concentrating on the broadcast, which is extremely inconvenient.
Summary
The embodiments of the present application provide an information pushing method, apparatus, terminal device and storage medium, which can improve the convenience of pushing information during a live broadcast.
In a first aspect, an embodiment of the present application provides an information pushing method, including:
acquiring a facial image of a viewer during a live broadcast;
inputting the facial image into a subjective-attitude determination model to obtain the viewer's subjective attitude toward the current live content; and
obtaining target information determined according to the subjective attitude and the current live content, and pushing the target information to the viewer.
In a second aspect, an embodiment of the present application further provides an information pushing apparatus, including:
a facial image acquisition module, configured to acquire a facial image of a viewer during a live broadcast;
a subjective attitude acquisition module, configured to input the facial image into a subjective-attitude determination model to obtain the viewer's subjective attitude toward the current live content; and
a target information pushing module, configured to obtain target information determined according to the subjective attitude and the current live content, and push the target information to the viewer.
In a third aspect, an embodiment of the present application further provides a terminal device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the pushing method described in the first aspect.
In a fourth aspect, an embodiment of the present application further provides a storage medium on which a computer program is stored, where the program, when executed by a processor, implements the pushing method described in the first aspect.
In the embodiments of the present application, a facial image of a viewer is acquired during a live broadcast; the facial image is then input into a subjective-attitude determination model to obtain the viewer's subjective attitude toward the current live content; finally, target information determined according to the subjective attitude and the current live content is obtained and pushed to the viewer. With the information pushing method provided by the embodiments of the present application, the viewer's subjective attitude toward the live content is obtained from the facial image and target information is pushed according to that attitude, improving the convenience of pushing information during a live broadcast.
Brief description of the drawings
Fig. 1 is a flowchart of an information pushing method in an embodiment of the present application;
Fig. 2 is a flowchart of another information pushing method in an embodiment of the present application;
Fig. 3 is a flowchart of another information pushing method in an embodiment of the present application;
Fig. 4 is a flowchart of another information pushing method in an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an information pushing apparatus in an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a terminal device in an embodiment of the present application;
Fig. 7 is a schematic structural diagram of another terminal device in an embodiment of the present application.
Detailed description
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present application rather than the entire structure.
Fig. 1 is a flowchart of an information pushing method provided by an embodiment of the present application. This embodiment is applicable to pushing information to viewers during a live broadcast. The method may be performed by an information pushing apparatus, which may be integrated into a terminal device such as a mobile phone or tablet computer, or into a server. As shown in Fig. 1, the method includes the following steps.
Step 110: acquire a facial image of a viewer during a live broadcast.
Here, the live broadcast may be a network live stream or a live television broadcast. Live broadcasts can be divided into text-and-picture broadcasts and video broadcasts: live television is mainly video, whereas network live streaming covers both. This embodiment is described mainly with respect to network live streaming. In this application scenario, the live broadcast may be a user broadcasting on a certain topic using live-streaming software installed on a terminal device. The topic may be any lawful, positive and healthy content, such as the cooking process of a dish, clothing-matching tips, an introduction to a tourist attraction, or an explanation of history. The viewers may be all viewers watching the current broadcast. The facial image may be a static facial image consisting of one frame, or a dynamic facial image consisting of multiple consecutive frames.
Optionally, the facial image of a viewer may be acquired during the live broadcast in either of two ways: the terminal device playing the live content starts an image acquisition device and captures the viewer's facial image; or the live-streaming server sends an instruction to capture a facial image to the terminal device playing the live content and receives the facial image sent back by the terminal device. In this embodiment, after the facial image is acquired, it needs to be preprocessed so that it is suitable for subsequent operations. Preprocessing may proceed as follows: first, perform face detection on each frame of the facial image to determine the face region; then detect key feature points within the face region and calibrate the facial image based on the detected key feature points; finally, edit the calibrated facial image according to a preset template to obtain a facial image conforming to that template. Face detection may use an existing face-detection algorithm to scan the input facial image until the face region is determined. The key feature points of a face may include the eyes, eyebrows, nose, mouth and facial contour. The preset template includes information such as the image size and pixels.
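The detect-calibrate-crop pipeline above can be sketched as follows. This is a minimal illustration, not the patent's implementation: a real system would use a face-detection library, whereas here each stage is a stub over a simple frame description, and the `Frame` type, `TEMPLATE_SIZE` value and function names are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    width: int
    height: int
    # (x, y, w, h) of the detected face region, standing in for a detector's output
    face_box: tuple

TEMPLATE_SIZE = (128, 128)  # assumed preset template: output size in pixels

def detect_face(frame: Frame) -> tuple:
    """Stand-in for a face-detection algorithm scanning the input image."""
    return frame.face_box

def calibrate(box: tuple) -> tuple:
    """Stand-in for alignment against key feature points (eyes, nose, mouth);
    here we only square the box, as alignment would normalise geometry."""
    x, y, w, h = box
    side = max(w, h)
    return (x, y, side, side)

def preprocess(frame: Frame) -> dict:
    """Detect, calibrate, then crop/resize to the preset template."""
    box = calibrate(detect_face(frame))
    return {"region": box, "size": TEMPLATE_SIZE}

sample = Frame(width=640, height=480, face_box=(100, 80, 90, 110))
result = preprocess(sample)
```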
Step 120: input the facial image into the subjective-attitude determination model to obtain the viewer's subjective attitude toward the current live content.
Here, a subjective attitude is the psychological inclination an individual holds toward a specific matter, and this inclination is reflected in the face. A subjective attitude may cover various moods of the individual, such as liking, disliking and indifference. In this embodiment, the subjective-attitude determination model can determine the user's subjective attitude toward the current live content from the facial image captured while the viewer watches the broadcast. The subjective-attitude determination model may be a model obtained by continuous training on a facial-image sample set based on a set machine learning algorithm.
Optionally, inputting the facial image into the subjective-attitude determination model to obtain the viewer's subjective attitude toward the current live content may proceed as follows: first input the facial image into the model to obtain the expression information corresponding to the facial image, then obtain the viewer's subjective attitude toward the current live content from that expression information.
Here, the expression information may include happy, sad, surprised, afraid, angry and so on. In this embodiment, the subjective-attitude determination model, trained on a labelled face sample set, can determine the viewer's subjective attitude toward the current live content. For example, if the expression information corresponding to a facial image is excitement, the viewer's subjective attitude toward the live content is liking. The model in this embodiment has the ability to determine a viewer's subjective attitude toward the live content: a facial image input into the model yields, after expression recognition, the expression information "excited", from which the model concludes that the user likes the current live content. As another example, suppose the current anchor is introducing a cosmetic product and says it is remarkably effective for whitening and wrinkle resistance, and some of the viewers happen to have such skin problems. On hearing the anchor's introduction they show great interest; their facial images are captured and input into the subjective-attitude determination model, which yields the expression information "pleasantly surprised" and, from that expression, judges that these viewers like the current live content.
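The expression-to-attitude step described above can be sketched as a simple lookup. In the patent this mapping is produced by the trained model rather than hard-coded; the label set and table below are illustrative assumptions only.

```python
# Assumed expression labels mapped to the three attitudes named in the text
# (liking, disliking, indifference). Purely illustrative.
EXPRESSION_TO_ATTITUDE = {
    "happy": "like",
    "excited": "like",
    "pleasantly surprised": "like",  # the cosmetics example above
    "neutral": "indifferent",
    "sad": "dislike",
    "angry": "dislike",
    "afraid": "dislike",
}

def subjective_attitude(expression: str) -> str:
    """Fall back to indifference for expressions outside the assumed label set."""
    return EXPRESSION_TO_ATTITUDE.get(expression, "indifferent")
```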
Step 130: obtain the target information determined according to the subjective attitude and the current live content, and push the target information to the viewer.
Here, the target information may be information related to the live content, for example a purchase link or discount information for the product the anchor is presenting. Pushing the target information to viewers may mean pushing it to the viewers' terminal devices.
In this embodiment, obtaining the target information determined according to the subjective attitude and the current live content and pushing it to viewers may proceed as follows: first obtain the target information related to the live content, then push it to the viewers whose subjective attitude is liking. Optionally, the target information is not pushed to viewers whose subjective attitude is disliking or indifference.
Optionally, if the pushing method of this embodiment is performed by a terminal device, the push process may be: the terminal device playing the live content starts an image acquisition device; after capturing the viewer's facial image, it inputs the image into the subjective-attitude determination model to obtain the viewer's subjective attitude toward the current live content; if that attitude is liking, the terminal device sends a request for the target information to the live-streaming server, and after receiving the target information, pushes it onto the current live interface so that the viewer can see it. Optionally, if the pushing method of this embodiment is performed by a server, the push process may be: the live-streaming server sends an instruction to capture facial images to the terminal devices playing the live content, receives the facial images sent back by all of those terminal devices, inputs the received facial images one by one into the subjective-attitude determination model to obtain each viewer's subjective attitude toward the current live content, then obtains the target information related to the live content and pushes it to the terminal devices of the viewers whose subjective attitude is liking.
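The server-side selection just described can be sketched as follows. `infer_attitude` is a stand-in for the trained subjective-attitude determination model, keyed on fake labels; all names and the data format are assumptions for illustration.

```python
def infer_attitude(face_image: str) -> str:
    """Stand-in for the subjective-attitude determination model."""
    return {"smiling": "like", "frowning": "dislike"}.get(face_image, "indifferent")

def select_push_targets(viewer_images: dict) -> list:
    """viewer_images maps viewer id -> captured face image; return only the
    viewers whose inferred attitude is liking, per the push rule above."""
    return [vid for vid, img in viewer_images.items()
            if infer_attitude(img) == "like"]

targets = select_push_targets({"v1": "smiling", "v2": "frowning", "v3": "neutral"})
```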
Optionally, before obtaining the target information related to the live content, the method further includes: receiving a target-information identification code sent by the anchor, and storing the identification code at a preset location.
Here, the identification code may take the form of a QR code or a barcode, and the preset location may be a memory in the live-streaming server. In this application scenario, before the broadcast begins the anchor uploads the target-information identification code to the live-streaming server, and the server stores the identification code at the preset location.
Accordingly, obtaining the target information related to the live content may be implemented by parsing the target-information identification code. The parsing process may be: first scan the target-information identification code, then parse the scanned code to obtain the target information related to the live content.
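As a hedged sketch of the parsing step: actually decoding a QR code or barcode requires an imaging library, so the code below assumes the scan has already yielded a text payload and shows a hypothetical "key=value;..." format carrying a purchase link and discount information. The format and field names are inventions for illustration, not the patent's encoding.

```python
def parse_identifier(payload: str) -> dict:
    """Parse an assumed 'key=value;key=value' payload from a scanned code."""
    info = {}
    for field in payload.split(";"):
        if "=" in field:
            key, value = field.split("=", 1)
            info[key.strip()] = value.strip()
    return info

# Hypothetical payload: a purchase link plus a discount, as the text suggests.
target = parse_identifier("link=https://example.com/item/42;discount=10%")
```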
In the technical solution of this embodiment, a facial image of a viewer is acquired during a live broadcast; the facial image is input into the subjective-attitude determination model to obtain the viewer's subjective attitude toward the current live content; finally, the target information determined according to the subjective attitude and the current live content is obtained and pushed to the viewer. With the information pushing method provided by the embodiment of the present application, the viewer's subjective attitude toward the live content is obtained from the facial image and target information is pushed according to that attitude, improving the convenience of pushing information during a live broadcast.
Optionally, acquiring the facial image of a viewer during the live broadcast may also be implemented as follows: during the live broadcast, acquire at least one frame of the viewer's facial image at every preset time interval.
Here, the preset time may be set to a specific value, for example any value between 5 and 10 minutes. As an example, suppose the preset time is set to 8 minutes; then during the broadcast at least one frame of the current viewers' facial images is acquired every 8 minutes. The advantage of this is that the viewers' facial images need not be acquired continuously, which reduces both the data-processing load and the difficulty of data transmission.
Optionally, acquiring the facial image of a viewer during the live broadcast may also be implemented as follows: perform semantic recognition on the live content during the broadcast, and when a preset keyword appears in the live content, acquire at least one frame of the viewer's facial image.
Here, the preset keyword may be a word related to subjective attitude, such as "interested" or "like". Semantic recognition of the live content may be performed on what the anchor says using an existing semantic-recognition technique. In this embodiment, when a preset keyword is recognised in the anchor's speech, at least one frame of the viewer's facial image is acquired. For example, while the anchor is broadcasting on a live platform, the live content contains the sentence "If everybody is interested in this product...". The preset keyword "interested" appears in this sentence, so the terminal device is instructed to acquire the viewers' facial images.
In the technical solution of this embodiment, at least one frame of the viewer's facial image is acquired when a preset keyword appears in the live content. The viewers' facial images need not be acquired continuously, which reduces both the data-processing load and the difficulty of data transmission.
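The keyword trigger can be sketched as a substring check over the recognised text. This assumes the speech-recognition output is already available as a string; the keyword list is illustrative.

```python
PRESET_KEYWORDS = ("interested", "like")  # assumed attitude-related keywords

def contains_keyword(recognized_text: str) -> bool:
    """True when any preset keyword appears in the recognised speech,
    which is the condition for capturing viewers' face frames."""
    text = recognized_text.lower()
    return any(kw in text for kw in PRESET_KEYWORDS)

triggered = contains_keyword("If everybody is interested in this product...")
```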
Fig. 2 is a flowchart of another information pushing method provided by an embodiment of the present application. As shown in Fig. 2, the method includes the following steps.
Step 210: obtain a first facial image set of viewers watching live broadcasts.
In this embodiment, the first facial image set may be obtained by selecting, as live samples, the broadcasts of the 10 most popular anchors on some live platform, and capturing a large number of viewers' facial images while these anchors broadcast.
Step 220: label the first facial image set according to the viewers' subjective attitudes toward the live content, to obtain a first facial image sample set.
In this embodiment, the labelling may be done in several ways. Each viewer watching a broadcast may label the facial images captured of them according to their own subjective attitude toward the live content; for example, facial images of 10,000 viewers are captured while they watch broadcasts, and each of the 10,000 viewers labels their own facial images according to their attitude. Alternatively, the collected first facial image set may be analysed manually: the viewer's subjective attitude toward the live content is inferred from the expression in each facial image and the image is then labelled accordingly. Alternatively, for live samples whose approval rate exceeds a first threshold, the facial images of all viewers who watched that broadcast are uniformly labelled "like", and for live samples whose approval rate is below a second threshold, the facial images of all viewers who watched that broadcast are uniformly labelled "dislike". Here, the first threshold may be any value between 90% and 100%, and the second threshold may be any value between 0 and 10%.
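The coarse whole-broadcast labelling rule can be sketched as follows. The 0.95 and 0.05 values are one choice within the 90-100% and 0-10% ranges the text allows; broadcasts between the thresholds are left unlabelled here, which is an assumption about how ambiguous samples are handled.

```python
FIRST_THRESHOLD = 0.95   # assumed value within the 90-100% range
SECOND_THRESHOLD = 0.05  # assumed value within the 0-10% range

def label_broadcast(approval_rate: float, num_faces: int):
    """Uniformly label every captured face from one sample broadcast."""
    if approval_rate > FIRST_THRESHOLD:
        return ["like"] * num_faces
    if approval_rate < SECOND_THRESHOLD:
        return ["dislike"] * num_faces
    return None  # ambiguous broadcasts are skipped under this scheme

labels = label_broadcast(0.97, 3)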
Step 230: train the subjective-attitude determination model based on a set machine learning algorithm according to the first facial image sample set.
In this embodiment, after the first facial image sample set is obtained, the subjective-attitude determination model is trained based on the set machine learning algorithm. During training, the parameters of the algorithm are continuously adjusted so that the model can accurately identify subjective attitudes; that is, after a facial image is input, the subjective attitude in the output matches the labelled information. Once the subjective-attitude determination model has been successfully trained, it can be used to identify viewers' subjective attitudes toward live content.
Step 240: acquire a facial image of a viewer during a live broadcast.
Step 250: input the facial image into the subjective-attitude determination model to obtain the viewer's subjective attitude toward the current live content.
Step 260: obtain the target information determined according to the subjective attitude and the current live content, and push the target information to the viewer.
In the technical solution of this embodiment, a first facial image set of viewers watching live broadcasts is obtained; the set is labelled according to the viewers' subjective attitudes toward the live content to obtain a first facial image sample set; and the subjective-attitude determination model is trained based on a set machine learning algorithm according to that sample set. Training the model on a collected image set can improve the accuracy with which it determines subjective attitudes.
Fig. 3 is a flowchart of another information pushing method provided by an embodiment of the present application. As shown in Fig. 3, the method includes the following steps.
Step 310: obtain a second facial image set of viewers watching live broadcasts.
In this embodiment, the second facial image set is obtained in a similar way to the first facial image set in the embodiment above, which is not repeated here.
Step 320: label the second facial image set according to expression information and the viewers' subjective attitudes toward the live content, to obtain a second facial image sample set.
Here, labelling the second facial image set according to expression information may be done by inputting each facial image in the set into an existing expression-recognition model to obtain the expression information corresponding to each image, then attaching the obtained expression information to the corresponding facial image. Labelling the second facial image set according to the viewers' subjective attitudes toward the live content is similar to the labelling of the first facial image set in the embodiment above and is not repeated here. Labelling the second facial image set according to both the expression information and the viewers' subjective attitudes toward the live content may take the form of "expression - subjective attitude" pairs.
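Building the "expression - subjective attitude" pairs can be sketched as follows. `recognize_expression` is a stub standing in for the existing expression-recognition model; the image names and labels are illustrative assumptions.

```python
def recognize_expression(face_image: str) -> str:
    """Stand-in for an existing expression-recognition model."""
    return {"img_a": "happy", "img_b": "sad"}.get(face_image, "neutral")

def build_sample(face_image: str, viewer_attitude: str) -> dict:
    """Tag one face image with its (expression, subjective attitude) pair."""
    return {
        "image": face_image,
        "label": (recognize_expression(face_image), viewer_attitude),
    }

sample_set = [build_sample("img_a", "like"), build_sample("img_b", "dislike")]
```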
Step 330: train the subjective-attitude determination model based on a set machine learning algorithm according to the second facial image sample set.
In this embodiment, after the second facial image sample set is obtained, the subjective-attitude determination model is trained based on the set machine learning algorithm. During training, the parameters of the algorithm are continuously adjusted so that the model can accurately determine viewers' subjective attitudes; that is, after a facial image is input, the subjective attitude in the output matches the labelled information. Once the subjective-attitude determination model has been successfully trained, it can be used to determine viewers' subjective attitudes toward live content.
Step 340: input the facial image into the subjective-attitude determination model to obtain the expression information corresponding to the facial image.
Step 350: obtain the viewer's subjective attitude toward the current live content according to the expression information.
In the technical solution of this embodiment, a second facial image set of viewers watching live broadcasts is obtained; the set is labelled according to expression information and the viewers' subjective attitudes toward the live content to obtain a second facial image sample set; and the subjective-attitude determination model is trained based on a set machine learning algorithm according to that sample set. Training the model on a collected image set can improve the accuracy with which it determines subjective attitudes.
Fig. 4 is a flowchart of another information pushing method provided by an embodiment of the present application, as a further elaboration of the embodiments above. As shown in Fig. 4, the method includes the following steps.
Step 410: obtain a second facial image set of viewers watching live broadcasts.
Step 420: label the second facial image set according to expression information and the viewers' subjective attitudes toward the live content, to obtain a second facial image sample set.
Step 430: train the subjective-attitude determination model based on a set machine learning algorithm according to the second facial image sample set.
Step 440: perform semantic recognition on the live content during a live broadcast, and when a preset keyword appears in the live content, acquire at least one frame of a viewer's facial image.
Step 450: input the facial image into the subjective-attitude determination model to obtain the expression information corresponding to the facial image.
Step 460: obtain the viewer's subjective attitude toward the current live content according to the expression information.
Step 470: obtain the target information related to the live content.
Step 480: push the target information to the viewers whose subjective attitude is liking.
Fig. 5 is a schematic structural diagram of an information pushing apparatus provided by an embodiment of the present application. As shown in Fig. 5, the apparatus includes a face image acquisition module 510, a subjective attitude acquisition module 520, and a target information pushing module 530.
The face image acquisition module 510 is configured to obtain viewers' face images during the live broadcast.
The subjective attitude acquisition module 520 is configured to input the face image into the subjective attitude determination model, obtaining the viewer's subjective attitude toward the current live content.
The target information pushing module 530 is configured to obtain the target information determined according to the subjective attitude and the current live content, and to push the target information to the viewer.
Optionally, the face image acquisition module 510 is further configured so that:
the terminal device playing the live content starts an image capture device and collects the viewer's face image; or
the live streaming server sends an instruction to collect face images to the terminal device playing the live content, and receives the face images returned by the terminal device.
Optionally, the face image acquisition module 510 is further configured to:
obtain at least one frame of the viewer's face image at preset intervals during the live broadcast; or
perform semantic recognition on the live content during the broadcast and, when a preset keyword appears in the live content, obtain at least one frame of the viewer's face image.
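The two acquisition strategies can be compared with a small, deterministic sketch. To keep it testable, broadcast time is modeled as a list of timestamps and speech recognition as pre-tokenized subtitle lines; all function names are illustrative, not from the patent.

```python
def interval_capture_times(timestamps, preset_interval):
    """Fixed-interval strategy: given sorted broadcast timestamps (seconds),
    return those at which a face frame would be captured, one capture per
    elapsed preset interval."""
    captured, next_due = [], 0.0
    for t in timestamps:
        if t >= next_due:
            captured.append(t)
            next_due = t + preset_interval
    return captured

def keyword_capture_times(subtitles, keywords):
    """Keyword-triggered strategy: capture whenever a preset keyword appears
    in the recognized live content. `subtitles` is a list of (time, text)."""
    return [t for t, text in subtitles if any(kw in text for kw in keywords)]
```

With `preset_interval=2`, timestamps `[0, 1, 2, 3, 4, 5]` yield captures at `[0, 2, 4]`; the keyword strategy instead fires only on lines that contain a trigger word.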
Optionally, the apparatus further includes:
a first face image set acquisition module, configured to obtain a first face image set captured while viewers watch the live broadcast;
a first face image sample set acquisition module, configured to label the first face image set according to the viewers' subjective attitudes toward the live content, obtaining a first face image sample set; and
a first model training module, configured to train the subjective attitude determination model on the first face image sample set using a preset machine learning algorithm.
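The patent does not fix a particular machine learning algorithm. As one concrete stand-in, the labeling-and-training step can be sketched with a nearest-centroid classifier over precomputed face feature vectors; the feature representation, the labels, and the classifier choice are all assumptions for illustration.

```python
from collections import defaultdict

def train_attitude_model(samples):
    """Train on a labeled face image sample set. `samples` is a list of
    (feature_vector, attitude_label) pairs; returns per-label centroids.
    A nearest-centroid classifier stands in for the unspecified
    'preset machine learning algorithm'."""
    sums, counts = defaultdict(list), defaultdict(int)
    for vec, label in samples:
        if not sums[label]:
            sums[label] = [0.0] * len(vec)
        sums[label] = [a + b for a, b in zip(sums[label], vec)]
        counts[label] += 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def predict_attitude(model, vec):
    """Assign a feature vector to the label of the nearest centroid
    (squared Euclidean distance)."""
    sq_dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, vec))
    return min(model, key=lambda lab: sq_dist(model[lab]))
```

In practice a convolutional network trained on the labeled frames would replace both functions; the point here is only the shape of the train/predict interface.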
Optionally, the subjective attitude acquisition module 520 is further configured to:
input the face image into the subjective attitude determination model, obtaining the expression information corresponding to the face image; and
obtain the viewer's subjective attitude toward the current live content according to the expression information.
Optionally, the apparatus further includes:
a second face image set acquisition module, configured to obtain a second face image set captured while viewers watch the live broadcast;
a second face image sample set acquisition module, configured to label the second face image set according to expression information and the viewers' subjective attitudes toward the live content, obtaining a second face image sample set; and
a second model training module, configured to train the subjective attitude determination model on the second face image sample set using a preset machine learning algorithm.
Optionally, the target information pushing module 530 is further configured to:
obtain the target information related to the live content; and
push the target information to the viewers whose subjective attitude is "like".
Optionally, the apparatus further includes:
a target information identification code receiving module, configured to receive the target information identification code sent by the streamer and store it at a preset position.
Correspondingly, the target information pushing module 530 is further configured to parse the target information identification code to obtain the target information related to the live content.
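A minimal sketch of the identification-code path, assuming the "preset position" is a key-value store and the code format is hypothetical (the patent does not specify either):

```python
# Hypothetical "preset position": a table mapping identification codes to
# the target information related to the live content.
TARGET_INFO_STORE = {
    "SKU-1001": {"title": "Hand cream", "url": "https://example.com/item/1001"},
}

def parse_identification_code(code, store=TARGET_INFO_STORE):
    """Resolve a streamer-supplied identification code into the target
    information related to the live content; None when the code is unknown.
    Normalization (strip/upper) is an assumed convention."""
    return store.get(code.strip().upper())
```

A real deployment would back the store with the live streaming server's database rather than an in-process dict.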
Fig. 6 is a schematic structural diagram of a terminal device provided by an embodiment of the present application. As shown in Fig. 6, the terminal device 600 includes a memory 601 and a processor 602, where the processor 602 is configured to perform the following steps:
obtain viewers' face images during the live broadcast;
input the face image into the subjective attitude determination model, obtaining the viewer's subjective attitude toward the current live content; and
obtain the target information determined according to the subjective attitude and the current live content, and push the target information to the viewer.
Fig. 7 is a schematic structural diagram of another terminal device provided by an embodiment of the present application. As shown in Fig. 7, the terminal may include: a housing (not shown), a memory 601, a central processing unit (Central Processing Unit, CPU) 602 (also called the processor, hereinafter referred to as CPU), a computer program stored in the memory 601 and runnable on the processor 602, a circuit board (not shown), and a power supply circuit (not shown). The circuit board is arranged inside the space enclosed by the housing; the CPU 602 and the memory 601 are mounted on the circuit board; the power supply circuit supplies power to each circuit or device of the terminal; the memory 601 stores executable program code; and the CPU 602 runs the program corresponding to the executable program code by reading the executable program code stored in the memory 601.
The terminal further includes: a peripheral interface 603, an RF (Radio Frequency) circuit 605, an audio circuit 606, a loudspeaker 611, a power management chip 608, an input/output (I/O) subsystem 609, a touch screen 612, other input/control devices 610, and an external port 604. These components communicate through one or more communication buses or signal lines 607.
It should be understood that the illustrated terminal device 600 is merely one example of a terminal; the terminal device 600 may have more or fewer components than shown in the figure, may combine two or more components, or may have a different component configuration. The various components shown in the figure may be implemented in hardware, in software, or in a combination of hardware and software, including one or more signal processing and/or application-specific integrated circuits.
The terminal device for pushing information provided by this embodiment is described in detail below, taking a smartphone as an example.
The memory 601 may be accessed by the CPU 602, the peripheral interface 603, and so on. The memory 601 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other volatile solid-state storage devices.
The peripheral interface 603 may connect the input and output peripherals of the device to the CPU 602 and the memory 601.
The I/O subsystem 609 may connect the input/output peripherals of the device, such as the touch screen 612 and the other input/control devices 610, to the peripheral interface 603. The I/O subsystem 609 may include a display controller 6091 and one or more input controllers 6092 for controlling the other input/control devices 610. The one or more input controllers 6092 receive electrical signals from, or send electrical signals to, the other input/control devices 610, which may include physical buttons (press buttons, rocker buttons, etc.), dials, slide switches, joysticks, and click wheels. It is worth noting that an input controller 6092 may be connected to any of the following: a keyboard, an infrared port, a USB interface, or a pointing device such as a mouse.
By working principle and by the medium used to transmit information, the touch screen 612 may be resistive, capacitive, infrared, or surface acoustic wave. By mounting method, the touch screen 612 may be external, built-in, or integrated. By technical principle, the touch screen 612 may be a vector pressure sensing touch screen, a resistive touch screen, a capacitive touch screen, an infrared touch screen, or a surface acoustic wave touch screen.
The touch screen 612 is the input interface and output interface between the terminal and the user; it displays visual output to the user, which may include graphics, text, icons, video, and so on. Optionally, the touch screen 612 sends the electrical signal triggered by the user on the screen (such as an electrical signal of the contact surface) to the processor 602.
The display controller 6091 in the I/O subsystem 609 receives electrical signals from, or sends electrical signals to, the touch screen 612. The touch screen 612 detects contact on the screen, and the display controller 6091 converts the detected contact into interaction with the user interface objects displayed on the touch screen 612, thereby realizing human-computer interaction. The user interface objects displayed on the touch screen 612 may be icons for running games, icons for connecting to the corresponding network, and so on. It is worth noting that the device may also include a light mouse, which is a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by the touch screen.
The RF circuit 605 is mainly used to establish communication between the terminal and the wireless network (i.e., the network side), realizing data reception and transmission between the terminal and the wireless network, such as sending and receiving short messages and e-mails.
The audio circuit 606 is mainly used to receive audio data from the peripheral interface 603, convert the audio data into an electrical signal, and send the electrical signal to the loudspeaker 611.
The loudspeaker 611 is used to restore the voice signal received by the terminal from the wireless network through the RF circuit 605 to sound and play the sound to the user.
The power management chip 608 is used to supply power to, and manage the power of, the hardware connected through the CPU 602, the I/O subsystem, and the peripheral interface.
In this embodiment, the central processing unit 602 is configured to:
obtain viewers' face images during the live broadcast;
input the face image into the subjective attitude determination model, obtaining the viewer's subjective attitude toward the current live content; and
obtain the target information determined according to the subjective attitude and the current live content, and push the target information to the viewer.
Further, obtaining viewers' face images during the live broadcast includes:
the terminal device playing the live content starting an image capture device and collecting the viewer's face image; or
the live streaming server sending an instruction to collect face images to the terminal device playing the live content, and receiving the face images returned by the terminal device.
Further, obtaining viewers' face images during the live broadcast includes:
obtaining at least one frame of the viewer's face image at preset intervals during the live broadcast; or
performing semantic recognition on the live content during the broadcast and, when a preset keyword appears in the live content, obtaining at least one frame of the viewer's face image.
Further, before the face image is input into the subjective attitude determination model, the method further includes:
obtaining a first face image set captured while viewers watch the live broadcast;
labeling the first face image set according to the viewers' subjective attitudes toward the live content, to obtain a first face image sample set; and
training the subjective attitude determination model on the first face image sample set using a preset machine learning algorithm.
Further, inputting the face image into the subjective attitude determination model and obtaining the viewer's subjective attitude toward the current live content includes:
inputting the face image into the subjective attitude determination model, obtaining the expression information corresponding to the face image; and
obtaining the viewer's subjective attitude toward the current live content according to the expression information.
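Since "at least one frame" may in practice mean several frames, the second stage can aggregate per-frame model outputs into one attitude. The expression vocabulary and the mapping below are illustrative assumptions, not taken from the patent.

```python
from collections import Counter

# Hypothetical mapping from model-emitted expression labels to attitudes.
EXPRESSION_TO_ATTITUDE = {"smile": "like", "laugh": "like",
                          "frown": "dislike", "neutral": "neutral"}

def attitude_from_expressions(expressions):
    """Fold the per-frame expression labels into a single subjective
    attitude by majority vote; unrecognized labels count as neutral."""
    votes = Counter(EXPRESSION_TO_ATTITUDE.get(e, "neutral") for e in expressions)
    return votes.most_common(1)[0][0]
```

Majority voting is one simple choice; a weighted or time-decayed aggregation would slot into the same interface.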
Further, before the expression information corresponding to the face image is obtained, the method further includes:
obtaining a second face image set captured while viewers watch the live broadcast;
labeling the second face image set according to expression information and the viewers' subjective attitudes toward the live content, to obtain a second face image sample set; and
training the subjective attitude determination model on the second face image sample set using a preset machine learning algorithm.
Further, obtaining the target information determined according to the subjective attitude and the current live content and pushing the target information to the viewer includes:
obtaining the target information related to the live content; and
pushing the target information to the viewers whose subjective attitude is "like".
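Selecting the push audience from per-viewer attitudes reduces to a filter over attitude labels; the function and parameter names below are illustrative, and the injected `send` callback stands in for whatever delivery channel the platform uses.

```python
def select_push_targets(viewer_attitudes, wanted="like"):
    """Return the ids of viewers whose subjective attitude matches `wanted`,
    i.e. the audience the target information should be pushed to."""
    return sorted(v for v, a in viewer_attitudes.items() if a == wanted)

def push_target_info(viewer_attitudes, target_info, send=lambda v, info: (v, info)):
    """Push `target_info` to each selected viewer via the injected `send`."""
    return [send(v, target_info) for v in select_push_targets(viewer_attitudes)]
```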
Further, before the target information related to the live content is obtained, the method further includes:
receiving the target information identification code sent by the streamer, and storing the target information identification code at a preset position.
Correspondingly, obtaining the target information related to the live content includes:
parsing the target information identification code to obtain the target information related to the live content.
An embodiment of the present application also provides a storage medium containing terminal-device-executable instructions which, when executed by a processor of a terminal device, perform an information pushing method.
The computer storage medium of the embodiments of the present application may adopt any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by, or in connection with, an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it may send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device.
The program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wire, optical cable, RF, or any suitable combination of the above.
Computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Of course, the computer-executable instructions of the storage medium provided by the embodiments of the present application are not limited to the information pushing operations described above; they may also perform related operations in the information pushing method provided by any embodiment of the present application.
The above apparatus can perform the methods provided by all the foregoing embodiments of the present application, and has the corresponding functional modules and beneficial effects for performing those methods. For technical details not described in detail in this embodiment, refer to the methods provided by all the foregoing embodiments of the present application.
Note that the above are only the preferred embodiments of the present application and the technical principles applied. Those skilled in the art will understand that the present application is not limited to the specific embodiments described here; various obvious changes, readjustments, and substitutions can be made by those skilled in the art without departing from the protection scope of the present application. Therefore, although the present application has been described in further detail through the above embodiments, the present application is not limited to the above embodiments; it may also include other equivalent embodiments without departing from the concept of the present application, and the scope of the present application is determined by the scope of the appended claims.
Claims (11)
- 1. An information pushing method, characterized by comprising: obtaining viewers' face images during a live broadcast; inputting the face image into a subjective attitude determination model, obtaining the viewer's subjective attitude toward the current live content; and obtaining target information determined according to the subjective attitude and the current live content, and pushing the target information to the viewer.
- 2. The pushing method according to claim 1, characterized in that obtaining viewers' face images during the live broadcast includes: the terminal device playing the live content starting an image capture device and collecting the viewer's face image; or a live streaming server sending an instruction to collect face images to the terminal device playing the live content, and receiving the face images returned by the terminal device.
- 3. The pushing method according to claim 1, characterized in that obtaining viewers' face images during the live broadcast includes: obtaining at least one frame of the viewer's face image at preset intervals during the live broadcast; or performing semantic recognition on the live content during the broadcast and, when a preset keyword appears in the live content, obtaining at least one frame of the viewer's face image.
- 4. The pushing method according to claim 1, characterized in that before the face image is input into the subjective attitude determination model, the method further comprises: obtaining a first face image set captured while viewers watch the live broadcast; labeling the first face image set according to the viewers' subjective attitudes toward the live content, to obtain a first face image sample set; and training the subjective attitude determination model on the first face image sample set using a preset machine learning algorithm.
- 5. The pushing method according to claim 1, characterized in that inputting the face image into the subjective attitude determination model and obtaining the viewer's subjective attitude toward the current live content includes: inputting the face image into the subjective attitude determination model, obtaining the expression information corresponding to the face image; and obtaining the viewer's subjective attitude toward the current live content according to the expression information.
- 6. The pushing method according to claim 5, characterized in that before the expression information corresponding to the face image is obtained, the method further comprises: obtaining a second face image set captured while viewers watch the live broadcast; labeling the second face image set according to expression information and the viewers' subjective attitudes toward the live content, to obtain a second face image sample set; and training the subjective attitude determination model on the second face image sample set using a preset machine learning algorithm.
- 7. The pushing method according to claim 1, characterized in that obtaining the target information determined according to the subjective attitude and the current live content and pushing the target information to the viewer includes: obtaining the target information related to the live content; and pushing the target information to the viewers whose subjective attitude is "like".
- 8. The pushing method according to claim 7, characterized in that before the target information related to the live content is obtained, the method further comprises: receiving the target information identification code sent by the streamer, and storing the target information identification code at a preset position; correspondingly, obtaining the target information related to the live content includes: parsing the target information identification code to obtain the target information related to the live content.
- 9. An information pushing apparatus, characterized by comprising: a face image acquisition module, configured to obtain viewers' face images during a live broadcast; a subjective attitude acquisition module, configured to input the face image into a subjective attitude determination model, obtaining the viewer's subjective attitude toward the current live content; and a target information pushing module, configured to obtain target information determined according to the subjective attitude and the current live content, and to push the target information to the viewer.
- 10. A terminal device, characterized by comprising a processor, a memory, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the pushing method according to any one of claims 1-8.
- 11. A storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the pushing method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711033003.5A CN107786896A (en) | 2017-10-30 | 2017-10-30 | Method for pushing, device, terminal device and the storage medium of information |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711033003.5A CN107786896A (en) | 2017-10-30 | 2017-10-30 | Method for pushing, device, terminal device and the storage medium of information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107786896A | 2018-03-09 |
Family
ID=61432199
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711033003.5A Pending CN107786896A (en) | 2017-10-30 | 2017-10-30 | Method for pushing, device, terminal device and the storage medium of information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107786896A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108446390A (en) * | 2018-03-22 | 2018-08-24 | 百度在线网络技术(北京)有限公司 | Method and apparatus for pushed information |
CN109783669A (en) * | 2019-01-21 | 2019-05-21 | 美的集团武汉制冷设备有限公司 | Screen methods of exhibiting, robot and computer readable storage medium |
CN112417297A (en) * | 2020-12-04 | 2021-02-26 | 网易(杭州)网络有限公司 | Data processing method and device, live broadcast server and terminal equipment |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103763626A (en) * | 2013-12-19 | 2014-04-30 | 华为软件技术有限公司 | Method, device and system for pushing information |
CN104410911A (en) * | 2014-12-31 | 2015-03-11 | 合一网络技术(北京)有限公司 | Video emotion tagging-based method for assisting identification of facial expression |
CN104484044A (en) * | 2014-12-23 | 2015-04-01 | 上海斐讯数据通信技术有限公司 | Advertisement pushing method and advertisement pushing system |
CN104573619A (en) * | 2014-07-25 | 2015-04-29 | 北京智膜科技有限公司 | Method and system for analyzing big data of intelligent advertisements based on face identification |
CN106326441A (en) * | 2016-08-26 | 2017-01-11 | 乐视控股(北京)有限公司 | Information recommendation method and device |
CN106682953A (en) * | 2017-01-19 | 2017-05-17 | 努比亚技术有限公司 | Advertisement pushing method and device |
CN106919580A (en) * | 2015-12-25 | 2017-07-04 | 腾讯科技(深圳)有限公司 | A kind of information-pushing method and device |
CN107277643A (en) * | 2017-07-31 | 2017-10-20 | 合网络技术(北京)有限公司 | The sending method and client of barrage content |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107801096B (en) | Video playing control method and device, terminal equipment and storage medium | |
US11380316B2 (en) | Speech interaction method and apparatus | |
CN109637518A (en) | Virtual newscaster's implementation method and device | |
EP2728859B1 (en) | Method of providing information-of-users' interest when video call is made, and electronic apparatus thereof | |
CN107645686A (en) | Information processing method, device, terminal device and storage medium | |
CN107995523A (en) | Video broadcasting method, device, terminal and storage medium | |
CN109348135A (en) | Photographic method, device, storage medium and terminal device | |
US20190220492A1 (en) | Display apparatus and method of controlling the same | |
CN105979035A (en) | AR image processing method and device as well as intelligent terminal | |
US10088901B2 (en) | Display device and operating method thereof | |
JP4621758B2 (en) | Content information reproducing apparatus, content information reproducing system, and information processing apparatus | |
CN107948667A (en) | The method and apparatus that special display effect is added in live video | |
JP7231638B2 (en) | Image-based information acquisition method and apparatus | |
US20190377755A1 (en) | Device for Mood Feature Extraction and Method of the Same | |
CN108491076B (en) | Display control method and related product | |
CN107786896A (en) | Method for pushing, device, terminal device and the storage medium of information | |
CN107968890A (en) | theme setting method, device, terminal device and storage medium | |
CN112653902A (en) | Speaker recognition method and device and electronic equipment | |
CN108021905A (en) | image processing method, device, terminal device and storage medium | |
CN112118397B (en) | Video synthesis method, related device, equipment and storage medium | |
CN108391164A (en) | Video analytic method and Related product | |
CN106507201A (en) | A kind of video playing control method and device | |
CN111368127B (en) | Image processing method, image processing device, computer equipment and storage medium | |
CN108111603A (en) | Information recommendation method, device, terminal device and storage medium | |
CN108491780A (en) | Image landscaping treatment method, apparatus, storage medium and terminal device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180309 |
|
RJ01 | Rejection of invention patent application after publication |