CN110493612A - Barrage information processing method, server and computer-readable storage medium - Google Patents

Barrage information processing method, server and computer-readable storage medium

Info

Publication number
CN110493612A
CN110493612A (application CN201910725820.XA)
Authority
CN
China
Prior art keywords
barrage information
destination multimedia
text vector
information
barrage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910725820.XA
Other languages
Chinese (zh)
Other versions
CN110493612B (English)
Inventor
颜伟婷
吴嘉旭
杜欧杰
李立锋
陈国仕
李鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MIGU Animation Co Ltd
MIGU Comic Co Ltd
Original Assignee
MIGU Animation Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MIGU Animation Co Ltd filed Critical MIGU Animation Co Ltd
Priority to CN201910725820.XA priority Critical patent/CN110493612B/en
Publication of CN110493612A publication Critical patent/CN110493612A/en
Application granted granted Critical
Publication of CN110493612B publication Critical patent/CN110493612B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F16/374 Thesaurus (creation of semantic tools for information retrieval of unstructured textual data)
    • G06F18/214 Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/2411 Classification based on the proximity to a decision surface, e.g. support vector machines
    • G06N3/045 Neural network architectures; combinations of networks
    • G06V20/40 Scenes; scene-specific elements in video content
    • G10L15/02 Feature extraction for speech recognition; selection of recognition unit
    • G10L15/063 Training of speech recognition systems
    • G10L15/26 Speech-to-text systems
    • G10L25/24 Speech or voice analysis in which the extracted parameters are the cepstrum
    • H04N21/233 Processing of audio elementary streams
    • H04N21/23418 Analysing video elementary streams, e.g. detecting features or characteristics
    • H04N21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/4312 Visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors
    • G10L2015/025 Phonemes, fenemes or fenones being the recognition units


Abstract

The present invention provides a barrage (bullet-screen comment) information processing method, a server, and a computer-readable storage medium. The method includes: receiving corresponding barrage information while a multimedia resource is playing; obtaining the strength of association between the barrage information and a target multimedia segment, where the target multimedia segment includes the portion of the currently playing multimedia resource within a preset time range starting from the playback time corresponding to the moment the barrage information was received; and determining the display style of the barrage information according to the strength of association. In embodiments of the present invention, the server adaptively adjusts the display style of barrage information according to its strength of association with the corresponding target multimedia; this display style distinguishes the relative weight of different barrage messages and makes the barrage display more engaging.

Description

Barrage information processing method, server and computer-readable storage medium
Technical field
The present invention relates to the field of communication technology, and in particular to a barrage information processing method, a server, and a computer-readable storage medium.
Background technique
Existing barrage enhancement methods include letting the user manually adjust the barrage color and size, or determining the barrage enhancement style from the grade of the user who posted it. When there are many barrage messages, a user has to search through them for useful information relevant to the video content in order to join the discussion, which is inconvenient for the user.
Summary of the invention
The purpose of the present invention is to provide a barrage information processing method, a server, and a computer-readable storage medium, so as to solve the problem that existing barrage display modes cannot satisfy users' current needs.
To solve the above problems, an embodiment of the present invention provides a barrage information processing method, including:
receiving corresponding barrage information while a multimedia resource is playing;
obtaining the strength of association between the barrage information and a target multimedia segment, where the target multimedia segment includes the portion of the currently playing multimedia resource within a preset time range starting from the playback time corresponding to the moment the barrage information was received; and
determining the display style of the barrage information according to the strength of association.
Obtaining the strength of association between the barrage information and the target multimedia includes:
performing feature analysis on the barrage information to obtain a text vector of the barrage information;
performing feature analysis on the target multimedia to obtain a text vector of the target multimedia; and
determining the strength of association between the barrage information and the target multimedia according to the two text vectors.
Performing feature analysis on the barrage information to obtain its text vector includes:
performing a word-segmentation operation on the barrage information, each resulting word serving as one feature of the barrage information; and
assigning a vector index to each feature through a hash function to obtain the text vector of the barrage information.
The text vector of the target multimedia includes at least one of the following:
a text vector of the video images of the target multimedia;
a text vector of the audio content of the target multimedia;
a text vector of the textual information of the target multimedia.
Performing feature analysis on the target multimedia to obtain its text vector may include:
extracting features of the video images of the target multimedia based on a neural network system;
mapping the video-image features into the neural network system to perform image recognition and classification; and
converting the features of the classified video images into a text vector.
Performing feature analysis on the target multimedia to obtain its text vector may alternatively include:
converting each frame of the waveform of the target multimedia's audio content into a multi-dimensional vector containing acoustic information, using linear prediction cepstral coefficients and Mel cepstral coefficients;
inputting the multi-dimensional vectors containing acoustic information into an acoustic model to obtain phoneme information;
matching the phoneme information against words or phrases in a dictionary, and obtaining the probabilities with which words or phrases are associated with one another by training on textual information; and
performing text output on the features extracted from the audio content according to the obtained probabilities, and converting them into a text vector.
Determining the strength of association between the barrage information and the target multimedia according to their text vectors includes:
obtaining the closeness degree between the text vector of the barrage information and the text vector of the target multimedia; and
determining the strength of association between the barrage information and the target multimedia according to the closeness degree;
where the closeness degree includes at least one of the following:
a first closeness degree between the text vector of the barrage information and the text vector of the video images of the target multimedia;
a second closeness degree between the text vector of the barrage information and the text vector of the audio content of the target multimedia;
a third closeness degree between the text vector of the barrage information and the text vector of the textual information of the target multimedia.
Determining the strength of association between the barrage information and the target multimedia according to the closeness degree includes:
when the closeness degree consists of any one of the first closeness degree, the second closeness degree, and the third closeness degree, taking the strength of association to be that single closeness degree; and
when the closeness degree includes at least two of the first closeness degree, the second closeness degree, and the third closeness degree, taking the strength of association to be the sum of those closeness degrees.
An embodiment of the present invention also provides a server, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor implements the barrage information processing method described above when executing the program.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the steps of the barrage information processing method described above.
The above technical solution of the present invention has at least the following beneficial effects:
In the barrage information processing method, server, and computer-readable storage medium of the embodiments of the present invention, the server adaptively adjusts the display style of barrage information according to its strength of association with the corresponding target multimedia. This display style distinguishes the relative weight of barrage messages with different strengths of association, so that users can quickly notice or locate the barrage content most strongly associated with the video content, which improves the user experience.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of the barrage information processing method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of a concrete application scenario of the barrage information processing method provided by an embodiment of the present invention;
Fig. 3 is a structural schematic diagram of the server provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the technical problems to be solved, the technical solutions, and the advantages of the present invention clearer, a detailed description is given below in conjunction with the accompanying drawings and specific embodiments.
As shown in Fig. 1, an embodiment of the present invention provides a barrage information processing method, including:
Step 11: receiving corresponding barrage information while a multimedia resource is playing;
Step 12: obtaining the strength of association between the barrage information and a target multimedia segment, where the target multimedia segment includes the portion of the currently playing multimedia resource within a preset time range starting from the playback time corresponding to the moment the barrage information was received;
Step 13: determining the display style of the barrage information according to the strength of association.
The embodiment of the present invention adaptively adjusts the display style of a barrage according to the relevance between the barrage information posted by the user and the video, which helps highlight good barrage content, improves the content quality of the barrage, and increases its social appeal.
Optionally, the preset time range may be the barrage display time, which is preset by the system. For example, if the system presets the client-side barrage display time to 3 s, the system records the time at which the user sent the barrage and analyzes the multimedia within 3 s from that time.
Optionally, step 13 includes:
comparing the strength of association with a system-preset rule to determine the display style of the barrage information, where the display style of the barrage information includes at least one of the following:
barrage font size;
barrage font color;
barrage font style.
For example, the greater the strength of association, the larger the font in which the barrage information is displayed; that is, the font size of the barrage information is proportional to the strength of association.
As another example, when the strength of association is less than a first preset value, the barrage information is displayed in white; when it is greater than the first preset value but less than a second preset value, the barrage information is displayed in red; and when it is greater than the second preset value, the barrage information is displayed in rainbow colors.
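A minimal sketch of such a preset rule follows. The thresholds, base font size, and growth factor are invented for illustration; the patent fixes only the qualitative behavior (size proportional to strength, three color buckets).

```python
def display_style(strength, t1=0.3, t2=0.7):
    """Map an association strength to a barrage display style.

    t1 and t2 play the role of the first and second preset values;
    their numeric values here are illustrative assumptions.
    """
    # Font size grows in proportion to the association strength.
    font_size_pt = round(12 + 16 * strength)
    # Color buckets: white below t1, red between t1 and t2, rainbow above t2.
    if strength < t1:
        color = "white"
    elif strength < t2:
        color = "red"
    else:
        color = "rainbow"
    return {"font_size_pt": font_size_pt, "color": color}
```

The server-side rule would produce such a style record and hand it to the client along with the barrage text.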
Optionally, after the server determines the display style of the barrage information, it returns the display style together with the barrage information to the client, and the client displays them in the playing multimedia resource.
As an optional embodiment, step 12 includes:
performing feature analysis on the barrage information to obtain a text vector of the barrage information;
performing feature analysis on the target multimedia to obtain a text vector of the target multimedia; and
determining the strength of association between the barrage information and the target multimedia according to the two text vectors.
Optionally, performing feature analysis on the barrage information to obtain its text vector includes:
performing a word-segmentation operation on the barrage information, each resulting word serving as one feature of the barrage information; and
assigning a vector index to each feature through a hash function to obtain the text vector of the barrage information.
In short, the server extracts text features from the barrage information based on the TF-IDF algorithm. It first segments the barrage information with the jieba segmenter, so that every word can serve as a feature of the barrage information; it then assigns vector indices to the features through a hash function, which conveniently yields the position of each vector element and thus the text vector of the barrage information.
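The segmentation-plus-hashing scheme could be sketched as follows. To keep the example self-contained, jieba segmentation is replaced by pre-segmented tokens, TF-IDF weighting by raw counts, and the unspecified hash function by an MD5 digest (Python's built-in hash() is salted per process, so a deterministic digest is used instead); all of these substitutions are assumptions.

```python
import hashlib

def text_vector(tokens, dim=16):
    """Hashing-trick text vector: each token is assigned a vector index
    via a hash function, and its count is accumulated at that position.
    """
    vec = [0] * dim
    for tok in tokens:
        # Deterministic per-token index into the fixed-size vector.
        idx = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16) % dim
        vec[idx] += 1
    return vec
```

A real pipeline would feed jieba's segmentation of the barrage text into this function and scale the counts by TF-IDF weights.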
Optionally, the text vector of the target multimedia includes at least one of the following:
a text vector of the video images of the target multimedia;
a text vector of the audio content of the target multimedia;
a text vector of the textual information of the target multimedia.
In short, the multimedia within the preset time range starting from the playback time corresponding to the moment the barrage was received is subjected to video image analysis (yielding the text vector of the target multimedia's video images), audio content analysis (yielding the text vector of the target multimedia's audio content), and textual information analysis (yielding the text vector of the target multimedia's textual information).
Optionally, performing feature analysis on the target multimedia to obtain its text vector includes:
extracting features of the video images of the target multimedia based on a neural network system;
mapping the video-image features into the neural network system to perform image recognition and classification; and
converting the features of the classified video images into a text vector.
In the embodiment of the present invention, video image analysis is based on a neural network system: the features of the video images are extracted, the feature mapping of the video images is fed into the neural network to perform image recognition and classification, and the result is converted into a text vector.
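The image branch relies on a trained neural network, which cannot be reproduced here. As a minimal stand-in, the sketch below scores invented image features with a toy linear classifier, picks the best label, and hashes that label into a vector so it becomes comparable with the barrage text vector. The weights, labels, and vector dimension are all assumptions made for illustration.

```python
import hashlib

def classify_image_features(features, weights, labels):
    """Toy stand-in for the neural-network classifier: score each label
    with a linear layer and return the argmax label."""
    scores = [sum(w * f for w, f in zip(row, features)) for row in weights]
    return labels[scores.index(max(scores))]

def image_to_text_vector(features, weights, labels, dim=16):
    """Classify the frame, then hash the predicted label into a one-hot
    text vector in the same space as the barrage text vector."""
    label = classify_image_features(features, weights, labels)
    vec = [0] * dim
    vec[int(hashlib.md5(label.encode("utf-8")).hexdigest(), 16) % dim] = 1
    return vec
```

In the patent's pipeline the linear scorer would be replaced by the trained neural network system, and the label text would pass through the same TF-IDF/hashing vectorizer as the barrage text.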
Optionally, performing feature analysis on the target multimedia to obtain its text vector includes:
converting each frame of the waveform of the target multimedia's audio content into a multi-dimensional vector containing acoustic information, using linear prediction cepstral coefficients and Mel cepstral coefficients;
inputting the multi-dimensional vectors containing acoustic information into an acoustic model to obtain phoneme information;
matching the phoneme information against words or phrases in a dictionary, and obtaining the probabilities with which words or phrases are associated with one another by training on textual information; and
performing text output on the features extracted from the audio content according to the obtained probabilities, and converting them into a text vector.
The audio content analysis in the embodiment of the present invention proceeds as follows: the acquired audio is first preprocessed; each frame of the waveform is then turned into a multi-dimensional vector containing acoustic information through linear prediction cepstral coefficients (LPCC) and Mel frequency cepstral coefficients (MFCC); an acoustic model obtained by training on speech data receives the multi-dimensional vectors and outputs phoneme information; the phoneme information is matched against words or phrases in a dictionary, and training on a large amount of textual information yields the probabilities with which words or phrases are associated with one another; finally, the audio data from which features have been extracted is output as text and converted into a text vector.
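The full LPCC/MFCC pipeline requires signal-processing machinery, but its first step — cutting the preprocessed waveform into overlapping frames, on each of which the cepstral features are then computed — can be sketched in a few lines. The frame length and hop size below are illustrative assumptions, and trailing samples that do not fill a frame are simply dropped.

```python
def frame_signal(samples, frame_len, hop):
    """Split a waveform into overlapping frames of length frame_len,
    advancing by hop samples each time (hop < frame_len gives overlap).
    Each returned frame would be fed to the per-frame LPCC/MFCC stage."""
    frames = []
    start = 0
    while start + frame_len <= len(samples):
        frames.append(samples[start:start + frame_len])
        start += hop
    return frames
```

The acoustic model, dictionary matching, and language-model probabilities of the later stages are omitted; they operate on the per-frame vectors this step produces.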
It should be noted that the analysis of textual information in the embodiment of the present invention is identical to the analysis of barrage information: text features are extracted from the textual information of the target multimedia based on the TF-IDF algorithm. The textual information of the target multimedia is first segmented with the jieba segmenter, so that every word can serve as a feature of the target multimedia's textual information; vector indices are then assigned to the features through a hash function, conveniently yielding the position of each vector element and thus the text vector of the target multimedia's textual information.
As an optional embodiment, determining the strength of association between the barrage information and the target multimedia according to their text vectors includes:
obtaining the closeness degree between the text vector of the barrage information and the text vector of the target multimedia; and
determining the strength of association between the barrage information and the target multimedia according to the closeness degree;
where the closeness degree includes at least one of the following:
a first closeness degree between the text vector of the barrage information and the text vector of the video images of the target multimedia;
a second closeness degree between the text vector of the barrage information and the text vector of the audio content of the target multimedia;
a third closeness degree between the text vector of the barrage information and the text vector of the textual information of the target multimedia.
In the embodiment of the present invention, the text vector of the barrage information is compared with the text vectors of the video images, the audio content, and the textual information respectively, and the closeness degree of each pair is calculated. The higher the closeness degree, the higher the similarity between the two.
The first closeness degree, the second closeness degree, and the third closeness degree are calculated according to the Hamming closeness-degree formula:
N(A, B) ≜ 1 − (1/n) · Σ_{i=1}^{n} |A(u_i) − B(u_i)|
where N(A, B) denotes the closeness degree of the two text vectors; ≜ means "is defined as"; A(u_i) and B(u_i) denote the memberships of the two text vectors over the discrete universe U; and n denotes the size of the discrete universe U.
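Assuming the two text vectors hold membership values over a shared discrete universe, the Hamming closeness degree and the summed strength of association might be sketched as:

```python
def hamming_closeness(a, b):
    """Hamming closeness degree N(A, B) = 1 - (1/n) * sum |A(u_i) - B(u_i)|
    between two membership vectors over the same discrete universe."""
    if len(a) != len(b):
        raise ValueError("vectors must be defined over the same universe")
    n = len(a)
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / n

def association_strength(closenesses):
    """Strength of association: the sum of whichever of the first,
    second, and third closeness degrees are available."""
    return sum(closenesses)
```

Identical vectors yield a closeness of 1 and completely opposed membership vectors yield 0, matching the reading that a higher closeness degree means higher similarity.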
Further, determining the strength of association between the barrage information and the target multimedia according to the closeness degree includes:
when the closeness degree consists of any one of the first closeness degree, the second closeness degree, and the third closeness degree, taking the strength of association to be that single closeness degree; and
when the closeness degree includes at least two of the first closeness degree, the second closeness degree, and the third closeness degree, taking the strength of association to be the sum of those closeness degrees.
For example, if the closeness degrees between the text vector of the barrage information and the text vectors of the video images, the audio content, and the textual information are x (the first closeness degree), y (the second closeness degree), and z (the third closeness degree) respectively, then the strength of association ρ between the barrage information and the target multimedia is calculated as ρ = x + y + z.
Further, the server returns the display style of the barrage information, determined from the strength of association between the barrage information and the video, to the client, and the client displays the barrage according to the server's rule. The embodiment of the present invention further provides a barrage interaction method: other users can change the display form of a barrage by approving or opposing it.
For example, a user who is not the barrage's author clicks the barrage; the client selects it and shows buttons through which the user can approve or oppose it. The user approves the barrage by clicking a "+1" button and opposes it by clicking a "−1" button. The more approvals, the larger the barrage font; the more oppositions, the smaller the font, that is:
barrage font size ∝ (number of approvals − number of oppositions)
The server returns the display rule to the client, adjusting the barrage display form in real time according to this rule, and the client displays the barrage accordingly. In particular, each user's operation on a single barrage is recorded only once, which prevents malicious manipulation of the barrage.
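One way to sketch the vote handling, including the record-each-user-once rule, is shown below. The function name, the dict-based barrage record, and the base font size are assumptions; the patent specifies only the proportionality and the one-vote-per-user constraint.

```python
def apply_vote(barrage, user_id, delta, base_size=16):
    """Record a +1/-1 vote on a barrage, counting each user at most
    once, and return the resulting font size under the illustrative
    rule size = base + (approvals - oppositions)."""
    votes = barrage.setdefault("votes", {})
    if user_id not in votes:  # each user's vote on a barrage counts once
        votes[user_id] = delta
    return base_size + sum(votes.values())
```

A repeated click by the same user leaves the tally unchanged, which is the anti-manipulation behavior described above.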
Optionally, the barrage enhancement method provided by the embodiment of the present invention can also be extended to social comment scenarios: the system determines the initial style of a comment according to the strength of association between the user's comment and the content currently under discussion or the multimedia content, and other users can change the display area or size of the comment by approving or opposing it.
To describe the barrage information processing method provided by the embodiment of the present invention more clearly, a concrete application scenario is described below:
As shown in Fig. 2, suppose user A is watching a video through a client, and the dashed region in Fig. 2 is the video playback area. User A enters "barrage message 4" in the barrage input area; the server then determines, from the strength of association between "barrage message 4" and the playing video (for example, a relatively large strength of association), that barrage message 4 should be displayed in a 24 pt font. The server delivers this font size to the client, and the client displays barrage message 4 in the video playback area at 24 pt. At this point, the playback area also contains "barrage message 2". If user A approves of the content of "barrage message 2", user A clicks the "+1" button to approve it, after which the font size of barrage message 2 increases; if user A opposes the content of "barrage message 2", user A clicks the "−1" button to oppose it, after which the font size of barrage message 2 decreases. It should be noted that approval or opposition operations by other users affect the font size of barrage message 2 in the same way, so those operation types are not repeated here.
In summary, in the embodiment of the present invention the server adaptively adjusts the exhibition method of barrage information according to the strength of association between the barrage information and the corresponding destination multimedia. This method can distinguish the relative weight of different pieces of barrage information and makes the display of barrage information more engaging.
As shown in Fig. 3, an embodiment of the present invention further provides a server, including a memory 310, a processor 300, and a computer program stored on the memory 310 and executable on the processor 300. When executing the program, the processor 300 implements each process in the processing method of barrage information described above and can achieve the same technical effect; to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer readable storage medium storing a computer program. When the program is executed by a processor, each process in the embodiments of the processing method of barrage information described above is implemented and the same technical effect can be achieved; to avoid repetition, details are not described here again. The computer readable storage medium is, for example, a read-only memory (Read-Only Memory, ROM for short), a random access memory (Random Access Memory, RAM for short), a magnetic disk, or an optical disk.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer usable storage media (including but not limited to disk storage and optical storage) containing computer usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present application. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general purpose computer, a special purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer readable storage medium capable of directing a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer readable storage medium produce an article of manufacture including an instruction device, the instruction device implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer implemented processing, and the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above are preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may further make improvements and modifications without departing from the principles of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A processing method of barrage information, characterized by comprising:
receiving corresponding barrage information during playback of a multimedia resource;
obtaining a strength of association between the barrage information and destination multimedia; wherein the destination multimedia comprises: multimedia, in the currently playing multimedia resource, within a preset time range starting from the play time corresponding to the reception time of the barrage information;
determining an exhibition method of the barrage information according to the strength of association.
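The time window that defines the destination multimedia in claim 1 can be sketched as follows. This is an illustration only; the claim does not fix the length of the preset range, so the 5-second value below is an assumption.

```python
# Sketch of the claim-1 time window: the destination multimedia is the
# portion of the currently playing resource within a preset time range
# around the play time at which the barrage was received. The window
# length is an assumption for illustration.

def destination_window(play_time_s: float, preset_range_s: float = 5.0):
    """Return the (start, end) play-time interval, clamped at 0."""
    return (max(0.0, play_time_s - preset_range_s),
            play_time_s + preset_range_s)

destination_window(12.0)  # barrage received at 12 s -> (7.0, 17.0)
```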
2. The method according to claim 1, characterized in that obtaining the strength of association between the barrage information and the destination multimedia comprises:
performing feature analysis on the barrage information to obtain a text vector of the barrage information;
performing feature analysis on the destination multimedia to obtain a text vector of the destination multimedia;
determining the strength of association between the barrage information and the destination multimedia according to the text vector of the barrage information and the text vector of the destination multimedia.
3. The method according to claim 2, characterized in that performing feature analysis on the barrage information to obtain the text vector of the barrage information comprises:
performing a word segmentation operation on the barrage information; wherein each word obtained by the segmentation operation serves as one feature of the barrage information;
assigning a vector subscript to each feature through a hash function, to obtain the text vector of the barrage information.
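The segmentation-plus-hashing step of claim 3 can be sketched as a standard feature-hashing vectorizer. This is an illustration under stated assumptions: the claim prescribes neither the segmenter nor the hash function, so whitespace splitting stands in for word segmentation and MD5 stands in for the hash function.

```python
# Sketch of claim 3 (assumptions: whitespace splitting stands in for the
# word segmentation step, and MD5 stands in for the unspecified hash
# function; the vector dimension is chosen arbitrarily).
import hashlib

def text_vector(barrage: str, dim: int = 16) -> list:
    """Segment the barrage into words, then assign each word (feature) a
    vector subscript via a hash function and accumulate counts."""
    vec = [0] * dim
    for word in barrage.split():           # stand-in for word segmentation
        digest = hashlib.md5(word.encode("utf-8")).hexdigest()
        index = int(digest, 16) % dim      # hash -> vector subscript
        vec[index] += 1
    return vec

v = text_vector("great goal great save")   # 4 word features hashed into v
```

Identical words always hash to the same subscript, so repeated features accumulate in the same vector component.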
4. The method according to claim 2, characterized in that the text vector of the destination multimedia comprises at least one of the following:
a text vector of a video image of the destination multimedia;
a text vector of audio content of the destination multimedia;
a text vector of text information of the destination multimedia.
5. The method according to claim 4, characterized in that performing feature analysis on the destination multimedia to obtain the text vector of the destination multimedia comprises:
extracting a feature of the video image of the destination multimedia based on a neural network system;
mapping the feature of the video image into the neural network system to perform image recognition classification;
converting the classified feature of the video image into a text vector.
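The classification step in claim 5 can be sketched in miniature. This is not the patent's neural network system: a toy linear layer stands in for the network, and the class labels and weights are arbitrary assumptions; only the shape of the pipeline (extracted feature in, class label out, label text then vectorized) follows the claim.

```python
# Minimal sketch of the claim-5 classification step (assumption: a toy
# linear "network" with random fixed weights stands in for the real neural
# network system; the labels are hypothetical image classes).
import numpy as np

LABELS = ["goal", "crowd", "replay"]               # hypothetical classes
W = np.random.default_rng(0).normal(size=(3, 8))   # toy classifier weights

def classify_image_feature(feature: np.ndarray) -> str:
    """Map an extracted image feature to a class label (the image
    recognition classification step); the label text would then be
    converted into a text vector."""
    scores = W @ feature
    return LABELS[int(np.argmax(scores))]

feature = np.ones(8)                    # stand-in for an extracted feature
label = classify_image_feature(feature) # one of LABELS
```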
6. The method according to claim 4, characterized in that performing feature analysis on the destination multimedia to obtain the text vector of the destination multimedia comprises:
converting each frame waveform of the audio content of the destination multimedia into a multi-dimensional vector containing acoustic information through linear prediction cepstrum coefficients and Mel cepstrum coefficients;
inputting the multi-dimensional vector containing acoustic information into an acoustic model to obtain phoneme information;
matching the phoneme information with characters or words in a dictionary, and obtaining, by training on text information, the probabilities that single characters or words are associated with each other;
extracting a feature of the audio content according to the obtained probabilities, and converting the feature into a text vector through text output.
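The first step of the claim-6 pipeline, turning each frame waveform into a multi-dimensional acoustic vector, can be sketched as follows. This is a deliberate simplification: a plain log-power spectrum stands in for the claimed LPCC/MFCC features, and the acoustic model and dictionary matching stages are not modeled at all.

```python
# Highly simplified sketch of the claim-6 audio front end (assumption: a
# log-power spectrum stands in for the linear prediction cepstrum
# coefficients and Mel cepstrum coefficients; no acoustic model or
# dictionary matching is performed here).
import numpy as np

def frame_features(waveform: np.ndarray, frame_len: int = 256, hop: int = 128):
    """Slice the waveform into overlapping frames and convert each frame
    into a multi-dimensional vector of acoustic information."""
    frames = []
    for start in range(0, len(waveform) - frame_len + 1, hop):
        frame = waveform[start:start + frame_len] * np.hanning(frame_len)
        power = np.abs(np.fft.rfft(frame)) ** 2
        frames.append(np.log(power + 1e-10))   # one vector per frame
    return np.array(frames)

wave = np.sin(2 * np.pi * 440 * np.arange(1024) / 8000)  # 440 Hz test tone
feats = frame_features(wave)   # shape: (num_frames, frame_len // 2 + 1)
```

Each row of `feats` corresponds to one frame's multi-dimensional vector, which in the claimed method would then be fed to the acoustic model.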
7. The method according to claim 4, characterized in that determining the strength of association between the barrage information and the destination multimedia according to the text vector of the barrage information and the text vector of the destination multimedia comprises:
obtaining an approach degree between the text vector of the barrage information and the text vector of the destination multimedia;
determining the strength of association between the barrage information and the destination multimedia according to the approach degree;
wherein the approach degree comprises at least one of the following:
a first approach degree between the text vector of the barrage information and the text vector of the video image of the destination multimedia;
a second approach degree between the text vector of the barrage information and the text vector of the audio content of the destination multimedia;
a third approach degree between the text vector of the barrage information and the text vector of the text information of the destination multimedia.
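One common way to compute an approach degree between two text vectors is cosine similarity. The claim does not prescribe a particular measure, so the choice below is an assumption for illustration.

```python
# Sketch of the claim-7 approach degree (assumption: cosine similarity is
# used as the closeness measure between text vectors; the patent does not
# fix a particular measure).
import math

def approach_degree(u, v) -> float:
    """Cosine similarity between two equal-length text vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

approach_degree([1, 0, 1], [1, 0, 1])  # identical vectors -> 1.0
approach_degree([1, 0, 0], [0, 1, 0])  # orthogonal vectors -> 0.0
```

The same function would be applied to the video-image, audio-content, and text-information vectors to obtain the first, second, and third approach degrees respectively.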
8. The method according to claim 7, characterized in that determining the strength of association between the barrage information and the destination multimedia according to the approach degree comprises:
in the case that the approach degree comprises any one of the first approach degree, the second approach degree, and the third approach degree, determining that the strength of association between the barrage information and the destination multimedia is the first approach degree, the second approach degree, or the third approach degree comprised in the approach degree;
in the case that the approach degree comprises at least two of the first approach degree, the second approach degree, and the third approach degree, determining that the strength of association between the barrage information and the destination multimedia is the sum of the at least two approach degrees comprised in the approach degree.
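The combination rule in claim 8 reduces to a simple sum over whichever approach degrees are available, since a single value summed alone is itself:

```python
# Sketch of claim 8: if only one of the first/second/third approach degrees
# is available it is used directly; if several are available, their sum is
# the strength of association.

def strength_of_association(*approach_degrees: float) -> float:
    available = [d for d in approach_degrees if d is not None]
    return sum(available)   # a single value falls out as itself

strength_of_association(0.7)            # only one approach degree -> 0.7
strength_of_association(0.7, 0.2, 0.1)  # sum of all three approach degrees
```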
9. A server, including a memory, a processor, and a computer program stored on the memory and executable on the processor; characterized in that, when executing the program, the processor implements the processing method of barrage information according to any one of claims 1 to 8.
10. A computer readable storage medium storing a computer program, characterized in that, when the program is executed by a processor, the steps in the processing method of barrage information according to any one of claims 1 to 8 are implemented.
CN201910725820.XA 2019-08-07 2019-08-07 Barrage information processing method, server and computer readable storage medium Active CN110493612B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910725820.XA CN110493612B (en) 2019-08-07 2019-08-07 Barrage information processing method, server and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910725820.XA CN110493612B (en) 2019-08-07 2019-08-07 Barrage information processing method, server and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110493612A true CN110493612A (en) 2019-11-22
CN110493612B CN110493612B (en) 2022-03-04

Family

ID=68550084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910725820.XA Active CN110493612B (en) 2019-08-07 2019-08-07 Barrage information processing method, server and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110493612B (en)



Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107809658A (en) * 2017-10-18 2018-03-16 维沃移动通信有限公司 A kind of barrage content display method and terminal

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2963938A1 (en) * 2013-02-27 2016-01-06 Brother Kogyo Kabushiki Kaisha Terminal device, program, and information processing device
CN108566565A (en) * 2018-03-30 2018-09-21 科大讯飞股份有限公司 Barrage methods of exhibiting and device

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111212328A (en) * 2019-12-31 2020-05-29 咪咕互动娱乐有限公司 Bullet screen display method, bullet screen server and computer readable storage medium
CN111212328B (en) * 2019-12-31 2022-03-25 咪咕互动娱乐有限公司 Bullet screen display method, bullet screen server and computer readable storage medium
CN113709578A (en) * 2021-09-14 2021-11-26 上海幻电信息科技有限公司 Bullet screen display method and device
CN113709578B (en) * 2021-09-14 2023-08-11 上海幻电信息科技有限公司 Bullet screen display method, bullet screen display device, bullet screen display equipment and bullet screen display medium
CN114257826A (en) * 2021-12-01 2022-03-29 广州方硅信息技术有限公司 Live comment information display method and device, storage medium and computer equipment
CN115271851A (en) * 2022-07-04 2022-11-01 天翼爱音乐文化科技有限公司 Video color ring recommendation method, system, electronic equipment and storage medium
CN115271851B (en) * 2022-07-04 2023-10-10 天翼爱音乐文化科技有限公司 Video color ring recommending method, system, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110493612B (en) 2022-03-04

Similar Documents

Publication Publication Date Title
CN110493612A (en) Processing method, server and the computer readable storage medium of barrage information
US10657969B2 (en) Identity verification method and apparatus based on voiceprint
US11875807B2 (en) Deep learning-based audio equalization
CN110798636B (en) Subtitle generating method and device and electronic equipment
EP2568429A1 (en) Method and system for pushing individual advertisement based on user interest learning
CN112533051A (en) Bullet screen information display method and device, computer equipment and storage medium
CN111626049A (en) Title correction method and device for multimedia information, electronic equipment and storage medium
CN107861954A (en) Information output method and device based on artificial intelligence
CN110489747A (en) A kind of image processing method, device, storage medium and electronic equipment
CN110704618B (en) Method and device for determining standard problem corresponding to dialogue data
WO2021128817A1 (en) Video and audio recognition method, apparatus and device and storage medium
CN109286848B (en) Terminal video information interaction method and device and storage medium
WO2020170593A1 (en) Information processing device and information processing method
CN112614489A (en) User pronunciation accuracy evaluation method and device and electronic equipment
CN110223678A (en) Audio recognition method and system
CN115640398A (en) Comment generation model training method, comment generation device and storage medium
CN106530377B (en) Method and apparatus for manipulating three-dimensional animated characters
Felipe et al. Acoustic scene classification using spectrograms
CN117152308A (en) Virtual person action expression optimization method and system
CN116645683A (en) Signature handwriting identification method, system and storage medium based on prompt learning
CN107656760A (en) Data processing method and device, electronic equipment
CN116821324A (en) Model training method and device, electronic equipment and storage medium
CN113360630B (en) Interactive information prompting method
CN109960752A (en) Querying method, device, computer equipment and storage medium in application program
CN111477212A (en) Content recognition, model training and data processing method, system and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant