CN115174951A - Unmanned live broadcast online analysis and management system based on multi-dimensional feature capture - Google Patents
Unmanned live broadcast online analysis and management system based on multi-dimensional feature capture
- Publication number
- CN115174951A CN115174951A CN202210775962.9A CN202210775962A CN115174951A CN 115174951 A CN115174951 A CN 115174951A CN 202210775962 A CN202210775962 A CN 202210775962A CN 115174951 A CN115174951 A CN 115174951A
- Authority
- CN
- China
- Prior art keywords
- recorded
- broadcast
- broadcast video
- watching
- video clip
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/47815—Electronic shopping
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4788—Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses an unmanned live broadcast online analysis and management system based on multi-dimensional feature capture. The system comprises a rule base, a user request information acquisition module, a user request information analysis module, an intelligent reply module, a recorded and broadcast video division module, a recorded and broadcast video data acquisition module, a data analysis module and a comprehensive display terminal. By these means it can intelligently and automatically reply to the request information sent by a target user, with better real-time performance, improving the user's viewing experience of the recorded and broadcast video and helping to raise the platform's user retention rate. By analyzing and responding to the text information, voice information and picture information sent by the user, it makes the analysis of and reply to user information more comprehensive; and by separately evaluating the playing effect and the interaction effect of each recorded and broadcast video clip, it enables a comprehensive and accurate multi-dimensional effect analysis and evaluation of the recorded and broadcast video.
Description
Technical Field
The invention belongs to the technical field of unmanned live broadcast online analysis management, and particularly relates to an unmanned live broadcast online analysis management system based on multi-dimensional feature capture.
Background
In recent years, with the rapid development of Internet technology and mobile intelligent terminals, a large number of live broadcast platforms have emerged. Compared with traditional e-commerce, live broadcast is popular for being entertaining and highly intuitive, and more and more people now choose to purchase commodities on live broadcast platforms. With the diversification of the live broadcast industrial chain and the continuous growth of audiences, unmanned live broadcast has appeared as a new form of live broadcast, in which the live broadcast platform plays a recorded and broadcast video prepared in advance. While unmanned live broadcast offers convenience and reusability, its management mode is also subject to fairly strict requirements.
Nowadays, the shortcomings of the prior art in managing unmanned live broadcast are embodied in the following aspects:
(1) The prior art mostly adopts a manual response management mode for unmanned live broadcast. Because the number of online staff is small while the viewing traffic of the recorded and broadcast video is large, staff cannot handle the request information sent by the audience in time. Real-time performance is poor, which degrades the user's video viewing experience and is not conducive to improving the platform's user retention rate.
(2) The prior art mostly analyzes and replies only to the text messages sent by users, and lacks analysis of and replies to the voice and picture messages they send, so user messages are analyzed and replied to one-sidedly.
(3) The prior art mostly concerns the overall playing effect of the recorded and broadcast video: it lacks analysis and evaluation of the interaction effect, and also lacks a playing effect evaluation for each individual recorded and broadcast video clip. The recorded and broadcast video therefore cannot be given a comprehensive and accurate effect analysis and evaluation, which is not conducive to the self-management improvement of the live broadcast platform.
Disclosure of Invention
In order to overcome the defects in the background art, the embodiment of the invention provides an unmanned live broadcast online analysis management system based on multi-dimensional feature capture, which can effectively solve the problems related to the background art.
The purpose of the invention can be realized by the following technical scheme:
an unmanned live broadcast online analysis management system based on multi-dimensional feature capture comprises: the system comprises a rule base, a user request information acquisition module, a user request information analysis module, an intelligent reply module, a recorded and broadcast video division module, a recorded and broadcast video data acquisition module, a data analysis module and a comprehensive display terminal;
the rule base is used for storing the response templates corresponding to the keywords of each commodity, the commodities corresponding to various main body contours, the response templates corresponding to the display features of each commodity, and the reference number of viewers corresponding to each recorded and broadcast video clip;
the user request information acquisition module is used for acquiring the request information sent by a target user in the process of watching the recorded and broadcast video, wherein the request information includes text information, voice information and picture information;
the user request information analysis module is used for analyzing request information sent by a target user in the process of watching recorded and broadcast videos, and comprises a text information analysis unit, a voice information analysis unit and a picture information analysis unit;
the intelligent reply module is used for intelligently replying the target user based on the analysis result of the user request information analysis module;
the recorded and broadcast video dividing module is used for dividing the recorded and broadcast video in time according to the different commodities displayed, so as to obtain a plurality of recorded and broadcast video clips, which are numbered 1, 2, …, i, …, q respectively;
the recorded and broadcast video data acquisition module is used for acquiring parameter data corresponding to each recorded and broadcast video clip, wherein the parameter data comprise broadcast data and interaction data;
the data analysis module is used for analyzing the playing data corresponding to each recorded and broadcast video segment so as to obtain a playing effect evaluation coefficient corresponding to each recorded and broadcast video segment, and analyzing the interactive data corresponding to each recorded and broadcast video segment so as to obtain an interactive effect evaluation coefficient corresponding to each recorded and broadcast video segment;
the comprehensive display terminal is used for sorting the playing effect evaluation coefficients corresponding to the recorded and broadcast video clips from high to low and displaying the numbers of the corresponding clips in that order, and likewise sorting the interaction effect evaluation coefficients corresponding to the clips from high to low and displaying the numbers of the corresponding clips in that order.
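Purely as an illustrative sketch and not part of the claimed system, the display terminal's sorting step described above might be implemented as follows (all names and sample coefficients are hypothetical):

```python
def rank_clips(coefficients):
    """Return clip numbers (1-based) sorted by evaluation coefficient, high to low."""
    return sorted(range(1, len(coefficients) + 1),
                  key=lambda i: coefficients[i - 1],
                  reverse=True)

# Hypothetical per-clip evaluation coefficients.
play_eval = [0.72, 0.91, 0.55, 0.88]      # playing effect per clip
interact_eval = [0.40, 0.35, 0.60, 0.20]  # interaction effect per clip

print(rank_clips(play_eval))      # clip numbers ordered by playing effect
print(rank_clips(interact_eval))  # clip numbers ordered by interaction effect
```

The same helper serves both rankings, since the terminal applies the identical high-to-low ordering to each coefficient set.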
As a further scheme of the present invention, the text information analysis unit is configured to analyze text information sent by a target user during watching a recorded video, and specifically includes the following steps:
a1, extracting request commodities and key words from text information sent by a target user in the process of watching recorded and broadcast videos to obtain the request commodities and various key words corresponding to the text information sent by the target user in the process of watching recorded and broadcast videos;
and A2, matching the requested commodity and keywords corresponding to the text information sent by the target user in the process of watching the recorded and broadcast video against the response templates, stored in the rule base, corresponding to the keywords of each commodity, thereby obtaining the response templates corresponding to the keywords of the requested commodity, and marking these as text information response templates.
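Steps A1 and A2 amount to a lookup against the rule base. A minimal sketch, with a hypothetical rule base and naive substring matching standing in for real keyword extraction:

```python
# Hypothetical rule base: commodity -> keyword -> response template.
RULE_BASE = {
    "jacket": {
        "price": "The jacket is currently priced at ...",
        "logistics": "Orders ship within 48 hours ...",
    },
}

def analyze_text(message, rule_base):
    """A1: extract the requested commodity and keywords; A2: match templates."""
    product = next((p for p in rule_base if p in message), None)
    if product is None:
        return []
    return [tpl for kw, tpl in rule_base[product].items() if kw in message]

replies = analyze_text("what is the price of the jacket?", RULE_BASE)
```

A production system would use proper tokenization and entity extraction; the substring test is only a stand-in.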
As a further scheme of the present invention, the voice information analysis unit is configured to analyze the voice information sent by the target user during the process of watching the recorded and broadcast video, and the specific steps are as follows:
b1, converting the voice information sent by the target user in the process of watching the recorded and broadcast video into text by means of a speech-to-text technology;
and B2, acquiring response templates corresponding to various keywords to which the voice information sent by the target user in the process of watching the recorded and broadcast video according to the analysis acquisition mode of the text information response templates, and marking the response templates as the voice information response templates.
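Steps B1 and B2 simply reuse the text pipeline after transcription. A sketch with a stub transcriber standing in for any real speech-to-text backend (all names are hypothetical):

```python
def analyze_voice(audio_bytes, transcribe, rule_base):
    """B1: convert speech to text; B2: reuse the text-analysis matching."""
    text = transcribe(audio_bytes)  # any speech-to-text backend
    product = next((p for p in rule_base if p in text), None)
    if product is None:
        return []
    return [tpl for kw, tpl in rule_base[product].items() if kw in text]

# Stub transcriber standing in for a real speech-to-text engine.
fake_transcribe = lambda audio: "how much does the jacket cost, what is the price"
RULE_BASE = {"jacket": {"price": "The jacket is currently priced at ..."}}
replies = analyze_voice(b"...", fake_transcribe, RULE_BASE)
```

The design point is that voice requests add only a transcription front end; the template matching itself is shared with the text path.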
As a further scheme of the present invention, the picture information analysis unit is configured to analyze picture information sent by a target user in a process of watching a recorded video, and specifically includes the following steps:
c1, extracting the main body contour of a picture sent by the target user in the process of watching the recorded and broadcast video, matching it against the commodities corresponding to the various main body contours stored in the rule base, thereby obtaining the commodity corresponding to the main body contour of the picture, marking that commodity as the target user's intended commodity, and extracting the display features corresponding to the intended commodity;
and C2, matching the display characteristics corresponding to the intended commodities of the target user with response templates which are stored in the rule base and to which the various display characteristics corresponding to the various commodities belong, further acquiring the response templates to which the display characteristics corresponding to the intended commodities of the target user belong, and marking the response templates as picture information response templates.
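Steps C1 and C2 require matching an extracted main-body contour to stored commodity contours. As a simplified stand-in for real contour extraction and matching, this sketch compares coarse shape descriptors by Euclidean distance (the feature choice and all values are invented for illustration):

```python
import math

# Hypothetical rule base: a coarse shape descriptor per commodity's main contour.
CONTOUR_DB = {
    "mug":    (0.85, 1.0),   # (solidity, aspect ratio) - illustrative features
    "jacket": (0.60, 1.4),
}
DISPLAY_FEATURES = {"mug": "ceramic, 350 ml", "jacket": "waterproof shell"}

def match_contour(descriptor, db):
    """C1: match an extracted main-contour descriptor to the closest commodity."""
    return min(db, key=lambda p: math.dist(descriptor, db[p]))

product = match_contour((0.62, 1.35), CONTOUR_DB)   # -> intended commodity
feature_reply = DISPLAY_FEATURES[product]            # C2: display-feature template
```

A real implementation would extract contours from the image itself (e.g. with an image-processing library) rather than taking a precomputed descriptor.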
As a further scheme of the present invention, the intelligent reply to the target user comprises the following specific steps:
and D1, identifying, according to the request information sent by the target user, the type of that request information, the types comprising: a text information type, a voice information type and a picture information type;
and D2, based on the identified type of the request information sent by the target user, adopting the corresponding information response template to make an intelligent reply.
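Steps D1 and D2 are a dispatch on the request type. A minimal sketch (the template strings are placeholders for the matched response templates):

```python
def reply(request):
    """D1: identify the request type; D2: route to the matching analyzer."""
    handlers = {
        "text": lambda p: f"text-template for: {p}",
        "voice": lambda p: f"voice-template for: {p}",
        "picture": lambda p: f"picture-template for: {p}",
    }
    kind, payload = request
    return handlers[kind](payload)

print(reply(("text", "price of jacket")))
```

In the full system each handler would invoke the corresponding analysis unit and return its response template rather than a placeholder string.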
As a further aspect of the present invention, the playing data includes: the number of viewers, the per-person watching duration and the sales volume; the interaction data includes: the number of people sending request information and the number of request information messages sent.
As a further scheme of the present invention, the analyzing the playing data corresponding to each recorded and played video clip specifically comprises the following steps:
e1, comparing the number of viewers corresponding to each recorded and broadcast video clip with the reference number of viewers corresponding to each clip stored in the rule base, and calculating the viewing heat corresponding to each clip, the calculation formula being α_i = RC_i / RC′_i0, wherein α_i denotes the viewing heat corresponding to the i-th recorded and broadcast video clip, RC_i denotes the number of viewers corresponding to the i-th clip, and RC′_i0 denotes the reference number of viewers corresponding to the i-th clip;
e2, acquiring the duration of each recorded and broadcast video clip, comparing the per-person watching duration of each clip with the clip's duration, and calculating the per-person watching duration ratio coefficient corresponding to each clip, the calculation formula being β_i = SJ_i / φ′_i, wherein β_i denotes the per-person watching duration ratio coefficient corresponding to the i-th recorded and broadcast video clip, SJ_i denotes the per-person watching duration corresponding to the i-th clip, and φ′_i denotes the duration of the i-th clip;
e3, calculating the sales ratio coefficient corresponding to each recorded and broadcast video clip based on the obtained sales volume of each clip, the calculation formula being δ_i = XE_i / Σ_{j=1}^{q} XE_j, wherein δ_i denotes the sales ratio coefficient corresponding to the i-th recorded and broadcast video clip and XE_i denotes the sales volume corresponding to the i-th clip.
As a further scheme of the present invention, the playing effect evaluation coefficient corresponding to each recorded and broadcast video clip is calculated as η_i = γ_1·α_i + γ_2·β_i + γ_3·δ_i, wherein η_i denotes the playing effect evaluation coefficient corresponding to the i-th recorded and broadcast video clip, and γ_1, γ_2 and γ_3 denote the preset playing effect correction coefficients corresponding to the viewing heat, the per-person watching duration and the sales volume respectively.
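Assuming the three metrics in E1 to E3 are simple ratios and the evaluation coefficient is their weighted combination (the patent's formula images are not reproduced in the text, so these exact forms are assumptions), the calculation might look like:

```python
def play_effect(watchers, ref_watchers, avg_watch, clip_len, sales, total_sales,
                g1=0.4, g2=0.4, g3=0.2):
    """Sketch of E1-E3 plus the weighted playing effect coefficient.

    All ratio forms and the default correction coefficients g1..g3 are assumed."""
    alpha = watchers / ref_watchers   # viewing heat (E1)
    beta = avg_watch / clip_len       # per-person watching duration ratio (E2)
    delta = sales / total_sales       # sales share across clips (E3, assumed)
    return g1 * alpha + g2 * beta + g3 * delta

eta = play_effect(watchers=1200, ref_watchers=1000, avg_watch=90,
                  clip_len=120, sales=30, total_sales=100)
```

With these sample figures the clip over-performs its viewer baseline (alpha > 1), which raises the overall coefficient accordingly.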
As a further scheme of the present invention, the analyzing of the interactive data corresponding to each recorded and played video clip specifically comprises the following steps:
and F1, calculating the request-sender number ratio coefficient corresponding to each recorded and broadcast video clip based on the obtained number of people sending request information for each clip, the calculation formula being ε_i = FS_i / Σ_{j=1}^{q} FS_j, wherein ε_i denotes the request-sender number ratio coefficient corresponding to the i-th recorded and broadcast video clip and FS_i denotes the number of people sending request information for the i-th clip;
and F2, calculating the request-message number ratio coefficient corresponding to each recorded and broadcast video clip based on the number of request information messages sent for each clip, the calculation formula being θ_i = TS_i / Σ_{j=1}^{q} TS_j, wherein θ_i denotes the request-message number ratio coefficient corresponding to the i-th recorded and broadcast video clip and TS_i denotes the number of request information messages sent for the i-th clip.
As a further scheme of the present invention, the interaction effect evaluation coefficient corresponding to each recorded and broadcast video clip is calculated as ψ_i = 1 − e^{−(χ_1·ε_i + χ_2·θ_i)}, wherein ψ_i denotes the interaction effect evaluation coefficient corresponding to the i-th recorded and broadcast video clip, e denotes the natural constant, ε_i and θ_i denote the request-sender number ratio coefficient and the request-message number ratio coefficient of the i-th clip, and χ_1 and χ_2 denote the preset interaction effect weight factors corresponding to the number of people sending request information and the number of request information messages sent respectively.
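Assuming F1 and F2 are share-of-total ratios and that the natural constant e mentioned in the text enters through a saturating transform (again assumptions, since the formula images are not reproduced), a sketch:

```python
import math

def interaction_effect(senders, total_senders, msgs, total_msgs, x1=0.5, x2=0.5):
    """Sketch of F1-F2 and the interaction effect coefficient.

    The ratio forms, the exp-based combination and the default weights
    x1, x2 are all assumed, not taken from the patent text."""
    eps = senders / total_senders   # request-sender ratio (F1, assumed)
    theta = msgs / total_msgs       # request-message ratio (F2, assumed)
    # Saturating combination using the natural constant e from the text.
    return 1 - math.exp(-(x1 * eps + x2 * theta))

psi = interaction_effect(senders=80, total_senders=200, msgs=150, total_msgs=500)
```

The saturating form keeps ψ_i in (0, 1) and rewards additional interaction with diminishing returns, which is a common choice for such composite scores.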
Compared with the prior art, the embodiment of the invention at least has the following beneficial effects:
(1) The invention provides an unmanned live broadcast online analysis management system based on multi-dimensional feature capture, which can intelligently and automatically reply request information sent by a target user, avoids the problem that the request information sent by audiences cannot be processed in time due to a traditional manual management mode, has better real-time performance, further improves the viewing experience of the user on recorded and broadcast videos, and is also beneficial to improving the user retention rate of a platform.
(2) The invention identifies the request information sent by the user and analyzes and responds not only to the text information the user sends but also to the voice information and the picture information, which improves the comprehensiveness of analyzing and replying to user information.
(3) The invention divides the recorded and broadcast video into a plurality of clips according to the different commodities displayed, and then evaluates the playing effect and the interaction effect of each clip separately. This avoids the prior-art concern with only the overall playing effect of the recorded and broadcast video, and enables a comprehensive and accurate multi-dimensional effect analysis and evaluation, so that the live broadcast platform can intuitively see the playing effect and interaction effect of the clips corresponding to each commodity, which facilitates the platform's self-management improvement.
Drawings
The invention is further illustrated by the accompanying drawings, but the embodiments shown in the drawings do not limit the invention in any way; for a person skilled in the art, other drawings can be obtained on the basis of the following drawings without inventive effort.
FIG. 1 is a schematic diagram of the system structure connection of the present invention.
Fig. 2 is a schematic structural diagram of a user request information analysis module according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the present invention provides an unmanned live broadcast online analysis management system based on multidimensional feature capture, including: the system comprises a rule base, a user request information acquisition module, a user request information analysis module, an intelligent reply module, a recorded and broadcast video division module, a recorded and broadcast video data acquisition module, a data analysis module and a comprehensive display terminal;
the user request information analysis module is respectively connected with the rule base, the user request information acquisition module and the intelligent reply module, the recorded broadcast video dividing module is connected with the recorded broadcast video data acquisition module, and the data analysis module is respectively connected with the rule base, the recorded broadcast video data acquisition module and the comprehensive display terminal.
The rule base is used for storing the response templates corresponding to the keywords of each commodity, the commodities corresponding to various main body contours, the response templates corresponding to the display features of each commodity, and the reference number of viewers corresponding to each recorded and broadcast video clip.
The user request information acquisition module is used for acquiring the request information sent by a target user in the process of watching the recorded and broadcast video, wherein the request information includes text information, voice information and picture information.
The user request information analysis module is used for analyzing the request information sent by the target user in the process of watching recorded and broadcast videos.
Referring to fig. 2, the user request information analysis module includes a text information analysis unit, a voice information analysis unit, and a picture information analysis unit;
specifically, the text information analysis unit is configured to analyze text information sent by a target user in a process of watching recorded and broadcasted videos, and includes the specific steps of:
a1, extracting request commodities and key words from text information sent by a target user in the process of watching recorded and broadcast videos to obtain the request commodities and various key words corresponding to the text information sent by the target user in the process of watching recorded and broadcast videos;
and A2, matching the requested commodity and keywords corresponding to the text information sent by the target user in the process of watching the recorded and broadcast video against the response templates, stored in the rule base, corresponding to the keywords of each commodity, thereby obtaining the response templates corresponding to the keywords of the requested commodity, and marking these as text information response templates.
The keywords include: appearance, performance, price, logistics, etc.
Specifically, the voice information analysis unit is configured to analyze the voice information sent by the target user in the process of watching recorded and broadcasted video, and specifically includes the following steps:
b1, converting the voice information sent by the target user in the process of watching the recorded and broadcast video into text by means of a speech-to-text technology;
and B2, acquiring response templates corresponding to various keywords to which the voice information sent by the target user in the process of watching the recorded and broadcast video according to the analysis acquisition mode of the text information response templates, and marking the response templates as the voice information response templates.
Specifically, the picture information analysis unit is configured to analyze picture information sent by a target user in a process of watching recorded and broadcasted video, and includes the specific steps of:
c1, extracting the main body contour of a picture sent by the target user in the process of watching the recorded and broadcast video, matching it against the commodities corresponding to the various main body contours stored in the rule base, thereby obtaining the commodity corresponding to the main body contour of the picture, marking that commodity as the target user's intended commodity, and extracting the display features corresponding to the intended commodity;
and C2, matching the display characteristics corresponding to the intended commodities of the target user with response templates which are stored in the rule base and to which the various display characteristics corresponding to the various commodities belong, further acquiring the response templates to which the display characteristics corresponding to the intended commodities of the target user belong, and marking the response templates as picture information response templates.
The intelligent reply module is used for intelligently replying the target user based on the analysis result of the user request information analysis module;
specifically, the intelligent reply to the target user includes the following specific steps:
and D1, identifying, according to the request information sent by the target user, the type of that request information, the types comprising: a text information type, a voice information type and a picture information type;
and D2, based on the identified type of the request information sent by the target user, adopting the corresponding information response template to make an intelligent reply.
In this specific embodiment of the invention, the request information sent by the user is first identified by type, so that the text information, voice information, and picture information sent by the user are each analyzed and answered appropriately.
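Steps D1 and D2 amount to a type check followed by a template dispatch. A hedged sketch, in which the payload conventions (bytes for a picture, a dict with an `"audio"` key for voice, anything else as text) are assumed purely for illustration:

```python
def classify_request(request):
    """D1: identify the request type from the payload.
    The representation is hypothetical: raw bytes are treated as a
    picture, a dict carrying an 'audio' key as voice, and any other
    value as text."""
    if isinstance(request, bytes):
        return "picture"
    if isinstance(request, dict) and "audio" in request:
        return "voice"
    return "text"

def reply(request, templates):
    """D2: dispatch to the response template matching the request type.
    `templates` maps a type name to its prepared response template."""
    return templates[classify_request(request)]
```

Usage: with `templates = {"text": t1, "voice": t2, "picture": t3}`, `reply(user_request, templates)` returns the template for the detected type.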
The recorded and broadcast video dividing module is used for dividing the recorded and broadcast video by duration according to the different commodities displayed, so as to obtain a plurality of recorded and broadcast video clips, which are respectively numbered 1, 2, ..., i, ..., q.
The recorded and broadcast video data acquisition module is used for acquiring parameter data corresponding to each recorded and broadcast video clip, wherein the parameter data comprise broadcast data and interaction data;
Specifically, the playing data include the number of viewers, the per-person watching duration, and the sales volume; the interaction data include the number of users who sent request information and the number of request messages sent.
The data analysis module is used for analyzing the playing data corresponding to each recorded and broadcast video segment so as to obtain a playing effect evaluation coefficient corresponding to each recorded and broadcast video segment, and analyzing the interactive data corresponding to each recorded and broadcast video segment so as to obtain an interactive effect evaluation coefficient corresponding to each recorded and broadcast video segment;
specifically, the analyzing the playing data corresponding to each recorded and played video clip specifically comprises the following steps:
E1. Compare the number of viewers of each recorded and broadcast video clip with the reference number of viewers stored in the rule base, and calculate the viewing heat of each clip as α_i = RC_i / RC'_i0, where α_i denotes the viewing heat of the i-th recorded and broadcast video clip, RC_i the number of viewers of the i-th clip, and RC'_i0 the reference number of viewers of the i-th clip;
In the above formula, the greater the number of viewers of a recorded and broadcast video clip, the greater its viewing heat.
In this specific embodiment of the invention, the number of viewers of each recorded and broadcast video clip is analyzed because it is an important factor in evaluating the playing effect of the recorded and broadcast video: a larger audience indicates higher attention, and the analysis result lets the platform recognize the playing heat of the recorded and broadcast video more clearly and make corresponding adjustments.
E2. Acquire the duration of each recorded and broadcast video clip, compare the per-person watching duration of each clip with the clip duration, and calculate the per-person watching-time ratio coefficient as β_i = SJ_i / φ'_i, where β_i denotes the per-person watching-time ratio coefficient of the i-th recorded and broadcast video clip, SJ_i the per-person watching duration of the i-th clip, and φ'_i the duration of the i-th clip;
It should be noted that the greater the per-person watching-time ratio of a recorded and broadcast video clip, the greater its per-person watching-time ratio coefficient, indicating better viewer retention for that clip.
E3. Based on the obtained sales of each recorded and broadcast video clip, calculate the sales ratio coefficient as δ_i = XE_i / Σ_{j=1}^{q} XE_j, where δ_i denotes the sales ratio coefficient of the i-th recorded and broadcast video clip and XE_i the sales of the i-th clip.
It should be noted that the larger the sales share of a recorded and broadcast video clip, the larger its sales ratio coefficient, indicating a better selling effect for that clip.
In this specific embodiment of the invention, the sales of each recorded and broadcast video clip are analyzed because sales volume is a basic operating indicator of the live broadcast platform: analyzing sales lets the platform see the selling effect of each commodity intuitively, and the platform can add, remove, or improve the commodities on sale based on the analysis result.
Specifically, the play-effect evaluation coefficient of each recorded and broadcast video clip is calculated as η_i = γ_1·α_i + γ_2·β_i + γ_3·δ_i, where η_i denotes the play-effect evaluation coefficient of the i-th recorded and broadcast video clip, and γ_1, γ_2, and γ_3 are the preset play-effect correction coefficients corresponding to the viewing heat, the per-person watching-time ratio, and the sales ratio, respectively.
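Steps E1 through E3 and the weighted combination can be sketched as follows. Note the formulas are reconstructions from the surrounding text (viewing heat as actual over reference viewers, watch-time ratio as per-person duration over clip duration, sales as each clip's share of the total), and the weights are illustrative values, not the patent's preset coefficients:

```python
def play_effect(clips, gammas=(0.4, 0.3, 0.3)):
    """clips: list of dicts with keys 'viewers' (RC_i), 'ref_viewers'
    (RC'_i0), 'avg_watch' (SJ_i), 'duration' (phi'_i), and 'sales' (XE_i).
    Returns the play-effect evaluation coefficient eta_i for each clip."""
    total_sales = sum(c["sales"] for c in clips)
    g1, g2, g3 = gammas
    etas = []
    for c in clips:
        alpha = c["viewers"] / c["ref_viewers"]                   # E1: viewing heat
        beta = c["avg_watch"] / c["duration"]                     # E2: watch-time ratio
        delta = c["sales"] / total_sales if total_sales else 0.0  # E3: sales share
        etas.append(g1 * alpha + g2 * beta + g3 * delta)          # weighted combination
    return etas
```

A clip that meets its reference audience, holds viewers for half its duration, and takes half the sales would score 0.4·1 + 0.3·0.5 + 0.3·0.5 = 0.7 under these illustrative weights.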
Specifically, the analyzing of the interactive data corresponding to each recorded and played video clip specifically comprises the following steps:
F1. Based on the obtained number of request-information senders of each recorded and broadcast video clip, calculate the sender-count ratio coefficient as ε_i = FS_i / Σ_{j=1}^{q} FS_j, where ε_i denotes the sender-count ratio coefficient of the i-th recorded and broadcast video clip and FS_i the number of request-information senders of the i-th clip;
It should be noted that the more request-information senders a recorded and broadcast video clip has, the larger its sender-count ratio coefficient, indicating a stronger attraction effect for that clip.
F2. Based on the number of request messages sent for each recorded and broadcast video clip, calculate the message-count ratio coefficient as θ_i = TS_i / Σ_{j=1}^{q} TS_j, where θ_i denotes the message-count ratio coefficient of the i-th recorded and broadcast video clip and TS_i the number of request messages sent for the i-th clip.
It should be noted that the more request messages are sent for a recorded and broadcast video clip, the larger its message-count ratio coefficient, indicating a better interaction effect for that clip.
Specifically, the interactive-effect evaluation coefficient of each recorded and broadcast video clip is calculated as ψ_i = e^(χ_1·ε_i + χ_2·θ_i), where ψ_i denotes the interactive-effect evaluation coefficient of the i-th recorded and broadcast video clip, e is the natural constant, and χ_1 and χ_2 are the preset interactive-effect weight factors corresponding to the number of request-information senders and the number of request messages sent, respectively.
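Steps F1 and F2 and their combination can be sketched as follows. The ratio coefficients are reconstructed as each clip's share of the totals, and the exponential combination is one plausible reading of a formula said to involve the natural constant e; both the functional form and the weights here are assumptions, not the patent's exact definitions:

```python
import math

def interaction_effect(clips, chis=(0.6, 0.4)):
    """clips: list of dicts with keys 'senders' (FS_i) and
    'messages' (TS_i). Returns the interactive-effect evaluation
    coefficient psi_i for each clip."""
    total_senders = sum(c["senders"] for c in clips)
    total_messages = sum(c["messages"] for c in clips)
    x1, x2 = chis
    psis = []
    for c in clips:
        eps = c["senders"] / total_senders if total_senders else 0.0      # F1
        theta = c["messages"] / total_messages if total_messages else 0.0 # F2
        psis.append(math.exp(x1 * eps + x2 * theta))  # assumed exponential form
    return psis
```

Because exp is monotone increasing, ranking clips by ψ_i gives the same order as ranking by the weighted sum itself, so the assumed form does not change the display ordering.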
In this specific embodiment of the invention, the recorded and broadcast video is divided by displayed commodity into a plurality of clips, and each clip is then evaluated separately for play effect and interaction effect. This avoids the prior-art practice of attending mostly to the overall play effect of the recorded and broadcast video, and enables a comprehensive, accurate, multi-dimensional effect analysis, so that the live broadcast platform can see the play effect and interaction effect of the clip corresponding to each commodity intuitively, which facilitates the platform's own management and improvement.
The comprehensive display terminal is used for sorting the play-effect evaluation coefficients of the recorded and broadcast video clips from high to low and displaying the clip numbers in that order, and likewise sorting the interaction-effect evaluation coefficients from high to low and displaying the corresponding clip numbers in that order.
It should be noted that the purpose of the comprehensive display terminal in this embodiment is to present the target user with an intuitive and convenient visual display interface, so that the user can select the corresponding recorded and broadcast video clip to watch according to personal needs; this saves the target user's time and improves the experience of watching the recorded and broadcast video.
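The display terminal's behavior is a descending sort of clip numbers by coefficient. A minimal sketch of that ordering, applicable to either the play-effect or the interaction-effect ranking (the function name is illustrative):

```python
def ranked_display(numbers, coefficients):
    """Sort clip numbers by their evaluation coefficient, high to low,
    as the comprehensive display terminal does for both the play-effect
    and the interaction-effect rankings."""
    order = sorted(zip(numbers, coefficients), key=lambda nc: nc[1], reverse=True)
    return [n for n, _ in order]
```

For example, clips 1, 2, 3 with coefficients 0.2, 0.9, 0.5 would be displayed in the order 2, 3, 1.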
In this specific embodiment, the unmanned live broadcast online analysis and management system based on multi-dimensional feature capture replies to the target user's request information intelligently and automatically, avoiding the delays of a traditional manual management mode in processing requests sent by the audience. Its better real-time performance improves the user's experience of watching recorded and broadcast videos and also helps raise user retention on the platform.
The foregoing is merely exemplary and illustrative of the present invention and various modifications, additions and substitutions may be made by those skilled in the art to the specific embodiments described without departing from the scope of the invention as defined in the following claims.
Claims (10)
1. The utility model provides an unmanned live online analysis management system based on multidimension degree characteristic snatchs which characterized in that includes: the system comprises a rule base, a user request information acquisition module, a user request information analysis module, an intelligent reply module, a recorded and broadcast video division module, a recorded and broadcast video data acquisition module, a data analysis module and a comprehensive display terminal;
the rule base is used for storing the response templates corresponding to the keywords of each commodity, the commodities corresponding to the various main-body outlines, the response templates corresponding to the display features of each commodity, and the reference numbers of viewers corresponding to the respective recorded and broadcast video clips;
the user request information acquisition module is used for acquiring request information sent by a target user in the process of watching recorded and broadcast videos, wherein the request information comprises character information, voice information and picture information;
the user request information analysis module is used for analyzing request information sent by a target user in the process of watching recorded and broadcast videos, and comprises a text information analysis unit, a voice information analysis unit and a picture information analysis unit;
the intelligent reply module is used for intelligently replying the target user based on the analysis result of the user request information analysis module;
the recorded and broadcast video dividing module is used for dividing the recorded and broadcast video duration according to the different commodities displayed, so as to obtain a plurality of recorded and broadcast video clips, which are respectively numbered 1, 2, ..., i, ..., q;
the recorded and broadcast video data acquisition module is used for acquiring parameter data corresponding to each recorded and broadcast video clip, wherein the parameter data comprise broadcast data and interaction data;
the data analysis module is used for analyzing the playing data corresponding to each recorded and broadcast video segment so as to obtain a playing effect evaluation coefficient corresponding to each recorded and broadcast video segment, and analyzing the interactive data corresponding to each recorded and broadcast video segment so as to obtain an interactive effect evaluation coefficient corresponding to each recorded and broadcast video segment;
the comprehensive display terminal is used for sorting the play-effect evaluation coefficients of the recorded and broadcast video clips from high to low and displaying the clip numbers in that order, and likewise sorting the interaction-effect evaluation coefficients from high to low and displaying the corresponding clip numbers in that order.
2. The unmanned live broadcast online analysis and management system based on multi-dimensional feature capture as claimed in claim 1, characterized in that: the text information analysis unit is used for analyzing text information sent by a target user in the process of watching recorded and broadcast videos, and comprises the following specific steps:
A1. Extract the requested commodity and the keywords from the text information sent by the target user while watching the recorded and broadcast video;
A2. Match the requested commodity and its keywords against the response templates stored in the rule base for the keywords of each commodity, acquire the response templates corresponding to the keywords of the requested commodity, and mark them as the text-information response templates.
3. The unmanned live broadcast online analysis management system based on multi-dimensional feature capture as claimed in claim 1, characterized in that: the voice information analysis unit is used for analyzing the voice information sent by the target user in the process of watching recorded and broadcast videos, and the specific steps are as follows:
B1. Convert the voice information sent by the target user while watching the recorded and broadcast video into text by a speech-to-text conversion technology;
B2. Using the same analysis procedure as for the text-information response templates, acquire the response templates corresponding to the keywords contained in the voice information, and mark them as the voice-information response templates.
4. The unmanned live broadcast online analysis management system based on multi-dimensional feature capture as claimed in claim 1, characterized in that: the picture information analysis unit is used for analyzing the picture information sent by the target user in the process of watching recorded and broadcast videos, and the specific steps are as follows:
C1. Extract the main-body outline of the picture sent by the target user while watching the recorded and broadcast video, match it against the commodity outlines stored in the rule base to identify the corresponding commodity, mark that commodity as the target user's intended commodity, and extract the display features corresponding to the intended commodity;
C2. Match the display features of the intended commodity against the response templates stored in the rule base for the display features of each commodity, acquire the response template to which those display features belong, and mark it as the picture-information response template.
5. The unmanned live broadcast online analysis management system based on multi-dimensional feature capture as claimed in claim 1, characterized in that: the intelligent reply to the target user comprises the following specific steps:
D1. From the request information sent by the target user, identify its type, the types being: text information, voice information, and picture information;
D2. Based on the identified type, reply intelligently using the corresponding information response template.
6. The unmanned live broadcast online analysis management system based on multi-dimensional feature capture as claimed in claim 1, characterized in that: the playing data include the number of viewers, the per-person watching duration, and the sales volume, and the interaction data include the number of users who sent request information and the number of request messages sent.
7. The unmanned live broadcast online analysis management system based on multi-dimensional feature capture as claimed in claim 1, characterized in that: the method for analyzing the playing data corresponding to each recorded and played video clip comprises the following specific steps:
E1. Compare the number of viewers of each recorded and broadcast video clip with the reference number of viewers stored in the rule base, and calculate the viewing heat of each clip as α_i = RC_i / RC'_i0, where α_i denotes the viewing heat of the i-th recorded and broadcast video clip, RC_i the number of viewers of the i-th clip, and RC'_i0 the reference number of viewers of the i-th clip;
E2. Acquire the duration of each recorded and broadcast video clip, compare the per-person watching duration of each clip with the clip duration, and calculate the per-person watching-time ratio coefficient as β_i = SJ_i / φ'_i, where β_i denotes the per-person watching-time ratio coefficient of the i-th recorded and broadcast video clip, SJ_i the per-person watching duration of the i-th clip, and φ'_i the duration of the i-th clip;
E3. Based on the obtained sales of each recorded and broadcast video clip, calculate the sales ratio coefficient as δ_i = XE_i / Σ_{j=1}^{q} XE_j, where δ_i denotes the sales ratio coefficient of the i-th recorded and broadcast video clip and XE_i the sales of the i-th clip.
8. The unmanned live broadcast online analysis management system based on multi-dimensional feature capture as claimed in claim 7, wherein: the play-effect evaluation coefficient of each recorded and broadcast video clip is calculated as η_i = γ_1·α_i + γ_2·β_i + γ_3·δ_i, where η_i denotes the play-effect evaluation coefficient of the i-th recorded and broadcast video clip, and γ_1, γ_2, and γ_3 are the preset play-effect correction coefficients corresponding to the viewing heat, the per-person watching-time ratio, and the sales ratio, respectively.
9. The unmanned live broadcast online analysis management system based on multi-dimensional feature capture as claimed in claim 1, characterized in that: the method for analyzing the interactive data corresponding to each recorded and played video clip comprises the following specific steps:
F1. Based on the obtained number of request-information senders of each recorded and broadcast video clip, calculate the sender-count ratio coefficient as ε_i = FS_i / Σ_{j=1}^{q} FS_j, where ε_i denotes the sender-count ratio coefficient of the i-th recorded and broadcast video clip and FS_i the number of request-information senders of the i-th clip;
F2. Based on the number of request messages sent for each recorded and broadcast video clip, calculate the message-count ratio coefficient as θ_i = TS_i / Σ_{j=1}^{q} TS_j, where θ_i denotes the message-count ratio coefficient of the i-th recorded and broadcast video clip and TS_i the number of request messages sent for the i-th clip.
10. The unmanned live broadcast online analysis management system based on multi-dimensional feature capture as claimed in claim 9, wherein: the interactive-effect evaluation coefficient of each recorded and broadcast video clip is calculated as ψ_i = e^(χ_1·ε_i + χ_2·θ_i), where ψ_i denotes the interactive-effect evaluation coefficient of the i-th recorded and broadcast video clip, e is the natural constant, and χ_1 and χ_2 are the preset interactive-effect weight factors corresponding to the number of request-information senders and the number of request messages sent, respectively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210775962.9A CN115174951B (en) | 2022-07-02 | 2022-07-02 | Unmanned live broadcast online analysis and management system based on multi-dimensional feature capture |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115174951A true CN115174951A (en) | 2022-10-11 |
CN115174951B CN115174951B (en) | 2023-04-07 |
Family
ID=83490133
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210775962.9A Active CN115174951B (en) | 2022-07-02 | 2022-07-02 | Unmanned live broadcast online analysis and management system based on multi-dimensional feature capture |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115174951B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111200737A (en) * | 2019-12-29 | 2020-05-26 | 航天信息股份有限公司企业服务分公司 | Intelligent robot-assisted question answering system and method for live video platform |
US20200320307A1 (en) * | 2019-04-08 | 2020-10-08 | Baidu Usa Llc | Method and apparatus for generating video |
CN111754302A (en) * | 2020-06-24 | 2020-10-09 | 詹晨 | Video live broadcast interface commodity display intelligent management system based on big data |
CN112465596A (en) * | 2020-12-01 | 2021-03-09 | 南京翰氜信息科技有限公司 | Image information processing cloud computing platform based on electronic commerce live broadcast |
CN113191845A (en) * | 2021-05-07 | 2021-07-30 | 武汉新之扬电子商务有限公司 | Online live shopping platform data analysis processing method, system, equipment and computer storage medium |
CN113204709A (en) * | 2021-05-29 | 2021-08-03 | 武汉申子仟电子商务有限公司 | Short video search matching recommendation method and system based on multidimensional data depth comparison analysis and computer storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20230315 Address after: 200120 1st floor, building 12, No. 718, Chuansha Road, Pudong New Area, Shanghai Applicant after: Shanghai Jiachang Zhilian Automobile Technology Co.,Ltd. Address before: 430300 No. 18, Julong Avenue, panlongcheng Economic Development Zone, Huangpi District, Wuhan City, Hubei Province Applicant before: Wuhan Qingshi advertising media Co.,Ltd. |
GR01 | Patent grant | ||