CN113596491A - Cross-border live broadcast system based on cloud server - Google Patents
- Publication number
- CN113596491A (application number CN202110835343.XA)
- Authority
- CN
- China
- Prior art keywords
- cloud server
- client terminal
- video data
- information
- comment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04N21/2187 — Live feed (source of audio or video content)
- H04N21/23418 — Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
- H04N21/235 — Processing of additional data, e.g. scrambling of additional data or processing content descriptors
- H04N21/435 — Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
- H04N21/44008 — Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
- H04N21/4788 — Supplemental services communicating with other users, e.g. chatting
- H04N21/4884 — Data services, e.g. news ticker, for displaying subtitles
Abstract
The invention provides a cross-border live broadcast system based on a cloud server, comprising an anchor terminal, a cloud server and a client terminal, wherein the anchor terminal is connected with the cloud server; the anchor terminal is used for acquiring video data of a live broadcast site; the cloud server is used for processing the video data to acquire subtitle information of a preset language type, and for transmitting the video data and the subtitle information to the client terminal in response to a watching request from the client terminal; the client terminal is used for displaying the video data and the subtitle information, acquiring comment information input by a user, and sending the comment information to the cloud server; the cloud server is also used for performing natural language processing on the comment information from different client terminals to obtain the key comments within a preset time period; the anchor terminal is used for displaying the key comments. The invention effectively solves the problem in the prior art that the anchor easily misses important comments when too many viewers are watching the live broadcast, and at the same time enables cross-border live broadcasting for audiences in different countries.
Description
Technical Field
The invention relates to the field of live broadcast, in particular to a cross-border live broadcast system based on a cloud server.
Background
Existing live broadcast systems often cause the anchor to miss important information when there are too many comments. For example, in live broadcasts aimed at selling goods, viewers' purchase requests and consultation messages are relatively important. When the number of viewers is large, however, comments scroll by quickly while the comment display area is small; such a narrow display area clearly cannot show a large number of comments in a short time, so some important comment information is missed.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide a cross-border live broadcast system based on a cloud server, which comprises an anchor terminal, a cloud server and a client terminal;
the anchor terminal is used for acquiring video data of a live broadcast site and transmitting the video data to the cloud server;
the cloud server is used for processing the video data to acquire subtitle information of a preset language type, and for transmitting the video data and the subtitle information to the client terminal in response to a watching request from the client terminal;
the client terminal is used for displaying the video data and the subtitle information, acquiring comment information input by a user, and sending the comment information to the cloud server;
the cloud server is also used for performing natural language processing on the comment information from different client terminals, obtaining the key comments within a preset time period, and sending the key comments to the anchor terminal;
the anchor terminal is used for displaying the key comments.
Preferably, the anchor terminal comprises an identity authentication module, a data acquisition module, a first communication module and a first display module;
the identity authentication module is used for authenticating the identity of the anchor and opening the use authority of the anchor terminal to the anchor passing the identity authentication;
the data acquisition module is used for acquiring video data of a live broadcast site and transmitting the video data to the cloud server through the first communication module;
the first communication module is used for receiving the key comments sent by the cloud server and transmitting the key comments to the first display module;
the first display module is used for displaying the key comments.
Preferably, the video data includes video pictures and video sounds.
Preferably, processing the video data to obtain subtitle information of a preset language type comprises:
the preset language type comprises a native language type and one or more foreign language types;
performing speech recognition on the video sound and converting it into subtitle information of the native language type;
and translating the subtitle information of the native language type into subtitle information of the foreign language types.
Preferably, the transmitting the video data and the subtitle information to the client terminal in response to a viewing request of the client terminal includes:
acquiring the country of the client terminal according to the IP address of the viewing request;
acquiring corresponding subtitle information according to the country of the client terminal;
and transmitting the subtitle information and the video data to the client terminal.
Preferably, the client terminal comprises an input module, a second display module and a second communication module;
the input module is used for acquiring comment information input by a user;
the second display module is used for displaying the video data and the subtitle information;
the second communication module is used for receiving the video data and the subtitle information sent from the cloud server and sending the comment information to the cloud server.
Preferably, performing natural language processing on the comment information from different client terminals to obtain the key comments within a preset time period comprises:
storing the comment information in the same time period into a set S;
translating the comment information contained in the set S into comment information of the native language type and storing the comment information into a set S';
obtaining a similar comment set for each piece of comment information in the set S', the similar comment set of comment information i in S' being recorded as Ui;
and sorting the comment information in the set S' by the number of elements contained in its similar comment set, the top thre% being taken as key comments, where thre denotes a preset threshold parameter.
According to the invention, the comment information from the client terminals can be processed using the powerful computing capability of the cloud server, and the key comments obtained after processing are output to the anchor terminal for display, which effectively solves the problem in the prior art that the anchor easily misses important comments when too many viewers are watching the live broadcast. Meanwhile, the invention can also recognize and process the live video data to obtain subtitle information of the native language type and translate it into subtitle information of foreign language types, making it easier for users in different countries to understand the live content.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, other drawings can be obtained on the basis of the following drawings without inventive effort.
Fig. 1 is a diagram of an exemplary embodiment of a cross-border live broadcast system based on a cloud server according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As shown in fig. 1, an embodiment of the present invention provides a cross-border live broadcast system based on a cloud server, which comprises an anchor terminal, a cloud server, and a client terminal;
the anchor terminal is used for acquiring video data of a live broadcast site and transmitting the video data to the cloud server;
the cloud server is used for processing the video data to acquire subtitle information of a preset language type, and for transmitting the video data and the subtitle information to the client terminal in response to a watching request from the client terminal;
the client terminal is used for displaying the video data and the subtitle information, acquiring comment information input by a user, and sending the comment information to the cloud server;
the cloud server is also used for performing natural language processing on the comment information from different client terminals, obtaining the key comments within a preset time period, and sending the key comments to the anchor terminal;
the anchor terminal is used for displaying the key comments.
Specifically, the cloud server sends both the key comments and the ordinary comment information to the anchor terminal, and the anchor terminal displays the key comments and the comment information simultaneously.
A dedicated display area for key comments may be provided in the anchor terminal, or the key comments may be highlighted in another way, for example by rendering the text in bold.
Preferably, the anchor terminal comprises an identity authentication module, a data acquisition module, a first communication module and a first display module;
the identity authentication module is used for authenticating the identity of the anchor and opening the use authority of the anchor terminal to the anchor passing the identity authentication;
the data acquisition module is used for acquiring video data of a live broadcast site and transmitting the video data to the cloud server through the first communication module;
the first communication module is used for receiving the key comments sent by the cloud server and transmitting the key comments to the first display module;
the first display module is used for displaying the key comments.
Identity authentication is provided mainly to improve the security of the live broadcast process: only users who have passed registration and verification can use the live broadcast terminal, which prevents it from being used for illegal purposes. An anchor who fails identity authentication cannot use the live broadcast terminal to broadcast.
Specifically, the identity authentication module comprises a photographing unit, an image processing unit and an identification unit;
the photographing unit is used for acquiring a face image of the anchor and transmitting the face image to the image processing unit;
the image processing unit is used for acquiring the feature data contained in the face image;
the identification unit is used for matching the feature data acquired by the image processing unit against the feature data prestored in a feature database; if the matching succeeds, the anchor passes the identity authentication, otherwise the anchor fails the identity authentication.
Specifically, the feature database stores feature data extracted from the face images of users who have passed registration review.
The identification unit judges whether matching succeeds according to similarity: if the feature database contains feature data whose similarity to the feature data acquired by the image processing unit exceeds a preset similarity threshold, the matching succeeds.
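The matching logic above can be sketched as follows. Cosine similarity over feature vectors and the 0.9 threshold are illustrative assumptions; the patent does not specify a similarity measure or threshold value.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def authenticate(query_features, feature_database, sim_threshold=0.9):
    # Matching succeeds if any prestored feature vector is similar enough
    # to the features extracted from the captured face image.
    return any(cosine_similarity(query_features, stored) > sim_threshold
               for stored in feature_database)

db = [[0.1, 0.9, 0.3], [0.8, 0.2, 0.5]]
```

Any feature extractor producing fixed-length vectors (e.g. HOG or LBP descriptors, as mentioned later in the text) can feed this check.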
Specifically, the photographing unit comprises a camera and a judging subunit;
the camera is used for acquiring a face image of the anchor;
the judging subunit is used for calculating the effective information content parameter of the face image, and if the effective information content parameter of the face image is larger than a preset parameter threshold, transmitting the face image to the image processing unit; otherwise, informing the camera to acquire the face image of the user again;
the effective information content parameter is calculated in the following way:
wherein eicp denotes the effective information content parameter; w1, w2 and w3 denote preset weight parameters; numUf denotes the total number of pixel points in the face image; Uf denotes the set of pixel points in the face image; Uk denotes the set of pixel points in a window of size H centered on pixel point k in the face image; fd(k) and fd(m) denote the gradient amplitudes of pixel points k and m respectively; vfd denotes the average gradient amplitude of the pixel points in Uf; numUk denotes the total number of pixel points contained in Uk; numfcp denotes the number of skin pixel points in Uf; and litva denotes the variance of the L component of the face image in the Lab color model.
In the embodiment of the invention, the effective information content parameter is computed from the gradient amplitudes, the number of skin pixel points, the variance of the L component, and so on. The greater the spread of gradient amplitudes, the larger the number of skin pixel points, and the smaller the variance of the L component, the larger the effective information content parameter of the face image. This favours selecting face images with high effective information content, improving the accuracy of the image processing unit and thereby the security of the live broadcast system.
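Since the original formula image is not reproduced here, the following is only a minimal sketch of a parameter with the stated behaviour (rises with gradient spread and skin-pixel ratio, falls with L-component variance); the exact combination and the weights w1–w3 are assumptions.

```python
def effective_info_param(grad_mags, skin_mask, l_values, w1=0.4, w2=0.4, w3=0.2):
    # grad_mags: gradient amplitude per pixel; skin_mask: 1 for skin pixels,
    # 0 otherwise; l_values: L component (Lab) per pixel. Flat lists, equal length.
    n = len(grad_mags)
    vfd = sum(grad_mags) / n                              # mean gradient amplitude
    grad_term = sum(abs(g - vfd) for g in grad_mags) / n  # spread of gradients
    skin_ratio = sum(skin_mask) / n                       # numfcp / numUf
    mean_l = sum(l_values) / n
    litva = sum((v - mean_l) ** 2 for v in l_values) / n  # variance of L component
    # Larger gradient spread and skin ratio raise eicp; larger L variance lowers it.
    return w1 * grad_term + w2 * skin_ratio - w3 * litva
```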
Specifically, the acquiring feature data included in the face image includes:
converting the face image into a grayscale image;
carrying out noise reduction processing on the gray level image to obtain a noise reduction processing image;
carrying out image segmentation processing on the noise reduction processing image, and dividing the noise reduction processing image into a foreground image and a background image;
identifying the face image to obtain a set imgfcU of skin pixel points contained in the face image;
processing the foreground image according to the imgfcU to obtain a processed foreground image;
and acquiring feature data contained in the processed foreground image by using a preset feature extraction algorithm.
The conversion into the gray image is beneficial to reducing the dimensionality of information needing to be processed and improving the processing speed of the invention. The noise reduction processing can effectively avoid the influence of noise pixel points on the feature data extraction.
In the invention, an image segmentation algorithm such as the Otsu algorithm can be used to segment the noise-reduced image into a foreground image and a background image.
Because image segmentation cannot classify every pixel point correctly, the invention supplements the foreground image with the skin pixel points so that the processed foreground image contains as many pixel points of the facial skin region as possible. This increases the effective information available to the feature data extraction step and thereby improves the security of the identity verification.
A preset feature extraction algorithm, such as the HOG or LBP algorithm, may be used to obtain the feature data contained in the processed foreground image.
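The first two stages of this pipeline can be sketched as follows: a weighted-average grayscale conversion (standard luma weights assumed, since the text does not give them) and an Otsu threshold that maximizes between-class variance.

```python
def to_gray(r, g, b):
    # Weighted average grayscale conversion; 0.299/0.587/0.114 are the
    # common luma weights, assumed here.
    return 0.299 * r + 0.587 * g + 0.114 * b

def otsu_threshold(pixels):
    # Otsu's method: pick the threshold that maximizes the between-class
    # variance of the resulting foreground/background split.
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        fg = [p for p in pixels if p >= t]
        bg = [p for p in pixels if p < t]
        if not fg or not bg:
            continue
        wf, wb = len(fg) / len(pixels), len(bg) / len(pixels)
        var = wf * wb * (sum(fg) / len(fg) - sum(bg) / len(bg)) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

In practice a library routine (e.g. OpenCV's Otsu thresholding) would replace the explicit loop; the sketch only shows the criterion.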
Specifically, converting the face image into a grayscale image includes:
the face image is converted into a grayscale image using a weighted average method.
Specifically, the performing noise reduction processing on the grayscale image to obtain a noise-reduced image includes:
performing suspected-noise detection on the pixel points in the grayscale image, dividing them into a suspected noise point set conp and a non-suspected noise point set nconp:
for a pixel point xc in the face image, if its pixel value is the maximum within its 8-neighborhood, or its pixel value is greater than 255×T1, or its pixel value is less than 255×T2, then xc belongs to the suspected noise points, where T1 and T2 are preset threshold parameters;
the pixel points p in conp are processed as follows:
wherein cf(p) denotes the value of pixel point p after noise reduction; delta1 and delta2 denote preset weight parameters; numa denotes the total number of filtering windows of different sizes; wa denotes the weight of the filtering window of the a-th size; mid(Up,a) denotes the median of the pixel points in the filtering window of the a-th size centered on pixel point p; numsc denotes the total number of non-suspected pixel points in that window; f(sc) denotes the pixel value of the sc-th non-suspected pixel point in it; bwf(p,a) denotes the average pixel value of the non-suspected pixel points in it; and va(p,a) denotes the standard deviation of the pixel values of the non-suspected pixel points in it;
the pixel points q in nconp are processed as follows:
wherein cf(q) denotes the value of pixel point q after noise reduction; alpha1 and alpha2 denote weight parameters; f(q) denotes the pixel value of pixel point q; (i, j) denotes the coordinates of a pixel point in the Q×Q neighborhood of q; f(i, j) denotes the pixel value of the pixel point at (i, j); hc denotes the standard deviation used for Gaussian filtering of pixel point q; tf(i, j) denotes the gradient amplitude of the pixel point at (i, j); tf(q) denotes the gradient amplitude of pixel point q; and gc denotes the standard deviation of the gradient amplitudes between pixel point q and the pixel points in its Q×Q neighborhood;
the pixel points in conp and nconp in the grayscale image are filtered accordingly to obtain the noise-reduced image.
In this embodiment, the pixel points are divided into suspected and non-suspected noise points, and different noise reduction methods are applied to each type, which improves the accuracy of the noise reduction result. Specifically, the pixel points in conp are filtered with windows of several sizes, and the results are combined by weighted fusion to obtain the denoised values; within each window size, two different noise reduction methods are applied and their results weighted to produce the final value. For the pixel points in nconp, the denoised value is obtained by combining information from the neighborhood pixel points and the current pixel point in terms of differences in pixel value, gradient value, and so on; neighborhood pixel points that differ less from the current pixel point receive larger weights, yielding an accurate noise reduction result.
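The classification step and a simplified version of the suspected-point filtering can be sketched as follows. The threshold values T1 and T2 are illustrative (the patent leaves them unspecified), and a single 3×3 median window stands in for the multi-size weighted scheme described above.

```python
import statistics

def classify_suspected(img, T1=0.95, T2=0.05):
    # img: 2-D list of gray values. A pixel is a suspected noise point if it
    # strictly exceeds every 8-neighbor, or is brighter than 255*T1, or
    # darker than 255*T2.
    h, w = len(img), len(img[0])
    suspected = set()
    for y in range(h):
        for x in range(w):
            nb = [img[j][i] for j in range(max(0, y - 1), min(h, y + 2))
                            for i in range(max(0, x - 1), min(w, x + 2))
                            if (i, j) != (x, y)]
            v = img[y][x]
            if v > max(nb) or v > 255 * T1 or v < 255 * T2:
                suspected.add((x, y))
    return suspected

def denoise_suspected(img, suspected):
    # Replace each suspected pixel with the median of its 3x3 window —
    # a single-window stand-in for the patent's multi-size weighted fusion.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for (x, y) in suspected:
        win = [img[j][i] for j in range(max(0, y - 1), min(h, y + 2))
                         for i in range(max(0, x - 1), min(w, x + 2))]
        out[y][x] = statistics.median(win)
    return out
```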
Specifically, processing the foreground image according to imgfcU to obtain a processed foreground image includes:
storing pixel points contained in the foreground image into a set imgfrU;
deleting from imgfcU the pixel points that already belong to imgfrU, obtaining a set imgfcU';
acquiring a union frtlU of imgfrU and imgfcU';
and forming the processed foreground image by the pixel points in the frtlU.
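Treating pixel coordinates as sets, the three steps above reduce to simple set operations (note that imgfrU ∪ (imgfcU − imgfrU) is just the union imgfrU ∪ imgfcU):

```python
def supplement_foreground(imgfrU, imgfcU):
    # imgfrU: foreground pixel coordinates from segmentation;
    # imgfcU: detected skin pixel coordinates.
    imgfcU_prime = imgfcU - imgfrU   # skin pixels missed by segmentation
    frtlU = imgfrU | imgfcU_prime    # union forms the processed foreground
    return frtlU
```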
In another embodiment, the identity authentication module includes a fingerprint identification unit, and the fingerprint identification unit is configured to determine whether the anchor passes the identity authentication in a fingerprint identification manner.
In another embodiment, the identity authentication module verifies the anchor's identity by means of an account number and password.
Preferably, the video data includes video pictures and video sounds.
Preferably, the processing the video data to obtain subtitles of a preset language type includes:
the preset language type comprises a native language type and a foreign language type;
performing voice recognition processing on the video sound, and translating the video sound into subtitle information of a native language type;
and translating the text information of the native language type into subtitle information of a foreign language type.
Specifically, the native language type is the language of the country where the anchor terminal is located or of the anchor's nationality, and can be set by the anchor according to actual needs. For example, when an anchor of Chinese nationality broadcasts live in China, the native language type is Chinese.
The foreign language type refers to a language type of a country other than the country to which the native language type corresponds. For example, when the native language type is Chinese, then the foreign language type may include English, French, German, Spanish, and the like.
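The recognition-then-translation flow can be sketched as follows. `recognize` and `translate` are hypothetical stand-ins for whatever speech-recognition and machine-translation services the cloud server actually uses; here they are toy functions so the sketch runs on its own.

```python
def recognize(audio_chunk, lang):
    # Toy recognizer: in this sketch the "audio" is already a transcript.
    return audio_chunk

def translate(text, src, dst):
    # Toy translator: tags the text with the target language code.
    return f"[{dst}] {text}"

def build_subtitles(audio_chunk, native_lang="zh", foreign_langs=("en", "fr", "es")):
    # Recognize speech into native-language subtitles, then translate those
    # subtitles into each configured foreign language.
    native_text = recognize(audio_chunk, lang=native_lang)
    subtitles = {native_lang: native_text}
    for lang in foreign_langs:
        subtitles[lang] = translate(native_text, src=native_lang, dst=lang)
    return subtitles
```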
Preferably, the transmitting the video data and the subtitle information to the client terminal in response to a viewing request of the client terminal includes:
acquiring the country of the client terminal according to the IP address of the viewing request;
acquiring corresponding subtitle information according to the country of the client terminal;
and transmitting the subtitle information and the video data to the client terminal.
Specifically, the country where the client terminal is located can be obtained from the IP address contained in the watching request that the client terminal sends to the cloud server; the subtitle information of the language type corresponding to that country is then selected from the translated foreign-language subtitle information according to the country name.
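A minimal sketch of this selection follows. The IP-to-country table stands in for a real GeoIP database lookup, and the country-to-language mapping and fallback language are illustrative assumptions.

```python
COUNTRY_SUBTITLES = {"CN": "zh", "US": "en", "FR": "fr"}  # illustrative mapping

def pick_subtitles(client_ip, ip_to_country, subtitles, default_lang="en"):
    # ip_to_country: dict standing in for a GeoIP database;
    # subtitles: language code -> subtitle track, as produced by the server.
    country = ip_to_country.get(client_ip)
    lang = COUNTRY_SUBTITLES.get(country, default_lang)
    return subtitles.get(lang, subtitles[default_lang])
```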
Preferably, the client terminal comprises an input module, a second display module and a second communication module;
the input module is used for acquiring comment information input by a user;
the second display module is used for displaying the video data and the subtitle information;
the second communication module is used for receiving the video data and the subtitle information sent from the cloud server and sending the comment information to the cloud server.
Specifically, the input module may include a touch screen, a microphone, an external input device, and the like. The devices can realize the input of comment information.
Preferably, performing natural language processing on the comment information from different client terminals to obtain the key comments within a preset time period comprises:
storing the comment information in the same time period into a set S;
translating the comment information contained in the set S into comment information of the native language type and storing the comment information into a set S';
obtaining a similar comment set for each piece of comment information in the set S', the similar comment set of comment information i in S' being recorded as Ui;
and sorting the comment information in the set S' by the number of elements contained in its similar comment set, the top thre% being taken as key comments, where thre denotes a preset threshold parameter.
Specifically, the obtaining of the similar comment set of the comment information in the set S' includes:
for the comment information i in S', calculating the text similarity between i and each comment information j in S' (i ≠ j), storing every comment information j whose text similarity is larger than a preset threshold into Ui, and then deleting those comment information j from the set S' to obtain an updated set S';
and then selecting another piece of comment information from the updated set S' and computing its similar comment set, repeating this process until the set S' is empty, at which point the computation of the similar comment sets is complete.
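The greedy clustering and ranking procedure above can be sketched in Python as follows. The similarity metric (difflib's `ratio`) and the parameter values are illustrative assumptions, since the patent requires only "text similarity" without prescribing a concrete measure:

```python
from difflib import SequenceMatcher

def text_similarity(a: str, b: str) -> float:
    # Placeholder metric; the patent leaves the similarity
    # measure open, so difflib's ratio stands in here.
    return SequenceMatcher(None, a, b).ratio()

def key_comments(comments, sim_threshold=0.8, thre=20):
    """Greedily cluster near-duplicate comments (the sets Ui),
    then return the top thre% of comments ranked by cluster size."""
    remaining = list(comments)   # working copy of the set S'
    clusters = []                # pairs (comment i, its similar set Ui)
    while remaining:
        i = remaining.pop(0)
        ui = [j for j in remaining
              if text_similarity(i, j) > sim_threshold]
        for j in ui:             # delete members of Ui from S'
            remaining.remove(j)
        clusters.append((i, ui))
    # Sort by cluster size; the largest clusters are the key comments.
    clusters.sort(key=lambda c: len(c[1]), reverse=True)
    k = max(1, round(len(clusters) * thre / 100))
    return [comment for comment, _ in clusters[:k]]
```

With five comments of which three are near-duplicates, `key_comments` returns the representative of the largest cluster, matching the patent's intuition that many similar comments signal an important topic.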
According to the invention, the comment information from the client terminals can be processed using the strong computing power of the cloud server, and the key comments obtained after processing are output to the anchor terminal for display, which effectively solves the prior-art problem of an anchor missing important comments when a live broadcast has too many viewers. Meanwhile, the invention can also recognize the live video data to obtain native-language subtitle information and translate it into foreign-language subtitle information, making it easier for users in different countries to understand the live content.
While embodiments of the invention have been shown and described, it will be understood by those skilled in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Claims (7)
1. A cross-border live broadcast system based on a cloud server is characterized by comprising an anchor terminal, the cloud server and a client terminal;
the anchor terminal is used for acquiring video data of a live broadcast site and transmitting the video data to the cloud server;
the cloud server is used for processing the video data, acquiring subtitle information of a preset language type, responding to a watching request of the client terminal and transmitting the video data and the subtitle information to the client terminal;
the client terminal is used for displaying the video data and the subtitle information, acquiring comment information input by a user and then sending the comment information to the cloud server;
the cloud server is also used for carrying out natural language processing on comment information from different client terminals, acquiring key comments in a preset time period and sending the key comments to the anchor terminal;
the anchor terminal is used for displaying the key comments.
2. The cloud server-based cross-border live broadcast system according to claim 1, wherein the anchor terminal comprises an identity verification module, a data acquisition module, a first communication module and a first display module;
the identity authentication module is used for authenticating the identity of the anchor and opening the use authority of the anchor terminal to the anchor passing the identity authentication;
the data acquisition module is used for acquiring video data of a live broadcast site and transmitting the video data to the cloud server through the first communication module;
the first communication module is used for receiving the key comments sent by the cloud server and transmitting the key comments to the first display module;
the first display module is used for displaying the key comments.
3. The cloud server-based cross-border live broadcast system of claim 1, wherein the video data comprises video pictures and video sounds.
4. The cloud server-based cross-border live broadcasting system according to claim 3, wherein the processing the video data to obtain the subtitles in the preset language type includes:
the preset language type comprises a native language type and a foreign language type;
performing voice recognition processing on the video sound, and translating the video sound into subtitle information of a native language type;
and translating the subtitle information of the native language type into subtitle information of a foreign language type.
5. The cloud server-based cross-border live broadcasting system according to claim 4, wherein the transmitting the video data and the subtitle information to the client terminal in response to the viewing request of the client terminal comprises:
acquiring the country of the client terminal according to the IP address of the viewing request;
acquiring corresponding subtitle information according to the country of the client terminal;
and transmitting the subtitle information and the video data to the client terminal.
6. The cloud server-based cross-border live broadcasting system of claim 1, wherein the client terminal comprises an input module, a second display module and a second communication module;
the input module is used for acquiring comment information input by a user;
the second display module is used for displaying the video data and the subtitle information;
the second communication module is used for receiving the video data and the subtitle information sent from the cloud server and sending the comment information to the cloud server.
7. The cross-border live broadcasting system based on the cloud server as claimed in claim 4, wherein performing natural language processing on comment information from different client terminals to obtain key comments within a preset time period comprises:
storing the comment information in the same time period into a set S;
translating the comment information contained in the set S into comment information of the native language type and storing the comment information into a set S';
obtaining a similar comment set of comment information in the set S ', and recording the similar comment set of comment information i in the set S' as Ui;
and sorting the comment information in the set S' by the number of elements contained in its similar comment set, and taking the comment information in the top thre% as key comments, where thre denotes a preset threshold parameter.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110835343.XA CN113596491A (en) | 2021-07-23 | 2021-07-23 | Cross-border live broadcast system based on cloud server |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113596491A true CN113596491A (en) | 2021-11-02 |
Family
ID=78249224
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110835343.XA Pending CN113596491A (en) | 2021-07-23 | 2021-07-23 | Cross-border live broadcast system based on cloud server |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113596491A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114040220A (en) * | 2021-11-25 | 2022-02-11 | 京东科技信息技术有限公司 | Live broadcasting method and device |
CN114501042A (en) * | 2021-12-20 | 2022-05-13 | 阿里巴巴(中国)有限公司 | Cross-border live broadcast processing method and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10992666B2 (en) | Identity verification method, terminal, and server | |
TWI706268B (en) | Identity authentication method and device | |
KR102350507B1 (en) | Access control method, access control device, system and storage medium | |
CN104834849B (en) | Dual-factor identity authentication method and system based on Application on Voiceprint Recognition and recognition of face | |
CN111385283B (en) | Double-recording video synthesis method and double-recording system of self-service equipment | |
US10740636B2 (en) | Method, system and terminal for identity authentication, and computer readable storage medium | |
WO2019104930A1 (en) | Identity authentication method, electronic device and computer-readable storage medium | |
CN106850648B (en) | Identity verification method, client and service platform | |
CN106778179B (en) | Identity authentication method based on ultrasonic lip language identification | |
CN113596491A (en) | Cross-border live broadcast system based on cloud server | |
CN104361276A (en) | Multi-mode biometric authentication method and multi-mode biometric authentication system | |
TW201337812A (en) | Method and device for indentification and system and method for payment | |
KR101884291B1 (en) | Display apparatus and control method thereof | |
CN112148922A (en) | Conference recording method, conference recording device, data processing device and readable storage medium | |
CN109005104B (en) | Instant messaging method, device, server and storage medium | |
CN111881726A (en) | Living body detection method and device and storage medium | |
CN111611568A (en) | Face voiceprint rechecking terminal and identity authentication method thereof | |
KR101724971B1 (en) | System for recognizing face using wide angle camera and method for recognizing face thereof | |
WO2021007857A1 (en) | Identity authentication method, terminal device, and storage medium | |
CN103714282A (en) | Interactive type identification method based on biological features | |
CN111611437A (en) | Method and device for preventing face voiceprint verification and replacement attack | |
CN204576520U (en) | Based on the Dual-factor identity authentication device of Application on Voiceprint Recognition and recognition of face | |
CN111046804A (en) | Living body detection method, living body detection device, electronic equipment and readable storage medium | |
CN105989324A (en) | Fingerprint feature-based embedded identity authentication system | |
CN111985400A (en) | Face living body identification method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||