CN117082268A - Video recording and broadcasting method and system for online live broadcast - Google Patents

Video recording and broadcasting method and system for online live broadcast

Info

Publication number
CN117082268A
Authority
CN
China
Prior art keywords
live
target
state
scene
item
Prior art date
Legal status
Granted
Application number
CN202311348130.XA
Other languages
Chinese (zh)
Other versions
CN117082268B (en)
Inventor
张德涛
刘良君
汪德兵
Current Assignee
Chengdu Youwei Caishang Education Technology Co ltd
Original Assignee
Chengdu Youwei Caishang Education Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Youwei Caishang Education Technology Co ltd filed Critical Chengdu Youwei Caishang Education Technology Co ltd
Priority to CN202311348130.XA priority Critical patent/CN117082268B/en
Publication of CN117082268A publication Critical patent/CN117082268A/en
Application granted granted Critical
Publication of CN117082268B publication Critical patent/CN117082268B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334Recording operations
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

According to the video recording and broadcasting method and system for online live broadcasting, once the real-time live scene category is determined to be the target live scene category according to the key description scenario of the first target item, the item live description information of the second target item under the target live scene category is read, and whether to record the live state of the second target item is determined by checking whether its target live state belongs to the live states licensed under the target live scene category. Because the real-time live scene category is determined while the live description information is processed, and the live state of the second target item is then read in a targeted manner against the live states licensed for that category, complete video recording can be carried out, which improves the accuracy and reliability of recording and broadcasting.

Description

Video recording and broadcasting method and system for online live broadcast
Technical Field
The application relates to the technical field of video data processing, in particular to a video recording and broadcasting method and a video recording and broadcasting system for online live broadcasting.
Background
In recent years, with rising household spending on education and growing internet penetration, online education has developed rapidly and the industry has entered a boom period, with online education companies springing up like bamboo shoots after rain and flourishing in great numbers. However, some industry pain points remain unsolved, for example how to get users more engaged with courses. In particular, when a live broadcast ends, users are at their most enthusiastic and often want to review the highlights of the course they have just heard, or to revisit the parts they did not understand. Yet generating the playback usually takes a long time (in the current industry it takes about 1 hour for a 5-10 minute course, and some recordings are not available until the next day), which greatly dampens users' willingness and enthusiasm to keep listening (the goal is to let users watch the course in the live room within about 10 minutes after the broadcast). Moreover, in actual operation, recorded videos may suffer from problems such as missing segments and unclear picture. A technical solution is therefore needed to address these problems.
Disclosure of Invention
To address the above technical problems in the related art, the present application provides a video recording and broadcasting method and system for online live broadcasting.
In a first aspect, a video recording and broadcasting method for online live broadcast is provided, and the method includes: on the premise that the real-time live broadcast scene type is determined to be the target live broadcast scene type according to the key description scenario of the first target item, item live broadcast description information of a second target item under the target live broadcast scene type is obtained; reading the item live description information to obtain a target live state of the second target item; and if the target live state belongs to the target live scene live state licensed under the target live scene category, recording the live state of the second target item.
In an independent embodiment, before determining that the real-time live scene category is the target live scene category according to the key description scenario of the first target item, the method further includes: acquiring speech data generated by a first target item in a live broadcast scene in real time; performing speaking reading on the speaking data, and extracting key description plots; and determining the real-time live scene type according to the key description scenario, and judging whether the live scene type is a target live scene type or not.
In an independent embodiment, reading the item live description information to obtain the target live state of the second target item includes: determining a reading direction associated with the target live scene category; and reading the item live description information according to the reading direction to obtain the target live state of the second target item.
In an independent embodiment, the target live scene category is a first live scene, and the reading direction of the first live scene includes a semantic description direction; reading the item live description information according to the reading direction to obtain the target live state of the second target item includes: reading attribute information in the item live description information according to the semantic description direction to obtain the description content of the second target item in the first live scene; and determining the target live state of the second target item according to the description content. Recording the live state of the second target item if the target live state belongs to the licensed target live scene live state under the target live scene category includes: recording the live state of the second target item if the target live state belongs to a second live state licensed in the first live scene.
In an independent embodiment, the target live scene category is a third live scene, and the reading direction of the third live scene includes a factor description direction; reading the item live description information according to the reading direction to obtain the target live state of the second target item includes: reading the item live description information according to the factor description direction to determine whether anchor language information exists; and if the anchor language information exists, determining the target live state of the second target item by locating the anchor language information. Recording the live state of the second target item if the target live state belongs to the licensed target live scene live state under the target live scene category includes: recording the live state of the second target item if the target live state belongs to a mute live state licensed under the target live scene category.
In an independent embodiment, the target live scene category is a fourth live scene, and the reading direction of the fourth live scene includes a live state parsing direction; reading the item live description information according to the reading direction to obtain the target live state of the second target item includes: reading attribute information in the item live description information according to the live state parsing direction to obtain the voice live state and behavior live state of the second target item in the fourth live scene; and determining the target live state of the second target item according to the voice live state and the behavior live state. Recording the live state of the second target item if the target live state belongs to the licensed target live scene live state under the target live scene category includes: recording the live state of the second target item if the target live state belongs to a fifth live state licensed in the fourth live scene.
In an independent embodiment, the target live scene category is a lecture live scene, and the reading direction of the lecture live scene includes a local recognition direction; reading the item live description information according to the reading direction to obtain the target live state of the second target item includes: reading local identification information of the second target item in the item live description information according to the local recognition direction to obtain the target live state of the second target item. Recording the live state of the second target item if the target live state belongs to the licensed target live scene live state under the target live scene category includes: recording the live state of the second target item if the target live state belongs to a lecturing live state licensed under the lecture live scene.
In an independent embodiment, the method further includes: obtaining the recording and broadcasting state quantization information of the second target item, and obtaining the voice recording and broadcasting state quantization information of the second target item; and determining target live evaluation information according to preset original live evaluation information, the recording and broadcasting state quantization information and the voice recording and broadcasting state quantization information.
In a second aspect, an on-line live video recording and broadcasting system is provided, including a processor and a memory in communication with each other, where the processor is configured to read a computer program from the memory and execute the computer program to implement the method described above.
According to the video recording and broadcasting method and system for online live broadcasting provided by the embodiments of the present application, once the real-time live scene category is determined to be the target live scene category according to the key description scenario of the first target item, the item live description information of the second target item under the target live scene category is read, and whether to record the live state of the second target item is determined by checking whether its target live state belongs to the live states licensed under the target live scene category. Because the real-time live scene category is determined while the live description information is processed, and the live state of the second target item is then read in a targeted manner against the live states licensed for that category, complete video recording can be carried out, which improves the accuracy and reliability of recording and broadcasting.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting its scope; a person skilled in the art may obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a video recording and broadcasting method for online live broadcasting according to an embodiment of the present application.
Detailed Description
For a better understanding of the above technical solutions, they are described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific features of the embodiments of the present application are detailed explanations of its technical solutions rather than limitations on them, and that, in the absence of conflict, the embodiments and their technical features may be combined with one another.
Referring to fig. 1, an on-line live video recording method is shown, which may include the following technical solutions described in steps 202-206.
Step 202, obtaining item live description information of a second target item under the target live scene category on the premise of determining that the real-time live scene category is the target live scene category according to the key description scenario of the first target item.
The first target item may be determined according to the actual live broadcast application scene. For example, different live broadcasts require different materials and therefore give rise to different scenes; the content of a live broadcast is not limited to the language information of the live broadcast personnel, and the more vivid scene being broadcast is an important key point that needs to be recorded accurately.
The item live description information of the second target item may include live description information and anchor language information of the second target item, the live description information includes description content data and local identification data of the second target item, and the description content data may be description content data such as language behavior and action behavior.
There is a correspondence between the key descriptive episodes and the live scene categories.
In the classroom application live broadcast scene, if the item live broadcast description information of the first target item includes the anchor language information of the first target item, the anchor language information in the item live broadcast description information of the first target item is read, a key description scenario of the first target item is determined, a real-time live broadcast scene type is determined according to the key description scenario, and if the real-time live broadcast scene type is determined to be the target live broadcast scene type, item live broadcast description information of the second target item in the target live broadcast scene type is obtained.
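As a minimal illustration of this correspondence (the keyword table, category names, and matching rule below are illustrative assumptions, not details given in the application), determining the live scene category from the anchor language information can be sketched as a keyword lookup:

```python
# Hypothetical sketch: map key description scenarios (keyword phrases extracted
# from the anchor's speech) to live scene categories. The phrases and category
# names are illustrative assumptions only.
SCENARIO_TO_CATEGORY = {
    "open your textbook": "lecture_live_scene",
    "please discuss": "discussion_live_scene",
    "watch the screen": "demonstration_live_scene",
}

def classify_scene(anchor_speech: str) -> str:
    """Return the live scene category whose key description scenario
    appears in the anchor language information."""
    text = anchor_speech.lower()
    for scenario, category in SCENARIO_TO_CATEGORY.items():
        if scenario in text:
            return category
    return "default_live_scene"  # no key scenario matched
```

Once the category returned here equals the target live scene category, the item live description information of the second target item would be obtained, as described above.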
And 204, reading the item live description information to obtain the target live state of the second target item.
Wherein, the live broadcast state comprises voice information, scene information, equipment information and the like.
When the event live broadcast description information of the second target event is read, in order to ensure the accuracy and reliability of live broadcast description information reading, a corresponding reading direction can be determined according to the live broadcast scene type determined in real time, so that the reading requirement of the live broadcast scene type is met.
For example, when the item live description information of the second target item is read, the item live description information includes the description content data and local identification data of the second target item, as well as the anchor language information under the live scene.
And 206, if the target live state belongs to the licensed target live scene live state under the target live scene category, recording the live state of the second target item.
Starting from the moment the live video stream begins to be pushed, the project backend creates an independent task for each live stream. At a fixed time interval (30 seconds) it pulls the m3u8-format media playlist covering the span from the previous task to the current time, then parses the m3u8 file line by line, analyzing each ts file in it and uniquely identifying each ts file by the start and end time of its segment. If the current ts file has already been analyzed, it is discarded; otherwise, the ts video frame segments are downloaded in multiple threads and then recorded. The downloaded segments are merged into the small mp4 file of the current time slice according to the algorithm logic; once more than 5 time slices have accumulated, every 5 small time slices are merged into one large time slice, and so on.
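The polling and de-duplication logic described above can be sketched as follows. This is an illustrative outline rather than the actual implementation: segment start and end times are derived here by accumulating `#EXTINF` durations from an assumed stream start, and the multi-threaded downloading and recording steps are omitted.

```python
def parse_playlist(m3u8_text: str, stream_start: float = 0.0):
    """Parse an m3u8 media playlist line by line and return
    (start_time, end_time, uri) for each ts segment.
    Times are derived by accumulating #EXTINF durations from an
    assumed stream start (an illustrative convention)."""
    segments, t, duration = [], stream_start, None
    for line in m3u8_text.splitlines():
        line = line.strip()
        if line.startswith("#EXTINF:"):
            # "#EXTINF:10.0," -> segment duration in seconds
            duration = float(line[len("#EXTINF:"):].split(",")[0])
        elif line and not line.startswith("#") and duration is not None:
            segments.append((t, t + duration, line))
            t += duration
            duration = None
    return segments

def new_segments(segments, seen):
    """Keep only ts segments not yet analyzed; a segment's unique
    identity is its (start, end) time pair."""
    fresh = []
    for start, end, uri in segments:
        key = (start, end)
        if key not in seen:      # already analyzed -> discard
            seen.add(key)
            fresh.append((start, end, uri))
    return fresh
```

Each polling cycle would call `parse_playlist` on the freshly pulled playlist and pass the result through `new_segments` before downloading, so a ts file listed in overlapping playlists is processed only once.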
After the live broadcast ends, an instruction to synthesize all mp4 files is initiated: all the small mp4 segments generated during the live broadcast are merged into the final complete mp4, which is then uploaded to a CDN distribution platform for students to watch.
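One common way to realize the hierarchical merging (every 5 small mp4 slices into a larger slice, then all slices into the final mp4) is ffmpeg's concat demuxer. The sketch below only groups slices and builds the concat list and command line; the file names and batch size are assumptions for illustration, and the command is not executed here.

```python
def chunk(files, size=5):
    """Group small mp4 slices into batches of `size` for hierarchical merging."""
    return [files[i:i + size] for i in range(0, len(files), size)]

def concat_command(parts, output, list_path="list.txt"):
    """Build the concat-demuxer list file content and the ffmpeg invocation
    that would merge `parts` into `output` without re-encoding."""
    listing = "".join(f"file '{p}'\n" for p in parts)
    cmd = ["ffmpeg", "-f", "concat", "-safe", "0",
           "-i", list_path, "-c", "copy", output]
    return listing, cmd
```

In a real pipeline, each batch from `chunk` would be written to a list file and merged with `subprocess.run(cmd)`, and the resulting large slices merged the same way into the final mp4 before CDN upload.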
It can be appreciated that, in order to meet the reading requirements of each live scene, each specified target live scene category has a corresponding set of licensed target live scene live states. For example, when the target live scene category is the first live scene, there is a corresponding set of live states licensed under that scene.
When the target live state of the second target item has been determined, it is compared with the target live scene live states licensed under the target live scene category. If the target live state belongs to the licensed target live scene live states, the live state of the second target item is recorded; otherwise, the live state of the second target item is determined to be normal.
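The comparison between the read target live state and the licensed live states of the current category can be sketched as a simple set lookup; the category and state names below are illustrative assumptions drawn loosely from the embodiments, not an authoritative list:

```python
# Assumed licensed-state table: each target live scene category maps to the
# set of live states licensed under it (names are illustrative only).
LICENSED_STATES = {
    "first_live_scene": {"second_live_state"},
    "third_live_scene": {"mute_live_state"},
    "fourth_live_scene": {"fifth_live_state"},
    "lecture_live_scene": {"lecturing_live_state"},
}

def should_record(category: str, target_state: str) -> bool:
    """Record the second target item's live state only when it belongs to
    the licensed live states of the target live scene category."""
    return target_state in LICENSED_STATES.get(category, set())
```

A caller would record the live state when `should_record` returns True and treat the state as normal otherwise, matching the branch described above.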
In the video recording and broadcasting method of online live broadcasting, on the premise that the real-time live broadcasting scene type is determined as the target live broadcasting scene type according to the key description scenario of the first target item, the item live broadcasting description information of the second target item in the target live broadcasting scene type is read, the recording and broadcasting condition of the live broadcasting state of the second target item is determined by reading whether the target live broadcasting state of the second target item belongs to the target live broadcasting scene live broadcasting state permitted in the target live broadcasting scene type or not, when the live broadcasting description information is processed, the real-time live broadcasting scene type is determined, on the basis of the determination as the target live broadcasting scene type, the second target item live broadcasting state is read in a targeted mode according to the permitted live broadcasting state of the target live broadcasting scene type, and complete video recording can be carried out, so that the accuracy and reliability of recording and broadcasting are improved.
In an actual live scene, in order to ensure the accuracy of reading the live description information, the reading direction associated with the target live scene category needs to be determined, and the item live description information of the second target item is read in a targeted manner based on that reading direction, so that all target live scene categories are not read in the same reading direction.
Optionally, in one embodiment, determining a reading direction associated with the target live scene category; and reading the item live broadcast description information based on the reading direction to obtain the target live broadcast state of the second target item.
The reading directions include a semantic description direction, a factor description direction, a live state parsing direction, a local recognition direction, and the like. The reading direction associated with the first live scene may be the semantic description direction, the reading direction associated with the third live scene may be the factor description direction, the reading direction associated with the fourth live scene may be the live state parsing direction, and the reading direction associated with the lecture live scene may be the local recognition direction. It can be appreciated that, in a discussion live scene, the item live description information of the second target item may not need to be read.
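The association between scene categories and reading directions can be expressed as a dispatch table. The names are illustrative assumptions, and the discussion live scene maps to no reading direction at all, consistent with the remark that its item live description information may not need to be read:

```python
# Assumed category-to-direction table (all names are illustrative).
READING_DIRECTION = {
    "first_live_scene": "semantic_description",
    "third_live_scene": "factor_description",
    "fourth_live_scene": "live_state_parsing",
    "lecture_live_scene": "local_recognition",
    "discussion_live_scene": None,  # description information need not be read
}

def reading_direction_for(category: str):
    """Return the reading direction associated with a target live scene
    category, or None when no reading is required."""
    return READING_DIRECTION.get(category)
```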
For example, on the basis of determining the reading direction associated with the target live scene category, the item live description information is read based on the reading direction, so as to obtain the target live state of the second target item. And reading the item live broadcast description information based on the corresponding reading direction by determining the reading direction associated with the target live broadcast scene category, and accurately determining the target live broadcast state of the second target item.
The following is a manner of reading the item live description information of the second target item based on different reading directions.
In one embodiment, a processing method for reading a live status of a second target item based on a reading direction is provided, including the following steps.
And step 302, reading attribute information in the item live description information based on the semantic description direction to obtain the description content of the second target item in the first live scene.
On the premise that the target live scene category is determined to be the first live scene, the attribute information in the item live description information is read according to the semantic description direction corresponding to the first live scene, so as to obtain the description content of the second target item in the first live scene.
Step 304, determining the target live state of the second target item according to the description content.
Step 306, if the target live state belongs to the second live state licensed in the first live scene, recording the live state of the second target item.
In the above embodiment, the second target item is read according to the semantic description direction in the first live scene, so that whether its live state is abnormal can be determined in a specific live scene.
In another embodiment, a processing method for reading a live status of a second target item based on a reading direction is provided, including the following steps.
And step 402, reading the item live description information based on the factor description direction, and reading whether the anchor language information exists.
On the premise that the target live scene category is determined to be the third live scene, the audio in the item live description information of the second target item is read according to the factor description direction corresponding to the third live scene, to determine whether anchor language information exists.
If the anchor language information exists, the target live state of the second target item is determined by locating the anchor language information, step 404.
Step 406, if the target live state belongs to the mute live state licensed in the target live scene category, recording the live state of the second target item.
Illustratively, upon determining that the anchor language information exists, a localization AI space vector is determined by locating the anchor language information, the corresponding second target item is determined according to the localization AI space vector, and the item information of the second target item is obtained.
In the above embodiment, based on the factor description direction of the third live scene, whether anchor language information exists in the item live description information is checked to determine in a targeted manner whether the second target item has an abnormal live state. Whether the live state of the second target item should be recorded can thus be determined in the specific live scene without reading other live states, which improves the reliability of reading.
In another embodiment, a processing method for reading a live status of a second target item based on a reading direction is provided, including the following steps.
Step 602, reading attribute information in the item live description information based on the live state analysis direction, and reading a voice live state and a behavior live state of the second target item in the fourth live scene.
Step 604, determining the target live state of the second target item according to the voice live state and the behavior live state.
In step 606, if the target live state belongs to the licensed fifth live state in the fourth live scene, the live state of the second target item is recorded.
On the premise that the target live scene category is determined to be the fourth live scene, the attribute information in the item live description information is read according to the live state parsing direction corresponding to the fourth live scene, the voice live state and behavior live state of the second target item are read, and the target live state of the second target item is determined from the voice live state and behavior live state thus read. If the target live state belongs to a fifth live state licensed in the fourth live scene, the live state of the second target item is recorded; otherwise, the live state of the second target item is determined to be normal.
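Combining the voice live state and the behavior live state into a single target live state can be sketched as a simple rule; the rule and the state names here are assumptions for illustration only:

```python
def combine_states(voice_state: str, behavior_state: str) -> str:
    """Derive the target live state of the second target item from its voice
    and behavior live states. Illustrative rule (an assumption): the fifth
    live state is reported only when both components are active."""
    if voice_state == "speaking" and behavior_state == "gesturing":
        return "fifth_live_state"
    return "normal_live_state"
```

Under this sketch, only the combined `fifth_live_state` would fall into the licensed states of the fourth live scene and trigger recording.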
Further, when the voice live broadcast state of the second target item is read, the voice recording and broadcasting state quantization information is updated according to the read voice live broadcast state under the corresponding target live broadcast scene type.
In another embodiment, a processing method for reading a live status of a second target item based on a reading direction is provided, including the following steps.
Step 702, reading the local identification information of the second target item in the item live description information based on the local identification direction, so as to obtain the target live state of the second target item.
Step 704, if the target live state belongs to the licensed live state of the lecture in the lecture live scene, recording the live state of the second target item.
In the actual application live broadcast scene, after the live broadcast description information is processed, accurate evaluation can be performed on the actual application live broadcast scene according to the processing result of the live broadcast description information. For example, in a live classroom scene, the live description information of the second target item is processed in combination with the lecture words of the first target item, so that the live description information of the second target item can be accurately analyzed further, and effective live evaluation information can be determined.
In another embodiment, a video recording method for online live broadcast is provided, which includes the following steps.
Step 802, obtaining item live description information of a second target item under the target live scene category on the premise of determining that the real-time live scene category is the target live scene category according to the key description scenario of the first target item.
Further, the manner of determining the real-time live scene category as the target live scene category according to the key description scenario of the first target item may be implemented by:
In one embodiment: obtaining utterance data generated by the first target item in the live scene in real time; reading the utterance data and extracting the key description scenario; and determining the real-time live scene category according to the key description scenario and judging whether it is the target live scene category.
To improve the accuracy and reliability of processing, the method determines the live scene category in real time by acquiring the utterance data generated by the first target item in the live scene and extracting keywords from it. To enable targeted reading of live scenes, it must further be judged whether the real-time live scene category is the target live scene category.
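A minimal sketch of this determination step, assuming a simple keyword table; the table and all phrases are illustrative, not taken from the embodiment:

```python
# Illustrative sketch: extract the key description scenario from the
# first target item's utterance data and map it to a scene category.
# The keyword table is a hypothetical stand-in for the extraction step.

SCENE_KEYWORDS = {
    "lecture":    {"today we learn", "open your textbook"},
    "quiz":       {"answer the question", "start the quiz"},
    "discussion": {"discuss in groups", "share your view"},
}

def extract_key_scenario(utterance: str):
    """Speech reading: collect the key description phrases present."""
    text = utterance.lower()
    return {kw for kws in SCENE_KEYWORDS.values() for kw in kws if kw in text}

def determine_scene_category(utterance: str):
    """Map the extracted key scenario back to a live scene category."""
    found = extract_key_scenario(utterance)
    for category, kws in SCENE_KEYWORDS.items():
        if found & kws:
            return category
    return None

def is_target_scene(utterance: str, target: str) -> bool:
    """Judge whether the real-time category is the target category."""
    return determine_scene_category(utterance) == target
```

In practice the extraction would be done by a speech recognizer plus a trained classifier rather than fixed phrase matching; the sketch only shows the control flow.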
Step 804, determining a reading direction associated with the target live scene category.
Step 806, reading the item live description information based on the reading direction, to obtain the target live state of the second target item.
That is, the item live description information is read according to the reading direction associated with the determined target live scene category, to obtain the target live state of the second target item. The manner of reading the item live description information under the reading directions associated with different target live scene categories has been described above and is not repeated here.
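The association between scene categories and reading directions in steps 804-806 can be sketched as a dispatch table; the handler bodies are illustrative simplifications of the reading directions named in the claims (semantic description, factor description, and live state parsing), and all field names are assumptions:

```python
# Hypothetical dispatch sketch for steps 804-806: each target live scene
# category is associated with one reading direction, and the item live
# description information is read by the handler for that direction.

def read_semantic_description(desc):      # first live scene
    return desc.get("description_content", {}).get("state")

def read_factor_description(desc):        # third live scene
    anchor = desc.get("anchor_language")  # absent anchor language -> mute state
    return "muted" if not anchor else anchor.get("state")

def read_live_state_parsing(desc):        # fourth live scene
    return (desc.get("voice_state"), desc.get("behavior_state"))

READING_DIRECTIONS = {
    "first_live_scene":  read_semantic_description,
    "third_live_scene":  read_factor_description,
    "fourth_live_scene": read_live_state_parsing,
}

def read_item_description(scene_category: str, desc: dict):
    """Step 804: pick the reading direction associated with the scene
    category; step 806: apply it to the item live description information."""
    return READING_DIRECTIONS[scene_category](desc)
```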
Step 808, if the target live state belongs to the licensed target live scene live state under the target live scene category, recording the live state of the second target item.
Step 810, obtaining the recording and broadcasting state quantization information of the second target item, and the voice recording and broadcasting state quantization information of the second target item.
Step 812, determining the target live broadcast evaluation information according to the preset original live broadcast evaluation information, the recording and broadcasting state quantization information, and the voice recording and broadcasting state quantization information.
It can be understood that, for a class, original live broadcast evaluation information can be preset. When the item live description information of the second target item is read, the voice recording and broadcasting state quantization information is updated according to whether the voice live state of the second target item conforms to the corresponding live scene; the recording and broadcasting state quantization information is determined according to the abnormal live states under each target live scene category; and the original live broadcast evaluation information is then updated to obtain the target live broadcast evaluation information.
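One plausible reading of the evaluation update in steps 810-812, under the assumption that the two quantization terms combine additively with the preset original evaluation; the penalty scheme and all values are illustrative, not specified by the embodiment:

```python
# Hypothetical sketch of steps 810-812: derive a quantization term from
# the observed live states and fold it into the preset original
# evaluation. The additive model and clamping are assumptions.

def quantize_recording_states(states: list, licensed: set) -> int:
    """Recording-state quantization: one penalty point per observed
    state that is not licensed under its target live scene category."""
    return -sum(1 for s in states if s not in licensed)

def target_evaluation(original: int, recording_q: int, voice_q: int) -> int:
    """Step 812: combine the preset original evaluation with the
    recording and voice quantization terms, clamped at zero."""
    return max(0, original + recording_q + voice_q)
```

For example, a class preset at 100 points with one abnormal recorded state and a two-point voice deduction would yield a target evaluation of 97 under this model.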
According to the above video recording and broadcasting method for online live broadcast, on the premise that the real-time live scene category is determined to be the target live scene category according to the key description scenario of the first target item, the item live description information of the second target item under the target live scene category is read. The recording condition of the live state of the second target item is determined by reading whether its target live state belongs to the target live scene live state licensed under the target live scene category, and the target live broadcast evaluation information is determined, based on the reading result, from the preset original live broadcast evaluation information, the recording and broadcasting state quantization information, and the voice recording and broadcasting state quantization information. When the live description information is processed, the live state of the second target item can be read in a targeted manner according to the licensed live state of the target live scene category, on the basis of the determined real-time live scene category, which improves the accuracy and reliability of recording. The target live broadcast evaluation information is then determined accurately by jointly analyzing the live state and voice of the second target item and the anchor language information of the first target item.
On the basis of the above, a video recording and broadcasting device for online live broadcast is provided, the device comprising:
the information acquisition module is used for acquiring item live broadcast description information of a second target item under the target live broadcast scene type on the premise that the real-time live broadcast scene type is determined to be the target live broadcast scene type according to the key description scenario of the first target item;
the state obtaining module is used for reading the item live description information to obtain a target live state of the second target item;
and the video recording module is used for recording the live state of the second target item if the target live state belongs to the live state of the target live scene licensed under the target live scene category.
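The three modules above can be sketched as a small pipeline class; all class, method, and field names are illustrative assumptions:

```python
# Hypothetical sketch of the device: an information acquisition module,
# a state obtaining module (the injected reader), and a video recording
# module that keeps only licensed states.

class LiveRecordingDevice:
    def __init__(self, reader, licensed_states):
        self.reader = reader                  # state obtaining module
        self.licensed_states = licensed_states
        self.recordings = []                  # video recording module output

    def acquire(self, scene_category, item):
        """Information acquisition module: return the item live
        description only when the item's scene matches the target
        live scene category."""
        if item.get("scene") != scene_category:
            return None
        return item.get("description")

    def process(self, scene_category, item):
        """Run the three modules in sequence for one item."""
        desc = self.acquire(scene_category, item)
        if desc is None:
            return False
        state = self.reader(desc)             # state obtaining module
        if state in self.licensed_states:     # video recording module
            self.recordings.append((item["id"], state))
            return True
        return False
```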
On the basis of the above, an online live video recording and broadcasting system is provided, comprising a processor and a memory in communication with each other, the processor being configured to read a computer program from the memory and execute it to implement the above method.
On the basis of the above, there is also provided a computer-readable storage medium on which a computer program is stored which, when run, implements the above method.
In summary, based on the above scheme, on the premise that the real-time live scene category is determined to be the target live scene category according to the key description scenario of the first target item, the item live description information of the second target item under the target live scene category is read, and the recording condition of the live state of the second target item is determined by reading whether its target live state belongs to the target live scene live state licensed under the target live scene category. When the live description information is processed, the live state of the second target item is read in a targeted manner according to the licensed live state of the target live scene category, on the basis of the determined category, so that complete video recording can be performed, improving the accuracy and reliability of recording.
It should be appreciated that the systems and modules shown above may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of the two. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or special-purpose hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer-executable instructions and/or processor control code, provided for example on a carrier medium such as a magnetic disk, CD or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system of the present application and its modules may be implemented not only with hardware circuitry such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, but also with software executed by various types of processors, or with a combination of such hardware circuitry and software (e.g., firmware).
It should be noted that different embodiments may yield different advantages; in any given embodiment, the advantage obtained may be any one or a combination of those described above, or any other advantage that may be achieved.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly stated herein, various modifications, improvements, and adaptations of the application may occur to those skilled in the art. Such modifications, improvements, and adaptations are suggested by this disclosure and therefore remain within the spirit and scope of its exemplary embodiments.
Meanwhile, the present application uses specific words to describe its embodiments. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is included in at least one embodiment of the application. Thus, it should be emphasized that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places in this specification are not necessarily referring to the same embodiment. Furthermore, particular features, structures, or characteristics of one or more embodiments of the application may be combined as appropriate.
Furthermore, those skilled in the art will appreciate that the various aspects of the application are illustrated and described in the context of a number of patentable categories or circumstances, including any novel and useful procedures, machines, products, or materials, or any novel and useful modifications thereof. Accordingly, aspects of the application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.) or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," module, "" engine, "" unit, "" component, "or" system. Furthermore, aspects of the application may take the form of a computer product, comprising computer-readable program code, embodied in one or more computer-readable media.
The computer storage medium may contain a propagated data signal with the computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take on a variety of forms, including electro-magnetic, optical, etc., or any suitable combination thereof. A computer storage medium may be any computer readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated through any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or a combination of any of the foregoing.
The computer program code necessary for the operation of portions of the present application may be written in any one or more programming languages, including object-oriented languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic languages such as Python, Ruby, and Groovy; or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or to an external computer (for example, through the Internet), or may use services such as software as a service (SaaS) in a cloud computing environment.
Furthermore, the order in which the elements and sequences are presented, the use of numerical letters, or other designations are used in the application is not intended to limit the sequence of the processes and methods unless specifically recited in the claims. While certain presently useful inventive embodiments have been discussed in the foregoing disclosure, by way of example, it is to be understood that such details are merely illustrative and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements included within the spirit and scope of the embodiments of the application. For example, while the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that, in order to simplify the present disclosure and thereby aid the understanding of one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in the claims. Rather, inventive subject matter may lie in less than all the features of a single disclosed embodiment.
In some embodiments, numbers describing quantities of components or attributes are used; it should be understood that such numbers used in the description of embodiments are, in some examples, modified by the terms "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows for adaptive variation. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties sought by individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a general method of preserving digits. Although the numerical ranges and parameters set forth herein are approximations in some embodiments, in particular embodiments the numerical values are reported as precisely as practicable.
Each patent, patent application publication, and other material, such as articles, books, specifications, publications, and documents, cited herein is hereby incorporated by reference in its entirety, except for any application history documents that are inconsistent with or conflict with this disclosure, and any documents (currently or later appended to this disclosure) that would limit the broadest scope of the claims of this disclosure. If there is any inconsistency or conflict between the description, definition, and/or use of a term in the materials incorporated herein and those set forth in this application, the description, definition, and/or use of the term in this application controls.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present application. Other variations are also possible within the scope of the application. Thus, by way of example, and not limitation, alternative configurations of embodiments of the application may be considered in keeping with the teachings of the application. Accordingly, the embodiments of the present application are not limited to the embodiments explicitly described and depicted herein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (9)

1. A video recording and broadcasting method for online live broadcast, characterized by comprising the following steps:
on the premise that the real-time live broadcast scene type is determined to be the target live broadcast scene type according to the key description scenario of the first target item, item live broadcast description information of a second target item under the target live broadcast scene type is obtained;
reading the item live description information to obtain a target live state of the second target item;
and if the target live state belongs to the target live scene live state licensed under the target live scene category, recording the live state of the second target item.
2. The method of claim 1, wherein, prior to determining that the real-time live scene category is the target live scene category according to the key description scenario of the first target item, the method further comprises:
acquiring utterance data generated by the first target item in the live scene in real time;
performing speech reading on the utterance data, and extracting the key description scenario;
and determining the real-time live scene category according to the key description scenario, and judging whether it is the target live scene category.
3. The method of claim 1, wherein the reading the item live description information to obtain the target live status of the second target item includes:
determining a reading direction associated with the target live scene category;
and reading the item live broadcast description information according to the reading direction to obtain the target live broadcast state of the second target item.
4. The method of claim 3, wherein the target live scene category is a first live scene, and the reading direction of the first live scene comprises a semantic description direction; the reading of the item live description information through the reading direction to obtain the target live state of the second target item comprises:
reading the attribute information in the item live description information through the semantic description direction, so as to obtain the description content of the second target item in the first live scene;
determining a target live state of the second target item according to the description content;
if the target live state belongs to the licensed target live scene live state in the target live scene category, recording the live state of the second target item, including: and if the target live state belongs to a second live state licensed in the first live scene, recording the live state of the second target item.
5. The method of claim 3, wherein the target live scene category is a third live scene, and wherein a reading direction of the third live scene includes a factor description direction; the step of reading the item live description information through the reading direction to obtain a target live state of the second target item, including:
reading the item live description information according to the factor description direction, and reading whether anchor language information exists;
if the anchor language information exists, determining a target live state of the second target item by positioning the anchor language information;
if the target live state belongs to the licensed target live scene live state in the target live scene category, recording the live state of the second target item, including: and if the target live state belongs to the mute live state licensed under the target live scene type, recording the live state of the second target item.
6. The method of claim 3, wherein the target live scene category is a fourth live scene, and wherein the read direction of the fourth live scene includes a live state parsing direction; the step of reading the item live description information through the reading direction to obtain a target live state of the second target item, including:
the attribute information in the item live description information is read through the live state analysis direction, and the voice live state and the behavior live state of the second target item in a fourth live scene are read;
determining a target live state of the second target item according to the voice live state and the behavior live state;
if the target live state belongs to the licensed target live scene live state in the target live scene category, recording the live state of the second target item, including: and if the target live state belongs to the permitted fifth live state in the fourth live scene, recording the live state of the second target item.
7. The method of claim 3, wherein the target live scene category is a lecture live scene, and the reading direction of the lecture live scene comprises a local identification direction; the reading of the item live description information through the reading direction to obtain the target live state of the second target item comprises: reading the local identification information of the second target item in the item live description information through the local identification direction to obtain the target live state of the second target item;
if the target live state belongs to the licensed target live scene live state in the target live scene category, recording the live state of the second target item, including: and if the target live state belongs to the licensed live state of the lectures under the lecture live scene, recording the live state of the second target item.
8. The method according to claim 1, wherein the method further comprises:
obtaining the recording and broadcasting state quantization information of the second target item, and the voice recording and broadcasting state quantization information of the second target item;
and determining target live broadcast evaluation information according to preset original live broadcast evaluation information, the recording and broadcasting state quantization information, and the voice recording and broadcasting state quantization information.
9. An on-line live video recording system comprising a processor and a memory in communication with each other, the processor being adapted to read a computer program from the memory and execute the computer program to implement the method of any of claims 1-8.
CN202311348130.XA 2023-10-18 2023-10-18 Video recording and broadcasting method and system for online live broadcast Active CN117082268B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311348130.XA CN117082268B (en) 2023-10-18 2023-10-18 Video recording and broadcasting method and system for online live broadcast

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311348130.XA CN117082268B (en) 2023-10-18 2023-10-18 Video recording and broadcasting method and system for online live broadcast

Publications (2)

Publication Number Publication Date
CN117082268A true CN117082268A (en) 2023-11-17
CN117082268B CN117082268B (en) 2024-01-30

Family

ID=88713894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311348130.XA Active CN117082268B (en) 2023-10-18 2023-10-18 Video recording and broadcasting method and system for online live broadcast

Country Status (1)

Country Link
CN (1) CN117082268B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007020897A1 (en) * 2005-08-17 2007-02-22 Matsushita Electric Industrial Co., Ltd. Video scene classification device and video scene classification method
CN110035330A (en) * 2019-04-16 2019-07-19 威比网络科技(上海)有限公司 Video generation method, system, equipment and storage medium based on online education
CN110881131A (en) * 2018-09-06 2020-03-13 武汉斗鱼网络科技有限公司 Classification method of live review videos and related device thereof
CN111897978A (en) * 2020-08-04 2020-11-06 广州虎牙科技有限公司 Live broadcast state monitoring method and device, electronic equipment and storage medium
CN112235592A (en) * 2020-10-15 2021-01-15 腾讯科技(北京)有限公司 Live broadcast method, live broadcast processing method, device and computer equipment
CN112383790A (en) * 2020-11-12 2021-02-19 咪咕视讯科技有限公司 Live broadcast screen recording method and device, electronic equipment and storage medium
CN114155121A (en) * 2021-10-21 2022-03-08 广东天智实业有限公司 Artificial intelligent AI auxiliary scoring system for middle test experiment operation
CN114173145A (en) * 2021-12-08 2022-03-11 四川启睿克科技有限公司 HLS protocol-based dynamic code rate low-delay live broadcast method
CN115243066A (en) * 2022-07-19 2022-10-25 广州博冠信息科技有限公司 Information pushing method and device, electronic equipment and computer readable medium
CN116112746A (en) * 2023-04-10 2023-05-12 成都有为财商教育科技有限公司 Online education live video compression method and system
CN116546130A (en) * 2022-01-26 2023-08-04 广州三星通信技术研究有限公司 Multimedia data control method, device, terminal and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Lingtao et al., "Prediction of new television communication modes catalyzed by mobile phones" (手机催化电视传播新模式的预测), Contemporary TV (当代电视), no. 03 *

Also Published As

Publication number Publication date
CN117082268B (en) 2024-01-30

Similar Documents

Publication Publication Date Title
US11853370B2 (en) Scene aware searching
US20190378494A1 (en) Method and apparatus for outputting information
WO2019112858A1 (en) Streaming radio with personalized content integration
US9959872B2 (en) Multimodal speech recognition for real-time video audio-based display indicia application
CN114822512B (en) Audio data processing method and device, electronic equipment and storage medium
US10997965B2 (en) Automated voice processing testing system and method
KR102561712B1 (en) Apparatus for Voice Recognition and operation method thereof
US10535352B2 (en) Automated cognitive recording and organization of speech as structured text
JP6785904B2 (en) Information push method and equipment
CN112418011A (en) Method, device and equipment for identifying integrity of video content and storage medium
CN104980790A (en) Voice subtitle generating method and apparatus, and playing method and apparatus
US20220021942A1 (en) Systems and methods for displaying subjects of a video portion of content
CN106548785A (en) A kind of method of speech processing and device, terminal unit
US11762451B2 (en) Methods and apparatus to add common sense reasoning to artificial intelligence in the context of human machine interfaces
US20170092277A1 (en) Search and Access System for Media Content Files
CN113691909A (en) Digital audio workstation with audio processing recommendations
CN110659387A (en) Method and apparatus for providing video
JP2022530201A (en) Automatic captioning of audible parts of content on computing devices
CN117082268B (en) Video recording and broadcasting method and system for online live broadcast
US11423920B2 (en) Methods and systems for suppressing vocal tracks
CN116112746A (en) Online education live video compression method and system
US11523186B2 (en) Automated audio mapping using an artificial neural network
US10832692B1 (en) Machine learning system for matching groups of related media files
CN112380871A (en) Semantic recognition method, apparatus, and medium
US20200204856A1 (en) Systems and methods for displaying subjects of an audio portion of content

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant