CN112004100A - Driving method for integrating multiple audio and video sources into a single audio and video source

Driving method for integrating multiple audio and video sources into a single audio and video source

Info

Publication number
CN112004100A
Authority
CN
China
Prior art keywords: audio, video, source, sources, video source
Prior art date
2020-08-31
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010895614.6A
Other languages
Chinese (zh)
Other versions
CN112004100B (en)
Inventor
傅曦 (Fu Xi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI JINGDA TECHNOLOGY CO LTD
Original Assignee
SHANGHAI JINGDA TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2020-08-31
Filing date
2020-08-31
Publication date
2020-11-27
Application filed by SHANGHAI JINGDA TECHNOLOGY CO LTD
Priority to CN202010895614.6A
Publication of CN112004100A
Application granted
Publication of CN112004100B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/21805 Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439 Processing of audio elementary streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Abstract

The invention discloses a driving method for integrating multiple audio and video sources into a single audio and video source, comprising the following steps. Step S1: add the required video sources, preprocess each video source independently, and splice and overlay the preprocessed video sources to obtain a single video source. Step S2: feed the single video source to a virtual camera and output it through the virtual camera. Step S3: the application layer of the motherboard's operating system synthesizes a first audio source through an application program to form a first synthesized audio source, which is sent to an external sound card through an analog audio port. The disclosed driving method supports a variety of audio and video software and its use, including mobile phone live-streaming software, supports high-definition camera access over USB, HDMI, and the like, and accepts and processes multiple video sources while outputting a single video source.

Description

Driving method for integrating multiple audio and video sources into a single audio and video source
Technical Field
The invention belongs to the technical field of audio and video source driving, and in particular relates to a driving method for integrating multiple audio and video sources into a single audio and video source.
Background
With the development of internet video technology, people have grown accustomed to enriching their leisure time by connecting electronic devices with video playback capability, such as televisions, personal computers, and smart terminals, to the internet to watch all kinds of videos. In the prior art, when such a device is connected to the internet and a user watches a live video, the video server sends the live video stream to the client for playback on request. However, the user can then only watch the live program from the shooting angle provided by the current video stream and cannot simultaneously view the scene from other angles. To meet viewing demands, composite videos containing multiple video channels are therefore produced; in actual playback, for example of a sports event, multiple channels of video data are displayed on the same screen interface.
At present, in an application scenario shot by several cameras, multiple video sources are generated: the video stream captured by each camera is one video source. To play multiple video sources, several playback clients are usually started on the playback device at the same time, each client playing one video source in its own display window. In this way, the playback device can display images from multiple video sources simultaneously. In this existing scheme, however, multiple player dialog boxes pop up to play the videos, producing multiple tab pages that are inconvenient to manage, and adjusting the layering of the windows is likewise inconvenient.
Disclosure of Invention
The main object of the present invention is to provide a driving method for integrating multiple audio and video sources into a single audio and video source. The method supports a variety of audio and video software and its use, including mobile phone live-streaming software (such as Taobao Live, Douyin, and Kuaishou); supports access from USB, HDMI, and other high-definition cameras; accepts and processes multiple video sources and outputs a single video source through a virtual camera; and accepts and processes multiple audio sources while outputting a single audio source, with support for an external sound card.
Another object of the present invention is to provide a driving method for integrating multiple audio and video sources into a single audio and video source that offers a stable signal, convenient operation, and wide applicability.
In order to achieve the above object, the present invention provides a driving method for integrating multiple audio and video sources into a single audio and video source, comprising the following steps (an illustrative sketch of the virtual-camera output of step S2 follows this list):
step S1: add the required video sources (including HDMI IN camera data, USB camera data, local MP4 files, network push/pull streaming video sources, and the like; the user can preprocess each video source according to on-site requirements), preprocess each video source independently, and splice and overlay the preprocessed video sources to obtain a single video source;
step S2: output the resulting single video source as the video content captured by the virtual camera (displayed on a display screen);
step S3: the application layer of the motherboard's operating system synthesizes a first audio source (including virtual-audio-driver sound, sound emitted by the application program, and audio brought in by the application program's network pull stream) through a (third-party) application program (such as Douyin) to form a first synthesized audio source, and sends it to the external sound card through the analog audio port (the external sound card applies no preprocessing to the first synthesized audio source; it is sent directly to the playback contacts of the four-pole headset port as in-ear monitoring sound, heard directly by a user such as the anchor);
step S4: send a second audio source (other audio streams, including background music from the Line-in input and the voice of a user such as the anchor captured by the microphone) to the external sound card, and the external sound card preprocesses the second audio source;
step S5: the external sound card mixes the preprocessed second audio sources to form a second synthesized audio source;
step S6: the external sound card outputs the second synthesized audio source back to the motherboard through the analog audio port;
step S7: the driver layer of the motherboard mixes the audio carried by each video source, the first audio source, and the second audio source into a single audio source (including the audio carried by each video source, the audio returned by the external sound card through the analog port, the audio of local video files, and the audio of network push/pull streaming video sources), outputs it as the audio of the virtual audio device, and repeats step S3.
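For illustration only, and not as the patent's implementation, the following minimal Python sketch shows the idea of step S2: pushing a composited frame to a virtual camera. It assumes the third-party pyvirtualcam package, and compose_frame() is a hypothetical stand-in for the splicing and overlaying of step S1.

```python
import numpy as np
import pyvirtualcam  # assumed third-party virtual-camera package

def compose_frame(sources) -> np.ndarray:
    """Hypothetical stand-in for step S1: splice and overlay the
    preprocessed video sources into one 1280x720 RGB frame."""
    frame = np.zeros((720, 1280, 3), dtype=np.uint8)
    # ...draw each source (HDMI IN, USB camera, MP4, network stream)
    # into its region of the frame...
    return frame

sources = []  # handles to the added video sources
with pyvirtualcam.Camera(width=1280, height=720, fps=30) as cam:
    while True:
        cam.send(compose_frame(sources))  # the single video source of step S2
        cam.sleep_until_next_frame()      # pace output to the camera's fps
```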
As a further preferred refinement of the above solution, the preprocessing of each video source in step S1 is implemented as the following steps:
step S1.1: adjust the volume of the video source's sound;
step S1.2: edit the video source.
As a further preferred refinement, step S1.1 comprises the following steps:
step S1.1.1: if the sound of the video source is muted, the sound of the current video source need not be sent to the motherboard for synthesis;
step S1.1.2: if the volume of the video source's sound is adjusted (but not muted), the sound of the current video source is sent to the motherboard for synthesis.
As a further preferred refinement, step S1.2 comprises the following steps:
step S1.2.1: determine at least one start position and at least one end position in the current video source to extract a clipped video source between each start position and the adjacent end position (several clips may be extracted from one video source; for example, 3 required clips are extracted from one source for later playback in turn, and the clips are spliced and overlaid with other processed video sources);
step S1.2.2: edit and splice the different clips extracted from different video sources (in the required splicing mode);
step S1.2.3: determine a frame position in the current video source to extract the video picture of that frame (a single frame of a given video source may be needed);
step S1.2.4: edit and splice the video pictures extracted from different video sources;
step S1.2.5: splice the video sources and the video pictures together.
As a further preferred refinement, the splicing and overlaying of the video sources in step S1 is implemented as the following steps:
step S1.3: assign a shot number to each video source and order the shot numbers (if video sources are added later, the corresponding shots can be appended);
step S1.4: splice and overlay the ordered shots to form a picture-in-picture video source;
step S1.5: switch between shortcut modes as required (switching can be done with a remote control, or a touch display screen can be provided and the switching done on it).
As a further preferred refinement, step S1.4 is implemented as the following steps:
step S1.4.1: take one of the ordered shots, output the video source corresponding to the current shot as a single full picture, record the shortcut mode of this single-source output, and set it as the first shortcut mode;
step S1.4.2: take two of the ordered shots, output the video sources corresponding to the current two shots as a two-way split picture (picture-in-picture or side by side, with the picture size of each video source adjusted automatically), record the shortcut mode of this two-way output, and set it as the second shortcut mode;
step S1.4.3: take three of the ordered shots, output the video sources corresponding to the current three shots as a three-way split picture (picture-in-picture or side by side), record the shortcut mode of this three-way output, and set it as the third shortcut mode. In principle any number of ordered shots (not exceeding the total) can be taken: the invention protects not only outputs of 1, 2, or 3 shots, but also allows the user to output as many shots as required.
As a further preferred refinement, the preprocessing of each audio source of the second audio source in step S4 comprises the following steps:
step S4.1: adjust the volume of the audio source;
step S4.2: clip the audio source.
As a further preferred refinement, step S4.2 comprises the following steps:
step S4.2.1: determine at least one start position and at least one end position in the current audio source to extract a clipped audio source between each start position and the adjacent end position (several clips may be extracted from one audio source; for example, 3 required clips are extracted from one source for later looped playback, and the clips are combined with other processed audio sources);
step S4.2.2: clip and synthesize the different clips extracted from different audio sources (in the required splicing mode).
Preferably, sound effects and ambience can be added to the single audio source according to real-time requirements, controlled by the remote control or the touch display screen.
In order to achieve the above object, the present invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the program, the processor implements the driving method for integrating multiple audio and video sources into a single audio and video source.
To achieve the above object, the present invention provides a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the driving method for integrating multiple audio and video sources into a single audio and video source.
Detailed Description
The following description is presented to disclose the invention so as to enable any person skilled in the art to practice the invention. The preferred embodiments in the following description are given by way of example only, and other obvious variations will occur to those skilled in the art. The basic principles of the invention, as defined in the following description, may be applied to other embodiments, variations, modifications, equivalents, and other technical solutions without departing from the spirit and scope of the invention.
In the preferred embodiment of the present invention, those skilled in the art should note that the electronic device, the non-transitory computer-readable storage medium, the touch display screen, the remote control, and the like involved in the present invention may be regarded as prior art.
Preferred embodiments.
The invention discloses a driving method for integrating multiple audio and video sources into a single audio and video source, comprising the following steps (an illustrative sketch of the driver-layer mix of step S7 follows this list):
step S1: add the required video sources (including HDMI IN camera data, USB camera data, local MP4 files, network push/pull streaming video sources, and the like; each video source can be preprocessed according to the user's on-site requirements), preprocess each video source independently, and splice and overlay the preprocessed video sources to obtain a single video source;
step S2: output the resulting single video source as the video content captured by the virtual camera (displayed on a display screen);
step S3: the application layer of the motherboard's operating system synthesizes a first audio source (virtual-audio-driver sound, sound emitted by the application program, audio brought in by the application program's network pull stream, and the like) through a (third-party) application program (such as Douyin) to form a first synthesized audio source, which is sent to the external sound card through the analog audio port (the external sound card applies no preprocessing to the first synthesized audio source; it is sent directly to the playback contacts of the four-pole headset port as in-ear monitoring sound, heard directly by a user such as the anchor);
step S4: send a second audio source (other audio streams, including background music from the Line-in input and the voice of a user such as the anchor captured by the microphone) to the external sound card, and the external sound card preprocesses the second audio source;
step S5: the external sound card mixes the preprocessed second audio sources to form a second synthesized audio source;
step S6: the external sound card outputs the second synthesized audio source back to the motherboard through the analog audio port;
step S7: the driver layer of the motherboard mixes the audio carried by each video source, the first audio source, and the second audio source into a single audio source (including the audio carried by each video source, the audio returned by the external sound card through the analog port, the audio of local video files, and the audio of network push/pull streaming video sources), outputs it as the audio of the virtual audio device, and repeats step S3.
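For illustration only, the single-channel mix of step S7 can be sketched as a sum of PCM buffers with clipping; the mix_to_single_channel() name and the float32 sample format are assumptions of the example, not the patent's driver code.

```python
import numpy as np

def mix_to_single_channel(buffers: list) -> np.ndarray:
    """Sum equal-length float32 PCM buffers (-1.0..1.0) into the one
    buffer that step S7 exposes as the audio of the virtual audio device."""
    mixed = np.sum(buffers, axis=0)
    return np.clip(mixed, -1.0, 1.0)  # guard against overflow after summing

# Inputs named in step S7: audio carried by each video source, the first
# synthesized audio source (step S3), and the second synthesized audio
# source returned by the external sound card (step S6).
per_video = [np.zeros(48000, dtype=np.float32)]
first_synth = np.zeros(48000, dtype=np.float32)
second_synth = np.zeros(48000, dtype=np.float32)
single_audio = mix_to_single_channel(per_video + [first_synth, second_synth])
```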
Specifically, the preprocessing of each video source in step S1 is implemented as the following steps:
step S1.1: adjust the volume of the video source's sound;
step S1.2: edit the video source.
More specifically, step S1.1 comprises the following steps (an illustrative sketch follows this list):
step S1.1.1: if the sound of the video source is muted, the sound of the current video source need not be sent to the motherboard for synthesis;
step S1.1.2: if the volume of the video source's sound is adjusted (but not muted), the sound of the current video source is sent to the motherboard for synthesis.
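A minimal sketch of this volume gate, assuming a hypothetical VideoSource record with a volume field (0.0 meaning muted); only sources whose sound is not muted are forwarded to the motherboard for synthesis:

```python
from dataclasses import dataclass, field

@dataclass
class VideoSource:           # hypothetical record for the example
    name: str
    volume: float = 1.0      # 0.0 = muted, 1.0 = full volume
    samples: list = field(default_factory=list)  # the source's sound

def sounds_for_synthesis(sources):
    """Steps S1.1.1/S1.1.2: skip muted sources, scale the rest."""
    for src in sources:
        if src.volume == 0.0:
            continue  # S1.1.1: muted, not sent to the motherboard
        # S1.1.2: volume-adjusted sound is sent on for synthesis
        yield [s * src.volume for s in src.samples]
```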
Further, step S1.2 comprises the following steps (a clip-extraction sketch follows this list):
step S1.2.1: determine at least one start position and at least one end position in the current video source to extract a clipped video source between each start position and the adjacent end position (several clips may be extracted from one video source; for example, 3 required clips are extracted from one source for later playback in turn, and the clips are spliced and overlaid with other processed video sources);
step S1.2.2: edit and splice the different clips extracted from different video sources (in the required splicing mode);
step S1.2.3: determine a frame position in the current video source to extract the video picture of that frame (a single frame of a given video source may be needed);
step S1.2.4: edit and splice the video pictures extracted from different video sources;
step S1.2.5: splice the video sources and the video pictures together.
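The clipping and splicing of steps S1.2.1 and S1.2.2 can be sketched as frame-range extraction and concatenation; extract_clips(), splice(), and the frame-list representation are assumptions of the example:

```python
def extract_clips(frames, ranges):
    """Step S1.2.1: pull the clip between each (start, end) position."""
    return [frames[start:end] for start, end in ranges]

def splice(clips):
    """Step S1.2.2: splice clips from different sources end to end."""
    spliced = []
    for clip in clips:
        spliced.extend(clip)
    return spliced

# e.g. three required clips from one source for later playback in turn,
# then spliced with clips extracted from other processed sources:
source_a = list(range(1000))  # stand-in for a decoded frame sequence
clips = extract_clips(source_a, [(0, 100), (250, 400), (800, 900)])
timeline = splice(clips)
```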
Further, the splicing and overlaying of the video sources in step S1 is implemented as the following steps:
step S1.3: assign a shot number to each video source and order the shot numbers (if video sources are added later, the corresponding shots can be appended);
step S1.4: splice and overlay the ordered shots to form a picture-in-picture video source;
step S1.5: switch between shortcut modes as required (switching can be done with a remote control, or a touch display screen can be provided and the switching done on it).
Preferably, step S1.4 is implemented as the following steps (a layout sketch follows this list):
step S1.4.1: take one of the ordered shots, output the video source corresponding to the current shot as a single full picture, record the shortcut mode of this single-source output, and set it as the first shortcut mode;
step S1.4.2: take two of the ordered shots, output the video sources corresponding to the current two shots as a two-way split picture (picture-in-picture or side by side, with the picture size of each video source adjusted automatically), record the shortcut mode of this two-way output, and set it as the second shortcut mode;
step S1.4.3: take three of the ordered shots, output the video sources corresponding to the current three shots as a three-way split picture (picture-in-picture or side by side), record the shortcut mode of this three-way output, and set it as the third shortcut mode. In principle any number of ordered shots (not exceeding the total) can be taken: the invention protects not only outputs of 1, 2, or 3 shots, but also allows the user to output as many shots as required.
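A sketch of the three recorded shortcut modes of steps S1.4.1 to S1.4.3, assuming normalized (x, y, width, height) regions on the output picture; the preset table and layout_for() helper are illustrative only:

```python
# Normalized (x, y, width, height) regions for 1-, 2- and 3-shot output;
# each entry corresponds to one recorded shortcut mode (S1.4.1-S1.4.3).
SHORTCUT_LAYOUTS = {
    1: [(0.0, 0.0, 1.0, 1.0)],                        # first mode: full picture
    2: [(0.0, 0.0, 0.5, 1.0), (0.5, 0.0, 0.5, 1.0)],  # second: side by side
    3: [(0.0, 0.0, 1.0, 1.0),                         # third: main picture
        (0.70, 0.05, 0.25, 0.25),                     # with two picture-in-
        (0.70, 0.35, 0.25, 0.25)],                    # picture insets
}

def layout_for(shots):
    """Pair the ordered shots with the regions of the matching mode."""
    return list(zip(shots, SHORTCUT_LAYOUTS[len(shots)]))

# Switching shortcut modes (step S1.5) then amounts to selecting another
# key, e.g. from a remote-control or touch-screen event handler.
regions = layout_for(["shot1", "shot2", "shot3"])
```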
Preferably, the preprocessing of each audio source of the second audio source in step S4 comprises the following steps:
step S4.1: adjust the volume of the audio source;
step S4.2: clip the audio source.
Preferably, step S4.2 comprises the following steps (a sketch follows this list):
step S4.2.1: determine at least one start position and at least one end position in the current audio source to extract a clipped audio source between each start position and the adjacent end position (several clips may be extracted from one audio source; for example, 3 required clips are extracted from one source for later looped playback, and the clips are combined with other processed audio sources);
step S4.2.2: clip and synthesize the different clips extracted from different audio sources (in the required splicing mode).
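Analogously to the video clipping, steps S4.2.1 and S4.2.2 can be sketched as sample-range extraction followed by concatenation; the helper names and the 48 kHz float32 format are assumptions of the example:

```python
import numpy as np

def extract_audio_clips(pcm: np.ndarray, ranges):
    """Step S4.2.1: the clip between each (start, end) sample position."""
    return [pcm[start:end] for start, end in ranges]

def clip_and_synthesize(clips):
    """Step S4.2.2: combine clips extracted from different sources."""
    return np.concatenate(clips)

# e.g. clips from one minute of 48 kHz background music, extracted for
# later looped playback and combined with other processed audio sources:
music = np.zeros(48000 * 60, dtype=np.float32)
clips = extract_audio_clips(music, [(0, 48000), (96000, 144000)])
bed = clip_and_synthesize(clips)
```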
Preferably, sound effects and ambience can be added to the single audio source according to real-time requirements, controlled by the remote control or the touch display screen.
The invention also discloses an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the program, the processor implements the steps of the driving method for integrating multiple audio and video sources into a single audio and video source.
The invention also discloses a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the driving method for integrating multiple audio and video sources into a single audio and video source.
It should be noted that the technical features involved in this application, such as the electronic device, the non-transitory computer-readable storage medium, the touch display screen, and the remote control, should be regarded as prior art; their specific structure, operating principle, control mode, and spatial arrangement may be conventional choices in the art, are not the inventive point of this patent, and are not described further in detail here.
It will be apparent to those skilled in the art that modifications and equivalents may be made in the embodiments and/or portions thereof without departing from the spirit and scope of the present invention.

Claims (10)

1. A driving method for integrating multiple audio and video sources into a single audio and video source, characterized by comprising the following steps:
step S1: adding the required video sources, preprocessing each video source independently, and splicing and overlaying the preprocessed video sources to obtain a single video source;
step S2: outputting the resulting single video source as the video content captured by a virtual camera;
step S3: the application layer of the operating system of the motherboard synthesizing a first audio source through an application program to form a first synthesized audio source, and sending the first synthesized audio source to an external sound card through an analog audio port;
step S4: sending a second audio source to the external sound card, the external sound card preprocessing the second audio source;
step S5: the external sound card mixing the preprocessed second audio sources to form a second synthesized audio source;
step S6: the external sound card outputting the second synthesized audio source back to the motherboard through the analog audio port;
step S7: the driver layer of the motherboard mixing the audio carried by each video source, the first audio source, and the second audio source into a single audio source, outputting the single audio source as the audio of a virtual audio device, and repeating step S3.
2. The driving method for integrating multiple audio and video sources into a single audio and video source according to claim 1, wherein the preprocessing of each video source in step S1 is implemented as the following steps:
step S1.1: adjusting the volume of the video source's sound;
step S1.2: editing the video source.
3. The driving method for integrating multiple audio and video sources into a single audio and video source according to claim 2, wherein step S1.1 is implemented to include the following steps:
step S1.1.1: if the sound of the video source is muted, the sound of the current video source need not be sent to the motherboard for synthesis;
step S1.1.2: if the volume of the video source's sound is adjusted, the sound of the current video source is sent to the motherboard for synthesis.
4. The driving method for integrating multiple audio and video sources into a single audio and video source according to claim 3, wherein step S1.2 is implemented to include the following steps:
step S1.2.1: determining at least one start position and at least one end position in a current video source to extract a clipped video source between each start position and the adjacent end position;
step S1.2.2: editing and splicing the different clips extracted from different video sources;
step S1.2.3: determining a frame position in the current video source to extract the video picture of the current frame;
step S1.2.4: editing and splicing the video pictures extracted from different video sources;
step S1.2.5: splicing the video sources and the video pictures.
5. The driving method for integrating multiple audio and video sources into a single audio and video source according to claim 1 or 4, wherein the splicing and overlaying of each video source in step S1 is implemented as the following steps:
step S1.3: assigning a shot number to each video source and ordering the shot numbers corresponding to the video sources;
step S1.4: splicing and overlaying the ordered shots to form a picture-in-picture video source;
step S1.5: switching between shortcut modes as required.
6. The driving method for integrating multiple audio and video sources into a single audio and video source according to claim 5, wherein step S1.4 is implemented as the following steps:
step S1.4.1: taking one of the ordered shots, outputting the video source corresponding to the current shot as a single full picture, recording the shortcut mode of the current single-source output, and setting it as a first shortcut mode;
step S1.4.2: taking two of the ordered shots, outputting the video sources corresponding to the current two shots as a two-way split picture, recording the shortcut mode of the current two-way output, and setting it as a second shortcut mode;
step S1.4.3: taking three of the ordered shots and outputting the video sources corresponding to the current three shots as a three-way split picture.
7. The driving method for integrating multiple audio and video sources into a single audio and video source according to claim 1, wherein the preprocessing of the second audio source in step S4 is implemented to include the following steps:
step S4.1: adjusting the volume of the audio source;
step S4.2: clipping the audio source.
8. The driving method for integrating multiple audio and video sources into a single audio and video source according to claim 7, wherein step S4.2 is implemented to include the following steps:
step S4.2.1: determining at least one start position and at least one end position in a current audio source to extract a clipped audio source between each start position and the adjacent end position;
step S4.2.2: clipping and synthesizing the different clips extracted from different audio sources.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the driving method for integrating multiple audio and video sources into a single audio and video source according to any one of claims 1 to 8.
10. A non-transitory computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the driving method for integrating multiple audio and video sources into a single audio and video source according to any one of claims 1 to 8.
CN202010895614.6A 2020-08-31 2020-08-31 Driving method for integrating multiple audio and video sources into single audio and video source Active CN112004100B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010895614.6A CN112004100B (en) 2020-08-31 2020-08-31 Driving method for integrating multiple audio and video sources into single audio and video source

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010895614.6A CN112004100B (en) 2020-08-31 2020-08-31 Driving method for integrating multiple audio and video sources into single audio and video source

Publications (2)

Publication Number Publication Date
CN112004100A 2020-11-27
CN112004100B CN112004100B (en) 2022-02-11

Family

ID=73465700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010895614.6A Active CN112004100B (en) 2020-08-31 2020-08-31 Driving method for integrating multiple audio and video sources into single audio and video source

Country Status (1)

Country Link
CN (1) CN112004100B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104469393A (en) * 2014-11-25 2015-03-25 百度在线网络技术(北京)有限公司 Method and system for obtaining audio frequency of cloud simulator
CN107027046A (en) * 2017-04-13 2017-08-08 广州华多网络科技有限公司 Auxiliary live audio/video processing method and device
CN107027050A (en) * 2017-04-13 2017-08-08 广州华多网络科技有限公司 Auxiliary live audio/video processing method and device
CN106921866A (en) * 2017-05-03 2017-07-04 广州华多网络科技有限公司 The live many video guide's methods and apparatus of auxiliary
CN109714603A (en) * 2017-10-25 2019-05-03 北京展视互动科技有限公司 The method and device of multichannel audio-video frequency live streaming
CN107948756A (en) * 2017-11-22 2018-04-20 广州华多网络科技有限公司 Video Composition control method, device and corresponding terminal
CN108259989A (en) * 2018-01-19 2018-07-06 广州华多网络科技有限公司 Method, computer readable storage medium and the terminal device of net cast
US20200169767A1 (en) * 2018-11-27 2020-05-28 Peter Hackes Systems, Methods And Computer Program Products For Delivering Audio And Video Data Through Multiplexed Data Streams

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086548A (en) * 2022-04-13 2022-09-20 中国人民解放军火箭军工程大学 Double-spectrum virtual camera synthesis method and device

Also Published As

Publication number Publication date
CN112004100B (en) 2022-02-11

Similar Documents

Publication Publication Date Title
US9100706B2 (en) Method and system for customising live media content
US20170171274A1 (en) Method and electronic device for synchronously playing multiple-cameras video
US20090055742A1 (en) Media data presented with time-based metadata
US20090202223A1 (en) Information processing device and method, recording medium, and program
KR100889367B1 (en) System and Method for Realizing Vertual Studio via Network
KR20030016607A (en) Portable terminal equipment having image capture function and implementation method thereof
WO2020062685A1 (en) Video processing method and apparatus, terminal and medium
WO2005013618A1 (en) Live streaming broadcast method, live streaming broadcast device, live streaming broadcast system, program, recording medium, broadcast method, and broadcast device
CN113141524B (en) Resource transmission method, device, terminal and storage medium
CN113115110B (en) Video synthesis method and device, storage medium and electronic equipment
WO2022007722A1 (en) Display method and apparatus, and device and storage medium
CN109547724B (en) Video stream data processing method, electronic equipment and storage device
US20220264053A1 (en) Video processing method and device, terminal, and storage medium
WO2023035882A1 (en) Video processing method, and device, storage medium and program product
KR20180038256A (en) Method, and system for compensating delay of virtural reality stream
CN114095671A (en) Cloud conference live broadcast system, method, device, equipment and medium
US20020188772A1 (en) Media production methods and systems
CN112004100B (en) Driving method for integrating multiple audio and video sources into single audio and video source
CN112153472A (en) Method and device for generating special picture effect, storage medium and electronic equipment
CN109862385B (en) Live broadcast method and device, computer readable storage medium and terminal equipment
US20220007078A1 (en) An apparatus and associated methods for presentation of comments
JP6473262B1 (en) Distribution server, distribution program, and terminal
US10764655B2 (en) Main and immersive video coordination system and method
CN115225915A (en) Live broadcast recording device, live broadcast recording system and live broadcast recording method
CN114697724A (en) Media playing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant