CN115643442A - Audio and video converging recording and playing method, device, equipment and storage medium - Google Patents
Audio and video converging recording and playing method, device, equipment and storage medium
- Publication number: CN115643442A
- Application number: CN202211322012.7A
- Authority: CN (China)
- Prior art keywords: stream, video, audio, target file
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses an audio and video converged recording and playing method, device, equipment and storage medium. The method comprises the following steps: acquiring the video stream of each camera and the audio stream of each microphone; determining data of a target file according to the video streams and audio streams, and generating header information of the target file; writing each video stream and each audio stream into the target file according to the header information of the target file, so that each audio stream corresponds one-to-one to a video stream, thereby completing the recording of the target file; obtaining the audio streams and video streams contained in the recorded target file by parsing its header information; and, in response to the user determining the video stream to be played in each window, reading the audio stream that is synchronized with each video stream, thereby playing the audio and video.
Description
Technical Field
The invention relates to the technical field of audio and video recording and playing, in particular to an audio and video converging recording and playing method, device, equipment and storage medium.
Background
At present, the merged recording of camera and microphone audio and video streams by recording-and-broadcast hosts on the market is performed by stitching all input video streams into a single video stream according to width and height, mixing all input audio streams into a single audio stream, and finally synthesizing the stitched video stream and audio stream into one media file.
However, the prior art has obvious defects, mainly the following. (1) Audio and video encoding and decoding are required during synthesis, which incurs high software overhead and places high demands on the hardware performance of the recording-and-broadcast host. (2) The synthesized media file contains only one video stream and one audio stream, so during playback only the composite picture containing all cameras can be shown; the user cannot choose between playing all video pictures or the picture of a single camera, which is not flexible.
Disclosure of Invention
The invention provides an audio and video converged recording and playing method, device and equipment, aiming to solve the technical problems in the prior art that the audio and video synthesis steps are complex and the desired camera pictures cannot be played in a user-defined way.
In order to solve the technical problem, an embodiment of the present invention provides an audio and video converging recording and playing method, including:
acquiring video streams of all cameras and audio streams of all microphones;
determining data of a target file according to the video stream and the audio stream, and generating header information of the target file;
writing each video stream and each audio stream into the target file according to the header information of the target file, so that each audio stream corresponds to each video stream one by one, and recording of the target file is completed;
acquiring an audio stream and a video stream contained in the recorded target file by analyzing the header information of the recorded target file;
and, in response to the user determining the video stream to be played in each window, reading the audio stream that is synchronized with each video stream, thereby playing the audio and video.
Compared with the prior art, the embodiments of the invention acquire the video stream of each camera and the audio stream of each microphone, determine the data and the header information of the target file, and write each video stream and each audio stream into the target file according to that header information so that each audio stream corresponds one-to-one to a video stream, thereby completing the recording of the target file. No audio or video encoding or decoding is needed during synthesis, which simplifies the recording steps. The header information of the target file also avoids the prior-art limitation of having only one video stream and one audio stream in the synthesized file: by responding to the video streams the user chooses to play and reading the audio stream that is synchronized with each video stream, playback can present the pictures of all cameras or the picture of a single selected camera, which improves the user experience.
Preferably, the data of the target file includes: the playing time of the target file, the number of the contained audio streams and video streams, the stream index of each video stream in the target file, and the stream index of each audio stream in the target file.
As a preferred scheme, the generating of the header information of the target file specifically includes:
respectively writing the stream index of each video stream and each audio stream into a header track (trak) of the target file, thereby generating the header information of the target file.
It can be understood that writing the stream index of each video stream and each audio stream into the header track of the target file allows each video stream and audio stream to be accurately distinguished and located, so that they can later be played back simply by looking them up in the header track of the target file. Because the stream indexes corresponding to the video streams and audio streams are recorded in the header information, the actual audio and video data in the target file can be retrieved without cumbersome or inaccurate look-ups.
As a preferred scheme, the writing of each video stream and each audio stream into the target file according to the header information of the target file, so that each audio stream corresponds one-to-one to a video stream and the recording of the target file is thereby completed, is specifically:
according to the stream indexes of the video streams and the audio streams in the header information of the target file, registering each audio stream and each video stream to enable each audio stream to be in one-to-one correspondence with each video stream;
and directly writing the video streams and the audio streams which are registered into the media data of the target file, thereby completing the recording of the target file.
It can be understood that each audio and video data is registered by indexing each video stream and each audio stream in the header information of the target file, so that each audio stream corresponds to each video stream one to one, the accuracy of recording and playing the audio and video files is improved, the corresponding video streams and audio stream data are directly written into the media data of the target file after the stream indexes are registered, and the recording of the target file can be accurately and efficiently realized.
As a preferred scheme, the obtaining of the audio stream and the video stream contained in the recorded target file by analyzing the header information of the recorded target file specifically includes:
obtaining the stream index of each video stream and each audio stream according to the header track in the header information of the recorded target file, and obtaining the video streams and audio streams corresponding to the recorded target file according to the media data of the target file.
It can be understood that the stream index of each audio/video stream is obtained through the header track in the header information of the recorded target file, and then the video stream and the audio stream written in the media data in the recorded target file can be quickly and accurately positioned.
As a preferred scheme, the responding to the user's determination of the video stream to be played in each window and the reading of the audio stream synchronized with each video stream, thereby playing the audio and video, is specifically:
in response to the target audio and video selected by the user to be played in a window, determining the stream index corresponding to the target audio and video, and selecting the track corresponding to that stream index;
according to the selected track corresponding to the stream index, determining and reading the corresponding video stream in the media data and the audio stream synchronized with it, and then creating video playing windows matching the number of selected stream indexes to play a single video or multiple videos.
It can be understood that, by responding to the target audio and video to be played in the window selected by the user, the corresponding stream index is determined and the track corresponding to that stream index is selected, so that the corresponding video stream in the media data and its synchronized audio stream can subsequently be read from that track and played accurately, with video playing windows created to match the number of selected stream indexes for playing a single video or multiple videos.
Correspondingly, the invention also provides an audio and video interflow recording and playing device, which comprises: the device comprises an acquisition module, a header information module, a recording module, an analysis module and a playing module;
the acquisition module is used for acquiring the video stream of each camera and the audio stream of each microphone;
the header information module is used for determining data of the target file according to the video stream and the audio stream and generating header information of the target file;
the recording module is used for writing each video stream and each audio stream into the target file according to the header information of the target file, so that each audio stream corresponds to each video stream one by one, and the recording of the target file is completed;
the analysis module is used for analyzing the header information of the recorded target file to obtain an audio stream and a video stream contained in the recorded target file;
and the playing module is used for responding to the video stream which is determined by the user to be played in each window, and respectively reading the corresponding audio stream when the audio stream and the video stream are synchronous, so as to realize the playing of the audio and the video.
Preferably, the data of the target file includes: the playing time of the target file, the number of the contained audio streams and video streams, the stream index of each video stream in the target file, and the stream index of each audio stream in the target file.
As a preferred scheme, the generating of the header information of the target file specifically includes:
respectively writing the stream index of each video stream and each audio stream into a header track (trak) of the target file, thereby generating the header information of the target file.
As a preferred scheme, the writing of each video stream and each audio stream into the target file according to the header information of the target file, so that each audio stream corresponds one-to-one to a video stream and the recording of the target file is thereby completed, is specifically:
according to the stream indexes of the video streams and the audio streams in the header information of the target file, registering each audio stream and each video stream to enable each audio stream to be in one-to-one correspondence with each video stream;
and directly writing the video streams and the audio streams which are subjected to the registration into the media data of the target file, thereby completing the recording of the target file.
As a preferred scheme, the obtaining of the audio stream and the video stream contained in the recorded target file by analyzing the header information of the recorded target file specifically includes:
obtaining the stream index of each video stream and each audio stream according to the header track in the header information of the recorded target file, and obtaining the video streams and audio streams corresponding to the recorded target file according to the media data of the target file.
As a preferred scheme, the responding to the user's determination of the video stream to be played in each window and the reading of the audio stream synchronized with each video stream, thereby playing the audio and video, is specifically:
in response to the target audio and video selected by the user to be played in a window, determining the stream index corresponding to the target audio and video, and selecting the track corresponding to that stream index;
according to the selected track corresponding to the stream index, determining and reading the corresponding video stream in the media data and the audio stream synchronized with it, and then creating video playing windows matching the number of selected stream indexes to play a single video or multiple videos.
Correspondingly, the invention further provides a terminal device, which comprises a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the processor realizes the audio and video confluence recording and playing method when executing the computer program.
Accordingly, the present invention also provides a computer readable storage medium comprising a stored computer program; when the computer program runs, the device where the computer readable storage medium is located is controlled to execute the audio and video confluence recording and playing method.
Drawings
FIG. 1: a flowchart of the steps of the audio and video converged recording and playing method provided by an embodiment of the invention;
FIG. 2: a schematic structural diagram of the mp4 file format provided by an embodiment of the invention;
FIG. 3: a schematic diagram of the player playing audio and video provided by an embodiment of the invention;
FIG. 4: a schematic structural diagram of the audio and video converged recording and playing device provided by an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
Example one
Referring to fig. 1, the audio and video converged recording and playing method according to the embodiment of the present invention includes the following steps S101 to S105:
step S101: and acquiring the video stream of each camera and the audio stream of each microphone.
It should be noted that, in this embodiment, the video stream is acquired through each camera, and the audio stream is acquired through each microphone; it will be appreciated that the video stream and the audio stream acquired for a certain moment or time period are synchronized, i.e. the pictures in the video stream correspond to the sound of the audio stream.
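For illustration only, the acquisition step could look like the sketch below. The camera URLs, the microphone device and the use of the ffmpeg CLI are assumptions made for the example, not part of the patent; the point is simply that each camera and each microphone yields its own independent stream.

```python
import subprocess

# Hypothetical sources: two RTSP cameras and one ALSA microphone (Linux).
CAMERA_URLS = ["rtsp://192.168.1.10/stream1", "rtsp://192.168.1.11/stream1"]
MIC_DEVICE = "hw:0"

def capture_camera(url: str, out_path: str) -> subprocess.Popen:
    # The camera already delivers an encoded video stream, so it is stored as-is ("-c:v copy").
    return subprocess.Popen(
        ["ffmpeg", "-rtsp_transport", "tcp", "-i", url, "-c:v", "copy", "-an", out_path]
    )

def capture_microphone(device: str, out_path: str) -> subprocess.Popen:
    # Microphones deliver raw PCM, so this one stream is encoded (here to AAC).
    return subprocess.Popen(
        ["ffmpeg", "-f", "alsa", "-i", device, "-c:a", "aac", out_path]
    )

# procs = [capture_camera(u, f"cam{i}.mp4") for i, u in enumerate(CAMERA_URLS)]
# procs.append(capture_microphone(MIC_DEVICE, "mic0.m4a"))
```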
Step S102: and determining data of the target file according to the video stream and the audio stream, and generating header information of the target file.
As a preferable solution of this embodiment, the data of the target file includes: the playing time of the target file, the number of the contained audio streams and video streams, the stream index of each video stream in the target file, and the stream index of each audio stream in the target file.
It should be noted that, preferably, the target file is in an mp4 file format, and in this embodiment, the data of the mp4 file includes a playing time of the mp4 file, a number of audio/video streams included in the mp4 file, and a stream index of each audio/video stream in the final mp4 file.
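As a minimal sketch, the "data of the target file" described above can be held in a small record such as the following; the field names are illustrative assumptions rather than identifiers from the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TargetFileData:
    duration_seconds: float  # playing duration of the target file
    video_stream_indices: List[int] = field(default_factory=list)  # stream index of each video stream
    audio_stream_indices: List[int] = field(default_factory=list)  # stream index of each audio stream

    @property
    def stream_count(self) -> int:
        # number of audio and video streams contained in the file
        return len(self.video_stream_indices) + len(self.audio_stream_indices)
```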
To explain further, referring to fig. 2, an mp4 file is composed of boxes, and each box is divided into a Header part and a Data part. The Header contains the type and size of the box; the Data contains sub-boxes or data, and a box may nest sub-boxes. The media data box mdat contains the actual media data: the audio and video data that are ultimately decoded and played are all stored inside it. trak is the Track Box; an mp4 file may contain one or more tracks (for example video tracks and audio tracks), and the track-related information is kept in the trak, which is a container box containing at least two boxes, tkhd and mdia.
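The box layout described above can be inspected with a few lines of code. The following sketch (an illustration, not part of the patent) walks the top-level boxes of an ISO-BMFF/mp4 file, relying only on the standard header of each box: a 4-byte big-endian size followed by a 4-byte type.

```python
import struct

def walk_top_level_boxes(path: str):
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            if size == 1:
                # A 64-bit "largesize" follows the type field for very large boxes (e.g. mdat).
                size = struct.unpack(">Q", f.read(8))[0]
                payload = size - 16
            elif size == 0:
                # A size of 0 means the box extends to the end of the file.
                yield box_type.decode("ascii", "replace"), None
                break
            else:
                payload = size - 8
            yield box_type.decode("ascii", "replace"), payload
            f.seek(payload, 1)  # skip over the payload to the next top-level box

# Example: list boxes such as ftyp, moov (which holds the trak header tracks) and mdat.
# for name, size in walk_top_level_boxes("recorded.mp4"):
#     print(name, size)
```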
As a preferred solution of this embodiment, the generating header information of the target file specifically includes:
respectively writing the stream index of each video stream and each audio stream into a header track (trak) of the target file, thereby generating the header information of the target file.
It should be noted that, in this embodiment, stream indexes of each audio/video stream are respectively written into trak of an mp4 file header, so as to generate header information of an mp4 file to be synthesized.
It can be understood that writing the stream index of each video stream and each audio stream into the header track of the target file allows each video stream and audio stream to be accurately distinguished and located, so that they can later be played back simply by looking them up in the header track of the target file. Because the stream indexes corresponding to the video streams and audio streams are recorded in the header information, the actual audio and video data in the target file can be retrieved without cumbersome or inaccurate look-ups.
Step S103: writing each video stream and each audio stream into the target file according to the header information of the target file, so that each audio stream corresponds one-to-one to a video stream, thereby completing the recording of the target file.
As a preferable solution of this embodiment, the writing of each video stream and each audio stream into the target file according to the header information of the target file, so that each audio stream corresponds one-to-one to a video stream and the recording of the target file is thereby completed, is specifically:
according to the stream indexes of the video streams and the audio streams in the header information of the target file, registering each audio stream and each video stream to enable each audio stream to be in one-to-one correspondence with each video stream; and directly writing the video streams and the audio streams which are subjected to the registration into the media data of the target file, thereby completing the recording of the target file.
In this embodiment, after the stream indexes of the audio and video streams are written into the trak boxes of the mp4 file header, the actual audio and video data are written directly into mdat without encoding or decoding, and each stream index corresponds one-to-one to its actual audio and video data, so as to synthesize an mp4 file. The mp4 file synthesized in this way will contain multiple video streams and multiple audio streams.
It can be understood that each audio and video data is registered by indexing each video stream and each audio stream in the header information of the target file, so that each audio stream corresponds to each video stream one to one, the accuracy of recording and playing the audio and video files is improved, the corresponding video streams and audio stream data are directly written into the media data of the target file after the stream indexes are registered, and the recording of the target file can be accurately and efficiently realized.
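A hedged sketch of this synthesis step is shown below, using the ffmpeg CLI instead of writing the boxes by hand (the input file names are assumptions): the "-map" options give every input stream its own track in one mp4 file, and "-c copy" copies the packets straight into mdat without any encoding or decoding, so the result contains multiple video tracks and multiple audio tracks.

```python
import subprocess

video_inputs = ["cam1.mp4", "cam2.mp4"]   # hypothetical per-camera recordings
audio_inputs = ["mic1.m4a", "mic2.m4a"]   # hypothetical per-microphone recordings

cmd = ["ffmpeg"]
for src in video_inputs + audio_inputs:
    cmd += ["-i", src]
for i in range(len(video_inputs) + len(audio_inputs)):
    cmd += ["-map", str(i)]               # one output track per input stream
cmd += ["-c", "copy", "merged.mp4"]       # no re-encoding: packets are copied into mdat

subprocess.run(cmd, check=True)
```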
Step S104: and obtaining the audio stream and the video stream contained in the recorded target file by analyzing the header information of the recorded target file.
As a preferred solution of this embodiment, the obtaining, by analyzing header information of the recorded target file, an audio stream and a video stream contained in the recorded target file specifically includes:
obtaining the stream index of each video stream and each audio stream according to the header track in the header information of the recorded target file, and obtaining the video streams and audio streams corresponding to the recorded target file according to the media data of the target file.
It should be noted that, during playback, the player parses the header information in the mp4 file and obtains the stream index of each stream from the trak boxes, so that the actual audio and video data can be read from mdat for playing.
It can be understood that the stream index of each audio/video stream is obtained through the header track in the header information of the recorded target file, and then the video stream and the audio stream written in the media data in the recorded target file can be quickly and accurately located.
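For illustration (reusing the merged file name assumed in the earlier sketch), the same header parsing can be observed with ffprobe, which reads the moov/trak information and reports one entry per stream together with its stream index:

```python
import json
import subprocess

def list_streams(path: str) -> None:
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    for stream in json.loads(out.stdout)["streams"]:
        # e.g. 0 video h264 / 1 video h264 / 2 audio aac / 3 audio aac
        print(stream["index"], stream["codec_type"], stream.get("codec_name"))

# list_streams("merged.mp4")
```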
Step S105: in response to the user determining the video stream to be played in each window, reading the audio stream that is synchronized with each video stream, thereby playing the audio and video.
As a preferred solution of this embodiment, the responding to the user's determination of the video stream to be played in each window and the reading of the audio stream synchronized with each video stream, thereby playing the audio and video, is specifically:
in response to the target audio and video selected by the user to be played in a window, determining the stream index corresponding to the target audio and video, and selecting the track corresponding to that stream index; according to the selected track, determining and reading the corresponding video stream in the media data and the audio stream synchronized with it, and then creating video playing windows matching the number of selected stream indexes to play a single video or multiple videos.
It should be noted that, during playback, the video stream to be played is determined by the stream index of the video the user selects, so that the trak corresponding to that stream index is selected, the corresponding audio and video data are read from mdat, and video playing windows matching the number of selected stream indexes are created on the player to play a single video or multiple videos.
Further, the playing principle is equivalent to creating several sub-players on the player, one per video stream, so that visually the videos appear to be spliced together. Since a track in mp4 records the position and size of the actual media data, and mdat can be located from the track, the user can select a specific camera picture to play in the player via the track, or play all camera pictures in a split-screen manner.
It can be understood that, by responding to the target audio and video to be played in the window selected by the user, the corresponding stream index is determined and the track corresponding to that stream index is selected, so that the corresponding video stream in the media data and its synchronized audio stream can subsequently be read from that track and played accurately, with video playing windows created to match the number of selected stream indexes for playing a single video or multiple videos.
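A hedged illustration of this selective playback, assuming a recent ffplay build whose -vst/-ast options accept stream specifiers such as "v:1" and "a:1": each call plays one camera track together with its matching microphone track, and launching several such players side by side approximates the multi-window, split-screen playback described above.

```python
import subprocess

def play_single(path: str, camera_number: int) -> subprocess.Popen:
    # Pick the N-th video track and the N-th audio track out of the merged file.
    # Stream specifiers such as "v:1"/"a:1" address the second video/audio track.
    return subprocess.Popen(
        ["ffplay", "-vst", f"v:{camera_number}", "-ast", f"a:{camera_number}", path]
    )

# players = [play_single("merged.mp4", n) for n in range(2)]  # two side-by-side windows
```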
In this embodiment, when the recording-and-broadcast host performs merged recording, the stream information of each camera video stream and each microphone audio stream is analyzed, and the playing duration of the final merged mp4 file, the number of audio and video streams it contains, and the stream index of each audio and video stream in the final mp4 file are determined.
According to the determination result, the header information of the mp4 file to be synthesized is generated, and the media data of each audio and video stream is stored independently, according to its stream index, as a separate track in the mp4 file to be synthesized, without physically splicing the media data by width and height; the result is output as the mp4 file.
Referring to fig. 3, during playback the player parses the header information in the mp4 file to obtain the number of audio and video streams contained and the duration and stream index of each stream, and determines the video stream to be played in each playing window together with the audio stream to be synchronized with it, so as to implement split-screen playing of all video pictures or individual playing of a certain video picture. Because the media data are not physically spliced by width and height during merging, no audio or video encoding or decoding is needed, the software overhead is lower, and the hardware performance requirements on the recording-and-broadcast host are lower; in this mode the user can choose any single camera picture to watch independently during playback, or watch all camera pictures at the same time.
Implementing the above embodiment has the following effects:
Compared with the prior art, the embodiment acquires the video stream of each camera and the audio stream of each microphone, determines the data and the header information of the target file, and writes each video stream and each audio stream into the target file according to that header information so that each audio stream corresponds one-to-one to a video stream, thereby completing the recording of the target file. No audio or video encoding or decoding is needed during synthesis, which simplifies the recording steps. The header information of the target file also avoids the prior-art limitation of having only one video stream and one audio stream in the synthesized file: by responding to the video streams the user chooses to play and reading the audio stream that is synchronized with each video stream, playback can present the pictures of all cameras or the picture of a single selected camera, which improves the user experience.
Example two
Please refer to fig. 4, which shows an audio and video converged recording and playing device according to the present invention, including: an acquisition module 201, a header information module 202, a recording module 203, an analysis module 204, and a playing module 205.
The obtaining module 201 is configured to obtain a video stream of each camera and an audio stream of each microphone.
The header information module 202 is configured to determine data of the target file according to the video stream and the audio stream, and generate header information of the target file.
The recording module 203 is configured to write each video stream and each audio stream into the target file according to the header information of the target file, so that each audio stream corresponds one-to-one to a video stream, thereby completing the recording of the target file.
The analysis module 204 is configured to obtain an audio stream and a video stream contained in the recorded target file by analyzing header information of the recorded target file.
The playing module 205 is configured to respond to a video stream that a user determines that each window needs to be played, and respectively read an audio stream corresponding to each video stream when the video streams are synchronized, thereby implementing playing of audio and video.
Preferably, the data of the object file includes: the playing time of the target file, the number of the contained audio streams and video streams, the stream index of each video stream in the target file, and the stream index of each audio stream in the target file.
As a preferred solution of this embodiment, the generating header information of the target file specifically includes:
respectively writing the stream index of each video stream and each audio stream into a header track (trak) of the target file, thereby generating the header information of the target file.
As a preferable solution of this embodiment, the writing of each video stream and each audio stream into the target file according to the header information of the target file, so that each audio stream corresponds one-to-one to a video stream and the recording of the target file is thereby completed, is specifically:
registering each audio stream and each video stream according to the stream index of each video stream and each audio stream in the header information of the target file so as to enable each audio stream and each video stream to be in one-to-one correspondence; and directly writing the video streams and the audio streams which are subjected to the registration into the media data of the target file, thereby completing the recording of the target file.
As a preferred scheme of this embodiment, the obtaining of the audio stream and the video stream contained in the recorded target file by analyzing the header information of the recorded target file specifically includes:
obtaining the stream index of each video stream and each audio stream according to the header track in the header information of the recorded target file, and obtaining the video streams and audio streams corresponding to the recorded target file according to the media data of the target file.
As a preferred scheme of this embodiment, the responding to the user's determination of the video stream to be played in each window and the reading of the audio stream synchronized with each video stream, thereby playing the audio and video, specifically includes:
in response to the target audio and video selected by the user to be played in a window, determining the stream index corresponding to the target audio and video, and selecting the track corresponding to that stream index; according to the selected track, determining and reading the corresponding video stream in the media data and the audio stream synchronized with it, and then creating video playing windows matching the number of selected stream indexes to play a single video or multiple videos.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
The embodiment of the invention has the following effects:
Compared with the prior art, the embodiment acquires the video stream of each camera and the audio stream of each microphone, determines the data and the header information of the target file, and writes each video stream and each audio stream into the target file according to that header information so that each audio stream corresponds one-to-one to a video stream, thereby completing the recording of the target file. No audio or video encoding or decoding is needed during synthesis, which simplifies the recording steps. The header information of the target file also avoids the prior-art limitation of having only one video stream and one audio stream in the synthesized file: by responding to the video streams the user chooses to play and reading the audio stream that is synchronized with each video stream, playback can present the pictures of all cameras or the picture of a single selected camera, which improves the user experience.
Example three
Correspondingly, the invention also provides a terminal device, comprising: the device comprises a processor, a memory and a computer program which is stored in the memory and configured to be executed by the processor, wherein the processor executes the computer program to realize the audio and video confluence recording and playing method according to any one of the above embodiments.
The terminal device of this embodiment includes: a processor, a memory, and a computer program, computer instructions stored in the memory and executable on the processor. The processor implements the steps in the first embodiment, such as steps S101 to S105 shown in fig. 1, when executing the computer program. Alternatively, the processor, when executing the computer program, implements the functions of the modules/units in the above-described apparatus embodiments, such as the recording module 203.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory and executed by the processor to implement the invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used for describing the execution process of the computer program in the terminal device. For example, the recording module 203 is configured to write each video stream and each audio stream into the target file according to the header information of the target file, so that each audio stream corresponds one-to-one to a video stream, thereby completing the recording of the target file.
The terminal device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The terminal device may include, but is not limited to, a processor, a memory. It will be appreciated by those skilled in the art that the schematic diagram is merely an example of a terminal device and does not constitute a limitation of a terminal device, and may include more or less components than those shown, or combine certain components, or different components, for example, the terminal device may also include input output devices, network access devices, buses, etc.
The Processor may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. The general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like, which is the control center of the terminal device and connects the various parts of the whole terminal device using various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor may implement various functions of the terminal device by running or executing the computer programs and/or modules stored in the memory and calling data stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the mobile terminal, and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other non-volatile solid-state storage device.
Wherein, the terminal device integrated module/unit can be stored in a computer readable storage medium if it is implemented in the form of software functional unit and sold or used as an independent product. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, read-Only Memory (ROM), random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
Example four
Correspondingly, the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program, and when the computer program runs, the apparatus where the computer-readable storage medium is located is controlled to execute the audio and video merging recording and playing method according to any of the above embodiments.
The above-mentioned embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, and it should be understood that the above-mentioned embodiments are only examples of the present invention and are not intended to limit the scope of the present invention. It should be understood that any modifications, equivalents, improvements and the like, which come within the spirit and principle of the invention, may occur to those skilled in the art and are intended to be included within the scope of the invention.
Claims (10)
1. An audio and video interflow recording playing method is characterized by comprising the following steps:
acquiring video streams of all cameras and audio streams of all microphones;
determining data of a target file according to the video stream and the audio stream, and generating header information of the target file;
writing each video stream and each audio stream into the target file according to the header information of the target file, so that each audio stream corresponds to each video stream one by one, and recording the target file is completed;
obtaining audio streams and video streams contained in the recorded target file by analyzing the header information of the recorded target file;
and responding to the video stream which needs to be played in each window determined by the user, and respectively reading the audio stream corresponding to each video stream when the video streams are synchronous, thereby realizing the playing of the audio and the video.
2. The audio-video interflow recording playing method according to claim 1, wherein the data of the target file comprises: the playing time of the target file, the number of the contained audio streams and video streams, the stream index of each video stream in the target file, and the stream index of each audio stream in the target file.
3. The audio-video interflow recording and playing method according to claim 2, wherein the generating of the header information of the target file specifically comprises:
respectively writing the stream index of each video stream and each audio stream into a header track of the target file, thereby generating the header information of the target file.
4. The method for merging, recording and playing audio and video according to claim 3, wherein the writing of each video stream and each audio stream into the target file is performed according to the header information of the target file, so that each audio stream corresponds one-to-one to a video stream, thereby completing the recording of the target file, specifically:
according to the stream indexes of the video streams and the audio streams in the header information of the target file, registering each audio stream and each video stream to enable each audio stream to be in one-to-one correspondence with each video stream;
and directly writing the video streams and the audio streams which are subjected to the registration into the media data of the target file, thereby completing the recording of the target file.
5. The audio and video interflow recording and playing method according to claim 4, wherein the audio stream and the video stream contained in the recorded target file are obtained by analyzing the header information of the recorded target file, and the method specifically comprises the following steps:
obtaining the stream index of each video stream and each audio stream according to the header track in the header information of the recorded target file, and obtaining the video streams and audio streams corresponding to the recorded target file according to the media data of the target file.
6. The method for merging, recording and playing the audio and video according to claim 5, wherein, in response to the user determining the video stream to be played in each window, the audio stream synchronized with each video stream is read respectively so as to realize the playing of the audio and video, specifically:
responding to the target audio and video selected by the user to be played in a window, determining the stream index corresponding to the target audio and video, and selecting the track corresponding to that stream index;
according to the selected track corresponding to the stream index, determining and reading the corresponding video stream in the media data and the audio stream synchronized with it, and then creating video playing windows matching the number of selected stream indexes to play a single video or multiple videos.
7. An audio and video interflow recording playing device is characterized by comprising: the device comprises an acquisition module, a header information module, a recording module, an analysis module and a playing module;
the acquisition module is used for acquiring the video stream of each camera and the audio stream of each microphone;
the header information module is used for determining data of the target file according to the video stream and the audio stream and generating header information of the target file;
the recording module is used for writing each video stream and each audio stream into the target file according to the header information of the target file, so that each audio stream corresponds to each video stream one by one, and the recording of the target file is completed;
the analysis module is used for analyzing the header information of the recorded target file to obtain an audio stream and a video stream contained in the recorded target file;
and the playing module is used for responding to the video stream which is determined by the user to be played in each window, and respectively reading the corresponding audio stream when the audio stream and the video stream are synchronous, so as to realize the playing of the audio and the video.
8. The apparatus for merging, recording and playing of audio and video according to claim 7, wherein the data of the target file includes: the playing time of the target file, the number of the contained audio streams and video streams, the stream index of each video stream in the target file, and the stream index of each audio stream in the target file.
9. A terminal device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor implements the audio-video merged stream recording and playing method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored computer program; wherein the computer program controls, when running, the device on which the computer-readable storage medium is located to execute the audio and video interflow recording and playing method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202211322012.7A | 2022-10-25 | 2022-10-25 | Audio and video converging recording and playing method, device, equipment and storage medium
Publications (1)
Publication Number | Publication Date
---|---
CN115643442A | 2023-01-24
Family
- ID=84946442
Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1767601A (en) * | 2005-10-21 | 2006-05-03 | 西安交通大学 | Synchronous broadcast controlling method capable of supporting multi-source stream media |
CN101303880A (en) * | 2008-06-30 | 2008-11-12 | 北京中星微电子有限公司 | Method and apparatus for recording and playing audio-video document |
US20090220206A1 (en) * | 2005-06-29 | 2009-09-03 | Canon Kabushiki Kaisha | Storing video data in a video file |
KR20110134857A (en) * | 2010-06-09 | 2011-12-15 | 삼성전자주식회사 | Method and apparatus for providing fragmented multimedia streaming service, and method and apparatus for receiving fragmented multimedia streaming service |
CN103179435A (en) * | 2013-02-27 | 2013-06-26 | 北京视博数字电视科技有限公司 | Multi-channel video data multiplexing method and device |
CN103428462A (en) * | 2013-08-29 | 2013-12-04 | 中安消技术有限公司 | Method and device for processing multichannel audio and video |
CN103731625A (en) * | 2013-12-13 | 2014-04-16 | 厦门雅迅网络股份有限公司 | Method for simultaneously and synchronously playing multiple paths of audios and videos |
CN103780878A (en) * | 2013-12-31 | 2014-05-07 | 南宁市公安局 | Portable monitoring equipment |
CN104144178A (en) * | 2013-05-07 | 2014-11-12 | 上海国富光启云计算科技有限公司 | Cloud-computing-based method for transmitting video between virtual machine and client terminal |
CN104505109A (en) * | 2014-12-29 | 2015-04-08 | 珠海全志科技股份有限公司 | Audio track switching method and system of multimedia player and corresponding player and equipment |
CN106231222A (en) * | 2016-08-23 | 2016-12-14 | 深圳亿维锐创科技股份有限公司 | Based on many code streams can be mutual teaching video file form and storing and playing method |
WO2017048326A1 (en) * | 2015-09-18 | 2017-03-23 | Furment Odile Aimee | System and method for simultaneous capture of two video streams |
CN108174283A (en) * | 2017-12-27 | 2018-06-15 | 威创集团股份有限公司 | A kind of vision signal source generating method and device |
CN109547865A (en) * | 2018-11-09 | 2019-03-29 | 中国航空无线电电子研究所 | The method for organizing of video data encoder, synchronous broadcast method, segmentation sweep-out method |
CN110536077A (en) * | 2018-05-25 | 2019-12-03 | 杭州海康威视系统技术有限公司 | A kind of Video Composition and playback method, device and equipment |
CN111263220A (en) * | 2020-01-15 | 2020-06-09 | 北京字节跳动网络技术有限公司 | Video processing method and device, electronic equipment and computer readable storage medium |
CN111741376A (en) * | 2020-07-31 | 2020-10-02 | 南斗六星系统集成有限公司 | Method for synchronizing audio and video lip sounds of multimedia file splicing |
CN112584087A (en) * | 2021-02-25 | 2021-03-30 | 浙江华创视讯科技有限公司 | Video conference recording method, electronic device and storage medium |
CN114257771A (en) * | 2021-12-21 | 2022-03-29 | 杭州海康威视数字技术股份有限公司 | Video playback method and device for multi-channel audio and video, storage medium and electronic equipment |
CN115209222A (en) * | 2022-06-15 | 2022-10-18 | 深圳市锐明技术股份有限公司 | Video playing method and device, electronic equipment and readable storage medium |
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination