WO2024061005A1 - Audio and video buffer reading processing method and device - Google Patents

Audio and video buffer reading processing method and device

Info

Publication number
WO2024061005A1
WO2024061005A1 PCT/CN2023/117434 CN2023117434W WO2024061005A1 WO 2024061005 A1 WO2024061005 A1 WO 2024061005A1 CN 2023117434 W CN2023117434 W CN 2023117434W WO 2024061005 A1 WO2024061005 A1 WO 2024061005A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio
video
read
video data
reading
Prior art date
Application number
PCT/CN2023/117434
Other languages
English (en)
French (fr)
Inventor
何军辉
王刚
王家宾
薛有义
刘博
罗国鸿
李康炎
Original Assignee
天翼数字生活科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 天翼数字生活科技有限公司 filed Critical 天翼数字生活科技有限公司
Publication of WO2024061005A1 publication Critical patent/WO2024061005A1/zh

Links

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10Digital recording or reproducing
    • G11B20/10527Audio or video recording; Data buffering arrangements
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10Digital recording or reproducing
    • G11B20/10009Improvement or modification of read or write signals
    • G11B20/10481Improvement or modification of read or write signals optimisation methods
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10Digital recording or reproducing
    • G11B20/10527Audio or video recording; Data buffering arrangements
    • G11B2020/1062Data buffering arrangements, e.g. recording or playback buffers
    • G11B2020/10675Data buffering arrangements, e.g. recording or playback buffers aspects of buffer control

Definitions

  • the present application relates to the technical field of video networking, and in particular to an audio and video buffer reading processing method and device.
  • This application provides an audio and video buffer reading and processing method and device to solve the technical problem of poor operating stability of existing video network audio and video equipment.
  • the first aspect of this application provides an audio and video buffer reading processing method, including:
the target audio and video data stored in the audio and video buffer is read by reference calling, wherein the audio and video data in the audio and video buffer is obtained by preprocessing the original audio and video data collected by the video networking audio and video equipment.
  • the target audio and video data acquired as needed, in combination with the read handle, reading the target audio and video data stored in the audio and video buffer area by reference calling specifically includes:
the single-frame data in the target audio and video data is sequentially read from the audio and video buffer by reference calling.
after a single frame of data has been used, the reference to that frame is released and the remaining single-frame data is read, until all the target audio and video data has been read.
  • the target audio and video data obtained as needed, combined with the read handle, and through a reference call method, reading the target audio and video data stored in the audio and video buffer area specifically includes:
  • the target audio and video data obtained as needed, combined with the read handle, the target audio and video data is read from the audio and video buffer area at one time through a reference call method.
  • the method further includes:
  • the read attribute information includes: read mode information and read business module information.
  • the second aspect of this application provides an audio and video buffer reading and processing device, including:
  • a read handle generation unit configured to respond to an audio and video data read instruction and generate a read handle according to the read attribute information corresponding to the audio and video data read instruction;
a reference acquisition unit, used to read, according to the target audio and video data to be obtained and in combination with the read handle, the target audio and video data stored in the audio and video buffer by reference calling, wherein the audio and video data in the audio and video buffer is obtained by preprocessing the original audio and video data collected by the video networking audio and video equipment.
  • the reference acquisition unit is specifically used to:
  • single frame data in the target audio and video data is read from the audio and video buffer area in sequence through a reference call method.
after a single frame of data has been used, the remaining single-frame data is read, until all the target audio and video data has been read.
  • the reference acquisition unit is specifically used to:
  • the target audio and video data obtained as needed, combined with the read handle, the target audio and video data is read from the audio and video buffer area at one time through a reference call method.
  • it also includes:
  • the handle destruction unit is used to destroy and release the read handle.
  • the read attribute information includes: read mode information and read business module information.
the solution provided by this application improves the data read/write method of the audio and video buffer. Each consumer module creates its own handle for reading data and uses the read handle as a unique identifier to read data from the audio and video buffer in the form of data references for business processing. Since the data in this solution is obtained by reference, no additional memory is allocated; after the current consumer module has used the data, it can release the data reference. Through this read/write method, the operation of multiple business function modules can be supported with low hardware resources and without a large increase in memory consumption, thereby solving the technical problem of poor operating stability of existing video networking audio and video equipment caused by scarce memory resources.
  • Figure 1 is a schematic diagram of the existing reading and writing architecture of the video network audio and video cache area.
  • Figure 2 is a schematic diagram of the reading and writing architecture of the video network audio and video cache area provided by this application solution.
  • FIG. 3 is a schematic flowchart of an embodiment of an audio and video buffer reading processing method provided by this application.
  • FIG. 4 is a schematic flowchart of another embodiment of an audio and video buffer reading processing method provided by this application.
  • Figure 5 is a schematic structural diagram of an embodiment of an audio and video buffer reading and processing device provided by this application.
the old architecture is a single-producer, single-consumer architecture. As shown in Figure 1, under this architecture the audio and video data collected by the hardware device is stored in multiple per-business buffers by the producer, and each consumer business consumes data from its own buffer. Under the old architecture, audio and video use a single producer and a single consumer, and multiple business scenarios use multiple buffers. For example, if a device uses 5 audio and video services, the old architecture requires opening 5 audio and video buffers, which consumes substantial resources and wastes hardware memory, leaving the hardware resources insufficient to support sustainable business development.
  • embodiments of the present application provide an audio and video buffer reading processing method and device to solve the technical problem of poor operating stability of existing video network audio and video equipment.
  • a method for reading and processing an audio and video buffer provided in the first embodiment of the present application includes:
  • Step 101 In response to the audio and video data reading instruction, generate a reading handle according to the reading attribute information corresponding to the audio and video data reading instruction.
  • Step 102 According to the target audio and video data obtained as needed, combined with the read handle, read the target audio and video data stored in the audio and video cache area through reference calling.
  • the audio and video data in the audio and video buffer are obtained by preprocessing the original audio and video data based on the original audio and video data collected by the video network audio and video equipment.
  • the different business function modules within the SDK software include, for example, cloud storage modules, video live broadcast modules, etc. These modules are the consumers of audio and video media data in the system, so they are also called consumer modules.
  • the audio and video buffer is used to store the audio and video data collected by IPC and other devices. Specifically, it can include the timestamp, data length, encoding type and other attributes corresponding to each frame of data, which are written to the audio and video buffer of the SDK.
the SDK software can also preprocess the input audio and video data. Such preprocessing includes, but is not limited to, dropping frames when the buffer is full, and data-optimization processing of the stored audio and video data such as key-frame marking and index recording.
when a business function module of the video networking device needs to obtain audio and video data from the audio and video buffer, the system automatically generates an audio and video data read instruction. This instruction triggers and controls the business function module to execute the audio and video buffer reading processing method provided by this application.
after the business function module receives and responds to the audio and video data read instruction, it generates the corresponding read handle based on the read attribute information corresponding to the instruction, and uses the read handle as the unique identifier for reading data from the audio and video buffer, where the read attribute information includes: read mode information and read business module information.
after obtaining the read handle, the read handle can be used, in combination with the audio and video data that needs to be obtained, namely the target audio and video data, to read the target audio and video data stored in the audio and video buffer by reference calling. Since the data in this application is obtained by reference, no additional memory is allocated; after the current consumer module has used the data, the data reference can be released, which greatly reduces memory usage.
  • this application also provides an example description of a complete embodiment of the audio and video buffer reading processing method at the system level.
  • the terminal IPC hardware manufacturer accesses the VisionLink SDK, and the hardware obtains the encoded H264 or H265 data and audio PCM data, including timestamp, frame type, data content and corresponding length, and transfers the parameters into the SDK audio and video ring buffer.
the buffer internally checks whether it is full. If the current buffer is full, frame dropping is performed: the oldest GOP data is discarded and the latest frame data is written to the tail of the ring buffer. The write path of the buffer loops through this logic;
  • the business modules for reading data include cloud storage upload, card recording, streaming media forwarding, and P2P point-to-point live broadcast.
Each read handle is created according to the mode to be read, and the buffer contents are then obtained through the handle by referencing the original data. After acquisition the data is used directly, with no additional memory overhead, but the original data content cannot be modified. After use, the reference is released, and acquisition repeats in this way until the owning business module closes the corresponding read handle.
the solution provided by this application improves the data read/write method of the audio and video buffer: each consumer module creates its own handle for reading data and uses the read handle as a unique identifier.
  • Data is read from the audio and video buffer in the form of data reference for business processing. Since the data in this solution is obtained by reference, no additional memory is added. After the current consumer module has used the data, the data reference can be released.
the buffer reading method provided by this application can support the operation of multiple business function modules with lower hardware resources and without significantly increasing memory consumption, thereby solving the technical problem of poor operating stability of existing video networking audio and video equipment caused by scarce memory resources.
  • a second embodiment of the present application provides an audio and video buffer reading and processing method, including:
the process of step 102, in which the target audio and video data to be obtained is read from the audio and video buffer by reference calling in combination with the read handle, specifically includes:
Step 1021: According to the target audio and video data to be obtained, in combination with the read handle, read the single-frame data of the target audio and video data from the audio and video buffer one frame at a time by reference calling. After a single frame has been used, release the reference to that frame and then read the remaining single-frame data, until all the target audio and video data has been read.
  • the process of step 102 may also include:
  • Step 1022 Based on the target audio and video data obtained as needed, combined with the read handle, read the target audio and video data from the audio and video buffer area at one time through reference calling.
step 1021 provides a frame-by-frame reference method: according to the created read handle, the single-frame data of the target audio and video data is read by reference calling in a single-frame loop. After a frame has been used, its reference can be released and the next frame read, and this cycle continues until all frames of the target audio and video data have been read. Step 1022 provides a one-time reference method: for example, based on the created read handles, with multiple read handles corresponding one-to-one to the frames of the target audio and video data, the target audio and video data is read in one go by reference calling.
for the above two reading methods, the frame-by-frame reference method mentioned in step 1021 is generally preferred. However, in some special application scenarios, for example when the target audio and video data to be obtained is only a small number of key frames, the one-time reference method mentioned in step 1022 can also be used.
after the target audio and video data has been used, the method also includes:
  • Step 103 Destroy and release the read handle.
after the current business function module has consumed the data, the read handle can be destroyed, thereby releasing the resources occupied by the read handle.
  • the third embodiment of the present application provides an audio and video buffer reading and processing device, including:
  • the read handle generation unit 201 is configured to respond to the audio and video data read instruction and generate a read handle according to the read attribute information corresponding to the audio and video data read instruction;
the reference acquisition unit 202 is used to read, according to the target audio and video data to be obtained and in combination with the read handle, the target audio and video data stored in the audio and video buffer by reference calling, wherein the audio and video data in the audio and video buffer is obtained by preprocessing the original audio and video data collected by the video networking audio and video equipment.
  • reference acquisition unit 202 is specifically used to:
the single-frame data in the target audio and video data is sequentially read from the audio and video buffer by reference calling; after a single frame has been used, the remaining single-frame data is read, until all the target audio and video data has been read.
  • reference acquisition unit 202 is specifically used to:
  • the target audio and video data obtained as needed is read from the audio and video buffer area at one time through reference calling.
  • the handle destruction unit 203 is used to destroy and release the read handle.
  • reading attribute information includes: reading mode information and reading business module information.
  • the disclosed terminal, device and method can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division.
there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in various embodiments of the present invention can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software functional units.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to cause a computer device (which can be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present invention.
the aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Telephonic Communication Services (AREA)

Abstract

An audio and video buffer reading processing method and device. The data read/write scheme of the audio and video buffer is improved: each consumer module creates its own handle for reading data and uses the read handle as a unique identifier to read data from the audio and video buffer by data reference for business processing. Since the data is obtained by reference, no additional memory is allocated; after the current consumer module has used the data, it can release the data reference. This read/write scheme can support the operation of multiple business function modules with low hardware resources and without a large increase in memory consumption, thereby solving the technical problem of poor operating stability of existing video networking audio and video devices caused by scarce memory resources.

Description

Audio and video buffer reading processing method and device
Technical Field
The present application relates to the technical field of video networking, and in particular to an audio and video buffer reading processing method and device.
Background Art
With the continuous development and growth of the smart-home business, the functions and scenarios that the software needs to support keep increasing, so the VisionLink SDK that operators provide to terminal manufacturers for integration incorporates more and more functions, and hardware resource consumption increases accordingly. However, most network cameras (IPC), doorbells, and other networked audio and video devices currently connected through the operator's VisionLink SDK have very limited hardware resources; scarce memory resources easily cause SDK functions to run abnormally, which leads to the technical problem that existing video networking audio and video devices have poor operating stability.
Summary of the Invention
The present application provides an audio and video buffer reading processing method and device to solve the technical problem that existing video networking audio and video devices have poor operating stability.
To solve the above technical problem, a first aspect of the present application provides an audio and video buffer reading processing method, including:
in response to an audio and video data read instruction, generating a read handle according to the read attribute information corresponding to the audio and video data read instruction;
according to the target audio and video data to be obtained, in combination with the read handle, reading, by reference calling, the target audio and video data stored in the audio and video buffer, wherein the audio and video data in the audio and video buffer is obtained by preprocessing the original audio and video data collected by the video networking audio and video device.
Preferably, reading the target audio and video data stored in the audio and video buffer, according to the target audio and video data to be obtained, in combination with the read handle and by reference calling, specifically includes:
according to the target audio and video data to be obtained, in combination with the read handle, reading the single-frame data of the target audio and video data from the audio and video buffer one frame at a time by reference calling; after a single frame has been used, releasing the reference to that frame and then reading the remaining single-frame data, until all the target audio and video data has been read.
Preferably, reading the target audio and video data stored in the audio and video buffer, according to the target audio and video data to be obtained, in combination with the read handle and by reference calling, specifically includes:
according to the target audio and video data to be obtained, in combination with the read handle, reading the target audio and video data from the audio and video buffer in one go by reference calling.
Preferably, after the target audio and video data has been used, the method further includes:
destroying and releasing the read handle.
Preferably, the read attribute information includes: read mode information and read business module information.
A second aspect of the present application provides an audio and video buffer reading processing device, including:
a read handle generation unit, configured to generate, in response to an audio and video data read instruction, a read handle according to the read attribute information corresponding to the audio and video data read instruction;
a reference acquisition unit, configured to read, according to the target audio and video data to be obtained and in combination with the read handle, the target audio and video data stored in the audio and video buffer by reference calling, wherein the audio and video data in the audio and video buffer is obtained by preprocessing the original audio and video data collected by the video networking audio and video device.
Preferably, the reference acquisition unit is specifically configured to:
read, according to the target audio and video data to be obtained and in combination with the read handle, the single-frame data of the target audio and video data sequentially from the audio and video buffer by reference calling; after a single frame has been used, read the remaining single-frame data, until all the target audio and video data has been read.
Preferably, the reference acquisition unit is specifically configured to:
read, according to the target audio and video data to be obtained and in combination with the read handle, the target audio and video data from the audio and video buffer in one go by reference calling.
Preferably, the device further includes:
a handle destruction unit, configured to destroy and release the read handle.
Preferably, the read attribute information includes: read mode information and read business module information.
As can be seen from the above technical solutions, the embodiments of the present application have the following advantages:
The solution provided by the present application improves the data read/write method of the audio and video buffer: each consumer module creates its own handle for reading data, and the read handle is used as a unique identifier to read data from the audio and video buffer by data reference for business processing. Since the data in this solution is obtained by reference, no additional memory is allocated; after the current consumer module has used the data, the data reference can be released. This read/write method can support the operation of multiple business function modules with low hardware resources and without a large increase in memory consumption, thereby solving the technical problem of poor operating stability of existing video networking audio and video devices caused by scarce memory resources.
Brief Description of the Drawings
In order to more clearly explain the technical solutions in the embodiments of the present application or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Figure 1 is a schematic diagram of the existing read/write architecture of the video networking audio and video buffer.
Figure 2 is a schematic diagram of the read/write architecture of the video networking audio and video buffer provided by the solution of the present application.
Figure 3 is a schematic flowchart of one embodiment of an audio and video buffer reading processing method provided by the present application.
Figure 4 is a schematic flowchart of another embodiment of an audio and video buffer reading processing method provided by the present application.
Figure 5 is a schematic structural diagram of one embodiment of an audio and video buffer reading processing device provided by the present application.
Detailed Description of the Embodiments
In view of the widespread shortage of memory resources in existing video networking audio and video devices, the applicant studied the existing video networking audio and video system architecture in depth and found that the old architecture is a single-producer, single-consumer architecture, as shown in Figure 1. Under this architecture, the audio and video data collected by the hardware device is stored in multiple per-business buffers by the producer, and each consumer business consumes data from its own buffer. Under the old architecture, audio and video use a single producer and a single consumer, and multiple business scenarios use multiple buffers. For example, if a device uses 5 audio and video services, the old architecture requires opening 5 audio and video buffers, which consumes substantial resources, wastes hardware memory, and leaves the hardware resources insufficient to support sustainable business development.
To address the problems of the old architecture, the embodiments of the present application provide an audio and video buffer reading processing method and device to solve the technical problem that existing video networking audio and video devices have poor operating stability.
To make the purposes, features, and advantages of the present application more obvious and understandable, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the embodiments described below are only some of the embodiments of the present application, not all of them. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present application.
Referring to Figures 2 and 3, a first embodiment of the present application provides an audio and video buffer reading processing method, including:
Step 101: In response to an audio and video data read instruction, generate a read handle according to the read attribute information corresponding to the audio and video data read instruction.
Step 102: According to the target audio and video data to be obtained, in combination with the read handle, read the target audio and video data stored in the audio and video buffer by reference calling.
The audio and video data in the audio and video buffer is obtained by preprocessing the original audio and video data collected by the video networking audio and video device.
It should be noted that the different business function modules inside the SDK software include, for example, a cloud storage module and a live video module; these modules are the consumers of audio and video media data in the system and are therefore also called consumer modules. The audio and video buffer is used to store the audio and video data collected by IPCs and other devices; specifically, it can include attributes such as the timestamp, data length, and encoding type corresponding to each frame of data, which are written into the SDK's audio and video buffer. Meanwhile, the SDK software can also preprocess the input audio and video data; such preprocessing includes, but is not limited to, dropping frames when the buffer is full, and data-optimization processing of the stored audio and video data such as key-frame marking and index recording.
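As a concrete illustration of the key-frame marking and index recording mentioned above, the following Python sketch records the position of every key frame as frames are written into the buffer. The class, method, and field names are illustrative assumptions for this sketch, not the SDK's actual API.

```python
class AVBuffer:
    """Toy audio/video buffer illustrating key-frame marking and indexing."""

    def __init__(self):
        self.frames = []          # stored frames with per-frame attributes
        self.keyframe_index = []  # recorded positions of key frames

    def write(self, data, timestamp, is_keyframe):
        frame = {
            "ts": timestamp,      # timestamp attribute
            "len": len(data),     # data length attribute
            "key": is_keyframe,   # key-frame marking
            "data": data,
        }
        self.frames.append(frame)
        if is_keyframe:
            # index recording: remember where each key frame sits
            self.keyframe_index.append(len(self.frames) - 1)

buf = AVBuffer()
buf.write(b"IDR-frame", 0, True)
buf.write(b"P-frame", 40, False)
buf.write(b"IDR-frame2", 80, True)
print(buf.keyframe_index)  # -> [0, 2]
```

With such an index, a consumer that only needs key frames can jump directly to their positions instead of scanning every frame.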
When a business function module of the video networking device needs to obtain audio and video data from the audio and video buffer, the system automatically generates an audio and video data read instruction. This instruction triggers and controls the business function module to execute the audio and video buffer reading processing method provided by the present application. After the business function module receives and responds to the audio and video data read instruction, it generates the corresponding read handle according to the read attribute information corresponding to the instruction and uses the read handle as the unique identifier for reading data from the audio and video buffer, where the read attribute information includes: read mode information and read business module information.
After the read handle has been obtained, it can be used, in combination with the audio and video data that needs to be obtained, namely the target audio and video data, to read the target audio and video data stored in the audio and video buffer by reference calling. Since the data in the present application is obtained by reference, no additional memory is allocated; after the current consumer module has used the data, the data reference can be released, which greatly reduces memory usage.
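A minimal sketch of the handle-plus-reference reading described above: the read handle carries the read attribute information (read mode and business module) and acts as the consumer's unique identifier, while frames are handed out as zero-copy references into the shared buffer. All names are illustrative assumptions; Python's `memoryview` merely stands in for the SDK's data-reference mechanism.

```python
import itertools

_next_handle_id = itertools.count(1)

class ReadHandle:
    """Unique identifier a consumer module uses to read from the buffer."""

    def __init__(self, mode, module):
        self.id = next(_next_handle_id)  # unique identifier
        self.mode = mode                 # read mode information
        self.module = module             # read business module information
        self.pos = 0                     # this consumer's read position

shared_buffer = bytearray(b"frame0frame1")  # shared audio/video buffer

def read_frame(handle, frame_len=6):
    """Hand out a zero-copy reference (memoryview) to the next frame."""
    view = memoryview(shared_buffer)[handle.pos:handle.pos + frame_len]
    handle.pos += frame_len
    return view

h = ReadHandle(mode="frame_by_frame", module="cloud_storage")
ref = read_frame(h)
print(bytes(ref))  # -> b'frame0'
ref.release()      # release the reference after use; no frame copy was made
```

Because each handle keeps its own read position, several consumer modules can read the same shared buffer independently without any per-consumer copy of the data.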
To explain the technical solution of the present application more clearly, the present application also provides an illustrative, complete system-level embodiment of the audio and video buffer reading processing method.
A terminal IPC hardware manufacturer integrates the VisionLink SDK; the hardware obtains already-encoded H264 or H265 data and audio PCM data, including the timestamp, frame type, data content, and corresponding length, and passes these parameters into the SDK's audio and video ring buffer. The buffer internally checks whether it is full; if the current buffer is full, frame dropping is performed: the oldest GOP data is discarded and the latest frame data is written to the tail of the ring buffer. The write path of the buffer loops through this logic;
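The full-buffer frame-dropping logic of this write path (discard the oldest GOP, keep writing the newest frame at the tail) can be sketched as follows; the capacity and frame representation are illustrative assumptions.

```python
from collections import deque

CAPACITY = 6                     # illustrative ring-buffer capacity
ring = deque()                   # each entry: (is_keyframe, payload)

def write_frame(frame):
    """Write one frame; when the buffer is full, drop the oldest GOP first."""
    if len(ring) >= CAPACITY:
        ring.popleft()                  # drop the oldest key frame ...
        while ring and not ring[0][0]:  # ... and its dependent frames
            ring.popleft()
    ring.append(frame)                  # newest frame goes to the tail

gop1 = [(True, "I1"), (False, "P1"), (False, "P2")]
gop2 = [(True, "I2"), (False, "P3"), (False, "P4")]
for f in gop1 + gop2:
    write_frame(f)
write_frame((True, "I3"))      # buffer is full, so gop1 is discarded
print([p for _, p in ring])    # -> ['I2', 'P3', 'P4', 'I3']
```

Dropping a whole GOP rather than a single frame keeps the buffer decodable: inter-coded frames are never left behind without the key frame they depend on.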
The business modules that read data include cloud storage upload, card recording, streaming media forwarding, and P2P point-to-point live streaming. Each creates its own read handle according to the mode it needs to read in, and then obtains the buffer contents through the handle by referencing the original data. After acquisition the data is used directly, with no additional memory overhead, but the original data content cannot be modified. After use, the reference is released, and acquisition repeats in this way until the owning business module closes the corresponding read handle.
As can be seen from the technical solution provided above, the solution of the present application improves the data read/write method of the audio and video buffer: each consumer module creates its own handle for reading data, and the read handle is used as a unique identifier to read data from the audio and video buffer by data reference for business processing. Since the data in this solution is obtained by reference, no additional memory is allocated, and after the current consumer module has used the data, the data reference can be released. Moreover, because a handle occupies very little space (generally the same as the width of an integer on the current system, for example 4 bytes on a 32-bit system), even if the data in the audio and video buffer is referenced by multiple read handles at the same time, the resulting resource consumption is far smaller than that of the old architecture. Therefore, the buffer reading method provided by the present application can support the operation of multiple business function modules with lower hardware resources and without a large increase in memory consumption, thereby solving the technical problem of poor operating stability of existing video networking audio and video devices caused by scarce memory resources.
The above is a detailed description of one embodiment of the audio and video buffer reading processing method provided by the present application; the following is a detailed description of another embodiment of the method.
Referring to Figure 4, on the basis of the first embodiment above, a second embodiment of the present application provides an audio and video buffer reading processing method, including:
Further, the process of step 102, in which the target audio and video data to be obtained is read from the audio and video buffer by reference calling in combination with the read handle, specifically includes:
Step 1021: According to the target audio and video data to be obtained, in combination with the read handle, read the single-frame data of the target audio and video data from the audio and video buffer one frame at a time by reference calling. After a single frame has been used, release the reference to that frame and then read the remaining single-frame data, until all the target audio and video data has been read.
In some embodiments, the process of step 102 may also include:
Step 1022: According to the target audio and video data to be obtained, in combination with the read handle, read the target audio and video data from the audio and video buffer in one go by reference calling.
It should be noted that both step 1021 and step 1022 refine the step in which a business function module obtains audio and video data from the audio and video buffer. Step 1021 provides a frame-by-frame reference method: according to the created read handle, the single-frame data of the target audio and video data is read by reference calling in a single-frame loop; after a frame has been used, its reference can be released and the next frame read, and this cycle continues until all frames of the target audio and video data have been read. Step 1022 provides a one-time reference method: for example, based on the created read handles, with multiple read handles corresponding one-to-one to the frames of the target audio and video data, the target audio and video data is read in one go by reference calling.
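The two reading modes of steps 1021 and 1022 can be contrasted with a small sketch; `frames` stands in for the target data in the buffer, and all names are illustrative assumptions.

```python
frames = [b"f0", b"f1", b"f2"]   # stand-in for the target data in the buffer

def read_frame_by_frame(process):
    """Step 1021: take one frame reference, use it, release it, repeat."""
    for frame in frames:   # one single-frame reference at a time
        process(frame)     # use the single frame
        del frame          # release the reference before the next read

def read_all_at_once():
    """Step 1022: obtain references to every frame in a single call."""
    return list(frames)    # references to all frames; payloads are not copied

seen = []
read_frame_by_frame(seen.append)
print(seen)                  # -> [b'f0', b'f1', b'f2']
print(read_all_at_once())    # -> [b'f0', b'f1', b'f2']
```

Frame-by-frame reading keeps at most one outstanding reference per consumer, while the one-time mode holds references to every frame at once, which is why the former is generally preferred and the latter reserved for small targets such as a few key frames.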
Of the above two reading methods, the frame-by-frame reference method of step 1021 is generally preferred; however, in some special application scenarios, for example when the target audio and video data to be obtained is only a small number of key frames, the one-time reference method of step 1022 can also be used.
Further, after the target audio and video data has been used, the method also includes:
Step 103: Destroy and release the read handle.
It should be noted that after the current business function module has consumed the data, the read handle can be destroyed, thereby releasing the resources occupied by the read handle.
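The handle lifecycle ending in step 103 can be sketched as follows; the registry and class names are illustrative assumptions. The point is that destroying the handle is what returns its (small) resources to the system.

```python
open_handles = {}                      # registry of live read handles

class ManagedReadHandle:
    _next_id = 1

    def __init__(self, module):
        self.id = ManagedReadHandle._next_id
        ManagedReadHandle._next_id += 1
        self.module = module
        open_handles[self.id] = self   # resource is held while handle lives

    def destroy(self):
        """Step 103: destroy and release the read handle."""
        open_handles.pop(self.id, None)

h = ManagedReadHandle("card_recording")
print(len(open_handles))  # -> 1
# ... read and use the target audio/video data via h ...
h.destroy()
print(len(open_handles))  # -> 0
```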
The above is a detailed description of another embodiment of the audio and video buffer reading processing method provided by the present application; the following is a detailed description of one embodiment of an audio and video buffer reading processing device provided by the present application.
Referring to Figure 5, a third embodiment of the present application provides an audio and video buffer reading processing device, including:
a read handle generation unit 201, configured to generate, in response to an audio and video data read instruction, a read handle according to the read attribute information corresponding to the audio and video data read instruction;
a reference acquisition unit 202, configured to read, according to the target audio and video data to be obtained and in combination with the read handle, the target audio and video data stored in the audio and video buffer by reference calling, wherein the audio and video data in the audio and video buffer is obtained by preprocessing the original audio and video data collected by the video networking audio and video device.
Further, the reference acquisition unit 202 is specifically configured to:
read, according to the target audio and video data to be obtained and in combination with the read handle, the single-frame data of the target audio and video data sequentially from the audio and video buffer by reference calling; after a single frame has been used, read the remaining single-frame data, until all the target audio and video data has been read.
Further, the reference acquisition unit 202 is specifically configured to:
read, according to the target audio and video data to be obtained and in combination with the read handle, the target audio and video data from the audio and video buffer in one go by reference calling.
Further, the device also includes:
a handle destruction unit 203, configured to destroy and release the read handle.
Further, the read attribute information includes: read mode information and read business module information.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the terminals, devices, and units described above can refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed terminals, devices, and methods can be implemented in other ways. For example, the device embodiments described above are only illustrative; for example, the division of the units is only a logical functional division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the couplings or direct couplings or communication connections shown or discussed between each other may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The terms "first", "second", "third", "fourth", etc. (if present) in the specification of the present application and in the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present application described here can, for example, be implemented in orders other than those illustrated or described here. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to such processes, methods, products, or devices.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place, or they may be distributed over multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to cause a computer device (which can be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.
The above embodiments are only used to illustrate the technical solution of the present application and not to limit it. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

  1. An audio and video buffer reading processing method, characterized by comprising:
    in response to an audio and video data read instruction, generating a read handle according to the read attribute information corresponding to the audio and video data read instruction;
    according to the target audio and video data to be obtained, in combination with the read handle, reading, by reference calling, the target audio and video data stored in the audio and video buffer, wherein the audio and video data in the audio and video buffer is obtained by preprocessing the original audio and video data collected by the video networking audio and video device.
  2. The audio and video buffer reading processing method according to claim 1, characterized in that reading the target audio and video data stored in the audio and video buffer, according to the target audio and video data to be obtained, in combination with the read handle and by reference calling, specifically comprises:
    according to the target audio and video data to be obtained, in combination with the read handle, reading the single-frame data of the target audio and video data from the audio and video buffer one frame at a time by reference calling; after a single frame has been used, releasing the reference to that frame and then reading the remaining single-frame data, until all the target audio and video data has been read.
  3. The audio and video buffer reading processing method according to claim 1, characterized in that reading the target audio and video data stored in the audio and video buffer, according to the target audio and video data to be obtained, in combination with the read handle and by reference calling, specifically comprises:
    according to the target audio and video data to be obtained, in combination with the read handle, reading the target audio and video data from the audio and video buffer in one go by reference calling.
  4. The audio and video buffer reading processing method according to claim 2 or 3, characterized in that, after the target audio and video data has been used, the method further comprises:
    destroying and releasing the read handle.
  5. The audio and video buffer reading processing method according to claim 1, characterized in that the read attribute information comprises: read mode information and read business module information.
  6. An audio and video buffer reading processing device, characterized by comprising:
    a read handle generation unit, configured to generate, in response to an audio and video data read instruction, a read handle according to the read attribute information corresponding to the audio and video data read instruction;
    a reference acquisition unit, configured to read, according to the target audio and video data to be obtained and in combination with the read handle, the target audio and video data stored in the audio and video buffer by reference calling, wherein the audio and video data in the audio and video buffer is obtained by preprocessing the original audio and video data collected by the video networking audio and video device.
  7. The audio and video buffer reading processing device according to claim 6, characterized in that the reference acquisition unit is specifically configured to:
    read, according to the target audio and video data to be obtained and in combination with the read handle, the single-frame data of the target audio and video data sequentially from the audio and video buffer by reference calling; after a single frame has been used, read the remaining single-frame data, until all the target audio and video data has been read.
  8. The audio and video buffer reading processing device according to claim 6, characterized in that the reference acquisition unit is specifically configured to:
    read, according to the target audio and video data to be obtained and in combination with the read handle, the target audio and video data from the audio and video buffer in one go by reference calling.
  9. The audio and video buffer reading processing device according to claim 7 or 8, characterized by further comprising:
    a handle destruction unit, configured to destroy and release the read handle.
  10. The audio and video buffer reading processing device according to claim 6, characterized in that the read attribute information comprises: read mode information and read business module information.
PCT/CN2023/117434 2022-09-23 2023-09-07 Audio and video buffer reading processing method and device WO2024061005A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211164627.1 2022-09-23
CN202211164627.1A CN115547367A (zh) 2022-09-23 2022-09-23 Audio and video buffer reading processing method and device

Publications (1)

Publication Number Publication Date
WO2024061005A1 true WO2024061005A1 (zh) 2024-03-28

Family

ID=84730226

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/117434 WO2024061005A1 (zh) 2022-09-23 2023-09-07 Audio and video buffer reading processing method and device

Country Status (2)

Country Link
CN (1) CN115547367A (zh)
WO (1) WO2024061005A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115547367A (zh) * 2022-09-23 2022-12-30 天翼数字生活科技有限公司 一种音视频缓冲区读取处理方法及装置
CN115801747B (zh) * 2023-01-11 2023-06-02 厦门简算科技有限公司 一种基于arm架构的云服务器及音视频数据传输方法

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110061052A1 (en) * 2009-09-03 2011-03-10 Ibm Corporation Method and system using a temporary object handle
CN102508713A (zh) * 2011-10-12 2012-06-20 杭州华三通信技术有限公司 进程启动方法及内核、进程
CN102841785A (zh) * 2011-06-24 2012-12-26 奇智软件(北京)有限公司 一种文件句柄关闭操作的方法及装置
WO2014094247A1 (en) * 2012-12-19 2014-06-26 Intel Corporation Processing video content
CN105404469A (zh) * 2015-10-22 2016-03-16 浙江宇视科技有限公司 一种视频数据的存储方法和系统
CN105828017A (zh) * 2015-10-20 2016-08-03 广东亿迅科技有限公司 一种面向视频会议的云存储接入系统及方法
CN106649580A (zh) * 2016-11-17 2017-05-10 任子行网络技术股份有限公司 用于海量日志查询的流式数据处理方法和系统
CN111901614A (zh) * 2020-06-22 2020-11-06 深圳市沃特沃德股份有限公司 多平台同步直播方法、装置、计算机设备和可读存储介质
CN112218115A (zh) * 2020-09-25 2021-01-12 深圳市捷视飞通科技股份有限公司 流媒体音视频同步的控制方法、装置、计算机设备
CN115547367A (zh) * 2022-09-23 2022-12-30 天翼数字生活科技有限公司 一种音视频缓冲区读取处理方法及装置

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110061052A1 (en) * 2009-09-03 2011-03-10 Ibm Corporation Method and system using a temporary object handle
CN102841785A (zh) * 2011-06-24 2012-12-26 奇智软件(北京)有限公司 一种文件句柄关闭操作的方法及装置
CN102508713A (zh) * 2011-10-12 2012-06-20 杭州华三通信技术有限公司 进程启动方法及内核、进程
WO2014094247A1 (en) * 2012-12-19 2014-06-26 Intel Corporation Processing video content
CN105828017A (zh) * 2015-10-20 2016-08-03 广东亿迅科技有限公司 一种面向视频会议的云存储接入系统及方法
CN105404469A (zh) * 2015-10-22 2016-03-16 浙江宇视科技有限公司 一种视频数据的存储方法和系统
CN106649580A (zh) * 2016-11-17 2017-05-10 任子行网络技术股份有限公司 用于海量日志查询的流式数据处理方法和系统
CN111901614A (zh) * 2020-06-22 2020-11-06 深圳市沃特沃德股份有限公司 多平台同步直播方法、装置、计算机设备和可读存储介质
CN112218115A (zh) * 2020-09-25 2021-01-12 深圳市捷视飞通科技股份有限公司 流媒体音视频同步的控制方法、装置、计算机设备
CN115547367A (zh) * 2022-09-23 2022-12-30 天翼数字生活科技有限公司 一种音视频缓冲区读取处理方法及装置

Also Published As

Publication number Publication date
CN115547367A (zh) 2022-12-30

Similar Documents

Publication Publication Date Title
WO2024061005A1 (zh) 一种音视频缓冲区读取处理方法及装置
CN100517306C (zh) 媒体基础媒体处理器
US11100956B2 (en) MP4 file processing method and related device
CN102664967A (zh) 跨平台的个人信息交互方法和系统及后台服务器
CN109840879B (zh) 图像渲染方法、装置、计算机存储介质及终端
CN108200447A (zh) 直播数据传输方法、装置、电子设备、服务器及存储介质
CN109889894A (zh) 媒体文件解码方法、装置及存储介质
CN111954072B (zh) 一种多媒体播放方法、装置、多媒体播放器和介质
CN107402782A (zh) 一种用于在直播软件中加载插件的方法及装置
CN110662017A (zh) 一种视频播放质量检测方法和装置
CN105578224A (zh) 一种多媒体数据的获取方法、装置、智能电视及机顶盒
JP4992568B2 (ja) クライアント装置、データ処理方法およびそのプログラム
CN112486831A (zh) 一种测试系统、方法、电子设备及存储介质
WO2023077866A1 (zh) 多媒体数据处理方法、装置、电子设备及存储介质
CN115242787B (zh) 消息处理系统及方法
CN201663666U (zh) 网络视频装置
CN109922316A (zh) 媒体资源调度及媒体资源管理方法、装置和电子设备
CN111090818A (zh) 资源管理方法、资源管理系统、服务器及计算机存储介质
CN112541391A (zh) 一种基于考试视频分析的违规行为识别方法与系统
CN104333765A (zh) 一种视频直播流的处理方法及处理装置
CN114461595A (zh) 发送消息的方法、装置、介质和电子设备
CN104219538B (zh) 一种音视频实时采集上传及数据处理方法及系统
CN101711479A (zh) 用于创建内容的方法,用于跟踪内容使用行动的方法、和相应的终端和信号
WO2023174040A1 (zh) 一种图片的处理方法及相关设备
CN112243135B (zh) 一种多媒体播放的方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23867296

Country of ref document: EP

Kind code of ref document: A1