WO2023015807A1 - Digital retina data transmission method, apparatus, electronic device and storage medium - Google Patents

Digital retina data transmission method, apparatus, electronic device and storage medium Download PDF

Info

Publication number
WO2023015807A1
WO2023015807A1 (PCT/CN2021/139114, CN2021139114W)
Authority
WO
WIPO (PCT)
Prior art keywords
video
feature
priority
stream
sending
Prior art date
Application number
PCT/CN2021/139114
Other languages
English (en)
French (fr)
Inventor
焦立欣
张羿
滕波
洪一帆
王琪
周东东
陆嘉瑶
Original Assignee
浙江智慧视频安防创新中心有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江智慧视频安防创新中心有限公司 filed Critical 浙江智慧视频安防创新中心有限公司
Publication of WO2023015807A1 publication Critical patent/WO2023015807A1/zh


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/238: Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/23418: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N 21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/262: Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists

Definitions

  • the present application relates to the field of video technology, in particular to a digital retina data transmission method, device, electronic equipment and storage medium.
  • a joint optimization function can be designed to calculate the respective allocated code rates of the compressed video data and visual feature data, in order to take into account both video compression loss and feature recognition accuracy.
  • the inventors found that when the data transmission bandwidth changes drastically and the data transmission has real-time requirements, such as in the application scenario of vehicle-to-vehicle communication under wireless transmission, it is not enough to just allocate bandwidth to video data and visual feature data.
  • the prior art does not solve the problem of how to send video data and feature data according to a certain priority when there is congestion in sending video data and feature data within a short period of time (for example, within a few milliseconds).
  • the embodiment of the present application proposes a digital retina data transmission method, device, electronic equipment and computer-readable storage medium to solve the problem in the prior art of how to send video data and feature data according to a certain priority when their transmission is congested within a short period of time.
  • the first aspect of the embodiment of the present application provides a digital retina data transmission method, including:
  • the analysis task corresponds to one or more feature streams; wherein, the feature streams correspond to one or more priorities.
  • the way of setting the real-time requirement includes: at least one of setting in advance, setting during video transmission establishment, and setting during video transmission.
  • the method also includes:
  • the sending priority of the feature stream or video stream is adjusted based on the real-time requirement or priority information fed back by the data receiving end.
  • the method includes:
  • adaptively adjusting the sending priority of the feature stream or the video stream based on the RF channel condition includes:
  • the sending priority of the feature stream or the video stream is readjusted according to another priority allocation scheme.
  • the RF channel quality metric includes: at least one of a signal-to-noise ratio, a signal-to-interference-plus-noise ratio, and a block error rate.
  • the video feature content includes: at least one of the time between frames, the number of regions of interest per unit time, and the Hamming distance.
  • adaptively adjusting the sending priority of the feature stream or the video stream based on the video feature content includes:
  • the second aspect of the embodiment of the present application provides a digital retinal data transmission device, including:
  • a digital retinal data transmission device characterized in that it comprises:
  • the sending end module is used to collect video information, determine the sending priority of each feature stream and video stream based on the real-time requirements of each video analysis application, and send data based on the priority.
  • the sending end module also includes:
  • a video processing function module configured to perform data compression and video feature processing on the collected data; wherein, the video feature processing includes: video feature extraction and video feature compression;
  • the data sending scheduling module is used to determine a feature stream and video stream sending priority based on the real-time requirements of each video analysis application; wherein, the real-time requirement is the maximum allowable delay requirement, which is determined according to different applications.
  • the device also includes:
  • the receiving end module is used to send feedback information related to the maximum delay requirement or priority to the sending end through the feedback channel based on the current video analysis application needs, the current video content or the status of the video characteristic content, so as to adjust the sending end priority.
  • the real-time requirement of each video analysis application comes from at least one receiving end module.
  • the sending end module also includes:
  • the video collection module is used for collecting video information to be transmitted.
  • the sending end module also includes:
  • the data sending module is used for sending feature data and video data based on priority.
  • the receiver module also includes:
  • the data receiving module is used for receiving feature data and video data.
  • a third aspect of the embodiments of the present application provides an electronic device, including:
  • the memory is connected in communication with the one or more processors, the memory stores instructions executable by the one or more processors, and when the instructions are executed by the one or more processors, the electronic device implements the methods described in the foregoing embodiments.
  • the fourth aspect of the embodiments of the present application provides a computer-readable storage medium on which computer-executable instructions are stored.
  • when the computer-executable instructions are executed by a computing device, they can be used to implement the methods described in the foregoing embodiments.
  • a fifth aspect of the embodiments of the present application provides a computer program product, the computer program product includes a computer program stored on a computer-readable storage medium, the computer program includes program instructions, and when the program instructions are executed by the computer, they can be used to implement the methods described in the foregoing embodiments.
  • at least one feature stream and its associated video stream share a priority and are sent accordingly, which solves the problem of bandwidth allocation in the multi-stream transmission of feature streams and video streams.
  • Fig. 1 is a schematic flowchart of a digital retinal data transmission method according to some embodiments of the present application
  • Fig. 2 is a schematic structural diagram of a sending end module according to some embodiments of the present application.
  • Fig. 3 is a schematic structural diagram of a receiver module according to some embodiments of the present application.
  • Fig. 4 is a schematic diagram of a logical structure of an electronic device according to some embodiments of the present application.
  • Fig. 5 is a schematic structural diagram of a general-purpose computer node according to some embodiments of the present application.
  • a joint optimization function can be designed to calculate the size of the respective allocated code rates of the compressed video data and visual feature data, to take into account both video compression loss and feature recognition accuracy.
  • the embodiment of the present application provides a digital retina data transmission method: the sending priority of each feature stream and video stream is determined based on the real-time requirements of each video analysis task, the priority of the associated video data is adjusted based on the feature data priority (and vice versa), and the feature stream and part of the content of the associated video stream jointly share one priority and are sent.
  • the data transmission method includes:
  • S101 Determine the sending priority of at least one feature stream or video stream based on the real-time requirements of each video analysis task; wherein the real-time requirements include maximum allowable delay requirements, which are determined according to different applications.
  • the sending priority of each feature stream and video stream is determined based on the real-time requirement of each video analysis task, and the real-time requirement or priority can be preset. It is also possible for the data receiving end to feed back its desired real-time requirement or priority information through the feedback channel during the video transmission establishment and video transmission process.
  • the (transmission device) determines the priority based on multiple real-time requirements sent by multiple receiving ends. For example, set the priority according to the maximum delay requirement from small to large. It is also possible to further set differentiated priorities according to different data receiving ends. For example, a certain data receiving end user has a high priority, and the compressed video and video characteristic data sent to this receiving end have a higher sending priority than data sent to other receiving ends.
  • each video analysis task may correspond to one feature stream or multiple feature streams, and multiple feature streams may correspond to the same priority or different priorities.
  • the real-time requirement setting methods include: presetting, setting during video transmission establishment, and setting during video transmission process.
  • the embodiments of the present application can implement flexible, free and controllable determination of the priorities of video streams and feature streams.
  • the solution to the multi-stream transmission problem is to design a joint optimization function, based on the size of the code streams, to calculate the respective allocated code rates of the compressed video data and visual feature data, so as to take into account both video compression loss and feature recognition accuracy.
  • the data transmission bandwidth changes drastically and the data transmission has real-time requirements, such as in the application scenario of vehicle-to-vehicle communication under wireless transmission, it is not enough to only allocate bandwidth for video data and visual feature data.
  • the embodiment of the present application adjusts the priority of the associated video data based on the priority of the feature data, or adjusts the priority of the associated feature data based on the priority of the video data, which can greatly speed up the solution of the multi-stream transmission problem, simplifies the steps, and takes into account both video compression loss and feature recognition accuracy.
  • the feature stream and part of the content of the associated video stream jointly share one priority and are sent, which relieves the pressure on bandwidth allocation, improves efficiency, and strengthens the adaptability when there is congestion in sending video data and feature data within a short period of time (for example, within a few milliseconds).
  • the method in FIG. 1 solves the problem of how to send feature streams and video streams according to a certain priority in the case of transmission congestion. Specifically, the priority is adaptively adjusted based on the video feature content, and the sending priority of each feature stream and video stream is determined based on the real-time requirements of each video analysis application; compared with the existing technology, the relationship between feature streams and video streams is taken into account, so that the corresponding video data and feature data are linked. Therefore, the problems in the prior art can be partially or completely solved.
  • the real-time requirement setting methods include: presetting, setting during video transmission establishment, and setting during video transmission. Real-time requirements are more flexible and adaptable, and can be applied in a wider range of conditions.
  • the method based on FIG. 1 further includes: adjusting the data sending priority based on the real-time requirement or priority information fed back by the data receiving end. Specifically, it also includes: adaptively adjusting priority based on RF channel (radio frequency channel) conditions; adaptively adjusting priority based on video characteristic content.
  • when the RF channel condition changes beyond a certain threshold, the priority is re-adjusted based on the latest real-time requirements; or, when the RF channel quality falls below a certain threshold, the priority is re-adjusted according to another priority allocation scheme.
  • the RF channel quality metrics include: signal-to-noise ratio, signal-to-interference-plus-noise ratio, block error rate, and so on.
  • the visual feature content may include many types, such as: inter-frame time, the number of regions of interest per unit time, Hamming distance, Euclidean distance, and the like.
  • the adaptive adjustment of the priority based on the content of the visual feature includes: adjusting the priority based on motion information other than the visual feature; adjusting the priority based on a change in the video feature exceeding a certain threshold.
  • a digital retinal data transmission method is provided, and correspondingly, the present application also provides a digital retinal data transmission device.
  • the digital retinal data transmission device provided in the embodiment of the present application can implement the above-mentioned digital retinal data transmission method, and the digital retinal data transmission device can be realized by software, hardware or a combination of software and hardware.
  • the digital retinal data transmission device may include integrated or separate functional modules or units to perform corresponding steps in the above-mentioned methods. Since the device embodiments are basically similar to the method embodiments, the description is relatively simple. For related information, please refer to the description of the method embodiments.
  • the device embodiments described below are only illustrative.
  • the embodiment of the present application provides a digital retinal data transmission device, including: a sending end module, used to collect video information, determine the sending priority of each feature stream and video stream based on the real-time requirements of each video analysis application, and send the feature streams and video streams according to priority.
  • Fig. 2 is a schematic structural diagram of a sending end module according to an embodiment of the present application. As shown in Fig. 2, the sending end module 200 includes a video collection module 201, a video processing function module 202, a data sending scheduling module 203 and a data sending module 204; wherein,
  • the video collection module 201 is used to collect the video information to be transmitted; the video information is the video information collected and sent to the video processing functional unit for processing such as data compression and video feature processing;
  • the video processing functional module 202 is used to perform processes such as data compression and video feature processing on the collected data; wherein, the video feature processing includes: video feature extraction and video feature compression;
  • the data transmission scheduling module 203 is used to determine the priority of feature stream and video stream transmission based on the real-time requirements of each video analysis application; wherein, the real-time requirement is the maximum allowable delay requirement, which is determined according to different applications;
  • a data sending module 204 configured to send video data and feature data.
  • the above-mentioned modules may be implemented on the same physical entity, or may be implemented through multiple physically separated physical entities.
  • the video feature processing module includes: a video feature extraction module, a video feature compression module, a summary video stream generation module, and a summary video stream compression module.
  • the video features include any one or more of visual features and machine learning features.
  • the video processing function module can support multiple video analysis (including video retrieval) applications, and a video analysis application may include multiple video features; the compressed video data and video feature data are sent to the data sending scheduling module, and the data sending scheduling module sends them out through the data sending unit according to a certain priority; the data may be sent through a wired network, such as an Ethernet interface, or through a WIFI network or a cellular communication network such as 4G/5G.
  • the smart camera installed on the car collects surrounding video data, on the basis of which video features such as visual features and deep learning features can be further extracted; the visual features include information about color, pattern, texture and grayscale in the video frames, and the deep learning features can include information such as pedestrian recognition, license plate recognition, traffic accidents and traffic sign recognition. Video features can be exchanged between vehicles or through roadside equipment, and can also be transmitted to the cloud through networks such as WIFI and 4G/5G.
  • the video feature information that can reflect the traffic situation may need to be transmitted to the relevant vehicles immediately.
  • even different video features under the same video analysis application may have different real-time requirements. In the field of daily video surveillance, in most cases real-time video feature stream transmission is not needed and the features serve only post-event storage and retrieval; in a vehicle networking application, by contrast, video features reflecting hazard-avoidance information such as a road collapse have much higher real-time requirements than feature information that accurately identifies a license plate number. Considering that in an actual transmission network, especially a wireless transmission network, the wireless channel is easily interfered with and short-term or long-term transmission congestion easily occurs, based on the present invention the data sending scheduling module determines the sending priority of each feature stream and video stream based on the real-time requirements of each video analysis application.
  • the real-time requirement may be a maximum allowable delay requirement; the maximum allowable delay depends on the application. For example, in the field of video surveillance, if the video feature stream mainly serves to improve the convenience of post-event video retrieval, the maximum allowable delay can be set to several hours or even several days; the scheduling unit can then implement a "best effort" transmission mode and only send these feature streams when there is no other data to be sent.
  • in a video analysis application aimed at real-time road condition sharing in vehicle networking, the real-time requirement of the video feature stream is very high and should be set at the millisecond level; when multiple applications coexist, the feature streams of the video analysis applications with high real-time requirements are sent with high priority.
  • the compressed video data stream should also be given an appropriate priority; for a video surveillance data stream, for example, a second-level real-time requirement is sufficient in most cases.
  • the priority information or the maximum delay requirement information is information transmitted from the video processing module to the data sending scheduling module along with the video feature data and video compression data.
  • the one video analysis application may correspond to multiple feature streams, and the multiple feature streams share a unified priority.
  • the priority information or the maximum delay requirement is different according to different feature types under different applications, and is also transmitted from the video processing unit to the data sending scheduling unit along with the video feature data and video compression data.
  • the setting of the priority information or the maximum delay requirement is applied to one or several video clips related to a specific feature data type.
  • the real-time requirement or the priority can be preset, or the data receiving end can feed back its desired real-time requirement or priority information through a feedback channel during video transmission establishment or during the video transmission process; when the real-time requirements of each video analysis application come from at least one data receiving end, the transmission device determines the priority by combining the multiple real-time requirements sent by multiple receiving ends.
  • typically, the priority can be set in ascending order of the maximum delay requirement, and differentiated priorities can further be set according to the different data receiving ends.
  • Fig. 3 is a schematic structural diagram of a receiving end module according to another embodiment of the present application.
  • the receiving end module 300 includes a data receiving module 301, a video processing module 302, and a video feature processing module 303; wherein,
  • the data receiving module 301 is configured to receive video data and feature data;
  • the video processing module 302 is configured to process compressed video data;
  • the video feature processing module 303 is configured to process video feature data.
  • the data receiving module can receive video data and feature data via wired or wireless (WIFI, cellular network, etc.).
  • a receiving end can simultaneously receive and process compressed video data and three different types of video feature data.
  • the video processing module is mainly used for video decoding processing, and optionally further displaying, storing and so on.
  • the video feature processing module is configured to process three video features respectively, including feature data decoding, feature matching, and the like. It is easy to know that different video features may have a unified encoding and decoding and/or feature matching manner, or may adopt independent encoding and decoding and/or feature matching manners.
  • the receiving end module can send information related to the maximum delay requirement or priority to the sending end through the feedback channel based on the needs of the current video analysis application, or based on the status of the current video content and video feature content, so that the sending end can adjust its priorities accordingly; the feedback can be sent during the establishment of the video transmission connection, or during the data transmission period.
  • in addition to compressed video data and video feature data, the receiving end module can also receive other types of data, and feed back information related to the maximum delay requirement or priority of that other data to the sending end through the feedback channel.
  • the electronic device 400 includes:
  • memory 430 and one or more processors 410;
  • the memory 430 is connected in communication with the one or more processors 410, the memory 430 stores a program 432 executable by the one or more processors, and the program 432 is executed by the one or more processors 410, so that the one or more processors 410 execute the methods in the foregoing embodiments of the present application.
  • the processor 410 and the memory 430 may be connected through a bus or in other ways, and the connection through the bus 440 is taken as an example in FIG. 4 .
  • the processor 410 may be a central processing unit (Central Processing Unit, CPU).
  • the processor 410 can also be other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate array (Field-Programmable Gate Array, FPGA) or Other chips such as programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or combinations of the above-mentioned types of chips.
  • the memory 430 can be used to store non-transitory software programs, non-transitory computer executable programs and modules, such as the cascaded progressive network in the embodiment of the present application.
  • the processor 410 executes various functional applications and data processing of the processor by running non-transitory software programs, programs 432 and functional modules stored in the memory 430 .
  • the memory 430 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created by the processor 410 and the like.
  • the memory 430 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage devices.
  • the memory 430 may optionally include memory located remotely from the processor 410 , and these remote memories may be connected to the processor 410 through a network (such as through the communication interface 420 ). Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • An embodiment of the present application provides a computer-readable storage medium, wherein computer-executable instructions are stored in the computer-readable storage medium, and each step in the foregoing method embodiments is executed after the computer-executable instructions are executed.
  • program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types.
  • program modules may be located in both local and remote memory storage devices.
  • the functions described above are realized in the form of software function units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application is essentially or the part that contributes to the prior art or the part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, including Several instructions are used to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the technical solutions of the present application can be implemented and/or propagated through at least one general-purpose computer node 510 as shown in FIG. 5 .
  • in FIG. 5, a general-purpose computer node 510 includes: a computer system/server 512, peripherals 514, and a display device 516; wherein, the computer system/server 512 includes a processing unit 520, an input/output interface 522, a network adapter 524, and a memory 530, which usually exchange data through an internal bus.
  • the memory 530 is usually composed of various storage devices, such as a RAM (Random Access Memory) 532, a cache 534 and a storage system 536 (generally composed of one or more large-capacity non-volatile storage media).
  • the program 540 realizing part or all of the functions of the technical solution of the present application is stored in the memory 530, usually in the form of a plurality of program modules 542.
  • the aforementioned computer-readable storage medium includes physical volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules, or other data.
  • computer-readable storage media specifically include, but are not limited to, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), flash memory or other solid-state memory technology, a CD-ROM, a digital versatile disc (DVD), an HD-DVD, a Blu-Ray or other optical storage device, magnetic tape, disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • the present application proposes a digital retinal data transmission method, device, electronic equipment and storage medium.
  • the embodiment of the present application adjusts or determines the priority of the associated video data based on the priority of the feature data, adjusts or determines the priority of the associated feature data based on the priority of the video data, and at the same time has the feature stream and part of the content of the associated video stream jointly share one priority and be sent, which solves the problem of bandwidth allocation in the multi-stream transmission of feature streams and video streams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present application discloses a digital retina data transmission method, apparatus, electronic device and storage medium. The method includes: determining the sending priority of at least one feature stream or video stream based on the real-time requirements of each video analysis task, where the real-time requirements include a maximum allowable delay requirement determined according to the application; adjusting or determining the sending priority of an associated video stream based on the determined sending priority of a feature stream, and/or adjusting or determining the sending priority of an associated feature stream based on the determined sending priority of a video stream; and having at least one feature stream and the associated video stream share one priority and sending them. Based on the real-time requirements of each video analysis task, the present application sends feature streams and video streams according to priority, which solves the bandwidth allocation problem in the multi-stream transmission of feature streams and video streams.

Description

Digital retina data transmission method, apparatus, electronic device and storage medium

Technical Field
The present application relates to the field of video technology, and in particular to a digital retina data transmission method, apparatus, electronic device and storage medium.
Background
Since the concept of the digital retina was proposed, it has attracted considerable attention in fields such as video coding and video surveillance. An important characteristic of digital retina technology is the simultaneous dual-stream or multi-stream transmission of a video stream and a visual feature stream, and even a summary video stream, which facilitates video retrieval, video analysis and storage.
However, multi-stream transmission also has problems that need to be solved; for example, multiple data streams sharing limited transmission resources can cause conflicts. This problem is particularly pronounced in wireless transmission scenarios with limited bandwidth and strict transmission delay requirements where real-time video analysis applications are present.
In the prior art, there are some solutions to the bandwidth usage conflict of the digital retina during multi-stream transmission. For example, based on the size of the code streams, a joint optimization function can be designed to calculate the bit rates allocated to the compressed video data and the visual feature data respectively, so as to take into account both video compression loss and feature recognition accuracy.
However, in the process of implementing the technical solutions related to the embodiments of the present application, the inventors found that when the data transmission bandwidth changes drastically and the data transmission has real-time requirements, for example in the application scenario of vehicle-to-vehicle communication over wireless transmission, merely allocating bandwidth to video data and visual feature data is not enough. The prior art does not solve the problem of how to send video data and feature data according to a certain priority when their transmission is congested within a very short period of time (for example, within a few milliseconds).
Summary
In view of the above technical problems in the prior art, the embodiments of the present application propose a digital retina data transmission method, apparatus, electronic device and computer-readable storage medium, to solve the problem in the prior art of how to send video data and feature data according to a certain priority when their transmission is congested within a very short period of time.
A first aspect of the embodiments of the present application provides a digital retina data transmission method, including:
determining the sending priority of at least one feature stream or video stream based on the real-time requirements of each video analysis task, where the real-time requirements include a maximum allowable delay requirement determined according to the application;
adjusting or determining the sending priority of an associated video stream based on the determined sending priority of a feature stream; and/or
adjusting or determining the sending priority of an associated feature stream based on the determined sending priority of a video stream;
having at least one feature stream and the associated video stream share one priority and sending them.
In some embodiments, the analysis task corresponds to one or more feature streams, and the feature streams correspond to one or more priorities.
In some embodiments, the ways of setting the real-time requirement include at least one of: presetting, setting during video transmission establishment, and setting during the video transmission process.
In some embodiments, the method further includes:
adjusting the sending priority of the feature stream or video stream based on real-time requirement or priority information fed back by the data receiving end.
In some embodiments, the method includes:
adaptively adjusting the sending priority of the feature stream or video stream based on RF channel conditions;
adaptively adjusting the sending priority of the feature stream or video stream based on video feature content.
In some embodiments, adaptively adjusting the sending priority of the feature stream or video stream based on RF channel conditions includes:
when the RF channel condition changes beyond a certain threshold, re-adjusting the sending priority of the feature stream or video stream based on the latest real-time requirements; or,
when the RF channel quality falls below a certain threshold, re-adjusting the sending priority of the feature stream or video stream according to another priority allocation scheme.
In some embodiments, the RF channel quality metrics include at least one of: signal-to-noise ratio, signal-to-interference-plus-noise ratio, and block error rate.
In some embodiments, the video feature content includes at least one of: inter-frame time, the number of regions of interest per unit time, and Hamming distance.
In some embodiments, adaptively adjusting the sending priority of the feature stream or video stream based on video feature content includes:
adaptively adjusting the sending priority of the feature stream or video stream based on motion information other than the visual features;
adaptively adjusting the sending priority of the feature stream or video stream when a video feature changes beyond a certain threshold.
A second aspect of the embodiments of the present application provides a digital retina data transmission apparatus, including:
a sending end module, configured to collect video information, determine the sending priority of each feature stream and video stream based on the real-time requirements of each video analysis application, and send the feature streams and video streams according to priority.
In some embodiments, the sending end module further includes:
a video processing function module, configured to perform data compression and video feature processing on the collected data, where the video feature processing includes video feature extraction and video feature compression;
a data sending scheduling module, configured to determine the sending priority of each feature stream and video stream based on the real-time requirements of each video analysis application, where the real-time requirement is a maximum allowable delay requirement determined according to the application.
In some embodiments, the apparatus further includes:
a receiving end module, configured to send feedback information related to the maximum delay requirement or priority to the sending end through a feedback channel based on the needs of the current video analysis application, the current video content or the status of the video feature content, so as to adjust the sending end priority.
In some embodiments, the real-time requirements of each video analysis application come from at least one receiving end module.
In some embodiments, the sending end module further includes:
a video collection module, configured to collect the video information to be transmitted.
In some embodiments, the sending end module further includes:
a data sending module, configured to send feature data and video data according to priority.
In some embodiments, the receiving end module further includes:
a data receiving module, configured to receive feature data and video data.
A third aspect of the embodiments of the present application provides an electronic device, including:
a memory and one or more processors;
where the memory is communicatively connected with the one or more processors, the memory stores instructions executable by the one or more processors, and when the instructions are executed by the one or more processors, the electronic device implements the methods described in the foregoing embodiments.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium on which computer-executable instructions are stored; when the computer-executable instructions are executed by a computing device, they can be used to implement the methods described in the foregoing embodiments.
A fifth aspect of the embodiments of the present application provides a computer program product, the computer program product including a computer program stored on a computer-readable storage medium, the computer program including program instructions; when the program instructions are executed by a computer, they can be used to implement the methods described in the foregoing embodiments.
In the embodiments of the present application, at least one feature stream and the associated video stream share one priority and are sent accordingly, which solves the bandwidth allocation problem in the multi-stream transmission of feature streams and video streams.
Brief Description of the Drawings
The features and advantages of the present application will be understood more clearly with reference to the accompanying drawings, which are schematic and should not be construed as limiting the present application in any way. In the drawings:
Fig. 1 is a schematic flowchart of a digital retina data transmission method according to some embodiments of the present application;
Fig. 2 is a schematic structural diagram of a sending end module according to some embodiments of the present application;
Fig. 3 is a schematic structural diagram of a receiving end module according to some embodiments of the present application;
Fig. 4 is a schematic diagram of the logical structure of an electronic device according to some embodiments of the present application;
Fig. 5 is a schematic diagram of the architecture of a general-purpose computer node according to some embodiments of the present application.
Detailed Description
In the following detailed description, many specific details of the present application are set forth by way of example in order to provide a thorough understanding of the relevant disclosure. However, it will be apparent to those of ordinary skill in the art that the present application can be practiced without these details. It should be understood that the terms "system", "apparatus", "unit" and/or "module" used in the present application are one way of distinguishing different components, elements, parts or assemblies at different levels in a sequential arrangement. However, these terms may be replaced by other expressions if the other expressions can achieve the same purpose.
It should be understood that when a device, unit or module is referred to as being "on", "connected to" or "coupled to" another device, unit or module, it may be directly on, connected or coupled to, or in communication with the other device, unit or module, or intervening devices, units or modules may be present, unless the context clearly indicates otherwise. For example, the term "and/or" as used in the present application includes any and all combinations of one or more of the associated listed items.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application. As used in the specification and claims of the present application, the words "a", "an", "one" and/or "the" do not specifically refer to the singular and may also include the plural, unless the context clearly indicates otherwise. In general, the terms "include" and "comprise" merely indicate the inclusion of explicitly identified features, integers, steps, operations, elements and/or components, and such expressions do not constitute an exclusive list; other features, integers, steps, operations, elements and/or components may also be included.
These and other features and characteristics of the present application, the methods of operation, the functions of the related elements of the structure, the combination of parts and the economies of manufacture may be better understood with reference to the following description and the accompanying drawings, which form part of the specification. It should be clearly understood, however, that the drawings are for the purpose of illustration and description only and are not intended to limit the scope of protection of the present application. It is to be understood that the drawings are not drawn to scale.
Various structural diagrams are used in the present application to illustrate modifications according to the embodiments of the present application. It should be understood that the preceding or following structures are not intended to limit the present application. The scope of protection of the present application is defined by the claims.
Since the concept of the digital retina was proposed, it has attracted considerable attention in fields such as video coding and video surveillance. An important characteristic of digital retina technology is the simultaneous dual-stream or multi-stream transmission of a video stream and a visual feature stream, and even a summary video stream, which facilitates video retrieval, video analysis and storage.
However, multi-stream transmission also has problems that need to be solved; for example, multiple data streams sharing limited transmission resources can cause conflicts. This problem is particularly pronounced in wireless transmission scenarios with limited bandwidth and strict transmission delay requirements where real-time video analysis applications are present. The prior art offers some solutions to the bandwidth usage conflict of the digital retina during multi-stream transmission; for example, based on the size of the code streams, a joint optimization function can be designed to calculate the bit rates allocated to the compressed video data and the visual feature data respectively, so as to take into account both video compression loss and feature recognition accuracy. However, when the data transmission bandwidth changes drastically and the data transmission has real-time requirements, for example in the application scenario of vehicle-to-vehicle communication over wireless transmission, merely allocating bandwidth to video data and visual feature data is not enough. The prior art does not solve the problem of how to send video data and feature data according to a certain priority when their transmission is congested within a very short period of time (for example, within a few milliseconds).
In view of this, the embodiments of the present application provide a digital retina data transmission method: the sending priority of each feature stream and video stream is determined based on the real-time requirements of each video analysis task, the priority of associated video data is adjusted based on the feature data priority, the priority of associated feature data is adjusted based on the video data priority, and the feature stream and part of the content of the associated video stream jointly share one priority and are sent. Specifically, referring to Fig. 1, in an embodiment of the present application, the data transmission method includes:
S101: determine the sending priority of at least one feature stream or video stream based on the real-time requirements of each video analysis task, where the real-time requirements include a maximum allowable delay requirement determined according to the application.
In the embodiments of the present application, the sending priority of each feature stream and video stream is determined based on the real-time requirements of each video analysis task, and the real-time requirement or priority can be preset. The data receiving end may also feed back its desired real-time requirement or priority information through a feedback channel during video transmission establishment or during the video transmission process. When the real-time requirements of each video analysis application come from at least one data receiving end, the transmission apparatus determines the priority by combining the multiple real-time requirements issued by multiple receiving ends, for example by setting priorities in ascending order of the maximum delay requirement. Differentiated priorities can further be set according to the data receiving end; for example, if a certain receiving-end user has a high priority, the compressed video and video feature data sent to that receiving end have a higher sending priority than data sent to other receiving ends.
Typically, each video analysis task may correspond to one feature stream or to multiple feature streams, and multiple feature streams may correspond to the same priority or to different priorities. The ways of setting the real-time requirement include presetting, setting during video transmission establishment, and setting during the video transmission process. The embodiments of the present application thus allow the priorities of the video streams and feature streams to be determined flexibly, freely and controllably, as illustrated by the sketch below.
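As a purely illustrative, non-limiting sketch of how a scheduler might combine the maximum-delay requirements fed back by several receiving ends into send priorities (the class, function and stream names below are assumptions of this description, not part of the disclosed embodiments), one could keep the strictest maximum allowable delay per stream and rank streams in ascending order of that delay:

    from dataclasses import dataclass

    @dataclass
    class Requirement:
        stream_id: str            # feature stream or video stream identifier
        receiver_id: str          # data receiving end that issued the requirement
        max_delay_ms: float       # maximum allowable delay fed back for this stream
        receiver_weight: int = 0  # optional differentiated weight per receiving end

    def assign_priorities(requirements):
        """Combine requirements from multiple receiving ends into send priorities.

        For each stream, keep the strictest (smallest) maximum allowable delay,
        then rank streams so that smaller delays get higher priority (0 = highest).
        A per-receiver weight can further differentiate otherwise equal streams.
        """
        strictest = {}
        for req in requirements:
            cur = strictest.get(req.stream_id)
            if cur is None or req.max_delay_ms < cur.max_delay_ms:
                strictest[req.stream_id] = req
        ranked = sorted(strictest.values(),
                        key=lambda r: (r.max_delay_ms, -r.receiver_weight))
        return {r.stream_id: prio for prio, r in enumerate(ranked)}

    # Example: a V2X hazard feature stream vs. a surveillance video stream.
    prios = assign_priorities([
        Requirement("hazard_features", "vehicle_42", max_delay_ms=10),
        Requirement("surveillance_video", "nvr_1", max_delay_ms=2000),
        Requirement("plate_features", "cloud", max_delay_ms=3_600_000),
    ])
    print(prios)  # {'hazard_features': 0, 'surveillance_video': 1, 'plate_features': 2}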
S102: adjust or determine the sending priority of an associated video stream based on the determined sending priority of a feature stream, and/or adjust or determine the sending priority of an associated feature stream based on the determined sending priority of a video stream.
Usually, the multi-stream transmission problem is solved by designing a joint optimization function, based on the size of the code streams, to calculate the bit rates allocated to the compressed video data and the visual feature data respectively, so as to take into account both video compression loss and feature recognition accuracy. However, when the data transmission bandwidth changes drastically and the data transmission has real-time requirements, for example in the application scenario of vehicle-to-vehicle communication over wireless transmission, merely allocating bandwidth to video data and visual feature data is not enough.
In the embodiments of the present application, adjusting the priority of associated video data based on the feature data priority, or adjusting the priority of associated feature data based on the video data priority, can greatly speed up the resolution of the multi-stream transmission problem and simplify the steps, while still taking into account both video compression loss and feature recognition accuracy. Concretely, when the sending priority of an associated video stream is adjusted or determined based on the determined sending priority of a feature stream, if the feature stream has a high priority, then the associated video stream should correspondingly also be placed at a high priority, and vice versa; further, if the priority of the feature stream rises, the priority of its associated video stream rises as well. Adjusting the priority of the corresponding video data according to changes in the priority of the feature data saves time and alleviates the problems of limited bandwidth and high transmission delay. A minimal sketch of this coupling is given below.
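A minimal sketch of this priority coupling between a feature stream and its associated video stream follows; the association table and stream identifiers are hypothetical and only illustrate the idea that the pair always inherits the more urgent level:

    # Hypothetical illustration (not the patented implementation): propagate a
    # priority change on a feature stream to its associated video stream, and
    # vice versa, so the pair always shares the stricter (numerically smaller)
    # priority level.
    associations = {
        "hazard_features": "front_camera_video",   # feature stream -> video stream
        "front_camera_video": "hazard_features",   # video stream -> feature stream
    }

    priorities = {"hazard_features": 3, "front_camera_video": 5}

    def set_priority(stream_id, new_priority):
        """Set a stream's priority and pull its associated stream along with it."""
        priorities[stream_id] = new_priority
        partner = associations.get(stream_id)
        if partner is not None:
            # The associated stream never lags behind: it inherits the higher
            # urgency (smaller number) of the pair.
            priorities[partner] = min(priorities.get(partner, new_priority), new_priority)

    set_priority("hazard_features", 0)   # hazard detected -> urgent
    print(priorities)                    # {'hazard_features': 0, 'front_camera_video': 0}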
S103: have at least one feature stream and the associated video stream share one priority and send them.
In the embodiments of the present application, in order to effectively cope with drastic bandwidth changes while meeting real-time requirements, the feature stream and part of the content of the associated video stream jointly share one priority and are sent together, which relieves the pressure of bandwidth allocation, improves efficiency, and strengthens the ability to adapt when the sending of video data and feature data is congested within a very short period of time (for example, within a few milliseconds).
The method of Fig. 1 solves the problem of how the feature streams and video streams are sent according to a certain priority when transmission congestion exists. Specifically, the priority is adaptively adjusted based on the video feature content, and the sending priority of each feature stream and video stream is determined based on the real-time requirements of each video analysis application; moreover, compared with the prior art, the relationship between feature streams and video streams is taken into account, so that the corresponding video data and feature data are linked. The problems in the prior art can therefore be partially or completely solved. A sketch of such a priority-based dispatch loop follows.
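One possible, assumed form of such a priority-based dispatch loop is sketched below: a feature packet and the associated video content are queued as one bundle under a shared priority and drained strictly by priority when the link is congested. The queue layout and the send_fn interface are illustrative assumptions, not the patented scheduler:

    import heapq

    send_queue = []   # min-heap of (priority, sequence, bundle)
    _seq = 0

    def enqueue_bundle(priority, feature_packet, video_packet=None):
        """Queue a feature packet and its associated video content as one unit."""
        global _seq
        bundle = {"features": feature_packet, "video": video_packet}
        heapq.heappush(send_queue, (priority, _seq, bundle))
        _seq += 1

    def drain(send_fn, budget_bytes):
        """Send highest-priority bundles first until the short-term budget is used."""
        used = 0
        while send_queue and used < budget_bytes:
            _, _, bundle = heapq.heappop(send_queue)
            used += send_fn(bundle)   # send_fn returns the bytes actually sent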
Based on the method of Fig. 1, the embodiments of the present application also provide some specific implementations of the method, as well as extensions, which are described below.
In some embodiments, the ways of setting the real-time requirement include presetting, setting during video transmission establishment, and setting during the video transmission process. The real-time requirement is thus more flexible and adaptable and can be applied under a wider range of conditions.
Further, in some embodiments, the method based on Fig. 1 also includes: adjusting the data sending priority based on real-time requirement or priority information fed back by the data receiving end. Specifically, it further includes: adaptively adjusting the priority based on RF (radio frequency) channel conditions; and adaptively adjusting the priority based on the video feature content.
In some implementations of the embodiments of the present application, when the RF channel condition changes beyond a certain threshold, the priority is re-adjusted based on the latest real-time requirements; or, when the RF channel quality falls below a certain threshold, the priority is re-adjusted according to another priority allocation scheme. Correspondingly, the RF channel quality metrics include the signal-to-noise ratio, the signal-to-interference-plus-noise ratio, the block error rate, and so on. The sketch below illustrates one possible form of such threshold logic.
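The following sketch illustrates one way such threshold logic could look; the metric names, the 6 dB change threshold, the 5 dB SINR floor and the 10% block error rate are example values assumed for illustration and are not taken from the embodiments:

    # Hypothetical threshold logic for RF-channel-driven re-prioritization.
    def maybe_reprioritize(channel, last_channel, streams,
                           change_threshold_db=6.0, min_sinr_db=5.0):
        """Re-derive priorities when the RF channel changes or degrades too much.

        channel / last_channel: dicts with 'sinr_db' and 'bler' measurements.
        streams: list of dicts with 'id', 'max_delay_ms' and 'priority' fields.
        """
        changed = abs(channel["sinr_db"] - last_channel["sinr_db"]) > change_threshold_db
        degraded = channel["sinr_db"] < min_sinr_db or channel["bler"] > 0.1

        if degraded:
            # Fallback allocation scheme: only delay-critical streams keep a high
            # priority, everything else is pushed to best-effort.
            for s in streams:
                s["priority"] = 0 if s["max_delay_ms"] <= 50 else 9
        elif changed:
            # Re-apply the latest real-time requirements (ascending max delay).
            for prio, s in enumerate(sorted(streams, key=lambda s: s["max_delay_ms"])):
                s["priority"] = prio
        return streams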
Typically, in some implementations, the visual feature content may include many kinds of quantities, such as the inter-frame time, the number of regions of interest per unit time, the Hamming distance, the Euclidean distance, and so on.
In other implementations, adaptively adjusting the priority based on the visual feature content includes: adjusting the priority based on motion information other than the visual features; and adjusting the priority when a video feature changes beyond a certain threshold. One possible form of such a content-driven trigger is sketched below.
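As an illustrative example of a content-driven trigger (the helper function and the threshold value are assumptions, not part of the disclosure), the Hamming distance between consecutive binary feature descriptors can be used to detect an abrupt scene change and raise the priority of the corresponding feature stream:

    # Raise the priority of a feature stream when its content changes sharply.
    def hamming_distance(a: bytes, b: bytes) -> int:
        return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

    def adjust_on_feature_change(prev_descriptor, cur_descriptor,
                                 current_priority, change_threshold=48):
        """Boost priority when the scene (as seen by the feature) changes abruptly."""
        if prev_descriptor is None:
            return current_priority
        if hamming_distance(prev_descriptor, cur_descriptor) > change_threshold:
            return max(0, current_priority - 1)   # smaller number = more urgent
        return current_priority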
Of course, those skilled in the art should understand that the digital retina data transmission method shown in Fig. 1 is only an example to illustrate the implementation of some embodiments of the present application; the complexity of data transmission faced in practical application scenarios is usually far higher than that of the example in Fig. 1.
In the above embodiments, a digital retina data transmission method is provided; correspondingly, the present application also provides a digital retina data transmission apparatus. The digital retina data transmission apparatus provided in the embodiments of the present application can implement the above digital retina data transmission method, and the apparatus can be realized by software, hardware or a combination of software and hardware. For example, the apparatus may include integrated or separate functional modules or units to perform the corresponding steps of the above methods. Since the apparatus embodiments are basically similar to the method embodiments, they are described relatively simply; for relevant details, refer to the description of the method embodiments. The apparatus embodiments described below are merely illustrative.
An embodiment of the present application provides a digital retina data transmission apparatus, including: a sending end module, configured to collect video information, determine the sending priority of each feature stream and video stream based on the real-time requirements of each video analysis application, and send the feature streams and video streams according to priority.
Fig. 2 is a schematic structural diagram of a sending end module according to an embodiment of the present application. As shown in Fig. 2, the sending end module 200 includes a video collection module 201, a video processing function module 202, a data sending scheduling module 203 and a data sending module 204, where:
the video collection module 201 is configured to collect the video information to be transmitted, the video information being the collected video information that is fed into the video processing function unit for processing such as data compression and video feature processing;
the video processing function module 202 is configured to perform data compression, video feature processing and other processes on the collected data, the video feature processing including video feature extraction and video feature compression;
the data sending scheduling module 203 is configured to determine the sending priority of the feature streams and video streams based on the real-time requirements of each video analysis application, the real-time requirement being a maximum allowable delay requirement determined according to the application;
the data sending module 204 is configured to send the video data and feature data.
In some embodiments, the above modules may be implemented on the same physical entity, or may be implemented through multiple physically separated entities.
In some embodiments, the video feature processing module includes a video feature extraction module, a video feature compression module, a summary video stream generation module and a summary video stream compression module.
In some embodiments, the video features include any one or more of visual features and machine learning features.
In some embodiments, the video processing function module can support multiple video analysis (including video retrieval) applications, and one video analysis application may involve multiple video features. The compressed video data and video feature data are sent to the data sending scheduling module, and the data sending scheduling module sends them out through the data sending unit according to a certain priority. The data may be sent over a wired network, such as an Ethernet interface, or over a WIFI network or a cellular communication network such as 4G/5G. A sketch of how these modules might be wired together is given below.
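Purely as an assumed illustration of how the sending-end modules described above might be wired together in software (the class and method names are hypothetical), the data flow from collection through feature extraction to priority-scheduled sending could be organized as follows:

    class SendingEnd:
        def __init__(self, collector, processor, scheduler, sender):
            self.collector = collector    # video collection module (201)
            self.processor = processor    # video processing function module (202)
            self.scheduler = scheduler    # data sending scheduling module (203)
            self.sender = sender          # data sending module (204)

        def step(self):
            frame = self.collector.capture()
            compressed = self.processor.compress(frame)
            features = self.processor.extract_and_compress_features(frame)
            # Priority and max-delay metadata travel with the data to the scheduler.
            for item in self.scheduler.schedule(compressed, features):
                self.sender.send(item)    # Ethernet, WIFI or 4G/5G under the hood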
Further, in an embodiment of the present application, a smart camera installed on a vehicle collects surrounding video data, on the basis of which video features such as visual features and deep learning features can be further extracted. The visual features include information about color, pattern, texture and grayscale in the video frames, and the deep learning features can include information such as pedestrian recognition, license plate recognition, traffic accidents and traffic sign recognition. The video features can be exchanged between vehicles or via roadside equipment, and can also be transmitted to the cloud through networks such as WIFI and 4G/5G.
Typically, in vehicle networking applications, video feature information that reflects the traffic situation may need to be transmitted to the relevant vehicles immediately; even different video features under the same video analysis application may have different real-time requirements. In the field of everyday video surveillance, by contrast, real-time video feature stream transmission is not needed in most cases and the features serve only post-event storage and retrieval. For example, in a vehicle networking application, video features that reflect hazard-avoidance information such as a road collapse have a much higher real-time requirement than feature information that accurately identifies a license plate number. Considering that in practical transmission networks, especially wireless transmission networks, the wireless channel is easily interfered with and short-term or long-term transmission congestion easily occurs, the data sending scheduling module of the present invention determines the sending priority of each feature stream and video stream based on the real-time requirements of each video analysis application.
In some embodiments, the real-time requirement may be a maximum allowable delay requirement, where the maximum allowable delay depends on the application. For example, in the field of video surveillance, if the video feature stream mainly serves to make post-event video retrieval more convenient, the maximum allowable delay can be set to several hours or even several days; the scheduling unit can then apply a "best effort" transmission mode and send these feature streams only when there is no other data to be sent. In a video analysis application whose purpose is real-time road condition sharing in vehicle networking, however, the real-time requirement of the video feature stream is very high and should be set at the millisecond level; when multiple applications coexist, the feature streams of the video analysis applications with high real-time requirements are sent with high priority. The compressed video data stream should also be given an appropriate priority; for a video surveillance data stream, for example, a second-level real-time requirement is sufficient in most cases. The sketch below shows one way such application-level requirements could map to scheduling classes.
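The sketch below shows one assumed mapping from the maximum allowable delay of an application to a scheduling class, together with a "best effort" rule that only serves the lowest class when the link is otherwise idle; the class boundaries are example values, not figures from the patent:

    def scheduling_class(max_delay_ms: float) -> str:
        if max_delay_ms <= 50:                 # e.g. V2X road-condition sharing
            return "urgent"
        if max_delay_ms <= 5_000:              # e.g. surveillance video stream
            return "normal"
        return "best_effort"                   # e.g. features kept for later retrieval

    def pick_next(streams, link_idle: bool):
        """Send urgent, then normal traffic; best-effort only when the link is idle."""
        for klass in ("urgent", "normal"):
            for s in streams:
                if s["class"] == klass and s["backlog"]:
                    return s
        if link_idle:
            for s in streams:
                if s["class"] == "best_effort" and s["backlog"]:
                    return s
        return None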
In some embodiments, the priority information or the maximum delay requirement information is information that travels with the video feature data and the compressed video data from the video processing module to the data sending scheduling module.
In some embodiments, one video analysis application may correspond to multiple feature streams, and the multiple feature streams share a unified priority.
In some embodiments, the priority information or the maximum delay requirement differs according to the feature type under different applications, and likewise travels with the video feature data and the compressed video data from the video processing unit to the data sending scheduling unit.
In some embodiments, the setting of the priority information or the maximum delay requirement applies to a specific feature data type and to the one or more video clips associated with it.
In some embodiments, the real-time requirement or the priority can be preset, or the data receiving end can feed back its desired real-time requirement or priority information through a feedback channel during video transmission establishment or during the video transmission process. When the real-time requirements of each video analysis application come from at least one data receiving end, the transmission apparatus determines the priority by combining the multiple real-time requirements issued by multiple receiving ends; typically, the present invention can set the priorities in ascending order of the maximum delay requirement, and differentiated priorities can further be set according to the data receiving end.
Fig. 3 is a schematic structural diagram of a receiving end module according to another embodiment of the present application. As shown in Fig. 3, the receiving end module 300 includes a data receiving module 301, a video processing module 302 and a video feature processing module 303, where:
the data receiving module 301 is configured to receive video data and feature data;
the video processing module 302 is configured to process the compressed video data;
the video feature processing module 303 is configured to process the video feature data.
In some embodiments, the data receiving module can receive video data and feature data over wired or wireless connections (WIFI, cellular network, etc.).
In some embodiments, one receiving end can simultaneously receive and process compressed video data and three different types of video feature data.
In some embodiments, the video processing module is mainly used for video decoding, and optionally further displays, stores, etc. the decoded video.
In some embodiments, the video feature processing module is configured to process the three kinds of video features separately, including feature data decoding, feature matching, and so on. It is easy to see that different video features may share a unified encoding/decoding and/or feature matching scheme, or may adopt independent encoding/decoding and/or feature matching schemes.
In some embodiments, based on the needs of the current video analysis application, or based on the status of the current video content or video feature content, the receiving end module can send information related to the maximum delay requirement or priority to the sending end through a feedback channel, so that the sending end can adjust its priorities accordingly; the feedback can be sent during establishment of the video transmission connection or during data transmission. A sketch of what such a feedback message might look like is given below.
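By way of illustration only, a feedback message from the receiving end could carry the desired maximum delay or priority for a given stream; the field names and JSON encoding below are assumptions of this description, not a wire format defined by the application:

    import json, time

    def build_feedback(stream_id, max_delay_ms=None, priority=None, reason=""):
        """Assemble a feedback payload asking the sender to retune one stream."""
        msg = {
            "type": "priority_feedback",
            "stream_id": stream_id,
            "timestamp_ms": int(time.time() * 1000),
            "reason": reason,                      # e.g. "hazard detected in features"
        }
        if max_delay_ms is not None:
            msg["max_delay_ms"] = max_delay_ms     # desired maximum allowable delay
        if priority is not None:
            msg["priority"] = priority             # or an explicit priority level
        return json.dumps(msg).encode("utf-8")

    # Example: ask for millisecond-level delivery of a hazard feature stream.
    payload = build_feedback("hazard_features", max_delay_ms=10,
                             reason="road collapse reflected in features")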
In some embodiments, in addition to compressed video data and video feature data, the receiving end module can also receive other types of data, and can feed back information related to the maximum delay requirement or priority of that other data to the sending end through the feedback channel.
Referring to Fig. 4, which is a schematic diagram of an electronic device provided by an embodiment of the present application, the electronic device 400 includes:
a memory 430 and one or more processors 410;
where the memory 430 is communicatively connected with the one or more processors 410, the memory 430 stores a program 432 executable by the one or more processors, and the program 432 is executed by the one or more processors 410, so that the one or more processors 410 perform the methods in the foregoing embodiments of the present application.
Specifically, the processor 410 and the memory 430 may be connected by a bus or in other ways; in Fig. 4, connection via a bus 440 is taken as an example.
The processor 410 may be a central processing unit (CPU). The processor 410 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component or another such chip, or a combination of the above types of chips.
As a non-transitory computer-readable storage medium, the memory 430 can be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the cascaded progressive network in the embodiments of the present application. By running the non-transitory software programs, the program 432 and the functional modules stored in the memory 430, the processor 410 performs the various functional applications and data processing of the processor.
The memory 430 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created by the processor 410 and the like. In addition, the memory 430 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 430 may optionally include memory located remotely from the processor 410, and these remote memories may be connected to the processor 410 through a network (for example, through a communication interface 420). Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
An embodiment of the present application provides a computer-readable storage medium storing computer-executable instructions; when the computer-executable instructions are executed, the steps in the above method embodiments are performed.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the devices and modules described above can be found in the corresponding descriptions of the foregoing method and/or apparatus embodiments, and are not repeated here.
Although the subject matter described herein is presented in the general context of execution in conjunction with an operating system and application programs on a computer system, those skilled in the art will recognize that other implementations may also be carried out in combination with other types of program modules. Generally, program modules include routines, programs, components, data structures and other types of structures that perform particular tasks or implement particular abstract data types. Those skilled in the art will understand that the subject matter described herein may be practiced with other computer system configurations, including handheld devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like, and may also be used in distributed computing environments where tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Those of ordinary skill in the art will appreciate that the units and method steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application. Typically, for example, the technical solution of the present application can be implemented and/or propagated by at least one general-purpose computer node 510 as shown in Fig. 5. In Fig. 5, the general-purpose computer node 510 includes a computer system/server 512, peripherals 514 and a display device 516; the computer system/server 512 includes a processing unit 520, an input/output interface 522, a network adapter 524 and a memory 530, which normally exchange data over an internal bus. Further, the memory 530 is usually composed of a variety of storage devices, for example a RAM (Random Access Memory) 532, a cache 534 and a storage system 536 (generally composed of one or more large-capacity non-volatile storage media). A program 540 that implements part or all of the functions of the technical solution of the present application is stored in the memory 530, usually in the form of a plurality of program modules 542.
The aforementioned computer-readable storage medium includes physical volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules or other data. Computer-readable storage media specifically include, but are not limited to, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), flash memory or other solid-state memory technology, a CD-ROM, a digital versatile disc (DVD), an HD-DVD, a Blu-ray or other optical storage device, a magnetic tape, magnetic disk storage or other magnetic storage device, or any other medium that can be used to store the desired information and that can be accessed by a computer.
In summary, the present application proposes a digital retina data transmission method, apparatus, electronic device and storage medium. The embodiments of the present application adjust or determine the priority of the associated video data based on the feature data priority, adjust or determine the priority of the associated feature data based on the video data priority, and at the same time have the feature stream and part of the content of the associated video stream jointly share one priority and be sent, which solves the bandwidth allocation problem in the multi-stream transmission of feature streams and video streams.
It should be understood that the above specific embodiments of the present application are only used to illustrate or explain the principles of the present application by way of example and do not constitute a limitation of the present application. Therefore, any modification, equivalent replacement, improvement, etc. made without departing from the spirit and scope of the present application shall be included within the scope of protection of the present application. Furthermore, the appended claims of the present application are intended to cover all changes and modifications falling within the scope and boundaries of the appended claims, or equivalents of such scope and boundaries.

Claims (18)

  1. A digital retina data transmission method, characterized by comprising:
    determining a sending priority of at least one feature stream or video stream based on real-time requirements of each video analysis task, wherein the real-time requirements comprise a maximum allowable delay requirement determined according to different applications;
    adjusting or determining a sending priority of an associated video stream based on the determined sending priority of a feature stream; and/or
    adjusting or determining a sending priority of an associated feature stream based on the determined sending priority of a video stream;
    having at least one feature stream and the associated video stream share one priority and sending them.
  2. The method according to claim 1, wherein the analysis task corresponds to one or more feature streams; wherein the feature streams correspond to one or more priorities.
  3. The method according to claim 1, wherein the ways of setting the real-time requirement comprise at least one of: presetting, setting during video transmission establishment, and setting during the video transmission process.
  4. The method according to claim 1, wherein the method further comprises:
    adjusting the sending priority of the feature stream or video stream based on real-time requirement or priority information fed back by a data receiving end.
  5. The method according to claim 4, characterized in that the method comprises:
    adaptively adjusting the sending priority of the feature stream or video stream based on RF channel conditions;
    adaptively adjusting the sending priority of the feature stream or video stream based on video feature content.
  6. The method according to claim 5, characterized in that adaptively adjusting the sending priority of the feature stream or video stream based on RF channel conditions comprises:
    when the RF channel condition changes beyond a certain threshold, re-adjusting the sending priority of the feature stream or video stream based on the latest real-time requirements; or,
    when the RF channel quality falls below a certain threshold, re-adjusting the sending priority of the feature stream or video stream according to another priority allocation scheme.
  7. The method according to claim 6, characterized in that the RF channel quality metric comprises at least one of: a signal-to-noise ratio, a signal-to-interference-plus-noise ratio, and a block error rate.
  8. The method according to claim 5, characterized in that the video feature content comprises at least one of: inter-frame time, the number of regions of interest per unit time, and Hamming distance.
  9. The method according to claim 5, characterized in that adaptively adjusting the sending priority of the feature stream or video stream based on video feature content comprises:
    adaptively adjusting the sending priority of the feature stream or video stream based on motion information other than the visual features;
    adaptively adjusting the sending priority of the feature stream or video stream when a video feature changes beyond a certain threshold.
  10. A digital retina data transmission apparatus, characterized by comprising:
    a sending end module, configured to collect video information, determine a sending priority of each feature stream and video stream based on real-time requirements of each video analysis application, and send the feature streams and video streams according to priority.
  11. The apparatus according to claim 10, characterized in that the sending end module further comprises:
    a video processing function module, configured to perform data compression and video feature processing on the collected data; wherein the video feature processing comprises video feature extraction and video feature compression;
    a data sending scheduling module, configured to determine the sending priority of each feature stream and video stream based on the real-time requirements of each video analysis application; wherein the real-time requirement is a maximum allowable delay requirement determined according to different applications.
  12. The apparatus according to claim 10, characterized in that the apparatus further comprises:
    a receiving end module, configured to send feedback information related to the maximum delay requirement or priority to the sending end through a feedback channel based on the needs of the current video analysis application, the current video content or the status of the video feature content, so as to adjust the sending end priority.
  13. The apparatus according to claim 10, characterized in that the real-time requirements of each video analysis application come from at least one receiving end module.
  14. The apparatus according to claim 10, characterized in that the sending end module further comprises:
    a video collection module, configured to collect the video information to be transmitted.
  15. The apparatus according to claim 10, characterized in that the sending end module further comprises:
    a data sending module, configured to send feature data and video data according to priority.
  16. The apparatus according to claim 12, characterized in that the receiving end module further comprises:
    a data receiving module, configured to receive feature data and video data.
  17. An electronic device, characterized by comprising:
    a memory and one or more processors;
    wherein the memory is communicatively connected with the one or more processors, the memory stores instructions executable by the one or more processors, and when the instructions are executed by the one or more processors, the electronic device is configured to implement the method according to any one of claims 1-9.
  18. A computer-readable storage medium on which computer-executable instructions are stored, wherein when the computer-executable instructions are executed by a computing device, they can be used to implement the method according to any one of claims 1-9.
PCT/CN2021/139114 2021-08-11 2021-12-17 数字视网膜数据传输方法、装置、电子设备及存储介质 WO2023015807A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110917613.1 2021-08-11
CN202110917613.1A CN113382285B (zh) 2021-08-11 2021-08-11 数字视网膜数据传输方法、装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2023015807A1 true WO2023015807A1 (zh) 2023-02-16

Family

ID=77576718

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/139114 WO2023015807A1 (zh) 2021-08-11 2021-12-17 数字视网膜数据传输方法、装置、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN113382285B (zh)
WO (1) WO2023015807A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113382285B (zh) * 2021-08-11 2021-11-16 浙江智慧视频安防创新中心有限公司 数字视网膜数据传输方法、装置、电子设备及存储介质
CN114257817B (zh) * 2022-03-01 2022-09-02 浙江智慧视频安防创新中心有限公司 一种多任务数字视网膜特征流的编码方法及解码方法
CN115174567A (zh) * 2022-06-22 2022-10-11 浙江大华技术股份有限公司 一种送码方法、装置、设备及存储介质
CN115396312B (zh) * 2022-08-10 2023-04-25 中国科学院空天信息创新研究院 一种数据分级处理与发送方法、系统、电子设备及介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009104153A1 (en) * 2008-02-20 2009-08-27 Koninklijke Philips Electronics N.V. Method and device for transferring video streams in a network
US20160275642A1 (en) * 2015-03-18 2016-09-22 Hitachi, Ltd. Video analysis and post processing of multiple video streams
CN108737861A (zh) * 2018-05-11 2018-11-02 浙江大学 一种拥塞环境下基于解码优先级的带宽资源优化配置方法
CN110719438A (zh) * 2019-08-28 2020-01-21 北京大学 一种数字视网膜视频流与特征流的同步传输控制方法
CN113382285A (zh) * 2021-08-11 2021-09-10 浙江智慧视频安防创新中心有限公司 数字视网膜数据传输方法、装置、电子设备及存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111787218A (zh) * 2020-06-18 2020-10-16 安徽超清科技股份有限公司 一种基于数字视网膜技术的监控相机

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009104153A1 (en) * 2008-02-20 2009-08-27 Koninklijke Philips Electronics N.V. Method and device for transferring video streams in a network
US20160275642A1 (en) * 2015-03-18 2016-09-22 Hitachi, Ltd. Video analysis and post processing of multiple video streams
CN108737861A (zh) * 2018-05-11 2018-11-02 浙江大学 一种拥塞环境下基于解码优先级的带宽资源优化配置方法
CN110719438A (zh) * 2019-08-28 2020-01-21 北京大学 一种数字视网膜视频流与特征流的同步传输控制方法
CN113382285A (zh) * 2021-08-11 2021-09-10 浙江智慧视频安防创新中心有限公司 数字视网膜数据传输方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN113382285A (zh) 2021-09-10
CN113382285B (zh) 2021-11-16

Similar Documents

Publication Publication Date Title
WO2023015807A1 (zh) 数字视网膜数据传输方法、装置、电子设备及存储介质
CN112492118B (zh) 数据传输控制方法、信息发送端、接收端及飞行器图传系统
US9137530B2 (en) Video communication method and system for dynamically modifying video encoding
WO2018019184A1 (zh) 网络切片方法和系统
JP5897447B2 (ja) 品質及びレート情報に基づいてマルチメディアコンテンツのサイズを変更するための方法及びシステム
US20190109787A1 (en) Method for transmitting data streams, and device
CN112511325B (zh) 网络拥塞控制方法、节点、系统及存储介质
US9213521B2 (en) Control method of information processing apparatus and information processing apparatus
US20120213069A1 (en) Transmission control method, transmission control system, communication device and recording medium of transmission control program
EP2530889A1 (en) Method and Apparatus for Controlling Stream to Receive Data in Parallel
US20150106501A1 (en) Facilitating high quality network delivery of content over a network
EP2802170A1 (en) Method, system and device for service rate control
EP2869506B1 (en) Congestion avoidance and fairness in data networks with multiple traffic sources
US9800662B2 (en) Generic network trace with distributed parallel processing and smart caching
CN112995048B (zh) 数据中心网络的阻塞控制与调度融合方法及终端设备
CN107210999A (zh) 链路感知流送自适应
JP7356581B2 (ja) 情報処理方法、装置、設備及びコンピュータ読み取り可能な記憶媒体
CN102594578B (zh) 信息推送业务的处理方法、装置和系统
EP3360388B1 (en) Systems and method for wireless data-acknowledgement communication using frame aggregation
US20220053373A1 (en) Communication apparatus, communication method, and program
WO2022257366A1 (zh) 网络切片自优化方法、基站及存储介质
WO2013165812A1 (en) Data transfer reduction during video broadcasts
CN113206875B (zh) 数据传输方法、装置及存储介质
KR102581310B1 (ko) 무선 통신 시스템에서 접속 네트워크를 선택하는 방법 및 장치
WO2024001266A1 (zh) 视频流传输的控制方法及装置、设备、介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21953410

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE