CN115334305A - Video data transmission method, video data transmission device, electronic equipment and medium - Google Patents

Video data transmission method, video data transmission device, electronic equipment and medium

Info

Publication number
CN115334305A
CN115334305A
Authority
CN
China
Prior art keywords
image area
image
data
quantization parameter
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110512361.4A
Other languages
Chinese (zh)
Inventor
宋嘉文
樊鸿飞
豆修鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kingsoft Cloud Network Technology Co Ltd
Original Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingsoft Cloud Network Technology Co Ltd filed Critical Beijing Kingsoft Cloud Network Technology Co Ltd
Priority to CN202110512361.4A priority Critical patent/CN115334305A/en
Publication of CN115334305A publication Critical patent/CN115334305A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/177Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a group of pictures [GOP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Embodiments of the present disclosure provide a video data transmission method, a video data transmission device, an electronic device, and a computer-readable medium. One embodiment of the method comprises: acquiring encoded video data of a target video, wherein the target video corresponds to a plurality of image areas obtained during encoding of the target video, and the encoded video data comprises sub-data of at least one resolution for each of the image areas; determining a quantization parameter for each image area according to the current transmission mode; and, for each image area, transmitting the sub-data of at least one resolution corresponding to that image area according to its quantization parameter. This embodiment dynamically and flexibly selects a transmission scheme according to the differing quantization parameters of the video, so as to achieve an optimal playback effect.

Description

Video data transmission method, video data transmission device, electronic equipment and medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a video data transmission method and apparatus, an electronic device, and a computer-readable medium.
Background
Related VR (virtual reality) video data transmission generally adopts a viewing-angle-based (view-dependent) transmission mode. Specifically, high-definition encoded data is provided for the main view, and low-definition encoded data is provided for the other views. The advantage is that the decoding pressure and bandwidth requirements at the playing end can be reduced, because the coding complexity and code rate of the low-definition data are relatively low.
However, when the above transmission method is adopted, the following technical problems are often caused:
when the user's viewing angle changes, the original low-definition encoded data needs to be switched to high-definition encoded data, which causes a switching delay.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Some embodiments of the present disclosure propose a video data transmission method and apparatus, an electronic device, and a computer-readable medium to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a video data transmission method, including: acquiring video coding data of a target video after coding, wherein the target video corresponds to a plurality of image areas, the image areas are obtained in the process of coding the target video, and the video coding data comprises subdata with at least one resolution ratio respectively corresponding to the image areas; determining a quantization parameter of each image area according to the current transmission mode; and for each image area, transmitting the sub-data of at least one resolution corresponding to the image area according to the quantization parameter of the image area.
In a second aspect, some embodiments of the present disclosure provide a video data transmission apparatus, including: the video encoding device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is configured to acquire encoded video data of a target video after encoding, the target video corresponds to a plurality of image areas, the plurality of image areas are obtained in the process of encoding the target video, and the encoded video data comprises subdata with at least one resolution which respectively corresponds to the plurality of image areas; a determination unit configured to determine a quantization parameter for each image region according to a current transmission mode; and the transmission unit is configured to transmit the sub data of at least one resolution corresponding to the image area according to the quantization parameter of the image area for each image area.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect described above.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following advantages: the transmission scheme can be flexibly and dynamically selected according to the differing quantization parameters of the video, so that an optimal playback effect can be achieved. In particular, it is found in practice that for most image regions, especially image regions with relatively high quantization parameter values, the encoding complexity and code rate of the high-definition and low-definition encoded data do not differ much. That is to say, even if high-definition encoded data is transmitted for these image areas, no extra pressure is placed on the decoder at the playing end, and the switching delay caused by switching between high-definition and low-definition encoded data can be avoided. Based on this, some embodiments of the present disclosure balance decoding pressure and switching delay by determining quantization parameters and selecting different transmission schemes according to those quantization parameters, so as to achieve an optimal playback effect.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
Fig. 1 is a schematic diagram of an application scenario of a video data transmission method of some embodiments of the present disclosure;
fig. 2 is a flow diagram of some embodiments of a video data transmission method according to the present disclosure;
fig. 3 is an exemplary diagram of a plurality of image areas corresponding to a target video in a video data transmission method according to the present disclosure;
FIG. 4 is a flow diagram of further embodiments of a video data transmission method according to the present disclosure;
fig. 5 is a schematic block diagram of some embodiments of a video data transmission apparatus according to the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and the embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of a video data transmission method of some embodiments of the present disclosure.
As shown in fig. 1, a server 101 may provide VR video to a terminal 102. The server 101 may store encoded video data corresponding to the VR video. In this application scenario, the video may be encoded using the H.265 standard, and the video picture is divided into a plurality of image areas during encoding; the application scenario takes division into 9 image areas as an example, as shown at 103 in the figure. Each image area may correspond to sub-data of at least one resolution. For example, a certain image area may have both high-definition and low-definition sub-data.
On this basis, the server 101 may determine the quantization parameter of each image region according to the current transmission mode. And for each image area, transmitting the sub data of at least one resolution corresponding to the image area according to the quantization parameter of the image area. As an example, for the image areas 1031, 1032, 1033, quantization parameters thereof may be determined, respectively. Then, for the image area 1031, since the quantization parameter value thereof is high, the corresponding high definition data is always transmitted. For the image area 1032, since its quantization parameter value is low and not in the current field of view 104, its corresponding low-definition data is transmitted. For image region 1033, since its quantization parameter value is low but in the field of view 104, its corresponding high definition data is transmitted.
With continued reference to fig. 2, a flow 200 of some embodiments of a video data transmission method according to the present disclosure is shown. The video data transmission method comprises the following steps:
step 201, obtaining encoded video data of a target video after encoding.
In some embodiments, the execution subject of the video data transmission method may first obtain encoded video data of the target video. The target video may be any video; for example, the video currently to be transmitted may be determined as the target video. In practice, the target video may be a video clip, depending on the transmission mode. Because the amount of raw video data is huge, transmitting it directly would occupy a large amount of network resources, making playback stutter and downloads slow for the user. Therefore, video is typically encoded before being transmitted. As an example, video may be encoded and decoded using the H.265 standard.
In practice, in the process of encoding a target video, a video picture is often divided as needed, so as to obtain a plurality of image areas (tiles) corresponding to the target video. The content in each frame image belonging to the same image area has certain relevance in time, which is beneficial to the realization of coding.
Fig. 3 shows an exemplary schematic diagram of a plurality of image regions corresponding to a target video. Taking the example where the target video includes the frame image 301 and the frame image 302, the image 301 may be divided into three rectangular regions 3011, 3012, and 3013, respectively, from the vertical direction. It is understood that the division into rectangular regions from the horizontal and/or vertical direction can be flexibly selected according to the needs. Similarly, the frame image 302 may be divided into rectangular regions 3021, 3022, and 3023. In practice, the same division method is generally adopted to perform division within a certain range. On the basis, rectangular areas at the same position in each frame image belong to the same image area. For example, the rectangular region 3011 in the frame image 301 and the rectangular region 3021 in the frame image 302 belong to the same image region. The target video corresponds to three image areas.
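The division into vertical rectangular regions described above can be sketched as follows; a minimal illustration assuming equal-width vertical column tiles. The function name, frame size, and column count are hypothetical, not taken from the patent.

```python
# Sketch: split a frame into vertical tile columns, mirroring the
# three-region example of Fig. 3. Hypothetical helper for illustration.

def split_into_tiles(frame_width: int, frame_height: int, num_cols: int):
    """Return (x, y, w, h) rectangles for vertical tile columns."""
    base = frame_width // num_cols
    tiles, x = [], 0
    for i in range(num_cols):
        # Give the last column any pixels left over from integer division.
        w = frame_width - x if i == num_cols - 1 else base
        tiles.append((x, 0, w, frame_height))
        x += w
    return tiles

# A 1920x1080 frame split into three vertical regions.
print(split_into_tiles(1920, 1080, 3))
```

Rectangles at the same position across frames would then belong to the same image area (tile) for encoding purposes.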
On this basis, the content (rectangular area) in each frame image corresponding to each image area is encoded, and sub-data corresponding to the image area can be obtained. In practice, some image areas may correspond to sub-data with different resolutions to meet different requirements.
In step 202, a quantization parameter for each image region is determined.
In some embodiments, the execution subject may determine the quantization parameter of each image region according to the current transmission mode. In practice, different transmission modes can be adopted according to actual needs, and the quantization parameter of each image region may be determined differently depending on the transmission mode. As examples, the transmission mode may be OMAF-DASH, WebRTC (a protocol suite for real-time data transfer), or the like. OMAF (Omnidirectional Media Application Format) is an input/output interface standard for VR systems. DASH (Dynamic Adaptive Streaming over HTTP) is adaptive streaming media transmission: by storing data of different resolutions for the same content, together with corresponding description files, in advance on a server, a client can select the optimal version during playback according to its own performance and the network environment.
As an example, the execution subject described above may calculate the quantization parameter of each image region by:
the first step is that in response to the fact that the current transmission mode is the first transmission mode, a packaged file corresponding to each image area in the target time period is determined.
The first transmission mode may be a transmission mode in which the encoded video data is packed and then transmitted. For example, the OMAF-DASH transmission mode packs the encoded video data of each image region over a period of time to obtain a packed file. That is, the sub-data of the encoded video data exists in the form of packed files, so the execution subject can determine the packed file corresponding to each image area within the target time period. As an example, if packing is performed every 2 s, one packed file is obtained each time; each packed file then corresponds to a time interval, e.g., 10 minutes 0 seconds to 10 minutes 2 seconds. On this basis, the target time period can be any time interval corresponding to a packed file. As an example, the time interval of the packed file currently to be transmitted may be determined as the target time period. In practice, when there are sub-data of multiple resolutions, the quantization parameter may be calculated from the packed file of the locally highest resolution.
And secondly, determining the average quantization parameter of the packed file corresponding to the image area for each image area, and taking the average quantization parameter of the packed file as the quantization parameter of the image area.
For each image region, the execution subject may first determine the average quantization parameter of the packed file corresponding to that image region. Specifically, each image region further comprises a plurality of coding units (CTUs). As an example, for each image region, the average of the quantization parameters of the coding units contained in the image region may be determined as the average quantization parameter of the image region. The quantization parameter (QP) of each coding unit can be calculated as specified in H.265 or a similar standard. On this basis, the average quantization parameter of the packed file is used as the quantization parameter of the image area. It can therefore be seen that, even for the same image region, the quantization parameter changes over time as different packed files are transmitted. In addition, according to actual needs, a weighted average of the quantization parameters of the coding units may be computed and used as the average quantization parameter.
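The averaging step just described can be sketched as follows. This is a hypothetical illustration: the function name `region_qp` and the sample QP values are not from the patent, and in a real system the per-CTU QPs would come from the encoder (e.g. an H.265 bitstream).

```python
# Sketch: region-level quantization parameter as the (optionally weighted)
# average of the CTU-level QPs in the region's packed file.
# Hypothetical helper; names and values are illustrative, not from the patent.

def region_qp(ctu_qps, weights=None):
    """Plain average of CTU QPs, or a weighted average if weights are given."""
    if weights is None:
        return sum(ctu_qps) / len(ctu_qps)
    return sum(q * w for q, w in zip(ctu_qps, weights)) / sum(weights)

# Plain average of four CTU QPs.
print(region_qp([30, 32, 34, 36]))              # 33.0
# Weighted average, e.g. weighting one coding unit more heavily.
print(region_qp([30, 32, 34, 36], [1, 1, 1, 3]))  # 34.0
```

The same averaging applies whether the CTUs come from a packed file (first transmission mode) or from a group of video frames (second transmission mode).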
And 203, for each image area, transmitting the sub data of at least one resolution corresponding to the image area according to the quantization parameter of the image area.
In some embodiments, for each image region, the execution body may transmit the sub-data of at least one resolution corresponding to the image region according to the quantization parameter of the image region.
As an example, the quantization parameter of the image area may be compared with a preset quantization parameter threshold. If it is greater than the threshold, only the high-definition sub-data corresponding to the image area may be transmitted. If it is smaller than the threshold, the high-definition or low-definition sub-data of the image area can be transmitted flexibly according to the actual situation.
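The threshold rule above can be sketched as below. The threshold value of 35, the function name, and the view-dependent fallback for low-QP regions (borrowed from the Fig. 1 scenario, where in-view low-QP regions receive high definition and out-of-view ones receive low definition) are illustrative assumptions, not values fixed by the patent.

```python
# Sketch of the per-region transmission decision based on a QP threshold.
# QP_THRESHOLD and the view-dependent fallback are assumptions for illustration.

QP_THRESHOLD = 35

def select_stream(region_qp: float, in_view: bool) -> str:
    if region_qp > QP_THRESHOLD:
        # High QP: high- and low-definition streams cost about the same,
        # so always send high definition and avoid switching delay.
        return "high"
    # Low QP: fall back to view-dependent selection.
    return "high" if in_view else "low"

print(select_stream(40.0, in_view=False))  # 'high'
print(select_stream(30.0, in_view=True))   # 'high'
print(select_stream(30.0, in_view=False))  # 'low'
```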
In some embodiments, the same image region quantization parameter may change over time. And variations in the quantization parameter may result in different transmission schemes. Therefore, the transmission scheme can be flexibly and dynamically selected according to different quantization parameters of the video, so that the optimal playing effect can be realized. In particular, it is found in practice that for most image regions, especially for image regions with relatively high values of quantization parameters, the encoding complexity and the code rate of these regions are relatively low, and the encoding complexity and the code rate of high-definition and low-definition encoded data are not much different. That is to say, even if the high-definition encoded data is transmitted to the partial image area, no extra pressure is caused to a decoder at the playing end, and the switching time delay caused by switching the high-definition encoded data and the low-definition encoded data can be avoided. Based on this, some embodiments of the present disclosure may give consideration to both decoding pressure and switching delay by determining quantization parameters and selecting different transmission schemes according to the quantization parameters, so as to achieve an optimal playing effect.
With continued reference to fig. 4, a flow 400 of further embodiments of a video data transmission method according to the present disclosure is shown. The video data transmission method comprises the following steps:
step 401, obtaining video coding data of the target video after coding.
In some embodiments, the specific implementation of step 401 and the technical effect thereof may refer to step 201 in those embodiments corresponding to fig. 2, and are not described herein again.
Step 402, in response to that the current transmission mode is the second transmission mode, determining a plurality of coding units corresponding to each image area in the video frame group in the target time period.
In some embodiments, the second transmission mode may be another transmission mode different from the first. Unlike the first transmission mode, the second does not pack data before transmitting it, but transmits it in the form of groups of video frames (GOPs), where a video frame group comprises a number of video frames that generally have some relevance in content. For example, the second transmission mode may be WebRTC.
In some embodiments, in response to the current transmission mode being the second transmission mode, the performing subject of the video data transmission method may determine a plurality of coding units corresponding to each picture region in a group of video frames (GOP) within the target time period. Where, similar to the packed file, there is a corresponding time interval for the group of video frames. At this time, the target period may be a time interval corresponding to any one video frame group. As an example, the time interval of the group of video frames currently to be transmitted may be determined as the target time period.
On this basis, the execution body may determine a plurality of coding units corresponding to each image region in the video frame group within the target time period.
In step 403, for each image region, determining an average quantization parameter of a plurality of coding units corresponding to the image region, and using the average quantization parameter of the plurality of coding units as a quantization parameter of the image region.
In some embodiments, for a certain image region, the execution subject may determine an average value of quantization parameters of a plurality of coding units corresponding to the certain image region, and obtain an average quantization parameter. Then, the average quantization parameter is used as the quantization parameter for the image region. It can be seen that, over time, even for the same image region, the quantization parameter of the image region changes due to the difference in the transmitted video frame groups. That is, the quantization parameter of the image area changes in both the transmission mode of transmitting the packed file and the transmission mode of transmitting the video frame group.
In step 404, for each image region, the ordering position of the quantization parameter of the image region in the quantization parameters corresponding to the plurality of image regions is determined.
In some embodiments, for each image region, the execution subject may determine, according to the size of the quantization parameter, the ranking position of that region's quantization parameter among the quantization parameters of all the image regions. The ranking position can be expressed as an absolute rank, a rank ratio, or a similar index. For example, the rank may be 5th in the queue, and the rank ratio may be the top 20% of the queue.
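The ranking step can be sketched as follows; a hypothetical illustration in which regions are ordered by descending QP and each receives both an absolute rank and a rank ratio. Tie-breaking and ordering direction are assumptions, since the patent only requires an ordering by quantization-parameter size.

```python
# Sketch: rank each region's QP among all regions' QPs.
# Returns {region_index: (rank, rank_ratio)}; names are illustrative.

def qp_rankings(region_qps):
    order = sorted(range(len(region_qps)),
                   key=lambda i: region_qps[i], reverse=True)
    n = len(region_qps)
    # rank 1 = highest QP; rank_ratio 1/n = top of the queue.
    return {idx: (pos, pos / n) for pos, idx in enumerate(order, start=1)}

# Region 2 has the highest QP (41.0), so it ranks first.
print(qp_rankings([30.0, 28.5, 41.0]))
```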
Step 405, transmitting the sub data of at least one resolution corresponding to the image region based on the sorting position.
In some embodiments, the execution subject may transmit sub-data of at least one resolution corresponding to the image area based on the ranking position. Compared with a fixed threshold, the ranking position more accurately reflects the relative size of each image region's quantization parameter within the whole. By setting a preset condition, it can be ensured that high-definition encoded data is transmitted for most image areas, thereby avoiding, to the greatest extent, the switching delay caused by switching between high-definition and low-definition encoded data.
In some optional implementations of some embodiments, in response to the sorting position satisfying a preset condition, the sub-data of the first resolution corresponding to the image area is transmitted. Wherein, as an example, the preset condition may be that the ranking bit ratio is greater than a preset ratio threshold.
In some optional implementations of some embodiments, in response to determining that the ranking position does not satisfy the preset condition and that the image region is within the current view range, the sub-data of the first resolution corresponding to the image region is transmitted, where the current view range is determined by the view parameter from the terminal. As examples, the first resolution may be high definition (1080p), ultra high definition, etc.
In some optional implementations of some embodiments, transmitting the sub-data of at least one resolution corresponding to the image region based on the ranking position includes: in response to the ranking position not satisfying the preset condition and the image area not being within the current view range, transmitting the sub-data of the second resolution corresponding to the image area. As an example, the second resolution may be low definition (720p) or the like.
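The three cases in the optional implementations above can be combined into one decision sketch. The 20% cutoff is one possible preset condition, and the interpretation that a small rank ratio (top of the QP ordering) satisfies the condition is an assumption; the patent leaves the concrete condition open.

```python
# Sketch of the rank-plus-field-of-view transmission decision.
# cutoff and the rank-ratio interpretation are illustrative assumptions.

def choose_resolution(rank_ratio: float, in_view: bool,
                      cutoff: float = 0.2) -> str:
    if rank_ratio <= cutoff:
        # Preset condition satisfied (e.g. top 20% of the QP ordering):
        # always transmit the first (high) resolution.
        return "first"
    # Otherwise the choice depends on the current field of view.
    return "first" if in_view else "second"

print(choose_resolution(0.1, in_view=False))  # 'first'
print(choose_resolution(0.5, in_view=True))   # 'first'
print(choose_resolution(0.5, in_view=False))  # 'second'
```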
In these implementations, high-definition or ultra-high-definition video may always be transmitted for image regions whose ranking positions satisfy the preset condition. For regions whose ranking positions do not satisfy the preset condition, switching between high definition and low definition is performed according to whether the region is within the field of view. Compared with a transmission scheme that ignores the quantization parameter and switches between high definition and low definition based only on the field of view, the switching delay can be reduced to the greatest extent, because high-definition or ultra-high-definition video is transmitted for most image areas. In addition, for most image areas, data of multiple resolutions does not need to be stored simultaneously, which reduces storage-space usage.
With further reference to fig. 5, as an implementation of the methods illustrated in the above figures, the present disclosure provides some embodiments of a video data transmission apparatus, which correspond to those method embodiments illustrated in fig. 2, and which may be applied in particular to various electronic devices.
As shown in fig. 5, some embodiments of a video data transmission apparatus 500 include: an acquisition unit 501, a determination unit 502 and a transmission unit 503. The obtaining unit 501 is configured to obtain encoded video data of an encoded target video, where the target video corresponds to a plurality of image regions, the image regions are obtained during encoding of the target video, and the encoded video data includes sub-data of at least one resolution corresponding to each of the image regions. The determination unit 502 is configured to determine a quantization parameter for each image region depending on the current transmission mode. The transmission unit 503 is configured to transmit, for each image area, the sub-data of at least one resolution corresponding to the image area according to the quantization parameter of the image area.
In an optional implementation of some embodiments, the determination unit 502 is further configured to: in response to determining that the current transmission mode is a first transmission mode, determine the packaged file corresponding to each image region within the target time period; and, for each image region, determine the average quantization parameter of the packaged file corresponding to that image region and take it as the quantization parameter of the image region.
In an optional implementation of some embodiments, the determination unit 502 is further configured to: determine the average of the quantization parameters of the coding units included in the packaged file, and take that average as the average quantization parameter.
In an optional implementation of some embodiments, the determination unit 502 is further configured to: in response to determining that the current transmission mode is a second transmission mode, determine the plurality of coding units corresponding to each image region within the group of video frames in the target time period; and, for each image region, determine the average quantization parameter of the plurality of coding units corresponding to that image region and take it as the quantization parameter of the image region.
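A minimal sketch of the two mode-dependent quantization-parameter computations described above: the first mode averages the QPs of the coding units in a region's packaged file for the target period, the second averages the QPs of the coding units the region covers within the video frame group. The dictionary field names are illustrative assumptions, not identifiers from the source.

```python
def average_qp(coding_unit_qps):
    """Arithmetic mean over a non-empty sequence of coding-unit QPs."""
    return sum(coding_unit_qps) / len(coding_unit_qps)


def determine_region_qps(regions, mode):
    """Map each image-region id to its QP under the given transmission mode.

    regions: dict of region id -> {'packaged_file_qps': [...], 'gop_qps': [...]}
    mode: 1 (average over the region's packaged file) or
          2 (average over the region's coding units in the frame group)
    """
    key = "packaged_file_qps" if mode == 1 else "gop_qps"
    return {rid: average_qp(info[key]) for rid, info in regions.items()}
```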
In an optional implementation of some embodiments, the transmission unit 503 is further configured to: determine the ranking position of the quantization parameter of the image region among the quantization parameters of all the image regions; and transmit the sub-data of at least one resolution corresponding to the image region based on the ranking position.
In an optional implementation of some embodiments, the transmission unit 503 is further configured to: transmit the sub-data of a first resolution corresponding to the image region in response to the ranking position satisfying a preset condition.
In an optional implementation of some embodiments, the transmission unit 503 is further configured to: in response to determining that the ranking position does not satisfy the preset condition and the image region is within the current field of view, transmit the sub-data of the first resolution corresponding to the image region, where the current field of view is determined from field-of-view parameters received from the terminal.
In an optional implementation of some embodiments, the transmission unit 503 is further configured to: transmit the sub-data of a second resolution corresponding to the image region in response to determining that the ranking position does not satisfy the preset condition and the image region is not within the current field of view.
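The resolution-selection rule described above can be sketched as follows. The source does not specify the preset condition; as an assumption, a region qualifies when its QP ranks among the `top_k` smallest values. Regions that fail the condition still receive the first (higher) resolution when inside the terminal's current field of view, and the second (lower) resolution otherwise.

```python
def select_resolution(region_qp, all_qps, in_field_of_view, top_k=1):
    """Choose which resolution's sub-data to transmit for one image region.

    region_qp: this region's quantization parameter
    all_qps:   quantization parameters of all image regions
    """
    rank = sorted(all_qps).index(region_qp)  # 0-based ranking position, ascending
    if rank < top_k:                         # assumed form of the preset condition
        return "first"
    return "first" if in_field_of_view else "second"
```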
It will be understood that the units described in the apparatus 500 correspond to the various steps of the method described with reference to fig. 2. Thus, the operations, features, and advantages described above with respect to the method also apply to the apparatus 500 and the units included therein, and are not repeated here.
Referring now to FIG. 6, a block diagram of an electronic device (e.g., the server of FIG. 1) 600 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read-Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the electronic device 600. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire encoded video data of an encoded target video, wherein the target video corresponds to a plurality of image regions, the image regions are obtained during encoding of the target video, and the encoded video data includes sub-data of at least one resolution corresponding to each of the image regions; determine a quantization parameter for each image region according to the current transmission mode; and, for each image region, transmit the sub-data of at least one resolution corresponding to the image region according to the quantization parameter of the image region.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may be described as: a processor including an acquisition unit, a determination unit, and a transmission unit, where the names of these units do not in some cases limit the units themselves; for example, the acquisition unit may also be described as "a unit that acquires encoded video data of an encoded target video".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description presents only preferred embodiments of the present disclosure and illustrates the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above features, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, technical solutions formed by interchanging the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (11)

1. A video data transmission method, comprising:
acquiring encoded video data of an encoded target video, wherein the target video corresponds to a plurality of image regions, the plurality of image regions being obtained during encoding of the target video, and the encoded video data comprises sub-data of at least one resolution corresponding to each of the image regions;
determining a quantization parameter for each image region according to a current transmission mode; and
for each image region, transmitting the sub-data of at least one resolution corresponding to the image region according to the quantization parameter of the image region.
2. The method of claim 1, wherein the determining a quantization parameter for each image region according to the current transmission mode comprises:
in response to determining that the current transmission mode is a first transmission mode, determining a packaged file corresponding to each image region within a target time period; and
for each image region, determining an average quantization parameter of the packaged file corresponding to the image region, and taking the average quantization parameter of the packaged file as the quantization parameter of the image region.
3. The method of claim 2, wherein the determining an average quantization parameter of the packaged file corresponding to the image region comprises:
determining an average of the quantization parameters of the coding units included in the packaged file, and taking the average as the average quantization parameter.
4. The method of claim 1, wherein the determining a quantization parameter for each image region according to the current transmission mode comprises:
in response to determining that the current transmission mode is a second transmission mode, determining a plurality of coding units corresponding to each image region within a group of video frames in a target time period; and
for each image region, determining an average quantization parameter of the plurality of coding units corresponding to the image region, and taking the average quantization parameter of the plurality of coding units as the quantization parameter of the image region.
5. The method of claim 2 or 4, wherein the transmitting sub-data of at least one resolution corresponding to the image region according to the quantization parameter of the image region comprises:
determining a ranking position of the quantization parameter of the image region among the quantization parameters respectively corresponding to the image regions; and
transmitting the sub-data of at least one resolution corresponding to the image region based on the ranking position.
6. The method of claim 5, wherein the transmitting the sub-data of at least one resolution corresponding to the image region based on the ranking position comprises:
transmitting sub-data of a first resolution corresponding to the image region in response to the ranking position satisfying a preset condition.
7. The method of claim 5, wherein the transmitting the sub-data of at least one resolution corresponding to the image region based on the ranking position comprises:
in response to determining that the ranking position does not satisfy the preset condition and the image region is within a current field of view, transmitting the sub-data of the first resolution corresponding to the image region, wherein the current field of view is determined from field-of-view parameters received from the terminal.
8. The method of claim 7, wherein the transmitting the sub-data of at least one resolution corresponding to the image region based on the ranking position comprises:
transmitting sub-data of a second resolution corresponding to the image region in response to determining that the ranking position does not satisfy the preset condition and the image region is not within the current field of view.
9. A video data transmission apparatus comprising:
an acquisition unit configured to acquire encoded video data of an encoded target video, wherein the target video corresponds to a plurality of image regions, the plurality of image regions being obtained during encoding of the target video, and the encoded video data comprises sub-data of at least one resolution corresponding to each of the image regions;
a determination unit configured to determine a quantization parameter for each image region according to a current transmission mode; and
a transmission unit configured to transmit, for each image region, sub-data of at least one resolution corresponding to the image region according to the quantization parameter of the image region.
10. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
11. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-8.
CN202110512361.4A 2021-05-11 2021-05-11 Video data transmission method, video data transmission device, electronic equipment and medium Pending CN115334305A (en)


Publications (1)

Publication Number Publication Date
CN115334305A true CN115334305A (en) 2022-11-11

Family

ID=83912936



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination