WO2021056575A1 - Low-delay joint source-channel coding method and related device - Google Patents


Info

Publication number
WO2021056575A1
Authority
WO
WIPO (PCT)
Prior art keywords
code stream
reference frame
image
frame image
channel
Application number
PCT/CN2019/109220
Other languages
English (en)
Chinese (zh)
Inventor
李卫华
高林
郭湛
张亚凡
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Priority to CN201980100614.8A (published as CN114424552A)
Priority to PCT/CN2019/109220 (published as WO2021056575A1)
Publication of WO2021056575A1


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 — Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/40 — Network security protocols

Definitions

  • This application relates to the field of wireless transmission technology, and in particular to a low-delay joint source-channel coding method and related device.
  • With the development of wireless transmission technology, it has become possible to carry high-definition video over wireless networks. At the same time, carrying high-definition video places new requirements on image coding and decoding technology: because wireless channels are short-term and time-varying, the channel capacity can change rapidly, so the image codec algorithm must be able to quickly track and adapt to these changes in order to keep the delay and image quality at the receiving end at an acceptable level.
  • JPEG Joint Photographic Experts Group
  • MPEG Moving Picture Experts Group
  • The embodiments of the present application provide a low-latency joint source-channel coding method and related device. By optimizing image coding or the transmission mode, information loss during video transmission can be reduced, so that the receiving end is not left unable to play clear and complete video, improving the user's viewing experience.
  • The embodiments of the present application provide a low-delay joint source-channel coding method, which may include: down-sampling the target image to obtain a reference frame image and a non-reference frame image; encoding the reference frame image and the non-reference frame image separately to obtain a first code stream from encoding the reference frame image and a second code stream from encoding the non-reference frame image; and determining, based on the channel environment of the current wireless channel, a first channel resource and a second channel resource, where the first channel resource is used to send the first code stream, the second channel resource is used to send the second code stream, and the first channel resource is better than the second channel resource.
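As a rough illustration of the down-sampling step described above (the 2×2 polyphase split and all names below are assumptions for the sketch, not something the application mandates), a frame could be split into one half-resolution reference copy and three non-reference copies like this:

```python
def downsample(frame):
    """Split a 2-D image into four half-resolution polyphase copies.

    The copy at offset (0, 0) serves as the reference frame image;
    the other three serve as non-reference frame images.
    (The 2x2 split is an illustrative assumption.)
    """
    copies = {(dy, dx): [row[dx::2] for row in frame[dy::2]]
              for dy in (0, 1) for dx in (0, 1)}
    reference = copies[(0, 0)]
    non_reference = [copies[k] for k in ((0, 1), (1, 0), (1, 1))]
    return reference, non_reference
```

Each copy is a quarter of the original pixel count, so the code stream for any one copy is far smaller than the stream for the full frame, which is exactly the property the method relies on.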
  • Wireless video transmission mainly comprises several stages: video encoding, wireless transmission, video decoding, and display.
  • The encoding end can down-sample the target image to obtain the reference frame image and the non-reference frame image, and then perform image encoding on the reference frame image to obtain the first code stream.
  • The first channel resource and the second channel resource are determined according to the channel environment of the current wireless channel, and the first channel resource is better than the second channel resource; that is, during wireless transmission the code stream of the reference frame image is given the better channel resource.
  • The first code stream and the second code stream are sent hierarchically. For example, the priority of the first code stream can be set higher than that of the second code stream, and the two streams are then sent separately; given the channel environment of the current wireless channel, the quality of service of the channel when sending the first code stream is higher than when sending the second code stream.
  • The encoding end can send the more important first code stream first and then send the lower-priority second code stream; or, when the channel environment of the current wireless channel is poor, the encoding end can send the lower-priority second code stream only after the first code stream has been sent (or after a period of time has elapsed since sending it). By sending the first and second code streams separately based on the channel environment of the current wireless channel, the probability that the receiving end receives the first code stream is improved even when it cannot receive the complete code stream, and a complete reconstructed image can be obtained from the first code stream alone.
  • The target image is down-sampled into multiple lower-resolution copies before image encoding, which makes the resulting code stream smaller than the code stream obtained by directly encoding the target image, so it is easier for the decoder at the receiving end to receive the complete code stream. After the encoding end encodes the down-sampled copies, the code stream information is sent over the wireless channel according to the channel environment of the current wireless channel, so that it can still adapt to the real-time changing channel environment. This further reduces the probability of information loss during wireless transmission, avoids the receiving end being unable to reconstruct a complete video image due to information loss, and improves the user's viewing experience.
  • The above-mentioned image encoding performed separately on the reference frame image and the non-reference frame image, to obtain the first code stream from encoding the reference frame image and the second code stream from encoding the non-reference frame image, includes: performing intra-frame compression on each image included in the reference frame image to obtain the first code stream; and performing intra-frame compression on the residual between the non-reference frame image and the reference frame image to obtain the corresponding second code stream.
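A minimal sketch of this encoding split (the `intra_encode` callable is a stand-in assumption for any intra-frame compressor, e.g. a JPEG-style codec; the application only requires that it be intra-frame):

```python
def encode_layers(reference, non_references, intra_encode):
    """Produce the first and second code streams.

    The reference copy is intra-coded directly; each non-reference
    copy is intra-coded as its pixel-wise residual against the
    reference, which is typically sparser and cheaper to compress.
    """
    first_stream = intra_encode(reference)
    second_streams = []
    for copy in non_references:
        # Residual of the non-reference copy against the reference.
        residual = [[c - r for c, r in zip(crow, rrow)]
                    for crow, rrow in zip(copy, reference)]
        second_streams.append(intra_encode(residual))
    return first_stream, second_streams
```

Because neighbouring polyphase copies are highly correlated, the residuals are near zero almost everywhere, so intra-coding them is cheap even without inter-frame prediction.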
  • The encoding end can select a relatively simple down-sampling operation to layer the target image and reduce its resolution, and use an intra-frame compression encoding method when encoding the layered copy images, because intra-frame compression is spatial compression.
  • The embodiment of the present application does not use inter-frame compression when performing image encoding on the down-sampled copy images.
  • With intra-frame compression, even if the channel capacity jitters and drops below the initial bit rate, the decoder at the receiving end can still receive the code stream information encoded from the smaller-resolution copy image and obtain a reconstructed image from it. Thus, when the channel capacity is reduced, the probability of information loss during transmission is reduced, mosaic, stuttering, and blurring of the video image are avoided, and the user's viewing experience is improved.
  • The above-mentioned channel environment also includes the channel capacity. Performing image coding on the reference frame image and the non-reference frame image to obtain the first code stream and the second code stream further includes: determining, based on the channel capacity, the coding parameters corresponding to the reference frame image and the non-reference frame image, where the coding parameters are used to control the corresponding image to generate a corresponding code stream according to a target code rate during image encoding, and the target code rates corresponding to the first code stream and the second code stream are each less than or equal to the channel capacity.
  • When the channel capacity changes, the encoding end can obtain the channel capacity of the current channel through the transmitter and adjust the encoder's coding parameters for the copy images according to the fed-back channel capacity information, so that the code rate of the encoded stream adapts to the current channel capacity, the target code stream can be sent smoothly, and loss of the code stream due to reduced channel capacity is reduced.
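One way this rate adaptation could be sketched (the function name and the `ref_weight` knob are assumptions; the application only requires that the combined target rate stay at or below the fed-back capacity):

```python
def pick_target_rates(channel_capacity_kbps, ref_weight=0.5):
    """Split the fed-back channel capacity into per-stream target
    code rates so the combined rate never exceeds capacity.

    `ref_weight` reserves a fraction of the capacity for the more
    important reference-frame stream; the remainder goes to the
    non-reference stream.
    """
    assert 0 < ref_weight < 1
    ref_rate = channel_capacity_kbps * ref_weight
    non_ref_rate = channel_capacity_kbps - ref_rate
    return ref_rate, non_ref_rate
```

In practice the encoder would re-run this whenever the transmitter feeds back a new capacity estimate and pass the resulting targets into the intra-frame codec's rate control.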
  • The foregoing channel environment includes one or more of channel bandwidth, signal-to-interference-plus-noise ratio, signal-to-noise ratio, received signal strength indicator, duty cycle, and bit rate. Determining the first channel resource and the second channel resource based on the current channel environment, and using the first channel resource to send the first code stream and the second channel resource to send the second code stream, includes: determining, based on the current channel environment, the transmission parameters corresponding to the first code stream and/or the second code stream, where the transmission parameters are used to transmit a code stream according to target modulation and coding scheme information and/or a target transmission power, and the transmission parameters of the first code stream are better than those of the second code stream; sending the first code stream according to its corresponding transmission parameters, and sending the second code stream according to its corresponding transmission parameters.
  • When the channel changes, the encoding end can obtain the channel environment of the current wireless channel through the transmitter (for example, channel bandwidth, signal-to-interference-plus-noise ratio, signal-to-noise ratio, received signal strength indicator, duty cycle, or bit rate) and adjust, according to the channel environment, the transmission parameters (such as transmission power and modulation and coding scheme information) used when sending the first code stream and/or the second code stream. After the transmission parameters are adjusted according to the channel information, the transmission parameters of the first code stream differ from those of the second code stream, and the first code stream's parameters allow the transmitter to favor it over the second code stream.
  • The first code stream is sent at a faster transmission speed and with a higher quality of service (QoS). During wireless transmission, the channel resource of the first code stream is therefore better than that of the second code stream; that is, the transmission quality when sending the first code stream is higher than when sending the second code stream, or the more important code stream (that is, the code stream obtained after image encoding of a small number of copy images) is sent preferentially. Even if channel capacity jitter or other impairments prevent the decoder at the receiving end from receiving the complete code stream, the characteristics of intra-frame compression allow the decoder to obtain a reconstructed image from only the partial code stream received first, avoiding the receiving end being unable to reconstruct the complete video image due to information loss, reducing mosaic, stuttering, and blurring of the video, and improving the user's viewing experience.
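A hedged sketch of per-stream transmission-parameter selection (the SNR-threshold table, MCS indices, and power values below are illustrative assumptions; real values depend on the PHY and are not specified by the application):

```python
# Illustrative (SNR threshold in dB -> MCS index) table, highest first.
MCS_TABLE = [(25, 7), (18, 5), (10, 3), (0, 1)]

def pick_tx_params(snr_db, important):
    """Choose a modulation-and-coding scheme and transmit power.

    One plausible reading of "better transmission parameters": the
    reference (first) code stream gets a more robust MCS (one step
    lower) and higher transmit power, so it survives channel dips
    that may drop the second stream.
    """
    mcs = next((m for thresh, m in MCS_TABLE if snr_db >= thresh), 1)
    if important:
        mcs = max(1, mcs - 1)   # more robust modulation for stream 1
        power_dbm = 20          # assumed higher power
    else:
        power_dbm = 17
    return mcs, power_dbm
```

The transmitter would re-evaluate this whenever the measured channel environment (SNR, SINR, RSSI, etc.) changes.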
  • QoS quality of service
  • The wireless channel includes a first wireless channel and a second wireless channel. Determining the first channel resource and the second channel resource based on the current channel environment, and using them to send the first code stream and the second code stream respectively, includes: determining the first wireless channel and the second wireless channel based on the current channel environment, sending the first code stream through the first wireless channel, and sending the second code stream through the second wireless channel, where the quality-of-service guarantee mechanism of the first wireless channel is higher than that of the second wireless channel.
  • During wireless transmission, the encoding end may classify the video source so that the first code stream is transmitted with better channel resources than the second code stream.
  • the first code stream is mapped to a video (Video, VI) service
  • the second code stream is mapped to a best effort (Best Effort, BE) service.
  • The first code stream and the second code stream sent through the wireless channel use different QoS guarantee levels, which increases the probability that the receiving end receives the first code stream. Even if the second code stream is lost, the decoder at the receiving end can still reconstruct the image from the first code stream alone, avoiding the receiving end being unable to reconstruct the complete video image due to information loss, reducing mosaic, stuttering, and blurring of the video, and improving the user's viewing experience.
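The VI/BE classification above maps naturally onto Wi-Fi Multimedia access categories. A minimal sketch, assuming 802.11e/WMM user priorities (UP 5 maps to AC_VI, UP 0 to AC_BE; the function name is an assumption):

```python
# 802.11e/WMM user priorities: 5 -> AC_VI (video access category),
# 0 -> AC_BE (best effort).
UP_VIDEO = 5
UP_BEST_EFFORT = 0

def classify_stream(stream_kind):
    """Map a code stream to a QoS user priority.

    Reference-frame streams (and audio) ride the video access
    category; non-reference streams take best effort, so under
    congestion the non-reference streams are dropped first.
    """
    if stream_kind in ("reference", "audio"):
        return UP_VIDEO
    return UP_BEST_EFFORT
```

On a real system the returned user priority would be attached to outgoing packets (e.g. via a socket priority option), which is how the two streams end up with different QoS guarantee levels on the same radio.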
  • The above target image is any one of the multiple frames of target images included in the target video. The method further includes: acquiring the code stream of the reference frame image corresponding to each frame of the multiple frames of target images included in the target video; acquiring the audio information in the target video; and sending, through the first wireless channel, the code stream of the reference frame image corresponding to each frame of the target video together with the audio information.
  • After the encoding end obtains the code streams of the reference frame images corresponding to all target images in the target video, together with the audio information, they are mapped to the VI service, which has a high quality-of-service guarantee mechanism, for transmission.
  • The above audio information may be audio information that has been speech-encoded.
  • The decoder at the receiving end can reconstruct the video based only on the code streams and audio information corresponding to all target images received through the channel with the high quality-of-service guarantee mechanism. This increases the probability that the receiving end receives the first code stream, avoids the receiving end being unable to reconstruct the complete video image due to information loss, reduces silence and freezing in the video, and improves the user's viewing experience.
  • The above target image is any one of the multiple frames of target images included in the target video. The method further includes: acquiring the code stream of the non-reference frame image corresponding to each frame of the multiple frames of target images included in the target video; and sending, through the second wireless channel, the code stream of the non-reference frame image corresponding to each frame of the target video.
  • After acquiring the code streams of the non-reference frame images corresponding to all target images in the target video, the encoding end may map them separately to the BE service, which has a slightly lower quality-of-service guarantee mechanism, for transmission. Even if these code streams lose part or even all of their information when the channel changes, the decoder can still reconstruct the video based only on the code streams and audio information corresponding to the first copy (the reference frame images) received through the channel with the high quality-of-service guarantee mechanism, preventing the receiving end from being unable to reconstruct the complete video image due to information loss, reducing mosaic, stuttering, and blurring of the video, and improving the user's viewing experience.
  • The embodiments of the present application provide another low-delay joint source-channel coding method, which may include: down-sampling the target image to obtain reference frame images and non-reference frame images; determining, according to the channel environment of the current wireless channel, the coding parameters corresponding to the reference frame image and the non-reference frame image; performing, according to the coding parameters, image encoding on the reference frame image and the non-reference frame image separately to obtain the first code stream from encoding the reference frame image and the second code stream from encoding the non-reference frame image; and sending the first code stream and the second code stream separately.
  • The encoding end may first down-sample the target image to obtain the reference frame image and the non-reference frame image; then, according to the channel environment of the current wireless channel, determine the coding parameters corresponding to the reference frame image and the non-reference frame image, and use those coding parameters to encode both images so that the code rates of the first code stream and the second code stream are lower than the channel capacity of the current wireless channel; finally, the first code stream and the second code stream are each sent out through the wireless channel.
  • Adjusting the coding parameters during image encoding keeps the code rate of the generated code stream below the channel capacity of the current wireless channel, so that during wireless transmission the complete code stream is not lost because the channel capacity is too low, jitters, or drops below the initial code rate (for example, the initial code rate may be the code rate obtained when the target image is compressed according to a traditional encoding method, or a preset code rate).
  • The image coding and decoding algorithm of the embodiment of the present application can quickly track and adapt to changes in the channel environment of the wireless channel, so that the delay and image quality at the receiving end are maintained at an acceptable level.
  • The encoding end first down-samples the target image and performs image encoding only after obtaining multiple lower-resolution copies, which makes the resulting code stream smaller than the code stream obtained by directly encoding the target image, so it is easier for the decoder at the receiving end to receive the complete code stream.
  • The code stream information obtained by encoding the image according to the channel information of the current wireless channel can adapt to the real-time changing channel environment, thereby reducing the probability of information loss during transmission and preventing the receiving end from being unable to reconstruct complete information due to information loss.
  • The aforementioned coding parameters are used to control the corresponding image to generate a corresponding code stream according to the target code rate when that image is encoded. Performing image encoding on the reference frame image and the non-reference frame image separately according to the coding parameters, to obtain the first code stream from encoding the reference frame image and the second code stream from encoding the non-reference frame image, includes: performing intra-frame compression, according to the target code rate, on each image included in the reference frame image to obtain the first code stream; and performing intra-frame compression, according to the target code rate, on the residual between each image included in the non-reference frame image and the images included in the reference frame image to obtain the corresponding second code stream.
  • The encoding end can select a relatively simple down-sampling operation to layer the target image and reduce its resolution, and use an intra-frame compression encoding method when encoding the layered copy images, because intra-frame compression is spatial compression.
  • Because the encoding end uses intra-frame compression on the down-sampled copy images, even if the channel capacity jitters and drops below the initial bit rate, the decoder at the receiving end can still receive the code stream information of the smaller-resolution copy image and obtain a reconstructed image from that code stream information.
  • Thus, when the channel capacity is reduced, the probability of information loss during transmission is reduced, mosaic, stuttering, and blurring of the video image are avoided, and the user's viewing experience is improved.
  • The above-mentioned channel environment includes one or more of the channel bandwidth, signal-to-interference-plus-noise ratio, signal-to-noise ratio, received signal strength indicator, duty cycle, and bit rate.
  • The method further includes: determining, according to the channel environment, the transmission parameters corresponding to the first code stream and/or the second code stream, where the transmission parameters are used to transmit a code stream according to the target modulation and coding scheme information and/or the target transmission power, and the transmission parameters of the first code stream are better than those of the second code stream.
  • When the channel changes, the encoding end can obtain the channel environment of the current wireless channel through the transmitter and adjust, according to the channel environment, the transmission parameters used when sending the first code stream and/or the second code stream, such as transmission power and modulation and coding scheme information.
  • After adjustment, the transmission parameters of the first code stream differ from those of the second code stream and enable the transmitter to send the first code stream at a faster transmission speed and with a higher quality of service (QoS). The encoding end thus sends the first code stream using channel resources better than those of the second code stream; that is, the transmission quality when sending the first code stream is higher than when sending the second code stream, or the more important code stream (that is, the code stream obtained after image encoding of a small number of copy images) is sent first, so that the decoder at the receiving end can still recover the image even when channel capacity jitter prevents it from receiving the complete code stream.
  • The above wireless channel includes a first wireless channel and a second wireless channel. Sending the first code stream and the second code stream separately includes: sending the first code stream through the first wireless channel and sending the second code stream through the second wireless channel, where the quality-of-service guarantee mechanism of the first wireless channel is higher than that of the second wireless channel.
  • During wireless transmission, the encoding end may classify the video source so that the first code stream is transmitted with better channel resources than the second code stream.
  • The first code stream and the second code stream sent by the encoding end through the wireless channel use different QoS guarantee levels, which increases the probability that the receiving end receives the first code stream. Even if the second code stream is lost, the decoder at the receiving end can still reconstruct the image from the first code stream alone, avoiding the receiving end being unable to reconstruct the complete video image due to information loss, reducing mosaic, stuttering, and blurring of the video, and improving the user's viewing experience.
  • The above target image is any one of the multiple frames of target images included in the target video. The method further includes: acquiring the code stream of the reference frame image corresponding to each frame of the multiple frames of target images included in the target video; acquiring the audio information in the target video; and sending, through the first wireless channel, the code stream of the reference frame image corresponding to each frame of the target video together with the audio information.
  • After the encoding end obtains the code streams of the reference frame images corresponding to all target images in the target video, together with the audio information, they are mapped to the VI service, which has a high quality-of-service guarantee mechanism, for transmission.
  • The aforementioned audio information may be speech-encoded audio information.
  • The decoder at the receiving end can reconstruct the video based only on the code streams and audio information corresponding to all target images received through the channel with the high quality-of-service guarantee mechanism. This increases the probability that the receiving end receives the first code stream, avoids the receiving end being unable to reconstruct the complete video image due to information loss, reduces silence and freezing in the video, and improves the user's viewing experience.
  • The above target image is any one of the multiple frames of target images included in the target video. The method further includes: acquiring the code stream of the non-reference frame image corresponding to each frame of the multiple frames of target images included in the target video; and sending, through the second wireless channel, the code stream of the non-reference frame image corresponding to each frame of the target video.
  • After acquiring the code streams of the non-reference frame images corresponding to all target images in the target video, the encoding end may map them separately to the BE service, which has a slightly lower quality-of-service guarantee mechanism, for transmission. Therefore, even if these code streams lose part or even all of their information when the channel changes, the decoder can still reconstruct the video based only on the code streams and audio information corresponding to the first copy (the reference frame images) received through the channel with the high quality-of-service guarantee mechanism, preventing the receiving end from being unable to reconstruct the complete video image due to information loss, reducing mosaic, stuttering, and blurring of the video, and improving the user's viewing experience.
  • The embodiments of the present application provide a low-delay joint source-channel decoding method, which may include: receiving a first code stream sent by an encoding end, where the first code stream is a code stream obtained after image encoding of a reference frame image, and the reference frame image includes one or more images obtained after down-sampling the target image; decoding the first code stream to obtain the reference frame image corresponding to the first code stream; and reconstructing the target image according to the reference frame image.
  • The decoding end (i.e., the receiving end) can receive the first code stream sent by the encoding end, decode the first code stream to obtain the reference frame image corresponding to the first code stream, and then reconstruct the target image based on the reference frame image.
  • The first code stream is the code stream obtained after image encoding of the reference frame image, where the reference frame image includes one or more images obtained after down-sampling the target image. Therefore, the code stream received by the receiving end is obtained by down-sampling the target image and then performing image encoding.
  • Down-sampling can split the target image into multiple lower-resolution copies, some of which can be used as the reference frame image.
  • The code stream obtained by down-sampling and then encoding the target image is smaller than the code stream obtained by directly encoding the target image. Therefore, the code stream of the down-sampled target image can better adapt to changes in the wireless channel, reduce the probability of information loss during transmission, and avoid the receiving end being unable to reconstruct the complete video image due to information loss.
  • The target image is any one of multiple target images included in the target video. Reconstructing the target image according to the reference frame image includes: reconstructing the target image by an interpolation algorithm, according to the reference frame image and at least one of the reference frame image and the non-reference frame image corresponding to an adjacent frame of target image.
  • When the receiving end receives only the first code stream, it can use an interpolation algorithm to reconstruct the target image from the reference frame image corresponding to the first code stream, together with the reference frame image of the previous frame of target image.
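A minimal sketch of reconstructing a frame from the reference copy alone. Nearest-neighbour upsampling is used here purely as the simplest possible interpolation (the application does not name a specific algorithm; bilinear or a neighbouring-frame-assisted interpolation would equally fit):

```python
def reconstruct_from_reference(reference):
    """Rebuild a full-resolution frame from a half-resolution
    reference copy alone, via nearest-neighbour interpolation:
    each reference pixel fills a 2x2 block of the output.
    """
    full = []
    for row in reference:
        expanded = [p for p in row for _ in (0, 1)]  # duplicate columns
        full.append(expanded)
        full.append(list(expanded))                  # duplicate rows
    return full
```

This is why losing the second code stream degrades only sharpness, not completeness: a full-size, watchable frame is always recoverable from the first stream.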
  • In this way, when the receiving end receives only the first code stream, it can still avoid being unable to reconstruct the complete video image due to information loss, reducing silence and freezing in the video and improving the user's viewing experience.
  • The method further includes: receiving, within a preset time period after receiving the first code stream, a second code stream sent by the encoding end, where the second code stream is the code stream obtained after image encoding of the non-reference frame image, and the non-reference frame image includes the remaining images, other than the reference frame image, obtained after down-sampling the target image; if the second code stream is incomplete, decoding the second code stream to obtain the corresponding incomplete non-reference frame image, and determining the peripheral pixels of the incomplete non-reference frame image from it. Reconstructing the target image according to the reference frame image then includes: reconstructing the target image by an interpolation algorithm according to the reference frame image and the peripheral pixels of the incomplete non-reference frame image.
  • the receiving end can determine whether the second code stream arrives within the preset time period after receiving the first code stream. If not, it reconstructs the target image according to the first code stream and at least one of the reference frame image and non-reference frame image corresponding to an adjacent frame of the target image. If the second code stream is received but incomplete, the incomplete second code stream and the complete first code stream can be used for interpolation to obtain the target image. This avoids the receiving end being unable to reconstruct the complete video image because of partial information loss, reduces audio dropouts and freezes in the video, and improves the user's perceived experience.
  • the method further includes: if the second code stream is determined to be complete, image-decoding the second code stream to obtain the corresponding non-reference frame image. Reconstructing the target image according to the reference frame image then includes: splicing the reference frame image and the non-reference frame image to reconstruct the target image.
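The splicing of reference and non-reference frame images can be sketched as the inverse of a polyphase down-sampling; the four-phase layout below is an illustrative assumption, not the claimed method.

```python
def polyphase_merge(copies):
    """Interleave four half-resolution copies (reference frame first,
    then the non-reference frames, in phase order (0,0), (0,1), (1,0),
    (1,1)) back into the full-resolution target image."""
    h = len(copies[0]) * 2
    w = len(copies[0][0]) * 2
    out = [[0] * w for _ in range(h)]
    for idx, sub in enumerate(copies):
        dy, dx = divmod(idx, 2)  # phase offset of this copy
        for r, row in enumerate(sub):
            for c, p in enumerate(row):
                out[2 * r + dy][2 * c + dx] = p
    return out

# reference frame plus three non-reference frames of a 4x4 image
copies = [[[0, 2], [20, 22]], [[1, 3], [21, 23]],
          [[10, 12], [30, 32]], [[11, 13], [31, 33]]]
target = polyphase_merge(copies)
```

When all copies arrive intact, this splice is lossless; the interpolation paths above are only needed when some copies are missing or damaged.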
  • an embodiment of the present application provides a low-delay source-channel joint coding device, comprising an encoder and a transmitter. The encoder is used to down-sample the target image to obtain a reference frame image and a non-reference frame image, and to image-encode the reference frame image and the non-reference frame image to obtain the first code stream from encoding the reference frame image and the second code stream from encoding the non-reference frame image. The transmitter is used to determine a first channel resource and a second channel resource based on the channel environment of the current wireless channel, and to send the first code stream using the first channel resource and the second code stream using the second channel resource, where the first channel resource is better than the second channel resource.
  • when the encoder image-encodes the reference frame image and the non-reference frame image to obtain the first code stream from encoding the reference frame image and the second code stream from encoding the non-reference frame image, the encoder is specifically configured to: perform intra-frame compression on each image included in the reference frame image to obtain the first code stream; and perform the intra-frame compression on the residual between the non-reference frame image and the reference frame image to obtain the corresponding second code stream.
  • the channel environment also includes channel capacity. Before image-encoding the reference frame image and the non-reference frame image to obtain the first code stream and the second code stream, the transmitter is also used to determine, based on the channel capacity, the encoding parameters corresponding to the reference frame image and the non-reference frame image, where the encoding parameters control the corresponding image so that the corresponding code stream is generated at a target code rate when the image is encoded, and the target code rates of the first code stream and the second code stream are both less than or equal to the channel capacity.
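One hypothetical rate-control rule that keeps both target code rates within the measured channel capacity, with the reference-frame stream favored (the 60/40 split ratio is an assumption, not a value from this application):

```python
def allocate_bitrates(capacity_kbps, ref_share=0.6):
    """Split the measured channel capacity between the two streams so
    that their combined target code rate never exceeds the capacity,
    with the reference-frame stream (first code stream) getting the
    larger share. Hypothetical policy for illustration."""
    ref_rate = capacity_kbps * ref_share
    non_ref_rate = capacity_kbps - ref_rate
    return ref_rate, non_ref_rate
```

As capacity drops, both target rates shrink with it, which is how the encoder tracks the short-time variation of the wireless channel.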
  • the above-mentioned channel environment includes one or more of channel bandwidth, signal to interference plus noise ratio, signal-to-noise ratio, received signal strength indicator, duty cycle, and bit rate;
  • when the transmitter is used to determine the first channel resource and the second channel resource based on the current channel environment, and to send the first code stream using the first channel resource and the second code stream using the second channel resource, it is specifically used to: determine, based on the current channel environment, the transmission parameters corresponding to the first code stream and/or the second code stream, where the transmission parameters specify target modulation and coding strategy information and/or a target transmission power for sending a code stream, and the transmission parameters of the first code stream are better than those of the second code stream; and send the first code stream according to its transmission parameters and the second code stream according to its transmission parameters.
  • the wireless channel includes a first wireless channel and a second wireless channel. When the transmitter is used to determine the first channel resource and the second channel resource based on the current channel environment, and to send the first code stream using the first channel resource and the second code stream using the second channel resource, it is specifically used to: determine the first wireless channel and the second wireless channel based on the current channel environment, send the first code stream through the first wireless channel, and send the second code stream through the second wireless channel, where the quality-of-service guarantee mechanism of the first wireless channel is higher than that of the second wireless channel.
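A sketch of how transmission parameters might be derived from the channel environment: the SINR thresholds and MCS names below are illustrative assumptions, not values taken from this application.

```python
# Hypothetical SINR thresholds (dB) mapped to modulation and coding schemes,
# highest-rate first.
MCS_TABLE = [(25, "64QAM-3/4"), (18, "16QAM-3/4"), (10, "QPSK-1/2"), (0, "BPSK-1/2")]

def pick_mcs(sinr_db):
    """Pick the highest-rate MCS whose SINR threshold is met. The first
    code stream could deliberately be sent with a more robust
    (lower-threshold) entry than the channel strictly requires."""
    for threshold, mcs in MCS_TABLE:
        if sinr_db >= threshold:
            return mcs
    return "BPSK-1/2"  # fall back to the most robust scheme
```

Under this policy, a drop in measured SINR immediately shifts both streams to more robust schemes, with the reference-frame stream always at least as protected as the non-reference stream.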
  • the target image is any one of the multiple frames of target images included in the target video. The encoder is further used to: obtain, for each frame of target image in the target video, the code stream of the corresponding reference frame image, and obtain the audio information in the target video. The transmitter is also used to: send, through the first wireless channel, the code stream of the reference frame image corresponding to each frame of the target video together with the audio information.
  • the target image is any one of the multiple frames of target images included in the target video. The encoder is further used to: obtain, for each frame of target image in the target video, the code stream of the corresponding non-reference frame image. The transmitter is further used to: send, through the second wireless channel, the code stream of the non-reference frame image corresponding to each frame of the target video.
  • an embodiment of the present application provides another low-delay source-channel joint coding device, comprising an encoder and a transmitter. The encoder is used to down-sample the target image to obtain a reference frame image and a non-reference frame image; determine, according to the channel environment of the current wireless channel, the encoding parameters corresponding to the reference frame image and the non-reference frame image; and, according to the encoding parameters, image-encode the reference frame image and the non-reference frame image to obtain the first code stream from encoding the reference frame image and the second code stream from encoding the non-reference frame image. The transmitter is used to send the first code stream and the second code stream respectively.
  • the encoding parameters control the corresponding image so that, during image encoding, the corresponding code stream is generated at a target code rate. When the encoder encodes the reference frame image and the non-reference frame image separately according to the encoding parameters to obtain the first code stream and the second code stream, it is specifically used to: perform intra-frame compression on each image in the reference frame image at the target code rate to obtain the first code stream; and perform the intra-frame compression on the residual between the non-reference frame image and the reference frame image at the target code rate to obtain the corresponding second code stream.
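The residual mentioned above is simply the per-pixel difference between the non-reference frame and the reference frame; because the two down-sampled copies are highly similar, the residual is typically small-valued and compresses well. A minimal sketch:

```python
def residual(non_ref, ref):
    """Per-pixel residual of a non-reference frame against the reference
    frame. It is this difference, not the frame itself, that undergoes
    intra-frame compression in the scheme described above."""
    return [[a - b for a, b in zip(row_n, row_r)]
            for row_n, row_r in zip(non_ref, ref)]
```

The decoder reverses the step by adding the decoded residual back onto the decoded reference frame.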
  • the foregoing channel environment includes one or more of channel bandwidth, signal to interference plus noise ratio, signal-to-noise ratio, received signal strength indicator, duty cycle, and bit rate;
  • when the transmitter is used to send the first code stream and the second code stream respectively, it is also used to: determine, according to the channel environment, the transmission parameters corresponding to the first code stream and/or the second code stream, where the transmission parameters specify target modulation and coding strategy information and/or a target transmission power for sending a code stream, and the transmission parameters of the first code stream are better than those of the second code stream.
  • the wireless channel includes a first wireless channel and a second wireless channel. When the transmitter is used to send the first code stream and the second code stream, it is specifically used to: send the first code stream through the first wireless channel and the second code stream through the second wireless channel, where the quality-of-service guarantee mechanism of the first wireless channel is higher than that of the second wireless channel.
  • the target image is any one of the multiple frames of target images included in the target video. The encoder is further used to: obtain, for each frame of target image in the target video, the code stream of the corresponding reference frame image, and obtain the audio information in the target video. The transmitter is also used to: send, through the first wireless channel, the code stream of the reference frame image corresponding to each frame of the target video together with the audio information.
  • the target image is any one of the multiple frames of target images included in the target video. The encoder is further used to: obtain, for each frame of target image in the target video, the code stream of the corresponding non-reference frame image. The transmitter is further used to: send, through the second wireless channel, the code stream of the non-reference frame image corresponding to each frame of the target video.
  • an embodiment of the present application provides a low-delay source-channel joint coding device, comprising: a first sampling unit, used to down-sample a target image to obtain a reference frame image and a non-reference frame image; a first coding unit, used to image-encode the reference frame image and the non-reference frame image respectively, to obtain the first code stream from encoding the reference frame image and the second code stream from encoding the non-reference frame image; and a first sending unit, used to determine the first channel resource and the second channel resource based on the channel environment of the current wireless channel, and to send the first code stream using the first channel resource and the second code stream using the second channel resource, where the first channel resource is better than the second channel resource.
  • the first encoding unit is specifically configured to: perform intra-frame compression on each image included in the reference frame image to obtain the first code stream; and perform the intra-frame compression on the residual between each non-reference frame image and the images included in the reference frame image to obtain the corresponding second code stream.
  • the channel environment further includes channel capacity. The device further includes a first encoding parameter unit, configured to determine, based on the channel capacity and before the reference frame image and the non-reference frame image are image-encoded into the first code stream and the second code stream, the encoding parameters corresponding to the reference frame image and the non-reference frame image, where the encoding parameters control the corresponding image so that a corresponding code stream is generated at the target code rate when the image is encoded, and the target code rates of the first code stream and the second code stream are both less than or equal to the channel capacity.
  • the foregoing channel environment includes one or more of channel bandwidth, signal to interference plus noise ratio, signal-to-noise ratio, received signal strength indicator, duty cycle, and bit rate;
  • the first sending unit is specifically configured to: determine, based on the current channel environment, the transmission parameters corresponding to the first code stream and/or the second code stream, where the transmission parameters specify target modulation and coding strategy information and/or a target transmission power for sending a code stream, and the transmission parameters of the first code stream are better than those of the second code stream; and send the first code stream according to its transmission parameters and the second code stream according to its transmission parameters.
  • the wireless channel includes a first wireless channel and a second wireless channel. The first sending unit is specifically configured to: determine the first wireless channel and the second wireless channel based on the current channel environment, send the first code stream through the first wireless channel, and send the second code stream through the second wireless channel, where the quality-of-service guarantee mechanism of the first wireless channel is higher than that of the second wireless channel.
  • the target image is any one of the multiple frames of target images included in the target video. The device further includes: a first acquisition unit, configured to obtain, for each frame of target image in the target video, the code stream of the corresponding reference frame image, and to obtain the audio information in the target video; and a second sending unit, used to send, through the first wireless channel, the code stream of the reference frame image corresponding to each frame of the target video together with the audio information.
  • the target image is any one of the multiple frames of target images included in the target video. The device further includes: a second acquiring unit, configured to obtain, for each frame of target image in the target video, the code stream of the corresponding non-reference frame image; and a third sending unit, used to send, through the second wireless channel, the code stream of the non-reference frame image corresponding to each frame of the target video.
  • an embodiment of the present application provides another low-delay source-channel joint coding device, comprising: a second sampling unit, used to down-sample the target image to obtain a reference frame image and a non-reference frame image; a second coding parameter unit, used to determine the encoding parameters corresponding to the reference frame image and the non-reference frame image according to the channel environment of the current wireless channel; a second coding unit, used to encode the reference frame image and the non-reference frame image separately according to the encoding parameters, to obtain the first code stream from encoding the reference frame image and the second code stream from encoding the non-reference frame image; and a fourth sending unit, used to send the first code stream and the second code stream respectively.
  • the encoding parameters control the corresponding image so that, during image encoding, the corresponding code stream is generated at the target code rate. The second encoding unit is specifically used to: perform intra-frame compression on each image in the reference frame image at the target code rate to obtain the first code stream; and perform the intra-frame compression on the residual between the non-reference frame image and the reference frame image at the target code rate to obtain the corresponding second code stream.
  • the foregoing channel environment includes one or more of channel bandwidth, signal to interference plus noise ratio, signal-to-noise ratio, received signal strength indicator, duty cycle, and bit rate;
  • when the fourth sending unit is configured to send the first code stream and the second code stream separately, it is also used to: determine, according to the channel environment, the transmission parameters corresponding to the first code stream and/or the second code stream, where the transmission parameters specify target modulation and coding strategy information and/or a target transmission power for sending a code stream, and the transmission parameters of the first code stream are better than those of the second code stream.
  • the wireless channel includes a first wireless channel and a second wireless channel. The fourth sending unit is specifically configured to: send the first code stream through the first wireless channel and the second code stream through the second wireless channel, where the quality-of-service guarantee mechanism of the first wireless channel is higher than that of the second wireless channel.
  • the target image is any one of the multiple frames of target images included in the target video. The device further includes: a third acquisition unit, configured to obtain, for each frame of target image in the target video, the code stream of the corresponding reference frame image, and to obtain the audio information in the target video; and a fifth sending unit, used to send, through the first wireless channel, the code stream of the reference frame image corresponding to each frame of the target video together with the audio information.
  • the target image is any one of the multiple frames of target images included in the target video. The device further includes: a fourth acquisition unit, configured to obtain, for each frame of target image in the target video, the code stream of the corresponding non-reference frame image; and a sixth sending unit, configured to send, through the second wireless channel, the code stream of the non-reference frame image corresponding to each frame of the target video.
  • an embodiment of the present application provides a low-delay source-channel joint decoding device, comprising: a receiving unit, configured to receive a first code stream sent by an encoding end, where the first code stream is the code stream obtained by image-encoding the reference frame image, and the reference frame image includes one or more images obtained by down-sampling the target image; a decoding unit, used to decode the first code stream to obtain the reference frame image corresponding to the first code stream; and an image unit, used to reconstruct the target image according to the reference frame image.
  • the target image is any one of multiple frames of target images included in the target video. The image unit is specifically used to: reconstruct the target image through an interpolation algorithm, according to the reference frame image and at least one of the reference frame image and non-reference frame image corresponding to an adjacent frame of the target image.
  • the device further includes a fifth acquiring unit, configured to: receive the second code stream sent by the encoding end within a preset time period after receiving the first code stream, where the second code stream is the code stream obtained by encoding the non-reference frame image, and the non-reference frame image includes the images, other than the reference frame image, obtained by down-sampling the target image; if the second code stream is incomplete, decode the second code stream to obtain the corresponding incomplete non-reference frame image; and determine the peripheral pixels of the incomplete non-reference frame image from it. The image unit is specifically used to: reconstruct the target image through an interpolation algorithm based on the reference frame image and the peripheral pixels of the incomplete non-reference frame image.
  • the device further includes a sixth acquiring unit, configured to: if the second code stream is determined to be complete, image-decode the second code stream to obtain the corresponding non-reference frame image. The image unit is specifically used to: splice the reference frame image and the non-reference frame image to reconstruct the target image.
  • an embodiment of the present application provides a service device. The service device includes a processor configured to support the service device in executing the corresponding functions of the low-delay source-channel joint coding method provided in the first aspect, or of the other low-delay source-channel joint coding method provided in the second aspect.
  • the service device may further include a memory, which is used for coupling with the processor, and stores the necessary program instructions and data of the service device.
  • the service device may also include a communication interface for the service device to communicate with other devices or a communication network.
  • an embodiment of the present application provides a service device. The service device includes a processor configured to support the service device in executing the corresponding functions of the low-delay source-channel joint decoding method provided in the third aspect.
  • the service device may further include a memory, which is used for coupling with the processor, and stores the necessary program instructions and data of the service device.
  • the service device may also include a communication interface for the service device to communicate with other devices or a communication network.
  • an embodiment of the present application provides a computer program that includes instructions which, when the computer program is executed by a computer, enable the computer to execute the process performed by the low-delay source-channel joint coding device provided in the fourth aspect.
  • an embodiment of the present application provides a computer program that includes instructions which, when the computer program is executed by a computer, enable the computer to execute the process performed by the low-delay source-channel joint decoding device provided in the sixth aspect.
  • an embodiment of the present application provides a computer storage medium for storing the computer software instructions used by the low-delay source-channel joint coding device provided in the fourth aspect, the other low-delay source-channel joint coding device provided in the fifth aspect, or the low-delay source-channel joint decoding device provided in the sixth aspect, including a program designed to execute the foregoing aspects.
  • an embodiment of the present application provides a chip system, which includes a processor, and is configured to support a service device to implement the functions involved in the first, second, or third aspects described above.
  • the chip system further includes a memory, and the memory is used to store program instructions and data necessary for the data sending device.
  • the chip system may consist of a chip, or may include a chip and other discrete devices.
  • an embodiment of the present application provides an electronic device, including the processing chip provided by any one of the foregoing first aspect, second aspect, or third aspect, and a discrete device coupled to the chip.
  • Fig. 1 is a schematic structural diagram of a low-delay source-channel joint coding system provided by an embodiment of the present application.
  • Fig. 2A is a schematic flowchart of a low-delay source-channel joint coding method provided by an embodiment of the present application.
  • Fig. 2B is a schematic diagram of an application scenario provided by the present application.
  • Fig. 2C is a schematic diagram of a first code stream and a second code stream respectively sent through the first channel and the second channel provided by an embodiment of the present application.
  • Fig. 3A is a schematic flowchart of another low-delay source-channel joint coding and decoding method provided by an embodiment of the present application.
  • Fig. 3B is a schematic diagram of an application for encoding and decoding a target image provided by an embodiment of the present application.
  • Fig. 3C is a schematic diagram of another application scenario provided by the present application.
  • Fig. 4A is a schematic structural diagram of a low-delay source-channel joint coding apparatus provided by an embodiment of the present application.
  • Fig. 4B is a schematic structural diagram of another low-delay source-channel joint coding apparatus provided by an embodiment of the present application.
  • Fig. 4C is a schematic structural diagram of a low-delay source-channel joint decoding apparatus provided by an embodiment of the present application.
  • Fig. 5 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • the term "component" used in this specification denotes a computer-related entity: hardware, firmware, a combination of hardware and software, software, or software in execution.
  • a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable file, an execution thread, a program, and/or a computer.
  • both an application running on a computing device and the computing device itself can be components.
  • One or more components may reside in processes and/or threads of execution, and components may be located on one computer and/or distributed among two or more computers.
  • these components can be executed from various computer readable media having various data structures stored thereon.
  • a component may communicate through local and/or remote processes based on, for example, a signal having one or more data packets (for example, data from two components interacting with another component in a local system, in a distributed system, and/or across a network such as the Internet that interacts with other systems through signals).
  • JPEG Joint Photographic Experts Group
  • ISO International Standardization Organization
  • IEC International Electrotechnical Commission
  • MPEG Moving Picture Experts Group
  • The MPEG family of standards includes MPEG-1, MPEG-2, MPEG-4, MPEG-7, and MPEG-21.
  • MPEG-standard video compression coding mainly uses motion-compensated inter-frame compression coding to reduce temporal redundancy, discrete cosine transform technology to reduce the spatial redundancy of the image, and entropy coding to reduce statistical redundancy in the information representation.
  • RGB image: R stands for red, G stands for green, and B stands for blue. Each pixel of a color image is composed of red, green, and blue in different proportions; such an image is an RGB image.
  • Wireless Local Area Network (WLAN): a very convenient data transmission system that uses radio frequency (RF) technology, employing electromagnetic waves in place of the old obstructive twisted-pair (coaxial) copper wire. The resulting local area network is connected over the air, so a WLAN can use a simple access structure that lets users achieve the ideal of "information portable and convenient to travel the world".
  • Discrete Cosine Transform (DCT): a transform related to the Fourier transform, similar to the discrete Fourier transform but using only real numbers. The discrete cosine transform is equivalent to a discrete Fourier transform of roughly twice the length, performed on a real even function (because the Fourier transform of a real even function is still a real even function); in some variants it is necessary to shift the input or output position by half a unit.
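The 1-D DCT described above can be written directly from the DCT-II definition (shown here unnormalized, for illustration): X[k] = sum over n of x[n] * cos(pi * (n + 1/2) * k / N).

```python
import math

def dct_ii(x):
    """Unnormalized 1-D DCT-II of a real sequence, the transform used
    to reduce spatial redundancy in image coding:
    X[k] = sum_n x[n] * cos(pi * (n + 0.5) * k / N)."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
            for k in range(n)]
```

A constant block concentrates all its energy in the k=0 (DC) coefficient, which is why smooth image regions compress so well after the DCT.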
  • Run-Length Coding (RLC) Also known as run-length encoding.
  • The basic principle of run-length coding is to replace consecutive symbols of the same value with the symbol value and the run length (consecutive identical symbols form a continuous "run", from which run-length coding takes its name), so that the encoded length is less than the length of the original data. Only when the value in a row or column of data changes are the value and its repetition count recorded, thereby achieving data compression.
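The principle above can be sketched in a few lines; this is a hypothetical minimal implementation for illustration, not the coder used by the application:

```python
def rle_encode(data):
    """Collapse runs of identical symbols into (symbol, run_length) pairs."""
    runs = []
    for sym in data:
        if runs and runs[-1][0] == sym:
            runs[-1] = (sym, runs[-1][1] + 1)
        else:
            runs.append((sym, 1))
    return runs

def rle_decode(runs):
    """Expand (symbol, run_length) pairs back into the original string."""
    return "".join(sym * count for sym, count in runs)

encoded = rle_encode("AAAABBBCCD")  # 10 symbols collapse into 4 runs
```

Decoding the four runs reproduces the original ten-symbol string exactly, so the scheme is lossless.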
  • Signal to Interference plus Noise Ratio (SINR) The ratio of the strength of the received useful signal to the strength of the received interference signals (noise and interference).
  • SNR Signal to Noise Ratio
  • Here, the signal refers to the electronic signal from outside the device that the device needs to process, and the noise refers to the irregular extra signal (or information), not present in the original signal, that is generated after passing through the device; this extra signal does not change with the original signal.
  • Packet Error Ratio An indicator that measures the accuracy of data packet transmission within a specified time.
  • Macroblock is a basic concept in video coding technology.
  • A coded image is usually divided into several macroblocks; a macroblock is composed of one luminance pixel block and two additional chrominance pixel blocks.
  • the luminance block is a 16x16 pixel block
  • the size of the two chrominance image pixel blocks depends on the image sampling format. For example, for a YUV420 sampled image, the chrominance block is an 8x8 pixel block.
  • Several macroblocks are arranged in the form of slices, and the video coding algorithm encodes the macroblocks one by one, organizing them into a continuous video stream.
  • Data rate The amount of data used by a video file per unit time, also called bit rate; it is the most important element of picture-quality control in video encoding.
  • Entropy coding is a lossless data compression scheme independent of the specific characteristics of the medium.
  • One of the main types of entropy coding creates and assigns a unique prefix code to each input symbol, and then replaces each fixed-length input symbol with the corresponding variable-length prefix-free output codeword, thereby compressing the data.
  • The length of each codeword is approximately proportional to the negative logarithm of its probability; therefore, the shortest codes are assigned to the most common symbols.
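A Huffman code, one classic entropy code, realizes exactly this property. The sketch below is our illustration (not the entropy coder specified by the application); it builds a prefix-free code with the standard heap-merge construction:

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a prefix-free code; frequent symbols get shorter codewords."""
    freq = Counter(text)
    # Heap entries: (weight, unique tiebreaker, {symbol: codeword-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

# 'a' occurs 4 times, 'b' twice, 'c' once: 'a' gets the shortest codeword.
code = huffman_code("aaaabbc")
```

Because no codeword is a prefix of another, the concatenated output can be decoded unambiguously without separators.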
  • Intraframe compression Also known as spatial compression. When compressing a frame of image, only the data of the current frame is considered, without the redundant information between adjacent frames, which is actually similar to still-image compression. Intra-frame compression generally uses a lossy compression algorithm.
  • Interframe compression Based on the fact that consecutive frames of many videos or animations are highly correlated, that is, the information changes little between adjacent frames; continuous video therefore has redundant information between adjacent frames. Compressing this redundancy between adjacent frames further increases the amount of compression. Inter-frame compression is also called temporal compression.
  • MCS Modulation and Coding Scheme
  • Down-sampling A sample sequence is sampled once every several samples; the new sequence obtained is the down-sampled version of the original sequence.
  • Sampling-rate changes arise mainly because different signal-processing modules may have different sampling-rate requirements.
  • Down-sampling must still satisfy the sampling theorem; otherwise, it causes aliasing of the signal components.
  • Down-sampling is decimation, one of the basic operations in multirate signal processing.
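A minimal decimation sketch (illustrative only; a real pipeline would low-pass filter first, as the sampling theorem requires):

```python
def downsample(samples, factor):
    """Keep every `factor`-th sample starting from the first (decimation)."""
    return samples[::factor]

# 2:1 decimation halves the sample count.
half_rate = downsample([0, 1, 2, 3, 4, 5, 6, 7], 2)
```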
  • Image coding Also called image compression; the technology of representing an image, or the information it contains, with a small number of bits while meeting a certain quality requirement (a signal-to-noise-ratio requirement or a subjective evaluation score).
  • QoS Quality of Service
  • YUV A color encoding method often used in video-processing components. YUV encoding of photos or videos takes human perception into account and allows the bandwidth of the chrominance channels to be reduced. YUV is a family of color spaces used to encode true color; terms such as Y'UV, YUV, YCbCr, and YPbPr, which overlap with one another, may all be referred to as YUV. "Y" means luminance (luma); "U" and "V" mean chrominance (chroma).
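As an illustration, the BT.601 analog-YUV weighting (a standard formula; the application does not mandate a particular YUV variant) computes Y, U, and V from R, G, B as:

```python
def rgb_to_yuv(r, g, b):
    """BT.601 analog YUV from RGB. Y carries brightness; U and V carry
    color difference, so they can be subsampled (e.g. YUV420) with little
    visible loss."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

y, u, v = rgb_to_yuv(255, 255, 255)  # pure white: full luma, zero chroma
```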
  • Channel capacity The minimum upper bound of the rates at which information can be reliably transmitted over a channel.
  • the so-called reliable transmission refers to the transmission of information with an arbitrarily small error rate.
  • the channel capacity is the limit information rate of a given channel that can be reached with an arbitrarily small error probability.
  • The unit of channel capacity is bits per second, nats per second, etc.
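The Shannon-Hartley theorem (general information theory, not specific to this application) gives the capacity of a band-limited channel with additive white Gaussian noise, which is why the method below treats channel capacity as a rate ceiling for the encoder:

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """C = B * log2(1 + SNR), with the SNR converted from dB to linear."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 20 MHz channel at 30 dB SNR supports roughly 199 Mbit/s.
capacity_bps = shannon_capacity(20e6, 30)
```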
  • Wireless video transmission solves the problem of transmitting data such as audio and video where wiring construction is inconvenient. Wireless projection therefore compresses the screen information of terminals such as mobile phones/tablets and sends it wirelessly to large-screen devices such as TVs; videos/pictures are shared with friends through wireless networks, and video conferences are conducted through wireless networks.
  • All of these involve image coding and decoding over wireless transmission.
  • a common wireless video transmission system generally consists of two parts: the transmitter and the receiver.
  • an appropriate gain antenna can be added to increase the transmission distance according to actual needs.
  • The transmitter and receiver of a wireless video transmission system mainly comprise audio and video encoding, wireless transmission, and audio and video decoding and display.
  • JPEG standard coding and MPEG standard coding are two common image coding methods. In both, the original image undergoes a color-space transformation into YUV space and is then subjected to segmentation, the Discrete Cosine Transform (DCT), quantization, and entropy coding to obtain a compressed image.
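Of these stages, quantization is where the rate/quality trade-off is made. A uniform scalar quantizer sketch (the step size and coefficient values below are illustrative, not from any standard's tables):

```python
def quantize(coeffs, q_step):
    """Map each transform coefficient to an integer level; a larger q_step
    lowers the bit rate at the cost of more distortion."""
    return [round(c / q_step) for c in coeffs]

def dequantize(levels, q_step):
    """Approximate reconstruction performed by the decoder."""
    return [level * q_step for level in levels]

levels = quantize([103.2, -14.7, 3.9, 0.4], 8)  # small coefficients -> 0
```

The zeroed small coefficients are exactly what makes the subsequent run-length and entropy coding effective.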
  • The audio and video codec algorithm at the transmitting end needs to be able to quickly track and adapt to this change, so that the delay and image quality at the receiving end can be kept at an acceptable level.
  • Traditional audio and video coding technologies such as JPEG standard coding and MPEG standard coding focus only on compression ratio and image quality and cannot quickly adjust their coding according to the channel capacity; as a result, part or all of the code stream information is easily lost during wireless transmission.
  • In addition, wireless transmission generally does not consider the priority of the source part (that is, the code stream information obtained after the audio and video are encoded), and transmits it without differentiation.
  • The encoding end cannot protect the key information (such as audio information or the code streams of important frames) that may be lost in the wireless channel, cannot quickly adapt to changes in the wireless channel, and does not send the source information (such as code stream information or other data) hierarchically; the receiving end may therefore be unable to reconstruct a complete video image because important information is lost.
  • FIG. 1 is a schematic structural diagram of a low-delay source-channel joint coding system provided by an embodiment of the present application.
  • The architecture of the low-delay source-channel joint coding system in this application may include the coding end 10 and the decoding end 11 in FIG. 1.
  • the encoding end 10 of the low-latency source-channel joint encoding system architecture includes a video source 101, an encoder 102, and a transmitter 103.
  • The decoding end 11 of the low-latency source-channel joint coding system architecture includes a decoder 111 and a display device 112.
  • The video source 101 is an interface, display memory, storage, or the like used to provide the target video.
  • The target video can come from various types of interfaces, such as a High Definition Multimedia Interface (HDMI) interface, a DisplayPort (DP) interface, a Video Graphics Array (VGA) interface, etc.
  • the target video may be sent to the encoder 102, so that the encoder 102 encodes the target image of the target video.
  • The encoder 102 is a device that compiles and converts a signal (such as a bit stream) or data into a form that can be used for communication, transmission, and storage.
  • the encoder can convert angular displacement or linear displacement into electrical signals.
  • the former is called a code disc and the latter is called a code ruler.
  • The encoder may be coding software running on a general-purpose central processing unit (CPU) or a dedicated chip, an independent encoding chip, or a part of an independent chip, such as part of a System-on-a-Chip (SoC) integrated circuit.
  • Specifically, the target image can be hierarchically encoded; that is, the target image is down-sampled to obtain the reference frame image and the non-reference frame image for image encoding, and then the first code stream after encoding the reference frame image and the second code stream after encoding the non-reference frame image are obtained. Finally, the first code stream and the second code stream are sent to the decoder 111 through the transmitter 103.
  • The encoder may also receive coding parameters fed back through the coding-parameter feedback control according to the channel information of the current wireless channel, so that the bit rate of the code stream after image encoding is smaller than the channel capacity of the current wireless channel.
  • Optionally, the target image may also be down-sampled to obtain the reference frame image and the non-reference frame image; the coding parameters corresponding to the reference frame image and the non-reference frame image are determined according to the channel environment of the current wireless channel; and image encoding is performed on the reference frame image and the non-reference frame image according to the coding parameters to obtain the first code stream after encoding the reference frame image and the second code stream after encoding the non-reference frame image.
  • The transmitter 103 is a device that uses radio waves to transmit data over a wirelessly connected local area network; the transmission distance is generally tens of meters.
  • The transmitter 103 may be a WiFi chip and may include a driver, firmware, etc. matched with the WiFi chip.
  • After the encoder 102 performs image encoding, the resulting first code stream and second code stream are sent separately through the wireless channel, where the channel resources used when wirelessly sending the first code stream are better than the channel resources used when sending the second code stream.
  • The transmitter 103 can also obtain the channel environment of the current wireless channel, such as the channel capacity, channel bandwidth, or signal-to-interference-plus-noise ratio, and then send the channel information to the encoder 102 through the coding-parameter feedback control, so that the encoder 102 adjusts the encoding parameters of the first code stream and the second code stream in real time; the transmitter 103 can also adjust the sending parameters of the first code stream and the second code stream in real time.
  • The encoding-parameter feedback control can be an algorithm inside or outside the encoder, implemented in software or in a chip.
  • The encoding-parameter feedback control is mainly used to adjust the encoder's image-encoding parameters according to the channel information of the current wireless channel. It is understandable that the encoder 102 and the transmitter 103 may be devices coupled together to receive the target video information sent from the video source 101 and, after image encoding, send it to the decoder 111.
  • a decoder 111 is a hardware/software device that can decode and restore digital video and audio data streams into analog video and audio signals.
  • The encoder mainly compresses analog video and audio signals into encoded data files, while the decoder converts encoded data files back into analog video and audio signals.
  • The decoder can be decoding software running on a general-purpose CPU or a dedicated chip, an independent decoding chip, or a part of an independent chip (such as an SoC chip). For example, when the decoder 111 is an independent decoding chip, the decoder 111 can receive the first code stream sent by the transmitter 103 of the encoding end, where the first code stream is the code stream obtained after image encoding of the reference frame image, and the reference frame image includes the image obtained after down-sampling the target image. The decoder 111 then decodes the first code stream to obtain the reference frame by reversing the process used by the encoder 102 to encode the target image of the target video, and finally obtains the target image according to the reference frame.
  • The decoder 111 may also receive the second code stream and decode it, by reversing the process used by the encoder 102 to encode the target image of the target video, to obtain a complete or incomplete non-reference frame. The decoder then splices the reference frame and a complete non-reference frame into a complete reconstructed frame, that is, the target image; when the non-reference frame is incomplete, adjacent frame images (such as at least one of the reference frame image and the non-reference frame image of an adjacent target image) or a set of surrounding pixels can be interpolated to obtain the reconstructed frame. The reconstructed frame is then sent to the display device 112 for display and audio playback.
  • Besides determining whether the non-reference frame is incomplete, the decoder 111 can also determine, when receiving a code stream, whether the code stream is incomplete and then whether the image corresponding to the code stream is incomplete; this is not limited in this application.
  • The display device 112 is a display tool that presents certain data on a screen through a specific transmission device so that it reaches the human eye.
  • It can refer to a display, a projector, a virtual-reality head-mounted display device, or a device such as a smart terminal with a display function.
  • For example, the display device 112 may be a virtual-reality head-mounted display device (such as virtual reality (VR) glasses, VR goggles, or a VR helmet), a smart phone, a notebook computer, a tablet device, a projector, or a camera, or a display client or application installed or running on a computer, tablet device, or smart phone.
  • When the display device is a projector, the projector may project the target video data information sent by the decoder 111 onto a screen and then play the target video.
  • the decoder 111 and the display device 112 may be devices coupled together to receive the code stream information sent from the transmitter 103, decode the code stream information and display the target video corresponding to the code stream information.
  • decoding end 11 in the low-delay source-channel joint coding system architecture provided by the embodiment of the present application is equivalent to the receiving end in the present application.
  • The low-latency source-channel joint coding system architecture of FIG. 1 is only a partial exemplary implementation of the embodiments of the present application; the low-latency source-channel joint coding system architectures in the embodiments of the present application include, but are not limited to, the above architecture.
  • Based on the above architecture, this application proposes a low-latency source-channel joint coding method that avoids the code stream integrity requirement of traditional audio and video coding and decoding algorithms and can also quickly track changes in channel capacity, keeping the receiving-end delay and image quality at an acceptable level.
  • When the channel capacity drops, mosaic, stuttering, and blurring in the video information are reduced, and the user's visual experience is improved.
  • FIG. 2A is a schematic flowchart of a low-latency source-channel joint coding method according to an embodiment of the present application, which can be applied to the low-latency source-channel joint coding system architecture described in FIG. 1, in which the encoder 102 and the transmitter 103 can be used to support and execute steps S201 to S203 of the method flow shown in FIG. 2A.
  • Step S201 down-sampling the target image to obtain a reference frame image and a non-reference frame image.
  • the encoder 102 down-samples the target image to obtain the reference frame image and the non-reference frame image.
  • Down-sampling means sampling a sample sequence once every several samples, so that the new sequence obtained is the down-sampled version of the original sequence.
  • FIG. 2B is a schematic diagram of an application scenario provided by this application.
  • The encoder 102 can down-sample the input target video with multiple frames of target images, where both the horizontal and the vertical sampling rates of the down-sampling are 2:1, dividing each target image into 4 lower-resolution duplicate images.
  • One duplicate image can be selected as the reference frame image, and the remaining 3 duplicate images are non-reference frame images.
  • the reference frame image may include one or more images
  • the non-reference frame image may also include one or more images
  • the number of images included in the reference frame image is less than or equal to the number of images included in the non-reference frame image.
  • In this way, when the channel capacity jitters or drops to the initial code rate (for example, the initial code rate may be the code rate when the target image is compressed by a traditional encoding method, or a code rate of a preset size), the decoder at the receiving end can still reconstruct the image based on a smaller-resolution duplicate image.
  • the target video includes multiple frames of target images, and the target image is any one of the multiple frames of target images.
  • Specifically, the encoder 102 down-samples the target image to obtain N duplicate images, where the resolution of the N duplicate images is lower than the resolution of the target image and N is a positive integer greater than 1; the N duplicate images are then divided into the reference frame image and the non-reference frame image.
  • That is, the multiple lower-resolution duplicate images can be divided into two groups containing different numbers of images.
  • the number of images included in the reference frame image is less than or equal to the number of images included in the non-reference frame image.
  • the intersection of the reference frame image and the non-reference frame image is an empty set.
  • For example, among the four duplicate images obtained after down-sampling, any one duplicate image can be selected as the reference frame image, and the remaining 3 duplicate images are non-reference frame images.
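The 2:1 horizontal and 2:1 vertical down-sampling described above amounts to a polyphase split into four half-resolution copies. A sketch of the idea (our illustration, with an image represented as a list of rows):

```python
def polyphase_split(image):
    """Split an image into 4 half-resolution duplicates by taking every
    second pixel horizontally and vertically at the 4 phase offsets."""
    return [[row[dx::2] for row in image[dy::2]]
            for dy in (0, 1) for dx in (0, 1)]

image = [[1, 2, 1, 2],
         [3, 4, 3, 4]]          # toy 2x4 image
copies = polyphase_split(image)  # four 1x2 duplicates
```

Any one of the four copies can then serve as the reference frame image, with the remaining three as non-reference frame images.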
  • Step S202 Perform image encoding on the reference frame image and the non-reference frame image, respectively, to obtain a first code stream after the reference frame image is coded, and a second code stream after the non-reference frame image is coded.
  • Specifically, after the multiple duplicate images are divided, the encoder 102 performs image encoding on the reference frame image and the non-reference frame image to obtain the first code stream after the reference frame image is encoded and the second code stream after the non-reference frame image is encoded.
  • the encoder 102 may sequentially perform block, DCT, quantization, run-length coding, entropy coding, packing, marking, etc., on the copy image in the reference frame image to generate the first code stream.
  • The encoder 102 can perform blocking, DCT, quantization, run-length coding, entropy coding, packing, marking, etc., either on the residual of each duplicate image in the non-reference frame image relative to the duplicate image in the reference frame image or directly on each duplicate image in the non-reference frame image, to generate the second code stream.
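Because the duplicates sample neighboring pixels, a non-reference copy differs from the reference copy only slightly; a sketch of the per-pixel residual mentioned above (an illustrative helper, not the application's exact scheme):

```python
def residual(copy_img, reference_img):
    """Per-pixel difference of a non-reference duplicate against the
    reference duplicate; small residuals compress well."""
    return [[c - r for c, r in zip(c_row, r_row)]
            for c_row, r_row in zip(copy_img, reference_img)]

res = residual([[101, 99], [100, 102]],
               [[100, 100], [100, 100]])
```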
  • Optionally, the encoder 102 may perform intra-frame compression on the reference frame image and the non-reference frame image, respectively, to obtain the first code stream after the reference frame image is encoded and the second code stream after the non-reference frame image is encoded. It is understandable that while a simpler down-sampling method is chosen to layer the target image and reduce its resolution, intra-frame compression is used when encoding the layered duplicate images: intra-frame compression is spatial compression, which considers only the data of the current frame and not the redundant information between adjacent frames, whereas inter-frame compression must refer to the data of other frames to achieve its large compression rate.
  • In this embodiment of the application, inter-frame compression is not applicable when the down-sampled duplicate images are coded. Using intra-frame compression therefore allows the decoder, when the channel capacity jitters and drops below the initial bit rate, to receive the code stream information of a smaller-resolution duplicate image and obtain a reconstructed image based on that code stream information. At the same time, when the channel capacity drops, mosaic, stuttering, and blurring in the video information are reduced, and the user's perception and experience are improved.
  • The encoder 102 may also use other layered encoding methods to divide the target image into lower-resolution duplicate images, and the resolution, bit rate, and frame rate of the multiple duplicate images may be different or the same.
  • the code stream generated by layered video coding contains multiple sub-streams.
  • the sub-streams are divided into a basic layer and an extended layer. Each layer has a different bit rate, frame rate and resolution.
  • the basic layer has the most basic video quality.
  • Each subsequent extension layer is a supplement to the previous one.
  • The receiving end can select several sub-streams for decoding based on the actual network environment (such as channel bandwidth, signal-to-interference-plus-noise ratio, signal-to-noise ratio, received signal strength indicator, duty cycle, bit rate, etc.).
  • Optionally, the encoder 102 performs image encoding on the reference frame image and the non-reference frame image to obtain the first code stream after the reference frame image is encoded and the second code stream after the non-reference frame image is encoded.
  • Before encoding, the coding parameters corresponding to the reference frame image and the non-reference frame image may be determined based on the channel environment of the current wireless channel, where the channel environment may include the channel capacity, and the coding parameters are used so that the target code rates corresponding to the first code stream and the second code stream are both less than or equal to the channel capacity.
  • The coding parameter determined by the encoder may be a coding parameter sent by the coding-parameter feedback control. Therefore, when the channel capacity changes, the channel capacity of the current channel can be obtained through the transmitter, and the encoding parameters used by the encoder when encoding the duplicate images can be adjusted according to the fed-back channel capacity information, so that the code rate of the encoded bit stream is adapted to the current channel capacity, the target code stream can be sent smoothly, and the loss of the code stream caused by a decrease in channel capacity is reduced.
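This feedback loop can be pictured as a simple control rule. The thresholds and the 2x step factors below are our illustrative choices, not values from the application:

```python
def adjust_q_step(q_step, measured_bitrate, channel_capacity, margin=0.9):
    """Coarsen quantization while the encoded rate exceeds a safety fraction
    of the reported channel capacity; refine it when there is headroom."""
    if measured_bitrate > margin * channel_capacity:
        return q_step * 2                 # coarser -> lower bit rate
    if measured_bitrate < 0.5 * margin * channel_capacity:
        return max(1, q_step // 2)        # finer -> better quality
    return q_step

# Rate 12 Mbit/s exceeds 90% of a 10 Mbit/s channel, so quantization coarsens.
new_q = adjust_q_step(q_step=8, measured_bitrate=12e6, channel_capacity=10e6)
```

Running the rule each time the transmitter reports new channel information keeps the encoded rate tracking the channel capacity.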
  • Step S203 Determine the first channel resource and the second channel resource based on the channel environment of the current wireless channel, and use the first channel resource to send the first code stream and the second channel resource to send the second code stream, respectively.
  • Specifically, after determining the first channel resource and the second channel resource based on the channel environment of the current wireless channel, the transmitter 103 may send the first code stream and the second code stream received from the encoder 102, where the first channel resource is used to send the first code stream and the second channel resource is used to send the second code stream.
  • That is, the embodiment of the present application may send the first code stream and the second code stream separately based on the fed-back channel environment of the current wireless channel, where the quality of service of the channel when sending the first code stream is higher than the quality of service of the channel when sending the lower-priority second code stream.
  • The transmitter 103 may send link statistics such as the SINR/SNR/RSSI to the encoding end over the air interface; the encoding end compiles statistics on the erroneous sub-frame rate, collision rate, etc., and then adjusts the sending parameters, such as the sending MCS, transmit power, and quantization parameters, based on the link statistics, the erroneous sub-frame rate, the collision rate, and so on.
  • For example, the channel information of the current channel (such as the channel bandwidth, signal-to-interference-plus-noise ratio, signal-to-noise ratio, received signal strength indicator, duty cycle, bit rate, etc.) is obtained through the transmitter, and the transmission parameters (such as the transmit power, modulation and coding scheme information, etc.) used when transmitting the first code stream and/or the second code stream are adjusted according to the channel information.
  • Optionally, the transmitter 103 may determine, based on the current channel environment, the transmission parameters corresponding to the first code stream and/or the second code stream, where the transmission parameters are used to transmit a code stream according to target modulation and coding scheme information and/or a target transmission power, and the transmission parameters of the first code stream are better than the transmission parameters of the second code stream; the first code stream is then sent according to the transmission parameters corresponding to the first code stream, and the second code stream is sent according to the transmission parameters corresponding to the second code stream.
  • the channel environment includes one or more of channel bandwidth, signal-to-interference plus noise ratio, signal-to-noise ratio, received signal strength indicator, duty cycle, and bit rate.
  • That is, when the transmitter 103 sends the first code stream and the second code stream, it responds more quickly to the sending request of the first code stream.
  • In other words, the video source is classified by importance and priority: the transmission quality when sending the first code stream is higher than the transmission quality when sending the second code stream, or the more important code stream (that is, the code stream obtained by image-encoding the smaller group of duplicate images) is sent first, so that when the channel capacity jitters and the decoder cannot receive the complete code stream, a reconstructed image can still be obtained based only on the duplicate images corresponding to the partial code stream.
  • the transmitter 103 may also determine the first wireless channel and the second wireless channel based on the current channel environment, send the first code stream through the first wireless channel, and send the second code stream The stream is sent through a second wireless channel, wherein the quality of service guarantee mechanism of the first wireless channel is higher than that of the second wireless channel.
  • the wireless channel includes a first wireless channel and a second wireless channel.
  • FIG. 2C is a schematic diagram of a first code stream and a second code stream being sent through a first channel and a second channel, respectively, according to an embodiment of the present application.
  • the transmitter 103 transmits the first code stream through the first wireless channel, and transmits the second code stream through the second wireless channel.
  • In this way, the first code stream and the second code stream sent through the wireless channels use different QoS guarantee levels; even if the decoder receives only the first code stream, sent through the channel with the higher quality-of-service guarantee mechanism, it can reconstruct the image from the first code stream alone, reducing mosaic, freezing, and blurring in the video information and improving the user's perception and experience. It is understandable that, after the transmission parameters are adjusted according to the channel information, the transmission parameters of the first code stream differ from those of the second code stream, and the transmitter can send the first code stream at a faster sending speed and with a higher QoS than the second code stream.
  • the first code stream is sent with better channel resources and a higher transmission success rate.
  • The success rate may refer to the probability that the code stream information is completely transmitted to the receiving end. Therefore, in the process of wireless transmission, the video source can be classified and the more important code stream sent first, so that when the channel capacity jitters or the decoder cannot receive the complete code stream, a reconstructed image can still be obtained based only on the duplicate images corresponding to the partial code stream.
  • For example, the first code stream and the second code stream are sent over the wireless channel, with the first code stream mapped to a higher quality-of-service (QoS) guarantee mechanism, for example:
  • the first code stream is mapped to the VI service
  • the second code stream is mapped to the BE service.
• the first code stream and the second code stream can also be sent through the same wireless channel, with the first code stream sent first to ensure that the receiving end receives at least the first code stream.
• for example, the first code stream can be sent first, and the second code stream sent only after the first code stream has been received at the receiving end, or after a preset period of time has elapsed since the first code stream was sent, so as to ensure that the receiving end obtains at least one complete, lower-resolution code stream from which to reconstruct the image.
• Sending the first code stream and the second code stream separately through the wireless channel increases the probability that the receiving end receives the first code stream even when it cannot receive the complete code stream, so that the receiving end can obtain a complete reconstructed image from the first code stream alone.
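As an illustrative, non-limiting sketch (not part of the original disclosure), the QoS mapping described above can be expressed in Python. The access-category values follow the Wi-Fi WMM convention (BE = best effort, VI = video); the `AccessCategory` and `schedule` names are assumptions for illustration.

```python
# Illustrative sketch (not from the patent): map the two code streams
# onto Wi-Fi WMM access categories so the reference-frame stream gets
# the stronger QoS guarantee. AccessCategory/schedule are assumed names.
from enum import IntEnum

class AccessCategory(IntEnum):
    BE = 0  # best effort -> second code stream (non-reference frames)
    VI = 2  # video       -> first code stream (reference frames)

def schedule(first_stream: bytes, second_stream: bytes):
    """Return (stream, access_category) pairs, highest priority first."""
    queue = [
        (first_stream, AccessCategory.VI),
        (second_stream, AccessCategory.BE),
    ]
    # The higher access category is served first, so the receiver can
    # still obtain the reference-frame stream when capacity drops.
    queue.sort(key=lambda item: item[1], reverse=True)
    return queue
```

Serving the higher access category first models the behaviour described above: when the channel degrades mid-transmission, whatever does get through is the reference-frame stream.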
• for each target image in the multiple frames of target images included in the target video, the code stream corresponding to the reference frame image is obtained, and the audio information in the target video is also obtained;
• the code stream corresponding to the reference frame image of each of the multiple frames of images included in the target video, together with the audio information, is sent through the first wireless channel.
• the audio information is as important as the first code stream information of the multi-frame target images. Therefore, in order to avoid the loss of audio information, the first code stream can be sent together with the audio information through the first channel, whose QoS guarantee level is relatively high, reducing the phenomena of no sound and freezing in the video information and improving the user's viewing experience.
  • the code stream of the non-reference frame image is sent through the second wireless channel.
• the decoder can reconstruct the video based only on the code streams corresponding to the first copy of all target images and the audio information received through the channel with the high quality of service guarantee mechanism, reducing mosaic, stuttering, and blurring in the video information and improving the user's perception and experience.
• the target image can be down-sampled to obtain the reference frame image and the non-reference frame image; then the reference frame image is image-encoded to obtain the first code stream and the non-reference frame image is image-encoded to obtain the second code stream; finally, the first channel resource is used to send the first code stream and the second channel resource is used to send the second code stream.
• the first channel resource and the second channel resource are determined according to the channel environment of the current wireless channel, and the first channel resource is better than the second channel resource. That is, it can be understood that, during wireless transmission, the first code stream obtained by encoding the reference frame image and the second code stream are sent hierarchically: the service quality of the channel when the first code stream is sent is higher than the service quality of the channel when the lower-level second code stream is sent.
• the first code stream and the second code stream are sent separately, so that when the receiving end cannot receive the complete code stream, the probability that it receives the first code stream is increased, and a complete reconstructed image can be obtained from the received first code stream.
• the target image is down-sampled into multiple lower-resolution copies before image encoding, which makes the resulting code stream smaller than the code stream obtained by directly image-encoding the target image, so that it is easier for the decoder at the receiving end to receive the complete code stream.
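The 2:1 horizontal and 2:1 vertical down-sampling into four lower-resolution copies described in this application can be sketched as follows. This is a hypothetical pure-Python illustration; the patent does not prescribe an implementation, and `downsample_2x2` is an assumed name.

```python
# Hypothetical sketch of the 2:1 / 2:1 polyphase down-sampling:
# one frame becomes 4 half-width, half-height copies; copies[0]
# can be taken as the reference frame, the rest as non-reference.

def downsample_2x2(frame):
    """frame: 2-D list of pixels with even height and width."""
    h, w = len(frame), len(frame[0])
    copies = []
    for dy in (0, 1):
        for dx in (0, 1):
            # Each copy takes every second pixel, offset by (dy, dx).
            copies.append([[frame[y][x] for x in range(dx, w, 2)]
                           for y in range(dy, h, 2)])
    return copies
```

Because each copy carries a quarter of the pixels, each encoded copy is individually smaller than the code stream of the full-resolution frame, which is the property the passage above relies on.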
• the encoded code stream information is sent through the wireless channels respectively according to the channel environment of the current wireless channel, so that the encoded code stream information can still adapt to the real-time changing channel environment, thereby reducing the probability of information loss during wireless transmission, preventing the receiving end from being unable to reconstruct a complete video image due to information loss, and improving the user's viewing experience.
  • FIG. 3A is a schematic flowchart of a low-latency source-channel joint coding and decoding method provided by an embodiment of the present application
  • FIG. 3B is a schematic diagram of a target image encoding and decoding application provided by an embodiment of the present application
• the method shown in FIG. 3A can be applied to the low-delay source-channel joint coding system architecture described in FIG. 1, where the encoder 102 and the transmitter 103 can be used to support and execute steps S301 to S304 of the method flow shown in FIG. 3A, and the decoder 111 can be used to support and execute steps S305 to S307 of the method flow shown in FIG. 3A.
  • the method may include the following steps S301-S307.
• Step S301: Down-sample the target image to obtain a reference frame image and a non-reference frame image.
  • step S301 may correspond to the related description of step S201 in FIG. 2A, which will not be repeated here.
• Step S302: Determine the coding parameters corresponding to the reference frame image and the non-reference frame image according to the channel environment of the current wireless channel.
  • the encoder 102 determines the encoding parameters corresponding to the reference frame image and the non-reference frame image according to the channel environment of the current wireless channel.
• FIG. 3C is a schematic diagram of another application scenario provided by this application, which shows that the coding parameters for image coding can be determined according to the channel environment of the current wireless channel. For example, when a 3-second target video needs to be wirelessly transmitted, the encoder 102 can down-sample each of the multiple frames of target images in the input target video, where the down-sampling horizontal sampling rate is 2:1 and the vertical sampling rate is 2:1, dividing each frame into 4 lower-resolution duplicate images.
• one duplicate image is selected as the reference frame image, and the remaining 3 duplicate images are non-reference frame images; then, based on the channel environment of the current wireless channel, the encoding parameters corresponding to the reference frame image and the non-reference frame image during image encoding are determined, so as to reduce the size of the code stream after encoding.
  • the transmitter 103 may obtain the channel environment of the current wireless channel, where the channel environment includes the channel capacity.
• the channel capacity of the current channel is obtained through the transmitter, and the coding parameters used by the encoder when encoding the duplicate images can be adjusted according to the channel capacity, so that the code rate of the coded stream is adapted to the current channel capacity, the target code stream can be sent smoothly, and loss of the code stream due to a decrease in channel capacity is reduced.
• Step S303: Perform image encoding on the reference frame image and the non-reference frame image respectively according to the encoding parameters to obtain a first code stream after encoding the reference frame image and a second code stream after encoding the non-reference frame image.
• the encoder 102 adjusts, through encoding parameter feedback control, the encoding parameters corresponding to the reference frame image and the non-reference frame image during image encoding according to the channel capacity, and then, according to the adjusted encoding parameters, performs image coding on the reference frame image and the non-reference frame image separately to obtain a first code stream after the reference frame image is coded and a second code stream after the non-reference frame image is coded.
• the encoding parameters are used to control the corresponding copy to generate the corresponding code stream according to the target code rate during image encoding, so that the code rates of the first code stream and the second code stream are less than or equal to the above-mentioned channel capacity, ensuring that the bit rate of the encoded code stream is adapted to the current channel capacity. Therefore, by adjusting the coding parameters during image encoding according to the channel information of the current wireless channel, the code rate of the generated code stream can be kept below the channel capacity of the current wireless channel, and during wireless transmission the code stream will not suffer partial loss because the channel capacity is too low to transmit the complete code stream, because the channel capacity jitters, or because the channel capacity drops below the initial code rate (for example, the initial code rate can be the code rate when the target image is compressed according to a traditional encoding method, or a preset code rate).
• in this way, the image coding and decoding algorithm of the embodiment of the present application can quickly track and adapt to changes in the channel environment of the wireless channel, so that the delay and image quality at the receiving end can be maintained at an acceptable level.
  • the code rates of the first code stream and the second code stream may be the same or different.
• when the code rates of the first code stream and the second code stream are the same, the coding parameters of the image coding can be adjusted uniformly;
• when the code rate of the first code stream differs from that of the second code stream, it must first be ensured that the code rate of the first code stream is less than or equal to the channel capacity, so as to reduce the probability of code stream loss due to reduced channel capacity.
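The constraint above (the first code stream's rate must stay at or below capacity) can be sketched as a simple rate-adaptation policy. This is a hedged illustration: the patent specifies only the constraint, not this policy, and `adapt_rates` is an assumed name.

```python
# Hedged sketch of the feedback control described above: keep the
# target code rates at or below the reported channel capacity, and
# protect the first (reference-frame) stream when capacity shrinks.

def adapt_rates(capacity_bps, first_bps, second_bps):
    """Return (first_rate, second_rate) with first_rate <= capacity."""
    if first_bps + second_bps <= capacity_bps:
        return first_bps, second_bps          # both streams fit as-is
    first = min(first_bps, capacity_bps)      # first stream fits first
    second = max(0, capacity_bps - first)     # second gets the remainder
    return first, second
```

The asymmetry mirrors the text: when capacity drops, the second (non-reference) stream is sacrificed first, and the first stream's rate is only reduced once it alone exceeds the channel capacity.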
• Step S304: Send the first code stream and the second code stream respectively.
  • step S304 may correspond to the related description of step S203 in FIG. 2A, which will not be repeated here.
• Step S305: Receive the first code stream sent by the encoding end.
  • the decoder 111 receives a first code stream sent by the encoding terminal 102, where the first code stream is a code stream obtained after image encoding of a reference frame image.
• the embodiment of the present application may receive, during wireless transmission, the first code stream sent by the encoding end, decode the first code stream to obtain the reference frame image corresponding to the first code stream, and then obtain the target image according to the reference frame image.
• the first code stream is a code stream obtained after image encoding of a reference frame image, where the reference frame image includes one or more images obtained after down-sampling the target image. Therefore, the code stream received by the decoding end is obtained by down-sampling the target image and then image-encoding it.
• down-sampling divides the target image into multiple lower-resolution copies, part of which are selected as the reference frame image. Moreover, the size of the code stream obtained by down-sampling and then encoding the target image is smaller than that obtained when the target image is directly encoded, so the down-sampled code stream may better adapt to changes in the wireless channel, reducing the probability of information loss during transmission and avoiding the situation in which the receiving end cannot reconstruct the complete video image because of lost information.
• Step S306: Perform image decoding on the first code stream to obtain a reference frame image corresponding to the first code stream.
  • the decoder 111 obtains the reference frame image corresponding to the first code stream after image decoding of the first code stream.
• the encoder 102 performs blocking, DCT, quantization, run-length encoding, entropy encoding, packing, and marking on the reference frame image to obtain the corresponding first code stream; the decoder 111 can then follow a process exactly opposite to that of the encoder 102, namely unmarking, unpacking, entropy decoding, run-length decoding, inverse quantization, and inverse DCT, decoding the first code stream to obtain YUV data, and then converting YUV to RGB to obtain the reference frame image.
• that is, the encoder at the encoding end converts the target image from RGB to YUV and then performs blocking, DCT, quantization, run-length encoding, entropy encoding, packing, and marking in turn; the result is sent to the receiving end, whose decoder decodes the first code stream into YUV according to the reverse of the encoding process and then converts it from YUV to RGB to obtain the reference frame image.
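The final decoder step above (YUV back to RGB) can be sketched for a single pixel as follows. BT.601 full-range coefficients are assumed here; the patent does not name a colour matrix, so this is an illustrative choice.

```python
# Minimal sketch of the decoder's final conversion step: one YUV
# pixel back to RGB. BT.601 full-range coefficients are assumed;
# the patent does not specify the colour matrix.

def yuv_to_rgb(y, u, v):
    """y in [0, 255]; u and v centred at 128."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda c: max(0, min(255, round(c)))  # keep values in range
    return clamp(r), clamp(g), clamp(b)
```

A neutral pixel (u = v = 128) maps to an equal-valued grey in RGB, which is a quick sanity check on the coefficients.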
• Step S307: Reconstruct the target image according to the reference frame image.
  • the decoder 111 will reconstruct the target image according to the reference frame image.
  • the target image is any one of the multiple frames of target images included in the target video;
• reconstructing the target image according to the reference frame image includes: reconstructing the target image by using an interpolation algorithm according to the reference frame image and at least one of the reference frame image and the non-reference frame image corresponding to an adjacent frame of target image of the target image. For example: when performing image encoding, a time packet can be marked for the code stream; if, after the first code stream has been received, the second code stream is still not received within the time period specified by the time packet, the second code stream can be regarded as lost.
• in this way, as long as the decoding end receives the first code stream, the reconstructed frame (that is, the target image) can be obtained, avoiding the inability of the receiving end to reconstruct the complete video image due to information loss, reducing phenomena such as no sound and freezing in the video information, and improving the user's visual experience.
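The "time packet" rule described above can be sketched as a simple deadline wait: the decoder polls for the second code stream only until the deadline carried with the first stream, and a missing result means the second stream is treated as lost. Function and parameter names are assumptions, not from the patent.

```python
# Illustrative sketch of the time-packet loss rule: wait for the
# second code stream only until the deadline stamped on the first
# stream; a None result means "treat the second stream as lost".
import time

def wait_for_second_stream(receive_fn, deadline_s):
    """receive_fn() returns the second code stream bytes, or None."""
    end = time.monotonic() + deadline_s
    while time.monotonic() < end:
        stream = receive_fn()
        if stream is not None:
            return stream
        time.sleep(0.001)  # poll until the time-packet deadline
    return None  # second code stream regarded as lost
```

`time.monotonic` is used rather than wall-clock time so that system clock adjustments cannot stretch or shrink the deadline.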
• the decoder 111 receives a second code stream sent by the encoding end within a preset time period after receiving the first code stream, where the second code stream is a code stream obtained after image encoding of a non-reference frame image, and the non-reference frame image includes the remaining images, other than the reference frame image, obtained after down-sampling the target image; if the second code stream is incomplete, the corresponding incomplete non-reference frame image is obtained after image decoding of the second code stream, and the peripheral pixels of the incomplete non-reference frame image are determined according to it; reconstructing the target image according to the reference frame image then includes: reconstructing the target image through an interpolation algorithm according to the peripheral pixels of the reference frame image and the incomplete non-reference frame image.
• in this way, it is first determined whether the second code stream has been received; if not, the target image is reconstructed according to the reference frame image corresponding to the first code stream and at least one of the reference frame image and the non-reference frame image corresponding to an adjacent frame of target image; if the second code stream is received, it can be determined whether it is incomplete, and if it is, the target image can be obtained by interpolating the incomplete second code stream together with the complete first code stream. This avoids the inability of the receiving end to reconstruct the complete video image due to partial information loss, reduces phenomena such as no sound and freezing in the video information, and improves the user's perception and experience.
• for example, when performing image encoding, a time packet can be marked for the code stream; when the first code stream is received and the second code stream is received within the time period specified by the time packet, it needs to be determined whether the second code stream is incomplete. If it is, the target image can be reconstructed using an interpolation algorithm according to the information of the incomplete second code stream and the complete first code stream information; or it can be reconstructed using an interpolation algorithm according to the incomplete second code stream information, the complete first code stream information, and at least one of the reference frame image and the non-reference frame image corresponding to an adjacent frame of target image of the target image.
• the decoder 111 receives the second code stream sent by the encoding end within a preset time period after receiving the first code stream; if the second code stream is complete, the corresponding non-reference frame image is obtained after image decoding of the second code stream, and reconstructing the target image according to the reference frame image includes: stitching the reference frame image and the non-reference frame image to reconstruct the target image.
• for example, when performing image encoding, a time packet can be marked on the code stream; when the first code stream is received and the second code stream is received within the time period specified by the time packet, if the second code stream is determined to be complete, the reference frame image decoded from the first code stream and the non-reference frame image decoded from the second code stream can be directly spliced into the target image.
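The two reconstruction paths above (splicing when the second code stream is complete, interpolating when it is lost or incomplete) can be sketched together. Here a simple nearest-neighbour fill stands in for the interpolation algorithm, which the patent leaves unspecified; `splice_2x2` and `reconstruct` are illustrative names.

```python
# Hedged sketch of the decoding-end reconstruction: splice all four
# copies when the second code stream is complete; otherwise replace
# the lost copies with the reference copy (nearest-neighbour fill as
# a stand-in for the unspecified interpolation algorithm).

def splice_2x2(copies):
    """Interleave 4 half-resolution copies into one full frame
    (inverse of 2:1 / 2:1 polyphase down-sampling)."""
    h, w = len(copies[0]), len(copies[0][0])
    frame = [[0] * (2 * w) for _ in range(2 * h)]
    for idx, (dy, dx) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
        for y in range(h):
            for x in range(w):
                frame[2 * y + dy][2 * x + dx] = copies[idx][y][x]
    return frame

def reconstruct(ref, non_refs):
    """non_refs: 3 copies in raster order; a None entry means lost."""
    filled = [c if c is not None else ref for c in non_refs]
    return splice_2x2([ref] + filled)
```

When all three non-reference copies arrive, the splice is lossless; when they are lost, the output degrades gracefully to a frame built from the reference copy alone, which is the behaviour the passage above relies on.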
  • the image coding and decoding algorithms in the embodiments of the present application can quickly track and adapt to changes in the channel environment of the wireless channel, so that the delay and image quality of the receiving end can be maintained at an acceptable level.
• the encoding end adjusts the encoding parameters during image encoding according to the channel information of the current wireless channel, so that the code rate of the generated code stream is lower than the channel capacity of the current wireless channel, and during wireless transmission the code stream will not suffer partial loss because the channel capacity is too low to transmit the complete code stream, because the channel capacity jitters, or because the channel capacity drops below the initial code rate (for example, the initial code rate can be the code rate when the target image is compressed according to a traditional encoding method, or a preset code rate).
• in addition, the code stream received by the decoding end is obtained by down-sampling the target image and then image-encoding it. Because down-sampling divides the target image into multiple lower-resolution copies, the size of the code stream obtained by down-sampling and then encoding the target image is smaller than that obtained when the target image is directly encoded, so the down-sampled code stream may better adapt to changes in the wireless channel, reducing the probability of information loss during transmission while avoiding the inability of the receiving end to reconstruct a complete video image due to lost information.
• the adjustment of transmission parameters and the allocation of channel resources also reduce the probability of information loss during transmission, avoid the inability of the receiving end to reconstruct the complete video image due to information loss, reduce the occurrence of mosaic, stuttering, and blurring in the video information, and enhance the user's viewing experience.
  • FIG. 4A is a schematic structural diagram of a low-delay source-channel joint coding apparatus provided by an embodiment of the present application.
• the low-delay source-channel joint coding apparatus 10 may include a first sampling unit 401, a first encoding unit 402, and a first sending unit 403, and may also include: a first encoding parameter unit 404, a first acquiring unit 405, a second sending unit 406, a second acquiring unit 407, and a third sending unit 408. The detailed description of each unit is as follows.
  • the first sampling unit 401 is used for down-sampling the target image to obtain a reference frame image and a non-reference frame image.
  • the first encoding unit 402 is configured to perform image encoding on the reference frame image and the non-reference frame image to obtain a first code stream after encoding the reference frame image and a second code stream after encoding the non-reference frame image.
  • the first sending unit 403 is configured to determine the first channel resource and the second channel resource based on the channel environment of the current wireless channel, and respectively use the first channel resource to send the first code stream and the second channel resource to send the second code stream, Among them, the first channel resource is better than the second channel resource.
• the above-mentioned first encoding unit 402 is specifically configured to: perform intra-frame compression on each reference frame image included in the above-mentioned reference frame images to obtain the above-mentioned first code stream; and perform the above-mentioned intra-frame compression on the residual between the above-mentioned non-reference frame image and the above-mentioned reference frame image to obtain the corresponding second code stream.
  • the foregoing channel environment further includes channel capacity;
• the device further includes: a first encoding parameter unit 404, configured to determine, based on the channel capacity, the encoding parameters corresponding to the reference frame image and the non-reference frame image before image encoding is performed on the reference frame image and the non-reference frame image separately to obtain the first code stream after encoding the reference frame image and the second code stream after encoding the non-reference frame image, wherein the coding parameters are used to control the corresponding image to generate a corresponding code stream according to the target code rate during image encoding.
• the target code rates corresponding to the first code stream and the second code stream are both less than or equal to the channel capacity.
• the above-mentioned channel environment includes one or more of channel bandwidth, signal to interference plus noise ratio, signal-to-noise ratio, received signal strength indicator, duty cycle, and bit rate; the first sending unit 403 is specifically configured to: determine, based on the current channel environment, the transmission parameters respectively corresponding to the first code stream and/or the second code stream, where the transmission parameters are used to send the code stream according to target modulation and coding scheme information and/or at a target transmission power, and the transmission parameters of the first code stream are better than those of the second code stream; send the first code stream according to the transmission parameters corresponding to the first code stream; and send the second code stream according to the transmission parameters corresponding to the second code stream.
• the above-mentioned wireless channel includes a first wireless channel and a second wireless channel; the first sending unit 403 is specifically configured to: determine the above-mentioned first wireless channel and the above-mentioned second wireless channel based on the current channel environment, send the first code stream through the first wireless channel, and send the second code stream through the second wireless channel, wherein the quality of service guarantee mechanism of the first wireless channel is higher than that of the second wireless channel.
• the above-mentioned target image is any one of the multi-frame target images included in the target video; the above-mentioned apparatus further includes: a first obtaining unit 405, configured to obtain, for each frame of target image in the multi-frame target images included in the above-mentioned target video, the code stream corresponding to the aforementioned reference frame image, and to acquire the audio information in the aforementioned target video; and a second sending unit 406, configured to send, through the first wireless channel, the code stream corresponding to the aforementioned reference frame image for each frame of the aforementioned target video together with the audio information.
• the above-mentioned target image is any one of the multi-frame target images included in the target video; the above-mentioned apparatus further includes: a second acquisition unit 407, configured to obtain, for each frame of target image in the multi-frame target images included in the above-mentioned target video, the code stream corresponding to the aforementioned non-reference frame image; and a third sending unit 408, configured to send, through the aforementioned second wireless channel, the code stream corresponding to the aforementioned non-reference frame image for each frame of the images included in the aforementioned target video.
• for the functions of each functional unit in the low-delay source-channel joint coding apparatus 10 described in the embodiment of the present application, reference may be made to the relevant description in the method embodiments described in FIG. 2A to FIG. 2C, which will not be repeated here.
  • FIG. 4B is a schematic structural diagram of another low-delay source-channel joint coding apparatus provided by an embodiment of the present application.
• the low-delay source-channel joint coding apparatus 20 may include a second sampling unit 411, a second encoding parameter unit 412, a second encoding unit 413, and a fourth sending unit 414, and may also include: a third acquiring unit 415, a fifth sending unit 416, a fourth acquiring unit 417, and a sixth sending unit 418. The detailed description of each unit is as follows.
  • the second sampling unit 411 is used for down-sampling the target image to obtain a reference frame image and a non-reference frame image.
  • the second encoding parameter unit 412 is configured to determine the encoding parameters corresponding to the reference frame image and the non-reference frame image according to the channel environment of the current wireless channel.
  • the second encoding unit 413 is configured to perform image encoding on the reference frame image and the non-reference frame image according to the encoding parameters, to obtain the first code stream after the reference frame image is coded, and the second code stream after the non-reference frame image is coded .
  • the fourth sending unit 414 is configured to send the first code stream and the second code stream respectively.
  • the foregoing encoding parameters are used to control the corresponding image to generate a corresponding code stream according to the target bit rate during image encoding;
• the foregoing second encoding unit 413 is specifically configured to: perform intra-frame compression on each reference frame image included in the foregoing reference frame images according to the foregoing target code rate to obtain the foregoing first code stream; and perform the foregoing intra-frame compression on the residual between the foregoing non-reference frame image and the foregoing reference frame image according to the foregoing target code rate to obtain the corresponding second code stream.
  • the foregoing channel environment includes one or more of channel bandwidth, signal to interference plus noise ratio, signal-to-noise ratio, received signal strength indicator, duty cycle, and bit rate;
• the fourth sending unit 414, configured to separately send the first code stream and the second code stream, is further configured to: determine, according to the channel environment, the transmission parameters respectively corresponding to the first code stream and/or the second code stream, where the transmission parameters are used to send the code stream according to the target modulation and coding scheme information and/or the target transmission power, and the transmission parameters of the first code stream are better than the transmission parameters of the second code stream.
• the above-mentioned wireless channel includes a first wireless channel and a second wireless channel; the above-mentioned fourth sending unit 414 is specifically configured to: send the above-mentioned first code stream through the first wireless channel, and send the above-mentioned second code stream through the second wireless channel, wherein the quality of service guarantee mechanism of the first wireless channel is higher than that of the second wireless channel.
• the above-mentioned target image is any one of the multi-frame target images included in the target video; the above-mentioned apparatus further includes: a third acquisition unit 415, configured to obtain, for each frame of target image in the multi-frame target images included in the above-mentioned target video, the code stream corresponding to the aforementioned reference frame image, and to acquire the audio information in the aforementioned target video; and a fifth sending unit 416, configured to send, through the first wireless channel, the code stream corresponding to the aforementioned reference frame image for each frame of the aforementioned target video together with the audio information.
• the above-mentioned target image is any one of the multi-frame target images included in the target video; the above-mentioned device further includes: a fourth obtaining unit 417, configured to obtain, for each frame of target image in the multi-frame target images included in the above-mentioned target video, the code stream corresponding to the aforementioned non-reference frame image; and a sixth sending unit 418, configured to send, through the aforementioned second wireless channel, the code stream corresponding to the aforementioned non-reference frame image for each frame of the images included in the aforementioned target video.
  • each functional unit in the low-delay source-channel joint coding apparatus 20 described in the embodiment of the present application can be referred to the related description in the method embodiment described in FIG. 3A-FIG. 3C, here No longer.
  • FIG. 4C is a schematic structural diagram of a low-delay source-channel joint decoding apparatus provided by an embodiment of the present application.
• the low-delay source-channel joint decoding apparatus 30 may include a receiving unit 421, a decoding unit 422, and an image unit 423, and may further include: a fifth acquiring unit 424 and a sixth acquiring unit 425. The detailed description of each unit is as follows.
  • the receiving unit 421 is configured to receive a first code stream sent by an encoding end, where the first code stream is obtained by performing image encoding on a reference frame image, and the reference frame image is one of the images obtained by down-sampling the target image.
  • the decoding unit 422 is configured to perform image decoding on the first code stream to obtain the reference frame image corresponding to the first code stream.
  • the image unit 423 is configured to reconstruct the target image according to the reference frame image.
  • the above-mentioned target image is any one of the multiple frames of target images included in the target video; the above-mentioned image unit 423 is specifically configured to: reconstruct the target image by using an interpolation algorithm according to the reference frame image and at least one of the reference frame image and the non-reference frame image corresponding to a frame of target image adjacent to the target image.
  • the device further includes: a fifth acquiring unit 424, configured to receive a second code stream sent by the encoding end within a preset time period after the first code stream is received, where the second code stream is obtained by performing image encoding on a non-reference frame image, and the non-reference frame images are the images, other than the reference frame image, obtained by down-sampling the target image; if the second code stream is incomplete, the image unit 423 is specifically configured to: reconstruct the target image by using an interpolation algorithm according to the peripheral pixels of the reference frame image and the incomplete non-reference frame image.
  • the device further includes: a sixth obtaining unit 425, configured to: if the second code stream is complete, perform image decoding on the second code stream to obtain the corresponding non-reference frame image; in this case, the image unit 423 is specifically configured to: stitch the reference frame image and the non-reference frame image together to reconstruct the target image.
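The two reconstruction paths above (stitching when the second code stream is complete; interpolation when it is not) can be sketched as follows. This is a minimal sketch assuming a 2×2 polyphase down-sampling layout and a nearest-neighbour fallback; the function names and the phase arrangement are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def polyphase_split(img):
    """Split an image into four sub-images by 2x2 polyphase down-sampling.
    Here the top-left phase plays the role of the reference frame image and
    the remaining three phases play the non-reference frame images
    (an assumed layout, for illustration only)."""
    ref = img[0::2, 0::2]
    non_ref = [img[0::2, 1::2], img[1::2, 0::2], img[1::2, 1::2]]
    return ref, non_ref

def reconstruct(ref, non_ref=None):
    """Rebuild the target image. When the non-reference sub-images arrived
    intact, stitch all four phases back by interleaving; when they are
    missing, fill the gaps from the reference phase (a crude stand-in for
    the interpolation from peripheral pixels described above)."""
    h, w = ref.shape
    out = np.empty((2 * h, 2 * w), dtype=ref.dtype)
    out[0::2, 0::2] = ref
    if non_ref is not None:
        out[0::2, 1::2], out[1::2, 0::2], out[1::2, 1::2] = non_ref
    else:
        out[0::2, 1::2] = out[1::2, 0::2] = out[1::2, 1::2] = ref
    return out
```

With a complete second code stream the stitched output is bit-exact; with a lost second code stream the output keeps the reference phase and degrades gracefully instead of corrupting the whole frame.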
  • for each functional unit in the low-delay source-channel joint decoding apparatus 30 described in this embodiment of the present application, reference may be made to the related description in the method embodiments of FIG. 3A to FIG. 3C; details are not repeated here.
  • FIG. 5 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • the terminal device 40 is an intelligent terminal capable of wireless image transmission, such as a projector, a video recorder, a mobile phone, a tablet computer, or a vehicle-mounted terminal.
  • the device includes at least one processor 501, at least one memory 502, and at least one communication interface 503.
  • the device may also include general components such as antennas, which will not be described in detail here.
  • the processor 501 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or one or more integrated circuits used to control the execution of the programs of the above solutions.
  • the communication interface 503 is used to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), a core network, or a wireless local area network (WLAN).
  • the memory 502 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM) or another type of dynamic storage device that can store information and instructions; it may also be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory can exist independently and is connected to the processor through a bus.
  • the memory can also be integrated with the processor.
  • the memory 502 is used to store application program codes for executing the above solutions, and the processor 501 controls the execution.
  • the processor 501 is configured to execute application program codes stored in the memory 502.
  • the code stored in the memory 502 can execute the low-delay source-channel joint coding method provided in FIG. 2A, for example: down-sample the target image to obtain a reference frame image and non-reference frame images; perform image encoding on the reference frame image and the non-reference frame images separately to obtain a first code stream after encoding the reference frame image and a second code stream after encoding the non-reference frame images; based on the channel environment of the current wireless channel, determine a first channel resource and a second channel resource, and use the first channel resource to send the first code stream and the second channel resource to send the second code stream, where the first channel resource is better than the second channel resource.
  • the code stored in the memory 502 can also execute the low-delay source-channel joint coding method provided in FIG. 3A, for example: down-sample the target image to obtain a reference frame image and non-reference frame images; determine, according to the channel environment of the current wireless channel, the encoding parameters corresponding to the reference frame image and the non-reference frame images; perform image encoding on the reference frame image and the non-reference frame images according to the encoding parameters to obtain a first code stream after encoding the reference frame image and a second code stream after encoding the non-reference frame images; and send the first code stream and the second code stream separately.
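The FIG. 2A flow summarized above turns on one decision: rank the available channel resources by quality and keep the reference-frame code stream on the better one. Below is a minimal sketch of that allocation step, where the `Channel` type, the SNR figure of merit, and all names are illustrative assumptions rather than the patent's actual interfaces:

```python
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    snr_db: float  # hypothetical quality metric for the channel resource

def allocate(channels):
    """Return (better, worse) channel resources, ranked by the quality
    metric; the better resource carries the reference-frame stream."""
    ranked = sorted(channels, key=lambda c: c.snr_db, reverse=True)
    return ranked[0], ranked[1]

def send_streams(first_stream, second_stream, channels):
    """Map the first (reference-frame) code stream to the better channel
    and the second (non-reference-frame) code stream to the other one."""
    better, worse = allocate(channels)
    return {better.name: first_stream, worse.name: second_stream}
```

If the channel environment changes, re-running `allocate` on fresh measurements re-maps the streams, so the reference frames always travel over the more reliable resource.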
  • the disclosed device may be implemented in other ways.
  • the device embodiments described above are only illustrative; for example, the division into the above-mentioned units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the above integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like, and specifically a processor in a computer device) to execute all or part of the steps of the above methods of the various embodiments of the present application.
  • the aforementioned storage medium may include: a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Embodiments of the invention provide a low-delay joint source-channel coding method and a related device. The low-delay joint source-channel coding method comprises: first, down-sampling a target image to obtain a reference frame image and a non-reference frame image; performing image encoding on the reference frame image and the non-reference frame image respectively to obtain a first code stream after encoding the reference frame image and a second code stream after encoding the non-reference frame image; and, on the basis of the channel environment of a current wireless channel, transmitting the first code stream using a first channel resource and transmitting the second code stream using a second channel resource, the first channel resource being better than the second channel resource. By implementing the embodiments of the present invention, when the channel capacity drops, phenomena such as pixelation, artifacts, and blurring of the video can be avoided, thereby improving the user's viewing experience.
PCT/CN2019/109220 2019-09-29 2019-09-29 Low-delay joint source-channel coding method and related device WO2021056575A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980100614.8A CN114424552A (zh) Low-delay joint source-channel coding method and related device
PCT/CN2019/109220 WO2021056575A1 (fr) Low-delay joint source-channel coding method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/109220 WO2021056575A1 (fr) Low-delay joint source-channel coding method and related device

Publications (1)

Publication Number Publication Date
WO2021056575A1 true WO2021056575A1 (fr) 2021-04-01

Family

ID=75165457

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/109220 WO2021056575A1 (fr) Low-delay joint source-channel coding method and related device

Country Status (2)

Country Link
CN (1) CN114424552A (fr)
WO (1) WO2021056575A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024114475A1 (fr) * 2022-11-30 2024-06-06 摩尔线程智能科技(北京)有限责任公司 Procédé et appareil de transcodage vidéo, dispositif électronique, support de stockage lisible par ordinateur, et produit programme informatique

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116456094B (zh) * 2023-06-15 2023-09-05 中南大学 Distributed hybrid digital-analog video transmission method and related device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040179598A1 (en) * 2003-02-21 2004-09-16 Jian Zhou Multi-path transmission of fine-granular scalability video streams
CN102281436A (zh) * 2011-03-15 2011-12-14 福建星网锐捷网络有限公司 无线视频传输方法、装置及网络设备
US20120092991A1 (en) * 2010-10-15 2012-04-19 Apple Inc. Adapting transmission to improve qos in a mobile wireless device
CN103580773A (zh) * 2012-07-18 2014-02-12 中兴通讯股份有限公司 数据帧的传输方法及装置
CN107809662A (zh) * 2017-11-06 2018-03-16 陕西师范大学 一种基于异构无线自组织网络的可分级视频传输方法及装置

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006080655A1 (fr) * 2004-10-18 2006-08-03 Samsung Electronics Co., Ltd. Apparatus and method for adjusting the bit rate of a multi-layer-based scalable coded bitstream
KR100664929B1 (ko) * 2004-10-21 2007-01-04 삼성전자주식회사 Method and apparatus for efficiently compressing motion vectors in a multi-layer based video coder
CN100588249C (zh) * 2006-07-27 2010-02-03 腾讯科技(深圳)有限公司 Method, system and terminal for adjusting video quality
CN100584017C (zh) * 2006-12-31 2010-01-20 联想(北京)有限公司 Video communication method based on a P2P network
CN101404759B (zh) * 2008-10-30 2010-09-08 中山大学 Network-adaptive system for a digital video surveillance system
CN101448157A (zh) * 2008-12-30 2009-06-03 杭州华三通信技术有限公司 Video coding method and video encoder
GB2499865B (en) * 2012-03-02 2016-07-06 Canon Kk Method and devices for encoding a sequence of images into a scalable video bit-stream, and decoding a corresponding scalable video bit-stream
CN104247423B (zh) * 2012-03-21 2018-08-07 联发科技(新加坡)私人有限公司 Intra mode coding method and apparatus for a scalable video coding system
CN102769747A (zh) * 2012-06-29 2012-11-07 中山大学 Hierarchical distributed video coding and decoding method and system based on parallel iteration
CN103716630B (zh) * 2012-09-29 2017-02-22 华为技术有限公司 Method and apparatus for generating an up-sampling filter
DE102014006080A1 (de) * 2014-04-25 2015-10-29 Unify Gmbh & Co. Kg Method and device for transmitting coded media data
CN108496369A (zh) * 2017-03-30 2018-09-04 深圳市大疆创新科技有限公司 Video transmission and reception method, system, device, and unmanned aerial vehicle
CN109618188B (zh) * 2018-12-19 2021-03-30 北京东土科技股份有限公司 Video data encoding and forwarding method, apparatus, device, and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040179598A1 (en) * 2003-02-21 2004-09-16 Jian Zhou Multi-path transmission of fine-granular scalability video streams
US20120092991A1 (en) * 2010-10-15 2012-04-19 Apple Inc. Adapting transmission to improve qos in a mobile wireless device
CN102281436A (zh) * 2011-03-15 2011-12-14 福建星网锐捷网络有限公司 无线视频传输方法、装置及网络设备
CN103580773A (zh) * 2012-07-18 2014-02-12 中兴通讯股份有限公司 数据帧的传输方法及装置
CN107809662A (zh) * 2017-11-06 2018-03-16 陕西师范大学 一种基于异构无线自组织网络的可分级视频传输方法及装置

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024114475A1 (fr) * 2022-11-30 2024-06-06 摩尔线程智能科技(北京)有限责任公司 Procédé et appareil de transcodage vidéo, dispositif électronique, support de stockage lisible par ordinateur, et produit programme informatique

Also Published As

Publication number Publication date
CN114424552A (zh) 2022-04-29

Similar Documents

Publication Publication Date Title
TWI663880B (zh) Method and apparatus for chroma prediction
JP5882547B2 (ja) Optimization of coding and transmission parameters within a picture upon scene changes
US9071841B2 (en) Video transcoding with dynamically modifiable spatial resolution
US7924917B2 (en) Method for encoding and decoding video signals
TWI647946B (zh) Image encoding and decoding method and apparatus
TWI505694B (zh) Encoder and encoding method
US9414086B2 (en) Partial frame utilization in video codecs
US11743475B2 (en) Advanced video coding method, system, apparatus, and storage medium
TW201931853 (zh) Quantization parameter control for video coding with joint pixel/transform-based quantization
CN102396225B (zh) Dual-mode compression of images and video for reliable real-time transmission
US10542265B2 (en) Self-adaptive prediction method for multi-layer codec
CN103493481A (zh) Scene-based adaptive bitrate control
US20220295071A1 (en) Video encoding method, video decoding method, and corresponding apparatus
US20080205513A1 (en) Method and system for upsampling a spatial layered coded video image
WO2021185257A1 (fr) Image encoding method, image decoding method, and related apparatuses
CA3057894C (fr) Video compression using two-phase down-sampling patterns
WO2021056575A1 (fr) Low-delay joint source-channel coding method and related device
US20230017934A1 (en) Image encoding and decoding methods and apparatuses
CN115866297A (zh) 视频处理方法、装置、设备及存储介质
KR20060043050A (ko) Method for encoding and decoding a video signal
KR20220063063A (ko) Method and apparatus for performing artificial-intelligence encoding and artificial-intelligence decoding
US20150078433A1 (en) Reducing bandwidth and/or storage of video bitstreams
Jung Comparison of video quality assessment methods
US11582462B1 (en) Constraint-modified selection of video encoding configurations
US20230196505A1 (en) Artificial intelligence-based image providing apparatus and method, and artificial intelligence-based display apparatus and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19946291

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19946291

Country of ref document: EP

Kind code of ref document: A1