CN113709463A - Data transmission method and device - Google Patents

Data transmission method and device

Info

Publication number
CN113709463A
CN113709463A
Authority
CN
China
Prior art keywords
data
jscc
image
physical layer
field
Prior art date
Legal status
Granted
Application number
CN202010431926.1A
Other languages
Chinese (zh)
Other versions
CN113709463B (en)
Inventor
颜敏
马梦瑶
李佳徽
林伟
杨讯
于健
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202010431926.1A
Publication of CN113709463A
Application granted
Publication of CN113709463B
Status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124: Quantisation
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/12: Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The application discloses a data transmission method and device. The method includes the following steps: a first device obtains first data of a first image, where the first data includes a plurality of pieces of sub-data, and each piece of sub-data of the first data includes data of one or more bit planes of the first image; the first device sends a physical layer frame to a second device, where the physical layer frame includes a joint source-channel coding (JSCC) information indicator and second data, and each piece of sub-data of the second data is obtained by JSCC-encoding a piece of sub-data of the first data; the physical layer frame includes N payloads, each of the N payloads carries one piece of sub-data of the second data, and N is a positive integer; the JSCC information indicator includes a first field, and the information carried by the first field indicates the length of the data of each bit plane. By JSCC-encoding the first image, the transmission quality of wireless video or images can be improved.

Description

Data transmission method and device
Technical Field
The embodiment of the application relates to the field of communication, in particular to a data transmission method and device.
Background
With the development of information technology and the progress of society, the demand for information keeps growing, and multimedia communication has become a focus of attention. Video is an important component of multimedia data: it is accurate, real-time, intuitive, concrete, and vivid, brings a visual experience to users, and greatly enriches traditional services. Because the bandwidth of wireless channels is limited, video data needs to be compressed efficiently. However, techniques such as predictive coding and variable-length coding used in video coding make the bit stream very sensitive to channel errors while achieving efficient compression, so the video transmission quality is poor.
Disclosure of Invention
The application provides a data transmission method and device, which can improve the quality of wireless video and image transmission.
In a first aspect, a data transmission method is provided, where the method includes: a first device obtains first data of a first image, where the first data includes a plurality of pieces of sub-data, and each piece of sub-data of the first data includes data of one or more bit planes of the first image; the first device sends a physical layer frame to a second device, where the physical layer frame includes a joint source-channel coding (JSCC) information indicator and second data, and each piece of sub-data of the second data is obtained by JSCC-encoding a piece of sub-data of the first data; the physical layer frame includes N payloads, each of the N payloads carries one piece of sub-data of the second data, and N is a positive integer; the JSCC information indicator includes a first field, and the information carried by the first field indicates the length of the data of each bit plane.
In the above technical solution, the second data in the transmitted physical layer frame is obtained by JSCC-encoding the data of each bit plane of the first image, and the physical layer frame contains a JSCC information indicator indicating that the currently transmitted physical layer frame is a JSCC frame, so that the first device can transmit the first image by using the JSCC technique, which improves the quality of wireless image or video transmission.
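For illustration only, the following Python sketch models the frame layout described in this aspect: a JSCC information indicator whose first field lists the per-bit-plane data lengths, and N payloads each carrying one piece of JSCC-encoded sub-data. All names (JsccInfoIndicator, JsccPayload, build_frame) are assumptions introduced for this sketch, not part of the claimed format.

```python
# A minimal sketch (not the patent's normative format) of the JSCC physical
# layer frame: an information indicator with the "first field" of per-bit-plane
# lengths, plus N payloads of JSCC-encoded sub-data.
from dataclasses import dataclass, field
from typing import List

@dataclass
class JsccInfoIndicator:
    # "First field": length (in bits) of the data of each bit plane.
    bit_plane_lengths: List[int] = field(default_factory=list)

@dataclass
class JsccPayload:
    # One piece of sub-data of the second data: one or more JSCC-encoded bit planes.
    encoded_bit_planes: bytes = b""

@dataclass
class PhysicalLayerFrame:
    indicator: JsccInfoIndicator
    payloads: List[JsccPayload]          # N payloads, N >= 1

def build_frame(encoded_subdata: List[bytes], plane_lengths: List[int]) -> PhysicalLayerFrame:
    """Assemble a JSCC physical layer frame from already-encoded sub-data."""
    return PhysicalLayerFrame(
        indicator=JsccInfoIndicator(bit_plane_lengths=plane_lengths),
        payloads=[JsccPayload(encoded_bit_planes=d) for d in encoded_subdata],
    )

frame = build_frame([b"\x01\x02", b"\x03"], plane_lengths=[16, 8])
print(len(frame.payloads))   # N = 2
```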
In a possible implementation manner, when the access manner is time division multiple access TDMA, the JSCC information indicator is carried in the JSCC signaling field of the physical layer frame.
In another possible implementation, the JSCC signaling field includes one common JSCC signaling field and N dedicated JSCC signaling fields, where the common JSCC signaling field includes the first field; and the information carried by the ith dedicated JSCC signaling field in the N dedicated JSCC signaling fields indicates the number of bit planes, the sequence number, and the modulation mode of the ith payload in the N payloads.
In another possible implementation, the common JSCC signaling field may further include: frame rate, image size, pixel color bit depth, quantization step size, Discrete Cosine Transform (DCT) size, the number of DCT blocks contained in each coding block, the number of DCT coefficient quantization bit planes, the number of coding blocks, and the number of physical layer service data units (PSDUs).
In another possible implementation manner, when the access manner is an orthogonal frequency division multiple access OFDMA, the JSCC information indication symbol is carried in a JSCC signaling field in the high-efficiency signaling field B of the physical layer frame, or the JSCC information indication symbol is carried in a JSCC signaling field in the very high throughput signaling field; the high efficiency signaling field B or the very high throughput signaling field also includes a resource unit, RU, allocation field that carries information for indicating the data of the bit plane corresponding to each RU block.
When the access mode is the OFDMA format, user mixed transmission can be realized, that is, users supporting JSCC and users not supporting JSCC can simultaneously use given frequency domain resources for transmission, thereby improving the utilization rate of the frequency domain resources.
Illustratively, the first data is data obtained by a DCT transform, quantization and layering operations of the first image at a physical layer of the first device.
In another possible implementation manner, the JSCC information indicator may further include: frame rate, color coding mode, image size, pixel color bit depth, quantization step size, Discrete Cosine Transform (DCT) size, the number of DCT blocks contained in each coding block, the number of DCT coefficient quantization bit planes, the number of coding blocks, the number of physical layer service data units (PSDUs), the number of bit planes of each PSDU, and part or all of the modulation modes of each PSDU.
In another possible implementation manner, part or all of the fields in the JSCC information indicator may be carried in a header field of the MAC layer data.
In another possible implementation manner, the field that is not included in the JSCC information indicator may be sent to the second device through other signaling.
The JSCC information indicator can carry parameters of the first image, parameters of the DCT transform, quantization, and layering operations performed on the first image, and parameters of the JSCC encoding, so that the second device can decode the received physical layer frame and restore the first image.
Illustratively, the first data is data obtained by a DCT transform, a quantization operation of the first image at a media access control MAC layer of the first device, and a layering operation at a physical layer of the first device.
In another possible implementation, the MAC header field may include a quantization step size, DCT transform size field; the JSCC information indication symbol may further include: frame rate, color coding mode, image size, pixel color bit depth, the number of DCT blocks contained in each coding block, the number of DCT coefficient quantization bit planes, the number of coding blocks, the number of physical layer service data units (PSDUs), the number of bit planes of each PSDU, and some or all of the modulation modes of each PSDU.
In another possible implementation, the subframe header field of the aggregate MAC layer service data unit A-MSDU may include a picture size (resolution), picture color bit depth field.
In another possible implementation, the MAC layer protocol data unit MPDU delimiter may include: quantization step size, DCT transform size field.
Illustratively, the first data is data obtained by DCT transformation, quantization and layering operations of the first image at a media access control MAC layer of the first device.
In another possible implementation, the MAC header fields may include: quantization step size, DCT transformation size, DCT coefficient quantization bit plane number, DCT block number field contained in each coding block; the JSCC information indication symbol may further include: frame rate, color coding scheme, image size, pixel color bit depth, number of coding blocks, number of physical layer service data units (PSDUs), number of bit planes per PSDU, and modulation scheme per PSDU.
In another possible implementation, the subframe header field of the aggregate MAC layer service data unit A-MSDU may include a picture size (resolution), picture color bit depth field.
In another possible implementation, the MAC layer protocol data unit MPDU delimiter may include: quantization step size, DCT transform size, DCT coefficient quantization bit plane number, DCT block number field contained by each coding block.
Illustratively, the first data is data obtained by DCT transformation, quantization and layering operations of the first image at an application layer or logical link control, LLC, layer of the first device.
In another possible implementation, the subframe header field of the aggregate MAC layer service data unit A-MSDU may include a quantization step size, a DCT transform size, a DCT coefficient quantization bit plane number field; the JSCC information indication symbol may further include: frame rate, color coding mode, image size, pixel color bit depth, the number of DCT blocks contained in each coding block, the number of coding blocks, the number of physical layer service data units (PSDUs), the number of bit planes of each PSDU, and some or all of the modulation modes of each PSDU.
Illustratively, the first data is data obtained by a DCT transform, quantization of the first image at an application layer or logical link control LLC layer of the first device, and a layering operation at a MAC layer or physical layer of the first device.
In another possible implementation, the MAC header field may include a quantization step size, a DCT transform size, a DCT coefficient quantization bitplane number field; the JSCC information indication symbol may further include: frame rate, color coding mode, image size, pixel color bit depth, the number of DCT blocks contained in each coding block, the number of coding blocks, the number of physical layer service data units (PSDUs), the number of bit planes of each PSDU, and some or all of the modulation modes of each PSDU.
In another possible implementation manner, the JSCC information indicator may further include: a MAC header length field and a MAC header encoding mode field.
In the above technical solution, the DCT transform, quantization, and layering operations performed on the first image to obtain the first data may be carried out at the application layer or the LLC layer, at the MAC layer, at the physical layer, or split across some of these layers, which improves the flexibility of the data transmission method.
In a second aspect, a data transmission method is provided, which includes: a second device receives a physical layer frame sent by a first device, where the physical layer frame includes a joint source-channel coding (JSCC) information indicator and second data, the physical layer frame includes N payloads, each of the N payloads carries one piece of sub-data of the second data, and N is a positive integer; the JSCC information indicator includes a first field, and information carried by the first field indicates the length of the data of each bit plane; the second device decodes the second data to obtain first data of a first image, where the first data includes a plurality of pieces of sub-data, and each piece of sub-data of the first data includes data of one or more bit planes of the first image; and the second device obtains the first image according to the first data.
In the above technical solution, the second data is data obtained by JSCC encoding of the first data, and the second device may restore the first image by decoding the received second data, thereby implementing high-quality transmission of the wireless video or image.
In a possible implementation manner, when the access manner is time division multiple access TDMA, the JSCC information indicator is carried in the JSCC signaling field of the physical layer frame.
In another possible implementation, the JSCC signaling field includes one common JSCC signaling field and N dedicated JSCC signaling fields, where the common JSCC signaling field includes the first field; and the information carried by the ith dedicated JSCC signaling field in the N dedicated JSCC signaling fields indicates the number of bit planes, the sequence number, and the modulation mode of the ith payload in the N payloads.
In another possible implementation, the common JSCC signaling field may further include: frame rate, image size, pixel color bit depth, quantization step size, Discrete Cosine Transform (DCT) size, the number of DCT blocks contained in each coding block, the number of DCT coefficient quantization bit planes, the number of coding blocks, and the number of physical layer service data units (PSDUs).
In another possible implementation manner, when the access manner is an orthogonal frequency division multiple access OFDMA, the JSCC information indication symbol is carried in a JSCC signaling field in the high-efficiency signaling field B of the physical layer frame, or the JSCC information indication symbol is carried in a JSCC signaling field in the very high throughput signaling field; the high efficiency signaling field B or the very high throughput signaling field also includes a resource unit, RU, allocation field that carries information for indicating the data of the bit plane corresponding to each RU block.
When the access mode is OFDMA, user mixed transmission can be realized, that is, users supporting JSCC and users not supporting JSCC can simultaneously use given frequency domain resources for transmission, which improves the utilization of the frequency domain resources.
In another possible implementation manner, the second device acquiring the first image according to the first data includes: the first data is subjected to transform coefficient combination, dequantization and DCT inverse transformation operations at a physical layer of the second device to obtain a first image.
In another possible implementation manner, the second device acquiring the first image according to the first data includes: the first data is subjected to a transform coefficient combining operation at a physical layer of the second device, and a first image is obtained through a dequantization and DCT inverse transform operation at a MAC layer of the second device.
In another possible implementation manner, the second device acquiring the first image according to the first data includes: the first data is subjected to transform coefficient combination, dequantization and DCT inverse transformation operations at the MAC layer of the second device to obtain a first image.
In another possible implementation manner, the second device acquiring the first image according to the first data includes: the first data is combined, dequantized and DCT inverse transformed in the application layer or LLC layer of the second device to obtain the first image.
In another possible implementation manner, the second device acquiring the first image according to the first data includes: the first data is combined by transform coefficients at the MAC layer or physical layer of the second device and the first image is obtained by dequantization and inverse DCT transform operations at the application layer or LLC layer of the second device.
In a third aspect, a data transmission apparatus is provided, which is applied to a first device and includes: an obtaining module, configured to obtain first data of a first image, where the first data includes a plurality of pieces of sub-data, and each piece of sub-data of the first data includes data of one or more bit planes of the first image; and a sending module, configured to send a physical layer frame to a second device, where the physical layer frame includes a joint source-channel coding (JSCC) information indicator and second data, and each piece of sub-data of the second data is obtained by JSCC-encoding a piece of sub-data of the first data; the physical layer frame includes N payloads, each of the N payloads carries one piece of sub-data of the second data, and N is a positive integer; the JSCC information indicator includes a first field, and the information carried by the first field indicates the length of the data of each bit plane.
In the above technical solution, the second data in the transmitted physical layer frame is data obtained by JSCC coding of data of each bit plane of the data of the first image, and the physical layer frame contains a JSCC information indicator indicating that the currently transmitted physical layer frame is a JSCC frame, so that the first device can transmit the first image by using a JSCC technology, and the quality of wireless image or video transmission is improved.
In a possible implementation manner, when the access manner is time division multiple access TDMA, the JSCC information indicator is carried in the JSCC signaling field of the physical layer frame.
In another possible implementation, the JSCC signaling field includes one common JSCC signaling field and N dedicated JSCC signaling fields, where the common JSCC signaling field includes the first field; and the information carried by the ith dedicated JSCC signaling field in the N dedicated JSCC signaling fields indicates the number of bit planes, the sequence number, and the modulation mode of the ith payload in the N payloads.
In another possible implementation, the common JSCC signaling field may further include: frame rate, image size, pixel color bit depth, quantization step size, Discrete Cosine Transform (DCT) size, the number of DCT blocks contained in each coding block, the number of DCT coefficient quantization bit planes, the number of coding blocks, and the number of physical layer service data units (PSDUs).
In another possible implementation manner, when the access manner is an orthogonal frequency division multiple access OFDMA, the JSCC information indication symbol is carried in a JSCC signaling field in the high-efficiency signaling field B of the physical layer frame, or the JSCC information indication symbol is carried in a JSCC signaling field in the very high throughput signaling field; the high efficiency signaling field B or the very high throughput signaling field also includes a resource unit, RU, allocation field that carries information for indicating the data of the bit plane corresponding to each RU block.
When the access mode is the OFDMA mode, user mixed transmission can be realized, that is, users supporting JSCC and users not supporting JSCC can simultaneously use given frequency domain resources for transmission, and the utilization rate of the frequency domain resources is improved.
Illustratively, the first data is data obtained by a DCT transform, quantization and layering operations of the first image at a physical layer of the first device.
In another possible implementation manner, the JSCC information indicator may further include: frame rate, color coding mode, image size, pixel color bit depth, quantization step size, Discrete Cosine Transform (DCT) size, the number of DCT blocks contained in each coding block, the number of DCT coefficient quantization bit planes, the number of coding blocks, the number of physical layer service data units (PSDUs), the number of bit planes of each PSDU, and part or all of the modulation modes of each PSDU.
In another possible implementation manner, part or all of the fields in the JSCC information indicator may be carried in a header field of the MAC layer data.
In another possible implementation manner, the field that is not included in the JSCC information indicator may be sent to the second device through other signaling.
The JSCC information indicator can carry parameters of the first image, parameters of the DCT transform, quantization, and layering operations performed on the first image, and parameters of the JSCC encoding, so that the second device can decode the received physical layer frame and restore the first image.
Illustratively, the obtaining module is specifically configured to: the first image is subjected to DCT transformation and quantization operations on a Media Access Control (MAC) layer of the first device, and first data is acquired through layering operations on a physical layer of the first device.
In another possible implementation, the MAC header field may include a quantization step size, DCT transform size field; the JSCC information indication symbol may further include: frame rate, color coding mode, image size, pixel color bit depth, the number of DCT blocks contained in each coding block, the number of DCT coefficient quantization bit planes, the number of coding blocks, the number of physical layer service data units (PSDUs), the number of bit planes of each PSDU, and some or all of the modulation modes of each PSDU.
In another possible implementation, the subframe header field of the aggregate MAC layer service data unit A-MSDU may include a picture size (resolution), picture color bit depth field.
In another possible implementation, the MAC layer protocol data unit MPDU delimiter may include: quantization step size, DCT transform size field.
Illustratively, the obtaining module is specifically configured to obtain the first data by performing DCT transform, quantization, and layering operations on the first image at the media access control (MAC) layer of the first device.
In another possible implementation, the MAC header fields may include: quantization step size, DCT transformation size, DCT coefficient quantization bit plane number, DCT block number field contained in each coding block; the JSCC information indication symbol may further include: frame rate, color coding scheme, image size, pixel color bit depth, number of coding blocks, number of physical layer service data units (PSDUs), number of bit planes per PSDU, and modulation scheme per PSDU.
In another possible implementation, the subframe header field of the aggregate MAC layer service data unit A-MSDU may include a picture size (resolution), picture color bit depth field.
In another possible implementation, the MAC layer protocol data unit MPDU delimiter may include: quantization step size, DCT transform size, DCT coefficient quantization bit plane number, DCT block number field contained by each coding block.
Illustratively, the obtaining module is specifically configured to obtain the first data by performing DCT transform, quantization, and layering operations on the first image at the application layer or logical link control (LLC) layer of the first device.
In another possible implementation, the subframe header field of the aggregate MAC layer service data unit A-MSDU may include a quantization step size, a DCT transform size, a DCT coefficient quantization bit plane number field; the JSCC information indication symbol may further include: frame rate, color coding mode, image size, pixel color bit depth, the number of DCT blocks contained in each coding block, the number of coding blocks, the number of physical layer service data units (PSDUs), the number of bit planes of each PSDU, and some or all of the modulation modes of each PSDU.
Illustratively, the obtaining module is specifically configured to obtain the first data by performing DCT transform and quantization on the first image at the application layer or logical link control (LLC) layer of the first device, and performing the layering operation at the media access control (MAC) layer or the physical layer of the first device.
In another possible implementation, the MAC header field may include a quantization step size, a DCT transform size, a DCT coefficient quantization bitplane number field; the JSCC information indication symbol may further include: frame rate, color coding mode, image size, pixel color bit depth, the number of DCT blocks contained in each coding block, the number of coding blocks, the number of physical layer service data units (PSDUs), the number of bit planes of each PSDU, and some or all of the modulation modes of each PSDU.
In another possible implementation manner, the JSCC information indicator may further include: a MAC header length field and a MAC header encoding mode field.
In the above technical solution, the DCT transform, quantization, and layering operations performed on the first image to obtain the first data may be carried out at the application layer or the LLC layer, at the MAC layer, at the physical layer, or split across some of these layers, which improves the flexibility of the data transmission method.
In a fourth aspect, a data transmission apparatus is provided, which is applied to a second device and includes: a receiving module, configured to receive a physical layer frame sent by a first device, where the physical layer frame includes a joint source-channel coding (JSCC) information indicator and second data, the physical layer frame includes N payloads, each of the N payloads carries one piece of sub-data of the second data, and N is a positive integer; the JSCC information indicator includes a first field, and information carried by the first field indicates the length of the data of each bit plane; a decoding module, configured to decode the second data to obtain first data of a first image, where the first data includes a plurality of pieces of sub-data, and each piece of sub-data of the first data includes data of one or more bit planes of the first image; and an obtaining module, configured to obtain the first image according to the first data.
In the above technical solution, the second data is data obtained by JSCC encoding of the first data, and the second device may restore the first image by decoding the received second data, thereby implementing high-quality transmission of the wireless video or image.
In a possible implementation manner, when the access manner is time division multiple access TDMA, the JSCC information indicator is carried in the JSCC signaling field of the physical layer frame.
In another possible implementation, the JSCC signaling field includes one common JSCC signaling field and N dedicated JSCC signaling fields, where the common JSCC signaling field includes the first field; and the information carried by the ith dedicated JSCC signaling field in the N dedicated JSCC signaling fields indicates the number of bit planes, the sequence number, and the modulation mode of the ith payload in the N payloads.
In another possible implementation, the common JSCC signaling field may further include: frame rate, image size, pixel color bit depth, quantization step size, Discrete Cosine Transform (DCT) size, the number of DCT blocks contained in each coding block, the number of DCT coefficient quantization bit planes, the number of coding blocks, and the number of physical layer service data units (PSDUs).
In another possible implementation manner, when the access manner is an orthogonal frequency division multiple access OFDMA, the JSCC information indication symbol is carried in a JSCC signaling field in the high-efficiency signaling field B of the physical layer frame, or the JSCC information indication symbol is carried in a JSCC signaling field in the very high throughput signaling field; the high efficiency signaling field B or the very high throughput signaling field also includes a resource unit, RU, allocation field that carries information for indicating the data of the bit plane corresponding to each RU block.
When the access mode is OFDMA, user mixed transmission can be realized, that is, users supporting JSCC and users not supporting JSCC can simultaneously use given frequency domain resources for transmission, which improves the utilization of the frequency domain resources.
In another possible implementation manner, the obtaining module is specifically configured to: the first data is subjected to transform coefficient combination, dequantization and DCT inverse transformation operations at a physical layer of the second device to obtain a first image.
In another possible implementation manner, the obtaining module is specifically configured to: the first data is subjected to a transform coefficient combining operation at a physical layer of the second device, and a first image is obtained through a dequantization and DCT inverse transform operation at a MAC layer of the second device.
In another possible implementation manner, the obtaining module is specifically configured to: the first data is subjected to transform coefficient combination, dequantization and DCT inverse transformation operations at the MAC layer of the second device to obtain a first image.
In another possible implementation manner, the obtaining module is specifically configured to: the first data is combined, dequantized and DCT inverse transformed in the application layer or LLC layer of the second device to obtain the first image.
In another possible implementation manner, the obtaining module is specifically configured to: the first data is combined by transform coefficients at the MAC layer or physical layer of the second device and the first image is obtained by dequantization and inverse DCT transform operations at the application layer or LLC layer of the second device.
In a fifth aspect, a computer-readable medium is provided, which stores program code for execution by a device, the program code comprising instructions for performing the first aspect or the data transmission method in any one of the implementations of the first aspect.
A sixth aspect provides a computer readable medium storing program code for execution by a device, the program code comprising instructions for performing the second aspect or the data transmission method in any one of the implementations of the second aspect.
In a seventh aspect, a computer program product is provided, the computer program product comprising: computer program code for causing a computer to perform the data transmission method of the first aspect or any one of its implementations when the computer program code runs on a computer.
In an eighth aspect, there is provided a computer program product comprising: computer program code for causing a computer to perform the second aspect or the method for data transmission in any implementation of the second aspect, when the computer program code runs on a computer.
A ninth aspect provides a chip, where the chip includes a processor and a data interface, and the processor reads an instruction stored in a memory through the data interface to execute the data transmission method in the first aspect or any implementation manner of the first aspect.
Optionally, as an implementation manner, the chip may further include a memory, the memory stores instructions, the processor is configured to execute the instructions stored on the memory, and when the instructions are executed, the processor is configured to execute the first aspect or the data transmission method in any one implementation manner of the first aspect.
A tenth aspect provides a chip, where the chip includes a processor and a data interface, and the processor reads instructions stored in a memory through the data interface to execute the data transmission method in any implementation manner of the second aspect or the second aspect.
Optionally, as an implementation manner, the chip may further include a memory, the memory stores instructions, the processor is configured to execute the instructions stored on the memory, and when the instructions are executed, the processor is configured to execute the data transmission method in the second aspect or any one implementation manner of the second aspect.
In an eleventh aspect, there is provided an apparatus comprising: a processor and a memory for storing the computer program code, which when run on the processor causes the apparatus to perform the first aspect or the data transmission method of any one of the implementations of the first aspect.
In a twelfth aspect, there is provided an apparatus comprising: a processor and a memory for storing the computer program code, which when run on the processor causes the apparatus to perform the second aspect or the data transmission method of any of the implementations of the second aspect.
A thirteenth aspect provides a communication system, which includes the data transmission apparatus in the third aspect or any implementation of the third aspect and the data transmission apparatus in the fourth aspect or any implementation of the fourth aspect, and which implements the data transmission method in the first aspect or any implementation of the first aspect.
Drawings
Fig. 1 is a flow diagram of a conventional data transmission scheme.
Fig. 2 is a flow diagram of a joint source channel coded data transmission scheme.
Fig. 3 is a schematic flow chart of a data transmission method according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a physical layer frame according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of another physical layer frame according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of another physical layer frame according to an embodiment of the present application.
Fig. 7 is a diagram illustrating a frame structure of a physical layer frame compatible with an existing standard according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of another physical layer frame according to an embodiment of the present application.
Fig. 9 is a diagram illustrating a frame structure of a physical layer frame compatible with an existing standard according to an embodiment of the present application.
Fig. 10 is a schematic diagram of a physical layer frame structure supporting user mixing according to an embodiment of the present application.
Fig. 11 is a schematic structural diagram of another physical layer frame according to an embodiment of the present application.
Fig. 12 is a schematic structural diagram of a MAC layer frame according to an embodiment of the present application.
Fig. 13 is a schematic diagram of a data transmission apparatus according to an embodiment of the present application.
Fig. 14 is a schematic diagram of another data transmission device according to an embodiment of the present application.
Fig. 15 is a schematic diagram of another data transmission device according to an embodiment of the present application.
Fig. 16 is a schematic diagram of another data transmission device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, not all, embodiments of the present application.
The technical solutions of the embodiments of the present application can be applied to various communication systems, for example: a global system for mobile communication (GSM) system, a code division multiple access (CDMA) system, a wideband code division multiple access (WCDMA) system, a general packet radio service (GPRS) system, a long term evolution (LTE) system, an LTE frequency division duplex (FDD) system, an LTE time division duplex (TDD) system, a universal mobile telecommunications system (UMTS), a worldwide interoperability for microwave access (WiMAX) communication system, a fifth generation (5G) or new radio (NR) system, a wireless local area network (WLAN), a future communication network, and the like, and may improve the quality of video transmission in such networks.
Fig. 1 shows the flow of a conventional data transmission scheme. As shown in fig. 1, application layer data at the transmitting end, such as video and image data, is compressed and channel coded, and then transmitted to the receiving end through a channel. The receiving end performs channel decoding and data decompression on the received data to obtain the application layer data sent by the transmitting end.
Due to the limited bandwidth of the wireless channel, the video data needs to be compressed efficiently, however, the technologies of predictive coding, variable length coding and the like adopted by the video coding make the bit stream very sensitive to channel errors while compressing efficiently. Since various noise interferences exist in a wireless channel and the error rate is high, in order to transmit high-quality video data on a wireless mobile network, the transmission of the video data can be realized through Joint Source and Channel Coding (JSCC) capable of adapting to a channel. Fig. 2 is a JSCC data transmission flow diagram. In JSCC, a sending terminal device carries out layering on information sources according to importance, and then codes and transmits layered information. Wherein the source comprises video or image data and control information.
As shown in fig. 2, the transmitting-end device divides a picture frame in an image or video into blocks and performs a discrete cosine transform (DCT) on each block. The DCT can compress the image frame, since most of the energy of the image is concentrated in the low-frequency part after the transform. The DCT-transformed image data is then quantized and layered according to the importance of the data. The layered image data is rateless-coded on different bit planes, such as channel coding 1 through channel coding N shown in fig. 2. The coded data is mapped to resource blocks through bit splicing and symbol modulation. The control information includes the block size, bandwidth, coding, modulation, layered bit widths, and other information used in the above process; the control information is independently channel coded and modulated, mapped to corresponding resource blocks, and sent together with the data information.
At the receiving end, after performing synchronization, channel estimation, and equalization on the received signal, the receiving-end device obtains the control information and the data information through resource demapping. Then, according to the control information, the data information is symbol-split and demodulated to obtain soft information, and channel decoding is performed by belief propagation to obtain the probability of each bit being 0 or 1. Finally, the information is combined according to these probabilities to recover the original source information.
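As a hedged illustration of the final combining step, the sketch below forms the expected value of each quantized coefficient from the per-bit-plane probabilities produced by channel decoding; the unsigned, LSB-first layout is an assumption made for this example only.

```python
# A sketch of "information combination according to the probability": given,
# for each bit plane b, the decoded probability that a coefficient's bit is 1,
# form the expected (soft) value of the quantized coefficient.
import numpy as np

def combine_bit_plane_probabilities(p_one: np.ndarray) -> np.ndarray:
    """p_one has shape (num_planes, num_coeffs); plane 0 is the LSB.

    Returns the expected value of each quantized coefficient (unsigned model).
    """
    num_planes = p_one.shape[0]
    weights = 2.0 ** np.arange(num_planes)          # 1, 2, 4, ...
    return weights @ p_one                          # shape (num_coeffs,)

# Example: two coefficients, three bit planes (LSB first).
p = np.array([[0.9, 0.1],     # LSB: likely 1 / likely 0
              [0.2, 0.8],
              [0.95, 0.05]])
print(combine_bit_plane_probabilities(p))           # ~[5.1, 1.9]
```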
The quality of wireless video or image transmission can be improved by the joint source-channel coding scheme shown in fig. 2. However, the processing procedure of the JSCC scheme differs from that of the existing WiFi protocol. As shown in fig. 2, the modules drawn in bold are the processing modules that need to be added to the existing WiFi flow. To combine the JSCC scheme with the existing WiFi protocol, this application provides a data transmission method that realizes JSCC within the WiFi protocol and improves the transmission quality of wireless video or images.
The data transmission method according to the embodiment of the present application is described in detail below with reference to fig. 3 to 12. Fig. 3 is a schematic flow chart of a data transmission method according to an embodiment of the present application. As shown in fig. 3, the data transmission method of the embodiment of the present application includes steps S310 to S340.
S310, the first device obtains first data of a first image, where the first data includes a plurality of pieces of sub-data, and each piece of sub-data includes data of one or more bit planes of the first image. The first device is the transmitting-end device of the video or image, and the second device is the receiving-end device. In some embodiments, the first image is an image to be transmitted; in other embodiments, the first image is a frame of a video to be transmitted, which is not limited in this application.
Illustratively, the sub-data in the first data in the data transmission method of the present application is data obtained after DCT transform, quantization, and layering operations.
In some embodiments, the first image may also be divided into blocks before the DCT. The blocking operation divides the first image into a plurality of equally sized blocks; for example, into blocks of 8 × 8 pixels each, or into blocks of other sizes. It should be understood that when the size of the first image already meets the requirements of the subsequent operations, the first image does not need to be blocked.
After blocking, a DCT is performed on each block. The DCT is given by formula (1):

F(u,v) = c(u)\,c(v)\sum_{i=0}^{M-1}\sum_{j=0}^{M-1} f(i,j)\,\cos\frac{(2i+1)u\pi}{2M}\,\cos\frac{(2j+1)v\pi}{2M}    (1)
where M represents the DCT transform size;
(u, v) and (i, j) index the pixel positions;
c(u) and c(v) are the transform coefficients;
F(u, v) represents the value at each position after the DCT, and f(i, j) represents the value at each pixel before the DCT. Illustratively, if the color coding of the first image is RGB, then, taking the red (R) channel as an example, F(u, v) is the value of the R channel at each position after the DCT and f(i, j) is the value of the R channel at each pixel before the DCT.
Since the energy of the image is concentrated in the low frequency part after the DCT transformation, lossy compression of the first image can be achieved through the DCT transformation.
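The following sketch implements formula (1) directly for one 8 × 8 block. The orthonormal choice of the transform coefficients c(u) and c(v) is an assumption, and in practice a fast transform (e.g., scipy.fft.dctn) would replace the explicit loops.

```python
# A direct (unoptimized) implementation of the M x M 2-D DCT of formula (1),
# applied to one 8 x 8 block, for clarity only.
import numpy as np

def dct2(block: np.ndarray) -> np.ndarray:
    M = block.shape[0]
    def c(k):
        # Orthonormal scaling assumed here; the patent leaves c(u), c(v) generic.
        return np.sqrt(1.0 / M) if k == 0 else np.sqrt(2.0 / M)
    F = np.zeros((M, M))
    for u in range(M):
        for v in range(M):
            s = 0.0
            for i in range(M):
                for j in range(M):
                    s += block[i, j] \
                         * np.cos((2 * i + 1) * u * np.pi / (2 * M)) \
                         * np.cos((2 * j + 1) * v * np.pi / (2 * M))
            F[u, v] = c(u) * c(v) * s
    return F

block = np.arange(64, dtype=float).reshape(8, 8)     # toy 8 x 8 block
coeffs = dct2(block)
print(coeffs[0, 0])   # DC coefficient, which carries most of the block energy
```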
Each block of DCT-transformed image data is quantized. The values at different positions of the DCT coefficient matrix represent components of different frequencies in the image data. Quantization approximates the continuous values of these frequency components by a smaller number of discrete values, converting a large amount of image information into information that can be handled efficiently, thereby achieving lossy compression of the first image.
The quantized image data is layered according to importance and divided into different bit planes for rateless coding.
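A minimal sketch of the quantization and layering steps, assuming uniform scalar quantization and a magnitude-only bit-plane split with the most significant plane first; the actual quantizer and plane ordering used by an implementation may differ.

```python
# Uniform quantization of DCT coefficients followed by splitting the quantized
# magnitudes into bit planes (MSB plane first). Sign bits are omitted in this
# simplified sketch.
import numpy as np

def quantize(coeffs: np.ndarray, step: float) -> np.ndarray:
    """Uniform scalar quantization of DCT coefficients."""
    return np.round(coeffs / step).astype(np.int32)

def to_bit_planes(q: np.ndarray, num_planes: int) -> list:
    """Split |q| into `num_planes` bit planes; the first plane is the MSB."""
    mag = np.abs(q)
    planes = []
    for b in range(num_planes - 1, -1, -1):          # MSB ... LSB
        planes.append(((mag >> b) & 1).astype(np.uint8))
    return planes

q = quantize(np.array([[52.3, -7.8], [3.1, 0.4]]), step=4.0)
planes = to_bit_planes(q, num_planes=4)
print(q)
print(planes[0])   # most significant bit plane (most important layer)
```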
S320, the first device sends a physical layer frame, where the physical layer frame includes a joint source-channel coding (JSCC) signaling field and second data.
Specifically, the second data is obtained by JSCC-encoding the first data. The physical layer frame includes N payloads (N is a positive integer), each of the N payloads carries one piece of sub-data of the second data, and the JSCC information indicator includes a first field. The information carried by the first field indicates the length of the data of each bit plane, where the maximum length of each bit plane is 2^18 bits.
S330, the second device receives the physical layer frame sent by the first device, and decodes the second data to obtain the first data. Specifically, since the second data is obtained by JSCC-encoding the first data by the first device, the second device decodes the second data after receiving the physical layer frame transmitted by the first device, and the first data can be obtained.
S340, the second device obtains the first image according to the first data. Specifically, since the first data is obtained by processing the first image, for example by the DCT transform, quantization, and layering operations in step S310, the second device performs the inverse of these operations on the first data and can thereby recover the first image from the first data.
Correspondingly, the inverse of the layering operation in step S310 is transform coefficient combination; the inverse of the quantization operation is dequantization; the inverse of the DCT is the inverse DCT; and the inverse of the blocking operation is block combination.
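The corresponding receive-side inverses can be sketched as follows; scipy's idctn with orthonormal scaling is used as an assumption about the exact DCT variant, matching the transform sketch above.

```python
# Recombine bit planes, dequantize, and apply the inverse DCT to recover one
# pixel block. The MSB-first plane ordering mirrors the layering sketch above.
import numpy as np
from scipy.fft import idctn

def from_bit_planes(planes: list) -> np.ndarray:
    """Inverse of the layering: planes are ordered MSB first."""
    q = np.zeros_like(planes[0], dtype=np.int32)
    for plane in planes:
        q = (q << 1) | plane.astype(np.int32)
    return q

def reconstruct_block(planes: list, step: float) -> np.ndarray:
    q = from_bit_planes(planes)          # transform coefficient combination
    coeffs = q.astype(float) * step      # dequantization
    return idctn(coeffs, norm="ortho")   # inverse DCT -> pixel block

# Round trip for a toy 2 x 2 quantized block (MSB plane, then LSB plane):
planes = [np.array([[1, 0], [0, 0]], dtype=np.uint8),
          np.array([[1, 1], [0, 0]], dtype=np.uint8)]
print(from_bit_planes(planes))                # [[3 1] [0 0]]
print(reconstruct_block(planes, step=4.0))    # reconstructed pixel values
```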
According to the data transmission method, JSCC coding is carried out on the data of the first image, and high-quality wireless transmission of videos or images can be achieved.
In the data transmission method according to the embodiment of the present application, an application layer or a Logical Link Control (LLC) layer, a Medium Access Control (MAC) layer, or a physical layer of the first device may perform some or all of the operations of the partitioning, DCT transformation, quantization, and layering in step S310 to obtain the first data.
For example, in some embodiments, the physical layer receives the first image, and the blocking, DCT transform, quantization, and layering operations in step S310 are all performed at the physical layer. After the first data is obtained, the first data is JSCC-encoded, and the second data is sent in a physical layer frame.
Fig. 4 shows a physical layer frame structure. As shown in fig. 4, one frame transmitted by the physical layer is an aggregated physical layer protocol data unit (A-PPDU) including a preamble, JSCC signaling (JSCC SIG), and the second data, where the physical layer frame includes N payloads. Each of the N payloads is a physical layer protocol data unit (PPDU). Each PPDU includes a physical layer service data unit (PSDU) and control information, where each PSDU carries one piece of sub-data of the second data. It should be understood that the control information in the PPDU according to the embodiment of the present application may be signaling in the existing WiFi standard, which is not limited in the embodiment of the present application. In this case, each PSDU includes one piece of sub-data of the second data.
In other embodiments, the JSCC signaling field may include, in addition to the first field, some or all of the following: the frame rate (frame rate), the color coding method (RGB/YUV), the picture size (picture size, i.e., resolution), the pixel color bit depth (pixel depth), the quantization step size, the DCT or discrete wavelet transform (DWT) size (DCT/DWT size), the number of DCT blocks contained in each coding block, the number of DCT coefficient quantization bit planes, the number of coding blocks, the number of physical layer service data units (number of PSDU), the number of bit planes of each PSDU (number of bit planes per PSDU), and the modulation mode of each PSDU (modulation in each PSDU).
The description of the above fields is shown in table 1.
TABLE 1
(Field descriptions; the table contents are provided as images in the original publication and are not reproduced here.)
In some embodiments, the information carried by the first field may be in the form of an array, where the number of elements in the array is the product of the number of DCT coefficient quantization bit-planes and the number of coding blocks.
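As a worked example of the first field described above, the following sketch packs one length entry per (coding block, bit plane) pair into a single array-like value; the fixed 19-bit entry width and the packing order are illustrative assumptions, not the format defined by the signaling.

```python
# Illustrative only: the first field carries one length per
# (coding block, bit plane) pair; each bit plane is at most 2**18 bits long,
# so a fixed-width entry of (2**18).bit_length() = 19 bits is assumed here.
MAX_PLANE_BITS = 2 ** 18
ENTRY_WIDTH = MAX_PLANE_BITS.bit_length()          # 19 bits per length entry

def pack_first_field(lengths_bits, num_coding_blocks, num_bit_planes):
    """Pack per-bit-plane lengths into one integer (first entry in the MSBs)."""
    assert len(lengths_bits) == num_coding_blocks * num_bit_planes
    packed = 0
    for length in lengths_bits:
        assert 0 <= length <= MAX_PLANE_BITS
        packed = (packed << ENTRY_WIDTH) | length
    return packed, len(lengths_bits) * ENTRY_WIDTH

value, total_bits = pack_first_field([1024, 800, 512, 256],
                                     num_coding_blocks=2, num_bit_planes=2)
print(total_bits)    # 76 bits for a 2 x 2 array of length entries
```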
It should be understood that the fields shown in Table 1 may be partially or fully carried in the JSCC signaling field, or some of them may not be included in the JSCC signaling field at all. The fields not included in the JSCC signaling may be agreed upon in advance through the configuration of the receiving-end device and the transmitting-end device, or sent through other signaling or indication information.
Fig. 5 shows another physical layer frame structure. As shown in fig. 5, one frame transmitted by the physical layer includes a preamble, a legacy signaling field (L-SIG), JSCC signaling, and the second data. The second data is an aggregated physical layer service data unit (A-PSDU), and the A-PSDU includes N payloads, where each payload is a PSDU. Each PSDU includes data of one or more bit planes; in this case, each PSDU carries one piece of sub-data of the second data.
It should be understood that, in the physical layer frame structure shown in fig. 5, the preamble and the L-SIG may be preambles and signaling in an existing WiFi standard, which is not limited in this embodiment of the present application. The fields and the sending method included in the JSCC signaling are the same as those included in the JSCC signaling in the frame structure shown in fig. 4, and are not described herein again.
Fig. 6 shows another physical layer frame structure. As shown in fig. 6, one frame transmitted by the physical layer includes a preamble, an L-SIG, a common JSCC signaling field (e.g., JSCC SIG 0), dedicated JSCC signaling fields (e.g., JSCC SIG i), and the second data. The second data is carried in an A-PPDU, and the A-PPDU includes N payloads and N JSCC signaling fields. Each payload is a PSDU, and each PSDU includes data of one or more bit planes; in this case, each PSDU carries one piece of sub-data of the second data, and the first field in the JSCC information indicator is carried in the common JSCC signaling field (JSCC SIG 0).
It should be understood that, in the framing manner shown in fig. 6, the preamble and the L-SIG may be preambles and signaling in an existing WiFi standard, which is not limited in this embodiment of the present application.
In some embodiments, the JSCC SIG 0 may include, in addition to the first field, some or all of the following: the frame rate (frame rate), the color coding method (RGB/YUV), the image size (picture size, i.e., resolution), the pixel color bit depth (pixel depth), the quantization step size, the DCT or discrete wavelet transform (DWT) size (DCT/DWT size), the number of DCT blocks contained in each coding block, the number of DCT coefficient quantization bit planes, the number of coding blocks, and the number of physical layer service data units (number of PSDU). These fields are described in Table 2.
TABLE 2
(Field descriptions; the table contents are provided as images in the original publication and are not reproduced here.)
In some embodiments, the information carried by the first field may be in the form of an array, the number of elements included in the array being the product of the number of DCT coefficient quantization bit-planes and the number of encoded blocks.
It should be understood that the fields shown in Table 2 may be partially or fully carried in JSCC SIG 0, or some of them may not be included in JSCC SIG 0 at all. The fields not included in JSCC SIG 0 may be agreed upon in advance through the configuration of the receiving-end device and the transmitting-end device, or sent through other signaling or indication information.
In some embodiments, each JSCC SIG i (1 ≤ i ≤ N) of JSCC signaling 1 through JSCC signaling N in the A-PPDU may include the number of bit planes of the current PSDU (DCT coefficient quantization bit plane number in current PSDU), the PSDU sequence number (PSDU index), and the modulation mode of the current PSDU (modulation in current PSDU) fields, as shown in Table 3.
TABLE 3
(Field descriptions; the table contents are provided as an image in the original publication and are not reproduced here.)
It should be understood that the fields shown in Table 3 may be partially or fully carried in JSCC SIG i, or some of them may not be included in JSCC SIG i at all. The fields not included in JSCC SIG i may be agreed upon in advance through the configuration of the receiving-end device and the transmitting-end device, or sent through other signaling or indication information.
In the physical layer frame structures shown in fig. 4 to 6, the JSCC signaling field is added to indicate that the first device supports the JSCC function, so that the compatibility with the existing WiFi protocol framework is facilitated.
For example, fig. 7 is a schematic diagram of a data transmission method of the embodiment of the present application compatible with an existing WiFi protocol framework.
Fig. 7 (a) is a frame structure compatible with the 802.11a, 802.11b, or 802.11g standards. As shown in (a) of fig. 7, a JSCC signaling (JSCC SIG) field is added on the basis of the legacy short training field (L-STF), legacy long training field (L-LTF), legacy signaling (L-SIG), and data in 802.11a/b/g.
Fig. 7 (b) is a frame structure compatible with the 802.11n standard. As shown in fig. 7 (b), a JSCC signaling (JSCC SIG) field is added on the basis of the L-STF, L-LTF, L-SIG, high throughput signaling (HT SIG), high throughput short training field (HT STF), high throughput long training field (HT LTF), and data in 802.11n.
Fig. 7 (c) is a frame structure compatible with the 802.11ad standard. As shown in (c) of fig. 7, a JSCC header (JSCC header) is added on the basis of the short training field (STF), long training field (LTF), header (header), and data in 802.11ad, where the JSCC header corresponds to the JSCC SIG.
Fig. 7 (d) is a frame structure compatible with the 802.11ay standard. As shown in (d) of fig. 7, a JSCC header (JSCC header) is added on the basis of the STF, LTF, header, EDMG header (EDMG header), EDMG STF, EDMG LTF, and data in 802.11ay, where the JSCC header corresponds to the JSCC SIG.
Fig. 7 (e) shows a frame structure compatible with the 802.11ax standard. As shown in (e) of fig. 7, a JSCC signaling (JSCC SIG) field is added on the basis of the L-STF, L-LTF, L-SIG, repeated legacy signaling field (RL-SIG), high efficiency signaling field A (HE-SIG A), high efficiency short training field (HE-STF), high efficiency long training field (HE-LTF), and data in 802.11ax.
It should be understood that, in the frame structures shown in (a) to (e) of fig. 7, the data field may be the first data in the data transmission method of the embodiment of the present application, which is equivalent to the A-PPDU or the A-PSDU in the frame structures shown in fig. 4 to fig. 6, and the JSCC SIG field or the JSCC header field is the same as the JSCC SIG field in the frame structures shown in fig. 4 to fig. 6, and is not described here again.
The frame structures shown in fig. 4 to fig. 7 are frame structures for the case in which the access method is Time Division Multiple Access (TDMA).
In other embodiments, the access mode may be Orthogonal Frequency Division Multiple Access (OFDMA).
Fig. 8 is a physical layer frame structure in which the access scheme is OFDMA. As shown in fig. 8, the physical layer frame illustratively includes N payloads, each payload is a PSDU, and each PSDU includes one sub-data of the second data.
The L-SIG and JSCC SIG in the frame structure shown in fig. 8 are the same as the L-SIG and JSCC SIG in the frame structure shown in fig. 5, and are not described again here.
In some embodiments, the frame structure shown in fig. 8 may also be compatible with existing WiFi standards. Fig. 9 is a schematic diagram of the frame structure shown in fig. 8 being compatible with the existing WiFi standard.
Fig. 9 (a) is a frame structure compatible with the 802.11ax standard when the data is in the OFDMA format. As shown in (a) of fig. 9, the existing HE-SIG A includes a reserved bit (reserved bit) whose value is set to 0 to indicate a JSCC frame. The HE-SIG A also includes the number of HE-SIG B symbols. The existing HE-SIG B field includes a resource unit allocation (RU allocation) field, a JSCC SIG field, and a padding field (padding).
Fig. 9 (b) shows a frame structure compatible with the 802.11be standard when the data is in the OFDMA format. As shown in (b) of fig. 9, the existing universal signaling (U-SIG) includes a reserved bit (reserved bit) whose value is set to 0 to indicate a JSCC frame. The U-SIG also includes a PPDU format field and a SIG B symbol number field. The existing extremely high throughput signaling (EHT SIG) field includes an RU allocation field, a JSCC SIG field, and padding.
In some embodiments, when the access mode is OFDMA, in order to further support transparent hybrid transmission between users supporting 802.11ax or 802.11be and users supporting JSCC, a field indicating JSCC may be further added on the basis of the frame formats shown in fig. 9, so that users supporting JSCC and users not supporting JSCC can transmit on given frequency domain resources at the same time.
Fig. 10 shows frame structures for mixed transmission between users supporting 802.11ax or 802.11be and users supporting JSCC.
As shown in (a) of fig. 10, in the frame structure compatible with the 802.11ax standard, in order to realize hybrid transmission of users that support JSCC and users that do not, 3 fields in the frame structure can indicate that the current frame is transmitted in the JSCC manner. For example, as shown in (a) of fig. 10, indication 1 is the number of SIG B symbols in the HE-SIG A field: when the number of HE-SIG B symbols is 12, it indicates that transmission is currently performed in the JSCC mode. Indication 2 is the last two symbols in the HE-SIG B field: when the number of HE-SIG B symbols in indication 1 is 12, the first 10 symbols of the HE-SIG B field indicate the RU allocation, user information, and padding, respectively, and the last two symbols are the JSCC SIG symbols. Indication 3 is the modulation scheme of the JSCC SIG symbols: in the HE-SIG B field, the first 10 symbols are BPSK modulated, and when the JSCC scheme is adopted, the last two JSCC SIG symbols are QPSK modulated. Therefore, according to any one or more of these 3 indications, it can be determined whether the current device transmits in the JSCC manner.
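A simplified receiver-side check of these indications is sketched below in Python. The "12 HE-SIG B symbols" and "QPSK on the last two symbols" thresholds follow the description above; the function name and the way the modulation is passed in are assumptions, and indication 2 is treated as implied by indication 1.

def is_jscc_frame(num_sigb_symbols, last_two_sigb_modulation):
    indication_1 = (num_sigb_symbols == 12)               # HE-SIG A announces 12 HE-SIG B symbols
    indication_3 = (last_two_sigb_modulation == "QPSK")   # JSCC SIG symbols are QPSK, legacy SIG B symbols are BPSK
    # Indication 2 (the last two HE-SIG B symbols carry the JSCC SIG) only becomes
    # checkable once indication 1 holds, so it is not modeled separately here.
    return indication_1 and indication_3

print(is_jscc_frame(12, "QPSK"))   # True  -> the current frame is transmitted in the JSCC manner
print(is_jscc_frame(10, "BPSK"))   # False -> legacy 802.11ax transmission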
As shown in (b) of fig. 10, in the frame structure compatible with the 802.11be standard, in order to implement mixed transmission of users that support JSCC and users that do not, 3 fields in the frame structure can indicate whether the current frame is transmitted in the JSCC manner. Indication 1 to indication 3 are similar to the 3 indications described above for (a) of fig. 10, and are not described again.
Whether the current device supports JSCC transmission is indicated in a plurality of fields of the frame structure, so that devices supporting JSCC and devices not supporting JSCC can transmit simultaneously, which improves the utilization of frequency domain resources.
As shown in fig. 8 to fig. 10, when the access scheme is OFDMA, the JSCC information indication symbol is carried in the JSCC SIG in the HE-SIG B field or the EHT SIG field of the physical layer frame, and the HE-SIG B field or the EHT SIG field further includes an RU allocation field, where the information carried in the RU allocation field indicates the data of the bit plane corresponding to each RU block.
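The mapping carried by the RU allocation field can be pictured with the short Python sketch below, where each RU block is associated with the data of one bit plane. The one-to-one, in-order pairing and the RU labels are illustrative assumptions rather than the allocation rule defined by the standard or the patent.

def ru_allocation(ru_blocks, bit_planes):
    # pair each RU block with one bit plane's data, in order
    return {ru: plane for ru, plane in zip(ru_blocks, bit_planes)}

print(ru_allocation(["RU26-1", "RU26-2", "RU52-1"],
                    ["bit plane 0 data", "bit plane 1 data", "bit plane 2 data"]))
# {'RU26-1': 'bit plane 0 data', 'RU26-2': 'bit plane 1 data', 'RU52-1': 'bit plane 2 data'}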
Fig. 4 to fig. 7 show physical layer frame structures when the access method is TDMA, and fig. 8 to fig. 10 show physical layer frame structures when the access method is OFDMA. In a general frame structure, the access scheme may be either TDMA or OFDMA. In the general frame structure, a supported standard field is added to indicate that the physical layer frame is a JSCC frame, thereby realizing the function of frame identification.
Fig. 11 is a diagram of a general physical layer frame structure. As shown in fig. 11, in the general frame structure, the U-SIG field includes a supported standard (supported standard) field, a reserved bit field, a PPDU format field, and a JSCC SIG symbol number field. The JSCC SIG field includes an RU allocation field, a newly added field, and padding.
The supported standard field and the reserved bit field realize the function of frame identification and are used for indicating a JSCC frame. Illustratively, the value of the reserved bit field may be 0 to indicate a JSCC frame.
The PPDU format field indicates whether the access scheme is OFDMA or TDMA.
The RU allocation field is effective only when the PPDU format field indicates that the access mode is OFDMA, in which case it indicates the RU allocation corresponding to different bit planes; when the PPDU format field indicates that the access mode is TDMA, the RU allocation field is invalid.
The value of the newly added field is the same as the value of the JSCC SIG field in the frame structures shown in fig. 4 to 5, and is not described herein again.
The data portion of the general frame structure shown in fig. 11 is the second data in the data transmission method according to the embodiment of the present application, and corresponds to the A-PPDU or A-PSDU described above. When the PPDU format field indicates that the access mode is OFDMA, the format of the data portion is as shown in the data portion of the physical layer frame structure in fig. 8, and each RU block corresponds to data of a different bit plane; when the PPDU format field indicates that the access mode is TDMA, the format of the data portion is as shown in the data portions of the physical layer frame structures in fig. 4 to fig. 6, and each PSDU corresponds to data of a different bit plane.
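The way a receiver could interpret the general frame's U-SIG and JSCC SIG fields is sketched below in Python. The dict keys are shorthand for the fields named above, and the parsing logic is a hedged illustration only, not the normative procedure.

def interpret_general_frame(u_sig, jscc_sig):
    if u_sig["reserved_bit"] != 0:
        return "not a JSCC frame"                      # frame identification fails
    if u_sig["ppdu_format"] == "OFDMA":
        # RU allocation is effective: each RU block corresponds to data of a different bit plane
        return {"access": "OFDMA", "ru_allocation": jscc_sig["ru_allocation"]}
    # TDMA: the RU allocation field is invalid; each PSDU corresponds to data of a different bit plane
    return {"access": "TDMA", "ru_allocation": None}

print(interpret_general_frame({"reserved_bit": 0, "ppdu_format": "TDMA"},
                              {"ru_allocation": ["RU26-1", "RU26-2"]}))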
In the above embodiments, the operations of blocking, DCT transform, quantization, and layering of the first image are all performed in the physical layer, which can effectively support the JSCC technology on the basis of the existing WiFi protocol frameworks while ensuring a low data transmission delay.
In other embodiments, the Medium Access Control (MAC) layer may also perform some or all of the operations of blocking, DCT transformation, quantization, and layering.
Illustratively, in other embodiments, the first image directly enters the MAC layer, and some or all of the operations of blocking, DCT transformation, quantization, layering in step S310 are performed at the MAC layer to obtain the first data.
For example, in some embodiments, the original data of the first image corresponds to an aggregated media access control service data unit (A-MSDU), and a MAC layer header (MAC header) is added to the A-MSDU to form a PSDU, which enters the physical layer. After the data enters the physical layer, the operations of blocking, DCT transform, quantization, and layering are performed at the physical layer to obtain the first data, the first data is JSCC encoded to obtain the second data, and the physical layer frames shown in fig. 4 to fig. 11 are sent.
For example, in other embodiments, the MAC header may be directly added to the original data of the first image to form a PSDU, which enters the physical layer. After the data enters the physical layer, the operations of blocking, DCT transform, quantization, and layering are performed at the physical layer to obtain the first data, the first data is JSCC encoded to obtain the second data, and the physical layer frames shown in fig. 4 to fig. 11 are sent.
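The physical layer preprocessing referred to in these examples can be pictured with the simplified Python sketch below (NumPy and SciPy). It is not the patented JSCC encoder: the 8x8 block size, the uniform quantization step, the MSB-first layering of quantized magnitudes, and the omission of sign handling are all illustrative assumptions.

import numpy as np
from scipy.fft import dctn

def block_dct_quantize_layer(image, block=8, q_step=16, num_planes=8):
    h, w = image.shape
    planes = [[] for _ in range(num_planes)]           # one bit list per quantization bit plane
    for y in range(0, h - h % block, block):           # blocking
        for x in range(0, w - w % block, block):
            coeffs = dctn(image[y:y + block, x:x + block].astype(float), norm='ortho')       # DCT transform
            q = np.clip(np.abs(coeffs) / q_step, 0, 2 ** num_planes - 1).astype(np.uint32)   # quantization
            for p in range(num_planes):                 # layering: MSB-first bit planes of the magnitudes
                bits = (q >> (num_planes - 1 - p)) & 1
                planes[p].extend(bits.flatten().tolist())
    return planes                                       # planes[p] would feed one payload / PSDU

img = (np.arange(16 * 16) % 255).reshape(16, 16)
layered = block_dct_quantize_layer(img)
print([len(p) for p in layered])                        # 4 blocks * 64 coefficients = 256 bits per plane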
The format of the MAC header is the format of the MAC header in the existing standard.
In other embodiments, in addition to the fields in the existing standard, the MAC header may further include some fields of the JSCC SIG field in the frame structure. For example, some or all of the frame rate, color coding method (RGB/YUV), picture size, pixel depth, quantization step size, DCT/DWT size, number of DCT blocks contained in each coding block, and number of coding blocks fields may be included in the MAC header. Illustratively, the frame rate and color coding method may be included in the MAC header, with the other fields still included in the JSCC SIG. Illustratively, the pixel depth may be included in the MAC header, with the other fields still included in the JSCC SIG field.
It should be understood that the frame rate, color coding method, picture size, pixel depth, quantization step size, DCT/DWT size, number of DCT blocks contained in each coding block, and number of coding blocks fields may be flexibly configured in the MAC header or the JSCC SIG as needed, which is not limited in the embodiments of the present application.
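This flexibility can be illustrated with the small Python sketch below, which splits one parameter set between the MAC header and the JSCC SIG. The parameter values and the particular split chosen are examples only, not a configuration mandated by the patent.

ALL_PARAMS = {"frame rate": 30, "RGB/YUV": "YUV", "picture size": (1280, 720),
              "pixel depth": 8, "quantization step size": 16, "DCT/DWT size": 8,
              "DCT blocks per coding block": 4, "number of coding blocks": 64}

def split_params(params, carried_in_mac_header):
    mac_header = {k: v for k, v in params.items() if k in carried_in_mac_header}
    jscc_sig = {k: v for k, v in params.items() if k not in carried_in_mac_header}
    return mac_header, jscc_sig

# Example split: frame rate and color coding method ride in the MAC header,
# the remaining parameters stay in the JSCC SIG field.
mac_hdr, sig = split_params(ALL_PARAMS, {"frame rate", "RGB/YUV"})
print(sorted(mac_hdr))    # ['RGB/YUV', 'frame rate']
print(len(sig))           # 6 parameters remain in the JSCC SIG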
When the data of the first image first enters the MAC layer, the JSCC SIG field in the physical layer frame structures shown in fig. 4, fig. 5, and fig. 8 includes, in addition to the payload length in each bit plane field and the fields shown in table 1, the length of the MAC header (MAC header length) and the modulation and coding scheme of the MAC header (MAC header MCS). These fields and their descriptions are shown in table 4.
TABLE 4
Field | Description
MAC header length | Indicates the length of the MAC header.
MAC header MCS | Indicates the coding and modulation scheme of the MAC header.
When the data of the first image first enters the MAC layer, the JSCC SIG 0 field in the physical layer frame structure shown in fig. 6 includes, in addition to the payload length in each bit plane field and the fields shown in table 1, the MAC header length (MAC header length) and the modulation and coding scheme of the MAC header (MAC header MCS). These fields and their descriptions are shown in table 4.
The fields included in each JSCC SIG i (1 ≤ i ≤ N) field in the physical layer frame structure shown in fig. 6 and their descriptions are shown in table 3, and are not described here again.
Illustratively, in some embodiments, the data of the first image corresponds to an A-MSDU at the MAC layer; after the DCT transform and quantization operations are performed at the MAC layer, the MAC header is added and the resulting data unit enters the physical layer; after the layering operation is performed at the physical layer, the first data is obtained; the rest of the operations are performed at the physical layer, that is, JSCC encoding is performed on the first data, the second data is obtained, and the physical layer frame is transmitted. In this case, the MAC header may include the quantization step size and DCT/DWT size fields. At this time, the frame structure of the physical layer is as shown in fig. 4 to fig. 11, the descriptions of the corresponding fields in the frame structure are the same as those in the embodiments corresponding to fig. 4 to fig. 11, and the MAC header length field and the MAC header MCS field shown in table 4 are added to the JSCC SIG field, which is not described here again.
Illustratively, in other embodiments, the data of the first image corresponds to an A-MSDU, the MAC layer performs the DCT transform, quantization, and layering operations to obtain the sub-data of the first data, and the physical layer performs JSCC encoding on the first data to obtain the second data and transmits the physical layer frame. In this case, the MAC header may include the quantization step size, DCT/DWT size, number of DCT blocks contained in each coding block, and number of DCT coefficient quantization bit planes fields of the original JSCC SIG field; the frame structure of the physical layer is as shown in fig. 4 to fig. 11, the descriptions of the corresponding fields in the frame structure are the same as those in the embodiments corresponding to fig. 4 to fig. 11, and the MAC header length field and the MAC header MCS field shown in table 4 are added to the JSCC SIG field, which is not described here again.
In other embodiments, the application layer or Logical Link Control (LLC) layer may also perform some or all of the operations of blocking, DCT transformation, quantization, and layering of the first image to obtain the first data.
Illustratively, in some embodiments, the application layer or the LLC layer performs the blocking, DCT transform, quantization, and layering operations on the first image to obtain the sub-data of the first data. The MAC layer does not perform special processing on the data, and the physical layer completes the rest of the JSCC operations. In this case, each A-MSDU of the MAC layer includes N A-MSDU subframes (subframes), where each subframe includes a subframe header and a payload. Each payload is a MAC layer service data unit (MSDU), and each MSDU includes data of one or more bit planes; in this case, each MSDU includes one piece of sub-data. A MAC header and a frame check sequence (FCS) are added to the A-MSDU to form an MPDU.
Fig. 12 is a frame structure of a MAC layer according to an embodiment of the present application. As shown in fig. 12, in an A-MSDU subframe, the subframe header includes a destination address (DA), a source address (SA), and a data field length (length). In some embodiments, because the application layer or the LLC layer performs the blocking, DCT transform, quantization, and layering operations on the first image, each subframe header may include, in addition to the DA, SA, and length, indication information for these operations, such as the quantization step size, DCT/DWT size, number of DCT blocks corresponding to each coding block, number of coding blocks, and the like. In other embodiments, the additional parameter indication field may be added at any position after the subframe header; for example, the parameter indication field may be placed after the MSDU.
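A minimal Python sketch of packing one such subframe is given below. The 14-byte DA/SA/length header mirrors the A-MSDU subframe header of 802.11, while the appended parameter-indication encoding (two big-endian 16-bit values) and the helper name are assumptions made purely for illustration.

import struct

def pack_amsdu_subframe(da, sa, msdu, params=None):
    """da and sa are 6-byte MAC addresses; msdu carries the data of one or more bit planes."""
    extra = b""
    if params is not None:                               # illustrative parameter indication
        extra = struct.pack(">HH", params["quantization step size"], params["DCT/DWT size"])
    body = extra + msdu
    header = da + sa + struct.pack(">H", len(body))      # DA | SA | length (length covers the body here)
    return header + body

frame = pack_amsdu_subframe(b"\xaa" * 6, b"\xbb" * 6, b"\x00" * 32,
                            params={"quantization step size": 16, "DCT/DWT size": 8})
print(len(frame))   # 14-byte header + 4-byte indication + 32-byte MSDU = 50 bytes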
Illustratively, when a parameter indication field is added in the subframe header, the added fields and descriptions of the fields are shown in table 5.
TABLE 5
Field | Description
frame rate | Indicates the frame rate.
RGB/YUV | Indicates the color coding method.
picture size | Indicates the image size (resolution).
pixel depth | Indicates the pixel color bit depth.
quantization step size | Indicates the quantization step size.
DCT/DWT size | Indicates the size of the DCT or DWT.
number of DCT blocks per coding block | Indicates the number of DCT blocks contained in each coding block.
number of coding blocks | Indicates the number of coding blocks.
number of DCT coefficient quantization bit planes | Indicates the number of DCT coefficient quantization bit planes.
number of bit planes per MSDU | Indicates the number of bit planes corresponding to each MSDU.
payload length in each bit plane | Indicates the length of the data of each bit plane (the first field).
It should be understood that the payload length in each bit plane field is the first field described above.
In some embodiments, the parameters of the two fields payload length in each bit plane and number of bit planes per MSDU in table 5 may also be indicated in the JSCC SIG of the physical layer frame.
It should be understood that, when the MAC frame shown in fig. 12 enters the physical layer, each MPDU forms one PSDU of the physical layer, that is, a data portion of the physical layer frame. Accordingly, the frame structure of the physical layer is as shown in fig. 4 to fig. 11 and is not described here again.
Illustratively, in other embodiments, the application layer or the LLC layer performs the blocking, DCT transform, and quantization operations on the first image, and the MAC layer either performs the layering operation or performs no special processing. When the MAC layer performs no special processing, the physical layer completes the layering operation of the JSCC scheme flow to obtain the sub-data of the first data, as well as the remaining operations of the JSCC flow; when the MAC layer performs the layering operation to obtain the sub-data of the first data, the physical layer completes the remaining operations of the JSCC scheme flow and sends the physical layer frame.
In this case, the structure of the MAC layer frame is as shown in fig. 12. When the MAC layer performs the layering operation, each MSDU includes data of one or more bit planes, and each MSDU includes one sub-data. The fields newly added to the subframe header of each subframe in the A-MSDU of the MAC frame are as shown in table 5 above.
When the MAC layer does not perform the layering operation, each MSDU in the A-MSDU of the MAC frame carries data that has not been layered. In this case, the subframe header of each subframe in the A-MSDU does not need to include the number of DCT coefficient quantization bit planes, the number of bit planes corresponding to each MSDU (number of bit planes per MSDU), or the data length of each bit plane (payload length in each bit plane) fields. When the MPDU enters the physical layer for layering, the JSCC SIG of the physical layer frame includes the number of DCT coefficient quantization bit planes, the number of bit planes corresponding to each PSDU (number of bit planes per PSDU), and the data length of each bit plane (payload length in each bit plane) fields.
In other embodiments, the application layer or the LLC layer maps the data of the first image to an A-MSDU and adds indication information, such as a frame rate (frame rate), a color coding scheme (RGB/YUV), an image size (picture size), a pixel color bit depth, and the like, to the header of each subframe of the A-MSDU. The descriptions of these fields are the same as those of the frame rate, RGB/YUV, picture size, and pixel depth fields in table 1, table 2, or table 5, and are not described again.
The data of the first image is DCT transformed, quantized, and layered at the MAC layer; in this case, the MPDU delimiter (MPDU delimiter) may include the quantization step size, DCT/DWT size, number of DCT coefficient quantization bit planes, and number of DCT blocks contained in each coding block fields.
In this case, the operation after the MPDU enters the physical layer is the same as the case where the MAC layer performs the blocking, DCT transform, quantization, and layering operations, and will not be described herein again.
In other embodiments, the application layer or the LLC layer maps the data of the first image to an A-MSDU and adds indication information, such as a frame rate (frame rate), a color coding scheme (RGB/YUV), an image size (picture size), a pixel color bit depth, and the like, to the header of each subframe of the A-MSDU. The descriptions of these fields are the same as those of the frame rate, RGB/YUV, picture size, and pixel depth fields in table 1, table 2, or table 5, and are not described again.
The data of the first image undergoes the DCT transform and quantization operations at the MAC layer; in this case, the additional indication information in the MPDU delimiter is as shown in the DCT/DWT size and quantization step size fields in table 5.
In this case, the operation after the MPDU enters the physical layer is the same as the case where the MAC layer performs the blocking, DCT transform, quantization, and layering operations, and is not described here again.
In the above scheme, the MAC layer may perform part of the JSCC operations, which also effectively supports the JSCC technology on the basis of the existing WiFi protocol frameworks, is compatible with the relevant operations of the MAC layer, requires fewer modifications, and is more robust.
In the above technical solutions, the application layer, the LLC layer, the MAC layer, and the physical layer of the first device may each perform some or all of the DCT, quantization, and layering operations. The data can be flexibly arranged to be processed at a particular layer according to the processing capability of the first device, which improves the flexibility of data transmission.
It should be understood that, in the above technical solutions, in step S340, when the second device restores the first image from the first data, the inverse of the transformations performed when the first data was generated from the first image is still completed in the corresponding application layer, LLC layer, MAC layer, or physical layer of the second device.
For example, when a first image is subjected to DCT transform, quantization and layering at a physical layer of a first device to obtain first data, a second device receives a physical layer frame, decodes second data to obtain first data, and performs transform coefficient combination, dequantization and DCT inverse transform on the first data at the physical layer of the second device to obtain the first image.
For example, when the first image undergoes the DCT transform and quantization at the MAC layer of the first device and the first data is obtained through the layering operation at the physical layer of the first device, the second device decodes the second data to obtain the first data, performs transform coefficient combination at the physical layer of the second device, and obtains the first image through the dequantization and inverse DCT transform operations at the MAC layer of the second device.
For example, when a first image obtains first data through DCT transformation, quantization and layering operations at the MAC layer of a first device, a second device receives a physical layer frame, decodes second data to obtain the first data, and combines transform coefficients, dequantizes and inversely transforms the first data at the MAC layer of the second device to obtain the first image.
For example, when a first image obtains first data through DCT transform, quantization and layering operations at an application layer or an LLC layer of a first device, a second device receives a physical layer frame, decodes second data to obtain the first data, and obtains the first image through transform coefficient combining, dequantization and DCT inverse transform on the first data at the application layer or the LLC layer of the second device.
For example, when a first image is DCT-transformed and quantized in an application layer or LLC layer of a first device, and first data is obtained by a layering operation in a MAC or physical layer of the first device, a second device decodes second data to obtain the first data, combines the first data by transform coefficients in the MAC layer or physical layer of the second device, and obtains the first image by a dequantization and DCT-inverse-transform operation in the application layer or LLC layer of the second device.
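The dequantization and inverse DCT steps that appear in all of these receiver-side variants can be sketched as follows in Python (SciPy). Transform coefficient combination is omitted, the uniform dequantization rule is an assumed convention, and the sketch pairs with the forward sketch given earlier rather than with the patented decoder.

import numpy as np
from scipy.fft import dctn, idctn

def dequantize_and_idct(quantized_block, q_step=16):
    """quantized_block holds non-negative quantized DCT magnitudes for one block."""
    coeffs = quantized_block.astype(float) * q_step      # uniform dequantization
    return idctn(coeffs, norm='ortho')                   # inverse DCT back to the pixel domain

block = np.round(np.abs(dctn(np.full((8, 8), 128.0), norm='ortho')) / 16)   # forward path for a flat block
recon = dequantize_and_idct(block)
print(recon.shape, round(float(recon[0, 0])))            # (8, 8) 128 -> the flat block is recovered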
The data transmission device according to the embodiment of the present application will be described with reference to fig. 13 to 16.
Fig. 13 is a schematic structural diagram of a data transmission device according to an embodiment of the present application. As shown in fig. 13, the data transmission apparatus 500 includes an obtaining module 510 and a sending module 520. The data transmission apparatus 500 may be applied to the first device in the above method.
The obtaining module 510 is configured to obtain first data, where the first data is data of a first image, and the first data includes a plurality of sub-data, and each sub-data of the first data includes data of one or more bit planes of the first image;
the sending module 520 is configured to send a physical layer frame to a second device, where the physical layer frame includes a joint source channel coding JSCC information indicator and second data, the second data includes multiple sub-data, the second data is obtained by JSCC coding of first data, and the sub-data of the second data is obtained by JSCC coding of sub-data of the first data; the second data comprises N loads, each load in the N loads comprises subdata of the second data, and N is an integer greater than or equal to 1; the JSC information indication symbol comprises a first field, and information carried by the first field is used for indicating the length of data of each bit plane.
In one possible approach, the obtaining module 510 may be implemented by a processor and the sending module 520 may be implemented by a sender. For specific functions and advantages of the obtaining module 510 and the sending module 520, reference may be made to the description in the foregoing method, and for brevity, no further description is provided here.
Fig. 14 is a schematic structural diagram of a data transmission device according to another embodiment of the present application. As shown in fig. 14, the data transmission apparatus 600 includes a receiving module 610, a decoding module 620, and an obtaining module 630. The data transmission apparatus 600 may be applied to the second device in the above method.
The receiving module 610 is configured to receive a physical layer frame sent by a first device, where the physical layer frame includes a joint source channel coded JSCC information indicator and second data, the second data includes a plurality of sub-data, the second data includes N loads, and each of the N loads includes one sub-data of the second data; the JSCC information indication symbol comprises a first field, and information carried by the first field is used for indicating the length of data of each bit plane.
The decoding module 620 is configured to decode the second data to obtain first data, where the first data is data of a first image, the first data includes a plurality of sub-data, the sub-data of the first data is obtained by decoding the sub-data of the second data, and each sub-data of the first data includes data of one or more bit planes of the first image.
The obtaining module 630 is configured to obtain the first image according to the first data.
In one possible approach, the decoding module 620 and the obtaining module 630 may be implemented by a processor, and the receiving module 610 may be implemented by a receiver. For the specific functions and advantages of the receiving module 610, the decoding module 620, and the obtaining module 630, reference may be made to the description in the foregoing method, and for brevity, no further description is provided here.
Fig. 15 is a schematic diagram of another data transmission apparatus 700 according to an embodiment of the present application. The data transmission apparatus 700 may be applied to the first device in the above method.
As shown in fig. 15, the data transmission apparatus 700 includes a processor 710, a memory 720, and a transceiver 730. The processor 710 may be used to control the first device, execute software programs, process data of software programs, and the like. The memory 720 is used primarily for storing software programs and data. The transceiver 730 transmits the physical layer frame to the second device.
For ease of illustration, only one memory and processor are shown in FIG. 15. In an actual communication device product, there may be one or more processors and one or more memories. The memory may also be referred to as a storage medium or a storage device, etc. The memory may be provided independently of the processor, or may be integrated with the processor, which is not limited in this embodiment.
A transceiver may also be referred to as a transceiver unit, transceiver, transceiving means, etc. A processing unit may also be referred to as a processor, a processing board, a processing module, a processing device, or the like. A device for implementing a receiving function in the transceiver 730 may be regarded as a receiving unit, and a device for implementing a transmitting function in the transceiver 730 may be regarded as a transmitting unit, that is, the transceiver 730 includes a receiving unit and a transmitting unit. A receiving unit may also be referred to as a receiver, a receiving circuit, or the like. A transmitting unit may also sometimes be referred to as a transmitter, or a transmitting circuit, etc.
The processor 710, the memory 720 and the transceiver 730 communicate with each other via internal connection paths to transfer control and/or data signals.
The method disclosed in the embodiments of the present application may be applied to the processor 710, or implemented by the processor 710. Processor 710 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 710.
Fig. 16 is a schematic diagram of another data transmission apparatus 800 according to an embodiment of the present application. The data transmission apparatus 800 may be applied to the second device in the above method.
As shown in fig. 16, the data transmission apparatus 800 includes a processor 810, a memory 820, and a transceiver 830. The processor 810 may be used to control the second device, execute software programs, process data of the software programs, and the like. The memory 820 is used primarily for storing software programs and data. The transceiver 830 is configured to receive the physical layer frame transmitted by the first device.
For ease of illustration, only one memory and processor are shown in FIG. 16. In an actual communication device product, there may be one or more processors and one or more memories. The memory may also be referred to as a storage medium or a storage device, etc. The memory may be provided independently of the processor, or may be integrated with the processor, which is not limited in this embodiment.
A transceiver may also be referred to as a transceiver unit, transceiver, transceiving means, etc. A processing unit may also be referred to as a processor, a processing board, a processing module, a processing device, or the like. A device for implementing a receiving function in the transceiver 830 may be regarded as a receiving unit, and a device for implementing a transmitting function in the transceiver 830 may be regarded as a transmitting unit, that is, the transceiver 830 includes a receiving unit and a transmitting unit. A receiving unit may also be referred to as a receiver, a receiving circuit, or the like. A transmitting unit may also sometimes be referred to as a transmitter, or a transmitting circuit, etc.
The processor 810, the memory 820 and the transceiver 830 communicate with each other via internal connection paths to transfer control and/or data signals.
The method disclosed in the embodiments of the present application may be applied to the processor 810 or implemented by the processor 810. Processor 810 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 810.
Embodiments of the present application also provide a computer-readable medium storing program code, which when run on a computer, causes the computer to perform the method in the above aspects. These computer-readable memories include, but are not limited to, one or more of the following: read-only memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Flash memory, Electrically EPROM (EEPROM), and hard drive (hard drive).
An embodiment of the present application further provides a computer program product, which is applied to a first device, and the computer program product includes: computer program code which, when run by a computer, causes the computer to perform the method of any possible implementation of any of the above aspects.
An embodiment of the present application further provides a computer program product, which is applied to a second device, and the computer program product includes: computer program code which, when run by a computer, causes the computer to perform the method of any possible implementation of any of the above aspects.
An embodiment of the present application further provides a chip system, which is applied to the first device, and the chip system includes: the chip system comprises at least one processor, at least one memory and an interface circuit, wherein the interface circuit is responsible for information interaction between the chip system and the outside, the at least one memory, the interface circuit and the at least one processor are interconnected through lines, and instructions are stored in the at least one memory; the instructions are executable by the at least one processor to perform the first device operations of the method of the above aspects.
An embodiment of the present application further provides a chip system, which is applied to a second device, and the chip system includes: the chip system comprises at least one processor, at least one memory and an interface circuit, wherein the interface circuit is responsible for information interaction between the chip system and the outside, the at least one memory, the interface circuit and the at least one processor are interconnected through lines, and instructions are stored in the at least one memory; the instructions are executable by the at least one processor to perform the second device operations of the method of the above aspects.
In a specific implementation process, the chip may be implemented in the form of a Central Processing Unit (CPU), a Micro Controller Unit (MCU), a Micro Processing Unit (MPU), a Digital Signal Processor (DSP), a system on chip (SoC), an application-specific integrated circuit (ASIC), a Field Programmable Gate Array (FPGA), or a Programmable Logic Device (PLD).
In the present application, "at least two" or "plural" means two or more. The term "and/or" describes an associative relationship of associated objects, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Predefinition in this application may be understood as defining, predefining, storing, pre-negotiating, pre-configuring, curing, or pre-firing.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
As used in this specification, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between 2 or more computers. In addition, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from two components interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (38)

1. A method of data transmission, comprising:
a first device acquires first data of a first image, wherein the first data comprises a plurality of subdata, and each subdata of the first data comprises data of one or more bit planes of the first image;
the first device sends a physical layer frame to a second device, wherein the physical layer frame comprises a joint source channel coding JSCC information indication symbol and second data, and the subdata of the second data is obtained by the subdata of the first data through JSCC coding;
wherein the physical layer frame includes N loads, each of the N loads includes one subdata of the second data, and N is a positive integer;
the JSCC information indication symbol comprises a first field, and the information carried by the first field is used for indicating the length of the data of each bit plane.
2. The method of claim 1, wherein the JSCC information indicator is carried in a JSCC signaling field of the physical layer frame when an access scheme is Time Division Multiple Access (TDMA).
3. The method of claim 2, wherein the JSCC signaling field comprises one common JSCC signaling field and N dedicated JSCC signaling fields,
wherein the common JSCC signaling field comprises the first field;
and the information carried by the ith dedicated JSCC signaling field in the N dedicated JSCC signaling fields is used for indicating the bit plane number, the sequence number and the modulation mode of the ith load in the N loads.
4. The method of claim 1, wherein when the access scheme is orthogonal frequency division multiple access, OFDMA, the JSCC information indicator symbol is carried in a JSCC signaling field in a high efficiency signaling field B of the physical layer frame, or the JSCC information indicator symbol is carried in a JSCC signaling field in a very high throughput signaling field;
the high efficiency signaling field B or the very high throughput signaling field further includes a resource unit, RU, allocation field, which carries information for indicating data of a bit plane corresponding to each RU block.
5. The method according to any of claims 1-4, wherein the first data is data of the first image obtained by discrete cosine transform, DCT, quantization and layering operations at a physical layer of the first device.
6. The method according to any of claims 1-4, wherein the first data is obtained by the first image through DCT, quantization operation at a Media Access Control (MAC) layer of the first device, and layering operation at a physical layer of the first device.
7. The method according to any of claims 1-4, wherein the first data is obtained by the first image through DCT, quantization and layering operations at a Media Access Control, MAC, layer of the first device.
8. Method according to any of claims 1-4, wherein said first data is obtained by DCT, quantization and layering operations of said first image at an application layer or Logical Link Control, LLC, layer of said first device.
9. Method according to any of claims 1-4, wherein said first data is obtained by said first image through DCT, quantization operations at the application layer or LLC layer of said first device, and through layering operations at the MAC layer or physical layer of said first device.
10. A method of data transmission, comprising:
the method comprises the steps that a second device receives a physical layer frame sent by a first device, wherein the physical layer frame comprises a joint source channel coding JSCC information indication symbol and second data, the physical layer frame comprises N loads, each load in the N loads comprises subdata of the second data, and N is a positive integer;
the JSCC information indication symbol comprises a first field, and information carried by the first field is used for indicating the length of data of each bit plane;
the second device decodes the second data to obtain first data of a first image, wherein the first data comprises a plurality of subdata, and each subdata of the first data comprises data of one or more bit planes of the first image;
and the second equipment acquires the first image according to the first data.
11. The method of claim 10, wherein the JSCC information indicator is carried in a JSCC signaling field of the physical layer frame when the access mode is Time Division Multiple Access (TDMA).
12. The method of claim 11, wherein the JSCC signaling field comprises one common JSCC signaling field and N dedicated JSCC signaling fields,
wherein the common JSCC signaling field comprises the first field;
and the information carried by the ith dedicated JSCC signaling field in the N dedicated JSCC signaling fields is used for indicating the bit plane number, the sequence number and the modulation mode of the ith load in the N loads.
13. The method of claim 10, wherein when the access scheme is OFDMA, the JSCC information indicator symbol is carried in a JSCC signaling field in a high efficiency signaling field B of the physical layer frame, or the JSCC information indicator symbol is carried in a JSCC signaling field in a very high throughput signaling field;
the high efficiency signaling field B or the very high throughput signaling field further includes a resource unit, RU, allocation field, which carries information for indicating data of a bit plane corresponding to each RU block.
14. The method of any of claims 10-13, wherein the second device acquiring the first image from the first data comprises:
and the first data is subjected to transform coefficient combination, dequantization and Discrete Cosine Transform (DCT) inverse transformation operations on a physical layer of the second device to obtain the first image.
15. The method of any of claims 10-13, wherein the second device acquiring the first image from the first data comprises:
the first data is subjected to transform coefficient combination operation at a physical layer of the second device, and the first image is obtained through dequantization and inverse DCT transform operation at a Media Access Control (MAC) layer of the second device.
16. The method of any of claims 10-13, wherein the second device acquiring the first image from the first data comprises:
and the first data is subjected to transform coefficient combination, dequantization and DCT inverse transformation operations at the MAC layer of the second device to obtain the first image.
17. The method of any of claims 10-13, wherein the second device acquiring the first image from the first data comprises:
and the first data is subjected to transform coefficient combination, dequantization and DCT inverse transformation operations at an application layer or a Logical Link Control (LLC) layer of the second device to obtain the first image.
18. The method of any of claims 10-13, wherein the second device acquiring the first image from the first data comprises:
the first data is combined by transform coefficients at the MAC layer or physical layer of the second device, and the first image is obtained by dequantization and inverse DCT transform operations at the application layer or LLC layer of the second device.
19. A data transmission apparatus, applied to a first device, comprising:
an obtaining module, configured to obtain first data of a first image, where the first data includes a plurality of sub-data, and each sub-data of the first data includes data of one or more bit planes of the first image;
a sending module, configured to send a physical layer frame to a second device, where the physical layer frame includes a joint source channel coding JSCC information indicator and second data, and sub-data of the second data is obtained by JSCC coding sub-data of the first data;
wherein the physical layer frame includes N loads, each of the N loads includes one subdata of the second data, and N is a positive integer;
the JSCC information indication symbol comprises a first field, and the information carried by the first field is used for indicating the length of the data of each bit plane.
20. The apparatus of claim 19, wherein the JSCC information indicator symbol is carried in a JSCC signaling field of the physical layer frame when the access mode is Time Division Multiple Access (TDMA).
21. The apparatus of claim 20, wherein the JSCC signaling field comprises one common JSCC signaling field and N dedicated JSCC signaling fields,
wherein the common JSCC signaling field comprises the first field;
and the information carried by the ith dedicated JSCC signaling field in the N dedicated JSCC signaling fields is used for indicating the bit plane number, the sequence number and the modulation mode of the ith load in the N loads.
22. The apparatus of claim 19, wherein when the access scheme is OFDMA, the JSCC information indicator symbol is carried in a JSCC signaling field in a high efficiency signaling field B of the physical layer frame, or the JSCC information indicator symbol is carried in a JSCC signaling field in a very high throughput signaling field;
the high efficiency signaling field B or the very high throughput signaling field further includes a resource unit, RU, allocation field, which carries information for indicating data of a bit plane corresponding to each RU block.
23. The apparatus according to any one of claims 19 to 22, wherein the obtaining module is specifically configured to: and the first image obtains the first data through Discrete Cosine Transform (DCT), quantization and layering operation on a physical layer of the first device.
24. The apparatus according to any one of claims 19 to 22, wherein the obtaining module is specifically configured to: and the first image acquires the first data through DCT (discrete cosine transformation), quantization and layering operations on a Media Access Control (MAC) layer of the first device.
25. The apparatus according to any one of claims 19 to 22, wherein the obtaining module is specifically configured to: the first image is subjected to DCT and quantization operations at a MAC layer of the first device, and the first data is acquired through layering operations at a physical layer of the first device.
26. The apparatus according to any one of claims 19 to 22, wherein the obtaining module is specifically configured to: and the first image obtains the first data through DCT transformation, quantization and layering operation at an application layer or a Logical Link Control (LLC) layer of the first equipment.
27. The apparatus according to any one of claims 19 to 22, wherein the obtaining module is specifically configured to: the first image is subjected to DCT (discrete cosine transform) transformation and quantization operation at an application layer or LLC (logical Link control) layer of the first device, and the first data is obtained through layering operation at an MAC (media access control) layer or a physical layer of the first device.
28. A data transmission apparatus, wherein the data transmission apparatus is applied to a second device, and comprises:
a receiving module, configured to receive a physical layer frame sent by a first device, where the physical layer frame includes a joint source channel coding JSCC information indicator and second data, the physical layer frame includes N loads, each of the N loads includes one piece of sub data of the second data, and N is a positive integer;
the JSCC information indication symbol comprises a first field, and information carried by the first field is used for indicating the length of data of each bit plane;
a decoding module, configured to decode the second data to obtain first data of a first image, where the first data includes a plurality of subdata, and each subdata of the first data includes data of one or more bit planes of the first image;
and the acquisition module is used for acquiring the first image according to the first data.
29. The apparatus of claim 28, wherein the JSCC information indicator symbol is carried in a JSCC signaling field of the physical layer frame when an access scheme is Time Division Multiple Access (TDMA).
30. The apparatus of claim 29 wherein the JSCC signaling field comprises a common JSCC signaling field and N dedicated JSCC signaling fields,
wherein the common JSCC signaling field comprises the first field;
and the information carried by the ith dedicated JSCC signaling field in the N dedicated JSCC signaling fields is used for indicating the bit plane number, the sequence number and the modulation mode of the ith load in the N loads.
31. The apparatus of claim 28, wherein when the access scheme is OFDMA, the JSCC information indicator symbol is carried in a JSCC signaling field in a high efficiency signaling field B of the physical layer frame, or the JSCC information indicator symbol is carried in a JSCC signaling field in a very high throughput signaling field;
the high efficiency signaling field B or the very high throughput signaling field further includes a resource unit, RU, allocation field, which carries information for indicating data of a bit plane corresponding to each RU block.
32. The apparatus according to any one of claims 28 to 31, wherein the obtaining module is specifically configured to: and the first data is subjected to transform coefficient combination, dequantization and Discrete Cosine Transform (DCT) inverse transformation operation on a physical layer of the second device to obtain the first image.
33. The apparatus according to any one of claims 28 to 31, wherein the obtaining module is specifically configured to: the first data is subjected to transform coefficient combination operation at a physical layer of the second device, and the first image is acquired through dequantization and inverse DCT transform operation at a Media Access Control (MAC) layer of the second device.
34. The apparatus according to any one of claims 28 to 31, wherein the obtaining module is specifically configured to: and the first data is subjected to transform coefficient combination, dequantization and DCT inverse transformation operations at the MAC layer of the second device to obtain the first image.
35. The apparatus according to any one of claims 28 to 31, wherein the obtaining module is specifically configured to: and the first data is subjected to transform coefficient combination, dequantization and DCT inverse transformation operations at an application layer or a Logical Link Control (LLC) layer of the second equipment to obtain the first image.
36. The apparatus according to any one of claims 28 to 31, wherein the obtaining module is specifically configured to: the first data is combined by transform coefficients in a MAC layer or a physical layer of the second device, and the first image is obtained by dequantization and inverse DCT transform operations in an application layer or an LLC layer of the second device.
37. A computer-readable storage medium, wherein the computer-readable storage medium stores program code for execution by a device, the program code comprising instructions for performing the data transmission method according to any one of claims 1 to 9.
38. A computer-readable storage medium, wherein the computer-readable storage medium stores program code for execution by a device, the program code comprising instructions for performing the data transmission method according to any one of claims 10 to 18.
CN202010431926.1A 2020-05-20 2020-05-20 Data transmission method and device Active CN113709463B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010431926.1A CN113709463B (en) 2020-05-20 2020-05-20 Data transmission method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010431926.1A CN113709463B (en) 2020-05-20 2020-05-20 Data transmission method and device

Publications (2)

Publication Number Publication Date
CN113709463A 2021-11-26
CN113709463B 2023-06-02

Family

ID=78645385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010431926.1A Active CN113709463B (en) 2020-05-20 2020-05-20 Data transmission method and device

Country Status (1)

Country Link
CN (1) CN113709463B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005004336A2 (en) * 2003-06-26 2005-01-13 Sony Electronics, Inc. Adaptive joint source channel coding
CN101494774A (en) * 2008-01-23 2009-07-29 厦门华侨电子股份有限公司 Non-compression high definition video signal wireless transmission method based on wavelet conversion characteristics
CN102957498A (en) * 2011-08-17 2013-03-06 北京泰美世纪科技有限公司 Method and device for transmitting layered service streams and method and device for receiving layered service streams
CN103986932A (en) * 2014-04-25 2014-08-13 广州市香港科大霍英东研究院 Message source channel combined encoding and decoding method for video data stream
CN110380999A (en) * 2018-04-12 2019-10-25 华为技术有限公司 The data transmission method and device of probability non-uniformly modulated

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023174065A1 (en) * 2022-03-14 2023-09-21 华为技术有限公司 Communication method and related device
WO2023185389A1 (en) * 2022-03-29 2023-10-05 华为技术有限公司 Communication method and related apparatus
WO2024007292A1 (en) * 2022-07-08 2024-01-11 华为技术有限公司 Methods for transmitting data and receiving data, and communication apparatus

Also Published As

Publication number Publication date
CN113709463B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
CN113709463B (en) Data transmission method and device
JP6699911B2 (en) Compressed preamble for wireless communication system
US10904408B2 (en) Picture file processing method, device, and computer storage medium
JP6905066B2 (en) Coding and decoding methods and devices
TWI441497B (en) Method and device of arranging packets in wireless communication system
US9763238B2 (en) Signaling uplink frame duration in wireless local-area networks
US9526029B2 (en) Methods and arrangements for packet flows in wireless networks
WO2016173239A1 (en) Resource indication method and apparatus for wlan system
EP3383109B1 (en) Resource indication method and apparatus based on multiple time periods
EP4152706A1 (en) Ppdu transmission method and related apparatus
EP3550761B1 (en) Data transmission method and device, and information transmission method and device
CN111050289B (en) Beacon transmitting and receiving method and device, storage medium and terminal
US11757585B2 (en) Data transmission method and related apparatus
WO2013063958A1 (en) Video processing method and system, and related device
WO2022042081A1 (en) Data fragmentation analysis method and related apparatus
WO2018001357A1 (en) Information transmitting and receiving method and apparatus
WO2023174065A1 (en) Communication method and related device
CN110113139B (en) Downlink service channel sending method, user terminal and base station
WO2023185389A1 (en) Communication method and related apparatus
WO2024051645A1 (en) Decoding method, encoding method, and related device
TW202021398A (en) Joint source channel transmission over mmwave
WO2022142577A1 (en) Ppdu transmission method and related apparatus
WO2023155651A1 (en) Polar code-based coding method and device
WO2023155652A1 (en) Encoding method, decoding method, and related apparatuses
WO2023036050A1 (en) Communication method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant