WO2022095636A1 - Method, apparatus and device for data compression (一种进行数据压缩的方法和装置及设备) - Google Patents


Info

Publication number
WO2022095636A1
WO2022095636A1
Authority
WO
WIPO (PCT)
Prior art keywords
compression
dictionary
data
model
algorithm
Application number
PCT/CN2021/121339
Other languages
English (en)
French (fr)
Inventor
张惠英
全海洋
王可
Original Assignee
大唐移动通信设备有限公司
Application filed by 大唐移动通信设备有限公司
Publication of WO2022095636A1


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 - Network traffic management; Network resource management
    • H04W 28/02 - Traffic management, e.g. flow control or congestion control
    • H04W 28/06 - Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information

Definitions

  • the present invention relates to the field of communication technologies, and in particular, to a method, apparatus and device for data compression.
  • the network can configure the UE (User Equipment) to use the UDC (Uplink Data Compression) function to compress uplink data before transmission, so as to reduce air interface resource overhead.
  • the sending UE uses a preset dictionary, or the content of its compression buffer, as the dictionary to compress the data to be transmitted, thereby further improving the compression rate; correspondingly, the base station side uses the preset dictionary, or the previously received data, as the dictionary to decompress the received data.
  • in one approach, the dictionary is generated from the content of the compression cache.
  • the compression cache can be preset based on configuration, or initialized to all zeros.
  • the compression cache adopts a first-in-first-out strategy: new data replaces the oldest data and serves as the new dictionary.
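The FIFO compression cache described above can be sketched as follows. This is a minimal Python illustration under our own naming; the class, its methods, and the sample sizes are not from the patent:

```python
from collections import deque


class FifoDictionaryCache:
    """Minimal sketch of a first-in-first-out compression cache whose
    contents serve as the dictionary. The cache may start from a
    configured preset (padded with zeros) or be all zeros; once full,
    the oldest bytes are evicted to admit newly transmitted data."""

    def __init__(self, size: int, preset: bytes = b""):
        initial = preset.ljust(size, b"\x00")[:size]
        self._buf = deque(initial, maxlen=size)

    def dictionary(self) -> bytes:
        """The current dictionary is simply the current cache content."""
        return bytes(self._buf)

    def fill(self, packet: bytes) -> None:
        """Append a newly transmitted packet; FIFO eviction replaces the
        oldest data with the new data, yielding the new dictionary."""
        self._buf.extend(packet)


cache = FifoDictionaryCache(8, preset=b"abc")
cache.fill(b"HELLO")
print(cache.dictionary())  # b'\x00\x00\x00HELLO'
```

The `maxlen` argument of `collections.deque` gives the first-in-first-out eviction for free, which keeps the sketch short.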
  • although the above method utilizes the correlation between data, it cannot achieve the best compression effect.
  • the technology in the existing UDC mechanism uses a fixed, configured compression algorithm and does not consider adopting a flexible compression algorithm to improve the compression rate.
  • the present invention provides a method, apparatus and device for data compression, which solve the problems that, in the prior art, the algorithm for generating a compression dictionary cannot achieve the best compression effect and a flexible compression algorithm is not considered for improving the compression rate.
  • the present invention provides a method for data compression, which is applied to a data transmission device, and the method includes:
  • the currently used compression algorithm is used to compress or decompress the transmitted service data.
  • the AI model is used to perform feature extraction on the most recently transmitted service data, to output a compression dictionary according to the correlation between the extracted features and the service data, and to compress the data with different candidate compression algorithms using the current compression dictionary; the compression algorithm corresponding to the highest compression rate is output, and the compression rate obtained by applying the output compression dictionary and compression algorithm to actual service data is used as feedback input to adjust the model parameters of the AI model.
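The selection step described above, trying several candidate algorithms with the current dictionary and keeping the one with the highest compression rate, might look like the following sketch. Python's stdlib codecs serve as stand-in candidates; only zlib actually consumes a preset dictionary here, and the candidate set and the rate definition are our illustrative assumptions, since the patent leaves them open:

```python
import bz2
import lzma
import zlib


def zlib_with_dict(data: bytes, dictionary: bytes) -> bytes:
    # zlib natively supports a preset dictionary via `zdict`.
    c = zlib.compressobj(level=9, zdict=dictionary)
    return c.compress(data) + c.flush()


# Candidate algorithm set (illustrative; the patent leaves it open).
CANDIDATES = {
    "zlib+dict": zlib_with_dict,
    "bz2": lambda data, _d: bz2.compress(data),
    "lzma": lambda data, _d: lzma.compress(data),
}


def select_best_algorithm(sample: bytes, dictionary: bytes):
    """Compress the sample with every candidate using the current
    dictionary; return (name, rate) with the highest compression rate,
    where rate = 1 - compressed_size / original_size."""
    return max(
        ((name, 1 - len(fn(sample, dictionary)) / len(sample))
         for name, fn in CANDIDATES.items()),
        key=lambda item: item[1],
    )


sample = b"sensor=42;sensor=43;sensor=44;" * 50
name, rate = select_best_algorithm(sample, dictionary=b"sensor=")
print(name, round(rate, 3))
```

In the full scheme the winner's name and the dictionary would then be signaled to the peer, and the rate actually achieved on live traffic would be fed back to the model.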
  • the method further includes:
  • the currently used compression dictionary and compression algorithm are initialized according to the compression dictionary and compression algorithm output by the AI model when the service transmission was completed last time.
  • determining that the update condition is met includes at least one of the following steps:
  • the update condition is met when it is determined that the event trigger condition is met.
  • determining that an event trigger condition is met includes at least one of the following steps:
  • when the difference between the compression rate expected from the compression dictionary and compression algorithm currently output by the AI model and the compression rate achieved for the currently completed service data transmission is greater than a preset value, it is determined that the event trigger condition is satisfied.
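As a sketch of the update-condition check: the 0.05 gap is an illustrative preset value, the comparison direction is our reading of the claim, and the connection-established condition is taken from the figure descriptions:

```python
def event_trigger_met(expected_rate: float, achieved_rate: float,
                      preset_gap: float = 0.05) -> bool:
    """Event trigger: the compression rate expected from the dictionary
    and algorithm currently output by the AI model differs from the rate
    achieved by the currently used pair by more than a preset value."""
    return abs(expected_rate - achieved_rate) > preset_gap


def update_condition_met(connection_established: bool,
                         expected_rate: float,
                         achieved_rate: float) -> bool:
    # At least one of the listed conditions needs to hold.
    return connection_established or event_trigger_met(expected_rate,
                                                       achieved_rate)


print(update_condition_met(False, expected_rate=0.62, achieved_rate=0.50))
# gap 0.12 > 0.05, so the update condition is met: True
```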
  • the service transmission process also includes:
  • obtain the compression dictionary and compression algorithm output by the AI model from a third-party device, and send them to the peer data transmission device, where the third-party device is a device at a functional node located in the cloud or at the edge.
  • the method further includes:
  • the present invention provides a method for data compression, which is applied to a third-party device, and the method includes:
  • the compression dictionary and compression algorithm are sent to the data transmission device.
  • the AI model is used to perform feature extraction on the most recently transmitted service data, to output a compression dictionary according to the correlation between the extracted features and the service data, and to compress the data with different candidate compression algorithms using the current compression dictionary; the compression algorithm corresponding to the highest compression rate is output, and the compression rate obtained by applying the output compression dictionary and compression algorithm to actual service data is used as feedback input to adjust the model parameters of the AI model.
  • the compression dictionary and the compression algorithm are sent to the data transmission device, specifically including:
  • the compression dictionary and the compression algorithm are sent to the data transmission device.
  • determining that the update condition is met includes at least one of the following steps:
  • the update condition is met when it is determined that the event trigger condition is met.
  • determining that an event trigger condition is met includes at least one of the following steps:
  • when the difference between the compression rate expected from the compression dictionary and compression algorithm currently output by the AI model and the compression rate achieved for the currently completed service data transmission is greater than a preset value, it is determined that the event trigger condition is satisfied.
  • the present invention provides a data transmission device for data compression, including a memory, a transceiver, and a processor:
  • a memory for storing a computer program
  • a transceiver for sending and receiving data under the control of the processor
  • a processor for reading the computer program in the memory and performing the following operations:
  • the currently used compression algorithm is used to compress or decompress the transmitted service data.
  • the AI model is used to perform feature extraction on the most recently transmitted service data, to output a compression dictionary according to the correlation between the extracted features and the service data, and to compress the data with different candidate compression algorithms using the current compression dictionary; the compression algorithm corresponding to the highest compression rate is output, and the compression rate obtained by applying the output compression dictionary and compression algorithm to actual service data is used as feedback input to adjust the model parameters of the AI model.
  • the processor is also used for:
  • the currently used compression dictionary and compression algorithm are initialized according to the compression dictionary and compression algorithm output by the AI model when the service transmission was completed last time.
  • the processor determines that the update condition is met, including at least one of the following steps:
  • the update condition is met when it is determined that the event trigger condition is met.
  • the processor determines that an event trigger condition is met, including at least one of the following steps:
  • when the difference between the compression rate expected from the compression dictionary and compression algorithm currently output by the AI model and the compression rate achieved for the currently completed service data transmission is greater than a preset value, it is determined that the event trigger condition is satisfied.
  • the processor is further configured to:
  • obtain the compression dictionary and compression algorithm output by the AI model from a third-party device, and send them to the peer data transmission device, where the third-party device is a device at a functional node located in the cloud or at the edge.
  • the processor is also used for:
  • the present invention provides a third-party device for data compression, including a memory, a transceiver, and a processor:
  • a memory for storing a computer program
  • a transceiver for sending and receiving data under the control of the processor
  • a processor for reading the computer program in the memory and performing the following operations:
  • the compression dictionary and compression algorithm are sent to the data transmission device.
  • the AI model is used to perform feature extraction on the most recently transmitted service data, to output a compression dictionary according to the correlation between the extracted features and the service data, and to compress the data with different candidate compression algorithms using the current compression dictionary; the compression algorithm corresponding to the highest compression rate is output, and the compression rate obtained by applying the output compression dictionary and compression algorithm to actual service data is used as feedback input to adjust the model parameters of the AI model.
  • when sending the compression dictionary and the compression algorithm to the data transmission device, the processor is specifically configured to:
  • the compression dictionary and the compression algorithm are sent to the data transmission device.
  • the processor determines that the update condition is met, including at least one of the following steps:
  • the update condition is met when it is determined that the event trigger condition is met.
  • the processor determines that an event trigger condition is met, including at least one of the following steps:
  • when the difference between the compression rate expected from the compression dictionary and compression algorithm currently output by the AI model and the compression rate achieved for the currently completed service data transmission is greater than a preset value, it is determined that the event trigger condition is satisfied.
  • the present invention provides a device for data compression, comprising:
  • the dictionary algorithm determination unit is used to determine the currently used compression dictionary and compression algorithm during service transmission: initially, the initialized compression dictionary and compression algorithm are used, and when it is determined that the update condition is met, the compression dictionary and compression algorithm output by the AI model are used to update the currently used compression dictionary and compression algorithm, respectively;
  • the compression unit is configured to compress or decompress the transmitted service data by using the currently used compression algorithm based on the currently used compression dictionary.
  • the AI model is used to perform feature extraction on the most recently transmitted service data, to output a compression dictionary according to the correlation between the extracted features and the service data, and to compress the data with different candidate compression algorithms using the current compression dictionary; the compression algorithm corresponding to the highest compression rate is output, and the compression rate obtained by applying the output compression dictionary and compression algorithm to actual service data is used as feedback input to adjust the model parameters of the AI model.
  • the dictionary algorithm determining unit is also used for:
  • the currently used compression dictionary and compression algorithm are initialized according to the compression dictionary and compression algorithm output by the AI model when the service transmission was completed last time.
  • the dictionary algorithm determining unit determines that the update condition is met, including at least one of the following steps:
  • the update condition is met when it is determined that the event trigger condition is met.
  • the dictionary algorithm determining unit determines that the event trigger condition is met, including at least one of the following steps:
  • when the difference between the compression rate expected from the compression dictionary and compression algorithm currently output by the AI model and the compression rate achieved for the currently completed service data transmission is greater than a preset value, it is determined that the event trigger condition is satisfied.
  • the dictionary algorithm determining unit is further configured to:
  • obtain the compression dictionary and compression algorithm output by the AI model from a third-party device, and send them to the peer data transmission device, where the third-party device is a device at a functional node located in the cloud or at the edge.
  • the compression unit is also used for:
  • the present invention provides a device for data compression, comprising:
  • a data receiving unit configured to acquire the latest transmitted service data of the current service and the compression ratio of the currently completed service data transmission in response to the request of the data transmission device
  • the dictionary algorithm generation unit is used to input the latest transmitted business data into the AI model, and use the AI model to output the compression dictionary and compression algorithm;
  • a data sending unit configured to send the compression dictionary and the compression algorithm to the data transmission device.
  • the AI model is used to perform feature extraction on the most recently transmitted service data, to output a compression dictionary according to the correlation between the extracted features and the service data, and to compress the data with different candidate compression algorithms using the current compression dictionary; the compression algorithm corresponding to the highest compression rate is output, and the compression rate obtained by applying the output compression dictionary and compression algorithm to actual service data is used as feedback input to adjust the model parameters of the AI model.
  • the compression dictionary and the compression algorithm are sent to the data transmission device.
  • the data sending unit determines that the update condition is met, including at least one of the following steps:
  • the update condition is met when it is determined that the event trigger condition is met.
  • the data sending unit determines that an event trigger condition is met, including at least one of the following steps:
  • when the difference between the compression rate expected from the compression dictionary and compression algorithm currently output by the AI model and the compression rate achieved for the currently completed service data transmission is greater than a preset value, it is determined that the event trigger condition is satisfied.
  • the present invention provides a computer program medium on which a computer program is stored; when the program is executed by a processor, the steps of the data compression method provided in the above first aspect are implemented.
  • the present invention provides a chip, which is coupled to a memory in a device, so that when running, the chip calls the program instructions stored in the memory to implement any data compression method that may be involved in the above aspects of the embodiments of the present application.
  • the present invention provides a computer program product which, when run on an electronic device, enables the electronic device to execute any data compression method that may be involved in the above aspects of the embodiments of the present application.
  • the method, apparatus and device for data compression provided by the present invention have the following beneficial effects:
  • the compression dictionary and compression algorithm output by the AI model are used to update the currently used compression dictionary and compression algorithm respectively, which optimizes the compression dictionary and adopts a flexible compression algorithm; the data sender and receiver update the compression dictionary and compression algorithm synchronously and use the updated pair for data compression and decompression, thereby improving the compression rate.
  • FIG. 1 is a schematic diagram of a system for data compression provided by an embodiment of the present invention
  • FIG. 2 is a schematic diagram of data compression performed by a base station obtaining a compression dictionary and a compression algorithm output by a local AI model according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of obtaining a compression dictionary and a compression algorithm output by an AI model from a third-party device for data compression according to an embodiment of the present invention
  • FIG. 4 is a schematic diagram of obtaining a compression dictionary and a compression algorithm output by an AI model from a third-party device for data compression, where the update condition is that a service connection is determined to be established, according to an embodiment of the present invention;
  • FIG. 5 is a flowchart of a method for data compression performed by a data transmission device according to an embodiment of the present invention
  • FIG. 6 is a flowchart of a method for data compression performed by a third-party device according to an embodiment of the present invention
  • FIG. 7 is a schematic diagram of a data transmission device for data compression provided by an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a third-party device that performs data compression according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of an apparatus for performing data compression by a data transmission device according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of an apparatus for performing data compression by a third-party device according to an embodiment of the present invention.
  • as described above, the network can configure the UE to use the UDC function to compress uplink data before transmission, so as to reduce air interface resource overhead.
  • the sending UE uses a preset dictionary, or the content of its compression buffer, as the dictionary to compress the data to be transmitted, thereby further improving the compression rate; correspondingly, the base station side uses the preset dictionary, or the previously received data, as the dictionary to decompress the received data.
  • the sender maintains the compression cache
  • the receiver maintains the decompression cache
  • the compression and decompression caches are both first-in, first-out queues
  • the sender compresses the data to be sent as follows: it looks in the current packet for a target field whose length exceeds a preset threshold and which is identical to a field in the compression cache, or to an earlier field in the same packet, and replaces the target field with an offset and a length;
  • the offset is the position offset between the target field and the identical field preceding it;
  • the length is the length of the target field;
  • when the encoding of the offset and length combination is shorter than the target field itself, a compression effect is achieved.
  • the sender sends the compressed data packet to the peer and, at the same time, fills the corresponding original data packet, that is, the uncompressed data, into the compression cache;
  • the receiving end decompresses the received data packet based on the above offsets and lengths and the decompression cache, and then fills the decompressed data packet into the decompression cache.
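The offset/length scheme described above is essentially LZ77-style matching against the cache plus the earlier part of the packet. The following is a deliberately simplified sketch; real UDC is based on DEFLATE, and the function names and token format here are our own:

```python
MIN_MATCH = 4  # illustrative "preset threshold" on target-field length


def compress(packet: bytes, cache: bytes, min_match: int = MIN_MATCH):
    """Greedy tokenizer: at each position, find the longest upcoming field
    that also occurs in the cache or earlier in this packet; fields of at
    least `min_match` bytes become (offset, length) tokens, and everything
    else is emitted as literal bytes."""
    tokens, i, n = [], 0, len(packet)
    while i < n:
        history = cache + packet[:i]          # searchable earlier data
        best_off = best_len = 0
        length = min_match
        while i + length <= n:
            pos = history.rfind(packet[i:i + length])
            if pos < 0:
                break
            best_off, best_len = len(history) - pos, length
            length += 1
        if best_len:
            tokens.append(("match", best_off, best_len))
            i += best_len
        else:
            tokens.append(("lit", packet[i]))
            i += 1
    return tokens


def decompress(tokens, cache: bytes) -> bytes:
    """Reverse the tokenizer using the (synchronized) decompression cache."""
    out = bytearray()
    for tok in tokens:
        if tok[0] == "lit":
            out.append(tok[1])
        else:
            _, off, length = tok
            history = cache + bytes(out)
            start = len(history) - off
            out += history[start:start + length]
    return bytes(out)


cache = b"0123456789abcdef"
packet = b"abcdefXYZabcdef"
tokens = compress(packet, cache)
assert decompress(tokens, cache) == packet
print(tokens)  # repeated fields become short (offset, length) tokens
```

After a real exchange, both ends would append the uncompressed packet to their caches, as the description above specifies, so that later packets can reference it.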
  • the compression mechanism based on a preset dictionary can build preset dictionaries from frequently occurring fields according to service characteristics, and store them in the compression cache of the compressing end and the decompression cache of the decompressing end, respectively, before UDC is started.
  • in this way, the compression and decompression caches are no longer empty but hold a preset dictionary of high-frequency fields, which effectively increases the probability of finding target fields and improves the compression rate.
  • the terminal and the base station need to obtain the preset dictionary to be used separately, that is, to complete the preset dictionary synchronization process.
  • when the compression and decompression caches are configured to be empty, the compressing end has a low probability of finding the target field in the current packet to be sent, and the compression rate is correspondingly low; after UDC has run for a period of time, the compression cache gradually fills, the probability that the compressing end finds the target field in the current packet to be sent increases, and the compression rate rises accordingly.
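The effect described above, a low (possibly even negative) compression rate while the cache is empty and a rising rate as it fills, can be demonstrated with zlib's preset-dictionary support standing in for the UDC compression cache. The packet contents and cache size are arbitrary illustrations:

```python
import zlib


def rate_with_dict(packet: bytes, cache: bytes) -> float:
    """Compression rate of one packet using the cache contents as the
    dictionary (zlib's `zdict` stands in for the UDC compression cache)."""
    c = zlib.compressobj(level=9, zdict=cache)
    compressed = c.compress(packet) + c.flush()
    return 1 - len(compressed) / len(packet)


packets = [b"temp=21;hum=40;press=1013;seq=%03d;" % i for i in range(4)]
cache = b""                       # cache configured empty at start
rates = []
for p in packets:
    rates.append(rate_with_dict(p, cache))
    cache = (cache + p)[-8192:]   # fill the cache FIFO-style with sent data
print([round(r, 2) for r in rates])  # the rate rises once the cache fills
```

The first packet may even expand (zlib framing overhead on a short, unseen packet), while later packets compress well because nearly identical data already sits in the cache.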
  • in addition, the existing UDC mechanism uses a fixed, configured compression algorithm and does not consider adopting a flexible compression algorithm to improve the compression rate.
  • the embodiments of the present application provide a data compression method, apparatus, and device, which optimize the compression dictionary and compression algorithm through data learning and training, thereby improving the compression rate.
  • the following provides an implementation manner of a data compression method, apparatus, and device provided by the embodiments of the present invention.
  • an embodiment of the present invention provides a schematic diagram of a system for data compression, including:
  • the first data transmission device 101, serving as the sending end, is used to determine the currently used compression dictionary and compression algorithm during service transmission: initially, the initialized compression dictionary and compression algorithm are used, and when it is determined that the update condition is satisfied, the compression dictionary and compression algorithm output by the AI model are used to update the currently used compression dictionary and compression algorithm, respectively; based on the currently used compression dictionary, the currently used compression algorithm is applied to compress the transmitted service data;
  • when the above first data transmission device 101 serves as the sending end, it performs the compression operation; when it serves as the receiving end, it performs the decompression operation.
  • the second data transmission device 102, serving as the receiving end, is used to determine the currently used compression dictionary and compression algorithm during service transmission: initially, the initialized compression dictionary and compression algorithm are used, and when it is determined that the update condition is satisfied, the compression dictionary and compression algorithm output by the AI model are used to update the currently used compression dictionary and compression algorithm, respectively; based on the currently used compression dictionary, the currently used compression algorithm is applied to decompress the transmitted service data;
  • the AI model is used to perform feature extraction on the most recently transmitted service data, to output a compression dictionary according to the correlation between the extracted features and the service data, and to compress the data with different candidate compression algorithms using the current compression dictionary; the compression algorithm corresponding to the highest compression rate is output, and the compression rate obtained by applying the output compression dictionary and compression algorithm to actual service data is used as feedback input to adjust the model parameters of the AI model.
  • the roles of the first data transmission device 101 as the sender and the second data transmission device 102 as the receiver can be interchanged.
  • after the first data transmission device 101 compresses the service data using the currently used compression dictionary and compression algorithm, it sends the data to the second data transmission device 102, and the second data transmission device 102 decompresses the transmitted service data using the currently used compression dictionary and compression algorithm; in this case, the first data transmission device 101 is the sending end and the second data transmission device 102 is the receiving end.
  • alternatively, the second data transmission device 102 compresses the service data using the currently used compression dictionary and compression algorithm and sends it to the first data transmission device 101, which decompresses the transmitted service data using the currently used compression dictionary and compression algorithm; in this case, the first data transmission device 101 is the receiving end and the second data transmission device 102 is the sending end.
  • the service transmission process also includes:
  • the above-mentioned first data transmission device 101 obtains the compression dictionary and compression algorithm output by the local AI model, and sends them to the above-mentioned second data transmission device 102;
  • the above-mentioned second data transmission device 102 obtains the compression dictionary and compression algorithm output by the local AI model, and sends them to the above-mentioned first data transmission device 101;
  • the above-mentioned first data transmission device 101 and/or the above-mentioned second data transmission device 102 obtain the compression dictionary and compression algorithm output by the AI model from a third-party device, where the third-party device is a device at a functional node located in the cloud or at the edge.
  • when the above third-party device sends the output compression dictionary and compression algorithm to either the first data transmission device 101 or the second data transmission device 102, the data transmission device that receives the compression dictionary and compression algorithm forwards them to the peer data transmission device.
  • sending a compression dictionary and a compression algorithm by the first data transmission device 101/the above-mentioned second data transmission device 102/third-party device includes any of the following steps:
  • the above-mentioned sender and receiver directly transmit the new compression dictionary and/or compression algorithm to each other;
  • the third-party device uses the AI model to output the compression dictionary and compression algorithm, for example, the above-mentioned third-party device is a cloud or edge functional node, and the above-mentioned sender and receiver obtain the compression dictionary and/or compression algorithm from the above-mentioned functional node.
  • the above-mentioned system also includes:
  • the third-party device 103 is configured to, in response to a request from a data transmission device, obtain the most recently transmitted service data of the current service and the compression rate of the currently completed service data transmission; input the newly transmitted service data into the AI model and use the AI model to output a compression dictionary and a compression algorithm; and send the compression dictionary and compression algorithm to the data transmission device.
  • the AI model is used to perform feature extraction on the most recently transmitted service data, to output a compression dictionary according to the correlation between the extracted features and the service data, and to compress the data with different candidate compression algorithms using the current compression dictionary; the compression algorithm corresponding to the highest compression rate is output, and the compression rate obtained by applying the output compression dictionary and compression algorithm to actual service data is used as feedback input to adjust the model parameters of the AI model.
  • the data transmission device sends a request message for requesting a compression dictionary and a compression algorithm to the above-mentioned third-party device.
  • the first data transmission device 101/the above-mentioned second data transmission device 102 are further configured to:
  • alternatively, the third-party device actively perceives the most recently transmitted service data and the compression rate of the currently completed service data transmission.
  • the third-party device sends the compression dictionary and the compression algorithm to the data transmission device, specifically including:
  • the compression dictionary and the compression algorithm are sent to the data transmission device.
  • the above-mentioned first data transmission device 101 is a user terminal UE
  • the above-mentioned second data transmission device 102 is a base station
  • the above-mentioned third-party device 103 is a cloud or edge functional node on which the above AI model is deployed, for example an AI compression server.
  • the user terminal UE involved in the embodiments of the present application may be a device that provides voice and/or data connectivity to the user, a handheld device with a wireless connection function, or other processing device connected to a wireless modem.
  • the name of the terminal equipment may be different.
  • the terminal equipment may be called user equipment (UE).
  • wireless terminal equipment can communicate with one or more core networks via a Radio Access Network (RAN); it can be mobile terminal equipment such as a mobile phone (or "cellular" phone), or a computer with a mobile terminal, which may be a portable, pocket-sized, hand-held, computer-built-in or vehicle-mounted mobile device that exchanges voice and/or data with the radio access network.
• Wireless terminal equipment may also be referred to as a system, subscriber unit, subscriber station, mobile station, mobile, remote station, access point, remote terminal device (remote terminal), access terminal device (access terminal), user terminal device (user terminal), user agent (user agent), or user device (user device), which are not limited in the embodiments of the present application.
• the base station involved in the embodiments of this application may also be referred to as an access point, or by other names, depending on the specific application scenario, and may refer to a device in an access network that communicates with wireless terminal devices through one or more sectors on the air interface.
• the network device can be used to convert received air frames to and from Internet Protocol (IP) packets, and act as a router between the wireless terminal device and the rest of the access network, which may include an IP communication network.
  • the network devices may also coordinate attribute management for the air interface.
• the network device involved in the embodiments of the present application may be a base transceiver station (Base Transceiver Station, BTS) in a Global System for Mobile Communications (GSM) or Code Division Multiple Access (CDMA) network, a NodeB in Wideband Code Division Multiple Access (WCDMA), an evolved NodeB (evolutional NodeB, eNB or e-NodeB) in a Long Term Evolution (LTE) system, or a 5G base station in the 5G network architecture (Next generation System), and may also be a home evolved NodeB (HeNB), a relay node (Relay Node), a home base station (femto), a pico base station (pico), etc., which are not limited in the embodiments of the present application.
  • the above AI model is integrated into an AI module, and the above AI module can achieve:
• Compression rate feedback: use the AI compression dictionary and compression algorithm to compress the transmitted data, calculate the compression rate, and feed it back to the AI training model;
• the above AI module may be located at either the sending end or the receiving end among the data transmission devices, or may be located in a third-party device, such as a cloud or edge functional node.
• the AI model is used to perform feature extraction on the latest transmitted business data, output a compression dictionary according to the correlation between the extracted features and the business data, use different compression algorithms to compress with the current compression dictionary, output the compression algorithm corresponding to the highest compression rate, and adjust the model parameters of the AI model by taking as feedback input the compression rate obtained by applying the output compression dictionary and compression algorithm to the actual business data.
  • the above actual business data is the business data that has been transmitted currently.
• during transmission of the business data, the AI model is used to obtain the compression dictionary and compression algorithm, the business data is compressed with them, and the resulting compression rate is fed back; the AI model then uses the fed-back compression rate to adjust its model parameters.
  • the training of the AI model includes two stages, including the AI model modeling stage and the AI model update stage.
  • the data transmission equipment adopts the initialized compression dictionary and compression algorithm.
• the training process of the AI model is as follows: use the AI model to perform feature extraction on the newly transmitted business data, and output a compression dictionary according to the correlation between the extracted features and the business data.
• in the AI model update stage, for the data transmission device, when the data transmission device transmits the latest service data: if the update conditions are not met, the previous compression dictionary and compression algorithm are used to compress the data; if the update conditions are met, the AI model takes the new business data as input and generates a new compression dictionary and compression algorithm, and the data transmission device uses the compression dictionary and compression algorithm output by the AI model to compress the business data.
• the update process is: compare the fed-back compression ratio with the expected compression ratio of the model output, form a positive or negative feedback excitation value according to the comparison result, and adjust the model parameters of the AI model accordingly.
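The comparison-and-excitation step above can be sketched minimally as follows. This is an illustrative reduction in Python: the names `feedback_excitation` and `adjust_parameter`, the `learning_rate` value, and the single scalar `weight` standing in for the full set of AI model parameters are all assumptions, not part of the original.

```python
def feedback_excitation(expected_rate: float, actual_rate: float) -> float:
    """Compare the fed-back compression rate with the expected one and
    return a positive (actual beat expectation) or negative excitation."""
    return actual_rate - expected_rate


def adjust_parameter(weight: float, excitation: float,
                     learning_rate: float = 0.1) -> float:
    # Nudge the (single, illustrative) model parameter in the direction
    # given by the sign of the excitation value.
    return weight + learning_rate * excitation
```

In a real feedback neural network the excitation would drive a full parameter update rather than a single scalar nudge.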
• the above AI model can use an existing feedback neural network model, continuously adjusting its parameters through a self-learning mechanism; by analyzing the associated features of the continuously input data, it keeps strengthening its ability to extract dictionary words and adjusts the recognized dictionary in the positive direction of increasing the compression rate.
  • the AI model uses different compression algorithms to calculate the data compression rate.
  • the above-mentioned different compression algorithms are existing algorithms, such as Huffman coding, Rice coding, and run-length coding.
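Selecting the algorithm with the highest compression rate can be sketched as below. Note the candidate set here uses Python standard-library codecs (zlib, bz2, lzma) as stand-ins, since the Huffman, Rice, and run-length coders named above are not specified in implementable detail; the compression rate is taken as 1 − compressed/original, so higher is better.

```python
import bz2
import lzma
import zlib

# Stand-in candidate algorithms (the text names Huffman, Rice, run-length).
CODECS = {
    "zlib": zlib.compress,
    "bz2": bz2.compress,
    "lzma": lzma.compress,
}

def best_algorithm(data: bytes) -> tuple[str, float]:
    """Compress `data` with each candidate and return the name and
    compression rate (1 - compressed_size / original_size) of the best."""
    rates = {name: 1 - len(fn(data)) / len(data) for name, fn in CODECS.items()}
    best = max(rates, key=rates.get)
    return best, rates[best]
```

The AI model's role, beyond this exhaustive comparison, is to also shape the dictionary that each candidate compresses against.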
  • the compression rate feedback is specifically input into the AI model, and the process of adjusting the model parameters can adopt the existing method, which will not be described in detail here.
  • the currently used compression dictionary and compression algorithm are initialized according to the compression dictionary and compression algorithm output by the AI model when the service transmission was completed last time.
  • the currently used compression dictionary is initialized, including:
• according to preset information, the currently used compression dictionary is initialized as a compression dictionary configured based on service characteristics; or
  • the currently used compression dictionary is initialized to be empty.
  • the currently used compression algorithm is initialized, including:
  • the currently used compression algorithm is initialized as the compression algorithm selected according to the service characteristics.
• the above AI model performs feature extraction on each batch of transmitted business data, outputs a compression dictionary according to the correlation between the extracted features and the business data, uses different compression algorithms to compress with the current compression dictionary, and outputs the compression algorithm corresponding to the highest compression rate; that is, the compression dictionary and compression algorithm are adjusted according to the data transmitted by each service. However, only when the update conditions are met are the compression dictionary and compression algorithm output by the AI model used to update the currently used compression dictionary and compression algorithm.
  • determining that the update condition is met includes at least one of the following steps:
  • the update condition is met when it is determined that the event trigger condition is met.
• determining that the update condition is satisfied includes any one or more of the above three conditions, that is, determining that any one of the three conditions, any combination of two of them, or all three conditions is satisfied.
  • Embodiment a When it is determined that a service connection is established, the currently used compression dictionary and compression algorithm are updated.
  • the compression dictionary and compression algorithm are synchronized once for the above-mentioned service, and the compression dictionary and compression algorithm are not updated during the communication process.
• the compression dictionary and compression algorithm, based on the transmitted data and compression rate, are updated for the next communication of the service; that is, the compression dictionary and compression algorithm are updated only once, after the service connection is established, and the compression dictionary and compression algorithm used during one service connection remain unchanged.
  • Embodiment b After it is determined that the service connection is established, when the set update period is reached, the currently used compression dictionary and compression algorithm are updated.
  • the compression dictionary and compression algorithm are synchronized once for the service, and the compression dictionary and compression algorithm are updated periodically during the communication process.
• when the update period is reached, the compression dictionary and compression algorithm output by the current model are used to update the currently used dictionary and algorithm, and the compression dictionary and compression algorithm used within one cycle remain unchanged.
  • Embodiment c When it is determined that the event trigger condition is satisfied, the currently used compression dictionary and compression algorithm are updated.
  • determining that the event triggering condition is met includes at least one of the following steps:
• when the difference between the compression ratio expected by the compression dictionary and compression algorithm output by the current AI model and the compression ratio of the currently completed transmission of service data is greater than a preset value, it is determined that the event trigger condition is satisfied.
• the compression ratio expected by the compression dictionary and compression algorithm output by the current AI model is the compression ratio expected when the latest transmitted service data is compressed using the compression dictionary and compression algorithm output by the current AI model.
  • the above-mentioned event trigger conditions are only an example, and do not form specific limitations on the event trigger conditions.
• for example, another trigger condition is that, according to the compression ratio expected by the compression dictionary and compression algorithm output by the current AI model, when the expected compression ratio is greater than a preset value, it is determined that the event trigger condition is satisfied.
  • the embodiments of the present invention provide three specific implementation manners, and specifically describe the foregoing method for performing data compression.
  • Embodiment 1 The base station obtains the compression dictionary and compression algorithm output by the local AI model.
  • an embodiment of the present invention provides a schematic diagram of a base station obtaining a compression dictionary and a compression algorithm output by a local AI model to perform data compression.
• PDCP: Packet Data Convergence Protocol.
• the base station obtains the compression dictionary and compression algorithm output by the local AI model, and the updated compression dictionary and/or compression algorithm is synchronized directly via the Uu interface.
  • Embodiment 2 and Embodiment 3 both adopt the above-mentioned transmission mode, and will not be repeated here.
  • the data transmission devices are a base station and a user terminal UE.
  • Step 1 A connection is established between the user terminal UE and the base station.
  • the interaction process between the user terminal UE and the core network is not described here, and the interaction between the user terminal UE and the core network is completed before step 2 .
  • Step 2 The base station determines the initialized compression dictionary and compression algorithm to be used.
  • the above-mentioned initialized compression dictionary and compression algorithm can be determined according to preset information; or determined according to the compression dictionary and compression algorithm output by the AI model when the service transmission was completed last time.
  • Step 3 The base station sends the initialized compression dictionary and compression algorithm to the user terminal UE.
• the sending method can be an RRC message or a MAC CE; alternatively, the AI compression dictionary and compression algorithm, or indication information for them, are carried in the PDCP header of the first data packet, or PDCP subPDUs carry the AI compression dictionary and compression algorithm.
• Step 4 The data sending end among the base station and the user terminal UE compresses and transmits the data using the above-mentioned initialized compression dictionary and compression algorithm, the receiving end uses the above-mentioned initialized compression dictionary and compression algorithm to decompress the data, and the compression ratio is counted in this process.
• when the base station/user terminal UE is the data sending end, it uses the above-mentioned initialized compression dictionary and compression algorithm to compress the data, and sends the compressed data to the user terminal UE/base station; when the base station/user terminal UE is the data receiving end, it receives the compressed data sent by the user terminal UE/base station, and uses the above-mentioned initialized compression dictionary and compression algorithm to decompress the compressed data.
  • the user terminal UE sends the statistical compression rate to the base station.
  • Step 5 The base station adjusts the training model based on the transmitted data and the data compression rate, and generates a new compression dictionary and compression algorithm.
• when the base station is the data sender, the data to be transmitted is input into the AI model; the AI model is used to perform feature extraction on the newly transmitted service data, output a compression dictionary according to the correlation between the extracted features and the service data, use different compression algorithms to compress with the current compression dictionary, and output the compression algorithm corresponding to the highest compression rate; the compression rate obtained by applying the output compression dictionary and compression algorithm to the actual business data is used as feedback input to adjust the model parameters of the AI model.
• when the base station is the data receiving end, it receives the compressed data sent by the user terminal UE, uses the above-mentioned initialized compression dictionary and compression algorithm to decompress the compressed data, and inputs the decompressed data into the AI model; the AI model performs feature extraction on the latest transmitted business data, outputs a compression dictionary according to the correlation between the extracted features and the business data, uses different compression algorithms to compress with the current compression dictionary, outputs the compression algorithm corresponding to the highest compression rate, and uses the compression rate obtained by applying the output compression dictionary and compression algorithm to the actual business data as feedback input to adjust the model parameters of the AI model.
  • Step 6 When it is determined that the update condition is satisfied, the base station sends the updated compression dictionary and compression algorithm to the user terminal UE.
  • the update condition of the above step 6 may be based on a period, or may be based on an event trigger.
  • the sending method used in the above step 6 is the same as that used in the step 3, and will not be repeated here.
• Step 7 The data sender among the base station and the user terminal UE uses the new compression dictionary and compression algorithm to compress and transmit the data, the receiver uses the new compression dictionary and compression algorithm to decompress the data, and the compression rate is counted in this process.
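The shared-dictionary compression and decompression in steps 4 and 7 can be illustrated with zlib's preset-dictionary support; the dictionary bytes used in the test are a hypothetical stand-in for the dictionary the AI model would output.

```python
import zlib

def compress_with_dict(data: bytes, dictionary: bytes) -> bytes:
    # Both ends must hold the same dictionary, which is why steps 3 and 6
    # synchronize it (e.g. over the Uu interface) before it is used.
    c = zlib.compressobj(zdict=dictionary)
    return c.compress(data) + c.flush()

def decompress_with_dict(blob: bytes, dictionary: bytes) -> bytes:
    d = zlib.decompressobj(zdict=dictionary)
    return d.decompress(blob) + d.flush()
```

A dictionary built from recently transmitted data lets back-references replace repeated phrases, which is the effect the AI-produced dictionary is intended to maximize.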
• the above embodiment is also applicable to the case in which the user terminal UE obtains the compression dictionary and compression algorithm output by its local AI model to perform data compression; swapping the operations of the base station and the user terminal UE in the above process achieves this.
  • Embodiment 2 Obtain the compression dictionary and compression algorithm output by the AI model from a third-party device.
  • an embodiment of the present invention provides a schematic diagram of obtaining a compression dictionary and a compression algorithm output by an AI model from a third-party device to perform data compression.
  • the data transmission devices are base stations and user terminals UE
  • the third-party devices are cloud or edge AI compression servers.
  • Step 1 A connection is established between the user terminal UE and the base station.
  • step 2 the interaction process between the user terminal UE and the core network is not described here, and the interaction between the user terminal UE and the core network is completed before step 2 .
  • Step 2a/2b The user terminal UE and/or the base station request a compression dictionary and a compression algorithm from the AI compression server.
• both data transmission devices may request the AI compression server, or only one end may request the AI compression server.
  • Step 3 The AI compression server determines the initialized compression dictionary and compression algorithm to be used.
  • the above-mentioned initialized compression dictionary and compression algorithm can be determined according to preset information; or determined according to the compression dictionary and compression algorithm output by the AI model when the service transmission was completed last time.
  • Step 4a/4b The AI compression server sends the initialized compression dictionary and compression algorithm to the base station and the user terminal UE.
  • This step can be that the AI compression server sends the compression dictionary and compression algorithm to both data transmission devices, or one end obtains the compression dictionary and compression algorithm from the AI compression server and sends it to the other end.
• Step 5 The data sending end among the base station and the user terminal UE compresses and transmits the data using the above-mentioned initialized compression dictionary and compression algorithm, the receiving end uses the above-mentioned initialized compression dictionary and compression algorithm to decompress the data, and the compression ratio is counted in this process.
• when the base station/user terminal UE is the data transmitting end, it uses the above-mentioned initialized compression dictionary and compression algorithm to compress the data, and sends the compressed data to the user terminal UE/base station; when the base station/user terminal UE is the data receiving end, it receives the compressed data sent by the user terminal UE/base station, and uses the above-mentioned initialized compression dictionary and compression algorithm to decompress the compressed data.
  • the base station and/or the user terminal UE sends the latest transmitted service data and the compression ratio of the currently completed service data to the AI compression server.
  • Step 6 The AI compression server adjusts the training model based on the latest transmitted service data and the compression ratio of the currently completed service data, and generates a new compression dictionary and compression algorithm.
  • the AI compression server receives the latest transmitted service data sent by the base station and/or the user terminal UE and the compression ratio of the currently completed transmission service data.
• the AI compression server inputs the newly transmitted business data into the AI model, uses the AI model to perform feature extraction on the newly transmitted business data, outputs a compression dictionary according to the correlation between the extracted features and the business data, uses different compression algorithms to compress with the current compression dictionary, and outputs the compression algorithm corresponding to the highest compression rate; the compression rate obtained by applying the output compression dictionary and compression algorithm to the actual business data is used as feedback input to adjust the model parameters of the AI model.
  • Step 7a/7b When it is determined that the update condition is satisfied, the AI compression server sends the updated compression dictionary and compression algorithm to the base station and the user terminal UE.
  • the update conditions of the above steps 7a/7b may be based on a period or based on an event trigger.
  • the above sending method can be that the AI compression server sends the compression dictionary and compression algorithm to both data transmission devices, or one end obtains the compression dictionary and compression algorithm from the AI compression server and sends it to the other end.
• Step 8 The data sender among the base station and the user terminal UE uses the new compression dictionary and compression algorithm to compress and transmit the data, the receiver uses the new compression dictionary and compression algorithm to decompress the data, the compression rate is counted in this process, and the data and compression rate are fed back to the AI compression server.
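The request/feedback interaction with the third-party server in the steps above can be condensed into a toy object. The class name, the `[:32]` truncation standing in for the model's dictionary generation, and the drift threshold are all illustrative assumptions.

```python
class AICompressionServer:
    """Toy stand-in for the cloud/edge AI compression server."""

    def __init__(self, dictionary: bytes, algorithm: str, expected_rate: float):
        self.dictionary = dictionary
        self.algorithm = algorithm
        self.expected_rate = expected_rate

    def request(self) -> tuple[bytes, str]:
        # Steps 2a/2b and 4a/4b: UE and/or base station fetch the current pair.
        return self.dictionary, self.algorithm

    def feedback(self, latest_data: bytes, actual_rate: float,
                 threshold: float = 0.05) -> bool:
        # Steps 5-7: when the measured rate drifts from the expectation,
        # regenerate and publish a new dictionary (placeholder generation).
        if abs(self.expected_rate - actual_rate) > threshold:
            self.dictionary = latest_data[:32]
            self.expected_rate = actual_rate
            return True
        return False
```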
• on the basis of Embodiment 2, a step of releasing the connection for this service is added after the above-mentioned step 5, and after the above-mentioned step 6 a step of re-establishing the connection for this service is added, followed by a step in which the user terminal UE and/or the base station requests a compression dictionary and a compression algorithm from the AI compression server; taking the update condition to be the determination that a service connection is established yields Embodiment 3.
• an embodiment of the present invention provides a schematic diagram of obtaining a compression dictionary and a compression algorithm output by an AI model from a third-party device, where the update condition is determining the establishment of a service connection, to perform data compression.
  • An embodiment of the present invention provides a flowchart of a method for data compression performed by a data transmission device, as shown in FIG. 5 , including:
• Step S501 during the service transmission process, determine the currently used compression dictionary and compression algorithm, wherein the initialized compression dictionary and compression algorithm are used initially, and when it is determined that the update condition is satisfied, the compression dictionary and compression algorithm output by the AI model are used to correspondingly update the currently used compression dictionary and compression algorithm;
  • Step S502 compress or decompress the transmitted service data by using the currently used compression algorithm based on the currently used compression dictionary.
• the AI model is used to perform feature extraction on the latest transmitted business data, output a compression dictionary according to the correlation between the extracted features and the business data, use different compression algorithms to compress with the current compression dictionary, and output the compression algorithm corresponding to the highest compression rate; the compression rate obtained by applying the output compression dictionary and compression algorithm to the actual business data is used as feedback input to adjust the model parameters of the AI model.
  • the method further includes:
  • the currently used compression dictionary and compression algorithm are initialized according to the compression dictionary and compression algorithm output by the AI model when the service transmission was completed last time.
  • determining that the update condition is met includes at least one of the following steps:
  • the update condition is met when it is determined that the event trigger condition is met.
  • determining that an event trigger condition is met includes at least one of the following steps:
• when the difference between the compression ratio expected by the compression dictionary and compression algorithm output by the current AI model and the compression ratio of the currently completed transmission of service data is greater than a preset value, it is determined that the event trigger condition is satisfied.
  • the service transmission process also includes:
• obtain the compression dictionary and compression algorithm output by the AI model from a third-party device, and send them to the peer data transmission device, where the third-party device is a device located at a cloud or edge functional node.
  • the method further includes:
  • An embodiment of the present invention provides a flowchart of a method for data compression performed by a third-party device, as shown in FIG. 6 , including:
  • Step S601 in response to the request of the data transmission device, obtain the latest service data transmitted by the current service and the compression ratio of the currently completed service data transmission;
  • Step S602 input the newly transmitted service data into the AI model, and use the AI model to output a compression dictionary and a compression algorithm;
  • Step S603 sending the compression dictionary and the compression algorithm to the data transmission device.
• the AI model is used to perform feature extraction on the latest transmitted business data, output a compression dictionary according to the correlation between the extracted features and the business data, use different compression algorithms to compress with the current compression dictionary, and output the compression algorithm corresponding to the highest compression rate; the compression rate obtained by applying the output compression dictionary and compression algorithm to the actual business data is used as feedback input to adjust the model parameters of the AI model.
  • sending the compression dictionary and the compression algorithm to the data transmission device specifically includes:
  • the compression dictionary and the compression algorithm are sent to the data transmission device.
  • determining that the update condition is met includes at least one of the following steps:
  • the update condition is met when it is determined that the event trigger condition is met.
  • determining that an event trigger condition is met includes at least one of the following steps:
• when the difference between the compression ratio expected by the compression dictionary and compression algorithm output by the current AI model and the compression ratio of the currently completed transmission of service data is greater than a preset value, it is determined that the event trigger condition is satisfied.
  • the data transmission device for data compression provided by the embodiment of the present invention belongs to the same inventive concept as the data transmission device in the above-mentioned Embodiment 1 of the present invention, and is applied to various implementations of data compression by the data transmission device in the system provided by the above-mentioned embodiment.
  • the method can be applied to the data compression method in this embodiment, and will not be repeated here.
  • the third-party device for data compression provided by the embodiment of the present invention belongs to the same inventive concept as the third-party device in the above-mentioned Embodiment 1 of the present invention, and is applied to various implementations of data compression by the third-party device in the system provided by the above-mentioned embodiment.
  • the method can be applied to the data compression method in this embodiment, and will not be repeated here.
  • An embodiment of the present invention provides a schematic diagram of a data transmission device for data compression, as shown in FIG. 7 , including:
• Memory 701, processor 702, transceiver 703 and bus interface 704.
  • the processor 702 is responsible for managing the bus architecture and general processing, and the memory 701 may store data used by the processor 702 in performing operations.
  • the transceiver 703 is used to receive and transmit data under the control of the processor 702 .
• the bus architecture may include any number of interconnected buses and bridges, specifically linking together one or more processors represented by processor 702 and various memory circuits represented by memory 701.
  • the bus architecture may also link together various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and, therefore, will not be described further herein.
  • the bus interface provides the interface.
  • the processes disclosed in the embodiments of the present invention may be applied to the processor 702 or implemented by the processor 702 .
  • each step of the signal processing flow can be completed by hardware integrated logic circuits in the processor 702 or instructions in the form of software.
• the processor 702 may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which can implement or execute the methods and steps disclosed in the embodiments of the present invention.
  • a general purpose processor may be a microprocessor or any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present invention may be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory 701, and the processor 702 reads the information in the memory 701, and completes the steps of the signal processing flow in combination with its hardware.
  • the processor 702 is configured to read the program in the memory 701 and execute:
• based on the currently used compression dictionary, the currently used compression algorithm is used to compress or decompress the transmitted service data.
• the AI model is used to perform feature extraction on the latest transmitted business data, output a compression dictionary according to the correlation between the extracted features and the business data, use different compression algorithms to compress with the current compression dictionary, and output the compression algorithm corresponding to the highest compression rate; the compression rate obtained by applying the output compression dictionary and compression algorithm to the actual business data is used as feedback input to adjust the model parameters of the AI model.
  • the processor is also used for:
  • the currently used compression dictionary and compression algorithm are initialized according to the compression dictionary and compression algorithm output by the AI model when the service transmission was completed last time.
  • the processor determines that the update condition is met, including at least one of the following steps:
  • the update condition is met when it is determined that a service connection is established;
  • the update condition is met when a set update period is reached after the service connection is established;
  • the update condition is met when it is determined that the event trigger condition is met.
  • the processor determines that an event trigger condition is met, including at least one of the following steps:
  • the event trigger condition is determined to be met when the compression ratio of the service data whose transmission has just completed is below a preset threshold;
  • the event trigger condition is determined to be met when the difference between the compression ratio expected from the compression dictionary and compression algorithm currently output by the AI model and the compression ratio of the service data whose transmission has just completed is greater than a preset value.
  • the processor is further configured to:
  • obtain the compression dictionary and compression algorithm output by the AI model from a third-party device, and send them to the peer data transmission device, where the third-party device is a functional node located in the cloud or at the edge.
  • the processor is further configured to:
  • send the most recently transmitted service data and the compression ratio of the service data whose transmission has completed to the third-party device.
  • An embodiment of the present invention provides a schematic diagram of a third-party device performing data compression, as shown in FIG. 8 , including:
  • a memory 801, a processor 802, a transceiver 803 and a bus interface 804.
  • the processor 802 is responsible for managing the bus architecture and general processing, and the memory 801 may store data used by the processor 802 in performing operations.
  • the transceiver 803 is used to receive and transmit data under the control of the processor 802 .
  • the bus architecture may include any number of interconnected buses and bridges; specifically, one or more processors represented by the processor 802 and various memory circuits represented by the memory 801 are linked together.
  • the bus architecture may also link together various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore will not be described further herein.
  • the bus interface provides the interface.
  • the processes disclosed in the embodiments of the present invention may be applied to the processor 802 or implemented by the processor 802 .
  • each step of the signal processing flow may be completed by hardware integrated logic circuits in the processor 802 or instructions in the form of software.
  • the processor 802 may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods and steps disclosed in the embodiments of the present invention.
  • a general purpose processor may be a microprocessor or any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present invention may be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the software module may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory 801, and the processor 802 reads the information in the memory 801, and completes the steps of the signal processing flow in combination with its hardware.
  • the processor 802 is configured to read the program in the memory 801 and execute:
  • in response to a request from a data transmission device, acquiring the most recently transmitted service data of the current service and the compression ratio of the service data whose transmission has completed; inputting the most recently transmitted service data into the AI model and using the AI model to output a compression dictionary and a compression algorithm; and sending the compression dictionary and the compression algorithm to the data transmission device.
  • the AI model is used to extract features from the most recently transmitted service data, output a compression dictionary based on the correlation between the extracted features and the service data, compress with the current dictionary using different compression algorithms, output the compression algorithm corresponding to the highest compression ratio, and take the compression ratio obtained by applying the output dictionary and algorithm to actual service data as feedback input to adjust the model parameters of the AI model.
  • when sending the compression dictionary and the compression algorithm to the data transmission device, the processor is specifically configured to:
  • send the compression dictionary and the compression algorithm to the data transmission device when it is determined that the update condition is met.
  • the processor determines that the update condition is met, including at least one of the following steps:
  • the update condition is met when it is determined that the data transmission device establishes a service connection;
  • the update condition is met when a set update period is reached after the data transmission device establishes the service connection;
  • the update condition is met when it is determined that the event trigger condition is met.
  • the processor determines that an event trigger condition is met, including at least one of the following steps:
  • the event trigger condition is determined to be met when the compression ratio of the service data whose transmission has just completed is below a preset threshold;
  • the event trigger condition is determined to be met when the difference between the compression ratio expected from the compression dictionary and compression algorithm currently output by the AI model and the compression ratio of the service data whose transmission has just completed is greater than a preset value.
  • the data transmission device for data compression provided by this embodiment of the present invention is based on the same inventive concept as the data transmission device in Embodiment 1 above; the various implementations of data compression by the data transmission device in the system provided by that embodiment also apply to the data transmission device for data compression in this embodiment and are not repeated here.
  • the third-party device for data compression provided by this embodiment of the present invention is based on the same inventive concept as the third-party device in Embodiment 1 above; the various implementations of data compression by the third-party device in the system provided by that embodiment also apply to the third-party device for data compression in this embodiment and are not repeated here.
  • An embodiment of the present invention provides a schematic diagram of an apparatus for data compression by a data transmission device, as shown in FIG. 9 , including:
  • the dictionary algorithm determination unit 901 is configured to determine the compression dictionary and compression algorithm currently in use during service transmission, where an initialized compression dictionary and compression algorithm are used at first and, when the update condition is determined to be met, the compression dictionary and compression algorithm output by the AI model are used to update those currently in use;
  • the compression unit 902 is configured to compress or decompress the transmitted service data by using the currently used compression algorithm based on the currently used compression dictionary.
  • the AI model is used to extract features from the most recently transmitted service data, output a compression dictionary based on the correlation between the extracted features and the service data, compress with the current dictionary using different compression algorithms, output the compression algorithm corresponding to the highest compression ratio, and take the compression ratio obtained by applying the output dictionary and algorithm to actual service data as feedback input to adjust the model parameters of the AI model.
  • the dictionary algorithm determining unit is also used for:
  • initializing the currently used compression dictionary and compression algorithm according to preset information; or, when establishing a service connection, initializing the currently used compression dictionary and compression algorithm according to the compression dictionary and compression algorithm output by the AI model when service transmission was last completed.
  • the dictionary algorithm determining unit determines that the update condition is met, including at least one of the following steps:
  • the update condition is met when it is determined that a service connection is established;
  • the update condition is met when a set update period is reached after the service connection is established;
  • the update condition is met when it is determined that the event trigger condition is met.
  • the dictionary algorithm determining unit determines that the event trigger condition is met, including at least one of the following steps:
  • the event trigger condition is determined to be met when the compression ratio of the service data whose transmission has just completed is below a preset threshold;
  • the event trigger condition is determined to be met when the difference between the compression ratio expected from the compression dictionary and compression algorithm currently output by the AI model and the compression ratio of the service data whose transmission has just completed is greater than a preset value.
  • the dictionary algorithm determining unit is further configured to:
  • obtain the compression dictionary and compression algorithm output by the AI model from a third-party device, and send them to the peer data transmission device, where the third-party device is a functional node located in the cloud or at the edge.
  • the compression unit is also used for:
  • sending the most recently transmitted service data and the compression ratio of the service data whose transmission has completed to the third-party device.
  • An embodiment of the present invention provides an apparatus for data compression by a third-party device, as shown in FIG. 10 , including:
  • a data receiving unit 1001 configured to acquire the latest transmitted service data of the current service and the compression ratio of the currently completed service data transmission in response to a request of the data transmission device;
  • Dictionary algorithm generation unit 1002 used for inputting the latest transmitted business data into the AI model, and using the AI model to output a compression dictionary and a compression algorithm;
  • a data sending unit 1003, configured to send the compression dictionary and the compression algorithm to the data transmission device.
  • the AI model is used to extract features from the most recently transmitted service data, output a compression dictionary based on the correlation between the extracted features and the service data, compress with the current dictionary using different compression algorithms, output the compression algorithm corresponding to the highest compression ratio, and take the compression ratio obtained by applying the output dictionary and algorithm to actual service data as feedback input to adjust the model parameters of the AI model.
  • the data sending unit is specifically used for:
  • sending the compression dictionary and the compression algorithm to the data transmission device when it is determined that the update condition is met.
  • the data sending unit determines that the update condition is met, including at least one of the following steps:
  • the update condition is met when it is determined that the data transmission device establishes a service connection;
  • the update condition is met when a set update period is reached after the data transmission device establishes the service connection;
  • the update condition is met when it is determined that the event trigger condition is met.
  • the data sending unit determines that an event trigger condition is met, including at least one of the following steps:
  • the event trigger condition is determined to be met when the compression ratio of the service data whose transmission has just completed is below a preset threshold;
  • the event trigger condition is determined to be met when the difference between the compression ratio expected from the compression dictionary and compression algorithm currently output by the AI model and the compression ratio of the service data whose transmission has just completed is greater than a preset value.
  • the apparatus for data compression provided by this embodiment of the present invention is based on the same inventive concept as the data transmission device in Embodiment 1 above; the various implementations of data compression by the data transmission device in the system provided by that embodiment also apply to the apparatus for data compression in this embodiment and are not repeated here.
  • the apparatus for data compression provided by this embodiment of the present invention is based on the same inventive concept as the third-party device in Embodiment 1 above; the various implementations of data compression by the third-party device in the system provided by that embodiment also apply to the apparatus for data compression in this embodiment and are not repeated here.
  • the present invention also provides a processor-readable storage medium, where a computer program is stored in the processor-readable storage medium, and the computer program is used to cause the processor to execute the above-mentioned Embodiment 1 applied to a data transmission device The steps of a method of performing data compression.
  • the present invention also provides a processor-readable storage medium, where a computer program is stored in the processor-readable storage medium, and the computer program is used to make the processor execute the above-mentioned Embodiment 1 applied to a third-party device The steps of a method of performing data compression.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the modules is only a logical function division; in actual implementation there may be other ways of dividing them.
  • multiple modules or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection of devices or modules through some interfaces, and may be in electrical, mechanical or other forms.
  • modules described as separate components may or may not be physically separated, and components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules. If the integrated modules are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.).
  • the computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media.
  • the usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state disks (SSDs)), and the like.
  • the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present application discloses a method, apparatus and device for data compression. The method includes: during service transmission, determining the compression dictionary and compression algorithm currently in use, where an initialized compression dictionary and compression algorithm are used at first and, when the update condition is later determined to be met, the compression dictionary and compression algorithm output by an AI model are used to update those currently in use; and, based on the currently used compression dictionary, compressing or decompressing the transmitted service data with the currently used compression algorithm. With the method disclosed in the present application, the compression dictionary and compression algorithm are optimized through learning and training on the data, improving the compression ratio.

Description

A method, apparatus and device for data compression

Cross-reference to related applications

This application claims priority to Chinese patent application No. 202011212245.2, filed with the China National Intellectual Property Administration on November 3, 2020 and entitled "A method, apparatus and device for data compression", the entire contents of which are incorporated herein by reference.

Technical field

The present invention relates to the field of communication technologies, and in particular to a method, apparatus and device for data compression.

Background

In LTE (Long-Term Evolution)/LTE-A (Long-Term Evolution-Advanced) systems, the network can configure a UE (User Equipment) to use the UDC (Uplink Data Compression) function to compress uplink data before transmission, reducing air-interface resource overhead.

When performing uplink data compression, the sending UE compresses the data to be transmitted using a preset dictionary or the contents of its compression buffer as the dictionary, thereby further improving the compression ratio; correspondingly, the base station decompresses the received data using the preset dictionary or previously received data as the dictionary.

In the existing UDC mechanism, the dictionary is generated from the contents of the compression buffer, which may be preloaded with a preset dictionary or initialized to all zeros depending on configuration. When data is transmitted, the compression buffer follows a first-in-first-out policy, replacing old data with new data to form a new dictionary. Although this approach exploits the correlation between data, it does not achieve the best compression effect. In addition, the existing UDC mechanism uses a configured compression algorithm and does not consider adopting a flexible compression algorithm to improve the compression ratio.
Summary

The present invention provides a method, apparatus and device for data compression, to solve the problems in the prior art that the algorithm for generating the compression dictionary cannot achieve the best compression effect and that a flexible compression algorithm for improving the compression ratio is not considered.

In a first aspect, the present invention provides a data compression method applied to a data transmission device, the method including:

during service transmission, determining the compression dictionary and compression algorithm currently in use, where an initialized compression dictionary and compression algorithm are used at first and, when the update condition is later determined to be met, the compression dictionary and compression algorithm output by an AI model are used to update those currently in use;

based on the currently used compression dictionary, compressing or decompressing the transmitted service data with the currently used compression algorithm.

Optionally, the AI model is used to extract features from the most recently transmitted service data, output a compression dictionary based on the correlation between the extracted features and the service data, compress with the current dictionary using different compression algorithms, output the compression algorithm corresponding to the highest compression ratio, and take the compression ratio obtained by applying the output dictionary and algorithm to actual service data as feedback input to adjust the model parameters of the AI model.

Optionally, the method further includes:

initializing the currently used compression dictionary and compression algorithm according to preset information; or

when establishing a service connection, initializing the currently used compression dictionary and compression algorithm according to the compression dictionary and compression algorithm output by the AI model when service transmission was last completed.

Optionally, determining that the update condition is met includes at least one of the following steps:

determining that the update condition is met when a service connection is established;

determining that the update condition is met when a set update period is reached after the service connection is established;

determining that the update condition is met when an event trigger condition is determined to be met.

Optionally, determining that the event trigger condition is met includes at least one of the following steps:

determining that the event trigger condition is met when the compression ratio of the service data whose transmission has just completed is below a preset threshold;

determining that the event trigger condition is met when the difference between the compression ratio expected from the compression dictionary and compression algorithm currently output by the AI model and the compression ratio of the service data whose transmission has just completed is greater than a preset value.

Optionally, during service transmission, the method further includes:

obtaining the compression dictionary and compression algorithm output by a local AI model; or

obtaining the compression dictionary and compression algorithm output by a local AI model and sending them to the peer data transmission device; or

obtaining the compression dictionary and compression algorithm output by an AI model from the peer data transmission device; or

obtaining the compression dictionary and compression algorithm output by an AI model from a third-party device, where the third-party device is a functional node located in the cloud or at the edge; or

obtaining the compression dictionary and compression algorithm output by an AI model from a third-party device and sending them to the peer data transmission device, where the third-party device is a functional node located in the cloud or at the edge.

Optionally, the method further includes:

sending the most recently transmitted service data and the compression ratio of the service data whose transmission has completed to the third-party device.
In a second aspect, the present invention provides a data compression method applied to a third-party device, the method including:

in response to a request from a data transmission device, acquiring the most recently transmitted service data of the current service and the compression ratio of the service data whose transmission has completed;

inputting the most recently transmitted service data into an AI model, and using the AI model to output a compression dictionary and a compression algorithm;

sending the compression dictionary and the compression algorithm to the data transmission device.

Optionally, the AI model is used to extract features from the most recently transmitted service data, output a compression dictionary based on the correlation between the extracted features and the service data, compress with the current dictionary using different compression algorithms, output the compression algorithm corresponding to the highest compression ratio, and take the compression ratio obtained by applying the output dictionary and algorithm to actual service data as feedback input to adjust the model parameters of the AI model.

Optionally, sending the compression dictionary and compression algorithm to the data transmission device specifically includes:

sending the compression dictionary and compression algorithm to the data transmission device when it is determined that the update condition is met.

Optionally, determining that the update condition is met includes at least one of the following steps:

determining that the update condition is met when the data transmission device establishes a service connection;

determining that the update condition is met when a set update period is reached after the data transmission device establishes the service connection;

determining that the update condition is met when an event trigger condition is determined to be met.

Optionally, determining that the event trigger condition is met includes at least one of the following steps:

determining that the event trigger condition is met when the compression ratio of the service data whose transmission has just completed is below a preset threshold;

determining that the event trigger condition is met when the difference between the compression ratio expected from the compression dictionary and compression algorithm currently output by the AI model and the compression ratio of the service data whose transmission has just completed is greater than a preset value.
In a third aspect, the present invention provides a data transmission device for data compression, including a memory, a transceiver and a processor:

the memory is configured to store a computer program; the transceiver is configured to send and receive data under the control of the processor; and the processor is configured to read the computer program in the memory and perform the following operations:

during service transmission, determining the compression dictionary and compression algorithm currently in use, where an initialized compression dictionary and compression algorithm are used at first and, when the update condition is later determined to be met, the compression dictionary and compression algorithm output by an AI model are used to update those currently in use;

based on the currently used compression dictionary, compressing or decompressing the transmitted service data with the currently used compression algorithm.

Optionally, the AI model is used to extract features from the most recently transmitted service data, output a compression dictionary based on the correlation between the extracted features and the service data, compress with the current dictionary using different compression algorithms, output the compression algorithm corresponding to the highest compression ratio, and take the compression ratio obtained by applying the output dictionary and algorithm to actual service data as feedback input to adjust the model parameters of the AI model.

Optionally, the processor is further configured to:

initialize the currently used compression dictionary and compression algorithm according to preset information; or

when establishing a service connection, initialize the currently used compression dictionary and compression algorithm according to the compression dictionary and compression algorithm output by the AI model when service transmission was last completed.

Optionally, the processor determining that the update condition is met includes at least one of the following steps:

determining that the update condition is met when a service connection is established;

determining that the update condition is met when a set update period is reached after the service connection is established;

determining that the update condition is met when an event trigger condition is determined to be met.

Optionally, the processor determining that the event trigger condition is met includes at least one of the following steps:

determining that the event trigger condition is met when the compression ratio of the service data whose transmission has just completed is below a preset threshold;

determining that the event trigger condition is met when the difference between the compression ratio expected from the compression dictionary and compression algorithm currently output by the AI model and the compression ratio of the service data whose transmission has just completed is greater than a preset value.

Optionally, during service transmission, the processor is further configured to:

obtain the compression dictionary and compression algorithm output by a local AI model; or

obtain the compression dictionary and compression algorithm output by a local AI model and send them to the peer data transmission device; or

obtain the compression dictionary and compression algorithm output by an AI model from the peer data transmission device; or

obtain the compression dictionary and compression algorithm output by an AI model from a third-party device, where the third-party device is a functional node located in the cloud or at the edge; or

obtain the compression dictionary and compression algorithm output by an AI model from a third-party device and send them to the peer data transmission device, where the third-party device is a functional node located in the cloud or at the edge.

Optionally, the processor is further configured to:

send the most recently transmitted service data and the compression ratio of the service data whose transmission has completed to the third-party device.
In a fourth aspect, the present invention provides a third-party device for data compression, including a memory, a transceiver and a processor:

the memory is configured to store a computer program; the transceiver is configured to send and receive data under the control of the processor; and the processor is configured to read the computer program in the memory and perform the following operations:

in response to a request from a data transmission device, acquiring the most recently transmitted service data of the current service and the compression ratio of the service data whose transmission has completed;

inputting the most recently transmitted service data into an AI model, and using the AI model to output a compression dictionary and a compression algorithm;

sending the compression dictionary and the compression algorithm to the data transmission device.

Optionally, the AI model is used to extract features from the most recently transmitted service data, output a compression dictionary based on the correlation between the extracted features and the service data, compress with the current dictionary using different compression algorithms, output the compression algorithm corresponding to the highest compression ratio, and take the compression ratio obtained by applying the output dictionary and algorithm to actual service data as feedback input to adjust the model parameters of the AI model.

Optionally, when sending the compression dictionary and compression algorithm to the data transmission device, the processor is specifically configured to:

send the compression dictionary and compression algorithm to the data transmission device when it is determined that the update condition is met.

Optionally, the processor determining that the update condition is met includes at least one of the following steps:

determining that the update condition is met when the data transmission device establishes a service connection;

determining that the update condition is met when a set update period is reached after the data transmission device establishes the service connection;

determining that the update condition is met when an event trigger condition is determined to be met.

Optionally, the processor determining that the event trigger condition is met includes at least one of the following steps:

determining that the event trigger condition is met when the compression ratio of the service data whose transmission has just completed is below a preset threshold;

determining that the event trigger condition is met when the difference between the compression ratio expected from the compression dictionary and compression algorithm currently output by the AI model and the compression ratio of the service data whose transmission has just completed is greater than a preset value.
In a fifth aspect, the present invention provides an apparatus for data compression, including:

a dictionary algorithm determination unit, configured to determine, during service transmission, the compression dictionary and compression algorithm currently in use, where an initialized compression dictionary and compression algorithm are used at first and, when the update condition is later determined to be met, the compression dictionary and compression algorithm output by an AI model are used to update those currently in use;

a compression unit, configured to compress or decompress the transmitted service data with the currently used compression algorithm, based on the currently used compression dictionary.

Optionally, the AI model is used to extract features from the most recently transmitted service data, output a compression dictionary based on the correlation between the extracted features and the service data, compress with the current dictionary using different compression algorithms, output the compression algorithm corresponding to the highest compression ratio, and take the compression ratio obtained by applying the output dictionary and algorithm to actual service data as feedback input to adjust the model parameters of the AI model.

Optionally, the dictionary algorithm determination unit is further configured to:

initialize the currently used compression dictionary and compression algorithm according to preset information; or

when establishing a service connection, initialize the currently used compression dictionary and compression algorithm according to the compression dictionary and compression algorithm output by the AI model when service transmission was last completed.

Optionally, the dictionary algorithm determination unit determining that the update condition is met includes at least one of the following steps:

determining that the update condition is met when a service connection is established;

determining that the update condition is met when a set update period is reached after the service connection is established;

determining that the update condition is met when an event trigger condition is determined to be met.

Optionally, the dictionary algorithm determination unit determining that the event trigger condition is met includes at least one of the following steps:

determining that the event trigger condition is met when the compression ratio of the service data whose transmission has just completed is below a preset threshold;

determining that the event trigger condition is met when the difference between the compression ratio expected from the compression dictionary and compression algorithm currently output by the AI model and the compression ratio of the service data whose transmission has just completed is greater than a preset value.

Optionally, during service transmission, the dictionary algorithm determination unit is further configured to:

obtain the compression dictionary and compression algorithm output by a local AI model; or

obtain the compression dictionary and compression algorithm output by a local AI model and send them to the peer data transmission device; or

obtain the compression dictionary and compression algorithm output by an AI model from the peer data transmission device; or

obtain the compression dictionary and compression algorithm output by an AI model from a third-party device, where the third-party device is a functional node located in the cloud or at the edge; or

obtain the compression dictionary and compression algorithm output by an AI model from a third-party device and send them to the peer data transmission device, where the third-party device is a functional node located in the cloud or at the edge.

Optionally, the compression unit is further configured to:

send the most recently transmitted service data and the compression ratio of the service data whose transmission has completed to the third-party device.
In a sixth aspect, the present invention provides an apparatus for data compression, including:

a data receiving unit, configured to acquire, in response to a request from a data transmission device, the most recently transmitted service data of the current service and the compression ratio of the service data whose transmission has completed;

a dictionary algorithm generation unit, configured to input the most recently transmitted service data into an AI model and use the AI model to output a compression dictionary and a compression algorithm;

a data sending unit, configured to send the compression dictionary and the compression algorithm to the data transmission device.

Optionally, the AI model is used to extract features from the most recently transmitted service data, output a compression dictionary based on the correlation between the extracted features and the service data, compress with the current dictionary using different compression algorithms, output the compression algorithm corresponding to the highest compression ratio, and take the compression ratio obtained by applying the output dictionary and algorithm to actual service data as feedback input to adjust the model parameters of the AI model.

Optionally, when sending the compression dictionary and compression algorithm to the data transmission device, the data sending unit is specifically configured to:

send the compression dictionary and compression algorithm to the data transmission device when it is determined that the update condition is met.

Optionally, the data sending unit determining that the update condition is met includes at least one of the following steps:

determining that the update condition is met when the data transmission device establishes a service connection;

determining that the update condition is met when a set update period is reached after the data transmission device establishes the service connection;

determining that the update condition is met when an event trigger condition is determined to be met.

Optionally, the data sending unit determining that the event trigger condition is met includes at least one of the following steps:

determining that the event trigger condition is met when the compression ratio of the service data whose transmission has just completed is below a preset threshold;

determining that the event trigger condition is met when the difference between the compression ratio expected from the compression dictionary and compression algorithm currently output by the AI model and the compression ratio of the service data whose transmission has just completed is greater than a preset value.

In a seventh aspect, the present invention provides a computer program medium on which a computer program is stored; when executed by a processor, the program implements the steps of the data compression method provided in the first aspect above.

In an eighth aspect, the present invention provides a chip coupled to a memory in a device, such that when running, the chip invokes the program instructions stored in the memory to implement the data compression methods involved in the above aspects of the embodiments of the present application.

In a ninth aspect, the present invention provides a computer program product which, when run on an electronic device, causes the electronic device to perform the data compression methods involved in the above aspects of the embodiments of the present application.

The method, apparatus and device for data compression provided by the present invention have the following beneficial effects:

During data transmission, the compression dictionary and compression algorithm output by the AI model are used to update the compression dictionary and compression algorithm currently in use, optimizing the compression dictionary and adopting a flexible compression algorithm, so that the sender and receiver of the data update the compression dictionary and compression algorithm synchronously and use the updated dictionary and algorithm for data compression and decompression, improving the compression ratio.
Brief description of the drawings

FIG. 1 is a schematic diagram of a system for data compression according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of data compression in which a base station obtains the compression dictionary and compression algorithm output by a local AI model, according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of data compression in which the compression dictionary and compression algorithm output by an AI model are obtained from a third-party device, according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of data compression in which the compression dictionary and compression algorithm output by an AI model are obtained from a third-party device and the update condition is the establishment of a service connection, according to an embodiment of the present invention;

FIG. 5 is a flowchart of a data compression method performed by a data transmission device according to an embodiment of the present invention;

FIG. 6 is a flowchart of a data compression method performed by a third-party device according to an embodiment of the present invention;

FIG. 7 is a schematic diagram of a data transmission device for data compression according to an embodiment of the present invention;

FIG. 8 is a schematic diagram of a third-party device for data compression according to an embodiment of the present invention;

FIG. 9 is a schematic diagram of an apparatus for data compression in a data transmission device according to an embodiment of the present invention;

FIG. 10 is a schematic diagram of an apparatus for data compression in a third-party device according to an embodiment of the present invention.
Detailed description

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.

In LTE (Long-Term Evolution)/LTE-A (Long-Term Evolution-Advanced) systems, the network can configure a UE (User Equipment) to use the UDC (Uplink Data Compression) function to compress uplink data before transmission, reducing air-interface resource overhead.

When performing uplink data compression, the sending UE compresses the data to be transmitted using a preset dictionary or the contents of its compression buffer as the dictionary, thereby further improving the compression ratio; correspondingly, the base station decompresses the received data using the preset dictionary or previously received data as the dictionary.
The UDC compression mechanism and the preset-dictionary-based compression mechanism are described in detail below:

1) UDC compression mechanism

1.1) The sender maintains a compression buffer and the receiver maintains a decompression buffer; both are first-in-first-out queues;

1.2) Before sending data, the sender compresses the data to be sent:

a) In the packet to be sent, look for a target field that meets the following criteria:

its length exceeds a preset threshold;

the target field is identical to some field in the compression buffer, or to a field earlier in the same packet.

b) If such a target field is found, replace it with an offset-and-length pair:

the offset is the positional offset between the target field and the identical earlier field;

the length is the length of the target field;

Because the offset-and-length pair is shorter than the target field itself, compression is achieved. Within one packet there may be multiple fields meeting the above criteria, and all of them can be compressed.

1.3) The sender sends the compressed packet to the peer and at the same time places the corresponding original (uncompressed) packet into the compression buffer;

1.4) The receiver decompresses the received packet based on the offsets and lengths together with its decompression buffer, and then places the decompressed packet into the decompression buffer.
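The offset-and-length substitution described above is essentially an LZ77-style sliding-window scheme. The following is a minimal illustrative sketch, not the standardized UDC wire format: `MIN_MATCH` plays the role of the "preset threshold" on target-field length, and the `dictionary` argument plays the role of the preset dictionary loaded into both buffers.

```python
# Minimal LZ77-style sketch of the offset/length substitution described above.
MIN_MATCH = 4  # stand-in for the "preset threshold" on target-field length

def compress(data: bytes, dictionary: bytes = b"") -> list:
    """Return a token list: literal 1-byte values, or (offset, length) pairs
    pointing back into the dictionary / previously processed data."""
    tokens, i = [], 0
    while i < len(data):
        best_off, best_len = 0, 0
        # the searchable history: preset dictionary plus the part of this
        # packet already processed (the "compression buffer")
        window = dictionary + data[:i]
        for j in range(len(window)):
            l = 0
            while (i + l < len(data) and j + l < len(window)
                   and window[j + l] == data[i + l]):
                l += 1
            if l > best_len:
                best_off, best_len = len(window) - j, l
        if best_len >= MIN_MATCH:
            tokens.append((best_off, best_len))   # replace field by (offset, length)
            i += best_len
        else:
            tokens.append(data[i:i + 1])          # no target field found: literal
            i += 1
    return tokens

def decompress(tokens: list, dictionary: bytes = b"") -> bytes:
    """Rebuild the packet using the decompression buffer (same dictionary)."""
    out = bytearray(dictionary)
    for t in tokens:
        if isinstance(t, tuple):
            off, length = t
            for _ in range(length):
                out.append(out[len(out) - off])
        else:
            out.extend(t)
    return bytes(out[len(dictionary):])
```

Priming both ends with the same `dictionary` mirrors the preset-dictionary synchronization requirement: decompression only succeeds when sender and receiver start from identical buffer contents.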
2) Preset-dictionary-based compression mechanism

As an optimization of UDC, the preset-dictionary-based compression mechanism can, based on service characteristics, compile frequently occurring fields into a preset dictionary that is loaded into the compression and decompression buffers of the compressing and decompressing ends, respectively, before UDC is started.

In this way, when UDC has just started, the compression and decompression buffers are no longer empty but hold a preset dictionary of high-frequency fields, which effectively raises the probability of finding target fields and improves the compression ratio.

Obviously, to implement this mechanism, the terminal and the base station each need to obtain the preset dictionary to be used before UDC starts, i.e., complete the preset-dictionary synchronization procedure.

According to the principle of the preset-dictionary-based compression mechanism above, when UDC has just started, the compression and decompression buffers may be configured to be empty, so the probability that the compressor finds a target field in the current packet is low and the compression ratio is correspondingly low; only after UDC has run for a while and the compression buffer has gradually filled does the probability of finding a target field, and hence the compression ratio, improve.

In the existing UDC mechanism, the dictionary is generated from the contents of the compression buffer, which may be preloaded with a preset dictionary or initialized to all zeros depending on configuration. When data is transmitted, the compression buffer follows a first-in-first-out policy, replacing old data with new data to form a new dictionary. Although this approach exploits the correlation between data, it does not achieve the best compression effect. In addition, the existing UDC mechanism uses a configured compression algorithm and does not consider adopting a flexible compression algorithm to improve the compression ratio.

To address these problems, the embodiments of the present application provide a method, apparatus and device for data compression that optimize the compression dictionary and compression algorithm through learning and training on the data, thereby improving the compression ratio. Implementations of the method, apparatus and device for data compression provided by the embodiments of the present invention are given below.
Embodiment 1

As shown in FIG. 1, an embodiment of the present invention provides a schematic diagram of a system for data compression, including:

a first data transmission device 101 acting as the sender, configured to determine, during service transmission, the compression dictionary and compression algorithm currently in use, where an initialized compression dictionary and compression algorithm are used at first and, when the update condition is later determined to be met, the compression dictionary and compression algorithm output by an AI model are used to update those currently in use; and, based on the currently used compression dictionary, to compress the transmitted service data with the currently used compression algorithm;

It should be noted that when the first data transmission device 101 acts as the sender it performs compression, and when it acts as the receiver it performs decompression.

a second data transmission device 102 acting as the receiver, configured to determine, during service transmission, the compression dictionary and compression algorithm currently in use, where an initialized compression dictionary and compression algorithm are used at first and, when the update condition is later determined to be met, the compression dictionary and compression algorithm output by an AI model are used to update those currently in use; and, based on the currently used compression dictionary, to decompress the transmitted service data with the currently used compression algorithm;

It should be noted that when the second data transmission device 102 acts as the sender it performs compression, and when it acts as the receiver it performs decompression.

It should be noted that the AI model is used to extract features from the most recently transmitted service data, output a compression dictionary based on the correlation between the extracted features and the service data, compress with the current dictionary using different compression algorithms, output the compression algorithm corresponding to the highest compression ratio, and take the compression ratio obtained by applying the output dictionary and algorithm to actual service data as feedback input to adjust the model parameters of the AI model.

It should be noted that during service transmission the roles of the first data transmission device 101 as sender and the second data transmission device 102 as receiver can change. For example, when the first data transmission device 101 compresses the service data with the currently used compression dictionary and algorithm and sends it to the second data transmission device 102, which decompresses the transmitted service data with the currently used dictionary and algorithm, the first data transmission device 101 is the data sender and the second data transmission device 102 the data receiver; conversely, when the second data transmission device 102 compresses the service data and sends it to the first data transmission device 101, which decompresses it, the first data transmission device 101 is the data receiver and the second data transmission device 102 the data sender.
As an optional implementation, the service transmission process further includes:

(1) the first data transmission device 101 obtains the compression dictionary and compression algorithm output by its local AI model and sends them to the second data transmission device 102;

(2) the second data transmission device 102 obtains the compression dictionary and compression algorithm output by its local AI model and sends them to the first data transmission device 101;

(3) the first data transmission device 101 and/or the second data transmission device 102 obtain the compression dictionary and compression algorithm output by an AI model from a third-party device, where the third-party device is a functional node located in the cloud or at the edge.

It should be noted that when the third-party device sends the output compression dictionary and compression algorithm to only one of the first data transmission device 101 and the second data transmission device 102, the data transmission device that received them forwards them to the peer data transmission device.

As an optional implementation, the first data transmission device 101, the second data transmission device 102 or the third-party device sends the compression dictionary and compression algorithm using any one of the following steps:

sending the compression dictionary and compression algorithm via an RRC message;

sending the compression dictionary and compression algorithm via a MAC control element (MAC CE);

sending the compression dictionary and compression algorithm by carrying them in the Packet Data Convergence Protocol (PDCP) header of the first data packet;

sending the compression dictionary and compression algorithm by carrying indication information for them in the PDCP header of the first data packet and carrying the dictionary and algorithm themselves in a PDCP subPDU.

It should be noted that when either the sender or the receiver among the data transmission devices uses its own AI model to output the compression dictionary and compression algorithm, the new dictionary and/or algorithm are transmitted directly between sender and receiver; when a third-party device uses an AI model to output the compression dictionary and compression algorithm, for example when the third-party device is a cloud or edge functional node, the sender and receiver obtain the dictionary and/or algorithm from that functional node.
When implementation (3) above, receiving the compression dictionary and compression algorithm from a third-party device, is adopted, the system further includes:

a third-party device 103, configured to acquire, in response to a request from a data transmission device, the most recently transmitted service data of the current service and the compression ratio of the service data whose transmission has completed; input the most recently transmitted service data into an AI model and use the AI model to output a compression dictionary and a compression algorithm; and send the compression dictionary and compression algorithm to the data transmission device.

It should be noted that the AI model is used to extract features from the most recently transmitted service data, output a compression dictionary based on the correlation between the extracted features and the service data, compress with the current dictionary using different compression algorithms, output the compression algorithm corresponding to the highest compression ratio, and take the compression ratio obtained by applying the output dictionary and algorithm to actual service data as feedback input to adjust the model parameters of the AI model.

As an optional implementation, the data transmission device sends a request message to the third-party device requesting the compression dictionary and compression algorithm.

As an optional implementation, for the implementation of receiving the compression dictionary and compression algorithm from the third-party device, the first data transmission device 101 or the second data transmission device 102 is further configured to:

send the most recently transmitted service data and the compression ratio of the service data whose transmission has completed to the third-party device.

It should be noted that the above operation of sending the most recently transmitted service data and the compression ratio of the completed service data to the third-party device can be performed by either the sender or the receiver among the data transmission devices.

As an optional implementation, the third-party device proactively senses the most recently transmitted service data and the compression ratio of the service data whose transmission has completed.

As an optional implementation, the third-party device sending the compression dictionary and compression algorithm to the data transmission device specifically includes:

sending the compression dictionary and compression algorithm to the data transmission device when it is determined that the update condition is met.

It should be noted that the system architecture above is only an example of a system architecture to which the embodiments of the present invention are applicable; compared with the architecture shown in FIG. 1, other entities may be added or some entities removed.

As an optional implementation, the first data transmission device 101 is a user terminal UE, the second data transmission device 102 is a base station, and the third-party device 103 is a cloud or edge functional node on which the AI model is deployed, for example an AI compression server.

The user terminal UE involved in the embodiments of the present application may be a device that provides voice and/or data connectivity to a user, a handheld device with a wireless connection function, or another processing device connected to a wireless modem. The name of the terminal device may differ between systems; for example, in a 5G system the terminal device may be called user equipment (UE). A wireless terminal device may communicate with one or more core networks via a radio access network (RAN) and may be a mobile terminal device, such as a mobile phone (or "cellular" phone) or a computer with a mobile terminal device, for example a portable, pocket-sized, handheld, computer-built-in or vehicle-mounted mobile apparatus that exchanges voice and/or data with the radio access network, such as a personal communication service (PCS) phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station or a personal digital assistant (PDA). A wireless terminal device may also be called a system, subscriber unit, subscriber station, mobile station, mobile, remote station, access point, remote terminal, access terminal, user terminal, user agent or user device, which is not limited in the embodiments of the present application.

The base station involved in the embodiments of the present application may, depending on the application scenario, also be called an access point, or may refer to a device in the access network that communicates with wireless terminal devices over the air interface through one or more sectors, or go by other names. The network device can convert received over-the-air frames to and from Internet Protocol (IP) packets, acting as a router between the wireless terminal devices and the rest of the access network, which may include an IP communication network; the network device may also coordinate attribute management for the air interface. For example, the network device involved in the embodiments of the present application may be a base transceiver station (BTS) in the Global System for Mobile communications (GSM) or Code Division Multiple Access (CDMA), a NodeB in Wideband Code Division Multiple Access (WCDMA), an evolved NodeB (eNB or e-NodeB) in a Long Term Evolution (LTE) system, a 5G base station in a 5G network architecture (next-generation system), a home evolved NodeB (HeNB), a relay node, a femto base station or a pico base station, which is not limited in the embodiments of the present application.
As an optional implementation, the AI model is integrated in an AI module, which can implement:

(1) AI-model generation of the compression dictionary and compression algorithm;

(2) compression-ratio feedback: compressing the transmitted data with the AI compression dictionary and algorithm, computing the compression ratio and feeding it back to the AI training model;

(3) performing AI learning on new transmitted data with the AI training model and generating a new compression dictionary and compression algorithm.

It should be noted that the AI module may be located at either the sender or the receiver among the data transmission devices, or at a third-party device, for example a cloud or edge functional node.

As an optional implementation, in the three implementations above, the AI model is used to extract features from the most recently transmitted service data, output a compression dictionary based on the correlation between the extracted features and the service data, compress with the current dictionary using different compression algorithms, output the compression algorithm corresponding to the highest compression ratio, and take the compression ratio obtained by applying the output dictionary and algorithm to actual service data as feedback input to adjust the model parameters of the AI model. The actual service data is the service data whose transmission has just completed: when it was transmitted, the AI model produced a compression dictionary and algorithm, the service data was compressed with them and the resulting compression ratio was fed back, and the AI model uses the fed-back compression ratio to adjust its model parameters.

It should be noted that training of the AI model comprises two phases: an AI model building phase and an AI model update phase.

In the AI model building phase, the data transmission device uses the initialized compression dictionary and algorithm. The training process of the AI model is: use the AI model to extract features from the most recently transmitted service data, output a compression dictionary based on the correlation between the extracted features and the service data, compress with the output dictionary using different compression algorithms, output the algorithm corresponding to the highest compression ratio, compress the training data with the output dictionary and algorithm to obtain a compression ratio, and take that ratio as feedback input to adjust the model parameters of the AI model.

In the AI model update phase, when the data transmission device transmits the latest service data, if the update condition is not met it compresses the data with the previous compression dictionary and algorithm; if the update condition is met, the AI model takes the new service data as input and generates a new compression dictionary and compression algorithm, which the data transmission device then uses for service data compression.

It should be noted that whether or not the update condition is met, each time service data transmission completes, the most recently transmitted service data and the compression ratio of that data are fed back to update the AI model. The update process of the AI model is: compare the fed-back compression ratio with the expected compression ratio output above, form a positive or negative feedback incentive value based on the comparison result, and adjust the model parameters of the AI model.

It should be noted that the AI model can adopt an existing feedback neural network model, continuously adjusting its parameters through a self-learning mechanism and strengthening its ability to extract dictionary terms by analyzing the correlated features of the input data; as service data keeps arriving, the dictionary-recognition behavior is steered in the positive direction of improving the compression ratio. The AI model computes data compression ratios with different compression algorithms, which are existing algorithms such as Huffman coding, Rice coding and run-length encoding. The specific process of feeding the compression ratio back into the AI model and adjusting the model parameters can follow existing methods and is not detailed here.
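The "try several algorithms with the current dictionary and keep the one with the highest compression ratio" selection step can be sketched as follows. This is an illustrative sketch only: the two candidates here (dictionary-primed DEFLATE via zlib's `zdict`, and a naive run-length encoder) are stand-ins for whatever coders, such as the Huffman, Rice or run-length coders mentioned above, a real implementation would register.

```python
import zlib

def rle_encode(data: bytes) -> bytes:
    """Naive run-length encoding: (count, byte) pairs, count capped at 255."""
    out, i = bytearray(), 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def deflate_with_dict(data: bytes, dictionary: bytes) -> bytes:
    """DEFLATE primed with the current compression dictionary (zlib zdict)."""
    c = zlib.compressobj(zdict=dictionary)
    return c.compress(data) + c.flush()

def select_best_algorithm(data: bytes, dictionary: bytes):
    """Compress `data` with each candidate algorithm using the current
    dictionary and return (name, compression ratio) of the best one,
    where ratio = original size / compressed size."""
    candidates = {
        "deflate+dict": lambda d: deflate_with_dict(d, dictionary),
        "rle": rle_encode,
    }
    best_name, best_ratio = None, 0.0
    for name, fn in candidates.items():
        compressed = fn(data)
        ratio = len(data) / max(len(compressed), 1)
        if ratio > best_ratio:
            best_name, best_ratio = name, ratio
    return best_name, best_ratio
```

In the scheme described above, the returned ratio would also serve as the "expected compression ratio", to be compared later against the ratio actually achieved on transmitted service data when forming the feedback incentive.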
In particular, when the data transmission device initially establishes a connection and performs service transmission for the first time, the initialized compression dictionary and compression algorithm to be used at first must be determined.

As an optional implementation, the currently used compression dictionary and compression algorithm are initialized according to preset information; or

when establishing a service connection, the currently used compression dictionary and compression algorithm are initialized according to the compression dictionary and compression algorithm output by the AI model when service transmission was last completed.

It should be noted that initializing the currently used compression dictionary according to preset information includes:

initializing the currently used compression dictionary, according to preset information, to a compression dictionary configured based on service characteristics; or

initializing the currently used compression dictionary to empty, according to a preset.

It should be noted that initializing the currently used compression algorithm according to preset information includes:

initializing the currently used compression algorithm to a default compression algorithm according to preset information; or

initializing the currently used compression algorithm to a preconfigured compression algorithm according to preset information; or

initializing the currently used compression algorithm, according to preset information, to a compression algorithm selected based on service characteristics.

It should be noted that the AI model extracts features from the service data of every transmission, outputs a compression dictionary based on the correlation between the extracted features and the service data, compresses with the current dictionary using different compression algorithms, and outputs the algorithm corresponding to the highest compression ratio, i.e. it adjusts the compression dictionary and algorithm according to the data of each service transmission; however, only when the update condition is met are the dictionary and algorithm output by the AI model used to update those currently in use.
As an optional implementation, determining that the update condition is met includes at least one of the following steps:

determining that the update condition is met when a service connection is established;

determining that the update condition is met when a set update period is reached after the service connection is established;

determining that the update condition is met when an event trigger condition is determined to be met.

It should be noted that determining that the update condition is met covers any one or more of the three conditions above, i.e. it means determining that any one of the three conditions, any combination of two of them, or all three are met.

Specific implementations are given for the three ways of determining that the update condition is met above:

Implementation a: updating the currently used compression dictionary and compression algorithm when a service connection is determined to be established.

For each service transmitted with compression, after the service connection is established, the compression dictionary and algorithm are synchronized once for that service, and no dictionary or algorithm update is performed during the communication.

The dictionary and algorithm update based on the transmitted data and compression ratio is used for the next communication of that service; that is, the dictionary and algorithm are updated only once, after the service connection is established, and the compression dictionary and algorithm used during one service connection remain unchanged.

Implementation b: updating the currently used compression dictionary and compression algorithm when a set update period is reached after the service connection is established.

For each service transmitted with data compression, after the service connection is established the compression dictionary and algorithm are synchronized once for that service, and during communication they are updated periodically: when a period is reached, they are updated with the dictionary and algorithm output by the current model, and within one period the compression dictionary and algorithm in use remain unchanged.

Implementation c: updating the currently used compression dictionary and compression algorithm when an event trigger condition is determined to be met.

As an optional implementation, determining that the event trigger condition is met includes at least one of the following steps:

determining that the event trigger condition is met when the compression ratio of the service data whose transmission has just completed is below a preset threshold;

determining that the event trigger condition is met when the difference between the compression ratio expected from the compression dictionary and compression algorithm currently output by the AI model and the compression ratio of the service data whose transmission has just completed is greater than a preset value.

It should be noted that the compression ratio expected from the dictionary and algorithm currently output by the AI model is the compression ratio expected when compressing the most recently transmitted service data with that dictionary and algorithm. It should also be noted that the event trigger conditions above are only examples and do not specifically limit the event trigger condition; specific event trigger conditions can be set according to the actual implementation, for example setting the event trigger condition to be met when the compression ratio expected from the dictionary and algorithm currently output by the AI model is greater than a preset value.
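The connection-based, periodic and event-triggered conditions above can be combined into a single check. The sketch below is illustrative only; the parameter names and default values are assumptions, not taken from the specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UpdatePolicy:
    # Illustrative parameter names and defaults (not from the specification).
    update_on_connect: bool = True          # condition: service connection established
    update_period_s: Optional[float] = 60.0 # condition: set update period (None = off)
    min_ratio: float = 1.5                  # preset threshold on the achieved ratio
    max_ratio_gap: float = 0.5              # preset value on (expected - achieved)

def update_condition_met(policy: UpdatePolicy,
                         just_connected: bool,
                         seconds_since_update: float,
                         achieved_ratio: float,
                         expected_ratio: float) -> bool:
    # Condition 1: a service connection has just been established.
    if policy.update_on_connect and just_connected:
        return True
    # Condition 2: the set update period has elapsed since the last update.
    if (policy.update_period_s is not None
            and seconds_since_update >= policy.update_period_s):
        return True
    # Event trigger a: achieved compression ratio fell below the preset threshold.
    if achieved_ratio < policy.min_ratio:
        return True
    # Event trigger b: the ratio expected from the AI model's current
    # dictionary/algorithm exceeds the achieved ratio by more than the preset value.
    if expected_ratio - achieved_ratio > policy.max_ratio_gap:
        return True
    return False
```

When the check returns true, the device would adopt the AI model's latest dictionary and algorithm and synchronize them with the peer; otherwise it keeps compressing with the pair already in use.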
基于上述实施例,本发明实施例提供三种具体的实施方式,对上述一种进行数据压缩的方法进行具体的说明。
实施方式一:基站获取本地的AI模型输出的压缩字典和压缩算法。
如图2所示,本发明实施例提供一种基站获取本地的AI模型输出的压缩字典和压缩算法进行数据压缩的示意图。
PDCP(Packet Data Convergence Protocol,分组数据汇聚协议)层负责数据压缩和解压缩,在Uu接口进行上下行压缩数据传输,基站获取本地的AI 模型输出的压缩字典和压缩算法,更新的压缩字典和/或压缩算法通过Uu接口直接进行同步。
需要说明的是,下述实施方式二与实施方式三均采用上述传输方式,不再赘述。
在本实施方式中,数据传输设备为基站和用户终端UE。
步骤1:用户终端UE和基站间建立连接。
这里没有描述用户终端UE与核心网之间的交互过程,用户终端UE与核心网之间的交互在步骤2前完成。
步骤2:基站确定采用的初始化的压缩字典和压缩算法。
上述初始化的压缩字典和压缩算法,可以根据预先设置信息确定;或者根据上一次完成业务传输时,AI模型输出的压缩字典和压缩算法确定。
步骤3:基站将初始化的压缩字典和压缩算法发送给用户终端UE。
发送方式可以是:RRC消息;MAC CE;在第一条数据包的PDCP头中携带AI压缩字典和压缩算法;在第一条数据包的PDCP头中携带AI压缩字典和压缩算法指示信息;或使用PDCP subPDU携带AI压缩字典和压缩算法。
步骤4:基站和用户终端UE中的数据发送端使用上述初始化的压缩字典和压缩算法对数据进行压缩传输,接收端使用上述初始化的压缩字典和压缩算法对数据进行解压缩,并在此过程中统计压缩率。
当基站/用户终端UE为数据发送端时,使用上述初始化的压缩字典和压缩算法对数据进行压缩,并将压缩的数据发送至用户终端UE/基站;当基站/用户终端UE为数据接收端时,接收用户终端UE/基站发送的压缩的数据,使用上述初始化的压缩字典和压缩算法对上述压缩的数据进行解压缩。
用户终端UE将统计的压缩率发送给基站。
步骤5:基站基于传输的数据和数据压缩率对训练模型进行调整,并生成新的压缩字典和压缩算法。
需要说明的是,当基站为数据发送端时,将待传输的数据输入AI模型,使用AI模型对最新传输的业务数据进行特征提取,根据提取的特征和所述业务数据之间的关联性输出压缩字典,并利用不同压缩算法以当前压缩字典进行压缩,输出压缩率最高所对应的压缩算法,并将该实际业务数据使用输出的压缩字典和压缩算法得到的压缩率作为反馈输入,调整AI模型的模型参数;当基站为数据接收端时,接收用户终端UE发送的压缩的数据,使用上述初始化的压缩字典和压缩算法对上述压缩的数据进行解压缩,将解压缩的数据输入AI模型,使用AI模型对最新传输的业务数据进行特征提取,根据提取的特征和所述业务数据之间的关联性输出压缩字典,并利用不同压缩算法以当前压缩字典进行压缩,输出压缩率最高所对应的压缩算法,并将实际业务数据使用输出的压缩字典和压缩算法得到的压缩率作为反馈输入,调整AI模型的模型参数。
需要说明的是,对训练模型进行调整,并生成新的压缩字典和压缩算法的过程,在数据传输的过程中,持续进行。
步骤6:确定满足更新条件时,基站将更新的压缩字典和压缩算法发送给用户终端UE。
上述步骤6的更新条件可以是基于周期,也可以是基于事件性触发。
上述步骤6中使用的发送方式同步骤3,不再赘述。
步骤7:基站和用户终端UE中的数据发送端使用新的压缩字典和压缩算法对数据进行压缩传输,接收端使用新的压缩字典和压缩算法对数据进行解压缩,在此过程中统计压缩率。
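上述步骤中"数据发送端使用同步的压缩字典对数据压缩、接收端解压缩并统计压缩率"的环节,可以用如下草图示意(以zlib的预置字典能力代替本发明的AI压缩字典,仅为可运行的占位示例,省略了空口传输细节):

```python
import zlib

class CompressedLink:
    """示意基站与UE之间基于同步压缩字典的压缩传输与压缩率统计。"""

    def __init__(self, dictionary: bytes = b""):
        self.dictionary = dictionary   # 双方同步的压缩字典
        self.ratios = []               # 统计的压缩率(压缩前/压缩后)

    def send(self, data: bytes) -> bytes:
        co = (zlib.compressobj(zdict=self.dictionary)
              if self.dictionary else zlib.compressobj())
        packet = co.compress(data) + co.flush()
        self.ratios.append(len(data) / len(packet))
        return packet

    def recv(self, packet: bytes) -> bytes:
        do = (zlib.decompressobj(zdict=self.dictionary)
              if self.dictionary else zlib.decompressobj())
        return do.decompress(packet) + do.flush()
```

发送端与接收端只要使用同一字典即可正确解压缩;这也说明了为什么字典更新时必须先在收发双方之间完成同步。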
需要说明的是,上述实施方式同样适用于用户终端UE获取本地的AI模型输出的压缩字典和压缩算法进行数据压缩的实施方式,将上述过程中的基站与用户终端UE的操作对调,即可实现。
实施方式二:从第三方设备获取利用AI模型输出的压缩字典和压缩算法。
如图3所示,本发明实施例提供一种从第三方设备获取利用AI模型输出的压缩字典和压缩算法进行数据压缩的示意图。
需要说明的是,在本实施方式中,数据传输设备为基站和用户终端UE,第三方设备为云或边缘AI压缩服务器。
步骤1:用户终端UE和基站间建立连接。
为了简化描述,这里没有描述用户终端UE与核心网之间的交互过程,用户终端UE与核心网之间的交互在步骤2前完成。
步骤2a/2b:用户终端UE和/或基站向AI压缩服务器请求压缩字典和压缩算法。
上述步骤可以是数据传输设备双方都向AI压缩服务器请求,也可以是其中一端向AI压缩服务器请求。
步骤3:AI压缩服务器确定采用的初始化的压缩字典和压缩算法。
上述初始化的压缩字典和压缩算法,可以根据预先设置信息确定;或者根据上一次完成业务传输时,AI模型输出的压缩字典和压缩算法确定。
步骤4a/4b:AI压缩服务器将初始化的压缩字典和压缩算法发送给基站和用户终端UE。
此步骤可以是AI压缩服务器将压缩字典和压缩算法向数据传输设备双方都发送,也可以是其中一端从AI压缩服务器获取了压缩字典和压缩算法后发送给对端。
步骤5:基站和用户终端UE中的数据发送端使用上述初始化的压缩字典和压缩算法对数据进行压缩传输,接收端使用上述初始化的压缩字典和压缩算法对数据进行解压缩,并在此过程中统计压缩率。
当基站/用户终端UE为数据发送端时,使用上述初始化的压缩字典和压缩算法对数据进行压缩,并将压缩的数据发送至用户终端UE/基站;当基站/用户终端UE为数据接收端时,接收用户终端UE/基站发送的压缩的数据,使用上述初始化的压缩字典和压缩算法对上述压缩的数据进行解压缩。
需要说明的是,在此过程中基站和/或用户终端UE发送最新传输的业务数据及当前完成传输的业务数据的压缩率给AI压缩服务器。
步骤6:AI压缩服务器基于最新传输的业务数据及当前完成传输的业务数据的压缩率对训练模型进行调整,并生成新的压缩字典和压缩算法。
AI压缩服务器接收基站和/或用户终端UE发送的最新传输的业务数据及当前完成传输的业务数据的压缩率。
AI压缩服务器将最新传输的业务数据输入AI模型,使用AI模型对最新传输的业务数据进行特征提取,根据提取的特征和所述业务数据之间的关联性输出压缩字典,并利用不同压缩算法以当前压缩字典进行压缩,输出压缩率最高所对应的压缩算法,并将实际业务数据使用输出的压缩字典和压缩算法得到的压缩率作为反馈输入,调整AI模型的模型参数。
步骤7a/7b:确定满足更新条件时,AI压缩服务器将更新的压缩字典和压缩算法发送给基站和用户终端UE。
上述步骤7a/7b的更新条件可以是基于周期,也可以是基于事件性触发。
上述发送方式可以是AI压缩服务器将压缩字典和压缩算法向数据传输设备双方都发送,也可以是其中一端从AI压缩服务器获取了压缩字典和压缩算法后发送给对端。
步骤8:基站和用户终端UE中的数据发送端使用新的压缩字典和压缩算法对数据进行压缩传输,接收端使用新的压缩字典和压缩算法对数据进行解压缩,并在此过程中统计压缩率,并将数据和压缩率反馈给AI压缩服务器。
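实施方式二中云/边缘AI压缩服务器"收集业务数据与压缩率反馈、到达更新周期时生成新字典并下发"的角色,可以用如下骨架示意(类名、按反馈条数模拟周期的方式及字典提取逻辑均为示意假设,真实实现中字典应由AI模型根据数据关联特征提取):

```python
class AICompressionServer:
    """云或边缘AI压缩服务器骨架:接收反馈并在到达更新周期时生成新字典。"""

    def __init__(self, period: int = 4):
        self.period = period     # 以反馈条数近似模拟更新周期
        self.feedback = []       # (业务数据, 压缩率) 反馈记录

    def report(self, data: bytes, ratio: float) -> None:
        """接收数据传输设备上报的最新业务数据及其压缩率。"""
        self.feedback.append((data, ratio))

    def update_ready(self) -> bool:
        """每收到period条反馈视为到达一次更新周期(示意)。"""
        return bool(self.feedback) and len(self.feedback) % self.period == 0

    def build_dictionary(self) -> bytes:
        """极简的字典生成替代:拼接最近的业务数据样本并截断,
        用于下发给基站和UE;真实实现由AI模型提取高频关联词条。"""
        return b"".join(d for d, _ in self.feedback[-self.period:])[:1024]
```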
作为一种可选的实施方式,在上述实施方式二的基础上,在上述步骤5之后增加步骤"针对该业务的连接释放",在上述步骤6之后增加步骤"针对该业务的连接再次建立"以及步骤"用户终端UE和/或基站向AI压缩服务器请求压缩字典和压缩算法",即得到更新条件为确定建立业务连接的实施方式三。如图4所示,本发明实施例提供一种从第三方设备获取利用AI模型输出的压缩字典和压缩算法且更新条件为确定建立业务连接进行数据压缩的示意图。
实施例2
本发明实施例提供一种数据传输设备进行数据压缩的方法流程图,如图5所示,包括:
步骤S501,业务传输过程中,确定当前采用的压缩字典和压缩算法,其中初始时采用初始化的压缩字典和压缩算法,之后确定满足更新条件时,利用AI模型输出的压缩字典和压缩算法,分别对应更新当前采用的压缩字典和压缩算法;
步骤S502,基于当前采用压缩字典,利用当前采用的压缩算法对传输的业务数据进行压缩或解压缩。
可选地,所述AI模型用于对最新传输的业务数据进行特征提取,根据提取的特征和所述业务数据之间的关联性输出压缩字典,及利用不同压缩算法以当前压缩字典进行压缩,输出压缩率最高所对应的压缩算法,并将实际业务数据使用输出的压缩字典和压缩算法得到的压缩率作为反馈输入,调整AI模型的模型参数。
可选地,所述方法还包括:
根据预先设置信息,初始化当前采用的压缩字典和压缩算法;或者
建立业务连接时,根据上一次完成业务传输时,AI模型输出的压缩字典和压缩算法,初始化当前采用的压缩字典和压缩算法。
可选地,确定满足更新条件,包括如下至少一个步骤:
确定建立业务连接时,满足更新条件;
确定建立业务连接后,到达设定的更新周期时,满足更新条件;
确定满足事件性触发条件时,满足更新条件。
可选地,确定满足事件性触发条件,包括如下至少一个步骤:
确定当前完成传输的业务数据的压缩率低于预设门限时,确定满足事件性触发条件;
根据当前AI模型输出的压缩字典和压缩算法所预期的压缩率,与当前完成传输的业务数据的压缩率的差值大于预设值时,确定满足事件性触发条件。
可选地,业务传输过程中,还包括:
获取本地的AI模型输出的压缩字典和压缩算法;或
获取本地的AI模型输出的压缩字典和压缩算法,并发送给对端数据传输设备;或
从对端数据传输设备获取利用AI模型输出的压缩字典和压缩算法;或
从第三方设备获取利用AI模型输出的压缩字典和压缩算法,所述第三方设备为位于云或边缘的功能节点的设备;或
从第三方设备获取利用AI模型输出的压缩字典和压缩算法,并发送给对端数据传输设备,所述第三方设备为位于云或边缘的功能节点的设备。
可选地,所述方法还包括:
将最新传输的业务数据及当前完成传输的业务数据的压缩率发送至第三方设备。
本发明实施例提供一种第三方设备进行数据压缩的方法流程图,如图6所示,包括:
步骤S601,响应于数据传输设备的请求,获取当前业务最新传输的业务数据及当前完成传输的业务数据的压缩率;
步骤S602,将最新传输的业务数据输入到AI模型,并利用AI模型输出压缩字典和压缩算法;
步骤S603,将所述压缩字典和压缩算法发送至所述数据传输设备。
可选地,所述AI模型用于对最新传输的业务数据进行特征提取,根据提取的特征和所述业务数据之间的关联性输出压缩字典,及利用不同压缩算法以当前压缩字典进行压缩,输出压缩率最高所对应的压缩算法,并将实际业务数据使用输出的压缩字典和压缩算法得到的压缩率作为反馈输入,调整AI模型的模型参数。
可选地,将所述压缩字典和压缩算法发送至所述数据传输设备,具体包括:
确定满足更新条件时,将所述压缩字典和压缩算法发送至所述数据传输设备。
可选地,确定满足更新条件,包括如下至少一个步骤:
确定所述数据传输设备建立业务连接时,满足更新条件;
确定所述数据传输设备建立业务连接后,到达设定的更新周期时,满足更新条件;
确定满足事件性触发条件时,满足更新条件。
可选地,确定满足事件性触发条件,包括如下至少一个步骤:
确定当前完成传输的业务数据的压缩率低于预设门限时,确定满足事件性触发条件;
根据当前AI模型输出的压缩字典和压缩算法所预期的压缩率,与当前完成传输的业务数据的压缩率的差值大于预设值时,确定满足事件性触发条件。
本发明实施例所提供的进行数据压缩的数据传输设备,与本发明上述实施例1的数据传输设备属于同一发明构思,应用到上述实施例提供的系统中数据传输设备进行数据压缩的各种实施方式,可以应用到本实施例中进行数据压缩的方法,这里不再重述。
本发明实施例所提供的进行数据压缩的第三方设备,与本发明上述实施例1的第三方设备属于同一发明构思,应用到上述实施例提供的系统中第三方设备进行数据压缩的各种实施方式,可以应用到本实施例中进行数据压缩的方法,这里不再重述。
本发明实施例提供一种进行数据压缩的数据传输设备的示意图,如图7所示,包括:
存储器701、处理器702、收发机703以及总线接口704。
处理器702负责管理总线架构和通常的处理,存储器701可以存储处理器702在执行操作时所使用的数据。收发机703用于在处理器702的控制下接收和发送数据。
总线架构可以包括任意数量的互联的总线和桥,具体由处理器702代表的一个或多个处理器和存储器701代表的存储器的各种电路链接在一起。总线架构还可以将诸如外围设备、稳压器和功率管理电路等之类的各种其它电路链接在一起,这些都是本领域所公知的,因此,本文不再对其进行进一步描述。总线接口提供接口。处理器702负责管理总线架构和通常的处理,存储器701可以存储处理器702在执行操作时所使用的数据。
本发明实施例揭示的流程,可以应用于处理器702中,或者由处理器702实现。在实现过程中,信号处理流程的各步骤可以通过处理器702中的硬件的集成逻辑电路或者软件形式的指令完成。处理器702可以是通用处理器、数字信号处理器、专用集成电路、现场可编程门阵列或者其它可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本发明实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本发明实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器、闪存、只读存储器、可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器701,处理器702读取存储器701中的信息,结合其硬件完成信号处理流程的步骤。
具体地,处理器702,用于读取存储器701中的程序并执行:
业务传输过程中,确定当前采用的压缩字典和压缩算法,其中初始时采用初始化的压缩字典和压缩算法,之后确定满足更新条件时,利用AI模型输出的压缩字典和压缩算法,分别对应更新当前采用的压缩字典和压缩算法;
基于当前采用压缩字典,利用当前采用的压缩算法对传输的业务数据进行压缩或解压缩。
可选地,所述AI模型用于对最新传输的业务数据进行特征提取,根据提取的特征和所述业务数据之间的关联性输出压缩字典,及利用不同压缩算法以当前压缩字典进行压缩,输出压缩率最高所对应的压缩算法,并将实际业务数据使用输出的压缩字典和压缩算法得到的压缩率作为反馈输入,调整AI模型的模型参数。
可选地,所述处理器还用于:
根据预先设置信息,初始化当前采用的压缩字典和压缩算法;或者
建立业务连接时,根据上一次完成业务传输时,AI模型输出的压缩字典和压缩算法,初始化当前采用的压缩字典和压缩算法。
可选地,所述处理器确定满足更新条件,包括如下至少一个步骤:
确定建立业务连接时,满足更新条件;
确定建立业务连接后,到达设定的更新周期时,满足更新条件;
确定满足事件性触发条件时,满足更新条件。
可选地,所述处理器确定满足事件性触发条件,包括如下至少一个步骤:
确定当前完成传输的业务数据的压缩率低于预设门限时,确定满足事件性触发条件;
根据当前AI模型输出的压缩字典和压缩算法所预期的压缩率,与当前完成传输的业务数据的压缩率的差值大于预设值时,确定满足事件性触发条件。
可选地,业务传输过程中,所述处理器还用于:
获取本地的AI模型输出的压缩字典和压缩算法;或
获取本地的AI模型输出的压缩字典和压缩算法,并发送给对端数据传输设备;或
从对端数据传输设备获取利用AI模型输出的压缩字典和压缩算法;或
从第三方设备获取利用AI模型输出的压缩字典和压缩算法,所述第三方设备为位于云或边缘的功能节点的设备;或
从第三方设备获取利用AI模型输出的压缩字典和压缩算法,并发送给对端数据传输设备,所述第三方设备为位于云或边缘的功能节点的设备。
可选地,所述处理器还用于:
将最新传输的业务数据及当前完成传输的业务数据的压缩率发送至第三方设备。
本发明实施例提供一种进行数据压缩的第三方设备的示意图,如图8所示,包括:
存储器801、处理器802、收发机803以及总线接口804。
处理器802负责管理总线架构和通常的处理,存储器801可以存储处理器802在执行操作时所使用的数据。收发机803用于在处理器802的控制下接收和发送数据。
总线架构可以包括任意数量的互联的总线和桥,具体由处理器802代表的一个或多个处理器和存储器801代表的存储器的各种电路链接在一起。总线架构还可以将诸如外围设备、稳压器和功率管理电路等之类的各种其它电路链接在一起,这些都是本领域所公知的,因此,本文不再对其进行进一步描述。总线接口提供接口。处理器802负责管理总线架构和通常的处理,存储器801可以存储处理器802在执行操作时所使用的数据。
本发明实施例揭示的流程,可以应用于处理器802中,或者由处理器802实现。在实现过程中,信号处理流程的各步骤可以通过处理器802中的硬件的集成逻辑电路或者软件形式的指令完成。处理器802可以是通用处理器、数字信号处理器、专用集成电路、现场可编程门阵列或者其它可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本发明实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本发明实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器、闪存、只读存储器、可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器801,处理器802读取存储器801中的信息,结合其硬件完成信号处理流程的步骤。
具体地,处理器802,用于读取存储器801中的程序并执行:
响应于数据传输设备的请求,获取当前业务最新传输的业务数据及当前完成传输的业务数据的压缩率;
将最新传输的业务数据输入到AI模型,并利用AI模型输出压缩字典和压缩算法;
将所述压缩字典和压缩算法发送至所述数据传输设备。
可选地,所述AI模型用于对最新传输的业务数据进行特征提取,根据提取的特征和所述业务数据之间的关联性输出压缩字典,及利用不同压缩算法以当前压缩字典进行压缩,输出压缩率最高所对应的压缩算法,并将实际业务数据使用输出的压缩字典和压缩算法得到的压缩率作为反馈输入,调整AI模型的模型参数。
可选地,将所述压缩字典和压缩算法发送至所述数据传输设备,所述处理器具体用于:
确定满足更新条件时,将所述压缩字典和压缩算法发送至所述数据传输设备。
可选地,所述处理器确定满足更新条件,包括如下至少一个步骤:
确定所述数据传输设备建立业务连接时,满足更新条件;
确定所述数据传输设备建立业务连接后,到达设定的更新周期时,满足更新条件;
确定满足事件性触发条件时,满足更新条件。
可选地,所述处理器确定满足事件性触发条件,包括如下至少一个步骤:
确定当前完成传输的业务数据的压缩率低于预设门限时,确定满足事件性触发条件;
根据当前AI模型输出的压缩字典和压缩算法所预期的压缩率,与当前完成传输的业务数据的压缩率的差值大于预设值时,确定满足事件性触发条件。
本发明实施例所提供的进行数据压缩的数据传输设备,与本发明上述实施例1的数据传输设备属于同一发明构思,应用到上述实施例提供的系统中数据传输设备进行数据压缩的各种实施方式,可以应用到本实施例中进行数据压缩的数据传输设备,这里不再重述。
本发明实施例所提供的进行数据压缩的第三方设备,与本发明上述实施例1的第三方设备属于同一发明构思,应用到上述实施例提供的系统中第三方设备进行数据压缩的各种实施方式,可以应用到本实施例中进行数据压缩的第三方设备,这里不再重述。
本发明实施例提供一种数据传输设备进行数据压缩的装置的示意图,如图9所示,包括:
字典算法确定单元901,用于业务传输过程中,确定当前采用的压缩字典和压缩算法,其中初始时采用初始化的压缩字典和压缩算法,之后确定满足更新条件时,利用AI模型输出的压缩字典和压缩算法,分别对应更新当前采用的压缩字典和压缩算法;
压缩单元902,用于基于当前采用压缩字典,利用当前采用的压缩算法对传输的业务数据进行压缩或解压缩。
可选地,所述AI模型用于对最新传输的业务数据进行特征提取,根据提取的特征和所述业务数据之间的关联性输出压缩字典,及利用不同压缩算法以当前压缩字典进行压缩,输出压缩率最高所对应的压缩算法,并将实际业务数据使用输出的压缩字典和压缩算法得到的压缩率作为反馈输入,调整AI模型的模型参数。
可选地,所述字典算法确定单元还用于:
根据预先设置信息,初始化当前采用的压缩字典和压缩算法;或者
建立业务连接时,根据上一次完成业务传输时,AI模型输出的压缩字典和压缩算法,初始化当前采用的压缩字典和压缩算法。
可选地,所述字典算法确定单元确定满足更新条件,包括如下至少一个步骤:
确定建立业务连接时,满足更新条件;
确定建立业务连接后,到达设定的更新周期时,满足更新条件;
确定满足事件性触发条件时,满足更新条件。
可选地,所述字典算法确定单元确定满足事件性触发条件,包括如下至少一个步骤:
确定当前完成传输的业务数据的压缩率低于预设门限时,确定满足事件性触发条件;
根据当前AI模型输出的压缩字典和压缩算法所预期的压缩率,与当前完成传输的业务数据的压缩率的差值大于预设值时,确定满足事件性触发条件。
可选地,业务传输过程中,所述字典算法确定单元还用于:
获取本地的AI模型输出的压缩字典和压缩算法;或
获取本地的AI模型输出的压缩字典和压缩算法,并发送给对端数据传输设备;或
从对端数据传输设备获取利用AI模型输出的压缩字典和压缩算法;或
从第三方设备获取利用AI模型输出的压缩字典和压缩算法,所述第三方设备为位于云或边缘的功能节点的设备;或
从第三方设备获取利用AI模型输出的压缩字典和压缩算法,并发送给对端数据传输设备,所述第三方设备为位于云或边缘的功能节点的设备。
可选地,所述压缩单元还用于:
将最新传输的业务数据及当前完成传输的业务数据的压缩率发送至第三方设备。
本发明实施例提供一种第三方设备进行数据压缩的装置,如图10所示,包括:
数据接收单元1001,用于响应于数据传输设备的请求,获取当前业务最新传输的业务数据及当前完成传输的业务数据的压缩率;
字典算法生成单元1002,用于将最新传输的业务数据输入到AI模型,并利用AI模型输出压缩字典和压缩算法;
数据发送单元1003,用于将所述压缩字典和压缩算法发送至所述数据传输设备。
可选地,所述AI模型用于对最新传输的业务数据进行特征提取,根据提取的特征和所述业务数据之间的关联性输出压缩字典,及利用不同压缩算法以当前压缩字典进行压缩,输出压缩率最高所对应的压缩算法,并将实际业务数据使用输出的压缩字典和压缩算法得到的压缩率作为反馈输入,调整AI模型的模型参数。
可选地,所述数据发送单元具体用于:
确定满足更新条件时,将所述压缩字典和压缩算法发送至所述数据传输设备。
可选地,所述数据发送单元确定满足更新条件,包括如下至少一个步骤:
确定所述数据传输设备建立业务连接时,满足更新条件;
确定所述数据传输设备建立业务连接后,到达设定的更新周期时,满足更新条件;
确定满足事件性触发条件时,满足更新条件。
可选地,所述数据发送单元确定满足事件性触发条件,包括如下至少一个步骤:
确定当前完成传输的业务数据的压缩率低于预设门限时,确定满足事件性触发条件;
根据当前AI模型输出的压缩字典和压缩算法所预期的压缩率,与当前完成传输的业务数据的压缩率的差值大于预设值时,确定满足事件性触发条件。
本发明实施例所提供的进行数据压缩的装置,与本发明上述实施例1的数据传输设备属于同一发明构思,应用到上述实施例提供的系统中数据传输设备进行数据压缩的各种实施方式,可以应用到本实施例中进行数据压缩的装置,这里不再重述。
本发明实施例所提供的进行数据压缩的装置,与本发明上述实施例1的第三方设备属于同一发明构思,应用到上述实施例提供的系统中第三方设备进行数据压缩的各种实施方式,可以应用到本实施例中进行数据压缩的装置,这里不再重述。
本发明还提供一种处理器可读存储介质,所述处理器可读存储介质存储有计算机程序,所述计算机程序用于使所述处理器执行上述实施例1中提供的应用于数据传输设备的一种进行数据压缩的方法的步骤。
本发明还提供一种处理器可读存储介质,所述处理器可读存储介质存储有计算机程序,所述计算机程序用于使所述处理器执行上述实施例1中提供的应用于第三方设备的一种进行数据压缩的方法的步骤。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个模块或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或模块的间接耦合或通信连接,可以是电性、机械或其它的形式。
所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理模块,即可以位于一个地方,或者也可以分布到多个网络模块上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。
所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存储的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘(solid state disk,SSD))等。
以上对本申请所提供的技术方案进行了详细介绍,本申请中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的一般技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本申请是参照根据本申请的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的精神和范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。

Claims (33)

  1. 一种进行数据压缩的方法,应用于数据传输设备,其特征在于,该方法包括:
    业务传输过程中,确定当前采用的压缩字典和压缩算法,其中初始时采用初始化的压缩字典和压缩算法,之后确定满足更新条件时,利用AI模型输出的压缩字典和压缩算法,分别对应更新当前采用的压缩字典和压缩算法;
    基于当前采用压缩字典,利用当前采用的压缩算法对传输的业务数据进行压缩或解压缩。
  2. 根据权利要求1所述的方法,其特征在于,所述AI模型用于对最新传输的业务数据进行特征提取,根据提取的特征和所述业务数据之间的关联性输出压缩字典,及利用不同压缩算法以当前压缩字典进行压缩,输出压缩率最高所对应的压缩算法,并将实际业务数据使用输出的压缩字典和压缩算法得到的压缩率作为反馈输入,调整AI模型的模型参数。
  3. 根据权利要求1所述的方法,其特征在于,还包括:
    根据预先设置信息,初始化当前采用的压缩字典和压缩算法;或者
    建立业务连接时,根据上一次完成业务传输时,AI模型输出的压缩字典和压缩算法,初始化当前采用的压缩字典和压缩算法。
  4. 根据权利要求1所述的方法,其特征在于,确定满足更新条件,包括如下至少一个步骤:
    确定建立业务连接时,满足更新条件;
    确定建立业务连接后,到达设定的更新周期时,满足更新条件;
    确定满足事件性触发条件时,满足更新条件。
  5. 根据权利要求4所述的方法,其特征在于,确定满足事件性触发条件,包括如下至少一个步骤:
    确定当前完成传输的业务数据的压缩率低于预设门限时,确定满足事件性触发条件;
    根据当前AI模型输出的压缩字典和压缩算法所预期的压缩率,与当前完成传输的业务数据的压缩率的差值大于预设值时,确定满足事件性触发条件。
  6. 根据权利要求1所述的方法,其特征在于,业务传输过程中,还包括:
    获取本地的AI模型输出的压缩字典和压缩算法;或
    获取本地的AI模型输出的压缩字典和压缩算法,并发送给对端数据传输设备;或
    从对端数据传输设备获取利用AI模型输出的压缩字典和压缩算法;或
    从第三方设备获取利用AI模型输出的压缩字典和压缩算法,所述第三方设备为位于云或边缘的功能节点的设备;或
    从第三方设备获取利用AI模型输出的压缩字典和压缩算法,并发送给对端数据传输设备,所述第三方设备为位于云或边缘的功能节点的设备。
  7. 根据权利要求1所述的方法,其特征在于,还包括:
    将最新传输的业务数据及当前完成传输的业务数据的压缩率发送至第三方设备。
  8. 一种进行数据压缩的方法,其特征在于,应用于第三方设备,包括:
    响应于数据传输设备的请求,获取当前业务最新传输的业务数据及当前完成传输的业务数据的压缩率;
    将最新传输的业务数据输入到AI模型,并利用AI模型输出压缩字典和压缩算法;
    将所述压缩字典和压缩算法发送至所述数据传输设备。
  9. 根据权利要求8所述的方法,其特征在于,所述AI模型用于对最新传输的业务数据进行特征提取,根据提取的特征和所述业务数据之间的关联性输出压缩字典,及利用不同压缩算法以当前压缩字典进行压缩,输出压缩率最高所对应的压缩算法,并将实际业务数据使用输出的压缩字典和压缩算法得到的压缩率作为反馈输入,调整AI模型的模型参数。
  10. 根据权利要求8所述的方法,其特征在于,将所述压缩字典和压缩算法发送至所述数据传输设备,具体包括:
    确定满足更新条件时,将所述压缩字典和压缩算法发送至所述数据传输设备。
  11. 根据权利要求10所述的方法,其特征在于,确定满足更新条件,包括如下至少一个步骤:
    确定所述数据传输设备建立业务连接时,满足更新条件;
    确定所述数据传输设备建立业务连接后,到达设定的更新周期时,满足更新条件;
    确定满足事件性触发条件时,满足更新条件。
  12. 根据权利要求11所述的方法,其特征在于,确定满足事件性触发条件,包括如下至少一个步骤:
    确定当前完成传输的业务数据的压缩率低于预设门限时,确定满足事件性触发条件;
    根据当前AI模型输出的压缩字典和压缩算法所预期的压缩率,与当前完成传输的业务数据的压缩率的差值大于预设值时,确定满足事件性触发条件。
  13. 一种进行数据压缩的数据传输设备,其特征在于,包括存储器,收发机,处理器:
    存储器,用于存储计算机程序;收发机,用于在所述处理器的控制下收发数据;处理器,用于读取所述存储器中的计算机程序并执行以下操作:
    业务传输过程中,确定当前采用的压缩字典和压缩算法,其中初始时采用初始化的压缩字典和压缩算法,之后确定满足更新条件时,利用AI模型输出的压缩字典和压缩算法,分别对应更新当前采用的压缩字典和压缩算法;
    基于当前采用压缩字典,利用当前采用的压缩算法对传输的业务数据进行压缩或解压缩。
  14. 根据权利要求13所述的数据传输设备,其特征在于,所述AI模型用于对最新传输的业务数据进行特征提取,根据提取的特征和所述业务数据之间的关联性输出压缩字典,及利用不同压缩算法以当前压缩字典进行压缩,输出压缩率最高所对应的压缩算法,并将实际数据使用输出的压缩字典和压缩算法得到的压缩率作为反馈输入,调整AI模型的模型参数。
  15. 根据权利要求13所述的数据传输设备,其特征在于,所述处理器还用于执行:
    根据预先设置信息,初始化当前采用的压缩字典和压缩算法;或者
    建立业务连接时,根据上一次完成业务传输时,AI模型输出的压缩字典和压缩算法,初始化当前采用的压缩字典和压缩算法。
  16. 根据权利要求13所述的数据传输设备,其特征在于,所述处理器确定满足更新条件,包括如下至少一个步骤:
    确定建立业务连接时,满足更新条件;
    确定建立业务连接后,到达设定的更新周期时,满足更新条件;
    确定满足事件性触发条件时,满足更新条件。
  17. 根据权利要求16所述的数据传输设备,其特征在于,所述处理器确定满足事件性触发条件,包括如下至少一个步骤:
    确定当前完成传输的业务数据的压缩率低于预设门限时,确定满足事件性触发条件;
    根据当前AI模型输出的压缩字典和压缩算法所预期的压缩率,与当前完成传输的业务数据的压缩率的差值大于预设值时,确定满足事件性触发条件。
  18. 根据权利要求13所述的数据传输设备,其特征在于,业务传输过程中,所述处理器还用于执行:
    获取本地的AI模型输出的压缩字典和压缩算法;或
    获取本地的AI模型输出的压缩字典和压缩算法,并发送给对端数据传输设备;或
    从对端数据传输设备获取利用AI模型输出的压缩字典和压缩算法;或
    从第三方设备获取利用AI模型输出的压缩字典和压缩算法,所述第三方设备为位于云或边缘的功能节点的设备;或
    从第三方设备获取利用AI模型输出的压缩字典和压缩算法,并发送给对端数据传输设备,所述第三方设备为位于云或边缘的功能节点的设备。
  19. 根据权利要求13所述的数据传输设备,其特征在于,所述处理器还用于执行:
    将最新传输的业务数据及当前完成传输的业务数据的压缩率发送至第三方设备。
  20. 一种进行数据压缩的第三方设备,其特征在于,包括存储器,收发机,处理器:
    存储器,用于存储计算机程序;收发机,用于在所述处理器的控制下收发数据;处理器,用于读取所述存储器中的计算机程序并执行以下操作:
    响应于数据传输设备的请求,获取当前业务最新传输的业务数据及当前完成传输的业务数据的压缩率;
    将最新传输的业务数据输入到AI模型,并利用AI模型输出压缩字典和压缩算法;
    将所述压缩字典和压缩算法发送至所述数据传输设备。
  21. 根据权利要求20所述的第三方设备,其特征在于,所述AI模型用于对最新传输的业务数据进行特征提取,根据提取的特征和所述业务数据之间的关联性输出压缩字典,及利用不同压缩算法以当前压缩字典进行压缩,输出压缩率最高所对应的压缩算法,并将实际数据使用输出的压缩字典和压缩算法得到的压缩率作为反馈输入,调整AI模型的模型参数。
  22. 根据权利要求20所述的第三方设备,其特征在于,将所述压缩字典和压缩算法发送至所述数据传输设备,所述处理器具体用于执行:
    确定满足更新条件时,将所述压缩字典和压缩算法发送至所述数据传输设备。
  23. 根据权利要求22所述的第三方设备,其特征在于,所述处理器确定满足更新条件,包括如下至少一个步骤:
    确定所述数据传输设备建立业务连接时,满足更新条件;
    确定所述数据传输设备建立业务连接后,到达设定的更新周期时,满足更新条件;
    确定满足事件性触发条件时,满足更新条件。
  24. 根据权利要求23所述的第三方设备,其特征在于,所述处理器确定满足事件性触发条件,包括如下至少一个步骤:
    确定当前完成传输的业务数据的压缩率低于预设门限时,确定满足事件性触发条件;
    根据当前AI模型输出的压缩字典和压缩算法所预期的压缩率,与当前完成传输的业务数据的压缩率的差值大于预设值时,确定满足事件性触发条件。
  25. 一种进行数据压缩的装置,其特征在于,包括:
    字典算法确定单元,用于业务传输过程中,确定当前采用的压缩字典和压缩算法,其中初始时采用初始化的压缩字典和压缩算法,之后确定满足更新条件时,利用AI模型输出的压缩字典和压缩算法,分别对应更新当前采用的压缩字典和压缩算法;
    压缩单元,用于基于当前采用压缩字典,利用当前采用的压缩算法对传输的业务数据进行压缩或解压缩。
  26. 根据权利要求25所述的装置,其特征在于,包括:所述AI模型用于对最新传输的业务数据进行特征提取,根据提取的特征和所述业务数据之间的关联性输出压缩字典,及利用不同压缩算法以当前压缩字典进行压缩,输出压缩率最高所对应的压缩算法,并将实际业务数据使用输出的压缩字典和压缩算法得到的压缩率作为反馈输入,调整AI模型的模型参数。
  27. 根据权利要求25所述的装置,其特征在于,所述字典算法确定单元确定满足更新条件,包括如下至少一个步骤:
    确定建立业务连接时,满足更新条件;
    确定建立业务连接后,到达设定的更新周期时,满足更新条件;
    确定满足事件性触发条件时,满足更新条件。
  28. 根据权利要求27所述的装置,其特征在于,所述字典算法确定单元确定满足事件性触发条件,包括如下至少一个步骤:
    确定当前完成传输的业务数据的压缩率低于预设门限时,确定满足事件性触发条件;
    根据当前AI模型输出的压缩字典和压缩算法所预期的压缩率,与当前完成传输的业务数据的压缩率的差值大于预设值时,确定满足事件性触发条件。
  29. 一种进行数据压缩的装置,其特征在于,包括:
    数据接收单元,用于响应于数据传输设备的请求,获取当前业务最新传输的业务数据及当前完成传输的业务数据的压缩率;
    字典算法生成单元,用于将最新传输的业务数据输入到AI模型,并利用AI模型输出压缩字典和压缩算法;
    数据发送单元,用于将所述压缩字典和压缩算法发送至所述数据传输设备。
  30. 根据权利要求29所述的装置,其特征在于,包括:所述AI模型用于对最新传输的业务数据进行特征提取,根据提取的特征和所述业务数据之间的关联性输出压缩字典,及利用不同压缩算法以当前压缩字典进行压缩,输出压缩率最高所对应的压缩算法,并将实际业务数据使用输出的压缩字典和压缩算法得到的压缩率作为反馈输入,调整AI模型的模型参数。
  31. 根据权利要求29所述的装置,其特征在于,所述数据发送单元确定满足更新条件,包括如下至少一个步骤:
    确定建立业务连接时,满足更新条件;
    确定建立业务连接后,到达设定的更新周期时,满足更新条件;
    确定满足事件性触发条件时,满足更新条件。
  32. 根据权利要求31所述的装置,其特征在于,所述数据发送单元确定满足事件性触发条件,包括如下至少一个步骤:
    确定当前完成传输的业务数据的压缩率低于预设门限时,确定满足事件性触发条件;
    根据当前AI模型输出的压缩字典和压缩算法所预期的压缩率,与当前完成传输的业务数据的压缩率的差值大于预设值时,确定满足事件性触发条件。
  33. 一种处理器可读存储介质,其特征在于,所述处理器可读存储介质存储有计算机程序,所述计算机程序用于使所述处理器执行权利要求1至7或权利要求8至12任一项所述的方法。
PCT/CN2021/121339 2020-11-03 2021-09-28 一种进行数据压缩的方法和装置及设备 WO2022095636A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011212245.2A CN114449579A (zh) 2020-11-03 2020-11-03 一种进行数据压缩的方法和装置及设备
CN202011212245.2 2020-11-03

Publications (1)

Publication Number Publication Date
WO2022095636A1 true WO2022095636A1 (zh) 2022-05-12


Country Status (2)

Country Link
CN (1) CN114449579A (zh)
WO (1) WO2022095636A1 (zh)

