CN111130698B - Wireless communication receiving window prediction method and device and wireless communication equipment - Google Patents

Wireless communication receiving window prediction method and device and wireless communication equipment

Info

Publication number
CN111130698B
Authority
CN
China
Prior art keywords
receiving
current
state
strategy
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911367541.7A
Other languages
Chinese (zh)
Other versions
CN111130698A (en)
Inventor
Gao Yingbin
Zheng Yunjun
Xia Weiwei
Yan Feng
Zhang Yinong
Shen Lianfeng
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zgmicro Nanjing Ltd
Original Assignee
Zgmicro Nanjing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zgmicro Nanjing Ltd
Priority to CN201911367541.7A
Publication of CN111130698A
Application granted
Publication of CN111130698B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00: Arrangements for detecting or preventing errors in the information received
    • H04L 1/0001: Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L 1/0036: Systems modifying transmission characteristics according to link quality, e.g. power backoff; arrangements specific to the receiver
    • H04L 1/0015: Systems modifying transmission characteristics according to link quality, e.g. power backoff; characterised by the adaptation strategy
    • H04L 1/0019: Systems modifying transmission characteristics according to link quality, e.g. power backoff; characterised by the adaptation strategy in which mode-switching is based on a statistical approach
    • H04L 1/002: Algorithms with memory of the previous states, e.g. Markovian models
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 52/00: Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W 52/02: Power saving arrangements
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present invention relates to the field of wireless communication technologies, and in particular to a method and an apparatus for predicting a wireless communication receiving window, and a wireless communication device. The wireless communication receiving window prediction method has a training mode and a utilization mode. In the training mode, global information is obtained, comprising the current state information and information on whether the receiving action taken in the current time slot was correct; deep reinforcement learning training is performed on the basis of the global information to obtain an optimal receiving strategy, and the current receiving strategy is updated with the optimal receiving strategy. In the utilization mode, the current state information is obtained, and a receiving control signal is generated according to the current state information, based on the current receiving strategy obtained by deep reinforcement learning training, to decide whether to execute a data receiving action in the current time slot. The current state information indicates which receiving time slot after the most recent completed data packet reception the current time slot is. The invention avoids redundant reception and reduces receiving power consumption.

Description

Wireless communication receiving window prediction method and device and wireless communication equipment
Technical Field
The present invention relates to the field of wireless communication technologies, and in particular, to a method and an apparatus for predicting a wireless communication receiving window, and a wireless communication device.
Background
A typical prior-art wireless communication device mainly comprises a processing unit and a wireless signal transceiver unit. The wireless signal transceiver unit includes a radio frequency antenna, a power amplifier, and other functional modules, and is responsible for data communication with other wireless communication devices based on a predetermined wireless communication protocol. The processing unit is responsible for the control, management, and data processing of the various functions of the wireless communication device, and generally includes a host controller (Host Controller), a link manager (Link Manager), a link controller (Link Controller), and a hardware abstraction layer (Hardware Abstraction Layer); the link controller is mainly used to control the wireless signal transceiver unit to transmit and receive data based on the wireless communication protocol.
In a Time Division Duplex (TDD) wireless communication system, the transmit and receive states of the communication devices usually alternate at fixed time intervals, called time slots. In a conventional communication system, the transmitting device does not transmit data in every transmission slot; that is, some slots are idle. For the receiving device, the link control module generally controls the wireless signal transceiver unit to open a data receiving window in one of two ways:
One is to keep the receiving window open in every receiving time slot, i.e., to start radio frequency reception in every receiving time slot, even redundant ones. For example, on an ACL (Asynchronous Connection-Less) link of a Bluetooth system, after a data packet is successfully transmitted, the next packet is often not transmitted until some 30 slots later. If the receiving device stays in the radio frequency receiving state during these idle time slots, the reliability of data reception is ensured, but the redundant receiving time slots generate unnecessary power consumption.
The other is to open the receiving window only in receiving time slots agreed in advance with the communication peer. For example, the link policy (Link Policy) of some communication systems is defined in the communication protocol: in the Bluetooth specification, the connection state of a communication device is divided into active mode, sniff mode, hold mode, park mode, and so on. In sniff mode, the receiving device receives only in time slots agreed upon by both the transmitting and receiving sides, but such data reception based on a fixed rule cannot adapt to changes in the communication environment; in active mode, the receiving device performs radio frequency reception in every receiving time slot, so power consumption remains high.
Disclosure of Invention
The invention overcomes the above defects and provides a wireless communication receiving window prediction method and apparatus, and a wireless communication device, which avoid redundant reception by predicting the receiving window, thereby reducing receiving power consumption.
The technical scheme adopted by the invention for solving the technical problems is as follows:
A method for predicting a wireless communication receiving window, which has a training mode and a utilization mode,
while in the training mode, comprising the steps of,
obtaining global information including current state information and information whether a reception action taken in a current time slot is correct,
performing deep reinforcement learning training based on the global information to obtain an optimal receiving strategy,
updating the current receiving strategy with the optimal receiving strategy;
in the utilization mode, comprising the steps of,
obtaining the current state information,
generating a receiving control signal according to the current state information and based on a current receiving strategy obtained by deep reinforcement learning training to decide whether to execute a data receiving action in the current time slot,
the current state information indicates which receiving time slot after the most recent completed data packet reception the current time slot is.
Further, a deep reinforcement learning technique is adopted to obtain the current receiving strategy or the optimal receiving strategy, wherein, according to a state space formed by the state information, an action space formed by the data receiving actions executed by the wireless communication device, a state transition rule function, and a reward function, a predetermined decision process is adopted for modeling, a reinforcement learning task is executed, an action strategy is generated, and the action strategy is iteratively updated according to the multi-step discounted cumulative reward until the optimal receiving strategy or the current receiving strategy is obtained by convergence; wherein the receiving strategy is the probability distribution of the receiving action over the state space.
Further, in the utilization mode, a specific method for selecting whether to perform reception is as follows:
generating a random number uniformly distributed over the interval [0,1) and comparing it with the receiving probability of executing a receiving action in the current state; reception is selected when the random number is less than or equal to the receiving probability, and not selected otherwise.
Further, when the result of the deep reinforcement learning training reaches convergence in the training mode, the optimal receiving strategy is obtained, the training mode ends, and the mode switches to the utilization mode; when in the utilization mode, if data reception does not meet the reliability requirement, the mode switches to the training mode.
Further, data reception failing to meet the reliability requirement includes a packet being missed or the packet loss rate being greater than a set threshold during data reception.
Further, when the probability distribution of the receiving action over the state space attains a peak at a state p, the peak state p and the k consecutive states before and after it are set as forced receiving states, in which the data receiving action is executed, where k is a positive integer.
Further, in the utilization mode, after a data packet is received from the transmitting end, the following m consecutive states are set as forced receiving states, in which the data receiving action is continuously executed, where m is a positive integer.
Further, in the utilization mode, if a received data packet fails the CRC check or is incomplete, the data receiving action continues to be executed until a received data packet passes the CRC check and has no header error.
Further, the predetermined decision process is a Markov decision process;
the reinforcement learning task is modeled by the Markov decision process and represented as a four-tuple E = <S, A, P, R>, where S is the state space, A is the action space, P is the state transition function, and R is the reward function, and the output of the reinforcement learning task is the receiving strategy π;
the action space is A = {0, 1}, where action a = 0 indicates that radio frequency reception is turned off in the current time slot, and a = 1 indicates that radio frequency reception is turned on in the current time slot;
the state space is expressed as a set S ═ {1,2, L, i, L, N }, wherein the state i represents that the current time slot is the ith receiving time slot after the latest data packet receiving is completed, and N represents the maximum receiving time slot number between two adjacent received data packets; wherein
Figure GDA0003514164520000041
njIndicates the idle time slot of the jth transmitting device, L indicates the n of L kinds of the transmission devices in the setj
The state transition function P is specifically: if action a = 0 is executed in state i, that is, reception is not performed, the state transitions to state i+1; if action a = 1 is executed, i.e., reception is performed, the state transition depends on the reception result: if no packet is received, the state transitions to state i+1; if a packet is received, the state returns to state 1 after reception is completed;
the reward function R specifically is that in the iterative process of the deep reinforcement learning task, according to the global information, the action a is selected to be scored under the state s to serve as feedback of a deep reinforcement learning model, and subsequent actions are adjusted according to the current reward;
the probability that a is the correct action to be selected in state s is pi (s, a), with the probability normalization condition:
Σ_{a∈A} π(s, a) = 1;
the iterative mode of the deep reinforcement learning task adopts a Q-learning algorithm, the algorithm is converged to an optimal receiving strategy by maximizing the Q value,
π*(s) = argmax_{a∈A} Q(s, a);
estimating the Q value by using a deep neural network, wherein the definition of the convergence of the algorithm is as follows:
|π_t(s, a) - π_{t-1}(s, a)|_max ≤ 10^(-6).
a wireless communication receiving window prediction device is used for controlling data transceiving of a wireless signal transceiving unit and comprises a link control module and a protection module, wherein the device is provided with a training mode and a utilization mode,
when the device is operating in the training mode,
a prediction module for obtaining global information from the link control module, performing deep reinforcement learning training to obtain an optimal receiving strategy, and updating the current receiving strategy in the protection module with the optimal receiving strategy,
wherein the global information includes current state information and information whether a receiving action taken by the wireless communication device in a current time slot is correct;
when the device is operating in the utilization mode,
the protection module is used for receiving the current state information from the link control module and generating a receiving control signal for deciding whether to execute a data receiving action at the current time slot or not based on the current receiving strategy obtained by deep reinforcement learning training,
the link control module is used for controlling the wireless signal transceiving unit to receive data based on the receiving control signal from the protection module,
and the current state information indicates which receiving time slot after the most recent completed data packet reception the current time slot is.
Further, when the device is in the training mode and the result of the deep reinforcement learning training reaches convergence, the optimal receiving strategy is obtained, the training mode ends, and the device switches to the utilization mode; when the device is in the utilization mode, if data reception does not meet the reliability requirement, the device switches to the training mode; when the device is in the training mode, the link control module controls the wireless signal transceiver unit to keep receiving data in every receiving time slot.
Further, the current receiving strategy in the protection module is obtained by offline training, wherein global information collected in advance is utilized on an offline platform, is obtained by deep reinforcement learning training, and is loaded into the protection module and/or the prediction module; or, the current receiving strategy in the protection module is obtained by on-line training, wherein when the prediction module works in a training mode for the first time, the prediction module obtains the current receiving strategy through deep reinforcement learning training from zero by using global information provided by the link control module, and loads the current receiving strategy into the protection module.
Further, data reception failing to meet the reliability requirement includes a packet being missed or the packet loss rate being greater than a set threshold during data reception.
Further, the processing unit of a wireless communication device comprises the wireless communication receiving window prediction apparatus according to one of claims 10 to 13 and executes the wireless communication receiving window prediction method according to one of claims 1 to 9.
When the invention works in the utilization mode, the current state information is acquired and a receiving control signal is generated based on the current receiving strategy obtained by deep reinforcement learning training, so as to decide whether to execute the data receiving action in the current time slot; the receiving strategy is thus used to predict the receiving window and control data reception, which avoids redundant reception and reduces receiving power consumption.
Furthermore, the invention also has a training mode, in which deep reinforcement learning training is performed with the acquired global information to obtain and update the current receiving strategy, so that the receiving strategy is continuously optimized. After data has been received for a period of time, as the actual channel conditions change, if the current receiving strategy has drifted so far that data reception no longer meets the reliability requirement, the device switches back to the training mode to update the receiving strategy. Redundant reception can thus be effectively avoided while reliable reception is ensured.
Drawings
Fig. 1 is a schematic block diagram of a wireless communication receive window prediction apparatus according to a preferred embodiment of the present invention;
FIG. 2 is a schematic block diagram of a wireless communication device according to a preferred embodiment of the present invention;
fig. 3 is a flowchart of a method for predicting a wireless communication reception window according to a preferred embodiment of the present invention;
FIG. 4 is a diagram illustrating state transition rules in accordance with a preferred embodiment of the present invention.
Detailed Description
The core idea of the invention is to use a deep reinforcement learning technique to predict the receiving window of a wireless communication device, so as to decide whether to execute a data receiving action in the current time slot, thereby avoiding redundant reception as much as possible and effectively reducing the power consumption of the wireless communication device.
The technical solution of the present invention is described in further detail below with reference to specific embodiments. To make the technical solutions and advantages of the embodiments of the present application clearer, exemplary embodiments of the present application are described in further detail below with reference to the accompanying drawings; obviously, the described embodiments are only some of the embodiments of the present application, not all of them. It should be noted that, as long as they do not conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
Example 1
Fig. 1 is a schematic block diagram of the wireless communication receiving window prediction apparatus of embodiment 1 of the present invention. The apparatus includes a link control module, a protection module, and a prediction module, and has a utilization mode and a training mode.
When the apparatus works in the utilization mode, the protection module is used to receive the current state information from the link control module and to generate, based on the current receiving strategy obtained through deep reinforcement learning training, a receiving control signal that decides whether to execute a data receiving action in the current time slot; the link control module is used to control the wireless signal transceiver unit to receive data based on the receiving control signal from the protection module.
The current state information indicates which receiving time slot after the most recent completed data packet reception the current time slot is.
The current receiving strategy can be pre-stored in the protection module or obtained through the training of a prediction module.
When the apparatus works in the training mode, the prediction module is used to obtain global information from the link control module, perform deep reinforcement learning training to obtain the optimal receiving strategy, and update the current receiving strategy in the protection module according to the optimal receiving strategy; the global information includes the current state information and information on whether the receiving action taken by the wireless communication device in the current time slot was correct.
The deep reinforcement learning task can be realized with various existing or future deep reinforcement learning techniques. As a preferred implementation, obtaining the current receiving strategy or the optimal receiving strategy by the deep reinforcement learning technique may proceed as follows: according to the state space S formed by the state information s (s ∈ S), the action space A formed by the data receiving actions a executed by the wireless communication device (a ∈ A), the state transition rule function P, and the reward function R, a predetermined decision process is used for modeling; the reinforcement learning task is executed to generate an action strategy, and the action strategy is iteratively updated according to the multi-step discounted cumulative reward until the optimal receiving strategy or the current receiving strategy is obtained by convergence;
wherein the receiving strategy is a probability distribution of the receiving action in a state space.
The utilization mode and the training mode can be switched into each other. When the apparatus is in the training mode and the result of the deep reinforcement learning training reaches convergence, the optimal receiving strategy is obtained, the training mode ends, and the apparatus switches to the utilization mode; when the apparatus is in the utilization mode, if data reception does not meet the reliability requirement, it switches to the training mode. Through the receiving control signal output by the protection module, the link control module can receive intelligently, turning radio frequency reception off in redundant time slots to reduce power consumption.
Example 2
Embodiment 2 of the present invention provides a wireless communication device. Fig. 2 is a schematic block diagram of the wireless communication device of this embodiment, which includes a processing unit and a wireless signal transceiver unit; the processing unit may be implemented as a chip, and the wireless signal transceiver unit is composed of a radio frequency antenna, a power amplifier, and the like. The processing unit includes a wireless communication receiving window prediction apparatus composed of a link control module, a protection module, and a prediction module, which controls the data transmission and reception of the wireless signal transceiver unit.
Example 3
Embodiment 3 of the present invention provides a method for predicting a wireless communication receiving window. Fig. 3 is a flowchart of the method provided in this embodiment, which may include a training mode and a utilization mode:
In the training mode, global information is acquired, comprising the current state information and information on whether the receiving action taken in the current time slot was correct; deep reinforcement learning training is performed based on the global information, and the optimal receiving strategy is obtained when the result of the deep reinforcement learning training converges; the current receiving strategy is updated with the optimal receiving strategy, the training mode ends, and the mode switches to the utilization mode.
In the utilization mode, the current state information is acquired, and a receiving control signal is generated according to the current state information and the current receiving strategy obtained by deep reinforcement learning training, so as to determine whether to execute a data receiving action in the current time slot.
If data reception does not meet the preset reliability requirement, the device switches to the training mode. Failure to meet the reliability requirement includes, but is not limited to, the following cases: a packet is missed, or the packet loss rate is greater than a set threshold, during data reception.
The method and apparatus for predicting the wireless communication receiving window in the above embodiments and the above wireless communication device will be further described in detail below.
In the embodiments of the present invention, the wireless communication receiving window prediction apparatus and the wireless communication device can provide two working modes, the training mode and the utilization mode, which can be switched into each other. In fig. 2, the utilization mode is shown in solid lines and the training mode in dashed lines.
The device may be in the utilization mode when operating normally. The protection module receives the current state information from the link control module and, based on the current receiving strategy obtained through deep reinforcement learning training, generates a receiving control signal that decides whether a data receiving action is executed in the current time slot; the link control module controls whether the wireless signal transceiver unit executes the receiving action in the current time slot based on the receiving control signal from the protection module.
After the device has been in the utilization mode for a period of time, as the actual channel conditions change, it switches to the training mode if data reception no longer meets the preset reliability requirement. The link control module then starts the prediction module, enters the training mode, and trains the deep reinforcement learning of the prediction module. After training finishes, the link control module switches the device back to the utilization mode. Because the receiving strategy adopted by the wireless communication device is obtained through deep reinforcement learning training, wireless data reception can be controlled intelligently, redundant reception is avoided, and the device stays in a low power consumption mode. In addition, the device can switch repeatedly between the training mode and the utilization mode during operation, adjusting the receiving strategy in time to adapt to different communication environments.
In addition, in a preferred embodiment, the link control module may ignore the receiving control signal from the protection module during training, i.e., not act on the output of the prediction module, and instead control the wireless signal transceiver unit to keep receiving data in every receiving time slot; the normal wireless signal receiving activity of the device is thus unaffected, which effectively copes with poor channel conditions.
The prediction module executes the deep reinforcement learning task; the training process iterates continuously according to the global information and the output feedback until convergence.
As a preferred embodiment, the deep reinforcement learning training may be modeled by a Markov decision process, represented as a four-tuple E = <S, A, P, R>, where S is the state space, A is the action space, P is the state transition rule function, and R is the reward function; the output of the reinforcement learning task is the receiving strategy π.
The action space is denoted A = {0, 1}, where action a = 0 denotes that radio frequency reception is turned off in the current time slot, and a = 1 denotes that radio frequency reception is turned on in the current time slot.
The state space is represented as the set S = {1, 2, …, i, …, N}, where state i represents that the current slot is the i-th receive slot after the last completed packet reception, and N represents the maximum number of receive slots between adjacently received packets. If the two communicating parties communicate in a time division duplex manner, for example in the Bluetooth specification where the receiving time slots of the slave device are the even time slots, the state space does not include the odd time slots, and state i of the state space may correspond to time slot 2i-1 of the Bluetooth slot numbering; that is, counting starts from the first master transmit slot after each completed data packet reception.
As shown in fig. 4, in state i, if action a = 0 is executed, i.e., reception is not performed, the state transition rule function P transitions to state i+1; if action a = 1 is executed, i.e., radio frequency reception is turned on, the state transition depends on the reception result: if no data packet is received, the state transitions to state i+1; if a data packet is received, the state returns directly to state 1 after reception is completed, without entering state i+1. It should be noted that a received data packet may occupy multiple slots; for example, the 2-DH5 and 3-DH5 packets in Bluetooth are 5-slot packets. The state transition is performed only after reception is completed, and the state remains unchanged during reception.
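To make the transition rule concrete, it can be written as a few lines of Python; the following is a minimal sketch for illustration only (the function name and the packet_arrived flag are assumptions, not part of the patent):

    def next_state(state: int, action: int, packet_arrived: bool) -> int:
        # One step of the state transition rule P of fig. 4.
        # state:  i, the receive-slot index since the last completed reception
        # action: 0 = keep radio frequency reception off, 1 = turn it on
        # packet_arrived: whether a packet arrived while receiving
        if action == 1 and packet_arrived:
            # Reception completed; multi-slot packets such as 2-DH5/3-DH5
            # hold the state until done, then counting restarts at 1.
            return 1
        # Radio off, or radio on but nothing received: next receive slot.
        return state + 1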
As one embodiment, special cases, such as a received data packet failing the CRC check and having to be discarded while waiting for retransmission by the transmitting end, need not be handled in the training mode; they may instead be handled in the utilization mode to ensure the reliability of communication. Thus, in the training mode, once the device receives a packet from the sender, it assumes by default that the packet was received correctly and returns from state s directly to state 1 (setting the state variable to 1) after reception is completed.
For different transmitting devices or different QoS conditions, the slot interval from the successful transmission of the current data packet to the transmission of the next one (i.e., the idle slots) differs, and is denoted as the set {n_1, n_2, …, n_L}. The choice of N for the state space must satisfy

N ≥ n_j, j = 1, 2, …, L,    (1)

where n_j indicates the idle-slot count of the j-th transmitting device; for example, a brand-A mobile phone may have 38 idle time slots, denoted n_1 = 38, a brand-B mobile phone 36 idle time slots, denoted n_2 = 36, and so on. L denotes the total number of such n_j in the set. Different values in the set correspond to different global information in practice, so the reinforcement learning process converges to different results and yields different receiving strategies.
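As a worked example under formula (1) (the dictionary below just restates the n_j values from the text; the variable names are illustrative):

    # Example n_j values: brand A has 38 idle slots, brand B has 36.
    idle_slots = {"brand_A": 38, "brand_B": 36}

    # Any N >= n_j for all j satisfies formula (1); take the smallest such N.
    N = max(idle_slots.values())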
The reward function R scores the chosen action a in state s according to the global information (the current time slot and whether the receiving action taken was correct) in the iterative process, as feedback, and subsequent actions are adjusted according to the current reward. Choosing reception when there is a packet to send, and choosing non-reception when there is no packet to send, are defined as correct receiving actions; the opposite choices are incorrect. If the correct receiving action is chosen in the current state, the score should be positive; otherwise it is negative, i.e., the action is penalized. The specific reward values are shown in Table 1.
TABLE 1 Reward value settings

State of transmitting end      a = 0 (no reception)    a = 1 (reception)
No data packet to send                 +2                     -1
Data packet to send                    -5                     +3
If the transmitting end does not send data in the current state, choosing action a = 0 is correct, since power consumption is reduced, and scores +2, while a = 1 is an incorrect action, redundant reception, and scores -1. If the transmitting end has a data packet to send in the current state, choosing a = 1 is the correct action and scores +3, while choosing a = 0 misses the packet, a serious error, and scores -5. The choice of action in the current state depends on the size of the discounted cumulative reward of the preceding time slots.
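Written out, the scoring of Table 1 is a single function; the sketch below is illustrative (the function name is an assumption, the values are those of Table 1):

    def reward(action: int, packet_to_send: bool) -> int:
        # Reward function R of Table 1.
        if packet_to_send:
            # A packet is on the air: receiving is correct (+3),
            # sleeping misses the packet, a serious error (-5).
            return 3 if action == 1 else -5
        # Idle slot: sleeping saves power (+2), receiving is redundant (-1).
        return 2 if action == 0 else -1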
The receiving strategy is the output result of the reinforcement learning task and can be expressed as the probability distribution of the action in the state space, if the probability that a is the correct action to be selected in the state s is recorded as pi (s, a), the probability normalization condition is provided
Σ_{a∈A} π(s, a) = 1.    (2)
The iterative mode selects a Q-learning algorithm in reinforcement learning, and the algorithm is converged to an optimal receiving strategy by maximizing the Q value
π*(s) = argmax_{a∈A} Q(s, a).    (3)
In order to improve the convergence speed and accuracy of the algorithm, a deep neural network is used for estimating the Q value. The convergence of the algorithm is defined as
|π_t(s, a) - π_{t-1}(s, a)|_max ≤ 10^(-6).    (4)
At this point, the output of the prediction module is the receiving strategy π. In the utilization mode, the protection module uses the receiving strategy π output by the prediction module to generate the receiving control signal from the current state information provided by the link control module, so that the link control module controls the transceiving state of the wireless signal transceiver unit under the action of the receiving control signal.
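For orientation, the iteration can be sketched as tabular Q-learning; note that the patent estimates Q with a deep neural network rather than a table, and the learning rate, discount factor, and exploration rate below are assumptions rather than values from the patent:

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # assumed hyper-parameters

    Q = defaultdict(float)   # tabular stand-in for the deep Q-network

    def select_action(s: int) -> int:
        # Epsilon-greedy action selection during training.
        if random.random() < EPSILON:
            return random.randint(0, 1)                  # explore
        return max((0, 1), key=lambda a: Q[(s, a)])      # exploit

    def q_update(s: int, a: int, r: float, s_next: int) -> None:
        # Standard Q-learning update; drives the policy toward
        # the max-Q behaviour of formula (3).
        target = r + GAMMA * max(Q[(s_next, 0)], Q[(s_next, 1)])
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])

    def greedy_policy(s: int) -> int:
        # pi*(s) = argmax_a Q(s, a); training stops once successive
        # policies differ by less than 10^(-6), as in formula (4).
        return max((0, 1), key=lambda a: Q[(s, a)])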
In the utilization mode, the specific way in which the protection module selects whether to receive is as follows: a random number q uniformly distributed over the interval [0,1) is generated and compared with the receiving probability. For example, if the probability of executing a receiving action in the current state s is r, i.e., π(s, a=1) = r, then reception is selected when q ≤ r and not selected when q > r.
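A minimal sketch of this decision (names are illustrative):

    import random

    def decide_receive(r: float) -> bool:
        # r = pi(s, a=1), the receive probability for the current state s.
        q = random.random()   # uniform on [0, 1)
        return q <= r         # receive iff q <= r, as described above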
To handle various special situations and ensure the reliability of reception, one or more of the following receiving strategies can also be adopted in the utilization mode (a combined sketch of rules 1) to 3) follows the list):
1) When the probability distribution that the protection module derives from the current state information under the current receiving strategy attains a peak at a state p, the protection module sends a receiving control signal so that the link control module executes a receiving action in the peak state. To improve the robustness of the system, the peak state and the k consecutive states before and after it can be set as forced receiving states in which the wireless signal transceiver unit is controlled to execute the receiving action, where k is a positive integer. For example, when k is 3, π(s, a=1) = 1 is set for s ∈ (p-3, p+3), i.e., the peak state and the 3 states on either side of it are set as forced receiving states.
2) The m consecutive states after the device receives a data packet from the transmitting end are set as forced receiving states; for example, when m is 3, π(s, a=1) = 1 for s = 1, 2, 3. The consideration here is that the sender may retransmit data packets after the current data packet has been received, for example when it fails to receive the returned ACK, and the receiver must still receive them. Thus, in one embodiment, m can be set with reference to the retransmission count.
3) If a received data packet fails the CRC check or is incomplete, the current receiving strategy obtained by deep reinforcement learning training is not used; instead, the wireless signal transceiver unit is controlled to receive continuously while waiting for retransmission by the transmitting end. Once a data packet is received that passes the CRC check and has no header error, the control signal is again generated using the current receiving strategy obtained by deep reinforcement learning training.
4) Normally, the state transition process returns to state 1 before reaching state N, i.e., state N does not occur. If state N does occur, or the T_poll-timeout of the Bluetooth specification is exceeded, a packet has been missed because of a large deviation, and the system considers that data reception no longer meets the reliability requirement. The system switches to the training mode to update the strategy; after training finishes and the algorithm converges as in formula (4), the optimal action strategy is obtained again and the system switches back to the utilization mode.
5) In the utilization mode, after data has been received for a period of time, as the actual channel conditions change, if the packet loss rate counted by the system is too high, data reception is considered not to meet the reliability requirement, and the system switches back to the training mode to update the receiving strategy.
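The following sketch shows one way rules 1) to 3) could be layered on top of the learned strategy; everything in it (names, the k = m = 3 defaults) is illustrative, not mandated by the patent:

    import random

    def guarded_receive(state: int, r: float, peak: int,
                        last_packet_bad: bool, k: int = 3, m: int = 3) -> bool:
        # state: current state i; r: pi(state, a=1); peak: state p where the
        # receive probability peaks; last_packet_bad: the last packet failed
        # the CRC check or was incomplete.
        if last_packet_bad:          # rule 3: receive until retransmission
            return True
        if abs(state - peak) <= k:   # rule 1: forced reception around peak p
            return True
        if state <= m:               # rule 2: forced reception after a packet
            return True
        return random.random() <= r  # otherwise sample the learned strategy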
The training-mode process in the embodiments of the present invention can be carried out either online or offline, and the current receiving strategy in the protection module can likewise be obtained by offline or online training. For offline training, global information is collected in advance by capturing data packets, the model is trained with the collected global information on an offline platform such as Windows/Linux, and the training result, i.e., the receiving strategy, is loaded into the protection module as the initial receiving strategy. For online training, when the prediction module works in the training mode for the first time, the current receiving strategy is obtained from scratch through the deep reinforcement learning training of the prediction module, using the global information provided by the link control module. If offline training is adopted, the prediction module only needs to update the strategy the first time it runs, without training the model from scratch; since the initial receiving strategy has already been trained offline and is close to the converged receiving strategy, convergence is reached faster, which effectively reduces the amount of computation.
Example 4
Unlike the wireless communication receiving window prediction apparatus described in embodiment 1, the apparatus of this embodiment includes only a link control module and a protection module, and has only the utilization mode. The current receiving strategy in the protection module, obtained by deep reinforcement learning training, can be pre-trained on an offline platform and updated by subsequent training on that platform. The apparatus of this embodiment may also collect global information to provide to the offline platform, initiating the training and updating of a new receiving strategy.
For the working principle of the apparatus of this embodiment, reference may be made to the principles and methods in the foregoing embodiments; a detailed description is therefore omitted.
The method and apparatus for predicting a wireless communication receiving window and the wireless communication device provided in the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, for those skilled in the art, there may be variations in the specific embodiments and application scope according to the idea of the present invention. In summary, the above description is only a specific embodiment of the present invention and is not intended to limit its protection scope; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
It will further be appreciated by those of ordinary skill in the art that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, in computer software, or in a combination of the two; to illustrate the interchangeability of hardware and software clearly, the components and steps of the examples have been described above in general terms of their functionality. Whether these functions are performed in hardware or in software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.

Claims (14)

1. A method for predicting a wireless communication receiving window, characterized in that: the method has a training mode and a utilization mode,
while in the training mode, comprising the steps of,
obtaining global information including current state information and information whether a reception action taken in a current time slot is correct,
performing deep reinforcement learning training based on the global information to obtain an optimal receiving strategy,
updating the current receiving strategy with the optimal receiving strategy;
in the utilization mode, comprising the steps of,
obtaining the current state information,
generating a receiving control signal according to the current state information and based on a current receiving strategy obtained by deep reinforcement learning training to decide whether to execute a data receiving action in the current time slot,
the current state information indicates which receiving time slot after the most recent completed data packet reception the current time slot is.
2. The method of claim 1, wherein: a deep reinforcement learning technique is adopted to obtain the current receiving strategy or the optimal receiving strategy, wherein,
according to a state space formed by the state information, an action space formed by the data receiving actions executed by the wireless communication device, a state transition rule function, and a reward function, a predetermined decision process is adopted for modeling, a reinforcement learning task is executed, an action strategy is generated, and the action strategy is iteratively updated according to the multi-step discounted cumulative reward until the optimal receiving strategy or the current receiving strategy is obtained by convergence;
wherein the receiving strategy is a probability distribution of the receiving action in a state space.
3. The method of claim 2, wherein: in the utilization mode, the specific method for selecting whether to receive or not is as follows:
generating a random number uniformly distributed over the interval [0,1) and comparing it with the receiving probability of executing a receiving action in the current state; reception is selected when the random number is less than or equal to the receiving probability, and not selected otherwise.
4. The method of claim 2, wherein: when the result of the deep reinforcement learning training reaches convergence in the training mode, obtaining an optimal receiving strategy, ending the training mode, and switching to a utilization mode;
and when in the utilization mode, if data reception does not meet the reliability requirement, switching to the training mode.
5. The method of claim 4, wherein: data reception failing to meet the reliability requirement includes a packet being missed or the packet loss rate being greater than a set threshold during data reception.
6. The method of claim 2, wherein: when the probability distribution of the receiving action over the state space attains a peak at a state p, the peak state p and the k consecutive states before and after it are set as forced receiving states, in which the data receiving action is executed, where k is a positive integer.
7. The method for predicting a wireless communication receiving window according to one of claims 1 to 6, wherein: in the utilization mode, after a data packet is received from the transmitting end, the following m consecutive states are set as forced receiving states, in which the data receiving action is continuously executed, where m is a positive integer.
8. The method of claim 7, wherein: in the utilization mode, if a received data packet fails the CRC check or is incomplete, the data receiving action continues to be executed until a received data packet passes the CRC check and has no header error.
9. The method of any of claims 2 to 6, wherein: the predetermined decision process is a Markov decision process;
the reinforcement learning task is modeled by a Markov decision process and represented as a four-tuple E = <S, A, P, R>, where S is the state space, A is the action space, P is the state transition function, and R is the reward function, and the output of the reinforcement learning task is the receiving strategy π;
the action space is A = {0, 1}, where action a = 0 indicates that radio frequency reception is turned off in the current time slot, and a = 1 indicates that radio frequency reception is turned on in the current time slot;
the state space is represented as the set S = {1, 2, …, i, …, N}, where state i represents that the current slot is the i-th receiving slot after the latest packet reception is completed, and N represents the maximum number of receiving slots between two adjacently received packets; wherein

N ≥ n_j, j = 1, 2, …, L,

n_j indicates the idle time slot count of the j-th transmitting device, and L indicates the number of such n_j in the set;
The state transition function P is specifically: if the execution of action a is 0 in state i, that is, if no reception is performed, the state transitions to state i + 1; if the action a is 1, i.e. reception is performed, the state transition depends on the reception result, if no packet is received, the state transition is to the state i +1, if a packet is received, the state returns to the state 1 after reception is completed;
the reward function R is specifically that, in the iterative process of the deep reinforcement learning task, the choice of action a in state s is scored according to the global information, as feedback to the deep reinforcement learning model, and subsequent actions are adjusted according to the current reward;
the probability that a is the correct action to be selected in state s is pi (s, a), with the probability normalization condition:
Σ_{a∈A} π(s, a) = 1;
the iterative mode of the deep reinforcement learning task adopts a Q-learning algorithm, the algorithm is converged to an optimal receiving strategy by maximizing the Q value,
π*(s) = argmax_{a∈A} Q(s, a);
estimating the Q value by using a deep neural network, wherein the definition of the convergence of the algorithm is as follows:
|π_t(s, a) - π_{t-1}(s, a)|_max ≤ 10^(-6).
10. a wireless communication reception window prediction apparatus for controlling data transmission and reception of a wireless signal transmission and reception unit, comprising: comprises a link control module and a protection module, the device has a training mode and a utilization mode,
when the device is operated in the training mode,
a prediction module for obtaining the global information from the link control module, performing deep reinforcement learning training, obtaining the optimal receiving strategy, and updating the current receiving strategy in the protection module with the optimal receiving strategy,
wherein the global information includes current state information and information whether a receiving action taken by the wireless communication device in a current time slot is correct;
when the device is operating in the utilization mode,
the protection module is used for receiving the current state information from the link control module and generating a receiving control signal for deciding whether to execute a data receiving action at the current time slot or not based on the current receiving strategy obtained by deep reinforcement learning training,
the link control module is used for controlling the wireless signal transceiving unit to receive data based on the receiving control signal from the protection module,
the current state information indicates which receiving time slot after the most recent completed data packet reception the current time slot is.
11. The wireless communication reception window prediction apparatus according to claim 10, wherein: when the device is in a training mode and the result of the deep reinforcement learning training reaches convergence, obtaining an optimal receiving strategy, ending the training mode, and switching the device to a utilization mode;
when the device is in the utilization mode, if the receiving of the data does not meet the reliability requirement, the device is switched to the training mode;
when the device is in the training mode, the link control module controls the wireless signal transceiver unit to keep receiving data in every receiving time slot.
12. The wireless communication receiving window prediction apparatus according to claim 10 or 11, wherein: the current receiving strategy in the protection module is obtained by offline training, wherein global information collected in advance is used on an offline platform, the strategy is obtained by deep reinforcement learning training, and it is loaded into the protection module and/or the prediction module; or,
the current receiving strategy in the protection module is obtained by on-line training, wherein when the prediction module works in a training mode for the first time, the current receiving strategy is obtained by utilizing global information provided by the link control module through deep reinforcement learning training from zero, and the current receiving strategy is loaded into the protection module.
13. The wireless communication receiving window prediction apparatus according to claim 11, wherein: data reception failing to meet the reliability requirement includes a packet being missed or the packet loss rate being greater than a set threshold during data reception.
14. A wireless communication device comprising a processing unit and a wireless signal transceiving unit, characterized in that: the processing unit comprises a wireless communication reception window prediction apparatus according to one of claims 10 to 13 and performs a wireless communication reception window prediction method according to one of claims 1 to 9.
CN201911367541.7A 2019-12-26 2019-12-26 Wireless communication receiving window prediction method and device and wireless communication equipment Active CN111130698B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911367541.7A CN111130698B (en) 2019-12-26 2019-12-26 Wireless communication receiving window prediction method and device and wireless communication equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911367541.7A CN111130698B (en) 2019-12-26 2019-12-26 Wireless communication receiving window prediction method and device and wireless communication equipment

Publications (2)

Publication Number Publication Date
CN111130698A CN111130698A (en) 2020-05-08
CN111130698B (en) 2022-05-31

Family

ID=70503134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911367541.7A Active CN111130698B (en) 2019-12-26 2019-12-26 Wireless communication receiving window prediction method and device and wireless communication equipment

Country Status (1)

Country Link
CN (1) CN111130698B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240152736A1 (en) * 2021-03-18 2024-05-09 Telefonaktiebolaget Lm Ericsson (Publ) Systems, methods, computer programs for predicting whether a device will change state
CN116627041B (en) * 2023-07-19 2023-09-29 江西机电职业技术学院 Control method for motion of four-foot robot based on deep learning

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018083671A1 (en) * 2016-11-04 2018-05-11 Deepmind Technologies Limited Reinforcement learning with auxiliary tasks
US10423861B2 (en) * 2017-10-16 2019-09-24 Illumina, Inc. Deep learning-based techniques for training deep convolutional neural networks
US10417556B1 (en) * 2017-12-07 2019-09-17 HatchB Labs, Inc. Simulation-based controls optimization using time series data forecast
CN110135573B (en) * 2018-02-02 2023-10-03 阿里巴巴集团控股有限公司 Training method, computing equipment and system for deep learning model
CN110278149B (en) * 2019-06-20 2022-10-18 南京大学 Multi-path transmission control protocol data packet scheduling method based on deep reinforcement learning

Also Published As

Publication number Publication date
CN111130698A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
JP4536933B2 (en) Power control in a CDMA mobile communication system
JP5772345B2 (en) Parameter setting apparatus, computer program, and parameter setting method
WO2022089002A1 (en) Multi-link low-latency communication method and apparatus, and storage medium and electronic apparatus
CN111130698B (en) Wireless communication receiving window prediction method and device and wireless communication equipment
US20090080455A1 (en) Systems and methods for reducing data collisions in wireless network communications
JP5646054B2 (en) How to set multiple parameters in a wireless communication network
CN101523752A (en) Methods, systems, and computer program products for controlling data transmission based on power cost
CN103856954A (en) Method and system for detecting roam terminal heartbeat cycle, roam terminal and server
US20140106801A1 (en) Transceiver operating in a wireless communications network, a system and method for transmission in the network
CN113966596B (en) Method and apparatus for data traffic routing
US7489928B2 (en) Adaptive RF link failure handler
CN110933638B (en) Heterogeneous network access selection strategy method applied to vehicle following queue
JP5708102B2 (en) Wireless communication terminal apparatus and wireless communication terminal apparatus control method
CN102523608B (en) Message sending method and device
CN114915394B (en) Dual mode communication device, dual mode communication method, and storage medium
CN113225794A (en) Full-duplex cognitive communication power control method based on deep reinforcement learning
JP3860538B2 (en) Method for adjusting transmission power in wireless system
CN101145999A (en) Method, node and device for realizing public channel frame mechanism in radio grid network
EP2426983B1 (en) Method for link adaptation and apparatus thereof
US20070076741A1 (en) Apparatus and method for transmitting data in a wireless local area network
CN107026700B (en) Trust model construction method and device based on data packet forwarding
US8588783B2 (en) Power control in wireless communications networks during hand-over
CN110875756B (en) Method and equipment for automatically adjusting transmitting power in frequency hopping communication
CN115103332B (en) Reliable and efficient Internet of vehicles direct communication method based on intelligent reflecting surface
Constante et al. Enhanced association mechanism for IEEE 802.15.4 networks

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Gao Yingbin

Inventor after: Zheng Yunjun

Inventor after: Xia Weiwei

Inventor after: Yan Feng

Inventor after: Zhang Yinong

Inventor after: Shen Lianfeng

Inventor before: Gao Yingbin

Inventor before: Zheng Yunjun

Inventor before: Xia Weiwei

Inventor before: Yan Feng

Inventor before: Zhang Yinong

Inventor before: Shen Lianfeng

GR01 Patent grant
GR01 Patent grant