CN114339788B - Multi-agent ad hoc network planning method and system - Google Patents

Multi-agent ad hoc network planning method and system

Info

Publication number
CN114339788B
CN114339788B
Authority
CN
China
Prior art keywords
agent
relay
edge
central control
control station
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210013929.2A
Other languages
Chinese (zh)
Other versions
CN114339788A (en)
Inventor
魏辰华 (Wei Chenhua)
陈维祥 (Chen Weixiang)
余泽 (Yu Ze)
张辉 (Zhang Hui)
谭晓军 (Tan Xiaojun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202210013929.2A priority Critical patent/CN114339788B/en
Publication of CN114339788A publication Critical patent/CN114339788A/en
Application granted granted Critical
Publication of CN114339788B publication Critical patent/CN114339788B/en

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

The application discloses a multi-agent ad hoc network planning method and system. In the method, the edge agent determines whether to switch its relay node and adjusts its transmission power according to the link quality; the relay agent decides whether to accept a relay request according to the overall benefit of the system, and sends the edge agent's previous state, action, reward and new state to the central control station; and the central control station updates the policy network from the data received from the relay agent and distributes it to the edge agents. The system comprises an edge agent, a relay agent and a central control station, with the edge agent connected to the relay agent and the relay agent connected to the central control station. The application solves the edge agent's problem of selecting a relay node and transmission power, thereby improving ad hoc network planning, and can be widely applied in the field of communication technology.

Description

Multi-agent ad hoc network planning method and system
Technical Field
The application relates to the technical field of communication, in particular to a multi-agent ad hoc network planning method and system.
Background
Multi-agent ad hoc network technology is widely applied across many fields thanks to its flexibility. In outdoor open-area data acquisition tasks, controlling a multi-agent cluster can effectively enlarge the acquisition range, so maintaining the stability of the multi-agent ad hoc network is an important precondition for providing service. However, in a multi-agent cluster communication network, the high mobility of agents causes rapid changes in network topology, making the relay selection problem more complex. In addition, the energy of each agent is limited, and an agent can reduce its transmission power by communicating with closer nodes, so deriving the optimal association and transmission power between edge nodes and relay nodes is of great significance. Among existing routing-protocol improvements for multi-agent self-organizing networks, most traditional routing protocols require nodes to maintain and periodically update routing tables, which brings large delay and congestion to the network when the number of nodes is large.
Disclosure of Invention
In order to solve the above technical problems, the application aims to provide a multi-agent ad hoc network planning method that performs relay node and transmission power selection for intelligent agents (IA) based on a Deep Q-Network (DQN) and a Long Short-Term Memory (LSTM) network, solving the edge agent's relay node and transmission power selection problem and thereby improving ad hoc network planning.
The first technical scheme adopted by the application is as follows: a multi-agent ad hoc network planning method comprises the following steps:
the edge agent is used for determining whether to switch the relay node and for adjusting the transmission power according to the link quality;
the relay agent is used for determining whether to accept the relay request according to the overall benefit of the system, and for sending the previous state, action, reward and new state of the edge agent to the central control station;
and the central control station is used for updating the policy network according to the data received from the relay agent and issuing the policy network to the edge agents.
Further, the step of determining whether to switch the relay node and adjust the transmission power according to the link quality specifically includes:
the edge agent monitoring its link quality in real time;
upon judging that the link quality has degraded, generating a decision according to the corresponding local policy and environment information;
and switching the relay node and adjusting the transmission power according to the decision.
Further, the step of determining that the link quality is reduced and generating a decision according to the corresponding local policy and environment information specifically includes:
the edge agent perceiving the channel gains of all links currently connecting it with relay nodes and the number of adjacent agents, and passing them into the policy network as the current state;
and the edge agent obtaining, from the input state and the policy network based on the ε-greedy algorithm, the action to execute, forming the decision.
Further, the step of determining whether to accept the relay request according to the overall benefit of the system specifically includes:
the relay intelligent agent receives a relay request of the edge intelligent agent and calculates the influence of the relay request on the overall benefit of the network;
and when it judges that providing the relay service for the edge agent maximizes the overall benefit of the network, the relay agent determines to provide the relay service for the edge agent, and returns the decision to the corresponding edge agent.
Further, the step of sending the previous state, action, reward and new state of the edge agent to the central control station specifically includes:
the edge agent calculating the reward after performing an action;
the edge agent transmitting the previous state, action, reward and new state to the relay agent;
and the relay agent transmitting the previous state, action, reward and new state of the edge agent to the central control station.
Further, the step of calculating the rewards after the edge agent performs the action comprises the following steps:
calculating the channel capacity of the channel used when the edge agent transmits data to the central control station through the relay agent;
calculating the transmission rate of the edge agent on the channel for transmitting data to the central control station through the relay agent;
calculating the transmission satisfaction degree of the edge agent for transmitting data to the central control station through the relay agent;
the rewards are calculated based on the transmission satisfaction and the transmission rate.
Further, the step of updating the policy network according to the data received from the relay agent and issuing it to the edge agents specifically includes:
the central control station receiving the previous state, action, reward and new state returned by the relay agent in the time slot and storing them in the experience replay buffer;
sampling from the experience replay buffer to obtain a sample, and inputting the sample into an LSTM network;
taking the output of the LSTM network as the input of the Double-DQN network;
updating the policy network parameters based on a mean square error loss function;
and the central control station distributing the updated policy network to all edge agents.
The second technical scheme adopted by the application is as follows: the utility model provides a many agent ad hoc network planning system, includes edge agent, relay agent and central control station, edge agent is connected with relay agent, relay agent is connected with central control station.
The method and the system have the following beneficial effects: the application provides a relay node and transmission power selection method for edge agents in a multi-agent ad hoc network scenario based on DQN and LSTM networks; the edge agent selects its relay node and transmission power through a local policy network and environmental observation, thereby optimizing network performance and enhancing network stability.
Drawings
FIG. 1 is a flow chart of steps of a multi-agent ad hoc network planning method according to the present application;
FIG. 2 is a block diagram of a multi-agent ad hoc network planning system according to the present application;
FIG. 3 is a schematic diagram of establishing link communications in accordance with an embodiment of the present application;
fig. 4 is a schematic diagram of a network update according to an embodiment of the present application.
Detailed Description
The application will now be described in further detail with reference to the drawings and to specific examples. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
As shown in fig. 1, the present application provides a multi-agent ad hoc network planning method, which includes the following steps:
s1, determining whether to switch a relay node and adjust transmission power according to link quality by an edge agent;
First, referring to fig. 3, a communication link is established between the relay agent and the central control station. The communication channel is selected on the basis of the transmission rate calculated by the relay agent, i.e., the channel with the highest transmission rate is selected to establish the communication link with the central control station.
Specifically, the transmission rate that relay agent j can achieve on channel l is calculated as follows:
where W is the bandwidth of channel l, and the other quantity in the expression is the number of relay agents that have selected channel l.
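The rate expression itself is not reproduced in this text, but the selection rule (pick the channel with the highest achievable rate) can be sketched as follows. The per-channel rate model used here, the channel bandwidth W shared equally among the relay agents on that channel with a Shannon-style log term, is an illustrative assumption rather than the patent's exact formula:

```python
import math

def channel_rate(bandwidth_hz, snr, n_relays_on_channel):
    """Hypothetical per-relay rate: equal sharing of a Shannon-capacity channel."""
    return (bandwidth_hz / max(n_relays_on_channel, 1)) * math.log2(1.0 + snr)

def select_channel(channels):
    """Pick the channel id with the highest achievable rate.

    `channels` maps channel id -> (bandwidth_hz, snr, n_relays_on_channel).
    """
    return max(channels, key=lambda l: channel_rate(*channels[l]))

channels = {
    0: (1e6, 10.0, 2),  # 1 MHz, SNR 10, shared by 2 relays
    1: (1e6, 30.0, 4),  # higher SNR but more crowded
}
best = select_channel(channels)  # channel 0 wins: less sharing outweighs lower SNR
```

Under this assumed model, a less crowded channel can beat a higher-SNR one, which is why each relay agent needs the per-channel relay counts broadcast by its peers.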
At each time slot, each relay agent broadcasts the number of edge agents connected to it to the other relay agents. Meanwhile, the edge agent monitors the transmission quality of its current transmission link; if the transmission rate is found to have dropped, the edge agent inputs the estimated channel gain and the current number of relay links of each relay agent into its local policy network as the state.
Specifically, the state includes two parts, the first part is the estimated channel gain of all communication links of the relay agent, and the second part is the number of edge agents adjacent to the edge agent.
The local policy network then outputs a decision (an action) comprising the new relay node number and the transmission power level to adopt, and a relay request is sent to the corresponding relay node.
Specifically, the application discretizes the candidate power into L levels, denoted as [P_1, P_2, P_3, …, P_L]. Assuming there are M relay nodes and the candidate power of each relay node has L levels, the action space contains M × L actions, one for each relay-node/power-level combination.
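The discretized action space and the ε-greedy selection described above can be sketched as follows; the Q-values would come from the local policy network, so a fixed list stands in for them here:

```python
import itertools
import random

def build_action_space(num_relays, power_levels):
    """Each action = (relay node index, transmission power level): M x L actions."""
    return list(itertools.product(range(num_relays), power_levels))

def epsilon_greedy(q_values, epsilon, rng=random):
    """Explore with probability epsilon, otherwise pick the argmax action index."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

actions = build_action_space(3, [0.1, 0.5, 1.0])  # M=3 relays, L=3 power levels
assert len(actions) == 9                          # action space size M x L
# With epsilon=0 the choice is purely greedy over the stand-in Q-values.
choice = epsilon_greedy([0.2, 1.5, 0.7, 0.1, 0.0, 0.3, 0.9, 0.4, 0.6], epsilon=0.0)
```

The chosen index maps back through `actions` to a concrete (relay node, power level) pair that the edge agent then requests.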
s2, the relay agent determines whether to accept the relay request according to the overall benefit of the system;
after receiving the relay request of the edge agent, the relay agent decides whether to accept the relay request based on the influence of the relay request on the overall benefit of the system, and feeds back the decision result to the edge agent.
If the relay agent does not accept the relay request, the edge agent maintains the original communication link. And if the relay agent receives the relay request, the edge agent switches the relay node to be a new relay agent.
After the edge agent establishes a communication link with the new relay agent, it calculates the reward obtained for the action performed in the previous state.
Specifically, assuming that the current edge agent is i and the newly selected relay agent is j, the edge agent first calculates the channel capacity when transmitting data to the central control station d through relay agent j on channel l, in terms of the following quantities:
the bandwidth W of channel l, the signal-to-noise ratio of channel l between edge agent i and relay agent j, and the signal-to-noise ratio of channel l between relay agent j and central control station d.
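A minimal sketch of this capacity computation, assuming the common decode-and-forward form in which the weaker of the two hop SNRs bottlenecks the two-hop link (consistent with the quantities named above, though the patent's exact expression is not shown in this text):

```python
import math

def two_hop_capacity(w_hz, snr_edge_relay, snr_relay_station):
    """Capacity of edge -> relay -> station on one channel, limited by the weaker hop."""
    return w_hz * math.log2(1.0 + min(snr_edge_relay, snr_relay_station))

# The relay-to-station hop (SNR 7) is the bottleneck here, not the edge hop (SNR 15).
c = two_hop_capacity(1e6, 15.0, 7.0)
```

This bottleneck structure is what makes the choice of relay matter: improving only one hop's SNR does not raise capacity unless it was the weaker hop.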
Next, edge agent i calculates the transmission rate for sending data to central control station d through relay agent j on channel l, in terms of the following quantities:
the resources occupied by relay agent j for transmitting its own data, the number of relay agents that have selected channel l, and the number of edge agents that have selected relay agent j as relay and channel l.
Then, edge agent i calculates the satisfaction of transmitting data to central control station d through relay agent j, in terms of the following quantities:
λ_{i,d}, which indicates the urgency of edge agent i transmitting data to central control station d (a larger λ_{i,d} represents a more urgent transmission task), U′_{i,j,d}, which represents the transmission requirement between edge agent i and central control station d, and a coefficient υ, here set as υ > 7.
Finally, the reward of the action is calculated as follows:
reward = f_{i,d} − ω · P_level
where f_{i,d} is the satisfaction computed above, ω represents the weight of the power level, and P_level indicates the transmission power level selected by the agent.
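The reward rule above can be written directly in code. The satisfaction value f_{i,d} is treated as a precomputed input here, since its exact expression (involving λ_{i,d}, U′ and υ) is defined separately:

```python
def reward(satisfaction, power_weight, power_level):
    """reward = f_{i,d} - omega * P_level: satisfaction minus a power-usage penalty."""
    return satisfaction - power_weight * power_level

# An agent with satisfaction 0.8 choosing power level 3 under weight omega = 0.1.
r = reward(satisfaction=0.8, power_weight=0.1, power_level=3)
```

The subtraction makes the trade-off explicit: a higher power level can raise satisfaction through better link quality, but the ω penalty pushes the agent toward the lowest power level that still serves the transmission task.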
S3, the relay agent sends the former state, action, rewards and new state of the edge agent to the central control station;
after the interaction of the relay agent with the edge agent is completed, the relay agent sends state, action, reward and the new state of the edge agent to the central control station, and the central control station trains and updates the policy network based on the data and issues the new policy network to the edge agent.
Specifically, the central control station receives the state, action, reward and new state returned by the relay agents in the time slot and stores them in the experience replay buffer; if the amount of data in the buffer exceeds the maximum storable quantity, the earliest data in the buffer is replaced by the current data.
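The overwrite-oldest behavior described above is exactly a fixed-capacity ring buffer; a minimal sketch using Python's `collections.deque`:

```python
import random
from collections import deque

class ReplayBuffer:
    """Experience replay: bounded store of (state, action, reward, next_state)."""

    def __init__(self, capacity):
        # Appending beyond `capacity` silently evicts the oldest transition.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size, rng=random):
        """Draw a mini-batch uniformly at random for training."""
        return rng.sample(list(self.buffer), batch_size)

buf = ReplayBuffer(capacity=3)
for t in range(5):                # 5 pushes into capacity 3 ...
    buf.push(t, 0, 0.0, t + 1)
oldest_state = buf.buffer[0][0]   # ... so transitions 0 and 1 were evicted
```

Using `deque(maxlen=...)` gives the replace-oldest rule for free, with no explicit index bookkeeping.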
And S4, the central control station updates the strategy network according to the received data of the relay agent and transmits the strategy network to the edge agent.
Next, a mini-batch of samples is drawn from the experience replay buffer, each denoted as (s_i, a_i, r_i, s′_i), where s_i is the state of sample i, a_i is the action of sample i, r_i is the reward of sample i, and s′_i is the state after the action of sample i is executed.
Next, referring to fig. 4, the sample is input to the LSTM network and the output of the LSTM is taken as input to the Double-DQN network.
Next, the Double-DQN network is updated.
Specifically, y_i is calculated first. Written in the standard Double-DQN form, the target is
y_i = r_i + γ · Q(s′_i, argmax_a Q(s′_i, a; θ); θ⁻)
where γ is the attenuation coefficient, θ is a parameter of the online network, and θ⁻ denotes the target network's parameters.
Then, the policy network parameters are updated using the mean square error loss function L(θ) = (1/N) Σ_i (y_i − Q(s_i, a_i; θ))², averaged over the N samples of the mini-batch.
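The target computation and mean-square-error loss above can be sketched with plain lists standing in for the LSTM and Double-DQN networks: the online network chooses the next action, and the target network evaluates it.

```python
def double_dqn_target(r, gamma, q_online_next, q_target_next):
    """y_i = r_i + gamma * Q_target(s', argmax_a Q_online(s', a))."""
    a_star = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return r + gamma * q_target_next[a_star]

def mse_loss(targets, predictions):
    """Mean squared error between targets y_i and predicted Q(s_i, a_i)."""
    return sum((y - q) ** 2 for y, q in zip(targets, predictions)) / len(targets)

# The online net prefers action 1; the target net then evaluates that same action.
y = double_dqn_target(r=1.0, gamma=0.9,
                      q_online_next=[0.2, 0.8],
                      q_target_next=[0.5, 0.6])
loss = mse_loss([y], [1.0])  # compare against the current Q(s_i, a_i) estimate
```

Splitting action selection (online net) from action evaluation (target net) is what distinguishes Double-DQN from plain DQN and curbs its overestimation bias.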
and finally, the central control station transmits the updated strategy network to all the edge agents.
As shown in fig. 2, the multi-agent ad hoc network planning system comprises an edge agent, a relay agent and a central control station, wherein the edge agent is connected with the relay agent, and the relay agent is connected with the central control station.
The content in the method embodiment is applicable to the system embodiment, the functions specifically realized by the system embodiment are the same as those of the method embodiment, and the achieved beneficial effects are the same as those of the method embodiment.
A multi-agent ad hoc network planning device comprises:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement a multi-agent ad hoc network planning method as described above.
The content in the method embodiment is applicable to the embodiment of the device, and the functions specifically realized by the embodiment of the device are the same as those of the method embodiment, and the obtained beneficial effects are the same as those of the method embodiment.
A storage medium having stored therein instructions executable by a processor, characterized by: the processor-executable instructions, when executed by the processor, are for implementing a multi-agent ad hoc network planning method as described above.
The content in the method embodiment is applicable to the storage medium embodiment, and functions specifically implemented by the storage medium embodiment are the same as those of the method embodiment, and the achieved beneficial effects are the same as those of the method embodiment.
While the preferred embodiment of the present application has been described in detail, the application is not limited to the embodiment, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the application, and these equivalent modifications and substitutions are intended to be included in the scope of the present application as defined in the appended claims.

Claims (6)

1. The multi-agent ad hoc network planning method is characterized by comprising the following steps of:
the edge intelligent agent is used for determining whether to switch the relay node and adjusting the transmission power according to the link quality;
the relay agent is used for determining whether to accept the relay request according to the overall benefit of the system and sending the former state, action, rewards and new state of the edge agent to the central control station;
the central control station is used for updating the strategy network according to the received data of the relay agent and transmitting the strategy network to the edge agent;
the step of sending the previous step status, action, rewards and new status of the edge agent to the central control station specifically comprises:
the edge agent calculates rewards after the edge agent performs actions;
transmitting the previous step status, action, rewards and new status to the relay agent;
the relay agent sends the previous step state, action, rewards and new state of the edge agent to the central control station;
the step of calculating rewards after the edge agent performs actions, specifically comprising the following steps:
calculating the channel capacity when the edge agent transmits data to the central control station through the relay agent to use the channel;
calculating the transmission rate of the edge agent on the channel for transmitting data to the central control station through the relay agent;
calculating the transmission satisfaction degree of the edge agent for transmitting data to the central control station through the relay agent;
calculating rewards according to the transmission satisfaction and the transmission rate;
the channel capacity is calculated in terms of the bandwidth W of channel l, the signal-to-noise ratio of channel l between edge agent i and relay agent j, and the signal-to-noise ratio of channel l between relay agent j and central control station d;
the transmission rate is calculated in terms of the resources occupied by relay agent j for transmitting its own data, the number of relay agents that have selected channel l, and the number of edge agents that have selected relay agent j as relay and channel l;
the satisfaction is calculated in terms of λ_{i,d}, which indicates the urgency of edge agent i transmitting data to central control station d (a larger λ_{i,d} represents a more urgent transmission task), U′_{i,d}, which represents the transmission requirement between edge agent i and central control station d, and a coefficient υ, set as υ > 7.
2. The multi-agent ad hoc network planning method according to claim 1, wherein said determining whether to switch the relay node and adjust the transmission power according to the link quality comprises:
monitoring the link quality of the intelligent agent in real time based on the edge;
judging that the link quality is reduced, and generating a decision according to the corresponding local strategy and environment information;
and switching the relay node and adjusting the transmission power according to the decision.
3. The multi-agent ad hoc network planning method according to claim 2, wherein said determining that the link quality is reduced, generating a decision according to the corresponding local policy and environmental information, specifically comprises:
the edge agent perceives the channel gains of all links currently connected with the relay node and the quantity of the adjacent agents and is used as a current state to be transmitted into a strategy network;
and the edge agent obtains actions executed by the edge agent based on the epsilon-greedy algorithm according to the input state and the strategy network to form a decision.
4. A multi-agent ad hoc network planning method according to claim 3, wherein said step of determining whether to accept the relay request according to the overall benefit of the system comprises:
the relay intelligent agent receives a relay request of the edge intelligent agent and calculates the influence of the relay request on the overall benefit of the network;
and when it judges that providing the relay service for the edge agent maximizes the overall benefit of the network, the relay agent determines to provide the relay service for the edge agent, and returns the decision to the corresponding edge agent.
5. The multi-agent ad hoc network planning method according to claim 4, wherein said step of updating the policy network according to the received data of the relay agents and issuing to the edge agents comprises:
the central control station receives the previous state, action, rewards and new state returned by the relay agent in the time slot and stores the previous state, action, rewards and new state in the experience playback zone;
sampling from the experience playback area to obtain a sample, and inputting the sample into an LSTM network;
taking the output of the LSTM network as the input of the Double-DQN network;
updating strategy network parameters based on a mean square error loss function;
and the central control station transmits the updated strategy network to all the edge agents.
6. A multi-agent ad hoc network planning system for performing the multi-agent ad hoc network planning method according to claim 1, comprising an edge agent, a relay agent and a central control station, wherein the edge agent is connected with the relay agent, and the relay agent is connected with the central control station.
CN202210013929.2A 2022-01-06 2022-01-06 Multi-agent ad hoc network planning method and system Active CN114339788B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210013929.2A CN114339788B (en) 2022-01-06 2022-01-06 Multi-agent ad hoc network planning method and system


Publications (2)

Publication Number Publication Date
CN114339788A CN114339788A (en) 2022-04-12
CN114339788B (en) 2023-11-17

Family

ID=81024857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210013929.2A Active CN114339788B (en) 2022-01-06 2022-01-06 Multi-agent ad hoc network planning method and system

Country Status (1)

Country Link
CN (1) CN114339788B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013079703A1 (en) * 2011-12-02 2013-06-06 Thales Method of communicating in an ad hoc network, transmitting/receiving station and associated computer programme
CN110390431A (en) * 2019-07-19 2019-10-29 大连海事大学 A kind of search and rescue net and its dispatching method based on unmanned machine Swarm Intelligence Algorithm
CN110474319A (en) * 2019-07-05 2019-11-19 湖北工业大学 The method of the micro electric network coordination of isolated island containing renewable energy control based on multiple agent
CN111885671A (en) * 2020-07-17 2020-11-03 燕山大学 Underwater joint relay selection and power distribution method based on deep reinforcement learning
CN112584347A (en) * 2020-09-28 2021-03-30 西南电子技术研究所(中国电子科技集团公司第十研究所) UAV heterogeneous network multi-dimensional resource dynamic management method
CN112995142A (en) * 2021-02-03 2021-06-18 中国电子科技集团公司第十五研究所 Anonymous network dynamic link selection method and device
CN113064671A (en) * 2021-04-27 2021-07-02 清华大学 Multi-agent-based edge cloud extensible task unloading method
CN113163479A (en) * 2021-02-05 2021-07-23 北京中电飞华通信有限公司 Cellular Internet of things uplink resource allocation method and electronic equipment
CN113660681A (en) * 2021-05-31 2021-11-16 西北工业大学 Multi-agent resource optimization method applied to unmanned aerial vehicle cluster auxiliary transmission


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Radio resource allocation algorithm based on multi-agent reinforcement learning in M2M communications; Xu Shaoyi, Zheng Shanshan; Journal of Beijing Jiaotong University (Issue 05); full text *
Handover algorithm for integrated Ad Hoc and mobile cellular network systems; Lu Weifeng, Wu Meng; ZTE Technology Journal (Issue 05); full text *


Similar Documents

Publication Publication Date Title
US9877245B2 (en) Determining a threshold value for determining whether to steer a particular node from associating with one node to another node in a wireless environment
JP4408570B2 (en) Adaptive modulation and adaptive channel coding at the cell level
Yau et al. Applications of reinforcement learning to cognitive radio networks
US8391802B2 (en) Link adaptation and power control with consumed energy minimization
EP3248323B1 (en) Method and system for rate adaptation for data traffic
Chen et al. Spectral efficiency and relay energy efficiency of full-duplex relay channel
CN111385201A (en) RPL routing method based on bidirectional father node decision
Musaddiq et al. Energy-aware adaptive trickle timer algorithm for RPL-based routing in the Internet of Things
CN115001537B (en) Routing networking method of carrier communication system based on clustering algorithm
CN109951239B (en) Adaptive modulation method of energy collection relay system based on Bayesian classifier
Lott et al. Stochastic routing in ad hoc wireless networks
CN114339788B (en) Multi-agent ad hoc network planning method and system
Chang et al. D2D transmission scheme in URLLC enabled real-time wireless control systems for tactile Internet
CN116506918A (en) Relay selection method based on cache region prediction
CN113133106B (en) Multi-hop relay transmission method and device based on storage assistance and terminal equipment
CN111542121B (en) Multi-dimensional resource allocation method meeting SWIPT and applied to bidirectional DF relay system
CN108055676B (en) 4G system D2D routing method based on terminal level and node number
CN107708174B (en) Terminal direct D2D routing method in 5G system
CN109309606B (en) Communication method for household appliance network
Hohmann et al. Optimal resource allocation policy for multi-rate opportunistic forwarding
CN114867030B (en) Dual-time scale intelligent wireless access network slicing method
CN116016336B (en) HRPL-based efficient inter-node communication method
Liao et al. Energy-Efficient and Link-Stability based Ant Colony Optimization Routing Protocol for Underwater Acoustic Networks
Sapwatcharasakun et al. Green energy of handover management in seamless networks
CN111464444B (en) Sensitive information distribution method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant