CN106961574B - Transmission method of fusion image in cognitive wireless multimedia sensor network - Google Patents

Transmission method of fusion image in cognitive wireless multimedia sensor network

Info

Publication number
CN106961574B
CN106961574B (Application CN201710098750.0A)
Authority
CN
China
Prior art keywords
image
fusion
source image
source
sensor network
Prior art date
Legal status
Active
Application number
CN201710098750.0A
Other languages
Chinese (zh)
Other versions
CN106961574A (en)
Inventor
易本顺
谢秋莹
黄太奇
Current Assignee
Shenzhen Research Institute of Wuhan University
Original Assignee
Shenzhen Research Institute of Wuhan University
Priority date
Filing date
Publication date
Application filed by Shenzhen Research Institute of Wuhan University filed Critical Shenzhen Research Institute of Wuhan University
Priority to CN201710098750.0A priority Critical patent/CN106961574B/en
Publication of CN106961574A publication Critical patent/CN106961574A/en
Application granted granted Critical
Publication of CN106961574B publication Critical patent/CN106961574B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/70Media network packetisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W84/00Network topologies
    • H04W84/18Self-organising networks, e.g. ad-hoc networks or sensor networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention discloses a method for transmitting a fused image in a cognitive wireless multimedia sensor network. Multi-focus images are first collected at acquisition nodes. A fusion node with dynamic spectrum management capability then produces a fused image using a multi-focus fusion method based on a quadtree structure and weighted regional gradient energy, encodes the fused image with a fountain code to generate data packets continuously, and selects a suitable link in the cognitive wireless multimedia sensor network to deliver the packets to the destination node in a multi-hop, self-organizing manner. Transmitting image data in this way reduces the network's data transmission volume and transmission energy consumption, while giving the user images with richer information content and higher definition.

Description

Transmission method of fusion image in cognitive wireless multimedia sensor network
Technical Field
The invention relates to the field of wireless communication and image processing, in particular to a transmission method of a fusion image in a cognitive wireless multimedia sensor network.
Background
In recent years, growing demand for wireless communication services and the rapid development of the mobile Internet have put wireless spectrum resources under increasing strain. At the same time, the proliferation of wireless communication devices, and of Wireless Sensor Networks (WSNs) in particular, means that large numbers of sensor nodes must transmit the environmental information they sense to remote destination nodes over wireless links, consuming substantial spectrum resources. The traditional fixed spectrum allocation model severely limits spectrum utilization, which in turn restricts the deployment of wireless sensor networks in smart homes, disaster monitoring and prediction, and other areas of production and daily life.
Although applying cognitive radio (CR) technology to wireless sensor networks can alleviate the spectrum shortage, many challenges remain in practice. When multiple secondary ("slave") users coexist in a cognitive wireless sensor network, each secondary user faces not only interference caused by the primary ("master") user's dynamic, bursty occupation of spectrum resources, but also unavoidable interference from other secondary users of equal status opportunistically occupying the same spectrum. For the primary user and any secondary user alike, such interference can cause packet loss. The spectrum-sharing wireless channel is therefore a complex, dynamically interfered environment, and completing data transmission over it requires a highly flexible coding technique suited to random-erasure channels. In addition, a cognitive wireless sensor network is a typical distributed network whose sensing nodes are often deployed in harsh or special disaster environments, such as earthquakes, floods, and fires, to detect and collect data. In such scenarios the sensing nodes become extremely fragile, seriously affecting the durability and reliability of the monitored data; node energy is limited and, given the environmental constraints, generally cannot be replenished, so when nodes fail from energy exhaustion, data availability drops sharply. In wireless sensor networks, power consumption and bandwidth utilization are the main factors determining network lifetime and efficiency, and for networks used in security monitoring, remote sensing, medical imaging, and similar fields, most of the data transmitted between nodes and base stations consists of video and images.
In practical deployments, multiple cameras at the same position monitor the same scene, so the collected data are highly redundant; moreover, the cameras are generally powered by fixed batteries, so the limited resources in the network must be used efficiently.
Disclosure of Invention
In order to solve the technical problem, the invention provides a transmission method of a fusion image in a cognitive wireless multimedia sensor network.
The technical scheme of the invention is as follows: a transmission method of a fusion image in a cognitive wireless multimedia sensor network comprises the following steps:
step S1, arranging two or more than two camera nodes in the cognitive wireless multimedia sensor network, and collecting a plurality of pieces of source image information with different focusing degrees in the same monitoring scene;
s2, fusing information of the plurality of source images with different focusing degrees in the step S1 by adopting a method based on a quadtree structure and gradient energy of a weighted region at a fusion node with sufficient bandwidth resources and energy to obtain a fused image, wherein the definition and the information content of the fused image are higher than those of any one of the plurality of source images with different focusing degrees in the step S1;
step S3, the fusion node in step S2 adopts fountain codes as a channel coding mode to code the fusion image into a transmission data packet;
and step S4, the fusion node has dynamic spectrum management capability, and selects a proper link to transmit the data packet to the destination node in a multi-hop self-organizing manner.
Preferably, the step S2 adopts a quadtree structure and weighted region gradient energy-based method to fuse the source image information of different focusing degrees in the step S1, including the following steps:
s2.1, using the collected multiple source images with different focusing degrees as an original block of the quadtree decomposition, calculating the maximum possible series of decomposition according to the size of the image, and calculating a variance mapping chart of each source image;
s2.2, performing quadtree decomposition on each source image, solving a focus clear region in each source image, and directly copying the focus clear region to a fusion image;
and S2.3, for a transition region between a focus region and a non-focus region in each source image, fusing image pixel values and taking the average value of each source image pixel.
Preferably, the method of calculating the variance map of each source image in step S2.1 is:
the variance map (the normalization formula is rendered as an image in the original and is not reproduced here);
m × n denotes the number of rows and columns of the image, and VMmax and VMmin are the maximum and minimum of all source-image variances at each pixel position:
VMmax(i,j) = max(VM1(i,j), VM2(i,j), ..., VMm(i,j)), i = 1,...,m
VMmin(i,j) = min(VM1(i,j), VM2(i,j), ..., VMm(i,j)), i = 1,...,m
variance of source image i: VMi = Variance(Ii), i = 1, 2, ..., m.
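The variance-map computation described above can be sketched as follows. Because the patent's normalization formula is an equation image not reproduced in the text, the local-variance window size and the plain per-window variance are assumptions, and the function names are illustrative:

```python
import numpy as np

def variance_map(img, win=3):
    # Local variance of each pixel over a win x win neighborhood.
    # The window size is an assumption: the patent's exact definition
    # is in an equation image that the text does not reproduce.
    a = np.asarray(img, dtype=float)
    pad = win // 2
    p = np.pad(a, pad, mode="edge")
    h, w = a.shape
    vm = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            vm[i, j] = p[i:i + win, j:j + win].var()
    return vm

def vm_extrema(vmaps):
    # Per-pixel maximum and minimum over all source-image variance
    # maps, i.e. VMmax(i, j) and VMmin(i, j) in the description.
    stack = np.stack([np.asarray(v, dtype=float) for v in vmaps])
    return stack.max(axis=0), stack.min(axis=0)
```

For m source images I1, ..., Im, `variance_map(Ii)` plays the role of VMi, and `vm_extrema` yields the VMmax and VMmin maps that the decomposition threshold later refers to.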
Preferably, in step S2.2, the method of finding the in-focus sharp region in each source image comprises the steps of:
step S2.2.1, taking each collected multi-focus source image as an original block of the quadtree decomposition, and calculating the maximum possible number of decomposition levels L from the image size;
step S2.2.2, during the quadtree decomposition, adopting an adaptive threshold BVM ≥ k·SMVM based on the source-image pixel values, wherein BVM is the variance map of the current block to be decomposed (the formula for SMVM is rendered as an image in the original and is not reproduced here);
p × q represents the number of rows and columns of the decomposed block; k is a coefficient in the range 0 to 1; VMmax and VMmin are as defined above;
step S2.2.3, when a block to be decomposed satisfies the threshold, calculating the weighted regional gradient energy, the focus-sharpness evaluation index, of that block in each source image (the formula is rendered as an image in the original and is not reproduced here);
n represents the window length, and f(i, j) is the pixel value in the ith row and jth column;
if the focus measure of only one block is the maximum, the block of the source image is considered as a focus area, and the block is directly copied to the fusion image; otherwise, the corresponding blocks of all the source images enter the next level of quadtree decomposition until the decomposition level reaches L.
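The focus-measure test of step S2.2.3 can be sketched as follows. The patent's weighted regional gradient energy formula is an equation image not reproduced in the text, so the plain (unweighted) energy of gradient stands in for it, and the function names are illustrative:

```python
import numpy as np

def energy_of_gradient(block):
    # Sum of squared horizontal and vertical first differences of the
    # pixel values -- a standard sharpness (focus) measure. The patent
    # weights this over a region; the weights are in an equation image
    # not reproduced in the text, so this unweighted form is a stand-in.
    b = np.asarray(block, dtype=float)
    fx = np.diff(b, axis=1)  # horizontal gradient f(i, j+1) - f(i, j)
    fy = np.diff(b, axis=0)  # vertical gradient   f(i+1, j) - f(i, j)
    return float((fx ** 2).sum() + (fy ** 2).sum())

def uniquely_sharpest(blocks):
    # Index of the block with the strictly largest focus measure, or
    # None when the maximum is tied -- in which case the quadtree
    # recurses one level deeper, as the description states.
    scores = [energy_of_gradient(b) for b in blocks]
    top = max(scores)
    winners = [k for k, s in enumerate(scores) if s == top]
    return winners[0] if len(winners) == 1 else None
```

When `uniquely_sharpest` returns an index, that source image's block is copied directly into the fused image; when it returns None, the corresponding blocks of all source images are decomposed one level further.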
Preferably, in step S3, a novel Poisson robust soliton distribution and a modulo operation are adopted as the degree distribution and the encoding/decoding mode of the fountain code.
Preferably, when the link is established in step S4, a DATA/ACK handshake mechanism is used, with the ACK providing the transmitter with the current channel state and user-contention information so as to optimize the access strategy.
The invention has the beneficial effects that:
1. Adopting multi-focus image fusion reduces the volume of data transmitted in cognitive wireless multimedia sensor networks used in image-heavy fields such as security monitoring, remote sensing, and medical imaging, and improves the utilization of sensor-node image information. Meanwhile, for the time-varying random channel, a rateless fountain code is adopted as the channel coding mode to achieve reliable data transmission in the network.
2. A Cognitive Radio Sensor Network (CRSN) retrofits the wireless sensor network with cognitive radio technology so that sensor nodes gain dynamic spectrum management capability: after detecting an event, a node can, acting as a secondary user, dynamically exploit spectrum-hole resources and communicate in a multi-hop, self-organizing manner, meeting the needs of specific applications in specific scenarios.
3. As a rateless channel coding technique, fountain codes can effectively withstand burst interference from licensed users in a cognitive wireless sensor network without feedback retransmission, providing an effective channel coding scheme for such networks. In the link-establishment phase of a cognitive wireless sensor network link, the receiving and transmitting ends are mutually uncertain about spectrum availability, and a licensed user may suddenly occupy the channel, causing severe packet loss. Because a fountain code generates packets continuously, the channel need not be estimated before transmission, the loss of some packets does not prevent correct decoding at the receiver, and the wireless link can be established quickly without feedback or retransmission. Moreover, in a cognitive wireless sensor network communication link, packet loss occurs whenever communication among cognitive users interferes with the normal communication of licensed users, and this loss is unpredictable; that is, the channel capacity is time-varying. The effective code rate of a fountain code at the moment decoding succeeds adapts automatically to the varying channel capacity, so fountain codes handle this problem well.
4. Multi-focus image fusion processes several images of the same scene shot at different focus settings into a single image with higher definition that is better suited to human visual perception and to computer detection and recognition; it can effectively improve the utilization of sensor image information, reduce the volume of network data transmitted, and prolong the network lifetime.
The invention is further described below with reference to the accompanying drawings and specific embodiments.
Drawings
FIG. 1 is a schematic structural diagram of a cognitive wireless sensor network for image acquisition according to an embodiment of the present invention;
FIG. 2 is a flow diagram illustrating data processing and transmission in accordance with an embodiment of the present invention;
FIG. 3 is a diagram illustrating the effects of multi-focus image fusion before and after the multi-focus image fusion according to an embodiment of the present invention.
Detailed Description
In order to more fully understand the technical contents of the present invention, the technical solutions of the present invention will be further described and illustrated with reference to specific embodiments.
As shown in fig. 1 to 3, a transmission method of a fusion image in a cognitive wireless multimedia sensor network includes the following steps:
step S1, arranging two or more than two camera nodes in the cognitive wireless multimedia sensor network, and collecting a plurality of pieces of source image information with different focusing degrees in the same monitoring scene;
s2, fusing the information of the plurality of source images with different focusing degrees in the step S1 by adopting a method based on a quadtree structure and gradient energy of a weighted region at a fusion node with sufficient bandwidth resources and energy to obtain a fused image, wherein the definition and the information content of the fused image are higher than those of any one of the plurality of source images with different focusing degrees in the step S1;
step S3, the fusion node of step S2 adopts fountain codes as the channel coding mode and encodes the fused image into transmission data packets;
and step S4, the fusion node has dynamic spectrum management capability, and selects a proper link to transmit the data packet to the destination node in a multi-hop self-organizing manner.
In the above technical solution, the step S2 of fusing the information of the plurality of source images with different focusing degrees in the step S1 by using a method based on a quadtree structure and weighted regional gradient energy includes the following steps:
s2.1, using the collected multiple source images with different focusing degrees as an original block of the quadtree decomposition, calculating the maximum possible stage number of the decomposition according to the size of the image, and calculating a variance mapping chart of each source image:
map of variance
Figure BDA0001231038010000071
m n denotes the number of rows and columns of the image, VMmaxAnd VMminIs the maximum and minimum of all source image variances at each pixel position:
VMmax(i,j)=max(VM1(i,j),VM2(i,j),...,VMm(i,j)),i=1,...m
VMmin(i,j)=min(VM1(i,j),VM2(i,j),...,VMm(i,j)),i=1,...m
variance VM of source imagei:VMi=Varience(Ii),i=1,2,...,m
S2.2, performing quadtree decomposition on each source image, solving a focus clear region in each source image, and directly copying the focus clear region to a fusion image;
In step S2.2.1, each collected multi-focus source image is taken as an original block of the quadtree decomposition, and the maximum possible number of decomposition levels L is calculated from the image size.
In step S2.2.2, during the quadtree decomposition, an adaptive threshold BVM ≥ k·SMVM is adopted based on the source-image pixel values, where BVM is the variance map of the current block to be decomposed (the formula for SMVM is rendered as an image in the original and is not reproduced here);
p × q represents the number of rows and columns of the decomposed block; k is a coefficient in the range 0 to 1; VMmax and VMmin are as defined above.
In step S2.2.3, when a block to be decomposed satisfies the threshold, the weighted regional gradient energy, the focus-sharpness evaluation index, of that block in each source image is calculated (the formula is rendered as an image in the original and is not reproduced here);
n denotes the window length, and f(i, j) is the pixel value in the ith row and jth column.
If the focus measure of only one block is maximum, the block of the source image is considered as a focus area, and the block is directly copied to the fusion image. Otherwise, the corresponding blocks of all the source images enter the next level of quadtree decomposition until the decomposition level reaches L.
And S2.3, for a transition region between a focus region and a non-focus region in each source image, fusing image pixel values and taking the average value of each source image pixel.
And obtaining a fused image through the steps.
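Steps S2.1–S2.3 together can be sketched as a recursive quadtree fusion. This is a minimal sketch under stated assumptions: the unweighted energy of gradient stands in for the patent's weighted focus measure, the adaptive BVM ≥ k·SMVM threshold is replaced by the simpler rule of recursing whenever no source image is uniquely sharpest, and the maximum depth is taken as log2 of the shorter image side (the patent says only that it follows from the image size):

```python
import numpy as np

def _eog(b):
    # Unweighted energy of gradient -- stand-in focus measure.
    return float((np.diff(b, axis=1) ** 2).sum() + (np.diff(b, axis=0) ** 2).sum())

def quadtree_fuse(images, max_level=None):
    # Minimal sketch of quadtree multi-focus fusion (steps S2.1-S2.3).
    # Simplifications vs. the patent (whose threshold and weighting
    # equations are image-only in the source): a block recurses whenever
    # no single source image is uniquely sharpest, and at the deepest
    # level ambiguous blocks take the per-pixel average of the sources
    # (the transition-region rule of step S2.3).
    imgs = [np.asarray(im, dtype=float) for im in images]
    h, w = imgs[0].shape
    if max_level is None:
        max_level = int(np.log2(min(h, w)))  # assumed depth rule
    fused = np.zeros((h, w))

    def fuse_block(r0, r1, c0, c1, level):
        blocks = [im[r0:r1, c0:c1] for im in imgs]
        scores = [_eog(b) for b in blocks]
        top = max(scores)
        winners = [k for k, s in enumerate(scores) if s == top]
        if len(winners) == 1:
            fused[r0:r1, c0:c1] = blocks[winners[0]]  # copy focused block
        elif level >= max_level or r1 - r0 < 2 or c1 - c0 < 2:
            fused[r0:r1, c0:c1] = np.mean(blocks, axis=0)  # transition region
        else:
            rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
            for rr in ((r0, rm), (rm, r1)):
                for cc in ((c0, cm), (cm, c1)):
                    fuse_block(rr[0], rr[1], cc[0], cc[1], level + 1)

    fuse_block(0, h, 0, w, 0)
    return fused
```

Calling `quadtree_fuse([img_a, img_b])` on two registered multi-focus shots of the same scene yields a single image whose blocks are taken from whichever source is sharpest there, with averaging in ambiguous transition regions.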
Fig. 3(c) shows the multi-focus fused image obtained with the fusion algorithm based on the quadtree structure and weighted regional gradient energy. Compared with the original images in Figs. 3(a) and 3(b), the fused image has higher definition and richer information content, saves about 50% of network transmission capacity, and improves the information entropy by 1.69%.
In step S3, a novel Poisson robust soliton distribution and a modulo operation are adopted as the degree distribution and the encoding/decoding mode of the fountain code. Compared with the traditional short-code-length LT code based on an improved robust soliton distribution, the novel Poisson robust soliton distribution reduces the decoding overhead by 5.78%–12.9% and speeds up encoding and decoding by 13.4%–53.78%.
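The rateless coding described here can be illustrated with a conventional LT fountain code using the standard robust soliton distribution; this is a stand-in, since the patent's novel Poisson robust soliton distribution and modulo-based coding are not specified in the text. Source blocks are modeled as small integers combined by bitwise XOR, and the parameters c and delta are the usual robust soliton tuning constants:

```python
import math
import random

def robust_soliton(K, c=0.1, delta=0.5):
    # Standard robust soliton degree distribution over degrees 0..K
    # (degree 0 has probability 0). Stand-in for the patent's novel
    # Poisson robust soliton distribution; the spike term is skipped
    # when K/R falls outside 1..K.
    R = c * math.log(K / delta) * math.sqrt(K)
    rho = [0.0, 1.0 / K] + [1.0 / (d * (d - 1)) for d in range(2, K + 1)]
    tau = [0.0] * (K + 1)
    pivot = int(round(K / R))
    for d in range(1, K + 1):
        if d < pivot:
            tau[d] = R / (d * K)
        elif d == pivot:
            tau[d] = R * math.log(R / delta) / K
    Z = sum(rho) + sum(tau)
    return [(rho[d] + tau[d]) / Z for d in range(K + 1)]

def lt_encode(blocks, rng):
    # One encoded packet: draw a degree, XOR that many randomly chosen
    # source blocks; the chosen indices travel with the packet (in
    # practice, as a PRNG seed). Packets can be generated endlessly,
    # which is what makes the code rateless.
    K = len(blocks)
    d = rng.choices(range(K + 1), weights=robust_soliton(K))[0]
    idx = rng.sample(range(K), d)
    val = 0
    for i in idx:
        val ^= blocks[i]
    return set(idx), val

def lt_decode(K, packets):
    # Peeling decoder: repeatedly find a packet with exactly one
    # still-unknown source block, recover it, and XOR known blocks
    # out of the remaining packets. Lost packets simply never arrive;
    # no feedback or retransmission is needed.
    packets = [(set(ix), v) for ix, v in packets]
    known = {}
    progress = True
    while progress and len(known) < K:
        progress = False
        for ix, v in packets:
            unknown = ix - set(known)
            if len(unknown) != 1:
                continue
            red = v
            for i in ix & set(known):
                red ^= known[i]
            known[unknown.pop()] = red
            progress = True
    return known if len(known) == K else None
```

The receiver succeeds once any sufficiently large subset of packets arrives, so the effective code rate adapts to whatever the erasure channel delivers, matching the behavior the description attributes to fountain codes.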
In step S4, after encoding is complete, the fusion node with dynamic spectrum management capability selects a suitable link to transmit the data packets to the destination node in a multi-hop, self-organizing manner. When a link is established, a DATA/ACK handshake mechanism is adopted, with the ACK providing the transmitter with the current channel state and user-contention information so as to optimize the access strategy.
In the cognitive wireless multimedia sensor network of the invention, multiple acquisition nodes collect images, and fusion nodes with abundant resources and strong processing capability process the collected images with a multi-focus fusion method based on the quadtree structure and regional gradient energy, reducing the volume of network data transmitted and improving the utilization of node image information. The fused image is fountain-coded to generate data packets continuously, and a suitable link in the cognitive wireless multimedia sensor network is selected to deliver the packets to the destination node in a multi-hop, self-organizing manner. In this way the invention realizes image-data transmission in the cognitive wireless sensor network. Applying multi-focus image fusion in the cognitive wireless multimedia sensor network reduces the network's data transmission volume and transmission energy consumption while giving the user images with richer information content and higher definition. Coding the transmission with fountain codes, which suit random-erasure channels and need no feedback retransmission, improves the transmission performance of the system and guarantees communication quality. The effective combination of the two makes the use of wireless spectrum resources more efficient and offers a reference for practical applications of cognitive wireless multimedia sensor networks.
The technical contents of the present invention are further illustrated by the examples, so as to facilitate the understanding of the reader, but the embodiments of the present invention are not limited thereto, and any technical extension or re-creation based on the present invention is protected by the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

Claims (3)

1. A transmission method of a fusion image in a cognitive wireless multimedia sensor network is characterized by comprising the following steps:
step S1, arranging two or more than two camera nodes in the cognitive wireless multimedia sensor network, and collecting a plurality of pieces of source image information with different focusing degrees in the same monitoring scene;
s2, fusing information of the plurality of source images with different focusing degrees in the step S1 by adopting a method based on a quadtree structure and gradient energy of a weighted region at a fusion node with sufficient bandwidth resources and energy to obtain a fused image, wherein the definition and the information content of the fused image are higher than those of any one of the plurality of source images with different focusing degrees in the step S1;
step S3, the fusion node in step S2 adopts fountain codes as a channel coding mode to code the fusion image into a transmission data packet;
step S4, the fusion node has dynamic spectrum management ability, and the fusion node with dynamic spectrum management ability selects a proper link to transmit the data packet to the destination node in a multi-hop self-organizing way;
step S2, fusing the information of the plurality of source images with different focusing degrees in step S1 by adopting a method based on a quadtree structure and weighted region gradient energy, comprising the following steps:
s2.1, using the collected multiple source images with different focusing degrees as an original block of the quadtree decomposition, calculating the maximum possible series of decomposition according to the size of the image, and calculating a variance mapping chart of each source image;
s2.2, performing quadtree decomposition on each source image, solving a focus clear region in each source image, and directly copying the focus clear region to a fusion image;
s2.3, fusing image pixel values to obtain the average value of each source image pixel for a transition region between a focusing region and a non-focusing region in each source image;
the method for calculating the variance map of each source image in step S2.1 is:
the variance map (the normalization formula is rendered as an image in the original and is not reproduced here);
m × n represents the number of rows and columns of the image, i denotes the pixel row index, j denotes the pixel column index, and VMmax and VMmin are the maximum and minimum of all source-image variances at each pixel position:
VMmax(i,j) = max(VM1(i,j), VM2(i,j), ..., VMm(i,j)), i = 1,...,m, j = 1,...,n
VMmin(i,j) = min(VM1(i,j), VM2(i,j), ..., VMm(i,j)), i = 1,...,m, j = 1,...,n
variance of source image i: VMi = Variance(Ii), i = 1, 2, ..., m, where Ii denotes a source image;
In step S2.2, the method of finding the in-focus sharp region in each source image comprises the steps of:
step S2.2.1, taking each collected multi-focus source image as an original block of the quadtree decomposition, and calculating the maximum possible number of decomposition levels L from the image size;
step S2.2.2, during the quadtree decomposition, adopting an adaptive threshold BVM ≥ k·SMVM based on the source-image pixel values, wherein BVM is the variance map of the current block to be decomposed (the formula for SMVM is rendered as an image in the original and is not reproduced here);
p × q represents the number of rows and columns of the decomposed block; k is a coefficient in the range 0 to 1; VMmax and VMmin are as defined above;
step S2.2.3, when the block to be decomposed satisfies the threshold, calculating the weighted regional gradient energy, the focus-sharpness evaluation index, of the block to be decomposed of each source image (the formula is rendered as an image in the original and is not reproduced here);
n represents the window length, f(i, j) is the pixel value in the ith row and jth column, x denotes the xth row of the image, and y denotes the yth column of the image;
if the focus measure of only one block is the maximum, the block of the source image is considered as a focus area, and the block is directly copied to the fusion image; otherwise, the corresponding blocks of all the source images enter the next level of quadtree decomposition until the decomposition level reaches L.
2. The method for transmitting the fused image in the cognitive wireless multimedia sensor network according to claim 1,
in step S3, a novel Poisson robust soliton distribution and a modulo operation are used as the degree distribution and the encoding/decoding mode of the fountain code.
3. The method for transmitting the fused image in the cognitive wireless multimedia sensor network according to claim 1,
in step S4, a DATA/ACK handshake mechanism is used, with the ACK providing the transmitter with the current channel state and user-contention information so as to optimize the access strategy.
CN201710098750.0A 2017-02-23 2017-02-23 Transmission method of fusion image in cognitive wireless multimedia sensor network Active CN106961574B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710098750.0A CN106961574B (en) 2017-02-23 2017-02-23 Transmission method of fusion image in cognitive wireless multimedia sensor network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710098750.0A CN106961574B (en) 2017-02-23 2017-02-23 Transmission method of fusion image in cognitive wireless multimedia sensor network

Publications (2)

Publication Number Publication Date
CN106961574A CN106961574A (en) 2017-07-18
CN106961574B true CN106961574B (en) 2020-12-29

Family

ID=59481042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710098750.0A Active CN106961574B (en) 2017-02-23 2017-02-23 Transmission method of fusion image in cognitive wireless multimedia sensor network

Country Status (1)

Country Link
CN (1) CN106961574B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109389573A (en) * 2018-09-30 2019-02-26 湖南大学 The method of multi-focus image fusion based on quadtree decomposition
CN110191248B (en) * 2019-06-07 2020-09-29 天府新区西南交通大学研究院 Feedback-based unmanned aerial vehicle image transmission method of Bats Code
CN111127375B (en) * 2019-12-03 2023-04-07 重庆邮电大学 Multi-focus image fusion method combining DSIFT and self-adaptive image blocking

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102521814A (en) * 2011-10-20 2012-06-27 华南理工大学 Wireless sensor network image fusion method based on multi-focus fusion and image splicing
CN104469811A (en) * 2014-12-04 2015-03-25 东南大学 Clustering cooperative spectrum sensing hard fusion method for cognitive wireless sensor network

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN103298406B (en) * 2011-01-06 2017-06-09 美国医软科技公司 System and method for carrying out treating planning to organ disease in function and dissection level

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN102521814A (en) * 2011-10-20 2012-06-27 华南理工大学 Wireless sensor network image fusion method based on multi-focus fusion and image splicing
CN104469811A (en) * 2014-12-04 2015-03-25 东南大学 Clustering cooperative spectrum sensing hard fusion method for cognitive wireless sensor network

Non-Patent Citations (2)

Title
"Quadtree-based multi-focus image fusion using a weighted focus-measure"; Xiangzhi Bai; Information Fusion; 2014-06-06; pp. 105–118 *
"Application of fountain codes in cognitive radio networks" (in Chinese); 易本顺 (Yi Benshun); Telecommunication Engineering (电讯技术); 2015-08; pp. 935–941 *

Also Published As

Publication number Publication date
CN106961574A (en) 2017-07-18

Similar Documents

Publication Publication Date Title
CN106961574B (en) Transmission method of fusion image in cognitive wireless multimedia sensor network
Lecuire et al. Energy-efficient transmission of wavelet-based images in wireless sensor networks
Kumari Investigation: life-time and stability period in wireless sensor network
CN102547590B (en) Pairing method for user pairs of device to device communication based on business content relevance under cellular network
Staikopoulos et al. Image transmission via LoRa networks–a survey
CN103945281A (en) Method, device and system for video transmission processing
CN106537959B (en) Method for encoding and decoding frames in a telecommunication network
Gottapu et al. Maximizing cognitive radio networks throughput using limited historical behavior of primary users
Tao et al. Efficient Image Transmission Schemes over Zigbee‐Based Image Sensor Networks
CN111278161B (en) WLAN protocol design and optimization method based on energy collection and deep reinforcement learning
Pham et al. Performances of multi-hops image transmissions on IEEE 802.15. 4 Wireless Sensor Networks for surveillance applications
CN110611831B (en) Video transmission method and device
Dai et al. Correlation-aware qos routing for wireless video sensor networks
Gali et al. Multi-Context Trust Aware Routing For Internet of Things.
Huu et al. Low-complexity and energy-efficient algorithms on image compression for wireless sensor networks
Liu et al. Energy efficiency optimization of channel access probabilities in IEEE 802.15. 6 UWB WBANs
Chang et al. Not every bit counts: A resource allocation problem for data gathering in machine-to-machine communications
Tran-Quang et al. Adaptive transmission range assignment algorithm for in-routing image compression on wireless sensor networks
Han et al. Deep learning based loss recovery mechanism for video streaming over mobile information-centric network
Pandremmenou et al. Game-theoretic solutions through intelligent optimization for efficient resource management in wireless visual sensor networks
JP6502171B2 (en) Communication apparatus, control method, and program
CN1361963A (en) Network slot synchronization scheme for a computer network communication channel
Hsu et al. Design and analysis for effective proximal discovery in Machine-to-Machine wireless networks
CN113573270B (en) Method for reliably collecting partial area of trigger data under large-scale network
CN117440419B (en) Wireless self-organizing network performance evaluation method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant