CN101841930B - Node cooperative work method based on sensing direction guide in internet of things environment - Google Patents

Node cooperative work method based on sensing direction guide in internet of things environment

Info

Publication number
CN101841930B
CN101841930B CN2010101557993A CN201010155799A
Authority
CN
China
Prior art keywords
video
node
sensing node
sensing
monitoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2010101557993A
Other languages
Chinese (zh)
Other versions
CN101841930A (en)
Inventor
王汝传
魏烨嘉
黄海平
孙力娟
沙超
肖甫
叶宁
凡高娟
黄小桑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Post and Telecommunication University
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University
Priority to CN2010101557993A
Publication of CN101841930A
Application granted
Publication of CN101841930B
Legal status: Active
Anticipated expiration

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 — Reducing energy consumption in communication networks
    • Y02D30/70 — Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention discloses a node cooperative work method based on sensing-direction guidance in an Internet of Things environment, and mainly relates to an effective method for object monitoring in a multimedia sensor network. Building on the sensing-node perception-radius theory and the video sensing model proposed by the invention, the method is grounded in practical application and meets different monitoring requirements through two work modes — group and individual — suited to different scenes. By elaborating the direction-guiding method in a wireless multimedia sensor network from both a theoretical and a practical point of view, and by exploiting such characteristics of wireless multimedia sensor networks as ease of control, conciseness, high efficiency, low cost, low overhead and energy consumption, and high flexibility, the invention offers constructive guidance for applying multimedia sensor networks in the monitoring field.

Description

Node cooperative work method based on sensing-direction guidance in an Internet of Things environment
Technical field
The present invention relates to sensing-direction guidance for wireless multimedia video sensor network nodes and to the processing method for cooperative work among sensor nodes, within the sensor field of the Internet of Things core technologies; it belongs to the intersection of embedded development and Internet of Things technology.
Background technology
The Internet of Things connects any article to the Internet through radio-frequency identification (RFID) and information-sensing equipment such as infrared sensors, global positioning systems and laser scanners, following agreed protocols, and carries out information exchange and communication so as to realize a network for intelligent identification, location, tracking, monitoring and management. The two core technologies of the Internet of Things are RFID technology and sensor network technology.
A wireless sensor network (WSN), as the product of combining computing, communication and sensor technologies, is a brand-new technology for information acquisition and processing, and is also a principal core technology constituting the Internet of Things. It combines cutting-edge technologies such as embedded systems, wireless communications and microelectronics. It not only has enormous application value in traditional fields such as industry, agriculture, military affairs and the environment, but also shows its advantages in many emerging fields, such as intelligent buildings, medical monitoring, home use, health care and traffic.
As applications deepen, users' demands on a wireless sensor network's ability to perceive environmental data keep growing, so multimedia information has been introduced into sensor networks. The resulting wireless multimedia sensor networks (WMSNs) have had a great impact on traditional wireless sensor networks in both working methods and data-processing methods.
The various sensors loaded on traditional wireless sensor nodes are omnidirectional sensors, i.e. sensors whose perception of environmental information has no directivity: once a node is deployed in the monitored area it can sample environmental information at any time without adjustment; common examples are temperature, light and humidity sensors. Likewise, an audio sensor can sample the audio information of the surrounding environment in real time after deployment without any adjustment to the sensing direction of the sensors loaded on the node, so in its working method an audio sensor is an omnidirectional sensor. In contrast, a video sensor itself has a limited perception-zone visual angle. As the directional perception models proposed in many domestic and overseas documents show, the effective perception zone of a video sensor is a sector region centred on the sensor itself; its perceived direction can be represented by a ray starting at the centre of the circle and running outward along the sector's bisector, and this sector can rotate arbitrarily around the centre. In its working method, therefore, a video sensor is a non-omnidirectional (directional) sensor.
In a practical application, in order to satisfy the demand for high-accuracy environmental monitoring, deploying only video sensors or only audio sensors does not achieve the required optimum monitoring of a target object or region; in such a situation, the video and audio information of the monitored target or region must be obtained simultaneously to achieve the optimum effect that monitoring requires. When omnidirectional and non-omnidirectional sensors coexist in one network, how to use network resources effectively while reducing system energy consumption as far as possible, so as to monitor target objects or the environment more effectively, has become a current research focus, and it places new requirements on multimedia sensor networks with mixed modes of operation.
Many existing research results introduce the notion of virtual force to realize a qualitative adjustment of the video sensor's sensing-direction angle; other methods use Voronoi diagrams to realize a quantitative adjustment of directivity.
In the course of research the inventors found that both of the above sensing-direction adjustment schemes for wireless multimedia sensor nodes have certain deficiencies. The former is based entirely on a non-existent virtual potential field and transplants the forces between microscopic particles onto sensor nodes, so the complicated force analysis, resting on a virtual hypothetical premise, inevitably deviates significantly and is very difficult to realize in practical application. The latter adjusts the sensing direction of each sensor node in the region along the split paths formed by the perpendicular bisectors of adjacent node pairs; the adjustment direction of each node is strongly influenced by its initial position and is highly random, and the realizability of its hypothetical premise is equally low.
Summary of the invention
Technical problem: the objective of the invention is to address the deficiencies and defects in existing Internet of Things node work-processing mechanisms by providing a node cooperative work method based on sensing-direction guidance for an Internet of Things environment in which omnidirectional and non-omnidirectional sensors are mixed. The method divides into a direction-guiding method for video sensor nodes and a pairing cooperative-work method between video and audio sensor nodes. It can effectively reduce network overhead and load, is low-cost and easy to realize, saves node energy in the network as far as possible, and at the same time lets each node adapt well to the heterogeneity of network nodes and accomplish its own functions and tasks more harmoniously.
Technical scheme: because of the inherent characteristics of sensor network nodes, the processing capability, storage capacity, etc. of different node categories differ greatly. To fully accommodate this heterogeneity, multimedia node functions should be subdivided: multimedia sensor nodes should as far as possible be divided into several classes with different functions, to prevent an individual node's excessive workload from increasing the transmission error rate and its relative energy consumption. In the short term, a node integrating multiple functions can satisfy a user's several demands simultaneously; in the long term, however, deploying such nodes in the monitored area causes serious waste of network and node hardware resources, because their high per-unit cost is compounded by the short life cycle that their large energy consumption brings. Moreover, multimedia information has a huge data volume, occupies more bandwidth than other sensing data during transmission, and at the same time requires delays to be as short as possible. In practice, therefore, it is more reasonable to split the video and audio functions, designing separate video sensor nodes and audio sensor nodes that respectively accomplish the collection and transmission of video and audio data.
In a wireless multimedia sensor network, multimedia sensor nodes of all kinds self-organize into clusters under hierarchical topology control, which suits both small-scale and large-scale network deployments. Cluster heads are selected by a given algorithm, and the cluster-head nodes form an interconnected network responsible for routing the transmitted data. A cluster-head node administers the other working nodes in its vicinity, and cluster heads are changed dynamically according to how much residual energy remains.
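The patent does not fix a particular election algorithm, only that cluster heads rotate according to residual energy. As a hedged illustration, the sketch below re-elects the node with the most remaining energy; the node-record format and the max-energy rule are assumptions for illustration only.

```python
def elect_cluster_head(residual_energy):
    """Pick the node id with the highest residual energy as cluster head.

    residual_energy: dict mapping node id -> remaining energy.
    The max-energy rule is an illustrative assumption; the patent only
    states that cluster heads change dynamically with residual energy.
    """
    if not residual_energy:
        raise ValueError("no candidate nodes")
    return max(residual_energy, key=residual_energy.get)
```

Re-running this election periodically spreads the cluster-head burden across nodes instead of exhausting one node.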
The present invention proposes a wireless multimedia sensor node cooperative work processing method based on sensing-direction guidance in a wireless multimedia sensor network, comprising the following steps:
Step 1) video sensing node perception radius: a clear perception radius and a fuzzy perception radius are used to measure the sensing range of a video sensor qualitatively; within each range the node has distinguishable perception characteristics. Within the clear perception radius, the video sensing node can clearly identify an object entering the perception zone, sufficiently satisfies the user's environmental-monitoring demand, and can react to ordinary objects entering the sensing range; within the fuzzy perception radius, it can react to special "Video Events" such as intense light sources;
Step 2) Video Event induction model: an intense light source is placed at a distance, positioned within the fuzzy perception radius of the video sensor; at a given discrete moment, the above setting is satisfied when the angle between the sensing-direction vector of the video sensing node and the direction vector of the line from the origin to the light source falls within the range [−perception-zone visual angle/2, +perception-zone visual angle/2];
Step 3) application of the Video Event induction model: after entering the snapshot mode of operation, the video sensing node first samples a single frame of ambient video data and traverse-scans it; after identifying a Video Event, it calculates the rectified average column value of this Video Event in the current single-frame video data for subsequent steps to use. Whether the Video Event lies in the clear or in the fuzzy perception radius of the video sensing node, the principle of the induction model can be used to identify it by traverse-scanning the sampled single-frame video data;
Step 4) video sensor direction guidance in the wireless multimedia sensor network: the purpose of the direction-guiding method is to make the sensing direction of a video sensing node face the measured object or the fixed region to be measured. After direction guidance, the angle between the sensing-direction vector of the sensing node and the line from the node to the Video Event falls within [−perception-zone visual angle/2, +perception-zone visual angle/2], and the angle is narrowed as far as possible to reach a better monitoring effect;
Step 5) video and audio sensor pairing cooperation method: the pairing cooperative-work application is actively initiated by the audio sensing nodes, which are fewer in number across the whole network and have lower per-unit-time energy consumption; one audio sensing node and one video sensing node pair up to accomplish environmental monitoring cooperatively;
Step 6) work modes adapted to change: under application scenarios of different scales, different work modes — group and individual (monomer) — can be adopted for different monitoring ranges and monitoring demands. The group work mode is similar to the method described in step 5): an audio sensing node drives a video sensing node to sample multimedia environmental information cooperatively, realizing area monitoring through one-to-one pairing. The particularity of the individual (monomer) work mode is that a single audio sensing node drives the establishment of a work cell, within which several video sensing nodes work in linkage to sample environmental multimedia data; this suits more accurate area monitoring of a target object within the cell, or high-precision multi-angle monitoring demands.
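The individual (monomer) mode described above can be sketched as follows. This is a hypothetical helper, not part of the patent: one audio node recruits its k nearest video nodes into a work cell, with positions as plain (x, y) tuples and nearest-first recruitment as our own assumption.

```python
import math

def build_work_cell(audio_pos, video_positions, k):
    """Return the k video-node positions nearest the driving audio node.

    audio_pos:       (x, y) of the audio node that establishes the cell.
    video_positions: list of (x, y) tuples for candidate video nodes.
    """
    ranked = sorted(video_positions,
                    key=lambda p: math.hypot(p[0] - audio_pos[0],
                                             p[1] - audio_pos[1]))
    return ranked[:k]
```

The recruited nodes would then sample the cell's target from several angles in linkage, matching the multi-angle monitoring demand the mode is meant to serve.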
Beneficial effects: compared with existing direction-guiding and inter-node cooperative-work schemes in wireless multimedia sensor networks, the present cooperative-work scheme has the following advantages:
(1) Easy to control, concise and efficient. For video sensor node sensing-direction guidance, complicated algorithms and virtual principles that cannot be applied in practice are avoided; instead, a practical Video-Event-driven way of guiding the direction of a multimedia video sensing node's effective perception zone is proposed. The direction of the sensitive zone can be changed conveniently at any time according to different monitoring demands; the whole adjustment process is convenient and swift, meets low-delay requirements, rests on actual experiment and theoretical foundations, and possesses a practical feasibility guarantee. For cooperative work between multimedia sensor nodes, the pairing protocol of video and audio sensing nodes is easy to understand and convenient to implement, offering a simple, workable dynamic execution scheme for actual wireless multimedia sensor network monitoring. Paired working nodes are selected dynamically according to the nodes' residual energy, and the process is completely transparent to monitoring personnel: commands from the monitoring terminal are transmitted immediately through the wireless multimedia sensor network, and the sampled data of the multimedia sensing nodes is received in real time without imposing any operational burden on the monitoring staff.
(2) Cost-saving, with low overhead and energy consumption. In the whole video-sensor sensing-direction guiding process the main cost is that of the "Video Event": each sensing-direction adjustment only requires placing one "Video Event" in the direction of the measured object or fixed region, while the video adjustment instruction sent by the monitoring terminal reaches the single or multiple video sensing nodes needing adjustment through the whole wireless multimedia sensor network; on receiving the instruction, a sensing node begins to execute the sensing-direction adjustment algorithm. In practice a "Video Event" can be a simple point light source that merely needs to satisfy a certain brightness and working duration, so the cost of a single "Video Event" can be controlled effectively. The pairing-protocol implementation for video and audio sensing nodes also fully considers the demand for energy efficiency: the video node with the smallest signal-attenuation value — i.e. the one nearest the local audio node — is selected as the preferred pairing object, reducing the transmission energy consumption of the pairing protocol as far as possible.
(3) Strongly targeted and highly flexible. The invention is directed at a specific monitoring demand, namely real-time video and audio surveillance of a particular measured object or fixed region. To satisfy such application scenarios, the whole sensing-direction adjustment process adapts flexibly to dynamic routing changes in the multimedia sensor network. The positions of the multimedia sensing nodes need not be changed; only the position of the Video Event does. The direction of the Video Event is the direction of the measured object or fixed region; the Video Event's position can change flexibly with different applications, and the monitoring terminal only needs to send an instruction to readjust the sensing direction for the network's monitoring target area to be re-aimed rapidly. An audio sensing node adaptively selects a nearby video node to pair with, without user participation in the whole dynamic networking adjustment process, fully demonstrating high adaptability and high flexibility.
(4) Easy post-processing. After the multimedia sensing nodes send the sampled video and audio information back to the surveillance centre, the video and audio data must be merged before being played at the terminal. Video and audio data sampled at the same time possess high time synchronism, which provides the premise and reliability guarantee for merging the different information flows at the terminal. In the present invention, the sampling-time information is inserted together with the node device identifier when the multimedia information sampled by different nodes is encoded; the device identifier of each node is unique in the whole network, similar to a MAC address of each sensing node in the wireless multimedia sensor network, so by reading the device identifier and sampling-time information the video and audio information monitoring the same measured object or monitored area at the same time can be extracted. It should be particularly pointed out that the multimedia data sampled by paired cooperating multimedia nodes carries identical sampling-time information, which provides the basis for the feasibility of the above method.
(5) High monitoring performance. The method proposed by the invention can both slow the energy consumption of individual nodes during work, prolonging node life cycles, and merge the video sampled by different nodes for the same measured object or fixed region with the audio information before playback through the monitoring-terminal software; the implementation is simple and workable, so the monitoring effect is improved effectively.
Description of drawings
Fig. 1 is a flow chart of the realization of the Video Event sensing model;
Fig. 2 is a workflow chart of video sensing node direction guidance;
Fig. 3 is an example diagram of the individual (monomer) work mode of a wireless multimedia sensor network.
Embodiment
The invention adopts a design method combining embedded software with system software; supported by the related hardware equipment, it describes the principles and processing mechanisms of video sensing node direction guidance and of paired cooperative work between multimedia video and audio sensing nodes in the whole wireless multimedia sensor network. To be clear, the following content only describes the present invention and does not limit it.
The wireless multimedia sensor node cooperative work processing method based on sensing-direction guidance in a wireless multimedia sensor network comprises the following steps:
Step 1) analyse the video sensing node perception radius.
A video sensor is one kind of directional sensor. According to the actual characteristics of a video sensing node, its perception radius is quantized into two parts, namely the clear perception radius and the fuzzy perception radius; its perception capability and intensity also differ within each sensing range.
The perception radius proposed by the present invention can thus be divided into two distinguishable notions.
A. Clear perception radius. Within this perception radius the video sensing node can clearly recognize an object entering the perception zone, sufficiently satisfies the user's environmental-monitoring demand, and can react to ordinary objects entering the sensing range; the value range of this radius is [0, R]. In general, the value of R depends on the performance of the video sensor loaded on the video sensing node itself, and its range is proportional to the video sensor's pixel count.
B. Fuzzy perception radius. Within this perception radius the video sensing node cannot clearly recognize an object entering the perception zone and cannot satisfy the user's environmental-monitoring demand, but it can react to special "Video Events". A lighting event is the typical representative of this kind of special video event; the value range of this radius is (R, +∞). Within this range the node's perception capability shows an exponential downward trend with distance, but its response to special video events is not thereby greatly affected.
The effective perception zone of a directional sensor is the sector region spreading outward from the node as the centre of the circle; this zone can rotate arbitrarily around the position of the directional sensing node.
The sensing-direction vector is the unit vector pointing from the centre of the circle at the node through the centre of gravity of the effective sensing sector region; it represents the direction of the directional sensor's effective perception zone. In a randomly scattered sensing network the initial value of this direction is also random, i.e. uniformly distributed in [0, 2π]. By adjusting the deflection of this vector, the effective perception zone of the directional sensing node can be changed.
At any moment, there are two conditions for judging whether an arbitrary point enters the effective sensing range of a directional sensing node: first, the distance from the point to the sensing node is less than the clear perception radius R; second, the angle between the line from the sensing node to the point and the sensing-direction vector is less than 1/2 of the node's own perception-zone visual angle. If both conditions are satisfied simultaneously, the sensing node is considered able to recognize an object at that point.
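The two-condition membership test above can be written down directly as a minimal geometric check. The sketch below is ours, not the patent's; coordinate conventions and parameter names are assumptions for illustration.

```python
import math

def in_effective_sensing_range(node, theta, fov, R, point):
    """Two-condition test for a directional sensing node.

    node:  (x, y) position of the sensing node
    theta: sensing-direction angle in radians
    fov:   perception-zone visual angle in radians
    R:     clear perception radius
    point: (x, y) point to test
    """
    dx, dy = point[0] - node[0], point[1] - node[1]
    # Condition 1: distance less than the clear perception radius R
    if math.hypot(dx, dy) >= R:
        return False
    # Condition 2: angle between the node-to-point line and the
    # sensing-direction vector less than half the visual angle
    diff = (math.atan2(dy, dx) - theta + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) < fov / 2
```

For a node at the origin facing along the x-axis with a 60° visual angle and R = 10, a point at (5, 0) passes both conditions while (5, 5) fails the angular one.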
Step 2) establish the Video Event induction model.
Based on the perception-radius theory of the video sensing node, the following setting is now made:
A directional sensor (taking a video sensor as the example) is placed at the origin. An intense light source exists at a distance, positioned within the fuzzy perception radius of the video sensor; that is, the video sensing node cannot clearly recognize the target object at the light source's position, nor carry out high-definition monitoring of the environment there, but it can pick out the intense light source as a "Video Event" and react to it.
At a given discrete moment, the above setting is satisfied when the angle θ between the sensing-direction vector of the video sensing node and the direction vector of the line from the origin to the light source falls within the range [−perception-zone visual angle/2, +perception-zone visual angle/2]. In that case the image sensing node judges the position of the intense light source in the currently scanned image by scanning the sampled image data. The present invention proposes a comparatively simple, workable Video Event search algorithm with low algorithmic complexity, based on average-column-value information, which will be set forth in detail in step 3).
Under normal conditions, of course, the video sensing node can equally effectively pick out a "Video Event" within the clear perception radius. It should be specified that the distance ranges involved in the present invention, such as "within the clear perception radius", refer to the sector region of radius R formed by the effective perception-zone visual angle of the video sensing node; "within the fuzzy perception radius" is defined similarly.
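Combining steps 1) and 2), a point can be classified as lying in the clear radius, in the fuzzy radius, or outside the angular field altogether. The sketch below is our own illustrative reading of those definitions (treating the fuzzy region as everything beyond R inside the same sector); it is not an algorithm stated in the patent.

```python
import math

def perception_zone_of(node, theta, fov, R, point):
    """Classify a point as 'clear', 'fuzzy', or 'outside' the sector."""
    dx, dy = point[0] - node[0], point[1] - node[1]
    diff = (math.atan2(dy, dx) - theta + math.pi) % (2 * math.pi) - math.pi
    if abs(diff) > fov / 2:
        return "outside"          # fails the angular condition entirely
    # inside the angular field: clear within R, fuzzy beyond it
    return "clear" if math.hypot(dx, dy) <= R else "fuzzy"
```

A distant light source that classifies as "fuzzy" is exactly the kind of "Video Event" the induction model reacts to.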
Step 3) apply the Video Event induction model.
After a video sensing node receives the Video Event search start instruction sent by the base-station node, it enters the snapshot mode of operation, first samples ambient image data, and traverse-scans the currently sampled single image. Suppose a Video Event is perceived when the scan reaches row X, column Y; the column value Y at which the event appears is recorded. A Video Event appearing in a given row is generally recorded by several column values, so all columns in which the event appears in that row are denoted Y_i (0 < i < image-width pixel value), and the width occupied by the Video Event in the current image directly determines the number of Y_i values. Likewise, when scanning row X+k (0 < k < image-height pixel value − X), all column values at which the Video Event is perceived are recorded, until the image scan finishes. The Y_i of each row are averaged and recorded as that row's mean column value; these per-row means are then averaged in turn, finally yielding the rectified average column value of the Video Event in the current image data.
The rectified average column value of the single image data calculated through the above steps provides the foundation and premise for the sensing-direction guiding algorithm.
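The rectified-average-column computation just described can be sketched as below. The brightness threshold used to decide that a pixel belongs to the Video Event is our assumption; the patent does not specify the per-pixel detection criterion.

```python
def rectified_average_column(image, threshold):
    """Rectified average column value of the bright 'Video Event' pixels.

    image: 2D list of brightness values, indexed image[row][col].
    For every row containing event pixels (brightness >= threshold),
    average their column indices Y_i; then average those per-row means.
    Returns None if no event pixel is found anywhere in the image.
    """
    row_means = []
    for row in image:
        cols = [j for j, v in enumerate(row) if v >= threshold]
        if cols:                              # this row saw the event
            row_means.append(sum(cols) / len(cols))
    if not row_means:
        return None
    return sum(row_means) / len(row_means)
```

For a bright blob spanning columns 3–5, every contributing row averages to 4.0, and so does the final rectified value; the direction-guiding step then compares this value against the image's horizontal centre.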
Step 4) video sensor direction guidance in the wireless multimedia sensor network.
The final purpose of the whole direction-guiding method is to make the sensing direction of the video sensing node face the measured object or fixed region. Equivalently, after the adjustment, the angle between the sensing-direction vector of the node and the line from the node to the Video Event falls within [−perception-zone visual angle/2, +perception-zone visual angle/2], and the angle is narrowed as far as possible toward zero, so that the Video Event lies as close as possible to the centre of the sampled video's effective coverage, thereby improving the controllability of monitoring. The method proposed by the present invention is driven solely by the Video Event and needs no manual adjustment.
Send Video Events search enabled instruction through base-station node, the video sensing node that receives instruction starts the Video Events searching algorithm, to being described below of Video Events searching algorithm:
Predefine ε as a small positive value, and set γ so that it is greater than zero and less than ε; also define M as one half of the pixel width of the image data sampled by the video sensing node in the current snapshot mode. The sensing node first enters snapshot mode and samples a single frame of image data; the system then immediately generates a value in the range [-1, +1] and assigns it to the variable RAND. The sampled frame is traversed using the method of step 3). If no video event can be identified, the value of RAND is examined: if RAND is greater than zero, the sensing direction vector of the video sensing node is rotated clockwise by γ; if RAND is less than zero, it is rotated counterclockwise by γ. After the rotation, the node re-enters the snapshot working mode, samples another single frame, and repeats the above operations. As soon as a video event is identified while traversing a sampled frame, the rectified average column value of the current scanned image is computed according to the method described in step 3), and Δd is defined as the absolute value of the difference between M and the rectified average column value. If Δd is greater than ε, the system again examines RAND: if RAND is greater than zero, the sensing direction vector of the video sensing node is rotated clockwise by γ; if RAND is less than zero, it is rotated counterclockwise by γ. The node then re-enters snapshot mode, samples a single frame, and repeats the above flow until Δd is less than ε, at which point the whole sensing-direction guidance adjustment process ends. The complete workflow is shown in Figure 2. The purpose of the whole flow is that, whenever no video event can be identified in a single frame of image data, the sensing direction of the video sensing node is adjusted moderately, and the steps are repeated until the video event in the currently sampled single frame is computed to lie close to the center position.
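The search loop above can be sketched in a few lines. This is a hedged illustration, not the patent's implementation: `sample_frame`, `identify_event`, `rectified_avg_column`, and `rotate` are assumed callbacks standing in for the snapshot sampling, the step-3 traversal, the rectified-average-column computation, and the pan actuator, and the default values of ε, γ, and the frame width are arbitrary.

```python
import random

def search_video_event(sample_frame, identify_event, rectified_avg_column,
                       rotate, epsilon=5.0, gamma=2.0, frame_width=640,
                       max_iters=1000):
    """Rotate the sensing direction until the event is near the image center."""
    m = frame_width / 2.0          # M: half the pixel width of a sampled frame
    for _ in range(max_iters):
        frame = sample_frame()      # enter snapshot mode, sample one frame
        rand = random.uniform(-1.0, 1.0)   # RAND in [-1, +1]
        if not identify_event(frame):
            # no event identified: rotate clockwise (RAND > 0) or
            # counterclockwise (RAND < 0) by gamma and sample again
            rotate(gamma if rand > 0 else -gamma)
            continue
        # event identified: check how far it sits from the image center
        delta_d = abs(m - rectified_avg_column(frame))
        if delta_d < epsilon:
            return True             # event close enough to the center; done
        rotate(gamma if rand > 0 else -gamma)
    return False                    # gave up after max_iters adjustments
```

Note that the rotation direction is drawn at random each iteration, exactly as the text describes, so convergence is a random walk rather than a directed correction.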
Step 5) Video and audio sensor pairing cooperation method.
In a wireless multimedia sensor network, comprehensive monitoring of a measured object or fixed region requires not only video and image information but also corresponding audio information to improve monitoring quality. How to make the video sensing nodes and audio sensing nodes in the wireless multimedia sensor network cooperate effectively therefore becomes a key factor affecting monitoring quality. After initial setup is complete, a video sensing node and an audio sensing node must complete a pairing process before they can begin to interact with each other.
In a typical wireless multimedia sensor network, video sensing nodes are directional sensing nodes while audio sensing nodes are omnidirectional sensing nodes. Regarding monitoring range: the per-unit-time energy consumption of a video sensing node is far greater than that of an audio sensing node, and for the same area (taking the clear perception radius for video sensing nodes) the effective sensing coverage of an omnidirectional node clearly exceeds that of a directional node; that is, to cover the same area, fewer omnidirectional sensing nodes are needed than directional ones. It follows that, to achieve the same coverage in practice, the number of omnidirectional sensing nodes actually deployed should be smaller than the number of directional sensing nodes. The present invention therefore has the audio sensing nodes, which are fewer in number and consume less energy per unit time, actively initiate the pairing cooperation request.
Before pairing begins, the direction guidance of the video sensing nodes has been completed, and during initialization the monitoring center has broadcast a packet carrying the current start time, used to synchronize the start time and sampling moments of every multimedia sensing node in the network. The pairing enable instruction issued by the monitoring center is broadcast to the audio sensing nodes; video sensing nodes also receive this instruction but simply discard it without processing. Upon receiving the instruction, the local audio sensing node sends a pairing request signal. The video sensing nodes within one hop of the local audio sensing node respond to this pairing request, each appending its own device identification number to the tail of the request signal and sending it back to the audio sensing node. To avoid conflicts, nearby non-local audio sensing nodes that receive these pairing feedback messages first check the original pairing request number in the message header; this field is uniquely identified by the device identification number of the local audio sensing node that initiated the pairing request, and any message whose request number does not match their own is discarded. The local audio sensing node records the device identification numbers and corresponding RSSI values that may appear in the feedback messages from the nearby video sensing nodes, then finds the device identification number with the smallest signal attenuation value, appends it to the tail of the feedback message, and sends the message out again. Every video sensing node within one hop of the local audio sensing node's signal coverage receives this message and checks the device identification number at the message tail to determine whether it is the target of the successful pairing request. If not, it discards the packet without processing it and continues to listen on the network for pairing request messages sent by other audio sensing nodes. The target video sensing node then sends a message confirming the successful pairing back to the local audio sensing node. After receiving the confirmation, the local audio sensing node sends a monitoring start message carrying the target video sensing node's device identification number and this pairing number, and then starts its own audio monitoring to begin sampling environmental audio. Upon receiving the monitoring start message, the target video sensing node paired with the local audio sensing node begins sampling environmental video synchronously. When the sampling time reaches the duration specified at initialization, both the audio and video sensing nodes finish this round of environmental sampling, add the sampling start time and the pairing number to their data packet headers, and send the data back to the base-station node in multi-hop fashion, so that the monitoring terminal software can fuse and play back the video and audio information sharing the same sampling start time and pairing number.
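The partner-selection step performed by the local audio sensing node can be sketched as below. This is an illustrative assumption, not the patent's code: the `(device_id, rssi)` tuple format is invented for the example, and the patent only specifies picking the responder with the smallest signal attenuation.

```python
def select_pairing_partner(responses):
    """Pick the video node whose reply shows the least signal attenuation.

    responses: list of (device_id, rssi_attenuation) tuples collected from
    the video sensing nodes that answered the pairing request.
    Returns the chosen device_id, or None if no node responded.
    """
    if not responses:
        return None
    # a smaller absolute attenuation value means a stronger, closer link
    best_id, _ = min(responses, key=lambda r: abs(r[1]))
    return best_id
```

The chosen identifier is what the audio node appends to the tail of its re-broadcast feedback message so that exactly one video node recognizes itself as the pairing target.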
Step 6) Working modes adapted to different application demands.
After the multimedia sensing nodes are deployed in the guarded region and sensing-direction guidance has been performed for a given measured object or fixed region, the cooperative work of the video and audio sensing nodes can switch between two working modes, monomer and group, according to different application requirements.
In the group working mode, similar to the situation described in step 5), one audio sensing node drives a video sensing node through one-to-one pairing to cooperatively sample environmental multimedia information, achieving region monitoring. This working mode suits region monitoring over a larger range and realizes a coarse panoramic coverage of the monitoring target. The monomer working mode differs from the group working mode: when the network device operating times are synchronized at initialization, a video event is placed near the measured object or guarded region, and at the same time one audio sensing node is deployed as a drive node such that the guarded region lies within the effective coverage of that single audio sensing node; the device identification number of this audio sensing node is recorded. The role of the drive node is to drive the surrounding video sensing nodes to self-organize into a cell within its neighborhood. The monitoring terminal sends the video-event search instruction; after sensing-direction guidance finishes, the monitoring terminal sends, through the base-station node, a packet carrying the network-wide unique device identification number of the drive node. Every non-drive node that receives this packet checks whether its own device identification number matches the one in the packet header; if not, it forwards the packet without processing it. Upon receiving this packet, the drive node immediately starts the cell search algorithm, whose detailed procedure is as follows:
The drive node sends a cell search packet containing its own device identification number. Every video sensing node within the effective signal coverage of the drive node responds to this packet, appending its own device identification number to the packet tail and feeding it back to the drive node. Because this packet is not forwarded by any other multimedia sensing node within one hop, every video sensing node in the drive node's effective signal coverage becomes a candidate child node for cell establishment. From the packets fed back by the candidate child nodes, the drive node extracts each neighboring video sensing node's device identification number and RSSI signal attenuation value, selects all nodes whose RSSI absolute value is below a chosen empirical threshold, encapsulates their device identification numbers into a new cell establishment request packet, and sends it to each candidate child node in the cell. Upon receiving the cell establishment request packet sent by the drive node, each candidate child node checks whether its own device identification number appears in the packet; if so, it appends its device identification number to the packet tail again and feeds the packet back to the drive node. From all the feedback it receives, the drive node extracts the confirmed device identification numbers of the candidate child nodes, encapsulates them in a monitoring start instruction packet carrying the cell identification information, sends this packet, and then begins sampling environmental audio. Each candidate child node in the cell begins sampling environmental video immediately after receiving the monitoring start instruction packet sent by the drive node. After sampling finishes, each node encapsulates the sampling start time of this round together with the cell identification number into its environmental multimedia data packet and sends it back to the base-station node, so that the terminal software can play back multi-angle video together with synchronized audio.
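The drive node's membership filtering can be sketched as follows. This is a minimal illustration under stated assumptions: the tuple format and the threshold value of 60 are invented for the example; the patent only specifies admitting neighbors whose RSSI absolute value falls below an empirical threshold.

```python
def build_cell(responses, rssi_threshold=60):
    """Select candidate child nodes for the cell from search responses.

    responses: list of (device_id, rssi_attenuation) tuples fed back by the
    video sensing nodes within the drive node's one-hop signal coverage.
    Returns the device identification numbers admitted into the cell,
    i.e. those whose attenuation magnitude is below the threshold.
    """
    return [dev for dev, rssi in responses if abs(rssi) < rssi_threshold]
```

The returned list is what the drive node would encapsulate into the cell establishment request packet for the candidate child nodes to confirm.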

Claims (1)

  1. A node cooperative work method based on sensing-direction guidance in an Internet of Things environment, characterized in that the method comprises the following steps:
    Step 1) Video sensing node perception radii: a clear perception radius and a fuzzy perception radius are used to qualitatively measure the sensing range of the video sensor, with distinguishable perception characteristics within each range; that is, within the clear perception radius, the video sensing node can clearly identify objects entering the perception zone, sufficiently satisfying the user's environmental-monitoring demands, and can respond to ordinary objects entering the sensing range; within the fuzzy perception radius, it can respond to an intense light source treated as a special video event;
    Step 2) Video event induction model: an intense light source is placed at a distance, positioned within the fuzzy perception radius of the video sensor; at a given discrete moment, the above setup is satisfied when the angle between the sensing direction vector of the video sensing node and the direction vector from the origin to the light source falls within the range [-perception-zone view angle/2, +perception-zone view angle/2];
    Step 3) Application of the video event induction model: after entering the snapshot working mode, the video sensing node first samples a single frame of environmental video data and traverse-scans the currently sampled frame; after a video event is identified, the rectified average column value of this video event in the current single frame is computed for use in subsequent steps; whether the video event lies within the clear or the fuzzy perception radius of the video sensing node, the principle of the induction model can be used to identify video events by traverse-scanning the sampled single frame;
    Step 4) Video sensor direction guidance in the wireless multimedia sensor network: the purpose of the direction guidance method is to make the sensing direction of the video sensing node face the measured object or the fixed region to be measured; after direction guidance, the angle between the line from the sensing node to the video event and the sensing direction vector of the sensing node falls within the range [-perception-zone view angle/2, +perception-zone view angle/2], and the angle is reduced as much as possible to achieve a better monitoring effect;
    Step 5) Video and audio sensor pairing cooperation method: the audio sensing nodes, which are fewer in number across the whole network and consume less energy per unit time, actively initiate the pairing cooperation request, and one audio sensing node and one video sensing node cooperatively complete the paired environmental monitoring;
    Step 6) Working modes adapted to change: under application scenarios of different scales, different working modes can be adopted, namely the group and monomer working modes for different monitoring ranges and monitoring demands; in the group working mode, as described in step 5), one audio sensing node drives a video sensing node through one-to-one pairing to cooperatively sample environmental multimedia information, achieving region monitoring; the special feature of the monomer working mode is that a single audio sensing node acts as the drive node to establish a working cell, within which multiple video sensing nodes sample environmental multimedia data in linkage, so as to satisfy more precise region monitoring of a target object or multi-angle high-precision monitoring demands.
CN2010101557993A 2010-04-23 2010-04-23 Node cooperative work method based on sensing direction guide in internet of things environment Active CN101841930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101557993A CN101841930B (en) 2010-04-23 2010-04-23 Node cooperative work method based on sensing direction guide in internet of things environment


Publications (2)

Publication Number Publication Date
CN101841930A CN101841930A (en) 2010-09-22
CN101841930B true CN101841930B (en) 2012-04-11

Family

ID=42744938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101557993A Active CN101841930B (en) 2010-04-23 2010-04-23 Node cooperative work method based on sensing direction guide in internet of things environment

Country Status (1)

Country Link
CN (1) CN101841930B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108121286A (en) * 2017-12-19 2018-06-05 大连威迪欧信息技术有限公司 A kind of method of environmental monitoring and system based on Internet of Things

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102281608B (en) * 2011-07-04 2013-12-11 南京邮电大学 Wireless sensor network clustering routing method based on fuzzy control
CN103188707B (en) * 2013-03-12 2015-06-17 南京邮电大学 Path coverage monitoring method for wireless multimedia sensor network
CN104468715B (en) * 2014-10-31 2018-09-04 广东工业大学 A kind of manufacturing industry Internet of things node collaboration storage method
TWI622896B (en) * 2015-12-23 2018-05-01 絡達科技股份有限公司 Electric device responsive to external audio information
TWI762465B (en) * 2016-02-12 2022-05-01 瑞士商納格維遜股份有限公司 Method and system to share a snapshot extracted from a video transmission
CN105721296A (en) * 2016-02-23 2016-06-29 重庆邮电大学 Method for improving stability of chain structure ZigBee network
CN106100866B (en) * 2016-05-27 2021-04-02 上海物联网有限公司 Intelligent detection device, configuration device and method based on regional linkage

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2767605Y (en) * 2005-01-01 2006-03-29 薛恒伟 Intelligent cabinet control device
CN101140646A (en) * 2007-11-05 2008-03-12 陆航程 'Data great tracking' tax controlling system and tax controlling terminal based on EPC, EBC article internet

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6769127B1 (en) * 2000-06-16 2004-07-27 Minerva Networks, Inc. Method and system for delivering media services and application over networks


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Baoyun. "A Survey of Internet of Things Technology Research." Journal of Electronic Measurement and Instrumentation, 2009, pp. 1-7. *




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20100922

Assignee: Jiangsu Nanyou IOT Technology Park Ltd.

Assignor: Nanjing Post & Telecommunication Univ.

Contract record no.: 2016320000217

Denomination of invention: Node cooperative work method based on sensing direction guide in Internet of things environment

Granted publication date: 20120411

License type: Common License

Record date: 20161118

LICC Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model
EC01 Cancellation of recordation of patent licensing contract

Assignee: Jiangsu Nanyou IOT Technology Park Ltd.

Assignor: Nanjing Post & Telecommunication Univ.

Contract record no.: 2016320000217

Date of cancellation: 20180116

EC01 Cancellation of recordation of patent licensing contract