US20150348235A1 - Distributed path planning for mobile sensors - Google Patents
- Publication number
- US20150348235A1 (application US 14/294,417)
- Authority
- US
- United States
- Prior art keywords
- resolution
- sensor
- sensors
- environment
- imaging system
- Prior art date
- 2014-06-03
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4092—Image resolution transcoding, e.g. by using client-server architectures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/28—Investigating the spectrum
- G01J3/2823—Imaging spectrometer
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/87—Combinations of radar systems, e.g. primary radar and secondary radar
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/40—Means for monitoring or calibrating
- G01S7/4004—Means for monitoring or calibrating of parts of a radar system
- G01S7/4021—Means for monitoring or calibrating of parts of a radar system of receivers
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
- G05D1/104—Simultaneous control of position or course in three dimensions specially adapted for aircraft involving a plurality of aircrafts, e.g. formation flying
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0007—Image acquisition
-
- G06T7/0018—
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
- G01C11/06—Interpretation of pictures by comparison of two or more pictures of the same area
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/28—Investigating the spectrum
- G01J2003/2866—Markers; Calibrating of scan
- G01J2003/2879—Calibrating scan, e.g. Fabry Perot interferometer
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/89—Radar or analogous systems specially adapted for specific applications for mapping or imaging
- G01S13/90—Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10036—Multispectral image; Hyperspectral image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10044—Radar image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Theoretical Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Aviation & Aerospace Engineering (AREA)
- Automation & Control Theory (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Image Analysis (AREA)
- Studio Devices (AREA)
Abstract
Description
- The invention relates generally to mobile sensors, and more particularly to distributed path planning for the sensors.
- Mobile sensors can be used to acquire images in a coordinated manner. The images can then be used in applications such as surveillance, cartography, and environmental monitoring. One problem in such systems is planning the paths that the sensors should follow, known as path planning.
- In one such system with holonomic robots, where the controllable degrees of freedom equal the total degrees of freedom, anisotropic sensors with a bounded footprint are considered; see Hexsel et al., "Distributed Coverage Control for Mobile Anisotropic Sensor Networks," Tech. Report CMU-RI-TR-13-01, Robotics Institute, Carnegie Mellon University, January 2013. That system models a 2-dimensional (2D) environment as a polygon, possibly containing obstacles. A fixed objective function maximizes a joint probability to detect objects. The objective function uses an a priori fixed density function that represents the importance of each point in the environment.
- The embodiments of the invention provide a method for planning paths of a set of mobile sensors, which can be, for example, airborne, ground-based, or underwater. Each sensor includes an imaging system for imaging an environment. The imaging system can use optical imaging, synthetic aperture radar (SAR), hyperspectral imaging, or physical aperture radar imaging, for example. Images acquired by the sensors can be used in surveillance, cartography, and monitoring applications, among others.
- At any time instant, each sensor moves to a next position and orientation so that the resolution of the imaging system provides optimal coverage of the environment, according to a pre-specified desired resolution at each point in the environment. As used herein, the resolution depends on the size and density of pixels in the images. It should be noted that the coverage can be complete or partial, and images can overlap or not.
- In a distributed manner, the motion of each sensor and its orientation are optimized by minimizing a cost function that characterizes how well the environment has been imaged, compared to the pre-specified resolution. To perform this optimization, each sensor communicates with neighboring sensors, and exchanges local information. The images acquired by the sensors can be combined into an overall image of the environment to achieve the desired resolution.
- FIG. 1 is a schematic of an example airborne mobile sensor system according to embodiments of the invention;
- FIG. 2 is a side view of a sensor and environment in sensor coordinates according to embodiments of the invention;
- FIG. 3 is a table giving variables, descriptions, and exemplar values used by a model according to embodiments of the invention;
- FIG. 4A is a schematic of an example environment according to embodiments of the invention;
- FIG. 4B is a schematic of an irregular example environment;
- FIG. 5 is a top view of the sensor footprint in sensor coordinates according to embodiments of the invention; and
- FIG. 6 is a flow diagram of a method for planning paths of the sensors shown in FIG. 1 according to embodiments of the invention.
- FIG. 1 shows a set of mobile sensors 100 according to embodiments of our invention. The sensors can be airborne, ground-based, or underwater, among others. The sensors can, for example, be arranged in indoor or outdoor drones, aircraft, or satellites. Each sensor includes a processor 101, an imaging system 102, and a communication system 103. The imaging system has a known footprint for imaging an environment 400, which may depend on the orientation of the sensor. The footprint is a projection of the imaging plane, such as a camera image plane or a radar beam pattern, onto the environment. The imaging system can use, among others, optical imaging, synthetic aperture radar (SAR), hyperspectral imaging, or physical aperture radar imaging. For this reason, the term "image" is used broadly.
- The sensors move along paths 102 to image the environment 400. The communication system includes a transceiver so that the sensors can communicate with each other over channels 105. In the preferred embodiment, the channels are wireless using, e.g., radio or optical signals. By exchanging information between neighboring sensors, i.e., sensors within communication range, the sensors can perform the path planning in a distributed manner. However, it should be understood that other communication techniques can also be used; for example, some or all of the sensors can use digital fine wire tethers.
- It is an objective to provide an image resolution over the environment that achieves a specified value at each point in the environment. As used herein, the resolution depends on the size and density of pixels in the images. The model uses a subadditive function to model the resolution provided by overlapping footprints 104. The model also uses an objective function that is time varying. At each time j, the objective function provides a measure of the difference between the desired resolution and the resolution achieved up to the previous time j−1.
- Sensor Model
- FIG. 2 is a side view of a sensor and environment in sensor coordinates. A point z in the environment is located on a line bisecting an angle γv. The table in FIG. 3 gives the variables 301, descriptions 302, and exemplar values 303 used by our model. The variables include a height 311 of the sensor, horizontal 312 and vertical 313 angular widths, position 314 of the sensor, and declination 315 and azimuth 317 angles. The angles specify the orientation of the sensors. In the described embodiment there are two degrees of freedom; however, three degrees are not precluded.
- For convenience, most of the subsequent computations use the angle ψ 316 to measure the declination, which is related to the actual declination angle φ by ψ = 90° − φ. The labeled ranges in FIG. 2 are given by the following:
- Zmin = H tan(ψ − γv/2), z = H tan(ψ), Zmax = H tan(ψ + γv/2). (1)
FIG. 4A , an example regular environment is a 100×100 polygonal region Q in the xy plane. Anarbitrary point 401 in the environment is labeled qεQ. In sensor coordinates, the environment is the zy plane, which is a translated and rotated version of the x plane. A given sensor is located at a height H above the origin of the zv plane and the angle ψ is measured with respect to the z axis. The height H, and the x, y location specify the position of the sensors. The angles φ and ψ specify the orientation. -
- FIG. 4B shows an example irregular environment 500.
-
FIG. 5 is an example of a top view of thesensor footprint 104 in sensor coordinates specific to an optical sensor. The footprint extends from Zmin to Zmax as shown inFIG. 2 . The footprint is a polygon defined by the four labeled vertices (1, 2, 3, 4). In sensor coordinates, the footprint vertices - (zk, yk) corresponding to the kth vertex are
-
- Let S(θ) be the rotation matrix defined below
-
- The global coordinates (x, y) of a point (z, y) in the sensor footprint are obtained by rotating the point by the azumith angle θ, and translating the footprint by the camera location (cx, cy):
-
- The four vertices in equation (2) defining the
sensor footprint 104 can be transformed into global coordinates using equation (4). The sensor footprint is defined by four variables (cx, cy, θ, φ). The first two variables are the projection of the sensor position onto the environment plane, and the last two parameters are the horizontal and vertical angular variables. Other sensors may have different footprint shapes, with the footprint shape and size depending on the sensor orientation. - In most practical embodiments, the position parameters are updated on a relatively slow time scale because these parameters correspond to the physical position of the sensor, while the angular variables are updated on a relatively fast time scale because the angles can quickly change values. However, in some embodiments, position parameters and angle parameters might change values at the same time scale, or angle parameters might change values at a slower time scale.
- Subadditive Combination of Overlapping Sensors
- Assume that sensor i provides a resolution ri in the footprint Fi, i = 1, . . . , N. The problem is to model the resolution obtained in an intersection of completely or partially overlapping footprints. The best, and unrealistically optimistic, situation is for the overall resolution to be the sum of the individual resolutions. The worst, and unrealistically pessimistic, situation is for the overall resolution to equal the maximum of the individual sensor resolutions. The actual overall resolution is somewhere between these extremes.
- That is, if
-
- r = [r1 r2 . . . rN] (5)
-
- One example of a function that satisfies this property is the lp norm of the vector r, 1<p<∞,
-
- where 1<p<∞. When p=1, the lp norm equals the upper bound in equation (6). When p=∞, the lp norm equals the lower bound in equation (6). Thus, a particular example of a subadditive model for the resolution obtained by overlapping sensors is the lp norm of a vector of individual resolutions, where 1<p<∞. Other embodiments can use different subadditive functions to model how images of different resolutions are combined.
- Objective Function and Optimization
- Let φd(q) be the desired resolution defined at every
point q 401 in theenvironment 400. Let xj be a vector of the position variables of all of the sensors at time j. Let ψj and θj be vectors corresponding to the vertical (declination) and horizontal (azimuth) angular variables at time j, respectively, of all of the sensors. Let Ri be the resolution provided by the ith sensor at all points in its footprint Fi, which is defined by sensor variables (cxi,cyi,θi,ψi) as -
- where K is a sensor constant that depends on the number of pixels in an acquired image. If all of the sensors have the same value of K, then the value is unimportant for the optimization described below.
- At any time j, the objective function we minimize is a difference between the desired resolution and an achieved resolution up to time j−1 according to the following function:
-
- where φj-1(q) is the resolution achieved by the sensors up to time j−1, p defines the norm used to model a subadditive combination of overlapping footprints, and ƒ(x) is a penalty function that penalizes deviation from the desired resolution.
- For example, in one embodiment, ƒ(x)=x2. This penalty function penalizes the achieved resolution when the resolution is lower or greater than the desired resolution. This forces the sensors to move to a different area of the environment when some of the sensors have been mapped to a sufficient resolution.
- In another embodiment,
-
- This penalty function penalizes the achieved resolution only when it has not attained the desired resolution, which enables the sensor to continue improving the resolution of the imaged area beyond the pre-specified desired resolution. Of course, other embodiments may use other penalty functions.
- Path Planning
-
FIG. 6 shows a method for path planning according to embodiments of the invention. By definition, the initial achieved resolution φ0(q) is identically zero. - A gradient-based optimization is described by the following initialization and iterative steps. At each time j, a complete gradient-based minimization with respect to the angle parameters of the sensors is performed. However, sensor positions are updated using only a single gradient step. The reason is that after the sensors have moved and acquired new data, the objective function has changed.
- Initialization
- Given the desired resolution φd(q), and a vector x0 of initial sensor positions, initial sensor angles are determined 605 by the following
optimization 611 -
- The initial position gradient g0 is the gradient with respect to x of G0(x, θ0, ψ0) evaluated at x0.
- Iteration 650 j=1, 2, . . . at Each Sensor for Each Time Step
- Acquire 610
images 601 from allsensors 100, and update 620 an achievedresolution 621 according to -
- φj(q) = [φj-1(q)^p + Σi Ri(cxi, cyi, θi, ψi, q)^p]^(1/p). (12)
negative position gradient 631 -
- xj = xj-1 − α gj-1, (13)
- where α is a step size, and gj-1 is the position gradient at the previous time, evaluated at position xj-1. It should be understood that the move to the next position and orientation can be a "null" move, i.e., the sensor remains in place.
- Update 640 the sensor angular parameters and the position gradient 641 according to
- (θj, ψj) = arg min over (θ, ψ) of Gj(xj, θ, ψ), (14)
- gj = ∇x Gj(x, θj, ψj) evaluated at x = xj, (15)
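- Pulling these steps together, one pass of the iteration can be sketched as follows. This is a centralized, minimal sketch: G, update_resolution, alpha, and steps are hypothetical stand-ins, whereas the method of the invention evaluates each sensor's terms distributedly, using information exchanged with neighboring sensors.

```python
import numpy as np
from scipy.optimize import minimize, approx_fprime

def plan_paths(G, update_resolution, x, theta, psi, g, alpha=0.1, steps=50):
    # One centralized pass over equations (12)-(15); g is the initial
    # position gradient g0 from the initialization.
    n = len(theta)
    for j in range(1, steps + 1):
        update_resolution(j, x, theta, psi)       # achieved resolution, eq. (12)
        x = x - alpha * g                         # single gradient step, eq. (13)
        result = minimize(lambda a: G(j, x, a[:n], a[n:]),
                          np.concatenate([theta, psi]))  # angle update, eq. (14)
        theta, psi = result.x[:n], result.x[n:]
        g = approx_fprime(x, lambda xx: G(j, xx, theta, psi), 1e-6)  # eq. (15)
    return x, theta, psi
```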
j+ 1. - Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
Claims (15)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/294,417 US9218646B1 (en) | 2014-06-03 | 2014-06-03 | Distributed path planning for mobile sensors |
JP2015101802A JP6468941B2 (en) | 2014-06-03 | 2015-05-19 | How to plan the path of a set of sensors in the environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/294,417 US9218646B1 (en) | 2014-06-03 | 2014-06-03 | Distributed path planning for mobile sensors |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150348235A1 true US20150348235A1 (en) | 2015-12-03 |
US9218646B1 US9218646B1 (en) | 2015-12-22 |
Family
ID=54702390
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/294,417 Active 2034-06-14 US9218646B1 (en) | 2014-06-03 | 2014-06-03 | Distributed path planning for mobile sensors |
Country Status (2)
Country | Link |
---|---|
US (1) | US9218646B1 (en) |
JP (1) | JP6468941B2 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9880561B2 (en) | 2016-06-09 | 2018-01-30 | X Development Llc | Sensor trajectory planning for a vehicle |
US10265850B2 (en) * | 2016-11-03 | 2019-04-23 | General Electric Company | Robotic sensing apparatus and methods of sensor planning |
CN110017790B (en) * | 2019-03-15 | 2021-02-09 | 南京航空航天大学 | Curved surface scanning track generation and optimization method based on measurement precision |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006277007A (en) * | 2005-03-28 | 2006-10-12 | Mitsubishi Electric Corp | Observation satellite, satellite communication ground station, and observation satellite system |
US8184157B2 (en) * | 2005-12-16 | 2012-05-22 | Siemens Corporation | Generalized multi-sensor planning and systems |
US7676064B2 (en) * | 2006-05-17 | 2010-03-09 | The Boeing Company | Sensor scan planner |
US8451333B2 (en) * | 2007-08-06 | 2013-05-28 | Frostbyte Video, Inc. | Video capture system and method |
WO2012056537A1 (en) * | 2010-10-27 | 2012-05-03 | 三菱電機株式会社 | Programmable controller |
2014
- 2014-06-03 US US14/294,417 patent/US9218646B1/en active Active
2015
- 2015-05-19 JP JP2015101802A patent/JP6468941B2/en active Active
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107063468A (en) * | 2017-04-27 | 2017-08-18 | 杭州天铂红外光电技术有限公司 | Air navigation aid and device that substation equipment infrared chart is shot |
CN110160652A (en) * | 2018-04-17 | 2019-08-23 | 云南电网有限责任公司昭通供电局 | A kind of thermal imaging system data transmission method |
CN108759840A (en) * | 2018-05-25 | 2018-11-06 | 北京建筑大学 | A kind of indoor and outdoor integrated three-dimensional navigation path planning method |
US10852421B1 (en) * | 2019-01-24 | 2020-12-01 | Descartes Labs, Inc. | Sparse phase unwrapping |
US11635510B1 (en) * | 2019-01-24 | 2023-04-25 | Descartes Labs, Inc. | Sparse phase unwrapping |
CN109885087A (en) * | 2019-03-12 | 2019-06-14 | 中国人民解放军军事科学院国防科技创新研究院 | The double star short distance formation method of micro-nano satellite |
CN111683375A (en) * | 2020-05-08 | 2020-09-18 | 北京科技大学 | Unmanned aerial vehicle deployment optimization method for unmanned aerial vehicle-assisted wireless cellular network |
US20230066768A1 (en) * | 2021-08-25 | 2023-03-02 | Rockwell Collins, Inc. | Airborne sensor to sensor information sharing technique |
US12038500B2 (en) * | 2021-08-25 | 2024-07-16 | Rockwell Collins, Inc. | Airborne sensor to sensor information sharing technique |
FR3129003A1 (en) * | 2021-11-10 | 2023-05-12 | Universite De Lorraine | Method for determining optimized positions of an object for following a trajectory in a simulated environment. |
WO2023083577A1 (en) * | 2021-11-10 | 2023-05-19 | Universite De Lorraine | Method for determining optimised positions of an object for tracking a path in a simulated environment |
Also Published As
Publication number | Publication date |
---|---|
JP6468941B2 (en) | 2019-02-13 |
JP2015230729A (en) | 2015-12-21 |
US9218646B1 (en) | 2015-12-22 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| AS | Assignment | Owner name: MITSUBISHI ELECTRIC RESEARCH LABORATORIES, INC., M; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: BOUFOUNOS, PETROS; VACCARO, RICHARD; BENOSMAN, MOUHACINE; SIGNING DATES FROM 20140609 TO 20141124; REEL/FRAME: 035523/0210
| STCF | Information on status: patent grant | Free format text: PATENTED CASE
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 4
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 8