WO2019010524A1 - Method and system for imaging a scene - Google Patents

Method and system for imaging a scene

Info

Publication number
WO2019010524A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
generating
field
code
generator
Prior art date
Application number
PCT/AU2018/050708
Other languages
French (fr)
Inventor
Xiaopeng Wang
Branka Vucetic
Zihuai Lin
Original Assignee
The University Of Sydney
Priority date
Filing date
Publication date
Priority claimed from AU2017902693A external-priority patent/AU2017902693A0/en
Application filed by The University Of Sydney filed Critical The University Of Sydney
Publication of WO2019010524A1 publication Critical patent/WO2019010524A1/en

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/3761Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35 using code combining, i.e. using combining of codeword portions which may have been transmitted separately, e.g. Digital Fountain codes, Raptor codes or Luby Transform [LT] codes
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging

Definitions

  • This disclosure relates to methods and systems for imaging a scene.
  • Imaging scenes has become increasingly important in the area of machine vision.
  • autonomous cars rely heavily on detecting other vehicles and obstacles in their vicinity.
  • While optical cameras, such as active infrared or passive visible cameras, are accurate on clear days, they become unusable in heavy fog or other bad weather conditions. Radar systems penetrate bad weather better but are complex and have poor spatial resolution especially for small antennas.
  • a method for imaging a scene comprises: generating a field to illuminate the scene;
  • sensing by a sensor directed at the scene, a response from the scene; and determining an image of the scene based on the response sensed by the sensor, wherein generating the field is based on a linear code and determining the image of the scene is based on the response sensed by the sensor and the linear code.
  • the linear code allows error correction when the image is determined. As a result, the determined image is more accurate even under high noise levels compared to existing methods. Further, linear code implementations are available that are computationally efficient.
  • the method may further comprise repeating the method to generate multiple sensor values indicative of the sensed response at respective points in time, wherein determining the image may comprise processing the multiple sensor values by using a decoding method corresponding to the linear code.
  • the linear code may be a block code.
  • the block code may be a fountain code.
  • the fountain code may be a Luby Transform code.
  • Generating the field may comprise: randomly determining a generator value for each of multiple locations; and generating the field based on the generator value for each of the multiple locations.
  • Randomly generating the generator value may comprise randomly generating a binary value for each of the multiple locations.
  • Randomly generating the binary value for each of the multiple locations may comprise randomly determining a degree indicative of the number of locations associated with a positive binary value.
  • Randomly determining the degree may be based on a Robust Soliton Distribution (RSD).
  • the method may further comprise repeating the steps of generating a field value for each of the multiple locations for multiple respective points in time.
  • the method may further comprise generating multiple sensor values indicative of the sensed response at respective points in time; and processing the multiple sensor values by using a decoding method that applies an XOR operation between a selected sensor value and the remaining sensor values.
  • the method may further comprise generating multiple sensor values indicative of the sensed response at respective points in time; and processing the multiple sensor values by using a decoding method that is based on belief propagation.
  • the linear code may be a convolutional code.
  • Generating the field may comprise determining a generator value for each of the multiple locations based on a generator polynomial; and generating the field based on the generator value for each of the multiple locations.
  • Determining the generator value may comprise generating a binary value for each of the multiple locations.
  • Generating the generator value for each of the multiple locations may be based on selected generator values generated for a previous point in time and may comprise applying the generator polynomial to the selected generator values to determine additional generator values for a subsequent point in time.
  • the method may further comprise generating multiple sensor values indicative of the sensed response at respective points in time, wherein determining the image may comprise processing the multiple sensor values by using a decoding method based on a Viterbi decoder.
  • the field may be an electromagnetic field.
  • the sensed response may comprise sensed radiation that represents the response.
  • a system for imaging a scene comprising: a field generator to generate a field that illuminates the scene; a sensor directed at the scene to sense a response from the scene; and a processor to determine an image of the scene based on the response sensed by the sensor, wherein generating the field is based on a linear code and determining the image of the scene is based on the response sensed by the sensor and the linear code.
  • FIG. 1 is a simplified illustration of an introductory example experiment for imaging a scene.
  • Fig. 2 illustrates an advanced example, where a lens spreads the laser beam to more than the size of the object.
  • FIG. 3 illustrates another example system to image a scene.
  • Fig. 4 illustrates a method for imaging a scene.
  • Fig. 5 illustrates the general concept of fountain codes.
  • Fig. 6 illustrates an example block diagram of an imaging procedure.
  • Fig. 8 illustrates the third step of the imaging procedure in Fig. 6 where the EM fields on the imaging plane are manipulated according to the generated matrix [E].
  • Fig. 9 illustrates the fourth step of the imaging procedure in Fig. 6 where reflections from objects are collected by a single sensor directed at the scene, that is, a single receiving antenna.
  • the total number of manipulations M is larger than the number of sub-grids on the target imaging plane.
  • Figs. 10 to 14 illustrate an example imaging scenario and background EM fields.
  • Fig. 10 shows a 3-dimensional (3D) scenario.
  • Several objects are deployed in the target imaging plane while dish antennas working at 24 GHz are used for illumination at a distance of 1 m away.
  • Fig. 11 illustrates background EM fields generated by dish antennas according to the generator matrix [E] .
  • Figs. 12 and 13 show two background EM fields generated by the dish antennas with different degrees.
  • Fig. 14 and 15 illustrate the reconstruction of two binary-valued targets with the same size and shape.
  • Fig. 14 shows the original objects while Fig. 15 illustrates the reconstruction of objects under 5dB SNR condition.
  • Fig. 16 illustrates the mean square error (MSE) of the reconstruction of objects under different SNR conditions using a block code.
  • Fig. 17 illustrates the time consumption with the increasing of the scale of imaging plane using a block code.
  • the time consumption of both the proposed method and conventional microwave GI increases.
  • the curve corresponding to the proposed method has a much more moderate slope.
  • Fig. 18 illustrates the time consumption with the increasing of SNR conditions using a block code.
  • the reconstruction time of the proposed method does not change significantly with different SNR values while the conventional microwave GI suffers greatly from SNR conditions.
  • Fig. 19 illustrates the selection of elements in each row of matrix [O] when the parameter (2, 1, 4) is chosen for the employed convolutional code.
  • the hatched blocks represent the selected elements in corresponding rows while the empty ones represent non- selected elements.
  • the blocks in grey represent different patterns that are generated. Those grey blocks are later assigned the value 1.
  • Fig. 21 illustrates the result of repeating the step shown in Fig. 20 until all rows in [O] have been processed and then assigning value 1 to the grey boxes, thereby forming the convolutional code structured background matrix [E].
  • Fig. 22 illustrates constructions of objects under different SNR conditions using a convolutional code.
  • the proposed method in this disclosure can achieve a significantly better reconstruction performance under different SNR conditions and especially for an SNR greater than 5 dB.
  • Fig. 23 illustrates time consumption with the increasing of the scale of the imaging plane using a convolutional code. As the scale increases, the time consumption of the conventional microwave GI increases exponentially while the time consumption of the proposed method does not change significantly.
  • Fig. 24 illustrates the time consumption with the increasing of SNR conditions using a convolutional code.
  • the reconstruction time of the proposed method does not change significantly with different SNR values while the conventional microwave GI suffers greatly from SNR conditions.
  • an imaging method that uses a structured field to illuminate the scene.
  • the field is structured according to a linear code, which allows the reconstruction of a scene using decoding methods corresponding to the linear code.
  • the reconstruction of the scene based on the linear code has similar advantages to using linear codes in data transmission in that the data (here the image) can be reconstructed despite noise present in the sensed signal.
  • Fig. 1 is a simplified illustration of an introductory example experiment for imaging a scene 100.
  • the scene 100 comprises a single object 101 to be imaged and a light source 102.
  • the light source is a laser in this simplified example.
  • the light sensor 103 is a single sensor, i.e. a single pixel, and spans the entire scene.
  • the laser 102 scans the scene by moving from left to right and top to bottom to create a line pattern across the scene 100.
  • the light sensor 103 does not sense any light.
  • the light sensor 103 does sense the light.
  • Fig. 2 illustrates an advanced example, where a lens 110 spreads the laser beam to more than the size of the object 101.
  • a mask 111 or other optical element that creates a structure within the light.
  • the mask 111 creates multiple horizontal stripes each indicated by a line originating from mask 111 in Fig. 2.
  • a second lens 112 is located behind the object 101 and focusses the beams onto image sensor 103.
  • only stripes of the structured light that are not blocked by the object are collected at the sensor 103.
  • the intensity of light sensed by sensor 103 is indicative of the number of stripes that are blocked by the object 101.
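The single-pixel principle of Figs. 1 and 2 can be sketched in a few lines. This is an illustrative model only; the stripe pattern, the object mask and the `sense` helper are invented for the sketch and do not appear in the disclosure. The object is a binary mask over the stripes, and the sensor reading is simply how many lit stripes the object does not block.

```python
import numpy as np

# 1 = opaque cell that blocks its stripe, 0 = transparent (hypothetical object)
object_mask = np.array([0, 1, 1, 0, 1, 0, 0, 0])

def sense(pattern, obj):
    """Single-pixel reading: count stripes that are lit AND not blocked."""
    return int(np.sum(pattern * (1 - obj)))

pattern = np.ones(8, dtype=int)   # all eight stripes lit
print(sense(pattern, object_mask))  # 5 stripes pass the object
```

The reading alone cannot localize the object; as the following sections explain, structuring the illumination patterns according to a code is what makes reconstruction from such scalar readings possible.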
  • Figs. 1 and 2 can be expanded by using reflected light instead of transmitted light and by using a more general light source.
  • a distributed microwave source may be used to irradiate the scene.
  • Other sources include acoustic sources, optical or quasi optical sources, such as mm wave/THz sources, where the size of the wavelength is comparable to the size of the optical components.
  • the field generated by those sources is structured according to a linear code.
  • Fig. 3 illustrates another example system 200 comprising multiple transmitters 201, such as microwave antennas, that generate a field 202 to illuminate a scene 203 comprising multiple objects shown as boxes.
  • the multiple transmitters 201 together generate the field 202 from multiple locations.
  • other field generators may also be used that may not comprise multiple transmitter elements, such as a quasi-optical system with a single source and passive elements that create the field 202 at different locations.
  • the field 202 is structured in a deterministic way.
  • the field is generated based on a linear code in the sense that generator values that define the generation of the field 202 are determined according to a linear code as will be described in more detail later.
  • Example system 200 also comprises a sensor 204 directed at the scene 203 to sense a response from the scene 203. While the example in Fig. 3 shows that the sensor 204 senses transmitted radiation, in other examples, sensor 204 may sense reflected radiation or a response in the form of blocked radiation, which may also be referred to as a negative response or absence of radiation. Radiation is to be understood broadly as any non-contact quantitative measure including sound radiation, electromagnetic radiation including light, x-ray and microwave radiation.
  • a processor 205 is connected to sensor 204 and due to the known structure of the field 202, processor 205 can reconstruct an image of the scene based on multiple sensor measurements by single sensor 204 according to the decoding method as described below. It is noted however, that more than a single sensor may be used, such as multiple sensors.
  • the generator values are denoted E_mn in the mathematical description below.
  • the generator values may be '1' for 'on' and '0' for 'off'.
  • Processor 205 may send these binary values to the generators 201, which then switch their respective signal generators on or off accordingly to generate the field.
  • connection 206 between the processor 205 and the actual field generator 201 that allows the processor 205 to control the field generation.
  • Connection 206 may also be wireless or via the internet. It is also noted that sensor 204 may be stationary.
  • Fig. 4 illustrates a method 300 for imaging scene 203.
  • Method 300 commences by generating 301 field 202 to illuminate the scene 203.
  • Sensor 204 is directed at the scene 203 and senses 302 radiation that represents a response from the scene 203.
  • a processor 205 determines 303 an image of the scene based on the radiation sensed by the sensor. It is noted that generating the field is based on a linear code and determining the image of the scene is based on the radiation sensed by the sensor and the linear code.
  • method 300 may be repeated to generate multiple sensor values indicative of the sensed radiation at respective points in time. Processor 205 may then process the multiple sensor values by using a decoding method corresponding to the linear code and thereby determine the image of scene 203.
  • the linear code is a block code, such as a fountain code.
  • the fountain code may be a Luby Transform code.
  • Fig. 5 illustrates the general concept of fountain codes for the transmission of data 401 from a transmitter 402 to a receiver 403.
  • Transmitter 402 splits the data 401 into multiple blocks and creates random combinations of the blocks to create packets 404. Since there is a large number of combinations of blocks, transmitter 402 can generate a large number of unique data packets 404 and send them towards receiver 403. However, some packets are lost 405 while some others 406 reach the receiver 403.
  • the receiver of fountain code packets does not need to request the missing packets but can re-create the missing packets using the received packets. In other words, receiver 403 simply receives as many packets until the data 401 can be reconstructed.
  • each packet is a combination of different blocks of the data 401, so eventually, receiver 403 will have received each block at least once in a combination with other blocks.
  • the concept of the fountain code explained above can be used to create multiple illumination patterns of scene 203.
  • the selection of blocks to be combined into multiple packets can be applied to the generation of a field at multiple locations, such as multiple antennas.
  • When a block is selected for a packet by the fountain code, this is equivalent to an antenna being activated for illuminating scene 203 at a particular point in time. It is noted, however, that in other examples, the antennas are used to manipulate the field directly.
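The mapping above, from fountain-code packet construction to antenna on/off patterns, can be sketched as follows. This is a minimal illustration under stated assumptions: the `robust_soliton` parameters `c` and `delta` are common illustrative defaults, and `lt_illumination_matrix` is a hypothetical helper, not the modified LT encoding procedure claimed in the disclosure. Each row of the matrix plays the role of one packet: a degree d is drawn from the Robust Soliton Distribution and d of the N antennas are switched on.

```python
import numpy as np

rng = np.random.default_rng(1)

def robust_soliton(K, c=0.1, delta=0.5):
    """Robust Soliton degree distribution over degrees 1..K (c, delta illustrative)."""
    S = c * np.log(K / delta) * np.sqrt(K)
    rho = np.array([1.0 / K] + [1.0 / (d * (d - 1)) for d in range(2, K + 1)])
    tau = np.zeros(K)
    spike = int(round(K / S))
    for d in range(1, K + 1):
        if d < spike:
            tau[d - 1] = S / (K * d)
        elif d == spike:
            tau[d - 1] = S * np.log(S / delta) / K
    p = rho + tau
    return p / p.sum()

def lt_illumination_matrix(N, M):
    """One row per illumination: draw a degree from the RSD and switch on
    that many of the N antennas (1 = on, 0 = off)."""
    p = robust_soliton(N)
    E = np.zeros((M, N), dtype=int)
    for m in range(M):
        d = rng.choice(np.arange(1, N + 1), p=p)
        E[m, rng.choice(N, size=d, replace=False)] = 1
    return E

E = lt_illumination_matrix(N=16, M=24)  # more illuminations than sub-grids
```

Using more rows than columns (M > N) mirrors the statement above that the number of manipulations exceeds the number of sub-grids, which is what gives the decoder redundancy to work with.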
  • the background EM fields generated within the framework of conventional microwave GI can be expressed as:
  • reflections from objects can be expressed as:
  • [R] = [E][s] (3)
  • [s] = [σ_11, σ_21, ..., σ_PQ]^T (4) is a vector of the frequency-independent reflectivity of the target imaging plane.
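Eq. (3) is a plain matrix-vector product, which the following sketch illustrates with invented numbers; the 3x4 pattern matrix and the reflectivity vector are examples chosen for the sketch, not values from the disclosure.

```python
import numpy as np

E = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0],
              [1, 1, 0, 1]])   # three illumination patterns over a 4-cell scene
s = np.array([1, 0, 1, 1])     # binary reflectivity of each sub-grid

R = E @ s                      # Eq. (3): [R] = [E][s]
print(R)  # [3 1 2]
```

Each entry of R is one single-sensor measurement: the number of reflective sub-grids that were illuminated in that manipulation.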
  • telecommunications with forward-error-correction (FEC) ability can be used in a variety of applications, providing reliable transmissions for satellite broadcasting, television, 4th generation (4G) mobile networks, Internet, relay communications, deep space communications, etc.
  • As a theoretical representative of FCs, the encoding process of the random linear fountain code (RLFC) is given by the following equation:
  • [t] = [s][G] (5)
  • [s] is a vector containing K information symbols in total
  • [G] is a randomly structured generator matrix determined by LT encoding rules
  • [t] is a vector of the encoded symbols after the process.
  • FCs can be applied for imaging scene 203.
  • an LT code can be selected, which is a FC with a deterministic structure of the generator matrix [G], and can be adopted into the scenario of microwave GI as disclosed herein.
  • [σ] = [σ_1, σ_2, ..., σ_N] (6) where σ_n ∈ {0, 1} represents the binary-valued reflectivity of the corresponding sub-grid.
  • E_mn ∈ {0, 1} represents the binary-valued background EM fields during the m-th illumination. It is noted that processor 205 generates the generator values E_mn that later define the generation of the structured field.
  • Unlike the random structured matrix [E] in Eq. (3), processor 205 generates the matrix [E] following the modified LT encoding process as shown below:
  • microwave GI based on LT code structured fields can be expressed as,
  • [R] = [R_1, R_2, ..., R_M]^T (14)
  • Fig. 6 illustrates an example block diagram of an imaging procedure 500 as performed by processor 205 of the proposed non-random microwave GI based on LT code structured fields.
  • Procedure 500 comprises five steps in total. To be more specific:
  • Step 502 Generation. As shown in Fig. 7, according to the scale that is determined by the first step 501 and modified LT encoding rules, generate a background EM field matrix [E] with K rows and N columns consisting of binary elements.
  • Step 504 Reception. As depicted by Fig. 9, after M times manipulations, reflections from objects are collected by a single receiving antenna. The received signals are expressed as:
  • Step 505 Reconstruction. Reconstruct the image of objects on the target imaging plane according to the modified LT decoding rules as below,
  • The results in [σ] are the reconstructed image of binary-valued objects on the target imaging plane.
  • The XOR operation may be replaced by performing a belief propagation method as described in Mirrezaei, Seyed, Karim Faez, and Shahram Yousefi, "Towards Fountain Codes. Part II: Belief Propagation Decoding".
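The XOR-based decoding rule can be sketched as a classic LT "peeling" decoder over GF(2). This is an idealized, noise-free illustration under the assumption that each measurement is a GF(2) combination of sub-grid reflectivities; the normalization steps of the actual modified LT decoding rules are omitted, and `peel_decode` is an invented name.

```python
import numpy as np

def peel_decode(E, R):
    """Peeling decoder over GF(2): repeatedly find a measurement that now
    depends on a single unknown, resolve it, and XOR it out of the rest."""
    E, R = E.copy(), R.copy()
    s = np.full(E.shape[1], -1)          # -1 marks 'still unknown'
    progress = True
    while progress and (s < 0).any():
        progress = False
        for m in range(E.shape[0]):
            idx = np.flatnonzero(E[m])
            if len(idx) == 1:            # degree-1 measurement: solved directly
                n = idx[0]
                s[n] = R[m]
                rows = np.flatnonzero(E[:, n])
                R[rows] ^= s[n]          # remove the resolved symbol everywhere
                E[rows, n] = 0
                progress = True
    return s

E = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1]])
s_true = np.array([1, 0, 1])
R = (E @ s_true) % 2                     # noise-free GF(2) measurements
print(peel_decode(E, R))                 # [1 0 1]
```

The same ripple of degree-1 resolutions is what belief propagation generalizes when measurements are noisy.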
  • Example geometry settings of a scenario are shown in Fig. 10, where the target imaging plane containing objects to be imaged is set to be in the XY Plane.
  • Parabolic dish antennas working around 24 GHz are deployed at a distance of 1 m away along the Z-axis. With a beamwidth of 2°, each of those dish antennas centres at one sub-grid on the target imaging plane and generates corresponding EM fields.
  • As shown in Figs. 12 and 13, different patterns of EM fields are generated with different degrees.
  • the symbol 1 corresponds to high illumination power from the dish antennas while the symbol 0 corresponds to zero output.
  • an omnidirectional antenna is deployed to collect reflections from objects.
  • This disclosure provides a non-random microwave GI scheme based on LT code structured background EM fields.
  • This approach employs coding technique from telecommunications into the scenario of microwave GI. Further, the FEC ability is introduced into microwave imaging applications.
  • the proposed method not only extends the diversity of microwave GI, but also brings the possibilities of further introducing information theory from telecommunications into microwave imaging applications in the future. Compared with conventional microwave GI, the proposed method can effectively obtain the image of binary-valued objects with a reduced system complexity, improved reconstruction performance and time efficiency.
  • [s] = [σ_11, σ_21, ..., σ_PQ]^T (19) is the vector of frequency-independent reflectivity of the target imaging plane.
  • the convolutional code originated from telecommunications can be further adapted to simplify the imaging model of microwave GI and improve its efficiency and performance under different SNR conditions.
  • processor 205 generates matrix [E] according to the modified convolutional encoding process as shown below:
  • Fig. 19 shows the resulting matrix [O] 1400 with a size of 9 x 9 and corresponding selection procedures where shaded elements are selected.
  • Processor 205 repeats the above procedures until all rows in matrix [O] 1400 have been used and processed. Then, processor 205 combines all the generated patterns to form the convolutional code structured background matrix [E] , as shown in Fig. 21 where the grey boxes indicate selected blocks.
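The banded structure that a convolutional code imposes on [E] can be illustrated as follows. This is a generic sketch of a convolutional generator matrix, using a small rate-1/2, constraint-length-3 code with taps 111 and 101 rather than the (2, 1, 4) parameters and [O]-matrix selection procedure described above; `conv_structured_matrix` is an invented helper.

```python
import numpy as np

def conv_structured_matrix(n_inputs, gens, constraint_len):
    """Build a banded binary matrix in which input i contributes shifted
    copies of the generator taps, as in a convolutional generator matrix."""
    n_out = len(gens)
    cols = n_out * (n_inputs + constraint_len - 1)
    E = np.zeros((n_inputs, cols), dtype=int)
    for i in range(n_inputs):
        for t in range(constraint_len):
            for j, g in enumerate(gens):
                E[i, n_out * (i + t) + j] = g[t]
    return E

# taps 111 and 101 of a rate-1/2, constraint-length-3 code
E = conv_structured_matrix(n_inputs=4, gens=[(1, 1, 1), (1, 0, 1)], constraint_len=3)
print(E[0])  # [1 1 1 0 1 1 0 0 0 0 0 0]
```

Each successive row is the same tap pattern shifted right by the number of outputs, which is the regular, deterministic structure that replaces the random [E] of conventional microwave GI.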
  • The imaging procedure of the proposed non-random microwave GI based on convolutional code structured EM fields is shown in the block diagram 500 in Fig. 6, which contains the same five steps as described above, with the main difference being the application of the convolutional code instead of the fountain code.
  • processor 205 performs the following steps:
  • Step 501 Equally divide the imaging plane containing objects into sub-grids with P rows and Q columns.
  • Step 502 Generation. As shown in Fig. 7, according to the modified convolutional encoding rules described above, generate the background EM field matrix [E] consisting of binary elements.
  • Step 504 Reception. As depicted by Fig. 9 (where M is replaced by S), after S times manipulations, reflections from objects on the target imaging plane are collected by a single receiving antenna. The received signals are expressed as:
  • Step 505 Reconstruction. Reconstruct the image of objects on the target imaging plane according to the modified convolutional decoding rules as below, 1. Normalize [R] by reflections corresponding to the first illumination;
  • The results in [σ] are the reconstructed image of binary-valued objects on the target imaging plane.
  • Details of Viterbi decoding for convolutional codes can be found in T. Zhang, M. Guo, L. Ding, F. Yang, and L. Qian, "Soft output Viterbi decoding for space-based AIS receiver," in 2016 22nd Asia-Pacific Conference on Communications (APCC), Aug 2016, pp. 383-387, which is incorporated herein by reference.
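As background, hard-decision Viterbi decoding for a small rate-1/2, constraint-length-3 convolutional code can be sketched as below. This is textbook Viterbi decoding, not the modified decoding rules of the disclosure; the generator polynomials and function names are chosen for illustration only.

```python
G = [0b111, 0b101]  # taps of a rate-1/2, constraint-length-3 code (illustrative)

def conv_encode(bits):
    """Shift each input bit into a 3-bit register and emit one parity per tap set."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out += [bin(state & g).count("1") % 2 for g in G]
    return out

def viterbi_decode(received, n_bits):
    """Track the minimum-Hamming-distance path into each of the 4 trellis states."""
    INF = 10 ** 9
    metric = [0, INF, INF, INF]          # start in the all-zero state
    paths = [[], [], [], []]
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for st in range(4):
            if metric[st] == INF:
                continue
            for b in (0, 1):
                reg = ((st << 1) | b) & 0b111
                out = [bin(reg & g).count("1") % 2 for g in G]
                nxt = reg & 0b11
                m = metric[st] + sum(o != x for o, x in zip(out, r))
                if m < new_metric[nxt]:
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[st] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]

bits = [1, 0, 1, 1]
enc = conv_encode(bits)
enc[0] ^= 1                              # flip one channel bit
print(viterbi_decode(enc, len(bits)))    # recovers [1, 0, 1, 1]
```

The survivor-path search is what supplies the forward-error-correction ability that the proposed scheme carries over from telecommunications into the imaging reconstruction.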
  • Fig. 23 shows that the time consumption of the conventional microwave GI using iterative optimization algorithms increases drastically with an increase of the scale of the imaging problem. In contrast, the time consumption of the proposed method remains relatively constant.
  • This disclosure provides a non-random microwave GI scheme based on convolutional code structured background EM fields.
  • This approach employs purposely encoded non-random discrete EM fields to perform microwave GI.
  • the object reconstructions are obtained via a decoding method in the GF(2) domain.
  • the proposed method not only extends the diversity of microwave GI, but also introduces the possibility of combining coding techniques from telecommunications with microwave imaging applications. Numerical examples demonstrated that compared with conventional microwave GI, the proposed method can obtain the image of binary- valued objects with a reduced system complexity and improved reconstruction efficiency and performance.

Abstract

A method for imaging a scene, the method comprising: generating a field to illuminate the scene; sensing, by a sensor directed at the scene, a response from the scene; and determining an image of the scene based on the response sensed by the sensor, wherein generating the field is based on a linear code and determining the image of the scene is based on the response sensed by the sensor and the linear code.

Description

Method and System for Imaging a Scene
Technical Field
[0001] This disclosure relates to methods and systems for imaging a scene.
Background
[0002] Imaging scenes has become increasingly important in the area of machine vision. For example, autonomous cars rely heavily on detecting other vehicles and obstacles in their vicinity. While optical cameras, such as active infrared or passive visible cameras, are accurate on clear days, they become unusable in heavy fog or other bad weather conditions. Radar systems penetrate bad weather better but are complex and have poor spatial resolution especially for small antennas.
[0003] Any discussion of documents, acts, materials, devices, articles or the like which has been included in the present specification is not to be taken as an admission that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present disclosure as it existed before the priority date of each of the appended claims.
[0004] Throughout this specification the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.
Summary
[0005] In accordance with a first aspect of the present invention, there is provided, a method for imaging a scene comprises: generating a field to illuminate the scene;
sensing, by a sensor directed at the scene, a response from the scene; and determining an image of the scene based on the response sensed by the sensor, wherein generating the field is based on a linear code and determining the image of the scene is based on the response sensed by the sensor and the linear code.
[0006] It is one advantage of some embodiments that the linear code allows error correction when the image is determined. As a result, the determined image is more accurate even under high noise levels compared to existing methods. Further, linear code implementations are available that are computationally efficient.
[0007] The method may further comprise repeating the method to generate multiple sensor values indicative of the sensed response at respective points in time, wherein determining the image may comprise processing the multiple sensor values by using a decoding method corresponding to the linear code.
[0008] The linear code may be a block code. The block code may be a fountain code. The fountain code may be a Luby Transform code.
[0009] Generating the field may comprise: randomly determining a generator value for each of multiple locations; and generating the field based on the generator value for each of the multiple locations.
[0010] Randomly generating the generator value may comprise randomly generating a binary value for each of the multiple locations.
[0011] Randomly generating the binary value for each of the multiple locations may comprise randomly determining a degree indicative of the number of locations associated with a positive binary value.
[0012] Randomly determining the degree may be based on a Robust Soliton Distribution (RSD).
[0013] The method may further comprise repeating the steps of generating a field value for each of the multiple locations for multiple respective points in time.
[0014] The method may further comprise generating multiple sensor values indicative of the sensed response at respective points in time; and processing the multiple sensor values by using a decoding method that applies an XOR operation between a selected sensor value and the remaining sensor values.
[0015] The method may further comprise generating multiple sensor values indicative of the sensed response at respective points in time; and processing the multiple sensor values by using a decoding method that is based on belief propagation.
[0016] The linear code may be a convolutional code.
[0017] Generating the field may comprise determining a generator value for each of the multiple locations based on a generator polynomial; and generating the field based on the generator value for each of the multiple locations.
[0018] Determining the generator value may comprise generating a binary value for each of the multiple locations.
[0019] Generating the generator value for each of the multiple locations may be based on selected generator values generated for a previous point in time and may comprise applying the generator polynomial to the selected generator values to determine additional generator values for a subsequent point in time.
[0020] The method may further comprise generating multiple sensor values indicative of the sensed response at respective points in time, wherein determining the image may comprise processing the multiple sensor values by using a decoding method based on a Viterbi decoder. The field may be an electromagnetic field.
[0021] The sensed response may comprise sensed radiation that represents the response.
In accordance with a further aspect of the present invention there is provided a system for imaging a scene comprising: a field generator to generate a field that illuminates the scene; a sensor directed at the scene to sense a response from the scene; and a processor to determine an image of the scene based on the response sensed by the sensor, wherein generating the field is based on a linear code and determining the image of the scene is based on the response sensed by the sensor and the linear code.
[0022] Features that are provided above in relation to the method are equally applicable to other aspects described herein, such as the software and the system.
Brief Description of Drawings
[0023] An example will now be described with reference to:
[0024] Fig. 1 is a simplified illustration of an introductory example experiment for imaging a scene.
[0025] Fig. 2 illustrates an advanced example, where a lens spreads the laser beam to more than the size of the object.
[0026] Fig. 3 illustrates another example system to image a scene.
[0027] Fig. 4 illustrates a method for imaging a scene.
[0028] Fig. 5 illustrates the general concept of fountain codes.
[0029] Fig. 6 illustrates an example block diagram of an imaging procedure.
[0030] Fig. 7 illustrates the second step of the imaging procedure in Fig. 6 where a matrix with LT code structures is generated with K rows and N columns where K > N = P x Q .
[0031] Fig. 8 illustrates the third step of the imaging procedure in Fig. 6 where the EM fields on the imaging plane are manipulated according to the generated matrix [E].
[0032] Fig. 9 illustrates the fourth step of the imaging procedure in Fig. 6 where reflections from objects are collected by a single sensor directed at the scene, that is, a single receiving antenna. The total number of manipulations M is larger than the number of sub-grids on the target imaging plane.
[0033] Figs. 10 to 14 illustrate an example imaging scenario and background EM fields. Fig. 10 shows a 3-dimensional (3D) scenario. Several objects are deployed in the target imaging plane while dish antennas working at 24 GHz are used for illumination at a distance of 1 m away. Fig. 11 illustrates background EM fields generated by dish antennas according to the generator matrix [E].
[0034] Figs. 12 and 13 show two background EM fields generated by the dish antennas with different degrees.
[0035] Figs. 14 and 15 illustrate the reconstruction of two binary-valued targets with the same size and shape. Fig. 14 shows the original objects while Fig. 15 illustrates the reconstruction of the objects under a 5 dB SNR condition.
[0036] Fig. 16 illustrates the mean square error (MSE) of the reconstruction of objects under different SNR conditions using a block code. Compared with reconstruction under the conventional framework of microwave GI using iterative optimization algorithms, the proposed method achieves a significantly better reconstruction performance when SNR values are higher than 2 dB.
[0037] Fig. 17 illustrates the time consumption as the scale of the imaging plane increases, using a block code. As the scale increases, the time consumption of both the proposed method and conventional microwave GI increases. However, the curve corresponding to the proposed method has a much more moderate slope.
[0038] Fig. 18 illustrates the time consumption under different SNR conditions using a block code. The reconstruction time of the proposed method does not change significantly with different SNR values while conventional microwave GI suffers greatly from SNR conditions.
[0039] Fig. 19 illustrates the selection of elements in each row of matrix [O] when the parameter (2, 1, 4) is chosen for the employed convolutional code. The hatched blocks represent the selected elements in corresponding rows while the empty ones represent non-selected elements.
[0040] Fig. 20 illustrates how, in the 6th row, m = 4 elements are selected and then used to generate n = 2 different patterns according to the pre-defined generator polynomial P of the convolutional code. The blocks in grey represent the different patterns that are generated. Those grey blocks are later assigned the value 1.
[0041] Fig. 21 illustrates the result of repeating the step shown in Fig. 20 until all rows in [O] have been processed, after which the value 1 is assigned to the grey boxes to form the convolutional code structured background matrix [E].
[0042] Fig. 22 illustrates reconstructions of objects under different SNR conditions using a convolutional code. Compared with reconstruction under the conventional framework of microwave GI using iterative optimization algorithms, the proposed method in this disclosure achieves a significantly better reconstruction performance under different SNR conditions, especially for an SNR greater than 5 dB.
[0043] Fig. 23 illustrates the time consumption as the scale of the imaging plane increases, using a convolutional code. As the scale increases, the time consumption of conventional microwave GI increases exponentially while the time consumption of the proposed method does not change significantly.
[0044] Fig. 24 illustrates the time consumption under different SNR conditions using a convolutional code. The reconstruction time of the proposed method does not change significantly with different SNR values while conventional microwave GI suffers greatly from SNR conditions.

Description of Embodiments
[0045] There is a need for an imaging solution that can essentially see through bad weather and is less complex than existing solutions. Therefore, there is provided an imaging method that uses a structured field to illuminate the scene. The field is structured according to a linear code, which allows the reconstruction of the scene using decoding methods corresponding to the linear code. In particular, the reconstruction of the scene based on the linear code has similar advantages to using linear codes in data transmission, in that the data (here the image) can be reconstructed despite noise present in the sensed signal.
Introductory example
[0046] Fig. 1 is a simplified illustration of an introductory example experiment for imaging a scene 100. The scene 100 comprises a single object 101 to be imaged and a light source 102. For ease of explanation, the light source is a laser in this simplified example. There is also a light sensor 103 behind the object. The light sensor 103 is a single sensor, i.e. a single pixel, and spans the entire scene. The laser 102 scans the scene by moving from left to right and top to bottom to create a line pattern across the scene 100. When the laser 102 is obstructed by the object 101, the light sensor 103 does not sense any light. On the other hand, when the laser 102 is not obstructed by the object 101, the light sensor 103 does sense the light. In other words, there is an area 104 behind the object 101 on the light sensor 103 that the light from laser 102 will not reach. However, the spatial information or spatial extent of this area 104 is not used because the sensor 103 generates only a single signal for the entire area. In other words, the sensor 103 does not distinguish where on the sensor 103 the laser light hits the sensor 103. However, by combining the non-local signal from the light sensor 103 with the angles from laser 102, the position of the object 101 can be calculated. In other words, the object 101 is located at angles for which the sensor 103 senses no light.

[0047] Fig. 2 illustrates an advanced example, where a lens 110 spreads the laser beam to more than the size of the object 101. In addition, there is a mask 111 or other optical element that creates a structure within the light. In this example, the mask 111 creates multiple horizontal stripes, each indicated by a line originating from mask 111 in Fig. 2. The difference to Fig. 1 is that the scene is not scanned but completely illuminated by the structured light. A second lens 112 is located behind the object 101 and focusses the beams onto image sensor 103. As indicated in Fig. 2, only stripes of the structured light that are not blocked by the object are collected at the sensor 103. As a result, the intensity of light sensed by sensor 103 is indicative of the number of stripes that are blocked by the object 101. By repeating this measurement with differently structured light, the shape of the object 101 can be gradually calculated based on the different sensor readings and the information about the structure of the light.
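The single-pixel principle of Figs. 1 and 2 can be sketched numerically: a hypothetical one-dimensional scene is illuminated by several known binary stripe patterns, and the scene is recovered from the scalar sensor readings alone. The scene, pattern count and seed below are illustrative values, not part of the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D scene of 8 stripe positions: 1 = light passes,
# 0 = blocked by the object.
scene = np.array([1, 1, 0, 0, 0, 1, 1, 1])

# 16 known binary illumination patterns; the single-pixel sensor
# reports only the total intensity for each pattern.
patterns = rng.integers(0, 2, size=(16, 8))
readings = patterns @ scene

# Knowing the patterns, the scene follows from the scalar readings,
# here via least squares rounded back to binary values.
estimate, *_ = np.linalg.lstsq(patterns, readings, rcond=None)
recovered = np.round(estimate).astype(int)
```

With enough distinct patterns the linear system is over-determined and the shape of the object is recovered exactly, mirroring how repeated structured illuminations resolve the object in Fig. 2.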
[0048] The simplified examples of Figs. 1 and 2 can be expanded by using reflected light instead of transmitted light and by using a more general light source. In the current disclosure, a distributed microwave source may be used to irradiate the scene. Other sources include acoustic sources, optical or quasi optical sources, such as mm wave/THz sources, where the size of the wavelength is comparable to the size of the optical components. As described herein, the field generated by those sources is structured according to a linear code.
General description
[0049] Fig. 3 illustrates another example system 200 comprising multiple transmitters 201, such as microwave antennas, that generate a field 202 to illuminate a scene 203 comprising multiple objects shown as boxes. In essence, the multiple transmitters 201 together generate the field 202 from multiple locations. This means the multiple transmitters 201 together may be considered as a single field generator or field generating system. It is noted that other field generators may also be used that may not comprise multiple transmitter elements, such as a quasi-optical system with a single source and passive elements that create the field 202 at different locations. Importantly, the field 202 is structured in a deterministic way. In particular, the field is generated based on a linear code in the sense that generator values that define the generation of the field 202 are determined according to a linear code, as will be described in more detail later.
[0050] Example system 200 also comprises a sensor 204 directed at the scene 203 to sense a response from the scene 203. While the example in Fig. 3 shows that the sensor 204 senses transmitted radiation, in other examples, sensor 204 may sense reflected radiation or a response in the form of blocked radiation, which may also be referred to as a negative response or absence of radiation. Radiation is to be understood broadly as any non-contact quantitative measure including sound radiation, electromagnetic radiation including light, x-ray and microwave radiation. A processor 205 is connected to sensor 204 and due to the known structure of the field 202, processor 205 can reconstruct an image of the scene based on multiple sensor measurements by single sensor 204 according to the decoding method as described below. It is noted however, that more than a single sensor may be used, such as multiple sensors.
[0051] It may be the same processor 205 or a different processor that generates the field 202, in the sense that the processor determines generator values, stores the generator values in memory, such as RAM, and then controls the antennas or other field generators to generate the field accordingly. The generator values are denoted E_{p,q}^{(m)} or E_{p,q}^{(s)} in the mathematical description below. For example, the generator values may be '1' for 'on' and '0' for 'off'. Processor 205 may send these binary values to the generators 201, which then switch their respective signal generators on or off accordingly to generate the field. In other examples, there are one or more signal generators integrated into processor 205.
[0052] In particular, there may be a connection 206 between the processor 205 and the actual field generator 201 that allows the processor 205 to control the field generation. In one example, there is one connection for each transmitter 201 but in other examples there is a bus or a single connection to a more complex generator system, such as a quasi-optical system. Connection 206 may also be wireless or via the internet. It is also noted that sensor 204 may be stationary.
[0053] Fig. 4 illustrates a method 300 for imaging scene 203. Method 300 commences by generating 301 field 202 to illuminate the scene 203. Sensor 204 is directed at the scene 203 and senses 302 radiation that represents a response from the scene 203. Finally, processor 205 determines 303 an image of the scene based on the radiation sensed by the sensor. It is noted that generating the field is based on a linear code and determining the image of the scene is based on the radiation sensed by the sensor and the linear code. In particular, method 300 may be repeated to generate multiple sensor values indicative of the sensed radiation at respective points in time. Processor 205 may then process the multiple sensor values by using a decoding method corresponding to the linear code and thereby determine the image of scene 203.
Example 1; Block codes
[0054] In a first example, the linear code is a block code, such as a fountain code. In particular, the fountain code may be a Luby Transform code.
[0055] Fig. 5 illustrates the general concept of fountain codes for the transmission of data 401 from a transmitter 402 to a receiver 403. Transmitter 402 splits the data 401 into multiple blocks and creates random combinations of the blocks to create packets 404. Since there is a large number of combinations of blocks, transmitter 402 can generate a large number of unique data packets 404 and send them towards receiver 403. However, some packets are lost 405 while others 406 reach the receiver 403. Unlike other codes, the receiver of fountain code packets does not need to request the missing packets but can re-create the missing packets using the received packets. In other words, receiver 403 simply collects packets until the data 401 can be reconstructed. This works because each packet is a combination of different blocks of the data 401, so eventually, receiver 403 will have received each block at least once in a combination with other blocks.

[0056] The concept of the fountain code explained above can be used to create multiple illumination patterns of scene 203. In particular, the selection of blocks to be combined into multiple packets can be applied to the generation of a field at multiple locations, such as multiple antennas. In other words, if a block is selected for a packet by the fountain code, this is equivalent to an antenna that is activated for illuminating scene 203 at a particular point in time. It is noted, however, that in other examples, the antennas are used to manipulate the field directly.
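As a minimal sketch of the packet-generation idea (not the exact LT encoding rules given later), the following shows how each packet is the XOR of a randomly chosen subset of data blocks; in the imaging analogy, the chosen subset corresponds to the antennas switched on for one illumination. The block values and the seed are arbitrary.

```python
import functools
import operator
import random

random.seed(1)

# Hypothetical data split into four blocks (4-bit values for brevity).
blocks = [0b1010, 0b0111, 0b1100, 0b0001]

def make_packet(blocks):
    # Degree = how many blocks this packet combines.
    degree = random.randint(1, len(blocks))
    chosen = random.sample(range(len(blocks)), degree)
    # The packet payload is the XOR of the chosen blocks.
    payload = functools.reduce(operator.xor, (blocks[i] for i in chosen))
    return chosen, payload

chosen, payload = make_packet(blocks)
```

A receiver that knows which blocks went into each payload (the generator information) can peel the original blocks back out once enough packets arrive, which is exactly the property exploited for imaging below.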
[0057] The following description provides detailed mathematical explanation of this use of a linear code, such as a fountain code.
[0058] Under Born's Approximation (BA), assuming the target imaging plane is divided into sub-grids with P rows and Q columns, under N = P x Q times
illuminations from antennas transmitting randomly modulated signals, the background EM fields generated within the framework of conventional microwave GI can be expressed as:
[E] := [ E_{1,1}^{(1)}   E_{2,1}^{(1)}   ...   E_{P,Q}^{(1)}
         E_{1,1}^{(2)}   E_{2,1}^{(2)}   ...   E_{P,Q}^{(2)}
         ...
         E_{1,1}^{(N)}   E_{2,1}^{(N)}   ...   E_{P,Q}^{(N)} ]    (1)

where E_{p,q}^{(n)} represents the background EM field formed during the n-th illumination at the corresponding sub-grid. After N illuminations, reflections from objects can be expressed as:

[R] := [R_1, R_2, ..., R_N]^T    (2)

where R_n represents the reflection from the n-th illumination. In other words, equation (2) represents multiple sensor values indicative of the sensed radiation at respective points in time. Then the imaging equation of microwave GI can be written as:

[R] = [E][s]    (3)

where:

[s] := [σ_{1,1}, σ_{2,1}, ..., σ_{P,Q}]^T    (4)

is a vector of the frequency-independent reflectivity of the target imaging plane.
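The forward model of Eqs. (1) to (4) can be sketched as a matrix-vector product: each single-sensor reading is one row of [E] applied to the reflectivity vector [s]. The sketch below uses a random real-valued [E] and direct linear inversion purely for illustration; the disclosed method instead uses binary, code-structured fields and code-based decoding. Sizes and the seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
P, Q = 4, 4
N = P * Q  # one illumination per sub-grid

# Background fields [E] (Eq. 1): one row per illumination.
E = rng.standard_normal((N, N))
# Binary reflectivity vector [s] of the sub-grids (Eq. 4).
s = rng.integers(0, 2, size=N).astype(float)

# Eq. (3): each single-sensor reading is one inner product.
R = E @ s

# With a square, well-conditioned [E], the reflectivity can be
# recovered by direct inversion (illustration only).
s_hat = np.linalg.solve(E, R)
```

This makes explicit why N = P x Q illuminations suffice in the noiseless case: [E] is then square and, when well conditioned, invertible.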
[0059] Fountain codes (FC) are a class of linear coding schemes from telecommunications with forward-error-correction (FEC) ability. They can be used in a variety of applications, providing reliable transmissions for satellite broadcasting, television, 4th generation (4G) mobile networks, the Internet, relay communications, deep space communications, etc. As a theoretical representative of FCs, the encoding process of random linear fountain codes (RLFC) is given by the following equation:

[t] = [s][G]    (5)

where [s] is a vector containing K information symbols in total, [G] is a randomly structured generator matrix determined by LT encoding rules and [t] is a vector of the encoded symbols after the process. There is a similarity between the imaging equation of microwave GI presented in Eq. (3) and the above expression of RLFC. Therefore, the framework of conventional microwave GI can be transformed and FCs can be applied for imaging scene 203. In order to further simplify the system complexity, an LT code can be selected, which is an FC with a deterministic generator matrix structure [G], and can be adopted into the scenario of microwave GI as disclosed herein.
Imaging with LT Code Structured Fields
[0060] Redefine the frequency-independent reflectivity on the target imaging plane to be:
[γ] := [γ_{1,1}, γ_{2,1}, ..., γ_{P,Q}]^T    (6)

where γ_{p,q} ∈ {0, 1} represents the binary-valued reflectivity of the corresponding sub-grid. Let [E] be a new background EM field matrix:

[E] := [ E_{1,1}^{(1)}   E_{2,1}^{(1)}   ...   E_{P,Q}^{(1)}
         E_{1,1}^{(2)}   E_{2,1}^{(2)}   ...   E_{P,Q}^{(2)}
         ...
         E_{1,1}^{(M)}   E_{2,1}^{(M)}   ...   E_{P,Q}^{(M)} ]    (7)

where E_{p,q}^{(m)} ∈ {0, 1} represents the binary-valued background EM fields during the m-th illumination. It is noted that processor 205 generates the generator values E_{p,q}^{(m)} that later define the generation of the structured field.
[0061] Unlike the random structured matrix [E] in Eq.(3), processor 205 generates the matrix [E] following the modified LT encoding process as shown below:
1. Randomly choose the degree d according to pre-designed distributions;
2. Select uniformly at random d distinct elements in the m-th row vector of matrix [E]:

E_{p,q}^{(m)} = { 1, if selected
                  0, otherwise    (8)
3. Repeat until all the M rows have been generated.
[0062] It is noted that although the generation of [E] uses randomly created values, the generation of the field itself is then deterministic since the field generation is controlled by [E] .
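Steps 1 to 3 above can be sketched as follows, assuming a generic degree distribution (the Robust Soliton Distribution of the next paragraph could be substituted). The function name, matrix size and toy distribution are illustrative only.

```python
import numpy as np

def lt_field_matrix(num_rows, num_cols, degree_probs, seed=0):
    """Build a binary field matrix row by row: draw a degree d from
    degree_probs (probability of degree 1, 2, ...), then set d
    uniformly chosen distinct entries of the row to 1."""
    rng = np.random.default_rng(seed)
    degrees = np.arange(1, len(degree_probs) + 1)
    E = np.zeros((num_rows, num_cols), dtype=int)
    for m in range(num_rows):
        d = rng.choice(degrees, p=degree_probs)          # step 1
        cols = rng.choice(num_cols, size=d, replace=False)  # step 2
        E[m, cols] = 1
    return E                                             # step 3

# Toy degree distribution over degrees 1..4 (illustrative only);
# M = 12 > N = 9 rows provide the redundancy discussed below.
E = lt_field_matrix(num_rows=12, num_cols=9,
                    degree_probs=[0.4, 0.3, 0.2, 0.1])
```

Each row of the resulting matrix is then one on/off pattern for the antennas, i.e. one illumination of the scene.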
[0063] As an example of the degree distribution mentioned in Step (1), the Robust Soliton Distribution (RSD), which can be used in the encoding process of the LT code, is given as below:

μ(i) = (ρ(i) + τ(i)) / β    (9)

where:

β = Σ_i (ρ(i) + τ(i))    (10)

where ρ(i) is the Ideal Soliton Distribution and

τ(i) = { η / (i m),       for i = 1, 2, ..., m/η − 1
         η ln(η/δ) / m,   for i = m/η
         0,               for i = m/η + 1, ..., M    (11)

where

η = c ln(m/δ) √m    (12)

where c > 0 and 0 < δ < 1 are pre-designed constants.
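The Robust Soliton Distribution above can be computed as follows. This is a sketch under the assumption that ρ(i) is the Ideal Soliton Distribution and that the switch point m/η is rounded to the nearest integer; the function name and parameter defaults are illustrative and require η > δ so the logarithm is positive.

```python
import math

def robust_soliton(m, c=0.1, delta=0.5):
    """Robust Soliton Distribution over degrees 1..m."""
    eta = c * math.log(m / delta) * math.sqrt(m)   # Eq. (12)
    pivot = int(round(m / eta))                    # rounding assumed
    # Ideal Soliton component rho(i): 1/m for i=1, 1/(i(i-1)) otherwise.
    rho = [1.0 / m] + [1.0 / (i * (i - 1)) for i in range(2, m + 1)]
    # tau(i) as in Eq. (11).
    tau = [0.0] * m
    for i in range(1, min(pivot, m + 1)):
        tau[i - 1] = eta / (i * m)
    if 1 <= pivot <= m:
        tau[pivot - 1] = eta * math.log(eta / delta) / m
    beta = sum(r + t for r, t in zip(rho, tau))    # Eq. (10)
    return [(r + t) / beta for r, t in zip(rho, tau)]  # Eq. (9)

mu = robust_soliton(16)
```

The normalisation by β guarantees a valid probability distribution; the spike at degree m/η is what makes LT decoding start reliably.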
[0064] Therefore, the overall imaging equation of the proposed non-random
microwave GI based on LT code structured fields can be expressed as:

[R] = [E][γ]    (13)

where

[R] := [R_1, R_2, ..., R_M]^T    (14)

where R_m represents the received signal from the m-th illumination. Note that unlike E_{p,q}^{(n)} ∈ R presented in Eq. (1), where n = 1, 2, ..., N and N = P × Q, in the new background EM field matrix [E] generated from the modified LT encoding rules, m = 1, 2, ..., M and M should satisfy M ≥ P × Q. This is to introduce redundancy to enable the FEC ability, as well as to allow FCs to start decoding.
Imaging Procedures and Reconstruction
[0065] Fig. 6 illustrates an example block diagram of an imaging procedure 500 of the proposed non-random microwave GI based on LT code structured fields, as performed by processor 205. Procedure 500 comprises five steps in total. To be more specific:
[0066] Step 501: Division. Equally divide the imaging plane containing objects into sub-grids with P rows and Q columns. The total number N (N = P × Q) of divided sub-grids determines the scale of the LT code structured matrix [E] to be generated in the next step.
[0067] Step 502: Generation. As shown in Fig. 7, according to the scale that is determined by the first step 501 and modified LT encoding rules, generate a
K x N, (K > N) matrix [E] consisting of binary elements.
[0068] Step 503: Manipulation. As illustrated by Fig. 8, decompose the row vectors of the matrix [E] into K sub- vectors with each of N = P x Q elements. Reshape them into sub-matrices with the size of P x Q to suit the divided target imaging plane. Each sub-matrix is then used as the reference to manipulate the background EM fields to illuminate the objects.
[0069] Step 504: Reception. As depicted by Fig. 9, after M times manipulations, reflections from objects are collected by a single receiving antenna. The received signals are expressed as:
[R] := [R_1, R_2, ..., R_M]^T    (15)
[0070] Step 505: Reconstruction. Reconstruct the image of objects on the target imaging plane according to the modified LT decoding rules as below:

1. Normalize [R] by reflections corresponding to manipulations with degree d = 1;
2. Divide the normalized [R] by 2 and assign the closest integers to the remainders into [R'];
3. Retrieve the initial degree information [D] from matrix [E];
4. Find R'_k ∈ [R'] corresponding to D_k = 1 and perform an exclusive-or (XOR) operation with the other elements in [R'];
5. Update [D] with D_k = D_k − 1;
6. Assign R'_k corresponding to D_k = 0 into [γ];
7. Repeat Steps (4) to (6) until [D] = 0.
[0071] Then, the results in [γ] are the reconstructed image of binary-valued objects on the target imaging plane. It is noted that the XOR operation above may be replaced by performing a belief propagation method as described in Mirrezaei, Seyed, Karim Faez, and Shahram Yousefi, "Towards Fountain Codes. Part II: Belief Propagation Decoding," Wireless Personal Communications 77.2 (2014), which is incorporated herein by reference.
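The reconstruction steps above amount to a peeling decoder over GF(2): repeatedly take a row of degree 1, read off the corresponding symbol, and XOR it out of every other row that uses it. A compact sketch, with an illustrative 3x3 example rather than a full LT-structured matrix:

```python
import numpy as np

def peel_decode(E, R):
    """Peeling decoder over GF(2) for R = E @ gamma (mod 2):
    repeatedly find a degree-1 row, read off its symbol, and
    XOR it out of all rows that use it."""
    E = E.copy() % 2
    R = R.copy() % 2
    gamma = -np.ones(E.shape[1], dtype=int)  # -1 marks "unknown"
    while (gamma < 0).any():
        degree_one = np.where(E.sum(axis=1) == 1)[0]
        if len(degree_one) == 0:
            raise ValueError("decoding stalled: no degree-1 row left")
        r = degree_one[0]
        j = int(np.flatnonzero(E[r])[0])
        gamma[j] = R[r]
        users = np.flatnonzero(E[:, j])
        R[users] ^= gamma[j]   # remove the symbol's contribution
        E[users, j] = 0
    return gamma

# Illustrative decodable example (lower-triangular structure).
E = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1]])
gamma_true = np.array([1, 0, 1])
decoded = peel_decode(E, (E @ gamma_true) % 2)
```

The stall condition (no degree-1 row) is exactly why the degree distribution and the redundancy M ≥ P x Q matter: both keep degree-1 rows available throughout decoding.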
Numerical Examples
[0072] Example geometry settings of a scenario are shown in Fig. 10, where the target imaging plane containing objects to be imaged is set to be in the XY plane. Parabolic dish antennas working around 24 GHz are deployed at a distance of 1 m away along the Z-axis. With a beamwidth of 2°, each of those dish antennas centres at one sub-grid on the target imaging plane and generates corresponding EM fields. As shown in Figs. 11, 12 and 13, different patterns of EM fields are generated with different degrees. Given the matrix [E], the symbol 1 corresponds to high illumination power from the dish antennas while the symbol 0 corresponds to zero output. In addition, at the geometry centre of the illumination plane, an omnidirectional antenna is deployed to collect reflections from objects.
Effectiveness
[0073] Two binary-valued objects with the same size and shape were investigated to validate the effectiveness of the disclosed microwave imaging method with LT code structured fields. As shown in Figs. 14 and 15, images of the target objects are successfully reconstructed under 5 dB and 10 dB signal-to-noise ratio (SNR) conditions. The results also show that most of the reconstruction errors brought by the noise have been eliminated, indicating that the FEC ability of the LT code has been enabled.

Performance
[0074] From Fig. 16 it can be seen that as SNR values increase, the reconstruction mean-square-error (MSE) of the proposed method first drops dramatically and then remains close to 0. Although the MSE values of conventional microwave GI based on iterative algorithms also have a decreasing trend, the proposed method in this disclosure possesses a significantly better performance when SNR values are higher than 2 dB. It is also worth mentioning that the performance degradation below 2 dB is due to the quantization errors brought by the normalization and modulo-2 division process in the current reconstruction algorithm described above. Belief propagation can help to increase the performance.
Reconstruction Time
[0075] The time consumption of image reconstruction is compared here between the proposed method and conventional microwave GI. From Fig. 17, it can be concluded that both the proposed method and conventional microwave GI using iterative optimization algorithms experience a continuously increasing time consumption as the scale of the imaging problem grows. However, the curve belonging to the proposed method has a much more moderate slope, indicating a reduced reconstruction complexity and improved efficiency.
[0076] In addition, from Fig. 18 it can be seen that when the scale of the imaging problem is fixed, the reconstruction time of the proposed method does not change significantly with SNR conditions. However, the reconstruction time of conventional microwave GI using iterative optimization algorithms is inversely proportional to the SNR conditions.
Conclusion
[0077] This disclosure provides a non-random microwave GI scheme based on LT code structured background EM fields. This approach employs a coding technique from telecommunications in the scenario of microwave GI. Further, the FEC ability is introduced into microwave imaging applications. The proposed method not only extends the diversity of microwave GI, but also brings the possibility of further introducing information theory from telecommunications into microwave imaging applications in the future. Compared with conventional microwave GI, the proposed method can effectively obtain the image of binary-valued objects with reduced system complexity, improved reconstruction performance and time efficiency.
Example 2: Convolutional codes
[0078] While the example 1 above describes the use of a fountain code as an example of a block code, the following description provides an example using a convolutional code.
[0079] Again, under Born's Approximation (BA), assuming the target imaging plane is divided into sub-grids with P rows and Q columns, under L = P x Q times illuminations from antennas transmitting randomly modulated signals, the background EM fields generated within the framework of conventional microwave GI can be expressed as:
[E] := [ E_{1,1}^{(1)}   E_{2,1}^{(1)}   ...   E_{P,Q}^{(1)}
         E_{1,1}^{(2)}   E_{2,1}^{(2)}   ...   E_{P,Q}^{(2)}
         ...
         E_{1,1}^{(L)}   E_{2,1}^{(L)}   ...   E_{P,Q}^{(L)} ]    (16)

where E_{p,q}^{(l)} represents the background EM field formed during the l-th illumination at the corresponding sub-grid. After L illuminations, reflections from objects can be expressed as:
[R] := [R_1, R_2, ..., R_L]^T    (17)

where R_l represents the reflections from the l-th illumination. Then the imaging equation of conventional microwave GI can be written as:

[R] = [E][s]    (18)

where:
[s] := [σ_{1,1}, σ_{2,1}, ..., σ_{P,Q}]^T    (19)

is a vector of the frequency-independent reflectivity of the target imaging plane.
[0080] Since the conventional microwave GI imaging procedure presented above can also be explained as retrieving the object's information by background EM fields which are structured according to random encoding rules, the convolutional code originating from telecommunications can be further adapted to simplify the imaging model of microwave GI and improve its efficiency and performance under different SNR conditions.
Imaging with Convolutional Code Structured Background EM Fields
[0081] Redefine the frequency-independent reflectivity on the target imaging plane to be:
[γ] := [γ_{1,1}, γ_{2,1}, ..., γ_{P,Q}]^T    (20)

where γ_{p,q} ∈ {0, 1} represents the binary-valued reflectivity of the corresponding sub-grid. Let [E] be the background EM field matrix with convolutional code structures:

[E] := [ E_{1,1}^{(1)}   E_{2,1}^{(1)}   ...   E_{P,Q}^{(1)}
         E_{1,1}^{(2)}   E_{2,1}^{(2)}   ...   E_{P,Q}^{(2)}
         ...
         E_{1,1}^{(S)}   E_{2,1}^{(S)}   ...   E_{P,Q}^{(S)} ]    (21)

where E_{p,q}^{(s)} ∈ {0, 1} represents the binary-valued background EM fields generated during the s-th illumination at the corresponding sub-grid. Unlike the randomly structured matrix [E] in Eq. (18), processor 205 generates matrix [E] according to the modified convolutional encoding process as shown below:
1. Define parameters (n, k,m) of the employed convolutional code;
2. Select the generator polynomial P for the employed convolutional code;
3. Define a new matrix [O] with P × Q rows and P × Q columns;
4. From the first row of [O], sequentially select m elements. Between different rows, drop the last k elements and introduce the same number of new ones;
5. For each selection of m elements, generate n different patterns of 1s according to the generator polynomial P;
6. Assign those patterns of 1s to form [E].
[0082] For example, assume parameters (n = 2, k = 1, m = 4) are selected for the employed convolutional code and the target imaging plane containing binary-valued targets to be imaged is divided into P × Q (where P = Q = 3) sub-grids in total. Fig. 19 shows the resulting matrix [O] 1400 with a size of 9 × 9 and the corresponding selection procedure, where shaded elements are selected.
[0083] Fig. 20 illustrates the selection of certain m elements in a row 1501 of the matrix [O] 1400. Those selected elements are further used to generate n different patterns according to the generator polynomial P. As shown in Fig. 20, after m = 4 elements in the 6th row 1501 have been selected, those elements are used to generate n = 2 different patterns 1502 and 1503 according to P.
[0084] Processor 205 repeats the above procedures until all rows in matrix [O] 1400 have been used and processed. Then, processor 205 combines all the generated patterns to form the convolutional code structured background matrix [E] , as shown in Fig. 21 where the grey boxes indicate selected blocks.
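Steps 1 to 6 above can be sketched as follows, assuming the selection window advances by k columns per row (as suggested by Fig. 19) and that each generator polynomial acts as a tap vector over the m selected positions. The polynomial values, the right-edge truncation and the function names are illustrative assumptions, not the disclosed parameters.

```python
# Sliding-window selection over [O] (steps 3-4): row r selects m
# consecutive columns, shifted by k relative to the previous row.
def selection_windows(pq, m, k):
    windows = []
    for r in range(pq):
        start = r * k
        # Truncate at the right edge of [O] (boundary handling assumed).
        windows.append([c for c in range(start, start + m) if c < pq])
    return windows

# Illustrative tap vectors standing in for the n = 2 generator
# polynomials of the (n = 2, k = 1, m = 4) code.
POLYS = [(1, 0, 1, 1), (1, 1, 1, 1)]

# Steps 5-6: each window yields n rows of [E], one per polynomial,
# with 1s placed where the polynomial has a tap on a selected column.
def field_matrix(pq, m, k, polys):
    E = []
    for cols in selection_windows(pq, m, k):
        for poly in polys:
            row = [0] * pq
            for c, tap in zip(cols, poly):
                row[c] = tap
            E.append(row)
    return E

E = field_matrix(pq=9, m=4, k=1, polys=POLYS)  # S = n * P * Q = 18 rows
```

Note that the row count comes out as S = n x P x Q, matching the redundancy requirement stated for Eq. (23) below in the text.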
[0085] Therefore, the overall imaging equation of the proposed non-random microwave GI based on convolutional code structured EM fields can be expressed as:
[R] = [E][γ]    (22)

where

[R] := [R_1, R_2, ..., R_S]^T    (23)

where R_s represents the received signal from the s-th illumination. Note that unlike E_{p,q}^{(l)} ∈ R presented in Eq. (16), where l = 1, 2, ..., L and L = P × Q, in the new background EM field matrix [E] generated from the modified convolutional encoding rules, s = 1, 2, ..., S and S should satisfy S = n × P × Q.
Imaging Procedures and Reconstruction
[0086] The imaging procedure of the proposed non-random microwave GI based on convolutional code structured EM fields follows the block diagram 500 in Fig. 6, which contains the same five steps described above, with the main difference being the application of the convolutional code instead of the fountain code. In particular, processor 205 performs the following steps:
[0087] Step 501: Division. Equally divide the imaging plane containing objects into sub-grids with P rows and Q columns.
[0088] Step 502: Generation. As shown in Fig. 7, according to the modified convolutional encoding rules described above, generate the background EM field matrix [E] consisting of binary elements.
[0089] Step 503: Manipulation. As illustrated by Fig. 8, decompose the row vectors of the matrix [E] into S sub-vectors with each of L = P x Q elements. Reshape them into sub-matrices with the size of P x Q to suit the divided target imaging plane. Each sub-matrix is then used as the reference to manipulate the background EM fields to illuminate the objects.
[0090] Step 504: Reception. As depicted by Fig. 9 (where M is replaced by S), after S times manipulations, reflections from objects on the target imaging plane are collected by a single receiving antenna. The received signals are expressed as:
[R] := [R_1, R_2, ..., R_S]^T    (24)
[0091] Step 505: Reconstruction. Reconstruct the image of objects on the target imaging plane according to the modified convolutional decoding rules as below:

1. Normalize [R] by reflections corresponding to the first illumination;
2. Divide the normalized [R] by 2 and assign the closest integers to the remainders into [R'];
3. Perform Viterbi decoding according to the convolutional code to get [γ].
[0092] Then, results in [γ] are the reconstructed image of binary-valued objects on the target imaging plane. Details of Viterbi Decoding for convolutional codes can be found in T. Zhang, M. Guo, L. Ding, F. Yang, and L. Qian, "Soft output viterbi decoding for space-based ais receiver," in 2016 22nd Asia-Pacific Conference on Communications (APCC), Aug 2016, pp. 383-387, which is incorporated herein by reference.
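As an illustration of the Viterbi decoding used in step 3, the following sketch implements a hard-decision Viterbi decoder for a small rate-1/2 convolutional code. The generator taps (7, 5 in octal) are a common textbook choice, not necessarily the generator polynomial P of the disclosure, and the message is arbitrary.

```python
import itertools

# Illustrative rate-1/2 convolutional code, constraint length 3.
G = [(1, 1, 1), (1, 0, 1)]

def conv_encode(bits):
    """Feed bits through a 2-bit shift register; emit 2 output bits each."""
    state = (0, 0)
    out = []
    for b in bits:
        window = (b,) + state
        for g in G:
            out.append(sum(w * tap for w, tap in zip(window, g)) % 2)
        state = (b, state[0])
    return out

def viterbi_decode(received, nbits):
    """Hard-decision Viterbi decoding by minimising Hamming distance."""
    states = list(itertools.product((0, 1), repeat=2))
    metric = {s: 0 if s == (0, 0) else float("inf") for s in states}
    path = {s: [] for s in states}
    for t in range(nbits):
        r = received[2 * t: 2 * t + 2]
        new_metric, new_path = {}, {}
        for s in states:
            best = (float("inf"), None, None)
            for prev in states:
                for b in (0, 1):
                    if (b, prev[0]) != s:
                        continue  # no transition prev -> s on input b
                    window = (b,) + prev
                    expected = [sum(w * tap for w, tap in zip(window, g)) % 2
                                for g in G]
                    d = metric[prev] + sum(x != y for x, y in zip(expected, r))
                    if d < best[0]:
                        best = (d, prev, b)
            new_metric[s] = best[0]
            new_path[s] = path[best[1]] + [best[2]] if best[1] is not None else []
        metric, path = new_metric, new_path
    return path[min(states, key=lambda s: metric[s])]

message = [1, 0, 1, 1, 0, 0, 1, 0]
decoded = viterbi_decode(conv_encode(message), len(message))
```

Because the decoder keeps only the minimum-distance survivor per state, a few flipped received bits are corrected automatically, which is the FEC ability the disclosure relies on under noisy SNR conditions.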
Numerical Examples
[0093] The example of using a convolutional code can be applied to the scenario previously described with reference to Fig. 10 to Fig. 13.
Effectiveness
[0094] Two binary-valued objects with the same size and shape are investigated to validate the effectiveness of the proposed non-random microwave GI with convolutional code structured EM fields. The results are essentially identical to those shown in Figs. 14 and 15, where the image of the target objects is successfully reconstructed under a 5 dB signal-to-noise ratio (SNR) condition. It also shows that most of the reconstruction errors brought by the noise have been eliminated, indicating that the FEC ability of the convolutional code has been enabled.
Performance
[0095] From Fig. 22 it can be seen that as SNR values increase, the reconstruction mean square error (MSE) of the proposed method remains close to 0. As a comparison, although the MSE values of conventional microwave GI using iterative optimizations have a decreasing trend, the proposed imaging method possesses a significantly better performance for SNR values higher than 5 dB.
Complexity
[0096] The time consumption is investigated to evaluate the reconstruction complexity. Fig. 23 shows that the time consumption of conventional microwave GI using iterative optimization algorithms increases drastically with the scale of the imaging problem. In contrast, the time consumption of the proposed method remains relatively constant.
[0097] In addition, Fig. 24 shows that when the scale of the imaging problem is fixed, the reconstruction time of the proposed method does not change significantly with the SNR. By comparison, the time consumption of conventional microwave GI using iterative optimization algorithms is very sensitive to different SNR values.
Conclusion
[0098] This disclosure provides a non-random microwave GI scheme based on convolutional-code-structured background EM fields. The approach employs purposely encoded, non-random discrete EM fields to perform microwave GI, and the object reconstructions are obtained via a decoding method over the GF(2) domain. The proposed method not only extends the diversity of microwave GI, but also introduces the possibility of combining coding techniques from telecommunications with microwave imaging applications. Numerical examples demonstrated that, compared with conventional microwave GI, the proposed method can obtain the image of binary-valued objects with reduced system complexity and improved reconstruction efficiency and performance.

[0099] It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the above-described embodiments without departing from the broad general scope of the present disclosure. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive.

Claims

CLAIMS:
1. A method for imaging a scene, the method comprising:
generating a field to illuminate the scene;
sensing, by a sensor directed at the scene, a response from the scene; and
determining an image of the scene based on the response sensed by the sensor, wherein
generating the field is based on a linear code and determining the image of the scene is based on the response sensed by the sensor and the linear code.
2. The method of claim 1, further comprising repeating the method to generate multiple sensor values indicative of the sensed response at respective points in time, wherein determining the image comprises processing the multiple sensor values by using a decoding method corresponding to the linear code.
3. The method of claim 1 or 2, wherein the linear code is a block code.
4. The method of claim 3, wherein the block code is a fountain code.
5. The method of claim 4, wherein the fountain code is a Luby Transform code.
6. The method of claim 3, 4 or 5, wherein generating the field comprises:
randomly determining a generator value for each of multiple locations; and
generating the field based on the generator value for each of the multiple locations.
7. The method of claim 6, wherein randomly generating the generator value comprises randomly generating a binary value for each of the multiple locations.
8. The method of claim 7, wherein randomly generating the binary value for each of the multiple locations comprises randomly determining a degree indicative of the number of locations associated with a positive binary value.
9. The method of claim 8, wherein randomly determining the degree is based on a Robust Soliton Distribution (RSD).
10. The method of any one of claims 6 to 9, further comprising repeating the steps of generating a field value for each of the multiple locations for multiple respective points in time.
11. The method of any one of claims 3 to 10, further comprising generating multiple sensor values indicative of the sensed response at respective points in time; and
processing the multiple sensor values by using a decoding method that applies an XOR operation between a selected sensor value and the remaining sensor values.
12. The method of any one of claims 3 to 10, further comprising generating multiple sensor values indicative of the sensed response at respective points in time; and
processing the multiple sensor values by using a decoding method that is based on belief propagation.
13. The method of claim 1 or 2, wherein the linear code is a convolutional code.
14. The method of claim 13, wherein generating the field comprises determining a generator value for each of the multiple locations based on a generator polynomial; and generating the field based on the generator value for each of the multiple locations.
15. The method of claim 14, wherein determining the generator value comprises generating a binary value for each of the multiple locations.
16. The method of claim 14 or 15, wherein generating the generator value for each of the multiple locations is based on selected generator values generated for a previous point in time and comprises applying the generator polynomial to the selected generator values to determine additional generator values for a subsequent point in time.
17. The method of any one of claims 12 to 16, further comprising generating multiple sensor values indicative of the sensed response at respective points in time,
wherein determining the image comprises processing the multiple sensor values by using a decoding method based on a Viterbi decoder.
18. The method of any one of the preceding claims, wherein the field is an electromagnetic field.
19. The method of any one of the preceding claims, wherein the sensed response comprises sensed radiation that represents the response.
20. Software that, when installed on a computer, causes the computer to perform the method of any one of the preceding claims.
21. A system for imaging a scene, the system comprising:
a field generator to generate a field that illuminates the scene;
a sensor directed at the scene to sense a response from the scene; and
a processor to determine an image of the scene based on the response sensed by the sensor, wherein
generating the field is based on a linear code and determining the image of the scene is based on the response sensed by the sensor and the linear code.
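As an illustrative aside (not part of the claims or the disclosure), the LT-code embodiment the claims describe — random binary generator patterns whose degree follows a Robust Soliton Distribution (RSD), with the image recovered by XOR-based peeling of the sensor values — can be modeled generically. This is a textbook LT-code sketch under assumed parameters (`c`, `delta`); every function name here is hypothetical.

```python
# Generic LT-code model: each "measurement" is the XOR of the scene pixels
# selected by a random pattern whose degree is drawn from the RSD.
import math
import random

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust Soliton probability mass function over degrees 1..k."""
    s = c * math.log(k / delta) * math.sqrt(k)
    rho = [1 / k] + [1 / (d * (d - 1)) for d in range(2, k + 1)]
    tau = [s / (k * d) if d < k / s else 0 for d in range(1, k + 1)]
    pivot = int(round(k / s))
    if 1 <= pivot <= k:                      # spike at d = k/s when in range
        tau[pivot - 1] = s * math.log(s / delta) / k
    z = sum(rho) + sum(tau)                  # normalizing constant
    return [(r + t) / z for r, t in zip(rho, tau)]

def lt_encode(pixels, n_meas, pmf, rng):
    """Draw an RSD degree, pick that many pixel locations, XOR their values."""
    k = len(pixels)
    meas = []
    for _ in range(n_meas):
        d = rng.choices(range(1, k + 1), weights=pmf)[0]
        idx = rng.sample(range(k), d)
        val = 0
        for i in idx:
            val ^= pixels[i]
        meas.append((set(idx), val))
    return meas

def peel_decode(meas, k):
    """XOR peeling: resolve degree-1 measurements, substitute, repeat."""
    meas = [(set(i), v) for i, v in meas]
    out = [None] * k
    progress = True
    while progress:
        progress = False
        for idx, val in meas:
            if len(idx) == 1:
                (i,) = idx
                if out[i] is None:
                    out[i] = val
                    progress = True
        for j, (idx, val) in enumerate(meas):  # substitute known pixels
            for i in list(idx):
                if out[i] is not None:
                    idx.discard(i)
                    val ^= out[i]
            meas[j] = (idx, val)
    return out                                 # None marks unresolved pixels
```

With enough measurements the peeling process resolves every pixel with high probability; unresolved pixels are simply left as `None`, which is where a belief-propagation decoder (claim 12) would take over in a soft-decision setting.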
PCT/AU2018/050708 2017-07-10 2018-07-10 Method and system for imaging a scene WO2019010524A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2017902693 2017-07-10
AU2017902693A AU2017902693A0 (en) 2017-07-10 Imaging a scene

Publications (1)

Publication Number Publication Date
WO2019010524A1 true WO2019010524A1 (en) 2019-01-17

Family

ID=65000951

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2018/050708 WO2019010524A1 (en) 2017-07-10 2018-07-10 Method and system for imaging a scene

Country Status (1)

Country Link
WO (1) WO2019010524A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6307487B1 (en) * 1998-09-23 2001-10-23 Digital Fountain, Inc. Information additive code generator and decoder for communication systems
US20100194627A1 (en) * 2007-09-20 2010-08-05 Panasonic Corporation Spread spectrum radar apparatus, method for determining virtual image, and method for suppressing virtual image
US20150219437A1 (en) * 2012-01-03 2015-08-06 Ascentia Imaging, Inc. Coded localization systems, methods and apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIN, L. ET AL.: "Cascading polar coding and LT coding for radar and sonar networks", EURASIP JOURNAL ON WIRELESS COMMUNICATIONS AND NETWORKING, 2016, pages 1 - 12, XP021240027 *
LUBY, M.: "LT Codes", PROCEEDINGS OF THE 43RD SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE, 16 November 2002 (2002-11-16), Washington, DC, USA, pages 271 - 280, XP010628282 *


Legal Events

Code	Description
121	Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18831527; Country of ref document: EP; Kind code of ref document: A1)
NENP	Non-entry into the national phase (Ref country code: DE)
122	Ep: pct application non-entry in european phase (Ref document number: 18831527; Country of ref document: EP; Kind code of ref document: A1)