CN113141202A - MIMO space non-stationary channel estimation method based on image contour extraction - Google Patents

MIMO space non-stationary channel estimation method based on image contour extraction

Info

Publication number
CN113141202A
CN113141202A
Authority
CN
China
Prior art keywords
channel
image
point
path
estimation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110446241.9A
Other languages
Chinese (zh)
Other versions
CN113141202B (en)
Inventor
石琦
樊丁皓
张舜卿
徐树公
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN202110446241.9A
Publication of CN113141202A
Application granted
Publication of CN113141202B
Legal status: Active (current)
Anticipated expiration

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/0413MIMO systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00Baseband systems
    • H04L25/02Details ; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202Channel estimation
    • H04L25/0224Channel estimation using sounding signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00Baseband systems
    • H04L25/02Details ; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202Channel estimation
    • H04L25/024Channel estimation channel estimation algorithms
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00Baseband systems
    • H04L25/02Details ; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202Channel estimation
    • H04L25/024Channel estimation channel estimation algorithms
    • H04L25/0256Channel estimation using minimum mean square error criteria

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Power Engineering (AREA)
  • Image Analysis (AREA)

Abstract

A spatially non-stationary channel estimation method based on image contour extraction. In a massive MIMO spatially non-stationary channel moving scenario, the received signal, which is sparse in the angle-delay domain, is represented in image form and an estimation algorithm based on image contour extraction is used to obtain the estimated number of paths and the angle and delay of each path; the received signal, which is sparse in the space-delay domain, is likewise represented in image form and the same contour-extraction algorithm is used to obtain the estimate of the effective visible region corresponding to each channel path, whereby path gain estimation and channel reconstruction are achieved. The invention uses the image contour extraction technique and the sparsity of the channel in the angle-delay and space-delay domains to estimate the angle and delay of each channel path and a visible region that does not rely on sub-array division, replacing the traditional iterative optimization methods for solving the spatially non-stationary channel estimation problem.

Description

MIMO space non-stationary channel estimation method based on image contour extraction
Technical Field
The invention relates to a technology in the field of wireless communication, in particular to large-scale MIMO space non-stationary channel estimation based on an image contour extraction technology.
Background
Most existing channel estimation techniques proposed for spatial non-stationarity only consider the case where both the transmitting end and the receiving end are stationary; little attention is paid to the computational complexity of the algorithm or to scenarios in which the terminal is moving, and a design that achieves a good balance between performance and computational complexity is lacking. Moreover, in the prior art the spatial non-stationarity, i.e., the visible region, is mostly handled by sub-array division, and an improper division inevitably causes matching errors and hence performance loss in the reconstructed channel.
Disclosure of Invention
The invention provides a MIMO spatially non-stationary channel estimation method based on image contour extraction, aiming at the problems of the prior art that performance and complexity are unbalanced, that the computational delay required by practical mobile scenarios is difficult to meet, and that spatial non-stationarity and matching errors are neglected. In a massive MIMO OFDM system, the image contour extraction technique and the sparsity of the channel in the angle-delay and space-delay domains are used to estimate the angle and delay of each channel path and a visible region that does not depend on sub-array division, replacing the traditional iterative optimization methods for solving the spatially non-stationary channel estimation problem.
The invention is realized by the following technical scheme:
the invention relates to a space non-stationary channel estimation method based on image contour extraction, which is characterized in that under a large-scale MIMO space non-stationary channel moving scene, after a received signal with a sparse angle delay domain is represented in an image form, the number of estimated paths, the angle and the time delay of each path are obtained by utilizing an estimation algorithm of the image contour extraction, after the received signal with the sparse space delay domain is represented in the image form, an effective visible region estimation corresponding to each channel path is obtained by utilizing the estimation algorithm of the image contour extraction, and therefore path gain and channel reconstruction are achieved.
The estimation algorithm based on image contour extraction specifically comprises the following steps:
1) When the pixel at two-dimensional coordinate (i, j) of the image is an outer-boundary starting point, set the marker NBD = NBD + 1, record (i, j) and the updated NBD value, and record the point (i, j-1) to the left of (i, j) as $(i_2, j_2)$; otherwise jump to step 6).
2) Starting from $(i_2, j_2)$ and taking (i, j) as the center, detect clockwise: when a non-zero pixel exists in the four (up, down, left, right) neighborhoods, record it as $(i_1, j_1)$, update $(i_2, j_2) = (i_1, j_1)$, and record (i, j) as $(i_3, j_3)$; otherwise set the binarized pixel value of the point to $\bar f_{i,j} = -\mathrm{NBD}$ and jump to step 6).
3) Taking $(i_3, j_3)$ as the center and starting from the point next to $(i_2, j_2)$, detect counterclockwise: when a non-zero pixel exists above, below, left or right of the center point $(i_3, j_3)$, record it as $(i_4, j_4)$.
4) When $(i_3, j_3+1)$ is a zero pixel examined in step 3), set the binarized pixel value of the point to $\bar f_{i_3,j_3} = -\mathrm{NBD}$; when $(i_3, j_3+1)$ is not a zero pixel examined in step 3) and $\bar f_{i_3,j_3} = 1$ is satisfied, set $\bar f_{i_3,j_3} = \mathrm{NBD}$; otherwise the value of $\bar f_{i_3,j_3}$ does not change.
5) When the outer-boundary starting point of the current contour has been found again, i.e. $(i_4, j_4) = (i, j)$ and $(i_3, j_3) = (i_1, j_1)$, jump to step 6); otherwise update $(i_2, j_2) = (i_3, j_3)$, $(i_3, j_3) = (i_4, j_4)$ and jump to step 3).
6) When the binarized pixel value satisfies $\bar f_{i,j} \neq 1$, update the marker $\mathrm{LNBD} = |\bar f_{i,j}|$, and continue the raster-scan detection from pixel (i, j+1) until the bottom-right vertex of the image is scanned.
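For illustration only, the following Python sketch mirrors steps 1)-6) as written above (a Suzuki-style border following over a binarized image). The function name, the 4-neighbourhood ordering and the use of NumPy are assumptions of this sketch, not part of the invention.

```python
import numpy as np

def extract_outer_contours(binary):
    """Raster-scan border following mirroring steps 1)-6) above.
    `binary` is a 2-D array of 0/1 pixel values; returns a list of contours,
    each a list of (row, col) points. Only outer borders are followed and
    4-connectivity (up/down/left/right) is used, as in the text."""
    f = binary.astype(np.int32).copy()            # working copy that also stores NBD labels
    rows, cols = f.shape
    nbrs = [(0, -1), (-1, 0), (0, 1), (1, 0)]     # clockwise 4-neighbourhood starting from "left"
    contours = []
    nbd = 1                                       # marker NBD

    for i in range(rows):
        lnbd = 0                                  # LNBD is reset at the start of every line
        for j in range(cols):
            # step 1): outer-border start: f(i,j)=1, f(i,j-1)=0, and LNBD <= 0
            if f[i, j] == 1 and (j == 0 or f[i, j - 1] == 0) and lnbd <= 0:
                nbd += 1
                contour = []
                i2, j2 = i, j - 1                 # the point to the left of (i, j)

                # step 2): clockwise search around (i, j) starting from (i2, j2)
                start = nbrs.index((i2 - i, j2 - j))
                found = None
                for k in range(4):
                    di, dj = nbrs[(start + k) % 4]
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols and f[ni, nj] != 0:
                        found = (ni, nj)
                        break
                if found is None:
                    f[i, j] = -nbd                # isolated point: label it and resume the scan
                else:
                    i1, j1 = found
                    i2, j2, i3, j3 = i1, j1, i, j
                    while True:
                        # step 3): counter-clockwise search around (i3, j3), starting after (i2, j2)
                        start = nbrs.index((i2 - i3, j2 - j3))
                        right_zero_examined = False
                        i4 = j4 = None
                        for k in range(1, 5):
                            di, dj = nbrs[(start - k) % 4]
                            ni, nj = i3 + di, j3 + dj
                            if 0 <= ni < rows and 0 <= nj < cols and f[ni, nj] != 0:
                                i4, j4 = ni, nj
                                break
                            if (di, dj) == (0, 1):        # (i3, j3+1) examined and found zero
                                right_zero_examined = True
                        # step 4): update the label of (i3, j3)
                        if right_zero_examined:
                            f[i3, j3] = -nbd
                        elif f[i3, j3] == 1:
                            f[i3, j3] = nbd
                        contour.append((i3, j3))
                        # step 5): stop once back at the starting configuration
                        if (i4, j4) == (i, j) and (i3, j3) == (i1, j1):
                            break
                        i2, j2, i3, j3 = i3, j3, i4, j4
                    contours.append(contour)
            # step 6): update LNBD and resume the raster scan at (i, j+1)
            if f[i, j] != 1 and f[i, j] != 0:
                lnbd = abs(f[i, j])
    return contours
```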
Technical effects
The invention as a whole overcomes the drawbacks of the prior art that the computational complexity is too high, that the methods cannot be applied to moving scenarios, and that the visible-region estimation suffers from matching errors caused by the dependence on sub-array division. Compared with the prior art, by exploiting the contour extraction algorithm from image processing and its low-complexity pixel-domain operations, the invention can estimate and track the spatially non-stationary channel in a moving scenario with low computational complexity while guaranteeing estimation performance.
Drawings
FIG. 1 is a diagram of a large-scale MIMO spatial non-stationary channel model scenario;
FIG. 2 is a color image and a gray scale image of a spatial non-stationary angular time-delay domain received signal;
FIG. 3 is a color image and a gray scale image of a spatially non-stationary spatial time delay domain received signal;
FIG. 4 is a flow chart of a spatial non-stationary channel estimation implementation based on image contour extraction;
FIG. 5 is a comparison of channel estimation mean square error performance and processing delay based on image contour extraction and baseline algorithms;
figure 6 is a comparison of channel tracking mean square error performance and processing delay based on image contour extraction and baseline algorithms.
Detailed Description
As shown in FIG. 4, the present embodiment is based on a massive MIMO OFDM system with spatially non-stationary characteristics. The base-station receiving end is configured with $N_r$ antennas forming a uniform linear array with antenna spacing $d = \lambda/2$, where $\lambda$ is the wavelength of the transmitted signal, and the transmitting end is a single-antenna terminal. The received signal in the space-frequency domain after Fourier transform is $Y = H + N$, where the $N_r \times N_s$ complex matrix $H$ contains the massive MIMO fading coefficients and $N_s$ is the number of frequency-domain subcarriers.
For convenience of illustration, all-1 pilots are transmitted in this embodiment, so the transmitted signal is omitted here, and $N$ is additive white Gaussian noise with zero mean and unit variance. The spatially non-stationary wireless channel model between the transmitting end and the receiving end is
$$H = \sum_{l=1}^{L} g_l \left(\mathbf{p}(\Phi_l) \odot \mathbf{a}(\theta_l)\right) \mathbf{q}(\mu_l)^H,$$
where $L$ is the number of paths, $g_l$ is the gain of path $l$, $\mathbf{a}(\theta_l)$ and $\mathbf{q}(\mu_l)$ with $\mu_l = \Delta_f \tau_l$ are the spatial-domain and frequency-domain steering vectors, $\theta_l$ and $\tau_l$ are the signal departure angle and the delay corresponding to path $l$, and $\Delta_f$ is the subcarrier spacing. $\Phi_l$ denotes the visible region of path $l$, and $\mathbf{p}(\Phi_l)$ is the effective-antenna index selection vector of the visible region: if the $m$-th antenna belongs to the set $\Phi_l$, a 1 is placed at the $m$-th position of the vector $\mathbf{p}$, otherwise a 0 is placed.
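As a minimal illustration of the above channel model (not part of the patent text), the sketch below builds $H$ and the noisy observation $Y$ with NumPy; the half-wavelength spacing, the steering-vector sign convention and the toy parameter values are assumptions of this sketch.

```python
import numpy as np

def steering_a(theta, n_r, d_over_lambda=0.5):
    """Spatial steering vector a(theta) for a ULA (half-wavelength spacing assumed)."""
    m = np.arange(n_r)
    return np.exp(-2j * np.pi * d_over_lambda * m * np.sin(theta))

def steering_q(tau, n_s, delta_f):
    """Frequency-domain steering vector q(mu), mu = delta_f * tau."""
    k = np.arange(n_s)
    return np.exp(-2j * np.pi * delta_f * tau * k)

def build_channel(gains, thetas, taus, visible_sets, n_r, n_s, delta_f):
    """Spatially non-stationary channel H = sum_l g_l (p(Phi_l) * a(theta_l)) q(tau_l)^H."""
    H = np.zeros((n_r, n_s), dtype=complex)
    for g, theta, tau, phi in zip(gains, thetas, taus, visible_sets):
        p = np.zeros(n_r)
        p[np.asarray(list(phi), dtype=int)] = 1.0          # visible-region antenna selection
        H += g * np.outer(p * steering_a(theta, n_r), steering_q(tau, n_s, delta_f).conj())
    return H

# toy example: 3 paths, 64-antenna ULA, 56 subcarriers; all-1 pilots, so Y = H + N
rng = np.random.default_rng(0)
H = build_channel(gains=rng.standard_normal(3) + 1j * rng.standard_normal(3),
                  thetas=[0.2, -0.5, 1.0], taus=[50e-9, 120e-9, 300e-9],
                  visible_sets=[range(64), range(32), range(20, 64)],
                  n_r=64, n_s=56, delta_f=60e3)
N = (rng.standard_normal(H.shape) + 1j * rng.standard_normal(H.shape)) / np.sqrt(2)
Y = H + N
```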
The embodiment relates to a MIMO spatially non-stationary channel estimation method based on image contour extraction. In a massive MIMO spatially non-stationary channel moving scenario, the received signal that is sparse in the angle-delay domain is represented in image form, and an estimation algorithm based on image contour extraction is used to obtain the estimated number of paths, the signal departure angle of each path and the propagation delay; the received signal that is sparse in the space-delay domain is then represented in image form, and the same algorithm is used to obtain the estimate of the effective visible region corresponding to each channel path, whereby path gain estimation and channel reconstruction are realized.
The estimated number of paths, the signal departure angle of each path and the propagation delay are obtained as follows:
First, in a massive MIMO channel, when the number of paths is far smaller than the number of antennas, the received signal corresponding to the channel and the all-1 pilot sequence is sparse in the angle-delay domain. The received signal is converted from the space-frequency domain to the angle-delay domain as
$$\bar Y = D_a^H \, Y \, D_S,$$
and the angle-delay-sparse received signal is then represented in image form: each element $\bar Y_{i,j}$ of $\bar Y$ is further scaled to fit the image pixel-value range 0-255,
$$\tilde Y_{i,j} = 255 \cdot \frac{|\bar Y_{i,j}|}{\max(|\bar Y|)}.$$
Based on the scaled received signal $\tilde Y$ and the Image and Mat2gray functions in MATLAB, the corresponding color image and grayscale image of the angle-delay-domain sparse received signal can be generated, as shown in Fig. 2(a) and (b), where $D_a$ and $D_S$ are, respectively, the first $N_r$ rows of the $N_r$-dimensional discrete Fourier transform matrix and the first $N_s$ rows of the $\eta N_s$-dimensional discrete Fourier transform matrix, $\eta$ is the oversampling ratio, and the function $\max(\cdot)$ returns the maximum element modulus of the input matrix.
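A minimal NumPy sketch of the angle-delay image formation described above is given below; the DFT-matrix normalisation and the plain uint8 conversion (standing in for the MATLAB Image/Mat2gray step) are assumptions of this sketch.

```python
import numpy as np

def angle_delay_image(Y, eta=4):
    """Form the angle-delay image of the space-frequency received signal Y (N_r x N_s).
    D_a: N_r-point DFT matrix; D_s: first N_s rows of an eta*N_s-point DFT matrix,
    as described above. Normalisation constants are assumptions."""
    n_r, n_s = Y.shape
    D_a = np.fft.fft(np.eye(n_r)) / np.sqrt(n_r)                  # N_r x N_r DFT matrix
    D_s = np.fft.fft(np.eye(eta * n_s))[:n_s, :] / np.sqrt(n_s)   # first N_s rows of the eta*N_s DFT
    Y_ad = D_a.conj().T @ Y @ D_s                                 # angle-delay domain signal
    img = 255.0 * np.abs(Y_ad) / np.max(np.abs(Y_ad))             # scale to the 0-255 pixel range
    return img.astype(np.uint8)                                   # grayscale image, N_r x (eta*N_s)
```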
Second, on the basis of the grayscale image of the original received signal, a threshold $\delta$ is set and the pixel value $f_{i,j}$ of each pixel $(i, j)$ in the grayscale image is binarized, i.e.
$$\bar f_{i,j} = \begin{cases} 1, & f_{i,j} \ge \delta \\ 0, & f_{i,j} < \delta \end{cases}$$
The markers NBD = 1 and LNBD = 0 are set, where NBD and LNBD are the new border and the last new border, respectively. Each pixel of the image is traversed in raster-scan order, LNBD is reset to 0 whenever the start of a new line is scanned, and the contour of the image is extracted if and only if the pixel $(i, j)$ is detected to be an outer-boundary starting point, i.e. $\bar f_{i,j} = 1$ and $\bar f_{i,j-1} = 0$, and the current LNBD $\le 0$.
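The binarization and the outer-boundary start-point test that triggers contour extraction can be sketched as follows (threshold convention assumed); OpenCV's findContours implements an equivalent border-following routine and is used in the later sketches as a stand-in for steps 1)-6).

```python
import numpy as np
import cv2

def binarize(img, delta):
    """Threshold the grayscale image: pixels >= delta map to 1, others to 0 (assumed convention)."""
    return (img >= delta).astype(np.uint8)

def is_outer_border_start(f, i, j, lnbd):
    """Raster-scan trigger described above: f(i,j)=1, left neighbour f(i,j-1)=0, and LNBD <= 0."""
    left = f[i, j - 1] if j > 0 else 0
    return f[i, j] == 1 and left == 0 and lnbd <= 0

# Equivalent outer-contour extraction with OpenCV's built-in border following:
# contours, _ = cv2.findContours(binarize(img, delta), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
```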
The contour extraction specifically comprises the following steps:
1) When (i, j) is an outer-boundary starting point, set NBD = NBD + 1, record (i, j) and the updated NBD value, and record the point (i, j-1) to the left of (i, j) as $(i_2, j_2)$; if (i, j) is not an outer-boundary starting point, jump to step 6).
2) Starting from $(i_2, j_2)$, detect clockwise whether a non-zero pixel exists around (i, j); if so, record it as $(i_1, j_1)$, update $(i_2, j_2) = (i_1, j_1)$ and record (i, j) as $(i_3, j_3)$; otherwise set $\bar f_{i,j} = -\mathrm{NBD}$ and jump to step 6).
3) Around $(i_3, j_3)$, taking the point next to $(i_2, j_2)$ as the starting point, detect counterclockwise; if a non-zero pixel exists above, below, left or right of the center point $(i_3, j_3)$, record it as $(i_4, j_4)$.
4) Judge whether $(i_3, j_3+1)$ is a zero pixel examined in step 3): if so, set $\bar f_{i_3,j_3} = -\mathrm{NBD}$; if not and $\bar f_{i_3,j_3} = 1$ is satisfied, set $\bar f_{i_3,j_3} = \mathrm{NBD}$; otherwise execute the next step.
5) Judge whether the algorithm has returned to the outer-boundary starting point of the current contour, i.e. whether $(i_4, j_4) = (i, j)$ and $(i_3, j_3) = (i_1, j_1)$ hold; if so, jump to step 6); otherwise update $(i_2, j_2) = (i_3, j_3)$, $(i_3, j_3) = (i_4, j_4)$ and jump to step 3).
6) If $\bar f_{i,j} \neq 1$, update $\mathrm{LNBD} = |\bar f_{i,j}|$, and continue the raster-scan detection from pixel (i, j+1) until the bottom-right vertex of the image is scanned.
Third, after all contours have been extracted, the pixel value at the outer-boundary starting point of the last contour gives the estimated number of paths $\hat L$. Then, taking the outer-boundary starting point $(i, j)$ of each contour as the initial point, the numbers of pixels occupied by the current $l$-th contour in the horizontal and vertical directions are counted to obtain the contour height $h_l$ and width $w_l$; combined with the outer-boundary starting-point coordinates, the center coordinates $(x_l, y_l)$ of the rectangle enclosed by the current contour are obtained, and from these center coordinates the signal departure angle estimate $\hat\theta_l$ and the delay estimate $\hat\tau_l$ corresponding to the $l$-th channel path are obtained.
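A hedged sketch of this third step is shown below: contours are taken from the binarized angle-delay image, and the bounding-box centre of each contour is mapped to an angle and a delay. The uniform sin-angle grid and the delay-bin width $1/(\eta N_s \Delta_f)$ are assumptions of this sketch, since the patent text does not spell out these mappings.

```python
import numpy as np
import cv2  # findContours implements a border-following routine equivalent to steps 1)-6)

def estimate_angle_delay(img_ad, delta, n_r, n_s, eta, delta_f):
    """Estimate the number of paths and per-path (angle, delay) from the angle-delay image.
    Bounding boxes and centres follow the text; the index-to-angle and index-to-delay
    mappings below are assumptions."""
    binary = (img_ad >= delta).astype(np.uint8)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV >= 4
    estimates = []
    for c in contours:
        x, y, w_l, h_l = cv2.boundingRect(c)          # x: column (delay axis), y: row (angle axis)
        y_c, x_c = y + h_l / 2.0, x + w_l / 2.0       # centre of the enclosing rectangle
        theta_hat = np.arcsin(np.clip(2.0 * y_c / n_r - 1.0, -1.0, 1.0))  # assumed angular mapping
        tau_hat = x_c / (eta * n_s * delta_f)                              # assumed delay mapping
        estimates.append((theta_hat, tau_hat))
    return len(estimates), estimates                  # (L_hat, [(theta_l, tau_l), ...])
```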
The estimate of the effective visible region corresponding to each channel path is obtained as follows:
i) The received signal is converted from the space-frequency domain to the space-delay domain as
$$\bar Y_{\mathrm{SD}} = Y \, D_S,$$
and each element of $\bar Y_{\mathrm{SD}}$ is further scaled to fit the image pixel-value range 0-255:
$$\tilde Y_{\mathrm{SD},i,j} = 255 \cdot \frac{|\bar Y_{\mathrm{SD},i,j}|}{\max(|\bar Y_{\mathrm{SD}}|)}.$$
Based on the scaled received signal $\tilde Y_{\mathrm{SD}}$ and the Image and Mat2gray functions in MATLAB, the corresponding color image and grayscale image of the space-delay-domain sparse received signal are generated, as shown in Fig. 3(a) and (b).
ii) A threshold $\delta$ is set to complete the binarization preprocessing of the pixel value of each pixel of the image, and contour extraction is performed on the image once the execution condition of the contour extraction algorithm is satisfied.
iii) After all contours have been extracted, taking the outer-boundary starting point $(i, j)$ of each contour as the initial point, the numbers of horizontal and vertical pixels of the current $l$-th contour are counted to obtain the contour height $h_l$ and width $w_l$; combined with the outer-boundary starting-point coordinates, the horizontal center coordinate $x_l$ of the rectangle enclosed by the current contour is obtained. From the center coordinate $x_l$ and the vertical coordinate $j$ of the $l$-th outer-boundary starting point, the delay information and the visible region $\hat\Phi_l$ corresponding to the $l$-th channel path are obtained.
iv) Based on the obtained delay, the estimated angle and delay information can be matched with the visible regions, so that the parameter information $(\hat\theta_l, \hat\tau_l, \hat\Phi_l)$ corresponding to each channel path is obtained.
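The following sketch illustrates steps i)-iii) on the spatial-delay image: each contour's vertical extent is read as the visible antenna set and its horizontal centre as the delay bin used for matching. The axis convention (rows = antennas, columns = delay bins) is an assumption of this sketch.

```python
import numpy as np
import cv2

def estimate_visible_regions(img_sd, delta, n_r):
    """Estimate per-path visible regions from the spatial-delay image. Each contour's
    vertical (antenna-axis) extent is read as the visible antenna set Phi_l and its
    horizontal centre as the delay bin used for matching against the angle-delay estimates."""
    binary = (img_sd >= delta).astype(np.uint8)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for c in contours:
        x, y, w_l, h_l = cv2.boundingRect(c)
        antennas = np.arange(max(y, 0), min(y + h_l, n_r))   # visible antenna indices
        delay_bin = x + w_l / 2.0                            # matched against the delay estimates
        regions.append((delay_bin, antennas))
    return regions
```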
The path gain refers to: according to the angle and delay of each path and the effective visible region corresponding to each channel path, the gain of each path is obtained by the least-squares method:
$$\hat{\mathbf g} = \mathbf A^{\dagger} \, \mathrm{vec}(Y),$$
where the $l$-th column of $\mathbf A$ is $\mathrm{vec}\big((\mathbf p(\hat\Phi_l)\odot\mathbf a(\hat\theta_l))\,\mathbf q(\hat\tau_l)^H\big)$, $(\cdot)^{\dagger}$ denotes the matrix pseudo-inverse, and $\mathrm{vec}(\cdot)$ is the operator that vectorizes a matrix by stacking its columns.
The channel estimation refers to: substituting the estimated total number of paths and the channel parameters of each path into the wireless channel model to reconstruct the spatially non-stationary channel:
$$\hat H = \sum_{l=1}^{\hat L} \hat g_l \left(\mathbf p(\hat\Phi_l)\odot\mathbf a(\hat\theta_l)\right)\mathbf q(\hat\tau_l)^H,$$
where $\hat L$ is the estimated number of paths, $\hat g_l$, $\hat\theta_l$, $\hat\tau_l$ and $\hat\Phi_l$ are the estimated gain, departure angle, delay and visible region of path $l$, $\mathbf a(\cdot)$ and $\mathbf q(\cdot)$ are the spatial-domain and frequency-domain steering vectors corresponding to the angle and the delay, and $\mathbf p(\cdot)$ selects the effective antenna indices within the corresponding visible region.
The channel parameters of each path refer to: signal departure angle, signal propagation delay, effective visible area estimation and path gain.
Preferably, after the channel is reconstructed, the invention further performs channel tracking by updating the channel spatial parameters of the current time slot $t$, namely the angle, the delay and the visible region, according to whether the visible region has changed. Specifically, when channel reconstruction is carried out for time slot $t$, the image-contour-based channel visible-region estimation is performed first; when the estimated visible regions are unchanged with respect to the previous slot, i.e. $\hat\Phi_l^{(t)} = \hat\Phi_l^{(t-1)}$ holds, path gain and channel estimation are carried out directly, which shortens the processing delay required for channel estimation; otherwise, a complete update estimate of the channel spatial parameters and path gains is carried out, so that the spatially non-stationary wireless channel of time slot $t$ is reconstructed.
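The per-slot tracking decision can be sketched as below; the callable interfaces (estimate_regions, estimate_full, estimate_gains) are hypothetical placeholders for the routines sketched earlier.

```python
import numpy as np

def track_slot(Y_t, prev, estimate_regions, estimate_full, estimate_gains):
    """Per-slot tracking sketch: re-estimate the visible regions first; if they equal
    those of the previous slot, keep the previous angles/delays and refresh only the
    path gains, otherwise rerun the full estimation."""
    regions_t = estimate_regions(Y_t)
    unchanged = len(regions_t) == len(prev["regions"]) and all(
        np.array_equal(r, s) for r, s in zip(regions_t, prev["regions"]))
    if unchanged:                                   # fast path: visible regions unchanged
        gains_t = estimate_gains(Y_t, prev["thetas"], prev["taus"], regions_t)
        return {**prev, "gains": gains_t}
    return estimate_full(Y_t)                       # full re-estimation of all channel parameters
```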
The channel data used in this embodiment to verify the performance of the algorithm are generated from the moving-scene spatially non-stationary channel model in the literature (Wu S, Wang C X, Aggoune H M, et al. A General 3-D Non-Stationary 5G Wireless Channel Model [J]. IEEE Transactions on Communications, 2018, 66(7): 3065-3078). The embodiment uses the state-of-the-art Newton orthogonal matching pursuit based channel estimation algorithm mentioned in the literature (Han Y, Li M, Jin S, et al. Deep Learning-Based FDD Non-Stationary Massive MIMO Downlink Channel Reconstruction [J]. IEEE Journal on Selected Areas in Communications, 2020) as the baseline for performance comparison. The specific parameters used in this embodiment are shown in Table 1:
table 1 relevant parameters used in this example
Parameter(s) Value taking Parameter(s) Value taking
Number of base station antennas Nr 64 CPU Intel i7
Carrier frequency 2.6GHz Memory capacity 16GB
Number of subcarriers Ns 56 Hard disk capacity 256GB
Terminal moving speedDegree of rotation 3m/s Operating system Windows 10
The present embodiment compares the image-contour-based spatially non-stationary channel estimation with the baseline method to show the superiority of the proposed architecture in terms of performance and computational complexity. As shown in Fig. 5, the contour-extraction-based channel estimation algorithm is significantly better in mean-square-error performance than the baseline with 4 sub-arrays, mainly because the visible-region estimation in this embodiment does not depend on sub-arrays, so the matching error between sub-array division and visible region is eliminated, and both the visible-region estimation and the final channel estimation perform better than the baseline. Compared with the baseline with 64 sub-arrays, although the contour-extraction-based channel estimation still has room to improve in accuracy, this embodiment has a significant advantage in computational complexity, so it can be applied to moving scenarios with constraints on processing delay.
As shown in Fig. 6, the present embodiment extends the above experiment to the channel scenario 2 seconds later to show the spatially non-stationary channel tracking performance of this embodiment. Because its processing delay is short, this embodiment can estimate the channel state after 2 seconds in real time and therefore retains reliable mean-square-error performance, whereas the baseline method cannot track the channel in real time due to its high computational complexity.
Compared with the prior art, for the spatially non-stationary channel estimation problem the method exploits the sparse characteristics of the channel in the angle-delay and space-delay domains together with the image contour extraction algorithm, and replaces the high-complexity traditional iterative optimization methods with low-cost pixel-domain processing, thereby avoiding the difficulty of the traditional methods, which are unbalanced between performance and computational complexity and hard to use for channel tracking in moving scenarios.
Second, in a practical wireless system, visible-region estimation based on antenna sub-array division usually suffers from matching errors with respect to the actual situation. To avoid such errors, the invention extracts the visible region of each path channel in the spatial domain on a per-antenna basis, and obtains superior estimation accuracy at low computational complexity.
The method provides a significant performance gain in terms of mean square error and, benefiting from the low computational complexity of the algorithm, can realize channel tracking in moving scenarios while guaranteeing accuracy, so that it is highly feasible for application in practical systems.
The foregoing embodiments may be modified in many different ways by those skilled in the art without departing from the spirit and scope of the invention, which is defined by the appended claims and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (7)

1. A spatially non-stationary channel estimation method based on image contour extraction, characterized in that, in a massive MIMO spatially non-stationary channel moving scenario, after the received signal that is sparse in the angle-delay domain is represented in image form, the estimated number of paths and the angle and delay of each path are obtained by an estimation algorithm based on image contour extraction, and after the received signal that is sparse in the space-delay domain is represented in image form, the estimate of the effective visible region corresponding to each channel path is obtained by the estimation algorithm based on image contour extraction, whereby path gain estimation and channel reconstruction are achieved;
the estimation algorithm based on image contour extraction specifically comprises the following steps:
1) when the pixel at two-dimensional coordinate (i, j) of the image is an outer-boundary starting point, setting the marker NBD = NBD + 1, recording (i, j) and the updated NBD value, and recording the point (i, j-1) to the left of (i, j) as $(i_2, j_2)$; otherwise, jumping to step 6);
2) starting from $(i_2, j_2)$ and taking (i, j) as the center, detecting clockwise: when a non-zero pixel exists in the four (up, down, left, right) neighborhoods, recording it as $(i_1, j_1)$, updating $(i_2, j_2) = (i_1, j_1)$, and recording (i, j) as $(i_3, j_3)$; otherwise, setting the binarized pixel value of the point to $\bar f_{i,j} = -\mathrm{NBD}$ and jumping to step 6);
3) taking $(i_3, j_3)$ as the center and starting from the point next to $(i_2, j_2)$, detecting counterclockwise: when a non-zero pixel exists above, below, left or right of the center point $(i_3, j_3)$, recording it as $(i_4, j_4)$;
4) when $(i_3, j_3+1)$ is a zero pixel examined in step 3), setting the binarized pixel value of the point to $\bar f_{i_3,j_3} = -\mathrm{NBD}$; when $(i_3, j_3+1)$ is not a zero pixel examined in step 3) and $\bar f_{i_3,j_3} = 1$ is satisfied, setting $\bar f_{i_3,j_3} = \mathrm{NBD}$; otherwise, the value of $\bar f_{i_3,j_3}$ does not change;
5) when the outer-boundary starting point of the current contour has been found again, i.e. $(i_4, j_4) = (i, j)$ and $(i_3, j_3) = (i_1, j_1)$, jumping to step 6); otherwise, updating $(i_2, j_2) = (i_3, j_3)$, $(i_3, j_3) = (i_4, j_4)$ and jumping to step 3);
6) when the binarized pixel value satisfies $\bar f_{i,j} \neq 1$, updating the marker $\mathrm{LNBD} = |\bar f_{i,j}|$, and continuing the raster-scan detection from pixel (i, j+1) until the bottom-right vertex of the image is scanned.
2. The spatially non-stationary channel estimation method based on image contour extraction as claimed in claim 1, wherein the estimated number of paths, the signal departure angle of each path and the propagation delay are obtained as follows:
first, in a massive MIMO channel, when the number of paths is far smaller than the number of antennas, the received signal corresponding to the channel and the all-1 pilot sequence is sparse in the angle-delay domain; the received signal is converted from the space-frequency domain to the angle-delay domain as $\bar Y = D_a^H Y D_S$, the angle-delay-sparse received signal is then represented in image form, i.e. each element $\bar Y_{i,j}$ of $\bar Y$ is further scaled to $\tilde Y_{i,j} = 255\cdot|\bar Y_{i,j}|/\max(|\bar Y|)$, and the corresponding color image and grayscale image of the angle-delay-domain sparse received signal are generated from the scaled received signal $\tilde Y$, where $D_a$ and $D_S$ are, respectively, the first $N_r$ rows of the $N_r$-dimensional discrete Fourier transform matrix and the first $N_s$ rows of the $\eta N_s$-dimensional discrete Fourier transform matrix, $\eta$ is the oversampling ratio, and the function $\max(\cdot)$ returns the maximum element modulus of the input matrix;
second, on the basis of the grayscale image of the original received signal, a threshold $\delta$ is set and the pixel value $f_{i,j}$ of each pixel $(i, j)$ in the grayscale image is binarized, i.e. $\bar f_{i,j} = 1$ if $f_{i,j} \ge \delta$ and $\bar f_{i,j} = 0$ otherwise; the markers NBD = 1 and LNBD = 0 are set, where NBD and LNBD are respectively the new border and the last new border, each pixel of the image is traversed in raster-scan order, LNBD is reset to 0 when the start of a new line is scanned, and the contour of the image is extracted only when the pixel $(i, j)$ is detected to be an outer-boundary starting point, i.e. $\bar f_{i,j} = 1$ and $\bar f_{i,j-1} = 0$, and the current LNBD $\le 0$;
third, after all contours have been extracted, the pixel value at the outer-boundary starting point of the last contour gives the estimated number of paths $\hat L$; then, taking the outer-boundary starting point $(i, j)$ of each contour as the initial point, the numbers of horizontal and vertical pixels of the current $l$-th contour are counted to obtain the contour height $h_l$ and width $w_l$, the center coordinates $(x_l, y_l)$ of the rectangle enclosed by the current contour are obtained by combining the outer-boundary starting-point coordinates, and from these center coordinates the signal departure angle estimate $\hat\theta_l$ and the delay estimate $\hat\tau_l$ corresponding to the $l$-th channel path are obtained.
3. The spatially non-stationary channel estimation method based on image contour extraction as claimed in claim 1, wherein the estimate of the effective visible region corresponding to each channel path is obtained as follows:
i) the received signal is converted from the space-frequency domain to the space-delay domain as $\bar Y_{\mathrm{SD}} = Y D_S$, and each element of $\bar Y_{\mathrm{SD}}$ is further scaled to $\tilde Y_{\mathrm{SD},i,j} = 255\cdot|\bar Y_{\mathrm{SD},i,j}|/\max(|\bar Y_{\mathrm{SD}}|)$; based on the scaled received signal $\tilde Y_{\mathrm{SD}}$, the corresponding color image and grayscale image of the space-delay-domain sparse received signal are generated;
ii) a threshold $\delta$ is set to complete the binarization preprocessing of the pixel value of each pixel of the image, and contour extraction is performed on the image once the execution condition of the contour extraction algorithm is satisfied;
iii) after all contours have been extracted, taking the outer-boundary starting point $(i, j)$ of each contour as the initial point, the numbers of horizontal and vertical pixels of the current $l$-th contour are counted to obtain the contour height $h_l$ and width $w_l$; the horizontal center coordinate $x_l$ of the rectangle enclosed by the current contour is then obtained by combining the outer-boundary starting-point coordinates, and from the center coordinate $x_l$ and the vertical coordinate $j$ of the $l$-th outer-boundary starting point, the delay information and the visible region $\hat\Phi_l$ corresponding to the $l$-th channel path are obtained; the estimated angle and delay information are further matched with the visible regions to obtain the parameter information $(\hat\theta_l, \hat\tau_l, \hat\Phi_l)$ corresponding to each channel path.
4. The spatially non-stationary channel estimation method based on image contour extraction as claimed in claim 1, wherein the path gain refers to: according to the angle and delay of each path and the effective visible region corresponding to each channel path, the gain of each path is obtained by the least-squares method: $\hat{\mathbf g} = \mathbf A^{\dagger}\,\mathrm{vec}(Y)$, where the $l$-th column of $\mathbf A$ is $\mathrm{vec}\big((\mathbf p(\hat\Phi_l)\odot\mathbf a(\hat\theta_l))\,\mathbf q(\hat\tau_l)^H\big)$, $(\cdot)^{\dagger}$ denotes the matrix pseudo-inverse, and $\mathrm{vec}(\cdot)$ vectorizes a matrix by stacking its columns.
5. The spatially non-stationary channel estimation method based on image contour extraction as claimed in claim 1, wherein the channel estimation refers to: substituting the estimated total number of paths and the channel parameters of each path into the wireless channel model to reconstruct the spatially non-stationary channel: $\hat H = \sum_{l=1}^{\hat L}\hat g_l\big(\mathbf p(\hat\Phi_l)\odot\mathbf a(\hat\theta_l)\big)\mathbf q(\hat\tau_l)^H$, where $\hat L$ is the estimated number of paths, $\hat g_l$, $\hat\theta_l$, $\hat\tau_l$ and $\hat\Phi_l$ are the estimated gain, angle, delay and visible region of path $l$, $\mathbf a(\cdot)$ and $\mathbf q(\cdot)$ are the spatial-domain and frequency-domain steering vectors corresponding to the angle and the delay, and $\mathbf p(\cdot)$ selects the effective antenna indices within the corresponding visible region.
6. The method as claimed in claim 5, wherein the channel parameters of each path are: signal departure angle, signal propagation delay, effective visible area estimation and path gain.
7. The spatially non-stationary channel estimation method based on image contour extraction as claimed in claim 1, wherein, after the channel is reconstructed, channel tracking is further performed by: updating the channel spatial parameters of the current time slot $t$, namely the angle, the delay and the visible region, according to whether the visible region has changed, specifically: when channel reconstruction is carried out for time slot $t$, the image-contour-based channel visible-region estimation is performed first; when $\hat\Phi_l^{(t)} = \hat\Phi_l^{(t-1)}$ holds, path gain and channel estimation are carried out directly in real time, thereby reducing the processing delay required for channel estimation; otherwise, a complete update estimate of the channel spatial parameters and path gains is carried out, so that the spatially non-stationary wireless channel of time slot $t$ is reconstructed.
CN202110446241.9A 2021-04-25 2021-04-25 MIMO space non-stationary channel estimation method based on image contour extraction Active CN113141202B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110446241.9A CN113141202B (en) 2021-04-25 2021-04-25 MIMO space non-stationary channel estimation method based on image contour extraction

Publications (2)

Publication Number Publication Date
CN113141202A true CN113141202A (en) 2021-07-20
CN113141202B CN113141202B (en) 2022-06-17

Family

ID=76813522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110446241.9A Active CN113141202B (en) 2021-04-25 2021-04-25 MIMO space non-stationary channel estimation method based on image contour extraction

Country Status (1)

Country Link
CN (1) CN113141202B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140140375A1 (en) * 2012-11-19 2014-05-22 King Fahd University Of Petroleum And Minerals Method for compressive sensing , reconstruction, and estimation of ultra-wideband channels
CN103763223A (en) * 2014-01-20 2014-04-30 清华大学 Sparse MIMO-OFDM channel estimation method based on space-time correlation of channel
CN104185090A (en) * 2014-08-14 2014-12-03 青岛大学 Video abstraction extraction and transmission method based on cooperative wireless communication
CN104539335A (en) * 2014-12-24 2015-04-22 无锡北邮感知技术产业研究院有限公司 Limiting feedback method and device for large-scale antenna system
US20190123796A1 (en) * 2016-11-04 2019-04-25 Huawei Technologies Co.,Ltd. Information feedback method, user equipment, and network device
WO2020034394A1 (en) * 2018-08-13 2020-02-20 南京邮电大学 Compressed sensing-based large scale mimo channel feedback reconstruction algorithm
CN112565122A (en) * 2020-12-08 2021-03-26 江南大学 Super-large-scale MIMO channel estimation method based on Newton-orthogonal matching pursuit

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QI SHI et al.: "A Unified Channel Estimation Framework for Stationary and Non-Stationary Fading Environments", IEEE Transactions on Communications *
ZHANG Shunqing et al.: "Performance optimization of vehicle platooning based on C-V2X direct communication", ZTE Technology Journal *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023165630A1 (en) * 2022-03-04 2023-09-07 东南大学 Method for calculating spatial non-stationary wireless channel capacity for large-scale antenna array communication
CN115314082A (en) * 2022-07-11 2022-11-08 南通先进通信技术研究院有限公司 User visual area identification method in super-large scale MIMO system
CN115314082B (en) * 2022-07-11 2023-05-09 南通先进通信技术研究院有限公司 User visible area identification method in ultra-large-scale MIMO system

Also Published As

Publication number Publication date
CN113141202B (en) 2022-06-17

Similar Documents

Publication Publication Date Title
CN113141202B (en) MIMO space non-stationary channel estimation method based on image contour extraction
CN104299260B (en) Contact network three-dimensional reconstruction method based on SIFT and LBP point cloud registration
US8189428B2 (en) Methods and systems to detect changes in multiple-frequency band sonar data
Chen et al. Error-optimized sparse representation for single image rain removal
JP5950835B2 (en) System for reconstructing reflectors in the scene
CN104063898A (en) Three-dimensional point cloud auto-completion method
CN110969105B (en) Human body posture estimation method
CN112764116B (en) Sparse array sparse frequency point planar scanning system rapid imaging method
CN108257098A (en) Video denoising method based on maximum posteriori decoding and three-dimensional bits matched filtering
CN114966560B (en) Ground penetrating radar backward projection imaging method and system
CN113593037A (en) Building method and application of Delaunay triangulated surface reconstruction model
CN113438682A (en) SAGE-BEM5G wireless channel parameter extraction method based on beam forming
CN109214088A (en) A kind of extensive supersparsity planar array fast layout method that minimum spacing is controllable
CN115965943A (en) Target detection method, device, driving device, and medium
CN111313943A (en) Three-dimensional positioning method and device under deep learning assisted large-scale antenna array
KR100886647B1 (en) Apparatus and method for restoring loss pixel using directional interpolation
CN111539966A (en) Colorimetric sensor array image segmentation method based on fuzzy c-means clustering
Zhao Motion track enhancement method of sports video image based on otsu algorithm
CN109977892B (en) Ship detection method based on local saliency features and CNN-SVM
CN103916953A (en) Method and system for target positioning, and detection nodes
Jia et al. NSLIC: SLIC superpixels based on nonstationarity measure
Klette A comparative discussion of distance transforms and simple deformations in image processing
Jianyu et al. MTD and range-velocity decoupling of LFMCW radar
CN113890798A (en) Structured sparse estimation method and device for RIS cascade channel multi-user combination
CN103679201B (en) Calibration method of point set matching for image matching, recognition and retrieval

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant