CN111860827A - Multi-target positioning method and device of direction-finding system based on neural network model - Google Patents


Info

Publication number
CN111860827A
Authority
CN
China
Prior art keywords
target points
network model
target
data set
training data
Prior art date
Legal status
Granted
Application number
CN202010502016.8A
Other languages
Chinese (zh)
Other versions
CN111860827B (en)
Inventor
齐飞
王政府
李景泉
牛毅
石光明
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN202010502016.8A
Publication of CN111860827A
Application granted
Publication of CN111860827B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 5/00 - Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S 5/02 - Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a multi-target positioning method and device for a direction-finding system based on a neural network model, relating to the field of AOA positioning. The method comprises the following steps: collecting coordinate information of a plurality of target points and of the observation platform; rendering the geometric space between the coordinate information of the observation platform and the target points into a sample image space through a sample generation algorithm to obtain a training data set; modeling the training data set with a semantic segmentation network model to obtain the model's prediction result; and obtaining the number of target points and their specific positions from the prediction result. This solves the technical problem in the prior art that, when a plurality of target points appear simultaneously in a positioning space, their number and specific position information cannot be determined efficiently. The technical effects of effectively eliminating false target points and markedly improving multi-target positioning performance are achieved.

Description

Multi-target positioning method and device of direction-finding system based on neural network model
Technical Field
The invention relates to the technical field of positioning, in particular to a multi-target positioning method and device of a direction-finding system based on a neural network model.
Background
AOA positioning is a passive positioning technology based on direction-finding angles. It uses direction-finding angle information to locate targets, requires no synchronization between transmitter and receiver, and offers high concealment, strong anti-interference capability, and simple operation. It plays an important role in fields such as wireless sensor networks and indoor positioning.
Because environmental noise exists in the positioning scene, the direction-finding angles that an anchor point obtains from a target point contain error noise, which degrades the positioning performance for the real target.
However, in implementing the technical solution of the embodiments of the present application, the inventors found that the above prior art has at least the following technical problem:
in a positioning space, a plurality of target points often appear simultaneously, and the prior art cannot efficiently determine the number of target points and their specific position information.
Disclosure of Invention
The embodiment of the invention provides a multi-target positioning method and device of a direction-finding system based on a neural network model, which are used for solving the technical problems that a plurality of target points often appear simultaneously in a positioning space and the number of the target points and the specific position information of the target points cannot be determined efficiently in the prior art. The technical effects of effectively eliminating false target points and remarkably improving the multi-target positioning performance are achieved.
In view of the above problems, the embodiments of the present application provide a multi-target positioning method and device for a direction-finding system based on a neural network model.
In a first aspect, the invention provides a multi-target positioning method of a direction-finding system based on a neural network model, which comprises the following steps: collecting coordinate information of a plurality of target points and the observation platform; rendering a geometric space between the coordinate information of the observation platform and the target points into a sample image space through a sample generation algorithm to obtain a training data set; modeling the training data set according to a semantic segmentation network model to obtain a prediction result of the semantic segmentation network model; and acquiring the number of the target points and the specific positions of the target points according to the prediction result.
Further, the acquiring of coordinate information of the plurality of target points and the observation platform includes: in a two-dimensional positioning space, with the two-dimensional coordinate system taking the rightward horizontal direction as the positive x-axis direction and the upward direction perpendicular to the horizontal as the positive y-axis direction, obtaining the position of the observation platform; and obtaining the direction-finding angle information of all target points in the region.
Further, the rendering of the geometric space between the coordinate information of the observation platform and the target points into a sample image space through a sample generation algorithm to obtain a training data set includes: with the sample generation algorithm functions G_x(·) and G_y(·), the training data set Z is obtained as:
Z = G_x(x_1, β_11, …, β_1M, …, x_N, β_NM, …, β_N1)
where the generated sample label is Y = G_y(t_1, …, t_M); G_x(·) is the generating function of the sample image space and G_y(·) is the generating function of the sample label.
Further, the modeling of the training data set according to the semantic segmentation network model to obtain a prediction result of the semantic segmentation network model includes: with F(·; ·) denoting the semantic segmentation network model function, the prediction result of the training data set Z in the semantic segmentation network model can be represented as:
Ŷ = F(Z; Θ)
where Ŷ is the result of the neural network prediction and Θ is the weight parameter of the semantic segmentation model.
Further, the obtaining of the number of the target points and the specific positions of the target points according to the prediction result includes: obtaining region center coordinates through the gray-scale centroid method; and obtaining the number of the target points and the specific positions of the plurality of target points according to the region center coordinates, where the region center coordinate is computed as:
u = ( Σ_{v∈Ω} f(v)·v ) / ( Σ_{v∈Ω} f(v) )
where f(v) is the gray value of the pixel at coordinate v, Ω is the set of pixels in the target region, and u is the region center coordinate.
Further, the sample generation algorithm G_x(·) includes:
obtaining a training data set Z_A through a multi-channel input mode:
Z_A = [G_x(x_1, β_11, …, β_1M), …, G_x(x_N, β_N1, …, β_NM)], Z_A ∈ R^{N×H×W};
and obtaining a training data set Z_B through a single-channel input mode:
Z_B = α_B Σ_{n=1}^{N} Σ_{m=1}^{M} G_x(x_n, β_nm), Z_B ∈ R^{H×W},
where α_B is a normalization coefficient and G_x(x_n, β_nm) is the sample image space generated by reference point x_n and direction-angle information β_nm.
Further, the sample label generation algorithm G_y(·) includes:
obtaining sample labels Y_C by taking a circle as the output mode:
Y_C(v) = 1 if ‖v - t_m‖ ≤ Δ for some target point t_m, and Y_C(v) = 0 otherwise;
and obtaining sample labels Y_D by taking a rectangle as the output mode:
Y_D(v) = 1 if ‖v - t_m‖_∞ ≤ Δ for some target point t_m, and Y_D(v) = 0 otherwise,
where Δ is a default hyper-parameter, t_m is a target point, and v is a coordinate point.
In a second aspect, the invention provides a multi-target positioning device of a direction-finding system based on a neural network model, which comprises:
the first acquisition unit is used for acquiring coordinate information of a plurality of target points and the observation platform;
a first obtaining unit, configured to render a geometric space between the coordinate information of the observation platform and the plurality of target points into a sample image space through a sample generation algorithm, so as to obtain a training data set;
a second obtaining unit, configured to model the training data set according to a semantic segmentation network model, and obtain a prediction result of the semantic segmentation network model;
A third obtaining unit configured to obtain the number of the plurality of target points and specific positions of the plurality of target points according to the prediction result.
Preferably, the apparatus further comprises:
a fourth obtaining unit, configured to obtain, in a two-dimensional positioning space, a position of the observation platform by using a rightward horizontal direction as an x-axis positive direction and using an upward direction perpendicular to the horizontal direction as a y-axis positive direction in the two-dimensional space coordinate system;
a fifth obtaining unit, configured to obtain direction-finding angle information of all target points in the interval.
Preferably, the apparatus further comprises:
a sixth obtaining unit, configured to obtain a training data set Z from the sample generation algorithm functions G_x(·) and G_y(·):
Z = G_x(x_1, β_11, …, β_1M, …, x_N, β_NM, …, β_N1)
where G_x(·) is the generation algorithm of the sample image space and G_y(·) is the generation algorithm of the sample label.
Preferably, the apparatus further comprises:
a first prediction unit, configured to F (;) be the semantic segmentation network model function, and then the prediction result of the training data set Z in the semantic segmentation network model can be represented as:
Figure BDA0002525125630000051
Wherein
Figure BDA0002525125630000052
And theta is a weight parameter of the semantic segmentation model for the result of the neural network prediction.
Preferably, the apparatus further comprises:
a seventh obtaining unit configured to obtain a region center coordinate by the grayscale centroid method;
an eighth obtaining unit, configured to obtain the number of the target points and the specific positions of the plurality of target points according to the region center coordinates, where the region center coordinate is computed as:
u = ( Σ_{v∈Ω} f(v)·v ) / ( Σ_{v∈Ω} f(v) )
where f(v) is the gray value of the pixel at coordinate v, Ω is the set of pixels in the target region, and u is the region center coordinate.
Preferably, the apparatus further comprises:
a ninth obtaining unit, configured to obtain a training data set Z_A through a multi-channel input mode:
Z_A = [G_x(x_1, β_11, …, β_1M), …, G_x(x_N, β_N1, …, β_NM)], Z_A ∈ R^{N×H×W};
a tenth obtaining unit, configured to obtain a training data set Z_B through a single-channel input mode:
Z_B = α_B Σ_{n=1}^{N} Σ_{m=1}^{M} G_x(x_n, β_nm), Z_B ∈ R^{H×W},
where α_B is a normalization coefficient and G_x(x_n, β_nm) is the sample image space generated by reference point x_n and direction-angle information β_nm.
Preferably, the apparatus further comprises:
an eleventh obtaining unit, configured to obtain sample labels Y_C by taking a circle as the output mode:
Y_C(v) = 1 if ‖v - t_m‖ ≤ Δ for some target point t_m, and Y_C(v) = 0 otherwise;
a twelfth obtaining unit, configured to obtain sample labels Y_D by taking a rectangle as the output mode:
Y_D(v) = 1 if ‖v - t_m‖_∞ ≤ Δ for some target point t_m, and Y_D(v) = 0 otherwise,
where Δ is a default hyper-parameter, t_m is a target point, and v is a coordinate point.
In a third aspect, the present invention provides a multi-target positioning device for direction-finding system based on a neural network model, including a memory, a processor and a computer program stored in the memory and operable on the processor, where the processor implements the following steps when executing the program:
collecting coordinate information of a plurality of target points and the observation platform; rendering a geometric space between the coordinate information of the observation platform and the target points into a sample image space through a sample generation algorithm to obtain a training data set; modeling the training data set according to a semantic segmentation network model to obtain a prediction result of the semantic segmentation network model; and acquiring the number of the target points and the specific positions of the target points according to the prediction result.
In a fourth aspect, the present invention provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
collecting coordinate information of a plurality of target points and the observation platform; rendering a geometric space between the coordinate information of the observation platform and the target points into a sample image space through a sample generation algorithm to obtain a training data set; modeling the training data set according to a semantic segmentation network model to obtain a prediction result of the semantic segmentation network model; and acquiring the number of the target points and the specific positions of the target points according to the prediction result.
One or more technical solutions in the embodiments of the present application have at least one or more of the following technical effects:
the embodiment of the invention provides a multi-target positioning method and a multi-target positioning device of a direction-finding system based on a neural network model, wherein the method comprises the following steps: collecting coordinate information of a plurality of target points and the observation platform; rendering a geometric space between the coordinate information of the observation platform and the target points into a sample image space through a sample generation algorithm to obtain a training data set; modeling the training data set according to a semantic segmentation network model to obtain a prediction result of the semantic segmentation network model; and acquiring the number of the target points and the specific positions of the target points according to the prediction result. The technical problems that a plurality of target points often appear simultaneously in a positioning space and the number of the target points and specific position information of the target points cannot be efficiently determined in the prior art are solved. The technical effects of effectively eliminating false target points and remarkably improving the multi-target positioning performance are achieved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
FIG. 1 is a schematic flow chart of a direction-finding system multi-target positioning method based on a neural network model according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a direction-finding system multi-target positioning device based on a neural network model according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of another direction-finding system multi-target positioning device based on a neural network model according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of sample generation modes A and B in the embodiment of the present invention;
FIG. 5 is a schematic structural diagram of sample label generation modes C and D in the embodiment of the present invention;
fig. 6 is a schematic structural diagram of an MLocNet network model in an embodiment of the present invention.
Description of reference numerals: the system comprises a first acquisition unit 11, a first obtaining unit 12, a second obtaining unit 13, a third obtaining unit 14, a bus 300, a receiver 301, a processor 302, a transmitter 303, a memory 304 and a bus interface 306.
Detailed Description
The embodiment of the invention provides a multi-target positioning method and device of a direction-finding system based on a neural network model, which solve the technical problems that a plurality of target points often appear simultaneously in a positioning space and the number of the target points and the specific position information thereof cannot be determined efficiently in the prior art.
The technical scheme provided by the invention has the following general idea: the embodiment of the invention provides a multi-target positioning method and a multi-target positioning device of a direction-finding system based on a neural network model, wherein the method comprises the following steps: collecting coordinate information of a plurality of target points and the observation platform; rendering a geometric space between the coordinate information of the observation platform and the target points into a sample image space through a sample generation algorithm to obtain a training data set; modeling the training data set according to a semantic segmentation network model to obtain a prediction result of the semantic segmentation network model; and acquiring the number of the target points and the specific positions of the target points according to the prediction result. The technical effects of effectively eliminating false target points and remarkably improving the multi-target positioning performance are achieved.
The technical solutions of the present invention are described in detail below with reference to the drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples of the present invention serve to explain, rather than limit, the technical solutions of the present application, and that the technical features in the embodiments and examples of the present application may be combined with each other provided there is no conflict.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Example one
Fig. 1 is a schematic flow chart of a direction-finding system multi-target positioning method based on a neural network model in the embodiment of the present invention. As shown in fig. 1, an embodiment of the present invention provides a multi-target positioning method for a direction-finding system based on a neural network model, where the method includes:
step 110: collecting coordinate information of a plurality of target points and the observation platform;
Further, the acquiring of coordinate information of the plurality of target points and the observation platform includes: in a two-dimensional positioning space, with the two-dimensional coordinate system taking the rightward horizontal direction as the positive x-axis direction and the upward direction perpendicular to the horizontal as the positive y-axis direction, obtaining the position of the observation platform; and obtaining the direction-finding angle information of all target points in the region.
Specifically, in the two-dimensional positioning space, the coordinate system takes the rightward horizontal direction as the positive x-axis and the upward direction perpendicular to it as the positive y-axis. The position of the observation platform (anchor point) is obtained, and the direction-finding angles toward the different target points are obtained through the angle-measuring equipment of the observation platform. In multi-target positioning, the order in which each observation platform or anchor point obtains direction-finding angles from the target points differs: at reference point x_1 the direction-finding angles may be obtained in the order β_1 = [β_13, β_12, …, β_1M]^T, while at reference point x_2 they may be obtained in the order β_2 = [β_21, β_23, …, β_22]^T.
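As a rough illustration of this collection step, the Python sketch below simulates an observation scenario: anchors and targets are placed in the unit positioning region, and each anchor measures noisy direction-finding angles to all targets in its own shuffled order. The function name, the uniform random placement, and the noise standard deviation are illustrative assumptions rather than values prescribed by the patent (the 0.025 noise level reused here comes from the later FIG. 4 example).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_bearings(anchors, targets, sigma=0.025, shuffle=True):
    """Simulate noisy direction-finding angles from every anchor to every target.

    anchors: (N, 2) array of observation-platform (anchor) coordinates.
    targets: (M, 2) array of target coordinates.
    Returns a list of N angle arrays; each anchor's angles are shuffled
    independently, mirroring the unknown per-anchor measurement order.
    """
    bearings = []
    for x in anchors:
        d = targets - x                                        # vectors from anchor to targets
        beta = np.arctan2(d[:, 1], d[:, 0])                    # angle relative to the horizontal axis
        beta = beta + rng.normal(0.0, sigma, size=beta.shape)  # zero-mean Gaussian observation noise
        if shuffle:
            beta = rng.permutation(beta)                       # measurement order differs per anchor
        bearings.append(beta)
    return bearings

anchors = rng.uniform(0.0, 1.0, size=(4, 2))   # 4 reference points in the unit region
targets = rng.uniform(0.0, 1.0, size=(6, 2))   # 6 target points, as in the FIG. 4 example
print(simulate_bearings(anchors, targets)[0])
```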
Step 120: rendering a geometric space between the coordinate information of the observation platform and the target points into a sample image space through a sample generation algorithm to obtain a training data set;
specifically, the multi-target positioning task is converted into a semantic segmentation task through the conversion of a solution domain, namely, a geometric space between an observation platform and a target point is rendered into a sample image space through a sample generation algorithm, so that a training data set is generated.
Further, the rendering of the geometric space between the coordinate information of the observation platform and the target points into a sample image space through a sample generation algorithm to obtain a training data set includes:
the sample generation algorithm functions are G_x(·) and G_y(·), and the training data set Z is obtained as:
Z = G_x(x_1, β_11, …, β_1M, …, x_N, β_NM, …, β_N1)
where the generated sample label is Y = G_y(t_1, …, t_M); G_x(·) is the generating function of the sample image space and G_y(·) is the generating function of the sample label.
Specifically, consider the mathematical model of the multi-target positioning task: assume that in a two-dimensional space there are N reference points and M target points. The reference point coordinates are x_i = [x_i, y_i]^T, i = 1, …, N, and the target point coordinates are t_j = [t_xj, t_yj]^T, j = 1, …, M. In direction-finding positioning, the direction-finding angle obtained at reference point x_i for target point t_j is denoted β_ij, which can be expressed as
β_ij = β(x_i, t_j) + ε_ij
where β(x_i, t_j) denotes the angle between the vector connecting target point t_j and reference point x_i and the horizontal direction, and ε_ij is independent and identically distributed observation noise following a zero-mean Gaussian distribution with variance σ². The multi-target positioning task based on direction-finding lines is to solve for the positions of the target points from the reference point coordinates and the direction-finding angle data. As the number of target points and observation platforms increases, pairwise intersections of the direction-finding lines generate a large number of false intersection points in the positioning space, and the multi-target positioning task becomes complex. In this application, the multi-target positioning task is converted into a semantic segmentation task: a training data set is generated with the proposed sample generation algorithm, and a convolutional neural network model is used to model the training data set, thereby achieving high-performance target positioning. The specific procedure is as follows.
In the multi-target positioning space, assume the sample generation algorithm functions are G_x(·) and G_y(·) and the generated training data set is Z. The process of generating the training data set can be described as:
Z = G_x(x_1, β_11, …, β_1M, …, x_N, β_NM, …, β_N1)
The generated sample label is Y = G_y(t_1, …, t_M), where G_x(·) is the generation algorithm of the sample image space and G_y(·) is the generation algorithm of the sample label.
In the multi-target positioning task, the sequence of direction-finding angles obtained at each reference point is different, so a direction-finding line cannot be directly associated with a particular target. Consequently, a multi-target positioning training data set cannot be generated with a multiplicative mixture of Gaussian distributions; only an additive mixture of Gaussian distributions can be used. In the present application, the method is divided into two modes, multi-channel input and single-channel input, according to the number of channels of the training data set. Assume the positioning region is 1 × 1, the scale of the training data set is H × W, and the number of reference points is N. The sample generation algorithm G_x(·) and the sample label generation algorithm G_y(·) are described in detail below.
The sample generation algorithm G_x(·) includes obtaining a training data set Z_A through the multi-channel input mode:
Z_A = [G_x(x_1, β_11, …, β_1M), …, G_x(x_N, β_N1, …, β_NM)], Z_A ∈ R^{N×H×W}.
Specifically, in the generation process of the multi-channel sample input mode, the direction-finding line β_nm of reference point x_n is first rendered, with the probability of a Gaussian distribution, into a sample image space of size H × W; all direction-finding lines under that reference point are rendered by the G_x(·) function and combined into the sample image space Z_n. The sample image spaces under each reference point are generated in turn, and finally the Z_n are stacked together as the input training data set Z_A (Z_A ∈ R^{N×H×W}). The pixel value Z_An(v) of the sample image space generated by a single reference point is formed by additively mixing the Gaussian contributions of that reference point's direction-finding lines and normalizing with a coefficient α_A. In the multi-channel sample input mode, the sample image space generated by the direction-finding lines of each reference point is taken as a single channel, and the points in the sample image space are generated by an additive mixture of Gaussian distributions. Treating each reference point as a separate channel avoids interference from the direction-finding line information of the other reference points. For the multi-target positioning task, the number of channels in the multi-channel input mode equals the number of reference points, so the data size of the generated training data set Z is affected by the number of reference points.
The sample generation algorithm G_x(·) further includes obtaining a training data set Z_B through the single-channel input mode:
Z_B = α_B Σ_{n=1}^{N} Σ_{m=1}^{M} G_x(x_n, β_nm), Z_B ∈ R^{H×W},
where α_B is a normalization coefficient and G_x(x_n, β_nm) is the sample image space generated by reference point x_n and direction-angle information β_nm.
Specifically, the generation process of the single-channel input mode is as follows: first, each direction-finding line is rendered, with the probability of a Gaussian distribution, by the G_x(·) function into a sample image space of size H × W; then the sample image spaces generated by all direction-finding lines under all reference points are superposed in turn into a single sample image space, finally producing the training data set Z_B (Z_B ∈ R^{H×W}) given by the expression above, with α_B as the normalization coefficient.
Compared with the multi-channel input mode, the generated sample image is denser in the single-channel input mode. The sample image generated by the single-channel input mode is single-channel and contains spatial information of direction finding angles obtained by all the reference points, and the information obtained by all the reference points is superposed. For the multi-target positioning task, compared with a multi-channel input mode, the single-channel input mode is not influenced by the number of reference points.
In summary, in the multi-target positioning task, the sample images Z generated by the multi-channel input mode and the single-channel input mode can both be expressed in a unified form through a common per-line rendering function ρ(·).
For example, in the multi-target positioning task, the sequence of direction-finding angles obtained at each reference point is different, so a direction-finding line cannot be directly associated with a particular target; therefore a multiplicative mixture of Gaussian distributions cannot be used to generate the multi-target positioning samples, and only an additive mixture of Gaussian distributions can be used. In the embodiment of the application, the method is divided into the multi-channel and single-channel input modes according to the number of channels of the sample image space. Assume that within the 1 × 1 positioning region the scale of the sample image space is H × W and the number of reference points is N. The multi-channel sample input mode can effectively handle the case where an observation station's direction-finding angles are missing, while the single-channel sample input mode can effectively handle positioning tasks with many anchor points and many targets. Assuming the sample image scale H × W is 224 × 224, the noise level is 0.025, the number of reference points is 4, and the number of target points is 6, sample image spaces generated in the two modes are shown in FIG. 4, where mode A is the multi-channel sample input mode and mode B is the single-channel sample input mode. The center point in channel 1 of mode A indicates the position of the reference point, and the direction-finding lines it obtains toward the different targets are shown in channel 1. Each channel contains 6 straight lines; because two of the direction-finding angles obtained by this reference point are close, their rendered lines overlap and only 5 straight lines are visible. In the single-channel sample input mode, the sample images drawn for all reference points are superposed together.
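The following sketch illustrates, under stated assumptions, how direction-finding lines can be rasterized into the two kinds of sample image spaces described above (mode A, one channel per reference point; mode B, a single superposed channel). The Gaussian line-rendering function, its width sigma, the half-line truncation, and the max-normalization are illustrative stand-ins for the patent's G_x(·), whose exact formula is not reproduced in the text, and the function names are assumptions.

```python
import numpy as np

def line_image(anchor, beta, H=224, W=224, sigma=0.01):
    """Render one direction-finding ray from `anchor` at angle `beta` into an
    H x W image; intensity falls off as a Gaussian of the distance to the ray."""
    ys, xs = np.meshgrid(np.linspace(0.0, 1.0, H), np.linspace(0.0, 1.0, W), indexing="ij")
    dx, dy = xs - anchor[0], ys - anchor[1]
    ux, uy = np.cos(beta), np.sin(beta)        # unit vector along the ray
    along = dx * ux + dy * uy                  # signed distance along the ray
    perp = -dx * uy + dy * ux                  # signed distance perpendicular to it
    img = np.exp(-perp ** 2 / (2.0 * sigma ** 2))
    img[along < 0.0] = 0.0                     # keep only the forward half-line
    return img

def sample_multi_channel(anchors, bearings, H=224, W=224):
    """Mode A: one channel per reference point, Z_A with shape (N, H, W)."""
    channels = []
    for x, betas in zip(anchors, bearings):
        c = sum(line_image(x, b, H, W) for b in betas)
        channels.append(c / max(c.max(), 1e-12))   # per-channel normalization (alpha_A)
    return np.stack(channels)

def sample_single_channel(anchors, bearings, H=224, W=224):
    """Mode B: superpose every line of every reference point, Z_B with shape (H, W)."""
    z = sum(line_image(x, b, H, W)
            for x, betas in zip(anchors, bearings) for b in betas)
    return z / max(z.max(), 1e-12)                 # global normalization (alpha_B)

# demo: 2 anchors, each with two bearings, rendered in both modes
A = np.array([[0.1, 0.1], [0.9, 0.2]])
B = [np.array([0.6, 0.9]), np.array([2.0, 2.4])]
print(sample_multi_channel(A, B).shape, sample_single_channel(A, B).shape)
```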
Further, the sample label generation algorithm G_y(·) includes obtaining sample labels Y_C by taking a circle as the output mode:
Y_C(v) = 1 if ‖v - t_m‖ ≤ Δ for some target point t_m, and Y_C(v) = 0 otherwise.
Specifically, for the circle as output mode: analysis of the multi-target positioning task shows that the real target points obey a mixture-of-Gaussians distribution. To introduce this prior information, the application outputs each target point as a circular region. Assuming an existing target point t_m, the generated semantic graph is Y_C (Y_C ∈ R^{H×W}), whose pixel value Y_C(v) is given by the expression above, where Δ is a default hyper-parameter that determines the radius of the circles in the semantic graph.
Sample labels Y_D are obtained by taking a rectangle as the output mode:
Y_D(v) = 1 if ‖v - t_m‖_∞ ≤ Δ for some target point t_m, and Y_D(v) = 0 otherwise,
where Δ is a default hyper-parameter, t_m is a target point, and v is a coordinate point.
Specifically, for the rectangle as output mode: to improve the ratio of positive to negative classes in the semantic graph generated from the sample labels, each target point is mapped into the semantic graph as a rectangular block rather than as a single point. Assuming an existing target point t_m, the generated semantic graph is Y_D (Y_D ∈ R^{H×W}), whose pixel value Y_D(v) is given by the expression above, where Δ determines the length and width of the rectangular block in the semantic graph.
In summary, the semantic graphs Y_C and Y_D generated by label modes C and D follow the expressions above, with Δ a default hyper-parameter. The semantic graph only distinguishes whether a target is present; it does not distinguish individual targets.
For example, the sample label generation algorithm G_y(·) only distinguishes whether a target exists in the generated sample-label semantic graph and does not distinguish individual targets. Assuming the sample image size is 224 × 224, the noise level is 0.025, the number of reference points is 4, and the number of target points is 5, label semantic graphs generated in the two modes, the circle as output (mode C) and the rectangle as output (mode D), are shown in FIG. 5.
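A minimal sketch of the two label modes follows, assuming binary semantic maps: mode C marks a disc of radius Δ around every target, mode D an axis-aligned square of half-width Δ. The value of delta, the grid resolution, and the function names are assumptions; they stand in for the patent's G_y(·), whose exact formulas are not reproduced in the text.

```python
import numpy as np

def label_circle(targets, delta=0.02, H=224, W=224):
    """Mode C: mark a disc of radius `delta` around every target point t_m."""
    ys, xs = np.meshgrid(np.linspace(0.0, 1.0, H), np.linspace(0.0, 1.0, W), indexing="ij")
    y = np.zeros((H, W), dtype=np.float32)
    for tx, ty in targets:
        y[(xs - tx) ** 2 + (ys - ty) ** 2 <= delta ** 2] = 1.0
    return y

def label_rect(targets, delta=0.02, H=224, W=224):
    """Mode D: mark an axis-aligned square block of half-width `delta` per target."""
    ys, xs = np.meshgrid(np.linspace(0.0, 1.0, H), np.linspace(0.0, 1.0, W), indexing="ij")
    y = np.zeros((H, W), dtype=np.float32)
    for tx, ty in targets:
        y[(np.abs(xs - tx) <= delta) & (np.abs(ys - ty) <= delta)] = 1.0
    return y

targets = np.array([[0.2, 0.3], [0.7, 0.8]])
print(label_circle(targets).sum(), label_rect(targets).sum())   # labelled pixel counts
```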
Step 130: modeling the training data set according to a semantic segmentation network model to obtain a prediction result of the semantic segmentation network model;
further, the modeling the training data set according to the semantic segmentation network model to obtain a prediction result of the semantic segmentation network model includes:
With F(·; ·) denoting the semantic segmentation network model function, the prediction result of the training data set Z in the semantic segmentation network model can be represented as:
Ŷ = F(Z; Θ)
where Ŷ is the result of the neural network prediction and Θ is the weight parameter of the semantic segmentation model.
Specifically, through the solution-domain conversion, the multi-target positioning task is converted into a two-class semantic segmentation task. Assuming an existing semantic segmentation network model function F(·; ·), the prediction result of the training data set Z in the network model can be expressed as
Ŷ = F(Z; Θ)
where Ŷ is the result of the neural network prediction and Θ is the weight parameter of the semantic segmentation model. The prediction Ŷ obtained from the semantic segmentation model is then passed to the post-processing module P(·), which yields the number of target points and the coordinate information predicted by the model; this process can be described as applying P(·) to the network output, i.e., P(F(Z; Θ)). In other words, the observation platform (anchor point) coordinates and the direction-finding angle matrix β_ij they obtain are passed sequentially through the sample generation algorithm, the semantic segmentation network model, and the post-processing module to obtain the number of target points and the coordinate information predicted by the model.
In the embodiment of the application, a convolutional neural network, MLocNet, is provided to model the AOA sample images generated for multi-target positioning. Through its encoder-decoder structure, MLocNet denoises the samples and extracts high-level semantic features around the target points, thereby predicting the number and positions of the multiple targets. The MLocNet framework used in this embodiment is shown in FIG. 6. The default sample size is 224 × 224, the learning rate is set to 1e-3 with a linear decay schedule, Adam is used as the optimizer, training runs for 60 epochs, and the network model is implemented with the PyTorch neural network framework. In the embodiment of the present application, the loss function to be optimized for the multi-target positioning task is the cross-entropy loss J(Θ), which is commonly adopted in semantic segmentation tasks to optimize the weight parameters of the network model; that is, the weight parameter Θ of the convolutional neural network model is obtained by minimizing J(Θ).
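The sketch below shows a toy encoder-decoder segmentation network trained with the settings stated above (Adam, learning rate 1e-3, two-class cross-entropy) in PyTorch. It is not the MLocNet architecture of FIG. 6, whose layer configuration is not given in the text; the class name and layer sizes are placeholders chosen only so the example runs on 224 × 224 inputs, and a full run would repeat train_step over 60 epochs of generated samples.

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    """A deliberately small stand-in for an encoder-decoder segmentation network:
    it maps a multi-channel (or single-channel) sample image to a 2-class map."""
    def __init__(self, in_ch=4, classes=2):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, classes, 2, stride=2),
        )

    def forward(self, z):
        return self.dec(self.enc(z))

model = TinyEncoderDecoder(in_ch=4)                        # 4 channels = 4 reference points (mode A)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam with lr 1e-3, as stated in the text
criterion = nn.CrossEntropyLoss()                          # two-class cross-entropy loss J(Theta)

def train_step(z, y):
    """One optimization step; z: (B, in_ch, H, W) samples, y: (B, H, W) labels in {0, 1}."""
    optimizer.zero_grad()
    loss = criterion(model(z), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# dummy batch matching the 224 x 224 default sample size; in practice z and y would
# come from the sample generation algorithm G_x and the label generation algorithm G_y
z = torch.randn(2, 4, 224, 224)
y = torch.randint(0, 2, (2, 224, 224))
print(train_step(z, y))
```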
Step 140: and acquiring the number of the target points and the specific positions of the target points according to the prediction result.
Further, the obtaining of the number of the target points and the specific positions of the target points according to the prediction result includes: obtaining region center coordinates through the gray-scale centroid method, and obtaining the number of the target points and the specific positions of the plurality of target points according to the region center coordinates, where the region center coordinate is computed as:
u = ( Σ_{v∈Ω} f(v)·v ) / ( Σ_{v∈Ω} f(v) )
where f(v) is the gray value of the pixel at coordinate v, Ω is the set of pixels in the target region, and u is the region center coordinate.
Specifically, at inference time the anchor coordinates and the corresponding direction-finding lines are passed through the sample generation algorithm and the semantic segmentation network model to obtain a spatial map of the region. This map only indicates the presence or absence of target points and cannot distinguish individual targets, so the post-processing module P(·) is needed to obtain the number and position information of the target points. The method and device thus address the problem that, as the number of reference points and target points grows, the pairwise intersections of direction-finding lines in the positioning space generate a large number of false intersection points. A high-performance multi-target positioning algorithm of this kind can extract accurate information in fields such as wireless sensor networks and indoor positioning, helping to seize the initiative.
In the embodiment of the application, the gray-scale centroid method is used to post-process the target regions present in the semantic graph, yielding the number of target points and their corresponding position information. The gray-scale centroid method treats the gray value of each pixel in a connected region as the "mass" of that point and computes the region center coordinate as
u = ( Σ_{v∈Ω} f(v)·v ) / ( Σ_{v∈Ω} f(v) )
where f(v) is the gray value of the pixel at coordinate v, Ω is the set of pixels in the target region, and u is the region center coordinate.
Example two
Based on the same inventive concept as the direction-finding system multi-target positioning method based on the neural network model in the foregoing embodiment, the present invention further provides a direction-finding system multi-target positioning device based on the neural network model, as shown in fig. 2, the device includes:
the first acquisition unit 11 is used for acquiring coordinate information of a plurality of target points and the observation platform;
a first obtaining unit 12, where the first obtaining unit 12 is configured to render a geometric space between the coordinate information of the observation platform and the target points into a sample image space through a sample generation algorithm, so as to obtain a training data set;
A second obtaining unit 13, where the second obtaining unit 13 is configured to model the training data set according to a semantic segmentation network model, and obtain a prediction result of the semantic segmentation network model;
a third obtaining unit 14, wherein the third obtaining unit 14 is configured to obtain the number of the target points and the specific positions of the target points according to the prediction result.
Preferably, the apparatus further comprises:
a fourth obtaining unit, configured to obtain, in a two-dimensional positioning space, a position of the observation platform by using a rightward horizontal direction as an x-axis positive direction and using an upward direction perpendicular to the horizontal direction as a y-axis positive direction in the two-dimensional space coordinate system;
a fifth obtaining unit, configured to obtain direction-finding angle information of all target points in the interval.
Preferably, the apparatus further comprises:
a sixth obtaining unit, configured to obtain a training data set Z from the sample generation algorithm functions G_x(·) and G_y(·):
Z = G_x(x_1, β_11, …, β_1M, …, x_N, β_NM, …, β_N1)
where G_x(·) is the generation algorithm of the sample image space and G_y(·) is the generation algorithm of the sample label.
Preferably, the apparatus further comprises:
a first prediction unit, configured such that, with F(·; ·) being the semantic segmentation network model function, the prediction result of the training data set Z in the semantic segmentation network model can be represented as:
Ŷ = F(Z; Θ)
where Ŷ is the result of the neural network prediction and Θ is the weight parameter of the semantic segmentation model.
Preferably, the apparatus further comprises:
a seventh obtaining unit configured to obtain a region center coordinate by the grayscale centroid method;
an eighth obtaining unit, configured to obtain the number of the target points and the specific positions of the plurality of target points according to the region center coordinates, where the region center coordinate is computed as:
u = ( Σ_{v∈Ω} f(v)·v ) / ( Σ_{v∈Ω} f(v) )
where f(v) is the gray value of the pixel at coordinate v, Ω is the set of pixels in the target region, and u is the region center coordinate.
Preferably, the apparatus further comprises:
a ninth obtaining unit, configured to obtain a training data set Z_A through a multi-channel input mode:
Z_A = [G_x(x_1, β_11, …, β_1M), …, G_x(x_N, β_N1, …, β_NM)], Z_A ∈ R^{N×H×W};
a tenth obtaining unit, configured to obtain a training data set Z_B through a single-channel input mode:
Z_B = α_B Σ_{n=1}^{N} Σ_{m=1}^{M} G_x(x_n, β_nm), Z_B ∈ R^{H×W},
where α_B is a normalization coefficient and G_x(x_n, β_nm) is the sample image space generated by reference point x_n and direction-angle information β_nm.
Preferably, the apparatus further comprises:
an eleventh obtaining unit, configured to obtain sample labels Y_C by taking a circle as the output mode:
Y_C(v) = 1 if ‖v - t_m‖ ≤ Δ for some target point t_m, and Y_C(v) = 0 otherwise;
a twelfth obtaining unit, configured to obtain sample labels Y_D by taking a rectangle as the output mode:
Y_D(v) = 1 if ‖v - t_m‖_∞ ≤ Δ for some target point t_m, and Y_D(v) = 0 otherwise,
where Δ is a default hyper-parameter, t_m is a target point, and v is a coordinate point.
Various changes and specific examples of the direction-finding system multi-target positioning method based on the neural network model in the first embodiment of fig. 1 are also applicable to the direction-finding system multi-target positioning device based on the neural network model in the present embodiment, and through the foregoing detailed description of the direction-finding system multi-target positioning method based on the neural network model, those skilled in the art can clearly know the implementation method of the direction-finding system multi-target positioning device based on the neural network model in the present embodiment, so for the brevity of the description, detailed description is not repeated here.
EXAMPLE III
Based on the same inventive concept as the direction-finding system multi-target positioning method based on the neural network model in the previous embodiment, the invention also provides a direction-finding system multi-target positioning device based on the neural network model, wherein a computer program is stored on the direction-finding system multi-target positioning device, and when the computer program is executed by a processor, the steps of any one of the methods of the direction-finding system multi-target positioning method based on the neural network model are realized.
Fig. 3 shows a bus architecture, represented by bus 300. Bus 300 may include any number of interconnected buses and bridges, linking together various circuits including one or more processors, represented by processor 302, and memory, represented by memory 304. The bus 300 may also link together various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. A bus interface 306 provides an interface between the bus 300 and the receiver 301 and transmitter 303. The receiver 301 and the transmitter 303 may be the same element, i.e., a transceiver, providing a means for communicating with various other apparatus over a transmission medium.
The processor 302 is responsible for managing the bus 300 and general processing, and the memory 304 may be used for storing data used by the processor 302 in performing operations.
Example four
Based on the same inventive concept as the multi-target positioning method of the direction-finding system based on the neural network model in the foregoing embodiments, the present invention further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the following steps:
Collecting coordinate information of a plurality of target points and the observation platform; rendering a geometric space between the coordinate information of the observation platform and the target points into a sample image space through a sample generation algorithm to obtain a training data set; modeling the training data set according to a semantic segmentation network model to obtain a prediction result of the semantic segmentation network model; obtaining the number of the target points and the specific positions of the target points according to the prediction result
In a specific implementation, when the program is executed by a processor, any method step in the first embodiment may be further implemented.
One or more technical solutions in the embodiments of the present application have at least one or more of the following technical effects:
the embodiment of the invention provides a multi-target positioning method and a multi-target positioning device of a direction-finding system based on a neural network model, wherein the method comprises the following steps: collecting coordinate information of a plurality of target points and the observation platform; rendering a geometric space between the coordinate information of the observation platform and the target points into a sample image space through a sample generation algorithm to obtain a training data set; modeling the training data set according to a semantic segmentation network model to obtain a prediction result of the semantic segmentation network model; and acquiring the number of the target points and the specific positions of the target points according to the prediction result. The technical problems that a plurality of target points often appear simultaneously in a positioning space and the number of the target points and specific position information of the target points cannot be efficiently determined in the prior art are solved. The technical effects of effectively eliminating false target points and remarkably improving the multi-target positioning performance are achieved.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A multi-target positioning method of a direction-finding system based on a neural network model, wherein the method comprises the following steps:
collecting coordinate information of a plurality of target points and the observation platform;
rendering a geometric space between the coordinate information of the observation platform and the target points into a sample image space through a sample generation algorithm to obtain a training data set;
modeling the training data set according to a semantic segmentation network model to obtain a prediction result of the semantic segmentation network model;
and acquiring the number of the target points and the specific positions of the target points according to the prediction result.
2. The method of claim 1, wherein the acquiring coordinate information of the plurality of target points and the observation platform comprises:
in a two-dimensional positioning space, the two-dimensional space coordinate system takes the rightward horizontal direction as the positive direction of an x axis and takes the upward direction vertical to the horizontal direction as the positive direction of a y axis to obtain the position of the observation platform;
and obtaining direction-finding angle information of all target points in the interval.
3. The method of claim 1, wherein said rendering, by a sample generation algorithm, a geometric space between coordinate information of the observation platform and the plurality of target points into a sample image space, obtaining a training data set, comprises:
the sample generation algorithm functions are G_x(·) and G_y(·), and the training data set Z is obtained as:
Z = G_x(x_1, β_11, …, β_1M, …, x_N, β_NM, …, β_N1),
wherein the generated sample label is Y = G_y(t_1, …, t_M); G_x(·) is the generating function of the sample image space; and G_y(·) is the generating function of the sample label.
4. The method of claim 3, wherein the modeling the training data set according to a semantic segmentation network model to obtain a prediction result of the semantic segmentation network model comprises:
wherein, with F(·; ·) being the semantic segmentation network model function, the prediction result of the training data set Z in the semantic segmentation network model can be represented as:
Ŷ = F(Z; Θ)
wherein Ŷ is the result of the neural network prediction and Θ is the weight parameter of the semantic segmentation model.
5. The method of claim 4, wherein the obtaining the number of the plurality of target points and the specific locations of the plurality of target points according to the prediction comprises:
obtaining region center coordinates through the gray-scale centroid method;
obtaining the number of the target points and the specific positions of the plurality of target points according to the region center coordinates, wherein the region center coordinate is computed as:
u = ( Σ_{v∈Ω} f(v)·v ) / ( Σ_{v∈Ω} f(v) ),
wherein f(v) is the gray value of the pixel at coordinate v, Ω is the set of pixels in the target region, and u is the region center coordinate.
6. The method of claim 3, wherein the sample generation algorithm G_x(·) comprises:
obtaining a training data set Z_A through a multi-channel input mode:
Z_A = [G_x(x_1, β_11, …, β_1M), …, G_x(x_N, β_N1, …, β_NM)], Z_A ∈ R^{N×H×W};
obtaining a training data set Z_B through a single-channel input mode:
Z_B = α_B Σ_{n=1}^{N} Σ_{m=1}^{M} G_x(x_n, β_nm), Z_B ∈ R^{H×W},
wherein α_B is a normalization coefficient and G_x(x_n, β_nm) is the sample image space generated by reference point x_n and direction-angle information β_nm.
7. The method of claim 3, wherein the sample generation function Gy(·) comprises:
obtaining a label set YC in a circular output mode, in which a coordinate point v is labeled as a target pixel when it lies within a distance Δ of a target point tm;
and obtaining a label set YD in a rectangular output mode, in which a coordinate point v is labeled as a target pixel when it lies within an axis-aligned square of half-width Δ centered on a target point tm;
wherein Δ is a preset hyper-parameter, tm is a target point, and v is a coordinate point.
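The circular and rectangular label modes of Gy(·) can be sketched as indicator masks drawn around each target point; the grid size and the value of Δ below are illustrative assumptions.

```python
import numpy as np

def circle_label(targets, grid=128, delta=3.0):
    """Y_C: mark pixels within (assumed Euclidean) distance delta of a target point."""
    rows, cols = np.mgrid[0:grid, 0:grid]
    label = np.zeros((grid, grid), dtype=np.float32)
    for r, c in targets:
        label[(rows - r) ** 2 + (cols - c) ** 2 <= delta ** 2] = 1.0
    return label

def rectangle_label(targets, grid=128, delta=3.0):
    """Y_D: mark pixels inside an axis-aligned square of half-width delta."""
    rows, cols = np.mgrid[0:grid, 0:grid]
    label = np.zeros((grid, grid), dtype=np.float32)
    for r, c in targets:
        label[(np.abs(rows - r) <= delta) & (np.abs(cols - c) <= delta)] = 1.0
    return label

targets_px = [(32, 32), (90, 70)]            # target points t_m in pixel coordinates
print(circle_label(targets_px).sum(), rectangle_label(targets_px).sum())
```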
8. A multi-target positioning device of a direction-finding system based on a neural network model, characterized by comprising:
a first acquisition unit, configured to acquire coordinate information of a plurality of target points and the observation platform;
a first obtaining unit, configured to render the geometric space defined by the coordinate information of the observation platform and the plurality of target points into a sample image space through a sample generation algorithm to obtain a training data set;
a second obtaining unit, configured to model the training data set according to a semantic segmentation network model to obtain a prediction result of the semantic segmentation network model;
and a third obtaining unit, configured to obtain the number of the plurality of target points and the specific positions of the plurality of target points according to the prediction result.
9. A multi-target positioning device of a direction-finding system based on a neural network model, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the following steps:
collecting coordinate information of a plurality of target points and of the observation platform;
rendering the geometric space defined by the coordinate information of the observation platform and the plurality of target points into a sample image space through a sample generation algorithm to obtain a training data set;
modeling the training data set according to a semantic segmentation network model to obtain a prediction result of the semantic segmentation network model;
and acquiring the number of the plurality of target points and the specific positions of the plurality of target points according to the prediction result.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the following steps:
collecting coordinate information of a plurality of target points and of the observation platform;
rendering the geometric space defined by the coordinate information of the observation platform and the plurality of target points into a sample image space through a sample generation algorithm to obtain a training data set;
modeling the training data set according to a semantic segmentation network model to obtain a prediction result of the semantic segmentation network model;
and acquiring the number of the plurality of target points and the specific positions of the plurality of target points according to the prediction result.
CN202010502016.8A 2020-06-04 2020-06-04 Multi-target positioning method and device of direction-finding system based on neural network model Active CN111860827B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010502016.8A CN111860827B (en) 2020-06-04 2020-06-04 Multi-target positioning method and device of direction-finding system based on neural network model


Publications (2)

Publication Number Publication Date
CN111860827A true CN111860827A (en) 2020-10-30
CN111860827B CN111860827B (en) 2023-04-07

Family

ID=72985527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010502016.8A Active CN111860827B (en) 2020-06-04 2020-06-04 Multi-target positioning method and device of direction-finding system based on neural network model

Country Status (1)

Country Link
CN (1) CN111860827B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363210A * 2018-04-10 2019-10-22 Tencent Technology (Shenzhen) Co., Ltd. Training method and server for an image semantic segmentation model
CN109711413A * 2018-12-30 2019-05-03 Shaanxi Normal University Image semantic segmentation method based on deep learning
CN110335312A * 2019-06-17 2019-10-15 Wuhan University Neural-network-based object spatial localization method and device
CN110807496A * 2019-11-12 2020-02-18 智慧视通(杭州)科技发展有限公司 Dense target detection method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG, Zhiheng et al.: "Target detection and positioning of a sorting system based on semantic segmentation" (基于语义分割的分拣系统目标检测与定位), Metrology & Measurement Technique (《计量与测试技术》) *

Also Published As

Publication number Publication date
CN111860827B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
US11878433B2 (en) Method for detecting grasping position of robot in grasping object
CN110246181B (en) Anchor point-based attitude estimation model training method, attitude estimation method and system
CN109816769A (en) Scene based on depth camera ground drawing generating method, device and equipment
CN111862126A (en) Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm
CN108734058B (en) Obstacle type identification method, device, equipment and storage medium
CN108229591A (en) Neural network adaptive training method and apparatus, equipment, program and storage medium
CN113822284B (en) RGBD image semantic segmentation method based on boundary attention
CN111707275B (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
CN108875903B (en) Image detection method, device, system and computer storage medium
CN112801047B (en) Defect detection method and device, electronic equipment and readable storage medium
CN110807379A (en) Semantic recognition method and device and computer storage medium
CN115100741A (en) Point cloud pedestrian distance risk detection method, system, equipment and medium
CN115018999A (en) Multi-robot-cooperation dense point cloud map construction method and device
CN109766896B (en) Similarity measurement method, device, equipment and storage medium
Seo et al. An efficient detection of vanishing points using inverted coordinates image space
CN111860827B (en) Multi-target positioning method and device of direction-finding system based on neural network model
CN111833395B (en) Direction-finding system single target positioning method and device based on neural network model
CN116704029A (en) Dense object semantic map construction method and device, storage medium and electronic equipment
CN114627365B (en) Scene re-recognition method and device, electronic equipment and storage medium
CN114266879A (en) Three-dimensional data enhancement method, model training detection method, three-dimensional data enhancement equipment and automatic driving vehicle
CN111765892B (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
CN115346041A (en) Point position marking method, device and equipment based on deep learning and storage medium
CN111833397B (en) Data conversion method and device for orientation-finding target positioning
CN111815658B (en) Image recognition method and device
US10776973B2 (en) Vanishing point computation for single vanishing point images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant