CN112330536B - Sensor data processing method and device, electronic equipment and storage medium - Google Patents

Sensor data processing method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN112330536B
CN112330536B CN202110000711.9A CN202110000711A CN112330536B CN 112330536 B CN112330536 B CN 112330536B CN 202110000711 A CN202110000711 A CN 202110000711A CN 112330536 B CN112330536 B CN 112330536B
Authority
CN
China
Prior art keywords
target
field
sensor
ray
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110000711.9A
Other languages
Chinese (zh)
Other versions
CN112330536A (en
Inventor
王宇翔
苏永恒
吴功友
佟雨
郭云肖
张雪萍
姜文俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Aerospace Hongtu Information Technology Co Ltd
Aerospace Hongtu Information Technology Co Ltd
Original Assignee
Aerospace Hongtu Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aerospace Hongtu Information Technology Co Ltd filed Critical Aerospace Hongtu Information Technology Co Ltd
Priority to CN202110000711.9A priority Critical patent/CN112330536B/en
Publication of CN112330536A publication Critical patent/CN112330536A/en
Application granted granted Critical
Publication of CN112330536B publication Critical patent/CN112330536B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/08Projecting images onto non-planar surfaces, e.g. geodetic screens
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/002Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Mathematics (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Position Input By Displaying (AREA)

Abstract

The application provides a sensor data processing method, a sensor data processing device, an electronic device and a storage medium, wherein the method comprises the following steps: acquiring a field-of-view ray set of the target sensor, wherein the field-of-view ray set is used for representing a three-dimensional field-of-view range of the target sensor pointing to a target shelter; determining a composite polygon in a target two-dimensional plane according to the field ray set, wherein the composite polygon is a polygon obtained by projecting a field ray of a target sensor and a target blocking object to the target two-dimensional plane; determining the intersection and difference set of the composite polygon; and performing coordinate conversion on the two-dimensional coordinate points in the intersection and the difference set to obtain three-dimensional coordinate points corresponding to the two-dimensional coordinate points in the target three-dimensional coordinate system. The method converts the analytic problem in the three-dimensional space into the geometric relation problem of the graph on the two-dimensional plane, simplifies the complexity of the processing process, reduces the operation amount and improves the data processing efficiency.

Description

Sensor data processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and an apparatus for processing sensor data, an electronic device, and a storage medium.
Background
A sensor, which is a kind of detection device, can sense information of an object to be measured, and can convert the sensed information into an electric signal or other forms of sensor data according to a certain rule and output the electric signal or other forms of sensor data. In the process of space measurement and control, field data pointing to the earth are acquired through sensors attached to objects such as satellites and stations, and data processing is needed in the process of acquiring the field data.
In the related art, data processing operations such as curved surface operation are performed on the process of collecting field data based on a sensor analysis and calculation system. This not only makes the processing procedure very complicated, but also the data amount of the processing procedure is very large, which is very unfavorable for the efficient processing of data. It can be seen that the current sensor data processing method has the problem of low data processing efficiency.
Disclosure of Invention
An embodiment of the present application aims to provide a method and an apparatus for processing sensor data, an electronic device, and a storage medium, and aims to solve the problem that the current processing method of sensor data is low in data processing efficiency.
In a first aspect, an embodiment of the present application provides a method for processing sensor data, including:
acquiring a field-of-view ray set of the target sensor, wherein the field-of-view ray set is used for representing a three-dimensional field-of-view range of the target sensor pointing to a target shelter;
determining a composite polygon in a target two-dimensional plane according to the field ray set, wherein the composite polygon is a polygon obtained by projecting a field ray of a target sensor and a target blocking object to the target two-dimensional plane;
determining the intersection and difference set of the composite polygon;
and performing coordinate conversion on the two-dimensional coordinate points in the intersection and the difference set to obtain three-dimensional coordinate points corresponding to the two-dimensional coordinate points in the target three-dimensional coordinate system.
In the implementation process, the earth (namely the target shelter) and the sensor rays are projected to the selected plane through the field ray set to obtain the composite polygon, so that the problem of analysis in a three-dimensional space is converted into the problem of the geometric relationship of the graphs on a two-dimensional plane, the complexity of the processing process is simplified, the operation amount is reduced, and the data processing efficiency is improved. And then, calculating an intersection and a difference set according to the projected composite polygon, thereby acquiring the two-dimensional field data of the part shielded by the earth and the two-dimensional field data of the part not shielded by the earth. And finally, converting the two-dimensional field data back to a three-dimensional coordinate system to obtain the three-dimensional field data of the sensor pointing to the earth.
Further, before acquiring the field-of-view ray set of the target sensor, the method further includes:
acquiring a pointing matrix of the target sensor, wherein the pointing matrix is used for representing the posture of the target sensor in a target coordinate system;
multiplying the pointing matrix by a target rotation matrix to obtain an attitude matrix of the target sensor pointing to the target shielding object, wherein the target rotation matrix is a rotation change matrix when the attitude of the target sensor is adjusted to point to the target shielding object;
and multiplying the attitude matrix and the target movement matrix to obtain a field ray set, wherein the target movement matrix is a movement change matrix when the position of the target sensor is adjusted to the target position.
In the implementation process, the final state that the target sensor points to the earth (namely the target shelter) is determined through the pointing matrix of the target sensor and the change matrix in the process of adjusting the posture and the offset of the target sensor, the earth is ensured in the field range of the target sensor, and therefore the accuracy of the field data is ensured.
Further, before acquiring the orientation matrix of the target sensor, the method further includes:
acquiring the spatial position of an attachment object of a target sensor under a target coordinate system and the pointing parameters of the target sensor;
calculating the ray center of the field ray of the target sensor according to the space position and the pointing parameters;
and performing matrixing on the ray center to obtain a pointing matrix of the target sensor in a target coordinate system.
In the implementation process, the space position of the target sensor attached object is taken as a reference point, and the initial posture and the initial offset of the target sensor in the target coordinate system are determined through the pointing parameters corresponding to the field ray of the target sensor. It is understood that the offset amount of the target sensor at this time is 0.
Further, before calculating the ray center of the field ray of the target sensor according to the pointing parameters, the method further includes:
carrying out rationality verification on the pointing parameters;
and if the pointing parameters meet the preset rationality condition, calculating the ray center of the field ray of the target sensor according to the pointing parameters.
In the implementation process, the pointing parameters are subjected to rationality verification to ensure that correct results can be calculated by using correct parameters when the sensor data are calculated, so that the accuracy of the results is ensured.
Further, determining a composite polygon of the target two-dimensional plane according to the field-of-view ray set, including:
determining a target two-dimensional plane, and constructing a shielding model between a field ray of a target sensor and a target shielding object based on the target two-dimensional plane;
based on the occlusion model, determining a first projection set of a field ray set corresponding to a target two-dimensional plane and determining a second projection set of a target occlusion object, wherein the first projection set is used for representing a two-dimensional field range of the field ray projection of the target sensor in the target two-dimensional plane, and the second projection set is used for representing a two-dimensional projection range of the target occlusion object projected in the two-dimensional plane;
and combining the first projection set and the second projection set into a composite polygon.
In the implementation process, a shielding model of the earth (namely the target shielding object) and the field of view rays of the target sensor is constructed, a reasonable projection plane is selected through modeling, and the model of the earth in the space and the field of view rays are projected to the selected plane, so that the resolution problem in the three-dimensional space is converted into the problem of the graphic geometric relationship on the two-dimensional plane, the complexity of the data processing process is simplified, and the calculated amount is reduced.
Further, determining, based on the occlusion model, a first set of projections of the set of field of view rays corresponding to the target two-dimensional plane and a second set of projections of the target occlusion, comprising:
under the condition that a target shelter does not shelter a field ray of a target sensor, projecting a field ray set to a target two-dimensional plane to obtain a first projection set;
and based on the shielding model, projecting the target shielding object by taking the target sensor as a projection center and the target two-dimensional plane as a projection plane to obtain a second projection set of the target shielding object.
In the implementation process, the positions of the field ray and the target shielding object in the plane are obtained in a projection mode, and the three-dimensional space problem is converted into the two-dimensional plane problem, so that the subsequent data processing process can be rapidly processed.
Further, determining intersection and difference sets of the composite polygons comprises:
determining projection boundary intersection points of the first projection set and the second projection set corresponding to the target two-dimensional plane;
tracking edges based on the projection boundary intersection points, and constructing a first boundary of an intersection and a second boundary of a difference set;
based on the directionality of the first boundary and the directionality of the second boundary, an intersection and a difference are determined.
In the implementation process, the boundary of the two sets is determined, a plurality of intermediate polygons of each boundary combination are formed based on the boundary, and whether the intermediate polygons are an intersection or a difference is determined according to the directionality of the boundary, so that the difficulty of the computer programming for implementing the intersection and difference determination process is reduced.
In a second aspect, an embodiment of the present application provides a sensor data processing apparatus, including:
the acquisition module is used for acquiring a field ray set of the target sensor, and the field ray set is used for representing a three-dimensional field range of the target sensor pointing to the target shelter;
the first determining module is used for determining a composite polygon in a target two-dimensional plane according to the field ray set, wherein the composite polygon is a polygon obtained by projecting a field ray of the target sensor and a target shelter to the target two-dimensional plane;
the second determining module is used for determining the intersection and the difference set of the composite polygon;
and the conversion module is used for carrying out coordinate conversion on the two-dimensional coordinate points in the intersection and the difference set to obtain the three-dimensional coordinate points corresponding to the two-dimensional coordinate points in the target three-dimensional coordinate system.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory is used to store a computer program, and the processor runs the computer program to make the electronic device execute the processing method of sensor data as in the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, which is characterized by storing a computer program, and when the computer program is executed by a processor, the method for processing sensor data in the first aspect is implemented.
It is understood that the beneficial effects of the above and the third to fourth aspects can be seen from the above description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a schematic flow chart illustrating a method for processing sensor data according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart illustrating a method for processing sensor data according to another embodiment of the present disclosure;
FIG. 3 is a schematic flow chart illustrating a method for processing sensor data according to yet another embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a sensor data processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
As described in the related art, currently, based on a sensor analysis and calculation system, the process of performing data processing such as curved surface operation on the process of acquiring field data is complex, the data amount is large, the efficient processing of data is not very facilitated, and the problem of low data processing efficiency exists.
In view of the above problems in the prior art, the present application provides a method for processing sensor data, which projects the earth (i.e., a target obstruction) and sensor rays onto a selected plane through a field ray set to obtain a composite polygon, thereby converting the resolution problem in a three-dimensional space into the problem of the geometric relationship of the graphics on a two-dimensional plane, simplifying the complexity of the processing process, reducing the computation workload, and improving the data processing efficiency. And then, calculating an intersection and a difference set according to the projected composite polygon, thereby acquiring the two-dimensional field data of the part shielded by the earth and the two-dimensional field data of the part not shielded by the earth. And finally, converting the two-dimensional field data back to a three-dimensional coordinate system to obtain the three-dimensional field data of the sensor pointing to the earth.
Referring to fig. 1, fig. 1 shows a flowchart of an implementation of a method for processing sensor data according to an embodiment of the present application. The sensor data processing methods described below in the embodiments of the present application may be applied to electronic devices including, but not limited to, computing devices communicatively coupled to sensors on a satellite, a station, a missile, a rocket, a space shuttle, or the like. The sensor data processing method of the embodiment of the application includes steps S101 to S104, which are detailed as follows:
step S101, a field-of-view ray set of the target sensor is obtained, and the field-of-view ray set is used for representing a three-dimensional field range of the target sensor pointing to the target shelter.
In the present embodiment, the target sensor (or sensor hereinafter) is a sensor applied in the field of aerospace measurement and control, and includes, but is not limited to, a simple cone sensor, a rectangular cone sensor, a half-power sensor, a composite cone sensor, and a SAR (Synthetic Aperture Radar) sensor. The target shelter (or shelter hereinafter) is a measured object in the aerospace measurement and control process, such as the earth, other planets or satellites. The field of view ray set performs a three-dimensional view of the target obstruction for the target sensor, i.e., a detection range of the target sensor in a direction pointing to the target obstruction.
By way of example and not limitation, the field of view ray set of a simple cone sensor is a set of points in the field of view whose shape is a cone, the field of view ray set of a rectangular cone sensor is a set of points in the field of view whose shape is a rectangular cone, the field of view ray set of a half-power sensor is a set of points in the field of view whose shape is a cone, and the field of view ray set of a complex cone sensor is a set of points in the field of view whose shape is a hollow cone.
And S102, determining a composite polygon in the target two-dimensional plane according to the field ray set, wherein the composite polygon is a polygon obtained by projecting the field ray of the target sensor and the target shelter onto the target two-dimensional plane.
In this embodiment, the target two-dimensional plane may be any two-dimensional plane behind the target obstruction perpendicular to the rays of the field of view of the target sensor. The earth (namely the target shelter) and the sensor rays are projected to a selected plane through a field ray set to obtain a composite polygon, so that the problem of analysis in a three-dimensional space is converted into the problem of geometric relationship of graphs on a two-dimensional plane, the complexity of a processing process is simplified, the operation amount is reduced, and the data processing efficiency is improved.
In one possible implementation, determining a composite polygon of a two-dimensional plane of the object from the set of field-of-view rays includes: determining a target two-dimensional plane, and constructing a shielding model between a field ray of a target sensor and a target shielding object based on the target two-dimensional plane; based on the occlusion model, determining a first projection set of a field ray set corresponding to a target two-dimensional plane and determining a second projection set of a target occlusion object, wherein the first projection set is used for representing a two-dimensional field range of the field ray projection of the target sensor in the target two-dimensional plane, and the second projection set is used for representing a two-dimensional projection range of the target occlusion object projected in the two-dimensional plane; and combining the first projection set and the second projection set into a composite polygon.
In the implementation process, a projection plane is selected, a shielding model of the earth and the field ray of the sensor is constructed, and then a composite polygon formed by the field ray and a point set of the earth in the projection plane is respectively solved. Specifically, under the condition that a target shelter does not shelter field rays of a target sensor, a field ray set is projected onto a target two-dimensional plane to obtain a first projection set; and based on the shielding model, projecting the target shielding object by taking the target sensor as a projection center and the target two-dimensional plane as a projection plane to obtain a second projection set of the target shielding object. Therefore, the positions of the field ray and the target shielding object in the plane are obtained in a projection mode, so that the three-dimensional space problem is converted into the two-dimensional plane problem, and the subsequent data processing process is convenient to rapidly process.
For the sake of understanding and explanation, the projection plane is exemplarily thought of as a projection screen, the projector is disposed opposite to the screen (i.e. the position of the sensor in space), the light beam of the projector (i.e. the ray of the sensor field of view) projects a complete circle on the screen, which is denoted as PolygonA, the projection of the exemplary simple cone sensor is a circle, and the projection of the rectangular cone sensor is a rectangle. Then a basketball (the basketball is regarded as the earth) is held to slowly move to the area illuminated by the projector light beam, the original complete circle or rectangle changes in shape due to the shielding of the earth, the basketball has a projection area on the curtain, the projection area is marked as PolygonN B, and the combination of PolygonN and PolygonN B is a composite polygon.
In the embodiment, a shielding model of the earth (namely, a target shielding object) and field rays of a target sensor is constructed, a reasonable projection plane is selected through modeling, and the model of the earth in the space and the field rays are projected to the selected plane, so that the resolution problem in the three-dimensional space is converted into the problem of the graphic geometric relationship on the two-dimensional plane, the complexity of the data processing process is simplified, and the calculated amount is reduced.
Step S103, determining the intersection and the difference of the composite polygon.
In this embodiment, the intersection is a portion where the two sets are the same, that is, a common region of the first projection set and the second projection set; the difference set is a portion where the two sets are not identical, i.e., a non-common region of the first projection set and the second projection set. The intersection is used for representing the position of the field ray irradiation shielding object part in a target two-dimensional plane, namely, the two-dimensional field data of the shielding object currently detected by the sensor; the difference set is used for representing the positions of the ray part of the field of view not irradiating on the obstruction and the obstruction part of the field of view not irradiating on the target two-dimensional plane, namely, the two-dimensional field of view data of the sensor which does not detect the obstruction currently.
In one possible implementation, determining intersections and differences of the composite polygons includes: determining projection boundary intersection points of the first projection set and the second projection set corresponding to the target two-dimensional plane; tracking edges based on the projection boundary intersection points, and constructing a first boundary of an intersection and a second boundary of a difference set; based on the directionality of the first boundary and the directionality of the second boundary, an intersection and a difference are determined.
In the implementation process, the intersection points of the first projection set Polygona and the second projection set Polygonab are calculated, complex edges (edges with intersection points of two polygons) and simple edges (edges without intersection points of two polygons) are decomposed, the parity and topology types of the edges of the Polygona and the Polygonab are determined at the same time, the edges are tracked, finally, intermediate polygons forming intersection Polygona ^ Polygonab and difference Polygona-Polygonab are output, the edges of each intermediate polygon are constructed in sequence, the directionality of each intermediate polygon is determined, when the intermediate polygons are the intermediate polygons or the external polygons, the relation between the holes and the external polygons is judged, and then intersection Polygona ^ Polygonab and difference Polygona-Polygona b are determined.
The realization process reduces the difficulty of the computer programming realization intersection and difference set determination process by determining the boundaries of the two sets, forming a plurality of intermediate polygons of each boundary combination based on the boundaries, and determining whether the intermediate polygons are an intersection or a difference set according to the directionality of the boundaries.
And step S104, performing coordinate conversion on the two-dimensional coordinate points in the intersection and the difference set to obtain a three-dimensional coordinate point corresponding to the two-dimensional coordinate point in the target three-dimensional coordinate system.
In this embodiment, the target three-dimensional coordinate system may be a J2000 coordinate system, i.e., a J2000 equatorial coordinate system, the origin of which is also at the centroid of the earth, the xy plane is the equatorial plane of the earth at time J2000, and the x-axis points to the vernalization point at time J2000 (an intersection of the equatorial plane and the equatorial plane at time J2000), which is often referred to as the inertial coordinate system of the earth satellite. And converting the two-dimensional field data into a three-dimensional coordinate system to obtain three-dimensional field data of the sensor pointing to the earth. Specifically, two-dimensional coordinates (x 1, x 2) in the two-dimensional field data are converted into three-dimensional coordinates (x 2, y2, z 2), and then the three-dimensional coordinates (x 2, y2, z 2) are multiplied by an inverse matrix of the direction matrix to obtain three-dimensional coordinates (x 3, y3, z 3) of a two-dimensional coordinate point corresponding to a J2000 coordinate system. It should be noted that, when converting two-dimensional coordinates (x 1, y 1) in the two-dimensional field data into three-dimensional coordinates (x 2, y2, z 2), it is necessary to determine a value of z2 in the three-dimensional coordinates based on a distance relationship between the sensor and the two-dimensional plane of the earth and the target, and the value can be calculated based on the cosine theorem.
On the basis of the embodiment of fig. 1, fig. 2 shows a schematic flow chart of a processing method of sensor data provided by another embodiment of the present application. It should be understood that the same steps as in the embodiment of fig. 1 are not described again here. As shown in fig. 2, the sensor data processing method of the present embodiment further includes, before step S101, S201 to S203, which are detailed as follows:
s201, acquiring a pointing matrix of the target sensor, wherein the pointing matrix is used for representing the posture of the target sensor in a target coordinate system.
S202, multiplying the pointing matrix by a target rotation matrix to obtain an attitude matrix of the target sensor pointing to the target shielding object, wherein the target rotation matrix is a rotation change matrix when the attitude of the target sensor is adjusted to point to the target shielding object;
and S203, multiplying the attitude matrix and the target movement matrix to obtain a field ray set, wherein the target movement matrix is a movement change matrix when the position of the target sensor is adjusted to the target position.
In the above steps S201 to S203, a method of calculating a simple cone sensor with a half cone angle of 45 degrees is described as an example: the positive direction of the Z axis of a J2000 coordinate system is taken as an initial Ray (namely a directional matrix) which is recorded as Ray, the starting point of the Ray is at the origin of coordinates, and the length of the initial Ray is 50000 kilometers. And clockwise rotating the Ray by 45 degrees around the Y axis to obtain a new Ray, recording the new Ray as Ray1, and sequentially rotating the Ray1 by Degree degrees around the Z axis (Degree is equal to 2 degrees multiplied by Delta, Delta is 0, and Delta is 2 … … 179) to obtain a group of Ray sets with the origin of coordinates as a starting point and the length of 50000 kilometers, and recording the Ray sets as RaySet (namely a rotation matrix). Multiplying the rotation matrix RaySet by the pointing matrix Ray to obtain a rotated sensor Ray set which is recorded as RaySetRatated; and multiplying the RaySetRatated by a moving matrix (a position change matrix between the coordinate origin and the sensor space position) to obtain a final sensor ray set which is recorded as RaySetRatated transformed.
In the embodiment, the final state that the target sensor points to the earth (namely, the target shelter) is determined through the pointing matrix of the target sensor and the change matrix in the process of adjusting the posture and the offset of the target sensor, and the earth is ensured in the field range of the target sensor, so that the accuracy of field data is ensured.
On the basis of the embodiment of fig. 2, fig. 3 is a schematic flow chart illustrating a processing method of sensor data according to still another embodiment of the present application. It should be understood that the steps identical to those in the embodiment of fig. 1 and 2 are not repeated herein. As shown in fig. 3, the sensor data processing method of the present embodiment further includes, before step S201, S301 to S303, which are detailed as follows:
s301, acquiring the spatial position of the attachment object of the target sensor in the target coordinate system and the pointing parameters of the target sensor.
In this step, the orientation of the sensor is equal to the superposition of the in-orbit attitude (i.e. the spatial position) of the satellite and the installation orientation (i.e. the orientation parameters) of the sensor relative to the system in the satellite, and the meaning of the sensor is expressed by a matrix, namely an orientation matrix. Satellite attitude refers to the spatial pointing state in which the satellite stars travel on the orbit. The origin of the rectangular coordinate system is arranged on a star satellite body, the Z axis pointing to the ground reflects the yaw direction, the Y axis reflects the pitch direction, and the X axis reflects the rolling direction, and the attitude is kept stable by adopting the modes of three-axis stability, spin stability, gravity gradient stability and the like. The installation orientation of the sensor means an installation orientation expressed by azimuth and pitch, euler angle, yaw roll pitch, quaternion, and the like with respect to the main system of a satellite, a station, a missile, a rocket, a vehicle, an airplane, a ship, or the like.
And S302, calculating the ray center of the field ray of the target sensor according to the space position and the pointing parameters.
In this step, the in-orbit attitude of the satellite and the sensor installation orientation are more various, and by way of example and not limitation, a pointing matrix is calculated by taking the satellite in a VVLH three-axis stable attitude and the sensor installation orientation as fixed pointing parameters, wherein a satellite centroid orbit coordinate system (VVLH) takes a satellite centroid as an origin, a Z axis is collinear with a satellite misalignment and points to a geocentric, an X axis is located in a satellite orbit plane and points to a satellite velocity direction, and a Y axis and an XZ axis form a right-hand coordinate system.
Azimuth angle (center of ray) at which the sensor is fixedly pointed: if the system is based on a satellite, a vehicle, a ship and an airplane, the azimuth angle is defined as the angle which rotates clockwise by taking the positive direction of the X axis as a starting point in an XOY plane; if the system is based on the measuring station, the azimuth angle is defined in an XOY plane and rotates anticlockwise by taking the positive direction of the X axis as a starting point. The elevation angle (El) at which the sensor is fixedly pointed is defined as the angle between the plane X0Y of the reference coordinate system and the orientation of the sensor, with the measurement direction being the positive Z-axis (above the plane X0Y, the angle is positive, and vice versa negative).
And S303, performing matrixing on the ray center to obtain a pointing matrix of the target sensor in a target coordinate system.
In this step, the spatial position of the sensor attachment object and the anchor point (ray center) calculated by the pointing parameter form a direction vector, and the attitude and the offset of the sensor in the target coordinate system are obtained through matrix operation, and the attitude and the offset are usually expressed by using a 4X4 matrix. It is understood that the offset amount of the target sensor at this time is 0. In the embodiment, the space position of the target sensor attached object is taken as a reference point, and the initial posture and the initial offset of the target sensor in the target coordinate system are determined through the pointing parameters corresponding to the field ray of the target sensor.
In a possible implementation manner, before calculating the ray center of the field-of-view ray of the target sensor according to the pointing parameter, the method further includes: carrying out rationality verification on the pointing parameters; and if the pointing parameters meet the preset rationality condition, calculating the ray center of the field ray of the target sensor according to the pointing parameters.
In the implementation process, the pointing parameter is the optional range of the view angle of the sensor, and due to different attributes of various sensors, the preset reasonableness conditions are slightly different. Illustratively, the half cone angle of a simple cone sensor ranges from 0 to 90 degrees. The horizontal half angle and the vertical half angle of the rectangular cone sensor range from 0 to 90 degrees. Half-power frequency of half-power sensor is 0~1 x 106GHz and antenna diameter of 0.1-1 × 104Varying between meters. The composite cone sensor has an inner/outer half angle ranging from 0 to 90 degrees and an inner half angle smaller than the outer half angle, a maximum/minimum clock angle ranging from 0 to 360 degrees and a minimum clock angle smaller than the maximum clock angle. The SAR sensor maximum/small altitude angle variation range is 0 to 90 degrees, the minimum altitude angle is smaller than the maximum altitude angle, and the front/rear exclusion angle variation range is 0 to 90 degrees. Further, sensor constraints are also verified: the variation range of the maximum/minimum azimuth angle is 0-90 degrees, the variation range of the maximum/minimum altitude angle is 0-90 degrees, and the detection distance of the sensor is constrained to be 0-5 km.
The realization process ensures that a correct result can be calculated by using correct parameters when calculating the sensor data by carrying out rationality verification on the pointing parameters, thereby ensuring the accuracy of the result.
Optionally, if the pointing parameter does not meet the preset rationality condition, recording error information to a system problem log.
Compared with the existing sensor data processing method, the method can support the field-of-view calculation of the simple cone sensor, the rectangular cone sensor and the half-power sensor, and can also support the field-of-view calculation of the composite cone sensor and the SAR sensor, so that the sensor data processing software has a wider application range.
In order to carry out the above-described method and implement the corresponding method to achieve the corresponding functions and technical effects, a sensor data processing device is provided below. Referring to fig. 4, fig. 4 is a block diagram of a sensor data processing apparatus according to an embodiment of the present disclosure. The modules included in the apparatus in this embodiment are used to execute the steps in the embodiment corresponding to fig. 1, and refer to fig. 1 and the related description in the embodiment corresponding to fig. 1 specifically. For convenience of explanation, only the part related to the present embodiment is shown, and the sensor data processing apparatus provided in the embodiment of the present application includes:
an obtaining module 401, configured to obtain a field of view ray set of a target sensor, where the field of view ray set is used to represent a three-dimensional field of view range of the target sensor pointing to a target obstruction;
a first determining module 402, configured to determine a composite polygon in a target two-dimensional plane according to the field ray set, where the composite polygon is a polygon obtained by projecting a field ray of the target sensor and a target obstruction onto the target two-dimensional plane;
a second determining module 403, configured to determine intersections and differences of the composite polygons;
and a converting module 404, configured to perform coordinate conversion on the two-dimensional coordinate points in the intersection and the difference set to obtain a three-dimensional coordinate point corresponding to the two-dimensional coordinate point in the target three-dimensional coordinate system.
As an optional implementation manner, the processing apparatus further includes:
the second acquisition module is used for acquiring a pointing matrix of the target sensor, and the pointing matrix is used for representing the posture of the target sensor in a target coordinate system;
the first multiplying module is used for multiplying the pointing matrix and the target rotation matrix to obtain an attitude matrix of the target sensor pointing to the target shielding object, and the target rotation matrix is a rotation change matrix when the attitude of the target sensor is adjusted to point to the target shielding object;
and the second multiplying module is used for multiplying the attitude matrix and the target moving matrix to obtain a field ray set, wherein the target moving matrix is a moving change matrix when the position of the target sensor is adjusted to the target position.
As an optional implementation manner, the processing apparatus further includes:
the third acquisition module is used for acquiring the spatial position of an attachment object of the target sensor in a target coordinate system and the pointing parameters of the target sensor;
the operation module is used for calculating the ray center of the field ray of the target sensor according to the space position and the pointing parameter;
and the matrixing module is used for matrixing the ray center to obtain a pointing matrix of the target sensor in a target coordinate system.
As an optional implementation manner, the processing apparatus further includes:
the verification module is used for carrying out rationality verification on the pointing parameters;
and the execution module is used for calculating the ray center of the field ray of the target sensor according to the pointing parameter if the pointing parameter meets the preset rationality condition.
As an optional implementation manner, the first determining module 402 is specifically configured to:
determining a target two-dimensional plane, and constructing a shielding model between a field ray of a target sensor and a target shielding object based on the target two-dimensional plane;
based on the occlusion model, determining a first projection set of a field ray set corresponding to a target two-dimensional plane and determining a second projection set of a target occlusion object, wherein the first projection set is used for representing a two-dimensional field range of the field ray projection of the target sensor in the target two-dimensional plane, and the second projection set is used for representing a two-dimensional projection range of the target occlusion object projected in the two-dimensional plane;
and combining the first projection set and the second projection set into a composite polygon.
As an optional implementation manner, the first determining module 402 is further specifically configured to:
under the condition that a target shelter does not shelter a field ray of a target sensor, projecting a field ray set to a target two-dimensional plane to obtain a first projection set;
and based on the shielding model, projecting the target shielding object by taking the target sensor as a projection center and the target two-dimensional plane as a projection plane to obtain a second projection set of the target shielding object.
As an optional implementation manner, the first determining module 402 is further specifically configured to:
determining projection boundary intersection points of the first projection set and the second projection set corresponding to the target two-dimensional plane;
tracking edges based on the projection boundary intersection points, and constructing a first boundary of an intersection and a second boundary of a difference set;
based on the directionality of the first boundary and the directionality of the second boundary, an intersection and a difference are determined.
The sensor data processing device can implement the sensor data processing method of the method embodiment. The alternatives in the above-described method embodiments are also applicable to this embodiment and will not be described in detail here. The rest of the embodiments of the present application may refer to the contents of the above method embodiments, and in this embodiment, details are not described again.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 5, the electronic apparatus 5 of this embodiment includes: at least one processor 50 (only one shown in fig. 5), a memory 51, and a computer program 52 stored in the memory 51 and executable on the at least one processor 50, the steps of any of the method embodiments described above being implemented when the computer program 52 is executed by the processor 50.
The electronic device 5 may be a computing device communicatively coupled to sensors on a satellite, a survey station, a missile, a rocket, a space shuttle, or the like. The electronic device may include, but is not limited to, a processor 50, a memory 51. Those skilled in the art will appreciate that fig. 5 is merely an example of the electronic device 5, and does not constitute a limitation of the electronic device 5, and may include more or less components than those shown, or combine some of the components, or different components, such as an input-output device, a network access device, etc.
The Processor 50 may be a Central Processing Unit (CPU), and the Processor 50 may be other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 51 may in some embodiments be an internal storage unit of the electronic device 5, such as a hard disk or a memory of the electronic device 5. The memory 51 may also be an external storage device of the electronic device 5 in other embodiments, such as a plug-in hard disk provided on the electronic device 5, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and so on. Further, the memory 51 may also include both an internal storage unit and an external storage device of the electronic device 5. The memory 51 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of a computer program. The memory 51 may also be used to temporarily store data that has been output or is to be output.
In addition, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in any of the method embodiments described above.
The embodiments of the present application provide a computer program product, which when running on an electronic device, enables the electronic device to implement the steps in the above method embodiments when executed.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (9)

1. A method of processing sensor data, comprising:
acquiring a field-of-view ray set of a target sensor, wherein the field-of-view ray set is used for representing a three-dimensional field-of-view range of the target sensor pointing to a target shelter;
determining a composite polygon in a target two-dimensional plane according to the field ray set, wherein the composite polygon is a polygon obtained by projecting a field ray of the target sensor and a target obstruction to the target two-dimensional plane;
determining the intersection and difference set of the composite polygon;
performing coordinate conversion on the two-dimensional coordinate points in the intersection and the difference set to obtain a three-dimensional coordinate point of the two-dimensional coordinate point in a target three-dimensional coordinate system;
wherein, the determining a composite polygon of a target two-dimensional plane according to the field ray set comprises:
determining the target two-dimensional plane, and constructing an occlusion model between a field ray of the target sensor and the target occlusion object based on the target two-dimensional plane;
based on the occlusion model, determining a first projection set of the field of view ray set corresponding to the target two-dimensional plane, and determining a second projection set of the target occlusion, the first projection set being used for representing a two-dimensional field of view range of the field of view ray projection of the target sensor in the target two-dimensional plane, the second projection set being used for representing a two-dimensional projection range of the target occlusion projected in the two-dimensional plane;
and combining the first projection set and the second projection set into the composite polygon.
2. The method of processing sensor data according to claim 1, wherein before acquiring the set of field of view rays of the target sensor, further comprising:
acquiring a pointing matrix of a target sensor, wherein the pointing matrix is used for representing the posture of the target sensor under a target coordinate system;
multiplying the pointing matrix by a target rotation matrix to obtain an attitude matrix of the target sensor pointing to the target shielding object, wherein the target rotation matrix is a rotation change matrix when the attitude of the target sensor is adjusted to point to the target shielding object;
and multiplying the attitude matrix and a target movement matrix to obtain the view field ray set, wherein the target movement matrix is a movement change matrix when the position of the target sensor is adjusted to a target position.
3. The method of processing sensor data according to claim 2, wherein before the obtaining the orientation matrix of the target sensor, further comprising:
acquiring the spatial position of an attached object of the target sensor in a target coordinate system and the pointing parameters of the target sensor;
calculating the ray center of the field ray of the target sensor according to the space position and the pointing parameters;
and performing matrixing on the ray center to obtain a pointing matrix of the target sensor in the target coordinate system.
4. The method for processing sensor data according to claim 3, wherein before calculating the ray center of the field-of-view ray of the target sensor according to the pointing parameter, the method further comprises:
carrying out rationality verification on the pointing parameters;
and if the pointing parameters meet preset rationality conditions, executing the step of calculating the ray center of the field ray of the target sensor according to the pointing parameters.
5. The method of processing sensor data according to claim 1, wherein said determining, based on the occlusion model, that the set of rays of the field of view correspond to a first set of projections on the target two-dimensional plane and that the set of second projections of the target occlusion comprises:
under the condition that the target shielding object does not shield the field of view rays of the target sensor, projecting the field of view ray set onto the target two-dimensional plane to obtain a first projection set;
and based on the shielding model, projecting the target shielding object by taking the target sensor as a projection center and the target two-dimensional plane as a projection plane to obtain the second projection set of the target shielding object.
6. The method of processing sensor data of claim 1, wherein the determining intersections and differences of the composite polygons comprises:
determining projection boundary intersection points of the first projection set and the second projection set corresponding to the target two-dimensional plane;
tracking edges based on the projection boundary intersection points, and constructing a first boundary of the intersection and a second boundary of the difference set;
determining the intersection and the difference set based on the directionality of the first boundary and the directionality of the second boundary.
7. An apparatus for processing sensor data, comprising:
the acquisition module is used for acquiring a field ray set of a target sensor, and the field ray set is used for representing a three-dimensional field range of the target sensor pointing to a target shelter;
the first determining module is used for determining a composite polygon in a target two-dimensional plane according to the field ray set, wherein the composite polygon is a polygon obtained by projecting a field ray of the target sensor and a target obstruction to the target two-dimensional plane;
a second determining module, configured to determine an intersection and a difference of the composite polygon;
the conversion module is used for carrying out coordinate conversion on the two-dimensional coordinate points in the intersection and the difference set to obtain three-dimensional coordinate points of the two-dimensional coordinate points in a target three-dimensional coordinate system;
the first determining module is specifically configured to:
determining the target two-dimensional plane, and constructing an occlusion model between a field ray of the target sensor and the target occlusion object based on the target two-dimensional plane;
based on the occlusion model, determining a first projection set of the field of view ray set corresponding to the target two-dimensional plane, and determining a second projection set of the target occlusion, the first projection set being used for representing a two-dimensional field of view range of the field of view ray projection of the target sensor in the target two-dimensional plane, the second projection set being used for representing a two-dimensional projection range of the target occlusion projected in the two-dimensional plane;
and combining the first projection set and the second projection set into the composite polygon.
8. An electronic device, comprising a memory for storing a computer program and a processor for executing the computer program to cause the electronic device to perform the method of processing sensor data according to any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, implements a method of processing sensor data according to any one of claims 1 to 6.
CN202110000711.9A 2021-01-04 2021-01-04 Sensor data processing method and device, electronic equipment and storage medium Active CN112330536B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110000711.9A CN112330536B (en) 2021-01-04 2021-01-04 Sensor data processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110000711.9A CN112330536B (en) 2021-01-04 2021-01-04 Sensor data processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112330536A CN112330536A (en) 2021-02-05
CN112330536B true CN112330536B (en) 2021-04-09

Family

ID=74302466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110000711.9A Active CN112330536B (en) 2021-01-04 2021-01-04 Sensor data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112330536B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113345081A (en) * 2021-06-24 2021-09-03 广东三维家信息科技有限公司 Stereo model shielding elimination method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108994844A (en) * 2018-09-26 2018-12-14 广东工业大学 A kind of scaling method and device of sanding operation arm trick relationship
CN109960269A (en) * 2019-04-04 2019-07-02 武汉大学 A kind of simplification environmental modeling method for serving unmanned plane independent navigation
CN110411361A (en) * 2019-05-15 2019-11-05 首都师范大学 A kind of mobile tunnel laser detection data processing method
CN111311663A (en) * 2020-02-17 2020-06-19 清华大学深圳国际研究生院 Real-time large-scene three-dimensional semantic modeling method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10089778B2 (en) * 2015-08-07 2018-10-02 Christie Digital Systems Usa, Inc. System and method for automatic alignment and projection mapping

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108994844A (en) * 2018-09-26 2018-12-14 广东工业大学 A kind of scaling method and device of sanding operation arm trick relationship
CN109960269A (en) * 2019-04-04 2019-07-02 武汉大学 A kind of simplification environmental modeling method for serving unmanned plane independent navigation
CN110411361A (en) * 2019-05-15 2019-11-05 首都师范大学 A kind of mobile tunnel laser detection data processing method
CN111311663A (en) * 2020-02-17 2020-06-19 清华大学深圳国际研究生院 Real-time large-scene three-dimensional semantic modeling method

Also Published As

Publication number Publication date
CN112330536A (en) 2021-02-05

Similar Documents

Publication Publication Date Title
Opromolla et al. Uncooperative pose estimation with a LIDAR-based system
Johnson et al. Overview of terrain relative navigation approaches for precise lunar landing
Lemmens et al. Radar mappings for attitude analysis of objects in orbit
CN105677942A (en) Rapid simulation method of repeat-pass spaceborne natural scene SAR complex image data
CN103644918A (en) Method for performing positioning processing on lunar exploration data by satellite
CN111680596B (en) Positioning true value verification method, device, equipment and medium based on deep learning
CN112330536B (en) Sensor data processing method and device, electronic equipment and storage medium
Liu et al. A star identification algorithm based on simplest general subgraph
Park et al. Design and performance validation of integrated navigation system based on geometric range measurements and GIS map for urban aerial navigation
Li et al. A shadow function model based on perspective projection and atmospheric effect for satellites in eclipse
CN103743488A (en) Infrared imaging simulation method for globe limb background characteristics of remote sensing satellite
CN110108281B (en) Space astronomical observation task calculation analysis system, method, medium and device
Kozorez et al. A solution of the navigation problem for autonomous insertion of payload into a geostationary orbit using a low-thrust engine
Zhu et al. A hybrid relative navigation algorithm for a large–scale free tumbling non–cooperative target
Getchius et al. Hazard detection and avoidance for the nova-c lander
Yan et al. Horizontal velocity estimation via downward looking descent images for lunar landing
Nocerino et al. Analysis of LIDAR-based relative navigation performance during close-range rendezvous toward an uncooperative spacecraft
Zhang et al. A diverse space target dataset with multidebris and realistic on-orbit environment
LeGrand Space-based relative multitarget tracking
CN113670253A (en) Space target posture inversion method and device, computing equipment and storage medium
Kaiser et al. Position and orientation of an aerial vehicle through chained, vision-based pose reconstruction
Puchades et al. Relativistic positioning: errors due to uncertainties in the satellite world lines
Nayak et al. Real-time attitude commanding to detect coverage gaps and generate high resolution point clouds for RSO shape characterization with a laser rangefinder
Wright et al. Relative terrain imaging navigation (RETINA) tool for the asteroid redirect robotic mission (ARRM)
González De Santos et al. Point cloud simulator for space in-orbit close range autonomous operations

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220516

Address after: 100195 Room 301, 3 / F, building 5, zone 4, Xishan Creative Park, Haidian District, Beijing

Patentee after: Aerospace Hongtu Information Technology Co.,Ltd.

Patentee after: Xi'an Aerospace Hongtu Information Technology Co., Ltd

Address before: 100195 Room 301, 3 / F, building 5, zone 4, Xishan Creative Park, Haidian District, Beijing

Patentee before: Aerospace Hongtu Information Technology Co.,Ltd.