CN111127661B - Data processing method and device and electronic equipment
- Publication number
- CN111127661B (application CN201911301161.3A)
- Authority
- CN
- China
- Prior art keywords
- target
- projected
- equipment
- coordinate system
- content
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention provides a data processing method, a data processing device and electronic equipment. When the target content of a target to be projected is projected, a scene coordinate system containing both the current position of the equipment and the target position of the target to be projected is constructed; that is, the current position of the equipment and the target position of the target to be projected are compared within the same coordinate system. The relative positional relationship between the two is therefore more accurate, the target content can be projected accurately to the target display position, and the accuracy of the projection is ensured.
Description
Technical Field
The present invention relates to the field of augmented reality (AR), and in particular to a data processing method, apparatus, and electronic device.
Background
Augmented Reality (AR) is a direct or indirect real-time view of a physical, real-world environment whose elements are "augmented" by computer-generated sensory information, ideally spanning multiple sensory modalities, including visual, auditory, haptic, somatosensory and olfactory. Put simply, augmented reality is a mixture of real-world objects and computer-generated objects, and it provides an immersive experience for the user. AR fuses and integrates with the real world, into which virtual objects are projected.
While AR products can provide an immersive experience for a user, in existing AR products the relative position between the projected virtual object and the projection area is inaccurate, so the virtual object cannot be projected onto its intended location.
Disclosure of Invention
In view of the above, the present invention provides a data processing method, apparatus and electronic device, so as to solve the problem that the relative position between the projected virtual object and the projection area is inaccurate, which prevents the virtual object from being projected to the desired position.
To solve the above technical problems, the invention adopts the following technical solutions:
a data processing method, comprising:
acquiring a target position and target content of a target to be projected, and acquiring attitude information and a current position of equipment;
determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references, wherein the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected;
and determining a target display position of the target content of the target to be projected in a camera coordinate system taking the equipment as a reference according to the attitude information of the equipment, and displaying the target content at the target display position.
Optionally, the target position of the target to be projected is a coordinate located in a geographic coordinate system; the coordinates of the target content of the target to be projected are coordinates located in a local coordinate system taking the target content as a reference;
correspondingly, the determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references comprises:
determining the relative distance between the target position of the target to be projected and the current position of the equipment according to the current position of the equipment and the target position of the target to be projected;
calculating the relative position relation between the target position of the target to be projected and the current position of the equipment based on the attitude information of the equipment and the relative distance;
and determining display coordinates of the target content of the target to be projected in a scene coordinate system according to the coordinates of the target content of the target to be projected and the relative position relation.
Optionally, the determining, according to the attitude information of the equipment, the target display position of the target content of the target to be projected in a camera coordinate system taking the equipment as a reference comprises:
generating an attitude matrix corresponding to the attitude information of the equipment;
and taking the product of the display coordinates and the attitude matrix of the equipment as the target display position of the target content of the target to be projected in the camera coordinate system taking the equipment as a reference.
Optionally, the target position of the target to be projected is a coordinate located in a camera coordinate system based on the device; the coordinates of the target content of the target to be projected are coordinates located in a local coordinate system taking the target content as a reference;
correspondingly, the determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references comprises:
determining display coordinates of the target position of the target to be projected in the scene coordinate system according to the attitude information of the equipment; the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected.
Optionally, the acquiring the attitude information of the equipment comprises:
acquiring gravity data, acceleration data and magnetic field data of the equipment;
correcting the acceleration data by using the gravity data to obtain corrected acceleration data;
and integrating the magnetic field data and the corrected acceleration data to determine the attitude information of the equipment.
Optionally, acquiring the current location of the device includes:
acquiring initial position information of the equipment;
and filtering the initial position information to obtain the current position.
A data processing apparatus comprising:
the data acquisition module is used for acquiring the target position and the target content of the target to be projected, and acquiring the attitude information and the current position of the equipment;
the coordinate determining module is used for determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references; the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected;
and the data display module is used for determining the target display position of the target content of the target to be projected in a camera coordinate system taking the equipment as a reference according to the attitude information of the equipment, and displaying the target content at the target display position.
Optionally, the target position of the target to be projected is a coordinate located in a geographic coordinate system; the coordinates of the target content of the target to be projected are coordinates located in a local coordinate system taking the target content as a reference;
correspondingly, the coordinate determining module comprises:
a distance determining sub-module, configured to determine a relative distance between a target position of the target to be projected and a current position of the device according to the current position of the device and the target position of the target to be projected;
a relation determining sub-module, configured to calculate the relative position relation between the target position of the target to be projected and the current position of the equipment based on the attitude information of the equipment and the relative distance;
and the coordinate determination submodule is used for determining display coordinates of the target content of the target to be projected in a scene coordinate system according to the coordinates of the target content of the target to be projected and the relative position relation.
Optionally, when determining, according to the attitude information of the equipment, the target display position of the target content of the target to be projected in a camera coordinate system taking the equipment as a reference, the data display module is specifically configured to:
generate an attitude matrix corresponding to the attitude information of the equipment, and take the product of the display coordinates and the attitude matrix of the equipment as the target display position of the target content of the target to be projected in the camera coordinate system taking the equipment as a reference.
An electronic device, comprising: a memory and a processor;
wherein the memory is used for storing programs;
the processor invokes the program and is configured to:
acquiring a target position and target content of a target to be projected, and acquiring attitude information and a current position of equipment;
determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references, wherein the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected;
and determining a target display position of the target content of the target to be projected in a camera coordinate system taking the equipment as a reference according to the attitude information of the equipment, and displaying the target content at the target display position.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a data processing method, a device and electronic equipment, wherein a scene coordinate system comprising the current position of the equipment and the target position of the target to be projected is constructed when the target content of the target to be projected is projected, namely the current position of the equipment and the target position of the target to be projected are compared under the same coordinate system, so that the relative position relation between the current position of the equipment and the target position of the target to be projected is more accurate, the target content can be accurately projected to a target display position during projection, and the accuracy of projection is further ensured.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is obvious that the following drawings show only some embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for data processing according to an embodiment of the present invention;
FIG. 2 is a flow chart of another method for processing data according to an embodiment of the present invention;
FIG. 3 is a schematic view of an AR scene according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the scope of protection of the present invention.
The embodiments of the invention relate to a data processing method that combines augmented reality (AR) with GIS (geographic information system) applications. The technical terms used in the embodiments are explained first:
1. Augmented reality (AR) is a direct or indirect real-time view of a physical, real-world environment whose elements are "augmented" by computer-generated sensory information, ideally spanning multiple sensory modalities, including visual, auditory, haptic, somatosensory and olfactory. Put simply, augmented reality is a mixture of real-world objects and computer-generated objects, and it provides the user with an immersive experience. The difference between AR and VR is that VR does not blend with the real world but rather imitates it, whereas AR fuses and integrates with the real world, into which virtual objects are projected.
2. POI is an abbreviation for "Point of Interest". In map software, a POI may be a house, a shop, a postbox, a bus stop, and so on. Typically, each POI contains four pieces of information: name, category, coordinates, and classification. Comprehensive POI information is essential for enriching a navigation map; timely POI data reminds users of road conditions and details of surrounding buildings, helps navigation find any required place, and supports selecting the most convenient and unobstructed road for path planning. The POI information of a navigation map therefore directly influences its usability. (A sketch of such a POI record is given after these term definitions.)
3. An immersive program is a term describing an application that provides an immersive experience for the user. However, the GIS applications currently on the market cannot directly provide an immersive experience and must rely on AR or VR technology: VR provides a completely immersive virtual-reality experience, while AR provides an immersive experience in an environment where the real world and the virtual world are fused.
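As an illustration of the POI structure described in term 2, a POI record might be modeled as below. This is a minimal sketch; the field names and example values are assumptions for illustration, not a schema defined by this embodiment:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Poi:
    """A point of interest as described above (hypothetical field names)."""
    name: str                                # name of the target content
    category: str                            # "2D" or "3D"
    classification: str                      # e.g. "picture", "map", "model"
    coordinates: Tuple[float, float, float]  # lon, lat, altitude in a geographic CRS
    content_uri: str                         # path to the picture / map / OBJ model

# Example: a two-dimensional picture POI, e.g. loaded from a local JSON file
poi = Poi(name="parking sign", category="2D", classification="picture",
          coordinates=(116.397, 39.908, 45.0), content_uri="poi/parking_sign.png")
```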
Although AR products can provide an immersive experience for a user, the position of the virtual object projected in an AR product is relative to the position of the photographing equipment, and the relative position between the projected virtual object and the projection area is inaccurate, so the virtual object cannot be projected to the expected position. Moreover, because the position of the projected virtual object is relative to the photographing equipment rather than based on a real geographic position, the projected virtual object and the photographing equipment cannot be positioned accurately with respect to each other; the virtual object and the equipment are then difficult to relate to real geographic positions, and AR technology cannot be combined with GIS (geographic information system). The embodiments of the invention provide a scheme that combines AR with GIS for panoramic display of POI information (virtual objects) while the equipment (photographing equipment such as a camera, a mobile phone or a tablet) is stationary or moving. When the target content of a target to be projected is projected, a scene coordinate system containing both the current position of the equipment and the target position of the target to be projected is constructed; that is, the two positions are compared within the same coordinate system, so their relative positional relationship is more accurate, the target content can be projected accurately to the target display position, and the accuracy of the projection is ensured. In addition, the method meets urgent requirements of applications such as immersive data acquisition and augmented display of two-dimensional and three-dimensional virtual objects in the GIS industry, and can be applied to fields such as navigation and military command.
Referring to fig. 1, the data processing method may include:
s11, acquiring a target position and target content of a target to be projected, and acquiring gesture information and a current position of equipment.
The target to be projected may be referred to as a POI and may be obtained from different sources, such as a file (e.g., a local JSON file), a workspace dataset, a database, or a network. In this embodiment, for an AR scene, the POI needs to include the coordinates of the target to be projected, which may be coordinates in a real geographic coordinate system, and the target content of the target to be projected, which may be a picture, a two-dimensional map, an OBJ-format three-dimensional model, and so on. The target content in this embodiment may carry its name, category and classification, where the name is the name of the target content (such as that of a picture), the category may be two-dimensional or three-dimensional, and the classification may be picture, map, and so on.
It should be noted that if the position information and attribute information of the data are to be displayed intuitively, the category of the target content may be set to two-dimensional; if more details of the target to be projected are to be displayed, the target may be modeled as a three-dimensional model for display.
After the target content is obtained, the AR system may not be able to recognize it directly; in this case the target content needs to be format-converted by a manager, such as an image manager, into content that the AR system can recognize. In addition, if the target position and target content are stored as transient data, they can be converted into persistent data through a renderer so that long-term storage is possible.
In addition, instead of directly acquiring the target position of the target to be projected, a POI point may be selected directly in the camera coordinate system. Such a POI point is based on the camera coordinate system, and if it is to be saved later, it needs to be converted into the geographic coordinate system to obtain its coordinates there.
In this embodiment, the coordinate system in which the target content of the target to be projected is located is a local coordinate system based on the target content itself.
The equipment in this embodiment is now described. It may be photographing equipment such as a camera, a mobile phone or a tablet; its state is not limited, and it may be stationary or moving.
Most current mobile devices have built-in sensors, and the motion direction, attitude and so on of the equipment can be calculated from the high-precision raw data they measure (such as gravity data collected by a gravity sensor, magnetic field data collected by a magnetic field sensor, and acceleration data collected by an acceleration sensor). Multi-sensor fusion filters, combines and optimizes the measurements of several sensors according to a given algorithm to obtain a consistent interpretation and description of a target, and assists the system in environment judgment, path planning, verification and the like so as to form higher-level comprehensive decisions, imitating the way humans integrate information from multiple senses. In this embodiment, the attitude of the equipment is determined by multi-sensor fusion; specifically, it can be determined from the data collected by built-in sensors such as the gravity sensor, the magnetic field sensor and the acceleration sensor. The gravity sensor collects the gravity data of the equipment, the magnetic field sensor collects its magnetic field data, and the acceleration sensor collects its acceleration data. To ensure accuracy, after the gravity data and acceleration data are obtained, the gravity data may be high-pass filtered and the acceleration data low-pass filtered to reduce noise. The attitude of the equipment is obtained by fusing its magnetic field, acceleration and gravity sensor data. Because the data collected by the acceleration sensor are affected by gravity, the acceleration is corrected with the gravity data of the equipment to reduce this influence, yielding corrected acceleration data. The correction proceeds as follows: let x′, y′, z′ be the actual acceleration components and G₀ the local gravitational acceleration value; then x′² + y′² + z′² = G₀².
The relation between a measured value x′ and its corrected value x is x′ = ax + b; the coefficients are solved by least squares over several groups of measured values (and likewise for the other axes).
The magnetic field data collected by the magnetic field sensor form a magnetic field quaternion, and the attitude of the equipment is obtained by integrating the corrected acceleration data with it. Specifically, the magnetic field quaternion is converted into a magnetic field vector, which is cross-multiplied with the corrected acceleration to obtain a vector V₀; V₀ is cross-multiplied with the gravity vector to obtain a vector V₁; and the attitude matrix of the equipment is composed of V₀, V₁ and the gravity vector.
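The construction just described can be sketched as follows. This is a minimal illustration under assumptions (normalized vectors, a right-handed frame); the function names and the least-squares calibration helper are illustrative, not taken from the patent:

```python
import numpy as np

def attitude_matrix(mag: np.ndarray, acc: np.ndarray, grav: np.ndarray) -> np.ndarray:
    """Attitude matrix from magnetic-field, corrected-acceleration and gravity
    vectors, by the cross-product construction described above."""
    g = grav / np.linalg.norm(grav)      # normalized gravity vector
    v0 = np.cross(mag, acc)              # magnetic field x corrected acceleration
    v0 /= np.linalg.norm(v0)
    v1 = np.cross(v0, g)                 # V0 x gravity
    v1 /= np.linalg.norm(v1)
    return np.column_stack((v0, v1, g))  # matrix composed of V0, V1 and gravity

def calibrate_axis(measured: np.ndarray, corrected: np.ndarray):
    """Least-squares fit of the per-axis relation x' = a*x + b from several
    groups of (corrected, measured) values, as described above."""
    a, b = np.polyfit(corrected, measured, 1)
    return a, b
```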
After the attitude of the equipment is determined, the corresponding attitude matrix can be determined. The attitude information consists of the three attitude angles of the equipment, while the attitude matrix is an Euler (rotation) matrix used for coordinate conversion and for calculating the position of the POI in the AR scene; the attitude information and the attitude matrix are different expressions of the same attitude.
The current position of the equipment may be determined as follows:
first acquire the initial position information of the equipment measured by GNSS (global navigation satellite system) or SLAM (simultaneous localization and mapping), and then filter the initial position information, for example with Kalman filtering, to obtain the current position of the equipment. The current position describes the position of the equipment in the geographic coordinate system of the AR scene.
It should be noted that while the equipment is far from the target position of the POI, high-precision positioning (such as GNSS or SLAM) need not be started, which saves unnecessary computing power; within a certain range of the POI, high-precision positioning can be started to assist. The target position of the POI in the geographic coordinate system does not change, but the position of the equipment does, so the relative position of the equipment and the POI changes; the change of the camera position in the geographic coordinate system therefore needs to be tracked in real time to achieve an immersive augmented reality experience.
And S12, determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references.
In practical applications, the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected. After the scene coordinate system has been constructed on this basis, the display coordinates of the target content of the target to be projected in the scene coordinate system need to be determined, that is, the position in the scene coordinate system at which the target content is displayed. In a specific implementation, the realization of step S12 depends on the reference coordinate system in which the target position of the target to be projected is given; the two cases are described separately below.
1. The target position of the target to be projected is a coordinate in a geographic coordinate system; the coordinates of the target content of the target to be projected are coordinates located in a local coordinate system with the target content as a reference.
The target position of the target to be projected may be set from manual experience or obtained from the position of the currently projected virtual object relative to the photographing equipment; this is not limited here.
Specifically, referring to fig. 2, step S12 may include:
s21, determining the relative distance between the target position of the target to be projected and the current position of the equipment according to the current position of the equipment and the target position of the target to be projected.
Because the current position of the equipment and the target position of the target to be projected are both data in the geographic coordinate system, the relative distance between them can be obtained directly from the difference of the two positions.
S22, calculating the relative position relation between the target position of the target to be projected and the current position of the equipment based on the posture information of the equipment and the relative distance.
In practical application, the relative position relation between the target position of the target to be projected and the current position of the equipment is calculated from the attitude information of the equipment and the relative distance between them. Specifically, an attitude angle is obtained from the attitude information (expressed in attitude-matrix form), and the relative offset of the target position with respect to the equipment is calculated from the attitude angle and the relative distance, which gives the relative position relation.
The relative position relation is the positional offset of the target position of the target to be projected relative to the current position of the equipment, and can be represented by a model matrix.
S23, according to the coordinates of the target content of the target to be projected and the relative position relation, determining the display coordinates of the target content of the target to be projected in a scene coordinate system.
In practical application, the display coordinates of the target to be projected in the scene coordinate system can be obtained by adding the relative offset to the coordinates of the equipment in the scene coordinate system.
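Steps S21 to S23 can be pictured with the sketch below, which turns the position difference into a local metric offset, rotates it by the equipment attitude to obtain the relative offset (the model-matrix translation), and adds it to the content's local coordinates. The flat-earth/ENU approximation and all names are assumptions for illustration; the patent does not fix a particular projection:

```python
import numpy as np

EARTH_R = 6_378_137.0  # WGS-84 equatorial radius, meters

def enu_offset(device_lla, target_lla):
    """S21: relative distance/offset of the POI from the equipment by position
    difference, mapped to local east-north-up meters (flat-earth approximation)."""
    dlon = np.radians(target_lla[0] - device_lla[0])
    dlat = np.radians(target_lla[1] - device_lla[1])
    east = dlon * EARTH_R * np.cos(np.radians(device_lla[1]))
    north = dlat * EARTH_R
    up = target_lla[2] - device_lla[2]
    return np.array([east, north, up])

def display_coords(device_lla, target_lla, attitude: np.ndarray,
                   content_local: np.ndarray) -> np.ndarray:
    """S22 + S23: rotate the offset by the equipment attitude to get the relative
    position relation, then add the content's local-coordinate vertex. Whether the
    scene frame is ENU-aligned or heading-aligned is an implementation choice."""
    offset = attitude @ enu_offset(device_lla, target_lla)  # model-matrix translation
    return content_local + offset   # the equipment sits at the scene origin here

# Example: equipment at the scene origin, POI roughly 100 m to the east
pos = display_coords((116.397, 39.908, 45.0), (116.398, 39.908, 50.0),
                     np.eye(3), np.zeros(3))
```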
2. The target position of the target to be projected is a coordinate in a camera coordinate system taking the equipment as a reference; the coordinates of the target content of the target to be projected are coordinates located in a local coordinate system with the target content as a reference.
Unlike the above-described embodiments, the target position of the target to be projected in the present embodiment is selected directly in the camera coordinate system, that is, the target position of the target to be projected is a coordinate point in the camera coordinate system.
Specifically, step S12 may include:
determining display coordinates of the target position of the target to be projected in a scene coordinate system according to the attitude information of the equipment; the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected.
In this embodiment, a scene coordinate system containing the target position of the target to be projected and the current position of the equipment still needs to be determined, so the target position given in the camera coordinate system must be converted into the scene coordinate system. The attitude information of the equipment gives the transformation from the scene coordinate system to the camera coordinate system; its inverse transformation gives the conversion from the camera coordinate system to the scene coordinate system, from which the display coordinates of the target position of the target to be projected in the scene coordinate system can be determined.
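A sketch of this inverse transformation, under the assumption that the attitude matrix is a pure rotation (so its inverse is its transpose); the function names are illustrative:

```python
import numpy as np

def scene_to_camera(attitude: np.ndarray, p_scene: np.ndarray) -> np.ndarray:
    """Forward transform given by the attitude information of the equipment."""
    return attitude @ p_scene

def camera_to_scene(attitude: np.ndarray, p_camera: np.ndarray) -> np.ndarray:
    """Inverse transform: for a rotation matrix the inverse is the transpose,
    so a POI picked in the camera frame is mapped back into the scene frame."""
    return attitude.T @ p_camera

# A POI picked 2 m in front of the camera, mapped into the scene coordinate system
attitude = np.eye(3)
poi_scene = camera_to_scene(attitude, np.array([0.0, 0.0, 2.0]))
```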
It should be noted that if the coordinates of the target position of the target to be projected in the geographic coordinate system are to be recorded, the scene coordinate system needs to be converted into the geographic coordinate system, which is the inverse of the geographic-to-scene conversion described above.
S13, determining, according to the attitude information of the equipment, the target display position of the target content of the target to be projected in a camera coordinate system taking the equipment as a reference, and displaying the target content at the target display position.
If the equipment is moving, the relative position of the equipment and the target to be projected in the scene coordinate system changes continuously, and the target display position of the target to be projected in the camera coordinate system changes continuously with the movement of the equipment as well. The target display position in the camera coordinate system taking the equipment as a reference therefore needs to be determined in real time, and the target content of the target to be projected is displayed at it continuously.
Referring to fig. 3, fig. 3 shows a schematic diagram of a POI display. In fig. 3, the target content is the "spot type" part; the POI content is suspended in mid-air and has detailed coordinates in the geographic coordinate system, and the picture shot by the equipment also has detailed geographic coordinates. By judging whether each of the two coordinates corresponds to land designated for parking, it can be determined whether the land is illegally occupied or put to uses other than parking; that is, this embodiment can be used to verify whether the use of a piece of land is legal. The system also supports pipeline operation and maintenance (by placing pipeline data at a designated position, constructors can inspect, overhaul and report on site) and library exhibition (by placing book information at a designated position, visitors can conveniently locate and review books).
In a specific implementation process, step S13 may specifically include:
and generating an attitude matrix corresponding to the attitude information of the equipment, and taking the product of the display coordinates and the attitude matrix of the equipment as the target display position of the target content of the target to be projected in a camera coordinate system taking the equipment as a reference.
In practical application, the attitude information and the attitude matrix are two different expressions of the attitude. In this embodiment, the attitude information is converted to obtain the attitude matrix, and the display coordinates of the target content of the target to be projected in the scene coordinate system are multiplied by the attitude matrix; the result is the target display position of the target content in the camera coordinate system taking the equipment as a reference.
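Step S13, together with the real-time update for moving equipment described above, can be sketched as follows. The per-frame loop and the pinhole projection at the end are assumptions added for completeness; the patent itself only specifies the product of the display coordinates and the attitude matrix:

```python
import numpy as np

def target_display_position(display_coords: np.ndarray,
                            attitude: np.ndarray) -> np.ndarray:
    """Product of the scene-frame display coordinates and the attitude matrix,
    i.e. the target display position in the camera coordinate system."""
    return attitude @ display_coords

def render_loop(frames, poi_scene):
    """Per frame: re-read the sensors, rebuild the attitude matrix, and
    re-project the POI so it stays put while the equipment moves."""
    for attitude, device_scene_pos in frames:   # supplied by the sensor pipeline
        p_cam = target_display_position(poi_scene - device_scene_pos, attitude)
        if p_cam[2] > 0:                        # only if in front of the camera
            u, v = p_cam[0] / p_cam[2], p_cam[1] / p_cam[2]  # pinhole projection
            yield (u, v)                        # where to draw the target content
```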
As described above, the embodiments of the invention can establish a panoramic POI augmented reality model based on multi-sensor fusion, and immersive GIS industry applications can be carried out on this basis: with high-precision positioning, two- and three-dimensional integrated rendering capability and management of data positions in the AR scene, immersive GIS industry applications and solutions can be established easily. For example, an indoor map can be generated by combining AR with high-precision track data collected by visual-inertial odometry (AR indoor track collection); dynamic effects such as situation deduction can be added to a map displayed in an AR scene; and gesture operations, AR calculation and the like can be performed on three-dimensional models.
In this embodiment, when the target content of the target to be projected is projected, a scene coordinate system containing both the current position of the equipment and the target position of the target to be projected is constructed; that is, the two positions are compared within the same coordinate system, so their relative positional relationship is more accurate, the target content can be projected accurately to the target display position, and the accuracy of the projection is ensured.
In addition, the embodiment of the invention has the following effects:
1. Immersive AR experience: fusing multiple kinds of sensor data and computing in real time bring an immersive AR experience to two-dimensional maps and three-dimensional scenes.
2. Full-scene POI display: POIs can be displayed in any plane and at any position in space. These POIs can originate from multiple types of data, such as local JSON files, a user's map workspace datasets, or multi-source online network data.
3. Multi-category POI rendering: a POI (point of interest) may be displayed as a picture, a two-dimensional map or a three-dimensional model. Data of different types, such as two-dimensional and three-dimensional data, are managed effectively through node management and placed into the scene, so that two- and three-dimensional integration is achieved.
4. GIS-based immersive application: high-precision positioning and two- and three-dimensional integrated rendering provide a foundation for GIS applications. Immersive data acquisition, processing and display can be achieved by manually selecting POI positions in the camera coordinate system anywhere in the scene, and immersive dynamic effects can be added to the original map to enrich its display functions.
Optionally, on the basis of the embodiment of the data processing method, another embodiment of the present invention provides a data processing apparatus, referring to fig. 4, may include:
a data acquisition module 11, configured to acquire a target position and target content of a target to be projected, and acquire attitude information and a current position of the equipment;
a coordinate determining module 12, configured to determine display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references; the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected;
and a data display module 13, configured to determine, according to the attitude information of the equipment, the target display position of the target content of the target to be projected in a camera coordinate system taking the equipment as a reference, and display the target content at the target display position.
Further, the target position of the target to be projected is a coordinate located in a geographic coordinate system; the coordinates of the target content of the target to be projected are coordinates located in a local coordinate system taking the target content as a reference;
correspondingly, the coordinate determining module comprises:
a distance determining sub-module, configured to determine a relative distance between a target position of the target to be projected and a current position of the device according to the current position of the device and the target position of the target to be projected;
a relation determining sub-module, configured to calculate the relative position relation between the target position of the target to be projected and the current position of the equipment based on the attitude information of the equipment and the relative distance;
and the coordinate determination submodule is used for determining display coordinates of the target content of the target to be projected in a scene coordinate system according to the coordinates of the target content of the target to be projected and the relative position relation.
Further, when determining, according to the attitude information of the equipment, the target display position of the target content of the target to be projected in a camera coordinate system taking the equipment as a reference, the data display module is specifically configured to:
generate an attitude matrix corresponding to the attitude information of the equipment, and take the product of the display coordinates and the attitude matrix of the equipment as the target display position of the target content of the target to be projected in the camera coordinate system taking the equipment as a reference.
Further, the target position of the target to be projected is a coordinate located in a camera coordinate system taking the equipment as a reference; the coordinates of the target content of the target to be projected are coordinates located in a local coordinate system taking the target content as a reference;
correspondingly, when determining display coordinates of the target content of the target to be projected in the scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references, the coordinate determining module 12 is specifically configured to:
determine display coordinates of the target position of the target to be projected in the scene coordinate system according to the attitude information of the equipment; the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected.
Further, when configured to acquire the attitude information of the equipment, the data acquisition module 11 is specifically configured to:
acquiring gravity data, acceleration data and magnetic field data of the equipment;
correcting the acceleration data by using the gravity data to obtain corrected acceleration data;
and integrating the magnetic field data and the corrected acceleration data to determine the attitude information of the equipment.
Further, when the data obtaining module 11 is configured to obtain the current location of the device, the data obtaining module is specifically configured to:
and acquiring initial position information of the equipment, and performing filtering processing on the initial position information to obtain the current position.
In this embodiment, when the target content of the target to be projected is projected, a scene coordinate system containing both the current position of the equipment and the target position of the target to be projected is constructed; that is, the two positions are compared within the same coordinate system, so their relative positional relationship is more accurate, the target content can be projected accurately to the target display position, and the accuracy of the projection is ensured.
It should be noted that, in the working process of each module and sub-module in this embodiment, please refer to the corresponding description in the above embodiment, and the description is omitted here.
Optionally, on the basis of the embodiment of the data processing method, another embodiment of the present invention provides an electronic device, including: a memory and a processor;
wherein the memory is used for storing programs;
the processor invokes the program and is configured to:
acquiring a target position and target content of a target to be projected, and acquiring attitude information and a current position of equipment;
determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references, wherein the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected;
and determining a target display position of the target content of the target to be projected in a camera coordinate system taking the equipment as a reference according to the attitude information of the equipment, and displaying the target content at the target display position.
In this embodiment, when the target content of the target to be projected is projected, a scene coordinate system containing both the current position of the equipment and the target position of the target to be projected is constructed; that is, the two positions are compared within the same coordinate system, so their relative positional relationship is more accurate, the target content can be projected accurately to the target display position, and the accuracy of the projection is ensured.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (6)
1. A method of data processing, comprising:
acquiring a target position and target content of a target to be projected, and acquiring attitude information and a current position of equipment;
determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references, wherein the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected;
according to the attitude information of the equipment, determining a target display position of target content of the target to be projected in a camera coordinate system taking the equipment as a reference, and displaying the target content at the target display position;
wherein the target position of the target to be projected is a coordinate in a geographic coordinate system; the coordinates of the target content of the target to be projected are coordinates located in a local coordinate system taking the target content as a reference;
correspondingly, the determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references comprises:
determining the relative distance between the target position of the target to be projected and the current position of the equipment according to the current position of the equipment and the target position of the target to be projected;
calculating the relative position relation between the target position of the target to be projected and the current position of the equipment based on the attitude information of the equipment and the relative distance;
according to the coordinates of the target content of the target to be projected and the relative position relation, determining the display coordinates of the target content of the target to be projected in a scene coordinate system;
the determining, according to the attitude information of the equipment, the target display position of the target content of the target to be projected in the camera coordinate system taking the equipment as a reference comprises:
generating an attitude matrix corresponding to the attitude information of the equipment;
and taking the product of the display coordinates and the attitude matrix of the equipment as the target display position of the target content of the target to be projected in the camera coordinate system taking the equipment as a reference.
2. The data processing method according to claim 1, wherein the target position of the target to be projected is a coordinate located in a camera coordinate system taking the equipment as a reference; the coordinates of the target content of the target to be projected are coordinates located in a local coordinate system taking the target content as a reference;
correspondingly, the determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references comprises:
determining display coordinates of the target position of the target to be projected in the scene coordinate system according to the attitude information of the equipment; the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected.
3. The data processing method according to claim 1, wherein the acquiring the attitude information of the equipment comprises:
acquiring gravity data, acceleration data and magnetic field data of the equipment;
correcting the acceleration data by using the gravity data to obtain corrected acceleration data;
and integrating the magnetic field data and the corrected acceleration data to determine the attitude information of the equipment.
4. The data processing method according to claim 1, wherein acquiring the current position of the device comprises:
acquiring initial position information of the equipment;
and filtering the initial position information to obtain the current position.
5. A data processing apparatus, comprising:
the data acquisition module is used for acquiring the target position and the target content of the target to be projected, and acquiring the attitude information and the current position of the equipment;
the coordinate determining module is used for determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references; the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected;
the data display module is used for determining, according to the attitude information of the equipment, the target display position of the target content of the target to be projected in a camera coordinate system taking the equipment as a reference, and displaying the target content at the target display position;
wherein the target position of the target to be projected is a coordinate in a geographic coordinate system; the coordinates of the target content of the target to be projected are coordinates located in a local coordinate system taking the target content as a reference;
correspondingly, the coordinate determining module comprises:
a distance determining sub-module, configured to determine a relative distance between a target position of the target to be projected and a current position of the device according to the current position of the device and the target position of the target to be projected;
a relation determining sub-module, configured to calculate the relative position relation between the target position of the target to be projected and the current position of the equipment based on the attitude information of the equipment and the relative distance;
the coordinate determination submodule is used for determining display coordinates of the target content of the target to be projected in a scene coordinate system according to the coordinates of the target content of the target to be projected and the relative position relation;
wherein, when determining, according to the attitude information of the equipment, the target display position of the target content of the target to be projected in a camera coordinate system taking the equipment as a reference, the data display module is specifically configured to:
generate an attitude matrix corresponding to the attitude information of the equipment, and take the product of the display coordinates and the attitude matrix of the equipment as the target display position of the target content of the target to be projected in the camera coordinate system taking the equipment as a reference.
6. An electronic device, comprising: a memory and a processor;
wherein the memory is used for storing programs;
the processor invokes the program and is configured to:
acquiring a target position and target content of a target to be projected, and acquiring attitude information and a current position of equipment;
determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references, wherein the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected;
according to the attitude information of the equipment, determining a target display position of target content of the target to be projected in a camera coordinate system taking the equipment as a reference, and displaying the target content at the target display position;
wherein the target position of the target to be projected is a coordinate in a geographic coordinate system; the coordinates of the target content of the target to be projected are coordinates located in a local coordinate system taking the target content as a reference;
correspondingly, the determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references comprises:
determining the relative distance between the target position of the target to be projected and the current position of the equipment according to the current position of the equipment and the target position of the target to be projected;
calculating the relative position relation between the target position of the target to be projected and the current position of the equipment based on the attitude information of the equipment and the relative distance;
according to the coordinates of the target content of the target to be projected and the relative position relation, determining the display coordinates of the target content of the target to be projected in a scene coordinate system;
the determining, according to the attitude information of the equipment, the target display position of the target content of the target to be projected in the camera coordinate system taking the equipment as a reference comprises:
generating an attitude matrix corresponding to the attitude information of the equipment;
and taking the product of the display coordinates and the attitude matrix of the equipment as the target display position of the target content of the target to be projected in the camera coordinate system taking the equipment as a reference.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---
CN201911301161.3A | 2019-12-17 | 2019-12-17 | Data processing method and device and electronic equipment
Publications (2)
Publication Number | Publication Date |
---|---
CN111127661A (en) | 2020-05-08
CN111127661B (en) | 2023-08-29
Family
ID=70498391
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---
CN201911301161.3A | Data processing method and device and electronic equipment | 2019-12-17 | 2019-12-17
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111127661B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112379344B (en) * | 2020-11-09 | 2024-04-02 | 中国科学院电子学研究所 | Signal compensation method and device, equipment and storage medium |
CN112541971A (en) * | 2020-12-25 | 2021-03-23 | 深圳市慧鲤科技有限公司 | Point cloud map construction method and device, electronic equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105354820A (en) * | 2015-09-30 | 2016-02-24 | 深圳多新哆技术有限责任公司 | Method and apparatus for regulating virtual reality image |
CN106791784A (en) * | 2016-12-26 | 2017-05-31 | 深圳增强现实技术有限公司 | Augmented reality display methods and device that a kind of actual situation overlaps |
CN107463261A (en) * | 2017-08-11 | 2017-12-12 | 北京铂石空间科技有限公司 | Three-dimensional interaction system and method |
CN108335365A (en) * | 2018-02-01 | 2018-07-27 | 张涛 | Image-guided virtual-real fusion processing method and device |
CN108537889A (en) * | 2018-03-26 | 2018-09-14 | 广东欧珀移动通信有限公司 | Method of adjustment, device, storage medium and the electronic equipment of augmented reality model |
CN108876900A (en) * | 2018-05-11 | 2018-11-23 | 重庆爱奇艺智能科技有限公司 | A kind of virtual target projective techniques merged with reality scene and system |
CN109961522A (en) * | 2019-04-02 | 2019-07-02 | 百度国际科技(深圳)有限公司 | Image projecting method, device, equipment and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9508146B2 (en) * | 2012-10-31 | 2016-11-29 | The Boeing Company | Automated frame of reference calibration for augmented reality |
Also Published As
Publication number | Publication date |
---|---|
CN111127661A (en) | 2020-05-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |