CN117764848A - Map merging method and device - Google Patents

Info

Publication number
CN117764848A
CN117764848A (application CN202311792945.7A)
Authority
CN
China
Prior art keywords
map
pose information
pose
information
conversion relation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311792945.7A
Other languages
Chinese (zh)
Inventor
黄亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202311792945.7A
Publication of CN117764848A
Legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the disclosure provides a map merging method and device. When a virtual object in a virtual scene enters a first map, first pose information of the second map where the virtual object is currently located is acquired; second pose information of the first map is corrected according to pose information of the overlapping area between the second map and the first map, yielding third pose information of the first map, where the first map is any map where the virtual object was located before the current moment; target pose information of the target map obtained after the second map and the first map are combined is then determined according to the first pose information and the third pose information. Under this technical scheme, the previously constructed map is corrected using the pose information of the overlapping area between the two maps and merged into the current map, which reduces repeated construction and improves the accuracy of the map.

Description

Map merging method and device
Technical Field
The embodiments of the disclosure relate to the technical field of computers and network communication, and in particular to a map merging method and device.
Background
In Mixed Reality (MR) scenarios, a headset often needs to construct a map around anchor points to help virtual objects carry out their tasks in the map. The industry currently adopts a global-map approach, i.e., a single global map is constructed for the MR scene.
This approach builds map regions the virtual object may never reach and occupies a large amount of memory. A minimap scheme has therefore been proposed: as the virtual object moves through the map, local maps are continuously built at the corresponding anchor positions, avoiding global construction.
In practice, however, minimaps may be constructed repeatedly, which wastes a certain amount of memory.
Disclosure of Invention
The embodiments of the disclosure provide a map merging method and device to address the memory waste caused by repeated construction of minimaps.
In a first aspect, an embodiment of the present disclosure provides a map merging method, including:
when a virtual object in a virtual scene is detected to enter a first map, acquiring first pose information of a second map where the virtual object is currently located;
correcting the second pose information of the first map according to the pose information of the overlapping area between the second map and the first map to obtain third pose information of the first map, wherein the first map is any map where the virtual object is located before the current moment;
and determining target pose information of a target map obtained after the second map and the first map are combined according to the first pose information and the third pose information.
In a second aspect, an embodiment of the present disclosure provides a map merging apparatus, including:
an acquisition unit, used for acquiring first pose information of a second map where a virtual object is currently located when the virtual object in a virtual scene is detected to enter the first map;
the correcting unit is used for correcting the second pose information of the first map according to the pose information of the overlapping area between the second map and the first map to obtain third pose information of the first map, wherein the first map is any map where the virtual object is located before the current moment;
and the determining unit is used for determining the target pose information of the target map obtained after the second map and the first map are combined according to the first pose information and the third pose information.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory, causing the at least one processor to perform the map merging method as described above in the possible designs of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer readable storage medium having stored therein computer executable instructions that, when executed by a processor, implement a map merging method as described in the possible designs of the first aspect above.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program which, when executed by a processor, implements a map merging method as described in the possible designs of the first aspect above.
According to the map merging method and device provided by the embodiments of the disclosure, when the virtual object in the virtual scene is detected to enter the first map, the first pose information of the second map where the virtual object is currently located is acquired; the second pose information of the first map is corrected according to the pose information of the overlapping area between the second map and the first map to obtain the third pose information of the first map, where the first map is any map where the virtual object was located before the current moment; the target pose information of the target map obtained after the second map and the first map are combined is then determined according to the first pose information and the third pose information. Under this technical scheme, the previously constructed map is corrected using the pose information of the overlapping area between the two maps and merged into the current map, which reduces repeated construction and improves the accuracy of the map.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present disclosure, and a person of ordinary skill in the art can obtain other drawings from them without inventive effort.
Fig. 1 is an application scenario schematic diagram of a map merging method provided in an embodiment of the present disclosure;
fig. 2 is a first schematic flowchart of a map merging method according to an embodiment of the disclosure;
fig. 3 is a second schematic flowchart of a map merging method according to an embodiment of the disclosure;
FIG. 4 is a schematic diagram of offset determination provided by an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a map merging device according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are some embodiments of the present disclosure, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
In Mixed Reality (MR) scenarios, a headset often needs to construct a map around anchor points to help virtual objects carry out their tasks in the map. The industry currently adopts a global-map approach, i.e., a single global map is constructed for the MR scene.
This approach builds map regions the virtual object may never reach and occupies a large amount of memory. A minimap scheme has therefore been proposed: as the virtual object moves through the map, local maps are continuously built at the corresponding anchor positions, avoiding global construction.
In practice, however, minimaps may be constructed repeatedly, which wastes a certain amount of memory.
To solve the above technical problems, the inventor's technical idea is as follows: since minimaps in the prior art are built with overlapping regions, then as minimaps are continuously constructed, when the current map is detected to overlap a previous map, the previous map can be merged into the current map, avoiding reconstruction of the earlier map. Because the head-mounted device is subject to pose drift, such a merge would otherwise be inaccurate; therefore, before merging, the previous map is error-compensated based on the pose information of the overlapping region in each of the two maps, and the compensated map is merged with the current map. This eliminates the error, avoids the repeated construction of the prior art, and reduces resource occupation.
Fig. 1 is an application scenario schematic diagram of a map merging method provided by an embodiment of the present disclosure, as shown in fig. 1, where the application scenario includes: a first map 11, a second map 12, and a target map 13.
In one possible implementation, when the distance between the virtual object of the head-mounted device and anchor point 1 falls below 20 during motion, a quadrilateral is built centered on anchor point 1, yielding the first map 11. As the virtual object keeps moving, a quadrilateral is built around anchor point 3 before the virtual object is detected to enter, generating a corresponding second map 12, and the first map 11 and the second map 12 overlap.
At this time, because the head-mounted device may drift in pose at different times or/and during displacement, the pose information in the first map 11 is the pose information read at the current time, and the map constructed around anchor point 3 differs from the previously read second map 12. The second map 12 may therefore be corrected based on the overlap between the first map 11 and the second map 12, and the corrected second map 12 may then be combined with the first map 11 to obtain the target map 13.
The above application scenario is only an example and does not limit the specific scenario, which depends on the actual situation.
The main execution body of the map merging method according to the embodiment of the disclosure is an electronic device, which may be a head-mounted device, a mobile phone, a computer, a tablet, a computing device, or the like.
The following describes specific implementations of the map merging method, the map merging device, the electronic device, and so on according to the embodiments of the present disclosure; the examples given are illustrative rather than limiting.
Fig. 2 is a schematic flowchart of a map merging method according to an embodiment of the disclosure. As shown in fig. 2, the map merging method includes:
step 21, when detecting that a virtual object in a virtual scene enters a first map, acquiring first pose information of a second map where the virtual object is currently located;
in this step, under the virtual scene corresponding to the MR, when it is detected that the virtual object enters the region where the previously constructed map is located, i.e., enters the first map, pose information of the map where the virtual object is currently located, i.e., first pose information of the second map, is obtained.
Alternatively, the first map may be a map constructed in a previous scene based on the virtual object; the second map may be a map constructed under the current scene based on the virtual object.
Further, the pose information may be the pose information of each element displayed in the corresponding map, for example a tree, a person, a house, a street lamp, a vehicle, a railing, or a valley in the virtual scene, acquired by the head-mounted device.
The pose information may be embodied in the form of point cloud data.
Optionally, the virtual object may be a virtual person or virtual thing in the virtual scene that is manipulated by the user of the headset.
In addition, the map where the virtual object is located in the virtual scene may be constructed with the virtual object as the center, for example an N×N square centered on the virtual object, or a circle of radius N (where N is any value that can be preset); it may be constructed from the field of view and non-field of view of the virtual object; it may be constructed around an anchor point appearing in the virtual object's field of view; or it may be constructed around an anchor point whose distance from the virtual object is smaller than N. That is, the embodiments of the disclosure do not limit how the map is constructed; its shape, construction basis, and so on are arbitrary, and the embodiments of the disclosure take a quadrilateral as an example.
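As a minimal sketch of this anchor-triggered construction (assuming a two-dimensional layout, an axis-aligned square map, and the distance threshold of 20 from the fig. 1 scenario; the names Anchor, MiniMap, and maybe_build_minimap, and the side length of 40, are illustrative assumptions, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class Anchor:
    x: float
    y: float

@dataclass
class MiniMap:
    center: Anchor
    half_side: float  # half the side length of the quadrilateral map

    def contains(self, x: float, y: float) -> bool:
        # Axis-aligned square membership test.
        return (abs(x - self.center.x) <= self.half_side
                and abs(y - self.center.y) <= self.half_side)

def maybe_build_minimap(obj_x, obj_y, anchor, threshold=20.0, side=40.0):
    """Build a quadrilateral minimap centered on `anchor` once the
    virtual object comes within `threshold` of it (cf. anchor point 1
    and the distance-smaller-than-20 trigger in the fig. 1 scenario)."""
    dist = ((obj_x - anchor.x) ** 2 + (obj_y - anchor.y) ** 2) ** 0.5
    if dist < threshold:
        return MiniMap(center=anchor, half_side=side / 2)
    return None

m = maybe_build_minimap(10.0, 5.0, Anchor(0.0, 0.0))   # distance ~11.2 < 20
print(m is not None and m.contains(12.0, -7.0))        # -> True
```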
Step 22, correcting the second pose information of the first map according to the pose information of the overlapping area between the second map and the first map to obtain the third pose information of the first map;
the first map is any map where the virtual object is located before the current moment;
in this step, because a certain pose drift occurs during the running process based on the head-mounted device, the pose drift also occurs in the second pose information of the first map acquired previously at the current time.
At this time, the pose information of the region where the coincidence region in the second map is located and the pose information of the region where the coincidence region in the first map is located can be obtained based on the pose information of the region where the coincidence region in the second map is located.
And then correcting the second pose information of the first map based on the two pose information to eliminate the deviation between the pose information corresponding to the first map and the first pose information of the second map, namely ensuring that all the same elements on the two overlapped areas can be positioned at the same position.
Furthermore, it is also ensured that each element on the non-overlapping area in the first map has no deviation relative to the second map under the global map, and the third pose information of the first map is obtained.
And step 23, determining target pose information of the target map obtained after the second map and the first map are combined according to the first pose information and the third pose information.
In this step, after the second pose information of the first map is corrected, the resulting third pose information has no deviation relative to the second map; at this time, the first pose information of the second map and the third pose information are combined to obtain the target pose information of the target map after the second map and the first map are merged.
Optionally, the merging process may be as follows: the pose information of the overlapping area in the third pose information is matched against that in the first pose information so that each matched pair of points lies at the same position; that is, the two pieces of pose information of the same element in different maps of the virtual scene are matched. Concretely, this can be realized by matching point cloud data.
Under this implementation, matching the first pose information with the third pose information means the second map is kept unchanged, and the corrected first map is merged onto the second map to obtain the target map.
The embodiment of the disclosure provides a map merging method: when the virtual object in the virtual scene is detected to enter the first map, the first pose information of the second map where the virtual object is currently located is acquired; the second pose information of the first map is corrected according to the pose information of the overlapping area between the second map and the first map to obtain the third pose information of the first map, where the first map is any map where the virtual object was located before the current moment; the target pose information of the target map obtained after the second map and the first map are combined is then determined according to the first pose information and the third pose information. Under this technical scheme, the previously constructed map is corrected using the pose information of the overlapping area between the two maps and merged into the current map, which reduces repeated construction and improves the accuracy of the map. A condensed sketch of this flow is given below.
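The sketch compresses steps 21 to 23 into one function over point-cloud pose data, assuming translation-only drift and maps represented as numpy arrays of 3D points; the mean-offset correction and the rounding-based deduplication are illustrative simplifications, not the patent's exact procedure:

```python
import numpy as np

def correct_and_merge(first_pts, second_pts, overlap_first, overlap_second):
    """Steps 22-23 on (N, 3) point arrays. `overlap_first` and
    `overlap_second` hold the same physical points of the overlapping
    area as recorded in the first and second map respectively; their
    mean discrepancy estimates the drift of the first map."""
    # Step 22: estimate the offset from the overlapping area and correct
    # the first map's (second) pose information, yielding the third
    # pose information (translation-only drift assumed here).
    offset = (overlap_second - overlap_first).mean(axis=0)
    third_pts = first_pts + offset

    # Step 23: the second (current) map stays unchanged; the corrected
    # first map is merged onto it, with coincident points deduplicated.
    merged = np.vstack([second_pts, third_pts])
    return np.unique(merged.round(6), axis=0)
```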
Based on the foregoing embodiments, fig. 3 is a second schematic flowchart of a map merging method according to an embodiment of the disclosure, as shown in fig. 3, where the step 22 may include the following steps:
step 31, determining the offset between the second map and the first map according to the pose information of the overlapping region in the second map, the pose information of the overlapping region in the first map and the pose conversion relation under the global coordinate system;
wherein the global coordinate system is a coordinate system established by any point of the global map, and the global map comprises: a first map and a second map;
in this step, a global coordinate system under the virtual scene is previously established from the global map, and the pose conversion relationship between the local maps is described in the pose conversion relationship, for example, the conversion relationship between the first map and the second map is X.
In a scene where there is no offset, pose information of a point D in the second map (i.e., each of the points D exists in the overlapping area) can be obtained from pose information of an element (exemplified by the point D) in the first map and the conversion relation X.
However, in the offset scene, according to the pose information of the point D in the first map and the conversion relation X, there is a certain error between the obtained pose information of the point D and the pose information of the point D in the second map, that is, the offset amount existing between the second map and the first map.
Based on the overlapping area between the two maps, pose information of the overlapping area in the second map and pose information of the overlapping area in the first map can be obtained.
The global coordinate system is a coordinate system established by any point of the global map; this point may be, for example, the center point of the global map.
Optionally, the pose conversion relationship includes: a coordinate system rotational relationship and a spatial translational relationship.
The coordinate system rotation relationship refers to a transformation relationship in which one coordinate system rotates around a fixed point or a fixed axis in a space corresponding to a two-dimensional or three-dimensional virtual scene. The spatial translation relationship refers to a transformation relationship in which one coordinate system is translated along a certain direction in a space corresponding to a two-dimensional or three-dimensional virtual scene.
For example, the pose information of point D in the first map is P, and the pose conversion relationship between the first map and the second map includes the coordinate-system rotation relationship Y1 and the spatial translation relationship Y2; normally, the pose information of point D in the second map is Q = P·Y1 + Y2.
In practice, however, because the headset exhibits pose drift, actually Q = P·Y1 + Y2 + F, where F is the offset.
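This offset can be computed directly from the formula above; in the sketch below, Y1 is treated as a 3×3 rotation matrix and Y2 as a translation vector, and the numeric check uses reference object A's values from the worked example further down (the identity rotation is an assumption of that example):

```python
import numpy as np

def offset_for_point(p_first, q_second, y1, y2):
    """Ideally Q = P @ Y1 + Y2; with headset drift Q = P @ Y1 + Y2 + F,
    hence F = Q - (P @ Y1 + Y2)."""
    predicted = p_first @ y1 + y2
    return q_second - predicted

y1 = np.eye(3)                      # no rotation in the worked example
y2 = np.array([0.8, 0.8, 0.0])      # spatial translation relationship
p = np.array([3.0, 4.0, 5.0])       # point as recorded in the first map
q = np.array([4.0, 5.0, 5.0])       # same point recorded in the second map
print(offset_for_point(p, q, y1, y2).round(6))   # -> [0.2 0.2 0. ]
```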
Optionally, the overlapping area includes at least one reference object, and fig. 4 is a schematic diagram for determining an offset according to an embodiment of the disclosure, as shown in fig. 4, where the schematic diagram includes: a first map 11 and a second map 12.
Between the first map 11 and the second map 12 there is also an overlapping area 14 (denoted 141 on the first map 11 and 142 on the second map 12). The overlapping areas 141 and 142 each contain a reference object A and a reference object B, and the reference objects may be elements in the map.
The implementation of this step 31 may be:
S1, determining the pose information of each reference object after transformation according to the pose information of the reference object in the first map and the pose conversion relation;
in this implementation, on the overlapping region 141, the pose information of the reference object a is (3, 4, 5), the pose information of the reference object B is (4,3,5), and the pose conversion relationship is (0.8,0.8,0); the pose information after conversion of the reference object a is (3.8,4.8,5), and the pose information after conversion of the reference object B is (4.8,3.8,5).
S2, determining the offset of the reference object between the second map and the first map according to the pose information of the reference object after transformation and the pose information of the reference object in the second map.
In this implementation, in the overlapping region 142, the pose information of reference object A is (4, 5, 5) and the pose information of reference object B is (5, 4, 5), while the transformed pose information of reference object A is (3.8, 4.8, 5) and the transformed pose information of reference object B is (4.8, 3.8, 5).
It can thus be obtained that the offset of reference object A between the second map and the first map is (0.2, 0.2, 0), and the offset of reference object B between the second map and the first map is (0.2, 0.2, 0).
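The two offsets can be reproduced with plain arithmetic; the dictionary layout and the rounding below are illustrative choices:

```python
relation = (0.8, 0.8, 0.0)   # current pose conversion relation (translation-only)
first_map  = {"A": (3.0, 4.0, 5.0), "B": (4.0, 3.0, 5.0)}   # overlap 141
second_map = {"A": (4.0, 5.0, 5.0), "B": (5.0, 4.0, 5.0)}   # overlap 142

offsets = {}
for name, p in first_map.items():
    transformed = [pi + ti for pi, ti in zip(p, relation)]                    # step S1
    offsets[name] = tuple(round(qi - ci, 6)
                          for qi, ci in zip(second_map[name], transformed))   # step S2

print(offsets)   # {'A': (0.2, 0.2, 0.0), 'B': (0.2, 0.2, 0.0)}
```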
Step 32, correcting the pose conversion relation according to the offset to obtain a corrected pose conversion relation;
in this step, after determining the offset between the second map and the first map (i.e., the error value between the pose information obtained by actually using the pose conversion relationship and the actual pose information), the pose conversion relationship may be corrected by using the offset, so that the corrected pose information may enable the same element in the overlapping area between the second map and the first map to be on the same point, and the pose drift of the headset in the motion process is eliminated.
Alternatively, the implementation of this step may be: correcting the pose conversion relation according to the offset corresponding to all the references to obtain a new pose conversion relation, and repeating the steps S1 and S2 until the sum of the offset corresponding to all the references is determined to be minimum, and taking the pose conversion relation corresponding to the minimum sum of the offset corresponding to all the references as the corrected pose conversion relation.
In one possible implementation, following the above example, when the offset of reference object A between the second map and the first map is (0.2, 0.2, 0) and the offset of reference object B between the second map and the first map is (0.2, 0.2, 0), the pose conversion relation (0.8, 0.8, 0) may be corrected by the offset of reference object A to obtain the new pose conversion relation (1, 1, 0). Repeating steps S1 and S2 at this point shows that the sum of the offsets corresponding to all the reference objects is 0, i.e., minimal, so the new pose conversion relation (1, 1, 0) is taken as the corrected pose conversion relation.
In another possible implementation, the average offset over the reference objects can be determined and used to correct the pose conversion relation.
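A sketch of this iterative correction, assuming a translation-only relation and using the mean offset over all reference objects as the per-iteration update; the stopping rule (stop once the summed offset magnitude no longer shrinks) is one possible reading of "until the sum of the offsets is minimal":

```python
def refine_relation(relation, first_map, second_map, max_iters=10):
    """Repeat steps S1/S2, shifting the relation by the mean offset,
    until the summed offset magnitude stops decreasing."""
    best, best_err = tuple(relation), float("inf")
    for _ in range(max_iters):
        offsets = []
        for name, p in first_map.items():
            transformed = [pi + ti for pi, ti in zip(p, best)]          # S1
            offsets.append([qi - ci
                            for qi, ci in zip(second_map[name], transformed)])  # S2
        err = sum(abs(c) for off in offsets for c in off)
        if err >= best_err:
            break
        best_err = err
        mean_off = [sum(col) / len(offsets) for col in zip(*offsets)]
        best = tuple(b + m for b, m in zip(best, mean_off))
    return best

print(refine_relation((0.8, 0.8, 0.0),
                      {"A": (3, 4, 5), "B": (4, 3, 5)},
                      {"A": (4, 5, 5), "B": (5, 4, 5)}))
# -> approximately (1.0, 1.0, 0.0), the corrected pose conversion relation
```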
And step 33, determining third pose information of the first map according to the corrected pose conversion relation and the second pose information.
In this step, after the corrected pose conversion relationship is obtained, the second pose information of the first map may be converted to obtain pose information corrected for the first map, that is, third pose information.
Optionally, the first map includes a plurality of targets, and the second pose information of the first map includes the pose information of each of the plurality of targets. The implementation may be: according to the corrected pose conversion relation, converting the pose information of each target to obtain the converted pose information of each target;
wherein the third pose information of the first map includes the converted pose information of all the targets. The targets may be elements in the map, namely the elements in the first map.
Under this implementation, following the above example, the corrected pose conversion relation (1, 1, 0) is obtained, and the targets may be target H (7, 1, 1), target I (7, 2, 1), and target J (7, 3, 1); the converted pose information of the targets is target H (8, 2, 1), target I (8, 3, 1), and target J (8, 4, 1).
That is, the third pose information of the first map includes: target H (8, 2, 1), target I (8, 3, 1), and target J (8, 4, 1).
Optionally, the pose conversion relationship between the first map and the second map includes the coordinate-system rotation relationship Y1 and the spatial translation relationship Y2, and the corrected pose conversion relationship comprises a coordinate-system rotation relationship Y11 and a spatial translation relationship Y21. At this time, the second pose information is rotated according to the coordinate-system rotation relationship Y11, and the rotation result is then translated according to the spatial translation relationship Y21, yielding the third pose information of the first map.
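Step 33 can then be sketched as a rotate-then-translate application of the corrected relation; with the identity rotation assumed for the running example, this reproduces the converted targets H, I, and J:

```python
import numpy as np

def apply_corrected_relation(points, y11, y21):
    """Rotate the second pose information by the corrected rotation
    relation Y11, then translate by the corrected translation Y21."""
    return points @ y11 + y21

targets = np.array([[7.0, 1.0, 1.0],    # target H
                    [7.0, 2.0, 1.0],    # target I
                    [7.0, 3.0, 1.0]])   # target J
y11 = np.eye(3)                         # identity: no rotation in the example
y21 = np.array([1.0, 1.0, 0.0])         # corrected relation (1, 1, 0)
print(apply_corrected_relation(targets, y11, y21))
# [[8. 2. 1.]
#  [8. 3. 1.]
#  [8. 4. 1.]]
```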
The embodiment of the disclosure provides a map merging method that determines the offset existing between the second map and the first map according to the pose information of the overlapping region in the second map, the pose information of the overlapping region in the first map, and the pose conversion relation under the global coordinate system, where the global coordinate system is a coordinate system established by any point of the global map and the global map includes the first map and the second map; corrects the pose conversion relation according to the offset to obtain a corrected pose conversion relation; and then determines the third pose information of the first map according to the corrected pose conversion relation and the second pose information. Under this technical scheme, the pose information of the overlapping areas acquired at different positions is used to determine the offset between the two areas, and the pose conversion relation is then adjusted, improving the accuracy of the pose conversion relation.
On the basis of the above method embodiment, fig. 5 is a schematic structural diagram of a map merging device according to an embodiment of the present disclosure, as shown in fig. 5, where the map merging device includes:
an obtaining unit 51, configured to obtain first pose information of a second map where a virtual object is currently located when it is detected that the virtual object in the virtual scene enters the first map;
a correction unit 52, configured to correct the second pose information of the first map according to the pose information of the overlapping area between the second map and the first map, so as to obtain third pose information of the first map, where the first map is any map where the virtual object is located before the current time;
the determining unit 53 is configured to determine target pose information of a target map obtained after the second map and the first map are combined according to the first pose information and the third pose information.
According to one or more embodiments of the present disclosure, the correction unit 52 is specifically configured to:
determining an offset existing between the second map and the first map according to pose information of a coincident region in the second map, pose information of a coincident region in the first map and a pose conversion relation under a global coordinate system, wherein the global coordinate system is a coordinate system established by any point of the global map, and the global map comprises: a first map and a second map;
correcting the pose conversion relation according to the offset to obtain a corrected pose conversion relation;
and determining third pose information of the first map according to the corrected pose conversion relation and the second pose information.
According to one or more embodiments of the present disclosure, the coincident region contains at least one reference;
accordingly, when determining the offset existing between the second map and the first map according to the pose information of the overlapping area in the second map, the pose information of the overlapping area in the first map, and the pose conversion relationship in the global coordinate system, the correction unit 52 is specifically configured to:
S1, determining the pose information of each reference object after transformation according to the pose information of the reference object in the first map and the pose conversion relation;
S2, determining the offset of the reference object between the second map and the first map according to the pose information of the reference object after transformation and the pose information of the reference object in the second map.
According to one or more embodiments of the present disclosure, when correcting the pose conversion relationship according to the offset to obtain a corrected pose conversion relationship, the correction unit 52 is specifically configured to:
and correcting the pose conversion relation according to the offset corresponding to all the references to obtain a new pose conversion relation, and repeating the steps S1 and S2 until the minimum sum of the offset corresponding to all the references is determined, and taking the pose conversion relation corresponding to the minimum sum of the offset corresponding to all the references as the corrected pose conversion relation.
According to one or more embodiments of the present disclosure, the first map includes a plurality of targets, and the second pose information of the first map includes pose information of each of the plurality of targets;
accordingly, when determining the third pose information of the first map according to the corrected pose conversion relation and the second pose information, the correction unit 52 is specifically configured to:
according to the corrected pose conversion relation, converting the pose information of each object to obtain the pose information of each object after conversion;
wherein the third pose information of the first map includes: and the pose information of all the target objects after being converted respectively.
According to one or more embodiments of the present disclosure, the pose conversion relationship includes: a coordinate system rotational relationship and a spatial translational relationship.
The map merging device provided in the embodiments of the present disclosure has similar implementation principles and technical effects to those of the above embodiments, and will not be described herein.
In order to achieve the above embodiments, the embodiments of the present disclosure further provide an electronic device. Fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the disclosure, and referring to fig. 6, the electronic device may be a terminal device.
The terminal device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (Personal Digital Assistant, PDA for short), a tablet (Portable Android Device, PAD for short), a portable multimedia player (Portable Media Player, PMP for short), an in-vehicle terminal (e.g., an in-vehicle navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 6 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, the electronic apparatus may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 61 that may perform various appropriate actions and processes according to a program stored in a read-only memory (Read Only Memory, ROM) 62 or a program loaded from a storage device 68 into a random access memory (Random Access Memory, RAM) 63. The RAM 63 also stores various programs and data required for the operation of the electronic apparatus. The processing device 61, the ROM 62, and the RAM 63 are connected to each other via a bus 64. An input/output (I/O) interface 65 is also connected to the bus 64.
In general, the following devices may be connected to the I/O interface 65: input devices 66 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 67 including, for example, a liquid crystal display (Liquid Crystal Display, LCD for short), a speaker, a vibrator, and the like; storage devices 68 including, for example, magnetic tape, hard disk, etc.; and communication means 69. The communication means 69 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 shows an electronic device having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 69, or from the storage means 68, or from the ROM 62. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing means 61.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above-described embodiments.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (Local Area Network, LAN for short) or a wide area network (Wide Area Network, WAN for short), or it may be connected to an external computer (e.g., connected via the internet using an internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, according to one or more embodiments of the present disclosure, there is provided a map merging method, including:
when a virtual object in a virtual scene is detected to enter a first map, acquiring first pose information of a second map where the virtual object is currently located;
correcting the second pose information of the first map according to the pose information of the overlapping area between the second map and the first map to obtain third pose information of the first map, wherein the first map is any map where the virtual object is located before the current moment;
and determining target pose information of a target map obtained after the second map and the first map are combined according to the first pose information and the third pose information.
According to one or more embodiments of the present disclosure, the correcting the second pose information of the first map according to the pose information of the overlapping area between the second map and the first map, to obtain third pose information of the first map, includes:
determining an offset existing between the second map and the first map according to pose information of a coincident region in the second map, pose information of a coincident region in the first map and a pose conversion relation under a global coordinate system, wherein the global coordinate system is a coordinate system established by any point of the global map, and the global map comprises: the first map and the second map;
correcting the pose conversion relation according to the offset to obtain a corrected pose conversion relation;
and determining third pose information of the first map according to the corrected pose conversion relation and the second pose information.
According to one or more embodiments of the present disclosure, the coincident region contains at least one reference;
correspondingly, the determining the offset existing between the second map and the first map according to the pose information of the overlapping area in the second map, the pose information of the overlapping area in the first map, and the pose conversion relationship under the global coordinate system includes:
S1, for each reference object, determining pose information of the reference object after transformation according to pose information of the reference object in the first map and the pose conversion relation;
S2, determining the offset of the reference object between the second map and the first map according to the pose information of the reference object after transformation and the pose information of the reference object in the second map.
According to one or more embodiments of the present disclosure, the correcting the pose conversion relationship according to the offset to obtain a corrected pose conversion relationship includes:
correcting the pose conversion relation according to the offsets corresponding to all the reference objects to obtain a new pose conversion relation, and repeating steps S1 and S2 until the sum of the offsets corresponding to all the reference objects is determined to be minimal, and taking the pose conversion relation corresponding to that minimal sum as the corrected pose conversion relation.
According to one or more embodiments of the present disclosure, the first map includes a plurality of targets, and the second pose information of the first map includes pose information of each of the plurality of targets;
correspondingly, the determining third pose information of the first map according to the corrected pose conversion relation and the second pose information includes:
according to the corrected pose conversion relation, converting the pose information of each object to obtain the pose information of each object after conversion;
wherein the third pose information of the first map includes: and the pose information of all the target objects after being converted respectively.
According to one or more embodiments of the present disclosure, the pose conversion relationship includes: a coordinate system rotational relationship and a spatial translational relationship.
In a second aspect, according to one or more embodiments of the present disclosure, there is provided a map merging apparatus including:
an acquisition unit, used for acquiring first pose information of a second map where a virtual object is currently located when the virtual object in a virtual scene is detected to enter the first map;
the correcting unit is used for correcting the second pose information of the first map according to the pose information of the overlapping area between the second map and the first map to obtain third pose information of the first map, wherein the first map is any map where the virtual object is located before the current moment;
and the determining unit is used for determining the target pose information of the target map obtained after the second map and the first map are combined according to the first pose information and the third pose information.
According to one or more embodiments of the present disclosure, the correction unit is specifically configured to:
determining an offset existing between the second map and the first map according to pose information of a coincident region in the second map, pose information of a coincident region in the first map and a pose conversion relation under a global coordinate system, wherein the global coordinate system is a coordinate system established by any point of the global map, and the global map comprises: the first map and the second map;
correcting the pose conversion relation according to the offset to obtain a corrected pose conversion relation;
and determining third pose information of the first map according to the corrected pose conversion relation and the second pose information.
According to one or more embodiments of the present disclosure, the coincident region contains at least one reference;
correspondingly, the correction unit determines the offset existing between the second map and the first map according to the pose information of the overlapping area in the second map, the pose information of the overlapping area in the first map and the pose conversion relation under the global coordinate system, and is specifically configured to:
S1, for each reference object, determining pose information of the reference object after transformation according to pose information of the reference object in the first map and the pose conversion relation;
S2, determining the offset of the reference object between the second map and the first map according to the pose information of the reference object after transformation and the pose information of the reference object in the second map.
According to one or more embodiments of the present disclosure, the correction unit corrects the pose conversion relationship according to the offset, to obtain a corrected pose conversion relationship, and is specifically configured to:
correcting the pose conversion relation according to the offsets corresponding to all the reference objects to obtain a new pose conversion relation, and repeating steps S1 and S2 until the sum of the offsets corresponding to all the reference objects is determined to be minimal, and taking the pose conversion relation corresponding to that minimal sum as the corrected pose conversion relation.
According to one or more embodiments of the present disclosure, the first map includes a plurality of targets, and the second pose information of the first map includes pose information of each of the plurality of targets;
correspondingly, the correction unit is used for determining third pose information of the first map according to the corrected pose conversion relation and the second pose information, and is specifically used for:
according to the corrected pose conversion relation, converting the pose information of each object to obtain the pose information of each object after conversion;
wherein the third pose information of the first map includes: and the pose information of all the target objects after being converted respectively.
According to one or more embodiments of the present disclosure, the pose conversion relationship includes: a coordinate system rotational relationship and a spatial translational relationship.
In a third aspect, according to one or more embodiments of the present disclosure, there is provided an electronic device comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored by the memory, causing the at least one processor to perform the map merging method as described above in the first aspect and the various possible designs of the first aspect.
In a fourth aspect, according to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the map merging method as described above in the first aspect and the various possible designs of the first aspect.
In a fifth aspect, according to one or more embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the map merging method according to the first aspect and the various possible designs of the first aspect as described above.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. Persons skilled in the art will appreciate that the scope of the disclosure is not limited to the specific combinations of the features described above, and also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example embodiments formed by substituting the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (10)

1. A map merging method, characterized by comprising:
when a virtual object in a virtual scene is detected to enter a first map, acquiring first pose information of a second map where the virtual object is currently located;
correcting the second pose information of the first map according to the pose information of the overlapping area between the second map and the first map to obtain third pose information of the first map, wherein the first map is any map where the virtual object is located before the current moment;
and determining target pose information of a target map obtained after the second map and the first map are combined according to the first pose information and the third pose information.
2. The method according to claim 1, wherein the correcting the second pose information of the first map according to the pose information of the overlapping area between the second map and the first map to obtain the third pose information of the first map includes:
determining an offset existing between the second map and the first map according to pose information of a coincident region in the second map, pose information of a coincident region in the first map and a pose conversion relation under a global coordinate system, wherein the global coordinate system is a coordinate system established by any point of the global map, and the global map comprises: the first map and the second map;
correcting the pose conversion relation according to the offset to obtain a corrected pose conversion relation;
and determining third pose information of the first map according to the corrected pose conversion relation and the second pose information.
3. The method of claim 2, wherein the coincident region comprises at least one reference;
correspondingly, the determining the offset existing between the second map and the first map according to the pose information of the overlapping area in the second map, the pose information of the overlapping area in the first map, and the pose conversion relationship under the global coordinate system includes:
S1, for each reference object, determining pose information of the reference object after transformation according to pose information of the reference object in the first map and the pose conversion relation;
S2, determining the offset of the reference object between the second map and the first map according to the pose information of the reference object after transformation and the pose information of the reference object in the second map.
4. A method according to claim 3, wherein said correcting said pose conversion relationship according to said offset to obtain a corrected pose conversion relationship comprises:
correcting the pose conversion relation according to the offsets corresponding to all the reference objects to obtain a new pose conversion relation, and repeating steps S1 and S2 until the sum of the offsets corresponding to all the reference objects is determined to be minimal, and taking the pose conversion relation corresponding to that minimal sum as the corrected pose conversion relation.
5. The method according to any one of claims 2-4, wherein the first map comprises a plurality of targets, and the second pose information of the first map comprises pose information of each of the plurality of targets;
correspondingly, the determining third pose information of the first map according to the corrected pose conversion relation and the second pose information includes:
according to the corrected pose conversion relation, converting the pose information of each object to obtain the pose information of each object after conversion;
wherein the third pose information of the first map includes: and the pose information of all the target objects after being converted respectively.
6. The method of any one of claims 2-4, wherein the pose conversion relationship comprises: a coordinate system rotational relationship and a spatial translational relationship.
7. A map merging apparatus, characterized in that the apparatus comprises:
an acquisition unit, used for acquiring first pose information of a second map where a virtual object is currently located when the virtual object in a virtual scene is detected to enter the first map;
the correcting unit is used for correcting the second pose information of the first map according to the pose information of the overlapping area between the second map and the first map to obtain third pose information of the first map, wherein the first map is any map where the virtual object is located before the current moment;
and the determining unit is used for determining the target pose information of the target map obtained after the second map and the first map are combined according to the first pose information and the third pose information.
8. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method of any one of the preceding claims 1 to 6.
9. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor are adapted to carry out the method of any one of the preceding claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any of the preceding claims 1 to 6.
CN202311792945.7A 2023-12-22 2023-12-22 Map merging method and device Pending CN117764848A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311792945.7A CN117764848A (en) 2023-12-22 2023-12-22 Map merging method and device

Publications (1)

Publication Number Publication Date
CN117764848A (en) 2024-03-26

Family

ID=90321614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311792945.7A Pending CN117764848A (en) 2023-12-22 2023-12-22 Map merging method and device

Country Status (1)

Country Link
CN (1) CN117764848A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination