CN111009038B - Space labeling method based on SLAM - Google Patents
- Publication number
- CN111009038B CN111009038B CN201911217292.3A CN201911217292A CN111009038B CN 111009038 B CN111009038 B CN 111009038B CN 201911217292 A CN201911217292 A CN 201911217292A CN 111009038 B CN111009038 B CN 111009038B
- Authority
- CN
- China
- Prior art date
- Legal status
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
- G06T3/04—
Abstract
The invention discloses a space labeling method based on SLAM (Simultaneous Localization and Mapping). A two-dimensional picture carrying a remote annotation is transformed into space using SLAM technology, so that the two-dimensional plane content annotated by an expert is placed accurately at the corresponding position in a three-dimensional virtual space. The three-dimensional virtual space is then rendered on a glasses device, whose imaging is superimposed on the real scene in the wearer's view, so that the effect seen by the end user coincides exactly with the position the expert annotated and does not shift as the user's position changes. The SLAM-based space labeling method converts annotation content from a two-dimensional plane overlay into a placement in three-dimensional space, superimposes it accurately onto the real three-dimensional space, and follows the annotated target in real time. The annotated position and information can be seen directly, with no need for textual or verbal description.
Description
Technical Field
The invention relates to the field of labeling methods. More particularly, the present invention relates to a SLAM-based spatial annotation method.
Background
In everyday life or industrial production, when on-site personnel encounter problems they cannot solve themselves, they may request a remote expert to assist with the operation through a video call. When the expert gives directions to the other party about a problem or an operation, a position may be hard to describe accurately with words alone, and may need to be conveyed by drawing marks on the picture.
Among existing annotation approaches, one draws the annotation directly on the top layer of the picture, so the annotation never interacts with the real environment; another moves the annotation content along with the camera, but the annotated position then drifts far from where the expert actually marked.
Disclosure of Invention
The invention aims to provide a SLAM-based space labeling method that places the two-dimensional plane content annotated by an expert accurately at the corresponding position in a three-dimensional virtual space.
To achieve these objects and other advantages, and in accordance with the purpose of the invention, there is provided a SLAM-based space labeling method, comprising the following steps:
S1, acquiring space feature points through a real camera based on SLAM technology, and constructing a virtual three-dimensional coordinate system;
S2, creating a virtual camera corresponding to the real camera in the three-dimensional coordinate system, and correcting the position of the virtual camera in the three-dimensional coordinate system in real time through SLAM technology, so that the position of the virtual camera in the three-dimensional coordinate system corresponds at every moment to the position of the real camera in real space;
S3, recording the position and rotation angle of the virtual camera while capturing a real picture with the real camera, annotating the real picture, and then saving the annotated content as a two-dimensional picture with a transparency channel;
S4, establishing a plane coordinate system in the two-dimensional picture, cropping the picture along its edges according to the transparency channel to obtain an annotation picture of the rectangular area containing the annotated content, obtaining the plane coordinate points A and B at the upper-left and lower-right corners of the annotation picture, and uniformly sampling the rectangular area of the annotation picture at equal intervals in the plane coordinate system to obtain a screen-coordinate array;
S5, restoring the position and orientation of the virtual camera from the position and rotation angle recorded in S3, traversing the screen-coordinate array obtained in S4, casting rays from the virtual camera into the three-dimensional coordinate system one coordinate at a time to obtain a group of space coordinates where the rays intersect the reconstructed scene, and computing the space coordinates of the intersection point closest to the virtual camera;
S6, creating an infinite first plane in the three-dimensional coordinate system at the position closest to the virtual camera, and rotating the displaced first plane according to the rotation angle of the virtual camera to obtain a second plane; casting rays from the virtual camera through the plane coordinate points A and B obtained in S4 to obtain the space coordinate points C and D where the two rays intersect the second plane, the center point between C and D being the position at which the annotation content is placed in the virtual space; creating a virtual-camera coordinate system with the virtual camera as origin, the Z axis of the virtual-camera coordinate system pointing in the same direction as the Z axis of the three-dimensional coordinate system; computing the space coordinate points E and F of C and D in the virtual-camera coordinate system, the absolute differences between E and F along the X and Y axes giving the scaling of the annotation content; and then scaling the annotation content accordingly and moving it to its placement position in the virtual space;
S7, in the Unity3D environment, taking the scaling, coordinates and rotation of the annotation picture in space computed in S6 and creating an object with a MeshFilter and a MeshRenderer; creating a Material, assigning it a Shader that supports a transparency channel, and setting the Material's texture map to the annotation picture; and assigning the Material to the MeshRenderer, rendering with the Unity3D rendering system, and displaying on the glasses screen, through which the final composited image can be seen.
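The equal-interval sampling in step S4 can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; the corner values, the step size, and the assumption that screen y grows downward are choices made here for demonstration.

```python
# Hypothetical sketch of step S4: uniformly sample the rectangle spanned by the
# annotation's upper-left corner A and lower-right corner B at equal intervals,
# producing the screen-coordinate array that step S5 later casts rays through.
def sample_rectangle(a, b, step=20):
    """Return (x, y) screen coordinates covering the rectangle with corners
    a (upper-left) and b (lower-right), sampled every `step` pixels."""
    ax, ay = a
    bx, by = b
    points = []
    y = ay
    while y <= by:          # screen y assumed to grow downward
        x = ax
        while x <= bx:
            points.append((x, y))
            x += step
        y += step
    return points

coords = sample_rectangle((100, 80), (300, 200), step=50)
print(len(coords))  # 5 x-samples * 3 y-samples = 15
```

In practice the step would be chosen as a trade-off: a denser grid gives a more reliable nearest-intersection estimate in S5 at the cost of more ray casts.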
According to the invention, through SLAM-based spatial transformation, the two-dimensional plane content annotated by the expert is placed accurately at the corresponding position in the three-dimensional virtual space. The virtual space is then rendered on a glasses device, whose imaging is superimposed on the real scene in the wearer's view, so that the effect seen by the end user coincides exactly with the position the expert annotated and does not shift as the user's position changes. The annotation content is thus converted from a two-dimensional plane overlay into a placement in three-dimensional space, superimposed accurately onto the real three-dimensional space, and follows the annotated target in real time. The annotated position and information can be seen directly, with no need for textual or verbal description.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Drawings
FIG. 1 is a flow chart of a SLAM-based spatial annotation method according to the present invention;
FIG. 2 is a schematic diagram of annotating the real picture according to an embodiment of the invention;
FIG. 3 is a schematic diagram of uniformly sampling a rectangle formed by a start position and an end position of the real picture at equal intervals according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of obtaining a set of spatial coordinates intersecting the three-dimensional coordinate system in an embodiment of the invention;
FIG. 5 is a schematic representation of the spatial coordinates of two points intersecting the second plane according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of converting an annotation map into a spatial annotation in accordance with an embodiment of the invention.
Detailed Description
The present invention is described in further detail below with reference to the drawings to enable those skilled in the art to practice the invention by referring to the description.
It should be noted that, in the description of the present invention, the terms "transverse", "longitudinal", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention.
As shown in fig. 1 to 6, an embodiment of the present invention provides a space labeling method based on SLAM, including the following steps:
S1, acquiring space feature points through a real camera based on SLAM technology, and constructing a virtual three-dimensional coordinate system;
S2, creating a virtual camera corresponding to the real camera in the three-dimensional coordinate system, and correcting the position of the virtual camera in the three-dimensional coordinate system in real time through SLAM technology, so that the position of the virtual camera in the three-dimensional coordinate system corresponds at every moment to the position of the real camera in real space;
S3, recording the position and rotation angle of the virtual camera while capturing a real picture with the real camera, annotating the real picture as shown in FIG. 2, and then saving the annotated content as a two-dimensional picture with a transparency channel;
S4, establishing a plane coordinate system in the two-dimensional picture, cropping the picture along its edges according to the transparency channel to obtain an annotation picture of the rectangular area containing the annotated content, obtaining the plane coordinate points A and B at the upper-left and lower-right corners of the annotation picture, and uniformly sampling the rectangular area of the annotation picture at equal intervals in the plane coordinate system, as shown in FIG. 3, to obtain a screen-coordinate array;
S5, restoring the position and orientation of the virtual camera from the position and rotation angle recorded in S3, traversing the screen-coordinate array obtained in S4, casting rays from the virtual camera into the three-dimensional coordinate system one coordinate at a time to obtain a group of space coordinates where the rays intersect the reconstructed scene, and computing the space coordinates of the intersection point closest to the virtual camera, as shown in FIG. 4;
S6, creating an infinite first plane in the three-dimensional coordinate system at the position closest to the virtual camera, and rotating the displaced first plane according to the rotation angle of the virtual camera to obtain a second plane; casting rays from the virtual camera through the plane coordinate points A and B obtained in S4 to obtain the space coordinate points C and D where the two rays intersect the second plane, the center point between C and D being the position at which the annotation content is placed in the virtual space; creating a virtual-camera coordinate system with the virtual camera as origin, the Z axis of the virtual-camera coordinate system pointing in the same direction as the Z axis of the three-dimensional coordinate system; computing the space coordinate points E and F of C and D in the virtual-camera coordinate system, the absolute differences between E and F along the X and Y axes giving the scaling of the annotation content, as shown in FIG. 6; and then scaling the annotation content accordingly and moving it to its placement position in the virtual space;
S7, in the Unity3D environment, taking the scaling, coordinates and rotation of the annotation picture in space computed in S6 and creating an object with a MeshFilter and a MeshRenderer; creating a Material, assigning it a Shader that supports a transparency channel, and setting the Material's texture map to the annotation picture; and assigning the Material to the MeshRenderer, rendering with the Unity3D rendering system, and displaying on the glasses screen, through which the final composited image can be seen.
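Step S5's choice of anchor point reduces to a closest-point search once the ray hits are available. The following Python sketch illustrates that selection; the names and coordinates are illustrative assumptions, and how rays are actually cast against the SLAM-reconstructed geometry is engine-specific and not shown here.

```python
# Hedged sketch of step S5: after restoring the recorded camera pose, each
# screen coordinate is unprojected as a ray; the hits against the reconstructed
# scene form a set of space coordinates, and the annotation plane is anchored
# at the hit nearest the virtual camera.
import math

def nearest_hit(camera_pos, hits):
    """Return the hit point with the smallest Euclidean distance to the camera."""
    def dist(p):
        return math.sqrt(sum((c - h) ** 2 for c, h in zip(camera_pos, p)))
    return min(hits, key=dist)

cam = (0.0, 1.5, 0.0)
hits = [(0.0, 1.5, 3.0), (0.5, 1.0, 1.2), (2.0, 1.5, 4.0)]
print(nearest_hit(cam, hits))  # (0.5, 1.0, 1.2)
```

Anchoring at the nearest hit means the annotation plane sits on the closest real surface the annotation covers, which is why the overlay later appears attached to the annotated object rather than floating behind it.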
S1, acquiring space feature points through a real camera based on SLAM technology, and constructing a virtual three-dimensional coordinate system;
S2, creating a virtual camera corresponding to the real camera in the three-dimensional coordinate system, and correcting the position of the virtual camera in the three-dimensional coordinate system in real time through SLAM technology, so that the position of the virtual camera in the three-dimensional coordinate system corresponds at every moment to the position of the real camera in real space;
S3, recording the position and rotation angle of the virtual camera while capturing a real picture with the real camera, annotating the real picture as shown in FIG. 2, and then saving the annotated content as a two-dimensional picture with a transparency channel;
S4, cropping the two-dimensional picture along its edges using the transparency channel to obtain an annotation picture of the rectangular area containing the annotated content; computing the coordinates of the start and end positions of the annotation picture relative to the two-dimensional picture (the start and end positions being two diagonally opposite corners of the annotation picture); obtaining the coordinates of the start and end positions within the real picture from the relative position of the annotation picture within the real picture; and uniformly sampling the rectangle formed by the start and end positions of the real picture at equal intervals, as shown in FIG. 3, to obtain a screen-coordinate array;
S5, restoring the position and orientation of the virtual camera from the position and rotation angle recorded in S3, traversing the screen-coordinate array obtained in S4, casting rays from the virtual camera into the three-dimensional coordinate system one coordinate at a time to obtain a group of space coordinates where the rays intersect the reconstructed scene, and computing the space coordinates of the intersection point closest to the virtual camera, as shown in FIG. 4;
S6, creating an infinite first plane, displacing it according to the space coordinates of the point closest to the virtual camera, and rotating the displaced first plane according to the rotation angle of the virtual camera to obtain a second plane; obtaining the space coordinates of the start and end positions of the real picture from the relative position of the annotation picture within the real picture; casting rays from the virtual camera through the start and end positions of the two-dimensional picture to obtain the space coordinates of the two points where the rays intersect the second plane, as shown in FIG. 5; the center point of the two points is the position at which the annotation content is placed in the virtual space, and the absolute difference of the two points' space coordinates is the scaling of the annotation content; the annotation content is then scaled accordingly and moved to its placement position in the virtual space, converting the plane annotation into a spatial annotation, as shown in FIG. 6.
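The geometry of step S6 — intersecting the corner rays with the second plane, then taking the midpoint of the two hits as the placement position and their per-axis absolute differences as the scale — can be sketched in Python. The point-normal plane representation and all names below are illustrative assumptions, not the patent's implementation.

```python
# Illustrative sketch of step S6: intersect rays through the annotation's two
# corner points with the displaced-and-rotated second plane, then derive the
# placement position (midpoint of the hits) and the scale (per-axis |C - D|).
def ray_plane(origin, direction, plane_point, plane_normal):
    """Intersect the ray origin + t*direction with a plane in point-normal
    form; return the hit point, or None if the ray is parallel to the plane."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None
    t = dot([p - o for p, o in zip(plane_point, origin)], plane_normal) / denom
    return tuple(o + t * d for o, d in zip(origin, direction))

def placement_and_scale(c, d):
    """Midpoint of hits C and D (placement position) and per-axis |C - D| (scale)."""
    center = tuple((a + b) / 2 for a, b in zip(c, d))
    scale = tuple(abs(a - b) for a, b in zip(c, d))
    return center, scale

# Rays from a camera at the origin through the two annotation corners,
# hitting a plane z = 2 that faces the camera:
C = ray_plane((0, 0, 0), (-0.5, 0.5, 1.0), (0, 0, 2), (0, 0, 1))
D = ray_plane((0, 0, 0), (0.5, -0.5, 1.0), (0, 0, 2), (0, 0, 1))
print(placement_and_scale(C, D))  # ((0.0, 0.0, 2.0), (2.0, 2.0, 0.0))
```

Because both hits lie on the same camera-facing plane, their midpoint and extents fully determine where and how large the annotation quad must be for its projection to coincide with the expert's original 2D drawing.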
Although embodiments of the invention have been disclosed above, the invention is not limited to the details and embodiments shown; it is suited to various fields of use, and further modifications may readily be made by those skilled in the art. The invention is therefore not limited to the particular details and examples shown and described herein, provided the general concepts defined by the claims and their equivalents are not departed from.
Claims (1)
1. A SLAM-based space labeling method, characterized by comprising the following steps:
S1, acquiring space feature points through a real camera based on SLAM technology, and constructing a virtual three-dimensional coordinate system;
S2, creating a virtual camera corresponding to the real camera in the three-dimensional coordinate system, and correcting the position of the virtual camera in the three-dimensional coordinate system in real time through SLAM technology, so that the position of the virtual camera in the three-dimensional coordinate system corresponds at every moment to the position of the real camera in real space;
S3, recording the position and rotation angle of the virtual camera while capturing a real picture with the real camera, annotating the real picture, and then saving the annotated content as a two-dimensional picture with a transparency channel;
S4, establishing a plane coordinate system in the two-dimensional picture, cropping the picture along its edges according to the transparency channel to obtain an annotation picture of the rectangular area containing the annotated content, obtaining the plane coordinate points A and B at the upper-left and lower-right corners of the annotation picture, and uniformly sampling the rectangular area of the annotation picture at equal intervals in the plane coordinate system to obtain a screen-coordinate array;
S5, restoring the position and orientation of the virtual camera from the position and rotation angle recorded in S3, traversing the screen-coordinate array obtained in S4, casting rays from the virtual camera into the three-dimensional coordinate system one coordinate at a time to obtain a group of space coordinates where the rays intersect the reconstructed scene, and computing the space coordinates of the intersection point closest to the virtual camera;
S6, creating an infinite first plane in the three-dimensional coordinate system at the position closest to the virtual camera, and rotating the displaced first plane according to the rotation angle of the virtual camera to obtain a second plane; casting rays from the virtual camera through the plane coordinate points A and B obtained in S4 to obtain the space coordinate points C and D where the two rays intersect the second plane, the center point between C and D being the position at which the annotation content is placed in the virtual space; creating a virtual-camera coordinate system with the virtual camera as origin, the Z axis of the virtual-camera coordinate system pointing in the same direction as the Z axis of the three-dimensional coordinate system; computing the space coordinate points E and F of C and D in the virtual-camera coordinate system, the absolute differences between E and F along the X and Y axes giving the scaling of the annotation content; and then scaling the annotation content accordingly and moving it to its placement position in the virtual space;
S7, in the Unity3D environment, taking the scaling, coordinates and rotation of the annotation picture in space computed in S6 and creating an object with a MeshFilter and a MeshRenderer; creating a Material, assigning it a Shader that supports a transparency channel, and setting the Material's texture map to the annotation picture; and assigning the Material to the MeshRenderer, rendering with the Unity3D rendering system, and displaying on the glasses screen, through which the final composited image can be seen.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911217292.3A CN111009038B (en) | 2019-12-03 | 2019-12-03 | Space labeling method based on SLAM |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911217292.3A CN111009038B (en) | 2019-12-03 | 2019-12-03 | Space labeling method based on SLAM |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111009038A CN111009038A (en) | 2020-04-14 |
CN111009038B true CN111009038B (en) | 2023-12-29 |
Family
ID=70112665
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911217292.3A Active CN111009038B (en) | 2019-12-03 | 2019-12-03 | Space labeling method based on SLAM |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111009038B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112686948A (en) * | 2020-12-25 | 2021-04-20 | 北京像素软件科技股份有限公司 | Editor operation method and device and electronic equipment |
CN112950755A (en) * | 2021-03-23 | 2021-06-11 | 广东电网有限责任公司 | Security fence arrangement method and device |
CN113066007A (en) * | 2021-06-03 | 2021-07-02 | 潍坊幻视软件科技有限公司 | Method for indicating target position in 3D space |
CN115268658A (en) * | 2022-09-30 | 2022-11-01 | 苏芯物联技术(南京)有限公司 | Multi-party remote space delineation marking method based on augmented reality |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20140033868A (en) * | 2012-09-11 | 2014-03-19 | 한국과학기술원 | Method and apparatus for environment modeling for ar |
KR20150076574A (en) * | 2013-12-27 | 2015-07-07 | 한청훈 | Method and apparatus for space touch |
CN108830894A (en) * | 2018-06-19 | 2018-11-16 | 亮风台(上海)信息科技有限公司 | Remote guide method, apparatus, terminal and storage medium based on augmented reality |
CN109584295A (en) * | 2017-09-29 | 2019-04-05 | 阿里巴巴集团控股有限公司 | The method, apparatus and system of automatic marking are carried out to target object in image |
2019
- 2019-12-03 CN CN201911217292.3A patent/CN111009038B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20140033868A (en) * | 2012-09-11 | 2014-03-19 | 한국과학기술원 | Method and apparatus for environment modeling for ar |
KR20150076574A (en) * | 2013-12-27 | 2015-07-07 | 한청훈 | Method and apparatus for space touch |
CN109584295A (en) * | 2017-09-29 | 2019-04-05 | 阿里巴巴集团控股有限公司 | The method, apparatus and system of automatic marking are carried out to target object in image |
CN108830894A (en) * | 2018-06-19 | 2018-11-16 | 亮风台(上海)信息科技有限公司 | Remote guide method, apparatus, terminal and storage medium based on augmented reality |
Non-Patent Citations (1)
Title |
---|
Semantic map generation based on deep learning (基于深度学习的语义地图生成); Li Jiarui (李佳芮); 电子制作 (24); full text *
Also Published As
Publication number | Publication date |
---|---|
CN111009038A (en) | 2020-04-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111009038B (en) | Space labeling method based on SLAM | |
CN110335292B (en) | Method, system and terminal for realizing simulation scene tracking based on picture tracking | |
WO2019242262A1 (en) | Augmented reality-based remote guidance method and device, terminal, and storage medium | |
US11200457B2 (en) | System and method using augmented reality for efficient collection of training data for machine learning | |
JP6264834B2 (en) | Guide method, information processing apparatus, and guide program | |
WO2019062619A1 (en) | Method, apparatus and system for automatically labeling target object within image | |
CN110009561A (en) | A kind of monitor video target is mapped to the method and system of three-dimensional geographical model of place | |
EP3455686A1 (en) | Systems and methods for initializing a robot to autonomously travel a trained route | |
CN108335365A (en) | A kind of image-guided virtual reality fusion processing method and processing device | |
CN109887003A (en) | A kind of method and apparatus initialized for carrying out three-dimensional tracking | |
CN104656893B (en) | The long-distance interactive control system and method in a kind of information physical space | |
TWI553590B (en) | Method and device for retargeting a 3d content | |
CN111862333A (en) | Content processing method and device based on augmented reality, terminal equipment and storage medium | |
CN104680532A (en) | Object labeling method and device | |
CN104867113A (en) | Method and system for perspective distortion correction of image | |
WO2019164502A1 (en) | Methods, devices and computer program products for generating 3d models | |
CN111160360A (en) | Image recognition method, device and system | |
CN110544315B (en) | Virtual object control method and related equipment | |
US11138743B2 (en) | Method and apparatus for a synchronous motion of a human body model | |
Moeslund et al. | A natural interface to a virtual environment through computer vision-estimated pointing gestures | |
EP3825804A1 (en) | Map construction method, apparatus, storage medium and electronic device | |
CN114029952A (en) | Robot operation control method, device and system | |
CN105072433A (en) | Depth perception mapping method applied to head track virtual reality system | |
CN110910484A (en) | SLAM-based object mapping method from two-dimensional image to three-dimensional real scene | |
CN115268658A (en) | Multi-party remote space delineation marking method based on augmented reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||