CN111009038A - Space labeling method based on SLAM - Google Patents

Space labeling method based on SLAM

Info

Publication number
CN111009038A
Authority
CN
China
Prior art keywords
space
virtual camera
coordinate system
dimensional
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911217292.3A
Other languages
Chinese (zh)
Other versions
CN111009038B (en)
Inventor
胡鹏程
裴科峰
房文博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Shichang Information Technology Co Ltd
Original Assignee
Shanghai Shichang Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Shichang Information Technology Co Ltd filed Critical Shanghai Shichang Information Technology Co Ltd
Priority to CN201911217292.3A
Publication of CN111009038A
Application granted
Publication of CN111009038B
Active legal status
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G06T3/04

Abstract

The invention discloses a space labeling method based on SLAM. A remotely labeled two-dimensional picture is converted into space based on SLAM technology, so that the two-dimensional planar content labeled by an expert is accurately placed at the corresponding position in a three-dimensional virtual space. The three-dimensional virtual space is rendered by glasses equipment, and the glasses imaging effect superimposes it on the real scene seen by the human eye, so the effect seen by the user is completely consistent with the position labeled by the expert and does not change as the user's position changes. The space labeling method based on SLAM provided by the invention converts the labeled content from a two-dimensional planar annotation into a three-dimensional spatial placement that is accurately superimposed on the real three-dimensional space and follows the labeling target in real time. The labeled position and information can be seen intuitively, without the need for textual or verbal description.

Description

Space labeling method based on SLAM
Technical Field
The invention relates to the field of labeling methods. More particularly, the present invention relates to a spatial labeling method based on SLAM.
Background
In daily life and industrial production, some problems cannot be solved on site, and the expert who could solve them is far away, so the remote expert is asked to assist in completing the operation through a video call. When the expert gives guidance to the other party, problems or operations are encountered whose positions cannot be described accurately through simple verbal expression, and the guidance may need to be completed by drawing, labeling and the like.
Among existing labeling approaches, one is to draw the label directly on the top layer of the picture, in which case the labeled content never interacts with the real environment; another is to let the labeled content move along with the camera, but the displayed position then deviates considerably from the position the expert actually labeled.
Disclosure of Invention
The invention aims to provide a space labeling method based on SLAM, which accurately places the two-dimensional planar content labeled by an expert at the corresponding position in a three-dimensional virtual space.
To achieve these objects and other advantages in accordance with the purpose of the invention, there is provided a spatial labeling method based on SLAM, comprising the steps of:
S1, acquiring spatial feature points through a real camera based on SLAM technology, and constructing a virtual three-dimensional coordinate system;
S2, creating a virtual camera corresponding to the real camera in the three-dimensional coordinate system, and correcting the position of the virtual camera in the three-dimensional coordinate system in real time through SLAM technology, so that at every moment the position of the virtual camera in the three-dimensional coordinate system corresponds to the position of the real camera in real space;
S3, recording the position and rotation angle of the virtual camera while acquiring a real picture shot by the real camera, labeling the real picture, and saving the labeled content as a two-dimensional picture with a transparent channel;
S4, establishing a plane coordinate system in the two-dimensional picture, cropping the two-dimensional picture according to the transparent channel to obtain a labeled picture covering the rectangular area where the labeled content is located, obtaining the plane coordinate points A and B of the upper-left and lower-right corners of the labeled picture, and sampling the rectangular area of the labeled picture uniformly at equal intervals in the plane coordinate system to obtain a screen coordinate array;
S5, restoring the position and orientation of the virtual camera from the position and rotation angle recorded in S3, traversing the screen coordinate array obtained in S4, casting rays from the virtual camera into the three-dimensional coordinate system in turn to obtain a group of space coordinates at which the rays intersect the scene in the three-dimensional coordinate system, and then determining the space coordinate of the point closest to the virtual camera;
S6, creating an infinite first plane at the point in the three-dimensional coordinate system closest to the virtual camera, rotating the first plane according to the rotation angle of the virtual camera to obtain a second plane, and casting rays from the virtual camera through the plane coordinate points A and B obtained in S4 to obtain the space coordinate points C and D at which the rays intersect the second plane; the center point between the space coordinate points C and D is the position where the labeled content is placed in the virtual space; creating a virtual camera coordinate system with the virtual camera as its origin and with its Z axis pointing in the same direction as the Z axis of the three-dimensional coordinate system, and calculating the space coordinate points E and F of the space coordinate points C and D in the virtual camera coordinate system; the absolute values of the differences between the X-axis values and between the Y-axis values of the space coordinate points E and F in the virtual camera coordinate system give the scaling ratio of the labeled content; the labeled content is then scaled according to this scaling ratio and moved to the position where it is placed in the virtual space;
S7, in the Unity3D environment, creating an object with a MeshFilter and a MeshRenderer using the scaling, coordinates and rotation of the labeled picture in space calculated in S6; creating a Material, assigning it a Shader with a transparent channel, and setting the texture of the Material to the labeled picture; and assigning the Material to the MeshRenderer and rendering it through the Unity3D rendering system so that it is displayed on the glasses screen, where the final imaging effect can be viewed.
According to the method, through a spatial conversion based on SLAM technology, the two-dimensional planar content labeled by the expert is accurately placed at the corresponding three-dimensional virtual space position, the three-dimensional virtual space is rendered by the glasses equipment, and the glasses imaging effect superimposes it on the real scene seen by the human eye, so the effect seen by the user is completely consistent with the position labeled by the expert and does not change as the user's position changes. The labeled content is thereby converted from a two-dimensional planar annotation into a three-dimensional spatial placement, accurately superimposed on the real three-dimensional space and following the labeling target in real time. The labeled position and information can be seen intuitively, without the need for textual or verbal description.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Drawings
FIG. 1 is a flow chart of a spatial annotation method based on SLAM according to the present invention;
FIG. 2 is a schematic diagram illustrating a labeling performed on the real picture according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating equally spaced uniform sampling of the rectangle formed by the start and end positions of the real picture according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of obtaining a set of spatial coordinates intersecting the three-dimensional coordinate system in accordance with an embodiment of the present invention;
FIG. 5 is a schematic diagram of obtaining spatial coordinates of two points intersecting the second plane in an embodiment of the present invention;
FIG. 6 is a diagram illustrating conversion of a label map into a spatial label according to an embodiment of the present invention.
Detailed Description
The present invention is further described in detail below with reference to the attached drawings so that those skilled in the art can implement the invention by referring to the description text.
It should be noted that in the description of the present invention, the terms "lateral", "longitudinal", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc. indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of description and simplicity of description, and do not indicate or imply that the referred device or element must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
As shown in fig. 1 to fig. 6, an embodiment of the present invention provides a spatial annotation method based on SLAM, including the following steps:
S1, acquiring spatial feature points through a real camera based on SLAM technology, and constructing a virtual three-dimensional coordinate system;
S2, creating a virtual camera corresponding to the real camera in the three-dimensional coordinate system, and correcting the position of the virtual camera in the three-dimensional coordinate system in real time through SLAM technology, so that at every moment the position of the virtual camera in the three-dimensional coordinate system corresponds to the position of the real camera in real space;
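Since step S7 places the implementation in the Unity3D environment, the following C# sketch illustrates one way the virtual camera of S2 could be kept aligned with the pose estimated by SLAM; the ISlamTracker interface and its CurrentPosition/CurrentRotation members are assumed stand-ins for whatever pose output the actual SLAM library exposes, and are not part of the patent text.

```csharp
using UnityEngine;

// Assumed interface: the latest real-camera pose estimated by the SLAM
// system, expressed in the virtual three-dimensional coordinate system of S1.
public interface ISlamTracker
{
    Vector3 CurrentPosition { get; }
    Quaternion CurrentRotation { get; }
}

// Keeps the virtual camera's pose in the virtual coordinate system matched,
// frame by frame, to the real camera's pose in real space (step S2).
public class VirtualCameraSync : MonoBehaviour
{
    public Camera virtualCamera;   // the virtual camera created in S2
    public ISlamTracker slamTracker;

    void LateUpdate()
    {
        virtualCamera.transform.SetPositionAndRotation(
            slamTracker.CurrentPosition, slamTracker.CurrentRotation);
    }
}
```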
S3, recording the position and rotation angle of the virtual camera while acquiring a real picture shot by the real camera, labeling the real picture as shown in FIG. 2, and then saving the labeled content as a two-dimensional picture with a transparent channel;
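As a small illustration of S3 (an assumed sketch, not the patent's own code), the recorded pose and the labeled picture could be captured as follows; RecordedPose and the file path are illustrative names, and saving as PNG simply preserves the transparent channel of the labeled picture.

```csharp
using System.IO;
using UnityEngine;

// Illustrative container for the virtual camera pose recorded in S3.
public struct RecordedPose
{
    public Vector3 position;
    public Quaternion rotation;
}

public static class AnnotationRecorder
{
    // Saves the labeled content (a Texture2D with an alpha channel) as a PNG,
    // which keeps the transparency, and returns the recorded camera pose.
    public static RecordedPose Record(Camera virtualCamera, Texture2D labeledPicture, string path)
    {
        File.WriteAllBytes(path, labeledPicture.EncodeToPNG());
        return new RecordedPose
        {
            position = virtualCamera.transform.position,
            rotation = virtualCamera.transform.rotation
        };
    }
}
```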
S4, establishing a plane coordinate system in the two-dimensional picture, cropping the two-dimensional picture according to the transparent channel to obtain a labeled picture covering the rectangular area where the labeled content is located, obtaining the plane coordinate points A and B of the upper-left and lower-right corners of the labeled picture, and sampling the rectangular area of the labeled picture uniformly at equal intervals in the plane coordinate system, as shown in FIG. 3, to obtain a screen coordinate array;
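A sketch of how S4 might be realized in C#, assuming the labeled picture has the same resolution as the camera picture so that its pixel coordinates can be used directly as screen coordinates; AnnotationSampler and the sampleStep parameter are illustrative names not given in the patent.

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class AnnotationSampler
{
    // Finds the bounding rectangle of non-transparent pixels (the cropped
    // labeled picture), returns its upper-left corner A and lower-right
    // corner B, and samples the rectangle uniformly at equal intervals.
    public static List<Vector2> Sample(Texture2D labeledPicture, int sampleStep,
                                       out Vector2 a, out Vector2 b)
    {
        int minX = labeledPicture.width, minY = labeledPicture.height, maxX = 0, maxY = 0;
        Color32[] pixels = labeledPicture.GetPixels32();   // bottom-left origin

        for (int y = 0; y < labeledPicture.height; y++)
            for (int x = 0; x < labeledPicture.width; x++)
                if (pixels[y * labeledPicture.width + x].a > 0)
                {
                    if (x < minX) minX = x;
                    if (x > maxX) maxX = x;
                    if (y < minY) minY = y;
                    if (y > maxY) maxY = y;
                }

        a = new Vector2(minX, maxY);   // upper-left corner (point A)
        b = new Vector2(maxX, minY);   // lower-right corner (point B)

        var screenCoordinates = new List<Vector2>();
        for (int y = minY; y <= maxY; y += sampleStep)
            for (int x = minX; x <= maxX; x += sampleStep)
                screenCoordinates.Add(new Vector2(x, y));
        return screenCoordinates;
    }
}
```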
S5, restoring the position and orientation of the virtual camera from the position and rotation angle recorded in S3, traversing the screen coordinate array obtained in S4, casting rays from the virtual camera into the three-dimensional coordinate system in turn to obtain a group of space coordinates at which the rays intersect the scene in the three-dimensional coordinate system, as shown in FIG. 4, and then determining the space coordinate of the point closest to the virtual camera;
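The ray traversal of S5 could then look like the following sketch; it assumes the geometry reconstructed around the SLAM feature points carries colliders so that Physics.Raycast reports intersections, since the patent does not specify how rays are intersected with the reconstructed space.

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class NearestPointFinder
{
    // Casts a ray through every sampled screen coordinate from the restored
    // virtual camera pose and keeps the intersection closest to the camera.
    public static bool TryFindNearest(Camera virtualCamera,
                                      IEnumerable<Vector2> screenCoordinates,
                                      out Vector3 nearestPoint)
    {
        nearestPoint = Vector3.zero;
        float bestDistance = float.MaxValue;
        bool found = false;

        foreach (Vector2 screenPoint in screenCoordinates)
        {
            Ray ray = virtualCamera.ScreenPointToRay(screenPoint);
            if (Physics.Raycast(ray, out RaycastHit hit) && hit.distance < bestDistance)
            {
                bestDistance = hit.distance;
                nearestPoint = hit.point;   // space coordinate in the SLAM world frame
                found = true;
            }
        }
        return found;
    }
}
```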
S6, creating an infinite first plane at the point in the three-dimensional coordinate system closest to the virtual camera, rotating the first plane according to the rotation angle of the virtual camera to obtain a second plane, and casting rays from the virtual camera through the plane coordinate points A and B obtained in S4 to obtain the space coordinate points C and D at which the rays intersect the second plane; the center point between the space coordinate points C and D is the position where the labeled content is placed in the virtual space; creating a virtual camera coordinate system with the virtual camera as its origin and with its Z axis pointing in the same direction as the Z axis of the three-dimensional coordinate system, and calculating the space coordinate points E and F of the space coordinate points C and D in the virtual camera coordinate system; as shown in FIG. 6, the absolute values of the differences between the X-axis values and between the Y-axis values of the space coordinate points E and F in the virtual camera coordinate system give the scaling ratio of the labeled content; the labeled content is then scaled according to this scaling ratio and moved to the position where it is placed in the virtual space;
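S6 can be expressed compactly with Unity's mathematical Plane type. The sketch below follows the step's naming (points C, D, E, F) and assumes the labeled content is rendered on a quad of unit size, so the absolute X and Y differences can be applied directly as its scale; it is one plausible reading of the step, not the patent's own code.

```csharp
using UnityEngine;

public static class AnnotationPlacer
{
    public static void Place(Camera virtualCamera, Vector3 nearestPoint,
                             Vector2 cornerA, Vector2 cornerB, Transform annotationQuad)
    {
        // "Second plane": passes through the nearest point and is oriented by
        // the recorded rotation of the virtual camera.
        Plane plane = new Plane(virtualCamera.transform.forward, nearestPoint);

        // Rays through the upper-left (A) and lower-right (B) screen corners;
        // a production version would check the bool returned by Raycast.
        Ray rayA = virtualCamera.ScreenPointToRay(cornerA);
        Ray rayB = virtualCamera.ScreenPointToRay(cornerB);
        plane.Raycast(rayA, out float tA);
        plane.Raycast(rayB, out float tB);
        Vector3 c = rayA.GetPoint(tA);   // space coordinate point C
        Vector3 d = rayB.GetPoint(tB);   // space coordinate point D

        // The midpoint of C and D is where the labeled content is placed.
        Vector3 position = (c + d) * 0.5f;

        // Points E and F: C and D expressed in a camera-origin frame whose
        // axes stay aligned with the world axes; their absolute X and Y
        // differences give the scaling of the labeled content.
        Vector3 e = c - virtualCamera.transform.position;
        Vector3 f = d - virtualCamera.transform.position;
        float scaleX = Mathf.Abs(e.x - f.x);
        float scaleY = Mathf.Abs(e.y - f.y);

        annotationQuad.position = position;
        annotationQuad.rotation = virtualCamera.transform.rotation;
        annotationQuad.localScale = new Vector3(scaleX, scaleY, 1f);
    }
}
```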
S7, in the Unity3D environment, creating an object with a MeshFilter and a MeshRenderer using the scaling, coordinates and rotation of the labeled picture in space calculated in S6; creating a Material, assigning it a Shader with a transparent channel, and setting the texture of the Material to the labeled picture; and assigning the Material to the MeshRenderer and rendering it through the Unity3D rendering system so that it is displayed on the glasses screen, where the final imaging effect can be viewed.
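For S7, a minimal sketch of creating the annotation object is shown below; the built-in "Unlit/Transparent" shader is used here only as one example of a Shader with a transparent channel, and the unit quad matches the scaling convention assumed in the S6 sketch above.

```csharp
using UnityEngine;

public static class AnnotationObjectFactory
{
    // Creates an object with a MeshFilter and a MeshRenderer, gives it a
    // Material using an alpha-blended shader, and assigns the labeled picture
    // as the Material's texture; Unity's rendering system then draws it to
    // the glasses screen.
    public static GameObject Create(Texture2D labeledPicture)
    {
        var go = new GameObject("SpatialAnnotation");

        // A 1x1 quad centred on the object's origin.
        var mesh = new Mesh
        {
            vertices = new[]
            {
                new Vector3(-0.5f, -0.5f, 0f), new Vector3(0.5f, -0.5f, 0f),
                new Vector3(-0.5f,  0.5f, 0f), new Vector3(0.5f,  0.5f, 0f)
            },
            uv = new[]
            {
                new Vector2(0f, 0f), new Vector2(1f, 0f),
                new Vector2(0f, 1f), new Vector2(1f, 1f)
            },
            triangles = new[] { 0, 2, 1, 2, 3, 1 }
        };
        go.AddComponent<MeshFilter>().mesh = mesh;

        var material = new Material(Shader.Find("Unlit/Transparent"))
        {
            mainTexture = labeledPicture
        };
        go.AddComponent<MeshRenderer>().material = material;
        return go;
    }
}
```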
S1, acquiring spatial feature points through a real camera based on SLAM technology, and constructing a virtual three-dimensional coordinate system;
S2, creating a virtual camera corresponding to the real camera in the three-dimensional coordinate system, and correcting the position of the virtual camera in the three-dimensional coordinate system in real time through SLAM technology, so that at every moment the position of the virtual camera in the three-dimensional coordinate system corresponds to the position of the real camera in real space;
S3, recording the position and rotation angle of the virtual camera while acquiring a real picture shot by the real camera, labeling the real picture as shown in FIG. 2, and then saving the labeled content as a two-dimensional picture with a transparent channel;
S4, cropping the two-dimensional picture according to the transparent channel to obtain a labeled picture covering the rectangular area where the labeled content is located, calculating the coordinates of the start and end positions of the labeled picture relative to the two-dimensional picture (the start and end positions being two diagonally opposite corners of the labeled picture), obtaining the coordinates of the start and end positions in the real picture from the relative positional relationship between the labeled picture and the real picture, and sampling the rectangle formed by the start and end positions of the real picture uniformly at equal intervals, as shown in FIG. 3, to obtain a screen coordinate array;
S5, restoring the position and orientation of the virtual camera from the position and rotation angle recorded in S3, traversing the screen coordinate array obtained in S4, casting rays from the virtual camera into the three-dimensional coordinate system in turn to obtain a group of space coordinates at which the rays intersect the scene in the three-dimensional coordinate system, as shown in FIG. 4, and then determining the space coordinate of the point closest to the virtual camera;
S6, creating an infinite first plane, displacing the first plane according to the space coordinate of the point closest to the virtual camera, and rotating the displaced first plane according to the rotation angle of the virtual camera to obtain a second plane; obtaining the space coordinates of the start and end positions of the real picture from the relative positional relationship between the labeled picture and the real picture, and casting rays from the virtual camera through the start and end positions of the two-dimensional picture to obtain the space coordinates of the two points at which the rays intersect the second plane, as shown in FIG. 5; as shown in FIG. 6, the center point between the two points is the position where the labeled content is placed in the virtual space, and the absolute values of the differences between the space coordinates of the two points give the scaling ratio of the labeled content; the labeled content is then scaled according to this scaling ratio and moved to the position where it is placed in the virtual space, thereby completing the conversion of the labeled picture into a spatial label.
While embodiments of the invention have been described above, the invention is not limited to the applications set forth in the description and the embodiments; it is fully applicable in various fields suited to it, and additional modifications can readily be made by those skilled in the art. The invention is therefore not limited to the details given herein or to the embodiments shown and described, without departing from the general concept defined by the claims and their equivalents.

Claims (1)

1. A space labeling method based on SLAM, characterized by comprising the following steps:
S1, acquiring spatial feature points through a real camera based on SLAM technology, and constructing a virtual three-dimensional coordinate system;
S2, creating a virtual camera corresponding to the real camera in the three-dimensional coordinate system, and correcting the position of the virtual camera in the three-dimensional coordinate system in real time through SLAM technology, so that at every moment the position of the virtual camera in the three-dimensional coordinate system corresponds to the position of the real camera in real space;
S3, recording the position and rotation angle of the virtual camera while acquiring a real picture shot by the real camera, labeling the real picture, and saving the labeled content as a two-dimensional picture with a transparent channel;
S4, establishing a plane coordinate system in the two-dimensional picture, cropping the two-dimensional picture according to the transparent channel to obtain a labeled picture covering the rectangular area where the labeled content is located, obtaining the plane coordinate points A and B of the upper-left and lower-right corners of the labeled picture, and sampling the rectangular area of the labeled picture uniformly at equal intervals in the plane coordinate system to obtain a screen coordinate array;
S5, restoring the position and orientation of the virtual camera from the position and rotation angle recorded in S3, traversing the screen coordinate array obtained in S4, casting rays from the virtual camera into the three-dimensional coordinate system in turn to obtain a group of space coordinates at which the rays intersect the scene in the three-dimensional coordinate system, and then determining the space coordinate of the point closest to the virtual camera;
S6, creating an infinite first plane at the point in the three-dimensional coordinate system closest to the virtual camera, rotating the first plane according to the rotation angle of the virtual camera to obtain a second plane, and casting rays from the virtual camera through the plane coordinate points A and B obtained in S4 to obtain the space coordinate points C and D at which the rays intersect the second plane; the center point between the space coordinate points C and D is the position where the labeled content is placed in the virtual space; creating a virtual camera coordinate system with the virtual camera as its origin and with its Z axis pointing in the same direction as the Z axis of the three-dimensional coordinate system, and calculating the space coordinate points E and F of the space coordinate points C and D in the virtual camera coordinate system; the absolute values of the differences between the X-axis values and between the Y-axis values of the space coordinate points E and F in the virtual camera coordinate system give the scaling ratio of the labeled content; the labeled content is then scaled according to this scaling ratio and moved to the position where it is placed in the virtual space;
S7, in the Unity3D environment, creating an object with a MeshFilter and a MeshRenderer using the scaling, coordinates and rotation of the labeled picture in space calculated in S6; creating a Material, assigning it a Shader with a transparent channel, and setting the texture of the Material to the labeled picture; and assigning the Material to the MeshRenderer and rendering it through the Unity3D rendering system so that it is displayed on the glasses screen, where the final imaging effect can be viewed.
CN201911217292.3A (priority and filing date 2019-12-03) Space labeling method based on SLAM, granted as CN111009038B (active)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911217292.3A 2019-12-03 2019-12-03 Space labeling method based on SLAM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911217292.3A 2019-12-03 2019-12-03 Space labeling method based on SLAM

Publications (2)

Publication Number Publication Date
CN111009038A (en) 2020-04-14
CN111009038B (en) 2023-12-29

Family

ID=70112665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911217292.3A Space labeling method based on SLAM 2019-12-03 2019-12-03

Country Status (1)

Country Link
CN (1) CN111009038B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140033868A (en) * 2012-09-11 2014-03-19 한국과학기술원 Method and apparatus for environment modeling for ar
KR20150076574A (en) * 2013-12-27 2015-07-07 한청훈 Method and apparatus for space touch
CN109584295A (en) * 2017-09-29 2019-04-05 阿里巴巴集团控股有限公司 The method, apparatus and system of automatic marking are carried out to target object in image
CN108830894A (en) * 2018-06-19 2018-11-16 亮风台(上海)信息科技有限公司 Remote guide method, apparatus, terminal and storage medium based on augmented reality

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李佳芮: "Semantic map generation based on deep learning" (基于深度学习的语义地图生成), 电子制作, no. 24 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112686948A (en) * 2020-12-25 2021-04-20 北京像素软件科技股份有限公司 Editor operation method and device and electronic equipment
CN112950755A (en) * 2021-03-23 2021-06-11 广东电网有限责任公司 Security fence arrangement method and device
CN113066007A (en) * 2021-06-03 2021-07-02 潍坊幻视软件科技有限公司 Method for indicating target position in 3D space
CN115268658A (en) * 2022-09-30 2022-11-01 苏芯物联技术(南京)有限公司 Multi-party remote space delineation marking method based on augmented reality

Also Published As

Publication number Publication date
CN111009038B (en) 2023-12-29

Similar Documents

Publication Publication Date Title
CN111009038B (en) Space labeling method based on SLAM
US11803185B2 (en) Systems and methods for initializing a robot to autonomously travel a trained route
CN108830894B (en) Remote guidance method, device, terminal and storage medium based on augmented reality
US11978243B2 (en) System and method using augmented reality for efficient collection of training data for machine learning
US10755480B2 (en) Displaying content in an augmented reality system
CN108401461A (en) Three-dimensional mapping method, device and system, cloud platform, electronic equipment and computer program product
CN109887003A (en) A kind of method and apparatus initialized for carrying out three-dimensional tracking
CN107369205B (en) Mobile terminal city two-dimensional and three-dimensional linkage display method
CN105701828B (en) A kind of image processing method and device
CN110603122B (en) Automated personalized feedback for interactive learning applications
US11209277B2 (en) Systems and methods for electronic mapping and localization within a facility
CN104050859A (en) Interactive digital stereoscopic sand table system
US11315313B2 (en) Methods, devices and computer program products for generating 3D models
JP6310149B2 (en) Image generation apparatus, image generation system, and image generation method
CN111862333A (en) Content processing method and device based on augmented reality, terminal equipment and storage medium
CN110648274B (en) Method and device for generating fisheye image
CN104656893A (en) Remote interaction control system and method for physical information space
CN104680532A (en) Object labeling method and device
CN109648568A (en) Robot control method, system and storage medium
JP2021136017A5 (en)
JPH0997355A (en) Method and system for modeling
CN110544315B (en) Virtual object control method and related equipment
CN112732075B (en) Virtual-real fusion machine teacher teaching method and system for teaching experiments
US20230224576A1 (en) System for generating a three-dimensional scene of a physical environment
CN111047674A (en) Animation rendering method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant