AU2020103993A4 - Mobile augmented reality service apparatus and method using deep learning based positioning technology - Google Patents


Info

Publication number
AU2020103993A4
AU2020103993A4 (application AU2020103993A)
Authority
AU
Australia
Prior art keywords
spatial information
space
augmented reality
spatial
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2020103993A
Inventor
Alankrita Aggarwal
Shivani Gaba
Prachi Garg
Rajender Kumar
Sita Rani
Shally
Current Assignee
Garg Prachi Dr
Original Assignee
Garg Prachi Dr
Rani Sita Dr
Priority date
Filing date
Publication date
Application filed by Garg Prachi Dr and Rani Sita Dr
Priority to AU2020103993A (patent AU2020103993A4)
Application granted
Publication of AU2020103993A4
Legal status: Ceased
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

MOBILE AUGMENTED REALITY SERVICE APPARATUS AND METHOD USING DEEP LEARNING BASED POSITIONING TECHNOLOGY

The present invention relates to a mobile augmented reality service device and method using deep learning-based location positioning technology that allow the user's direction and location to be accurately identified using only image input from the camera (10) of a mobile device (smartphone). A spatial information generation unit (20) constructs a space (21) from the acquired images, divides the constructed space (22), learns the divided data with a deep learning-based algorithm (23), and generates spatial information based on the learning result; a spatial information storage unit (30) stores the spatial information generated by the spatial information generation unit (20). When augmented reality is implemented (70), a spatial information extraction unit (40) extracts spatial information by extracting important feature points of the image from the image information obtained from the camera (10), and the extracted spatial information is compared with the spatial information stored in the spatial information storage unit (30).

Description

(Drawing sheet: Total Number of sheets: 4, Page 2 of 4)

START
S101: Acquire images with the camera
S102: Space configuration with Visual SLAM
S103: Composed space split by grid
S104: Partitioned data set learning with CNN
S105: Inference with learned spatial information
S106: Augmented reality implementation occurred? (NO: continue monitoring; YES: proceed)
S107: Extraction of spatial information
S108: Correction of extracted spatial information
S109: Storage of corrected spatial information
S110: Augmented reality implementation
STOP

Fig 2. Flow chart of a mobile augmented reality service method using a deep learning based location positioning technology
Australian Government
IP Australia
Innovation Patent Application Australia Patent Office
1. TITLE OF THE INVENTION

MOBILE AUGMENTED REALITY SERVICE APPARATUS AND METHOD USING DEEP LEARNING BASED POSITIONING TECHNOLOGY
2. APPLICANT(S)

NAME / NATIONALITY / ADDRESS

ALANKRITA AGGARWAL, INDIAN, DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, PANIPAT INSTITUTE OF ENGINEERING AND TECHNOLOGY, SAMALKHA-132101, PANIPAT (INDIA)

SHIVANI GABA, INDIAN, DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, PANIPAT INSTITUTE OF ENGINEERING AND TECHNOLOGY, SAMALKHA-132101, PANIPAT (INDIA)

SHALLY, INDIAN, DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, PANIPAT INSTITUTE OF ENGINEERING AND TECHNOLOGY, SAMALKHA-132101, PANIPAT (INDIA)

DR. PRACHI GARG, INDIAN, DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, MAHARISHI MARKANDESHWAR ENGINEERING COLLEGE, MMDU MULLANA-AMBALA, HARYANA (INDIA)

DR. SITA RANI, INDIAN, DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, GULZAAR INSTITUTE OF ENGINEERING AND TECHNOLOGY, GULZAR GROUP OF INSTITUTES, GT ROAD, KHANNA 141401, LUDHIANA, PUNJAB (INDIA)

RAJENDER KUMAR, INDIAN, DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, PANIPAT INSTITUTE OF ENGINEERING AND TECHNOLOGY, SAMALKHA-132101, PANIPAT (INDIA)

3. PREAMBLE TO THE DESCRIPTION

COMPLETE SPECIFICATION
The following specification particularly describes the invention and the manner in which it is to be performed
MOBILE AUGMENTED REALITY SERVICE APPARATUS AND METHOD USING DEEP LEARNING BASED POSITIONING TECHNOLOGY FIELD OF THE INVENTION
[0001] The present invention relates to a mobile augmented reality service apparatus and method using a deep learning-based location positioning technology, and in particular to a deep learning-based system that enables the user's direction and location to be accurately identified using only image input from the camera of a mobile device (smartphone).
BACKGROUND OF THE INVENTION
[0002] Recently, the provision of contents using an augmented reality technique in which various information is overlaid on a real space when photographing using a camera module has been actively studied.
[0003] Augmented Reality (AR) is a computer technique in the field of Virtual Reality (VR) that synthesizes a virtual environment with the real environment the user perceives through the senses, making the virtual content feel as if it exists in the real environment.
[0004] Unlike conventional virtual reality, which targets only virtual spaces and objects, augmented reality has the advantage of reinforcing the real world with additional information that is difficult to obtain in reality, by synthesizing virtual objects on the basis of the real world.
[0005] In particular, with the improvement and popularization of smart devices, mobile augmented reality is in the spotlight, and significant growth is expected in new markets and related content development. However, mostly monotonous apps offering little interaction between users and virtual objects are being mass-produced, so innovation in the key element technologies is in demand.
[0006] In an augmented reality service, a virtual object placed in the real environment must be synthesized naturally, without awkwardness, which requires accurately measuring the user's location and direction of sight. Because of this requirement, such services are difficult to implement.
[0007] In order to reduce such errors, various tracking methods such as sensor, magnetic-field, ultrasonic, inertial, and optical tracking are used, but they have the disadvantages of requiring expensive equipment and a constrained acquisition environment.
[0008] Unlike outdoor spaces, indoor spaces cannot use satellite-based positioning systems such as GPS, so indoor positioning technologies are being developed to increase the accuracy of location data. However, these require additional investment when building separate infrastructure or changing the indoor structure, and compatibility problems between the various positioning platforms, devices, and operating systems remain to be solved.
[0009]On the other hand, techniques previously proposed for an augmented reality service are disclosed as below:
[0010] One prior art discloses a system and method for providing augmented reality in an indoor space using visible light communication (LiFi): key frame information and spatial information are associated with a specific lighting area, and an augmented reality experience is provided in that lighting area through the user's mobile device. This makes it possible to accurately estimate the user's location and the mobile device's posture, as well as to accurately restore the indoor space.
[0011] Another prior art discloses an augmented reality terminal with a built-in camera that acquires real image information by photographing the user's surroundings and displays 3D virtual images on a display. The terminal recognizes the space three-dimensionally, generates a space-based three-dimensional matched coordinate system, and displays each virtual object at the coordinates pre-allocated to each real object in that coordinate system. Whenever a real object is identified in the image information by object recognition, its coordinates are re-identified and updated, and the virtual object previously allocated to the updated coordinates is displayed, providing an augmented reality system in which spatial recognition and object recognition are applied at the same time.
[0012] The two prior arts listed here are Korean Patent Registration No. 10-1971791 (registered 2019.04.17) (Augmented reality provision system and method for indoor space using visible light communication) and Korean Patent Registration No. 10-1898075 (registered 2018.09.06) (Augmented reality system with spatial recognition and object recognition applied at the same time).
[0013] However, in the general augmented reality system and the prior art described above, it was difficult to accurately measure the user's position and gaze direction in an indoor space in order to place a virtual object in the real environment, because the real-world coordinate system and the coordinate system of the virtual world do not match.
[0014]In addition, in order to reduce the error between the real-world coordinate system and the virtual world coordinate system, when various sensors or tracking methods are used, there is a disadvantage in that the system implementation cost is high due to the use of expensive equipment.
[0015]In addition, there is a disadvantage that additional investment costs are required when changing the interior structure.
[0016] Therefore, the present invention has been proposed to solve the problems of the general augmented reality system and the prior art described above. Its purpose is to provide a mobile augmented reality service device and method using a deep learning-based location positioning technology, so that the direction and location of the user can be accurately identified using only image input from the camera of a mobile device (smartphone).
OBJECTS OF THE INVENTION
[0017]An object of the present invention is to provide a mobile augmented reality service apparatus and method using a deep learning-based positioning technology that operates only with a smartphone camera and application indoors without installing additional hardware infrastructure.
[0018] Another object of the present invention is to provide a mobile augmented reality service device and method that use deep learning-based location positioning technology to visualize media/content in augmented reality according to the user's moving position or viewing position and direction.
[0019] Another object of the present invention is to provide a mobile augmented reality service apparatus and method using a deep learning-based positioning technology in which virtual objects are seamlessly matched to the real space in real time by minimizing location tracking errors.
[0020] Another object of the present invention is to provide a mobile augmented reality service device and method using a deep learning-based positioning technology that offers a user interface supporting intuitive and natural interaction as the location and gaze of the user (for example, a tourist viewing the real world) change, rather than placing increasing demands on the user.
SUMMARY
[0021] In order to achieve the above object, the "mobile augmented reality service apparatus using deep learning-based positioning technology" according to the present invention comprises: a spatial information generation unit that constructs a space from image information acquired by a camera, divides the constructed space, learns it with a deep learning-based algorithm, and generates spatial information based on the learning result, the spatial information generation unit including a spatial configuration unit that configures the space using Visual SLAM (Simultaneous Localization and Mapping) from the image information acquired by the camera, a spatial segmentation unit that divides the configured space into a grid to generate a data set of divided spaces, and a spatial learning unit that learns the divided data set with a convolutional neural network (CNN), which is a deep learning algorithm; a spatial information storage unit that stores the spatial information generated by the spatial information generation unit; a spatial information extraction unit that, when implementing augmented reality, extracts spatial information by extracting the important feature points of an image from the image information obtained from the camera; and a location and direction checking unit that checks the location and direction of the user by comparing the spatial information extracted by the spatial information extraction unit with the spatial information stored in the spatial information storage unit. The location and direction checking unit is characterized in that it corrects the extracted spatial information based on motion information obtained from an inertial measurement unit (IMU) to estimate in which cell of the divided grid the user is located.
[0022] In order to achieve the above object, the "mobile augmented reality service method using deep learning-based positioning technology" according to the present invention comprises: (a) constructing a space from image information acquired by a camera in a spatial information generation unit, dividing the constructed space, learning it with a deep learning-based algorithm, and generating spatial information based on the learning result, step (a) including (a1) configuring the space using Visual SLAM (Simultaneous Localization and Mapping), (a2) dividing the space configured in step (a1) into a grid to generate a data set of divided spaces, and (a3) learning the divided data set with a convolutional neural network (CNN), which is a deep learning algorithm; (b) storing the spatial information generated in step (a); (c) when implementing augmented reality, extracting spatial information in a spatial information extraction unit by extracting important feature points of an image from the image information obtained from the camera; and (d) checking the location and direction of the user by comparing, in the location and direction checking unit, the spatial information extracted in step (c) with the stored spatial information.
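The apparatus and method summarized above can be sketched as a minimal pipeline. The class and function names below are illustrative stand-ins (the patent names functional units, not software APIs), and the Visual SLAM, grid, and CNN internals are replaced by trivial placeholders:

```python
class SpatialInfoGenerator:
    """Unit (20): build a space (21), split it into a grid (22), learn it (23).
    SLAM and CNN internals are placeholders, not the patented implementation."""

    def generate(self, frames, cells=4):
        space = list(frames)                     # unit (21): Visual SLAM map (stub)
        grid = {c: [] for c in range(cells)}     # unit (22): grid split
        for i, frame in enumerate(space):
            grid[i % cells].append(frame)        # assign frames to grid cells
        # unit (23): "learned" model, here just one representative frame per cell
        return {c: fs[0] for c, fs in grid.items() if fs}


class ARService:
    """Units (30)-(70) wired together for method steps (a)-(d)."""

    def __init__(self):
        self.storage = {}                        # unit (30): spatial info storage

    def prepare(self, frames):                   # steps (a)-(b): generate and store
        self.storage = SpatialInfoGenerator().generate(frames)

    def locate(self, query):                     # steps (c)-(d): extract and compare
        # unit (40) would extract feature points from the camera image; here the
        # query is compared directly with stored spatial information by unit (60)
        for cell, representative in self.storage.items():
            if representative == query:
                return cell                      # user's grid cell (position)
        return None


service = ARService()
service.prepare(["frame_a", "frame_b", "frame_c"])
```

A call such as `service.locate("frame_b")` then returns the grid cell whose stored spatial information matches the current camera input.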
[0023]According to the present invention, there is an advantage of being able to accurately identify a user's direction and location without implementing additional hardware only by inputting an image from a camera of a mobile device (smartphone).
[0024]In particular, according to the present invention, there is an advantage of being able to accurately identify a user's location and direction in real time by using a deep learning based location positioning technology that operates only with a smartphone camera and an application without installing additional hardware infrastructure indoors.
[0025] In addition, according to the present invention, there is an advantage of being able to naturally visualize media/contents in real time, without awkwardness, in accordance with the user's moving position or viewing position and direction in augmented reality, by using a deep learning-based location positioning technology.
[0026]In addition, according to the present invention, there is an advantage in that a virtual object can be seamlessly matched in real time with a real space by minimizing an error in location tracking.
DETAILED DESCRIPTION
[0027] Fig. 1 is a block diagram of a mobile augmented reality service apparatus using a deep learning-based location positioning technology according to the present invention; Fig. 2 is a flow chart showing a mobile augmented reality service method using a deep learning-based location positioning technology according to the present invention; Fig. 3 is an exemplary view of configuring a space using Visual SLAM in the present invention; Fig. 4 is an exemplary view of implementing a mobile augmented reality service device using a deep learning-based location positioning technology in the present invention.
[0028]Hereinafter, a mobile augmented reality service apparatus and method using a deep learning-based positioning technology according to a preferred embodiment of the present invention will be described in detail with reference to the accompanying drawings.
[0029] Fig. 1 is a configuration diagram of a mobile augmented reality service apparatus using a deep learning-based positioning technology according to a preferred embodiment of the present invention. It comprises: a spatial information generation unit (20) that constructs a space from image information acquired by a camera (10), divides the constructed space, learns it with a deep learning-based algorithm, and generates spatial information based on the learning result; a spatial information storage unit (30) that stores the spatial information generated by the spatial information generation unit (20); a spatial information extraction unit (40) that, when implementing augmented reality, extracts spatial information by extracting important feature points of the image from the image information obtained from the camera (10); a spatial information correction unit (50) that corrects errors in the spatial information extracted by the spatial information extraction unit (40); a location and direction checking unit (60) that checks the location and direction of the user by comparing the spatial information corrected by the spatial information correction unit (50) with the spatial information stored in the spatial information storage unit (30); and an augmented reality implementation unit (70) that implements augmented reality in connection with the location and direction checking unit (60).
[0030] In one aspect of the present invention, a device for implementing mobile augmented reality using deep learning-based positioning technology operates as follows: a space is constructed (21) from image information acquired by a camera (10); the space is divided (22) and then learned (23) with a deep learning-based algorithm; and the spatial information generated from the learning result is stored in a spatial information storage unit (30). The spatial information generation unit (20) comprises a spatial configuration unit (21) that configures the space using Visual SLAM (Simultaneous Localization and Mapping) from the image information acquired by the camera (10), a space division unit (22) that divides the configured space into a grid to generate a data set of divided spaces, and a spatial learning unit (23) that trains on the divided data set using a convolutional neural network (CNN). When implementing augmented reality, a spatial information extraction unit (40) extracts spatial information by extracting important feature points of an image from the image information acquired from the camera, and a location and direction checking unit compares the extracted spatial information with the spatial information stored in the spatial information storage unit (30) to check the location and direction of the user. The location and direction checking unit corrects the extracted spatial information based on motion information acquired from an inertial measurement unit (IMU) to estimate in which cell of the divided grid the user is located, and the mobile augmented reality service (70) is implemented accordingly.
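The IMU-based correction just described can be illustrated with a small sketch. The fusion rule used here (accept the image-based grid estimate only when it is consistent with dead reckoning from IMU motion data, otherwise fall back to the IMU prediction) is an assumption for illustration; the patent does not specify a concrete correction formula:

```python
def dead_reckon(prev_cell, displacement, cell_size, cols, rows):
    """Predict the next grid cell from the previous cell and an IMU
    displacement (dx, dy) expressed in map units."""
    r, c = divmod(prev_cell, cols)
    dx, dy = displacement
    c = min(max(c + round(dx / cell_size), 0), cols - 1)
    r = min(max(r + round(dy / cell_size), 0), rows - 1)
    return r * cols + c


def correct_estimate(prev_cell, displacement, vision_cell,
                     cell_size=1.0, cols=4, rows=4):
    """Keep the vision (CNN) cell estimate if it agrees with IMU dead
    reckoning to within one neighbouring cell; otherwise use the IMU
    prediction. All parameters here are illustrative assumptions."""
    predicted = dead_reckon(prev_cell, displacement, cell_size, cols, rows)
    pr, pc = divmod(predicted, cols)
    vr, vc = divmod(vision_cell, cols)
    if abs(pr - vr) <= 1 and abs(pc - vc) <= 1:
        return vision_cell          # vision estimate is plausible
    return predicted                # vision estimate rejected as an outlier
```

For example, a user in cell 5 of a 4x4 grid who moves one cell to the right should end up near cell 6, so a vision estimate of cell 6 is accepted, while a vision estimate of cell 15 is rejected in favour of the IMU prediction.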
[0031] The mobile augmented reality service apparatus using the deep learning-based location positioning technology according to the present invention, configured as described above, can be implemented on various mobile devices, but in the present invention a smartphone used by the user is assumed as an example.
[0032] The spatial information generation unit (20) comprises a spatial configuration unit (21) that configures a space using Visual SLAM (Simultaneous Localization and Mapping) from the image information acquired by the camera (10), a space division unit (22) that divides the configured space into a grid to generate a data set of divided spaces, and a spatial learning unit (23) that learns the data set divided by the space division unit (22) with a convolutional neural network (CNN), which is a deep learning algorithm.
[0033]The operation of the mobile augmented reality service apparatus using the deep learning-based location positioning technology according to the present invention configured as described above will be described in detail as follows.
[0034] First, as preparatory work before implementing augmented reality, the spatial information generation unit (20) constructs a space from the image information of the indoor space obtained by the camera (10), divides the constructed space, learns it with a deep learning-based algorithm, and generates spatial information based on the learning result.
[0035]When using a smartphone indoors without GPS, it is difficult to determine from which location and in which direction it is being used unless additional hardware infrastructure is used.
[0036]Therefore, in order to check the user's direction and location with only the image input coming from the camera, information on the indoor space is created and stored in advance.
[0037] That is, the spatial configuration unit (21) of the spatial information generation unit (20) constructs a space using Visual SLAM (Simultaneous Localization and Mapping) from the indoor image information acquired by the camera (10). The space is constructed through a map-building process in which important points are extracted as feature points from an image of the indoor space acquired from the camera (10), and a map is created using these as key frames.
[0038] As a method for configuring a space using Visual SLAM from an acquired indoor image, the Visual SLAM technology disclosed in Korean Patent Application Laid-Open No. 10-2016-0003066 (published 2016.01.08) (Monocular Vision SLAM with General Camera Movement and Panoramic Camera Movement) may also be adopted.
[0039] As an example, feature points are extracted from the indoor image input by the camera using Visual SLAM, and a user tracking method is applied to configure a space in which the user can be tracked.
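One common way to realize the key-frame map-building step just described is to keep a camera frame as a key frame only when it contributes enough feature points not already covered by the map. This heuristic is a sketch under assumptions (the patent does not fix a key-frame criterion), with feature points reduced to abstract ids:

```python
def select_keyframes(frames, min_new_ratio=0.5):
    """frames: list of sets of feature-point ids observed in each camera image.
    A frame becomes a key frame when at least min_new_ratio of its feature
    points have not been covered by any earlier key frame. The threshold
    value is an illustrative assumption."""
    keyframes, covered = [], set()
    for idx, feats in enumerate(frames):
        if not feats:
            continue                      # skip frames with no features
        new = feats - covered             # features not yet in the map
        if not keyframes or len(new) / len(feats) >= min_new_ratio:
            keyframes.append(idx)         # promote this frame to a key frame
            covered |= feats              # extend the map's coverage
    return keyframes
```

With three frames where the second largely repeats the first and the third shows a new region, only the first and third are kept, which mirrors how Visual SLAM avoids storing redundant views.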
[0040] Next, the space division unit (22) divides the space configured by the spatial configuration unit (21) into a grid and creates divided data sets, for example (SET P1, Q1 #1), (SET P2, Q2 #2), (SET P3, Q3 #3). Here, P represents the position and Q represents the direction.
[0041] Subsequently, the spatial learning unit (23) learns the data set divided by the space division unit (22) using a convolutional neural network (CNN), which is a deep learning algorithm. Since the CNN algorithm directly learns to find patterns and classify features, manual feature engineering is not required, and it has the advantage of producing high-level recognition results.
[0042] As the input data passes through the convolution layers, features are extracted through a filter (kernel), and finally the divided data set is learned through a process of classifying the features into values between 0 and 1 using the Softmax function.
[0043] Here, the filter is typically defined as a 4x4 or 3x3 matrix and is the parameter used for feature extraction. The segmented data set is learned through filter, stride, channel, padding, activation function, max pooling, dropout, Softmax, and other processes, and spatial information is thereby recognized: for example, a divided space is recognized by learning the corresponding divided data set. The spatial information learned in this way is stored in the spatial information storage unit (30).
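The convolution-and-Softmax pass described in paragraphs [0042]-[0043] can be shown in miniature. This is a single hand-written convolution with ReLU activation, 2x2 max pooling, and a Softmax over the resulting scores; it is a toy forward pass, not the patent's trained network, and the 2x2 filter is smaller than the 3x3/4x4 filters the text mentions:

```python
import math

def conv2d_relu(image, kernel):
    """Valid-mode cross-correlation of a 2D image with a filter, then ReLU."""
    kh, kw = len(kernel), len(kernel[0])
    return [[max(0.0, sum(image[i + di][j + dj] * kernel[di][dj]
                          for di in range(kh) for dj in range(kw)))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

def max_pool(fmap, size=2):
    """Non-overlapping size x size max pooling over a feature map."""
    return [[max(fmap[i + di][j + dj] for di in range(size) for dj in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

def softmax(scores):
    """Classify scores into values between 0 and 1 that sum to 1."""
    m = max(scores)                     # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

image = [[0, 0, 1, 1, 0],
         [0, 1, 1, 0, 0],
         [1, 1, 0, 0, 1],
         [1, 0, 0, 1, 1],
         [0, 0, 1, 1, 0]]
edge = [[1, 0], [0, -1]]               # tiny illustrative 2x2 filter
pooled = max_pool(conv2d_relu(image, edge))
probs = softmax([v for row in pooled for v in row])
```

The Softmax output behaves as the text describes: every value lies between 0 and 1 and the values sum to 1, so they can be read as class probabilities over the divided spaces.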
[0044] With the preliminary preparation for confirming the user's location and direction from the real-time camera input completed as described above, when augmented reality is actually implemented, the spatial information extraction unit (40) extracts spatial information by extracting important feature points of the image from the image information obtained from the camera (10). Here, as a method of extracting spatial information from an input image, key frame information may be obtained by extracting the important feature points of the image, and spatial information may then be extracted using the obtained key frame. When augmented reality implementation occurs, the spatial information generation part does not operate.
[0045] Although not shown in the drawing, a control device (CPU, microcomputer, etc.) that performs integrated control is required to implement the augmented reality service. When generating spatial information, this control device operates only the spatial information generation part and not the augmented reality implementation part; conversely, when augmented reality is implemented, the augmented reality implementation part is operated and the spatial information generation part is not.
[0046] Finally, the location and direction checking unit (60) compares the corrected spatial information, or the spatial information extracted by the spatial information extraction unit (40), with the spatial information stored in the spatial information storage unit (30), and checks the user's location and direction in real time.
[0047] For example, if the key frame information used to classify the spatial information is available, the location and direction information can easily be checked against the spatial information stored in the spatial information storage unit (30).
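This check can be sketched as a nearest-neighbour lookup of the current key frame's descriptor against the stored (P, Q)-labelled descriptors. Binary descriptors compared by Hamming distance are an illustrative choice (ORB-style features work this way); the patent does not specify a descriptor type:

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")


def check_location_direction(query, storage):
    """storage: {(P, Q): [descriptor, ...]} as held by the spatial information
    storage unit (30). Returns the (position, direction) label of the stored
    descriptor closest to the query descriptor."""
    best_label, best_dist = None, float("inf")
    for label, descriptors in storage.items():
        for d in descriptors:
            dist = hamming(query, d)
            if dist < best_dist:
                best_label, best_dist = label, dist
    return best_label


storage = {(0, 1): [0b10110010], (3, 2): [0b01101101]}
result = check_location_direction(0b10110011, storage)  # closest match wins
```

Here the query differs from the (0, 1) descriptor by a single bit but from the (3, 2) descriptor by six bits, so the user is placed in grid cell 0 facing direction 1.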
[0048] In other words, just as objects are tracked within an image, each divided space is treated as one large object and tracked, so that the location and direction viewed by the user can be recognized in real time using only the image input to the camera. By this principle, space and direction can be tracked even when the user's camera moves, just as an object can be tracked even if it moves within an image.
[0049]Subsequently, the augmented reality implementation unit (70) implements augmented reality in conjunction with the location and direction checking unit (60).
[0050] The technology for implementing augmented reality itself is known; the key is recognition of the location and direction viewed by the user. By using the technology proposed in the present invention, errors in location tracking are minimized, so that virtual objects can be seamlessly matched to the real space in real time.
[0051]Fig. 4 is an example of implementing a mobile augmented reality in a real space using a mobile augmented reality service device using a deep learning-based location positioning technology according to the present invention.
[0052] Fig. 2 is a flow chart showing a "mobile augmented reality service method using deep learning-based positioning technology" according to the present invention, where S represents a step.
[0053] The mobile augmented reality service method using deep learning-based positioning technology according to the present invention comprises: (a) constructing a space from the image information acquired by the camera (10) in the spatial information generation unit (20), dividing the constructed space, learning it with a deep learning-based algorithm, and generating spatial information based on the learning result (S101-S104); (b) storing the spatial information generated in step (a) (S105); (c) when implementing augmented reality, extracting spatial information in the spatial information extraction unit (40) by extracting important feature points of the image from the image information obtained from the camera (10) (S106-S107); (d) checking the location and direction of the user by comparing, in the location and direction checking unit (60), the spatial information extracted in step (c) with the stored spatial information (S108-S109); and (e) implementing augmented reality in the augmented reality implementation unit (70) in conjunction with the location and direction checking unit (60) (S110).
[0054] Step (a) includes: (a1) configuring a space using Visual SLAM (Simultaneous Localization and Mapping) from the image information acquired by the camera (S101-S102); (a2) generating a data set, which is a divided space, by dividing the space configured in step (a1) into a grid (S103); and (a3) learning the divided data set with a convolutional neural network (CNN), a deep learning algorithm (S104).
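The offline steps (S101-S104) and online steps (S106-S110) above can be sketched as a two-phase pipeline. Everything below is illustrative only: the function names and the dictionary standing in for the learned model are hypothetical placeholders, not the patent's actual SLAM or CNN components.

```python
def build_spatial_information(frames):
    """Offline phase sketch (S101-S105): 'learn' the space.
    Here the learned model is just a lookup from a frame signature
    to a (grid position, direction) label."""
    model = {}
    for i, frame in enumerate(frames):
        model[frame] = ((i, 0), 0)  # grid cell (i, 0), facing 0 degrees
    return model

def run_ar_service(model, live_frame):
    """Online phase sketch (S106-S110): look up the user's pose and
    anchor augmented-reality content to it."""
    pose = model.get(live_frame)
    if pose is None:
        return None  # unknown space: AR content cannot be anchored
    return {"position": pose[0], "direction": pose[1], "ar": "anchored"}

model = build_spatial_information(("hall", "lobby", "corridor"))
result = run_ar_service(model, "lobby")
```

A real system would replace the dictionary lookup with the CNN classification over grid cells described in the following paragraphs.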
[0055] The mobile augmented reality service method using the deep learning-based positioning technology according to the present invention, configured as described above, is described in detail as follows.
[0056] First, as preliminary preparation before implementing augmented reality, in steps S101 to S104 a space is constructed from the image information of the indoor space acquired by the camera (10) in the spatial information generator (20), the configured space is divided, the result is learned with a deep learning-based algorithm, and spatial information is generated based on the learning result.
[0057] When a smartphone is used indoors without GPS, it is difficult to determine from which location, and in which direction, it is being used unless additional hardware infrastructure is deployed.
[0058] Therefore, in order to check the user's direction and location with only the image input coming from the camera, information on the indoor space is created and stored in advance.
[0059] That is, in step S102, the spatial configuration unit (21) of the spatial information generating unit (20) constructs a space using Visual SLAM (Simultaneous Localization and Mapping) from the indoor image information acquired by the camera (10).
[0060] A space is constructed using Visual SLAM through a map-building process in which important points are extracted from the image of the indoor space acquired from the camera (10) as feature points, and a map is created using these as key frames.
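The key-frame selection idea behind this map-building process can be sketched as follows. The feature sets and the overlap threshold are hypothetical stand-ins for a real Visual SLAM front end, which would extract actual image features (e.g. ORB) and estimate geometry; only the "keep a frame when it shows enough new scenery" principle is illustrated.

```python
def build_keyframe_map(frames, min_new=0.5):
    """frames: list of sets of feature-point IDs seen in each camera frame.
    A frame becomes a key frame when it shares too few features with the
    previous key frame, i.e. the camera has moved into new scenery."""
    keyframes = []
    for feats in frames:
        if not keyframes:
            keyframes.append(feats)  # first frame is always a key frame
            continue
        overlap = len(feats & keyframes[-1]) / max(len(feats), 1)
        if overlap < 1 - min_new:    # enough new feature points seen
            keyframes.append(feats)
    return keyframes

# Four frames sweeping through a room: only two are distinctive enough
# to be kept as key frames for the map.
frames = [{1, 2, 3, 4}, {1, 2, 3, 5}, {5, 6, 7, 8}, {6, 7, 8, 9}]
keymap = build_keyframe_map(frames)
```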
[0061] Fig. 3 is an example of configuring a space capable of user tracking by extracting feature points from an indoor image input by the camera using Visual SLAM and applying the map creation method.
[0062] Next, in step S103, the space division unit (22) divides the space constructed by the space construction unit (21) into a grid to generate a data set, which is the divided space.
For example, the space divided by the grid can be recorded as the data set (SET P1, Q1 #1) (SET P2, Q2 #2) (SET P3, Q3 #3), where P represents the position and Q represents the direction.
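The grid division producing these (P, Q) labelled samples might look like the sketch below. The `GridSample` type, the cell size, and the four candidate directions are illustrative assumptions; the patent only specifies that each grid cell is paired with a position P and direction Q.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GridSample:
    cell_id: int       # "#1", "#2", ... in the patent's notation
    position: tuple    # P: (x, y) centre of the grid cell
    direction: float   # Q: candidate viewing direction in degrees

def divide_into_grid(width, height, cell, directions=(0.0, 90.0, 180.0, 270.0)):
    """Split a width x height floor plan into grid cells and pair each
    cell with a set of candidate viewing directions."""
    samples, cell_id = [], 1
    for y in range(0, height, cell):
        for x in range(0, width, cell):
            centre = (x + cell / 2, y + cell / 2)
            for q in directions:
                samples.append(GridSample(cell_id, centre, q))
            cell_id += 1
    return samples

# A 4 x 4 space cut into 2 x 2 cells: 4 cells x 4 directions = 16 samples.
dataset = divide_into_grid(width=4, height=4, cell=2)
```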
[0063] Subsequently, in step S104, the spatial learning unit (23) learns the data set divided by the spatial dividing unit (22) using a convolutional neural network (CNN), a deep learning algorithm. Since a CNN learns directly to find patterns and classify features, no manual feature engineering is required, and it has the advantage of high-level recognition results. The spatial information thus learned is stored in the spatial information storage unit (30) in step S105.
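The pattern-finding at the heart of a CNN is the convolution operation. The hand-rolled toy below shows only that principle; a real system would train a deep network in a framework such as TensorFlow or PyTorch, and the edge-detecting kernel is a hypothetical example of the kind of filter a trained CNN discovers on its own.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (no padding, stride 1) of a grayscale image
    given as a list of rows."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge kernel responds strongly where brightness changes
# from left to right, here at the boundary between columns 1 and 2.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [[-1, 1], [-1, 1]]
response = conv2d(image, edge_kernel)
```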
[0064] With the preliminary preparation for confirming the user's location and direction from the real-time camera input completed, when augmented reality is actually implemented (S106), the spatial information extraction unit (40) extracts spatial information in step S107 by extracting important feature points of the image from the image information acquired from the camera (10). Here, as a method of extracting spatial information from an input image, key frame information may be obtained by extracting important feature points of the image, and spatial information may then be extracted using the obtained key frame. Subsequently, the spatial information correction unit (50) corrects errors in the spatial information extracted by the spatial information extraction unit (40) in step S108. When acquiring actual spatial information, the error in the extracted spatial information is greater when the motion is severe than when the motion is very small.
[0065] Accordingly, in the present invention, a motion information acquisition device such as an inertial measurement unit (IMU) is added to the spatial information correction unit (50), or motion information is acquired using a motion measurement device already provided in a smartphone, and the extracted spatial information is corrected based on the motion information to minimize the error range. The spatial information correction process may also be omitted.
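One way to realize this correction is a complementary-filter-style blend of the vision-based estimate with a prediction dead-reckoned from the IMU. The filter form and the weight `alpha` are assumptions for illustration; the patent does not specify the correction method.

```python
def imu_correct(visual_pos, last_pos, imu_velocity, dt, alpha=0.7):
    """Blend the camera-derived position with an IMU prediction.

    visual_pos   -- (x, y) position extracted from the camera image
    last_pos     -- (x, y) previous corrected position
    imu_velocity -- (vx, vy) velocity from the inertial measurement unit
    dt           -- elapsed time in seconds
    alpha        -- trust in the visual estimate (would be lowered
                    when motion is severe and vision is less reliable)
    """
    # Dead-reckoned prediction from the last position and IMU motion.
    pred = (last_pos[0] + imu_velocity[0] * dt,
            last_pos[1] + imu_velocity[1] * dt)
    # Weighted blend of the two estimates.
    return (alpha * visual_pos[0] + (1 - alpha) * pred[0],
            alpha * visual_pos[1] + (1 - alpha) * pred[1])

corrected = imu_correct(visual_pos=(2.0, 1.0), last_pos=(1.8, 0.9),
                        imu_velocity=(1.0, 0.5), dt=0.1)
```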
[0066] Finally, in step S109, the location and direction checking unit (60) compares the spatial information corrected by the spatial information correction unit (50), or the spatial information extracted by the spatial information extraction unit (40), with the spatial information stored in the spatial information storage unit (30), and checks the location and direction of the user in real time.
[0067] For example, by using the key frame information that classifies the spatial information, location and direction information can easily be identified from the spatial information stored in the spatial information storage unit (30). In other words, the present invention treats one space as a large divided object and tracks it in the same way an object is tracked in an image. Using this principle, space and direction can be tracked even when the user's camera moves in reverse, just as an object can be tracked even as it moves within an image.
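The comparison against the stored spatial information can be pictured as a nearest-descriptor look-up. The three-component descriptors and the squared-distance metric below are illustrative placeholders for whatever representation the trained CNN actually produces for each key frame.

```python
def check_location(live_descriptor, stored):
    """stored: list of (descriptor, position P, direction Q) tuples.
    Return the (P, Q) label whose descriptor best matches the live one."""
    def dist(a, b):
        # Squared Euclidean distance between two descriptors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, p, q = min(stored, key=lambda entry: dist(entry[0], live_descriptor))
    return p, q

stored = [
    ((0.1, 0.9, 0.2), (0, 0), 0),   # grid cell #1, facing 0 degrees
    ((0.8, 0.1, 0.7), (0, 2), 90),  # grid cell #2, facing 90 degrees
]
# The live descriptor is closest to the second stored key frame.
p, q = check_location((0.75, 0.15, 0.6), stored)
```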
[0068] Subsequently, in step S110, the augmented reality implementation unit (70) implements augmented reality in conjunction with the location and direction check unit (60).
[0069] Figure 4 is an illustration of the implementation of the mobile augmented reality service device using the deep learning-based positioning technology. The technology for implementing augmented reality is well known; recognizing the location and the direction the user is looking at is the key. By using the technology proposed in the present invention, errors in location tracking are minimized, so that virtual objects can be seamlessly matched in real time.
[0070] Although the invention made by the present inventors has been described in detail according to the above embodiment, the present invention is not limited to that embodiment, and various changes can be made by those of ordinary skill in the art without departing from the gist of the invention.

Claims (3)

We claim
1. A device for implementing mobile augmented reality using deep learning-based positioning technology, comprising:
a spatial information generation unit (20) comprising a spatial configuration unit (21) that constructs a space using Visual Simultaneous Localization and Mapping from image information acquired by a camera (10), a space division unit (22) that generates a data set, which is a divided space, by dividing the constructed space into a grid, and a spatial learning unit (23) that trains on the data set divided by the space division unit using a convolutional neural network (CNN) and generates spatial information based on the learning result;
a spatial information storage unit (30) for storing the spatial information generated by the spatial information generation unit;
a spatial information extraction unit (40) for extracting spatial information by extracting important feature points of an image from image information acquired from the camera when implementing augmented reality;
a location and direction check unit (60) that checks the location and direction of the user by comparing the spatial information extracted by the spatial information extraction unit with the spatial information stored in the spatial information storage unit; and
an augmented reality implementation unit (70) that provides the mobile augmented reality service using deep learning-based positioning technology,
characterized in that the extracted spatial information is corrected based on motion information acquired from an inertial measurement unit (IMU) to estimate in which cell of the divided grid the user is located.
2. A method of providing an augmented reality service on a mobile device using deep learning-based positioning technology, comprising constructing a space from image information acquired by a camera (S101) in the spatial information generator, dividing the constructed space, learning it with a deep learning-based algorithm, and generating spatial information based on the learning result (S101-S104),
characterized in that the method includes:
configuring the space using Visual Simultaneous Localization and Mapping (S102) from the image information acquired by the camera;
generating a data set, which is a divided space, by dividing the configured space into a grid (S103);
learning the divided data set with a convolutional neural network (CNN), a deep learning algorithm (S104), and storing the generated spatial information;
when implementing augmented reality (S106), extracting spatial information (S107) by extracting important feature points of an image from the image information obtained from the camera in a spatial information extraction unit; and
checking the location and direction of the user by comparing the extracted spatial information (S107) with the stored spatial information (S109) in the location and direction checking unit.
3. The method of providing an augmented reality service on a mobile device using deep learning-based positioning technology as claimed in claim 2, wherein the extracted spatial information is corrected (S108) based on motion information and then compared with the stored spatial information before augmented reality is implemented (S110).
Application Number: Applicant Name: Total Number of sheets: 4 Dec 2020
Page 1 of 4 2020103993
[Fig. 1 blocks: Camera (10); Spatial information storage (30); Spatial information extraction unit (40); Spatial information correction unit (50); Position and direction check unit (60); Augmented reality implementation unit (70)]
Fig 1. Block Diagram of a mobile augmented reality service apparatus using a deep learning-based location positioning technology
START
S101: Acquire images with the camera
S102: Space configuration with Visual SLAM
S103: Split the composed space by grid
S104: Learn the partitioned data set with CNN
S105: Store the learned spatial information
S106: Augmented reality implementation occurred? (NO: keep waiting; YES: continue)
S107: Extract spatial information
S108: Correct the extracted spatial information
S109: Store the corrected spatial information
S110: Implement augmented reality
STOP
Fig 2. Flow chart of a mobile augmented reality service method using a deep learning- based location positioning technology
Fig 3. Space configuration using Visual SLAM
Fig 4. Implementation of a mobile augmented reality service device using a deep learning-based location positioning technology
AU2020103993A 2020-12-10 2020-12-10 Mobile augmented reality service apparatus and method using deep learning based positioning technology Ceased AU2020103993A4 (en)


Published as AU2020103993A4 on 2021-02-18.


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114637412A (en) * 2022-05-17 2022-06-17 广东控银实业有限公司 Rocker control method and system for VR device figure movement



Legal Events

FGI: Letters patent sealed or granted (innovation patent)
MK22: Patent ceased under section 143A(d), or expired - non-payment of renewal fee or expiry