KR101870276B1 - System and method for providing augmented reality contents using markers - Google Patents
- Publication number
- KR101870276B1 (application number KR1020160183582A)
- Authority
- KR
- South Korea
- Prior art keywords
- marker
- content
- time
- story
- unit
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/835—Generation of protective data, e.g. certificates
- H04N21/8355—Generation of protective data, e.g. certificates involving usage data, e.g. number of copies or viewings allowed
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
Abstract
Description
The present invention relates to a system and a method for providing augmented reality contents using a marker, in which a combined content story is provided using markers recognized by a user terminal. More specifically, it relates to a system and a method for providing augmented reality contents using a marker that provides a user-customized story.
Virtual Reality (VR) is a human-computer interface that computerizes a certain environment or situation and makes it appear as if the person using it is interacting with a real environment, so that an environment can be shown and manipulated as if the user were in it without experiencing it directly.
And Augmented Reality (AR) is a technology derived from one field of virtual reality (VR), which refers to superimposing virtual objects on the real world.
In other words, virtual reality interacts with the user through a virtual environment created by computer graphics, but augmented reality can improve the reality by interacting with a virtual object based on the real world.
Therefore, in the augmented reality, the user recognizes the actual environment in which the user is present and also recognizes the virtual information expressed on the actual image.
In this way, augmented reality combines real images with virtual graphics, so virtual objects must be accurately positioned on the screen in order to obtain more realistic and accurate images.
In order to realize this, three-dimensional coordinates of the virtual object are required, and these coordinates should be coordinate values based on the camera.
However, acquiring the camera-relative three-dimensional coordinates of an arbitrary point or object in the real world is not easy, because two or more cameras are required to determine the three-dimensional position.
Accordingly, a marker-based recognition technique has been used, in which a marker for positioning a virtual object is placed in real-world space and relative coordinates are extracted based on the marker to estimate the position and posture at which the virtual object should be placed.
For example, Korean Patent Registration No. 10-0701784 discloses a technique for realizing AR that easily, intuitively, and quickly outputs efficient and various three-dimensional virtual images at appropriate positions by utilizing the convergence of markers.
However, in the conventional art, since the three-dimensional images corresponding to the markers simultaneously recognized by the terminal are simply fused and output, there is a problem that contents including various stories cannot be provided to the user.
SUMMARY OF THE INVENTION The present invention has been made to solve the above-mentioned problems, and it is an object of the present invention to provide a system and method for providing augmented reality contents using markers that accumulate users' usage information in order to provide a user-customized story.
It is another object of the present invention to provide a system and method for providing an augmented reality content using a marker capable of providing a user with a content story combined with a recognition history of a marker even if objects including markers are not simultaneously recognized in a user terminal.
According to an aspect of the present invention for achieving the above object, there is provided a method of providing augmented reality contents, comprising the steps of: (A) a recognition unit recognizing a marker of a target object in an image captured by an imaging unit; (B) a control unit reading an object of the recognized marker from a storage unit; (C) the control unit extracting a content story for the read object; (D) the control unit generating output content from the read object and the extracted content story; and (E) an output unit outputting the output content.
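Sketched in code, the claimed flow of steps (A) through (E) might look as follows. The class, method, and storage-layout names are illustrative assumptions, not the patent's actual implementation, and marker recognition is reduced to a placeholder lookup.

```python
# Illustrative sketch of steps (A)-(E); names are assumptions,
# not the patent's actual implementation.

class ARContentProvider:
    def __init__(self, storage):
        # storage maps marker id -> object, and object -> content stories
        self.storage = storage

    def provide(self, frame):
        marker = self.recognize_marker(frame)      # (A) recognize marker
        if marker is None:
            return None
        obj = self.storage["objects"][marker]      # (B) read the linked object
        story = self.extract_story(obj)            # (C) extract a content story
        return {"object": obj, "story": story}     # (D) generate output content;
                                                   # (E) output is left to the caller

    def recognize_marker(self, frame):
        # Placeholder: a real system would run image analysis here.
        return frame.get("marker")

    def extract_story(self, obj):
        # Placeholder: pick the first stored story for the object.
        return self.storage["stories"][obj][0]
```

A caller would pass a captured frame and send the returned output content to the display.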
Here, the content story may be a time-series operation of an object or objects output according to a marker or a combination of markers recognized by the recognition unit.
In addition, in the step (C), the content story may be randomly extracted by the control unit according to a selection probability value to which an additive value is added, from among the content stories of the object of the recognized marker.
Here, the calculation of the additive value may include: (C21) the control unit reading the cumulative output time for each content story from the storage unit; (C22) a calculation unit receiving the cumulative output time for each content story from the control unit and calculating the average output time of the entire content stories; and (C23) the calculation unit calculating the additive value of each individual content story according to the ratio of its output time to the calculated average output time of the entire content stories.
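As a hedged sketch, steps (C21) through (C23) together with the weighted random extraction of step (C) could be implemented as follows. The exact weighting formula is not specified in the text, so a simple ratio of each story's cumulative output time to the average is assumed.

```python
import random

def additive_values(cumulative_times):
    """Steps (C21)-(C23): weight each content story by its cumulative
    output time relative to the average over all stories. The ratio
    formula is an assumption; the patent only states the weighting
    follows the output-time ratio."""
    avg = sum(cumulative_times.values()) / len(cumulative_times)      # (C22)
    return {story: t / avg for story, t in cumulative_times.items()}  # (C23)

def pick_story(cumulative_times, rng=random):
    """Step (C): random extraction according to a selection probability
    to which the additive value is added."""
    weights = additive_values(cumulative_times)
    stories = list(weights)
    return rng.choices(stories, weights=[weights[s] for s in stories], k=1)[0]
```

With cumulative times `{"S1": 10, "S4": 30}`, S4 receives three times the selection weight of S1, so the user's preferred story is extracted more often without ever excluding the others.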
At this time, the content story of the step (C) may be extracted according to the object identification direction of the terminal.
On the other hand, the present invention may further comprise: (F) the recognition unit identifying an increase or decrease of markers; (G) the control unit storing the output time of the individual content story when a marker increase or decrease is identified by the recognition unit; (H) storing the recognition time of the marker when the recognition unit identifies a decrease of markers in the step (F); (I) counting the extinction time from the point at which the marker disappears; and (J) stopping the output of the marker's object if the extinction time of the marker is greater than or equal to the residence time.
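A minimal sketch of the extinction-time check in steps (I) and (J); the time units and the function signature are assumptions.

```python
def should_stop_output(marker_lost_at, now, residence_time):
    """Steps (I)-(J): count the extinction time from the moment the
    marker disappears and stop output once it reaches the residence
    time. Times are in seconds; the interface is an assumption."""
    extinction_time = now - marker_lost_at    # (I) time since the marker vanished
    return extinction_time >= residence_time  # (J) stop if it meets or exceeds it
```

The caller would record the timestamp at which the recognition unit last saw the marker and poll this check each frame.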
Here, the extinction time of the step (I) may be a time counted from a time at which the marker is not recognized from the recognition unit.
The residence time of the step (J) may be a predetermined time for which the object output of the marker is maintained after the marker is detached from the recognition unit.
Here, the residence time of the step (J) may be set by: (J1) the control unit reading the cumulative recognition time for each marker from the storage unit and transferring it to the calculation unit; (J2) the calculation unit calculating the average recognition time of all the markers; and (J3) setting the residence time according to the cumulative recognition time of the individual marker relative to the calculated average recognition time of all the markers.
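Steps (J1) through (J3) can be sketched as follows. The patent states only that the residence time is set according to each marker's cumulative recognition time relative to the average, so the scaling by a base duration is an assumption.

```python
def residence_times(cumulative_recognition, base=1.0):
    """Steps (J1)-(J3): set each marker's residence time from its
    cumulative recognition time relative to the average over all
    markers. Scaling by a base duration is an assumption; the text
    only says the ratio determines the setting."""
    avg = sum(cumulative_recognition.values()) / len(cumulative_recognition)  # (J2)
    return {m: base * (t / avg) for m, t in cumulative_recognition.items()}   # (J3)
```

A marker recognized for longer than average thus keeps its object on screen longer after it disappears from view.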
According to another aspect of the present invention, there is provided an image processing apparatus comprising: a plurality of objects including a marker; And a user terminal for recognizing a marker included in the object, the user terminal comprising: an imaging unit for imaging an image including a target object; A control unit for recognizing a marker in the image received from the image sensing unit, reading an object corresponding to the marker, extracting a content story of the read object, and generating an output content; And an output unit for outputting the output content.
Here, the user terminal may further comprise a storage unit for storing information for generating the output content; The storage unit may include a marker storage unit for storing information for recognizing the marker, an object storage unit for storing an object linked to the marker, a content story storage unit for storing a time series operation according to the combination of objects, And a marker recognition time storage unit for storing the marker recognition time.
Here, the content story may be a time-series operation of an object or objects output according to a marker or a combination of markers recognized by the recognition unit.
The control unit may include: (a) receiving the image captured by the imaging unit and recognizing a marker of the target object through the recognition unit; (b) reading an object of the marker recognized through the recognition unit from a storage unit; (c) extracting a content story of the read object; and (d) generating output content using the extracted content story.
Also, the content story extraction in the step (c) may be randomly extracted according to the selection probability value to which the added value is added from the content stories of the recognized marker object.
Here, the calculation of the additive value in the step (c) may include: (c21) the control unit reading the cumulative output time for each content story from the storage unit; (c22) the calculation unit calculating the average output time of the entire content stories from the read cumulative output times; and (c23) the calculation unit calculating the additive value of the individual content story according to its output time relative to the average output time of the entire content stories.
The content story of the step (c) may be extracted according to the object identification direction of the terminal.
The control unit may include: (I) identifying the increase or decrease of the marker through the recognition unit; (II) storing an output time of an individual content story when identifying a marker change in the recognition unit; (III) storing the recognition time of the marker in the storage unit when the recognizing unit identifies a decrease in the marker in the step (I); (IV) counting the extinction time from the extinction point of the marker; And (V) stopping the output of the marker object if the extinction time of the marker is greater than or equal to the residence time.
Here, the extinction time of the (IV) step may be the time counted from the time when the marker is not recognized from the recognition unit.
The residence time of the step (V) may be a predetermined time for which the object output of the marker is maintained after the marker is detached from the recognition unit.
The residence time of the step (V) may be set by: (i) the control unit reading the cumulative recognition time for each marker from the storage unit and transferring it to the calculation unit; (ii) the calculation unit calculating the average recognition time of all the markers; and (iii) setting the residence time according to the cumulative recognition time of the individual marker relative to the calculated average recognition time of all the markers.
In the system and method for providing the augmented reality contents using the marker according to the present invention as described above, the following effects can be expected.
The present invention has an effect of cumulatively storing user's usage information and providing a user-customized story reflecting a user's taste.
Further, the present invention has an effect of providing a user with a content story combined with a marker recognition history even if objects including markers are not simultaneously recognized in a user terminal.
FIG. 1 is a block diagram showing the configuration of an augmented reality contents providing system using a marker according to the present invention.
FIG. 2 is an exemplary diagram showing an example in which contents are stored in a content story storage unit according to the present invention.
FIG. 3 is a flowchart illustrating a method of providing augmented reality contents using a marker according to the present invention.
FIG. 4 is a flowchart illustrating a content story extraction process according to the present invention.
FIG. 5 is a flowchart illustrating an additive value calculation process for a content story according to the present invention.
FIGS. 6A to 6D illustrate examples of content stories according to marker arrangements according to the present invention.
FIG. 7 is a flowchart illustrating a method of setting a residence time per marker according to the present invention.
Hereinafter, a system and method for providing augmented reality contents using a marker according to a specific embodiment of the present invention will be described with reference to the accompanying drawings.
FIG. 1 is a configuration diagram showing a configuration of an augmented reality contents providing system using a marker according to the present invention.
Hereinafter, a detailed embodiment of a system for providing augmented reality contents using a marker according to the present invention will be described in detail with reference to FIG.
The augmented reality contents providing system using a marker according to the present invention includes an object 100 including a marker and a user terminal 200 for recognizing the marker included in the object 100.
First, the
At this time, the marker may be included in the object 100.
In addition, the
Meanwhile, the
At this time, the content story refers to a time-series operation of an object output according to a marker or a combination of markers recognized by the recognition unit 222.
The
First, the
The
In addition, when the markers recognized by the recognition unit 222 increase or decrease, the control unit 224 stores the output time of the individual content story.
The
The
The
For this, the
At this time, as shown in FIG. 2, the content story storage unit 236 stores the content stories executable according to each combination of objects.
The
Hereinafter, a method for providing an augmented reality contents using a marker according to the present invention will be described in detail with reference to FIG.
FIG. 3 is a flowchart illustrating a method of providing augmented reality contents using a marker according to the present invention.
As shown in FIG. 3, the method of providing augmented reality contents using a marker according to the present invention starts with the recognition unit 222 of the user terminal 200 recognizing a marker of a target object in an image captured by the imaging unit.
Thereafter, the
Herein, the content story can be extracted in various ways. In a specific embodiment of the present invention, one content story is extracted at random from among a plurality of content stories, with the user's taste reflected in the selection; this method will be described in detail with reference to FIG. 4.
As shown in FIG. 4, in the method of extracting a content story according to a specific embodiment of the present invention, the control unit 224 first selects content stories according to the recognized object.
At this time, selection of a content story according to the recognized object refers to reading all content stories that can be executed according to the type of the recognized object, as shown in FIG.
Thereafter, the
Here, the addition value is given in accordance with the cumulative output time of the content story in order to increase the extraction probability of the user-preferred content story.
As shown in FIG. 5, the control unit 224 reads the cumulative output time for each content story from the storage unit and transfers it to the calculation unit, which calculates the additive value for each content story.
This is because, if the output time of an individual content story is longer than the average output time of the entire content stories, it is determined that the user prefers that content story and therefore kept the marker recognized longer; by giving a higher additive value to content stories with long output times, the output probability of the user's favorite content stories is increased.
For example, referring to FIG. 2, suppose a user recognizes the markers associated with an object A and an object B, and the S1 content story, in which the objects move around each other, is output. If the user finds the S1 content story uninteresting, it is output only briefly, while the S4 content story that the user finds interesting is output for a long time. Therefore, the present invention reflects the cumulative output time for each content story in order to provide content suited to the user's taste.
Meanwhile, the content story can be extracted in various manners. Hereinafter, a method of selecting a content story according to various embodiments of the present invention will be described.
The first method of selecting the content story according to the present invention is a method of extracting another content story according to the object identification direction of the user terminal.
For example, as shown in FIG. 6A, when the Pororo marker 1 is being picked up from the
6C, when the Pororo marker 1 is being picked up from the
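The first selection method, in which a different content story is extracted depending on the direction from which the terminal identifies the object, can be sketched as a simple lookup. The direction labels and story names here are illustrative assumptions.

```python
def story_for_direction(direction, direction_stories, default=None):
    """First selection method: extract a different content story
    according to the object identification direction of the user
    terminal. The direction labels are illustrative assumptions."""
    return direction_stories.get(direction, default)
```

In practice the direction would be estimated from the marker's pose relative to the camera; here it is taken as a precomputed label.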
The second method of selecting a content story according to the present invention is a method of recognizing a background image of an image captured by an image pickup unit and extracting a content story stored for each type according to the recognized background type.
For example, when the background image is recognized as a beach, the background image may be classified into an outdoor / waterfront space, and a content story allocated to the outdoor / waterfront space may be selected and output.
In this case, it is possible to implement a content story that is compatible with the user's real world by reflecting a background factor of a microscopic viewpoint.
A third method of selecting a content story according to the present invention is to acquire location information using the GPS information of the user terminal.
At this time, the location information is classified according to the area type, and the stored content stories classified by the area type can be matched and selected.
For example, the division of the regional type may be by city or country depending on the administrative area.
In this case, although there is a disadvantage that it is difficult to acquire location information inside buildings, outdoors the GPS information is used to determine where the user is located and a content story corresponding to that location is then extracted, which has the effect of providing location-appropriate content.
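A hedged sketch of this third, location-based selection method follows. The region classifier and the story table are assumed to be supplied by the caller, and the coordinates shown are illustrative.

```python
def story_by_region(lat_lon, region_classifier, region_stories):
    """Third selection method: classify the terminal's GPS position
    into a region type (e.g. by city or country, per administrative
    area) and match a stored content story to that type. The
    classifier is assumed to be supplied by the caller."""
    region_type = region_classifier(lat_lon)
    return region_stories.get(region_type)
```

The region classification could be as coarse as a country lookup or as fine as a city boundary test, matching the document's administrative-area division.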
Another end of the method for selecting a content story is to provide a content story by outputting the same content story for each taste of a user when there are a plurality of user terminals.
Specifically, when terminals of a plurality of users recognize the same object, users having the same taste output the same content story, and users with different taste output a content story corresponding to each taste.
To this end, the present invention performs interworking between adjacent user terminals through short-range communication when a plurality of users' terminals are close to each other.
Thereafter, the additive value matching rate for each content story is calculated according to the added value for each content story stored in the content story storage unit of the user terminals.
If the coincidence rate of the added value is equal to or larger than the set value, the users are judged to have the same taste and output the same content story.
On the other hand, when the coincidence rate of the additive value is less than the set value, the users judge that their preferences are different from each other, and output a content story randomly calculated by different additive values individually.
That is, users with similar tendencies can be shown the same content story, while users with different tendencies are each shown a content story suited to their own taste, which has the effect of forming a sense of fellowship among users sharing the same content story.
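The taste-matching decision described above can be sketched as follows. The patent does not specify how the match rate of the additive values is computed, so a simple averaged ratio over shared content stories is assumed here; the threshold corresponds to the "set value."

```python
def same_taste(values_a, values_b, threshold=0.8):
    """Compare the additive values stored on two nearby terminals and
    treat the users as having the same taste when the match rate meets
    the set value. The match-rate formula (ratio of smaller to larger
    value, averaged over shared stories) is an assumption."""
    shared = set(values_a) & set(values_b)
    if not shared:
        return False
    rate = sum(min(values_a[s], values_b[s]) / max(values_a[s], values_b[s])
               for s in shared) / len(shared)
    return rate >= threshold
```

Terminals returning `True` would synchronize on one extracted content story; those returning `False` would each extract their own weighted-random story.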
Meanwhile, the
Then, the
At this time, if the
If the
In operation S700, the
If it is detected that the marker has departed from the image, the recognition time of the marker is stored (S800), and the time of disappearance of the marker is counted (S900).
This is to prevent an object linked to a detached marker from continuing to be output beyond a predetermined time after the marker is detached from the recognition unit 222. The extinction time limit for the detached marker may be set to a predetermined time or may be set by the user.
Then, the
Here, the residence time is used to extend the output time of the user's preferred object by using the user's accumulated data, and a method of setting the residence time will be described in detail with reference to FIG. 7.
At this time, the residence time serves to keep a continuous content story running when a marker deviates contrary to the user's intention: the object is not immediately excluded from the content story, giving the user time to correct (re-recognize) the marker input and thereby avoiding such problems.
As shown in FIG. 7, the control unit 224 reads the cumulative recognition time for each marker from the storage unit and sets the residence time for each marker according to its ratio to the average recognition time of all markers.
This is because it is determined that the longer a marker is recognized by the recognition unit 222, the more the user prefers the object linked to that marker.
Accordingly, even if the user does not repeatedly re-recognize the marker of a preferred object, that object can be output for a longer time than other objects.
If it is determined in step 700 that the marker has not been detached but the increase in the marker has been detected,
Of course, in the present invention, even if the marker is not recognized in real time, an object linked to the marker may be maintained internally while its output is not delivered to the output unit 240.
It will be self-evident to those skilled in the art that various changes and modifications may be made without departing from the scope of the invention as defined in the appended claims.
The present invention relates to a system and a method for providing augmented reality contents using a marker, in which a combined content story is provided using markers recognized by a user terminal; more specifically, it provides a user-customized story reflecting the user's taste by cumulatively storing the user's usage information.
100: object 200: user terminal
210: image pickup unit 220:
222: recognition unit 224: control unit
230: storage unit 232: marker storage unit
234: Object storage unit 236: Content story storage unit
238: marker recognition time storage unit 240: output unit
Claims (20)
(A) a recognition unit recognizing a marker of a target object in an image captured by an imaging unit;
(B) a control unit reading an object of the recognized marker from the storage unit;
(C) extracting a content story about the object read by the control unit;
(D) generating output content through the object and the content story read out by the control unit; And
(E) outputting the output content;
The content story may include:
A time-series operation of an object or objects output according to a marker or a combination of markers recognized by the recognition unit,
The content story extraction in the step (C)
Wherein the control unit is randomly extracted according to a selection probability value to which an additive value is added from content stories of the recognized marker object,
Wherein, when the terminals of a plurality of users are close to each other, adjacent user terminals interwork through short-range communication; user terminals whose match rate of the additive value for each content story is equal to or greater than a set value extract and output a content story in a synchronized manner, and user terminals whose match rate is less than the set value individually extract and output a content story randomly calculated by different additive values. A method of providing augmented reality contents using a marker.
The calculation of the addition value may be performed,
(C21) the control unit reading the cumulative output time for each content story from the storage unit;
(C22) a calculation unit receiving the cumulative output time for each content story from the control unit and calculating the average output time of the entire content stories; And
(C23) calculating an additive value of an individual content story according to the ratio of its output time to the average output time of the entire content stories calculated by the calculation unit.
(F) identifying the increase or decrease of the marker by the recognition unit;
(G) storing the output time of the individual content story when the control unit discriminates a marker increase / decrease of the recognition unit;
(H) storing the recognition time of the marker when the recognition unit identifies a decrease in the marker in the (F) step;
(I) counting the extinction time from the extinction point of the marker; And
(J) stopping outputting of an object of the marker if the extinction time of the marker is greater than or equal to the residence time.
The extinction time of the step (I)
Wherein the marker is counted from a time at which the marker is not recognized from the recognition unit.
The residence time of the step (J)
Wherein the predetermined time is a predetermined time for maintaining the object output of the marker for a predetermined time after the marker is detached from the recognition unit.
The residence time of the step (J)
(J1) the control unit reading the cumulative recognition time by each marker from the storage unit and transmitting the same to the calculation unit;
(J2) calculating the average recognition time of all the markers by the calculation unit; And
(J3) setting the residence time according to the cumulative recognition time of the individual marker relative to the average recognition time of all the markers calculated by the calculation unit.
A plurality of objects including a marker; And a user terminal for recognizing the marker included in the object,
The user terminal comprises:
An imaging unit for imaging an image including a target object;
A controller for recognizing a marker in the image received from the image sensing unit, reading an object corresponding to the marker, extracting a content story of the read object, and generating an output content;
An output unit outputting the output content; And
And a storage unit for storing information for generating the output content;
Wherein,
A marker storage unit for storing information for recognizing the marker, an object storage unit for storing an object linked to the marker, a content story storage unit for storing a time-series operation according to the combination of objects, and a marker recognition time storage unit for storing the marker recognition time:
The content story may include:
A time-series operation of an object or objects output according to a marker or a combination of markers recognized by a recognition unit;
Wherein,
(a) receiving an image captured by the imaging unit and recognizing a marker of the object through the recognition unit;
(b) reading an object of a marker recognized through the recognition unit from a storage unit;
(c) extracting a content story of the read object; And
(d) generating output content using the extracted content story,
The content story extraction in the step (c)
Wherein the controller is randomly extracted according to a selection probability value to which an additive value is added, from content stories of a recognized marker object;
Wherein, when the terminals of a plurality of users are close to each other, adjacent user terminals interwork through short-range communication; user terminals whose match rate of the additive value for each content story is equal to or greater than a set value extract and output a content story in a synchronized manner, and user terminals whose match rate is less than the set value individually extract and output a content story randomly calculated by different additive values. A system for providing augmented reality contents using a marker.
The calculation of the addition value may be performed,
(c21) the control unit reading the cumulative output time for each content story from the storage unit;
(c22) the calculation unit calculating the average output time of the entire content stories from the transmitted cumulative output time for each content story; And
(c23) calculating the added value of the individual content story according to the output time of the individual content story with respect to the average output time of the entire content story by the calculating unit.
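Steps (c21) through (c23) can be sketched as follows. The claim does not fix the exact formula relating a story's output time to its additive value; the linear ratio to the average used here is an assumption:

```python
def additive_values(cumulative_times):
    """Steps (c21)-(c23): derive per-story additive values.

    `cumulative_times` maps a content-story id to its cumulative output
    time (c21). The average over all stories is computed (c22), and each
    story's additive value is sketched as the ratio of its cumulative
    output time to that average (c23) -- an assumed linear weighting.
    """
    average = sum(cumulative_times.values()) / len(cumulative_times)
    return {story: t / average for story, t in cumulative_times.items()}
```

Under this sketch a story watched longer than average gets a weight above 1 and is therefore more likely to be extracted; the opposite convention (favoring less-seen stories) would simply invert the ratio.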
wherein the method further comprises:
(I) identifying an increase or decrease of markers through the recognition unit;
(II) storing the output time of the individual content story when a marker change is identified by the recognition unit;
(III) storing the recognition time of the marker in the storage unit when the recognition unit identifies a decrease of markers in step (I);
(IV) counting the extinction time from the point at which the marker disappeared; and
(V) stopping the output of the marker's object if the extinction time of the marker is equal to or greater than the residence time.
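The disappearance handling of steps (I), (III), (IV) and (V) can be sketched as a small tracker that compares the previously and currently recognized marker sets. The class name and the injectable `now` parameter are illustrative conveniences, not part of the claim:

```python
import time


class MarkerWatcher:
    """Sketch of steps (I)/(III)-(V): keep a marker's object output alive
    for a residence time after the marker disappears, then stop it."""

    def __init__(self, residence_time):
        self.residence_time = residence_time   # seconds the object survives
        self.vanished_at = {}                  # marker id -> disappearance time

    def update(self, previous, current, now=None):
        """Compare the marker sets from two frames; return the markers
        whose object output should now stop."""
        now = time.monotonic() if now is None else now
        # Step (I)/(III): a marker present before but absent now has
        # decreased; record its recognition-loss (extinction start) time.
        for marker in previous - current:
            self.vanished_at.setdefault(marker, now)
        # A marker back in view resets its extinction counter.
        for marker in current:
            self.vanished_at.pop(marker, None)
        # Steps (IV)-(V): stop output once extinction time >= residence time.
        expired = {m for m, t0 in self.vanished_at.items()
                   if now - t0 >= self.residence_time}
        for m in expired:
            self.vanished_at.pop(m)
        return expired
```

Passing an explicit `now` makes the timing logic testable; in a live pipeline the monotonic clock default would be used on every frame.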
wherein the extinction time of step (IV)
is counted from the time at which the marker ceases to be recognized by the recognition unit;
and wherein the residence time of step (V)
is a predetermined time for which the object output of the marker is maintained after the marker has left the view of the recognition unit.
wherein the residence time of step (V) is calculated by:
(i) the control unit reading the cumulative recognition time of each marker from the storage unit and transmitting it to the calculation unit;
(ii) the calculation unit computing the average recognition time over all markers; and
(iii) the calculation unit computing the residence time from the cumulative recognition time of the individual marker relative to the average recognition time over all markers.
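Steps (i) through (iii) can be sketched in the same style. The claim does not specify a base residence time or the exact scaling function; the linear ratio times an assumed 2-second base below is purely illustrative:

```python
def residence_times(cumulative_recognition, base=2.0):
    """Steps (i)-(iii): per-marker residence times.

    `cumulative_recognition` maps a marker id to its cumulative
    recognition time (i). The average over all markers is computed (ii),
    and each marker's residence time is sketched as an assumed base of
    2 seconds scaled by the marker's cumulative recognition time relative
    to that average (iii).
    """
    average = sum(cumulative_recognition.values()) / len(cumulative_recognition)
    return {m: base * (t / average) for m, t in cumulative_recognition.items()}
```

The effect is that a marker the user has dwelt on longer keeps its object on screen longer after the marker leaves the camera view.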
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020160183582A KR101870276B1 (en) | 2016-12-30 | 2016-12-30 | System and method for providing augmented reality contents using markers |
Publications (1)
Publication Number | Publication Date |
---|---|
KR101870276B1 true KR101870276B1 (en) | 2018-06-22 |
Family
ID=62768247
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020160183582A KR101870276B1 (en) | 2016-12-30 | 2016-12-30 | System and method for providing augmented reality contents using markers |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101870276B1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102483609B1 (en) * | 2022-10-05 | 2023-01-02 | 신정옥 | Information playing system for flower art and method thereof |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100701784B1 (en) | 2005-12-08 | 2007-04-02 | 신믿음 | Method and apparatus of implementing an augmented reality by merging markers |
KR20110104676A (en) * | 2010-03-17 | 2011-09-23 | 에스케이텔레콤 주식회사 | Augmented reality system and method for realizing interaction between virtual object using the plural marker |
KR20140094892A (en) * | 2013-01-23 | 2014-07-31 | 에스케이플래닛 주식회사 | Method to recommend digital contents based on usage log and apparatus therefor |
KR20150025114A (en) * | 2013-08-28 | 2015-03-10 | 엘지전자 주식회사 | Apparatus and Method for Portable Device displaying Augmented Reality image |
KR20160132251A (en) * | 2015-05-08 | 2016-11-17 | 주식회사 세인테크 | Mat for study |
Similar Documents
Publication | Title |
---|---|
US9959681B2 (en) | Augmented reality contents generation and play system and method using the same | |
US9721388B2 (en) | Individual identification character display system, terminal device, individual identification character display method, and computer program | |
US10719993B1 (en) | Augmented reality system and method with space and object recognition | |
US10853966B2 (en) | Virtual space moving apparatus and method | |
US20110090252A1 (en) | Markerless augmented reality system and method using projective invariant | |
CN108351522A (en) | Direction of gaze maps | |
CN103189827A (en) | Object display device and object display method | |
KR101227237B1 (en) | Augmented reality system and method for realizing interaction between virtual object using the plural marker | |
US20180150967A1 (en) | Entity visualization method | |
CN110794955B (en) | Positioning tracking method, device, terminal equipment and computer readable storage medium | |
US10621787B2 (en) | Method and apparatus for overlaying a picture of a real scene with a virtual image, and mobile device | |
US20190073796A1 (en) | Method and Image Processing System for Determining Parameters of a Camera | |
CN110798677B (en) | Three-dimensional scene modeling method and device, electronic device, readable storage medium and computer equipment | |
US10726631B1 (en) | Augmented reality system and method with frame region recording and reproduction technology based on object tracking | |
JP2015230236A (en) | Merchandise guidance device, terminal equipment, merchandise guidance method, and program | |
KR101703013B1 (en) | 3d scanner and 3d scanning method | |
CN108629799B (en) | Method and equipment for realizing augmented reality | |
JP5735861B2 (en) | Image display program, image display apparatus, image display method, image display system, marker | |
US11520409B2 (en) | Head mounted display device and operating method thereof | |
US11610375B2 (en) | Modulated display AR tracking systems and methods | |
KR101870276B1 (en) | System and method for providing augmented reality contents using markers | |
JP2019096062A (en) | Object tracking device, object tracking method, and object tracking program | |
KR20090000777A (en) | Augmented reality system using tangible object and method for providing augmented reality | |
CN108896035B (en) | Method and equipment for realizing navigation through image information and navigation robot | |
JP7012485B2 (en) | Image information processing device and image information processing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant |