KR101870276B1 - System and method for providing augmented reality contents using markers - Google Patents

System and method for providing augmented reality contents using markers Download PDF

Info

Publication number
KR101870276B1
KR101870276B1 (application KR1020160183582A)
Authority
KR
South Korea
Prior art keywords
marker
content
time
story
unit
Prior art date
Application number
KR1020160183582A
Other languages
Korean (ko)
Inventor
최인호
Original Assignee
주식회사 픽스게임즈
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 픽스게임즈
Priority to KR1020160183582A
Application granted
Publication of KR101870276B1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835 Generation of protective data, e.g. certificates
    • H04N21/8355 Generation of protective data, e.g. certificates involving usage data, e.g. number of copies or viewings allowed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker

Abstract

The present invention relates to a system and a method for providing augmented reality content using markers, which provide a combined content story by using markers recognized by a user terminal, and more specifically, to a system and a method for providing augmented reality content using markers, which provide a customized story reflecting a taste of a user by accumulating and storing usage history of the user. The method for providing augmented reality content using markers comprises (A) a step in which a recognition unit recognizes a marker of a target in an image photographed by a photographing unit; (B) a step in which a control unit reads an object of the recognized marker from a storage unit; (C) a step in which the control unit extracts a content story for the read object; (D) a step in which the control unit generates output content through the read object and the content story; and (E) a step in which an output unit outputs the output content. According to the present invention, it is possible to provide a user-customized story reflecting a taste of a user by accumulating and storing usage information of the user.

Description

Technical Field [0001] The present invention relates to a system and a method for providing augmented reality contents using a marker.

The present invention relates to a system and a method for providing augmented reality contents using a marker that provides a combined content story using markers recognized by a user terminal, and more specifically, to a system and method that provides a user-customized story reflecting the user's taste by accumulating and storing the user's usage history.

Virtual Reality (VR) is a human-computer interface that simulates a specific environment or situation by computer and makes the person using it feel as if they were interacting with the actual environment, so that environments difficult to experience directly can be shown and manipulated as if the user were present in them.

Augmented Reality (AR) is a technology derived from virtual reality (VR) that superimposes virtual objects on the real world.

In other words, virtual reality interacts with the user entirely through a virtual environment created by computer graphics, whereas augmented reality improves the sense of reality by letting the user interact with virtual objects grounded in the real world.

Therefore, in the augmented reality, the user recognizes the actual environment in which the user is present and also recognizes the virtual information expressed on the actual image.

In this way, the augmented reality combines the reality image with the virtual graphic, so that the virtual objects must be accurately positioned on the screen in order to obtain more realistic and accurate images.

In order to realize this, three-dimensional coordinates of the virtual object are required, and these coordinates should be coordinate values based on the camera.

However, in order to acquire the three-dimensional coordinates of a certain point or object in the real world with respect to the camera, two or more cameras are required, so it is not easy to determine the three-dimensional position.

Accordingly, a marker-based recognition technique has been developed that places a marker for positioning a virtual object in real-world space and extracts relative coordinates based on the marker to estimate the position and posture at which to place the virtual object.

For example, Korean Patent Registration No. 10-0701784 discloses a technique for realizing AR that easily, intuitively, and quickly outputs efficient and various three-dimensional virtual images at appropriate positions by utilizing the fusion of markers.

However, since the conventional art fuses and outputs the three-dimensional images corresponding to the markers simultaneously recognized by the terminal, there is a problem that contents including various stories cannot be provided to the user.

(Patent Document 001) Korean Patent Registration No. 10-0701784

SUMMARY OF THE INVENTION The present invention has been made to solve the above-mentioned problems, and it is an object of the present invention to provide a system and method for providing augmented reality contents using markers that accumulate the user's usage information in order to provide a user-customized story reflecting the user's taste.

It is another object of the present invention to provide a system and method for providing an augmented reality content using a marker capable of providing a user with a content story combined with a recognition history of a marker even if objects including markers are not simultaneously recognized in a user terminal.

According to an aspect of the present invention for achieving the above objects, there is provided a method of providing augmented reality contents using markers, comprising the steps of: (A) a recognition unit recognizing a marker of a target object in an image captured by an imaging unit; (B) a control unit reading the object of the recognized marker from a storage unit; (C) the control unit extracting a content story for the read object; (D) the control unit generating output content through the read object and the content story; and (E) an output unit outputting the output content.

Here, the content story may be a time-series operation of an object or objects output according to a marker or combination of markers recognized by the recognition unit.

In addition, the content story of step (C) may be extracted at random by the control unit according to a selection probability weighted by an added value, from among the content stories of the recognized marker's object.

At this time, the added value may be calculated through: (C21) the control unit reading the cumulative output time for each content story from the storage unit; (C22) a calculation unit receiving the cumulative output times for the content stories from the control unit and calculating the average output time over all the content stories; and (C23) calculating the added value of each individual content story according to the ratio of its output time to the average output time calculated by the calculation unit.

At this time, the content story of the step (C) may be extracted according to the object identification direction of the terminal.

On the other hand, the present invention may further comprise: (F) identifying, by the recognition unit, an increase or decrease of the markers; (G) storing the output time of the individual content story when the control unit detects an increase or decrease of the markers; (H) storing the recognition time of a marker when the recognition unit identifies a decrease of the markers in step (F); (I) counting the extinction time from the point at which the marker disappeared; and (J) stopping the output of the marker's object if the extinction time of the marker is greater than or equal to the residence time.

Here, the extinction time of step (I) may be the time counted from the moment the marker is no longer recognized by the recognition unit.

The residence time of step (J) may be a predetermined time during which the output of the marker's object is maintained after the marker leaves the recognition unit.

Here, the residence time of step (J) may be set through: (J1) the control unit reading the cumulative recognition time for each marker from the storage unit and transferring it to the calculation unit; (J2) the calculation unit calculating the average recognition time of all the markers; and (J3) setting the residence time per marker according to the cumulative recognition time of the individual marker relative to the calculated average recognition time of all the markers.

According to another aspect of the present invention, there is provided a system for providing augmented reality contents, comprising: a plurality of objects each including a marker; and a user terminal for recognizing the markers included in the objects, the user terminal comprising: an imaging unit for capturing an image including a target object; a control unit for recognizing a marker in the image received from the imaging unit, reading the object corresponding to the marker, extracting a content story of the read object, and generating output content; and an output unit for outputting the output content.

Here, the user terminal may further comprise a storage unit for storing information for generating the output content, and the storage unit may include a marker storage unit for storing information for recognizing the markers, an object storage unit for storing objects linked to the markers, a content story storage unit for storing time-series operations according to combinations of objects, and a marker recognition time storage unit for storing the recognition time of each marker.

Here, the content story may be a time-series operation of an object or objects output according to a marker or combination of markers recognized by the recognition unit.

The control unit may: (a) receive the image captured by the imaging unit and recognize the marker of the target object through the recognition unit; (b) read the object of the marker recognized through the recognition unit from the storage unit; (c) extract a content story of the read object; and (d) generate output content using the extracted content story.

Also, the content story extraction in step (c) may be performed at random according to a selection probability weighted by the added value, from among the content stories of the recognized marker's object.

Here, the added value used in step (c) may be calculated through: (c21) the control unit reading the cumulative output time for each content story from the storage unit; (c22) transmitting the read cumulative output times to the calculation unit and calculating the average output time of all the content stories; and (c23) calculating the added value of each individual content story according to its output time relative to the average output time of all the content stories.

The content story of the step (c) may be extracted according to the object identification direction of the terminal.

The control unit may further: (I) identify an increase or decrease of the markers through the recognition unit; (II) store the output time of the individual content story when an increase or decrease of the markers is identified; (III) store the recognition time of a marker in the storage unit when the recognition unit identifies a decrease of the markers in step (I); (IV) count the extinction time from the point at which the marker disappeared; and (V) stop the output of the marker's object if the extinction time of the marker is greater than or equal to the residence time.

Here, the extinction time of step (IV) may be the time counted from the moment the marker is no longer recognized by the recognition unit.

The residence time of step (V) may be a predetermined time during which the output of the marker's object is maintained after the marker leaves the recognition unit.

The residence time of step (V) may be set through: (i) the control unit reading the cumulative recognition time for each marker from the storage unit and transferring it to the calculation unit; (ii) the calculation unit calculating the average recognition time of all the markers; and (iii) setting the residence time per marker according to the cumulative recognition time of the individual marker relative to the calculated average recognition time of all the markers.

In the system and method for providing the augmented reality contents using the marker according to the present invention as described above, the following effects can be expected.

The present invention has an effect of cumulatively storing user's usage information and providing a user-customized story reflecting a user's taste.

Further, the present invention has an effect of providing a user with a content story combined with a marker recognition history even if objects including markers are not simultaneously recognized in a user terminal.

FIG. 1 is a block diagram showing the configuration of an augmented reality contents providing system using a marker according to the present invention.
FIG. 2 is an exemplary diagram showing contents stored in a content story storage unit according to the present invention.
FIG. 3 is a flowchart illustrating a method of providing augmented reality contents using a marker according to the present invention.
FIG. 4 is a flowchart illustrating a content story extraction process according to the present invention.
FIG. 5 is a flowchart illustrating an added-value calculation process for content stories according to the present invention.
FIGS. 6A to 6D illustrate examples of content stories according to marker arrangement according to the present invention.
FIG. 7 is a flowchart illustrating a method of setting a residence time per marker according to the present invention.

Hereinafter, a system and method for providing augmented reality contents using a marker according to a specific embodiment of the present invention will be described with reference to the accompanying drawings.

FIG. 1 is a configuration diagram showing a configuration of an augmented reality contents providing system using a marker according to the present invention.

Hereinafter, a detailed embodiment of a system for providing augmented reality contents using a marker according to the present invention will be described in detail with reference to FIG.

The augmented reality contents providing system using a marker according to the present invention includes a target object 100 and a user terminal 200.

First, the object 100 is an object including a marker and is recognized by the user terminal 200.

At this time, the marker may be included in the object 100 in the form of a geometric figure, a specific pattern, an asymmetric arrangement, or the like.

In addition, the object 100 may be formed in various shapes such as a circle, a rectangle, a triangle, and a diamond.

Meanwhile, the user terminal 200 recognizes the marker included in the object 100, reads the object corresponding to the recognized marker, extracts a content story between the recognized markers, and generates output content, thereby implementing the augmented reality.

At this time, the content story refers to an operation of an object output according to a marker recognized by the user terminal 200.

The user terminal 200 includes an image sensing unit 210, a control unit 220, a storage unit 230, and an output unit 240.

First, the image sensing unit 210 captures an image and transmits the sensed image to the control unit 220 and the output unit 240.

The control unit 220 recognizes a marker through the recognition unit 222 by comparing the captured image received from the image sensing unit 210 with the markers stored in the marker storage unit 232 of the storage unit 230, reads the object of the marker recognized through the recognition unit 222, extracts a content story for the recognized markers from the storage unit 230 to generate output content, and transfers the generated content to the output unit 240 to control the implementation of the augmented reality.

In addition, when the marker recognized by the recognition unit 222 is increased or decreased, the control unit 220 stores the individual content output time in the storage unit 230.

When a marker leaves the recognition unit 222, the control unit 220 stores the recognition time of the marker in the storage unit 230 and counts the extinction time; if the extinction time reaches or exceeds the residence time, the object corresponding to the marker is no longer output.

In addition, the control unit 220 calculates the average recognition time of all markers through the calculation unit 224 and sets the residence time of each marker according to its cumulative recognition time relative to that average. The method of setting the residence time will be described in detail with reference to FIG. 7 below.

The storage unit 230 stores the shapes of markers so that markers can be recognized in images captured by the imaging unit 210, stores the objects linked to each marker so that the corresponding object can be output, and stores the information needed to implement content stories for recognized markers as augmented reality.

For this, the storage unit 230 includes a marker storage unit 232 for storing marker shapes, an object storage unit 234 for storing objects linked to the markers, a content story storage unit 236 for storing content stories according to combinations of objects, and a marker recognition time storage unit 238 for storing the recognized time of each marker.

At this time, as shown in FIG. 2, the content story storage unit 236 may store the object information, the content story of each object, the cumulative output time of the content story, and the added value information of the content story.
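As an illustrative sketch, one record of such a content story storage unit could be modeled as follows; the class and field names are assumptions for illustration, since the patent specifies only what is stored, not how:

```python
from dataclasses import dataclass

@dataclass
class ContentStory:
    """One record of the content story storage unit (cf. FIG. 2).

    Field names are illustrative assumptions; the patent specifies only
    that object information, the content story, its cumulative output
    time, and its added value are stored.
    """
    objects: tuple                        # object combination the story belongs to, e.g. ("A", "B")
    story_id: str                         # e.g. "S1"
    cumulative_output_time: float = 0.0   # total seconds this story has been output
    added_value: float = 1.0              # selection weight, recalculated from output times

# a minimal content story storage unit for the object pair (A, B)
story_storage = [
    ContentStory(("A", "B"), "S1", cumulative_output_time=10.0),
    ContentStory(("A", "B"), "S4", cumulative_output_time=50.0),
]
```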

The output unit 240 receives the output content from the controller 220 and outputs the received content to implement an augmented reality.

Hereinafter, a method for providing an augmented reality contents using a marker according to the present invention will be described in detail with reference to FIG.

3 is a flowchart illustrating a method of providing an augmented reality content using a marker according to the present invention.

As shown in FIG. 3, the method of providing an augmented reality contents using a marker according to the present invention starts with the recognition unit 222 of the user terminal 200 reading an object of a recognized marker (S100).

Thereafter, the controller 220 extracts a content story between the recognized markers (S200).

Here, the content story can be extracted in various ways. In a specific embodiment of the present invention, one content story is extracted at random from a plurality of content stories, with the user's taste reflected in the selection. This method will be described in detail with reference to FIG. 4.

As shown in FIG. 4, in the method of extracting a content story according to a specific embodiment of the present invention, the controller 220 selects a content story according to the recognized object (S210).

At this time, selection of a content story according to the recognized object refers to reading all content stories that can be executed according to the type of the recognized object, as shown in FIG.

Thereafter, the controller 220 randomly extracts a content story according to the selection probability values (S220).

Here, an added value is assigned according to the cumulative output time of each content story in order to increase the extraction probability of the user's preferred content stories.

As shown in FIG. 5, the control unit 220 reads the cumulative output time for each content story from the content story storage unit 236 (S2100) and transfers it to the calculation unit 224. After the calculation unit 224 calculates the average output time of all the content stories, it calculates the added value of each individual content story according to the ratio of that story's output time to the calculated average (S2200).

This is because, if the output time of an individual content story is longer than the average output time of all content stories, the marker's recognition time is judged to have been long because the user favors that content story; by giving a high added value to content stories with long output times, the output probability of the user's favorite content stories is increased.
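A minimal sketch of this weighting scheme, assuming the added value is simply the ratio of a story's cumulative output time to the average over all candidate stories (the patent does not fix the exact formula):

```python
import random

def added_values(output_times):
    """Weight each content story by its cumulative output time
    relative to the average over all candidate stories."""
    total = sum(output_times.values())
    if total == 0:
        # no viewing history yet: every story gets an equal weight
        return {story: 1.0 for story in output_times}
    average = total / len(output_times)
    return {story: t / average for story, t in output_times.items()}

def pick_story(output_times, rng=random):
    """Randomly extract one content story, biased toward stories
    the user has watched for longer (higher added value)."""
    weights = added_values(output_times)
    stories = list(weights)
    return rng.choices(stories, weights=[weights[s] for s in stories], k=1)[0]

# FIG. 2 example: S4 was watched far longer than S1
weights = added_values({"S1": 10.0, "S4": 50.0})
```

With these figures, S4 ends up five times as likely to be extracted as S1, which is exactly the bias toward the user's favorite stories described above.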

For example, referring to FIG. 2, suppose a user recognizes the markers of object A and object B. If content story S1, in which the objects move around each other, is output and the user finds it dull, the user may manipulate the user terminal 200 to see a different content story; whereas if content story S4, in which the objects confront each other, is output and the user finds it interesting, the user will keep watching it.

In this case, the S1 content story, which the user finds dull, is output only briefly, while the S4 content story, which the user finds interesting, is output for a long time. The present invention therefore reflects the cumulative output time of each content story to provide a user-customized story.

Meanwhile, the content story can be extracted in various manners. Hereinafter, a method of selecting a content story according to various embodiments of the present invention will be described.

The first method of selecting the content story according to the present invention is a method of extracting another content story according to the object identification direction of the user terminal.

For example, as shown in FIG. 6A, when the Pororo marker 1 is being captured by the user terminal 200 and the user terminal 200 is moved to the left so that marker 2, located to the left of marker 1, is additionally recognized, a content story in which marker 1 and marker 2 cross paths together is extracted and implemented as augmented reality in the user terminal 200, as shown in FIG. 6B.

Conversely, as shown in FIG. 6C, when the Pororo marker 1 is being captured by the user terminal 200 and the user terminal 200 is moved to the right so that marker 2, located to the right of marker 1, is additionally recognized, a content story in which marker 1 (Pororo) and marker 2 (Crong) take an angry form is extracted and implemented as augmented reality in the user terminal 200, as shown in FIG. 6D.

The second method of selecting a content story according to the present invention is a method of recognizing a background image of an image captured by an image pickup unit and extracting a content story stored for each type according to the recognized background type.

For example, when the background image is recognized as a beach, the background image may be classified into an outdoor / waterfront space, and a content story allocated to the outdoor / waterfront space may be selected and output.

In this case, a content story well matched to the user's real surroundings can be implemented by reflecting background factors at a micro level.
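A toy sketch of this second method, assuming a background recognizer that yields a coarse label such as "beach" and a hand-made grouping of labels into space types (all names below are illustrative, not from the patent):

```python
# illustrative grouping of recognized background labels into space types
SPACE_TYPE = {
    "beach": "outdoor/waterfront",
    "lake": "outdoor/waterfront",
    "living_room": "indoor",
}

# hypothetical content stories allocated to each space type
STORIES_BY_TYPE = {
    "outdoor/waterfront": ["swim", "sandcastle"],
    "indoor": ["nap", "play"],
}

def stories_for_background(label, default_type="indoor"):
    """Map a recognized background label to its space type and return
    the content stories allocated to that type."""
    space = SPACE_TYPE.get(label, default_type)
    return STORIES_BY_TYPE[space]
```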

A third method of selecting a content story according to the present invention is to acquire location information using the GPS information of the user terminal 200 and extract a content story matching the location information.

At this time, the location information is classified according to the area type, and the stored content stories classified by the area type can be matched and selected.

For example, the division of the regional type may be by city or country depending on the administrative area.

In this case, although there is a disadvantage that it is difficult to acquire location information inside buildings, outdoors the GPS information is used to determine where the user is located and a content story corresponding to that location is extracted, so that a story matching the user's surroundings can be provided.
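This third method could be sketched as follows, with a caller-supplied reverse_geocode function standing in for the platform's GPS-to-region lookup (the function, the region table, and the story names are all assumptions for illustration):

```python
# hypothetical content story table keyed by administrative area (city level)
STORIES_BY_REGION = {
    "Seoul": ["city_story"],
    "Busan": ["harbor_story"],
}

def story_options(lat, lon, reverse_geocode, fallback=("default_story",)):
    """Return the content stories matched to the user's location.

    reverse_geocode is assumed to be provided by the platform and to map
    GPS coordinates to an administrative-area name; indoors, where a GPS
    fix is unreliable, it may return None and the fallback is used.
    """
    region = reverse_geocode(lat, lon)
    return STORIES_BY_REGION.get(region, list(fallback))
```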

Another method of selecting a content story is, when there are a plurality of user terminals, to output the same content story to users who share the same taste.

Specifically, when terminals of a plurality of users recognize the same object, users having the same taste output the same content story, and users with different taste output a content story corresponding to each taste.

To this end, the present invention performs interworking between adjacent user terminals through short-range communication when a plurality of users' terminals are close to each other.

Thereafter, the matching rate of the added values for each content story is calculated from the added values stored in the content story storage units of the user terminals.

If the coincidence rate of the added values is equal to or greater than a set value, the users are judged to have the same taste, and the same content story is output.

On the other hand, if the coincidence rate of the added values is less than the set value, the users are judged to have different tastes, and each terminal individually outputs a content story randomly selected according to its own added values.

That is, users with similar tendencies can be shown the same content story, while users with different tendencies are each shown a content story suited to their own taste, which has the effect of forming a bond among users.
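Under the assumption that the coincidence rate is a normalized similarity between the two terminals' added-value profiles (the patent does not specify the measure), the decision could be sketched as:

```python
def match_rate(weights_a, weights_b):
    """Similarity of two terminals' added-value profiles over shared stories.

    Uses 1 - (sum of absolute differences / sum of magnitudes) as an
    assumed coincidence measure; returns 1.0 for identical profiles.
    """
    stories = weights_a.keys() & weights_b.keys()
    diff = sum(abs(weights_a[s] - weights_b[s]) for s in stories)
    scale = sum(weights_a[s] + weights_b[s] for s in stories)
    return 1.0 - diff / scale if scale else 1.0

def same_taste(weights_a, weights_b, threshold=0.8):
    """Terminals output the same story iff their profiles match closely enough."""
    return match_rate(weights_a, weights_b) >= threshold
```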

Meanwhile, the controller 220 generates an output content according to the extracted content story (S300), transfers the generated output content to the output unit 240, and causes the output unit 240 to implement the augmented reality (S400).

Then, the control unit 220 determines whether the marker is increased or decreased in the image captured and transmitted from the image sensing unit 210 (S500).

At this time, if the control unit 220 identifies an increase or a decrease of the marker in the image, the output time of the individual content story is stored (S600).

If the control unit 220 does not identify the increase or decrease of the marker in the image, the output unit 240 continuously outputs the output contents to implement the augmented reality.

In operation S700, the control unit 220 determines whether the increase or decrease of the marker detected in operation S500 is a departure of the marker from the image.

If it is detected that the marker has departed from the image, the recognition time of the marker is stored (S800), and the time of disappearance of the marker is counted (S900).

This is to prevent the object linked to the detached marker from continuing to be output beyond a predetermined time after the marker leaves the recognition unit 222; the residence time of the detached marker may be set to a default value or set by the user.

Then, the control unit 220 determines whether the extinction time is less than the residence time set for the marker (S1000). If the extinction time is greater than or equal to the residence time of the marker, the output of the object linked to that marker is terminated.
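Steps S800 through S1000 can be sketched as a small tracker that counts each departed marker's extinction time and keeps its object output until the residence time is reached. Class and method names are illustrative; the patent specifies only the behavior.

```python
import time

class MarkerTimer:
    """Tracks each departed marker's extinction time and keeps its object
    output until the extinction time reaches that marker's residence time
    (steps S800-S1000). Names are assumptions, not the patent's API."""

    def __init__(self, residence):
        self.residence = residence   # marker id -> residence time (seconds)
        self.lost_at = {}            # marker id -> moment it left the image

    def update(self, visible, now=None):
        """Return the markers whose objects should still be output."""
        now = time.monotonic() if now is None else now
        for m in visible:
            self.lost_at.pop(m, None)            # re-recognized: reset the count
        for m in self.residence:
            if m not in visible:
                self.lost_at.setdefault(m, now)  # start counting on departure
        # keep outputting while extinction time < residence time (S1000)
        return [m for m in self.residence
                if m in visible or now - self.lost_at[m] < self.residence[m]]
```

Note how re-recognizing a marker clears its extinction count, matching the "correction (re-recognition)" behavior described for the residence time.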

Here, the residence time is used to extend the output time of the user's preferred objects based on the user's accumulated data; the method of setting the residence time will be described in detail with reference to FIG. 7.

The residence time also serves to keep the content story continuous: when a marker departs contrary to the user's intention, the object is not immediately excluded from the content story, giving the user time to correct (re-recognize) the marker input.

Referring to FIG. 7, the control unit 220 reads the cumulative recognition time for each marker from the marker recognition time storage unit 238 (S2100) and transfers it to the calculation unit 224. After the calculation unit 224 calculates the average recognition time of all markers (S2200), it sets the residence time of each marker according to the ratio of that marker's cumulative recognition time to the calculated average recognition time of all markers (S2300).

This is because the longer a marker is recognized by the user terminal 200, the more strongly its linked object is judged to be one the user prefers, so the output time of the object linked to that marker is extended even while the marker is not being recognized by the user terminal 200.

Accordingly, even if the user does not repeatedly re-recognize the marker of a preferred object, its image can be output for a longer time than those of other objects.
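A minimal sketch of the residence-time calculation of steps S2100 through S2300, assuming a base residence time `base` (the patent does not fix the scaling constant, so a simple ratio to the average is used here):

```python
def residence_times(cumulative, base=3.0):
    """Set each marker's residence time in proportion to its cumulative
    recognition time relative to the average over all markers (S2100-S2300).
    `base` is an assumed default residence time in seconds."""
    if not cumulative:
        return {}
    avg = sum(cumulative.values()) / len(cumulative)
    if avg == 0:
        return {m: base for m in cumulative}
    return {m: base * (t / avg) for m, t in cumulative.items()}
```

With this scaling, a marker recognized for 30 s when the average is 20 s keeps its object on screen 1.5 times longer than the base residence time.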

If it is determined in step S700 that a marker has not departed but has instead been added, steps S100 through S400 are performed again. Likewise, if the extinction time is less than the residence time in step S1000, steps S100 through S400 are repeated, so that the output content of the markers recognized by the user terminal 200 continues to be output and the augmented reality is maintained.

Of course, in the present invention, even if a marker is not recognized in real time, the object linked to that marker may be generated without being output to the output unit 240, thereby limiting the number of objects included in the output content.

It will be understood by those skilled in the art that various changes and modifications may be made without departing from the scope of the invention as defined in the appended claims.

The present invention relates to a system and method for providing augmented reality content that delivers a combined content story using markers recognized by a user terminal. More specifically, the present invention cumulatively stores the user's usage information and thereby provides a user-customized story reflecting the user's taste.

100: object                          200: user terminal
210: imaging unit                    220: control unit
222: recognition unit                224: calculation unit
230: storage unit                    232: marker storage unit
234: object storage unit             236: content story storage unit
238: marker recognition time storage unit    240: output unit

Claims (20)

(A) a recognition unit recognizing a marker of a target object in an image captured by an imaging unit;
(B) a control unit reading the object of the recognized marker from a storage unit;
(C) the control unit extracting a content story for the read object;
(D) the control unit generating output content from the read object and the content story; and
(E) outputting the output content,
wherein the content story includes:
a time-series operation of an object or objects output according to a marker or combination of markers recognized by the recognition unit,
wherein the content story extraction in step (C) is
performed by the control unit randomly extracting, from the content stories of the recognized marker's object, according to a selection probability value to which an additive value is applied, and
when the terminals of a plurality of users are close to each other, adjacent user terminals interwork through short-range communication; user terminals whose match rate of the additive values is equal to or greater than a set value extract and output a content story in a synchronized manner, and user terminals whose match rate is less than the set value each individually extract and output a content story randomly calculated using their own additive values. A method for providing augmented reality content using markers.
delete
delete
The method according to claim 1,
wherein the calculation of the additive value comprises:
(C21) the control unit reading the cumulative output time for each content story from the storage unit;
(C22) the calculation unit receiving the cumulative output time for each content story from the control unit and calculating an average output time over all content stories; and
(C23) the calculation unit calculating the additive value of each individual content story according to the ratio of that story's output time to the calculated average output time over all content stories.
delete
The method according to claim 1 or 4, further comprising:
(F) the recognition unit identifying an increase or decrease of the markers;
(G) the control unit storing the output time of each individual content story when the recognition unit identifies a marker increase or decrease;
(H) storing the recognition time of the marker when the recognition unit identifies a decrease of the marker in step (F);
(I) counting the extinction time from the moment the marker disappears; and
(J) stopping the output of the marker's object if the extinction time of the marker is greater than or equal to the residence time.
The method according to claim 6,
wherein the extinction time of step (I) is counted from the time at which the marker is no longer recognized by the recognition unit.
8. The method of claim 7,
wherein the residence time of step (J) is a predetermined time for which the output of the marker's object is maintained after the marker departs from the recognition unit.
9. The method of claim 8,
wherein the residence time of step (J) is set by:
(J1) the control unit reading the cumulative recognition time for each marker from the storage unit and transmitting it to the calculation unit;
(J2) the calculation unit calculating the average recognition time of all markers; and
(J3) setting the residence time of each marker according to the ratio of that marker's cumulative recognition time to the calculated average recognition time of all markers.
A plurality of objects, each including a marker; and
a user terminal for recognizing the marker included in an object,
wherein the user terminal comprises:
an imaging unit for capturing an image including a target object;
a control unit for recognizing a marker in the image received from the imaging unit, reading the object corresponding to the marker, extracting a content story of the read object, and generating output content;
an output unit for outputting the output content; and
a storage unit for storing information for generating the output content,
wherein the storage unit comprises:
a marker storage unit for storing information for recognizing markers, an object storage unit for storing the objects linked to markers, a content story storage unit for storing the time-series operations of object combinations, and a marker recognition time storage unit for storing the recognition time of each marker,
wherein the content story includes:
a time-series operation of an object or objects output according to a marker or combination of markers recognized by the recognition unit,
wherein the control unit performs:
(a) receiving the image captured by the imaging unit and recognizing a marker of the object through the recognition unit;
(b) reading the object of the marker recognized through the recognition unit from the storage unit;
(c) extracting a content story of the read object; and
(d) generating and outputting the output content using the extracted content story,
wherein the content story extraction in step (c) is
performed by the control unit randomly extracting, from the content stories of the recognized marker's object, according to a selection probability value to which an additive value is applied, and
when the terminals of a plurality of users are close to each other, adjacent user terminals interwork through short-range communication; user terminals whose match rate of the additive values is equal to or greater than a set value extract and output a content story in a synchronized manner, and user terminals whose match rate is less than the set value each individually extract and output a content story randomly calculated using their own additive values. A system for providing augmented reality content using markers.
delete
delete
delete
delete
11. The system of claim 10,
wherein the calculation of the additive value comprises:
(c21) the control unit reading the cumulative output time for each content story from the storage unit;
(c22) the calculation unit receiving the read cumulative output time for each content story and calculating an average output time over all content stories; and
(c23) the calculation unit calculating the additive value of each individual content story according to the ratio of that story's output time to the calculated average output time over all content stories.
delete
16. The system according to claim 10 or 15,
wherein the control unit further performs:
(I) identifying an increase or decrease of the markers through the recognition unit;
(II) storing the output time of each individual content story when identifying a marker change in the recognition unit;
(III) storing the recognition time of the marker in the storage unit when the recognition unit identifies a decrease of the marker in step (I);
(IV) counting the extinction time from the moment the marker disappears; and
(V) stopping the output of the marker's object if the extinction time of the marker is greater than or equal to the residence time.
18. The system of claim 17,
wherein the extinction time of step (IV) is counted from the time at which the marker is no longer recognized by the recognition unit.
19. The system of claim 18,
wherein the residence time of step (V) is a predetermined time for which the output of the marker's object is maintained after the marker departs from the recognition unit.
20. The system of claim 19,
wherein the residence time of step (V) is set by:
(i) the control unit reading the cumulative recognition time for each marker from the storage unit and transmitting it to the calculation unit;
(ii) the calculation unit calculating the average recognition time of all markers; and
(iii) setting the residence time of each marker according to the ratio of that marker's cumulative recognition time to the calculated average recognition time of all markers.
KR1020160183582A 2016-12-30 2016-12-30 System and method for providing augmented reality contents using markers KR101870276B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160183582A KR101870276B1 (en) 2016-12-30 2016-12-30 System and method for providing augmented reality contents using markers

Publications (1)

Publication Number Publication Date
KR101870276B1 (en) 2018-06-22

Family

ID=62768247

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160183582A KR101870276B1 (en) 2016-12-30 2016-12-30 System and method for providing augmented reality contents using markers

Country Status (1)

Country Link
KR (1) KR101870276B1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102483609B1 (en) * 2022-10-05 2023-01-02 신정옥 Information playing system for flower art and method theof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100701784B1 (en) 2005-12-08 2007-04-02 신믿음 Method and apparatus of implementing an augmented reality by merging markers
KR20110104676A (en) * 2010-03-17 2011-09-23 에스케이텔레콤 주식회사 Augmented reality system and method for realizing interaction between virtual object using the plural marker
KR20140094892A (en) * 2013-01-23 2014-07-31 에스케이플래닛 주식회사 Method to recommend digital contents based on usage log and apparatus therefor
KR20150025114A (en) * 2013-08-28 2015-03-10 엘지전자 주식회사 Apparatus and Method for Portable Device displaying Augmented Reality image
KR20160132251A (en) * 2015-05-08 2016-11-17 주식회사 세인테크 Mat for study



Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant