WO2016099189A1 - Content display method using a magnet or the like, and user terminal performing the same - Google Patents
- Publication number
- WO2016099189A1 (PCT/KR2015/013922)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user terminal
- content
- image
- virtual object
- location
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
Definitions
- the following description relates to a content display method and a user terminal performing the same.
- Augmented reality refers to mixing real images with virtual images, including virtual objects, by inserting computer-generated graphics into the real environment.
- Augmented reality technology combines the real world and the virtual world so that users can interact with virtual objects in real time.
- Conventional augmented reality is realized by photographing a real object from a considerable distance and augmenting a virtual object in the photographed image.
- Because the real object may be not only a small item, such as a book, but also a large-volume object, such as a dining table or a chair, there is the inconvenience of keeping the device displaying the augmented reality far away from the object.
- Korean Patent Application No. 10-2010-0026720 discloses optical-based augmented reality (title of the invention: augmented reality system and method using light source recognition, and augmented reality processing apparatus for implementing the same).
- The present invention provides a method and apparatus for minimizing the heterogeneity between content and a virtual object displayed on a user terminal by augmenting and outputting the virtual object while the user terminal is in direct contact with, or in close proximity to, a content book including the content.
- A content display method comprises the steps of: identifying content included in a peripheral area of a user terminal; determining a location of the user terminal relative to the content; and augmenting and outputting a virtual object on a blind image corresponding to the blind region covered by the user terminal, based on the content and the position of the user terminal.
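The three claimed steps can be sketched as a minimal, runnable pipeline. This is an illustrative sketch only: the function names, the grid-based content model, and the string identifier standing in for a recognized pattern or NFC/RF tag are all assumptions, not taken from the application.

```python
# Hypothetical sketch of the three claimed steps; all names and the
# grid-based content model are illustrative assumptions.

CONTENT_DB = {
    "page1": [["a", "b", "c", "d"],
              ["e", "f", "g", "h"],
              ["i", "j", "k", "l"]],
}

def identify_content(content_id):
    # Step 1: identify the content in the peripheral area (here a key
    # stands in for a recognized pattern, marker, or NFC/RF identifier).
    return CONTENT_DB[content_id]

def crop_blind_image(content, x, y, w, h):
    # Step 3 (first half): the blind image is the part of the content
    # covered by the terminal, given the pose estimated in step 2.
    return [row[x:x + w] for row in content[y:y + h]]

def augment(blind_image, virtual_object, at):
    # Step 3 (second half): overlay the virtual object on the blind image.
    out = [row[:] for row in blind_image]
    ox, oy = at
    out[oy][ox] = virtual_object
    return out

content = identify_content("page1")
blind = crop_blind_image(content, x=1, y=0, w=2, h=2)  # pose from step 2
print(augment(blind, "*", at=(0, 1)))
```

A real implementation would replace the lookup key with image-based identification (content pattern, dot pattern, visual marker, or reference object), as described in the following paragraphs.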
- the user terminal may be arranged to contact the peripheral area.
- The content display method further comprises obtaining a peripheral image corresponding to the peripheral area of the user terminal, and the determining of the position of the user terminal may determine the location of the user terminal relative to the content from the peripheral image.
- When the user terminal is spaced apart from the peripheral area by a predetermined distance through a support mounted on the user terminal, the peripheral image may be acquired using a rear-facing camera built into the user terminal.
- the surrounding image including the surrounding area of the user terminal may be received through a communication unit built in the user terminal.
- The determining of the location of the user terminal may include identifying at least one of a content pattern, a dot pattern, a visual marker, and a reference object included in the surrounding image, and determining the location of the user terminal therefrom.
- The determining of the position of the user terminal may include comparing at least one of a content pattern, a dot pattern, a visual marker, and a reference object included in the peripheral image with information stored in a memory, whereby the content included in the peripheral area can be identified.
- The determining of the position of the user terminal may further comprise determining at least one of an arrangement angle and an arrangement direction of the user terminal with respect to the content, and the augmenting and outputting of the virtual object may augment and output the virtual object on the blind image in consideration of at least one of the arrangement angle and the arrangement direction of the user terminal.
- The determining of the position of the user terminal may determine the location of the user terminal using a magnetic field signal received from a magnetic field generator around the user terminal, or using an acoustic signal received from an external speaker around the user terminal.
- The determining of the position of the user terminal may generate a sound signal for determining the position of the user terminal, transmit it to an external device located near the user terminal, and determine the location of the user terminal using the sound signal received by the external device.
- The augmenting and outputting of the virtual object on the blind image may include determining the virtual object and a movement of the virtual object based on the content and the location of the user terminal, and augmenting and outputting the virtual object on the blind image according to the determined movement.
- The augmenting and outputting of the virtual object on the blind image may augment and output the virtual object based on the content in the blind image in consideration of a change in the position of the user terminal.
- The augmenting and outputting of the virtual object on the blind image may include controlling at least one of a position, a shape, and a movement of the virtual object based on a user input signal input from a user, and augmenting and outputting the controlled virtual object.
- The identifying of the content according to an embodiment of the present invention may identify the content included in the peripheral area by identifying a content pattern, a dot pattern, a visual marker, or a reference object included in the peripheral image of the user terminal.
- The identifying of the content may identify the content included in the surrounding area by comparing the content pattern, dot pattern, visual marker, and reference object included in the peripheral image of the user terminal with information stored in the memory.
- the identifying of the content may include identifying the content included in the peripheral area by receiving identification information about the content through a communication unit.
- The identifying of the content may identify the content through a signal input from the user, or based on identification information about the content received from an NFC chip or an RF chip around the user terminal.
- A user terminal includes a processor for controlling augmentation of a virtual object, and a display for displaying the augmented virtual object, wherein the processor identifies content included in a peripheral area of the user terminal, determines a location of the user terminal relative to the content, and augments and outputs the virtual object on the blind image corresponding to the blind region covered by the user terminal, based on the content and the position of the user terminal.
- FIG. 1 is a diagram illustrating an operation of a user terminal according to an exemplary embodiment.
- FIG. 2 is a diagram illustrating a peripheral area and a blind area of a user terminal, according to an exemplary embodiment.
- FIGS. 3 to 6 are diagrams for describing an example of obtaining a peripheral image corresponding to a peripheral area of a user terminal, according to an exemplary embodiment.
- FIGS. 7 and 8 are diagrams for describing an operation of a user terminal using a reference object, according to an exemplary embodiment.
- FIG. 9 is a diagram for describing an example of determining a location of a user terminal using a magnetic field generator or an external speaker, according to an exemplary embodiment.
- FIG. 10 is a diagram illustrating a user terminal according to an exemplary embodiment.
- FIG. 11 is a diagram illustrating a content display method according to one embodiment.
- Terms such as first or second may be used to describe various components, but such terms should be interpreted only for the purpose of distinguishing one component from another; a first component may be referred to as a second component, and a second component may likewise be referred to as a first component.
- Embodiments to be described below can be used to implement augmented reality.
- Embodiments may be implemented in various types of products, such as smart phones, smart pads, wearable devices, tablet computers, personal computers, laptop computers, and smart home appliances.
- embodiments may be applied to implementing augmented reality in smart phones, smart pads, wearable devices, and the like.
- exemplary embodiments will be described in detail with reference to the accompanying drawings.
- Like reference numerals in the drawings denote like elements.
- FIG. 1 is a diagram illustrating an operation of a user terminal according to an exemplary embodiment.
- augmented reality may be implemented in the user terminal 110.
- The user terminal 110 is a device capable of implementing augmented reality, and may be mounted on various computing devices and/or systems such as, for example, a smart phone, a smart pad, a wearable device, a tablet computer, a personal computer, a laptop computer, and a smart home appliance.
- The user terminal 110 may be located on a content book 120 including content 130 and may augment and output a virtual object on an image of a portion of the content 130. By augmenting and outputting the virtual object while the user terminal 110 is in direct contact with, or in close proximity to, the content book 120, the heterogeneity between the content 130 and the virtual object displayed on the user terminal 110 can be minimized.
- For example, when person-shaped content 130 is printed on the content book 120, the user terminal 110 may augment and output a virtual object in an image of the part of the content 130 corresponding to the position of the user terminal 110.
- The user terminal 110 may augment and output the virtual object in consideration of at least one of an arrangement angle and an arrangement direction of the user terminal 110 as well as the position of the user terminal 110.
- the placement angle may represent an angle difference formed between the content book 120 and the user terminal 110.
- For example, when the user terminal 110 is placed flat on the content book 120, the arrangement angle of the user terminal 110 may be 0 degrees.
- the arrangement direction may refer to a direction in which the user terminal 110 is placed in the content book 120.
- The augmented virtual object may be controlled according to a user input. For example, additional information about a human head (e.g., head muscles, head bones) may be augmented and displayed as a virtual object. Alternatively, a virtual object representing a predetermined movement may be augmented and displayed in response to a user input.
- The user terminal 110 determines its position with respect to the content, and can thereby determine which part of the entire image of the content 130 is the target of augmentation with the virtual object. Alternatively, the user terminal 110 may determine the image to be augmented with the virtual object by further considering at least one of an arrangement angle and an arrangement direction of the user terminal 110 with respect to the content.
- The content 130 printed on the content book 120 shown in FIG. 1 and the virtual object displayed on the user terminal 110 are examples for convenience of description; the content 130 and the corresponding virtual object are not limited thereto, and various contents 130 and virtual objects corresponding thereto may be applied.
- The content book 120 refers to a medium including the content 130. The same description may apply to various other media that may include the content 130 (e.g., an augmented reality card).
- The user terminal 110 identifies the content 130 included in the surrounding area, determines the position of the user terminal 110 with respect to the content 130, and outputs the virtual object on the display of the user terminal 110 based on the content 130 and the location of the user terminal 110. In this case, the virtual object may be augmented and output on the image of the content corresponding to the blind region covered by the user terminal 110.
- Hereinafter, the image of the content corresponding to the blind area covered by the user terminal 110 is referred to as a blind image.
- FIG. 2 is a diagram illustrating a peripheral area and a blind area of a user terminal, according to an exemplary embodiment.
- the peripheral area and the blind area 220 may be determined based on the user terminal 200.
- The peripheral area is an area of the surrounding environment from which at least one of the position, the placement angle, and the placement direction of the user terminal 200 can be inferred.
- the peripheral area may be determined as the wide area 210-1 including the user terminal 200.
- the peripheral area may also be determined as the partial area 210-2 near the user terminal 200.
- The user terminal 200 may determine its position with respect to the content through the surrounding image corresponding to the surrounding area.
- the blind spot 220 is an area that is covered by the user terminal 200 among the areas where the content is printed, and may be determined based on the position of the user terminal 200. Alternatively, the blind spot 220 may be determined by further considering at least one of an arrangement angle and an arrangement direction of the user terminal 200.
- the user terminal 200 may augment the virtual object in the blind image corresponding to the blind area 220 and display the blind image and the virtual object corresponding to the blind area 220 on the display. Through this, the user may be provided with the printed content and the augmented virtual object without the area covered by the user terminal 200.
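The blind area 220 described above is, in effect, the footprint of the terminal on the content plane, determined by the terminal's position and, optionally, its arrangement direction. A minimal sketch of that geometry follows; the rectangle-plus-rotation model and all names are illustrative assumptions, not taken from the application.

```python
# Hedged sketch: compute the blind area as the content-plane footprint of a
# rectangular terminal, given its center position, size, and arrangement
# direction (rotation angle). Illustrative assumption, not from the patent.

import math

def blind_region_corners(cx, cy, width, height, angle_deg=0.0):
    """Corners of the rectangle covered by a terminal centered at (cx, cy),
    rotated by angle_deg (the arrangement direction) in the content plane."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    corners = []
    for dx, dy in ((-width / 2, -height / 2), (width / 2, -height / 2),
                   (width / 2, height / 2), (-width / 2, height / 2)):
        corners.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return corners

# Terminal centered at (10, 5), 6 x 12 units, aligned with the page:
print(blind_region_corners(10, 5, 6, 12))
```

Cropping the stored content image to this footprint yields the blind image on which the virtual object is augmented.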
- 3 to 6 are diagrams for describing an example of obtaining a peripheral image corresponding to a peripheral area of a user terminal, according to an exemplary embodiment.
- the user terminal 310 may acquire a peripheral image corresponding to the peripheral area 330 using the camera 311 and the mirror 320 in the front direction.
- the user terminal 310 may include a camera 311 in the front direction on the same surface on which the display is located.
- the camera 311 in the front direction may take a peripheral image corresponding to the peripheral area 330 of the user terminal 310 using the mirror 320.
- the mirror 320 is a device mounted on the user terminal 310 and reflects the peripheral image corresponding to the peripheral area 330 to the camera 311 in the front direction.
- The mirror 320 may include a first sub-mirror that reflects the peripheral image corresponding to the peripheral area 330 toward the user terminal 310, and a second sub-mirror that reflects the peripheral image reflected by the first sub-mirror to the front-facing camera 311.
- a convex lens may be additionally provided to focus the peripheral image reflected by the second sub-mirror to the camera 311 in the front direction.
- Although the specific structure by which the mirror 320 is mounted on the user terminal 310 is not illustrated in FIG. 3, the structure and material of such a mounting can easily be selected and determined by those of ordinary skill in the art to which the present invention pertains, and a detailed description thereof is therefore omitted.
- The user terminal 410 may acquire a peripheral image corresponding to the peripheral area 430 including the reference object 440 by using the front-facing camera 411 and the mirror 420.
- the peripheral area 430 illustrated in FIG. 4 includes a reference object 440 that can be a reference for determining at least one of the position, the placement angle, and the placement direction of the user terminal 410.
- The user terminal 410 may determine its position, placement angle, and placement direction by analyzing the reference object 440 included in the surrounding image using previously stored position information about the reference object 440.
- the mirror 420 may be mounted on the user terminal 410 to reflect the peripheral image corresponding to the peripheral area 430 to the camera 411 in the front direction.
- A convex lens may additionally be installed to focus the peripheral image reflected from the mirror 420 onto the front-facing camera 411.
- Although the specific structure by which the mirror 420 is mounted on the user terminal 410 is not illustrated in FIG. 4, the structure and material of such a mounting can easily be selected and determined by those of ordinary skill in the art to which the present invention pertains, and a detailed description thereof is therefore omitted.
- the user terminal 510 may acquire a peripheral image corresponding to the peripheral area 530 by using the camera 511 and the mirror 520 in the front direction.
- the mirror 520 may be mounted on the user terminal 510 or positioned outside the user terminal 510 to reflect the peripheral image corresponding to the peripheral area 530 to the camera 511 in the front direction.
- the mirror 520 may be a convex mirror that reflects a peripheral image corresponding to the peripheral area 530 including the user terminal 510.
- a convex lens may be additionally installed to focus the peripheral image reflected from the mirror 520 to the camera 511 in the front direction.
- A specific structure by which the mirror 520 is mounted on the user terminal 510, or by which the mirror 520 may be located outside the user terminal 510, is not illustrated in FIG. 5; it can easily be selected and determined by those of ordinary skill in the art, and a detailed description thereof is therefore omitted.
- the user terminal 610 may acquire a surrounding image corresponding to the surrounding area 620 using the camera 611 in the rear direction.
- the user terminal 610 may include a camera 611 in the rear direction on a surface where the display is not located.
- the user terminal 610 may acquire a surrounding image corresponding to the surrounding area 620 using the camera 611 in the rear direction while being spaced apart from the surrounding area 620 by a predetermined distance d.
- the user terminal 610 may be spaced apart from the peripheral area 620 by a predetermined distance d through a support mounted to the user terminal 610.
- The support is a structure disposed on the same surface as the peripheral area 620 that holds the user terminal 610, so that the user terminal 610 remains spaced apart from the peripheral area 620 by the predetermined distance d even when the user does not hold the user terminal 610.
- Although a specific support for separating the user terminal 610 from the peripheral area 620 by the predetermined distance d is not illustrated in FIG. 6, the structure and material of the support can easily be selected and determined by those of ordinary skill in the art, and a detailed description thereof is therefore omitted.
- the user terminal may identify the content included in the surrounding area from the obtained surrounding image.
- the user terminal may identify content included in the surrounding area by identifying at least one of a content pattern, a dot pattern, a visual marker, and a reference object included in the surrounding image.
- the user terminal may identify content included in a peripheral area by comparing at least one of a content pattern, a dot pattern, a visual marker, and a reference object with information stored in a memory.
- the memory may store a reference image for at least one of a content pattern, a dot pattern, a visual marker, and a reference object, and information about the corresponding reference image (eg, corresponding content information).
- the user terminal may identify the content included in the surrounding image by using the information stored in the memory.
- The content pattern refers to a specific pattern constituting the content and may include, for example, a pattern constituting text, a symbol, a figure, and the like.
- The dot pattern is a pattern in which a plurality of dots are arranged at varying distances and intervals; the user terminal may identify the content included in the surrounding area by identifying the dot pattern included in the surrounding image using previously stored information about dot patterns.
- the user terminal may determine the position of the user terminal with respect to the content from the surrounding image.
- the user terminal may identify the content included in the surrounding area by comparing at least one of a content pattern, a dot pattern, a visual marker, and a reference object included in the surrounding image with information stored in the memory.
- the memory may store a reference image for at least one of a content pattern, a dot pattern, a visual marker, and a reference object, and information about the corresponding reference image (eg, corresponding position information).
- the user terminal may identify the location of the user terminal from the surrounding image by using the information stored in the memory. According to an embodiment, the user terminal may identify not only the location of the user terminal but also the placement angle and the direction of the user terminal from the surrounding image.
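The memory lookup described above (comparing an observed content pattern, dot pattern, visual marker, or reference object against stored reference entries carrying content and position information) can be sketched as a nearest-match search. The feature-vector descriptor format and all names are illustrative assumptions.

```python
# Hedged sketch: features extracted from the surrounding image are compared
# against stored reference entries; the closest match yields the content and
# position information. The descriptor format is an illustrative assumption.

REFERENCE_MEMORY = [
    {"descriptor": (0.1, 0.9, 0.3), "content": "page1", "position": (2, 4)},
    {"descriptor": (0.8, 0.2, 0.5), "content": "page2", "position": (6, 1)},
]

def match_reference(descriptor):
    """Return the stored entry whose descriptor is closest (squared L2)."""
    def dist(entry):
        return sum((a - b) ** 2 for a, b in zip(descriptor, entry["descriptor"]))
    return min(REFERENCE_MEMORY, key=dist)

hit = match_reference((0.75, 0.25, 0.45))
print(hit["content"], hit["position"])  # the nearest stored pattern wins
```

The same lookup serves both purposes described in the claims: identifying the content and recovering the terminal's location (and, with richer reference data, its placement angle and direction).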
- FIG. 7 and 8 are diagrams for describing an operation of a user terminal using a reference object, according to an exemplary embodiment.
- the user terminal 710 may acquire a surrounding image corresponding to a peripheral area of the user terminal 710 or determine a location of the user terminal 710 using the reference object 730. As described above, the user terminal 710 may identify the content included in the surrounding area by using the acquired surrounding image and determine the position of the user terminal 710 with respect to the content. Alternatively, the user terminal 710 may further determine an arrangement angle and an arrangement direction of the user terminal 710 using the surrounding image.
- The reference object 730 is an apparatus that can serve as a reference for identifying the content included in the content book 720 or for determining at least one of the position, the placement angle, and the placement direction of the user terminal 710; any device that can serve as such a reference may be applied as the reference object 730.
- the reference object 730 may reflect the surrounding image corresponding to the surrounding area to the user terminal 710.
- For example, a mirror capable of reflecting the surrounding image corresponding to the surrounding area to the user terminal 710 may be located on the reference object 730.
- the user terminal 710 may acquire a surrounding image corresponding to the surrounding area by using a built-in front-facing camera and a mirror included in the reference object 730.
- the reference object 730 may photograph a surrounding image corresponding to the surrounding area through the built-in camera and provide the surrounding image to the user terminal 710.
- The reference object 730 may include a camera capable of capturing the peripheral image corresponding to the peripheral area of the user terminal 710, a communication unit capable of transmitting the captured peripheral image to the user terminal 710, and a processor capable of controlling the operation of the reference object 730.
- the user terminal 710 may receive a surrounding image corresponding to the surrounding area from the reference object 730 through the communication unit.
- the reference object 730 may provide a visual marker to the user terminal 710.
- the hemispherical structure located on the top of the reference object 730 may include a unique visual marker.
- the visual marker may be configured such that different patterns appear according to the photographing position.
- the user terminal 710 may photograph a visual marker of the reference object 730 using a built-in front camera.
- the user terminal 710 may determine the location of the current user terminal 710 by comparing the visual marker of the photographed reference object 730 with previously stored visual marker information.
- the user terminal 710 may determine not only the position of the user terminal 710 but also the placement angle and the arrangement direction of the user terminal 710 using the visual marker.
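Because the visual marker is configured so that a different pattern appears depending on the photographing position, the pose lookup reduces to matching the observed pattern against stored marker views annotated with the pose from which each is seen. The bit-string pattern encoding and the stored pose table below are illustrative assumptions, not details from the application.

```python
# Hedged sketch: a view-dependent visual marker matched against stored
# views, each annotated with the pose from which that view appears.
# The pattern encoding is an illustrative assumption.

MARKER_VIEWS = {
    "1010": {"position": (0, 0), "angle": 0, "direction": "north"},
    "1100": {"position": (3, 0), "angle": 0, "direction": "east"},
    "0110": {"position": (0, 3), "angle": 15, "direction": "west"},
}

def pose_from_marker(observed_pattern):
    # An unknown pattern means the marker was not recognized.
    return MARKER_VIEWS.get(observed_pattern)

print(pose_from_marker("1100"))
```

This one table yields position, placement angle, and placement direction together, mirroring the claim that all three can be determined from the marker.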
- The user terminal 710 may identify the content included in the content book 720 through a signal input from the user, or by receiving identification information about the content from an NFC chip, an RF chip, or the like of the content book 720 through a communication unit built into the user terminal 710.
- The user terminal 810 may identify content included in the content book 820 by using the reference object 830 of the content book 820, which is produced in the form of a pop-up book.
- the content book 820 may be produced in the form of a pop-up book including a unique reference object 830 for each page.
- The reference object 830 may remain folded within the page, without protruding from the content book 820, until the page including the reference object 830 is unfolded. When that page is unfolded, the reference object 830 may protrude from the page in a three-dimensional form.
- the user terminal 810 may acquire the surrounding image corresponding to the surrounding area including the reference object 830 by photographing the reference object 830 by using a built-in front-facing camera.
- the user terminal 810 may identify the currently unfolded page among the pages of the content book 820 by identifying the reference object 830 included in the surrounding image, and may also determine the content included in the identified page.
- FIG. 9 is a diagram for describing an example of determining a location of a user terminal using a magnetic field generator or an external speaker, according to an exemplary embodiment.
- the location of the user terminal 910 may be determined based on information received from the external devices 920-1, 920-2, and 920-3.
- the user terminal 910 may be located on the same side where the content is shown, and external devices 920-1, 920-2, and 920-3 may be disposed around the user terminal 910.
- the external devices 920-1, 920-2, and 920-3 may include an external speaker for generating an acoustic signal.
- the external devices 920-1, 920-2, and 920-3 may transmit sound signals to the user terminal 910. Since the sound signals transmitted from the external devices 920-1, 920-2, and 920-3 are transmitted at a constant speed, the transmission time of the signal also increases as the moving distance increases.
- The user terminal 910 uses the reception times, or time differences of arrival, of the received signals, and can thereby determine its position relative to the positions of the external devices 920-1, 920-2, and 920-3.
- The user terminal 910 may further determine at least one of an arrangement angle and an arrangement direction of the user terminal 910 by receiving the sound signals through its built-in microphones.
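The acoustic positioning described above can be sketched as time-of-arrival trilateration: each speaker's signal travels at the speed of sound, so a reception time gives a distance, and three distances fix a 2D position. The speaker coordinates and the closed-form 2x2 solve are illustrative assumptions, not details from the application.

```python
# Hedged sketch: time-of-arrival trilateration from three external speakers
# at known positions. Illustrative assumption, not from the patent.

SPEED_OF_SOUND = 343.0  # m/s, approximate in air at room temperature

def locate(speakers, arrival_times):
    """Trilaterate from 3 speakers at known (x, y), given times since emission."""
    (x0, y0), (x1, y1), (x2, y2) = speakers
    d0, d1, d2 = (SPEED_OF_SOUND * t for t in arrival_times)
    # Subtracting the squared-distance equations pairwise linearizes them:
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = d0**2 - d1**2 + x1**2 - x0**2 + y1**2 - y0**2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = d0**2 - d2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

speakers = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
times = [((sx - 0.3)**2 + (sy - 0.4)**2) ** 0.5 / SPEED_OF_SOUND
         for sx, sy in speakers]
print(locate(speakers, times))  # recovers approximately (0.3, 0.4)
```

With two or more microphones on the terminal, the same arithmetic applied per microphone would additionally constrain the arrangement angle and direction, as the claim suggests.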
- the external devices 920-1, 920-2, and 920-3 may include a magnetic field generator that generates a magnetic field signal.
- one or more external devices 920-1, 920-2, and 920-3 may generate one or more magnetic field signals.
- Magnetic field signals transmitted from the one or more external devices 920-1, 920-2, and 920-3 decrease in magnitude with distance, and their incident angle on the user terminal 910 differs according to the arrangement angle at which the user terminal 910 is placed.
- The user terminal 910 compares the magnitudes of the received magnetic field signals, and can thereby determine its position, placement angle, and placement direction relative to the positions of the one or more external devices 920-1, 920-2, and 920-3.
- the placement angle and the placement direction may be determined together with the position of the user terminal 910.
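The magnitude comparison above can be sketched with a simple field model: a static dipole's field magnitude falls off roughly as 1/r^3, so each measured magnitude yields a range estimate, and several ranges constrain the terminal's position just as the acoustic distances do. The far-field dipole model and the calibration constant k are illustrative assumptions, not details from the application.

```python
# Hedged sketch: invert an assumed far-field dipole model B = k / r**3 to
# turn a measured field magnitude into a range estimate. The model and the
# calibration constant k are illustrative assumptions.

def distance_from_field(magnitude, k=1.0):
    """Range r at which the modeled field magnitude equals `magnitude`."""
    return (k / magnitude) ** (1.0 / 3.0)

# A terminal twice as far from one generator measures its signal 8x weaker:
d_far = distance_from_field(1.0 / 8.0)   # magnitude 0.125 -> r of about 2.0
d_near = distance_from_field(1.0)        # magnitude 1.0   -> r of about 1.0
print(d_far, d_near)
```

Comparing the per-axis components measured by a three-axis magnetic field sensor, rather than just the total magnitude, is what would additionally constrain the placement angle and direction.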
- the external devices 920-1, 920-2, and 920-3 may include a magnetic field generator such as a permanent magnet, an electromagnet, or an external speaker. That is, the signal transmitted from the external devices 920-1, 920-2, and 920-3 may be a magnetic field signal generated by the magnetic field generator or an acoustic signal generated from an external speaker.
- the user terminal 910 receives the magnetic field signal through the built-in magnetic field sensor, and when the sound signal is transmitted, the user terminal 910 may receive the sound signal through the built-in microphone. .
- The magnetic field signal generated by the magnetic field generator may be an alternating magnetic field signal that is incident on the three-axis magnetic field sensor of the user terminal 910 with a predetermined magnitude and incident angle, or that has a specific frequency.
- When a magnetic field signal having a predetermined magnitude and incident angle is used, the magnetic field generator generates a magnetic field signal significantly stronger than environmental magnetic fields such as the Earth's magnetic field, so that the magnetic field sensor included in the user terminal 910 can measure the magnetic field signal without being affected by the environmental magnetic fields.
- Alternatively, the user terminal 910 may generate a sound signal through a built-in speaker, and the generated sound signal may be transmitted to the external devices 920-1, 920-2, and 920-3.
- the sound signal generated by the user terminal 910 may be received through microphones built in the external devices 920-1, 920-2, and 920-3.
- the position, placement angle, and placement direction of the user terminal 910 may be determined by using a reception time or a time difference of a sound signal received by the external devices 920-1, 920-2, and 920-3.
- The position, placement angle, and placement direction of the user terminal 910 may be determined in the external devices 920-1, 920-2, and 920-3 based on the reception time or the time difference of the sound signal, and the determined result value may be transmitted from the external devices 920-1, 920-2, and 920-3 to the user terminal 910.
- Alternatively, information on the reception time or time difference may be transmitted from the external devices 920-1, 920-2, and 920-3 to the user terminal 910, and the position, placement angle, and placement direction of the user terminal 910 may be determined in the user terminal 910 based on that information.
- the user terminal 910 may generate a magnetic field signal through a built-in magnetic field generator, and the generated magnetic field signal may be transmitted to the external devices 920-1, 920-2, and 920-3.
- the magnetic field signal generated by the user terminal 910 may be received through a magnetic field sensor built in the external devices 920-1, 920-2, and 920-3.
- the position, placement angle, and placement direction of the user terminal 910 may be determined in the external devices 920-1, 920-2, and 920-3 based on the magnitudes of the received magnetic field signals, and the determined result value may be transmitted from the external devices 920-1, 920-2, and 920-3 to the user terminal 910.
- alternatively, information about the magnitudes of the magnetic field signals received by the external devices 920-1, 920-2, and 920-3 may be transmitted to the user terminal 910, and the position, placement angle, and placement direction of the user terminal 910 may be determined in the user terminal 910 based on that information.
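Position determination from field magnitudes can be sketched as follows. The disclosure does not specify a field model; purely for illustration, a far-field falloff with the cube of distance is assumed, and the generator constant `K` and sensor positions are hypothetical. Each measured magnitude is inverted to a distance, and the position best matching all three distances is selected:

```python
import math

K = 4.0e-7  # hypothetical generator constant (T*m^3); real values depend on the coil

def field_magnitude(r):
    """Assumed far-field model: magnitude falls off with the cube of distance."""
    return K / r ** 3

def distance_from_field(b):
    """Invert the assumed 1/r^3 model to recover distance from a magnitude."""
    return (K / b) ** (1.0 / 3.0)

# Hypothetical sensor positions of devices 920-1, 920-2, and 920-3 (metres)
SENSORS = [(0.0, 0.0), (0.4, 0.0), (0.0, 0.4)]

def locate(magnitudes, step=0.005):
    """Pick the grid point whose distances to the three sensors best match
    the distances implied by the received field magnitudes."""
    radii = [distance_from_field(b) for b in magnitudes]
    best, best_err = None, float("inf")
    for i in range(120):
        for j in range(120):
            x, y = i * step, j * step
            err = sum((math.hypot(x - sx, y - sy) - r) ** 2
                      for (sx, sy), r in zip(SENSORS, radii))
            if err < best_err:
                best, best_err = (x, y), err
    return best

true_pos = (0.15, 0.10)
mags = [field_magnitude(math.hypot(true_pos[0] - sx, true_pos[1] - sy))
        for sx, sy in SENSORS]
x, y = locate(mags)
print(round(x, 3), round(y, 3))  # recovers the terminal position
```

In practice the alternating signal or specific frequency mentioned earlier would first be isolated (e.g., by filtering) before its magnitude is used this way; that step is omitted here.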
- FIG. 10 is a diagram illustrating a user terminal according to an exemplary embodiment.
- the user terminal 1000 includes a processor 1010 and a display 1020.
- the user terminal 1000 may further include a camera 1030, a communication unit 1040, a memory 1050, a speaker 1060, a magnetic field sensor 1070, and a microphone 1080.
- the processor 1010 may control augmentation of the virtual object. In addition, the processor 1010 may control operations of devices built in the user terminal 1000.
- the processor 1010 obtains a surrounding image corresponding to the surrounding area of the user terminal 1000.
- the processor 1010 may acquire a surrounding image by using a front-facing camera built in the user terminal 1000 and a mirror that reflects a surrounding image corresponding to the surrounding area to the front-facing camera.
- the processor 1010 may acquire a surrounding image by using a rear-facing camera built into the user terminal 1000.
- the processor 1010 may receive a surrounding image corresponding to the surrounding area of the user terminal 1000 from the reference object through the communication unit 1040.
- the processor 1010 identifies content included in the surrounding area of the user terminal 1000.
- the processor 1010 may identify the content from the surrounding image corresponding to the surrounding area of the user terminal 1000.
- the processor 1010 may identify the content by comparing at least one of a content pattern, a dot pattern, a visual marker, and a reference object included in the surrounding image with information stored in the memory 1050.
- the processor 1010 may identify the content included in the surrounding area by receiving identification information about the content through the communication unit 1040.
- the identification information about the content may be received from an NFC chip, an RF chip, or the like included in a content book or a reference object located around the user terminal 1000.
- the content book may include, on each page, an NFC chip or an RF chip indicating that page, and the processor 1010 may receive identification information about the content from the NFC chip or RF chip included in the page that is open.
- the processor 1010 determines the location of the user terminal 1000 with respect to the content.
- the processor 1010 may determine the location of the user terminal 1000 with respect to the content using the surrounding image.
- the processor 1010 may determine the location of the user terminal 1000 by comparing at least one of a content pattern, a dot pattern, a visual marker, and a reference object included in the surrounding image with information stored in the memory 1050.
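The comparison against stored information can be sketched as a lookup. The dot pattern, content identifier, and window size below are hypothetical; the point is that if every small pattern window printed on the page is unique, a single observed window fixes both the content and the terminal position:

```python
# Hypothetical dot pattern printed on a content page; each 2x2 window of
# symbols is unique, so a single observed window fixes both the content
# and the terminal position over the page.
PAGE_PATTERN = [
    "abcd",
    "efgh",
    "ijkl",
]

def window(page, x, y):
    """The 2x2 pattern window whose top-left corner is at (x, y)."""
    return page[y][x:x + 2] + page[y + 1][x:x + 2]

# Stands in for the memory 1050: pattern -> (content id, position)
STORED = {window(PAGE_PATTERN, x, y): ("book-page-7", (x, y))
          for y in range(2) for x in range(3)}

def identify_and_locate(observed):
    """Compare an observed window with the stored information; returns the
    identified content and terminal position, or None if unknown."""
    return STORED.get(observed)

print(identify_and_locate(window(PAGE_PATTERN, 2, 1)))
# → ('book-page-7', (2, 1))
```

The same table-lookup idea extends to visual markers or reference objects, with a feature descriptor taking the place of the raw window string.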
- the processor 1010 may determine the location of the user terminal 1000 by using a magnetic field signal received from a magnetic field generator around the user terminal 1000, or by using a sound signal received from an external speaker around the user terminal 1000.
- the processor 1010 may further determine at least one of the placement angle and the placement direction as well as the position of the user terminal 1000 with respect to the content from the surrounding image.
- the processor 1010 augments and outputs the virtual object on the blind spot image corresponding to the blind spot covered by the user terminal 1000, based on the content and the position of the user terminal 1000. In this case, a part of the blind spot image output together with the virtual object may be hidden by the virtual object and thus not shown on the display 1020.
- the processor 1010 may change the blind spot image and output the changed image together with the virtual object. For example, the processor 1010 may change the blind spot image by changing its color, distorting it, or adding a specific animation to it. In some cases, the processor 1010 may augment and output only the virtual object to the display 1020, without the blind spot image.
- the processor 1010 may determine the virtual object and the movement of the virtual object based on the content and the position of the user terminal 1000, and augment and output the virtual object on the blind spot image according to the determined movement. In addition, the processor 1010 may augment and output the virtual object based on the content on the blind spot image in consideration of a change in the position of the user terminal 1000.
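The crop-and-overlay step above can be sketched with a toy character grid. The page content, terminal size, and object placement below are all illustrative assumptions; the sketch only shows that the blind spot image is the region of the content directly under the terminal, onto which the virtual object is composited before display:

```python
# Toy content page as a character grid; '#' cells stand for printed content.
PAGE = [list(row) for row in [
    "................",
    "..####..........",
    "..####..........",
    "................",
    "................",
]]

TERMINAL_W, TERMINAL_H = 6, 3  # assumed display size, in page cells

def blind_image(page, pos):
    """Crop the region of the content hidden behind the terminal, whose
    top-left corner in page coordinates is `pos`."""
    x, y = pos
    return [row[x:x + TERMINAL_W] for row in page[y:y + TERMINAL_H]]

def augment(blind, obj="@", at=(1, 1)):
    """Overlay a virtual object onto the blind spot image before display."""
    ox, oy = at
    out = [row[:] for row in blind]
    out[oy][ox] = obj
    return out

for row in augment(blind_image(PAGE, (1, 1))):
    print("".join(row))
# → .####.
#   .@###.
#   ......
```

When the terminal position changes, only `pos` changes, so re-cropping and re-compositing on each position update realizes the behavior described above.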
- the processor 1010 may control the virtual object based on a user input signal (e.g., a touch signal, a drag signal, a button input signal, a voice signal, etc.) input from the user, and may output the controlled virtual object.
- the user input signal may be received from the user through a touch sensor mounted on the display 1020, a button key included in the user terminal 1000, a microphone, or the like.
- the processor 1010 may control the position, shape, and movement of the virtual object based on the user input signal. Further, the processor 1010 may change the blind spot image based on the user input signal and output the changed blind spot image. For example, the processor 1010 may change and output the blind spot image by changing its color, distorting it, or adding a specific animation to it based on the user input signal.
- the processor 1010 may augment and output the virtual object on the blind spot image based on the content and the position, placement angle, and placement direction of the user terminal 1000.
- in the following, an embodiment using the location of the user terminal 1000 is described. However, the embodiments are not limited thereto, and the placement angle and placement direction of the user terminal 1000 may be further considered.
- the processor 1010 may augment and output a virtual character as a virtual object on the blind spot image corresponding to the part of the content obscured by the user terminal 1000.
- the processor 1010 may augment and output a virtual character moving through a maze. If the user moves the user terminal 1000 inappropriately, off the maze path, the processor 1010 may output a message indicating that the movement of the user terminal 1000 is inappropriate, using the augmented virtual character.
- the processor 1010 may output a message indicating that the virtual character has escaped the maze, using the augmented virtual character.
- the processor 1010 may augment and output the virtual object of the item corresponding to the location of the user terminal 1000 to the display 1020.
- the processor 1010 may augment and display a virtual object depicting a situation in which a land mine explodes.
- the processor 1010 may augment and display a virtual object depicting a situation in which an item is acquired by the virtual character.
- the processor 1010 may identify an enemy corresponding to the position of the user terminal 1000, and augment and output the virtual object in a situation where the identified enemy threatens the virtual character.
- the processor 1010 may augment a virtual tank corresponding to the position of the user terminal 1000 as a virtual object.
- the enhanced virtual tank may perform various actions (eg, shell firing, etc.) in accordance with user input.
- an additional terminal may exist in addition to the user terminal 1000, and an enemy tank may be augmented and output as a virtual object on the additional terminal.
- the additional terminal may move automatically according to computer instructions, using built-in wheels.
- the display 1020 is a device disposed on the front surface of the user terminal 1000 and may display the augmented virtual object together with the blind spot image.
- the display 1020 may be equipped with a touch sensor to receive a user input signal such as a touch signal or a drag signal from the user.
- the camera 1030 is an apparatus capable of capturing an image, and may include, for example, a first sub camera in a front direction and a second sub camera in a rear direction.
- the first sub camera in the front direction may be disposed on the same surface on which the display 1020 is disposed, and the second sub camera in the rear direction may be disposed on the surface on which the display 1020 is not disposed.
- the communication unit 1040 may communicate with a reference object located near the user terminal 1000.
- the communication unit 1040 may receive the surrounding image photographed by the reference object.
- the memory 1050 may record specific information as an electrical signal.
- the memory 1050 may store the acquired surrounding image, and may store a content pattern, a dot pattern, a visual marker, a reference image for a reference object, and information about the reference image (e.g., corresponding content information, corresponding position information, etc.).
- the memory 1050 may store information necessary to augment the virtual object.
- the speaker 1060 is a device capable of reproducing a sound signal.
- the speaker 1060 may reproduce a sound signal corresponding to the virtual object augmented by the processor 1010.
- the magnetic field sensor 1070 is a device capable of detecting a magnetic field change around the user terminal 1000 and may receive a magnetic field signal transmitted to the user terminal 1000.
- the magnetic field sensor 1070 may receive a magnetic field signal transmitted from the magnetic field generator.
- the microphone 1080 is a device that converts sound generated around the user terminal 1000 into an electrical signal.
- the microphone 1080 may receive a sound signal transmitted from the user terminal 1000.
- the microphone 1080 may receive a sound signal transmitted from an external speaker.
- FIG. 11 is a diagram illustrating a content display method according to one embodiment.
- a content display method performed by a user terminal includes identifying content included in a surrounding area of the user terminal (1110), determining a location of the user terminal with respect to the content, and augmenting and outputting a virtual object, based on the content and the location of the user terminal, on a blind spot image corresponding to the blind spot covered by the user terminal (1130).
- each of the steps shown in FIG. 11 is the same as described above with reference to FIGS. 1 through 10, and a detailed description thereof is therefore omitted.
- the embodiments described above may be implemented as hardware components, software components, and/or combinations of hardware components and software components.
- the devices, methods, and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as, for example, a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions.
- the processing device may execute an operating system (OS) and one or more software applications running on the operating system.
- the processing device may also access, store, manipulate, process, and generate data in response to the execution of the software.
- the processing device may include a plurality of processing elements and/or a plurality of types of processing elements.
- the processing device may include a plurality of processors or one processor and one controller.
- other processing configurations are possible, such as parallel processors.
- the software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or command the processing device independently or collectively.
- software and/or data may be embodied permanently or temporarily in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or in a transmitted signal wave, in order to be interpreted by the processing device or to provide instructions or data to the processing device.
- the software may be distributed over networked computer systems so that it may be stored or executed in a distributed manner.
- Software and data may be stored on one or more computer readable recording media.
- the method according to the embodiment may be embodied in the form of program instructions that can be executed by various computer means and recorded in a computer readable medium.
- the computer readable medium may include program instructions, data files, data structures, etc. alone or in combination.
- the program instructions recorded on the media may be those specially designed and constructed for the purposes of the embodiments, or they may be of the kind well-known and available to those having skill in the computer software arts.
- examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape, optical media such as CD-ROMs and DVDs, and magneto-optical media such as floptical disks.
- Examples of program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
- the hardware device described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
Abstract
Description
Claims (27)
- A content display method comprising: identifying content included in a surrounding area of a user terminal; determining a location of the user terminal with respect to the content; and augmenting and outputting, based on the content and the location of the user terminal, a virtual object on a blind image corresponding to a blind region covered by the user terminal.
- The content display method of claim 1, wherein the user terminal is disposed so as to be in contact with the surrounding area.
- The content display method of claim 1, further comprising acquiring a surrounding image corresponding to the surrounding area of the user terminal, wherein the determining of the location of the user terminal determines the location of the user terminal with respect to the content from the surrounding image.
- The content display method of claim 3, wherein the acquiring of the surrounding image acquires the surrounding image by using a front-facing camera built into the user terminal and a mirror that reflects the surrounding image into the front-facing camera.
- The content display method of claim 3, wherein the user terminal is spaced apart from the surrounding area by a predetermined distance via a support mounted on the user terminal, and the acquiring of the surrounding image acquires the surrounding image by using a rear-facing camera built into the user terminal.
- The content display method of claim 3, wherein the acquiring of the surrounding image receives, through a communication unit built into the user terminal, a surrounding image including the surrounding area of the user terminal.
- The content display method of claim 3, wherein the determining of the location of the user terminal determines the location of the user terminal by identifying at least one of a content pattern, a dot pattern, a visual marker, and a reference object included in the surrounding image.
- The content display method of claim 7, wherein the determining of the location of the user terminal identifies the content included in the surrounding area by comparing at least one of the content pattern, the dot pattern, the visual marker, and the reference object included in the surrounding image with information stored in a memory.
- The content display method of claim 1, wherein the determining of the location of the user terminal further determines at least one of a placement angle and a placement direction of the user terminal with respect to the content, and the augmenting and outputting of the virtual object on the blind image augments and outputs the virtual object on the blind image in further consideration of at least one of the placement angle and the placement direction of the user terminal.
- The content display method of claim 1, wherein the determining of the location of the user terminal determines the location of the user terminal by using a magnetic field signal received from a magnetic field generator around the user terminal, or determines the location of the user terminal by using a sound signal received from an external speaker around the user terminal.
- The content display method of claim 1, wherein the determining of the location of the user terminal generates a sound signal for determining the location of the user terminal, transmits the sound signal to an external device located around the user terminal, and determines the location of the user terminal by using the sound signal received by the external device.
- The content display method of claim 1, wherein the augmenting and outputting of the virtual object on the blind image determines the virtual object and a movement of the virtual object based on the content and the location of the user terminal, and augments and outputs the virtual object on the blind image according to the determined movement.
- The content display method of claim 1, wherein the augmenting and outputting of the virtual object on the blind image augments and outputs the virtual object based on the content on the blind image in consideration of a change in the location of the user terminal.
- The content display method of claim 1, wherein the augmenting and outputting of the virtual object on the blind image controls at least one of a position, a shape, and a movement of the virtual object based on a user input signal input from a user, and augments and outputs the controlled virtual object.
- The content display method of claim 1, wherein the identifying of the content identifies the content included in the surrounding area by identifying a content pattern, a dot pattern, a visual marker, and a reference object included in a surrounding image of the user terminal.
- The content display method of claim 15, wherein the identifying of the content identifies the content included in the surrounding area by comparing the content pattern, the dot pattern, the visual marker, and the reference object included in the surrounding image of the user terminal with information stored in a memory.
- The content display method of claim 1, wherein the identifying of the content identifies the content included in the surrounding area by receiving identification information about the content through a communication unit.
- The content display method of claim 1, wherein the identifying of the content identifies the content through a signal input from a user, or identifies the content based on identification information about the content received from an NFC chip or an RF chip around the user terminal.
- A computer-readable recording medium having recorded thereon a program for executing the method of any one of claims 1 to 18.
- A user terminal comprising: a processor controlling augmentation of a virtual object; and a display displaying the augmented virtual object, wherein the processor identifies content included in a surrounding area of the user terminal, determines a location of the user terminal with respect to the content, and augments and outputs, based on the content and the location of the user terminal, the virtual object on a blind image corresponding to a blind region covered by the user terminal.
- The user terminal of claim 20, wherein the user terminal is in contact with the surrounding area.
- The user terminal of claim 20, further comprising a camera photographing a surrounding image corresponding to the surrounding area of the user terminal, wherein the processor determines the location of the user terminal with respect to the content from the surrounding image.
- The user terminal of claim 22, wherein the camera comprises a front-facing camera that photographs the surrounding image by using a mirror that reflects the surrounding image into the camera built into the user terminal.
- The user terminal of claim 22, wherein the camera comprises a rear-facing camera that is built into the user terminal, spaced apart from the surrounding area by a predetermined distance, and photographs the surrounding image, and the user terminal is spaced apart from the surrounding area by the predetermined distance via a support mounted on the user terminal.
- The user terminal of claim 22, wherein the processor determines the location of the user terminal by identifying at least one of a content pattern, a dot pattern, a visual marker, and a reference object included in the surrounding image.
- The user terminal of claim 20, wherein the processor further determines at least one of a placement angle and a placement direction of the user terminal with respect to the content, and augments and outputs the virtual object on the blind image based on at least one of the placement angle and the placement direction of the user terminal.
- The user terminal of claim 20, wherein the processor determines the virtual object and a movement of the virtual object based on the content and the location of the user terminal, and augments and outputs the virtual object on the blind image according to the determined movement.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/535,240 US20170352189A1 (en) | 2014-12-19 | 2015-12-18 | Content display method using magnet and user terminal for performing same |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20140184611 | 2014-12-19 | ||
KR10-2014-0184611 | 2014-12-19 | ||
KR1020150181236A KR101740827B1 (ko) | 2014-12-19 | 2015-12-17 | Content display method using a magnet or the like, and user terminal performing the same |
KR10-2015-0181236 | 2015-12-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016099189A1 (ko) | 2016-06-23 |
Family
ID=56126980
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2015/013922 WO2016099189A1 (ko) | 2014-12-19 | 2015-12-18 | 자석 등을 이용한 컨텐츠 표시 방법 및 이를 수행하는 사용자 단말 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2016099189A1 (ko) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100309225A1 (en) * | 2009-06-03 | 2010-12-09 | Gray Douglas R | Image matching for mobile augmented reality |
US20120327119A1 (en) * | 2011-06-22 | 2012-12-27 | Gwangju Institute Of Science And Technology | User adaptive augmented reality mobile communication device, server and method thereof |
KR20130007767A (ko) * | 2011-07-11 | 2013-01-21 | 한국과학기술연구원 | 착용형 디스플레이 장치 및 컨텐츠 디스플레이 방법 |
US20130291126A1 (en) * | 2010-06-11 | 2013-10-31 | Blueprint Growth Institute, Inc. | Electronic Document Delivery, Display, Updating, and Interaction Systems and Methods |
US20140160161A1 (en) * | 2012-12-06 | 2014-06-12 | Patricio Barreiro | Augmented reality application |
- 2015-12-18: WO PCT/KR2015/013922 patent/WO2016099189A1/ko, active Application Filing
Legal Events
Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15870356; Country of ref document: EP; Kind code of ref document: A1
WWE | Wipo information: entry into national phase | Ref document number: 15535240; Country of ref document: US
NENP | Non-entry into the national phase | Ref country code: DE
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.10.2017)
122 | Ep: pct application non-entry in european phase | Ref document number: 15870356; Country of ref document: EP; Kind code of ref document: A1