KR101877901B1 - Method and apparatus for providing VR image - Google Patents

Method and apparatus for providing VR image

Info

Publication number
KR101877901B1
Authority
KR
South Korea
Prior art keywords
image
electronic device
map
vr image
vr
Prior art date
Application number
KR1020160182252A
Other languages
Korean (ko)
Other versions
KR20180077666A (en)
Inventor
장윤
백희원
피민규
Original Assignee
세종대학교산학협력단
Priority date
Filing date
Publication date
Application filed by 세종대학교산학협력단
Priority to KR1020160182252A
Publication of KR20180077666A
Application granted
Publication of KR101877901B1

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
    • G02B27/00 Other optical systems; Other optical apparatus
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
    • G02B27/00 Other optical systems; Other optical apparatus
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/0132 Head-up displays characterised by optical features comprising binocular systems
    • G02B2027/0134 Head-up displays characterised by optical features comprising binocular systems of stereoscopic type

Abstract

The present invention provides a method and an electronic device for providing VR images. The method includes setting place information, acquiring a plurality of map images corresponding to the place information, and integrating the plurality of map images to generate a VR image that can be rotated 360 degrees. As motion of the user of the electronic device is sensed, the method determines a movement position to which the user intends to move within the VR image, acquires at least one map image corresponding to the movement position, and generates a VR image of the movement position.

Description

METHOD AND APPARATUS FOR PROVIDING VR IMAGE

The present invention relates to a method and an electronic apparatus for providing a VR image.

With the emergence of the Internet as a new industrial field, and of the various services and businesses pursuing it, multimedia functions and applications have shown remarkable growth. One of the most noteworthy developments in this field is the 3D geographic information service: a new type of service that provides three-dimensional images over the Internet using spatial information constructed with geographic information system (GIS) technology, such as aerial photography and precision surveying, has recently been attracting attention.

In addition, there are services that provide panoramic images by linking photographic images of major roads to 3D maps. However, conventional three-dimensional image services only provide VR (Virtual Reality) images at specific points linked with map information; they do not actively provide, in real time, VR images of positions that change according to the user's motion.

U.S. Patent No. 8,818,138 (entitled SYSTEM AND METHOD FOR CREATING, STORING AND UTILIZING IMAGES OF A GEOGRAPHICAL LOCATION)

SUMMARY OF THE INVENTION

The present invention has been made to solve the above problems of the conventional art, and it is an object of the present invention to provide a method and an electronic device for providing a VR image that can be rotated 360 degrees at a position desired by the user. It is also an object of the present invention to provide a VR image of the place corresponding to an image photographed or captured by the user.

It should be understood, however, that the technical scope of the present invention is not limited to the above-described technical problems, and other technical problems may exist.

According to a first aspect of the present invention, there is provided a VR image providing method comprising: setting place information; obtaining a plurality of map images corresponding to the place information; generating a VR image capable of rotating 360 degrees by integrating the plurality of map images; determining, as motion of a user of the electronic device is sensed, a movement position to which the user intends to move within the VR image; and acquiring at least one map image corresponding to the movement position to generate a VR image of the movement position.

According to a second aspect of the present invention, there is provided an electronic device comprising a memory for storing a program for providing a VR image and a processor for executing the program, wherein the processor, as the program is executed, sets place information, acquires a plurality of map images corresponding to the place information, generates a VR image capable of rotating 360 degrees by integrating the plurality of map images, determines, as motion of a user is sensed, a movement position to which the user intends to move within the VR image, and acquires at least one map image corresponding to the movement position to generate a VR image of the movement position.

A third aspect of the present invention provides a computer-readable recording medium on which a program for implementing the method of the first aspect is recorded.

According to an embodiment of the present invention, even when the user does not know the name of a place, a VR image of the place can be provided simply by photographing or capturing an image of it. Furthermore, by changing the VR image in response to the user's movement, the user can be given an indirect experience of the place.

1 is a block diagram showing the configuration of an electronic device according to an embodiment of the present invention.
FIG. 2 is an example in which the controller of FIG. 1 determines a location according to an embodiment of the present invention.
FIG. 3 is an example of providing a map image to determine place information according to an embodiment of the present invention.
4 is an example of a VR image generated by the controller of FIG. 1 according to an embodiment of the present invention.
5 is an example of the VR image displayed on the screen by the controller of FIG. 1 according to an embodiment of the present invention.
6 is a flowchart illustrating a method by which an electronic device provides a VR image according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art may readily practice the invention. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In order to clearly illustrate the present invention, parts not related to the description are omitted, and similar parts are denoted by like reference characters throughout the specification.

Throughout the specification, when a part is referred to as being "connected" to another part, this includes not only being "directly connected" but also being "electrically connected" with another part in between. Also, when a part is said to "comprise" an element, this means that it may include other elements as well, rather than excluding them, unless specifically stated otherwise.

In this specification, the term "part" includes a unit realized by hardware, a unit realized by software, and a unit realized using both. Further, one unit may be implemented using two or more pieces of hardware, and two or more units may be implemented by one piece of hardware.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.

 1 is a block diagram showing the configuration of an electronic device according to an embodiment of the present invention.

Referring to FIG. 1, an electronic device 100 includes a control unit 110, a memory 120, a communication unit 130, a sensing unit 140, a user input unit 150, and a display unit 160. However, not all of the components shown in FIG. 1 are essential: the electronic device 100 may be implemented with more or with fewer components than those shown. For example, the electronic device 100 may further include a camera (not shown) and a microphone (not shown).

The control unit 110 typically controls the overall operation of the electronic device 100. For example, by executing the programs stored in the memory 120, the control unit 110 can control the signal flow among the communication unit 130, the sensing unit 140, the user input unit 150, and the display unit 160, and can process data.

The control unit 110 may include a RAM used as a storage area for the various operations performed in the electronic device 100 and a ROM storing a control program for controlling the electronic device 100. For example, in addition to a central processing unit (CPU) for data processing, the control unit 110 may further include a graphics processing unit (GPU) for graphics processing and a digital signal processor (DSP), and may be implemented as a system on chip (SoC) incorporating at least one processor. Meanwhile, the term 'control unit' may be interpreted in the same sense as the terms 'processor', 'processing circuit', 'controller', 'arithmetic unit', and the like.

As the VR image providing program stored in the memory 120 is executed, the control unit 110 can set the place information for generating the VR image. Here, the place information may be extracted from an image photographed through a photographing unit (not shown) or from a captured image. For example, a user can photograph or capture a picture of a place he or she wants to go to while surfing the Internet or viewing a magazine, and then receive a VR image of the place shown in that image through the VR image providing program.

Specifically, when the VR image providing program is executed, the control unit 110 can acquire an image previously stored in the memory 120, a photographed image, or a captured image. The control unit 110 may then extract at least one object included in the acquired image and determine the place information corresponding to the extracted object. For example, the control unit 110 may extract feature points from the acquired image, identify at least one object from the feature points, and search for information on the place where that object is located. Alternatively, the control unit 110 may use a neural network implemented in an external server (not shown): it transmits the acquired image, or the at least one object, to the external server through the communication unit 130 and receives information about the place where the object is located.
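
As a rough sketch of this step, the example below extracts feature points locally and delegates recognition to a server. The description names neither a feature detector nor a server API, so the ORB detector, the endpoint, and the response field used here are assumptions.

```python
# A minimal sketch of the place-recognition step, assuming an ORB feature
# detector and a hypothetical recognition endpoint; the patent specifies
# neither.
import cv2
import requests

def determine_place_info(image_path: str, server_url: str) -> str:
    image = cv2.imread(image_path)
    # Extract feature points from which objects can be identified.
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(image, None)
    if descriptors is None:
        raise ValueError("no feature points found in image")
    # Delegate object/place recognition to the external server.
    with open(image_path, "rb") as f:
        response = requests.post(server_url, files={"image": f})
    response.raise_for_status()
    return response.json()["place"]  # e.g. "Eiffel Tower"
```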

Alternatively, the control unit 110 may provide a user interface (e.g., a keyboard image) that receives text indicating the place information instead of a photographed or captured image, so that the user can directly input a desired place.

Next, the control unit 110 acquires a plurality of map images corresponding to the place information. For example, the control unit 110 may acquire the map images from an external map image database (DB) 200. The map images stored in the map image DB 200 may be three-dimensional images photographed at predetermined positions on the map. The control unit 110 may acquire a map image using an application programming interface (API) provided by the external map image DB 200. By adjusting the parameters provided by the API, the control unit 110 may divide one map image stored in the external map image DB 200 into a plurality of images and obtain the divided map images. The control unit 110 can also adjust the API parameters so that each divided map image has a resolution higher than a threshold resolution. Here, the threshold resolution may be, for example, equal to or smaller than the resolution of the map image previously stored in the external map image DB 200.
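
The tile-splitting idea might look as follows: one stored panorama is requested as several tiles by varying view parameters, so each tile keeps a resolution above the threshold. No specific map service is named in the description, so the endpoint, parameter names, and tile count are assumptions.

```python
# Hypothetical sketch: fetching one panorama as several tiles by varying
# view parameters; the endpoint and parameter names are assumptions.
import requests

def fetch_map_tiles(place: str, n_tiles: int = 8, size: str = "1024x1024"):
    tiles = []
    fov = 360 // n_tiles
    for i in range(n_tiles):
        params = {
            "location": place,
            "heading": i * fov,  # rotate the view for each tile
            "fov": fov,          # narrow field of view -> more pixels per degree
            "size": size,
        }
        resp = requests.get("https://maps.example.com/streetview", params=params)
        resp.raise_for_status()
        tiles.append(resp.content)  # raw image bytes for one tile
    return tiles
```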

Thereafter, the control unit 110 combines the plurality of map images to generate a VR image capable of rotating 360 degrees. Since each of the map images has a resolution higher than the threshold resolution, a higher-resolution VR image can be generated than when only one map image is acquired. Specifically, the control unit 110 may integrate the map images and crop unnecessary portions to generate a planar VR image. The control unit 110 then divides the planar VR image into a left-eye image and a right-eye image and maps each image onto a sphere, thereby generating spherical VR images (i.e., a left-eye VR image and a right-eye VR image). The control unit 110 controls the display unit 160 to display a part of the left-eye VR image on the left screen of the electronic device 100 and a part of the right-eye VR image on the right screen. A user wearing the electronic device 100, directly or via an auxiliary device, can thus experience virtual reality through the VR image.
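
The sphere-mapping step corresponds to texturing a sphere with the planar VR image; a minimal sketch of the standard equirectangular mapping follows. Treating the planar image as equirectangular is an assumption, as the description does not fix the projection.

```python
# A minimal sketch of mapping a planar VR image onto a sphere, assuming
# the standard equirectangular projection.
import numpy as np

def equirect_to_sphere(u: float, v: float) -> np.ndarray:
    """Map normalized image coordinates (u, v) in [0, 1] to a unit
    direction on the sphere used to texture the spherical VR image."""
    lon = (u - 0.5) * 2.0 * np.pi   # longitude in [-pi, pi]
    lat = (0.5 - v) * np.pi         # latitude in [-pi/2, pi/2]
    return np.array([
        np.cos(lat) * np.sin(lon),  # x
        np.sin(lat),                # y (up)
        np.cos(lat) * np.cos(lon),  # z
    ])
```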

Next, as motion of the user of the electronic device 100 is detected, the control unit 110 determines the movement position to which the user intends to move within the VR image. Here, the motion of the user may include a movement of the user's head, a movement of the user's body, a finger gesture of the user, and the like.

The control unit 110 may determine the movement position in response to the user's motion sensed through the sensing unit 140. In particular, the control unit 110 may determine the direction in which the user intends to move within the VR image using the orientation of the electronic device 100 sensed by a direction sensor (not shown), and may determine the movement position using the shake of the electronic device 100 sensed through an acceleration sensor (not shown). For example, if the magnitude of the shake of the electronic device 100 is equal to or greater than a threshold value, the control unit 110 may determine, as the movement position, a position a unit distance (for example, 5 meters, 10 meters, etc.) away from a reference point in the movement direction. Here, the reference point corresponds to the place information and may be an arbitrary position in the generated VR image.

More specifically, using Equation (1), the control unit 110 can calculate the magnitude of the shake from the gravity acceleration magnitudes of the x, y, and z axes previously detected by the acceleration sensor and the gravity acceleration magnitudes of the currently detected x, y, and z axes:

$$S = \frac{\left| (a_x + a_y + a_z) - (a'_x + a'_y + a'_z) \right|}{t - t'} \qquad (1)$$

where $S$ represents the shake magnitude, $a_x$, $a_y$, and $a_z$ represent the gravity acceleration magnitudes of the currently sensed x, y, and z axes, and $a'_x$, $a'_y$, and $a'_z$ represent the gravity acceleration magnitudes of the x, y, and z axes detected in the past. Also, $t$ represents the current time, and $t'$ represents the previous time.
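
Read off directly, Equation (1) computes a rate of change of the summed per-axis magnitudes; a minimal sketch, assuming SI units (m/s² and seconds) and the summing order stated in claims 4 and 9:

```python
# A minimal sketch of Equation (1): the shake magnitude as the rate of
# change of the summed per-axis gravity acceleration magnitudes between
# the previous and current accelerometer samples (cf. claims 4 and 9).
def shake_magnitude(curr: tuple[float, float, float],
                    prev: tuple[float, float, float],
                    t_curr: float, t_prev: float) -> float:
    return abs(sum(curr) - sum(prev)) / (t_curr - t_prev)

# Example: two samples 0.1 s apart with a change along the y axis.
shake = shake_magnitude((0.1, 9.9, 0.2), (0.0, 9.4, 0.1), 0.2, 0.1)
```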

Meanwhile, the control unit 110 can determine different movement positions based on the magnitude of the shake. For example, when the magnitude of the shake exceeds a first threshold value, the control unit 110 may determine, as the movement position, a position a first unit distance away in the movement direction. When the magnitude of the shake exceeds a second threshold value (second threshold value > first threshold value), the control unit 110 may determine, as the movement position, a position a second unit distance (second unit distance > first unit distance) away in the movement direction. Accordingly, the control unit 110 can give the user the effect of walking slowly, walking at a normal speed, or running in the virtual space.

However, if the magnitude of the shake does not exceed the threshold value (or the first threshold value), the control unit 110 may determine that the user does not move and display another portion of the existing VR image. The control unit 110 may also determine that the user does not move if the shake does not last longer than a threshold time. This prevents the VR image from being changed by unintentional movements.
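
By way of illustration, the two thresholds and the duration check described above might combine as in the following sketch; every numeric value is an assumption, since the description fixes only the orderings (second threshold value > first threshold value, second unit distance > first unit distance).

```python
import math

# Illustrative sketch of the movement decision; all numeric thresholds,
# distances, and the minimum duration are assumptions (only their ordering
# is given in the description).
def decide_movement(shake: float, duration: float, heading_deg: float,
                    t1: float = 1.5, t2: float = 3.0,
                    d1: float = 5.0, d2: float = 10.0,
                    min_duration: float = 0.3):
    if shake <= t1 or duration < min_duration:
        return None  # treated as unintentional: keep the current VR image
    step = d1 if shake <= t2 else d2  # slow walk vs. faster movement
    # Offset from the reference point along the sensed movement direction.
    return (step * math.sin(math.radians(heading_deg)),
            step * math.cos(math.radians(heading_deg)))
```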

The control unit 110 can sense the user's motion through the sensing unit 140 provided in the electronic device 100, or can receive information about the user's motion sensed by an auxiliary device (for example, a helmet, a head-mounted device (HMD), etc.). In either case, the control unit 110 can perform the above-described operations based on the received information.

Next, as the movement position is determined, the control unit 110 acquires at least one map image corresponding to the movement position. First, the control unit 110 may extract at least one path included in the 360-degree image being reproduced on the screen; for example, it may extract paths along which the user can proceed, such as a roadway, a sidewalk, a corridor, or a stairway, included in the VR image. Subsequently, the control unit 110 can select one of the extracted paths based on the movement position, acquire at least one map image corresponding to the selected path from the external map image DB 200 through the communication unit 130, and generate a VR image of the movement position based on the obtained map image. At this time, the control unit 110 may reuse some of the existing map images, but the present invention is not limited thereto.
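
One simple reading of the path-selection step is to pick the extracted path whose direction best matches the determined movement direction, as sketched below; representing a path as a (name, heading) pair is an assumption, since the description does not fix a representation.

```python
import math

# Sketch of selecting, among paths extracted from the current VR image
# (roadway, sidewalk, corridor, stairway, ...), the one closest to the
# user's movement direction.
def select_path(paths: list[tuple[str, float]], move_heading_deg: float):
    def angular_diff(a: float, b: float) -> float:
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return min(paths, key=lambda p: angular_diff(p[1], move_heading_deg))

# Example: moving roughly east picks the roadway heading 80 degrees.
best = select_path([("roadway", 80.0), ("stairway", 200.0)], 90.0)
```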

As the VR image of the movement position is generated, the control unit 110 controls the display unit 160 so that the VR image being reproduced on the screen is changed to the VR image of the movement position and displayed.

As described above, the electronic device 100 senses the user's intention to move within the virtual space and changes the virtual space accordingly, enabling the user to explore and enjoy the virtual space more realistically. In addition, since the electronic device 100 according to the disclosed embodiment judges the user's intention to move from the shake of the electronic device 100, the user can move in a desired direction in the virtual space simply by making a motion such as nodding the head while wearing the electronic device 100 directly or via an auxiliary device. This can reduce the potential risk of accidents for a user whose view of the actual environment is blocked.

The memory 120 stores various data, programs, and applications for driving and controlling the electronic device 100 under the control of the control unit 110, and may store input/output signals or data corresponding to the driving of the communication unit 130, the sensing unit 140, the user input unit 150, and the like.

The memory 120 may also store a control program for controlling the electronic device 100, an application originally provided by the manufacturer or downloaded from an external source, a GUI associated with the application, various objects (e.g., images, text, video, etc.), user information, documents, databases, and related data.

The memory 120 may include nonvolatile memory, volatile memory, a hard disk drive (HDD), or a solid state drive (SSD).

The communication unit 130 may connect the electronic device 100 with an external device (e.g., the map image DB 200) under the control of the control unit 110. The control unit 110 may access the external device through the communication unit 130 to acquire content (e.g., a map image), download an application, or browse the web. The communication unit 130 may include at least one of a wireless LAN module (not shown), a Bluetooth module (not shown), and a wired Ethernet module (not shown), corresponding to the performance and structure of the electronic device 100, or a combination thereof, but is not limited thereto.

The sensing unit 140 may sense the user's motion of the electronic device 100 and provide the sensed information to the control unit 110. The sensing unit 140 may include at least one sensor, for example, a direction sensor and an acceleration sensor, and may further include a gyro sensor, a proximity sensor, an infrared sensor, an electromagnetic sensor, and the like.

The user input unit 150 receives a user input for controlling the electronic device 100. For example, the user input unit 150 may include at least one of a key (not shown), a touch panel (not shown), and a pen recognition panel (not shown).

In addition, the electronic device 100 may further include a camera (not shown) for photographing or capturing an image used to set the place information for providing VR images. The camera can obtain image frames, such as still images or moving images, through its image sensor. The image obtained through the image sensor can be processed by the control unit 110 or a separate image processing unit (not shown).

FIG. 2 is an example in which the controller 110 of FIG. 1 determines a location according to an embodiment of the present invention.

Referring to FIG. 2, as the VR image providing program is executed, the control unit 110 can extract at least one object from the photographed or captured image 210 and search for the place information based on the extracted object. For example, the place information may be "Eiffel Tower", "Paris", and the like.

Then, the control unit 110 may provide a user interface 220 including the place information 230. If the place information 230 is the place desired by the user, the user can enter an input requesting a VR image corresponding to that place; if not, the user can search for other place information. To this end, the control unit 110 may provide various GUI elements on the user interface 220. For example, the control unit 110 may provide a first GUI ("GO") 221 for receiving a user input requesting a VR image corresponding to the place information 230.

Further, the control unit 110 may provide a second GUI ("GET IMAGE FROM CAMERA") 222 for photographing or capturing a new image to search for other place information, and a third GUI ("GET IMAGE FROM GALLERY") 223 for selecting an image stored in the gallery.

Further, the control unit 110 may provide a fourth GUI ("NO IMAGE") 224 for receiving the place information directly from the user. When a user input on the fourth GUI 224 is received, the control unit 110 may switch and provide the user interface 220; for example, it may provide a keyboard image for receiving place information from the user.

Alternatively, the control unit 110 may provide a map image 310, as shown in FIG. 3, as a user input on the fourth GUI 224 is received. The control unit 110 may set the place information in response to a user input received on the map image 310, and may provide a pop-up window (e.g., "Louvre museum") 320 indicating the set place information.

4 is an example in which the controller 110 of FIG. 1 generates a VR image according to an embodiment of the present invention.

Referring to FIG. 4, the control unit 110 can acquire map images from the external map image DB 200 through the communication unit 130. The map image DB 200 is linked with the map and can store actually photographed images in the form of three-dimensional panorama images corresponding to predetermined positions on the map; for example, it can store a street view photographed by a three-dimensional camera. By adjusting the API provided by the map image DB 200, the control unit 110 can divide one map image 411 stored in the map image DB 200 into eight and obtain the eight divided map images 412. Each of the eight map images 412 may have the same or a similar resolution as the one map image 411.

Thereafter, the control unit 110 can generate a planar VR image 422 by integrating the eight map images 412 and cropping the unnecessary area. The control unit 110 then divides the planar VR image 422 into a left-eye image and a right-eye image and maps each image onto a sphere, thereby generating a spherical VR image 423. The control unit 110 controls the display unit 160 to display a part of the left-eye VR image 423-1 on the left screen and a part of the right-eye VR image 423-2 on the right screen.

5 is an example of the VR image displayed on the screen by the controller 110 of FIG. 1 according to an embodiment of the present invention. A part 510 of the left-eye VR image 423-1 is displayed on the left screen of the electronic device 100, and a part 520 of the right-eye VR image 423-2 is displayed on the right screen.

Meanwhile, in the above description, the electronic device 100 may be implemented as a head-mounted device (HMD) to provide the VR image, or may be a smart device that can be combined with a separate head-wearable device. For example, the electronic device 100 may be a smartphone, a tablet personal computer, a mobile phone, a video phone, an e-book reader, a desktop personal computer, a laptop personal computer, a netbook computer, a personal digital assistant (PDA), a portable multimedia player (PMP), an MP3 player, a mobile medical device, a camera, a wearable device, or the like.

FIG. 6 is a flowchart illustrating a method by which the electronic device 100 provides a VR image according to an embodiment of the present invention. The steps shown in FIG. 6 correspond to the embodiments described above with reference to FIGS. 1 to 5; therefore, even where omitted below, the descriptions given above apply to the method of FIG. 6.

Referring to FIG. 6, the electronic device 100 sets place information (s610). For example, the electronic device 100 may set the place information from an image pre-stored in the electronic device 100, a photographed image, or a captured image. More specifically, the electronic device 100 can extract at least one object included in the image and search for the place information corresponding to the extracted object. Alternatively, the electronic device 100 may acquire the place information corresponding to the image using a neural network implemented in an external server, or may receive the place information directly from the user.

Next, the electronic device 100 acquires a plurality of map images corresponding to the place information (s620). The electronic device 100 may acquire the map images from the external map image DB 200 using an application programming interface (API) provided by the DB. By adjusting the parameters provided by the API, the electronic device 100 may divide one map image stored in the external map image DB 200 into a plurality of images and acquire the divided map images.

Subsequently, the electronic device 100 integrates the plurality of map images to generate a VR image capable of rotating 360 degrees (s630). Specifically, the electronic device 100 may integrate the map images and crop unnecessary portions to generate a planar VR image. The electronic device 100 then divides the planar VR image into a left-eye image and a right-eye image and maps each image onto a sphere, thereby generating spherical VR images (i.e., a left-eye VR image and a right-eye VR image). The electronic device 100 displays a part of the left-eye VR image on the left screen and a part of the right-eye VR image on the right screen.

As the user's motion is detected, the electronic device 100 determines the movement position to which the user intends to move within the VR image (s640). The electronic device 100 may sense the user's motion through a direction sensor, an acceleration sensor, or the like (not shown) provided in the electronic device 100, and may determine the movement position (e.g., latitude and longitude information) in response to the sensed motion.

Specifically, the electronic device 100 can determine the movement position based on the direction of the electronic device 100 sensed through the direction sensor and the magnitude of the shake of the electronic device 100 sensed through the acceleration sensor. For example, if the magnitude of the shake is greater than or equal to a threshold value, the electronic device 100 may determine a movement position a unit distance (e.g., 5 meters, 10 meters, etc.) away from the reference point. Here, the reference point corresponds to the place information and may be an arbitrary position in the VR image. The magnitude of the shake is calculated based on the difference between the gravity acceleration magnitudes of the past x, y, and z axes sensed by the acceleration sensor and those of the current x, y, and z axes. The greater the magnitude of the shake, the farther from the reference point the electronic device 100 can set the movement position.

Thereafter, the electronic device 100 acquires at least one map image corresponding to the movement position and generates a VR image of the movement position (s650).

To obtain the at least one map image, the electronic device 100 may extract at least one path included in the 360-degree image, select one of the extracted paths based on the determined movement position, and obtain at least one map image corresponding to the selected path from the external map image DB 200. The electronic device 100 may then generate the VR image of the movement position based on the obtained map image. At this time, the electronic device 100 may reuse some of the existing map images, but the present invention is not limited thereto.

Also, as the VR image of the movement position is generated, the electronic device 100 can change the VR image being reproduced on its screen to the VR image of the movement position and display it.

On the other hand, steps s610 to s650 may be further divided into additional steps or combined into fewer steps, according to an embodiment of the present invention. Also, some of the steps may be omitted as necessary, and the order between the steps may be changed.

An embodiment of the present invention may also be embodied in the form of a recording medium including computer-executable instructions, such as program modules, executed by a computer. Computer-readable media can be any available media that can be accessed by a computer and include volatile and nonvolatile media and removable and non-removable media, implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data.

It will be understood by those skilled in the art that the foregoing description of the present invention is for illustrative purposes only, and that various changes and modifications may be made without departing from the spirit or essential characteristics of the present invention. The above-described embodiments are therefore to be understood as illustrative in all aspects and not restrictive. For example, each component described as a single entity may be implemented in a distributed manner, and components described as distributed may be implemented in combined form.

The scope of the present invention is defined by the appended claims rather than by the detailed description, and all changes or modifications derived from the meaning and scope of the claims and their equivalents are to be construed as being included within the scope of the present invention.

100: Electronic device
110: Control unit
120: Memory
130: Communication unit
140: Sensing unit
150: User input unit
160: Display unit
200: Map image DB

Claims (12)

  1. A method for an electronic device to provide VR images,
    Setting location information;
    Obtaining a plurality of map images corresponding to the place information;
    Generating a VR image capable of rotating 360 degrees by integrating the plurality of map images;
    Determining a movement position in which the user is to move within the VR image as motion of a user of the electronic device is sensed; And
    Acquiring at least one map image corresponding to the movement position and generating a VR image of the movement position,
    Wherein the determining the movement position comprises:
    Sensing a direction of movement of the electronic device;
    Sensing a shake of the electronic device; And
    And determining, as a movement position, a position that is a unit distance from the place information in the movement direction when the size of the shake exceeds a threshold value,
    The magnitude of the shake is determined based on the difference between the previous magnitude of gravitational acceleration and the current magnitude of gravitational acceleration,
    VR image providing method.
  2. The method according to claim 1,
    Wherein the setting of the place information comprises:
    Obtaining an image captured or captured by a user;
    Extracting at least one object included in the image; And
    Searching for information about a place where the at least one object is located.
  3. The method according to claim 1,
    The step of acquiring the plurality of map images
    Dividing a map image stored in a server into a plurality of images; And
    And obtaining the divided plurality of map images from the server.
  4. The method according to claim 1,
    Wherein the magnitude of the shake is determined based on a difference between a sum of the gravity acceleration magnitudes of each of the past x, y, and z axes and a sum of the gravity acceleration magnitudes of each of the current x, y, and z axes.
  5. The method according to claim 1,
    The step of generating the VR image of the movement position
    Extracting at least one path included in the VR image being reproduced on the screen of the electronic device;
    Selecting one of the at least one path based on the movement position;
    Obtaining at least one map image corresponding to the selected path; And
    And generating a VR image of the movement position based on the obtained at least one map image.
  6. An electronic device for providing a VR image,
    A memory for storing a program for providing a VR image, and a processor for executing the program,
    The processor, as the program is executed,
    Setting location information, acquiring a plurality of map images corresponding to the location information,
    Generating a VR image capable of rotating 360 degrees by integrating the plurality of map images,
    As the motion of the user of the electronic device is sensed, determines a movement position in which the user is to move within the VR image,
    Acquiring at least one map image corresponding to the movement position to generate a VR image of the movement position,
    The electronic device
    And a sensor unit for sensing motion of the user,
    The processor comprising:
    Wherein the sensor unit detects a direction of movement of the electronic device, detects a shake of the electronic device,
    Wherein when the size of the shaking exceeds a threshold value, a position that is a unit distance from the place information in the moving direction is determined as a moving position,
    Wherein the magnitude of the shake is determined based on a difference between a magnitude of previous gravitational acceleration and a magnitude of current gravitational acceleration.
    Electronic device.
  7. The electronic device according to claim 6,
    The processor comprising:
    Acquiring an image captured or captured by a user, extracting at least one object included in the image, and searching for information on a location where the at least one object is located.
  8. The electronic device according to claim 6,
    The electronic device
    And a communication unit for communicating with the server,
    The processor comprising:
    Wherein the processor is connected to the server via the communication unit, divides the map image stored in the server into a plurality of images, and obtains the divided plurality of map images from the server.
  9. The electronic device according to claim 6,
    Wherein the magnitude of the shake is determined based on the sum of the magnitude of the gravitational acceleration of each of the previous x, y, z axes and the sum of the gravitational acceleration magnitudes of each of the current x, y, z axes.
  10. The electronic device according to claim 6,
    The processor comprising:
    Extracts at least one path included in the generated VR image, selects one of the at least one path based on the movement position,
    Obtains at least one map image corresponding to the selected path, and generates a VR image of the movement position based on the obtained at least one map image.
  11. The electronic device according to claim 6,
    Wherein the electronic device is implemented in the form of a head mounted device or is implemented as a smart device coupled with a head wearable device.
  12. A computer-readable recording medium on which a program for implementing the method of any one of claims 1 to 5 is recorded.
KR1020160182252A 2016-12-29 2016-12-29 Method and apparatus for providing VR image KR101877901B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160182252A KR101877901B1 (en) 2016-12-29 2016-12-29 Method and apparatus for providing VR image

Publications (2)

Publication Number Publication Date
KR20180077666A KR20180077666A (en) 2018-07-09
KR101877901B1 (en) 2018-07-12

Family

ID=62919051

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160182252A KR101877901B1 (en) 2016-12-29 2016-12-29 Method and apparatus for providing VR image

Country Status (1)

Country Link
KR (1) KR101877901B1 (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08124100A (en) * 1994-10-28 1996-05-17 Nikon Corp Monitoring device for distance between vehicles
JPH11259673A (en) * 1998-01-08 1999-09-24 Nippon Telegr & Teleph Corp <Ntt> Space stroll video display method, in-space object retrieving method, and in-space object extracting method, device therefor, and recording medium where the methods are recorded
JP2004062618A (en) * 2002-07-30 2004-02-26 Koei:Kk Program, recording medium, metaball plotting method and game machine
US20070265084A1 (en) * 2006-04-28 2007-11-15 Nintendo Co., Ltd. Game apparatus and recording medium recording game program
EP2660645A1 (en) * 2012-05-04 2013-11-06 Sony Computer Entertainment Europe Limited Head-mountable display system
US20140347390A1 (en) * 2013-05-22 2014-11-27 Adam G. Poulos Body-locked placement of augmented reality objects
KR20150084200A (en) * 2014-01-13 2015-07-22 엘지전자 주식회사 A head mounted display and the method of controlling thereof
KR20150123605A (en) * 2014-04-25 2015-11-04 세종대학교산학협력단 Apparatus for playing mixed reality content and method for rendering mixed reality content
WO2016209167A1 (en) * 2015-06-23 2016-12-29 Paofit Holdings Pte. Ltd. Systems and methods for generating 360 degree mixed reality environments

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
정재훈 외 7인, "동작기반의 체험형 리모트 콘트롤" (Motion-Based Experiential Remote Control), 한국 HCI학회 (2007) *

Also Published As

Publication number Publication date
KR20180077666A (en) 2018-07-09

Similar Documents

Publication Publication Date Title
CN105046752B (en) Method for describing virtual information in the view of true environment
CN100534158C (en) Generating images combining real and virtual images
JP5724543B2 (en) Terminal device, object control method, and program
US9570113B2 (en) Automatic generation of video and directional audio from spherical content
US8907983B2 (en) System and method for transitioning between interface modes in virtual and augmented reality applications
US8447136B2 (en) Viewing media in the context of street-level images
KR20170052675A (en) Transmission of three-dimensional video
KR20150116871A (en) Human-body-gesture-based region and volume selection for hmd
EP3432273B1 (en) System and method of indicating transition between street level images
CA2804096C (en) Methods, apparatuses and computer program products for automatically generating suggested information layers in augmented reality
KR101260576B1 (en) User Equipment and Method for providing AR service
EP2355440B1 (en) System, terminal, server, and method for providing augmented reality
JP5053404B2 (en) Capture and display digital images based on associated metadata
EP2732436B1 (en) Simulating three-dimensional features
US8963954B2 (en) Methods, apparatuses and computer program products for providing a constant level of information in augmented reality
JP2014525089A5 (en)
US20130321461A1 (en) Method and System for Navigation to Interior View Imagery from Street Level Imagery
US8624974B2 (en) Generating a three-dimensional model using a portable electronic device recording
KR101637990B1 (en) Spatially correlated rendering of three-dimensional content on display components having arbitrary positions
US9665986B2 (en) Systems and methods for an augmented reality platform
AU2012232976B2 (en) 3D Position tracking for panoramic imagery navigation
KR20150143659A (en) Holographic snap grid
US9699375B2 (en) Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
US20120212405A1 (en) System and method for presenting virtual and augmented reality scenes to a user
US9286721B2 (en) Augmented reality system for product identification and promotion

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant