WO2015145863A1 - Display system, attachment, display method, and program - Google Patents

Display system, attachment, display method, and program

Info

Publication number
WO2015145863A1
WO2015145863A1 (application PCT/JP2014/080393)
Authority
WO
WIPO (PCT)
Prior art keywords
video
user
attachment
mobile terminal
head
Prior art date
Application number
PCT/JP2014/080393
Other languages
French (fr)
Japanese (ja)
Inventor
Naotaka Fujii (藤井 直敬)
Original Assignee
RIKEN (国立研究開発法人理化学研究所)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by RIKEN (国立研究開発法人理化学研究所)
Priority to JP2016509900A (granted as JP6278490B2)
Publication of WO2015145863A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/64: Constructional details of receivers, e.g. cabinets or dust covers
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/698: Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00: Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01: Head-up displays
    • G02B 27/017: Head mounted
    • G02B 27/0101: Head-up displays characterised by optical features
    • G02B 2027/0138: Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G02B 2027/014: Head-up displays characterised by optical features comprising information/image processing systems
    • G02B 27/0179: Display position adjusting means not related to the information to be displayed
    • G02B 2027/0187: Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Definitions

  • The present invention relates to a display system, an attachment, a display method, and a program.
  • More specifically, the present invention relates to a head proximity type video display system that displays video for a user who has brought a head proximity type video display device close to the head, a head proximity body used in that system, and a head proximity type video display program.
  • VR: virtual reality
  • CG: computer graphics
  • The SR (substitutional reality) system is a technology conceived so that a virtual substitute world is swapped in for reality, allowing the subject to experience the virtual and the real without distinction.
  • Achieving this requires skillfully manipulating the higher human cognitive function called metacognition: the conviction that an event occurring before one's eyes is real, and the suspicion that arises when an inconsistent event occurs.
  • The SR system has not yet been put into practical use because of various technical limitations associated with manipulating this metacognition.
  • In particular, a major issue is how to make viewers recognize virtual substitute events that are not actually occurring before their eyes, such as past footage or footage of a different location, as events happening before their eyes.
  • “Augmented reality” is a technology that superimposes a visual image based on CG (Computer Graphics) technology on an actual live image that can be seen from the user's viewpoint, and displays the image via a display device such as a head mounted display.
  • This “augmented reality” is a technology that can newly add information that does not actually exist in reality.
  • The techniques disclosed in Patent Documents 1 to 3 have been proposed as conventional techniques concerning this “augmented reality”.
  • However, this “augmented reality” has a problem: since the superimposed visual image is a single layer, the information of the real image displayed behind the superimposed CG is lost.
  • Telepresence technology is a real-time display of images captured by a camera provided at a remote location via a head-mounted display mounted on the viewer's head.
  • the imaging direction of the camera is remotely controlled in conjunction with the movement of the viewer's head.
  • this "telepresence technology” is intended to give viewers imaging information from a camera placed at a remote location, and it is a technology intended to add new information to the real space. Absent.
  • “Google Glass (registered trademark)” has also been proposed.
  • Google Glass is based on the concept of embedding various information in the real space the viewer sees through the glasses, so that information can be displayed hands-free and the Internet can be used with natural-language voice commands.
  • However, Google Glass has a narrow information display area, so multiple pieces of information must be displayed exclusively; when they are superimposed on one another, there is the problem that the viewer cannot see the information hidden underneath.
  • Moreover, no consideration is given to making the displayed video look real, so it does not go beyond the use of a small portable monitor.
  • As described above, the conventional “augmented reality”, “telepresence technology”, and “Google Glass (registered trademark)” do not succeed in making the viewer recognize a virtual substitute event that is not occurring before the eyes as an event actually occurring before the eyes. That is, there remain various metacognitive obstacles to recognizing the virtual substitute event as a real event, arising from the CG depicting the virtual substitute event or from the unnaturalness of the video scene. For example, in “augmented reality”, the viewer judges the CG portion to be artificial because of the unnaturalness of the CG superimposed on the live video. In “telepresence”, video from a remote place is projected on a screen in the first place, so even when the viewer watches the video, it is not interpreted as real.
  • The present invention has been devised in view of the above problems, and its object is to provide a display system, a display method, an attachment, and a program that can easily and inexpensively substitute for a head-mounted display or the like.
  • A further object of the present invention is to allow a viewer to recognize an event that is not actually occurring in front of the user as an event actually occurring in front of the user, to realize this easily using mobile terminals such as smartphones that are already in widespread use, and to provide it to users as an inexpensive head proximity type video display system, together with the head proximity body used in the system and a head proximity type video display program.
  • To this end, the inventor devised a display system, display method, attachment, and program suitable for application to a head proximity type video display system implemented as an application program of a mobile terminal such as a smartphone, and to the head proximity body and head proximity type video display program used with it.
  • The display system of the present invention comprises: a mobile terminal having a screen; and an attachment that houses the mobile terminal so that the screen is visible to the user. A sensor of the mobile terminal detects an action performed by the user through the attachment, and the video displayed on the screen is controlled according to the detected action.
  • The head proximity type video display system of the present invention comprises: a mobile terminal having recording means in which one or more substitute videos are recorded, display means for displaying at least the substitute videos recorded in the recording means, and a motion sensor that detects its own movement; and a head proximity body having accommodating means in which the mobile terminal is accommodated so that the display means is positioned in front of the user's eyes during video viewing.
  • Display of the substitute video is started or stopped according to the motion detected by the motion sensor.
  • The mobile terminal may further include an imaging unit that captures the real space as live video from substantially the same viewpoint as the user; depending on the movement detected by the motion sensor, the display means starts or stops display of the substitute video recorded in the recording means or the live video captured by the imaging unit, or switches between displaying the substitute video and the live video.
  • The head proximity body includes a head frame that is brought close to the user's head when viewing video, and openable and closable protruding pieces projecting from both sides of the head frame at an interval narrower than the user's head.
  • The motion sensor detects movement corresponding to the user's opening and closing of the protruding pieces.
  • Specifically, at the start of viewing, the motion sensor detects the movement of bringing the head proximity body to the front of the head, and the deceleration accompanying the opening and closing of the protruding pieces in front of the head.
  • the head proximity body further includes a lens for enlarging an image displayed by the display means of the portable terminal accommodated in the accommodation means.
  • Video control means may further be provided for cutting out the video to be displayed from one substitute video along the time series.
  • The substitute video acquisition means processes information acquired by accessing the public communication network as needed, or information stored in advance, into one substitute video.
  • The head proximity type video display system of the present invention may further include audio data acquisition means for acquiring audio data, with the video control means reproducing the audio data acquired by the audio data acquisition means in conjunction with the substitute video.
  • The head proximity body of the present invention is used in the head proximity type video display system described above, and includes: accommodating means in which the mobile terminal is accommodated so that the display means is positioned in front of the user's eyes when viewing video;
  • a head frame that is brought close to the user's head when viewing video; and openable and closable protruding pieces projecting from both sides of the head frame at an interval narrower than the user's head.
  • The head proximity type video display program of the present invention causes a mobile terminal to display video for a user who has brought the head proximity body close to the head, and causes the mobile terminal to execute: a substitute video recording step of recording one or more substitute videos; a motion detection step of detecting movement of the mobile terminal;
  • and a display step of starting or stopping display of the substitute videos recorded in the substitute video recording step according to the movement detected in the motion detection step.
  • With the configuration described above, a display system, a display method, an attachment, and a program can be provided.
  • In particular, a substitutional reality technology can be provided: a technology for experiencing current and past video seamlessly as an extension of reality.
  • Since the system can be realized simply by using a mobile terminal such as a smartphone that is already in widespread use, it can be provided to users as an inexpensive system, and rapid adoption can be expected.
  • By having the user view the mobile terminal housed in the head proximity body, the user can feel as if wearing a head-mounted display.
  • The start of video display on the display panel can be controlled automatically through the proximity movement of the head proximity body.
  • The mobile terminal is used while accommodated in the head proximity body, so it would be very cumbersome for the user to operate a mobile terminal covered by the head proximity body.
  • In the present invention, however, video display starts automatically with only the natural proximity action, so operability is excellent.
  • Further, since the system operates via a wirelessly operating mobile terminal, the entire system can be made completely wireless.
  • FIG. 1 is an overall configuration diagram of a head proximity type video display system to which the present invention is applied. FIG. 2 illustrates the structure of the panoramic video camera. FIG. 3 shows the external shape of the head proximity body. FIG. 4 is an exploded perspective view of the head proximity body. FIG. 5 illustrates how the head proximity body is used.
  • FIGS. 6(a) and 6(b) illustrate the external shape of the mobile terminal. FIG. 7 shows the block configuration of the mobile terminal. FIG. 8 illustrates the step structure of the control application.
  • FIGS. 9(a), 9(b), 10(a), and 10(b) illustrate examples of detecting the proximity movement of the head proximity body.
  • FIGS. 12(a) and 12(b) illustrate an example of applying past video as the substitute video.
  • Further figures illustrate, in several panels each, a procedure for constructing the attachment from the packaging box of the mobile terminal, and the attachment so constructed.
  • FIG. 1 shows an overall configuration diagram of a head proximity image display system 1 to which the present invention is applied.
  • This head proximity type video display system 1 includes a mobile terminal 2, a recording module 3 connected to it, and a head proximity body 4, and is further connected to a communication network 5 via the mobile terminal 2.
  • The mobile terminal 2 serves as the central control device that controls the entire head proximity type video display system 1.
  • The mobile terminal 2 is embodied as, for example, a smartphone, a mobile phone, or a tablet terminal.
  • However, the mobile terminal 2 is not limited to these; any portable electronic device terminal, such as a mobile game device, a mobile music player, or a wearable terminal, may be used.
  • the communication network 5 may be any wired or wireless communication network.
  • Examples include the Internet, an intranet, an extranet, a LAN (Local Area Network), an ISDN (Integrated Services Digital Network), a VAN (Value Added Network), a CATV (Community Antenna Television) communication network, a virtual private network (Virtual Private Network), a telephone line network, a mobile communication network, and a satellite communication network.
  • the recording module 3 is used to record a substitute video based on a past event separately from an actual event, and includes a panoramic video camera 31.
  • the panoramic video camera 31 has a base 311 and a camera array 312 provided on the base 311 as shown in FIG.
  • the height of the base 311 can be adjusted as necessary, and is set as needed to record a desired video.
  • the camera array 312 includes a main body 320 and a plurality of imaging devices 321 mounted inside the main body 320.
  • An imaging device 321 is mounted in each of the openings formed in the main body 320, and an object can be imaged through the openings.
  • the plurality of imaging devices 321 are set with an angle of view and an imaging direction in order to enable imaging in different directions.
  • Each imaging device 321 may be mounted so that all directions around the main body 320 (360° horizontally and 360° vertically) can be imaged without gaps, as shown in FIG. 2, for example.
  • With these imaging devices 321, imaging can be performed in all directions simultaneously. For this reason, when imaging a person in a room, even if the person moves, a video is generated in which the moving person is captured all around the virtual viewpoint built in the camera body.
  • the recording module 3 is not limited to the configuration described above, and may be replaced by any configuration as long as it can capture all directions of the main body 320.
  • Each imaging device 321 uses a solid-state image sensor such as a CCD (Charge Coupled Device): a subject image incident through the lens is formed on the imaging surface and photoelectrically converted.
  • The resulting video signal is transmitted to the mobile terminal 2 or a recording computer via the interface 330.
  • a microphone may be mounted on the imaging device 321.
  • the microphone collects peripheral sounds and converts them into audio signals.
  • the microphone transmits the converted audio signal to the portable terminal 2 or the recording computer via the interface 330.
  • The head proximity body 4 is an attachment for the mobile terminal 2; with the mobile terminal 2 accommodated in it, it is configured as a pseudo head-mounted display that can be brought close to the viewer's head, as shown in the perspective view of FIG. 3 and the exploded view of FIG. 4.
  • In other words, a glasses-type or goggles-type display device is realized.
  • The head proximity body 4 is roughly divided into a proximity unit 41 and a terminal accommodating unit 42.
  • Each component of the head proximity body 4 (attachment) can be given a pattern, color, decoration, and the like according to its use and function.
  • the proximity unit 41 is made of paper such as cardboard or resin, but is not limited to this, and may be made of any material such as a ceramic material or metal.
  • The proximity unit 41 includes a head frame 102 that is brought close to the head, and protruding pieces 101a and 101b projecting from both sides of the head frame 102.
  • The head frame 102 is formed as a cylindrical body with a rectangular cross section.
  • a terminal accommodating unit 42 can be inserted into the cylindrical head frame 102.
  • The proximity unit 41 is attached by bringing the head frame 102 close to the user's head in the C direction.
  • The head frame 102 has side plates 102a and 102b facing each other; the protruding piece 101a is attached to the side plate 102a so as to protrude in the C direction, and the protruding piece 101b is attached to the side plate 102b so as to protrude in the C direction.
  • The distance between the side plates 102a and 102b of the head frame 102 is approximately equal to or less than the typical width of a user's head.
  • In other words, the interval between the protruding piece 101a and the protruding piece 101b is equal to or less than the typical width of the user's head.
  • the protruding piece 101a is openable and closable in the D direction with respect to the side plate 102a via the hinge mechanism 103.
  • the protruding piece 101b can be opened and closed in the D direction via the hinge mechanism 103 with respect to the side plate 102b.
  • The protruding pieces 101a and 101b may be connectable to each other via rubber, string, or the like, so that the head proximity body can be worn firmly by placing them over the user's head. The user can freely adjust the opening angle D of the protruding pieces to minimize the influence of outside light.
  • When the proximity unit 41 is made of cardboard, the hinge mechanism 103 need only be a bent portion of the cardboard. When the proximity unit 41 is made of metal or resin, the hinge mechanism 103 may be an openable hinge formed by inserting, for example, a pin through vertically aligned through holes (not shown) at the joint between the protruding piece 101 and the side plate 102. In the normal state, when video is not being viewed, it is desirable that the protruding pieces 101 are not opened in the D direction and that the interval between the protruding pieces 101a and 101b is narrower than the user's head.
  • The terminal accommodating unit 42 is likewise made of paper such as cardboard or of resin, but is not limited to these and may be made of any material such as ceramic or metal; it is inserted into the proximity unit 41.
  • The terminal accommodating unit 42 has a rear plate 110 provided in the C direction, a lens 111 formed in the rear plate 110, and a front plate 112 provided opposite the rear plate 110 and the lens 111, in other words on the side opposite the C direction.
  • A pressing piece 113 is provided between the front plate 112 and the rear plate 110, disposed closer to the front plate 112. Between the front plate 112 and the pressing piece 113, an accommodating portion 114 for accommodating the mobile terminal 2 is formed.
  • the lens 111 is a lens medium that can refract visible light emitted from the display panel 62 of the mobile terminal 2. An image displayed on the display panel 62 of the mobile terminal 2 can be enlarged and displayed in the user's field of view through the lens 111.
  • As the lens 111, for example, an aspherical lens, a convex lens, a plano-convex lens, a Fresnel lens, or the like can be adopted.
  • The Fresnel lens is particularly suitable because it is light, thin, and inexpensive.
  • When a stereoscopic image using parallax or a stereoscopic image using random dots is presented from the mobile terminal 2, it is easier for the user to view it if the left and right visual fields are separated.
  • In that case, a two-eyepiece configuration can be made with a single lens by opening two eyepiece openings on the left and right, or with two lenses arranged on the left and right.
  • As many lenses as eyepiece openings may be prepared, with an individual lens fitted into each opening.
  • a wall that blocks the left and right fields of view may be provided in the proximity unit 41.
  • The accommodating portion 114 is configured with an interval between the front plate 112 and the pressing piece 113 that allows the mobile terminal 2 to be accommodated.
  • The mobile terminal 2 is accommodated in the accommodating portion 114 with its display panel 62 side facing the C direction in the drawing and its imaging unit 44 side facing the direction opposite to C.
  • The pressing pieces 113 protrude inward from the side plates 132a and 132b of the terminal accommodating unit 42, and have a size and shape that can support the edges of the mobile terminal 2. This prevents the display image of the display panel 62 of the mobile terminal 2 from being blocked by the pressing pieces 113.
  • the front plate 112 is provided with grooves 123 and 124 at two locations.
  • the groove 123 is provided to make it easier for the user to grasp the portable terminal 2 when taking it out.
  • The groove 124 is provided at a location corresponding to the position of the imaging unit 44 when the mobile terminal 2 is accommodated. As a result, the imaging direction of the imaging unit 44 is left open by the groove 124 and is not blocked by the front plate 112.
  • The head proximity body 4 becomes usable by accommodating the mobile terminal 2 in the accommodating portion 114 of the terminal accommodating unit 42 configured as described above, and inserting the terminal accommodating unit 42 into the proximity unit 41.
  • In use, the protruding pieces 101a and 101b are opened as shown in FIG. 5, and the head frame 102 of the proximity unit 41 is brought close to the user's head.
  • The user's field of view is then shielded by the protruding pieces 101 and the head frame 102, and the user can concentrate for a long time on the display image of the display panel 62 of the mobile terminal 2, enlarged through the lens 111.
  • As a result, the user can experience the same immersive feeling as when wearing a head-mounted display.
  • The user may also move the terminal accommodating unit 42 inserted in the proximity unit 41 toward or away from himself or herself. This changes the distance between the user's eyes and the lens 111, so the focus on the image displayed on the display panel 62 can be adjusted freely.
  • FIG. 6 (a) is a plan view of the portable terminal 2
  • FIG. 6 (b) is a bottom view thereof.
  • the mobile terminal 2 includes a display panel 62, headphones 43, an imaging unit 44, and a motion sensor 59.
  • Headphone 43 can be worn on the user's ear.
  • The headphones 43 are not limited to a shape and size that completely cover the user's ears, and may be small earphone types.
  • The headphones 43 are not an essential component and may be omitted as necessary; when used, however, a noise canceling function is desirable.
  • FIG. 7 shows a block configuration of the portable terminal 2.
  • the mobile terminal 2 further includes a microphone 60 and a recording unit 69 in addition to the motion sensor 59, the display panel 62, the headphones 43, and the imaging unit 44 described above.
  • The mobile terminal 2 also includes a peripheral interface (I/F) 57 and a control application 20.
  • The power switch 58 and the operation unit 65 are connected to the peripheral I/F 57.
  • The motion sensor 59, the microphone 60, the display panel 62, the headphones 43, the imaging unit 44, and the recording unit 69 are likewise each connected to the peripheral I/F 57.
  • the display panel 62 displays the video imaged by the imaging unit 44 or the video image transmitted from the control application 20.
  • When a video signal is input, the display panel 62 generates the signals that are the elements for producing an image (the R, G, and B primary color signals) based on the video signal. Light based on the generated RGB signals is emitted, combined, and scanned two-dimensionally; the two-dimensionally scanned light is converged so that its center line falls on the user's pupil, and is projected onto the retina of the user's eye.
  • the headphone 43 receives the audio signal transmitted from the mobile terminal 2.
  • the headphone 43 outputs sound based on the input sound signal.
  • The peripheral I/F 57 is an interface that relays the transmission and reception of various information, passing information acquired from the motion sensor 59, the microphone 60, the operation unit 65, and the like to the control application 20.
  • The power switch 58 is, for example, a pushable button-type switch exposed to the outside; when the user presses it, the mobile terminal 2 starts or ends its processing operation.
  • The motion sensor 59 detects the movement of the mobile terminal 2.
  • A gyro sensor, an acceleration sensor, a geomagnetic sensor, or the like is used, detecting the angle, inclination, and speed of the mobile terminal 2, and hence of the head proximity body 4 in which the mobile terminal 2 is accommodated.
  • the motion sensor 59 may detect the position of the mobile terminal 2 itself.
  • the motion sensor 59 may be replaced with a motion capture system using a camera (not shown) installed in the experience environment. Data on the movement of the user's head acquired by the motion sensor 59 is transmitted to the control application 20 via the peripheral I / F 57.
  • Microphone 60 collects ambient sounds and converts them into audio signals.
  • The microphone 60 can transmit the converted audio signal to the headphones 43 via the peripheral I/F 57; some processing may be applied to the audio signal along the way.
  • When the mobile terminal 2 is accommodated in the head proximity body 4, the microphone 60 is also placed inside it. When the user acts on the head proximity body 4 by touching, hitting, or rubbing it, or by deforming it through opening and closing the protruding pieces 101a and 101b, the sound resulting from the action resonates in the head proximity body 4 and is collected by the microphone 60.
  • Therefore, by filtering in the resonance frequency band unique to the head proximity body 4, the user's action on the head proximity body 4, which is the attachment, can be distinguished from external sound.
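  • By way of illustration, the following is a minimal Python sketch of such resonance-band filtering. The band limits, sampling rate, and threshold are hypothetical values for illustration only; the actual resonance band would be measured for the specific head proximity body.

```python
import numpy as np

SAMPLE_RATE = 44100              # Hz; assumed microphone sampling rate
RESONANCE_BAND = (180.0, 260.0)  # Hz; hypothetical band of the attachment

def band_energy_ratio(frame: np.ndarray) -> float:
    """Fraction of the frame's spectral energy inside the resonance band."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    in_band = (freqs >= RESONANCE_BAND[0]) & (freqs <= RESONANCE_BAND[1])
    return float(spectrum[in_band].sum() / (spectrum.sum() + 1e-12))

def is_user_action(frame: np.ndarray, threshold: float = 0.4) -> bool:
    """Treat a microphone frame dominated by the resonance band as a touch,
    hit, or rub on the head proximity body rather than external sound."""
    return band_energy_ratio(frame) > threshold
```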
  • The operation unit 65 is a user interface for the user to make inputs based on his or her own intention.
  • The operation unit 65 consists of buttons or of a touch screen that doubles as the display panel 62, and the user makes inputs via the touch screen.
  • The input information is transmitted to the control application 20 via the peripheral I/F 57.
  • When the mobile terminal 2 is accommodated in the head proximity body 4, the touch screen is also placed inside it. Therefore, as described later, a small opening through which about one finger of the user can pass may be provided in the head proximity body 4 so that the touch screen can be touched through the opening.
  • If the opening is enlarged, the area the finger can reach increases, but external light enters easily and the sense of immersion suffers. A small opening is therefore provided: although the touchable area is narrow, operation targets such as icons and buttons displayed on the touch screen can still be touched by the method described later.
  • The recording unit 69 is storage for recording images, videos, and other various data. Writing and reading of data in the recording unit 69 are executed under the control of the control application 20.
  • The control application 20 is application software for controlling the entire head proximity type video display system 1.
  • The control application software itself is recorded in the recording unit 69, a memory (not shown), or the like.
  • FIG. 8 shows a step configuration of the control application 20 in the mobile terminal 2.
  • The control application 20 includes a video accumulation step S22, a live video acquisition step S23, a video cutout direction update step S25, an audio data acquisition step S35, and a motion detection step S38, each connected to the reproduction control step S28.
  • The video accumulation step S22 follows the substitute video acquisition step S21,
  • and the video cutout direction update step S25 follows the head direction specifying step S24.
  • The substitute video acquisition step S21 acquires the video captured by the panoramic video camera 31 as a substitute video.
  • Alternatively, a substitute video processed from various information and data acquired via the communication network 5 is acquired.
  • Information input via flash memory or a recording medium may also be acquired as a substitute video, or the substitute video may be based on information input via a user interface (not shown) of the mobile terminal 2.
  • Video accumulation step S22 accumulates the alternative video acquired in alternative video acquisition step S21.
  • the storage of the substitute video in this video storage step S22 may be executed by the recording unit 69, for example.
  • In this video accumulation step S22, not only the substitute video from the substitute video acquisition step S21 but also various contents and information may be accumulated in advance and stored as substitute videos.
  • the substitute video recorded by the recording unit 69 in the video accumulation step S22 is read under the control of the reproduction control step S28.
  • the live video acquisition step S23 acquires the video signal captured by the imaging unit 44.
  • the head direction specifying step S24 acquires data relating to the movement of the user's head through the movement of the mobile terminal 2 detected by the motion sensor 59 in the mobile terminal 2. In this head direction specifying step S24, the direction of the user's head is actually specified from the acquired data relating to the movement of the user's head.
  • Video cutout direction update step S25 updates the cutout direction of the video to be displayed on the display panel 62 in the portable terminal 2 based on the direction of the user's head specified by the head direction specifying step S24.
  • the sound data acquisition step S35 acquires sound from the outside and accumulates it.
  • As an external audio acquisition method, for example, audio may be acquired from a public communication network by wire or wirelessly, or audio data recorded on a recording medium may be read and recorded.
  • The audio data acquisition step S35 is not limited to accumulating the acquired audio data for later readout and use;
  • audio data acquired from the outside may also be output directly in real time.
  • the sound data acquired in the sound data acquisition step S35 or the sound data acquired from the outside is output by the headphones 43.
  • the reproduction control step S28 performs processing for reproducing the substitute video stored in the recording unit 69 in the video storage step S22 and the live video acquired in the live video acquisition step S23.
  • In its reproduction operation, video reproduction is controlled using the information from the video cutout direction update step S25.
  • the playback video controlled in the playback control step S28 is displayed on the display panel 62 described above.
  • In the motion detection step S38, the movement of the mobile terminal 2 is detected by the motion sensor 59 of the mobile terminal 2.
  • Next, the detailed operation of this motion detection step S38 will be described.
  • This motion detection step S38 detects the proximity movement of the head proximity body 4 as shown in FIGS. 9 and 10.
  • First, the user holds the head proximity body 4 with the mobile terminal 2 accommodated in the accommodating portion 114.
  • At this stage, the operation of the control application 20 has been started via the operation unit 65, but display of video on the display panel 62 has not yet begun.
  • Next, the head proximity body 4 is moved to the front of the user's head.
  • At this time, the motion sensor 59 detects the movement of the head proximity body 4 through the mobile terminal 2 accommodated in it.
  • The user then tries to bring the head proximity body 4 close, but the distance between the side plates 102a and 102b of the head frame 102 is approximately equal to or less than the typical width of the user's head. Therefore, as shown in FIG. 9(a), the user opens the side plates 102a and 102b to match the size of his or her head.
  • At this time, the head proximity body 4 decelerates compared with the speed at which it was moved to the front of the user's head, as shown in FIG. 9(b).
  • The motion sensor 59 detects this deceleration of the head proximity body 4 through the mobile terminal 2 accommodated in it.
  • As the head proximity body 4 is brought closer, its orientation also changes. For example, even if the imaging unit 44 of the mobile terminal 2 faces the ground while the user is gripping the head proximity body 4, by the stage of moving the head proximity body 4 toward the front of the user's head,
  • the imaging unit 44 rises somewhat toward the horizontal direction.
  • In FIGS. 9(a), 9(b), and 10(a) described later, the user's actions are drawn roughly, as if the orientation of the head proximity body 4 did not change; in practice it does.
  • Finally, as shown in FIG. 10(b), the user brings the head frame 102 close to the head and shields the sides of the field of view.
  • If the side plates 102a and 102b are connected to each other via rubber or a string, the head proximity body may be worn firmly by placing them over the user's head.
  • Wearing as referred to in the present invention includes not only firmly mounting the body on the user's head with such rubber or string, but also holding it with the user's hand as shown in FIG. 10(b).
  • FIG. 11 shows a state in which the movement of the mobile terminal 2 accommodated in the head proximity body 4 in this proximity process is viewed from the side of the user.
  • Let E be the initial gripping state of the head proximity body 4 as shown in FIG. 9(a); let F be the state in which the side plates 102a and 102b are opened as shown in FIGS. 9(b) and 10(a);
  • and let G be the state of proximity to the user's head.
  • The moving speed from E to F is high, while that from F to G is low.
  • Since the operation of opening the side plates 102a and 102b is required on reaching F, the moving speed of the mobile terminal 2 is always low in the vicinity of F.
  • Also, the display panel 62 faces almost upward in state E, whereas in states F and G its orientation is almost horizontal.
  • Such changes in the orientation and speed of the mobile terminal 2 are movements specific to the case where the mobile terminal 2 is mounted in the head proximity body 4 and used as the head proximity type video display system 1.
  • In the motion detection step S38, the motion sensor 59 thus detects, at the start of viewing, the movement of bringing the head proximity body 4 to the front of the head and the deceleration accompanying the opening and closing of the protruding pieces 101 in front of the head.
  • When these are detected, it is determined that the user wishes to start video display via the display panel 62 and that the head proximity body 4 is in proximity, and video display on the display panel 62 is actually started. This makes it possible to smoothly link the start of the SR video experience to the user's action.
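  • The following Python sketch illustrates this start-detection logic under stated assumptions: the speed and panel-tilt values are taken from the motion sensor, and the thresholds are hypothetical values that would be tuned on the actual device.

```python
import math

FAST_SPEED = 0.5   # m/s; hypothetical speed of the approach phase (E to F)
SLOW_SPEED = 0.15  # m/s; hypothetical deceleration threshold near F
NEAR_HORIZONTAL = math.radians(20)  # panel tilt treated as horizontal

class ProximityStartDetector:
    """Starts playback on the motion signature of FIG. 11: a fast approach,
    then deceleration while the display panel turns roughly horizontal."""

    def __init__(self):
        self.saw_fast_approach = False

    def update(self, speed: float, panel_tilt: float) -> bool:
        """speed: movement speed; panel_tilt: angle of the display panel
        from horizontal (0 when facing the user). True means start video."""
        if speed > FAST_SPEED:
            self.saw_fast_approach = True       # phase E -> F
            return False
        if (self.saw_fast_approach and speed < SLOW_SPEED
                and abs(panel_tilt) < NEAR_HORIZONTAL):
            self.saw_fast_approach = False      # phase F -> G: start display
            return True
        return False
```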
  • In particular, the mobile terminal 2 is used while accommodated in the head proximity body 4, so it would be very cumbersome for the user to operate a mobile terminal 2 covered by the head proximity body 4; in the present invention, however, video display starts automatically with only the natural proximity action, so operability is excellent.
  • When such a movement is not detected, the control application 20 does not start video display on the display panel 62.
  • The movement of the mobile terminal 2 shown in FIG. 11 hardly ever occurs except when the present invention is in use, so during normal use of the mobile terminal 2 the motion sensor 59 rarely detects it. This prevents the control application 20 from erroneously starting unintended video display on the display panel 62 during normal use of the mobile terminal 2.
  • The stopping operation is not limited to the method described above; the user may take the mobile terminal 2 out of the head proximity body 4 and instruct it to stop viewing through a normal operation on the operation unit 65. Further, in the present embodiment, a certain time elapses between removal of the head proximity body 4 from the user's head and the stop of video playback. For this reason, when the user is viewing video content such as a movie as the substitute video, the mobile terminal 2 measures the elapsed time from the start of removing the head proximity body 4 from the head (state G in FIG. 11) to the point when playback is stopped (state E in FIG. 11).
  • When resuming, it is desirable for the mobile terminal 2 to rewind the video content by the measured elapsed time, or by that time plus a certain grace period, before resuming playback. The user can thereby view the video content without missing any of it.
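  • A minimal sketch of this rewind-on-resume behavior follows; the player object and its seek()/position() methods are hypothetical stand-ins for the mobile terminal's actual playback interface.

```python
import time

class ResumablePlayback:
    """Rewinds by the time spent removing the attachment (plus a grace
    period) so the viewer misses none of the content."""

    GRACE_SECONDS = 2.0  # hypothetical extra margin

    def __init__(self, player):
        self.player = player            # assumed to expose seek()/position()
        self.removal_started_at = None

    def on_removal_detected(self):      # state G in FIG. 11
        self.removal_started_at = time.monotonic()

    def on_playback_stopped(self):      # state E in FIG. 11
        if self.removal_started_at is None:
            return
        elapsed = time.monotonic() - self.removal_started_at
        self.player.seek(max(0.0, self.player.position()
                             - (elapsed + self.GRACE_SECONDS)))
        self.removal_started_at = None
```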
  • Note that the present invention is not limited to implementation in the form described above.
  • The mobile terminal 2 displays either the live video or the substitute video on the display panel 62, or displays the live video and the substitute video in combination with each other.
  • the imaging unit 44 performs imaging.
  • The live video captured by the imaging unit 44 corresponds to the orientation and position of the mobile terminal 2 housed in the head proximity body 4. In practice, since the head proximity body 4 is close to the user, live video corresponding to the orientation and position of the user's head is captured through the imaging unit 44.
  • The subject image captured via the imaging unit 44 is formed on the imaging surface of the image sensor through the lens, and a video signal is generated by photoelectric conversion.
  • the mobile terminal 2 receives such a video signal in a live video acquisition step S23, and a process for playing it as a live video is performed in a playback control step S28.
  • the captured live video is transmitted to the display panel 62 via the mobile terminal 2.
  • the live video sent to the display panel 62 is emitted as light based on each RGB signal, which is an element for generating an image based on the video signal, and is scanned two-dimensionally. As a result, the live video imaged by the imaging unit 44 is projected onto the retina of the user's eye.
  • The imaging range of the imaging unit 44 follows the movement of the user's head. Since the user can view, through the display panel 62, live video that follows the movement of his or her head, the user can be made to feel as if viewing the real space from almost the same viewpoint as his or her own.
  • The mobile terminal 2 detects substitute video display opportunities constantly or at intervals.
  • Detection of a substitute video display opportunity means either that the user has expressed some intention concerning display of the substitute video, or that some trigger for displaying the substitute video has been captured regardless of the user's intention.
  • Detection of a substitute video display opportunity based on the user's intention may rely, for example, on the movement of the user's head detected by the motion sensor 59. The proximity movement of the head proximity body 4 described above may also be detected by the motion sensor 59 and a substitute video displayed accordingly; in that case, a substitute video is displayed from the beginning instead of live video. Furthermore, the opportunity may be detected when the user expresses some intention via the operation unit 65, or via voice input through the microphone 60. It may also be based on any information acquired from the user, such as brain waves, vibrations, movements, or position information. Alternatively, a substitute video display opportunity may be detected based on the environment surrounding the user, for example some smell, heat, or touch.
  • Detection of a substitute video display opportunity not based on the user's intention may be, for example, a forced transition to display of the substitute video at fixed intervals, or detection of a display opportunity based on information received from the communication network 5.
  • an event generated based on a predetermined program or algorithm incorporated in the mobile terminal 2 may be regarded as an alternative video display opportunity.
  • When such an opportunity is detected, the substitute video is displayed on the display panel 62.
  • a specific example of the alternative video display method will be described later.
  • Thereafter, the display may return to ordinary live video only.
  • As the trigger for this return, the various events described above as substitute video display opportunities may likewise be detected.
  • One or more substitute videos are acquired in the substitute video acquisition step S21.
  • In this substitute video acquisition step S21, when two or more substitute videos are acquired, the substitute videos are acquired in advance as a so-called multilayer substitute video and recorded in the recording unit 69. Then, under the control of the reproduction control step S28, one or more substitute videos are selected from the multilayer substitute videos and reproduced in parallel with, or superimposed on, the live video.
  • a live video and a substitute video may be displayed in combination.
  • processing such as removing the background or extracting a person from at least one of the videos may be performed.
  • This makes it possible to superimpose, on a past video, a target such as a person recognized as actually present or the user's own body, further improving the reality of the substitute video.
  • The image quality of the substitute video and the live video may be set to be the same or similar.
  • When superimposing, the opacity of the substitute video and the opacity of the live video are each adjusted in the reproduction control step S28.
  • The transparency of the live video may be controlled, or the opacity of both the substitute video and the live video may be controlled.
  • Thereby, the substitute video fades in or out with respect to the live video.
  • The opacity of the substitute video and the live video can be set for each pixel of the display panel 62, so even when the substitute video and the live video are superimposed, only part of the entire space can actually be superimposed.
  • By controlling the opacity of the substitute video and the live video in units of pixels, the range and shape of the environment space where the two are mixed can be changed freely. The per-pixel opacity may also be changed in time series, or changed dynamically based on user actions detected by the motion sensor 59 or the like. A sketch of such per-pixel composition follows.
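  • The following is a minimal sketch of per-pixel composition of the substitute video over the live video, assuming both are frames of equal size; the opacity map alpha plays the role of the per-pixel setting described above.

```python
import numpy as np

def compose(live: np.ndarray, substitute: np.ndarray,
            alpha: np.ndarray) -> np.ndarray:
    """Blend substitute over live, pixel by pixel.

    live, substitute: H x W x 3 uint8 frames.
    alpha: H x W float array in [0, 1]; the substitute video's opacity
    at each pixel (0 shows live video only, 1 substitute only).
    """
    a = alpha[..., None]  # broadcast the opacity over the color channels
    out = a * substitute.astype(np.float32) + (1.0 - a) * live.astype(np.float32)
    return out.astype(np.uint8)

def fade_alpha(shape, t: float) -> np.ndarray:
    """Uniform fade map: t runs from 0 to 1 over the transition,
    producing the fade-in/fade-out effect described above."""
    return np.full(shape, np.clip(t, 0.0, 1.0), dtype=np.float32)
```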
  • In addition to the video, a means for exerting at least one of smell, heat, vibration, touch, and sound on the user may be provided separately.
  • For example, when an emergency bulletin about a natural disaster is played as the substitute video, the user may additionally be alerted by vibration or sound.
  • The present invention is not limited to reproducing one substitute video superimposed on, or in parallel with, the live video. Since two or more substitute videos are accumulated in the video accumulation step S22, two or more substitute videos may be superimposed on the live video or reproduced in parallel with it.
  • That is, in the present invention, two or more substitute videos can be multilayered in advance, and one or more desired substitute videos can be selected and displayed in combination with the live video. Further, according to the present invention, the substitute video to be displayed can be switched sequentially to a desired one.
  • Selection or switching of a desired substitute video, or switching between the substitute video and the live video, may be executed based on the movement of the user's head, and hence of the head proximity body 4 close to it. In such a case, switching is performed based on the head movement detected by the motion sensor 59.
  • However, the present invention is not limited to this: the trigger may be some intention expressed by the user via the operation unit 65, or detected via voice input through the microphone 60. It may also be based on any information acquired from the user, such as brain waves, vibrations, or position information, or detected from the environment surrounding the user, for example some smell, heat, or touch.
  • one or more alternative videos are acquired in advance in the alternative video acquisition step S21 and recorded in the recording unit 69, and the alternative video is read from the recording unit 69 as necessary.
  • the present invention is not limited to this.
  • the newly required substitute image may be acquired from the communication network 5 each time.
  • When the substitute video is displayed, the audio data acquired in the audio data acquisition step S35 may be reproduced together with it.
  • In that case, the mobile terminal 2 may reproduce the audio data in conjunction with the substitute video being reproduced.
  • For example, when the video to be played is map information,
  • an announcement linked to it may be played back as audio data.
  • When the video to be played is content video related to a game,
  • sound effects or music linked to the content may be played back as audio data.
  • The system may also be operated wirelessly in conjunction with another electronic device different from the mobile terminal 2, or with another mobile terminal.
  • This also allows the SR video experience to be synchronized among multiple users.
  • Since the control application 20 can be controlled from the outside, sharing of the SR video experience can be realized by performing synchronization control from another electronic device or another mobile terminal.
  • The operation of the mobile terminal 2 during viewing is not limited to operation via the operation unit 65.
  • Instead of the operation unit 65, for example,
  • the motion sensor 59 may detect the number of taps on the protruding piece 101, so that the video can be freely played, stopped, fast-forwarded, and so on, as sketched below.
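  • A minimal sketch of such tap-count control follows; the spike threshold, grouping window, and command mapping are hypothetical, and each tap is assumed to arrive as a single accelerometer spike (debounced by the caller).

```python
import time

TAP_SPIKE = 1.8    # g; hypothetical magnitude treated as one tap
TAP_WINDOW = 0.6   # s; taps closer together than this form one group

class TapCommands:
    """Counts taps on the protruding piece from accelerometer spikes and
    maps the count within a group to a playback command."""

    COMMANDS = {1: "play_pause", 2: "fast_forward", 3: "rewind"}

    def __init__(self):
        self.tap_times = []

    def on_sample(self, magnitude, now=None):
        """Feed one accelerometer magnitude; returns a command or None."""
        now = time.monotonic() if now is None else now
        if magnitude > TAP_SPIKE:
            self.tap_times.append(now)
        # flush the group once the window since the first tap has elapsed
        if self.tap_times and now - self.tap_times[0] > TAP_WINDOW:
            count, self.tap_times = len(self.tap_times), []
            return self.COMMANDS.get(count)
        return None
```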
  • Any information acquired from the communication network 5 can also be used.
  • For example, the mobile terminal 2 acquires the user's position information, acquires from the communication network 5 a geographical display (map information) of buildings, roads, and the like corresponding to that position, and displays the map information in accordance with what is currently shown in the live video.
  • FIG. 12A shows a state in which a panoramic video camera 31 arranged at a position P is capturing an image in all directions.
  • the imaging device 321 in the panoramic video camera 31 performs imaging without omission over 360 ° in the horizontal direction. Although this imaging is simultaneously performed even in the vertical direction, the following example will be described by taking the imaging in the horizontal direction as an example.
  • The head movement detected by the motion sensor 59, and hence the line-of-sight direction, may be reflected in the display on the display panel 62.
  • Information relating to the orientation of the user's head and the line-of-sight direction detected via the motion sensor 59 is sent to the mobile terminal 2.
  • Information on the head direction by the motion sensor 59 specifies the actual head direction of the user in the head direction specifying step S24.
  • In other words, the field of view the user actually wants to capture is specified through the head direction specifying step S24. If it is specified that the user is facing the front, it suffices to cut out the video falling within the range indicated by the solid line in FIG. 12. When the head direction specifying step S24 determines that the user perceives the area indicated by the dotted line as the field of view, the range of video to be cut out is shifted in the arrow direction by the video cutout direction update step S25.
  • The reproduction control step S28 cuts the video in the range shifted in this way out of the video recorded in the recording unit 69. Since the past video is recorded in time series, it is desirable to cut it out along the time series.
  • As a result, the user can view the past video in accordance with the orientation of his or her head, and can be given the sense of viewing it live. Since the position P of the panoramic video camera 31 that captured the past video coincides with the position P of the user's head to which the mobile terminal 2 is brought close, such a feeling can be instilled. A minimal cutout sketch follows.
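  • The following sketch illustrates the cutout under the assumption that the past video is stored as an equirectangular panorama frame spanning 360° horizontally; the field of view is a hypothetical value.

```python
import numpy as np

def cut_out(panorama: np.ndarray, yaw_deg: float,
            fov_deg: float = 90.0) -> np.ndarray:
    """Cut the viewport for the current head direction out of one
    equirectangular panorama frame (H x W x 3, 360 degrees wide).

    yaw_deg: head direction from the head direction specifying step,
    0 meaning the user faces the front.
    """
    h, w, _ = panorama.shape
    center = int((yaw_deg % 360.0) / 360.0 * w)
    half = int(fov_deg / 360.0 * w) // 2
    cols = np.arange(center - half, center + half) % w  # wrap at 360°
    return panorama[:, cols, :]
```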
  • Only such past video may be displayed on the display panel 62 as the substitute video, or it may be displayed in combination with the live video.
  • In one usage, the user initially views the live video and is then switched to this past video without realizing it.
  • In this case, since the user has been viewing live video from the beginning, even after the switch to past video the user remains under the impression of viewing live video. In other words, by letting the user first view the live video, the switch to the past video becomes hard to notice.
  • If the sound recorded by the microphone (not shown) is also played through the headphones 43, the user can be made to feel even more strongly as if viewing live video.
  • The display of past video as the substitute video is not limited to the embodiment described above.
  • The position P of the panoramic video camera 31 that captured the past video and the position of the user's head brought close to the head proximity body 4 need not be the same, and may be different positions.
  • Moreover, the present invention is not limited to cutting out the past video by identifying both the head direction and the line-of-sight direction; the past video may be cut out by identifying either one of them.
  • the image quality of the past video and the live video may be set to be the same or similar.
  • For example, the mobile terminal 2 performs image quality adjustment using adjustment values obtained from the characteristic data of the image sensors that captured the live video and the past video.
  • Alternatively, the mobile terminal 2 may determine an image quality standard in advance and automatically adjust image quality so that the live video and the past video approximate each other.
  • As a result, the user feels no discomfort when switching from the live video to the past video, and can savor the past video as if viewing live video. A simple sketch of such matching follows.
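  • As one simple way to approximate the two image qualities, the past frame's per-channel statistics can be matched to the live frame's, as in the sketch below; actual adjustment from sensor characteristic data would be more elaborate.

```python
import numpy as np

def match_image_quality(past: np.ndarray, live: np.ndarray) -> np.ndarray:
    """Shift each color channel of the past frame so its mean and standard
    deviation match those of the live frame (both H x W x 3 uint8)."""
    past_f = past.astype(np.float32)
    live_f = live.astype(np.float32)
    out = np.empty_like(past_f)
    for c in range(3):  # adjust R, G, B independently
        p_mean, p_std = past_f[..., c].mean(), past_f[..., c].std() + 1e-6
        l_mean, l_std = live_f[..., c].mean(), live_f[..., c].std()
        out[..., c] = (past_f[..., c] - p_mean) * (l_std / p_std) + l_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```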
  • Furthermore, targets shown in the past video and targets shown in the live video may be mixed, making it even harder to distinguish the past video from the current video.
  • Game content video: As the substitute video, for example, content video related to a game may be applied. In recent years, games using head-mounted displays have been produced; such content is applied as substitute video, and actions from the user as a player (based on input from the motion sensor 59, the operation unit 65, and the like) are reflected in the game. Since substitute videos are layered in multiple layers, the video relating to the game content is also layered in multiple layers, and these are read and displayed sequentially according to the scene. At this time, layered game content videos are superimposed on one another, or faded out and in to one another, so that transitions between substitute videos can be presented without any sense of incongruity to the user.
  • Video content such as movies: For example, video content such as a movie may be reproduced as the substitute video.
  • In that case, normal movie content may be played back as a first substitute video,
  • and accompanying information about the movie content may be played back as a second substitute video.
  • As this accompanying information, for example, viewer comments about the movie content may be acquired from the communication network and scrolled, or the cast, the synopsis so far, character relationships, and the like may be displayed.
• Investment information (stocks, exchange rates, bonds, futures). For example, a 5-minute chart for a certain issue may be displayed in the first alternative video and a daily chart of the same issue in the second alternative video; alternatively, daily and monthly charts may be displayed side by side in the first alternative video while exchange-rate charts, news acquired in real time, or quotations are placed in the second alternative video. Such investment information is acquired from the communication network 5 in real time and used as alternative information spread over a plurality of layers. The user designates the investment information to check from among the layered alternative information, and the corresponding layers are read out and displayed on the display panel 62.
• Display by application. Various applications used on portable information terminals (mobile phones, smartphones), tablet terminals, PCs, and the like may be reproduced as alternative videos. In such a case, these applications are acquired from the communication network 5 and stored in the recording unit 69; when needed, an application is read from the recording unit 69 and reproduced as a substitute video. For example, an e-mail screen may be displayed as an alternative video, with the mobile terminal 2 reproducing mail received via the communication network 5, or information related to it, as the alternative video.
• Television broadcast. A television broadcast may also be applied as an alternative video, allowing the user to view the broadcast in combination with the live video.
• In the above, the present invention has been applied to providing alternative video, but the present invention also relates to a display system that provides an inexpensive and simple alternative to a head-mounted display and the like; its minimum configuration is realized by housing the mobile terminal 2, which has a screen, in an attachment serving as the head proximity body 4.
• In this configuration, the mobile terminal 2 plays a video and the user views it. While viewing, there are situations in which the user wants to perform operations such as playback, pause, fast-forward, and rewind.
• When the mobile terminal 2 is a smartphone having a touch screen and is not accommodated in the attachment, such operations can be performed by touching an icon, a button, or the like displayed on the touch screen.
• When the mobile terminal 2 is accommodated, however, the touch screen is also enclosed inside the attachment. Therefore, in this embodiment, a small opening that allows a single finger to pass through is provided in the bottom surface of the terminal accommodating unit 42 so that the user can touch the touch screen. The opening is kept small to prevent external light from entering the attachment as much as possible.
• The user can thus touch the touch screen through the opening in the attachment, but because the opening is small, the entire surface cannot be reached. For example, if the opening is placed on the right of the bottom surface for a right-handed user, the user can touch the right side of the touch screen but not the left side, because the finger does not reach. That is, the touchable area is limited to a part of the touch screen.
• To address this, the inclination of the attachment housing the mobile terminal 2 is detected by the motion sensor 59 of the mobile terminal 2, and the positions of the operation targets displayed on the touch screen are changed according to this inclination. For example, the operation targets are arranged in rows and columns, and the rows or columns are cyclically advanced in the direction of the detected inclination so that their path of travel passes through the touchable area. In this way, a desired operation target can be moved into the touchable area by tilting the attachment housing the mobile terminal 2. After the desired operation target has moved into the touchable area, the user touches it on the touch screen with a finger inserted through the opening in the bottom of the attachment, and the processing associated with that operation target is executed. With each tilt-and-return motion, the rows and columns may be rotated one step at a time in the direction of the tilt; that is, a diagonal scroll may be performed. Alternatively, a physical simulation may be used in which a viscous liquid and objects are placed in a container and the container is tilted: the detected inclination is applied as the inclination of the container, and each operation target is associated with an object. The user adjusts the inclination of the attachment housing the mobile terminal 2 to bring the desired operation target close to the finger inserted through the opening, and then touches it (a sketch of the cyclic-advance scheme follows this item).
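The cyclic-advance behavior is described only abstractly; the sketch below is one hedged way to realize it, rotating a grid of operation targets one step per tilt event so that every target eventually passes through the touchable area. The tilt-event interface and the assumption that the rightmost column lies under the finger opening are illustrative, not taken from the patent.

```python
from collections import deque

class OperationGrid:
    """Grid of operation-target labels; each tilt event reported by the
    motion sensor rotates the grid one step so that every target cycles
    through the touchable area (assumed to be the rightmost column)."""

    def __init__(self, labels):
        # labels: list of rows, each a list of operation-target names.
        self.grid = deque(deque(row) for row in labels)

    def on_tilt(self, direction: str) -> None:
        if direction == "right":
            for row in self.grid:
                row.rotate(1)        # columns shift right, wrapping around
        elif direction == "left":
            for row in self.grid:
                row.rotate(-1)
        elif direction == "down":
            self.grid.rotate(1)      # rows shift down, wrapping around
        elif direction == "up":
            self.grid.rotate(-1)

    def touchable_targets(self):
        return [row[-1] for row in self.grid]

grid = OperationGrid([["play", "pause"], ["ffwd", "rew"]])
grid.on_tilt("left")
print(grid.touchable_targets())  # targets now reachable through the opening
```

A diagonal scroll, as mentioned above, would simply combine a row rotation and a column rotation per tilt-and-return motion.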
• As described above, the video displayed on the mobile terminal 2 is controlled by opening, closing, or tapping the protruding piece 101 of the attachment, by touching the touch screen of the mobile terminal 2 through the opening in the attachment, and the like.
• The attachment has a box-like shape. For this reason, when the user performs an action involving contact with the attachment, such as hitting its surface, rubbing it, denting it slightly and letting it spring back, or opening, closing, or tapping the protruding piece 101, a sound is produced by the action and resonates because of the box shape. The resonated sound can be collected by the microphone 60 of the mobile terminal 2.
• Such actions also change the position and orientation of the mobile terminal 2 accommodated in the attachment, and this movement can be detected by the motion sensor 59.
• Accordingly, the video displayed by the mobile terminal 2 accommodated in the attachment is controlled by one or both of the sound and the movement generated by the user's actions described above.
• The sound generated by an action involving contact with the attachment is collected by the microphone 60 after being resonated by the attachment. It can therefore easily be separated from external sound by a technique such as applying a bandpass filter that passes only the frequency band resonated by the attachment. The mobile terminal 2 compares the sound and movement detected during operation with templates prepared in advance, selects a template whose similarity is sufficiently high according to a predetermined comparison criterion, and determines that the user performed the action associated with that template (a sketch follows this item).
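The patent names the general techniques (a bandpass filter around the band in which the box resonates, followed by template comparison) without fixing parameters. The sketch below combines them using SciPy; the sample rate, resonant band, threshold, and the use of normalized correlation as the "comparison criterion" are all assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44_100                      # microphone sample rate in Hz (assumed)
RESONANT_BAND = (180.0, 450.0)   # band resonated by the box in Hz (assumed)

def isolate_attachment_sound(samples: np.ndarray) -> np.ndarray:
    """Bandpass-filter the microphone signal so that mainly the frequency
    band resonated by the attachment remains, separating it from external
    sound."""
    sos = butter(4, RESONANT_BAND, btype="bandpass", fs=FS, output="sos")
    return sosfilt(sos, samples)

def classify_action(samples: np.ndarray, templates, threshold=0.6):
    """Compare the filtered signal's envelope with prepared templates and
    return the name of the best-matching action, or None when nothing is
    similar enough. templates maps action names to 1-D envelope arrays."""
    sig = np.abs(isolate_attachment_sound(samples))
    sig = sig / (np.linalg.norm(sig) + 1e-9)
    best, best_score = None, threshold
    for action, tpl in templates.items():
        n = min(len(sig), len(tpl))
        tpl_n = tpl[:n] / (np.linalg.norm(tpl[:n]) + 1e-9)
        score = float(np.dot(sig[:n], tpl_n))
        if score > best_score:
            best, best_score = action, score
    return best
```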
• For example, after a menu is displayed, the cursor pointing to a menu item can be moved by rubbing the right or left side of the attachment, and the item pointed to by the cursor can be selected by a single tap on the right side, and so on.
• The association between the type and number of actions and the resulting processing can be changed arbitrarily.
• In this way, by identifying actions in which the user touches the attachment, the mobile terminal 2 can be controlled without impairing the user's sense of immersion.
• The attachment according to the present embodiment is configured from the box in which the mobile terminal 2 is packaged when it is transported or sold.
• Packaging is the technique and state of applying appropriate materials, containers, and the like to an article so as to protect its value and condition during transportation, storage, and so on, and is classified into individual packaging, inner packaging, and outer packaging. Packaging is also classified into industrial packaging intended for transportation, shipping, and delivery, and commercial packaging intended for sale. Packaging in the present application is a concept that includes all of the above.
• FIGS. 13(a), (b), (c), and (d) are explanatory diagrams of a procedure for constructing an attachment from the packaging box of a mobile terminal.
• As shown in FIG. 13, the box 501 for packaging the mobile terminal 2 includes a lid 502, a main body 503, and a tray 504.
• The lid 502 covers the main body 503, and the tray 504 is housed inside the main body 503 with the mobile terminal 2 placed on it (FIG. 13(a)).
• The lid 502 and the main body 503 can be made of paper such as cardboard, or of various resins, but are not limited to these and may be made of any material, such as ceramic or metal; the material can be selected arbitrarily. A pattern, color, decoration, or the like can also be applied to them according to their use and function.
• When the mobile terminal 2 is shipped, it is placed on the tray 504 so that the surface opposite its screen (the back surface) is in contact with the bottom surface of the tray 504.
• A protrusion called a claw is formed in the part of the tray 504 where the mobile terminal 2 is accommodated, and the mobile terminal 2 can easily be fixed by fitting it therein (the claw is shown in the figure).
• The tray 504 is made of a material such as transparent plastic, and a Fresnel lens 505 is integrally molded as a thin lens at the central portion of the region where the mobile terminal 2 is placed.
• The user can obtain the Fresnel lens 505 by taking the mobile terminal 2 and the tray 504 out of the main body 503 and then cutting the tray 504 along a perforation or cut line (FIG. 13(b)).
• The lid 502 is likewise provided with a perforation or cut line, and cutting along it yields an opening 506 through which the user looks (FIG. 13(b)).
• The Fresnel lens 505 can be fixed by, for example, making a cut at the edge of the opening 506 of the lid 502 and sandwiching it there, or it can be bonded using adhesive tape or the like.
• The user turns the tray 504 upside down to hold the mobile terminal 2 and inserts it into the main body 503 (FIG. 13(c)), thereby fixing the mobile terminal 2 to the main body 503. A member corresponding to the terminal accommodating unit 42 housing the mobile terminal 2 is thus completed.
• By placing the lid 502, to which the Fresnel lens 505 has been attached, onto the main body 503, the head proximity body 4 (attachment) containing the mobile terminal 2 is completed (FIG. 13(d)).
• The user can bring the screen of the mobile terminal 2 into focus by adjusting the position at which the lid 502 is placed on the main body 503.
• Furthermore, by cutting the main body 503 along perforations or cut lines prepared in advance, an opening for touching the touch screen of the mobile terminal 2 with a finger, or an opening through which a headphone cord exits from the mobile terminal 2, can be created.
• The Fresnel lens 505 need not be integrally molded with the tray 504; it may be attached separately, or attached to the lid 502 from the time of shipment. Further, in the above description the mobile terminal 2 is fixed by inserting the tray 504 upside down, but the mobile terminal 2 may instead be fixed without using the tray 504, with a notch or protrusion in the bottom or wall surface of the main body 503 acting as a stopper. In this aspect, the head proximity body 4 (attachment) can be created not only from the packing box (decorative box) used when the mobile terminal 2 is shipped, but also from various cardboard boxes used for mail-order sales.
• Alternatively, the mobile terminal 2 may be placed face down on the tray 504 so that its screen is in contact with the bottom surface of the tray 504 and fixed there; the tray 504 is then turned upside down inside the main body 503 so that the screen of the mobile terminal 2 is visible from outside through the bottom surface of the tray 504, and the lid 502 is placed on the main body 503 so that the screen can be viewed through the Fresnel lens 505 installed in the opening 506 of the lid 502.
• FIGS. 14(a), (b), and (c) are explanatory diagrams of another procedure for constructing an attachment from the packaging box of a mobile terminal. A description is given below with reference to these figures.
• The Fresnel lens 505 is formed in the bottom surface of the tray 504, as in the embodiment above (FIG. 14(a)).
• The mobile terminal 2 is taken out and fixed to the bottom of the main body 503 with a stopper (not shown) formed by a notch or the like (FIG. 14(b)). The main body thus forms the terminal accommodating unit 42.
• Next, the tray 504 is turned upside down and inserted into the main body 503 (FIG. 14(b)). In this way, the simplest eyepiece unit 41, and hence the simplest attachment, is obtained.
• To block outside light, an opaque material may be used for the sides of the tray 504, or a light-shielding material such as a sticker or tape may be attached to them.
• A flap of the box can be used as the protruding piece 101 of the accommodating unit.
• Alternatively, the tray 504 may be fixed to the lid 502 by, for example, providing the opening 506 in the lid 502 and inserting the claws of the tray 504 into grooves provided in the lid 502 (not shown), and the tray 504 thus fixed may be inserted together with the lid (FIG. 14(c)). The eyepiece unit 41 is obtained by turning the tray 504 upside down and fixing it to the lid 502 in this way.
• With this configuration, friction acts sufficiently, so the accommodating unit 42 and the eyepiece unit 41 are not easily displaced once the focus has been adjusted.
• FIGS. 15(a) and (b) are explanatory views of an attachment constructed from the packaging box of a mobile terminal. As shown in FIG. 15(a), once the tray 504 has been removed from the main body 503, the mobile terminal 2 is fixed to the bottom surface of the main body 503; when the tray 504 is then inserted into the main body 503 again in the same orientation, the screen of the mobile terminal 2 can be viewed through the Fresnel lens 505 in the bottom surface of the tray 504. As shown in FIG. 15(b), if the tray 504 is fixed by adhesion or the like to the lid 502 provided with the opening 506 and the lid is then placed on the main body 503, external light is effectively blocked while the screen of the mobile terminal 2 remains visible through the Fresnel lens 505 in the bottom surface of the tray 504.
• In the above, a single Fresnel lens 505 is used for the eyepiece, but two round convex lenses may instead be molded in the bottom surface of the tray 504. In that case, the lid 502 is provided with two round openings, or the portion of the tray 504 other than the convex lenses is formed from an opaque member or shielded with an opaque member such as a sticker or paper. In this way, too, the attachment 4 can be configured easily.
• As described above, the display system comprises: a mobile terminal having a screen; and an attachment that accommodates the mobile terminal so that the screen is visible to a user. A sensor of the mobile terminal detects an action made by the user via the attachment, and the video displayed on the screen is controlled according to the detected action.
• The mobile terminal may detect, with the sensor, sound emitted from the attachment and movement of the mobile terminal, identify whether the detected sound and movement are caused by an action including the user's contact with the attachment, friction against the attachment, or deformation of the attachment, and control the video displayed on the screen according to the identified action.
• Alternatively, the mobile terminal may detect sound emitted from the attachment with the sensor, identify whether the detected sound was emitted due to an action including the user's contact with the attachment, friction against the attachment, or deformation of the attachment, and control the video displayed on the screen according to the identified action.
• The mobile terminal can be configured to classify the frequency components of the detected sound according to whether they lie in the frequency band in which the attachment resonates, and thereby identify whether the sound was generated by such an action.
• The mobile terminal may also detect movement of the mobile terminal with the sensor, identify whether the detected movement is caused by an action including the user's contact with the attachment, friction against the attachment, or deformation of the attachment, and control the video displayed on the screen according to the identified action.
• The screen may be a touch screen, and the attachment may have an opening that enables the user to touch a partial area of the touch screen. Based on the detected movement, the mobile terminal moves the positions of the operation targets included in the video into and out of the touchable area, and when a touch on an operation target that has been moved into the touchable area is detected, the video displayed on the screen is controlled according to that touch.
• The mobile terminal may have a camera whose shooting direction is the direction in which the user views the screen, and the attachment may accommodate the mobile terminal so that the camera can shoot live video of the real space; according to the identified action, the video displayed on the screen can then be switched between the live video and non-live video.
• While non-live video is displayed on the screen, the mobile terminal can be configured to cut out, based on the movement detected by the sensor, a partial video from an alternative video such as a panoramic video or a 3D video, and to use the cut-out partial video as the non-live video (a sketch follows this item).
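As a hedged illustration of cutting a partial video out of a panorama according to the sensed movement, an equirectangular frame could be cropped around the current yaw and pitch. The projection, field of view, and function interface below are assumptions; a production system would use a proper perspective reprojection rather than a flat crop.

```python
import numpy as np

def cut_out_view(pano: np.ndarray, yaw_deg: float, pitch_deg: float,
                 fov_deg: float = 90.0) -> np.ndarray:
    """Crop a window from an equirectangular panorama frame (H x W x 3,
    covering 360 degrees horizontally and 180 vertically), centered on the
    head orientation reported by the motion sensor."""
    h, w, _ = pano.shape
    cx = int(((yaw_deg % 360.0) / 360.0) * w)
    cy = int(((pitch_deg + 90.0) / 180.0) * h)
    half_w = int(w * fov_deg / 360.0 / 2)
    half_h = int(h * fov_deg / 180.0 / 2)
    cols = np.arange(cx - half_w, cx + half_w) % w          # wrap horizontally
    rows = np.clip(np.arange(cy - half_h, cy + half_h), 0, h - 1)
    return pano[np.ix_(rows, cols)]
```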
• The attachment can be configured from a box that packages the mobile terminal by fixing the mobile terminal with a tray, storing the tray in a main body, and covering the main body with a lid. By excising a part of the bottom surface of the tray, the mobile terminal is fixed so that its screen is visible through the opening thus provided in the tray; the tray fixing the mobile terminal, now provided with the opening, is stored in the main body; and a part of the upper surface of the lid is excised so that, when the main body is covered with the lid, the screen of the mobile terminal can be viewed through the opening provided in the lid. The tray may be formed as a transparent body having a lens in a part of its bottom surface, and the lens cut out of the tray can be attached to the opening provided in the lid.
• Alternatively, in such a packaging box, the tray may be formed as a transparent body having a lens in a part of its bottom surface; the mobile terminal is fixed to the bottom surface of the main body so that its screen is visible, and the tray is housed in the main body so that the screen of the mobile terminal can be viewed through the lens.
• As a further alternative, in such a packaging box, the tray is formed as a transparent body having a lens in a part of its bottom surface; the lens is cut from the bottom surface of the tray, and the mobile terminal is fixed so that its screen is visible through the opening thus provided in the tray; the tray fixing the mobile terminal, provided with the opening, is stored in the main body; a part of the upper surface of the lid is excised and the lens excised from the tray is attached to the opening provided in the lid; and the main body is covered with the lid so that the screen of the mobile terminal can be viewed through the lens attached to the opening.
• The attachment of the present invention is an attachment that houses a mobile terminal having a screen so that the screen can be viewed by a user, and that transmits to the mobile terminal the sound or movement generated by an action in which the user makes contact with the attachment.
• This attachment can likewise be configured from a box that packages the mobile terminal by fixing the mobile terminal with a tray, storing the tray in a main body, and covering the main body with a lid, where the tray is formed as a transparent body having a lens in a part of its bottom surface, the mobile terminal is fixed to the bottom surface of the main body so that its screen is visible, and the tray is stored in the main body so that the screen of the mobile terminal can be viewed through the lens.
• The display method according to the present invention is a display method executed by a mobile terminal that has a screen and is accommodated in an attachment so that the screen can be viewed by a user, the method comprising: a detection step in which the mobile terminal detects, with a sensor of the mobile terminal, an action made by the user via the attachment; and a control step of controlling the video displayed on the screen according to the detected action.
• The program according to the present invention is a program for controlling a mobile terminal that has a screen and is accommodated in an attachment so that the screen is visible to a user. The program causes the mobile terminal to detect, with a sensor of the mobile terminal, an action made by the user via the attachment, and to control the video displayed on the screen according to the detected action.
• The program can be recorded on a computer-readable non-transitory information recording medium. The information recording medium can be distributed and sold independently of the computer that realizes the mobile terminal according to the present invention. The program can also be distributed and sold from a distribution server, via a transitory communication medium such as a computer communication network like the Internet, to a computer that realizes the mobile terminal according to the present invention.
• According to the present invention, it is possible to provide a display system, a display method, an attachment, and a program that can be realized easily and inexpensively as an alternative to a head-mounted display and the like.
• Reference signs: 1 Head proximity image display system; 2 Mobile terminal; 3 Recording module; 4 Head proximity body (attachment); 5 Communication network; 20 Control application; 22 Video storage unit; 31 Panoramic video camera; 41 Proximity unit; 42 Terminal accommodating unit; 43 Headphones; 44 Imaging unit; 58 Power switch; 59 Motion sensor; 60 Microphone; 62 Display panel; 65 Operation unit; 66 Display panel; 69 Recording unit; 101 Protruding piece; 102 Head frame; 102a, 102b Side plates; 103 Hinge mechanism; 110 Rear plate; 111 Lens; 112 Front plate; 113 Pressing piece; 114 Accommodating portion; 123, 124 Grooves; 311 Base; 312 Camera array; 320 Main body; 321 Imaging device; 330 Interface; 501 Box; 502 Lid; 503 Main body; 504 Tray; 505 Fresnel lens; 506 Opening; S21 Alternative video acquisition step; S22 Video accumulation step; S23 Live video acquisition step; S24 Head direction identification step; S25 Direction update step; S28 Playback control step; S35 Audio data acquisition step; S38 Detection step

Abstract

The present invention provides an inexpensive and simple display system suitable for a head proximity-type video display system capable of giving a user an impression as though the information amount of the actual visual information space were expanded. A portable terminal (2) is accommodated in an attachment comprising a proximity unit (41) and a terminal accommodation unit (42), and a display panel (62) is magnified by a lens (111) to allow the user to visually confirm (C) the display panel (62). The portable terminal (2) detects, using a sensor of the portable terminal (2), an action (D) of the user that involves touching the attachment, and controls video display on the display panel (62) in accordance with the detected action.

Description

Display system, attachment, display method, and program
The present invention relates to a display system, an attachment, a display method, and a program, and is particularly suitable for a head proximity type video display system that displays video for a user who has brought a head proximity type video display device close to the head, for a head proximity body used in such a system, and for a head proximity type video display program.
Virtual reality (VR) technology, which has long been studied extensively, creates virtual worlds and remote spaces using various videos and computer graphics (CG), and changes the video by computer according to human actions. This allows viewers to feel as if they were actually there. However, even when a certain level of presence is achieved, VR cannot make the experience feel like something actually happening before one's eyes. To solve this problem of VR technology, a new technology called the SR (Substitutional Reality) system has been developed.
The SR system is a technology designed to substitute a virtual alternative world for reality so that a subject experiences the virtual and the real without distinction. Realizing this SR technology requires skillful manipulation of a higher human cognitive function called metacognition: the conviction that an event occurring in reality is true, or the suspicion aroused when an inconsistent event occurs within reality. Until now, the SR system has not reached practical use because of various technical limitations associated with manipulating metacognition. In considering the practical application of the SR system, a major challenge is how to make viewers recognize virtual substitute events that are not actually occurring before their eyes, such as past video or video of a different place, as events actually occurring before their eyes.
Technologies tried in the past to realize such a substitutional reality state include "augmented reality", "telepresence technology", and "Google Glass (registered trademark)".
"Augmented reality" is a technology that superimposes visual images generated by CG (computer graphics) onto the actual live video seen from the user's viewpoint and displays the result via a display device such as a head-mounted display. Augmented reality can newly add information that does not actually exist in reality. As conventional techniques studied for augmented reality, the technologies disclosed in Patent Documents 1 to 3, for example, have been proposed. However, because the superimposed visual image is a single layer, augmented reality has the problem that the information of the real image displayed behind the superimposed CG is lost.
"Telepresence technology" displays, in real time, video captured by a camera installed at a remote location via a head-mounted display worn on the viewer's head. In addition, the imaging direction of the camera is remotely controlled in conjunction with the movement of the viewer's head. This makes it possible to provide a sense of presence as if one were facing remote members on the spot. However, telepresence technology aims to give the viewer imaging information from a camera placed at a remote location; it is not aimed at adding new information to the real space.
Recently, a glasses-type augmented-reality wearable computer called "Google Glass (registered trademark)" has been proposed. Google Glass is based on the concept of embedding various information in the real space viewed by the wearer through the glasses, displaying information hands-free and allowing the Internet to be used with natural-language voice commands. However, because its information display area is narrow, multiple pieces of information must be displayed exclusively; if they are superimposed on one another, the information hidden underneath cannot be seen by the viewer. Moreover, no effect of making the displayed video appear real is taken into account, so it does not go beyond the use of a small portable monitor.
Patent Document 1: Japanese Patent Laid-Open No. H10-51711; Patent Document 2: Japanese Patent Laid-Open No. 2000-82107; Patent Document 3: Japanese Patent Laid-Open No. 2006-48672
As described above, conventional augmented reality, telepresence technology, and Google Glass cannot make viewers recognize virtual substitute events that are not actually occurring before their eyes as events actually occurring there. That is, various metacognitive obstacles, such as the CG of the video depicting the virtual substitute event or the unnaturalness of the video scene, stand in the way of recognizing the virtual substitute event as a real event. In augmented reality, for example, the unnaturalness of the CG superimposed on the live video leads the viewer to judge the CG portion to be artificial. In telepresence, video of a remote place is projected on a screen in the first place, so even if the viewer views that video, it is not taken to be real.
No matter how elaborate the CG is in presenting a substitute video depicting a virtual substitute event, in most cases the viewer's awareness of watching a video that depicts a substitute event clearly different from reality cannot be wiped away.
In addition, the conventional techniques described above all require special devices such as head-mounted displays or glasses-type augmented-reality wearable computers, so the whole system becomes expensive and takes a long time to spread.
Accordingly, the present invention has been devised in view of the problems described above, and its object is to provide a display system, a display method, an attachment, and a program that can be realized inexpensively and simply as an alternative to a head-mounted display and the like. The present invention makes it possible for viewers to recognize events that are not actually occurring before their eyes as events actually occurring there; moreover, because it can be realized easily using already widespread mobile terminals such as smartphones, it is suitable for providing users, as an inexpensive system, with a head proximity type video display system, a head proximity body used in that system, and a head proximity type video display program.
To solve the problems described above, the present inventor invented a display system, a display method, an attachment, and a program suitable for application to a head proximity type video display system, a head proximity body used in that system, and a head proximity type video display program, implemented as an application program for already widespread mobile terminals such as smartphones, which make a viewer recognize events that are not actually occurring before the eyes as events actually occurring before the eyes.

Here, the display system of the present invention comprises: a mobile terminal having a screen; and an attachment that accommodates the mobile terminal so that the screen is visible to a user. A sensor of the mobile terminal detects an action made by the user via the attachment, and the video displayed on the screen is controlled according to the detected action.
The head proximity type video display system of the present invention comprises: a mobile terminal having recording means in which one or more alternative videos are recorded, display means for displaying at least the alternative videos recorded in the recording means, and a motion sensor that detects its own movement; and a head proximity body having accommodating means in which the mobile terminal is accommodated so that the display means is positioned in front of the user's eyes during video viewing. The display means starts or stops displaying the alternative video according to the motion detected by the motion sensor.
In the head proximity type video display system of the present invention, the mobile terminal further includes imaging means for capturing the real space as live video from substantially the same viewpoint as the user, and the display means displays or stops the alternative video recorded in the recording means or the live video captured by the imaging means, or switches between display of the alternative video and the live video, according to the motion detected by the motion sensor.
In the head proximity type video display system of the present invention, the head proximity body has a head frame that is brought close to the user's head during video viewing, and openable and closable protruding pieces projecting from both sides of the head frame at an interval narrower than the user's head, and the motion sensor detects movement corresponding to the user's opening and closing of the protruding pieces.
In the head proximity type video display system of the present invention, the motion sensor detects the movement of bringing the head proximity body up to just in front of the head at the start of video viewing, and the deceleration that accompanies the opening and closing of the protruding pieces in front of the head (a sketch follows this paragraph).
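The detection rule is not specified in the text; as a hedged sketch, the "raise to the head, then decelerate" pattern could be spotted from the accelerometer magnitude with two thresholds. All constants and the windowing scheme below are illustrative assumptions.

```python
import numpy as np

GRAVITY = 9.81
MOVE_THRESHOLD = 1.5   # m/s^2 above gravity: attachment being raised (assumed)
CALM_THRESHOLD = 0.3   # m/s^2: settling just in front of the head (assumed)

def detect_viewing_start(accel_history: np.ndarray) -> bool:
    """accel_history holds the most recent N x 3 accelerometer samples.
    Returns True when a burst of movement (raising the attachment) is
    followed by near-rest (the deceleration in front of the head), which
    can be used to start video display automatically."""
    mags = np.abs(np.linalg.norm(accel_history, axis=1) - GRAVITY)
    half = len(mags) // 2
    moved = mags[:half].max() > MOVE_THRESHOLD
    settled = mags[half:].mean() < CALM_THRESHOLD
    return bool(moved and settled)
```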
In the head proximity type video display system of the present invention, the head proximity body further includes a lens for magnifying the video displayed by the display means of the mobile terminal accommodated in the accommodating means.
The head proximity type video display system of the present invention further includes alternative video acquisition means for acquiring, as one alternative video, video captured in advance in time series in all directions substantially centered on the user's head, or virtual video such as computer graphics created in real time. When reproducing that alternative video, the mobile terminal further has video control means for cutting out the video to be displayed from the alternative video along the time series, based on the user's line of sight or head movement.
In the head proximity type video display system of the present invention, the alternative video acquisition means processes information acquired by accessing a public communication network as needed, or information stored in advance, into one alternative video.
The head proximity type video display system of the present invention further includes audio data acquisition means for acquiring audio data, and the video control means reproduces the audio data acquired by the audio data acquisition means in synchronization with the alternative video to be reproduced.
The head proximity body of the present invention is used in the head proximity type video display system described above and has accommodating means in which the mobile terminal is accommodated so that the display means is positioned in front of the user's eyes during video viewing.
The head proximity body of the present invention has a head frame that is brought close to the user's head during video viewing, and openable and closable protruding pieces projecting from both sides of the head frame at an interval narrower than the user's head.
The head proximity type video display program of the present invention causes a mobile terminal to display video for a user who has brought a head proximity body close to the head, and causes the mobile terminal to execute: an alternative video recording step of recording one or more alternative videos; a motion detection step of detecting the movement of the mobile terminal; and a display step of starting or stopping the display of the one or more alternative videos recorded in the alternative video recording step according to the motion detected in the motion detection step.
According to the present invention configured as described above, a display system, a display method, an attachment, and a program can be provided. According to the present invention, the visual information space can be expanded, so to speak, by using substitutional reality technology (technology for experiencing current and past video seamlessly as an extension of reality). In particular, since the present invention can easily be realized using already widespread mobile terminals such as smartphones, it can be provided to users as an inexpensive system, and rapid adoption can be expected.
In particular, according to the present invention configured as described above, by having the user use the mobile terminal accommodated in the head proximity body, the user can be given a sense of presence as if wearing a head-mounted display, and moreover the start of video display on the display panel can be controlled automatically through the approach movement of the head proximity body.
In particular, in the present invention, the mobile terminal is used while accommodated in the head proximity body. Operating a mobile terminal covered by the head proximity body would be very cumbersome for the user, but in the present invention the video can be started automatically by a natural approach movement alone, so operability is excellent. Furthermore, since the present invention operates via a mobile terminal that works wirelessly, the entire system can be made completely wireless.
This expansion of the visual information space allows the user to actively operate the information space, display alternative video, and access necessary information. With conventional systems, information could be accessed only through a screen on a single layer of the device, so adding new information required overwriting and thus deleting the original information, and accessibility was low. By contrast, according to the present invention, various information can be embedded, via alternative video, in the real space displayed as live video with almost no loss of the original information, and the mobile terminal can give the user the impression that the amount of information in the actual visual information space has been expanded.
FIG. 1 is an overall configuration diagram of a head proximity type video display system to which the present invention is applied.
FIG. 2 is a diagram for explaining the configuration of the panoramic video camera.
FIG. 3 shows the external shape of the head proximity body.
FIG. 4 is an exploded perspective view of the head proximity body.
FIG. 5 is a diagram for explaining how to use the head proximity type video display system to which the present invention is applied.
FIGS. 6(a) and (b) are diagrams for explaining the external shape of the mobile terminal.
FIG. 7 shows the block configuration of the mobile terminal.
FIG. 8 is a diagram for explaining the step configuration of the control application.
FIGS. 9(a) and (b) are diagrams for explaining an example of detection of the approach movement of the head proximity body by the motion sensor.
FIGS. 10(a) and (b) are diagrams for explaining another example of detection of the approach movement of the head proximity body by the motion sensor.
FIG. 11 is a diagram for explaining the details of motion detection of the mobile terminal by the motion sensor.
FIGS. 12(a) and (b) are diagrams for explaining an example in which video captured in the past is applied as alternative video.
FIGS. 13(a), (b), (c), and (d) are explanatory diagrams of a procedure for constructing an attachment from the packaging box of a mobile terminal.
FIGS. 14(a), (b), and (c) are explanatory diagrams of a procedure for constructing an attachment from the packaging box of a mobile terminal.
FIGS. 15(a) and (b) are explanatory diagrams of an attachment constructed from the packaging box of a mobile terminal.
Hereinafter, an embodiment of a head proximity type video display system in which the display system according to the present invention is applied to the provision of substitutional reality content will be described in detail with reference to the drawings.
FIG. 1 shows the overall configuration of a head proximity type video display system 1 to which the present invention is applied. This head proximity type video display system 1 is centered on a mobile terminal 2 and includes a recording module 3 connected to it and a head proximity body 4; it is further connected to a communication network 5 via the mobile terminal 2.
The mobile terminal 2 plays the role of a so-called central control device that controls the entire head proximity type video display system 1. The mobile terminal 2 is embodied as, for example, a smartphone, a mobile phone, or a tablet terminal, but is not limited to these; it is a concept that includes all portable electronic device terminals, such as portable game machines, portable music players, and wearable terminals.
The communication network 5 may be any wired or wireless communication network, for example the Internet, an intranet, an extranet, a LAN (Local Area Network), an ISDN (Integrated Services Digital Network), a VAN (Value Added Network), a CATV (Community Antenna Television) communication network, a virtual private network (VPN), a telephone line network, a mobile communication network, or a satellite communication network.
The recording module 3 is used to record, in advance, alternative video based on past events, separately from actual events, and includes a panoramic video camera 31.
As shown in FIG. 2, for example, the panoramic video camera 31 has a base 311 and a camera array 312 provided on the base 311. The height of the base 311 can be adjusted as necessary and is set as appropriate for recording the desired video. The camera array 312 includes a main body 320 and a plurality of imaging devices 321 mounted inside the main body 320. An imaging device 321 is mounted in each opening formed in the main body 320, and objects can be imaged through the openings. The angle of view and imaging direction of each of the plurality of imaging devices 321 are set so that they image mutually different directions. The imaging devices 321 may be mounted so that all directions around the main body 320 (360° horizontally and 360° vertically) can be imaged without gaps, as shown in FIG. 2, and they can image all directions simultaneously. Therefore, when imaging a person in a room, if the person moves, video is generated in which the moving person is captured over the entire surroundings centered on a single virtual viewpoint constructed inside the camera body. Incidentally, the recording module 3 is not limited to the configuration described above and may be replaced by any configuration capable of imaging all directions around the main body 320.
Each imaging device 321 uses a solid-state image sensor such as a CCD (Charge Coupled Device) to form the subject image incident through its lens on the imaging surface, generates a video signal by photoelectric conversion, and transmits it via the interface 330 to the mobile terminal 2 or to a recording computer.
A microphone (not shown) may also be mounted on the imaging device 321; this microphone collects surrounding sound, converts it into an audio signal, and transmits the converted audio signal via the interface 330 to the mobile terminal 2 or to the recording computer.
The head proximity body 4 is an attachment for the mobile terminal 2; by accommodating the mobile terminal 2 in this attachment, it is configured as a pseudo head-mounted display that the viewer can bring close to the head, and even wear, as shown in the perspective view of FIG. 3 and the assembly drawing of FIG. 4. Accommodating the mobile terminal 2 in the head proximity body 4 realizes a glasses-type or goggles-type display device. The head proximity body 4 is broadly divided into a proximity unit 41 and a terminal accommodating unit 42. Each component of the head proximity body 4 (attachment) can be given a pattern, color, decoration, or the like according to its use and function.
The proximity unit 41 is made of paper such as cardboard, resin, or the like, but is not limited to these and may be made of any material such as ceramic or metal. It includes a head frame 102 that is brought close to the user's head during video viewing, and protruding pieces 101a and 101b projecting from both sides of the head frame 102.
The head frame 102 is a tubular body with a rectangular cross section, into which the terminal accommodating unit 42 can be inserted. The proximity unit 41 is brought close to, and eventually worn on, the user's head in the direction C of the head frame 102. The head frame 102 has mutually facing side plates 102a and 102b; the protruding piece 101a is attached to the side plate 102a so as to protrude in the C direction, and the protruding piece 101b is attached to the side plate 102b so as to protrude in the C direction. The interval between the side plates 102a and 102b of the head frame 102 is approximately equal to or smaller than the width of a typical user's head.
Since the protruding pieces 101a and 101b are attached to the side plates 102a and 102b respectively, the interval between them is equal to or smaller than that of a typical user's head. The protruding piece 101a can be opened and closed in the D direction with respect to the side plate 102a via a hinge mechanism 103; similarly, the protruding piece 101b can be opened and closed in the D direction with respect to the side plate 102b via a hinge mechanism 103. The protruding pieces 101a and 101b may be made connectable to each other via rubber, string, or the like, in which case the unit can be worn firmly by placing them over the user's head. The user can freely adjust the angle D of the protruding pieces to minimize the influence of external light.
When the proximity unit 41 is made of cardboard, the hinge mechanism 103 may simply be a folded portion of the cardboard. When the proximity unit 41 is made of metal, resin, or the like, the hinge mechanism 103 may be configured as an openable hinge by providing through holes (not shown) vertically at the joint positions of the protruding pieces 101 and the side plates 102 and inserting a shaft through them. However, in the so-called normal state in which video is not being viewed, it is desirable that the protruding pieces 101 not be open in the D direction, but rather that the protruding pieces 101a and 101b be closer together than the width of the user's head.
The terminal accommodating unit 42 is likewise made of paper such as cardboard, or of resin, but is not limited to these materials and may be made of any material, such as ceramic or metal. It is sized to be insertable into the proximity unit 41 and includes a rear plate 110 provided on the direction-C side, a lens 111 formed in the rear plate 110, a front plate 112 provided opposite the rear plate 110 and the lens 111 (that is, on the side opposite to the direction C), and a pressing piece 113 provided between the front plate 112 and the rear plate 110, arranged closer to the front plate 112. An accommodating portion 114 for accommodating the mobile terminal 2 is formed between the front plate 112 and the pressing piece 113.
The lens 111 is a lens medium capable of refracting the visible light emitted from the display panel 62 of the mobile terminal 2. Through the lens 111, the image displayed on the display panel 62 of the mobile terminal 2 can be magnified and presented in the user's field of view.
As the lens 111, an aspheric lens such as a convex lens, a plano-convex lens, or a Fresnel lens can be adopted; a Fresnel lens is particularly suitable because it is lightweight, thin, and inexpensive.
In addition, when the mobile terminal 2 presents a stereogram using parallax or a stereoscopic image using random dots, viewing is easier for the user if the left and right fields of view are clearly separated. In this case, a two-eye lens can be formed from a single lens by opening two eyepiece apertures on the left and right, or by arranging two lenses side by side. When such a multi-eye lens is adopted, the corresponding number of lenses may be prepared, or an individual lens may be fitted into each eyepiece aperture. Furthermore, a wall that separates the left and right fields of view may be provided inside the proximity unit 41.
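As a rough, illustrative guide to the magnification this arrangement yields (this calculation is not part of the original description, and the focal length used is an assumed value), the standard thin-lens magnifier relation can be applied, with the display panel 62 placed near the focal plane of the lens 111:

```latex
% Angular magnification of a simple magnifier (thin-lens approximation),
% where D = 25\,\mathrm{cm} is the conventional near-point distance.
M \approx \frac{D}{f} = \frac{25\,\mathrm{cm}}{f},
\qquad \text{e.g. } f = 5\,\mathrm{cm} \;\Rightarrow\; M \approx 5.
```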
The accommodating portion 114 is formed by spacing the front plate 112 and the pressing piece 113 apart just enough to accommodate the mobile terminal 2. The mobile terminal 2 is accommodated in the accommodating portion 114 with its display panel 62 facing in the direction C in the figure and its imaging unit 44 facing in the direction opposite to C. The pressing pieces 113 project inward from the side plates 132a and 132b of the terminal accommodating unit 42 and are sized and shaped so as to support only the edges of the mobile terminal 2; as a result, the image displayed on the display panel 62 of the mobile terminal 2 is prevented from being blocked by the pressing pieces 113. The front plate 112 is provided with grooves 123 and 124 at two locations. The groove 123 is provided so that the user can easily grip the mobile terminal 2 when taking it out. The groove 124 is provided at a position corresponding to the imaging unit 44 when the mobile terminal 2 is accommodated, so that the shooting direction of the imaging unit 44 is left open by the groove 124 and is not blocked by the front plate 112.
As shown in FIG. 3, the head proximity body 4 becomes usable by accommodating the mobile terminal 2 in the accommodating portion 114 of the terminal accommodating unit 42 configured as described above and inserting the terminal accommodating unit 42 into the proximity unit 41. When the head proximity body 4 is actually brought close to the user, the projecting pieces 101a and 101b are opened as shown in FIG. 5, and the head frame 102 of the proximity unit 41 is brought close to the user's head. The user's field of view is thereby shielded by the projecting pieces 101 and the head frame 102, and the user can concentrate for a long time on the image of the display panel 62 of the mobile terminal 2, magnified through the lens 111. As a result, the user can enjoy a sense of immersion comparable to wearing a head-mounted display. In this state, the user may, for example, slide the terminal accommodating unit 42 inserted in the proximity unit 41 toward or away from himself or herself. The distance between the user's eyes and the lens 111 can thereby be changed, allowing the image displayed on the display panel 62 to be brought freely into focus.
FIG. 6(a) is a plan view of the mobile terminal 2, and FIG. 6(b) is a bottom view of it. As shown in FIG. 6, the mobile terminal 2 includes a display panel 62, headphones 43, an imaging unit 44, and a motion sensor 59.
The headphones 43 can be worn on the user's ears. The headphones 43 are not limited to a shape and size that completely enclose the user's ears, and may be of a small earphone type. The headphones 43 are not essential and may be omitted as necessary; when used, however, a type with a noise-canceling function is desirable.
FIG. 7 shows the block configuration of the mobile terminal 2. In addition to the motion sensor 59, display panel 62, headphones 43, and imaging unit 44 described above, the mobile terminal 2 further includes a microphone 60 and a recording unit 69. The mobile terminal 2 also includes a peripheral interface (I/F) 57 and a control application 20. A power switch 58 and an operation unit 65 are connected to the peripheral I/F 57, as are the motion sensor 59, the microphone 60, the display panel 62, the headphones 43, the imaging unit 44, and the recording unit 69.
The display panel 62 displays video captured by the imaging unit 44 or video transmitted from the control application 20. When a video signal is input, the display panel 62 generates the component signals (the three primary-color signals R, G, and B) for producing an image based on that video signal. Light based on the generated RGB signals is emitted, combined, and scanned two-dimensionally. The two-dimensionally scanned light is converted so that its center line converges on the user's pupil, and is projected onto the retina of the user's eye.
The headphones 43 receive the audio signal transmitted from the mobile terminal 2 and output sound based on that audio signal.
The peripheral I/F 57 is an interface that relays the transmission and reception of various information, conveying the information acquired from the motion sensor 59, the microphone 60, the operation unit 65, and so on to the control application 20.
The power switch 58 is, for example, a pressable button-type switch exposed on the exterior; when the user presses it, the processing operation of the mobile terminal 2 is started or ended.
The motion sensor 59 detects the movement of the mobile terminal 2. A gyro sensor, an acceleration sensor, a geomagnetic sensor, or the like is used as the motion sensor 59, which detects the angle, inclination, and speed of the mobile terminal 2, and hence of the head proximity body 4 in which it is accommodated. The motion sensor 59 may also detect the position of the mobile terminal 2 itself, or may be replaced with a motion capture system using a camera (not shown) installed in the experience environment. The data on the movement of the user's head acquired by the motion sensor 59 is transmitted to the control application 20 via the peripheral I/F 57.
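As one concrete illustration of how such sensor readings could be turned into a head-orientation estimate (this sketch is ours, not the patent's; the filter, its coefficient, and the axis conventions are assumptions), a complementary filter blends fast gyro integration with the drift-free but noisy gravity direction obtained from the accelerometer:

```python
import math

def complementary_filter(pitch, gyro_rate_dps, accel_xyz, dt, alpha=0.98):
    """One update step of a complementary filter estimating pitch (degrees).

    pitch         -- previous pitch estimate in degrees
    gyro_rate_dps -- angular rate about the pitch axis, degrees/second
    accel_xyz     -- (ax, ay, az) accelerometer reading in g
    dt            -- time step in seconds
    """
    ax, ay, az = accel_xyz
    # Gravity-based pitch: valid when the device is not accelerating hard.
    accel_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    # Blend fast gyro integration with the slow, drift-free accelerometer pitch.
    return alpha * (pitch + gyro_rate_dps * dt) + (1.0 - alpha) * accel_pitch
```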
The microphone 60 collects ambient sound and converts it into an audio signal. The microphone 60 can transmit the converted audio signal to the headphones 43 via the peripheral I/F 57, and some processing may be applied to the audio signal during this transmission. While the mobile terminal 2 is accommodated in the head proximity body 4, which serves as an attachment, the microphone 60 is also located inside it. Accordingly, sounds caused by user actions such as touching the head proximity body 4, tapping it, rubbing it, or deforming it by opening and closing the projecting pieces 101a and 101b resonate within the head proximity body 4 and are collected by the microphone 60. Therefore, by filtering in the resonance frequency band unique to the head proximity body 4, sounds from the outside world can be distinguished from the user's actions on the attachment.
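A minimal sketch of the kind of band-energy test such filtering could use follows; the band of 300 to 600 Hz and the threshold are invented placeholders, since the actual resonance of the head proximity body 4 would have to be measured:

```python
import numpy as np

def attachment_action_detected(samples, sample_rate,
                               band=(300.0, 600.0), threshold=0.5):
    """Return True if most of the signal energy lies in the attachment's
    resonance band -- a crude stand-in for the filtering described above.

    samples     -- 1-D array of microphone samples
    sample_rate -- sampling rate in Hz
    band        -- assumed resonance band of the cardboard body, in Hz
    threshold   -- fraction of total energy required inside the band
    """
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    in_band = spectrum[(freqs >= band[0]) & (freqs <= band[1])].sum()
    total = spectrum.sum() + 1e-12          # avoid division by zero
    return (in_band / total) >= threshold
```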
The operation unit 65 is a so-called user interface through which the user makes inputs based on his or her own intention. The operation unit 65 consists of buttons, or of a touch screen that also serves as the display panel 62, through which the user makes inputs. When any input is made on the operation unit 65, that information is transmitted to the control application 20 via the peripheral I/F 57. While the mobile terminal 2 is accommodated in the head proximity body 4, which serves as an attachment, the touch screen is also located inside it. As described later, the head proximity body 4 may therefore be provided with a small opening, just large enough for one of the user's fingers, through which the touch screen can be touched. Enlarging the opening widens the area the finger can reach, but also lets outside light in, degrading the sense of immersion. With a small opening, the touchable area is narrower, but operation targets such as icons and buttons displayed on the touch screen can still be touched by the measures described later.
The recording unit 69 is storage for recording images, video, and other various data. Writing data to and reading data from the recording unit 69 are executed under the control of the control application 20.
The control application 20 is application software for controlling the entire head proximity video display system 1. The software of the control application itself is recorded in the recording unit 69, a memory (not shown), or the like.
FIG. 8 shows the step configuration of the control application 20 in the mobile terminal 2. The mobile terminal 2 has a video accumulation step S22 leading to a playback control step S28, a live video acquisition step S23, a video cutout direction update step S25, an audio data acquisition step S35, and a motion detection step S38. The video accumulation step S22 follows a substitute video acquisition step S21, and the video cutout direction update step S25 follows a head direction specifying step S24.
The substitute video acquisition step S21 acquires video captured by the panoramic video camera 31 as substitute video. In this step, substitute video processed on the basis of various information and data acquired via the communication network 5 is also acquired. Furthermore, in this substitute video acquisition step S21, information input via flash memory or a recording medium may also be acquired as substitute video, or the substitute video may be based on information input via a user interface (not shown) of the mobile terminal 2.
The video accumulation step S22 accumulates the substitute video acquired in the substitute video acquisition step S21. The accumulation of substitute video in this step may be executed in, for example, the recording unit 69. In this video accumulation step S22, not only the substitute video from the substitute video acquisition step S21 but also various contents and information may be accumulated in advance from the outset and stored as substitute video. The substitute video recorded in the recording unit 69 in the video accumulation step S22 is read out under the control of the playback control step S28.
The live video acquisition step S23 acquires the video signal captured by the imaging unit 44.
The head direction specifying step S24 acquires data on the movement of the user's head through the movement of the mobile terminal 2 detected by the motion sensor 59. This step then specifies the actual direction of the user's head from the acquired data on the head's movement.
The video cutout direction update step S25 updates the cutout direction of the video to be displayed on the display panel 62 of the mobile terminal 2, based on the direction of the user's head specified in the head direction specifying step S24.
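As a minimal sketch of what this update could look like when the substitute video is stored as an equirectangular panorama (the storage format, function name, and field of view are assumptions, not details from the patent):

```python
import numpy as np

def cut_out_view(panorama, yaw_deg, fov_deg=90.0):
    """Cut a horizontal field of view out of an equirectangular panorama.

    panorama -- H x W x 3 array covering 360 degrees horizontally
    yaw_deg  -- head direction (0 = panorama center), positive = turn right
    fov_deg  -- horizontal field of view of the cutout
    """
    h, w = panorama.shape[:2]
    px_per_deg = w / 360.0
    center = (w // 2 + int(yaw_deg * px_per_deg)) % w
    half = int(fov_deg * px_per_deg / 2)
    cols = np.arange(center - half, center + half) % w  # wrap around 360 deg
    return panorama[:, cols]
```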
The audio data acquisition step S35 acquires audio from outside and accumulates it. Audio may be acquired from outside by wire or wirelessly from a public communication network, for example, or audio data recorded on a recording medium may be read out and recorded. Incidentally, this audio data acquisition step S35 is not limited to accumulating acquired audio data and later reading it out for use; audio data acquired from outside may also be output as sound in real time as it is. The audio data acquired in the audio data acquisition step S35, or acquired from outside, is output through the headphones 43.
The playback control step S28 performs processing for playing back the substitute video accumulated in the recording unit 69 in the video accumulation step S22 and the live video acquired in the live video acquisition step S23. In its playback operation, the playback control step S28 controls video playback using the information from the video cutout direction update step S25. The playback video controlled in this step is displayed on the display panel 62 described above.
The motion detection step S38 detects the movement of the mobile terminal 2 with the motion sensor 59 of the mobile terminal 2. The detailed operation of this motion detection step S38 is described below.
This motion detection step S38 detects the approach movement of the head proximity body 4 shown in FIGS. 9 and 10. First, as shown in FIG. 9(a), the user grips the head proximity body 4 with the mobile terminal 2 accommodated in the accommodating portion 114. At this stage, the control application 20 has been started via the operation unit 65, but the display of video on the display panel 62 has not yet begun.
Next, as shown in FIG. 9(b), the user moves the head proximity body 4 to just in front of his or her head. The motion sensor 59 detects this movement of the head proximity body 4 through the mobile terminal 2 accommodated in it. At this stage the user tries to bring the head proximity body 4 close, but the spacing between the side plates 102a and 102b of the head frame 102 is approximately equal to or less than the width of a typical user's head. The user therefore opens the side plates 102a and 102b to match the size of his or her own head, as shown in FIG. 10(a). As a result, as the user opens the side plates 102a and 102b, the head proximity body 4 decelerates compared with the speed at which it was moved to just in front of the head as shown in FIG. 9(b). The motion sensor 59 detects this deceleration of the head proximity body 4 through the mobile terminal 2 accommodated in it.
During this movement, the orientation of the head proximity body 4 generally changes as well. For example, even if the imaging unit 44 of the mobile terminal 2 faces the ground while the head proximity body 4 is being gripped, the imaging unit 44 may rise somewhat toward the horizontal by the time the head proximity body 4 has been moved to just in front of the user's head.
In FIGS. 9(a) and 9(b), and in FIG. 10(a) described later, however, the user's movements are depicted in an exaggerated manner, as if no change in the orientation of the head proximity body 4 occurred.
After opening the side plates 102a and 102b, the user brings the head frame 102 close to the head as shown in FIG. 10(b), so that the sides of the head are shielded by the side plates 102 formed on both sides. As a subsequent operation, if the side plates 102a and 102b are connected to each other via a rubber band, string, or the like, for example, these may be placed over the user's head so that the unit is worn securely. However, wearing as referred to in the present invention includes not only such secure wearing, with a rubber band or string placed over the user's head, but also use while the user holds the unit by hand as shown in FIG. 10(b).
The motion sensor 59 detects all such movements of the head proximity body 4 through the mobile terminal 2. FIG. 11 shows the movement of the mobile terminal 2 accommodated in the head proximity body 4 during this approach process, viewed from the user's side. Let E be the initial gripping state of the head proximity body 4 shown in FIG. 9(a), F the state in which the side plates 102a and 102b are being opened as shown in FIGS. 9(b) and 10(a), and G the state of proximity to the user's head. The movement from E to F is fast, whereas the movement from F to G is slow. Moreover, since the operation of opening the side plates 102a and 102b is essential when F is reached, the moving speed of the mobile terminal 2 necessarily becomes low in the vicinity of F. In addition, as for the inclination of the mobile terminal 2, in state E the display panel 62 often faces nearly upward, whereas in states F and G the display panel 62 faces nearly sideways.
Such an orientation and speed of the mobile terminal 2 are therefore movements peculiar to the case where it is mounted in the head proximity body 4 and used as the head proximity video display system 1. When the control application 20 detects, via the motion sensor 59, the movement of bringing the head proximity body 4 to just in front of the head and the deceleration accompanying the opening and closing of the projecting pieces 101 in front of the head, it determines that the user wishes to start video display via the display panel 62 and has brought the head proximity body 4 close, and it actually starts video display on the display panel 62. This makes it possible to link the start of the SR (substitutional reality) video experience smoothly with the user's actions.
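A minimal sketch of how this signature could be recognized from the motion trace is shown below; the speed and tilt thresholds are illustrative guesses, not values from the patent:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    speed: float        # magnitude of device velocity, m/s
    tilt_deg: float     # display orientation: ~0 = facing up, ~90 = sideways

def detect_wear_gesture(samples, fast=0.5, slow=0.15, sideways=60.0):
    """Scan a motion trace for the E -> F -> G signature described above:
    a fast move (E -> F), then a deceleration near F with the display
    tilted from face-up toward sideways. Thresholds are illustrative."""
    saw_fast_move = False
    for s in samples:
        if s.speed > fast and s.tilt_deg < sideways:
            saw_fast_move = True                      # E -> F: fast approach
        elif saw_fast_move and s.speed < slow and s.tilt_deg >= sideways:
            return True                               # F -> G: slow, sideways
    return False
```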
As described above, in the present invention the mobile terminal 2 is used while accommodated in the head proximity body 4. Operating the mobile terminal 2 while it is covered by the head proximity body 4 would be very cumbersome for the user, but in the present invention the video can be started automatically by the natural approach movement alone, so the operability is also excellent.
Moreover, according to the present invention, the system can be operated via the wirelessly operating mobile terminal 2, so the entire system can be made completely wireless.
On the other hand, when the motion sensor 59 does not detect the movement of the mobile terminal 2 described above, the control application 20 does not start video display via the display panel 62. The movement of the mobile terminal 2 shown in FIG. 11 hardly ever occurs except when the present invention is being used, so during ordinary use of the mobile terminal 2 the above-described movement is almost never detected by the motion sensor 59. This prevents the control application 20 from unintentionally and erroneously starting video display via the display panel 62 during ordinary use of the mobile terminal 2.
Incidentally, when the user ends video viewing, the ending operations are normally performed in the order of FIG. 10(b), FIG. 10(a), FIG. 9(b), and FIG. 9(a). In terms of FIG. 11, the movements from G to F and then from F to E are performed in sequence. In this case, of the movement speeds from G to F and from F to E, the latter is faster. By detecting such a speed change via the motion sensor 59, it is determined that the user wishes to stop video display via the display panel 62 and has moved the head proximity body 4 away, and video display on the display panel 62 is actually ended. This makes it possible to link the end of the SR video experience smoothly with the user's actions.
The stopping operation is not limited to the above method; the user may also take the mobile terminal 2 out of the head proximity body 4 and instruct it to stop video viewing through an ordinary operation on the operation unit 65.
In the present embodiment, a certain amount of time elapses between removing the head proximity body 4 from the user's head and the stopping of video playback. For this reason, when the user is viewing video content such as a movie as substitute video, the mobile terminal 2 measures the elapsed time from the moment the user begins to remove the head proximity body 4 from the head (stage G in FIG. 11) to the moment playback is stopped (stage E in FIG. 11). Then, when the head proximity body 4 is placed on the user's head again and display is resumed, it is desirable for the mobile terminal 2 to rewind the video content by the measured elapsed time, or by that time plus a certain grace period, before resuming playback. This allows the user to view the video content without interruption.
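A sketch of the resume-position computation this describes, assuming playback positions measured in seconds and a guessed grace period:

```python
def resume_position(stop_pos_s, removal_elapsed_s, grace_s=2.0):
    """Compute where to restart playback after the viewer is put back on.

    stop_pos_s        -- playback position (seconds) when playback stopped
    removal_elapsed_s -- measured time from stage G (removal begins)
                         to stage E (playback actually stopped)
    grace_s           -- extra rewind margin; the value here is a guess
    """
    return max(0.0, stop_pos_s - (removal_elapsed_s + grace_s))
```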
The present invention is not limited to being embodied as the head proximity video display system 1 described above; information may instead be given directly to the brain using brain-machine interface technology, and the invention may also be embodied as an application program for realizing this.
Next, the display operation of the display panel 62 by the mobile terminal 2 in the head proximity video display system 1 to which the present invention is applied will be described.
The mobile terminal 2 displays either live video or substitute video on the display panel 62, or displays live video and substitute video combined with each other on the display panel 62.
In live video display, the imaging unit 44 performs imaging. The live video captured by the imaging unit 44 depends on the direction and position of the mobile terminal 2 accommodated in the head proximity body 4. In practice, since the head proximity body 4 is held close to the user, live video corresponding to the orientation and position of the user's head is captured through the imaging unit 44. The subject image captured through the imaging unit 44 is formed on the imaging surface via the image sensor and lens, and a video signal is generated by photoelectric conversion.
The mobile terminal 2 receives such a video signal in the live video acquisition step S23, and processing for playing it back as live video is applied in the playback control step S28. The captured live video is sent to the display panel 62 via the mobile terminal 2.
The live video sent to the display panel 62 is emitted as light based on the RGB signals that are the components for producing an image from the video signal, and is scanned two-dimensionally. The live video captured by the imaging unit 44 is thereby projected onto the retina of the user's eye.
As described above, the imaging range of the imaging unit 44 follows the movement of the user's head. The user can therefore view, through the display panel 62, live video corresponding to the movement of his or her own head, which gives the user a sensation much like viewing the real space through the display panel 62 from substantially the same viewpoint.
In the process of continuously executing such live video display, the mobile terminal 2 detects opportunities to display substitute video, either constantly or at intervals. Detecting a substitute video display opportunity here means either that the user has expressed some intention regarding the display of substitute video, or that some occasion for displaying substitute video has been captured regardless of the user's intention.
The detection of a substitute video display opportunity based on the user's intention may, for example, be based on the movement of the user's head detected by the motion sensor 59. Alternatively, the approach movement of the head proximity body 4 described above may be detected by the motion sensor 59 and substitute video displayed accordingly; in that case, substitute video rather than live video is displayed from the outset. Furthermore, a substitute video display opportunity may be detected when the user expresses some intention via the operation unit 65, or via voice input through the microphone 60, or it may be based on any information acquired from the user, such as brain waves, vibration, movement, or position information. Alternatively, a substitute video display opportunity may be detected based on the surrounding environment, including the user, for example on some smell, heat, or tactile sensation.
When there is input via the operation unit 65 or voice input via the microphone 60 as described above, in addition to displaying the substitute video simultaneously with the input timing, a time lag may be deliberately introduced between the display timing and the input timing. This lets the user recognize the display of substitute video as something performed in accordance with his or her own senses, and the playback and display of the substitute video can be realized with less sense of incongruity.
As for detecting a substitute video display opportunity not based on the user's intention, for example, the display may be forcibly switched to substitute video at intervals, or a display opportunity may be detected based on information received from the communication network 5. An event generated based on a predetermined program or algorithm incorporated in the mobile terminal 2 may also be treated as a substitute video display opportunity.
When the mode shifts to displaying substitute video, the substitute video is displayed on the display panel 62. Specific examples of how substitute video is displayed are described later.
After such display of substitute video ends, the display may return to ordinary live-video-only display. The return to live video may be triggered by detecting any of the various events detected as substitute video display opportunities, as described above.
In the present invention, it is assumed that one or more substitute videos are acquired in the substitute video acquisition step S21. When two or more substitute videos are acquired in this step, substitute video spanning two or more layers, that is, a so-called multilayer of substitute videos, is acquired in advance and recorded in the recording unit 69. Then, under the control of the playback control step S28, one or more substitute videos are selected from the multilayer substitute videos and played back in parallel with, or superimposed on, the live video.
According to the present invention, live video and substitute video may also be displayed in combination. In such a case, when the live video and the substitute video are displayed superimposed on each other, at least one of the videos may be processed, for example by removing the background or extracting a person. This makes it possible to display objects that were recognized as actually existing, such as a person or the user's own body, even on past video, further improving the sense of reality of the substitute video.
For example, the image quality of the substitute video and the live video may be set to be identical or similar. The user then feels no discomfort even when live video and substitute video are superimposed and played back. These effects can be obtained not only when substitute video and live video are superimposed, but equally when they are switched between, or displayed in parallel with each other.
In generating a video in which substitute video and live video are superimposed, the opacity of the substitute video and the opacity of the live video are each adjusted in the playback control step S28. For example, only the transparency of the live video may be controlled, or the opacity of the substitute video as well as that of the live video may be controlled. This makes it possible to display substitute video while leaving the live video faintly visible. A configuration in which the substitute video fades in or out over the live video may also be adopted. In this way, even if a person, building, information, or the like that does not exist in the live image appears or disappears, the user does not feel a sense of incongruity.
The opacity of the substitute video and the live video can be set for each pixel of the display panel 62. Even when substitute video and live video are constantly superimposed, the actual superposition can be confined to only a part of the whole space. Also, by freely setting the opacity of the substitute video and the live video per pixel, the extent and shape of the environmental space in which the substitute and live videos are mixed can be changed freely. Furthermore, the per-pixel change in opacity may be varied over time and changed dynamically based on user actions detected by the motion sensor 59 or other means.
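A minimal sketch of such per-pixel compositing follows, assuming 8-bit RGB frames; the function names and the fade helper are ours, and replacing the uniform alpha map with any spatial mask confines the mixing to part of the space, as the text describes:

```python
import numpy as np

def blend_frames(live, substitute, alpha):
    """Per-pixel compositing of a substitute frame over a live frame.

    live, substitute -- H x W x 3 uint8 frames of equal size
    alpha            -- H x W float array in [0, 1]; 0 = live, 1 = substitute
    """
    a = alpha[..., None]  # broadcast the alpha map over the color channels
    out = (1.0 - a) * live.astype(np.float32) + a * substitute.astype(np.float32)
    return out.astype(np.uint8)

def fade_in_alpha(shape_hw, t_seconds, duration=1.5):
    """Spatially uniform alpha map that fades the substitute video in over
    `duration` seconds, clamped to [0, 1]."""
    level = min(max(t_seconds / duration, 0.0), 1.0)
    return np.full(shape_hw, level, dtype=np.float32)
```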
In the present invention, when displaying substitute video, a means for exerting at least one of smell, heat, vibration, tactile sensation, and sound on the user may be provided separately. For example, when an emergency bulletin about a natural disaster is shown as substitute video, vibration or sound may be used to notify and alert the user.
Furthermore, the present invention is not limited to playing back a single substitute video superimposed on, or in parallel with, the live video. Since two or more substitute videos are accumulated by the video accumulation step S22, two or more substitute videos may be superimposed on the live video or played back in parallel with it.
Thus, in the present invention, two or more substitute videos can be multilayered in advance, and one or more desired substitute videos among them can be selected and displayed in combination with the live video. According to the present invention, the desired substitute video to be displayed can also be switched sequentially.
The selection and switching of the desired substitute video, or the switching between substitute video and live video, may be executed based on the movement of the user's head, and hence on the movement of the head proximity body 4 held close to that head. In such a case, switching may be performed based on the head movement detected by the motion sensor 59. The invention is not limited to this: switching may occur when the user expresses some intention via the operation unit 65, or may be detected via voice input through the microphone 60, or may be based on any information acquired from the user, such as brain waves, vibration, or position information. Alternatively, the display opportunity may be detected based on the surrounding environment, including the user, for example on some smell, heat, or tactile sensation.
In the embodiment described above, the case was explained in which one or more substitute videos are acquired in advance in the substitute video acquisition step S21, recorded in the recording unit 69, and read out from the recording unit 69 as needed, but the invention is not limited to this. For example, if a new substitute video becomes necessary while live video is being displayed, the newly required substitute video may be acquired from the communication network 5 each time.
According to the present invention, the visual information space can be expanded, so to speak, by using substitutional reality technology (technology for experiencing present and past video seamlessly, as an extension of reality). Such expansion of the visual information space allows the user to operate the information space actively and to display substitute video to access necessary information.
Furthermore, according to the present invention, the audio data acquired in the audio data acquisition step S35 may be played back together with the video. In such a case, the mobile terminal 2 may play back this audio data in conjunction with the substitute video to be played. For example, if the video to be played is map information, an announcement linked to it may be played back as audio data in conjunction with it; if the video to be played is content video related to a game, sound effects or music linked to that content may be played back as audio data in conjunction with it.
According to the present invention, the system may also be operated in wireless coordination with another electronic device different from the mobile terminal 2, or with another mobile terminal. This also allows the SR video experience to be synchronized among multiple users. In practice, since the control application 20 can also be controlled from outside, sharing of the SR video experience can be realized by having another electronic device or another mobile terminal perform the control for synchronization.
The operation of the mobile terminal 2 during video viewing is not limited to operation via the operation unit 65. The motion sensor 59 may detect the user tapping the projecting piece 101, substituting for operation of the operation unit 65. In such a case, by having the motion sensor 59 detect the number of taps on the projecting piece 101, video can be played, stopped, fast-forwarded, and so on at will.
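A sketch of how tap counts from the motion sensor 59 could be mapped to playback commands follows; the spike threshold, debounce interval, burst window, and command mapping are all invented for illustration:

```python
TAP_COMMANDS = {1: "play", 2: "pause", 3: "fast_forward"}  # illustrative mapping

def tap_command(accel_magnitudes, dt, spike=2.5, window=1.0):
    """Count accelerometer spikes (taps on the projecting piece 101) within
    a `window`-second burst and map the count to a playback command.

    accel_magnitudes -- sequence of |acceleration| samples in g
    dt               -- sampling interval in seconds
    spike            -- threshold above resting gravity; a guessed value
    """
    taps, last_tap_t = 0, None
    for i, a in enumerate(accel_magnitudes):
        t = i * dt
        # Debounce: ignore samples within 0.15 s of the previous tap.
        if a > spike and (last_tap_t is None or t - last_tap_t > 0.15):
            if last_tap_t is not None and t - last_tap_t > window:
                taps = 0                       # too late: start a new burst
            taps += 1
            last_tap_t = t
    return TAP_COMMANDS.get(taps)              # None if no mapping applies
```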
(Substitute video)
Examples of substitute video are described below. However, substitute video is not limited to the following examples; any other content or data may be turned into substitute video.
As examples of substitute video, first, any information acquired from the communication network 5 can be used. In the example described above, the mobile terminal 2 acquires the user's position information and acquires from the communication network 5 a geographical display (map information) of buildings, roads, and the like corresponding to that position information. The map information is then displayed in accordance with what is shown in the live video.
(Past video)
Past video captured in the past may be applied as this substitute video. The past video is shot using the recording module 3. Specifically, the panoramic video camera 31 of the recording module 3 captures images in time series in all directions. The plan view of FIG. 12(a) shows a state in which the panoramic video camera 31 placed at a position P is shooting video in all directions. The imaging devices 321 of the panoramic video camera 31 capture images over the full 360° in the horizontal direction without omission. Images are captured simultaneously in the vertical direction as well, but in the following example the horizontal imaging is taken as the example for explanation.
When past video is captured in this way, images are captured sequentially in time series in each direction by the plurality of imaging devices 321. The captured images are then sent to the mobile terminal 2. Therefore, when the capture of the past video is finished, moving images captured in all horizontal directions around the position P have been accumulated in the mobile terminal 2 through the video accumulation step S22. At this time, sound may be recorded simultaneously with a microphone (not shown).
Next, consider playing back this past video as substitute video. As shown in FIG. 12(b), suppose that a user with the head proximity body 4 held close is present at the same position P where the past video was once shot. When displaying the past video on the display panel 62, the mobile terminal 2 reads out the past video accumulated in the recording unit 69 and displays it on the display panel 62 by the same process as described above.
At this time, the line-of-sight direction detected from the head movement detected by the motion sensor 59 may be reflected in the display on the display panel 62. Information on the orientation of the user's head and on the line-of-sight direction, detected via the motion sensor 59, is delivered to the mobile terminal 2. From this information on the head direction from the motion sensor 59, the actual head direction of the user is specified in the head direction specifying step S24.
Next, through the head direction specifying step S24, the view the user is actually trying to capture is specified. If it is specified that the user is facing the front, it suffices to cut out the video reflected in the view of the range indicated by the solid line in FIG. 12(b). If, however, the head direction specifying step S24 finds that the user is viewing the region indicated by the dotted line, the video cutout direction update step S25 shifts the range of the video to be cut out in the direction of the arrow. The playback control step S28 cuts the video of the range shifted in this way out of the video recorded in the recording unit 69. Since the past video is recorded in time series, it is desirable that this cutting out also follow that time series. By displaying the video cut out in the shifted range on the display panel 62 of the mobile terminal 2, the user can be given the real sensation of viewing the video with sight corresponding to the orientation of his or her own head. Because the position P of the panoramic video camera 31 that captured the past video and the position P of the head of the user holding the mobile terminal 2 close are the same, such a sensation can be instilled.
Only such substitute video as past video may be displayed on the display panel 62, or it may be displayed in combination with live video. The user initially views live video but is switched to the past video without noticing. Since the user has been viewing live video from the start, he or she remains under the impression of viewing live video even after the switch to past video. In other words, deliberately having the user view live video at the start makes the switch to past video hard to notice. If, after switching to the substitute video display, the sound recorded with a microphone (not shown) is also played through the headphones 43, the user can be given an even stronger sense of viewing live video.
The display of past video as substitute video is not limited to the embodiment described above. For example, it is not limited to the case where the position P of the panoramic video camera 31 that captured the past video and the position P of the head of the user with the head proximity body 4 held close are the same; the positions may differ. Nor is it limited to cutting out past video by identifying both the head direction and the line-of-sight direction; past video may be cut out by identifying either the head direction or the line-of-sight direction alone.
When displaying this past video as substitute video, the various refinements described below may also be incorporated.
For example, the image quality of the past video and the live video may be set to be identical or similar. In such a case, the mobile terminal 2 adjusts the image quality via adjustment values obtained from the characteristic data of the image sensors that captured the live video and the past video respectively. Alternatively, the mobile terminal 2 may define an image quality standard in advance and automatically adjust the image quality so that the quality of the live video and the past video approximates it. The user then feels no discomfort at the switch from live video to past video, and can experience the past video as if viewing live video. These effects can be obtained not only when past video and live video are switched and displayed, but equally when they are displayed in parallel with each other or superimposed on each other.
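One simple way such automatic adjustment could be approximated is channel-wise color-statistics matching; this sketch and its function name are ours, not details from the patent:

```python
import numpy as np

def match_image_quality(past, live):
    """Crudely bring a past frame's color statistics toward the live frame's,
    channel by channel, so the switch between the two is less noticeable.

    past, live -- H x W x 3 uint8 frames; returns an adjusted uint8 frame
    """
    past_f = past.astype(np.float32)
    live_f = live.astype(np.float32)
    out = np.empty_like(past_f)
    for c in range(3):
        p_mean, p_std = past_f[..., c].mean(), past_f[..., c].std() + 1e-6
        l_mean, l_std = live_f[..., c].mean(), live_f[..., c].std()
        # Shift and scale the past channel to the live channel's statistics.
        out[..., c] = (past_f[..., c] - p_mean) * (l_std / p_std) + l_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```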
In particular, when the substitute video is past video, objects shown in the past video and objects shown in the live video may be mixed together. This makes it even harder to distinguish the past video from the present one.
(Game content video)
As the substitute video, content video relating to a game, for example, may be used. Games using head-mounted displays have been produced in recent years; such a game can be assigned to the substitute video, with actions from the user as the player (based on input from the motion sensor 59, the operation unit 65, and so on) reflected in the game. Since the substitute video is organized in multiple layers, the video for the game content is likewise layered, and the layers are read out and displayed in sequence according to the scene. By superimposing the layered game content videos on one another, or fading them out and in relative to one another, transitions between substitute images can be presented to the user without any sense of incongruity.
(Video content such as movies)
Video content such as a movie may be reproduced as the substitute video. In that case, the ordinary movie content may be reproduced as a first substitute video, and accompanying information about that movie content as a second substitute video. As the accompanying information, viewers' comments about the movie may, for example, be obtained from the communication network and streamed, or the cast, the story so far, the relationships between characters, and the like may be displayed. A third substitute video may reproduce news, weather forecasts, or other material unrelated to the movie.
(Real-time content such as investment information)
Investment information (stocks, foreign exchange, bonds, futures) may be reproduced in real time as the substitute video. In that case, a 5-minute chart of a given issue may be shown in a first substitute video and a daily chart of the same issue in a second. Alternatively, the first substitute video may show daily and monthly charts side by side while the second carries an exchange-rate chart, news acquired in real time, or quotations. In any of these cases, the investment information is acquired in real time from the communication network 5 and turned into substitute information spanning multiple layers. By designating the investment information he or she wants to check among these layers, the user causes it to be read out and displayed on the display panel 62.
(Display by applications)
Various applications used on portable information terminals (mobile phones, smartphones), tablet terminals, PCs, and the like may be reproduced as the substitute video. In that case, the applications are acquired from the communication network 5 and stored in the recording unit 69. An application is read out of the recording unit 69 as needed and reproduced as the substitute video.
(Video captured elsewhere)
Video captured at a location different from the one where the user is actually viewing the live video or substitute video may be reproduced as the substitute video. In that case, video is captured in advance or in real time by a recording module 3 installed at the other location and transmitted to the mobile terminal 2. The substitute video acquisition step S21 acquires the transmitted video as the substitute video; if it was captured in advance, it is first stored in the recording unit 69. By removing the background of the live video and displaying it over this substitute video, it is possible to create the sensation of being in the remote location oneself.
(Accompanying information)
When watching a play in a theater, or a sporting event in a stadium, as the live video, accompanying information about it may be reproduced as the substitute video. For example, after acquiring the positional information of the mobile terminal 2, information about the event held in that time slot is obtained from the communication network 5 if the position is a stadium, or information about the play scheduled for that time slot if the position is a theater, and the information acquired from the communication network 5 is reproduced as the substitute video. Alternatively, videos taken by a plurality of panoramic cameras may serve as a plurality of substitute videos, switchable according to the user's intention.
(E-mail)
An e-mail screen may be displayed as the substitute video. The mobile terminal 2 reproduces, as the substitute video, mail received via the communication network 5 or information relating to it.
(Television broadcasting)
A television broadcast may be used as the substitute video. This allows the user to watch the television broadcast as substitute video in combination with the live video.
In the embodiments above, the present invention was applied to providing substitute video. The invention, however, concerns a display system that provides an inexpensive and simple alternative to a head-mounted display and the like, and its minimum configuration is realized by housing the mobile terminal 2, which has a screen, in an attachment, namely the head proximity body 4.
Suppose, for example, that the mobile terminal 2 plays back a video and the user watches it. While watching, there are situations in which the user wants to perform operations such as play, pause, fast-forward, and rewind.
When the mobile terminal 2 is a smartphone with a touch screen and is not housed in the attachment, such operations can be performed by touching icons, buttons, and the like displayed on the touch screen.
When the mobile terminal 2 is housed in the attachment, however, the touch screen is likewise enclosed inside the attachment. In this embodiment, therefore, a small opening, just large enough for one finger to pass through, is provided in the bottom surface of the terminal accommodating unit 42 so that the user can touch the touch screen. The opening is kept small to prevent, as far as possible, external light from entering the attachment.
The user can touch the touch screen through the opening in the attachment. Because the opening is small, the entire surface cannot be reached. If, for example, the opening is provided in the bottom surface on the right side as seen from a right-handed user, the user can touch the right side of the touch screen but not the left, which the finger cannot reach. The touchable area is thus limited to part of the touch screen.
In this embodiment, therefore, the tilt of the attachment housing the mobile terminal 2 is detected by the motion sensor 59 of the mobile terminal 2, and the positions of the operation targets displayed on the touch screen are changed according to this tilt.
For example, when the operation targets are arranged in a row, the row is advanced cyclically in the direction of the detected tilt, with its path of travel passing through the touchable area.
Then, even if a desired operation target is displayed outside the touchable area, tilting the attachment housing the mobile terminal 2 moves that operation target into the touchable area. Once the desired operation target has moved into the touchable area, the user touches it on the touch screen with a finger inserted through the opening in the bottom of the attachment, and the process associated with that operation target is executed.
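A minimal sketch of this behavior, with hypothetical names and an assumed number of on-screen slots, might advance a row of operation targets cyclically on each detected tilt so that each target eventually passes over the opening:

```python
SLOTS = 6                 # assumed on-screen positions; slot 0 sits over the opening
targets = ["play", "pause", "ffwd", "rew", "menu", "vol"]   # hypothetical targets
offset = 0                # how far the row has been rotated so far

def on_tilt(direction: int) -> list[str]:
    """direction: +1 = tilt right, -1 = tilt left (from the motion sensor)."""
    global offset
    offset = (offset + direction) % SLOTS
    # Slot i now shows targets[(i + offset) % SLOTS]; slot 0 is the touchable one.
    return [targets[(i + offset) % SLOTS] for i in range(SLOTS)]

def on_touch() -> str:
    """A touch through the opening selects whatever currently sits in slot 0."""
    return targets[offset]
```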
How the plurality of operation targets are arranged, and how they are moved according to the detected tilt, is arbitrary.
For example, when the operation targets are arranged in multiple rows and columns, each tilt-and-return may advance the arrangement cyclically by one row and one column in the direction of the tilt, that is, scroll it cyclically along the diagonal.
Alternatively, a physical simulation may be used in which objects are placed in a container of viscous liquid and the container is tilted: the detected tilt is applied to the container, and the operation targets are associated with the objects. By adjusting the tilt of the attachment housing the mobile terminal 2, the user brings the desired operation target close to the finger inserted into the attachment through the opening and then touches it.
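The following is a speculative sketch of that physical simulation: each operation target drifts toward the low side of the tilted container and is damped by the liquid's viscosity. The constants are illustrative assumptions only.

```python
DAMPING, GRAVITY, DT = 0.85, 9.8, 1 / 60   # viscosity, gravity, frame time (assumed)

class FloatingTarget:
    """One operation target floating in the simulated viscous liquid."""
    def __init__(self, name: str, x: float):
        self.name, self.x, self.vx = name, x, 0.0

    def step(self, tilt_rad: float) -> None:
        # Gravity projected along the detected tilt, damped by the liquid.
        self.vx = DAMPING * self.vx + GRAVITY * tilt_rad * DT
        self.x = min(max(self.x + self.vx * DT, 0.0), 1.0)  # clamp to container

# Per sensor update:  for t in targets: t.step(tilt_from_motion_sensor())
# A target whose x lies under the opening is within the finger's reach.
```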
In the embodiments above, the video displayed by the mobile terminal 2 was controlled by opening, closing, or tapping the protruding piece 101 of the attachment, by touching the touch screen of the mobile terminal 2 through the opening in the attachment, and so on. Here, the attachment is shaped like a box. Consequently, when the user performs an action involving contact with the attachment, such as striking its surface, rubbing it, denting it slightly and letting it spring back, or opening, closing, or tapping the protruding piece 101, that action produces a sound, which resonates in the box shape. The resonated sound can be picked up by the microphone 60 of the mobile terminal 2.
When such an action is performed, the position and orientation of the mobile terminal 2 housed in the attachment also change. This movement can be detected by the motion sensor 59.
In this embodiment, therefore, the video displayed by the mobile terminal 2 housed in the attachment is controlled by either or both of the sound and the movement produced by such user actions.
As described above, the sound that the attachment produces in response to an action involving contact with it is picked up by the microphone 60 after resonating in the attachment. It can therefore easily be separated from external sound by, for example, applying a band-pass filter that passes only the frequency band in which the attachment resonates.
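A minimal sketch of this separation, assuming a resonance band and detection threshold chosen purely for illustration (the actual band would be measured for a given attachment), might look as follows:

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44_100                       # microphone sample rate (assumed)
RESONANCE_BAND = (180.0, 450.0)   # Hz; the attachment's resonance band (assumed)

# 4th-order Butterworth band-pass over the assumed resonance band.
sos = butter(4, RESONANCE_BAND, btype="bandpass", fs=FS, output="sos")

def attachment_sound_detected(chunk: np.ndarray, threshold: float = 0.02) -> bool:
    """True when in-band energy in this audio chunk exceeds the threshold."""
    in_band = sosfilt(sos, chunk)
    return float(np.sqrt(np.mean(in_band ** 2))) > threshold
```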
Another technique is to have multiple users perform the same operation on the same attachment (for example, tapping the same spot), record the sound and movement detected at that time, and extract sound and movement templates in advance.
For example, when the right side of the attachment is tapped once, a tap sound is detected via the microphone 60, and the mobile terminal 2 is detected moving minutely to the left once and then returning to the right. When the right side of the attachment is rubbed, a rubbing sound is detected via the microphone 60 and the mobile terminal 2 is detected vibrating, with the vibration on the left side smaller than on the right. By collecting such sound and movement characteristics experimentally, a template can be prepared for each action.
During operation, the mobile terminal 2 then compares the detected sound and movement against the templates prepared in advance, selects a template whose similarity is sufficiently high under a predetermined comparison criterion, and judges that the user has performed the action associated with that template.
For example, during video playback, actions can be associated with processes as follows: one tap on the right toggles between play and stop, two taps on the right fast-forward, two taps on the left rewind, and rubbing the right side displays a menu. Once the menu is displayed, rubbing the right or left side moves a cursor over the items in the menu, and one tap on the right selects the item under the cursor. The association between the type and number of actions and the processes can be changed arbitrarily.
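Combining the template comparison with such an action-to-process table could look like the following sketch; the feature vectors, the similarity measure (cosine similarity here), and the threshold are all illustrative assumptions.

```python
from typing import Optional
import numpy as np

templates = {                        # features collected experimentally beforehand
    "tap_right_once": np.array([0.9, 0.1, 0.2]),
    "tap_right_twice": np.array([0.9, 0.9, 0.2]),
    "rub_right": np.array([0.2, 0.2, 0.8]),
}
commands = {"tap_right_once": "toggle_play", "tap_right_twice": "fast_forward",
            "rub_right": "show_menu"}

def classify(observed: np.ndarray, min_similarity: float = 0.9) -> Optional[str]:
    """Return the command of the most similar template, if similar enough."""
    best, best_sim = None, -1.0
    for name, tmpl in templates.items():
        sim = float(observed @ tmpl / (np.linalg.norm(observed) * np.linalg.norm(tmpl)))
        if sim > best_sim:
            best, best_sim = name, sim
    return commands[best] if best_sim >= min_similarity else None
```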
According to this embodiment, identifying the user's actions involving contact with the attachment, regardless of whether the protruding piece 101 is present, makes it possible to control the mobile terminal 2 without impairing the user's sense of immersion.
The attachment according to this embodiment is constituted by the box in which the mobile terminal 2 is packaged for transport or sale.
Here, packaging refers to the technique, and the resulting state, of applying suitable materials, containers, and the like to an article to protect its value and condition during transport, storage, and so on; it is classified into individual packaging, inner packaging, and outer packaging. Depending on its purpose, packaging is further divided into industrial packaging intended for transport (transport packaging, delivery packaging, and the like) and commercial packaging intended for sale; industrial packaging, transport packaging, delivery packaging, and the like are also referred to as packing. Packaging in the present application is a concept encompassing all of the above.
FIGS. 13(a), (b), (c), and (d) are explanatory diagrams of the procedure for constructing the attachment from the packaging box of a mobile terminal. The description below refers to these figures, which show cross sections of the mobile terminal 2 and the box that packages it. For ease of understanding, the thickness, size, and gaps of each part are exaggerated.
The box 501 packaging the mobile terminal 2 consists of a lid 502, a main body 503, and a tray 504. The lid 502 covers the main body 503, and the tray 504, with the mobile terminal 2 placed on it, is housed inside the main body 503 (FIG. 13(a)). The lid 502 and the main body 503 can be made of paper such as cardboard or of various resins, but are not limited to these; they may be made of any material, such as ceramic or metal, and the material can be chosen freely. They may also be given patterns, colors, decorations, and the like according to their use and function.
Generally, when the mobile terminal 2 is shipped, it is placed on the tray 504 so that the surface opposite its screen (the back) rests on the bottom of the tray 504. Projections called claws are formed in part of the region of the tray 504 where the mobile terminal 2 sits, and the mobile terminal 2 can easily be fixed in place by fitting it between them. The figure shows a cross section at a point where a claw is present.
The tray 504 is made of a material such as transparent plastic, and a Fresnel lens 505 is integrally formed as a thin lens in the central portion of the region on which the mobile terminal 2 is placed. After taking the mobile terminal 2 and the tray 504 out of the main body 503, the user can obtain the Fresnel lens 505 by cutting the tray 504 along a perforation or cut line (FIG. 13(b)). The lid 502 is also provided with a perforation or cut line; cutting along it yields an opening 506 for the user's eyes (FIG. 13(b)).
When the user attaches the Fresnel lens 505 to the opening 506 of the lid 502 (FIG. 13(c)), a member corresponding to the proximity unit 41 is complete. The Fresnel lens 505 can be fixed by, for example, making cuts at the edge of the opening 506 of the lid 502 and clamping the lens in them, or by bonding it with adhesive tape or the like.
The user then turns the tray 504 over, fixes the mobile terminal 2 to it, and inserts it into the main body 503 (FIG. 13(c)), thereby fixing the mobile terminal 2 in the main body 503. This completes a member corresponding to the terminal accommodating unit 42 housing the mobile terminal 2.
Finally, when the user places the lid 502 over the main body 503 (FIG. 13(d)), the head proximity body 4 (attachment) housing the mobile terminal 2 is complete. By adjusting how far the lid 502 is pushed onto the main body 503, the user can bring the screen of the mobile terminal 2 into focus.
In the main body 503, cutting along perforations or cut lines prepared in advance can create an opening for touching the touch screen of the mobile terminal 2 with a finger and an opening for leading a headphone cord out of the mobile terminal 2.
The Fresnel lens 505 need not be molded integrally with the tray 504; it may be supplied separately or attached to the lid 502 from the time of shipment. In the description above, the mobile terminal 2 is fixed by inserting the tray 504 upside down, but instead of using the tray 504, notches or projections may be provided on the bottom or walls of the main body 503 to act as stoppers fixing the mobile terminal 2. In this form, the head proximity body 4 (attachment) can be made not only from the packing box (retail box) used when the mobile terminal 2 is shipped, but also from cardboard boxes used for shipping in mail-order sales and the like.
Alternatively, when the Fresnel lens 505 is provided separately from the tray 504 and the bottom of the tray 504 is made of a transparent material, the mobile terminal 2 is turned over relative to the tray 504 and placed so that its screen rests on the bottom of the tray 504; after the terminal is fixed, the tray 504 is placed upside down in the main body 503 so that the screen of the mobile terminal 2 is visible from outside through the bottom of the tray 504.
The lid 502 is then placed over the main body 503 so that the screen of the mobile terminal 2 is visible through the Fresnel lens 505 installed in the opening 506 of the lid 502.
There is also a technique that avoids cutting the Fresnel lens 505 out of the tray 504. FIGS. 14(a), (b), and (c) are explanatory diagrams of this procedure for constructing the attachment from the packaging box of a mobile terminal; the description below refers to them.
In this form too, as above, the Fresnel lens 505 is molded into the bottom of the tray 504 (FIG. 14(a)).
The mobile terminal 2 is taken out and fixed to the bottom of the main body 503 with stoppers (not shown) using notches or the like (FIG. 14(b)). This forms the terminal accommodating unit 42.
Meanwhile, the tray 504 is turned over and inserted into the main body 503 (FIG. 14(b)). This yields the simplest proximity unit 41.
That is, simply turning the tray 504 over and inserting it into the main body 503 yields the simplest attachment. To keep external light from entering through the sides of the tray 504, the sides may be made of an opaque material; if a transparent material is used, light-blocking stickers, tape, or the like may be applied to them.
In this form, if a flap (not shown) is provided on the main body 503, that flap can also serve as the protruding piece 101 of the terminal accommodating unit 42.
Alternatively, an opening 506 may be provided in the lid 502, the tray 504 may be fixed to the lid 502 by a method such as inserting the claws of the tray 504 into grooves provided in the lid 502 (not shown), and the tray 504 may then be inserted into the main body 503 together with the lid (FIG. 14(c)). In this form, the tray 504, turned over and fixed to the lid 502, constitutes the proximity unit 41. Because the proximity unit 41 then contacts the terminal accommodating unit 42 on both its inner and outer surfaces, friction acts effectively, and the accommodating unit 42 and the proximity unit 41 resist slipping out of position even after the focus has been adjusted.
In forms where the Fresnel lens 505 is provided in the tray 504, the tray 504 does not necessarily have to be turned over. FIGS. 15(a) and (b) are explanatory diagrams of attachments constructed from the packaging box of a mobile terminal. As shown in FIG. 15(a), if the tray 504 is removed from the main body 503, the mobile terminal 2 is fixed to the bottom of the main body 503, and the tray 504 is then reinserted into the main body 503 in its original orientation, the screen of the mobile terminal 2 can be viewed through the Fresnel lens 505 in the bottom of the tray 504.
As shown in FIG. 15(b), if the tray 504 is fixed by bonding or the like to the lid 502 provided with the opening 506, and the lid is then placed over the main body 503, the screen of the mobile terminal 2 can be viewed through the Fresnel lens 505 in the bottom of the tray 504 while external light is blocked efficiently.
In the examples above, a single Fresnel lens 505 is used as the eyepiece, but two round convex lenses may instead be molded into the bottom of the tray 504. In that case, the attachment 4 can also be constructed simply by providing the lid 502 with two round openings, by molding the portions of the tray 504 other than the convex lenses from an opaque material, or by blocking light with opaque members such as stickers or paper.
Thus, according to this embodiment, a display system serving as an inexpensive and simple alternative to a head-mounted display or the like can be provided to the user.
(Summary)
As described above, the display system according to the present application is a display system comprising:
a mobile terminal having a screen; and
an attachment that accommodates the mobile terminal such that the screen is visible to a user,
wherein the mobile terminal
detects, with a sensor of the mobile terminal, an action performed by the user via the attachment, and
controls the video displayed on the screen in accordance with the detected action.
In the display system of the present invention, the mobile terminal may be configured to:
detect, with the sensor, sound emitted from the attachment and movement of the mobile terminal;
identify whether the detected sound and movement were produced by an action including the user's contact with the attachment, rubbing of the attachment, or deformation of the attachment; and
control the video displayed on the screen in accordance with the identified action.
In the display system of the present invention, the mobile terminal may also be configured to:
detect, with the sensor, sound emitted from the attachment;
identify whether the detected sound was produced by an action including the user's contact with the attachment, rubbing of the attachment, or deformation of the attachment; and
control the video displayed on the screen in accordance with the identified action.
In the display system of the present invention, the mobile terminal may be configured to classify the frequency components of the detected sound according to whether they fall within the frequency band in which the attachment resonates, and thereby identify whether the sound was produced by such an action.
In the display system of the present invention, the mobile terminal may also be configured to:
detect the movement of the mobile terminal with the sensor;
identify whether the detected movement was produced by an action including the user's contact with the attachment, rubbing of the attachment, or deformation of the attachment; and
control the video displayed on the screen in accordance with the identified action.
In the display system of the present invention,
the screen may be a touch screen,
the attachment may have an opening through which the user can touch a partial area of the touch screen, and
the mobile terminal may be configured to:
move the positions of operation targets contained in the video into and out of the touchable area on the basis of the detected movement; and
when the touch screen detects a touch on an operation target that the user has moved into the touchable area, control the video displayed on the screen in accordance with that touch.
In the display system of the present invention,
the mobile terminal may have a camera whose shooting direction is the direction from which the user views the screen,
the attachment may accommodate the mobile terminal such that the camera can capture live video of the real space, and
the system may be configured to switch the video displayed on the screen between the live video and non-live video in accordance with the identified action.
In the display system of the present invention, the mobile terminal may be configured such that, while the non-live video is displayed on the screen, a partial video is cut out of a substitute video, including panoramic video and three-dimensional video, on the basis of the movement detected by the sensor, and the cut-out partial video serves as the non-live video.
In the display system of the present invention, the attachment may be constructed from the box that packages the mobile terminal, in which the mobile terminal is fixed by a tray, the tray is stored in a main body, and the main body is covered with a lid, by:
cutting away part of the bottom of the tray and fixing the mobile terminal so that its screen is visible through the opening thus provided in the tray;
storing the tray, provided with the opening and fixing the mobile terminal, in the main body; and
covering the main body with the lid so that the screen of the mobile terminal is visible through an opening provided in the lid by cutting away part of its upper surface.
In the attachment of the display system of the present invention, the tray may be a transparent body with part of its bottom formed as a lens, and the lens cut out of the tray may be fitted into the opening provided in the lid.
In the display system of the present invention, the attachment may also be constructed from the box that packages the mobile terminal, in which the mobile terminal is fixed by a tray, the tray is stored in a main body, and the main body is covered with a lid, wherein:
the tray is a transparent body with part of its bottom formed as a lens;
the mobile terminal is fixed to the bottom of the main body so that its screen is visible; and
the tray is stored in the main body so that the screen of the mobile terminal is visible through the lens.
In the display system of the present invention, the attachment may further be constructed from the box that packages the mobile terminal, in which the mobile terminal is fixed by a tray, the tray is stored in a main body, and the main body is covered with a lid, wherein:
the tray is a transparent body with part of its bottom formed as a lens;
the mobile terminal is fixed so that its screen is visible through the opening created in the tray by cutting the lens out of its bottom;
the tray, provided with the opening and fixing the mobile terminal, is stored in the main body;
the lens cut out of the tray is fitted into an opening provided in the lid by cutting away part of its upper surface; and
the main body is covered with the lid so that the screen of the mobile terminal is visible through the lens fitted into the opening.
The attachment of the present invention is an attachment that accommodates a mobile terminal having a screen such that the screen is visible to a user, and that transmits to the mobile terminal the sound or movement produced by the user's actions involving contact with the attachment, the attachment being constructed from the box that packages the mobile terminal, in which the mobile terminal is fixed by a tray, the tray is stored in a main body, and the main body is covered with a lid, wherein:
the tray is a transparent body with part of its bottom formed as a lens;
the mobile terminal is fixed to the bottom of the main body so that its screen is visible; and
the tray is stored in the main body so that the screen of the mobile terminal is visible through the lens.
The display method according to the present invention is a display method executed by a mobile terminal, the mobile terminal having a screen and being accommodated in an attachment such that the screen is visible to a user, the method comprising:
a detection step in which the mobile terminal detects, with a sensor of the mobile terminal, an action performed by the user via the attachment; and
a control step of controlling the video displayed on the screen in accordance with the detected action.
The program according to the present invention is a program for controlling a mobile terminal, the mobile terminal having a screen and being accommodated in an attachment such that the screen is visible to a user, the program causing the mobile terminal to:
detect, with a sensor of the mobile terminal, an action performed by the user via the attachment; and
control the video displayed on the screen in accordance with the detected action.
The program can be recorded on a computer-readable non-transitory information recording medium. The information recording medium can be distributed and sold independently of the computer that realizes the mobile terminal according to the present invention. The program can also be distributed and sold from a distribution server to a computer realizing the mobile terminal according to the present invention via a transitory communication medium on a computer communication network such as the Internet.
According to the present invention, it is possible to provide a display system, a display method, an attachment, and a program that are inexpensive and simple to realize and that serve as an alternative to a head-mounted display or the like.
This application claims priority based on Japanese Patent Application No. 2014-067716, filed in Japan on March 28, 2014, the contents of which are incorporated into the present application to the extent permitted by the laws of the designated states.
1 head proximity image display system
2 mobile terminal
3 recording module
4 head proximity body (attachment)
5 communication network
20 control application
22 video storage unit
31 panoramic video camera
41 proximity unit
42 terminal accommodating unit
43 headphones
44 imaging unit
58 power switch
59 motion sensor
60 microphone
62 display panel
65 operation unit
66 display panel
69 recording unit
101 protruding piece
102 side plate
102 head frame
103 hinge mechanism
110 rear plate
111 lens
112 front plate
113 holding piece
114 accommodating portion
123 groove
124 groove
311 base
312 camera array
320 main body
321 imaging device
330 interface
501 box
502 lid
503 main body
504 tray
505 Fresnel lens
506 opening
S21 substitute video acquisition step
S22 video accumulation step
S23 live video acquisition step
S24 head direction specifying step
S25 direction updating step
S28 reproduction control step
S35 audio data acquisition step
S38 detection step

Claims (26)

1. A display system comprising:
    a mobile terminal having a screen; and
    an attachment that accommodates the mobile terminal such that the screen is visible to a user,
    wherein the mobile terminal
    detects, with a sensor of the mobile terminal, an action performed by the user via the attachment, and
    controls the video displayed on the screen in accordance with the detected action.
2. The display system according to claim 1, wherein the mobile terminal
    detects, with the sensor, sound emitted from the attachment and movement of the mobile terminal,
    identifies whether the detected sound and movement were produced by an action including the user's contact with the attachment, rubbing of the attachment, or deformation of the attachment, and
    controls the video displayed on the screen in accordance with the identified action.
3. The display system according to claim 1, wherein the mobile terminal
    detects, with the sensor, sound emitted from the attachment,
    identifies whether the detected sound was produced by an action including the user's contact with the attachment, rubbing of the attachment, or deformation of the attachment, and
    controls the video displayed on the screen in accordance with the identified action.
4. The display system according to claim 1, wherein the mobile terminal classifies the frequency components of the detected sound according to whether they fall within the frequency band in which the attachment resonates, and thereby identifies whether the sound was produced by the action.
5. The display system according to claim 1, wherein the mobile terminal
    detects the movement of the mobile terminal with the sensor,
    identifies whether the detected movement was produced by an action including the user's contact with the attachment, rubbing of the attachment, or deformation of the attachment, and
    controls the video displayed on the screen in accordance with the identified action.
6. The display system according to claim 2 or 5, wherein
    the screen is a touch screen,
    the attachment has an opening through which the user can touch a partial area of the touch screen, and
    the mobile terminal
    moves the positions of operation targets contained in the video into and out of the touchable area on the basis of the detected movement, and
    when the touch screen detects a touch on an operation target that the user has moved into the touchable area, controls the video displayed on the screen in accordance with that touch.
7. The display system according to claim 2 or 5, wherein
    the mobile terminal has a camera whose shooting direction is the direction from which the user views the screen,
    the attachment accommodates the mobile terminal such that the camera can capture live video of the real space, and
    the video displayed on the screen is switched between the live video and non-live video in accordance with the identified action.
8. The display system according to claim 6, wherein the mobile terminal, while the non-live video is displayed on the screen, cuts a partial video out of a substitute video, including panoramic video and three-dimensional video, on the basis of the movement detected by the sensor, and uses the cut-out partial video as the non-live video.
9. The attachment in the display system according to any one of claims 1 to 8, the attachment being constructed from the box that packages the mobile terminal, in which the mobile terminal is fixed by a tray, the tray is stored in a main body, and the main body is covered with a lid, by:
    cutting away part of the bottom of the tray and fixing the mobile terminal so that its screen is visible through the opening thus provided in the tray;
    storing the tray, provided with the opening and fixing the mobile terminal, in the main body; and
    covering the main body with the lid so that the screen of the mobile terminal is visible through an opening provided in the lid by cutting away part of its upper surface.
10. The attachment according to claim 9, wherein the tray is a transparent body with part of its bottom formed as a lens, and the lens cut out of the tray is fitted into the opening provided in the lid.
11. The attachment in the display system according to any one of claims 1 to 8, the attachment being constructed from the box that packages the mobile terminal, in which the mobile terminal is fixed by a tray, the tray is stored in a main body, and the main body is covered with a lid, wherein:
    the tray is a transparent body with part of its bottom formed as a lens;
    the mobile terminal is fixed to the bottom of the main body so that its screen is visible; and
    the tray is stored in the main body so that the screen of the mobile terminal is visible through the lens.
12. The attachment according to claim 11, the attachment being constructed from the box that packages the mobile terminal, in which the mobile terminal is fixed by a tray, the tray is stored in a main body, and the main body is covered with a lid, wherein:
    the tray is a transparent body with part of its bottom formed as a lens;
    the mobile terminal is fixed so that its screen is visible through the opening created in the tray by cutting the lens out of its bottom;
    the tray, provided with the opening and fixing the mobile terminal, is stored in the main body;
    the lens cut out of the tray is fitted into an opening provided in the lid by cutting away part of its upper surface; and
    the main body is covered with the lid so that the screen of the mobile terminal is visible through the lens fitted into the opening.
13. An attachment that accommodates a mobile terminal having a screen such that the screen is visible to a user,
    the attachment transmitting to the mobile terminal the sound or movement produced by the user's actions involving contact with the attachment, and
    being constructed from the box that packages the mobile terminal, in which the mobile terminal is fixed by a tray, the tray is stored in a main body, and the main body is covered with a lid, wherein:
    the tray is a transparent body with part of its bottom formed as a lens;
    the mobile terminal is fixed to the bottom of the main body so that its screen is visible; and
    the tray is stored in the main body so that the screen of the mobile terminal is visible through the lens.
14. A display method executed by a mobile terminal, the mobile terminal having a screen and being accommodated in an attachment such that the screen is visible to a user, the method comprising:
    a detection step in which the mobile terminal detects, with a sensor of the mobile terminal, an action performed by the user via the attachment; and
    a control step of controlling the video displayed on the screen in accordance with the detected action.
15. A program for controlling a mobile terminal, the mobile terminal having a screen and being accommodated in an attachment such that the screen is visible to a user, the program causing the mobile terminal to execute processing to:
    detect, with a sensor of the mobile terminal, an action performed by the user via the attachment; and
    control the video displayed on the screen in accordance with the detected action.
  16.   1以上の代替映像が記録されている記録手段と、少なくとも上記記録手段に記録されている代替映像を表示する表示手段と、自らの動きを検出する動きセンサとを有する携帯端末と、
      映像視認時において当該ユーザの眼前に上記表示手段が位置するように上記携帯端末が収容される収容手段を有する頭部近接体とを備え、
      上記表示手段は、上記動きセンサにより検出された動きに応じて上記代替映像の表示を開始し又は停止すること
      を特徴とする頭部近接型映像表示システム。
    A portable terminal having recording means for recording one or more alternative videos, display means for displaying at least the alternative videos recorded in the recording means, and a motion sensor for detecting its own movement;
    A head proximity body having an accommodating means for accommodating the portable terminal so that the display means is positioned in front of the user's eyes at the time of visual recognition,
    The head proximity image display system, wherein the display means starts or stops displaying the substitute image in accordance with the motion detected by the motion sensor.
  17.   上記携帯端末は、ユーザと略同一の視点で現実空間をライブ映像として撮像する撮像手段を更に有し、
      上記表示手段は、上記動きセンサにより検出された動きに応じて、上記記録手段に記録されている代替映像又は上記撮像手段により撮像されたライブ映像の表示、停止を行い、又は上記代替映像と上記ライブ映像の表示の切り替えを行うこと
      を特徴とする請求項16記載の頭部近接型映像表示システム。
    The portable terminal further includes an imaging unit that captures the real space as a live video from a viewpoint substantially the same as the user,
    The display means displays and stops the alternative video recorded in the recording means or the live video imaged by the imaging means according to the motion detected by the motion sensor, or the alternative video and the above 17. The head proximity type video display system according to claim 16, wherein the live video display is switched.
  18.   The head proximity type video display system according to claim 16, wherein the head proximity body has a head frame brought close to the user's head when the video is viewed, and openable and closable protruding pieces protruding from both sides of the head frame at an interval narrower than the user's head, and
      the motion sensor detects motion corresponding to the opening and closing of the protruding pieces by the user.
  19.   The head proximity type video display system according to claim 18, wherein the motion sensor detects a motion of moving the head proximity body to a position just in front of the head at the start of video viewing, and a deceleration accompanying the opening and closing of the protruding pieces in front of the head.
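Claim 19's two-phase gesture (move the viewer toward the head, then decelerate as the protruding pieces are opened) can be pictured as a simple state machine over acceleration samples. The Python sketch below illustrates one such reading; the thresholds and the raw, unfiltered samples are assumptions, and a real device would smooth the sensor stream first.

```python
# A sketch of the two-phase detection in claim 19, with invented numbers:
# a sustained acceleration as the headset is brought toward the head,
# followed by a sharp deceleration as the protruding pieces are opened
# just in front of it.

def detect_raise_and_stop(magnitudes: list,
                          move_thresh: float = 3.0,
                          decel_thresh: float = -2.5) -> bool:
    """Return True when a movement phase is followed by a deceleration."""
    moving = False
    for prev, cur in zip(magnitudes, magnitudes[1:]):
        if cur - prev > move_thresh:
            moving = True                    # headset being moved to the head
        elif moving and cur - prev < decel_thresh:
            return True                      # deceleration in front of the head
    return False

print(detect_raise_and_stop([0.1, 4.0, 8.5, 8.6, 5.0, 1.0]))  # True
```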
  20.   The head proximity type video display system according to claim 16, wherein the head proximity body further has a lens for magnifying the video displayed by the display means of the mobile terminal accommodated in the accommodating means.
  21.   The head proximity type video display system according to claim 16, further comprising alternative video acquisition means for acquiring, as one alternative video, video captured in advance in time series in all directions substantially centered on the user's head, or virtual video, such as computer graphics, created in real time,
      wherein the mobile terminal further has video control means for, when the one alternative video is reproduced, cutting out the video to be displayed from the one alternative video along the time series on the basis of the user's line of sight or head movement.
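Claim 21's video control means, which cuts the displayed region out of an omnidirectional frame according to the user's gaze, is commonly realised as a viewport crop over an equirectangular projection. The following NumPy sketch shows that idea under assumptions the claim does not fix: an equirectangular source, a 90-degree field of view, and simple clamping near the poles.

```python
# A minimal sketch of claim 21's cut-out: given the user's head yaw and
# pitch, crop the region to display out of one equirectangular
# (omnidirectional) frame. NumPy, the projection, and the field of view
# are assumptions; the claim prescribes none of them.

import numpy as np

def cut_viewport(frame: np.ndarray, yaw_deg: float, pitch_deg: float,
                 fov_deg: float = 90.0) -> np.ndarray:
    """Crop the equirectangular frame around the gaze direction."""
    h, w = frame.shape[:2]                    # 360 deg maps to w, 180 deg to h
    half_w = int(w * fov_deg / 360.0) // 2
    half_h = int(h * fov_deg / 180.0) // 2
    cx = int((yaw_deg % 360.0) / 360.0 * w)   # horizontal centre from yaw
    cy = int((90.0 - pitch_deg) / 180.0 * h)  # vertical centre from pitch
    cy = min(max(cy, half_h), h - half_h)     # clamp; poles need special care
    cols = [(cx + dx) % w for dx in range(-half_w, half_w)]  # wrap horizontally
    return frame[cy - half_h:cy + half_h, cols]

frame = np.zeros((1024, 2048, 3), dtype=np.uint8)   # one omnidirectional frame
view = cut_viewport(frame, yaw_deg=30.0, pitch_deg=10.0)
print(view.shape)   # (512, 512, 3)
```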
  22.   The head proximity type video display system according to claim 21, wherein the alternative video acquisition means processes information acquired by accessing a public communication network as needed, or information accumulated in advance, into one alternative video.
  23.   The head proximity type video display system according to claim 22, further comprising audio data acquisition means for acquiring audio data,
      wherein the video control means reproduces the audio data acquired by the audio data acquisition means in conjunction with the alternative video to be reproduced.
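Claim 23 only requires that the acquired audio be reproduced in conjunction with the alternative video. A minimal illustration, with an assumed frame rate and sample rate, is to slave the audio playhead to the video clock, so that seeking the video also seeks the audio:

```python
# A toy sketch of claim 23's linkage: the audio position is derived from
# the current video frame. The 30 fps frame rate and 48 kHz sample rate
# are assumptions for the example.

class SyncedPlayback:
    def __init__(self, fps: float = 30.0, audio_rate: int = 48_000):
        self.fps = fps
        self.audio_rate = audio_rate

    def audio_position(self, video_frame_index: int) -> int:
        # Convert the current video frame into the matching audio sample,
        # keeping the two streams in lockstep.
        seconds = video_frame_index / self.fps
        return int(seconds * self.audio_rate)

sync = SyncedPlayback()
print(sync.audio_position(90))   # frame 90 at 30 fps -> sample 144000
```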
  24.   A head proximity body for use in the head proximity type video display system according to claim 16, the head proximity body having accommodating means in which the mobile terminal is accommodated so that the display means is positioned in front of the user's eyes when the video is viewed.
  25.   The head proximity body according to claim 24, having a head frame brought close to the user's head when the video is viewed, and openable and closable protruding pieces protruding from both sides of the head frame at an interval narrower than the user's head.
  26.   A head proximity type video display program for causing a mobile terminal to display video for a user who has brought a head proximity body close to the head, the program causing the mobile terminal to execute:
      an alternative video recording step of recording one or more alternative videos;
      a motion detection step of detecting motion of the mobile terminal; and
      a display step of starting or stopping display of the one or more alternative videos recorded in the alternative video recording step in accordance with the motion detected in the motion detection step.
PCT/JP2014/080393 2014-03-28 2014-11-17 Display system, attachment, display method, and program WO2015145863A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2016509900A JP6278490B2 (en) 2014-03-28 2014-11-17 Display system, attachment, display method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014067716 2014-03-28
JP2014-067716 2014-03-28

Publications (1)

Publication Number Publication Date
WO2015145863A1

Family

ID=54194433

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/080393 WO2015145863A1 (en) 2014-03-28 2014-11-17 Display system, attachment, display method, and program

Country Status (2)

Country Link
JP (1) JP6278490B2 (en)
WO (1) WO2015145863A1 (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012221435A (en) * 2011-04-14 2012-11-12 Lenovo Singapore Pte Ltd Method of waking up electronic device including touch panel and electronic device
JP2013077013A (en) * 2012-11-20 2013-04-25 Sony Corp Display device and display method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SIMON ROCKMAN: "Oculus Rift? Tchah, try 'Oculus Thrift' ... You bet your vrAse we tested these bargain VR specs", THE REGISTER, 21 March 2014 (2014-03-21), XP055226460, Retrieved from the Internet <URL:http://www.theregister.co.uk/2014/03/21/wearable_tech_show> [retrieved on 20150128] *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019518536A (en) * 2016-05-25 2019-07-04 MIN, Sang Kyu Virtual reality combined cell phone case
JP7053041B2 (en) 2016-05-25 2022-04-12 MIN, Sang Kyu Mobile phone case for virtual reality
JP2018014090A (en) * 2016-06-15 2018-01-25 Immersion Corporation Systems and methods for providing haptic feedback via case
JP2018528444A (en) * 2016-07-27 2018-09-27 Beijing Xiaomi Mobile Software Co., Ltd. Virtual reality glasses
US10108228B2 (en) 2016-07-27 2018-10-23 Beijing Xiaomi Mobile Software Co., Ltd. Virtual reality goggle
JP2019529261A (en) * 2016-09-12 2019-10-17 ZTE Corporation Cell phone case
JP6262890B1 (en) * 2017-01-13 2018-01-17 株式会社日本エスシーマネージメント Viewing device, underwater space viewing system, and underwater space viewing method
JP2018113653A (en) * 2017-01-13 2018-07-19 株式会社日本エスシーマネージメント Viewing device, and underwater space viewing system and method for viewing the same
WO2018131688A1 (en) * 2017-01-13 2018-07-19 株式会社日本エスシーマネージメント Viewing device, underwater space viewing system, and underwater space viewing method
JP2018205345A (en) * 2017-05-30 2018-12-27 株式会社電通 Water ar goggle
KR20230071878A (en) * 2021-11-16 2023-05-24 장영진 Wearable Device for Ocular Motor Training Using Augmented Reality Images
KR102624540B1 (en) * 2021-11-16 2024-01-15 장영진 Wearable Device for Ocular Motor Training Using Augmented Reality Images

Also Published As

Publication number Publication date
JP6278490B2 (en) 2018-02-14
JPWO2015145863A1 (en) 2017-04-13

Similar Documents

Publication Publication Date Title
JP6278490B2 (en) Display system, attachment, display method, and program
US9927948B2 (en) Image display apparatus and image display method
CN108605166B (en) Method and equipment for presenting alternative image in augmented reality
US9922448B2 (en) Systems and methods for generating a three-dimensional media guidance application
JP6558587B2 (en) Information processing apparatus, display apparatus, information processing method, program, and information processing system
US20160379417A1 (en) Augmented reality virtual monitor
US20140123015A1 (en) Information processing system, information processing apparatus, and storage medium
JP6845111B2 (en) Information processing device and image display method
JP6292658B2 (en) Head-mounted video display system and method, head-mounted video display program
TW201501510A (en) Method and system for displaying multi-view images and non-transitory computer readable storage medium thereof
CN111970456B (en) Shooting control method, device, equipment and storage medium
US11354871B2 (en) Head-mountable apparatus and methods
CN110036416A (en) Device and correlation technique for space audio
KR20180095197A (en) Mobile terminal and method for controlling the same
US20160070346A1 (en) Multi vantage point player with wearable display
US20180192031A1 (en) Virtual Reality Viewing System
WO2020206647A1 (en) Method and apparatus for controlling, by means of following motion of user, playing of video content
CN106954093B (en) Panoramic video processing method, device and system
JP2006086717A (en) Image display system, image reproducer, and layout controller
EP3654099A2 (en) Method for projecting immersive audiovisual content
JP6720575B2 (en) Video playback device and video processing device
KR20170046947A (en) Mobile terminal and method for controlling the same
JP2020025275A (en) Video and audio reproduction device and method
US20150070465A1 (en) Interactive television

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14887167

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016509900

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase
122 Ep: pct application non-entry in european phase

Ref document number: 14887167

Country of ref document: EP

Kind code of ref document: A1