WO2013108438A1 - Display device, display method and display program - Google Patents

Display device, display method and display program Download PDF

Info

Publication number
WO2013108438A1
WO2013108438A1 PCT/JP2012/073958 JP2012073958W
Authority
WO
WIPO (PCT)
Prior art keywords
moving image
unit
orientation
display
screen
Prior art date
Application number
PCT/JP2012/073958
Other languages
French (fr)
Japanese (ja)
Inventor
新開 誠
Original Assignee
シャープ株式会社 (Sharp Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by シャープ株式会社 (Sharp Corporation)
Publication of WO2013108438A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F 1/16 Constructional details or arrangements
    • G06F 1/1613 Constructional details or arrangements for portable computers
    • G06F 1/1633 Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F 1/1684 Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F 1/1694 Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675, the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input obtained by sensing movements of the portable computer
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/363 Graphics controllers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2200/00 Indexing scheme relating to G06F1/04 - G06F1/32
    • G06F 2200/16 Indexing scheme relating to G06F1/16 - G06F1/18
    • G06F 2200/161 Indexing scheme relating to constructional details of the monitor
    • G06F 2200/1614 Image rotation following screen orientation, e.g. switching from landscape to portrait mode
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2200/00 Indexing scheme relating to G06F1/04 - G06F1/32
    • G06F 2200/16 Indexing scheme relating to G06F1/16 - G06F1/18
    • G06F 2200/163 Indexing scheme relating to constructional details of the computer
    • G06F 2200/1637 Sensing arrangement for detection of housing movement or orientation, e.g. for controlling scrolling or cursor movement on the display of a handheld computer
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 Aspects of display data processing
    • G09G 2340/04 Changes in size, position or resolution of an image
    • G09G 2340/0492 Change of orientation of the displayed image, e.g. upside-down, mirrored
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2380/00 Specific applications
    • G09G 2380/14 Electronic books and readers

Definitions

  • the present invention relates to a display device, a display method, and a display program.
  • content that incorporates multimedia data such as video and audio into an e-book can express things in ways that conventional paper could not. Such content can therefore be said to be beneficial to both content creators and users.
  • interactive content is assumed in which, when a user selects a specific character or image, a moving image corresponding to the selection is played. Content is also assumed in which a moving image is played automatically when a specific page is displayed.
  • Patent Document 1 discloses an information display device that, when displaying a moving image, displays the moving image directly in a still-image display area so as not to affect the layout of other characters on the screen.
  • some terminals such as smartphones and tablets use a built-in acceleration sensor to switch the display according to the orientation (portrait / landscape) of the terminal. Therefore, even when the terminal displays an electronic book, it is required to display the content in a form that matches the orientation of the terminal. That is, when the orientation of the terminal is changed to landscape (rotated) while a page is displayed in portrait orientation, the same page must be displayed on the landscape screen.
  • when the content includes a moving image and the orientation of the terminal changes, depending on the content the moving image may be located outside the display area after the change, and may not be displayed on the screen after the orientation has changed.
  • likewise, when the orientation of the terminal changes during playback of a moving image, depending on the content the moving image may be located outside the display area after the change and may not be displayed on the screen. As a result, the user cannot comfortably view content that includes moving images.
  • the present invention provides a display device, a display method, and a display program that enable comfortable viewing of content including moving images.
  • the display device includes a control unit that, when the orientation of the device changes, determines the screen layout after the orientation change so that at least one entire moving image included in the content is displayed on the screen.
  • the display device may include a display unit that displays content including a moving image on a screen, and an orientation detection unit that detects a change in the orientation of the device itself in the plane containing the screen.
  • the control unit may be configured to determine the screen layout after the orientation change so that at least one moving image included in the content is displayed on the screen when the orientation detection unit detects a change in orientation.
  • the control unit may include a moving image viewing determination unit that determines whether or not the user is viewing the moving image displayed on the display unit, and a layout unit that, when the orientation detection unit detects a change in orientation and the moving image viewing determination unit determines that the moving image is being viewed, determines the screen layout after the orientation change based on the position on the screen at which the moving image is displayed.
  • the control unit may include a page position holding unit that, when the orientation detected by the orientation detection unit changes, holds the displayed page position and the direction of the orientation change.
  • when the change in orientation detected by the orientation detection unit is opposite to the change direction held by the page position holding unit, the layout unit may determine the screen layout after the change based on the page position held by the page position holding unit.
  • the moving image viewing determination unit may be configured to determine that the moving image is being watched at least when the moving image is being played back.
  • the control unit may include a line-of-sight detection unit that detects the user's line of sight.
  • the moving image viewing determination unit may be configured to determine whether or not the user is viewing the moving image based on a reproduction state of the moving image and a line-of-sight direction detected by the line-of-sight detection unit.
  • the control unit may include a moving image selection unit that selects, based on a viewing index, one moving image from among a plurality of moving images included in the content as the moving image being viewed.
  • the viewing index may be a priority for each moving picture described in the document data.
  • the moving image selection unit may be configured to select one moving image from among a plurality of moving images included in the content based on a priority for each moving image as a moving image being viewed.
  • the viewing index may be a display size of the moving image.
  • the moving image selection unit may be configured to select one moving image from among a plurality of moving images included in the content as a moving image being viewed based on the display size of the moving image.
  • the viewing index may be a user's line of sight.
  • the control unit may include a second line-of-sight detection unit that detects the user's line-of-sight direction.
  • the moving image selection unit may be configured to select one moving image from among a plurality of moving images included in the content based on the line-of-sight direction detected by the line-of-sight detection unit.
  • a display method according to the invention is a display method executed by the display device, and includes a step of determining, when the orientation of the device changes, the screen layout after the orientation change so that at least one entire moving image included in the content is displayed on the screen.
  • a display program according to the invention causes the computer of the display device to execute a step of determining, when the orientation of the device changes, the screen layout after the orientation change so that at least one entire moving image included in the content is displayed on the screen.
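the behavior claimed above can be sketched in code. The sketch below is a minimal illustration under assumptions of our own (the `Layout` record, pixel coordinates, and the fallback rule are not taken from the patent): on rotation, prefer a candidate layout whose moving image area lies entirely on the screen.

```python
from dataclasses import dataclass

@dataclass
class Layout:
    name: str
    movie_top: int     # top edge of the movie area on screen, in px
    movie_bottom: int  # bottom edge of the movie area on screen, in px

def pick_layout(candidates, screen_height):
    """Return the first candidate layout whose movie area lies entirely
    within [0, screen_height]; fall back to the first candidate."""
    for c in candidates:
        if c.movie_top >= 0 and c.movie_bottom <= screen_height:
            return c
    return candidates[0]  # no candidate keeps the movie fully visible
```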
  • in the following, an electronic book device that displays an electronic book will be described as an example of the display device.
  • there are several ways to display electronic books depending on the type of content: for example, a method of dynamically generating pages according to the display size of the electronic book device (FIG. 12), or a method of displaying two pages side by side (a two-page spread) when the electronic book device is in landscape orientation (FIG. 13).
  • FIG. 12 is a diagram for explaining a first comparative example of a method of dynamically generating pages according to the display size of the display device.
  • an example of the display screen 71 in the case where the display device is oriented vertically is shown.
  • a character area 71a and a moving image 71b are displayed.
  • the display screen 72 in the figure is an example of the display screen when the display device displaying the display screen 71 is turned sideways.
  • the character area 72a corresponding to the character area 71a is displayed.
  • the moving image corresponding to the moving image 71b is not displayed on the display screen 72 shown when the display device is in landscape orientation; instead, the moving image 73b corresponding to the moving image 71b is displayed on the display screen 73 of the next page.
  • in this method, the page break position is not fixed in advance; it is determined dynamically according to the result of arranging characters, images, and moving images.
  • the display content after rotation is basically determined based on the first character of the displayed page. For this reason, when the display device is turned to landscape orientation, a moving image arranged at the bottom of the page may be pushed onto the next page, for example when lines of characters continue as shown in FIG. 12.
  • FIG. 13 is a diagram for explaining a second comparative example in which two pages are displayed side by side when the display device is turned sideways.
  • a display screen 81 when the display device is in landscape orientation is shown.
  • a text area 81a is displayed on the left page.
  • a text area 81b and a moving image 81c are displayed on the right page.
  • the display screen 82 in the figure is an example of the display screen when the display device displaying the display screen 81 is oriented vertically.
  • the text area 82a is displayed.
  • the moving image corresponding to the moving image 81c is not displayed on the display screen 82, and the moving image 83b corresponding to the moving image 81c is displayed on the display screen 83 of the next page.
  • the electronic book device 100 includes a display for displaying a book, and a touch panel for inputting a user operation is disposed on the display.
  • the user can use the touch panel to give instructions such as page movement, display enlargement / reduction, and content selection.
  • a configuration including physical buttons for operation may be used instead of the touch panel or in addition to the touch panel.
  • the electronic book device 100 includes an acceleration sensor inside the device, and has a function of detecting the orientation of the device and outputting a screen according to the orientation of the device. That is, if the orientation of the electronic book apparatus 100 changes during display of the content, the electronic book apparatus 100 changes the orientation of the content according to the orientation.
  • the content in the present embodiment is not a book containing only characters and still images, but content that includes at least one moving image.
  • content including a moving image will be described.
  • the content including the moving image only needs to include the moving image in at least a part of the content, and the entire content may be the moving image.
  • the moving image is displayed inline in the content as an example.
  • content in which the user reads characters and images together is assumed.
  • for example, content such as pictorial books, which use images as well as text for explanation, and sports instructional books are assumed.
  • the electronic book device 100 may have a function of switching the display method so that the moving image can be viewed on the full screen, assuming that the moving image is mainly viewed. Further, the electronic book apparatus 100 does not need to actually include the moving image data in the book data, and may acquire it from the outside via the network at the timing of reproduction.
  • FIG. 1 is a schematic block diagram of an electronic book device 100 according to this embodiment.
  • the electronic book device 100 includes a user input unit 101, an orientation detection unit 102, an imaging unit 103, a storage unit 104, a communication unit 105, a display unit 106, and a control unit 110.
  • the user input unit 101 receives a user input input by the user and outputs the received user input to the control unit 110.
  • the user input includes operations for moving images such as moving image reproduction start, pause, and skip in addition to operations for books such as page movement and display enlargement or reduction.
  • the user input unit 101 is an input device, for example, a touch panel.
  • the orientation detection unit 102 includes, for example, an acceleration sensor and acquires the orientation of the own device and detects that the orientation of the own device has changed. That is, the orientation detection unit 102 detects a change in the orientation of the device itself on a plane including the screen of the display unit 106.
  • the orientation detection unit 102 outputs the acquired orientation information indicating the orientation of the own device and the orientation change information indicating the detected change in the orientation of the own device to the control unit 110.
  • the imaging unit 103 captures an image of the face of the user of the device while a moving image is being reproduced on the display unit 106, and outputs the image data obtained by imaging to the control unit 110.
  • the imaging unit 103 is provided, for example, on the front surface of the device (the surface on which the display unit is provided).
  • the storage unit 104 holds book data representing the text contained in a book and the moving image data contained in the book.
  • the communication unit 105 wirelessly communicates with other devices on the network via its built-in wireless network interface, transmitting and receiving data under the control of the control unit 110.
  • the wireless communication in the communication unit 105 is, for example, mobile communication such as 3G (3rd Generation) or LTE (Long Term Evolution), or a wireless LAN. Note that the communication in the communication unit 105 may instead be wired.
  • the control unit 110 determines the screen layout after the orientation change so that at least one moving image included in the content is displayed on the screen. More specifically, when the orientation detection unit 102 detects a change in orientation, the control unit 110 determines the screen layout after the change so that at least one entire moving image included in the content is displayed on the screen. Specific processing of the control unit 110 will be described later. The control unit 110 also performs the following processing: for example, when the orientation of the electronic book apparatus 100 changes, the control unit 110 makes the moving image size after the orientation change the same as the moving image size before the change.
  • the moving image size is the size of the moving image displayed on the screen.
  • the control unit 110 performs processing for the book or processing for a moving image in accordance with user input from the user input unit 101.
  • the processing for the book is, for example, page movement, display enlargement or reduction, and the like.
  • the processing for a moving image is, for example, starting playback, pausing, skipping, and the like.
  • the control unit 110 determines the text and moving image layout according to the orientation of the device input from the orientation detection unit 102.
  • the control unit 110 reads the book data or moving image data from the storage unit 104 and causes the display unit 106 to display the read data with the determined text and moving image layout.
  • the control unit 110 causes the imaging unit 103 to capture an image of the user of the device while a moving image is being reproduced on the display unit 106.
  • based on the captured image, the control unit 110 determines whether or not the user is watching the moving image being played.
  • the control unit 110 also controls the communication unit 105 to acquire document data or moving image data stored in another device on the network.
  • the control unit 110 may change the size on the screen of at least one moving image included in the content after the orientation change so that it fits on the screen.
  • alternatively, the control unit 110 may make the moving image size after the orientation change the same as the moving image size before the change for at least one moving image included in the content.
  • the display unit 106 displays the moving image or book information input from the control unit 110.
  • the book information is a sentence included in the electronic book. That is, the display unit 106 displays content including moving images on the screen.
  • FIG. 2 is a schematic block diagram showing a logical configuration of the control unit 110 in the present embodiment.
  • the control unit 110 includes a moving image playback unit 111, a layout unit 112, a page position holding unit 113, a moving image selection unit 114, a moving image display position calculation unit 115, a data management unit 116, a line-of-sight detection unit 120, and a moving image viewing determination unit 130.
  • the moving image reproducing unit 111 decodes moving image data incorporated in the book and causes the display unit 106 to display the decoded data.
  • the moving image playback unit 111 manages the playback state of the moving image (playing / paused / stopped) and performs processing such as starting playback, pausing, and skipping according to user input from the user input unit 101.
  • the video codec is, for example, H.264, and the container is the mp4 format, but other formats such as MPEG (Moving Picture Experts Group)-2 and WMV (Windows (registered trademark) Media Video) may be used.
  • audio data may be included.
  • a plurality of moving images may be decoded simultaneously and the decoded data may be displayed.
  • the electronic book device 100 can also reproduce content in which a plurality of moving images are embedded in one page of the book.
  • the layout unit 112 acquires the book data via the data management unit 116, lays out the book according to the orientation of the device and the size of the display, and causes the display unit 106 to display the data after layout.
  • the book data format is, for example, XMDF (ever-eXtending Mobile Document Format), but other formats such as EPUB (Electronic PUBlication) may be used.
  • a format capable of expressing a document such as HTML (HyperText Markup Language) or XML (Extensible Markup Language) may be used.
  • the book display method is specified for each content.
  • a display method for dynamically generating a page as shown in FIG. 12 is assumed.
  • the layout unit 112 does not fix the page break position in advance; it determines the page break position dynamically according to the result of arranging characters, images, and moving images.
  • the display content after the rotation is determined based on the first character of the displayed page.
  • when the orientation changes, the page break position also changes; therefore, the character at the top of the next page differs between advancing pages in portrait orientation and advancing them in landscape orientation.
  • as shown in FIG. 13, the content may be displayed on two pages when the casing of the device is in landscape orientation, and on one page when the casing is in portrait orientation.
  • the layout unit 112 determines the page layout after rotation based on the relative position of the moving image area calculated by the moving image display position calculation unit 115 described later. Specifically, to make it appear as if the rotation is performed around the position where the moving image area is displayed, the layout unit 112 chooses the layout so that the relative position of the moving image area is as close as possible to its relative position before rotation. That is, when the orientation detection unit 102 detects a change in orientation and the moving image viewing determination unit 130 determines that the moving image is being viewed, the layout unit 112 determines the screen layout corresponding to the new orientation based on the position on the screen at which the moving image is displayed. Details of the processing of the layout unit 112 will be described later.
  • the page position holding unit 113 receives orientation change information from the orientation detection unit 102.
  • the page position holding unit 113 holds the page position displayed before the rotation and the direction of the orientation change. That is, when the orientation detected by the orientation detection unit 102 changes, the page position holding unit 113 holds the displayed page position and the change direction.
  • when the change in orientation detected by the orientation detection unit 102 is in the direction opposite to the change direction held by the page position holding unit 113, the layout unit 112 determines the screen layout after the change based on the page position held by the page position holding unit 113.
  • the page position holding unit 113 holds an offset value, within the data, of the first character of the page. When the head of the page is an element such as an image rather than a character, the page position holding unit 113 holds the offset value of that element instead, so that the layout unit 112 can identify the element.
  • when returning to the pre-rotation state after a rotation, the layout unit 112 uses this offset value to identify the first character or first element of the page. Details of the processing of the layout unit 112 will be described later.
  • the line-of-sight detection unit 120 detects the user's line-of-sight direction SL based on the image captured by the imaging unit 103 during moving image playback, and outputs line-of-sight direction information indicating the detected direction SL to the moving image viewing determination unit 130. Specific processing will be described later.
  • the moving image viewing determination unit 130 determines whether or not the user is viewing the moving image displayed on the display unit 106. More specifically, while a moving image is being played, the moving image viewing determination unit 130 determines whether the user is viewing the moving image based on the line-of-sight direction SL indicated by the line-of-sight direction information input from the line-of-sight detection unit 120. That is, the moving image viewing determination unit 130 determines whether or not the user is viewing the moving image based on the playback state of the moving image and the line-of-sight direction SL detected by the line-of-sight detection unit 120.
  • the moving image viewing determination unit 130 determines that the moving image is not being viewed when, for example, the probability that the moving image was watched over a predetermined period (for example, the last few seconds) is lower than a predetermined threshold. Conversely, it determines that the moving image is being viewed when that probability is equal to or greater than the threshold. Specific processing of the moving image viewing determination unit 130 will be described later. The moving image viewing determination unit 130 outputs the determination result to the moving image selection unit 114.
  • the moving image viewing determination unit 130 may determine whether or not the user is viewing the moving image based only on the playback state of the moving image. That is, it determines that the user is watching the moving image if it is being played back, and that the user is not watching it if it is not being played (stopped, paused, and so on). In other words, the moving image viewing determination unit 130 may determine that the moving image is being watched at least when the moving image is being played back.
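the two determination variants described above (playback state alone, or playback state combined with a gaze probability over the last few seconds) can be sketched as follows. The function shape and the threshold value are illustrative assumptions, not taken from the patent:

```python
def is_watching(playback_state, gaze_probability=None, threshold=0.5):
    """Decide whether the user is watching the movie.

    playback_state: one of "playing", "paused", "stopped".
    gaze_probability: estimated probability, over the last few seconds,
    that the user's line of sight was on the movie (None if unavailable).
    """
    if playback_state != "playing":
        return False  # stopped or paused: treated as not watching
    if gaze_probability is None:
        return True   # playback-state-only variant: playing implies watching
    return gaze_probability >= threshold
```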
  • when the determination result input from the moving image viewing determination unit 130 indicates that a moving image is being viewed and the moving image playback unit 111 is playing back a plurality of moving images simultaneously, the moving image selection unit 114 selects the moving image most likely to be watched. Even if a plurality of moving images are arranged on the same page, if only one of them is being played, the moving image selection unit 114 selects that moving image. Specifically, the moving image selection unit 114 selects one moving image from among the plurality of moving images included in the content based on a viewing index.
  • the moving image selection unit 114 selects a moving image based on the priority included in the content data.
  • the content creator embeds, in the content data at the time of its creation, priority information that specifies a priority for each moving image, and the moving image selection unit 114 selects the moving image with the highest priority indicated by that information.
  • the viewing index is, as an example, the priority for each moving image described in the document data, and the moving image selection unit 114 selects, based on the priority for each moving image, one moving image from among the plurality of moving images included in the content as the moving image being viewed.
  • the content creator sets the priorities accordingly.
  • the moving image display position calculation unit 115 calculates the relative position of the moving image area on the screen. Specifically, when the text is written horizontally, the moving image display position calculation unit 115 obtains the vertical relative position of the moving image area. For example, it calculates the relative position using the numbers of character lines before and after the moving image area: if the number of character lines before the moving image area is L1 and the number after it is L2, the moving image display position calculation unit 115 sets the relative position of the moving image area to L1 / (L1 + L2). The moving image display position calculation unit 115 then outputs the calculated relative position of the moving image area to the layout unit 112.
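the L1 / (L1 + L2) calculation above can be written as a short helper. This is a sketch; the handling of a page with no surrounding text lines is our own assumption:

```python
def movie_relative_position(lines_before, lines_after):
    """Vertical relative position of the movie area: L1 / (L1 + L2),
    where L1 / L2 are the numbers of character lines before / after it.
    0 means the movie sits at the top of the screen, 1 at the bottom."""
    total = lines_before + lines_after
    if total == 0:
        return 0.5  # no surrounding text; treat as centred (an assumption)
    return lines_before / total
```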
  • the data management unit 116 reads the data recorded in the storage unit 104 in response to a request from the layout unit 112 or the moving image playback unit 111, and outputs the read data to the requesting unit. In addition, when data on the network is designated as the data reference destination, the data management unit 116 acquires the data via the communication unit 105.
  • FIG. 3 is a diagram illustrating an example of page layouts before and after rotation to landscape orientation.
  • This vertical relative position takes a value of 0 if the moving image area is arranged at the top of the screen, 1 if it is arranged at the bottom of the screen, and 0-1 depending on the position otherwise. Therefore, the moving image display position calculation unit 115 calculates the vertical relative position as the relative position of the moving image area on the screen.
  • a first horizontal screen 23, a second horizontal screen 25, and a third horizontal screen 27 are shown as page layout candidates after being rotated horizontally.
  • in the first landscape screen 23, the moving image is displayed at the top of the screen.
  • in the third landscape screen 27, the moving image area 28 is at the bottom of the screen.
  • in the second landscape screen 25, the moving image area 26 is located between the sentences (that is, in the middle).
  • the layout unit 112 calculates the relative position of the moving image region for each candidate, and selects the one closest to the relative position before rotation.
  • the layout unit 112 adopts, as the layout after rotation, the third landscape screen 27, whose relative position is closest to 1, the relative position before rotation.
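the candidate selection described above (pick the post-rotation layout whose movie relative position is closest to the pre-rotation value) can be sketched as a one-liner; the `(name, position)` pair representation is an assumption for illustration:

```python
def choose_layout(candidates, position_before):
    """candidates: list of (name, movie_relative_position) pairs for the
    post-rotation layout candidates; return the pair whose relative
    position is closest to the pre-rotation value."""
    return min(candidates, key=lambda c: abs(c[1] - position_before))
```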
  • FIG. 4 is a diagram illustrating an example in which priority is specified for moving image data.
  • a tag 41, a tag 42, and a tag 43 are shown.
  • moving image data is described by a <movie> tag, and its priority is specified by an attribute value called priority.
  • the tag 41 has the moving image data “movie1.mp4” and the priority thereof is 1.
  • the tag 42 has moving image data “movie2.mp4” and a priority of 10.
  • the tag 43 has moving image data “movie3.mp4” and a priority of 50.
  • when these moving images are played back at the same time, the moving image selection unit 114 selects movie1.mp4. For example, when movie2.mp4 and movie3.mp4 are played back at the same time, the moving image selection unit 114 selects movie2.mp4.
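Under the reading that a smaller priority value means higher priority (inferred from the examples above, where priority 1 beats 10 and 10 beats 50), the selection could be sketched as:

```python
def select_by_priority(playing):
    """Among simultaneously playing movies, pick the one with the smallest
    `priority` attribute value (1 beats 10, which beats 50)."""
    return min(playing, key=lambda m: m["priority"])

movies = {
    "movie1.mp4": {"name": "movie1.mp4", "priority": 1},   # tag 41
    "movie2.mp4": {"name": "movie2.mp4", "priority": 10},  # tag 42
    "movie3.mp4": {"name": "movie3.mp4", "priority": 50},  # tag 43
}
print(select_by_priority(movies.values())["name"])  # movie1.mp4
print(select_by_priority([movies["movie2.mp4"], movies["movie3.mp4"]])["name"])  # movie2.mp4
```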
  • the moving image selection unit 114 may select one moving image from among a plurality of moving images that are simultaneously reproduced based on the user's line of sight. Specifically, for example, when the moving image viewing determination unit 130 determines the viewing state using the user's line of sight, the moving image selection unit 114 may use the most recently determined probability of viewing each moving image as-is and select the moving image with the highest probability. That is, the viewing index is the user's line of sight, and based on the user's line-of-sight direction detected by the line-of-sight detection unit (second line-of-sight detection unit) 120, the moving image selection unit 114 may select one of the plurality of moving images included in the content as the currently viewed moving image.
  • the moving image selection unit 114 may select one moving image from a plurality of moving images that are simultaneously played based on the display size of the moving image.
  • in view of the high possibility that a large moving image is the one being viewed, the moving image selection unit 114 may select the moving image with the largest display size from among the plurality of moving images that are simultaneously played. That is, the viewing index is, for example, the displayed size of the moving image, and based on that size the moving image selection unit 114 may select one of the plurality of moving images included in the content as the currently viewed moving image.
  • FIG. 5 is a diagram illustrating an example in which a display size is specified for moving image data.
  • a tag 51, a tag 52, and a tag 53 are shown.
  • the horizontal width of the display size is specified by the attribute value “width”
  • the vertical width of the display size is specified by the attribute value “height”.
  • the size designation method includes a method of designating by the number of pixels and a method of designating by a relative value with respect to the screen size.
  • the moving image selection unit 114 calculates the display size that is actually displayed on the screen.
  • when only one of the width and height is specified, the moving image selection unit 114 calculates the other value according to the aspect ratio of the moving image content. For example, if the screen is 600 pixels wide and 1024 pixels tall, and the aspect ratio of all the moving image content is 4:3, movie1.mp4 is displayed at 300 × 225 pixels and movie3.mp4 at 273 × 204 pixels. Therefore, when these two moving images are being played back simultaneously, the moving image selection unit 114 selects movie1.mp4.
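A rough sketch of this size-based comparison follows. The helper names and the use of displayed area as the comparison metric are assumptions; the patent only says that the moving image with the largest display size is selected. The pixel values are the ones worked out in the text for the 600 × 1024 pixel screen:

```python
def displayed_size(width_px, aspect=(4, 3)):
    """When only the width is given, derive the height from the movie's
    aspect ratio (4:3 here), as the selection unit is described as doing."""
    w_ratio, h_ratio = aspect
    return width_px, width_px * h_ratio // w_ratio

def select_largest(movies):
    """Pick the movie with the largest displayed area, on the stated
    assumption that a large movie is the one most likely being watched."""
    return max(movies, key=lambda m: m["w"] * m["h"])

m1 = {"name": "movie1.mp4", "w": 300, "h": 225}
m3 = {"name": "movie3.mp4", "w": 273, "h": 204}
print(displayed_size(300))               # (300, 225)
print(select_largest([m1, m3])["name"])  # movie1.mp4
```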
  • FIG. 6 is a schematic block diagram of the line-of-sight detection unit 120 in the present embodiment.
  • the line-of-sight detection unit 120 includes a feature amount extraction unit 121, a face area specifying unit 122, a parts specifying unit 123, a black eye region specifying unit 124, an eyeball center radius calculation unit 125, a black eye center position calculation unit 127, and a line-of-sight direction calculation unit 128.
  • based on the image data input from the imaging unit 103, the feature amount extraction unit 121 extracts feature amounts such as color (skin color) and boundaries (contours) in the image indicated by the image data. Then, the feature amount extraction unit 121 outputs the extracted feature amounts to the face area specifying unit 122.
  • the face area specifying unit 122 specifies the face area based on the feature amount input from the feature amount extraction unit 121, and outputs information indicating the position of the face image area obtained by the specification to the parts specifying unit 123.
  • the parts specifying unit 123 specifies the positions of facial parts such as eyes, nose and mouth.
  • the part specifying unit 123 outputs the face area position information R indicating the position of the face image area specifying the position of the face part and the eye position information P indicating the eye position to the video viewing determination unit 130.
  • the part specifying unit 123 outputs eye position information indicating the position of the eyes to the black eye region specifying unit 124 and the eyeball center radius calculating unit 125.
  • by checking whether face parts such as eyes, nose, and mouth are present, the parts specifying unit 123 can more accurately determine whether or not the face image area specified by the face area specifying unit 122 actually corresponds to a face.
  • the black eye area specifying unit 124 specifies a black eye area among the eye areas based on the eye position information input from the parts specifying unit 123.
  • the black eye area specifying unit 124 outputs black eye position information indicating the position of the specified black eye area to the black eye center position calculating unit 127.
  • the eyeball center radius calculation unit 125 extracts the positions of the inner and outer corners of the eyes based on the eye position information input from the parts specifying unit 123, and calculates the center position and the radius of the eyeball based on the extracted corner positions.
  • the eyeball center radius calculation unit 125 outputs information indicating the calculated center position of the eyeball and the radius of the eyeball to the gaze direction calculation unit 128.
  • the black eye center position calculating unit 127 specifies the center position of the black eye based on the black eye position information input from the black eye region specifying unit 124. Specifically, the black eye center position calculation unit 127 calculates, for example, the center of the ellipse as a black eye center position when the black eye is approximated as an ellipse from the position of the black eye region indicated by the black eye position information. Then, the black eye center position calculation unit 127 outputs black eye center position information indicating the specified black eye center position to the line-of-sight direction calculation unit 128.
  • the line-of-sight direction calculation unit 128 includes the eyeball center position and the eyeball radius indicated by the information input from the eyeball center radius calculation unit 125 and the black eye center position indicated by the black eye center position information input from the black eye center position calculation unit 127. Based on the above, the direction from the eyeball center to the black eye center is detected as the line-of-sight direction SL.
  • the gaze direction calculation unit 128 outputs gaze direction information SL indicating the detected gaze direction to the moving image viewing determination unit 130.
  • FIG. 7 is a schematic block diagram of the moving image viewing determination unit 130 in the present embodiment.
  • the video viewing determination unit 130 includes a face area size calculation unit 131, a distance calculation unit 132, a horizontal pixel interval calculation unit 133, an eye position calculation unit 134, a gaze determination unit 135, a probability calculation unit 136, and a viewing determination unit 137.
  • the face area size calculation unit 131 calculates the size of the face image area based on the position of the face image area indicated by the face area position information R input from the line-of-sight detection unit 120. Specifically, for example, when the outline of the face image area is approximated as an ellipse, the face area size calculation unit 131 calculates the major axis of the face image area as the size of the face image area. Then, the face area size calculation unit 131 outputs face size information indicating the calculated size of the face image area to the distance calculation unit 132. Note that the face area size calculation unit 131 may calculate the minor axis of the face image area as the size of the face image area.
  • the distance calculation unit 132 calculates the distance d from the lens included in the imaging unit 103 to the face, assuming that the actual face size of the user is the average face size.
  • the distance calculation unit 132 may calculate the distance d from the lens included in the imaging unit 103 to the face based on the focal length f of the imaging unit 103, the average minor axis of a face (the actual minor axis), and the calculated minor axis of the face region (the minor axis on the image).
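This is the standard pinhole-camera relation. The sketch below assumes all lengths are expressed in the same (sensor-side) units; the numeric values are purely illustrative and not from the disclosure:

```python
def distance_to_face(focal_length, actual_minor_axis, image_minor_axis):
    """Pinhole-camera estimate of the lens-to-face distance d.
    The face's minor axis on the image relates to its assumed average
    actual size by image / f = actual / d, hence d = f * actual / image.
    All lengths must be in the same units (e.g. millimetres)."""
    return focal_length * actual_minor_axis / image_minor_axis

# Illustrative numbers only: a 4 mm lens, an assumed average face minor
# axis of 160 mm, and a face spanning 2 mm on the sensor.
print(distance_to_face(4.0, 160.0, 2.0))  # 320.0
```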
  • the horizontal pixel interval calculation unit 133 calculates the horizontal pixel interval a between the eye position on the image (for example, the center position of the black eye on the image) indicated by the eye position information P input from the line-of-sight detection unit 120 and the center position of the image.
  • the horizontal pixel interval calculation unit 133 outputs horizontal pixel interval information indicating the calculated horizontal pixel interval a to the eye position calculation unit 134.
  • the eye position calculation unit 134 holds information indicating the focal length f of the imaging unit 103 in advance.
  • based on the horizontal pixel interval a indicated by the horizontal pixel interval information input from the horizontal pixel interval calculation unit 133, the focal length f of the imaging unit 103, and the distance d indicated by the distance information input from the distance calculation unit 132, the eye position calculation unit 134 calculates the eye position with the center position of the lens as the origin. Then, the eye position calculation unit 134 outputs eye position information b indicating the calculated eye position to the line-of-sight determination unit 135.
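The eye coordinate again follows from similar triangles. A sketch under the assumption that the interval a has been converted to sensor-plane units, with purely illustrative numbers:

```python
def eye_coordinate(pixel_offset, distance, focal_length):
    """Horizontal eye coordinate with the lens centre as the origin.
    By similar triangles, a / f = x / d, hence x = a * d / f.
    `pixel_offset` (a) must already be converted to sensor-plane units."""
    return pixel_offset * distance / focal_length

# Illustrative numbers only: a = 0.5 mm on the sensor, d = 400 mm,
# f = 4 mm  ->  the eye lies 50 mm off the optical axis.
print(eye_coordinate(0.5, 400.0, 4.0))  # 50.0
```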
  • based on the eye position indicated by the eye position information b input from the eye position calculation unit 134 and the line-of-sight direction indicated by the line-of-sight direction information SL input from the line-of-sight detection unit 120, the line-of-sight determination unit 135 determines whether or not the line of sight is within the display area of the moving image. The line-of-sight determination unit 135 outputs the determination result to the probability calculation unit 136 at predetermined intervals (for example, every 0.1 seconds).
  • the probability calculation unit 136 calculates the probability that the line of sight is within the display area of the moving image. For example, when the line-of-sight determination unit 135 determines at 0.1-second intervals (10 determinations per second) for 5 seconds (that is, 50 times) whether or not the line of sight is within the moving image display area, and the line of sight is within the area 35 of those times, the probability calculation unit 136 calculates the probability of watching the moving image as 70%. The probability calculation unit 136 outputs probability information indicating the calculated probability to the viewing determination unit 137.
  • the probability calculation unit 136 may weight the determination results when calculating the probability, for example so that the probability of watching the moving image becomes higher. Further, although the line-of-sight determination unit 135 determines whether or not the line of sight is within the moving image display area, it is not limited to this and may instead determine whether or not the line of sight is within the display area of the display unit 106. In that case, the probability calculation unit 136 may calculate the probability that the line of sight is within the display area of the display unit 106.
  • the viewing determination unit 137 determines whether or not the user is watching a moving image based on the probability indicated by the probability information input from the probability calculation unit 136. Specifically, the viewing determination unit 137 determines that the moving image is not being viewed when the probability is lower than a predetermined threshold, and that it is being viewed when the probability is equal to or greater than the threshold. The viewing determination unit 137 outputs the determination result to the moving image selection unit 114.
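The probability computation and threshold comparison described above might look like the following sketch. The 0.5 threshold is an assumed value; the patent only says "a predetermined threshold":

```python
def viewing_probability(samples):
    """Fraction of gaze determinations that fell inside the movie area."""
    return sum(samples) / len(samples)

def is_viewing(samples, threshold=0.5):
    """Viewing determination: probability >= threshold means 'viewing'."""
    return viewing_probability(samples) >= threshold

# 5 seconds sampled every 0.1 s: 50 determinations, 35 inside the movie
# display area -> probability 0.7 (70%).
samples = [True] * 35 + [False] * 15
print(viewing_probability(samples))  # 0.7
print(is_viewing(samples))           # True
```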
  • FIG. 8 is a diagram for explaining the eye position and the line-of-sight direction with the center position of the lens as the origin.
  • FIG. 8 illustrates the electronic book apparatus 100, which includes the lens 61 of the imaging unit 103 and the display unit 106, together with the user's left eye 62.
  • the distance between the display area of the display unit 106 and the user's left eye 62 is d1.
  • the position of the user's left eye is indicated by coordinates (x1, y1, d1).
  • the lens 61 is disposed on the same surface as the display unit 106.
  • the positions of the lens 61 and the imaging unit 103 are merely examples, and may be arranged at positions where the user's eyes can be imaged.
  • the line-of-sight determination unit 135 determines whether the intersection is within the moving image display area based on the calculated intersection and the moving image absolute position. When the calculated intersection is within the moving image display area, the line-of-sight determination unit 135 determines that the line of sight is within the moving image display area. On the other hand, if the intersection is outside the moving image display area, the line-of-sight determination unit 135 determines that the line of sight is not within the moving image display area.
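One plausible formalization of this intersection test follows; the coordinate conventions (lens centre as origin, screen in the plane z = 0, gaze direction with negative z toward the screen) are assumptions, as the patent does not fix them:

```python
def gaze_screen_intersection(eye, direction):
    """Intersect the gaze ray with the screen plane z = 0.
    `eye` is (x, y, d): eye coordinates with the lens centre as origin,
    d being the distance to the screen surface. `direction` points from
    the eye toward the screen, so its z component must be negative."""
    ex, ey, ez = eye
    dx, dy, dz = direction
    t = -ez / dz                      # ray parameter at the screen plane
    return (ex + t * dx, ey + t * dy)

def inside_movie_area(point, area):
    """Point-in-rectangle test against the movie's absolute position,
    given as (xmin, ymin, xmax, ymax) in the same screen coordinates."""
    x, y = point
    xmin, ymin, xmax, ymax = area
    return xmin <= x <= xmax and ymin <= y <= ymax

# Eye 400 units in front of the screen, looking straight ahead at (10, -20).
p = gaze_screen_intersection((10.0, -20.0, 400.0), (0.0, 0.0, -1.0))
print(p)                                       # (10.0, -20.0)
print(inside_movie_area(p, (0, -50, 100, 0)))  # True
```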
  • FIG. 9 is a flowchart illustrating an example of processing of a basic part of the electronic book device 100 of the present embodiment.
  • the user input unit 101 receives a user input indicating a book selected by the user, and outputs book information indicating the book to the layout unit 112 (step S101).
  • the layout unit 112 acquires the book data stored in the storage unit 104 via the data management unit 116 for the selected book (step S102).
  • the layout unit 112 determines the layout of the document according to the orientation of the own device and the size of the display included in the display unit 106, and displays the book data on the display unit 106 with the determined layout (step S103).
  • the user input unit 101 receives a user input indicating the start of playback of the moving image data (step S104). Then, the user input unit 101 outputs the user input to the moving image playback unit 111.
  • the moving image reproduction unit 111 acquires a moving image before decoding from the storage unit 104, and decodes the acquired moving image. Then, the moving image reproducing unit 111 causes the display unit 106 to reproduce the decoded moving image (step S105). Next, when the user performs a page turning operation, the user input unit 101 receives a user input indicating page turning and outputs the received user input to the layout unit 112 (step S106).
  • the layout unit 112 acquires the next page data from the storage unit 104 via the data management unit 116 (step S107). Then, the layout unit 112 determines the layout in the same manner as described above, and causes the display unit 106 to display the next page data with the determined layout (step S108). At that time, the display unit 106 performs the screen transition by displaying, for example, a page-turning animation. This completes the processing of this flowchart.
  • FIG. 10 is a flowchart illustrating an example of processing when the orientation of the device itself changes in the electronic book device 100 of the present embodiment.
  • when the orientation detection unit 102 detects that the orientation of the own device has changed, the following processing is executed.
  • the moving image viewing determination unit 130 determines whether or not the user is viewing a moving image (S201). If the currently displayed page includes a plurality of moving images, the moving image viewing determination unit 130 determines for each moving image individually whether or not it is being viewed. When there is at least one moving image for which the probability that the user's line of sight is within its display area is equal to or greater than a predetermined threshold, the moving image viewing determination unit 130 determines that the user is viewing a moving image.
  • when the moving image viewing determination unit 130 determines in step S201 that a moving image is being viewed (step S201: YES), the rotation is performed with reference to the position where the moving image is displayed on the screen, so that the moving image remains displayed after the rotation.
  • the moving image selection unit 114 selects one moving image by priority (step S202). Even when a plurality of moving images are being viewed, one moving image is selected in this step, and thereafter the same processing as in the case of a single moving image is performed.
  • the layout unit 112 determines the layout of the rotated page based on the acquired relative position of the moving image area (step S204). Specifically, the layout unit 112 determines the layout so that the position of the moving image area is as close as possible to its position before the rotation, in order to make the rotation appear to have been performed about the position where the moving image area is displayed.
  • the page position holding unit 113 stores the position of the page before the rotation (step S205). For example, in FIG. 2, the page position holding unit 113 stores information for specifying “K” that is the first character of the page before rotation. This information is used when returning to the state before the rotation after the direction is changed, and specific processing will be described later.
  • the layout unit 112 displays the rotated page on the screen based on the determined layout (step S207).
  • the moving image reproducing unit 111 continuously reproduces the moving image, so that the user can seamlessly view the moving image.
  • when the moving image viewing determination unit 130 determines in step S201 that no moving image is being viewed (step S201: NO), the rotation can be handled as for an ordinary book, so the rotation is performed with reference to the first character of the page. Accordingly, the position of the rotated page is set to the same position as the current position (step S206). That is, the rotated page is laid out so that its first character is "K", the first character of the current page, and the layout unit 112 displays the rotated page on the screen (step S207). In this case, for example, the landscape display screen 72 shown in FIG. 12 is displayed. This completes the processing of this flowchart.
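The FIG. 10 decision flow can be summarized in a hypothetical sketch; the data structures and the returned dictionary are invented purely for illustration of the branch structure (S201 → S202/S204/S205 on the YES path, S206 on the NO path):

```python
def on_orientation_change(viewing_movies, movie_positions, current_head):
    """FIG. 10 in outline: if at least one moving image is being watched
    (S201: YES), select one by priority (S202) and lay the page out around
    its relative position (S204), remembering the pre-rotation head (S205);
    otherwise keep the current head position (S206)."""
    if viewing_movies:
        movie = min(viewing_movies, key=lambda m: m["priority"])   # S202
        return {"anchor": "movie",
                "target": movie_positions[movie["name"]],          # S204
                "saved_head": current_head}                        # S205
    return {"anchor": "head", "target": current_head}              # S206

state = on_orientation_change(
    [{"name": "movie1.mp4", "priority": 1}],
    {"movie1.mp4": 0.75},
    current_head="K",
)
print(state["anchor"], state["target"])  # movie 0.75
```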
  • the electronic book device 100 determines the layout so that the relative position of the moving image area after rotation is as close as possible to the relative position before rotation. As a result, the electronic book device 100 can maintain the relative position of the moving image area even when the device itself is rotated while watching the moving image, and thus can continuously display the moving image. As a result, even if the electronic book apparatus 100 is rotated, a moving image can be viewed seamlessly, and user convenience can be improved. Therefore, the user can comfortably view the electronic book in which the moving image is incorporated.
  • since the electronic book device 100 keeps the relative on-screen position at which the moving image is displayed the same before and after the rotation, the user can easily associate the moving image before the rotation with the moving image after it. As a result, the user can comfortably view the electronic book in which the moving image is incorporated.
  • when a moving image is being viewed, the electronic book apparatus 100 rotates the display screen based on the position where the moving image is displayed, so the page position changes. Therefore, if the electronic book apparatus 100 is returned to its previous orientation immediately after being rotated, the displayed position may be shifted from the original page. The processing of the electronic book device 100 when the orientation of the device is restored immediately after being changed will therefore be described with reference to FIG. 11.
  • FIG. 11 is a flowchart illustrating an example of a processing flow of the electronic book device 100 when the orientation of the device itself is restored immediately after the orientation of the device itself is changed.
  • the following processing is executed when the orientation of the terminal is returned immediately after execution of the rotation processing shown in FIG. 10.
  • the following processing is executed when the device is rotated from the vertical orientation (0°) to the horizontal orientation (90°) and then returned from the horizontal orientation (90°) to the vertical orientation (0°).
  • the layout unit 112 determines whether or not the page position before rotation is stored in the page position holding unit 113 (step S301).
  • if the page position is stored (step S301: YES), the layout unit 112 acquires the stored page position from the page position holding unit 113 (step S302).
  • the layout unit 112 sets this page position as the position of the page after rotation (step S303). That is, the layout unit 112 performs a page layout with a designated element (character, image, moving image, etc.) as the head as a rotated page. The page generated as a result is the same page as before the rotation.
  • the layout unit 112 displays the rotated page on the display unit 106 (step S304). Thereby, the layout unit 112 can return to the same display state as before the rotation.
  • the page position holding unit 113 deletes the information (step S305).
  • if the page start position is not stored in the page position holding unit 113 (step S301: NO), the page position was not moved during the previous rotation, so normal rotation processing is performed. Specifically, as in the processing shown in FIG. 10, the layout unit 112 sets the page position after the rotation to the same position as the current position (step S306). The layout unit 112 then displays the rotated page on the display unit 106 with this layout (step S307). Thereby, the layout unit 112 can display a page rotated with reference to the first character of the page.
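The FIG. 11 restore logic, in outline. This is a hypothetical sketch; the holder structure is invented for illustration of steps S301 through S306:

```python
def rotate_back(page_position_holder, current_position):
    """FIG. 11 in outline: if a pre-rotation page position was stored
    (step S301: YES), restore it and delete it (S302, S303, S305);
    otherwise perform normal rotation, keeping the current head (S306)."""
    stored = page_position_holder.pop("position", None)
    if stored is not None:
        return stored
    return current_position

holder = {"position": "K"}       # "K": first character of the pre-rotation page
print(rotate_back(holder, "M"))  # K  (restored to the pre-rotation page)
print(rotate_back(holder, "M"))  # M  (nothing stored -> normal rotation)
```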
  • the electronic book device 100 executes the process of FIG. 11 only when the orientation of the terminal is returned to the original orientation immediately after the rotation of the device itself.
  • otherwise, the electronic book device 100 executes the rotation processing of FIG. 10 without executing the processing of FIG. 11. That is, the electronic book apparatus 100 basically executes the processing of FIG. 10 when the own apparatus is rotated, and executes the special processing shown in FIG. 11 only when the orientation of the own apparatus is returned.
  • in this way, the electronic book device 100 can give the operation of returning the orientation of the own device the meaning of returning to the original (pre-rotation) state. Therefore, even when a plurality of moving images are being played back and the moving image preferentially displayed by the device differs from the user's intention, the user is guaranteed to return to the original state simply by returning the orientation of the terminal, so the user can use the electronic book apparatus 100 without a sense of incongruity.
  • although the electronic book apparatus 100 has been described as an example, the present invention is not restricted to this; any display apparatus provided with a display screen, such as a mobile telephone apparatus, a smartphone, an information terminal device, or a tablet PC, may be used.
  • the “computer system” referred to here may include an OS and hardware such as peripheral devices. Further, the “computer system” includes a homepage providing environment (or display environment) if a WWW system is used.
  • the “computer-readable recording medium” means a storage device such as a flexible disk, a magneto-optical disk, a ROM, a writable nonvolatile memory such as a flash memory, a portable medium such as a CD-ROM, or a hard disk built into a computer system.
  • the “computer-readable recording medium” also includes media that hold a program for a certain period of time, such as a volatile memory (for example, a DRAM (Dynamic Random Access Memory)) inside a computer system serving as a server or a client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
  • the program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium.
  • the “transmission medium” for transmitting the program refers to a medium having a function of transmitting information, such as a network (communication network) like the Internet or a communication line like a telephone line.
  • the program may realize only a part of the functions described above. Furthermore, it may be a so-called difference file (difference program) that realizes the functions described above in combination with a program already recorded in the computer system.
  • the present invention can be applied to terminals such as mobile phones, smartphones, and tablets that can display electronic books.
  • DESCRIPTION OF SYMBOLS: 100 Electronic book apparatus; 101 User input unit; 102 Orientation detection unit; 103 Imaging unit; 104 Storage unit; 105 Communication unit; 110 Control unit; 111 Moving image reproduction unit; 112 Layout unit; 113 Page position holding unit; 114 Moving image selection unit; 115 Moving image display position calculation unit; 116 Data management unit; 120 Line-of-sight detection unit; 121 Feature amount extraction unit; 122 Face area specifying unit


Abstract

A display device according to an embodiment of the present invention has a control unit that, when the orientation of the device changes, sets the screen layout after the change so as to display the whole of at least one video contained in the content.

Description

Display device, display method, and display program
The present invention relates to a display device, a display method, and a display program.
This application claims priority based on Japanese Patent Application No. 2012-006263 filed in Japan on January 16, 2012, the contents of which are incorporated herein by reference.
In recent years, portable terminals such as mobile phones, smartphones, and tablets have become widespread. Along with this, so-called electronic book services, which allow users to read books on a terminal's display, are spreading. Compared with conventional paper books, electronic books have advantages such as allowing a large number of books to be carried at once and allowing multimedia data such as video and audio to be played, and they are expected to become even more popular in the future.
In particular, content that incorporates multimedia data such as video and audio into an electronic book enables expression that could not be realized with conventional paper. Therefore, such content can be said to benefit both content creators and users. For example, interactive content is conceivable in which, when a user selects a specific character or image, a corresponding moving image is played. Content in which a moving image is automatically played when a specific page is displayed is also conceivable.
For electronic book content incorporating such moving image data, conceivable ways of displaying a moving image include displaying it inline (embedded in place) within the book, or displaying it full-screen. For example, Patent Document 1 discloses an information display device that, when displaying a moving image, displays it directly on the display area of a still image so as not to affect the layout of the other characters on the screen.
JP 2006-53606 A
Some terminals such as smartphones and tablets use a built-in acceleration sensor to switch the display according to the orientation (portrait/landscape) of the terminal. Therefore, when such a terminal displays an electronic book, it is also required to display the content in a form that matches the orientation of the terminal. That is, when the orientation of the terminal is changed (rotated) to landscape while a certain page is displayed in portrait orientation, processing to display the same page on a landscape screen is necessary.
However, when the content includes a moving image and the orientation of the terminal changes, depending on the content, the moving image may end up outside the display area after the orientation change and thus not be displayed on the new screen. In particular, when the orientation of the terminal changes during playback of a moving image, the moving image may be located outside the display area after the change and not be displayed. As a result, the user cannot comfortably view content that includes moving images.
Therefore, the present invention provides a display device, a display method, and a display program that enable comfortable viewing of content including moving images.
(1) A display device according to one aspect of the present invention includes a control unit that, when the orientation of the device changes, determines the post-change screen layout so that at least one moving image included in the content is displayed on the screen in its entirety.
(2) The above display device may include a display unit that displays content including a moving image on a screen, and an orientation detection unit that detects a change in the orientation of the device within the plane containing the screen. The control unit may be configured to determine the post-change screen layout so that at least one moving image included in the content is displayed on the screen in its entirety when the orientation detection unit detects a change in orientation.
(3) In the above display device, the control unit may include: a moving image viewing determination unit that determines whether the user is viewing a moving image displayed on the display unit; and a layout unit that, when the orientation detection unit detects a change in orientation and the moving image viewing determination unit determines that a moving image is being viewed, determines the post-change screen layout based on the on-screen position at which that moving image is displayed.
(4) In the above display device, the control unit may include a page position holding unit that, when the orientation detected by the orientation detection unit changes, holds the page position that was being displayed and the direction of the orientation change. The layout unit may be configured to determine the post-change screen layout based on the page position held by the page position holding unit when the orientation change detected by the orientation detection unit is opposite to the change direction held by the page position holding unit.
(5) In the above display device, the moving image viewing determination unit may be configured to determine that a moving image is being viewed at least while the moving image is being played back.
(6) In the above display device, the control unit may include a line-of-sight detection unit that detects the user's line of sight. The moving image viewing determination unit may be configured to determine whether the user is viewing the moving image based on the playback state of the moving image and the line-of-sight direction detected by the line-of-sight detection unit.
(7) In the above display device, the control unit may include a moving image selection unit that selects, based on a viewing index, one moving image from among a plurality of moving images included in the content as the moving image being viewed.
(8) In the above display device, the viewing index may be a priority described in the document data for each moving image. The moving image selection unit may be configured to select, based on the priority of each moving image, one moving image from among the plurality of moving images included in the content as the moving image being viewed.
(9) In the above display device, the viewing index may be the display size of the moving image. The moving image selection unit may be configured to select, based on the display size, one moving image from among the plurality of moving images included in the content as the moving image being viewed.
(10) In the above display device, the viewing index may be the user's line of sight. The control unit may include a second line-of-sight detection unit that detects the user's line-of-sight direction. The moving image selection unit may be configured to select, based on the detected line-of-sight direction, one moving image from among the plurality of moving images included in the content as the moving image being viewed.
(11) A display method according to another aspect of the present invention is a display method executed by a display device, and includes a step of determining, when the orientation of the device changes, the post-change screen layout so that at least one moving image included in the content is displayed on the screen in its entirety.
(12) A display program according to another aspect of the present invention causes a computer of a display device to execute a step of determining, when the orientation of the device changes, the post-change screen layout so that at least one moving image included in the content is displayed on the screen in its entirety.
According to the aspects of the present invention, content including a moving image can be viewed comfortably.
FIG. 1 is a schematic block diagram of an electronic book device according to an embodiment of the present invention.
FIG. 2 is a schematic block diagram showing the logical configuration of the control unit in the embodiment.
FIG. 3 shows an example of page layouts before and after rotation to landscape orientation.
FIG. 4 shows an example in which priorities are specified for moving image data.
FIG. 5 shows an example in which display sizes are specified for moving image data.
FIG. 6 is a schematic block diagram of the line-of-sight detection unit in the embodiment.
FIG. 7 is a schematic block diagram of the moving image viewing determination unit in the embodiment.
FIG. 8 is a diagram for explaining the eye position and line-of-sight direction with the center of the lens as the origin.
FIG. 9 is a flowchart showing an example of the basic processing of the electronic book device of the embodiment.
FIG. 10 is a flowchart showing an example of the processing performed when the orientation of the electronic book device of the embodiment changes.
FIG. 11 is a flowchart showing an example of the processing flow of the electronic book device when the orientation of the device is returned to its original orientation immediately after being changed.
FIG. 12 is a diagram for explaining a first comparative example, in which pages are generated dynamically according to the display size of the electronic book device.
FIG. 13 is a diagram for explaining a second comparative example, in which two pages are displayed side by side when the electronic book device is turned to landscape orientation.
Hereinafter, an embodiment of the present invention will be described in detail with reference to the drawings. In this embodiment, an electronic book device that displays electronic books is described as an example of a display device for displaying electronic books.
First, comparative examples against which the electronic book device of this embodiment is compared will be described with reference to FIGS. 12 and 13. Several display methods for electronic books are conceivable depending on the type of content; examples include a method of dynamically generating pages according to the display size of the electronic book device (FIG. 12) and a method of displaying a two-page spread when the electronic book device is in landscape orientation (FIG. 13).
FIG. 12 is a diagram for explaining the first comparative example, in which pages are generated dynamically according to the display size of the display device. The figure shows an example of a display screen 71 when the display device is in portrait orientation. On the display screen 71, a text area 71a and a moving image 71b are displayed.
The display screen 72 in the same figure is an example of the screen when the display device showing the display screen 71 is turned to landscape orientation. On the display screen 72, a text area 72a corresponding to the text area 71a is displayed. However, no moving image corresponding to the moving image 71b appears on the display screen 72; the corresponding moving image 73b is instead displayed on the display screen 73 of the following page.
In the display method shown in the figure, page break positions are not fixed; they are determined dynamically according to the result of laying out text, images, and moving images. When the display device is rotated, the content displayed after rotation is basically determined with reference to the first character of the page that was being displayed. Consequently, when the device is turned to landscape orientation and, for example, consecutive line breaks occur as shown in FIG. 12, a moving image that was placed at the bottom of the page may be pushed onto the next page.
As a result, rotating the display device causes the moving image that was playing to disappear from the screen, interrupting the user's viewing and potentially causing considerable dissatisfaction.
Of course, the moving image itself still exists, and it can be played back by displaying the next page; however, this requires a user operation, and the moving image starts over from the beginning, imposing a burden on the user.
FIG. 13 is a diagram for explaining the second comparative example, in which two pages are displayed side by side when the display device is turned to landscape orientation. The figure shows an example of a display screen 81 when the display device is in landscape orientation. On the left-hand page of the display screen 81, a text area 81a is displayed; on the right-hand page, a text area 81b and a moving image 81c are displayed.
The display screen 82 in the same figure is an example of the screen when the display device showing the display screen 81 is turned to portrait orientation. On the display screen 82, a text area 82a is displayed.
No moving image corresponding to the moving image 81c appears on the display screen 82; the corresponding moving image 83b is instead displayed on the display screen 83 of the following page.
In the display method shown in FIG. 13, the layout of each page is fixed, and two pages are displayed side by side in landscape orientation. In this case, if the earlier of the two displayed pages is shown when the device is rotated from landscape to portrait, the moving image that was displayed on the other page is no longer shown, and the same problem as in FIG. 12 arises.
The electronic book device 100 of this embodiment includes a display for showing books, with a touch panel for user operations placed over the display. Using the touch panel, the user can issue instructions such as moving between pages, enlarging or reducing the display, and selecting content. Instead of, or in addition to, the touch panel, the device may include physical buttons for operation.
The electronic book device 100 incorporates an acceleration sensor, and has a function of detecting the orientation of the device and rendering the screen according to that orientation. That is, when the orientation of the electronic book device 100 changes while content is displayed, the device changes the orientation of the content accordingly.
The content addressed in this embodiment is not a book consisting solely of text and images, but content in which at least one image is included. Here, as an example, this embodiment describes content that includes a moving image. Content including a moving image need only contain a moving image in at least part of the content; the entire content may also be a moving image.
In this embodiment, as an example, the moving image is displayed inline within the content, allowing the user to view text and video at the same time. Envisioned content includes, for example, illustrated encyclopedias that explain subjects with video as well as text, and sports instruction books.
Assuming that the user may also wish to view a moving image as the main content, the electronic book device 100 may have a function for switching the display mode so that the moving image can be viewed full-screen. In addition, the moving image data need not actually be contained in the book data; the electronic book device 100 may acquire it from outside via a network at playback time.
FIG. 1 is a schematic block diagram of the electronic book device 100 of this embodiment. The electronic book device 100 includes a user input unit 101, an orientation detection unit 102, an imaging unit 103, a storage unit 104, a communication unit 105, a display unit 106, and a control unit 110.
The user input unit 101 receives input from the user and outputs the received user input to the control unit 110. User input includes operations on the book, such as moving between pages and enlarging or reducing the display, as well as operations on a moving image, such as starting playback, pausing, and skipping. The user input unit 101 is an input device, for example a touch panel.
The orientation detection unit 102 includes, for example, an acceleration sensor; it obtains the orientation of the device and detects that the orientation of the device has changed. That is, the orientation detection unit 102 detects a change in the orientation of the device within the plane containing the screen of the display unit 106.
The orientation detection unit 102 outputs, to the control unit 110, orientation information indicating the obtained orientation of the device and orientation change information indicating a detected change in that orientation.
The imaging unit 103, under the control of the control unit 110, captures images of the user's face while a moving image is being played on the display unit 106, and outputs the image data obtained by imaging to the control unit 110. The imaging unit 103 is provided, for example, on the front face of the device (the face on which the display unit is located).
The storage unit 104 holds book data representing the text contained in a book, and the moving image data contained in the book.
The communication unit 105, under the control of the control unit 110, communicates wirelessly with other devices on a network via its wireless network interface to send and receive data. Wireless communication is used on the assumption that the electronic book device is carried around; it is, for example, mobile communication such as 3G (3rd Generation) or LTE (Long Term Evolution), or a wireless LAN.
The communication in the communication unit 105 may also be wired.
When the orientation of the electronic book device 100 changes, the control unit 110 determines the post-change screen layout so that at least one moving image included in the content is displayed on the screen in its entirety. More specifically, when the orientation detection unit 102 detects a change in orientation, the control unit 110 determines the post-change screen layout so that the entire moving image is displayed on the screen. The specific processing of the control unit 110 is described later.
The control unit 110 also performs the following processing. As an example, when the orientation of the electronic book device 100 changes, the control unit 110 keeps the post-change moving image size the same as the pre-change moving image size. Here, the moving image size is the size of the moving image as displayed on the screen.
In accordance with user input from the user input unit 101, the control unit 110 performs processing on the book or on a moving image. Processing on the book is, for example, moving between pages or enlarging or reducing the display; processing on a moving image is, for example, starting playback, pausing, or skipping.
When orientation change information is input from the orientation detection unit 102, the control unit 110 determines the layout of text and moving images according to the device orientation input from the orientation detection unit 102. The control unit 110 then reads the book data and moving image data from the storage unit 104 and causes the display unit 106 to display them with the determined layout.
While a moving image is being played on the display unit 106, the control unit 110 causes the imaging unit 103 to capture images of the user of the device, and determines, based on the captured images, whether the user is viewing the moving image being played. The control unit 110 also controls the communication unit 105 to acquire document data or moving image data stored on other devices on the network.
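As a purely illustrative sketch (not code disclosed in this embodiment), the layout decision described above can be expressed as a check that the moving image's rectangle lies entirely within the post-change screen bounds, with candidate page-start positions tried until one passes. All names (`Rect`, `fits_on_screen`, `choose_page_start`) and the coordinate convention are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int  # left edge, pixels from the screen's left
    y: int  # top edge, pixels from the screen's top
    w: int  # width in pixels
    h: int  # height in pixels

def fits_on_screen(video: Rect, screen_w: int, screen_h: int) -> bool:
    """True if the whole moving image rectangle lies inside the screen."""
    return (video.x >= 0 and video.y >= 0
            and video.x + video.w <= screen_w
            and video.y + video.h <= screen_h)

def choose_page_start(candidates, screen_w: int, screen_h: int):
    """candidates: (start_offset, video_rect) pairs, giving the video's
    position when the page is laid out from each candidate start offset.
    Return the first start offset whose layout shows the entire video."""
    for start, rect in candidates:
        if fits_on_screen(rect, screen_w, screen_h):
            return start
    return candidates[0][0]  # fall back to the default layout
```

In this sketch, a 400×200 video at the bottom of an 800-pixel-tall portrait page no longer fits after rotation to a 480-pixel-tall landscape screen, so a page start that moves the video higher on the page is selected instead.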
Note that when the orientation of the electronic book device 100 changes, the control unit 110 may change the on-screen size of at least one image included in the content so that it fits on the post-change screen. Alternatively, when the orientation changes, the control unit 110 may keep, for at least one moving image included in the content, the post-change moving image size the same as the pre-change moving image size.
The display unit 106 displays the moving image or book information input from the control unit 110. Here, the book information is the text contained in the electronic book. That is, the display unit 106 displays content including a moving image on the screen.
FIG. 2 is a schematic block diagram showing the logical configuration of the control unit 110 in this embodiment. The control unit 110 includes a moving image playback unit 111, a layout unit 112, a page position holding unit 113, a moving image selection unit 114, a moving image display position calculation unit 115, a data management unit 116, a line-of-sight detection unit 120, and a moving image viewing determination unit 130.
The moving image playback unit 111 decodes the moving image data embedded in the book and causes the display unit 106 to display it. The moving image playback unit 111 also manages the playback state of the moving image (playing / paused / stopped), and performs processing such as starting playback, pausing, and skipping in accordance with user input from the user input unit 101.
As the moving image data format, the video codec is, for example, H.264 and the container is MP4, but other formats such as MPEG (Moving Picture Experts Group)-2 or WMV (Windows (registered trademark) Media Video) may be used, and audio data may be included. There are two ways in which playback of a moving image included in the content can start: playback starts automatically when the image is displayed, or playback starts when the user touches the moving image area; the user can set either method in advance. Depending on the capability of the device, a plurality of moving images may be decoded simultaneously and the decoded data displayed; in that case, the electronic book device 100 can also play back content in which a plurality of moving images are embedded in one page of the book.
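The two playback-start methods and the three playback states described above can be sketched as a small state machine. This is an illustration only, with assumed names (`MoviePlayback`, `on_displayed`, `on_touched`); the embodiment does not disclose source code.

```python
class MoviePlayback:
    """Illustrative playback-state handler: 'auto' starts playback when the
    moving image is displayed; 'touch' starts it when the user taps it."""

    def __init__(self, start_mode: str):
        self.start_mode = start_mode       # "auto" or "touch", set in advance
        self.state = "stopped"             # one of: stopped / playing / paused

    def on_displayed(self):
        # Automatic start: the moving image begins playing as soon as shown.
        if self.start_mode == "auto":
            self.state = "playing"

    def on_touched(self):
        # Touch start: playback begins only when the user taps the video area.
        if self.start_mode == "touch" and self.state == "stopped":
            self.state = "playing"

    def pause(self):
        if self.state == "playing":
            self.state = "paused"
```

A device able to decode several streams at once would simply hold one such state per embedded moving image.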
The layout unit 112 acquires the book data via the data management unit 116, lays it out according to the orientation of the device and the size of the display, and causes the display unit 106 to display the laid-out data. The book data format is, for example, XMDF (ever-eXtending Mobile Document Format), but other formats such as EPUB (Electronic PUBlication) may be used, as may any format capable of expressing documents, such as HTML (HyperText Markup Language) or XML (Extensible Markup Language).
The display method of a book is specified per content item; here, a display method that dynamically generates pages, as shown in FIG. 12, is assumed. In this case, page break positions are not fixed; the layout unit 112 determines them dynamically according to the result of laying out text, images, and moving images. Conventionally, when the device was rotated by the user, the content displayed after rotation was determined with reference to the first character of the page that was being displayed. However, as shown in FIG. 12, the displayed range differs between the portrait and landscape screens, so the page break positions also change. Therefore, the first character of the next page differs depending on whether the pages were advanced in portrait or in landscape orientation.
Alternatively, as shown in FIG. 13, content may be displayed across two pages when the device housing is in landscape orientation, and on one page when it is in portrait orientation.
The layout unit 112 determines the post-rotation page layout based on the relative position of the moving image area calculated by the moving image display position calculation unit 115, described later. Specifically, to make it appear that the rotation was performed about the position where the moving image area is displayed, the layout unit 112 determines the layout so that the relative position of the moving image area is as close as possible to its relative position before rotation. That is, when the orientation detection unit 102 detects a change in orientation and the moving image viewing determination unit 130 determines that a moving image is being viewed, the layout unit 112 determines the screen layout for the post-change orientation based on the on-screen position at which the moving image is displayed. Details of the processing of the layout unit 112 are described later.
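The "as close as possible to the pre-rotation relative position" criterion above can be sketched as follows. This is an assumed illustration, not the disclosed implementation: the relative position is taken here as the video's top edge divided by screen height, and the layout minimizing the difference from the pre-rotation value is chosen.

```python
def relative_position(video_top: int, screen_h: int) -> float:
    """Vertical position of the moving image area's top edge,
    as a fraction of the screen height (0.0 = top, 1.0 = bottom)."""
    return video_top / screen_h

def closest_layout(candidates, target_rel: float, new_screen_h: int):
    """candidates: (layout_id, video_top) pairs for the new orientation.
    Return the layout whose video position best matches the
    pre-rotation relative position target_rel."""
    return min(
        candidates,
        key=lambda c: abs(relative_position(c[1], new_screen_h) - target_rel),
    )[0]
```

For example, if the video sat three-quarters of the way down an 800-pixel portrait screen, a landscape layout placing it near the lower part of the 480-pixel screen is preferred over one placing it near the top.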
When the orientation detection unit 102 detects that the orientation of the device has changed, the page position holding unit 113 receives orientation change information from the orientation detection unit 102. Upon receiving the orientation change information, the page position holding unit 113 holds the position of the page displayed before rotation and the direction of the orientation change. That is, when the orientation detected by the orientation detection unit changes, the page position holding unit 113 holds the page position that was being displayed and the direction of the change.
As a result, when the user returns the device to its original orientation immediately after rotating it, the layout unit 112 determines the post-change screen layout based on the page position held by the page position holding unit 113, given that the orientation change detected by the orientation detection unit 102 is opposite to the change direction held by the page position holding unit 113.
As an example, when the page begins with a character, the page position holding unit 113 holds the offset value, within the data, of the first character of the page. When the page begins with an element other than a character (an image, a moving image, etc.), the page position holding unit 113 holds the offset value of that element within the data. Because the page position holding unit 113 holds the element's offset value in the data, the layout unit 112 can identify the element.
When returning the device to its pre-rotation state after a rotation, the layout unit 112 uses this offset value to identify the first character or first element of the page. The processing of the layout unit 112 is described in detail later.
The line-of-sight detection unit 120 detects the user's line-of-sight direction SL based on the image captured by the imaging unit 103 during moving image playback, and outputs line-of-sight direction information indicating the detected direction SL to the moving image viewing determination unit 130. The specific processing is described later.
The moving image viewing determination unit 130 determines whether the user is viewing the moving image displayed on the display unit 106. More specifically, while a moving image is playing, the moving image viewing determination unit 130 determines, based on the line-of-sight direction SL indicated by the line-of-sight direction information input from the line-of-sight detection unit 120, whether the user whose line of sight was detected is viewing that moving image. That is, the moving image viewing determination unit 130 determines whether the user is viewing the moving image based on the playback state of the moving image and the line-of-sight direction SL detected by the line-of-sight detection unit 120.
In doing so, the moving image viewing determination unit 130 determines, for example, that the user is not viewing the moving image when the probability that the moving image was being watched over a predetermined period (for example, the last few seconds) is below a predetermined threshold, and that the user is viewing it when that probability is at or above the threshold. The specific processing of the moving image viewing determination unit 130 is described later.
The moving image viewing determination unit 130 outputs the determination result to the moving image selection unit 114.
Note that the moving image viewing determination unit 130 may determine whether the user is viewing the moving image based solely on its playback state: it determines that the user is viewing the moving image if it is playing, and that the user is not viewing it in any other state (stopped, paused, etc.). That is, the moving image viewing determination unit 130 may determine that the moving image is being viewed at least while it is playing.
When the determination result input from the moving image viewing determination unit 130 indicates that a moving image is being viewed and the moving image playback unit 111 is playing back multiple moving images simultaneously, the moving image selection unit 114 selects the one most likely being watched among them. Even if multiple moving images are arranged on the same page, when only one is playing, the moving image selection unit 114 selects the one that is playing. Specifically, the moving image selection unit 114 selects one moving image from among the multiple moving images included in the content as the moving image being viewed, based on a viewing index.
As an example, the moving image selection unit 114 selects a moving image based on a priority included in the content data. The content creator embeds, in the content data at creation time, priority information specifying a priority for each moving image, and the moving image selection unit 114 selects the moving image with the highest priority indicated by that information. That is, the viewing index is, as an example, a per-moving-image priority described in the document data, and the moving image selection unit 114 selects one moving image from among the multiple moving images included in the content as the moving image being viewed, based on these priorities. If the moving images differ in role, such as main content, supplementary content, or mere decoration, the content creator sets the priorities accordingly.
The moving image display position calculation unit 115 calculates the relative position of the moving image area on the screen. Specifically, when the text is written horizontally, the moving image display position calculation unit 115 obtains the vertical relative position of the moving image area. For example, it calculates the relative position from the number of lines of text before and after the moving image area: if L1 is the number of lines of text before the moving image area and L2 the number after it, the relative position is L1 / (L1 + L2). The moving image display position calculation unit 115 then outputs the calculated relative position of the moving image area to the layout unit 112.
The data management unit 116 reads data recorded in the storage unit 104 in response to a request from the layout unit 112 or the moving image playback unit 111, and outputs the read data to the requesting unit. When data on a network is specified as the data reference destination, the data management unit 116 acquires the data via the communication unit 105.
Next, the processing of the moving image display position calculation unit 115 and the layout unit 112 is described in detail with reference to FIG. 3. FIG. 3 shows an example of the page layouts before and after rotation to landscape orientation. For example, in the portrait screen 31 of FIG. 3, there are seven lines of text before the moving image area and no text after the moving image area 22 (that is, zero lines). The moving image display position calculation unit 115 therefore obtains 7 / (7 + 0) = 1 as the vertical relative position. This vertical relative position is 0 if the moving image area is at the top of the screen, 1 if it is at the bottom, and otherwise a value between 0 and 1 according to its position. The moving image display position calculation unit 115 thus calculates this vertical relative position as the relative position of the moving image area on the screen.
The figure also shows three candidate page layouts after rotation to landscape orientation: a first landscape screen 23, a second landscape screen 25, and a third landscape screen 27.
Among these three landscape screens, the moving image area 24 is at the top of the first landscape screen 23, the moving image area 28 is at the bottom of the third landscape screen 27, and the moving image area 26 of the second landscape screen 25 lies between passages of text (that is, in the middle).
In this example only one position, that of the second landscape screen 25, is neither top nor bottom; but when more lines can be displayed, multiple candidates exist for the intermediate positions as well, so there may be four or more position candidates overall. The layout unit 112 calculates the relative position of the moving image area for each candidate and selects the one closest to the pre-rotation relative position.
For example, in FIG. 3 the moving image display position calculation unit 115 calculates the relative position of the moving image area as 0 / (0 + 2) = 0 for the first landscape screen 23, 1 / (1 + 1) = 0.5 for the second landscape screen 25, and 2 / (2 + 0) = 1 for the third landscape screen 27. The layout unit 112 then adopts, as the post-rotation layout, the third landscape screen 27, whose relative position is closest to the pre-rotation relative position of 1.
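The candidate-selection procedure above can be sketched as follows. This is only an illustrative reading of the description; the function names and the candidate data structure are not part of the patent.

```python
def relative_position(lines_before: int, lines_after: int) -> float:
    """Vertical relative position of the video area: 0 = top of screen, 1 = bottom."""
    return lines_before / (lines_before + lines_after)

def choose_layout(pre_rotation_pos: float, candidates: dict) -> str:
    """Pick the candidate layout whose video position is closest to the
    pre-rotation relative position. candidates maps a layout name to the
    pair (lines of text before the video area, lines after it)."""
    return min(
        candidates,
        key=lambda name: abs(relative_position(*candidates[name]) - pre_rotation_pos),
    )

# Values from the FIG. 3 example: (lines before, lines after) the video area.
pre = relative_position(7, 0)  # portrait screen: 7 / (7 + 0) = 1.0
layouts = {"screen23": (0, 2), "screen25": (1, 1), "screen27": (2, 0)}
print(choose_layout(pre, layouts))  # screen27
```

Running this reproduces the selection in the text: screen27, with relative position 2 / (2 + 0) = 1, is closest to the pre-rotation value of 1.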
<Details of the processing of the moving image selection unit 114>
Next, the processing of the moving image selection unit 114 is described in detail with reference to FIG. 4. FIG. 4 shows an example in which priorities are specified for moving image data. The figure shows a tag 41, a tag 42, and a tag 43. Here each piece of moving image data is described by a <movie> tag, and its priority is specified by an attribute named priority.
Specifically, tag 41 specifies the moving image data "movie1.mp4" with priority 1, tag 42 specifies "movie2.mp4" with priority 10, and tag 43 specifies "movie3.mp4" with priority 50.
For example, if a smaller priority value means a higher priority, then when all three moving images are playing simultaneously, the moving image selection unit 114 selects movie1.mp4 as the highest-priority one. Likewise, when movie2.mp4 and movie3.mp4 are playing simultaneously, the moving image selection unit 114 selects movie2.mp4 as the higher-priority one.
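A minimal sketch of this priority-based selection, assuming markup in the style of FIG. 4; the enclosing <page> element is invented here for illustration and is not specified by the patent.

```python
import xml.etree.ElementTree as ET

# Markup in the style of FIG. 4 (the <page> wrapper is illustrative).
content = """
<page>
  <movie src="movie1.mp4" priority="1"/>
  <movie src="movie2.mp4" priority="10"/>
  <movie src="movie3.mp4" priority="50"/>
</page>
"""

def select_by_priority(xml_text: str, playing: set) -> str:
    """Among the movies currently playing, return the src of the one with
    the highest priority (here: the smallest priority value)."""
    movies = ET.fromstring(xml_text).iter("movie")
    candidates = [m for m in movies if m.get("src") in playing]
    best = min(candidates, key=lambda m: int(m.get("priority")))
    return best.get("src")

print(select_by_priority(content, {"movie1.mp4", "movie2.mp4", "movie3.mp4"}))  # movie1.mp4
print(select_by_priority(content, {"movie2.mp4", "movie3.mp4"}))                # movie2.mp4
```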
Note that the moving image selection unit 114 may instead select one of the simultaneously playing moving images based on the user's line of sight. Specifically, for example, when the moving image viewing determination unit 130 determines the viewing state using the user's line of sight, the moving image selection unit 114 may reuse the per-moving-image viewing probabilities obtained one step before that determination and select the moving image with the highest probability. That is, the viewing index is the user's line of sight, and the moving image selection unit 114 may select one moving image from among the multiple moving images included in the content as the moving image being viewed, based on the user's line-of-sight direction detected by the line-of-sight detection unit (second line-of-sight detection unit) 120.
The moving image selection unit 114 may also select one of the simultaneously playing moving images based on display size. Specifically, for example, considering that a moving image displayed at a large size is more likely to be the one being watched, the moving image selection unit 114 may select the one with the largest display size among the simultaneously playing moving images. That is, the viewing index is, as an example, the displayed size of the moving image, and the moving image selection unit 114 may select one moving image from among the multiple moving images included in the content as the moving image being viewed, based on its displayed size.
The processing in which the moving image selection unit 114 selects a moving image based on display size is described with reference to FIG. 5. FIG. 5 shows an example in which display sizes are specified for moving image data. The figure shows a tag 51, a tag 52, and a tag 53. In the figure, an attribute named width specifies the horizontal display size and an attribute named height specifies the vertical display size.
Sizes may be specified either as pixel counts or as values relative to the screen size. When a size is specified relative to the screen size, the moving image selection unit 114 calculates the size actually displayed on the screen. When only one of the width and height is specified, the moving image selection unit 114 calculates the other value from the aspect ratio of the moving image content. For example, if the screen is 600 pixels wide and 1024 pixels high and every moving image has a 4:3 aspect ratio, movie1.mp4 is displayed at 300 × 225 pixels and movie3.mp4 at 273 × 204 pixels. Therefore, when these two moving images are playing simultaneously, the moving image selection unit 114 selects movie1.mp4 as the one with the largest display size.
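The size resolution described above can be sketched as follows. The "%" string convention for relative sizes and the use of pixel area to compare sizes are assumptions for illustration; the patent does not fix these details.

```python
def resolve_size(screen_w, screen_h, width=None, height=None, aspect=4 / 3):
    """Resolve a movie's on-screen pixel size.

    width/height may be an int (pixels) or a string like "50%" (relative
    to the screen dimension); a missing dimension is derived from the
    aspect ratio. These conventions are illustrative, not from the patent.
    """
    def to_px(value, screen_px):
        if isinstance(value, str) and value.endswith("%"):
            return round(screen_px * float(value[:-1]) / 100)
        return value

    w = to_px(width, screen_w)
    h = to_px(height, screen_h)
    if w is None:
        w = round(h * aspect)   # derive width from height
    elif h is None:
        h = round(w / aspect)   # derive height from width
    return w, h

# Screen of 600 x 1024 pixels, as in the example above.
movies = {
    "movie1.mp4": resolve_size(600, 1024, width="50%"),  # (300, 225)
    "movie3.mp4": (273, 204),                            # size from the example
}
largest = max(movies, key=lambda m: movies[m][0] * movies[m][1])
print(largest)  # movie1.mp4
```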
FIG. 6 is a schematic block diagram of the line-of-sight detection unit 120 in the present embodiment. The line-of-sight detection unit 120 includes a feature amount extraction unit 121, a face region identification unit 122, a parts identification unit 123, a black-eye (iris) region identification unit 124, an eyeball center/radius calculation unit 125, a black-eye center position calculation unit 127, and a line-of-sight direction calculation unit 128.
Based on the image data input from the imaging unit 103, the feature amount extraction unit 121 extracts feature amounts such as colors (skin color) or boundaries (contours) in the image represented by that data, and outputs the extracted feature amounts to the face region identification unit 122.
The face region identification unit 122 identifies the face region based on the feature amounts input from the feature amount extraction unit 121, and outputs information indicating the position of the resulting face image region to the parts identification unit 123.
The parts identification unit 123 identifies the positions of facial parts such as the eyes, nose, and mouth. It outputs face region position information R, indicating the position of the face image region in which the facial parts were located, and eye position information P, indicating the eye positions, to the moving image viewing determination unit 130. The parts identification unit 123 also outputs eye position information indicating the eye positions to the black-eye region identification unit 124 and the eyeball center/radius calculation unit 125.
As a result, even when the face region identification unit 122 has produced multiple face region candidates, the parts identification unit 123 checks whether facial parts such as eyes, nose, and mouth are present, so whether an identified face image region actually corresponds to a face can be judged more accurately.
The black-eye region identification unit 124 identifies the iris region within the eye region based on the eye position information input from the parts identification unit 123, and outputs black-eye position information indicating the position of the identified region to the black-eye center position calculation unit 127.
The eyeball center/radius calculation unit 125 extracts the positions of the inner and outer eye corners from the eye position information input from the parts identification unit 123, calculates the center position and radius of the eyeball from those corner positions, and outputs information indicating the calculated eyeball center position and radius to the line-of-sight direction calculation unit 128.
The black-eye center position calculation unit 127 identifies the center position of the iris based on the black-eye position information input from the black-eye region identification unit 124. Specifically, for example, it approximates the iris as an ellipse from the position of the iris region indicated by the black-eye position information and takes the center of that ellipse as the iris center position. The black-eye center position calculation unit 127 then outputs black-eye center position information indicating the identified center position to the line-of-sight direction calculation unit 128.
The line-of-sight direction calculation unit 128 detects, as the line-of-sight direction SL, the direction from the eyeball center toward the iris center, based on the eyeball center position and radius indicated by the information input from the eyeball center/radius calculation unit 125 and the iris center position indicated by the black-eye center position information input from the black-eye center position calculation unit 127. It outputs line-of-sight direction information SL indicating the detected direction to the moving image viewing determination unit 130.
FIG. 7 is a schematic block diagram of the moving image viewing determination unit 130 in the present embodiment. The moving image viewing determination unit 130 includes a face region size calculation unit 131, a distance calculation unit 132, a horizontal pixel interval calculation unit 133, an eye position calculation unit 134, a line-of-sight determination unit 135, a probability calculation unit 136, and a viewing determination unit 137.
The face region size calculation unit 131 calculates the size of the face image region based on its position indicated by the face region position information R input from the line-of-sight detection unit 120. Specifically, for example, when the outline of the face image region is approximated as an ellipse, the face region size calculation unit 131 takes the major axis of that ellipse as the size of the face image region, and outputs face size information indicating the calculated size to the distance calculation unit 132.
Note that the face region size calculation unit 131 may instead use the minor axis of the face image region as its size.
The distance calculation unit 132 calculates the distance d from the lens of the imaging unit 103 to the face, on the assumption that the user's actual face size equals the average face size. The distance calculation unit 132 holds the focal length f of the imaging unit 103 and the average major axis of a face (the actual major axis) L. From the focal length f, the average major axis L, and the major axis l of the face region on the image indicated by the face size information, it calculates the lens-to-face distance d = L × f / l, and outputs distance information indicating this distance to the eye position calculation unit 134.
Note that the distance calculation unit 132 may instead calculate the lens-to-face distance d from the focal length f of the imaging unit 103, the average minor axis of a face (the actual minor axis), and the calculated minor axis of the face region on the image.
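The distance formula d = L × f / l is the standard pinhole-camera similar-triangles relation, and can be sketched directly; the numeric values below are illustrative, not from the patent.

```python
def lens_to_face_distance(focal_px: float, actual_major_mm: float,
                          image_major_px: float) -> float:
    """Pinhole-camera estimate of the lens-to-face distance: d = L * f / l.

    focal_px        f: focal length of the imaging unit, expressed in pixels
    actual_major_mm L: assumed (average) real-world major axis of the face
    image_major_px  l: major axis of the detected face region in the image
    The concrete numbers used below are illustrative assumptions.
    """
    return actual_major_mm * focal_px / image_major_px

# E.g. f = 1000 px, average face major axis L = 230 mm, 460 px in the image:
print(lens_to_face_distance(1000, 230, 460))  # 500.0 (mm)
```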
The horizontal pixel interval calculation unit 133 calculates the horizontal pixel interval a between the eye position on the image (for example, the center position of the iris on the image) indicated by the eye position information P input from the line-of-sight detection unit 120 and the center position of the image, and outputs horizontal pixel interval information indicating the calculated interval a to the eye position calculation unit 134.
The eye position calculation unit 134 holds in advance information indicating the focal length f of the imaging unit 103. From the horizontal pixel interval a indicated by the horizontal pixel interval information input from the horizontal pixel interval calculation unit 133, the focal length f, and the distance d indicated by the distance information input from the distance calculation unit 132, it calculates the eye position with the center of the lens as the origin, and outputs eye position information b indicating the calculated position to the line-of-sight determination unit 135.
The line-of-sight determination unit 135 determines whether the line of sight falls within the display area of the moving image, based on the eye position indicated by the eye position information b input from the eye position calculation unit 134 and the line-of-sight direction indicated by the line-of-sight direction information SL input from the line-of-sight detection unit 120. It outputs determination results to the probability calculation unit 136 at a predetermined interval (for example, every 0.1 seconds).
The probability calculation unit 136 calculates the probability that the line of sight falls within the display area of the moving image. For example, if the line-of-sight determination unit 135 makes its determination at 0.1-second intervals (10 fps) for 5 seconds (that is, 50 times), and the line of sight was within the moving image display area 35 of those times, the probability calculation unit 136 calculates the probability of viewing the moving image as 70%. The probability calculation unit 136 outputs probability information indicating the calculated probability to the viewing determination unit 137.
Note that the probability calculation unit 136 may apply a weighting so that the probability of viewing the moving image is higher when the line of sight is directed near the center of the moving image display area.
Also, while the line-of-sight determination unit 135 determines whether the line of sight falls within the display area of the moving image, it is not limited to this and may instead determine whether the line of sight falls within the display area of the display unit 106. In that case, the probability calculation unit 136 may calculate the probability that the line of sight falls within that display area.
The viewing determination unit 137 determines whether the user is viewing the moving image based on the probability indicated by the probability information input from the probability calculation unit 136. Specifically, the viewing determination unit 137 determines that the user is not viewing the moving image when the probability is below a predetermined threshold, and that the user is viewing it when the probability is at or above the threshold. The viewing determination unit 137 outputs the resulting determination to the moving image selection unit 114.
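The interplay of the probability calculation unit 136 and the viewing determination unit 137 can be sketched as a sliding window over per-sample gaze results. The window length matches the 50-sample example in the text; the 50% threshold is an assumed value, since the patent only says the threshold is predetermined.

```python
from collections import deque

class ViewingDetermination:
    """Sliding-window sketch of units 136 and 137: keep the most recent
    gaze samples, compute the in-area probability, and compare it to a
    threshold. Window = 50 samples (5 s at 0.1 s); threshold assumed 50%."""

    def __init__(self, window: int = 50, threshold: float = 0.5):
        self.samples = deque(maxlen=window)  # True = gaze inside the video area
        self.threshold = threshold

    def add_sample(self, gaze_in_area: bool) -> None:
        self.samples.append(gaze_in_area)

    def probability(self) -> float:
        return sum(self.samples) / len(self.samples)

    def is_viewing(self) -> bool:
        return self.probability() >= self.threshold

v = ViewingDetermination()
for hit in [True] * 35 + [False] * 15:  # 35 of 50 samples inside the area
    v.add_sample(hit)
print(v.probability(), v.is_viewing())  # 0.7 True
```

This reproduces the 70% example above: 35 in-area samples out of 50 give probability 0.7, which is at or above the assumed threshold, so the user is judged to be viewing.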
Next, the processing of the line-of-sight determination unit 135 is described in detail with reference to FIG. 8.
FIG. 8 is a diagram for explaining the eye position, with the center of the lens as the origin, and the line-of-sight direction. The figure shows the electronic book device 100, including the lens 61 of the imaging unit 103 and the display unit 106, together with the user's left eye 62. Here the distance between the display area of the display unit 106 and the user's left eye 62 is d1, and in the xyz coordinate system the position of the user's left eye is at coordinates (x1, y1, d1). As shown in the figure, the lens 61 is, as an example, arranged on the same surface as the display unit 106. The positions of the lens 61 and the imaging unit 103 in the figure are only an example; they need only be arranged where the user's eyes can be imaged.
In the example of the figure, the line-of-sight determination unit 135 converts, for example, the position of the moving image's outline within the display unit 106 into a position in the xyz coordinate system (hereinafter called the moving image outline absolute position). The line-of-sight determination unit 135 then calculates the intersection between the straight line extending from the position (x1, y1, d1) of the user's left eye in the line-of-sight direction SL and the plane containing the display surface of the display unit 106, that is, the plane z = 0.
Based on the calculated intersection and the absolute moving-image outline position, the line-of-sight determination unit 135 then determines whether the intersection lies within the display area of the moving image. If the calculated intersection lies within the display area of the moving image, the line-of-sight determination unit 135 determines that the line of sight falls within the display area of the moving image. Conversely, if the intersection lies outside the display area of the moving image, the line-of-sight determination unit 135 determines that the line of sight does not fall within the display area of the moving image.
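The intersection test described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the function names and the representation of the absolute moving-image outline position as an axis-aligned rectangle are choices made for the example.

```python
def gaze_intersection(eye_pos, gaze_dir):
    """Intersect the gaze ray with the display plane z = 0.

    eye_pos:  (x1, y1, d1) eye position, lens center as origin.
    gaze_dir: (dx, dy, dz) line-of-sight direction SL; dz must point
              toward the display (negative when d1 > 0).
    Returns the (x, y) intersection point on the display plane,
    or None if the gaze is parallel to the plane.
    """
    x1, y1, d1 = eye_pos
    dx, dy, dz = gaze_dir
    if dz == 0:
        return None  # gaze parallel to the display plane: no intersection
    t = -d1 / dz     # parameter t where z = d1 + t*dz = 0
    return (x1 + t * dx, y1 + t * dy)


def within_video_area(point, video_rect):
    """Check whether the intersection lies inside the moving-image area.

    video_rect: (left, bottom, right, top) — an assumed rectangular form
                of the absolute moving-image outline position on z = 0.
    """
    if point is None:
        return False
    x, y = point
    left, bottom, right, top = video_rect
    return left <= x <= right and bottom <= y <= top


# Example: eye 30 cm in front of the screen, looking straight at it
hit = gaze_intersection((2.0, 1.0, 30.0), (0.0, 0.0, -1.0))
print(hit, within_video_area(hit, (0.0, 0.0, 8.0, 5.0)))  # → (2.0, 1.0) True
```

The device would run this test per video frame and feed the result into the probability that the gaze stays inside the moving-image area, which the viewing determination described later compares against a threshold.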
FIG. 9 is a flowchart showing an example of the basic processing of the electronic book device 100 of the present embodiment. First, the user input unit 101 receives a user input indicating the book selected by the user and outputs book information indicating that book to the layout unit 112 (step S101). The layout unit 112 acquires the book data for the selected book stored in the storage unit 104 via the data management unit 116 (step S102).
Next, the layout unit 112 determines the layout of the document according to the orientation of the device and the size of the display provided in the display unit 106, and causes the display unit 106 to display the book data in the determined layout (step S103).
Next, when a page containing moving image data is displayed, the user input unit 101 receives a user input indicating the start of moving image playback, for example when the user touches the position on the touch panel corresponding to the moving image area displayed on the display unit 106 (step S104). The user input unit 101 then outputs that user input to the moving image playback unit 111.
Next, the moving image playback unit 111 acquires the undecoded moving image from the storage unit 104 and decodes it. The moving image playback unit 111 then causes the display unit 106 to play back the decoded moving image (step S105).
Next, when the user performs a page-turning operation, the user input unit 101 receives a user input indicating a page turn and outputs the received user input to the layout unit 112 (step S106).
Next, the layout unit 112 acquires the next page data from the storage unit 104 via the data management unit 116 (step S107). The layout unit 112 then determines the layout in the same manner as before and causes the display unit 106 to display the next page data in the determined layout (step S108). At that time, the display unit 106 performs the screen transition by displaying, for example, a page-turning animation. This completes the processing of this flowchart.
FIG. 10 is a flowchart showing an example of the processing performed by the electronic book device 100 of the present embodiment when the orientation of the device changes. The following processing is executed when the orientation detection unit 102 detects that the orientation of the device has changed.
First, the moving image viewing determination unit 130 determines whether the user is viewing a moving image (step S201). If the currently displayed page contains a plurality of moving images, the moving image viewing determination unit 130 determines for each of them individually whether it is being viewed. If at least one moving image is determined to be under viewing and the probability that the user's line of sight falls within the display area of the moving image is equal to or greater than a predetermined threshold, the moving image viewing determination unit 130 determines that the user is viewing a moving image.
Next, when the moving image viewing determination unit 130 determines that a moving image is being viewed (step S201: YES), the rotation is performed with the position at which the moving image is displayed on the screen as the reference, so that the moving image remains displayed after the rotation. First, to handle the case where a plurality of moving images are being viewed, the moving image selection unit 114 selects the single moving image to be given priority (step S202). Even when a plurality of moving images are being viewed, one moving image is selected in this step, so the subsequent processing is the same as when only one moving image is being viewed.
Subsequently, the layout unit 112 acquires the relative position of the moving image area on the screen for the selected moving image (step S203).
Next, the layout unit 112 determines the layout of the page after rotation based on the acquired relative position of the moving image area (step S204). Specifically, to make it appear that the rotation was performed about the position at which the moving image area is displayed, the layout unit 112 determines the layout so that the position of the moving image area is as close as possible to its position before the rotation.
Since the display content after the rotation has now been determined, the page position holding unit 113 saves the page position before the rotation (step S205). For example, in FIG. 2, the page position holding unit 113 saves information identifying "カ", the first character of the page before the rotation. This information is used to restore the state before the rotation after the orientation has changed; the specific processing is described later.
The layout unit 112 then displays the rotated page on the screen based on the determined layout (step S207). If a moving image is being played back, the moving image playback unit 111 continues playing it, so the user can view the moving image seamlessly.
On the other hand, when the moving image viewing determination unit 130 determines in step S201 that no moving image is being viewed (step S201: NO), the device only needs to perform rotation processing as for an ordinary book, so the rotation is performed with the first character of the page as the reference. Accordingly, the page position after rotation is set to the same position as the current one (step S206). That is, the page after rotation also has a layout in which "カ", the first character of the current page, comes first, and the layout unit 112 displays the rotated page on the screen (step S207). In this case, for example, the landscape display screen 72 shown in FIG. 12 is displayed. This completes the processing of this flowchart.
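The branch structure of FIG. 10 can be sketched as follows. This is only an illustration of the decision logic: the function names, the tuple representation of a moving image, and the concrete threshold value are assumptions made for the example and do not appear in the patent, which says only "predetermined threshold".

```python
VIEWING_THRESHOLD = 0.5  # assumed value; the patent only says "predetermined threshold"


def on_orientation_changed(videos, select_priority, current_page_start):
    """Decide the rotation reference as in the flowchart of FIG. 10.

    videos: list of (is_playing, gaze_probability, relative_position) tuples.
    select_priority: picks one video from the viewed candidates (S202).
    current_page_start: identifier of the first character of the page.
    Returns ('video', relative_position) or ('page_start', identifier).
    """
    # S201: a video counts as "being viewed" when it is playing and the
    # probability that the gaze falls in its area reaches the threshold.
    viewed = [v for v in videos if v[0] and v[1] >= VIEWING_THRESHOLD]
    if viewed:                           # S201: YES
        video = select_priority(viewed)  # S202: pick the one priority video
        return ('video', video[2])       # S203/S204: layout keeps this position
    return ('page_start', current_page_start)  # S206: ordinary book rotation


# Example: two playing videos, only the second one is actually watched
videos = [(True, 0.1, (0, 0)), (True, 0.9, (10, 20))]
print(on_orientation_changed(videos, lambda vs: vs[0], "カ"))  # → ('video', (10, 20))
```

In the viewed branch the real device would go on to lay out the rotated page so that the moving-image area stays near the returned relative position, then save the pre-rotation page position (S205) before displaying the result (S207).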
Through the above processing, the electronic book device 100 determines the layout so that the relative position of the moving image area after the rotation is as close as possible to its relative position before the rotation. As a result, the electronic book device 100 can maintain the relative position of the moving image area and continue displaying the moving image even when the device is rotated while the moving image is being viewed. Consequently, the moving image can be viewed seamlessly even when the electronic book device 100 is rotated, which improves usability. The user can therefore comfortably view an electronic book in which a moving image is embedded.
In addition, because the electronic book device 100 preserves the relative position on the screen at which the moving image is displayed across the rotation, the user can easily associate the moving image before the rotation with the moving image after it. As a result, the user can comfortably view an electronic book in which a moving image is embedded.
Next, when the electronic book device 100 is rotated while a moving image is being viewed, the device rotates the display screen with the position at which the moving image is displayed as the reference, so the first character of the page changes. Therefore, if the electronic book device 100 is returned to its pre-rotation orientation immediately after being rotated, the resulting page may be offset from the original one. The processing performed by the electronic book device 100 when the orientation of the device is restored immediately after being changed is therefore described with reference to FIG. 11.
FIG. 11 is a flowchart showing an example of the processing flow of the electronic book device 100 when the orientation of the device is restored immediately after being changed. The following processing is executed when the orientation of the terminal is restored immediately after the rotation processing shown in FIG. 10. For example, it is executed when the device is rotated from portrait (0°) to landscape (90°) and then returned directly from landscape (90°) to portrait (0°).
First, the layout unit 112 determines whether the page position before the rotation is stored in the page position holding unit 113 (step S301). If the page position before the rotation is stored in the page position holding unit 113 (step S301: YES), the page position was moved during the previous rotation, so the layout unit 112 acquires the page position stored in the page position holding unit 113 (step S302).
Next, the layout unit 112 sets this page position as the position of the page after the rotation (step S303). That is, the layout unit 112 lays out the rotated page so that the designated element (character, image, moving image, or the like) comes first. The page generated as a result is the same as the page before the rotation. The layout unit 112 causes the display unit 106 to display the rotated page (step S304), thereby restoring the same display state as before the rotation. The page position stored in the page position holding unit 113 is then no longer needed, so the page position holding unit 113 deletes it (step S305).
If the page start position is not stored in the page position holding unit 113 in step S301 (step S301: NO), the page position was not moved during the previous rotation, so the layout unit 112 performs ordinary rotation processing. Specifically, as in the processing shown in FIG. 10, the layout unit 112 sets the page position after the rotation to the same position as the current one (step S306) and causes the display unit 106 to display the rotated page in this layout (step S307). The layout unit 112 can thus display a page rotated with the first character of the page as the reference.
As described above, the electronic book device 100 executes the processing of FIG. 11 only when the orientation of the terminal is returned to its original orientation immediately after the device is rotated. By contrast, if the device is rotated, the page is then moved, and the device is afterwards returned to its original orientation, the electronic book device 100 executes the rotation processing (FIG. 10) instead of the processing of FIG. 11.
Likewise, when the device is rotated from portrait (0°) to landscape (90°) and then turned from landscape (90°) to the opposite portrait orientation (180°), the electronic book device 100 executes the processing of FIG. 10, not that of FIG. 11. That is, the electronic book device 100 basically executes the processing of FIG. 10 whenever the device is rotated, and executes the special processing shown in FIG. 11 only when the orientation of the device is restored.
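The save-and-restore behavior described for FIGS. 10 and 11 can be sketched as a small state holder. The class and method names used here (`PagePositionHolder`, `save`, `take_if_reverse`) are assumptions for illustration, not identifiers from the patent, and the rotation direction is simplified to a signed angle delta.

```python
class PagePositionHolder:
    """Sketch of the page position holding unit 113: remembers the page
    position and rotation direction of the last video-based rotation."""

    def __init__(self):
        self.saved = None  # (page_position, rotation_direction) or None

    def save(self, page_position, direction):
        # S205: store the pre-rotation page position and the direction
        self.saved = (page_position, direction)

    def take_if_reverse(self, direction):
        """Return the saved position only if this rotation undoes the
        saved one (S301: YES); otherwise fall back to ordinary rotation."""
        if self.saved and self.saved[1] == -direction:
            pos = self.saved[0]
            self.saved = None  # S305: the saved position is no longer needed
            return pos
        # Any other rotation (e.g. on to 180°) invalidates the saved state,
        # so the ordinary processing of FIG. 10 applies.
        self.saved = None
        return None


holder = PagePositionHolder()
holder.save("カ", +90)                   # rotated portrait -> landscape while viewing
restored = holder.take_if_reverse(-90)   # rotated straight back to portrait
print(restored)                          # → カ (the original page position)
```

A page-turn between the two rotations would likewise have to clear the holder in a real implementation, so that returning to the original orientation then takes the ordinary FIG. 10 path.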
In this way, the electronic book device 100 can give the operation of restoring the orientation of the device the meaning of returning to the original (pre-rotation) state. Therefore, even when a plurality of moving images were being played back and the moving image that the device displayed preferentially differs from the user's intention, the user is guaranteed to recover the original state simply by returning the terminal to its original orientation, and can thus use the electronic book device 100 without discomfort.
Although the electronic book device 100 has been described as an example in the present embodiment, the present invention is not limited to it and may be any display device having a display screen, such as a mobile phone, a smartphone, an information terminal device, or a tablet PC.
The various processes of the electronic book device 100 described above may also be performed by recording a program for executing each process of the electronic book device 100 of the present embodiment on a computer-readable recording medium and causing a computer system to read and execute the program recorded on the recording medium.
The "computer system" referred to here may include an OS and hardware such as peripheral devices. If a WWW system is used, the "computer system" also includes a homepage providing environment (or display environment). A "computer-readable recording medium" means a writable nonvolatile memory such as a flexible disk, a magneto-optical disk, a ROM, or a flash memory; a portable medium such as a CD-ROM; or a storage device such as a hard disk built into a computer system.
A "computer-readable recording medium" further includes media that hold the program for a certain period of time, such as the volatile memory (for example, DRAM (Dynamic Random Access Memory)) inside a computer system that serves as a server or client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line. The program may be transmitted from a computer system in which it is stored in a storage device or the like to another computer system via a transmission medium, or by transmission waves in a transmission medium. Here, a "transmission medium" for transmitting the program means a medium having the function of transmitting information, such as a network (communication network) like the Internet or a communication line (communication wire) like a telephone line. The program may realize only some of the functions described above, or it may be a so-called difference file (difference program) that realizes the functions described above in combination with a program already recorded in the computer system.
The embodiment of the present invention has been described above in detail with reference to the drawings, but the specific configuration is not limited to this embodiment and includes designs and the like within a scope not departing from the gist of the present invention.
The present invention is applicable to terminals capable of displaying electronic books, such as mobile phones, smartphones, and tablets.
100  electronic book device
101  user input unit
102  orientation detection unit
103  imaging unit
104  storage unit
105  communication unit
110  control unit
111  moving image playback unit
112  layout unit
113  page position holding unit
114  moving image selection unit
115  moving image display position calculation unit
116  data management unit
120  line-of-sight detection unit
121  feature extraction unit
122  face area identification unit
123  parts identification unit
124  iris area identification unit
125  eyeball center and radius calculation unit
126  iris center position calculation unit
128  line-of-sight direction calculation unit
130  moving image viewing determination unit
131  face area size calculation unit
132  distance calculation unit
133  horizontal pixel interval calculation unit
134  eye position calculation unit
135  line-of-sight determination unit
136  probability calculation unit
137  viewing determination unit

Claims (12)

  1.  A display device comprising a control unit that, when the orientation of the device changes, determines the screen layout after the orientation change so that the whole of at least one moving image included in the content is displayed on the screen.
  2.  The display device according to claim 1, further comprising:
     a display unit that displays content including a moving image on a screen; and
     an orientation detection unit that detects a change in the orientation of the device within the plane containing the screen,
     wherein, when the orientation detection unit detects a change in orientation, the control unit determines the screen layout after the orientation change so that the whole of the at least one moving image included in the content is displayed on the screen.
  3.  The display device according to claim 2, wherein the control unit comprises:
     a moving image viewing determination unit that determines whether the user is viewing the moving image displayed on the display unit; and
     a layout unit that, when the orientation detection unit detects a change in orientation and the moving image viewing determination unit determines that the moving image is being viewed, determines the screen layout after the orientation change based on the position on the screen at which the moving image is displayed.
  4.  The display device according to claim 3, wherein the control unit comprises a page position holding unit that, when the orientation detected by the orientation detection unit changes, holds the displayed page position and the direction of the orientation change, and
     wherein, when the change in orientation detected by the orientation detection unit is in the direction opposite to the direction of change held by the page position holding unit, the layout unit determines the screen layout after the change based on the page position held by the page position holding unit.
  5.  The display device according to claim 3 or 4, wherein the moving image viewing determination unit determines that the moving image is being viewed at least when the moving image is being played back.
  6.  The display device according to claim 5, wherein the control unit comprises a line-of-sight detection unit that detects the line-of-sight direction of the user, and
     wherein the moving image viewing determination unit determines whether the user is viewing the moving image based on the playback state of the moving image and the line-of-sight direction detected by the line-of-sight detection unit.
  7.  The display device according to any one of claims 1 to 6, wherein the control unit comprises a moving image selection unit that selects, based on a viewing index, one moving image from among a plurality of moving images included in the content as the moving image being viewed.
  8.  The display device according to claim 7, wherein the viewing index is a priority assigned to each moving image included in the content, and
     wherein the moving image selection unit selects, based on the priority of each moving image, one moving image from among the plurality of moving images included in the content as the moving image being viewed.
  9.  The display device according to claim 7, wherein the viewing index is the display size of the moving image, and
     wherein the moving image selection unit selects, based on the display size of the moving image, one moving image from among the plurality of moving images included in the content as the moving image being viewed.
  10.  The display device according to claim 7, wherein the viewing index is the line of sight of the user,
     wherein the control unit comprises a second line-of-sight detection unit that detects the line-of-sight direction of the user, and
     wherein the moving image selection unit selects, based on the detected line-of-sight direction, one moving image from among the plurality of moving images included in the content as the moving image being viewed.
  11.  A display method executed by a display device, comprising a step of determining, when the orientation of the device changes, the screen layout after the orientation change so that the whole of at least one moving image included in the content is displayed on the screen.
  12.  A display program for causing a computer of a display device to execute a step of determining, when the orientation of the device changes, the screen layout after the orientation change so that the whole of at least one moving image included in the content is displayed on the screen.
PCT/JP2012/073958 2012-01-16 2012-09-19 Display device, display method and display program WO2013108438A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012006263A JP5911168B2 (en) 2012-01-16 2012-01-16 Display device, display method, and display program
JP2012-006263 2012-01-16

Publications (1)

Publication Number Publication Date
WO2013108438A1 true WO2013108438A1 (en) 2013-07-25

Family

ID=48798875

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/073958 WO2013108438A1 (en) 2012-01-16 2012-09-19 Display device, display method and display program

Country Status (2)

Country Link
JP (1) JP5911168B2 (en)
WO (1) WO2013108438A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6092157B2 (en) 2014-06-09 2017-03-08 富士フイルム株式会社 Electronic device, display control method for electronic device, and display control program for electronic device
JP6050396B2 (en) * 2015-01-29 2016-12-21 ヤフー株式会社 Distribution device, display control device, display control method, and display control program
US11036914B2 (en) * 2017-06-29 2021-06-15 Salesforce.Com, Inc. Automatic layout engine

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006078750A (en) * 2004-09-09 2006-03-23 Casio Hitachi Mobile Communications Co Ltd Portable information processor, information presentation method and information presentation program
JP2007141001A (en) * 2005-11-18 2007-06-07 Sharp Corp Display device, content display method, content display program, and recording medium
JP2008171085A (en) * 2007-01-09 2008-07-24 Sharp Corp Content display device, content data providing device, content data distribution system, content selection method, program and recording medium
JP2008178063A (en) * 2006-12-22 2008-07-31 Sharp Corp Digital television broadcasting system and portable telephone unit
WO2010021373A1 (en) * 2008-08-22 2010-02-25 ソニー株式会社 Image display device, control method and computer program

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0209219D0 (en) * 2002-04-23 2002-06-05 Koninkl Philips Electronics Nv Electronic device including a display
JP2006053427A (en) * 2004-08-13 2006-02-23 Nec Access Technica Ltd Portable terminal and its control display method
JP4801623B2 (en) * 2006-09-14 2011-10-26 シャープ株式会社 Electronic device and method for selecting effective functions
JP2008209711A (en) * 2007-02-27 2008-09-11 Fujitsu Ltd Electronic paper
JP5566120B2 (en) * 2010-01-20 2014-08-06 キヤノン株式会社 Display control apparatus, method, program, and recording medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006078750A (en) * 2004-09-09 2006-03-23 Casio Hitachi Mobile Communications Co Ltd Portable information processor, information presentation method and information presentation program
JP2007141001A (en) * 2005-11-18 2007-06-07 Sharp Corp Display device, content display method, content display program, and recording medium
JP2008178063A (en) * 2006-12-22 2008-07-31 Sharp Corp Digital television broadcasting system and portable telephone unit
JP2008171085A (en) * 2007-01-09 2008-07-24 Sharp Corp Content display device, content data providing device, content data distribution system, content selection method, program and recording medium
WO2010021373A1 (en) * 2008-08-22 2010-02-25 ソニー株式会社 Image display device, control method and computer program

Also Published As

Publication number Publication date
JP2013145515A (en) 2013-07-25
JP5911168B2 (en) 2016-04-27

Similar Documents

Publication Publication Date Title
KR101847796B1 (en) Device, method, and graphical user interface for media content navigation
US8917286B2 (en) Image processing device, information processing device, image processing method, and information processing method
JP7195426B2 (en) Display page interaction control method and apparatus
KR101452667B1 (en) Superimposed annotation output
US20070086669A1 (en) Regions of interest in video frames
US20130311561A1 (en) Authoring, archiving, and delivering interactive social media videos
US20150121225A1 (en) Method and System for Navigating Video to an Instant Time
Chambel et al. Towards immersive interactive video through 360 hypervideo
US20210099505A1 (en) Techniques for Optimizing the Display of Videos
CN109154862B (en) Apparatus, method, and computer-readable medium for processing virtual reality content
US11188760B2 (en) Method and system for gaming segment generation in a mobile computing platform
WO2022116962A1 (en) Video playback method and apparatus, and electronic device
JP2016537918A (en) Method and apparatus for parallax of captions on images when scrolling
KR20160087649A (en) User terminal apparatus, system and controlling method thereof
JP5911168B2 (en) Display device, display method, and display program
KR102319462B1 (en) Method for controlling playback of media contents and electronic device performing the same
WO2019157965A1 (en) Interface display method and apparatus, device, and storage medium
JP2013250771A (en) Program, information processing device, image display method and display system
JP2009093356A (en) Information processor and scroll method
JP5683291B2 (en) Movie reproducing apparatus, method, program, and recording medium
US20220350650A1 (en) Integrating overlaid digital content into displayed data via processing circuitry using a computing memory and an operating system memory
JP7187737B2 (en) Method, program and electronic device for providing a user interface
JP2006349845A (en) Electronic book display device
KR101268112B1 (en) Method, apparatus and computer-readable recording medium for playing video contained in wep page
KR101895865B1 (en) System and method for adaptive playing of landscape video content

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12866078

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12866078

Country of ref document: EP

Kind code of ref document: A1