WO2022215347A1 - Content display control system

Content display control system

Info

Publication number
WO2022215347A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
user
display
target content
display control
Prior art date
Application number
PCT/JP2022/005304
Other languages
French (fr)
Japanese (ja)
Inventor
有希 中村
康夫 森永
望 松本
達哉 西崎
怜央 水田
弘行 藤野
Original Assignee
株式会社Nttドコモ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社Nttドコモ filed Critical 株式会社Nttドコモ
Publication of WO2022215347A1 publication Critical patent/WO2022215347A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/38 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory, with means for controlling the display position

Definitions

  • the present disclosure relates to a content display control system that controls content display on a head-mounted display such as AR (Augmented Reality) glasses.
  • Conventionally, for users wearing a head-mounted display such as AR (Augmented Reality) glasses, various contents (for example, browser screens, display screens for various information provided by cloud services, image display screens, and video display screens) can be displayed in the virtual space.
  • the present disclosure has been made to solve the above problems, and aims to arrange desired content at a position in the virtual space that is easy for the user to see without taking time and effort.
  • a content display control system includes a head-mounted display worn by a user to display content in a virtual space, a detection unit that detects a display instruction operation associated with target content reproduced by the user, and a control unit that controls content display on the head-mounted display such that the target content associated with the display instruction operation detected by the detection unit is displayed at the line-of-sight position of the user in the virtual space.
  • The "line-of-sight position of the user" here can be (1) the focal position of the user's line of sight, (2) the intersection of the omnispherical surface with the extension line passing through the reticle in the 360-degree omnidirectional virtual space centered on the user, or (3) a position pointed to by the user using an operating device (controller).
  • In the above content display control system, when the detection unit detects a display instruction operation associated with target content and reproduced by the user wearing the head-mounted display, the control unit controls content display on the head-mounted display so that the target content associated with the detected display instruction operation is displayed at the line-of-sight position of the user in the virtual space.
  • desired content can be arranged at a position in the virtual space that is easy for the user to see without taking time and effort.
  • A content display control system according to the present disclosure may also include: a head-mounted display worn by a user to display content in a virtual space; a detection unit that detects a display instruction operation that is reproduced by the user and is associated with target content; and a control unit that controls content display on the head-mounted display such that the target content associated with the display instruction operation detected by the detection unit is displayed at a position suitable for visual recognition by the user in the virtual space.
  • In addition to the user's line-of-sight position described above, the "position suitable for visual recognition by the user" includes a position in front of the user's face that does not depend on the line of sight.
  • FIG. 3 is a flow diagram showing a rough flow of the processing executed in the content display control system.
  • FIG. 4(a) is a diagram showing an example of tagging and registering content in the virtual space, and FIG. 4(b) is a diagram showing an example of tagging and registering content from a dedicated management system.
  • FIG. 6 is a flow diagram showing processing related to display control of target content at the line-of-sight position of the user.
  • FIG. 7(a) is a diagram showing the initial state when the tag of the target content is uttered, FIG. 7(b) is a diagram showing a comparison between the speech recognition result and the correspondence table, and FIG. 7(c) is a diagram showing the state after rearrangement.
  • FIG. 8(a) is a diagram showing rearrangement pattern 1 (simple superimposition of the target content), FIG. 8(b) is a diagram showing rearrangement pattern 2 (temporary placement of the target content), FIG. 8(c) is a diagram showing rearrangement pattern 3 (simple superimposition of the target content while holding the relative positional relationship of all contents in the azimuth direction), and FIG. 8(d) is a diagram showing rearrangement pattern 4 (holding the relative positional relationship of all contents).
  • FIG. 9 is a diagram for explaining the conversion between rectangular coordinates and three-dimensional polar coordinates (spherical coordinates).
  • FIG. 10 is a diagram for explaining the first algorithm for rearranging the target content to the line-of-sight position.
  • FIG. 11 is a diagram showing a calculation example according to the first algorithm in rearrangement pattern 1.
  • FIG. 12 is a diagram for explaining the second algorithm for rearranging the target content to the line-of-sight position.
  • FIG. 13 is a diagram for explaining the algorithm for maintaining the relative positional relationship in the azimuth direction in rearrangement pattern 3.
  • FIGS. 14 to 17 are diagrams for explaining the algorithm for maintaining the relative positional relationships in rearrangement pattern 4.
  • FIG. 18(a) is a diagram showing semi-transparency of non-target content, FIG. 18(b) is a diagram showing sharpness adjustment of non-target content, FIG. 18(c) is a diagram showing glow adjustment of the outline of the target content, and FIG. 18(d) is a diagram showing display of the target content on a pop-up virtual screen.
  • FIG. 21 is a flow diagram showing processing related to display control of target content in front of the user's face.
  • FIG. 22(a) is a diagram showing the state when the tag of the target content is uttered while the line of sight is directed in a direction different from the direction of the face, FIG. 22(b) is a diagram showing the state in which the target content is displayed at the display position in front of the face, and FIG. 22(c) is a diagram showing the state when the line of sight is turned to the target content in front of the face.
  • As shown in FIG. 1(a), the content display control system 1 includes a controller 20 operated by a user, a terminal 10 carried by the user, and AR glasses 30 worn by the user.
  • the controller 20 has, for example, a configuration including buttons 21, a touch pad 22, a trigger 23 and a bumper 24 shown in FIG. 2(a).
  • the controller 20 is not an essential requirement in the content display control system 1, but can be replaced by the user interface of the terminal 10.
  • For example, the user interface of the terminal 10 shown in FIG. 2(b), which includes a touch pad 16 consisting of a total of five buttons (center, up, down, left, and right) and buttons 17 and 18, can substitute for the controller 20.
  • The AR glasses 30 correspond to a head-mounted display that displays content in a virtual space; they display content in a 360-degree omnidirectional virtual space centered on the user wearing them.
  • AR glasses are exemplified as a head-mounted display, but head-mounted displays other than AR glasses (for example, VR (Virtual Reality) goggles, etc.) may be employed.
  • the terminal 10 corresponds to, for example, a smart phone, a mobile phone, a notebook personal computer, etc., and includes a detection unit 11 and a control unit 12 shown in FIG. 1(a). The function, operation, etc. of each unit will be described below.
  • the detection unit 11 is a functional unit that detects a display instruction operation that is reproduced by the user and is associated with the target content.
  • a display instruction operation of uttering a character string (tag) indicating target content while performing a predetermined operation using the controller 20 will be described.
  • The predetermined operation using the controller 20 will be exemplified later.
  • the control unit 12 is a functional unit that controls content display on the AR glasses 30 so that the target content associated with the display instruction operation detected by the detection unit 11 is displayed at the user's line-of-sight position in the virtual space.
  • the control unit 12 includes a specifying unit 12A and a display control unit 12B as described below.
  • the specifying unit 12A is a functional unit that specifies the target content associated with the display instruction operation detected by the detecting unit 11.
  • The display control unit 12B is a functional unit that acquires the line-of-sight position of the user and controls content display on the AR glasses 30 so that the target content specified by the specifying unit 12A is displayed at that line-of-sight position in the virtual space.
  • As described above, the "line-of-sight position of the user" can be (1) the focal position of the user's line of sight, (2) the intersection of the omnispherical surface with the extension line passing through the reticle in the 360-degree omnidirectional virtual space centered on the user, or (3) the position pointed to by the user using the controller 20. When (3) is adopted, a pointing operation for the content display position using the controller 20 is added to the display instruction operation by the user.
  • a character string (tag) indicating the target content is linked in advance to the target content as "content specifying information" for specifying the target content.
  • In this embodiment, the user designates the target content by uttering a tag, which is a character string indicating the target content, while performing a predetermined operation, described later, using the controller 20. Therefore, the detection unit 11 includes a microphone 11A that collects the user's utterance and a voice recognition unit 11B that recognizes the collected voice and converts it into text.
  • When the detection unit 11 detects a display instruction operation such as "uttering a character string (tag) indicating the target content while performing a predetermined operation using the controller 20" (that is, a display instruction operation corresponding to the content specifying information linked to the target content), the specifying unit 12A specifies the target content linked to the content specifying information (the character string (tag)) corresponding to the detected display instruction operation.
  • As one of the modifications described later, the target content may be specified by a gesture of a part of the user's body (for example, the user's hand); in that case, the detection unit 11 includes a camera 11C and a gesture recognition unit 11D that recognizes gestures from moving image data obtained by imaging.
  • The predetermined operations using the controller 20 include, for example, pressing the button 21 or the bumper 24 n times in succession, long-pressing the button 21 or the bumper 24, pulling the trigger 23 and holding it for a certain period of time, tapping a specific portion (top, bottom, left, right, or center) of the touch pad 22 n times in succession, and pressing and holding a specific portion (top, bottom, left, right, or center) of the touch pad 22.
  • As for the "long press" operation, various variations can be provided by combining a plurality of operations as described above, such as starting a long press after tapping n times.
  • As shown in FIG. 1(b), the terminal 10 may further include a correspondence table 13 in which content specifying information for various contents is stored in association with content IDs for identifying those contents, and a registration unit 14 that, through a predetermined registration process performed by the user, links content specifying information to a content ID and registers it in the correspondence table 13.
  • The correspondence table 13 is a content correspondence information database in which content specifying information (for example, a tag consisting of a character string, a controller operation pattern, or a hand gesture) is stored in association with a content ID; in the example shown in FIG. 5, tags associated with various content IDs are registered. In the following, processing operations, effects, and the like will be described assuming that the terminal 10 has the configuration shown in FIG. 1(b).
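  • To make the structure of the correspondence table 13 concrete, the sketch below shows a minimal tag-to-content-ID lookup of the kind described above. Python is used only for illustration; the class and method names and the alpaca content ID are assumptions, while the tag "FOX" and the content ID "ID-JSN3G49" come from the example in this publication.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class CorrespondenceTable:
    """Content correspondence information DB: content specifying info (tag) -> content ID."""
    records: Dict[str, str] = field(default_factory=dict)

    def register(self, tag: str, content_id: str) -> None:
        # "Tagging registration" (step S1): link a tag to a content ID.
        self.records[tag] = content_id

    def lookup(self, recognized_text: str) -> Optional[str]:
        # Compare a speech-recognition result with the stored records (steps S22/S23).
        return self.records.get(recognized_text)

table = CorrespondenceTable()
table.register("FOX", "ID-JSN3G49")         # fox content ID from the publication
table.register("ALPACA", "ID-ALPACA-0001")  # hypothetical content ID for the alpaca content
print(table.lookup("FOX"))      # -> "ID-JSN3G49"
print(table.lookup("GIRAFFE"))  # -> None (no matching record, so processing would end)
```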
  • the processing executed in the content display control system 1 includes tagging registration to content (step S1) and display control of target content at the user's line of sight position (step S2).
  • the tagging registration in step S1 is a process that is performed before the process in step S2, which is a feature of the present disclosure.
  • As shown in FIG. 4(a), the user operates the controller 20 to expand the menu of the content, selects "tagging registration", and enters the tag (character string) of the content, thereby tagging and registering the content.
  • the tag "ALPACA" is registered in the correspondence table 13.
  • the content may be tagged and registered from a dedicated management system running on a personal computer.
  • Next, the display control of the target content at the line-of-sight position of the user (step S2 in FIG. 3) will be described with reference to FIG. 6.
  • Execution of the processing in FIG. 6 is started when the user utters a tag, which is a character string indicating the target content, while performing a predetermined action using the controller 20 .
  • the detection unit 11 collects the user's utterance with the microphone 11A, and the speech recognition unit 11B performs speech recognition on the collected speech (step S21).
  • the text data of the speech recognition result is transferred to the identification unit 12A.
  • The specifying unit 12A compares the speech recognition result with each record stored in the correspondence table 13 (step S22), and determines whether or not there is a record that matches the speech recognition result (step S23). If there is no record that matches the speech recognition result, the processing is terminated. On the other hand, if there is a matching record, the specifying unit 12A specifies the content of the matching record as the target content and transmits it to the display control unit 12B. The display control unit 12B then determines whether or not the target content is being displayed in the virtual space (step S24). If the target content is being displayed, the display control unit 12B acquires the line-of-sight position of the user (step S25) and rearranges the content based on the line-of-sight position (step S26).
  • On the other hand, if the target content is not being displayed, the display control unit 12B acquires the line-of-sight position of the user (step S27) and renders the target content at the line-of-sight position so that the target content is displayed there (step S28).
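  • As a compact illustration of the flow of steps S21 to S28 described above, the following sketch wires the steps together. The callable parameters stand in for the voice recognition unit 11B, the correspondence table 13, and the display control unit 12B; their names are placeholders for illustration, not APIs defined in this publication.

```python
from typing import Callable, Optional, Tuple

Position = Tuple[float, float, float]  # (x_e, y_e, z_e): the user's line-of-sight position

def handle_display_instruction(
    recognized_text: str,                        # S21: text produced by the voice recognition unit 11B
    lookup: Callable[[str], Optional[str]],      # S22/S23: lookup in the correspondence table 13
    is_displayed: Callable[[str], bool],         # S24: is the target content already in the virtual space?
    get_gaze_position: Callable[[], Position],   # S25/S27: acquire the line-of-sight position
    rearrange: Callable[[str, Position], None],  # S26: rearrange contents based on the gaze position
    render_at: Callable[[str, Position], None],  # S28: render the target content at the gaze position
) -> None:
    content_id = lookup(recognized_text)
    if content_id is None:
        return                                   # no matching record: end the processing
    gaze = get_gaze_position()
    if is_displayed(content_id):
        rearrange(content_id, gaze)
    else:
        render_at(content_id, gaze)
```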
  • the initial state is a state in which four types of animal content are arranged with the user at the center in the virtual space.
  • the detection unit 11 recognizes the voice of the tag "FOX” by voice recognition, obtains the text "FOX” as a voice recognition result, and transfers it to the identification unit 12A.
  • The specifying unit 12A compares the text "FOX" with the tag of each record stored in the correspondence table 13 and, as shown in FIG. 7(b), determines that there is a matching record in the correspondence table 13.
  • If YES in step S23, the content of the matching record (the fox content with the content ID "ID-JSN3G49") is specified as the target content and transmitted to the display control unit 12B. Further, the display control unit 12B determines that the target content is being displayed in the virtual space (step S24), acquires the user's line-of-sight position ((x_e, y_e, z_e) shown in FIG. 7(a)) (step S25), and rearranges the content based on the line-of-sight position according to a method described later (step S26). As an example of rearrangement, FIG. 7(c) shows a case in which only the fox content, which is the target content, is simply relocated to the user's line-of-sight position (x_e, y_e, z_e).
  • Here, the initial state before the rearrangement is a state in which the four types of animal content are arranged around the user in the virtual space shown in FIG. 7(a).
  • FIG. 8(a) shows a rearrangement pattern 1 that simply superimposes the target content.
  • This rearrangement pattern 1 is a pattern that simply relocates only the target content (here, the fox content) to the user's line-of-sight position (x_e, y_e, z_e), as in the example of FIG. 7(c).
  • FIG. 8(b) shows rearrangement pattern 2 for temporarily arranging the target content.
  • In this rearrangement pattern 2, only the target content (here, the fox content) is temporarily moved to the user's line-of-sight position (x_e, y_e, z_e), a copy of the target content is generated, and the copy is temporarily placed at the original position (x_t, y_t, z_t). The copy of the target content is displayed, for example, translucently and cannot be operated.
  • Thereafter, the target content displayed at the user's line-of-sight position (x_e, y_e, z_e) is deleted, the content displayed at the original position (x_t, y_t, z_t) is returned to its original display form, and the operations, changes, and the like made to the target content while it was displayed at the user's line-of-sight position (x_e, y_e, z_e) are reflected in it.
  • FIG. 8(c) shows a rearrangement pattern 3 that simply superimposes the target content and maintains the relative positional relationship of all the content in the azimuth direction.
  • In this rearrangement pattern 3, all of the displayed contents (hereinafter referred to as "display contents"; here, the four types of animal content) are moved while the relative positional relationship between the contents in the azimuth direction is maintained, and only the target content (here, the fox content) is further translated to the user's line-of-sight position (x_e, y_e, z_e).
  • FIG. 8(d) shows a rearrangement pattern 4 that maintains the relative positional relationship between all displayed contents.
  • In this rearrangement pattern 4, all the display contents are moved while the relative positional relationship between them is maintained, so that the target content (here, the fox content) is moved to the user's line-of-sight position (x_e, y_e, z_e).
  • the first algorithm is a procedure that rotates the target content around the Z-axis in Cartesian coordinates and translates the target content to the user's line-of-sight position.
  • As shown in FIG. 10, first, three-dimensional polar coordinates are obtained from the rectangular coordinates of the target content and of the user's line-of-sight position (process (1)).
  • FIG. 9 shows the general relationship between orthogonal coordinates (x, y, z) and three-dimensional polar coordinates (spherical coordinates) (r, θ, φ). Using the conversion of equation (1), the three-dimensional polar coordinates (r_t, θ_t, φ_t) of the target content and the three-dimensional polar coordinates (r_e, θ_e, φ_e) of the user's line-of-sight position are obtained. It should be noted that only the azimuth angle of the three-dimensional polar coordinates may be obtained.
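  • As a concrete illustration of the conversion in FIG. 9, the following is a minimal sketch of the rectangular-to-spherical conversion and its inverse, assuming the common convention in which r is the radius, θ is the polar angle measured from the Z-axis, and φ is the azimuth angle in the X-Y plane; equation (1) in the publication may use a different angle convention.

```python
import math

def cartesian_to_spherical(x: float, y: float, z: float):
    """(x, y, z) -> (r, theta, phi); theta: polar angle from +Z, phi: azimuth in the X-Y plane."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r > 0 else 0.0
    phi = math.atan2(y, x)
    return r, theta, phi

def spherical_to_cartesian(r: float, theta: float, phi: float):
    """(r, theta, phi) -> (x, y, z), the inverse of the conversion above."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

# Example: spherical coordinates of a content placed at (1.0, 1.0, 0.0)
print(cartesian_to_spherical(1.0, 1.0, 0.0))  # -> (~1.414, pi/2, pi/4)
```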
  • Next, the target content is rotated about the Z-axis in the orthogonal coordinates by the azimuth-angle difference according to the following equation (3) (process (3)). That is, the azimuth angle with respect to the user is adjusted.
  • the first term on the right side of Equation (3) is a rotation matrix about the Z-axis.
  • Then, the target content is translated to the user's line-of-sight position according to the following equation (4) (process (4)). The first term on the right side of equation (4) is a translation matrix.
  • In the calculation example of FIG. 11, substituting the orthogonal coordinates (x_e, y_e, z_e) of the user's line-of-sight position into the above equation (1) gives the three-dimensional polar coordinates (r_e, θ_e, φ_e) of the line-of-sight position as follows.
  • Next, the target content is rotated about the Z-axis in the orthogonal coordinates by the azimuth-angle difference as follows (process (3)).
  • Then, the target content is translated to the line-of-sight position of the user as follows (process (4)).
  • As a result, the Cartesian coordinates (x'_t, y'_t, z'_t) of the target content become equal to the Cartesian coordinates (x_e, y_e, z_e) of the user's line-of-sight position.
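  • The following sketch illustrates the gist of the first algorithm under the same angle convention: compute the azimuth difference between the target content and the user's line-of-sight position, rotate the content about the Z-axis by that difference (corresponding to equation (3)), and then translate it onto the line-of-sight position (corresponding to equation (4)). This is an interpretation of the procedure, not the publication's exact formulas.

```python
import numpy as np

def rotate_about_z(p: np.ndarray, angle: float) -> np.ndarray:
    """Rotate a point about the Z-axis (the role of the rotation matrix in equation (3))."""
    c, s = np.cos(angle), np.sin(angle)
    rz = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
    return rz @ p

def relocate_to_gaze(target: np.ndarray, gaze: np.ndarray) -> np.ndarray:
    """First algorithm (sketch): adjust azimuth about Z, then translate onto the gaze position."""
    phi_t = np.arctan2(target[1], target[0])          # azimuth of the target content
    phi_e = np.arctan2(gaze[1], gaze[0])              # azimuth of the line-of-sight position
    rotated = rotate_about_z(target, phi_e - phi_t)   # process (3): azimuth adjustment
    return rotated + (gaze - rotated)                 # process (4): translation; result equals the gaze position

# Rearrangement pattern 1: only the target (fox) content moves to (x_e, y_e, z_e).
target_xyz = np.array([2.0, 0.0, 1.0])
gaze_xyz = np.array([0.0, 1.5, 1.2])
print(relocate_to_gaze(target_xyz, gaze_xyz))  # -> [0.  1.5 1.2]
```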
  • the second algorithm is a procedure that rotates the target content about the Z-axis in Cartesian coordinates, adjusts the polar angle with respect to the user, and translates the target content to the user's gaze position.
  • first, three-dimensional polar coordinates are obtained from orthogonal coordinates for each of the target content and the user's line-of-sight position (process (1)). This process is similar to the first algorithm described above.
  • Next, the target content is rotated about the Z-axis in the orthogonal coordinates by the azimuth-angle difference according to the following equation (5) (process (3)). That is, the azimuth angle with respect to the user is adjusted.
  • the first term on the right side of Equation (5) is a rotation matrix about the Z-axis.
  • The first term on the right side of equation (6) is the Rodrigues rotation matrix.
  • By applying equation (7), the orthogonal coordinates (x'_t, y'_t, z'_t) of the target content become the orthogonal coordinates (x_e, y_e, z_e) of the user's line-of-sight position.
  • the first term on the right side of Equation (7) corresponds to the following.
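  • The polar-angle adjustment above relies on the Rodrigues rotation matrix of equation (6). Below is a standard, self-contained implementation of Rodrigues' rotation formula for rotating a point about an arbitrary unit axis; the choice of axis and the adjustment angle in the usage example are illustrative assumptions, not values taken from the publication.

```python
import numpy as np

def rodrigues_matrix(axis: np.ndarray, angle: float) -> np.ndarray:
    """Rodrigues rotation matrix: rotation by `angle` about the unit vector `axis`."""
    axis = axis / np.linalg.norm(axis)
    k = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])       # cross-product (skew-symmetric) matrix
    return np.eye(3) + np.sin(angle) * k + (1.0 - np.cos(angle)) * (k @ k)

# Example: adjust the polar angle of a point by rotating it about a horizontal axis that is
# perpendicular to the point's azimuth direction (an illustrative choice, not from the publication).
p = np.array([1.0, 1.0, 0.5])
phi = np.arctan2(p[1], p[0])
axis = np.array([-np.sin(phi), np.cos(phi), 0.0])  # keeps the azimuth, changes the polar angle
delta_theta = np.deg2rad(15.0)                     # hypothetical polar-angle adjustment
print(rodrigues_matrix(axis, delta_theta) @ p)
```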
  • In rearrangement pattern 3, each of the display contents is rotated about the Z-axis in the orthogonal coordinates by the azimuth-angle difference according to the following equation (8) (process (3)). That is, the azimuth angle with respect to the user is adjusted for each display content.
  • n is a subscript for specifying each display content.
  • Then, the target content is translated to the user's line-of-sight position using equation (4) described above, thereby relocating the target content to the user's line-of-sight position (process (4)).
  • the rearrangement method may employ the method of the second algorithm instead of the method of the first algorithm.
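  • To show how rearrangement pattern 3 combines these steps, the following sketch rotates every display content about the Z-axis by the same azimuth difference (in the spirit of equation (8)), which preserves their relative azimuthal arrangement, and then moves only the target content onto the line-of-sight position. It is a simplified interpretation rather than the exact formulas, and the example positions are made up.

```python
import numpy as np

def rot_z(p: np.ndarray, a: float) -> np.ndarray:
    """Rotation about the Z-axis (same role as equation (8))."""
    c, s = np.cos(a), np.sin(a)
    return np.array([c * p[0] - s * p[1], s * p[0] + c * p[1], p[2]])

def rearrange_pattern3(contents, target_id: str, gaze: np.ndarray):
    """Sketch of pattern 3: rotate every display content by the same azimuth difference,
    preserving their relative azimuthal arrangement, then move only the target to the gaze."""
    target = contents[target_id]
    dphi = np.arctan2(gaze[1], gaze[0]) - np.arctan2(target[1], target[0])
    out = {cid: rot_z(p, dphi) for cid, p in contents.items()}  # process (3) for each content n
    out[target_id] = gaze.copy()                                # process (4): only the target is translated
    return out

# Example with two of the animal contents of FIG. 8 (positions are made up for illustration).
contents = {"fox": np.array([2.0, 0.0, 1.0]), "alpaca": np.array([0.0, 2.0, 1.0])}
print(rearrange_pattern3(contents, "fox", np.array([0.0, 1.5, 1.2])))
```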
  • Equation (10) is used to provisionally calculate the post-rearrangement azimuth angle of the display content other than the target content (process (4)).
  • m is a subscript for specifying content other than the target content.
  • the post-rearrangement polar angles of the contents other than the target contents among the displayed contents are tentatively calculated as follows (processing (5) shown in FIGS. 14 and 15). However, it is limited to the following range.
  • Depending on the range into which the post-rearrangement azimuth angle φ'_m(pre) falls, the post-rearrangement polar angle θ'_m(pre) is provisionally obtained from θ_m by adding or subtracting a polar-angle offset, according to the corresponding one of equations (11) to (16).
  • After determining the post-rearrangement polar coordinates for the contents other than the target content as described above, the user may preset a threshold value TH as follows, and the adjusted radius r'_m may be obtained by adjusting the radius r that defines the display area of the content in the depth direction. In this case, it is possible to prevent the radius r from becoming extremely small or extremely large due to the amount of movement of the target content. However, care must be taken because this adjustment can be regarded as a process that creates a shift in the depth direction and breaks the relative positional relationship.
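  • As a hedged illustration of the depth-direction adjustment described above, the sketch below simply clamps each adjusted radius to within the preset threshold TH of the original radius. The exact adjustment formula is not reproduced in this text, so the clamping rule here is only an assumption about its intent (preventing the radius from becoming extremely small or extremely large).

```python
def adjust_radius(r_original: float, r_candidate: float, th: float) -> float:
    """Keep the adjusted radius r'_m within +/- TH of the original radius (illustrative assumption)."""
    return max(r_original - th, min(r_original + th, r_candidate))

print(adjust_radius(2.0, 0.3, th=0.5))  # -> 1.5 (prevents the radius from becoming extremely small)
print(adjust_radius(2.0, 5.0, th=0.5))  # -> 2.5 (prevents the radius from becoming extremely large)
```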
  • The coordinates after the azimuth-angle and polar-angle adjustments obtained by the above processing are translated by applying the following equation (28), and the post-translation orthogonal coordinates (x'_m, y'_m, z'_m) are obtained (process (7) iv).
  • In this way, the post-rearrangement Cartesian coordinates (x'_m, y'_m, z'_m) of all the contents other than the target content are obtained.
  • As described above, the rearrangement position of the target content in the virtual space can be determined according to a predetermined rearrangement pattern (any of rearrangement patterns 1 to 4) based on the line-of-sight position, and the target content can be rearranged to and displayed at the determined rearrangement position.
  • With rearrangement pattern 3, which rearranges all the display contents while maintaining their relative positional relationship in the azimuth direction, the relative positional relationship in the azimuth-angle direction is maintained for all display contents, so the layout that the user is particular about can be prevented from collapsing significantly.
  • With rearrangement pattern 4, which rearranges all the display contents while maintaining the relative positional relationship between them, the relative positional relationship between all display contents can be maintained, and collapse of the user's preferred layout can be avoided.
  • (Modification) How to deal with overlap in content display will be explained using FIGS. 18(a) to 18(d). If there is a possibility that the target content and other content (non-target content) overlap in the virtual space, the overlap can be avoided and the inconvenience of the target content becoming difficult to see can be resolved, for example, by taking the following measures.
  • For example, as shown in FIG. 18(a), the non-target content may be made translucent.
  • a method of making the target content relatively easy to see by performing processing such as adjusting the value of r may be adopted.
  • processing such as adjusting the brilliance of the outline of the target content may be performed to make the target content stand out and be easier to see.
  • the target content may be displayed on a pop-up virtual screen as shown in FIG. 18(d). In this way, various techniques can be used to make target content relatively easy to see.
  • Note that the display on a pop-up virtual screen shown in FIG. 18(d) is limited to rearrangement pattern 2, and the processing for adjusting the value of the radius r in spherical coordinates is limited to rearrangement patterns 1 to 3.
  • the user may specify the target content by inputting a pattern described later using a controller.
  • a hand gesture may be used to specify the target content.
  • Examples of the former, patterns input using the controller, include, as shown in FIG. 19(a), a pattern in which the button 21 is pressed n times consecutively and a pattern in which a specific figure (for example, a circle) is drawn in the virtual space using the laser pointer function of the controller 20.
  • the controller pattern as described above is registered in the correspondence table 13 in association with the content ID of the corresponding content.
  • Hand gestures for designating target content by the latter method include, for example, those shown in FIG. 19(b).
  • The hand gesture as described above is registered in the correspondence table 13 in association with the content ID of the corresponding content. It should be noted that when target content is designated by a hand gesture, as described above, the detection unit 11 shown in FIG. 1 further includes a camera 11C and a gesture recognition unit 11D for recognizing gestures.
  • In this way, instead of uttering the tag of the target content, the user can specify the target content and instruct its display by a method such as pattern input using the controller or a hand gesture, so display instructions for desired content can easily be given by various methods.
  • In the above embodiment, an example has been shown in which the terminal 10 has a correspondence table 13 in which content specifying information (for example, a tag) is stored in advance in association with a content ID, and the specifying unit 12A specifies the target content from the content ID linked to the content specifying information corresponding to the display instruction operation detected by the detection unit 11 by referring to the correspondence table 13.
  • However, the correspondence table 13 is not an essential requirement. For example, by using the content ID itself as the tag serving as content specifying information, the correspondence table 13 becomes unnecessary, and the configuration shown in FIG. 1(a) may be adopted so that the specifying unit 12A can specify the target content immediately from the content specifying information corresponding to the detected display instruction operation. According to the configuration of FIG. 1(a), the device configuration can be simplified.
  • Also, in the above embodiment, the terminal 10 includes all of the illustrated components (the detection unit 11, the specifying unit 12A, the display control unit 12B, and so on); however, some of these functional units may be installed in a server, and the terminal 10 may request the server to perform processing such as specifying the target content. In that case, the content display control system according to the present disclosure is understood to have a configuration including the terminal 10 and the server.
  • In another modification, the display control unit 12B acquires the orientation of the user's face using existing face recognition technology or the like, and controls content display on the head-mounted display so that the target content is displayed at a display position in front of the user's face in the virtual space.
  • As the "display position in front of the face", for example, the position of the intersection of the omnispherical surface with the extension line to the front of the face in the 360-degree omnidirectional virtual space centered on the user is adopted.
  • Alternatively, a position at a predetermined distance from the face on the forward extension of the face may be adopted. The "predetermined distance" (distance from the face) can be adjusted appropriately by the user.
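  • The "display position in front of the face" can be computed directly from the head pose; the sketch below assumes the face orientation is available as a forward vector and places the content at a predetermined, user-adjustable distance along it. The variable names and the default distance are illustrative assumptions.

```python
import numpy as np

def display_position_in_front_of_face(head_pos: np.ndarray,
                                      face_forward: np.ndarray,
                                      distance: float = 1.5) -> np.ndarray:
    """Place the target content at a predetermined (user-adjustable) distance along the face direction."""
    forward = face_forward / np.linalg.norm(face_forward)
    return head_pos + distance * forward

# Example: user at the origin, facing along +X; the content is placed 1.5 units ahead.
print(display_position_in_front_of_face(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])))
```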
  • the processing shown in FIG. 21 is executed instead of the "display control to the user's gaze position" in FIG. 6 described above.
  • the same processing as in FIG. 6 is assigned the same number, and redundant description is omitted.
  • If the target content is being displayed, the display control unit 12B acquires the orientation of the user's face using existing face recognition technology or the like (step S25A), and content rearrangement, which will be described later, is executed based on the display position in front of the face (step S26A).
  • On the other hand, if the target content is not being displayed, the display control unit 12B acquires the orientation of the user's face (step S27A) and renders the target content at the display position in front of the face so that the target content is displayed at that position (step S28A).
  • the initial state is a state in which three types of animal content are arranged with the user at the center in the virtual space.
  • the detection unit 11 recognizes the voice of the tag "ALPACA” in step S21 of FIG.
  • the text "ALPACA” is obtained as a speech recognition result and transferred to the identification unit 12A.
  • step S22 the specifying unit 12A compares the text "ALPACA" with the tag of each record stored in the correspondence table 13, and determines that there is a record matching the text "ALPACA" in the correspondence table 13 (step S22). YES in S23), the content of the matching record, Alpaca, is specified as the target content and transmitted to the display control unit 12B. Further, the display control unit 12B determines that the target content is being displayed in the virtual space (step S24), acquires the orientation of the user's face using existing face recognition technology or the like (step S25A), The content rearrangement is executed with reference to the forward display position ((x e , y e , z e ) shown in FIG.
  • FIG. 22(b) shows an example of rearranging only the target content (the alpaca content) to the display position (x_e, y_e, z_e) in front of the face.
  • At this point, the user's line of sight is still directed to the position of the target content before the movement, but the target content first moves to the display position in front of the face.
  • the user can view the target content at a position that is easy for the user to see (here, in front of the face) by directing the line of sight forward.
  • On the other hand, if the target content is not being displayed, the display control unit 12B acquires the orientation of the user's face using existing face recognition technology or the like (step S27A), and renders the target content at the display position in front of the face so that the target content is displayed in front of the face (step S28A).
  • the target content can be displayed in front of the user's face regardless of the line of sight of the user.
  • Each functional block may be implemented using one physically or logically coupled device, or using two or more physically or logically separated devices that are connected directly or indirectly (for example, by wire or wirelessly).
  • a functional block may be implemented by combining software in the one device or the plurality of devices.
  • Functions include, but are not limited to, judging, determining, calculating, computing, processing, deriving, investigating, searching, confirming, receiving, transmitting, outputting, accessing, resolving, selecting, choosing, establishing, comparing, assuming, expecting, regarding, broadcasting, notifying, communicating, forwarding, configuring, reconfiguring, allocating, mapping, and assigning.
  • For example, a functional block (component) that performs the transmission function is called a transmitting unit or a transmitter.
  • the implementation method is not particularly limited.
  • FIG. 20 is a diagram illustrating a hardware configuration example of terminal 10 according to an embodiment of the present disclosure.
  • the terminal 10 described above may be physically configured as a computer device including a processor 1001, a memory 1002, a storage 1003, a communication device 1004, an input device 1005, an output device 1006, a bus 1007, and the like.
  • the term "apparatus” can be read as a circuit, device, unit, or the like.
  • the hardware configuration of the terminal 10 may be configured to include one or more of each device shown in the figure, or may be configured without some of the devices.
  • Each function of the terminal 10 is realized by causing the processor 1001 to perform calculations and by controlling communication by the communication device 1004 and at least one of reading and writing of data in the memory 1002 and the storage 1003.
  • The processor 1001, for example, operates an operating system to control the entire computer.
  • the processor 1001 may be configured by a central processing unit (CPU) including interfaces with peripheral devices, terminals, arithmetic units, registers, and the like.
  • the processor 1001 reads programs (program codes), software modules, data, etc. from at least one of the storage 1003 and the communication device 1004 to the memory 1002, and executes various processes according to them.
  • As the program, a program that causes a computer to execute at least part of the operations described in the above embodiments is used.
  • The processor 1001 may be implemented by one or more chips. Note that the program may be transmitted from a network via an electric communication line.
  • The memory 1002 is a computer-readable recording medium, and may be composed of at least one of, for example, a ROM (Read Only Memory), an EPROM (Erasable Programmable ROM), an EEPROM (Electrically Erasable Programmable ROM), and a RAM (Random Access Memory).
  • the memory 1002 may also be called a register, cache, main memory (main storage device), or the like.
  • the memory 1002 can store executable programs (program code), software modules, etc. for implementing a wireless communication method according to an embodiment of the present disclosure.
  • The storage 1003 is a computer-readable recording medium, and may be composed of at least one of, for example, an optical disc such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disc, a magneto-optical disc (for example, a compact disc, a digital versatile disc, or a Blu-ray disc), a smart card, a flash memory (for example, a card, a stick, or a key drive), a floppy disk, and a magnetic strip.
  • Storage 1003 may also be called an auxiliary storage device.
  • the storage medium described above may be, for example, a database, server, or other suitable medium including at least one of memory 1002 and storage 1003 .
  • the communication device 1004 is hardware (transmitting/receiving device) for communicating between computers via at least one of a wired network and a wireless network, and is also called a network device, a network controller, a network card, a communication module, or the like.
  • the input device 1005 is an input device (for example, keyboard, mouse, microphone, switch, button, sensor, etc.) that receives input from the outside.
  • the output device 1006 is an output device (eg, display, speaker, LED lamp, etc.) that outputs to the outside. Note that the input device 1005 and the output device 1006 may be integrated (for example, a touch panel).
  • Each device such as the processor 1001 and the memory 1002 is connected by a bus 1007 for communicating information.
  • the bus 1007 may be configured using a single bus, or may be configured using different buses between devices.
  • Notification of predetermined information is not limited to being performed explicitly, and may be performed implicitly (for example, by not notifying the predetermined information).
  • Input/output information may be stored in a specific location (for example, memory) or managed using a management table. Input/output information and the like can be overwritten, updated, or appended. The output information and the like may be deleted. The entered information and the like may be transmitted to another device.
  • The statement "A and B are different" may mean "A and B are different from each other."
  • The term may also mean that "A and B are each different from C."
  • Terms such as “separate,” “coupled,” etc. may also be interpreted in the same manner as “different.”

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This content display control system 1 comprises: AR glasses 30 as a head-mounted display which is worn by a user and displays content in a virtual space; a detection unit 11 for detecting a display instruction operation that is reproduced by the user and is associated with target content; and a control unit 12 for controlling content display on the AR glasses 30 such that the target content associated with the detected display instruction operation is displayed at a position suitable for the user's visual recognition in a virtual space.

Description

Content display control system
The present disclosure relates to a content display control system that controls content display on a head-mounted display such as AR (Augmented Reality) glasses.
Conventionally, techniques are known for displaying various contents (for example, browser screens, display screens for various information provided by cloud services, image display screens, and video display screens) in a virtual space for users wearing head-mounted displays such as AR glasses. For example, it is possible to arrange various contents within a display area, using a virtual space centered on the user as the display area.
On the other hand, a method has been proposed in which the movement of the user's eyeballs is detected so that the user can easily select a desired operation (see Patent Document 1).
Japanese Patent Application Laid-Open No. 2018-042004
However, no method has been established for the user to easily select desired content to focus on and relocate that content to an easy-to-see position in the display area (for example, in front of the user). Therefore, when there are many contents arranged in the display area, it takes time and effort for the user to search for the desired content. As a method for easily selecting desired content, it is conceivable that the user turns his or her face toward the desired content; in actual usage scenes, however, turning the face in various directions is often avoided for reasons such as public manners. A technique for relocating desired content to a position that is easy for the user to see is therefore keenly awaited.
The present disclosure has been made to solve the above problems, and aims to arrange desired content at a position in the virtual space that is easy for the user to see, without taking time and effort.
A content display control system according to the present disclosure includes: a head-mounted display worn by a user to display content in a virtual space; a detection unit that detects a display instruction operation reproduced by the user and associated with target content; and a control unit that controls content display on the head-mounted display such that the target content associated with the display instruction operation detected by the detection unit is displayed at the line-of-sight position of the user in the virtual space. The "line-of-sight position of the user" here can be (1) the focal position of the user's line of sight, (2) the intersection of the omnispherical surface with the extension line passing through the reticle in the 360-degree omnidirectional virtual space centered on the user, or (3) a position pointed to by the user using an operating device (controller).
In the above content display control system, when the detection unit detects a display instruction operation associated with target content and reproduced by the user wearing the head-mounted display, the control unit controls content display on the head-mounted display so that the target content associated with the detected display instruction operation is displayed at the line-of-sight position of the user in the virtual space. As a result, desired content can be arranged at a position in the virtual space that is easy for the user to see, without taking time and effort.
A content display control system according to the present disclosure may also include: a head-mounted display worn by a user to display content in a virtual space; a detection unit that detects a display instruction operation reproduced by the user and associated with target content; and a control unit that controls content display on the head-mounted display such that the target content associated with the display instruction operation detected by the detection unit is displayed at a position suitable for visual recognition by the user in the virtual space. In addition to the user's line-of-sight position described above, the "position suitable for visual recognition by the user" includes a position in front of the user's face that does not depend on the line of sight.
According to the present disclosure, desired content can be arranged at a position in the virtual space that is easy for the user to see, without taking time and effort.
FIG. 1(a) is a configuration diagram of a content display control system according to an embodiment of the invention, and FIG. 1(b) is a configuration diagram of a content display control system further including a correspondence table or the like in which content specifying information is stored in association with content IDs.
FIG. 2(a) is a diagram showing an example of the controller, and FIG. 2(b) is a diagram showing an example of the user interface of a terminal substituting for the controller.
FIG. 3 is a flow diagram showing a rough flow of the processing executed in the content display control system.
FIG. 4(a) is a diagram showing an example of tagging and registering content in the virtual space, and FIG. 4(b) is a diagram showing an example of tagging and registering content from a dedicated management system.
FIG. 5 is a diagram showing an example of data registered in the correspondence table.
FIG. 6 is a flow diagram showing processing related to display control of target content at the line-of-sight position of the user.
FIG. 7(a) is a diagram showing the initial state when the tag of the target content is uttered, FIG. 7(b) is a diagram showing a comparison between the speech recognition result and the correspondence table, and FIG. 7(c) is a diagram showing the state after rearrangement.
FIG. 8(a) is a diagram showing rearrangement pattern 1 (simple superimposition of the target content), FIG. 8(b) is a diagram showing rearrangement pattern 2 (temporary placement of the target content), FIG. 8(c) is a diagram showing rearrangement pattern 3 (simple superimposition of the target content while holding the relative positional relationship of all contents in the azimuth direction), and FIG. 8(d) is a diagram showing rearrangement pattern 4 (holding the relative positional relationship of all contents).
FIG. 9 is a diagram for explaining the conversion between rectangular coordinates and three-dimensional polar coordinates (spherical coordinates).
FIG. 10 is a diagram for explaining the first algorithm for rearranging the target content to the line-of-sight position.
FIG. 11 is a diagram showing a calculation example according to the first algorithm in rearrangement pattern 1.
FIG. 12 is a diagram for explaining the second algorithm for rearranging the target content to the line-of-sight position.
FIG. 13 is a diagram for explaining the algorithm for maintaining the relative positional relationship in the azimuth direction in rearrangement pattern 3.
FIGS. 14 to 17 are diagrams for explaining the algorithm for maintaining the relative positional relationships in rearrangement pattern 4.
FIG. 18(a) is a diagram showing semi-transparency of non-target content, FIG. 18(b) is a diagram showing sharpness adjustment of non-target content, FIG. 18(c) is a diagram showing glow adjustment of the outline of the target content, and FIG. 18(d) is a diagram showing display of the target content on a pop-up virtual screen.
FIG. 19(a) is a diagram showing an example of correspondence table data when target content is designated by pattern input using the controller, and FIG. 19(b) is a diagram showing an example of correspondence table data when target content is designated by a hand gesture.
FIG. 20 is a diagram showing a hardware configuration example of the terminal.
FIG. 21 is a flow diagram showing processing related to display control of target content in front of the user's face.
FIG. 22(a) is a diagram showing the state when the tag of the target content is uttered while the line of sight is directed in a direction different from the direction of the face, FIG. 22(b) is a diagram showing the state in which the target content is displayed at the display position in front of the face, and FIG. 22(c) is a diagram showing the state when the line of sight is turned to the target content in front of the face.
A content display control system according to an embodiment of the invention will be described below with reference to the drawings.
(Configuration of content display control system)
As shown in FIG. 1(a), the content display control system 1 according to this embodiment includes a controller 20 operated by a user, a terminal 10 carried by the user, and AR glasses 30 worn by the user.
The controller 20 has, for example, a configuration including a button 21, a touch pad 22, a trigger 23, and a bumper 24, shown in FIG. 2(a). Note that the controller 20 is not an essential requirement of the content display control system 1 and can be replaced by the user interface of the terminal 10; for example, the user interface of the terminal 10 shown in FIG. 2(b), which includes a touch pad 16 consisting of a total of five buttons (center, up, down, left, and right) and buttons 17 and 18, can substitute for it.
The AR glasses 30 correspond to a head-mounted display that displays content in a virtual space; they display content in a 360-degree omnidirectional virtual space centered on the user wearing them. In this embodiment, AR glasses are exemplified as the head-mounted display, but a head-mounted display other than AR glasses (for example, VR (Virtual Reality) goggles) may be employed.
 端末10は、例えば、スマートフォン、携帯電話、ノート型パーソナルコンピュータなどに該当し、図1(a)に示す検知部11および制御部12を含む。以下、各部の機能、動作等について説明する。 The terminal 10 corresponds to, for example, a smart phone, a mobile phone, a notebook personal computer, etc., and includes a detection unit 11 and a control unit 12 shown in FIG. 1(a). The function, operation, etc. of each unit will be described below.
 検知部11は、ユーザにより再現される、対象コンテンツに紐づく表示指示動作を検知する機能部である。本実施形態では、一例として、コントローラ20を用いて所定動作を行いながら、対象コンテンツを示す文字列(タグ)を発声するという表示指示動作を説明する。ここでのコントローラ20を用いて所定動作については、後に例示する。 The detection unit 11 is a functional unit that detects a display instruction operation that is reproduced by the user and is associated with the target content. In the present embodiment, as an example, a display instruction operation of uttering a character string (tag) indicating target content while performing a predetermined operation using the controller 20 will be described. A predetermined operation using the controller 20 here will be exemplified later.
The control unit 12 is a functional unit that controls content display on the AR glasses 30 so that the target content associated with the display instruction action detected by the detection unit 11 is displayed at the user's line-of-sight position in the virtual space. To provide this function, the control unit 12 includes a specifying unit 12A and a display control unit 12B, described below.
The specifying unit 12A is a functional unit that specifies the target content associated with the display instruction action detected by the detection unit 11.
The display control unit 12B is a functional unit that acquires the user's line-of-sight position and controls content display on the AR glasses 30 so that the target content specified by the specifying unit 12A is displayed at the user's line-of-sight position in the virtual space. As described above, the "user's line-of-sight position" may be (1) the focal position of the user's line of sight, (2) the intersection of the omnidirectional sphere with an extension line passing through a reticle in the 360-degree omnidirectional virtual space centered on the user, or (3) a position pointed at by the user with the controller 20. When (3) is adopted, a pointing operation for the content display position using the controller 20 is added to the user's display instruction action.
In this embodiment, as an example, a character string (tag) indicating the target content is linked in advance to the target content as "content specifying information" for specifying the target content. Also in this embodiment, the user designates the target content by uttering the tag, which is a character string indicating the target content, while performing a predetermined operation, described later, with the controller 20. The detection unit 11 therefore includes a microphone 11A that picks up the user's utterance and a speech recognition unit 11B that recognizes the picked-up speech and converts it into text. When the detection unit 11 detects a display instruction action such as "uttering a character string (tag) indicating the target content while performing a predetermined operation with the controller 20" (that is, a display instruction action corresponding to the content specifying information linked to the target content), the specifying unit 12A specifies the target content linked to the content specifying information (the character string (tag)) corresponding to the detected display instruction action.
As one of the modifications described later, the target content may be designated by a gesture of a part of the user's body (for example, the user's hand); in that case, the detection unit 11 includes a camera 11C, indicated by broken lines in FIG. 1(a), and a gesture recognition unit 11D that recognizes gestures from the moving image data obtained by imaging. Examples of the predetermined operation using the controller 20 include pressing the button 21 or the bumper 24 n times in succession, long-pressing the button 21 or the bumper 24, pulling the trigger 23 and holding it for a certain period of time, tapping a specific portion (center, up, down, left, or right) of the touch pad 22 n times in succession, and long-pressing a specific portion (center, up, down, left, or right) of the touch pad 22. The "long-press" operation can be given various variations by combining a plurality of operations as described above, for example starting a long press after tapping n times in succession.
In this embodiment, as shown in FIG. 1(b), the terminal 10 may further include a correspondence table 13 that stores, for various contents, content specifying information linked to a content ID for identifying the content, and a registration unit 14 that registers content specifying information in the correspondence table 13 in association with a content ID through a predetermined registration process performed by the user. The correspondence table 13 is a content correspondence information database that stores content specifying information (for example, a tag consisting of a character string, a controller operation pattern, or a hand gesture) in association with a content ID; as an example, tags linked to various content IDs are registered as shown in FIG. 5. In the following, processing operations, effects, and so on are described assuming that the terminal 10 has the configuration shown in FIG. 1(b).
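The correspondence table 13 and the registration unit 14 can be pictured with a minimal sketch such as the following Python fragment; the class and method names are illustrative assumptions rather than part of the embodiment, and only the tag-to-content-ID mapping mirrors FIG. 5.

```python
# Minimal sketch of the correspondence table 13 (tag -> content ID).
# Names and structure are illustrative assumptions, not the actual implementation.

class CorrespondenceTable:
    def __init__(self):
        # e.g. {"ALPACA": "ID-XYZ1245", "FOX": "ID-JSN3G49"}
        self._tag_to_content_id = {}

    def register(self, tag: str, content_id: str) -> None:
        """Registration unit 14: link a tag to a content ID (step S1)."""
        self._tag_to_content_id[tag.upper()] = content_id

    def lookup(self, recognized_text: str):
        """Specifying unit 12A: return the matching content ID, or None."""
        return self._tag_to_content_id.get(recognized_text.upper())


table = CorrespondenceTable()
table.register("ALPACA", "ID-XYZ1245")   # example entries from FIG. 5
table.register("FOX", "ID-JSN3G49")
print(table.lookup("FOX"))               # -> ID-JSN3G49
```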
(Processing executed in the content display control system)
As shown in FIG. 3, the processing executed in the content display control system 1 includes tagging registration of content (step S1) and display control of the target content at the user's line-of-sight position (step S2).
Of these, the tagging registration in step S1 is performed before the processing of step S2, which is the characteristic feature of the present disclosure. For example, as shown in FIG. 4(a), while specific content is displayed in the virtual space, the user operates the controller 20 to open the menu of that content, selects "tagging registration", and inputs the tag (character string) of the content, thereby performing tagging registration for the content. For example, by inputting "# ALPACA" for the alpaca displayed in the virtual space (content ID: ID-XYZ1245), the tag "ALPACA" linked to the alpaca (content ID: ID-XYZ1245) is registered in the correspondence table 13 as shown in FIG. 5. Inputting "#" before the tag is merely an example of a tag input rule and is not an essential requirement. As another mode, as shown in FIG. 4(b), tagging registration of content may be performed from a dedicated management system running on a personal computer.
Next, the display control of the target content at the user's line-of-sight position (step S2 in FIG. 3) will be described with reference to FIG. 6.
Execution of the processing in FIG. 6 starts when the user utters the tag, which is a character string indicating the target content, while performing the predetermined operation with the controller 20.
The detection unit 11 picks up the user's utterance with the microphone 11A, and the speech recognition unit 11B performs speech recognition on the picked-up speech (step S21). The text data of the speech recognition result is transferred to the specifying unit 12A.
Next, the specifying unit 12A compares the speech recognition result with each record stored in the correspondence table 13 (step S22) and determines whether there is a record that matches the speech recognition result (step S23). If there is no matching record, the processing ends. If there is a matching record, the specifying unit 12A specifies the content of the matching record as the target content and notifies the display control unit 12B. The display control unit 12B then determines whether the target content is currently displayed in the virtual space (step S24). If the target content is displayed, the display control unit 12B acquires the user's line-of-sight position (step S25) and rearranges the content with reference to the line-of-sight position according to a method described later (step S26).
On the other hand, if the target content is not displayed in step S24, the display control unit 12B acquires the user's line-of-sight position (step S27) and renders the target content at the line-of-sight position, thereby displaying the target content at the line-of-sight position (step S28).
A specific example of the processing of FIG. 6 is shown in FIGS. 7(a) to 7(c). In the initial state, as shown in FIG. 7(a), four kinds of animal content are arranged around the user in the virtual space. When the user utters the tag "FOX" of the target content, the detection unit 11 recognizes the speech of the tag "FOX" in step S21 of FIG. 6, obtains the text "FOX" as the speech recognition result, and transfers it to the specifying unit 12A. The specifying unit 12A compares the text "FOX" with the tag of each record stored in the correspondence table 13 in step S22, determines that the correspondence table 13 contains a record matching the text "FOX" (content ID "ID-JSN3G49"), as shown in FIG. 7(b) (YES in step S23), specifies the content of the matching record (the fox content with content ID "ID-JSN3G49") as the target content, and notifies the display control unit 12B. The display control unit 12B then determines that the target content is displayed in the virtual space (step S24), acquires the user's line-of-sight position ((x_e, y_e, z_e) shown in FIG. 7(a)) (step S25), and rearranges the content with reference to the line-of-sight position according to a method described later (step S26). As an example of this rearrangement, FIG. 7(c) shows a case in which only the fox content, which is the target content, is simply rearranged to the user's line-of-sight position (x_e, y_e, z_e).
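A minimal sketch of the branching in steps S21 to S28 follows, reusing the CorrespondenceTable from the sketch above; the Scene class and its methods are hypothetical stand-ins introduced only to make the control flow concrete.

```python
# Sketch of the flow of FIG. 6 (steps S21-S28). The Scene class is a
# hypothetical stand-in; only the branching mirrors the text.

class Scene:
    def __init__(self):
        self.displayed = {}                      # content_id -> (x, y, z)

    def is_displayed(self, content_id):
        return content_id in self.displayed     # step S24

    def rearrange(self, content_id, gaze):
        self.displayed[content_id] = gaze        # step S26 (pattern 1 for brevity)

    def render_at(self, content_id, gaze):
        self.displayed[content_id] = gaze        # step S28


def handle_display_instruction(recognized_text, table, scene, gaze):
    """recognized_text: result of speech recognition (step S21)."""
    content_id = table.lookup(recognized_text)   # step S22
    if content_id is None:                       # step S23: no matching record
        return
    if scene.is_displayed(content_id):           # step S24
        scene.rearrange(content_id, gaze)        # steps S25-S26
    else:
        scene.render_at(content_id, gaze)        # steps S27-S28
```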
(Description of the various rearrangement patterns)
Next, various rearrangement patterns of the content in step S26 will be described with reference to FIGS. 8(a) to 8(d). The initial state before rearrangement is the state shown in FIG. 7(a), in which four kinds of animal content are arranged around the user in the virtual space.
FIG. 8(a) shows rearrangement pattern 1, which simply superimposes the target content. In rearrangement pattern 1, only the target content (here, the fox content) is simply rearranged to the user's line-of-sight position (x_e, y_e, z_e), as in the example of FIG. 7(c).
FIG. 8(b) shows rearrangement pattern 2, which temporarily places the target content. In rearrangement pattern 2, only the target content (here, the fox content) is temporarily moved to the user's line-of-sight position (x_e, y_e, z_e), and a copy of the target content is generated and temporarily placed at the original position (x_t, y_t, z_t). The copy of the target content is displayed, for example, semi-transparently and cannot be operated. When the temporary placement ends, the target content displayed at the user's line-of-sight position (x_e, y_e, z_e) is deleted, the content displayed at the original position (x_t, y_t, z_t) is returned to its original display form, and the operations, changes, and so on made to the target content displayed at the user's line-of-sight position (x_e, y_e, z_e) are reflected in it.
FIG. 8(c) shows rearrangement pattern 3, which simply superimposes the target content while maintaining the relative positional relationship of all the content in the azimuth direction. In rearrangement pattern 3, all of the displayed content (hereinafter "display content"; here, the four kinds of animals) is moved while maintaining the relative positional relationship between the contents in the azimuth direction, and only the target content (here, the fox content) is further translated to the user's line-of-sight position (x_e, y_e, z_e).
FIG. 8(d) shows rearrangement pattern 4, which maintains the relative positional relationship among all the display content. In rearrangement pattern 4, all the display content is moved while maintaining the relative positional relationship among the contents, and the target content (here, the fox content) is rearranged to the user's line-of-sight position (x_e, y_e, z_e). Note that these four patterns are not essential requirements, and patterns other than these may be adopted.
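The choice among the patterns can be pictured with a simple selector such as the sketch below, which implements only patterns 1 and 2 directly; the ContentItem data model is an assumption made for illustration, and patterns 3 and 4 rely on the coordinate transforms described in the following sections.

```python
# Illustrative selector for rearrangement patterns 1 and 2. Patterns 3 and 4
# need the coordinate transforms sketched later; the data model is assumed.
from dataclasses import dataclass

@dataclass
class ContentItem:
    content_id: str
    position: tuple          # (x, y, z)
    alpha: float = 1.0       # 1.0 = opaque
    interactive: bool = True

def rearrange(pattern: int, target: ContentItem, gaze: tuple, scene: list):
    if pattern == 1:                       # pattern 1: simple superposition
        target.position = gaze
    elif pattern == 2:                     # pattern 2: temporary placement with a copy
        copy = ContentItem(target.content_id, target.position,
                           alpha=0.5, interactive=False)   # translucent, not operable
        scene.append(copy)
        target.position = gaze
    else:
        raise NotImplementedError("patterns 3 and 4 use the algorithms sketched below")
```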
(Description of rearrangement of the target content to the line-of-sight position)
A point common to the various rearrangement patterns described above is that the target content is rearranged to the user's line-of-sight position. Two algorithms for rearranging the target content to the line-of-sight position are therefore described below with reference to FIGS. 9 to 12. Note that these algorithms are examples, and algorithms other than these may be adopted.
The first algorithm is a procedure that rotates the target content around the Z-axis in orthogonal coordinates and translates the target content to the user's line-of-sight position. As shown in FIG. 10, first, three-dimensional polar coordinates are obtained from the orthogonal coordinates of the target content and of the user's line-of-sight position (process (1)). FIG. 9 shows the general relationship between orthogonal coordinates and three-dimensional polar coordinates (spherical coordinates); orthogonal coordinates (x, y, z) can be converted into three-dimensional polar coordinates (r, θ, Φ) by the following equation (1), which is used to obtain the three-dimensional polar coordinates (r_t, θ_t, Φ_t) of the target content and the three-dimensional polar coordinates (r_e, θ_e, Φ_e) of the user's line-of-sight position:

r = \sqrt{x^2 + y^2 + z^2}, \quad \theta = \arccos\!\left(z / \sqrt{x^2 + y^2 + z^2}\right), \quad \Phi = \operatorname{sgn}(y)\arccos\!\left(x / \sqrt{x^2 + y^2}\right)   (1)

Note that only the azimuth angle of the three-dimensional polar coordinates may be obtained.
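The conversion of process (1) can be transcribed directly, assuming the convention reconstructed in equation (1) (θ measured from the Z-axis, Φ the azimuth in (−π, π]):

```python
# Conversion between Cartesian coordinates and the 3D polar (spherical)
# coordinates (r, theta, phi) used in the algorithms; theta is the polar
# angle from the Z-axis and phi is the azimuth in (-pi, pi].
import math

def cartesian_to_spherical(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r)             # polar angle
    phi = math.atan2(y, x)               # azimuth, equivalent to sgn(y)*arccos(x/sqrt(x^2+y^2))
    return r, theta, phi

def spherical_to_cartesian(r, theta, phi):
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))
```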
Next, the azimuth angle change amount is calculated by the following equation (2) (process (2)):

ΔΦ = Φ_e − Φ_t   (2)
Then, the target content is rotated by ΔΦ around the Z-axis in orthogonal coordinates according to the following equation (3) (process (3)); that is, the azimuth angle with respect to the user is adjusted. The first term on the right side of equation (3) is the rotation matrix about the Z-axis:

\begin{pmatrix} x_{tz} \\ y_{tz} \\ z_{tz} \end{pmatrix} = \begin{pmatrix} \cos\Delta\Phi & -\sin\Delta\Phi & 0 \\ \sin\Delta\Phi & \cos\Delta\Phi & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_t \\ y_t \\ z_t \end{pmatrix}   (3)
Furthermore, the target content is translated to the user's line-of-sight position according to the following equation (4) (process (4)). The first term on the right side of equation (4) is a translation matrix:

\begin{pmatrix} x'_t \\ y'_t \\ z'_t \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & x_e - x_{tz} \\ 0 & 1 & 0 & y_e - y_{tz} \\ 0 & 0 & 1 & z_e - z_{tz} \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_{tz} \\ y_{tz} \\ z_{tz} \\ 1 \end{pmatrix}   (4)
As described above, by applying the affine transformation matrices to all the points constituting the content (a three-dimensional CG object), the rotation around the Z-axis and the translation to the user's line-of-sight position are realized.
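A compact sketch of the first algorithm follows, reusing the conversion helpers above; it returns a transform that is assumed to be applied to every vertex of the target content (the three-dimensional CG object), with the anchor point landing on the line-of-sight position.

```python
import math

def rotate_z(point, dphi):
    """Rotation about the Z-axis by angle dphi (the matrix of equation (3))."""
    x, y, z = point
    c, s = math.cos(dphi), math.sin(dphi)
    return (c * x - s * y, s * x + c * y, z)

def first_algorithm_transform(target_anchor, gaze):
    """Return a function mapping any vertex of the target content to its new
    position: a Z-rotation by the azimuth difference, then a translation that
    puts the anchor onto the gaze position."""
    _, _, phi_t = cartesian_to_spherical(*target_anchor)   # process (1)
    _, _, phi_e = cartesian_to_spherical(*gaze)
    dphi = phi_e - phi_t                                   # process (2): equation (2)
    anchor_rot = rotate_z(target_anchor, dphi)             # process (3): equation (3)
    t = tuple(g - a for g, a in zip(gaze, anchor_rot))     # process (4): equation (4)

    def apply(vertex):
        v = rotate_z(vertex, dphi)
        return (v[0] + t[0], v[1] + t[1], v[2] + t[2])
    return apply
```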
A specific example of the calculation based on the first algorithm in rearrangement pattern 1 will now be described with reference to FIG. 11.
First, regarding the acquisition of the three-dimensional polar coordinates of the target content and of the line-of-sight position: when the orthogonal coordinates (x_t, y_t, z_t) of the target content are (1, −1, −1), substituting them into equation (1) gives the three-dimensional polar coordinates of the target content as (r_t, θ_t, Φ_t) = (√3, arccos(−1/√3), −π/4) (process (1)). Likewise, substituting the orthogonal coordinates (x_e, y_e, z_e) of the user's line-of-sight position into equation (1) gives the three-dimensional polar coordinates (r_e, θ_e, Φ_e) of the line-of-sight position; in this example, the azimuth angle is Φ_e = 2π/3.
Next, using equation (2), the azimuth angle change amount ΔΦ is calculated as follows (process (2)):

ΔΦ = Φ_e − Φ_t = (2π/3) − (−π/4) = 11π/12

Then, using equation (3), the target content is rotated by ΔΦ around the Z-axis in orthogonal coordinates (process (3)), and further, using equation (4), the rotated target content is translated to the user's line-of-sight position (process (4)).
By the first algorithm described above, the orthogonal coordinates (x'_t, y'_t, z'_t) of the target content become equal to the orthogonal coordinates (x_e, y_e, z_e) of the user's line-of-sight position.
The second algorithm is described below. The second algorithm is a procedure that rotates the target content around the Z-axis in orthogonal coordinates, adjusts the polar angle with respect to the user, and translates the target content to the user's line-of-sight position.
As shown in FIG. 12, first, three-dimensional polar coordinates are obtained from the orthogonal coordinates of the target content and of the user's line-of-sight position (process (1)). This process is the same as in the first algorithm described above.
Next, the amounts of change in the radius, azimuth angle, and polar angle between the three-dimensional polar coordinates of the target content and those of the line-of-sight position are calculated as follows (process (2)):

Δr = r_e − r_t, \quad Δθ = θ_e − θ_t, \quad ΔΦ = Φ_e − Φ_t
Next, the target content is rotated by ΔΦ around the Z-axis in orthogonal coordinates according to the following equation (5) (process (3)); that is, the azimuth angle with respect to the user is adjusted. The first term on the right side of equation (5) is the rotation matrix about the Z-axis:

\begin{pmatrix} x_{tz} \\ y_{tz} \\ z_{tz} \end{pmatrix} = \begin{pmatrix} \cos\Delta\Phi & -\sin\Delta\Phi & 0 \\ \sin\Delta\Phi & \cos\Delta\Phi & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_t \\ y_t \\ z_t \end{pmatrix}   (5)
Then, the target content is rotated by Δθ around an axis on the XY plane perpendicular to the line-of-sight position according to the following equation (6) (process (4)); that is, the polar angle with respect to the user is adjusted. The first term on the right side of equation (6) is the Rodrigues rotation matrix, here written for the unit axis k = (−sinΦ_e, cosΦ_e, 0):

\begin{pmatrix} x_{t\theta} \\ y_{t\theta} \\ z_{t\theta} \end{pmatrix} = \left( \cos\Delta\theta\, I + \sin\Delta\theta\, [\boldsymbol{k}]_{\times} + (1-\cos\Delta\theta)\, \boldsymbol{k}\boldsymbol{k}^{\mathsf T} \right) \begin{pmatrix} x_{tz} \\ y_{tz} \\ z_{tz} \end{pmatrix}   (6)
Furthermore, the target content is translated to the user's line-of-sight position according to the following equation (7) (process (5)). As a result, the orthogonal coordinates (x'_t, y'_t, z'_t) of the target content become the orthogonal coordinates (x_e, y_e, z_e) of the user's line-of-sight position:

\begin{pmatrix} x'_t \\ y'_t \\ z'_t \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_{t\theta} \\ y_{t\theta} \\ z_{t\theta} \\ 1 \end{pmatrix}   (7)

The first term on the right side of equation (7) corresponds to a translation by (t_x, t_y, t_z) = Δr (\sinθ_e \cosΦ_e, \sinθ_e \sinΦ_e, \cosθ_e).
As described above, by applying the affine transformation matrices to all the points constituting the content (a three-dimensional CG object), the rotation around the Z-axis, the adjustment of the polar angle with respect to the user, and the translation to the user's line-of-sight position are realized.
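A sketch of the second algorithm follows, again reusing the helpers above. The rotation axis for the polar-angle adjustment is an assumption inferred from the description (an axis in the XY plane perpendicular to the line-of-sight azimuth), and the translation moves the content by Δr along the gaze direction.

```python
# Sketch of the second algorithm. The rotation axis for the polar-angle
# adjustment is assumed to be k = (-sin(phi_e), cos(phi_e), 0).
import math

def rodrigues_rotate(point, axis, angle):
    """Rotate `point` by `angle` around the unit vector `axis` (Rodrigues)."""
    kx, ky, kz = axis
    px, py, pz = point
    c, s = math.cos(angle), math.sin(angle)
    dot = kx * px + ky * py + kz * pz
    cross = (ky * pz - kz * py, kz * px - kx * pz, kx * py - ky * px)
    return tuple(p * c + cr * s + k * dot * (1.0 - c)
                 for p, cr, k in zip(point, cross, (kx, ky, kz)))

def second_algorithm_transform(target_anchor, gaze):
    r_t, th_t, phi_t = cartesian_to_spherical(*target_anchor)   # process (1)
    r_e, th_e, phi_e = cartesian_to_spherical(*gaze)
    dphi, dth, dr = phi_e - phi_t, th_e - th_t, r_e - r_t        # process (2)
    axis = (-math.sin(phi_e), math.cos(phi_e), 0.0)              # assumed rotation axis
    # translation by dr along the gaze direction (process (5))
    t = tuple(dr * u for u in (math.sin(th_e) * math.cos(phi_e),
                               math.sin(th_e) * math.sin(phi_e),
                               math.cos(th_e)))

    def apply(vertex):
        v = rotate_z(vertex, dphi)                               # process (3): azimuth
        v = rodrigues_rotate(v, axis, dth)                       # process (4): polar angle
        return (v[0] + t[0], v[1] + t[1], v[2] + t[2])
    return apply
```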
(Regarding rearrangement pattern 3)
An algorithm for maintaining the relative positional relationship in the azimuth direction, which is specific to rearrangement pattern 3 of FIG. 8(c), is described below with reference to FIG. 13. As shown in FIG. 13, first, as in the first algorithm, the three-dimensional polar coordinates (r_t, θ_t, Φ_t) of the target content and the three-dimensional polar coordinates (r_e, θ_e, Φ_e) of the user's line-of-sight position are obtained using equation (1) (process (1)). Note that only the azimuth angle of the three-dimensional polar coordinates may be obtained.
Next, the azimuth angle change amount is calculated by equation (2) described above (process (2)).
Then, each of the display contents is rotated by ΔΦ around the Z-axis in orthogonal coordinates according to the following equation (8) (process (3)); that is, the azimuth angle with respect to the user is adjusted for each display content. Here, n is a subscript identifying each display content:

\begin{pmatrix} x'_n \\ y'_n \\ z'_n \end{pmatrix} = \begin{pmatrix} \cos\Delta\Phi & -\sin\Delta\Phi & 0 \\ \sin\Delta\Phi & \cos\Delta\Phi & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_n \\ y_n \\ z_n \end{pmatrix}   (8)
Furthermore, as in the first algorithm, the target content is translated to the user's line-of-sight position using equation (4) described above, thereby rearranging the target content to the user's line-of-sight position (process (4)). Note that the method of the second algorithm may be adopted for this rearrangement instead of the method of the first algorithm.
In this way, rearrangement pattern 3, which simply superimposes the target content and maintains the relative positional relationship of all the content in the azimuth direction, is realized, as shown at the upper right of FIG. 13.
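Rearrangement pattern 3 then reduces to rotating every display content about the Z-axis by the target's azimuth change and moving the target itself onto the line-of-sight position, as in the following sketch (anchor positions only; the content dictionary is an assumed data model):

```python
# Sketch of rearrangement pattern 3, reusing cartesian_to_spherical and
# rotate_z from the earlier sketches.
def rearrange_pattern3(contents, target_id, gaze):
    """contents: dict content_id -> (x, y, z) anchor positions (assumed model)."""
    _, _, phi_t = cartesian_to_spherical(*contents[target_id])
    _, _, phi_e = cartesian_to_spherical(*gaze)
    dphi = phi_e - phi_t                                               # process (2)
    rotated = {cid: rotate_z(p, dphi) for cid, p in contents.items()}  # process (3), eq. (8)
    rotated[target_id] = gaze                                          # process (4): target onto gaze
    return rotated
```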
(Regarding rearrangement pattern 4)
An algorithm for maintaining the relative positional relationship, which is specific to rearrangement pattern 4, is described below with reference to FIGS. 14 to 17. As shown in FIG. 14, first, as in the second algorithm, the three-dimensional polar coordinates (r_t, θ_t, Φ_t) of the target content and the three-dimensional polar coordinates (r_e, θ_e, Φ_e) of the user's line-of-sight position are obtained (process (1)), and the amounts of change in the radius, azimuth angle, and polar angle between the three-dimensional polar coordinates of the target content and those of the line-of-sight position are calculated as follows (process (2)):

Δr = r_e − r_t, \quad Δθ = θ_e − θ_t, \quad ΔΦ = Φ_e − Φ_t
Next, the coordinates obtained by rotating the target content by ΔΦ around the Z-axis in orthogonal coordinates are calculated according to the following equation (9) (process (3)):

\begin{pmatrix} x_{tz} \\ y_{tz} \\ z_{tz} \end{pmatrix} = \begin{pmatrix} \cos\Delta\Phi & -\sin\Delta\Phi & 0 \\ \sin\Delta\Phi & \cos\Delta\Phi & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_t \\ y_t \\ z_t \end{pmatrix}   (9)
Next, the post-rearrangement azimuth angles of the display contents other than the target content are provisionally calculated by the following equation (10) (process (4)). Here, m is a subscript identifying each content other than the target content:

Φ'_m(pre) = Φ_m + ΔΦ   (10)
Next, the post-rearrangement polar angles of the display contents other than the target content are provisionally calculated as follows (process (5) shown in FIGS. 14 and 15), with the provisional azimuth angle restricted to the range −π < Φ'_m(pre) ≤ π.
The provisional calculation of the post-rearrangement polar angle is carried out separately for four cases: "−π < Φ_e ≤ −π/2", "−π/2 < Φ_e ≤ π/2", "π/2 < Φ_e < π", and "Φ_e = π".
First, in the case of −π < Φ_e ≤ −π/2 (case (5)i in FIG. 14): if the post-rearrangement azimuth angle Φ'_m(pre) is in the range −π < Φ'_m(pre) ≤ Φ_t + (π/2), or in the range π − Arctan(−x_tz/y_tz) ≤ Φ'_m(pre) ≤ π, the post-rearrangement polar angle θ'_m(pre) is obtained by the following equation (11):
θ'_m(pre) = θ_m + Δθ   (11)
Here, the pole (θ_m, Φ'_m(pre)) = (0, Φ_e + (π/2)) is excluded from the condition −π < Φ'_m(pre) ≤ Φ_t + (π/2), and the pole (θ_m, Φ'_m(pre)) = (0, π − Arctan(−x_tz/y_tz)) is excluded from the condition π − Arctan(−x_tz/y_tz) ≤ Φ'_m(pre) ≤ π.
On the other hand, if the post-rearrangement azimuth angle Φ'_m(pre) is outside the above ranges, the post-rearrangement polar angle θ'_m(pre) is obtained by the following equation (12):
θ'_m(pre) = θ_m − Δθ   (12)
Next, in the case of −π/2 < Φ_e ≤ π/2 (case (5)ii in FIG. 14): if the post-rearrangement azimuth angle Φ'_m(pre) is in the range Φ_e − (π/2) < Φ'_m(pre) ≤ Φ_e + (π/2) (excluding the pole (θ_m, Φ'_m(pre)) = (0, Φ_e + (π/2))), the post-rearrangement polar angle θ'_m(pre) is obtained by the following equation (13):
θ'_m(pre) = θ_m + Δθ   (13)
On the other hand, if the post-rearrangement azimuth angle Φ'_m(pre) is outside the above range (excluding the pole (θ_m, Φ'_m(pre)) = (0, Φ_e − (π/2))), the post-rearrangement polar angle θ'_m(pre) is obtained by the following equation (14):
θ'_m(pre) = θ_m − Δθ   (14)
Next, in the case of π/2 < Φ_e < π (case (5)iii in FIG. 15): if the post-rearrangement azimuth angle Φ'_m(pre) is in the range −π < Φ'_m(pre) ≤ −π + Arctan(−x_tz/y_tz), or in the range Arctan(−x_tz/y_tz) ≤ Φ'_m(pre) ≤ π, the post-rearrangement polar angle θ'_m(pre) is obtained by the following equation (15):
θ'_m(pre) = θ_m + Δθ   (15)
On the other hand, if the post-rearrangement azimuth angle Φ'_m(pre) is outside the above ranges (excluding the poles (θ_m, Φ'_m(pre)) = (0, −π + Arctan(−x_tz/y_tz)) and (θ_m, Φ'_m(pre)) = (0, Arctan(−x_tz/y_tz))), the post-rearrangement polar angle θ'_m(pre) is obtained by the following equation (16):
θ'_m(pre) = θ_m − Δθ   (16)
Furthermore, in the case of Φ_e = π (case (5)iv in FIG. 15): if the post-rearrangement azimuth angle Φ'_m(pre) is in the range −π < Φ'_m(pre) ≤ −(π/2) or (π/2) ≤ Φ'_m(pre) ≤ π (excluding the poles (θ_m, Φ'_m(pre)) = (0, −π/2) and (θ_m, Φ'_m(pre)) = (0, π/2)), the post-rearrangement polar angle θ'_m(pre) is obtained by the following equation (17):
θ'_m(pre) = θ_m + Δθ   (17)
On the other hand, if the post-rearrangement azimuth angle Φ'_m(pre) is outside the above ranges, the post-rearrangement polar angle θ'_m(pre) is obtained by the following equation (18):
θ'_m(pre) = θ_m − Δθ   (18)
Next, the post-rearrangement polar coordinates of the display contents other than the target content are determined as follows (process (6) shown in FIG. 15). This is carried out separately for two cases: "post-rearrangement polar angle θ'_m(pre) < 0" and "post-rearrangement polar angle θ'_m(pre) > π".
First, in the case of the post-rearrangement polar angle θ'_m(pre) < 0 (case (6)i in FIG. 15): if the post-rearrangement azimuth angle Φ'_m(pre) is in the range 0 < Φ'_m(pre) ≤ π, the post-rearrangement polar coordinates (r'_m, θ'_m, Φ'_m) are obtained by the following equation (19):
(r'_m, θ'_m, Φ'_m) = (r_m + Δr, θ'_m(pre), Φ'_m(pre) − π)   (19)
On the other hand, if the post-rearrangement azimuth angle Φ'_m(pre) is in the range −π < Φ'_m(pre) ≤ 0, the post-rearrangement polar coordinates (r'_m, θ'_m, Φ'_m) are obtained by the following equation (20):
(r'_m, θ'_m, Φ'_m) = (r_m + Δr, θ'_m(pre), Φ'_m(pre) + π)   (20)
Next, in the case of the post-rearrangement polar angle θ'_m(pre) > π (case (6)ii in FIG. 15): if the post-rearrangement azimuth angle Φ'_m(pre) is in the range 0 < Φ'_m(pre) ≤ π, the post-rearrangement polar coordinates (r'_m, θ'_m, Φ'_m) are obtained by the following equation (21):
(r'_m, θ'_m, Φ'_m) = (r_m + Δr, 2π − θ'_m(pre), Φ'_m(pre) − π)   (21)
On the other hand, if the post-rearrangement azimuth angle Φ'_m(pre) is in the range −π < Φ'_m(pre) ≤ 0, the post-rearrangement polar coordinates (r'_m, θ'_m, Φ'_m) are obtained by the following equation (22):
(r'_m, θ'_m, Φ'_m) = (r_m + Δr, 2π − θ'_m(pre), Φ'_m(pre) + π)   (22)
After determining the post-rearrangement polar coordinates of the contents other than the target content as described above, the radius r that defines the display area of the content in the depth direction may further be adjusted, according to an equation using a threshold TH preset by the user, to obtain an adjusted radius r'_m. In this case, the radius r can be prevented from taking an extremely small or extremely large value as the target content moves. Note, however, that care is required, since this can also be regarded as processing that introduces a shift in the depth direction and breaks the relative positions.
Thereafter, two processes are executed: "affine transformation of all the contents other than the target content to the post-rearrangement coordinates (process (7) in FIGS. 16 and 17)" and "affine transformation of the target content to the line-of-sight position (process (8) in FIGS. 16 and 17)". These processes may be executed in any order, concurrently or in a predetermined order. The latter, "affine transformation of the target content to the line-of-sight position (process (8) in FIGS. 16 and 17)", can be executed by applying either the first algorithm or the second algorithm already described, so a duplicate description is omitted here.
(Affine transformation of all the contents other than the target content to the post-rearrangement coordinates)
The affine transformation of all the contents other than the target content to the post-rearrangement coordinates (process (7) in FIGS. 16 and 17) differs depending on whether the affine transformation of the target content to the line-of-sight position is executed by applying the first algorithm or the second algorithm; the two cases are therefore described in turn below.
When the affine transformation of the target content to the line-of-sight position is executed by applying the first algorithm, the affine transformation of all the contents other than the target content to the post-rearrangement coordinates is executed as shown in process (7) of FIG. 16.
First, the amounts of change in the azimuth angle and the polar angle before and after rearrangement are calculated by the following equation (23) (process (7)i in FIG. 16):
Δθ_m = θ'_m − θ_m
ΔΦ_m = Φ'_m − Φ_m   (23)
Next, the azimuth angle with respect to the user is adjusted by rotating each content by ΔΦ_m around the Z-axis according to the following equation (24) (process (7)ii):

\begin{pmatrix} x_{mz} \\ y_{mz} \\ z_{mz} \end{pmatrix} = \begin{pmatrix} \cos\Delta\Phi_m & -\sin\Delta\Phi_m & 0 \\ \sin\Delta\Phi_m & \cos\Delta\Phi_m & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_m \\ y_m \\ z_m \end{pmatrix}   (24)
Next, orthogonal coordinates are obtained from the post-rearrangement polar coordinates by the following equation (25) (process (7)iii):

\begin{pmatrix} \hat{x}_m \\ \hat{y}_m \\ \hat{z}_m \end{pmatrix} = \begin{pmatrix} r'_m \sin\theta'_m \cos\Phi'_m \\ r'_m \sin\theta'_m \sin\Phi'_m \\ r'_m \cos\theta'_m \end{pmatrix}   (25)
Furthermore, the azimuth-adjusted coordinates obtained by the above processing are translated by applying the following equation (26), and the translated orthogonal coordinates (x'_m, y'_m, z'_m) are obtained (process (7)iv):

\begin{pmatrix} x'_m \\ y'_m \\ z'_m \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & \hat{x}_m - x_{mz} \\ 0 & 1 & 0 & \hat{y}_m - y_{mz} \\ 0 & 0 & 1 & \hat{z}_m - z_{mz} \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_{mz} \\ y_{mz} \\ z_{mz} \\ 1 \end{pmatrix}   (26)
In this way, the post-rearrangement orthogonal coordinates (x'_m, y'_m, z'_m) of all the contents other than the target content are obtained.
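For the anchor point of each content other than the target, process (7) amounts to moving it to the Cartesian point corresponding to its post-rearrangement polar coordinates, as in the following sketch; the decomposition into a Z-rotation followed by a translation mirrors equations (23) to (26), and the helpers from the earlier sketches are reused.

```python
# Sketch of process (7) for content other than the target (first-algorithm
# variant), applied to the anchor point; the whole 3D object is carried along
# by the same Z-rotation and translation.
def place_non_target(anchor, rearranged_polar):
    """anchor: current (x, y, z); rearranged_polar: (r'_m, theta'_m, phi'_m)."""
    _, _, phi_m = cartesian_to_spherical(*anchor)
    r2, th2, phi2 = rearranged_polar
    dphi_m = phi2 - phi_m                                   # eq. (23)
    rotated = rotate_z(anchor, dphi_m)                      # eq. (24): azimuth adjustment
    dest = spherical_to_cartesian(r2, th2, phi2)            # eq. (25)
    t = tuple(d - r for d, r in zip(dest, rotated))         # eq. (26): translation
    return tuple(r + tt for r, tt in zip(rotated, t))       # equals dest for the anchor
```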
On the other hand, when the affine transformation of the target content to the line-of-sight position is executed by applying the second algorithm, the affine transformation of all the contents other than the target content to the post-rearrangement coordinates is executed as shown in process (7) of FIG. 17.
The first step, "calculation of the amounts of change in the azimuth angle and the polar angle before and after rearrangement (process (7)i in FIG. 17)", and the next step, "rotation by ΔΦ_m around the Z-axis (process (7)ii)", are the same as in the example of FIG. 16 described above, so a duplicate description is omitted.
Next, the polar angle with respect to the user is adjusted by rotating each content by Δθ_m around an axis on the XY plane according to the following equation (27) (process (7)iii). The first term on the right side of equation (27) is the Rodrigues rotation matrix, here written for the unit axis k_m = (−sinΦ'_m, cosΦ'_m, 0):

\begin{pmatrix} x_{m\theta} \\ y_{m\theta} \\ z_{m\theta} \end{pmatrix} = \left( \cos\Delta\theta_m\, I + \sin\Delta\theta_m\, [\boldsymbol{k}_m]_{\times} + (1-\cos\Delta\theta_m)\, \boldsymbol{k}_m\boldsymbol{k}_m^{\mathsf T} \right) \begin{pmatrix} x_{mz} \\ y_{mz} \\ z_{mz} \end{pmatrix}   (27)
Furthermore, the coordinates adjusted in azimuth angle and polar angle by the above processing are translated by applying the following equation (28), and the translated orthogonal coordinates (x'_m, y'_m, z'_m) are obtained (process (7)iv):

\begin{pmatrix} x'_m \\ y'_m \\ z'_m \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & t_{x,m} \\ 0 & 1 & 0 & t_{y,m} \\ 0 & 0 & 1 & t_{z,m} \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_{m\theta} \\ y_{m\theta} \\ z_{m\theta} \\ 1 \end{pmatrix}   (28)

where (t_{x,m}, t_{y,m}, t_{z,m}) = Δr (\sinθ'_m \cosΦ'_m, \sinθ'_m \sinΦ'_m, \cosθ'_m).
In this way, the post-rearrangement orthogonal coordinates (x'_m, y'_m, z'_m) of all the contents other than the target content are obtained.
(Effects of the embodiment of the invention)
According to the embodiment of the invention described above, desired content can be placed at a position in the virtual space that is easy for the user to see, without time and effort. From the user's point of view, the processing of FIG. 6 can be started simply by uttering the tag indicating the target content while performing the predetermined operation with the controller, so the user can easily instruct display of the desired content.
When the initial state is a state in which the target content is displayed in the virtual space, the rearrangement position of the target content in the virtual space is determined with reference to the line-of-sight position according to a predetermined rearrangement pattern (any of rearrangement patterns 1 to 4), and the target content is rearranged to the determined rearrangement position. On the other hand, even when the initial state is a state in which the target content is not displayed in the virtual space, the target content can be displayed at the user's line-of-sight position by acquiring the line-of-sight position and rendering the target content at the acquired line-of-sight position.
In addition, since the rearrangement patterns from the state in which the target content is displayed in the virtual space include rearrangement pattern 3, which rearranges all the display content while maintaining the relative positional relationship of all the display content in the azimuth direction, the relative positional relationship of all the display content in the azimuth direction can be maintained, and a layout the user has carefully arranged can be prevented from being largely disrupted.
Furthermore, since the rearrangement patterns from the state in which the target content is displayed in the virtual space include rearrangement pattern 4, which rearranges all the display content while maintaining the relative positional relationship among all the display content, the relative positional relationship among all the display content can be maintained, and a layout the user has carefully arranged can be prevented from being disrupted.
(Modification)
Handling of overlap in content display is described with reference to FIGS. 18(a) to 18(d). When the target content and other content (non-target content) may overlap in the virtual space, the overlap can be avoided and the inconvenience of the target content becoming difficult to see can be eliminated, for example, as follows.
That is, the target content can be made relatively easy to see by processing such as making the non-target content semi-transparent as shown in FIG. 18(a), reducing the sharpness of (blurring) the non-target content as shown in FIG. 18(b), or adjusting the value of the radius r in spherical coordinates. It is also possible to make the target content stand out and be easier to see by processing such as adjusting the glow of the outline of the target content as shown in FIG. 18(c). Furthermore, the target content may be displayed on a pop-up virtual screen as shown in FIG. 18(d). In this way, the target content can be made relatively easy to see by various techniques. Note that the example of FIG. 18(d) is limited to rearrangement pattern 2, and the processing of adjusting the value of the radius r in spherical coordinates is limited to rearrangement patterns 1 to 3.
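Using the assumed ContentItem model from the earlier sketch, the overlap handling could look like the following fragment, which combines the semi-transparency of FIG. 18(a) with an adjustment of the radius r; the alpha value and depth offset are illustrative.

```python
# Illustrative overlap handling: make non-target content semi-transparent
# (FIG. 18(a)) and push it back in depth by adjusting the radius r.
def reduce_overlap(target, others, depth_offset=0.5):
    for item in others:
        item.alpha = 0.4                      # semi-transparency of non-target content
        r, th, phi = cartesian_to_spherical(*item.position)
        item.position = spherical_to_cartesian(r + depth_offset, th, phi)  # adjust radius r
    target.alpha = 1.0                        # keep the target fully visible
```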
Uttering the tag of the target content is not an essential requirement as the method by which the user designates the target content; for example, the user may designate the target content by inputting a pattern with the controller as described below, or by a hand gesture. Examples of the former "pattern using the controller" include, as shown in FIG. 19(a), tapping the central portion of the touch pad 22 of the controller 20 of FIG. 2(a) n times in succession, pulling the trigger 23 n times in succession, pressing the button 21 n times in succession, and drawing a specific figure (for example, a circle) in the virtual space with the laser pointer function of the controller 20. In this case, such controller patterns are registered in the correspondence table 13 in association with the content IDs of the corresponding contents. Examples of hand gestures for designating the target content include, as shown in FIG. 19(b), forming a fist, an OK sign, the letter C, or a peace sign with the hand. In this case, such hand gestures are registered in the correspondence table 13 in association with the content IDs of the corresponding contents. When the target content is designated by a hand gesture, the detection unit 11 shown in FIGS. 1(a) and 1(b) further includes, as described above, the camera 11C and the gesture recognition unit 11D that recognizes gestures from the moving image data obtained by imaging.
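With the CorrespondenceTable sketched earlier, controller patterns and hand gestures can be registered in place of tags; the keys below are assumed labels produced by a hypothetical pattern/gesture recognizer, and the Scene class is the stand-in from the earlier flow sketch.

```python
# Registering controller patterns and hand gestures as content specifying
# information; the label strings are assumptions, not defined in the source.
table = CorrespondenceTable()
table.register("TAP_CENTER_X3", "ID-XYZ1245")   # controller pattern (FIG. 19(a)), assumed key
table.register("GESTURE_FIST", "ID-JSN3G49")    # hand gesture (FIG. 19(b)), assumed key

def on_pattern_or_gesture(label, scene, gaze):
    content_id = table.lookup(label)            # specifying unit 12A
    if content_id is None:
        return
    if scene.is_displayed(content_id):
        scene.rearrange(content_id, gaze)
    else:
        scene.render_at(content_id, gaze)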
As described above, by enabling designation of the target content and its display instruction action by methods such as pattern input using the controller or a hand gesture instead of uttering the tag of the target content, the user can easily instruct display of the desired content in various ways.
In the processing operation of the above embodiment, as shown in FIG. 1(b), the terminal 10 includes the correspondence table 13 in which content specifying information (for example, tags) is stored in advance in association with content IDs, and the specifying unit 12A refers to the correspondence table 13 and specifies the target content from the content ID linked to the content specifying information corresponding to the display instruction action detected by the detection unit 11. However, such a correspondence table 13 is not an essential requirement. For example, by using the content ID itself as the tag serving as the content specifying information, the correspondence table 13 becomes unnecessary, and the configuration of FIG. 1(a) may be adopted, in which the specifying unit 12A can specify the target content directly from the content specifying information corresponding to the detected display instruction action. The configuration of FIG. 1(a) allows the device configuration to be simplified.
In the configurations of FIGS. 1(a) and 1(b), it is not essential that the terminal 10 contain all of the illustrated components (the detection unit 11, the specifying unit 12A, the display control unit 12B, and so on); some of the functional units may be installed in a server, and the target content may be specified, for example, by the terminal 10 requesting processing from the server. In this case, the content display control system according to the present disclosure is understood as a configuration including the terminal 10 and the server.
(Another embodiment)
Next, as another embodiment, a mode is described in which content display control is performed so that the target content is displayed in front of the direction of the user's face. Here, the display control unit 12B acquires the direction of the user's face using existing face recognition technology or the like, and controls content display on the head-mounted display so that the target content is displayed at a display position in front of the direction of the user's face in the virtual space. As the "display position in front of the direction of the face", for example, the position of the intersection of the omnidirectional sphere with a forward extension line of the face direction in the 360-degree omnidirectional virtual space centered on the user may be adopted, or a position at a predetermined distance from the face on the forward extension line of the face direction may be adopted. The "predetermined distance" (the distance from the face) can be adjusted by the user as appropriate.
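A sketch of computing the display position in front of the direction of the face follows; the yaw/pitch convention and the head-pose estimator that supplies them are assumptions, and the distance d corresponds to the user-adjustable predetermined distance.

```python
# Sketch of the "display position in front of the face": a point at an
# adjustable distance d along the face-forward unit vector. The yaw/pitch
# convention and the pose estimator that provides them are assumed.
import math

def face_forward_position(head_pos, yaw, pitch, d=1.5):
    """head_pos: (x, y, z) of the user's head; yaw/pitch in radians; d adjustable."""
    forward = (math.cos(pitch) * math.cos(yaw),
               math.cos(pitch) * math.sin(yaw),
               math.sin(pitch))                      # face-forward unit vector
    return tuple(p + d * f for p, f in zip(head_pos, forward))
```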
As the processing, the processing shown in FIG. 21 is executed instead of the "display control to the user's line-of-sight position" of FIG. 6 described above. In FIG. 21, the same steps as in FIG. 6 are given the same numbers, and duplicate description is omitted. If the target content is displayed in the virtual space in step S24 of FIG. 21, the display control unit 12B acquires the direction of the user's face using existing face recognition technology or the like (step S25A) and executes content rearrangement, described below, with reference to the display position in front of the face direction (step S26A). On the other hand, if the target content is not displayed in step S24, the display control unit 12B acquires the direction of the user's face (step S27A) and renders the target content at the display position in front of the face direction, thereby displaying the target content at that display position (step S28A).
A specific example of the processing of FIG. 21 is shown in FIGS. 22(a) to 22(c). In the initial state, as shown in FIG. 22(a), three kinds of animal content are arranged around the user in the virtual space. When the user utters the tag "ALPACA" of the target content while looking at the target content "ALPACA" to be moved, without directing the line of sight forward in the face direction, the detection unit 11 recognizes the speech of the tag "ALPACA" by speech recognition in step S21 of FIG. 21, obtains the text "ALPACA" as the speech recognition result, and transfers it to the specifying unit 12A. The specifying unit 12A compares the text "ALPACA" with the tag of each record stored in the correspondence table 13 in step S22, determines that the correspondence table 13 contains a record matching the text "ALPACA" (YES in step S23), specifies the alpaca content of the matching record as the target content, and notifies the display control unit 12B. The display control unit 12B then determines that the target content is displayed in the virtual space (step S24), acquires the direction of the user's face using existing face recognition technology or the like (step S25A), and executes content rearrangement with reference to the display position in front of the face direction ((x_e, y_e, z_e) shown in FIG. 22(b)) (step S26A). FIG. 22(b) shows, as an example, a case in which only the target content (the alpaca content) is rearranged to the display position (x_e, y_e, z_e) in front of the face direction. At this moment the user's line of sight is still directed at the position of the target content before the movement, but the target content has already moved to the display position in front of the face direction. Thereafter, as shown in FIG. 22(c), by directing the line of sight forward in the face direction, the user can view the target content at a position that is easy for the user to see (here, in front of the face). If the target content is not displayed in the virtual space, the display control unit 12B acquires the direction of the user's face using existing face recognition technology or the like (step S27A) and renders the target content at the display position in front of the face direction ((x_e, y_e, z_e) shown in FIG. 22(b)), thereby displaying the target content in front of the face direction (step S28A). In this way, even when the target content is not displayed, the target content can be displayed in front of the face direction regardless of the user's line of sight.
As described above, in this other embodiment as well, desired content can be placed at a position in the virtual space that is easy for the user to see, without time and effort.
 The block diagrams used in the description of the above embodiments show blocks in units of functions. These functional blocks (components) are realized by an arbitrary combination of at least one of hardware and software. The method of realizing each functional block is not particularly limited. That is, each functional block may be realized using one device that is physically or logically coupled, or may be realized using two or more devices that are physically or logically separated and connected directly or indirectly (for example, by wire or wirelessly). A functional block may also be realized by combining software with the one device or the plurality of devices.
 Functions include, but are not limited to, judging, deciding, determining, calculating, computing, processing, deriving, investigating, searching, ascertaining, receiving, transmitting, outputting, accessing, resolving, selecting, choosing, establishing, comparing, assuming, expecting, regarding, broadcasting, notifying, communicating, forwarding, configuring, reconfiguring, allocating (mapping), and assigning. For example, a functional block (component) that causes transmission to function is referred to as a transmitting unit or a transmitter. In any case, as described above, the implementation method is not particularly limited.
 For example, the control unit according to an embodiment of the present disclosure may function as a computer that performs the processing in the present embodiment. FIG. 20 is a diagram showing an example of the hardware configuration of the terminal 10 according to an embodiment of the present disclosure. The terminal 10 described above may be physically configured as a computer device including a processor 1001, a memory 1002, a storage 1003, a communication device 1004, an input device 1005, an output device 1006, a bus 1007, and the like.
 In the following description, the term "device" can be read as a circuit, a device, a unit, or the like. The hardware configuration of the terminal 10 may be configured to include one or more of each device shown in the figure, or may be configured without including some of the devices.
 Each function of the terminal 10 is realized by loading predetermined software (a program) onto hardware such as the processor 1001 and the memory 1002, whereby the processor 1001 performs arithmetic operations and controls communication by the communication device 1004 and at least one of reading and writing of data in the memory 1002 and the storage 1003.
 The processor 1001, for example, operates an operating system to control the entire computer. The processor 1001 may be configured by a central processing unit (CPU) including an interface with peripheral devices, a terminal, an arithmetic unit, registers, and the like.
 The processor 1001 also reads a program (program code), software modules, data, and the like from at least one of the storage 1003 and the communication device 1004 into the memory 1002, and executes various kinds of processing in accordance therewith. As the program, a program that causes a computer to execute at least part of the operations described in the above embodiments is used. Although the various kinds of processing described above have been explained as being executed by a single processor 1001, they may be executed simultaneously or sequentially by two or more processors 1001. The processor 1001 may be implemented by one or more chips. The program may be transmitted from a network via an electric telecommunication line.
 The memory 1002 is a computer-readable recording medium, and may be configured by at least one of, for example, a ROM (Read Only Memory), an EPROM (Erasable Programmable ROM), an EEPROM (Electrically Erasable Programmable ROM), and a RAM (Random Access Memory). The memory 1002 may be called a register, a cache, a main memory (main storage device), or the like. The memory 1002 can store an executable program (program code), software modules, and the like for carrying out a wireless communication method according to an embodiment of the present disclosure.
 The storage 1003 is a computer-readable recording medium, and may be configured by at least one of, for example, an optical disc such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (for example, a compact disc, a digital versatile disc, or a Blu-ray (registered trademark) disc), a smart card, a flash memory (for example, a card, a stick, or a key drive), a floppy (registered trademark) disk, and a magnetic strip. The storage 1003 may be called an auxiliary storage device. The above-described storage medium may be, for example, a database, a server, or other appropriate medium including at least one of the memory 1002 and the storage 1003.
 The communication device 1004 is hardware (a transmission/reception device) for performing communication between computers via at least one of a wired network and a wireless network, and is also referred to as, for example, a network device, a network controller, a network card, or a communication module.
 The input device 1005 is an input device (for example, a keyboard, a mouse, a microphone, a switch, a button, or a sensor) that receives an input from the outside. The output device 1006 is an output device (for example, a display, a speaker, or an LED lamp) that performs output to the outside. The input device 1005 and the output device 1006 may be integrated (for example, as a touch panel). The devices such as the processor 1001 and the memory 1002 are connected by a bus 1007 for communicating information. The bus 1007 may be configured using a single bus, or may be configured using different buses between devices.
 Each aspect/embodiment described in the present disclosure may be used alone, may be used in combination, or may be switched and used along with execution. Notification of predetermined information (for example, notification of "being X") is not limited to being performed explicitly, and may be performed implicitly (for example, by not performing notification of the predetermined information).
 Although the present disclosure has been described in detail above, it is apparent to those skilled in the art that the present disclosure is not limited to the embodiments described herein. The present disclosure can be implemented with modifications and variations without departing from the spirit and scope of the present disclosure defined by the description of the claims. Accordingly, the description of the present disclosure is intended for illustrative purposes and has no restrictive meaning with respect to the present disclosure.
 The order of the processing procedures, sequences, flowcharts, and the like of each aspect/embodiment described in the present disclosure may be changed as long as there is no contradiction. For example, the methods described in the present disclosure present elements of various steps in an exemplary order, and are not limited to the specific order presented.
 Input and output information and the like may be stored in a specific location (for example, a memory) or may be managed using a management table. Input and output information and the like may be overwritten, updated, or appended. Output information and the like may be deleted. Input information and the like may be transmitted to another device.
 The description "based on" used in the present disclosure does not mean "based only on" unless otherwise specified. In other words, the description "based on" means both "based only on" and "based at least on".
 When "include", "including", and variations thereof are used in the present disclosure, these terms are intended to be inclusive, in the same manner as the term "comprising". Furthermore, the term "or" used in the present disclosure is not intended to be an exclusive OR.
 In the present disclosure, when articles are added by translation, such as "a", "an", and "the" in English, the present disclosure may include that the nouns following these articles are plural.
 In the present disclosure, the term "A and B are different" may mean "A and B are different from each other". The term may also mean "A and B are each different from C". Terms such as "separated" and "coupled" may be interpreted in the same manner as "different".
 Description of symbols: 1…content display control system, 10…terminal, 11…detection unit, 11A…microphone, 11B…voice recognition unit, 11C…camera, 11D…gesture recognition unit, 12…control unit, 12A…specifying unit, 12B…display control unit, 13…correspondence table, 14…registration unit, 16…touch pad, 17, 18…buttons, 20…controller, 21…button, 22…touch pad, 23…trigger, 24…bumper, 30…AR glasses (head-mounted display), 1001…processor, 1002…memory, 1003…storage, 1004…communication device, 1005…input device, 1006…output device, 1007…bus.

Claims (11)

  1.  A content display control system comprising:
     a head-mounted display that is worn by a user and displays content in a virtual space;
     a detection unit that detects a display instruction action that is reproduced by the user and is associated with target content; and
     a control unit that controls content display on the head-mounted display such that the target content associated with the display instruction action detected by the detection unit is displayed at a position suitable for viewing by the user in the virtual space.
  2.  A content display control system comprising:
     a head-mounted display that is worn by a user and displays content in a virtual space;
     a detection unit that detects a display instruction action that is reproduced by the user and is associated with target content; and
     a control unit that controls content display on the head-mounted display such that the target content associated with the display instruction action detected by the detection unit is displayed at the line-of-sight position of the user in the virtual space.
  3.  The content display control system according to claim 2, wherein the control unit includes:
     a specifying unit that specifies the target content associated with the display instruction action detected by the detection unit; and
     a display control unit that controls content display on the head-mounted display such that the target content specified by the specifying unit is displayed at the line-of-sight position of the user in the virtual space.
  4.  The content display control system according to claim 3, wherein
     the detection unit detects a display instruction action, reproduced by the user, that corresponds to content specifying information associated with the target content, and
     the specifying unit specifies the target content associated with the content specifying information corresponding to the display instruction action detected by the detection unit.
  5.  The content display control system according to claim 4, further comprising:
     a content correspondence information database that stores content specifying information associated with content in association with a content ID for identifying that content,
     wherein the specifying unit refers to the content correspondence information database and specifies the target content based on the content ID associated with the content specifying information corresponding to the detected display instruction action.
  6.  The content display control system according to any one of claims 3 to 5, wherein, when the detection unit detects a display instruction action including an utterance by the user associated with the target content, the specifying unit specifies the target content associated with a speech recognition result of the utterance.
  7.  The content display control system according to any one of claims 3 to 6, wherein, when the detection unit detects a display instruction action including execution of an operation pattern by the user using a predetermined operation device or a movement of a part of the user's body, the specifying unit specifies the target content associated with the detected display instruction action.
  8.  The content display control system according to any one of claims 3 to 7, wherein, when the target content specified by the specifying unit is displayed in the virtual space, the display control unit acquires the line-of-sight position of the user, determines a rearrangement position of the target content in the virtual space in accordance with a predetermined rearrangement pattern with reference to the acquired line-of-sight position, and rearranges the target content to the determined rearrangement position.
  9.  The content display control system according to claim 8, wherein the rearrangement pattern includes at least one of:
     a pattern for rearranging all displayed content while maintaining the relative positional relationship in the azimuth direction among all the displayed content; and
     a pattern for rearranging all displayed content while maintaining the relative positional relationship among the displayed content.
  10.  The content display control system according to any one of claims 3 to 9, wherein, when the target content specified by the specifying unit is not displayed in the virtual space, the display control unit acquires the line-of-sight position of the user and renders the target content such that the target content is displayed at the acquired line-of-sight position.
  11.  The content display control system according to any one of claims 3 to 10, wherein, when the target content displayed at the line-of-sight position overlaps other displayed content, the display control unit changes a display mode of at least one of the target content and the other content such that the target content becomes easier to see.
PCT/JP2022/005304 2021-04-08 2022-02-10 Content display control system WO2022215347A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-065976 2021-04-08
JP2021065976 2021-04-08

Publications (1)

Publication Number Publication Date
WO2022215347A1 true WO2022215347A1 (en) 2022-10-13

Family

ID=83546319

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/005304 WO2022215347A1 (en) 2021-04-08 2022-02-10 Content display control system

Country Status (1)

Country Link
WO (1) WO2022215347A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019095936A (en) * 2017-11-20 2019-06-20 キヤノン株式会社 Image processor, method for processing image, and program
JP2020519986A (en) * 2017-04-19 2020-07-02 マジック リープ, インコーポレイテッドMagic Leap,Inc. Multi-mode execution and text editing for wearable systems

Similar Documents

Publication Publication Date Title
US10514758B2 (en) Visibility improvement method based on eye tracking, machine-readable storage medium and electronic device
US10732725B2 (en) Method and apparatus of interactive display based on gesture recognition
US8135440B2 (en) System for using mobile communication terminal as pointer and method and medium thereof
KR102059913B1 (en) Tag storing method and apparatus thereof, image searching method using tag and apparauts thereof
JP5844288B2 (en) Function expansion device, function expansion method, function expansion program, and integrated circuit
KR102285915B1 (en) Real-time 3d gesture recognition and tracking system for mobile devices
US9064436B1 (en) Text input on touch sensitive interface
US20160012612A1 (en) Display control method and system
JP2011217098A (en) Information processing system, conference management device, information processing method, method for controlling conference management device, and program
JP2003256142A (en) Information processor, information processing program, computer-readable recording medium with information processing program recorded thereon, and information processing method
US20180300031A1 (en) Customizing user interfaces of binary applications
US20220101638A1 (en) Image processing method, and electronic device supporting same
US11709593B2 (en) Electronic apparatus for providing a virtual keyboard and controlling method thereof
US20210133363A1 (en) Display apparatus, display method, and image processing system
CN110738185B (en) Form object identification method, form object identification device and storage medium
JP5342806B2 (en) Display method and display device
WO2022215347A1 (en) Content display control system
KR20190102479A (en) Mobile terminal and method for controlling the same
US10593077B2 (en) Associating digital ink markups with annotated content
JP6208910B1 (en) Moving image processing apparatus, moving image processing system, moving image processing method, and moving image processing program
JP2009015720A (en) Authentication device and authentication method
KR102512184B1 (en) System and method for supporting a reading
WO2018185830A1 (en) Information processing system, information processing method, information processing device, and program
KR102570009B1 (en) Electronic device and method for generating argument reality object
KR102294717B1 (en) A system and method for providing an augmented reality image for providing a transformed object

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22784332

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22784332

Country of ref document: EP

Kind code of ref document: A1