WO2020071545A1 - Information processing device - Google Patents

Information processing device

Info

Publication number
WO2020071545A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
moving image
viewer
service
predetermined
Prior art date
Application number
PCT/JP2019/039345
Other languages
French (fr)
Japanese (ja)
Inventor
道生 小林
Original Assignee
パロニム株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パロニム株式会社 filed Critical パロニム株式会社
Priority to JP2020551117A priority Critical patent/JPWO2020071545A1/en
Publication of WO2020071545A1 publication Critical patent/WO2020071545A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising

Definitions

  • the present invention relates to an information processing device.
  • the object of the present invention is to improve the convenience of a user who browses a moving image and a still image when accessing information of an object shown in the moving image and the still image.
  • an information processing device of one embodiment of the present invention includes: presenting means for presenting to the user an image including at least one object capable of providing information to the user;
  • acquiring means for acquiring, based on an operation performed by the user to whom the image is presented, information for identifying a target about which the user wants to be provided information; identifying means for identifying, based on the information acquired by the acquiring means, the object about which the user wants to be provided information; and providing means for providing the user with information on the target identified by the identifying means.
  • according to the present invention, it is possible to improve the convenience of a user who browses a moving image or the like when accessing information of an object shown in a moving image or a still image.
  • FIG. 10 is a block diagram illustrating an example of a hardware configuration of a management server in the information processing system of FIG. 9.
  • FIG. 11 is a functional block diagram illustrating an example of a functional configuration for executing a heat map process, a branching process, a link process, a voice process, a gesture process, and a suggestion process among the functional configurations of the management server in FIG. 10.
  • FIG. 10 is a diagram illustrating a specific example of a GUI (Graphical User Interface) function of a viewer terminal in the information processing system of FIG. 9.
  • FIG. 9 is a diagram illustrating a specific example of a GUI function of the viewer terminal.
  • FIG. 9 is a diagram illustrating a specific example of a GUI function of the viewer terminal.
  • the “moving image” includes images displayed by the following first to third processes.
  • the first process is a process of successively switching and displaying a series of still images, one for each motion of an object (for example, an animation character) in a planar image (2D image).
  • a process based on the principle of two-dimensional animation, i.e., a so-called flip comic, corresponds to the first process.
  • the second process is a process of setting motions corresponding to each motion of an object (for example, an animated character) in a stereoscopic image (a 3D model image) and changing and displaying the motions over time.
  • a three-dimensional animation corresponds to the second process.
  • the third process is a process of preparing a video (that is, a moving image) corresponding to each motion of an object (for example, an animated character) and playing the video over time.
  • the “video (ie, moving image)” is composed of images such as a plurality of frames and fields (hereinafter, referred to as “unit images”).
  • the unit image is described as a frame.
  • hereinafter, a service that can be realized by the information processing system of FIG. 2 described below (hereinafter referred to as “the present service”) will be described.
  • FIG. 1 is a diagram showing an outline of an example of the present service which can be realized by the information processing system according to one embodiment of the present invention.
  • this service is provided by a service provider (not shown) to a person who views a moving image (hereinafter referred to as a “viewer”) and a person who creates the moving image and performs various settings on it (hereinafter referred to as a “setter”).
  • a service mainly intended for viewers is hereinafter referred to as a “service for viewers”, and a service mainly intended for setters as a “service for setters”.
  • in the service for setters, an object displayed in a moving image can be managed by linking information about the object (hereinafter referred to as “object information”) to it.
  • the “object” includes, in addition to areas such as images or characters representing a person or thing appearing as a subject in the moving image, images such as telops (still or moving) displayed over the moving image. Things that cannot be seen in the moving image are also included in the “object”; more specifically, for example, BGM (background music), music, position information, and the like are all examples of “objects”.
  • the “object information” may include all kinds of information about the object, but in the present service, information about the object that cannot be directly obtained from the moving image viewed by the viewer is mainly used as object information.
  • the viewer, while watching a moving image displayed on a terminal such as a smartphone (a viewer terminal 3 in FIG. 2 and the like described later), performs an operation of designating a desired object among the one or more objects displayed in the moving image.
  • this function is provided as the “service for viewers”.
  • a moving image is distributed to a viewer.
  • the moving image is viewed using dedicated application software for viewing moving images (hereinafter referred to as the “viewing application”), which is installed in the viewer's terminal in advance.
  • the method of viewing a moving image is not particularly limited, and for example, a method of viewing a moving image using a browser function of a viewer terminal may be adopted.
  • FIG. 1 shows a specific example of a moving image L distributed to a viewer terminal.
  • the viewer can view the moving image L delivered to the terminal using a viewing application or a browser function.
  • the moving image L shown in FIG. 1 depicts a female talent walking on a Hawaiian beach.
  • the female talent drawn in the moving image L is an example of the object J.
  • any object, such as the clothes worn by the female talent (products), hotels (facilities), palm trees (animals and plants), and the sea (natural objects), can correspond to the object J.
  • the object information of the object J includes, for example, as information on the female talent, her name (stage name), the name of the production office to which she belongs, her public profile, and the like.
  • the object information may also include, for example, information on the clothes worn by the female talent, such as the brand name, the product name, and stores where they can be purchased (including EC (electronic commerce) sites).
  • here, “arbitrary timing” is intended to include not only the time while the moving image is being watched, but also the time while a moving image different from that moving image is being watched, and even the time after viewing of the moving image itself has ended.
  • the viewer service is realized by linking information in advance for each of one or more objects J that can be designated based on a viewer's operation in a moving image.
  • the work of linking the linking information to the object J is performed by a setter using the service for setters.
  • the "linking information" linked to the object includes all information for "providing" the object information to the viewer.
  • for example, the following information can be adopted as the linking information of the object J representing a person (the female talent): her name (stage name), the name of the production office to which she belongs, the URL (Uniform Resource Locator) of a web page on which her public profile and other object information are published, and the like.
  • likewise, for clothes, the URL of a web page on which the brand name, the product name, stores where purchase is possible, and the like are posted can also be adopted as linking information.
  • alternatively, a keyword may be adopted as linking information; in this case, the viewer terminal, a server, or the like can perform a search on a predetermined search web site using the keyword and provide the search results to the viewer as object information.
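As a rough illustration of the two kinds of linking information discussed above, the Python sketch below distinguishes the URL case (access the page) from the keyword case (forward to a search site). The `kind`/`value` fields, the function name, and the search URL are all hypothetical, not taken from the patent:

```python
def resolve_linking_info(info: dict) -> str:
    """Turn one piece of linking information into the action that yields
    the object information shown to the viewer (illustrative sketch)."""
    if info["kind"] == "url":
        # An "operation to use" a URL instructs the terminal to access it.
        return f"open {info['value']}"
    if info["kind"] == "keyword":
        # A keyword is instead forwarded to a predetermined search site.
        return f"search https://search.example.com?q={info['value']}"
    raise ValueError(f"unknown linking-info kind: {info['kind']}")
```

Either way, what the viewer ultimately receives is the object information, regardless of which kind of linking information the setter chose.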
  • the number of pieces of link information linked to one object J is not limited to one, and may be plural.
  • for example, a URL officially authorized in the present service can be adopted as the first piece of linking information.
  • a URL separately extracted by the present service (for example, one for which a recommendation is automatically displayed) can be adopted as the second piece of linking information.
  • the viewer may be enabled to select a desired one of the first linked information and the second linked information.
  • the WEB site existing at the selected URL is provided to the viewer as object information of the object J.
  • the object information provided to the viewer for one object J is not limited to one, and may be plural.
  • the viewer can receive the object information of the object J by performing an “operation to use” the linking information associated with the predetermined object J on the terminal.
  • when the linking information is a URL, the “operation to use the linking information” refers to an “operation instructing access to that URL”.
  • the web site existing at the URL is displayed on the terminal of the viewer, and the object information posted on the web site is provided to the viewer.
  • the timing at which the “operation to use” the linking information associated with the predetermined object J can be performed is not limited to while the moving image including the object J is being viewed; for example, it can be any timing while viewing another moving image or after viewing of a moving image has ended.
  • that is, the operation using the linking information associated with the predetermined object J can be performed at any timing of the viewer's choosing.
  • the association information associated with the object J is stored in a predetermined location (for example, a predetermined storage location in the viewing application).
  • the viewer, using the service for viewers, can perform an operation using desired linking information, at any time, from among the one or more pieces of linking information stored in the predetermined location.
  • the viewer's “operation for designating the object J” (the operation for storing the linking information in the predetermined location) is, specifically, an operation of tapping any position in a “TIG area” A where the object J may exist in the moving image. That is, this tap becomes the “operation for designating the object J”, and the object J existing in the TIG area A is designated.
  • in the present service, a place called “TIG stock” is adopted as the predetermined place where the linking information is stored. That is, the linking information of the object J designated by the viewer's operation of tapping any position in the TIG area A is stored and managed as “stock information” in the “TIG stock”.
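The designation-and-stock flow described above can be sketched as a minimal Python model. The class names, rectangular TIG areas, and coordinate scheme are illustrative assumptions, not the patent's implementation:

```python
from dataclasses import dataclass, field


@dataclass
class TigArea:
    """Region of the frame where an object J may exist (illustrative)."""
    object_name: str
    linking_info: list   # e.g. URLs or keywords tied to the object
    x: float
    y: float
    width: float
    height: float

    def contains(self, tap_x: float, tap_y: float) -> bool:
        # A tap anywhere inside the rectangle designates the object.
        return (self.x <= tap_x <= self.x + self.width
                and self.y <= tap_y <= self.y + self.height)


@dataclass
class TigStock:
    """Per-viewer store of linking information ("stock information")."""
    stocked: list = field(default_factory=list)

    def tap(self, areas, tap_x, tap_y):
        for area in areas:
            if area.contains(tap_x, tap_y):
                self.stocked.append((area.object_name, area.linking_info))
                return area.object_name
        return None  # no linked object at this position: no response
```

The `None` branch mirrors the behavior described later for objects on which no linking work has been performed: a tap may simply produce no response.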
  • the data format of the object information provided to the viewer does not depend on the data format of the corresponding object J, and may be any format such as a text data format, a still image data format, and a moving image data format.
  • the “TIG stock” here is understood as a stock page for one moving image, but can be extended to a common stock page spanning a plurality of moving images. Specifically, for example, although not shown, the viewer can display a list of the one or more pieces of object information stored in the “TIG stock” by pressing (tapping) a predetermined button displayed on the terminal.
  • this service is realized only when various settings have been made in advance by a setter, or automatically, for the moving image displayed on the viewer terminal.
  • the setter sets the TIG area A for each of the one or more objects J displayed in the moving image using the service for the setter, and links the link information to the TIG area A.
  • only after the setter performs this linking work, or AI (artificial intelligence) or the like automatically performs the linking work on the system side, can the linking information be provided to the viewer. For this reason, for an object J on which the linking work has not been performed, even if the viewer performs a tapping operation, there may be no response, or a message indicating that no linking information is linked may be displayed on the terminal.
  • in other words, because the setter performs the linking work, the viewer can easily obtain the object information of a target object J simply by tapping any position in the TIG area A.
  • FIG. 2 is a diagram showing an outline of a heat map service which is an example of the present service.
  • the “heat map service” refers to a service that represents, as a heat map of tap counts, at what positions (coordinates) and how often tapping operations were performed by viewers W on the screen of the viewer terminal 3 including the TIG area. The viewers W whose tapping operations are counted may be all viewers W, or only viewers W satisfying a predetermined condition (for example, gender, age, etc.).
  • in addition, a period can be set, and how often viewers W tapped during the set period can be represented by a heat map. This also makes it possible to compare tap counts between periods.
  • specifically, stepwise thresholds are provided for the number of tapping operations (tap count) on the TIG area, a heat map representing the tap counts by colors and color shading is generated, and the heat map is superimposed on the target moving image.
  • that is, the heat map is information specifically indicating what tapping operations were performed on the moving image.
  • the heat map service is mainly a service for the setter C who performs the linking work. By using the heat map service, the setter C can see at a glance which positions in the TIG area where the object J exists are tapped more, and which are tapped less. The setter C can also grasp at a glance which positions outside the TIG area have been tapped.
  • as a result, the setter C can, for example, correct the position of the TIG area A set in the moving image to a suitable position based on the heat map, or correct the size of the initially set TIG area. Further, for example, among the objects J to which no object information has been linked, a new linking operation can be performed on an object J on which many tapping operations by viewers W were performed. As a result, the convenience of the viewer can be improved.
  • the degree of enhancement of the heat map can be set.
  • the heat map provided by the heat map service is given a color or shading by providing a stepwise threshold value for the number of taps of the viewer W.
  • the degree of emphasis of the heat map can be adjusted by changing a threshold value provided in a stepwise manner.
  • for example, for a moving image with 1,000,000 viewers W, the first-stage tap-count threshold can be set to 200,000, the second-stage threshold to 400,000, the third-stage threshold to 600,000, the fourth-stage threshold to 800,000, and the fifth-stage threshold to 1,000,000, and colors and shading applied accordingly.
  • alternatively, the first-stage tap-count threshold can be set to 20, the second-stage threshold to 40, the third-stage threshold to 60, the fourth-stage threshold to 80, and the fifth-stage threshold to 100, and colors and shading applied accordingly.
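The stepwise thresholding in the two examples above can be sketched as follows. This is illustrative Python; the function name and the level-numbering scheme (level 0 = no color, crossing the n-th threshold gives level n) are assumptions, not the patent's method:

```python
def heat_level(tap_count: int, thresholds: list) -> int:
    """Map a tap count to a heat-map intensity level.

    `thresholds` holds ascending stepwise thresholds; the returned
    level (0 = no color) selects the color and shading to apply.
    """
    level = 0
    for i, threshold in enumerate(thresholds):
        if tap_count >= threshold:
            level = i + 1
    return level


# "Standard" emphasis for a moving image with 1,000,000 viewers
# (threshold values taken from the example in the text):
standard = [200_000, 400_000, 600_000, 800_000, 1_000_000]

# A stronger emphasis setting lowers the thresholds, so that even
# small tap counts receive color and shading:
emphasized = [20, 40, 60, 80, 100]
```

Changing the threshold list is exactly the "degree of emphasis" adjustment: the same tap counts map to higher levels under the lowered thresholds.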
  • a heat map in which the setting of the degree of emphasis is set to “standard” is drawn on the upper left side of FIG.
  • a heat map in which the setting of the degree of emphasis is “+2” is drawn on the upper right side of FIG.
  • the heat map with the emphasis level set to “+2” (upper right side of FIG. 2) is set to display even small tap counts with colors and shading, unlike the heat map set to “standard” (upper left side of FIG. 2). Thereby, the setter C can display the heat map while changing the emphasis setting according to the number of taps.
  • a TIG area is displayed in a superimposed manner on the heat map.
  • the setter C can correct the position of the TIG area A set in the moving image to a suitable position, for example. Further, for example, a correction relating to the width of the initially set TIG area can be made.
  • here, of the TIG areas A1 and A2, a large number of tapping operations have been performed particularly on an area just outside the TIG area A1.
  • in this case, the setter C can, for example, correct the TIG area A1 by expanding it downward while referring to the heat map. Thereby, the convenience of the viewer W can be improved.
  • on the right side of the lower part of FIG. 2, an example is shown in which balloons are displayed superimposed on the heat map.
  • Each of the balloons F1 and F2 shown on the lower right side in FIG. 2 can be arranged near the center of the TIG area.
  • the setter C can grasp the validity of the set position of the TIG area A.
  • since the balloons are displayed on the viewer terminal 3 superimposed on the moving image L, a balloon is a target when the viewer W performs a tapping operation for acquiring object information.
  • therefore, the setter C can grasp the correspondence between the balloons and the positions where tapping operations were actually performed. Thereby, the setter C can, for example, correct the position of the TIG area A set in the moving image to a suitable position, or correct the size of the initially set TIG area.
  • the heat map service is mainly provided as a service for setters, but can also be provided as a service for viewers. That is, by looking at the heat map, a viewer W can see what kinds of objects J other viewers W are interested in. It is also possible to attach, to an object J that has been tapped by many viewers W, an incentive for tapping it. The result of counting taps can also be used for various other purposes; specifically, for example, when the moving image L is a so-called live broadcast of a concert, the song with the largest number of taps during the concert can be adopted as the closing song of the concert. That is, the heat map can be understood not only as a map of tapped positions but also as an indicator of excitement over the time span of the reproduced moving image L.
  • FIG. 3 is a diagram showing an outline and a specific example of a story branch service which is an example of the present service.
  • the story branching service is a service in which a plurality of buttons indicating options for branching the story of a moving image are displayed, as objects J, in a selectable manner in the moving image being reproduced.
  • conventionally, a button for selecting the story of a moving image is displayed in a place separate from the moving image, and when a viewer W performs a selection operation, the corresponding moving image is reproduced.
  • in contrast, in the story branching service, buttons indicating options are displayed in the moving image L according to a branch configuration shown on the right side of FIG. 3. That is, the branch configuration shown on the right side of FIG. 3 includes a first-level branch consisting of “stay”, “play”, “eat”, and “soak”; a second-level branch consisting of “spring”, “summer”, “autumn”, and “winter” when “play” is selected at the first level; and a second-level branch consisting of “atmosphere-oriented” and “entertainment-oriented” when “eat” is selected at the first level.
  • the upper left part of FIG. 3 shows an example of a moving image top screen created based on the branch configuration shown on the right side of FIG. 3. That is, on the top screen shown in the upper left part of FIG. 3, together with the guide text “select a theme to see”, an object J1 labeled “stay”, an object J2 labeled “play”, an object J3 labeled “eat”, and an object J4 labeled “soak” are displayed as selectable buttons.
  • when an operation (tap operation) of selecting, for example, the object J3 labeled “eat” is performed, a moving image L with content suggesting “eat” is reproduced.
  • then, a button labeled “atmosphere-oriented” and a button labeled “entertainment-oriented” are displayed in a selectable manner.
  • when the button labeled “atmosphere-oriented” is tapped, a moving image L (promotion video) with content suggesting “atmosphere-oriented” is reproduced as an introduction.
  • in the example shown in the lower left part of FIG. 3, a moving image L (promotion video) of an inn emphasizing atmosphere, with the themes “eat” and “atmosphere-oriented”, in which, for example, local fish dishes are served beside the hearth, is reproduced as object information.
  • here, the moving image reproduced as object information also includes one or more objects J linked with object information. When any position in a TIG area A where such an object J exists is tapped, the object is stored in the area D at the right end of the screen indicating the “TIG stock”. In the example shown in the lower left part of FIG. 3, when any position in the TIG area A21 where the local fish dish as the object J21 exists is tapped, it is stored in the area D at the right end of the screen indicating the “TIG stock”.
  • that is, the TIG areas A are set for each object J even at the branch destinations; therefore, when any position in a TIG area A is tapped, the tapped object J is added to the “TIG stock”. It is also possible to add all the options to the “TIG stock” while returning repeatedly to the top screen shown in the upper left part of FIG. 3. Further, although the story branches into two levels in the example of FIG. 3, the story can be branched into any number of levels, and a configuration in which two-way branching is repeated may also be adopted. In this case, only one main story is used, while a substory can be reproduced as object information at each branch.
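The two-level branch configuration walked through above can be sketched as a small tree. This is illustrative Python; the `BranchNode` API and the clip file names are hypothetical, not from the patent:

```python
class BranchNode:
    """One node of a story branch: a clip plus selectable options."""

    def __init__(self, clip, options=None):
        self.clip = clip               # moving image reproduced at this node
        self.options = options or {}   # button label -> next BranchNode

    def select(self, label):
        """Follow the option chosen by the viewer's tap."""
        return self.options[label]


# Partial reconstruction of the FIG. 3 branch configuration:
eat = BranchNode("eat_intro.mp4", {
    "atmosphere-oriented": BranchNode("inn_atmosphere.mp4"),
    "entertainment-oriented": BranchNode("inn_entertainment.mp4"),
})
top = BranchNode("top.mp4", {
    "stay": BranchNode("stay.mp4"),
    "play": BranchNode("play.mp4", {s: BranchNode(f"{s}.mp4")
                                    for s in ("spring", "summer", "autumn", "winter")}),
    "eat": eat,
    "soak": BranchNode("soak.mp4"),
})
```

Because each node can hold any number of options, the same structure covers arbitrarily deep branching, including the repeated two-way branching mentioned above.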
  • the story branching service can be applied to, for example, a quiz moving image.
  • each of the correct and incorrect moving images can be reproduced as object information.
  • when the story branching service is applied to a quiz moving image L, even if the viewer W cannot answer a quiz correctly, the question that was answered incorrectly can be attempted again. In this case, the display can differentiate an option that was already chosen and found incorrect from the other options.
  • for example, when the question is attempted again, the display can be such that only the option “A” is shown in a lighter shade. Thus, the viewer W can answer the quiz without selecting “A” again.
  • FIG. 4 is a diagram showing another specific example of the story branching service which is an example of the present service.
  • in the example of FIG. 4 as well, buttons indicating options are displayed in the moving image L. That is, the branch configuration shown in the upper part of FIG. 4 repeats the options “see again”, “see next”, and “see only how to make”.
  • the lower part of FIG. 4 shows an example of a top screen of a moving image L created based on the branch configuration shown in the upper part of FIG. 4. That is, on the top screen shown in the lower part of FIG. 4, together with a guide text “please click a button like this!”, an object J31 labeled “see next”, an object J32 labeled “see only how to make”, and an object J33 labeled “see again” are displayed in a selectable manner.
  • FIG. 5 is a diagram showing an outline of a multilink service which is an example of the present service.
  • the multilink service is a service for adding link information.
  • the linking information can be additionally linked to the object J added to the “TIG stock”.
  • the multilink service is a service for setters.
  • a screen for setting the linking information of the object J41 is drawn.
  • the setter C can additionally set pieces of linking information H1 to H3 for the object J41.
  • the additionally set linking information is displayed on the viewer terminal 3. This allows the viewer W to freely choose to purchase the object J41, for example, at an EC site or at an actual store.
  • in addition, the object information can be used for various purposes, such as simply browsing an EC site, browsing detailed information about a store, or confirming the location of a store.
  • FIG. 6 is a diagram showing an outline of a voice TIG service which is an example of the present service.
  • the voice TIG service is a service that recognizes the voice of the viewer W.
  • thereby, the viewer W can perform operations by voice even when both hands are occupied and a finger operation is impossible.
  • for example, when assembling a model or the like while watching a moving image L of an assembly manual, the viewer W can efficiently assemble the model using both hands.
  • in this case, an operation on the viewer terminal 3 is performed by uttering the instruction content into the microphone I.
  • a more detailed assembly manual can be acquired as object information of the model based on the operation of the voice of the viewer W.
  • an image of the actual size of the part can be acquired and displayed as object information of some parts of the model.
  • the GUI is provided with an ON / OFF switch button G for the microphone I.
  • thereby, it is possible to prevent the microphone I from picking up sounds that are unnecessary for operation.
  • the viewer terminal 3 and the microphone I are connected by wire.
  • the present invention is not limited to this.
  • for example, the viewer terminal 3 and the microphone I may be wirelessly connected by Bluetooth (registered trademark) or the like, or the microphone I may be built into the viewer terminal 3.
  • FIG. 7 is a diagram showing an outline of a gesture TIG service which is an example of the present service.
  • the gesture TIG service is a service that allows the viewer W to acquire object information simply by making a gesture while viewing a moving image displayed on the viewer terminal 3. Specifically, for example, the viewer W performs an operation of selecting the object J by a gesture of clenching the fist (so-called “rock”) or a gesture of opening the hand (so-called “paper”), and can thereby access the object information of the selected object J.
  • in the gesture TIG service, the viewer W typically makes a gesture toward the camera of the viewer terminal 3, as shown at the left end of FIG. 7; however, as shown in the center of FIG. 7, the moving image can also be distributed simultaneously to a television V. In this case, the viewer W makes a gesture of selecting an object while watching the television V.
  • the gesture can be detected by a Web camera mounted on the viewer terminal 3 or various kinds of sensors.
  • a gesture TIG service can be provided in a signage S at a store.
  • an operation of selecting and purchasing a product appearing in the moving image L can be performed by a gesture, and the product can be received and taken home on the spot.
  • FIG. 8 is a diagram showing an outline of a real-time suggest service which is an example of the present service.
  • the real-time suggest service is a service that displays timely advertisements for the viewer W on the moving image L. That is, in the present service, linking information is basically linked in advance (or automatically) to every object J appearing in the moving image L. However, when viewers W with different tastes and preferences view the same moving image L, the object J of interest usually differs for each viewer W. For this reason, in the real-time suggest service, the TIG areas to be activated can be changed in accordance with the tastes and preferences of each viewer W. Specifically, for example, as shown on the left side of FIG. 8, assume a case where a certain viewer W, by the time the first half of the moving image L has been viewed, has frequently performed operations designating a car as the object J.
  • in this case, as shown in the upper right part of FIG. 8, in the second half of the moving image L viewed by that viewer W, the TIG area A of the car is enabled (or mainly the TIG area A of the car is automatically set). That is, the viewer W's operations of designating objects J are recorded as a track record together with the content of the designated objects J, and become a target of analysis. As a result, the tendency of the viewer W's operations is grasped, and the TIG areas to be displayed can be changed at any time according to that tendency. Further, as shown in the lower right part of FIG. 8, for a viewer W who tends to acquire information about cars, a car advertisement K may be displayed as a wipe in the moving image L.
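The analysis step described above, recording which objects a viewer designates and then enabling matching TIG areas later in the video, can be sketched as follows. This is illustrative Python; the dictionary-based area records, the function name, and the `top_n` parameter are assumptions, not the patent's method:

```python
from collections import Counter


def suggest_active_areas(tap_history, all_areas, top_n=1):
    """Choose which TIG areas to enable in the rest of the moving image.

    `tap_history` lists the object categories the viewer designated so
    far (e.g. ["car", "car", "watch"]); areas whose category ranks among
    the viewer's top_n interests are enabled.
    """
    counts = Counter(tap_history)
    preferred = {cat for cat, _ in counts.most_common(top_n)}
    return [area for area in all_areas if area["category"] in preferred]
```

A real system would re-run this selection as the track record grows, which corresponds to changing the displayed TIG areas "at any time" according to the viewer's tendency.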
  • FIG. 9 is a diagram illustrating an example of a configuration of an information processing system according to an embodiment of the present invention.
  • the information processing system shown in FIG. 9 is configured to include a management server 1, a setter terminal 2, a viewer terminal 3, and an external server 4.
  • the management server 1, the setter terminal 2, the viewer terminal 3, and the external server 4 are mutually connected via a predetermined network N such as the Internet.
  • the management server 1 is an information processing device managed by a service provider (not shown).
  • the management server 1 executes various processes for realizing the present service while appropriately communicating with the setter terminal 2 and the viewer terminal 3.
  • the setter terminal 2 is an information processing device operated by the setter C, and includes, for example, a personal computer, a smartphone, a tablet, and the like.
  • the viewer terminal 3 is an information processing device operated by the viewer W, and includes, for example, a personal computer, a smartphone, a tablet, and the like.
  • the external server 4 manages the object information that can be provided to the viewer via the linking information; for example, when the linking information is a URL, the external server 4 hosts the various websites (websites on which the object information is posted) located at that URL.
  • FIG. 10 is a block diagram showing an example of a hardware configuration of a management server in the information processing system of FIG.
  • the management server 1 includes a CPU (Central Processing Unit) 11, a ROM (Read Only Memory) 12, a RAM (Random Access Memory) 13, a bus 14, an input / output interface 15, an input unit 16, an output unit 17, a storage unit 18, a communication unit 19, and a drive 20.
  • the CPU 11 executes various processes according to a program recorded in the ROM 12 or a program loaded from the storage unit 18 into the RAM 13.
  • the RAM 13 also stores data and the like necessary for the CPU 11 to execute various processes.
  • the CPU 11, the ROM 12, and the RAM 13 are connected to each other via the bus 14.
  • the bus 14 is also connected to an input / output interface 15.
  • the input / output interface 15 is connected to an input unit 16, an output unit 17, a storage unit 18, a communication unit 19, and a drive 20.
  • the input unit 16 is composed of, for example, a keyboard, and inputs various information in accordance with user operations.
  • the output unit 17 includes a display such as a liquid crystal display, a speaker, and the like, and outputs various types of information as images and sounds.
  • the storage unit 18 is configured by a DRAM (Dynamic Random Access Memory) or the like, and stores various data.
  • the communication unit 19 communicates with another device (for example, the setter terminal 2, the viewer terminal 3, the external server 4, and the like in FIG. 2) via a network N including the Internet.
  • the drive 20 is appropriately equipped with a removable medium 30 made of a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like.
  • the program read from the removable medium 30 by the drive 20 is installed in the storage unit 18 as needed. The removable medium 30 can also store the various data stored in the storage unit 18, in the same manner as the storage unit 18.
  • the setter terminal 2, the viewer terminal 3, and the external server 4 in FIG. 9 can also have basically the same configuration as the hardware configuration shown in FIG. Therefore, description of the hardware configurations of the setter terminal 2, the viewer terminal 3, and the external server 4 will be omitted.
  • a service provider (not shown) can provide various services described below in addition to the above-described base service.
  • Heat map processing refers to processing for realizing the above-described heat map service.
  • Branch processing refers to processing for realizing the above-described story branch service.
  • Link processing refers to processing for realizing the above-described multilink service.
  • Voice processing refers to processing for realizing the above-described voice TIG service.
  • Gesture processing refers to processing for realizing the above-described gesture TIG service.
  • Suggest processing refers to processing for realizing the above-described real-time suggest service.
  • Hereinafter, a functional configuration for executing the heat map process, the branch process, the link process, the voice process, the gesture process, and the suggest process, whose execution is controlled by the management server 1, will be described.
  • FIG. 11 is a functional block diagram showing an example of a functional configuration for executing the heat map process, the branch process, the link process, the voice process, the gesture process, and the suggest process among the functional configurations of the management server in FIG. 10.
  • When the heat map process is executed, the linking information management unit 101, the moving image presentation control unit 102, the designation reception unit 103, the acquisition unit 104, the provision unit 105, and the heat map unit 106 function.
  • When the branch process is executed, the linking information management unit 101, the moving image presentation control unit 102, the designation reception unit 103, the acquisition unit 104, the provision unit 105, and the branch generation unit 107 function.
  • When the link process is executed, the linking information management unit 101, the moving image presentation control unit 102, the designation reception unit 103, the acquisition unit 104, and the provision unit 105 function.
  • When the voice process is executed, the linking information management unit 101, the moving image presentation control unit 102, the designation reception unit 103, the acquisition unit 104, the provision unit 105, and the speech recognition unit 108 function.
  • When the gesture process is executed, the linking information management unit 101, the moving image presentation control unit 102, the designation reception unit 103, the acquisition unit 104, the provision unit 105, and the gesture recognition unit 109 function.
  • When the suggest process is executed, the linking information management unit 101, the moving image presentation control unit 102, the designation reception unit 103, the acquisition unit 104, the provision unit 105, and the suggestion control unit 110 function.
  • The associating information management unit 101 manages, before the designation by the viewer W, one or more pieces of linking information for providing the object information to the viewer W, linked to the object J existing in the TIG area A specified by the viewer W as the first user in the displayed moving image L.
  • In other words, the associating information management unit 101 performs the following processing before an operation (for example, a tap operation) for specifying the object J displayed on the moving image L is performed. That is, the associating information management unit 101 manages one or more pieces of associating information in association with each of the one or more objects J stored and managed in the object DB 181 as objects J that can be displayed in the moving image L.
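This advance-management scheme can be sketched roughly as follows. This is an illustrative model only; the class name, object IDs, and URLs below are invented, not taken from the embodiment:

```python
# Hypothetical sketch of the associating (linking) information management unit (101).
# Object IDs and URLs are invented for illustration.

class LinkingInfoManager:
    """Manages linking information per object, set up BEFORE any viewer designation."""

    def __init__(self):
        # object_id -> list of linking information (e.g. URLs of object information)
        self._links = {}

    def register(self, object_id, linking_info):
        """Associate one piece of linking information with an object in advance."""
        self._links.setdefault(object_id, []).append(linking_info)

    def get(self, object_id):
        """Return all linking information associated with the object (empty if none)."""
        return list(self._links.get(object_id, []))


# Example setup performed before the moving image L is displayed.
manager = LinkingInfoManager()
manager.register("J1", "https://example.com/talent-profile")
manager.register("J2", "https://example.com/shirt-shop")
```

Because registration happens before playback, a viewer's tap only needs a lookup, not a search, which is the point of managing the linking information in advance.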
  • the moving image presentation control unit 102 executes control for presenting the moving image L to the viewer W. Specifically, the moving image presentation control unit 102 executes control for displaying the moving image L on the viewer terminal 3 of the viewer W.
  • When the viewer to whom the moving image is presented performs a predetermined operation (for example, a tap operation) on the TIG area, the specification receiving unit 103 recognizes that the object J existing in the TIG area has been specified, and accepts the specification.
  • the acquisition unit 104 acquires the association information managed in association with the object J for which the specification has been accepted. Specifically, for example, the acquisition unit 104 acquires the association information managed in association with the object J for which the specification by the viewer W has been accepted.
  • the providing unit 105 executes control for providing the object information to the viewer terminal 3 based on the link information acquired by the acquiring unit 104.
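Taken together, the specification receiving unit 103, the acquisition unit 104, and the providing unit 105 form a simple pipeline. The following is a minimal sketch under the assumption that a TIG area is a rectangle active during a time range of the video; all coordinates, times, object IDs, and URLs are invented:

```python
# Simplified sketch of the designation -> acquisition -> provision flow (units 103-105).
# A TIG area is modeled as a rectangle active during a playback-time range.

TIG_AREAS = [
    # (object_id, x0, y0, x1, y1, t_start, t_end) -- all values invented
    ("J1", 100, 50, 300, 400, 0.0, 12.0),
    ("J2", 400, 200, 600, 500, 5.0, 20.0),
]

LINKING_INFO = {"J1": "https://example.com/talent", "J2": "https://example.com/shirt"}


def accept_designation(tap_x, tap_y, playback_time):
    """Designation reception (103): return the object J whose active TIG area
    contains the tap position, or None if no area matches."""
    for obj_id, x0, y0, x1, y1, t0, t1 in TIG_AREAS:
        if x0 <= tap_x <= x1 and y0 <= tap_y <= y1 and t0 <= playback_time <= t1:
            return obj_id
    return None


def provide_object_info(tap_x, tap_y, playback_time):
    """Acquisition (104) and provision (105): look up the linking information
    of the designated object and hand it to the viewer terminal."""
    obj_id = accept_designation(tap_x, tap_y, playback_time)
    if obj_id is None:
        return None
    return LINKING_INFO[obj_id]  # in practice, rendered on the viewer terminal 3
```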
  • the heat map unit 106 executes control for presenting a moving image L on which one or more objects J can be displayed to the viewer terminal 3. Specifically, for example, the heat map unit 106 executes the control for presenting the moving image L shown in FIG. 2.
  • the branch generation unit 107 generates information indicating a branch point at which the moving image branches into a plurality of stories. Specifically, for example, as shown in FIGS. 3 and 4, the branch generation unit 107 generates information indicating a branch point at which the moving image L branches into a plurality of stories.
  • the voice recognition unit 108 recognizes the voice of the viewer W. Specifically, for example, as shown in FIG. 6, the voice recognition unit 108 recognizes the voice of the viewer W input using the microphone I.
  • the gesture recognition unit 109 executes control for recognizing the gesture of the viewer W. Specifically, for example, the gesture recognition unit 109 executes control for recognizing the gesture of the viewer W using a proximity sensor or the like mounted on the viewer terminal 3.
  • the suggestion control unit 110 executes a control for displaying a predetermined advertisement on the moving image L based on the operation results of the viewer W. Specifically, for example, as illustrated in FIG. 8, the suggestion control unit 110 executes control for displaying the advertisement K on the moving image L based on the operation results of the viewer W.
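The embodiment does not specify how the operation results are analyzed, but one plausible sketch of the suggestion control is to count the categories of the objects the viewer has tapped and overlay an advertisement for the dominant category. The category names and the ad representation below are assumptions:

```python
from collections import Counter

# Hypothetical operation results: categories of objects J tapped by the viewer W.
tap_history = ["car", "car", "clothing", "car"]


def pick_advertisement(history):
    """Choose an advertisement category from the viewer's tap results
    (most frequently tapped category wins); None if there is no history."""
    if not history:
        return None
    category, _count = Counter(history).most_common(1)[0]
    return f"ad:{category}"  # e.g. displayed as a wipe over the moving image L
```

With the sample history above, a car advertisement would be selected, matching the FIG. 8 scenario of a viewer who prefers car information.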
  • the upper part of FIG. 12 shows two specific examples of the collective TIG function.
  • the “batch TIG function” is a function in which, when a “predetermined operation” is performed at a “predetermined timing” during reproduction of the moving image L, all the objects J in the frame (still image) displayed at that timing are collectively stocked (TIGed).
  • the content of the “predetermined operation” is not particularly limited.
  • In the example in the upper part of FIG. 12, an operation of tapping the button B1 for causing the batch TIG to be executed is performed as the “predetermined operation” at the “predetermined timing” during the reproduction of the moving image.
  • As a result, all the objects J1 to J3 existing in the frame (still image) displayed at the “predetermined timing” are collectively TIGed (stocked) into the area D at the right end of the screen indicating the “TIG stock”.
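The batch TIG behavior can be sketched as follows, assuming for illustration that each frame carries a list of the objects it contains; the frame indices and object IDs are invented:

```python
# Sketch of the batch TIG function: one tap of button B1 stocks every object J
# in the currently displayed frame (still image). Frame contents are invented.

frames = {
    # frame index -> objects J present in that frame
    120: ["J1", "J2", "J3"],
    121: ["J2", "J3"],
}


def batch_tig(tig_stock, current_frame):
    """Append all objects of the current frame to the 'TIG stock' (area D),
    skipping objects that are already stocked."""
    for obj in frames.get(current_frame, []):
        if obj not in tig_stock:
            tig_stock.append(obj)
    return tig_stock
```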
  • the lower part of FIG. 12 shows a specific example of the similar information presentation function.
  • the “similar information presentation function” refers to a function of providing information about a thing similar to the object J displayed on the moving image (hereinafter, referred to as “similar information”) to the viewer W.
  • In the example in the lower part of FIG. 12, the similar information of the object J1 is displayed. Specifically, for example, information on a “round neck long-sleeved shirt” of a brand different from that of the object J1 is displayed as the similar information. Thereby, even if the object J1 appearing in the moving image is something that the viewer cannot easily obtain, such as an object of rare value or an expensive object, the viewer can obtain information for obtaining a “similar thing”.
  • the present service basically allows the user to easily save the object J in the “TIG stock” by performing an operation of tapping the object J.
  • For example, a plurality of areas indicating storage in the “TIG stock” may be provided so that the viewer W can determine in which one to store the object J. In this case, when the viewer W performs an operation of selecting the object J, a part of the categorization performed in the “TIG stock” can be carried out at the time of storage. For example, the object J to be purchased immediately is stored in the area D, while the object J to be put into the cart for the time being is stored in the area E.
  • the upper part of FIG. 13 shows a specific example of the screen shot function.
  • When the viewer W performs an operation of tapping the button B1 during reproduction of a moving image, the frame (still image) including all the objects J1 to J3 is saved as a screen shot. The saved frames (still images) are displayed in a list as a “screenshot list”, for example, as shown in the upper part of FIG. 13.
  • The viewer W can obtain the object information by performing the following operation. That is, the viewer W performs an operation of tapping any position in the TIG area A of each of the one or more objects J included in the frame (still image), whereby the object information of that object J can be obtained. In the example in the upper part of FIG. 13, the object information of each of the objects J1 to J3 can be obtained.
  • In this way, the viewer W can temporarily save the frame (still image) using the screen shot function and acquire the object information later at leisure.
  • the lower part of FIG. 13 shows a specific example of the pinch-in function.
  • the viewer W can display a person, music, a location, a controller, and the like outside the screen M by performing a pinch-in operation on the screen M.
  • Alternatively, a separate button may be provided so that a function corresponding to the pinch-in function can be executed.
  • the upper part of FIG. 14 shows a specific example of the simple translation function.
  • the “simple translation function” is a function that displays a simple translation of a word when the viewer taps one of the words included in the text of a telop or caption displayed during reproduction of the moving image.
  • In other words, one or more words included in a character string or document written in a telop or caption displayed during reproduction of a moving image can be regarded as objects J. Therefore, for example, similarly to the GUI shown in the upper part of FIG. 12 described above, by performing an operation of tapping a word as the object J, the word can be saved in the “TIG stock” indicated by the area D. Furthermore, as shown in the upper part of FIG. 14, the word as the object J stored in the “TIG stock” (area D) is recorded in a word book that can be used by each viewer W. Then, when the viewer W taps an icon indicating a word recorded in the word book or the “TIG stock” (area D), an accurate pronunciation of the word is reproduced by voice. Furthermore, when the word is pronounced by the viewer W, the accuracy of the pronunciation of the viewer W is analyzed, and a correct answer rate indicating the degree of accuracy of the pronunciation can be displayed.
  • the lower part of FIG. 14 shows a specific example of the TIG stock.
  • the above-mentioned “TIG stock” can be displayed simply as a list of objects J, or can be displayed in a form categorized into one or more books.
  • the operation of categorizing the objects J stored in the “TIG stock” can be performed manually by the setter C, or can be performed automatically using a technique such as AI (artificial intelligence). This allows the viewer W to quickly find the object information of the desired object J when trying to acquire the object information of the object J after viewing the moving image.
  • the object J is merely an example. That is, the moving image L includes a myriad of candidates that can correspond to the object J. For this reason, as described above, by linking the linking information to as many objects J as possible in advance, the convenience of the viewer W can be further improved.
  • the link information of the object J is stored in the “TIG stock” as stock information, but this is only an example.
  • the linking information of the object J specified by the viewer W may not be stored as stock information.
  • FIG. 2 shows only one setter C, but this is merely an example, and a plurality of setters C may exist.
  • the shape of the TIG area is not limited to a quadrangle, and may be any free shape.
  • the moving images L shown in FIGS. 1 to 8 are merely examples, and may have other configurations.
  • the viewer W can obtain and purchase information on clothes and accessories worn by a person (for example, the female talent in FIG. 1) appearing in the moving image L while operating the viewer terminal 3. Further, the extent to which the viewer W performs the tap operation on the TIG area A can be fed back to the setter C or the person who provides the moving image L. In addition, since the degree to which the TIG area A is set in the moving image L is expected to change in accordance with the ability of the setter C, it is expected that the number of setters C who are professionals having advanced skills will increase.
  • the viewer W can also act as the setter C to set the TIG area A in a moving image. That is, the viewer W can use the setting application described above. Further, the viewer W can also distribute the moving image L in which the TIG area A is set. Also, when the TIG area A corresponding to one object J is tapped, a plurality of link destinations (jump destinations) can be selected. The “TIG stock” can also be customized. In addition, the effect can be changed according to the specific mode of the operation for designating the object J.
  • For example, the effect can be changed by the operation angle, the tap duration, a double tap, or the like. More specifically, for example, in a case where first linking information and second linking information are associated with the object J, the first linking information can be stored when a deep press operation is performed, while the second linking information can be stored when a double tap (touching twice) is performed. Further, the object J can be stocked by the movement of the eyes of the viewer W.
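This operation-dependent selection could be modeled as a simple dispatch from the operation mode to the piece of linking information to stock. The mode names and URLs below are illustrative assumptions, not part of the embodiment:

```python
# Sketch: different operations on the same object J select different linking info.
OBJECT_LINKS = {
    "J1": {
        "deep_press": "https://example.com/first-link",   # first linking information
        "double_tap": "https://example.com/second-link",  # second linking information
    }
}


def linking_info_for(object_id, operation):
    """Return the linking information selected by the operation mode,
    or None if the object or operation mode is not registered."""
    return OBJECT_LINKS.get(object_id, {}).get(operation)
```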
  • the system configuration shown in FIG. 9 and the hardware configuration of the management server 1 shown in FIG. 10 are merely examples for achieving the object of the present invention, and are not particularly limited.
  • the functional block diagram shown in FIG. 11 is merely an example and is not particularly limited. That is, it suffices that the information processing system has a function capable of executing the series of processes described above as a whole, and what kind of functional block is used to realize this function is not particularly limited to the example of FIG. .
  • the location of the functional block is not limited to FIG. 11 and may be arbitrary.
  • the functional blocks necessary for executing the linking information providing process and the linking information setting support process are configured on the management server 1 side, but this is only an example.
  • one functional block may be configured by hardware alone, may be configured by software alone, or may be configured by a combination thereof.
  • a program constituting the software is installed on a computer or the like from a network or a recording medium.
  • the computer may be a computer embedded in dedicated hardware.
  • the computer may be a computer that can execute various functions by installing various programs, for example, a general-purpose smartphone or a personal computer in addition to a server.
  • The recording medium containing such a program is constituted not only by a removable medium distributed separately from the apparatus main body in order to provide the program to each user, but also by a recording medium or the like provided to each user in a state of being incorporated in the apparatus main body in advance.
  • In this specification, the steps describing the program recorded on the recording medium include not only processing performed in chronological order according to the described order, but also processing executed in parallel or individually, which is not necessarily performed in chronological order.
  • In this specification, the term “system” refers to an overall apparatus composed of a plurality of devices, a plurality of means, and the like.
  • the information processing system to which the present invention is applied only needs to have the following configuration, and can take various embodiments. That is, the information processing system to which the present invention is applied includes:
  • a management unit (for example, the linking information management unit 101 in FIG. 11) that links, before the designation (for example, at a timing before the moving image is displayed on the viewer terminal), information for providing predetermined information (for example, object information) to a first user (for example, the viewer W in FIG. 9) to an object (for example, the object J1 indicating the female talent in FIG. 1) existing in a predetermined area (for example, the TIG area A1 in FIG. 1) specified by the first user in a displayed moving image (for example, the moving image L in FIG. 1), and manages the linked information;
  • a presentation control unit (for example, the moving image presentation control unit 102 in FIG. 11) that executes control for presenting the moving image to the first user;
  • a first receiving unit (for example, the specification receiving unit 103 in FIG. 11) that, when a predetermined operation (for example, a tap operation) is performed on the predetermined area of the moving image, accepts the specification of the object existing in the predetermined area;
  • an acquisition unit (for example, the acquisition unit 104 in FIG. 11) that acquires the linking information managed in association with the object whose specification has been accepted; and
  • a provision control unit (for example, the provision unit 105 in FIG. 11) that executes control for providing the predetermined information to the first user (for example, displaying it on the viewer terminal 3 in FIG. 2) based on the linking information acquired by the acquisition unit.
  • With this configuration, the management unit manages the object in association with the linking information, the presentation control unit presents the moving image to the viewer, the first receiving unit receives the specification of the object by the viewer, the obtaining unit obtains the linking information associated with the object, and the provision control unit provides the viewer with the predetermined information based on the linking information.
  • the viewer can easily obtain the predetermined information on the object included in the moving image during the viewing of the moving image or at a timing after the viewing of the moving image.
  • That is, the viewer W in the above-described embodiment can easily acquire the object information regarding the object J included in the moving image L during viewing of the moving image L or at a timing after viewing of the moving image L.
  • A second presentation control unit (for example, the heat map unit 106) may be further provided, which generates information indicating the positions on the moving image where the predetermined operation (for example, a tap operation) by one or more users (for example, viewers W) was performed, and executes control for presenting the information in a predetermined format (for example, a heat map).
  • the second presentation control unit generates information indicating a position on the moving image where a predetermined operation has been performed by one or more users, and performs control for presenting in a predetermined format.
  • the position of the predetermined area set in the moving image can be corrected to a suitable position.
  • a new associating operation can be performed on an object for which a predetermined operation has been frequently performed by a user among objects to which object information has not been associated so far.
  • the second presentation control means can execute control for presenting, as the information indicating the positions on the moving image at which the predetermined operation by the one or more users was performed, a heat map generated based on the number of times the predetermined operation was performed.
  • the second presentation control unit generates information indicating a position on the moving image where a predetermined operation has been performed by one or more users and executes control for presenting the information in the form of a heat map.
  • With a heat map, it is possible to grasp at a glance a specific area in which an object is present, a specific position where the predetermined operation is frequently performed, or a specific position where the predetermined operation is not performed.
  • the position of the predetermined area set in the moving image can be corrected to a suitable position.
  • a new associating operation can be performed on an object for which a predetermined operation has been frequently performed by a user among objects to which object information has not been associated so far.
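The heat map generation described above can be sketched as binning taps from one or more viewers into a coarse grid over the video frame. The grid size and frame resolution below are assumed values, not specified by the embodiment:

```python
# Sketch of heat map generation (second presentation control / heat map unit 106):
# predetermined operations (taps) from one or more viewers are binned into a grid.

GRID_W, GRID_H = 4, 3            # coarse grid over the frame (assumed size)
FRAME_W, FRAME_H = 640, 480      # frame resolution (assumed)


def build_heat_map(taps):
    """Count predetermined operations (taps) per grid cell over the frame."""
    grid = [[0] * GRID_W for _ in range(GRID_H)]
    for x, y in taps:
        col = min(int(x * GRID_W / FRAME_W), GRID_W - 1)
        row = min(int(y * GRID_H / FRAME_H), GRID_H - 1)
        grid[row][col] += 1
    return grid


def hottest_cell(grid):
    """Return (row, col) of the most-tapped cell, e.g. to correct a TIG area
    or to find an object worth newly associating with linking information."""
    return max(
        ((r, c) for r in range(GRID_H) for c in range(GRID_W)),
        key=lambda rc: grid[rc[0]][rc[1]],
    )
```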
  • The information processing system may further include a branch generation unit that generates information indicating a branch point at which the moving image branches into a plurality of stories.
  • Since the branch generation unit generates information indicating a branch point at which a moving image branches into a plurality of stories, the above-described story branch service can be provided.
  • The information processing system may further include a voice recognition means for recognizing the voice of the user.
  • the voice recognition means recognizes the voice of the user, so that the above-mentioned voice TIG service can be provided.
  • The information processing system may further include a gesture recognition unit for recognizing the gesture of the user.
  • the gesture recognizing unit recognizes the gesture of the user, so that the above-described gesture TIG service can be provided.
  • a suggestion control means for executing a control for displaying a predetermined advertisement (signage) on the moving image based on a result of the operation of the user may be further provided.
  • the suggestion control means executes control for displaying a predetermined advertisement (signage) on a moving image based on the results of user operations, the above-described real-time suggestion service can be provided.
  • The management means may manage the object in association with a plurality of pieces of the linking information, the acquisition unit may acquire the plurality of pieces of linking information managed in association with the object whose specification has been accepted, and the provision control means may execute control for providing the predetermined information to the user based on the plurality of pieces of linking information acquired by the acquisition means.
  • That is, the management unit associates and manages a plurality of pieces of linking information with the object, the obtaining unit obtains the plurality of pieces of linking information managed in association with the object specified by the user, and the provision control means executes control for providing the predetermined information to the user based on the plurality of pieces of linking information acquired by the acquisition means.
  • 1: management server, 2: setter terminal, 3: viewer terminal, 4: external server, 11: CPU, 12: ROM, 13: RAM, 14: bus, 15: input / output interface, 16: input unit, 17: output unit, 18: storage unit, 19: communication unit, 20: drive, 30: removable medium, 101: linked information management unit, 102: moving image presentation control unit, 103: designation reception unit, 104: acquisition unit, 105: provision unit, 106: heat map unit, 107: branch generation unit, 108: speech recognition unit, 109: gesture recognition unit, 110: suggestion control unit, 181: object DB, C: setter, W: viewer, L: moving image, J: object, A: TIG area, F: balloon, H: link information, I: microphone, G: ON / OFF switch button, V: television, S: storefront signage, K: advertising, B: button, D: area, E: area, M: screen, N: network


Abstract

The present invention addresses the problem of improving convenience for a user viewing a video or still image when accessing information about a specific object shown in the video or still image. An associated information management unit 101 associates associated information for providing object information to a viewer W with an object J which may be designated by the viewer W in a video L and manages said information, said association being performed prior to designation. A video presentation control unit 102 presents the video L to the viewer W. A designation acceptance unit 103 accepts the designation via a tap operation on a TIG area A by the viewer W of the video L. An acquisition unit 104 acquires the associated information associated with the object J for which the designation was accepted. A provision unit 105 provides the object information to the viewer W on the basis of the acquired associated information. The problem is solved thereby.

Description

Information processing device
The present invention relates to an information processing device.
Hitherto, in order to support the introduction of products by a user using a moving image, there has been proposed a system that presents a moving image by a distribution user to a plurality of viewing users, presents introduced products selected from among a plurality of products, and accepts a viewing user's selection of a product (see, for example, Patent Document 1).
JP 2018-26152 A
However, for example, when a user watching a moving image is interested in a product shown in the moving image and wishes to know its details or to purchase it, the user has to search for it separately, using the name of the product or the like in a search engine or the like. In such a case, the operations, time, and labor required before the viewing user finally makes a purchase increase, and there has been a problem that the likelihood of the purchase actually being made decreases.
An object of the present invention is to improve the convenience of a user who views moving images and still images when accessing information on an object shown in the moving images and still images.
In order to achieve the above object, an information processing device of one embodiment of the present invention includes:
presenting means for presenting to a user an image including at least one or more targets about which information can be provided to the user;
acquisition means for acquiring, based on an operation by the user to whom the image is presented, information for identifying a target about which the user wants information to be provided;
identifying means for identifying, based on the information acquired by the acquisition means, the target about which the user wants information to be provided; and
providing means for providing the user with information on the target identified by the identifying means.
According to the present invention, it is possible to improve the convenience of a user who views moving images and the like when accessing information on an object shown in a moving image or a still image.
本発明の一実施形態に係る情報処理システムにより実現可能な本サービスの一例の概要を示す図である。It is a figure showing an outline of an example of this service which can be realized by information processing system concerning one embodiment of the present invention. 本サービスの一例であるヒートマップサービスの概要を示す図である。It is a figure showing the outline of the heat map service which is an example of this service. 本サービスの一例であるストーリー分岐サービスの概要と具体例を示す図である。It is a figure which shows the outline | summary of the story branching service which is an example of this service, and a specific example. 本サービスの一例であるストーリー分岐サービスの他の具体例を示す図である。It is a figure showing other examples of the story branch service which is an example of this service. 本サービスの一例であるマルチリンクサービスの概要を示す図である。It is a figure showing the outline of the multilink service which is an example of this service. 本サービスの一例である音声TIGサービスの概要を示す図である。It is a figure showing the outline of voice TIG service which is an example of this service. 本サービスの一例であるジェスチャーTIGサービスの概要を示す図である。It is a figure showing the outline of the gesture TIG service which is an example of this service. 本サービスの一例であるリアルタイムサジェストサービスの概要を示す図である。It is a figure showing the outline of the real-time suggestion service which is an example of this service. 本発明の一実施形態に係る情報処理システムの構成の一例を示す図である。1 is a diagram illustrating an example of a configuration of an information processing system according to an embodiment of the present invention. 図9の情報処理システムのうち管理サーバのハードウェア構成の一例を示すブロック図である。FIG. 10 is a block diagram illustrating an example of a hardware configuration of a management server in the information processing system of FIG. 9. 図10の管理サーバの機能的構成のうち、ヒートマップ処理、分岐処理、リンク処理、音声処理、ジェスチャー処理、及びサジェスト処理を実行するための機能的構成の一例を示す機能ブロック図である。FIG. 11 is a functional block diagram illustrating an example of a functional configuration for executing a heat map process, a branching process, a link process, a voice process, a gesture process, and a suggestion process among the functional configurations of the management server in FIG. 10. 
FIG. 12 is a diagram showing a specific example of the GUI (Graphical User Interface) functions of a viewer terminal in the information processing system of FIG. 9. FIG. 13 is a diagram showing a specific example of the GUI functions of the viewer terminal. FIG. 14 is a diagram showing a specific example of the GUI functions of the viewer terminal.
 Hereinafter, embodiments of the present invention will be described with reference to the drawings.
 In the following, the term "image" by itself includes both "moving images" and "still images".
 Further, "moving images" include images displayed by each of the following first to third processes.
 The first process is a process of displaying, for each movement of an object (for example, an animation character) in a planar image (2D image), a series of multiple still images while switching between them continuously as time passes. Specifically, for example, two-dimensional animation, that is, processing based on the principle of the flip book, corresponds to the first process.
 The second process is a process of setting in advance motions corresponding to each movement of an object (for example, an animation character) in a stereoscopic image (an image of a 3D model), and displaying those motions while changing them as time passes. Specifically, for example, three-dimensional animation corresponds to the second process.
 The third process is a process of preparing videos (that is, moving images) corresponding to each movement of an object (for example, an animation character), and playing those videos as time passes.
 Here, a "video (that is, a moving image)" is composed of images in units such as frames and fields (hereinafter referred to as "unit images"). In the following examples, the unit images are assumed to be frames.
 First, with reference to FIG. 1, an outline of the service realizable by the information processing system of FIG. 9 described later (hereinafter referred to as "the present service") will be described.
 FIG. 1 is a diagram showing an outline of an example of the present service realizable by the information processing system according to an embodiment of the present invention.
 The present service is an example of a service provided by a service provider (not shown) to persons who view moving images (hereinafter referred to as "viewers") and to persons who create those moving images and perform various settings on them (hereinafter referred to as "setters").
 Hereinafter, among the present services, services intended mainly for viewers are referred to as "viewer services", and services intended mainly for setters are referred to as "setter services".
 In the present service, an object displayed in a moving image can be managed with information about that object (hereinafter referred to as "object information") linked to it. This capability is provided as a setter service.
 Here, an "object" includes, in addition to regions such as images or characters representing persons or things appearing as subjects in a moving image, images (still images or moving images) such as telops displayed superimposed on the moving image. Things that cannot be seen in the moving image are also included among "objects". Specifically, for example, BGM (background music), music, and position information are all examples of "objects".
 "Object information" may include any information about an object, but in the present service, information about an object that cannot be obtained directly from the moving image being viewed is mainly treated as object information.
 While watching a moving image displayed on a terminal such as a smartphone (the viewer terminal 3 of FIG. 9 and the like, described later), the viewer performs an operation of designating a desired object from among the one or more objects displayed in the moving image.
 In this way, the viewer can easily acquire the object information linked to the object that the viewer has designated. This capability is provided as a viewer service.
 In the present service, it is assumed that moving images are distributed to viewers. In the present embodiment, it is also assumed that moving images are viewed using dedicated application software for viewing moving images (hereinafter referred to as the "viewing app"), installed on the viewer's terminal in advance.
 However, the method of viewing moving images is not particularly limited; for example, a method of viewing moving images using the browser function of the viewer's terminal may be adopted.
 Specifically, for example, FIG. 1 shows a specific example of a moving image L distributed to a viewer's terminal. The viewer can view the moving image L distributed to the terminal using the viewing app or the browser function.
 The moving image L shown in FIG. 1 depicts a female celebrity walking on a beach in Hawaii. The female celebrity depicted in the moving image L is an example of an object J. Besides the female celebrity, any kind of thing can correspond to an object J, such as her clothes (products), a hotel (facility), palm trees (flora and fauna), or the sea (natural objects).
 While viewing the moving image L, if the viewer becomes interested in, for example, the female celebrity appearing in the moving image L, the viewer performs an operation of designating the object J representing that celebrity.
 The object information of the object J includes, as information about the female celebrity, for example her name (stage name), the name of the production agency to which she belongs, and her published profile.
 In addition, the object information may include, for example, information about the clothes she is wearing, such as the brand name, the product name, and stores where they can be purchased (including EC (Electronic Commerce) sites).
 In this way, the viewer can obtain the object information of the object J that the viewer has designated at an arbitrary timing.
 Here, "arbitrary timing" is meant to include not only while viewing the moving image, but also while viewing a moving image other than that one, and timings after viewing of the moving image itself has ended.
 In the present service, for each of the one or more objects J in a moving image that can be designated by a viewer's operation, information for "providing" the object information to the viewer is linked to the object and managed. Such information, linked to an object J in order to "provide" its object information to the viewer, is hereinafter referred to as "linking information". This makes it possible to realize the viewer services described above.
 The viewer services are realized by linking information being linked in advance to each of the one or more objects J in a moving image that can be designated by a viewer's operation. The work of linking the linking information to the objects J is performed by a setter using the setter services.
 The "linking information" linked to an object includes any information for "providing" object information to the viewer. For example, in the case of the moving image L in FIG. 1, the following information can be adopted as the linking information of the object J representing a person (the female celebrity).
 That is, the URL (Uniform Resource Locator) of a web page on which the celebrity's name (stage name), the name of her production agency, her published profile, and so on (object information) are posted can be adopted as linking information. In addition, for example, the URL of a web page on which information about the clothes she is wearing, such as the brand name, the product name, and stores where they can be purchased (object information), is posted can also be adopted as linking information.
 Furthermore, adopting a URL as the "linking information" is merely an example.
 For example, various keywords related to the object J can also be adopted as linking information. In this case, the viewer's terminal, a server, or the like can perform a search on a predetermined search website using those keywords and provide the search results to the viewer as object information.
 Further, the number of pieces of linking information linked to one object J is not limited to one, and may be plural.
 For example, the URL regarded as authoritative in the present service can be adopted as first linking information. In addition to that URL, a URL separately extracted by the present service (for example, automatically displayed as a recommendation) can be adopted as second linking information. The viewer may then be allowed to select a desired one of the first linking information and the second linking information by operating the terminal. In this case, the website at the selected URL is provided to the viewer as the object information of the object J.
 In other words, the object information provided to the viewer for one object J is not limited to one item, and may be plural.
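 As a rough, non-authoritative illustration of the data model described above, one object J can carry several pieces of linking information, each being either a URL used as-is or keywords resolved through a search site. The names (`LinkingInfo`, `TaggedObject`, `resolve`) and the URLs are assumptions made for illustration, not part of the embodiment:

```python
from dataclasses import dataclass, field
from urllib.parse import quote_plus

@dataclass
class LinkingInfo:
    """One piece of linking information tied to an object J.

    kind is either "url" (a web page holding the object information)
    or "keyword" (terms to run through a search website).
    """
    kind: str
    value: str

    def resolve(self) -> str:
        # A keyword entry is turned into a search URL; a URL entry
        # is used as-is. The search site here is only a placeholder.
        if self.kind == "keyword":
            return "https://search.example.com/?q=" + quote_plus(self.value)
        return self.value

@dataclass
class TaggedObject:
    """An object J carrying one or more pieces of linking information."""
    name: str
    links: list = field(default_factory=list)

# One object J may carry several entries; the viewer picks one of them.
talent = TaggedObject("female celebrity", [
    LinkingInfo("url", "https://example.com/talent-profile"),  # first linking info
    LinkingInfo("keyword", "brand name product name"),         # second linking info
])
chosen = talent.links[1]
print(chosen.resolve())
```

 In this sketch, selecting the first or second linking information simply means indexing into `links`; the resolved URL is then what the terminal would open to present the object information.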
 By performing an "operation to use" the linking information associated with a given object J on the terminal, the viewer can be provided with the object information of that object J.
 For example, when the linking information is a URL, the "operation to use" that linking information refers to an "operation instructing access" to that URL. In this case, the website at that URL is displayed on the viewer's terminal, and the object information posted on that website is thereby provided to the viewer.
 Here, the timing at which the "operation to use" the linking information associated with a given object J is performed is not limited to while the moving image containing that object J is being viewed. For example, it may be any timing while another moving image is being viewed, or after viewing of the moving image has ended.
 So that the operation using the linking information associated with a given object J can be performed at any timing of the viewer's choosing, in the present service, when the operation of designating that object J is performed, the linking information linked to the object J is stored in a predetermined location (for example, a predetermined storage location within the viewing app).
 In this way, by using the viewer services, the viewer becomes able to perform, at any timing, an operation using desired linking information from among the one or more pieces of linking information stored in that predetermined location.
 Here, the viewer's "operation of designating an object J (operation of causing the linking information to be stored in the predetermined location)" specifically refers to an operation of designating a region in the moving image, called a "TIG area", in which the object J may exist.
 For example, the region indicated by the two-dot chain line in the moving image L of FIG. 1 is a TIG area A. When the viewer performs an "operation of designating the TIG area A", this operation becomes an "operation of designating the object J", and the object J existing in the TIG area A is designated.
 In the present service, as a viewer service, when a tap operation is performed on any position within the TIG area A in which the object J exists, this is accepted as an "operation of designating the object J".
 That is, by performing an operation of tapping any position within the TIG area A, the viewer designates the object J corresponding to that TIG area A and causes the linking information of that object J to be stored in the predetermined location.
 For example, in the example of FIG. 1, when any position within the TIG area A is tapped, the linking information of the object J corresponding to the TIG area A (for example, the URL of a web page on which the object information is posted) is stored in the predetermined location.
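 Although the embodiment does not specify an implementation, the designation of an object J by a tap can be pictured as a minimal sketch: the tap point is tested against rectangular TIG areas, and on a hit the linking information is stored in the predetermined location. All names here (`TigArea`, `on_tap`, the example URL) are hypothetical, and real TIG areas would additionally move from frame to frame of the moving image:

```python
from dataclasses import dataclass

@dataclass
class TigArea:
    """A rectangular TIG area in which an object J exists."""
    left: float
    top: float
    right: float
    bottom: float
    object_name: str
    linking_info: str  # e.g. URL of the page holding the object information

    def contains(self, x: float, y: float) -> bool:
        return self.left <= x <= self.right and self.top <= y <= self.bottom

def on_tap(x, y, areas, stock):
    """Treat a tap at (x, y) as designating the object J whose TIG
    area contains the point, and store its linking info in the stock."""
    for area in areas:
        if area.contains(x, y):
            stock.append(area.linking_info)
            return area.object_name
    return None  # no TIG area here: nothing is linked to this position

tig_stock = []  # the predetermined storage location (the "TIG stock")
areas = [TigArea(100, 50, 300, 400, "object J", "https://example.com/object-info")]
print(on_tap(150, 200, areas, tig_stock))  # a tap inside the TIG area
```

 A tap outside every TIG area returns `None` in this sketch, matching the behavior described later for objects to which no linking work has been applied.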
 Here, in the present service, a location called the "TIG stock" is adopted as the predetermined location in which the linking information is stored.
 That is, the linking information of the object J designated by the viewer's operation of tapping any position within the TIG area A is saved and managed in the "TIG stock" as stock information.
 The data format of the object information provided to the viewer does not depend on the data format of the corresponding object J; any format can be adopted, such as a text data format, a still image data format, or a moving image data format. The "TIG stock" can be understood as a stock page for that moving image, but it can also be extended to a common stock page spanning a plurality of moving images.
 Specifically, for example, although not shown, the viewer can display a list of the one or more pieces of object information saved in the "TIG stock" by pressing (tapping) a predetermined button displayed on the terminal.
 Note that, in a moving image, if no linking information is linked to the object J tapped by the viewer, the linking information cannot be provided to the viewer.
 That is, as described above, the present service is realized only after various settings have been made in advance by a setter, or made automatically, for the moving image displayed on the viewer's terminal.
 Specifically, only after the setter, using the setter services, sets a TIG area A for each of the one or more objects J displayed in the moving image and performs the work of linking the linking information to the TIG areas A (hereinafter referred to as the "linking work"), or after an AI (artificial intelligence) or the like on the system side performs the linking work automatically, can the linking information be provided to the viewer.
 For this reason, an object J on which the setter has not performed the linking work may be made unresponsive even if the viewer performs a tap operation on it, or an indication that no linking information is linked to it may be displayed on the terminal.
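 The linking work itself can be pictured as attaching, to a moving image, TIG-area records that are valid over a span of playback time; taps at times or positions not covered by any record then yield no linking information. The following sketch is illustrative only, with hypothetical names and values:

```python
from dataclasses import dataclass

@dataclass
class TigAreaSetting:
    """One piece of linking work: a TIG area valid over a time span
    of the moving image, with the linking information attached."""
    start: float   # seconds into the moving image
    end: float
    rect: tuple    # (left, top, right, bottom) on screen
    linking_info: str

def areas_at(settings, t):
    """TIG areas active at playback time t. Taps at times not covered
    by any setting (no linking work done) find no linking info."""
    return [s for s in settings if s.start <= t <= s.end]

settings = [
    TigAreaSetting(0.0, 12.0, (100, 50, 300, 400), "https://example.com/talent"),
    TigAreaSetting(5.0, 9.0, (320, 60, 420, 200), "https://example.com/clothes"),
]
print(len(areas_at(settings, 7.0)))   # both settings cover this time
print(len(areas_at(settings, 20.0)))  # no linking work covers this time
```

 Whether the records are authored by the setter C or produced automatically by an AI, the viewer-side lookup is the same.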
 As described above, according to the present service, the setter performs the linking work and the viewer taps any position within the TIG area A, making it possible for the viewer to easily obtain the object information of the target object J.
 In addition to the example of the base service described above, the present service also provides various services that further improve the convenience of viewers and setters.
 Specifically, a "heat map service", a "story branching service", a "multi-link service", a "voice TIG service", a "gesture TIG service", and a "real-time suggestion service" are provided as examples of the present service. An overview of each of these services is described below with reference to FIGS. 2 to 8.
 FIG. 2 is a diagram showing an outline of the heat map service, which is an example of the present service.
 The "heat map service" is a service that represents, as a heat map of tap counts, at which specific positions (coordinates), and to what extent, tap operations have been performed by viewers W on the screen of the viewer terminal 3, including the TIG areas.
 Here, the viewers W whose tap operations are counted may be all viewers W, or may be viewers W satisfying predetermined conditions (for example, gender, age group, and so on). A period can also be set, and the extent to which viewers W tapped during the set period can be represented as a heat map. This also makes it possible to compare tap counts between periods.
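 A minimal sketch of this aggregation, assuming taps are recorded with screen coordinates, a playback time, and viewer attributes (none of whose formats the embodiment fixes), could count taps per screen cell while applying an optional viewer condition and period:

```python
from collections import Counter

def aggregate_taps(taps, grid=20, condition=None, period=None):
    """Count taps per screen cell for a heat map.

    taps: iterable of dicts like
        {"x": 130, "y": 260, "time": 3.0, "gender": "F", "age": 24}
    condition: optional predicate restricting which viewers W are counted
    period: optional (start, end) restricting the counted time span
    """
    counts = Counter()
    for t in taps:
        if condition and not condition(t):
            continue
        if period and not (period[0] <= t["time"] <= period[1]):
            continue
        cell = (t["x"] // grid, t["y"] // grid)  # bucket coordinates into cells
        counts[cell] += 1
    return counts

taps = [
    {"x": 130, "y": 260, "time": 3.0, "gender": "F", "age": 24},
    {"x": 135, "y": 262, "time": 8.0, "gender": "M", "age": 31},
]
all_counts = aggregate_taps(taps)
female_only = aggregate_taps(taps, condition=lambda t: t["gender"] == "F")
print(all_counts, female_only)
```

 Running the same aggregation with two different `period` arguments yields the period comparison mentioned above.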
 Specifically, in the heat map service, stepwise thresholds are set on the number of tap operations (the tap count) on the TIG areas, a heat map of tap counts expressed by colors and color shading is generated, and it is displayed superimposed on the target moving image.
 Because the heat map is information indicating at which specific positions in the moving image tap operations were performed, the heat map service is mainly a setter service, used by the setter C who performs the linking work.
 By using the heat map service, the setter C can grasp at a glance which specific positions within the TIG areas where objects J exist were tapped frequently, or which positions were not tapped. The setter C can also grasp at a glance which positions outside the TIG areas were tapped.
 Based on the heat map, the setter C can thus, for example, correct the position of a TIG area A set in a moving image to a more suitable position. The setter C can also, for example, correct the size of an initially set TIG area. Further, for example, among objects J to which no object information has been linked so far, the setter C can newly perform the linking work on an object J that viewers W tapped frequently. As a result, the convenience of viewers can be improved.
 In the heat map service, the degree of emphasis of the heat map can be set. As described above, a heat map provided by the heat map service is given colors and color shading based on stepwise thresholds set on the tap counts of viewers W. The degree of emphasis of the heat map can be adjusted by changing these stepwise thresholds.
 This makes it possible to suitably generate heat maps both for a moving image with more than one million viewers W and for a moving image with around one hundred viewers W. That is, if five threshold levels are set, for a moving image with one million viewers W, the colors and shading can be applied with, for example, a first-level tap-count threshold of 200,000, a second-level threshold of 400,000, a third-level threshold of 600,000, a fourth-level threshold of 800,000, and a fifth-level threshold of 1,000,000.
 In contrast, for a moving image with around one hundred viewers W, the colors and shading can be applied with, for example, a first-level tap-count threshold of 20, a second-level threshold of 40, a third-level threshold of 60, a fourth-level threshold of 80, and a fifth-level threshold of 100.
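 The stepwise thresholds above can be sketched as a mapping from a tap count to a shading level. The `emphasized` helper below, which lowers every threshold as the emphasis setting rises, is an assumed scaling rule; the embodiment does not fix how the emphasis setting "+2" alters the thresholds:

```python
def tap_level(tap_count, thresholds):
    """Map a tap count to a shading level (0 = not shaded at all)."""
    level = 0
    for i, limit in enumerate(thresholds, start=1):
        if tap_count >= limit:
            level = i
    return level

def emphasized(thresholds, steps):
    """Raising the emphasis lowers every threshold, so that smaller
    tap counts are also shaded. The 0.75-per-step factor is assumed."""
    factor = 0.75 ** steps
    return [max(1, int(t * factor)) for t in thresholds]

# Five-level thresholds for a video with about a million viewers W ...
big_video = [200_000, 400_000, 600_000, 800_000, 1_000_000]
# ... and for a video with about a hundred viewers W.
small_video = [20, 40, 60, 80, 100]

print(tap_level(450_000, big_video))              # falls in the second band
print(tap_level(75, small_video))                 # falls in the third band
print(tap_level(75, emphasized(small_video, 2)))  # same count, stronger shading
```

 The same tap count thus lands in a higher shading band once the emphasis setting is raised, which is the behavior illustrated by the "standard" and "+2" heat maps of FIG. 2.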
 Specifically, for example, the upper left of FIG. 2 depicts a heat map whose emphasis setting is "standard", and the upper right of FIG. 2 depicts a heat map whose emphasis setting is "+2".
 In this case, the heat map with the emphasis setting "+2" (upper right of FIG. 2) is configured so that, compared with the heat map with the emphasis setting "standard" (upper left of FIG. 2), even smaller tap counts are displayed with colors and shading.
 This allows the setter C to display the heat map while changing the emphasis setting according to the tap counts.
 The lower left of FIG. 2 shows an example in which the TIG areas are displayed superimposed on the heat map.
 By displaying the TIG areas A superimposed on the heat map, as described above, the setter C can, for example, correct the position of a TIG area A set in a moving image to a more suitable position, or correct the size of an initially set TIG area.
 In the example shown in the lower left of FIG. 2, it can be seen that, of the TIG areas A1 and A2, many tap operations were performed particularly on the region protruding beyond the TIG area A1. This indicates that viewers W experienced the inconvenience of tapping and getting no response, or of being shown an indication that no object information was set.
 For this reason, the setter C, referring to the heat map, makes a correction such as, for example, extending the TIG area A1 downward. This can improve the convenience of viewers W.
 The lower right of FIG. 2 shows an example in which balloons are displayed superimposed on the heat map.
 Each of the balloons F1 and F2 shown in the lower right of FIG. 2 can be placed near the center of a TIG area. In this case, by displaying the balloons F1 and F2 superimposed on the heat map, the setter C can grasp the validity of the set positions of the TIG areas A.
 Further, because a balloon is displayed on the viewer terminal 3 superimposed on the moving image L, it serves as the target when a viewer W performs a tap operation to acquire object information. Therefore, when many taps occur at positions away from the balloon serving as the target, there is some reason for this, for example that the balloon is hard to see, or that something else is acting as a target. In other words, by displaying the heat map, the setter C can grasp the correspondence between the balloons and the positions actually tapped.
 This allows the setter C to, for example, correct the position of a TIG area A set in a moving image to a more suitable position, or correct the size of an initially set TIG area.
 Note that, as described above, the heat map service is provided mainly as a setter service, but it can also be provided as a viewer service.
 That is, by looking at the heat map, a viewer W can grasp what kinds of objects J other viewers W are interested in. An object J tapped by many viewers W can also give a viewer an incentive to try tapping it as well. The aggregated tap counts can also be put to various uses. Specifically, for example, when the moving image L is a so-called live broadcast of a live concert, the song with the most taps during the concert can be adopted as the final encore song of the concert. That is, the heat map can be understood not merely as a map of tapped positions, but also as a measure of excitement over the time span of the reproduced moving image L.
 FIG. 3 is a diagram showing an outline and a specific example of the story branching service, which is an example of the present service.
 ストーリー分岐サービスとは、再生される動画像の中に、オブジェクトJとして、動画像のストーリーを分岐させる選択肢を示す複数のボタンを選択可能に表示させるサービスである。
 従来、動画像のストーリーを選択するためのボタンは、動画像とは異なる場所に表示され、視聴者Wによるストーリーを選択する操作が行われると、対象となる動画像が再生される構成となっていた。
 これに対して、本サービスの一例であるストーリー分岐サービスでは、例えば図3の右側に示す分岐の構成によって、選択肢を示すボタンが動画像Lの中に表示される。
 即ち、図3の右側に示す分岐は、「泊まる」、「遊ぶ」、「食べる」、「浸かる」からなる第1階層の分岐と、第1階層の分岐で「遊ぶ」が選択された場合における、「春」、「夏」、「秋」、「冬」からなる第2階層の分岐と、第1階層の分岐で「食べる」が選択された場合における、「雰囲気重視」、「エンタメ重視」からなる第2階層の分岐とで構成される。
The story branching service is a service in which a plurality of buttons, each indicating an option for branching the story of a moving image, are displayed as objects J in the reproduced moving image in a selectable manner.
Conventionally, buttons for selecting the story of a moving image were displayed in a place separate from the moving image, and when the viewer W performed an operation of selecting a story, the corresponding moving image was reproduced.
In contrast, in the story branching service, which is an example of the present service, buttons indicating options are displayed within the moving image L, for example according to the branch configuration shown on the right side of FIG. 3.
That is, the branches shown on the right side of FIG. 3 consist of a first-level branch of "Stay", "Play", "Eat", and "Soak"; a second-level branch of "Spring", "Summer", "Autumn", and "Winter" when "Play" is selected at the first level; and a second-level branch of "Atmosphere-oriented" and "Entertainment-oriented" when "Eat" is selected at the first level.
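The branch hierarchy of FIG. 3 could be represented, for instance, as a simple nested mapping. This is an illustrative sketch only; the data shape and function name are assumptions, not taken from the specification.

```python
# Hypothetical node structure for the FIG. 3 branch hierarchy: each key is
# a button label, each value is the next level of options (None = leaf).
branch_tree = {
    "Stay": None,
    "Play": {"Spring": None, "Summer": None, "Autumn": None, "Winter": None},
    "Eat": {"Atmosphere-oriented": None, "Entertainment-oriented": None},
    "Soak": None,
}

def options_after(tree, *choices):
    """Return the selectable buttons shown after the given chain of choices."""
    node = tree
    for c in choices:
        node = node[c]
    return sorted(node) if node else []

assert options_after(branch_tree) == ["Eat", "Play", "Soak", "Stay"]
assert options_after(branch_tree, "Eat") == ["Atmosphere-oriented",
                                             "Entertainment-oriented"]
```

Because each node only points at its children, the same structure extends to any number of levels, matching the remark below that the story can branch over any number of layers.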
The upper left part of FIG. 3 shows an example of the top screen of a moving image created based on the branch configuration shown on the right side of FIG. 3.
That is, the top screen shown in the upper left part of FIG. 3 displays, together with a guide text reading "Please select a theme you want to see", an object J1 labeled "Stay", an object J2 labeled "Play", an object J3 labeled "Eat", and an object J4 labeled "Soak", each as a selectable button.
When the viewer W performs an operation (tap operation) of selecting the object J3 labeled "Eat" from among the objects J1 to J4, a moving image L (promotion video) with content reminiscent of "Eat" is reproduced. Thereafter, although not shown, a button labeled "Atmosphere-oriented" and a button labeled "Entertainment-oriented" are displayed so as to be selectable. Here, when the button labeled "Atmosphere-oriented" is tapped, for example, a moving image L (promotion video) with content reminiscent of "Atmosphere-oriented" is reproduced as an introduction. Thereafter, a moving image L (promotion video) of an atmosphere-oriented inn, themed on "Eat" and "Atmosphere-oriented" and showing, for example, local fish dishes served at an open hearth as in the lower left part of FIG. 3, is reproduced as object information. The moving image reproduced as this object information includes one or more objects J to which object information is linked. Then, when any position in a TIG area A where an object J exists is tapped, the object is saved in the area D indicating the "TIG stock" at the right end of the screen.
In the example shown in the lower left part of FIG. 3, when any position in the TIG area A21 where the local fish dish as the object J21 exists is tapped, the object is saved in the area D indicating the "TIG stock" at the right end of the screen.
As described above, according to the story branching service, a TIG area A is set for each object J even at a branch destination. Therefore, when any position in a TIG area A is tapped, the tapped object J is successively added to the "TIG stock". It is also possible to traverse all the options, returning to the top screen shown in the upper left part of FIG. 3 as many times as needed, and add objects to the "TIG stock" along the way.
In the example of FIG. 3, the story branches over two levels, but it can branch over any number of levels.
A configuration in which a two-way branch is repeated may also be adopted. In this case, while keeping a single main story, a substory can be reproduced as object information at each branch.
The story branching service can also be applied, for example, to a quiz moving image. In this case, for example, separate moving images for correct and incorrect answers can be reproduced as object information. When the story branching service is applied to a quiz moving image L, even if the viewer W fails to answer a quiz correctly, the viewer can retry the question that was missed. In this case, an option that was previously chosen and found incorrect can be displayed differently from the other options. Specifically, for example, although not shown, for a three-choice question with options "A", "B", and "C", if the viewer W selects "A" and gets it wrong, then on retrying the same question only "A" can be displayed in a lighter shade. This allows the viewer W to answer the quiz without selecting "A" again.
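The retry behaviour for the quiz example could be tracked with a small amount of state, for instance as below. This is a sketch under assumptions: the class and field names are illustrative and not part of the specification.

```python
class QuizRetry:
    """Track previously wrong answers so they can be dimmed on re-challenge,
    as in the three-choice example above."""

    def __init__(self, choices):
        self.choices = choices
        self.wrong = set()

    def answer(self, choice, correct):
        """Record the attempt; return True only for a correct answer."""
        if choice != correct:
            self.wrong.add(choice)
            return False
        return True

    def render(self):
        # Dimmed options remain on screen but are visually de-emphasised.
        return [(c, "dimmed" if c in self.wrong else "normal")
                for c in self.choices]

q = QuizRetry(["A", "B", "C"])
assert q.answer("A", correct="B") is False
assert q.render() == [("A", "dimmed"), ("B", "normal"), ("C", "normal")]
```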
FIG. 4 is a diagram showing another specific example of the story branching service, which is an example of the present service.
According to the branch configuration shown in the upper part of FIG. 4, buttons indicating options are displayed in the moving image L.
That is, the branches shown in the upper part of FIG. 4 are configured so that the branch of "Watch again", "Watch next", and "Watch only how to make it" is repeated.
The lower part of FIG. 4 shows an example of the top screen of a moving image L, created based on the branch configuration shown in the upper part of FIG. 4, that shows the steps of a children's craft project.
That is, the top screen shown in the lower part of FIG. 4 displays, together with a guide text reading "Press the button you like!", an object J31 labeled "Watch next", an object J32 labeled "Watch only how to make it", and an object J33 labeled "Watch again", each in a selectable manner.
Of these, when the object J33 labeled "Watch again" is tapped, the moving image L showing the craft steps that was reproduced once is reproduced again as object information.
When the object J31 labeled "Watch next" is tapped, the craft proceeds to the next step, and a moving image L showing that step is reproduced as object information.
When the object J32 labeled "Watch only how to make it" is tapped, a moving image L showing only the craft steps is reproduced as object information.
FIG. 5 is a diagram showing an outline of a multilink service, which is an example of the present service.
The multilink service is a service for adding link information.
According to the multilink service, the linking information can be additionally linked to the object J added to the “TIG stock”. The multilink service is a service for setters.
The upper part of FIG. 5 shows a screen for setting the linking information of an object J41. As shown in the upper part of FIG. 5, the setter C can additionally set pieces of linking information H1 to H3 for the object J41.
The lower part of FIG. 5 shows an example in which the additionally set linking information is displayed on the viewer terminal 3.
This allows the viewer W to freely choose, for the object J41, between purchasing it on an EC site and purchasing it at a physical store, for example. Beyond that, the object information can be used for various purposes, such as simply browsing the EC site, viewing detailed information about the store, or checking the store's location.
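The one-to-many relationship between a stocked object and its pieces of linking information (H1 to H3 in FIG. 5) could be modelled, for example, as follows. The field names and URLs are illustrative assumptions only.

```python
# Sketch of the multilink data model: one object id maps to a list of links.
object_links = {}

def add_link(object_id, label, url):
    """Called on the setter side to attach an additional link to an object."""
    object_links.setdefault(object_id, []).append({"label": label, "url": url})

add_link("J41", "Buy on EC site", "https://example.com/shop/J41")
add_link("J41", "Store details", "https://example.com/stores/123")
add_link("J41", "Store location", "https://example.com/stores/123/map")

assert len(object_links["J41"]) == 3
assert object_links["J41"][0]["label"] == "Buy on EC site"
```

On the viewer side, the list for the tapped object would simply be rendered as the selectable entries shown in the lower part of FIG. 5.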
FIG. 6 is a diagram showing an outline of a voice TIG service, which is an example of the present service.
The voice TIG service is a service that recognizes the voice of the viewer W.
According to the voice TIG service, the viewer W can operate the service by voice even when both hands are occupied and finger operation is impossible.
Specifically, for example, as shown in FIG. 6, when assembling a model or the like while watching a moving image L of the assembly manual, the viewer W can assemble the model efficiently using both hands. In this case, operations on the viewer terminal 3 are performed by speaking instructions into the microphone I.
This makes it possible, based on the viewer W's voice operations, to acquire, for example, a more detailed assembly manual as object information for the model.
Based on the viewer W's voice operations, an actual-size image of a part can also be acquired and displayed as object information for some parts of the model.
Operations such as play, stop, 10-second skip forward, and 10-second skip back can be performed on the moving image L.
As shown in FIG. 6, the GUI is provided with an ON/OFF switch button G for the microphone I. This prevents the microphone I from picking up sounds that are not needed for operation.
In the example of FIG. 6, the viewer terminal 3 and the microphone I are connected by wire, but this is not a limitation; for example, they may be connected wirelessly via Bluetooth (registered trademark) or the like, or the microphone I may be built into the viewer terminal 3.
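The playback commands listed above (play, stop, 10-second skip) could be dispatched as in the sketch below. Actual speech recognition is out of scope here; the sketch assumes a recognizer that already yields a text command, and the command strings are illustrative.

```python
def apply_voice_command(position, command, duration):
    """Return the new (position, state) for a recognized voice command."""
    if command == "play":
        return position, "playing"
    if command == "stop":
        return position, "stopped"
    if command == "forward 10":
        return min(position + 10, duration), "playing"  # clamp at the end
    if command == "back 10":
        return max(position - 10, 0), "playing"         # clamp at the start
    raise ValueError(f"unrecognized command: {command}")

pos, state = apply_voice_command(5, "back 10", duration=120)
assert (pos, state) == (0, "playing")
pos, state = apply_voice_command(115, "forward 10", duration=120)
assert (pos, state) == (120, "playing")
```

The microphone ON/OFF button G described above would simply gate whether recognized text reaches this dispatch step at all.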
FIG. 7 is a diagram showing an outline of a gesture TIG service, which is an example of the present service.
The gesture TIG service is a service that allows the viewer W to acquire object information simply by making a gesture while viewing moving image data displayed on the viewer terminal 3.
Specifically, for example, the viewer W can perform an operation of selecting an object J, or access the object information of a selected object J, with a gesture of clenching the fist or a gesture of opening the hand.
As shown at the left end of FIG. 7, gestures are typically made toward the camera of the viewer terminal 3; however, as shown in the center of FIG. 7, the content may also be distributed simultaneously to a television V and the viewer terminal 3 or the like. In this case, the viewer W selects objects by gesture while watching the television V.
Gestures can be detected by a web camera mounted on the viewer terminal 3 or by various kinds of sensors.
As shown at the right end of FIG. 7, the gesture TIG service can also be provided on a signage S at a storefront. In this case, for example, an operation of selecting and purchasing a product appearing in the moving image L can be performed by gesture, and the product can be received on the spot and taken home.
FIG. 8 is a diagram showing an outline of a real-time suggest service, which is an example of the present service.
The real-time suggest service is a service that displays advertisements timely for the viewer W on the moving image L.
That is, in the present service, linking information is basically linked in advance (or automatically) to every object J appearing in the moving image L. However, when viewers W with different tastes and preferences view the same moving image L, the objects J they find interesting usually differ from viewer to viewer. For this reason, the real-time suggest service can vary which TIG areas are activated according to the viewer W's tastes and preferences.
Specifically, for example, as shown on the left side of FIG. 8, assume a case where a certain viewer W has frequently performed operations designating a car as the object J by the time the first half of the moving image L has been viewed. In this case, for the second half of the moving image L viewed by that viewer W, only the TIG area A of the car (or primarily the TIG area A of the car) can be set (or automatically set) to be activated, as shown in the upper right part of FIG. 8.
That is, each operation by which the viewer W designates an object J is recorded as a track record together with the content of the designated object J, and becomes a subject of analysis.
As a result, the viewer W's operation tendencies are grasped, and the TIG areas to be displayed can be changed at any time according to those tendencies.
Furthermore, as shown in the lower right part of FIG. 8, for a viewer W who favors acquiring information about cars, a car advertisement K can be displayed as a wipe within the moving image L.
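The analysis step described above — deciding which TIG areas to keep active from the viewer's tap history — could be sketched as follows. Category names, the "top-N" rule, and the function name are assumptions for illustration, not part of the specification.

```python
from collections import Counter

def active_tig_categories(tap_history, top_n=1):
    """Return the object categories whose TIG areas stay activated for the
    rest of the moving image, based on what the viewer has tapped so far."""
    counts = Counter(tap_history)
    return [category for category, _count in counts.most_common(top_n)]

# A viewer who mostly tapped cars in the first half of the video:
history = ["car", "car", "watch", "car", "shoes"]
assert active_tig_categories(history) == ["car"]
```

The same tally could also drive the wipe advertisement K: when one category dominates the history, an advertisement for that category is overlaid on the moving image L.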
Next, the configuration of an information system for realizing the present service will be described.
FIG. 9 is a diagram illustrating an example of a configuration of an information processing system according to an embodiment of the present invention.
The information processing system shown in FIG. 9 is configured to include a management server 1, a setter terminal 2, a viewer terminal 3, and an external server 4.
Each of the management server 1, the setter terminal 2, the viewer terminal 3, and the external server 4 is mutually connected via a predetermined network N such as the Internet.
The management server 1 is an information processing device managed by a service provider (not shown). The management server 1 executes various processes for realizing the present service while communicating as appropriate with the setter terminal 2 and the viewer terminal 3.
The setter terminal 2 is an information processing device operated by the setter C, and is composed of, for example, a personal computer, a smartphone, or a tablet.
The viewer terminal 3 is an information processing device operated by the viewer W, and is composed of, for example, a personal computer, a smartphone, or a tablet.
The external server 4 manages the object information that can be provided to viewers via the linking information; for example, when the linking information is a URL, it manages the various websites existing at that URL (websites on which object information is posted).
FIG. 10 is a block diagram showing an example of the hardware configuration of the management server in the information processing system of FIG. 9.
The management server 1 includes a CPU (Central Processing Unit) 11, a ROM (Read Only Memory) 12, a RAM (Random Access Memory) 13, a bus 14, an input/output interface 15, an input unit 16, an output unit 17, a storage unit 18, a communication unit 19, and a drive 20.
The CPU 11 executes various processes according to a program recorded in the ROM 12 or a program loaded from the storage unit 18 into the RAM 13.
The RAM 13 also stores data and the like necessary for the CPU 11 to execute various processes.
The CPU 11, the ROM 12, and the RAM 13 are connected to one another via the bus 14. The input/output interface 15 is also connected to the bus 14. The input unit 16, the output unit 17, the storage unit 18, the communication unit 19, and the drive 20 are connected to the input/output interface 15.
The input unit 16 is composed of, for example, a keyboard, and inputs various information.
The output unit 17 is composed of a display such as a liquid crystal display, a speaker, and the like, and outputs various information as images and sounds.
The storage unit 18 is composed of a DRAM (Dynamic Random Access Memory) or the like, and stores various data.
The communication unit 19 communicates with other devices (for example, the setter terminal 2, the viewer terminal 3, and the external server 4 of FIG. 9) via the network N, which includes the Internet.
The drive 20 is appropriately equipped with a removable medium 30 made of a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like. The program read from the removable medium 30 by the drive 20 is installed in the storage unit 18 as needed.
Further, the removable medium 30 can also store various data stored in the storage unit 18 in the same manner as the storage unit 18.
Although not shown, the setter terminal 2, the viewer terminal 3, and the external server 4 of FIG. 9 can each have basically the same configuration as the hardware configuration shown in FIG. 10. Descriptions of their hardware configurations are therefore omitted.
Through the cooperation of the various hardware and software of the management server 1 of FIG. 10, the management server 1 can execute various processes, including heat map processing, branch processing, link processing, voice processing, gesture processing, and suggest processing. As a result, the service provider (not shown) can provide the various services described below in addition to the base service described above.
“Heat map processing” refers to processing for realizing the above-described heat map service.
"Branch processing" refers to processing for realizing the above-described story branch service.
"Link processing" refers to processing for realizing the above-described multilink service.
“Audio processing” refers to processing for realizing the audio TIG service described above.
“Gesture processing” refers to processing for realizing the above-described gesture TIG service.
“Suggest processing” refers to processing for realizing the above-described real-time suggest service.
A functional configuration for executing the heat map processing, branch processing, link processing, voice processing, gesture processing, and suggest processing, whose execution is controlled in the management server 1, will now be described.
FIG. 11 is a functional block diagram showing an example of a functional configuration, among the functional configurations of the management server of FIG. 10, for executing the heat map processing, branch processing, link processing, voice processing, gesture processing, and suggest processing.
As shown in FIG. 11, in the CPU 11 of the management server 1, when the execution of the heat map processing is controlled, a linking information management unit 101, a moving image presentation control unit 102, a designation reception unit 103, an acquisition unit 104, a provision unit 105, and a heat map unit 106 function.
When the execution of the branch processing is controlled, the linking information management unit 101, the moving image presentation control unit 102, the designation reception unit 103, the acquisition unit 104, the provision unit 105, and a branch generation unit 107 function in the CPU 11.
When the execution of the link processing is controlled, the linking information management unit 101, the moving image presentation control unit 102, the designation reception unit 103, the acquisition unit 104, and the provision unit 105 function in the CPU 11.
When the execution of the voice processing is controlled, the linking information management unit 101, the moving image presentation control unit 102, the designation reception unit 103, the acquisition unit 104, the provision unit 105, and a voice recognition unit 108 function in the CPU 11.
When the execution of the gesture processing is controlled, the linking information management unit 101, the moving image presentation control unit 102, the designation reception unit 103, the acquisition unit 104, the provision unit 105, and a gesture recognition unit 109 function in the CPU 11.
When the execution of the suggest processing is controlled, the linking information management unit 101, the moving image presentation control unit 102, the designation reception unit 103, the acquisition unit 104, the provision unit 105, and a suggest control unit 110 function in the CPU 11.
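The pattern above — five shared units plus at most one service-specific unit per process — can be summarized in a small table. The unit names below abbreviate the ones in the description for illustration only.

```python
# Shared functional units (reference numerals 101-105) active in every process.
COMMON = ["linking info mgmt 101", "presentation 102", "designation 103",
          "acquisition 104", "provision 105"]

# Service-specific unit per process; link processing uses only the common units.
SPECIFIC = {
    "heat map": "heat map unit 106",
    "branch": "branch generation unit 107",
    "link": None,
    "voice": "voice recognition unit 108",
    "gesture": "gesture recognition unit 109",
    "suggest": "suggest control unit 110",
}

def units_for(process):
    """List the functional units that operate when the given process runs."""
    extra = SPECIFIC[process]
    return COMMON + ([extra] if extra else [])

assert len(units_for("link")) == 5
assert units_for("voice")[-1] == "voice recognition unit 108"
```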
The linking information management unit 101 manages, for an object J existing in a TIG area A that can be designated by the viewer W as a first user in the displayed moving image L, one or more pieces of linking information for providing object information to the viewer W, linking them to the object before any designation by the viewer W is made.
Specifically, the linking information management unit 101 performs the following processing before an operation (for example, a tap operation) for designating an object J displayed in the moving image L is performed. That is, the linking information management unit 101 manages one or more pieces of linking information by linking them to each of the one or more objects J stored and managed in the object DB 181 as objects J that can be displayed in the moving image L.
The moving image presentation control unit 102 executes control for presenting the moving image L to the viewer W.
Specifically, the moving image presentation control unit 102 executes control for displaying the moving image L on the viewer terminal 3 of the viewer W.
When a viewer to whom a moving image is presented performs a predetermined operation (for example, a tap operation) on a TIG area, the designation reception unit 103 recognizes that the object J existing in that TIG area has been designated, and accepts the designation.
The acquisition unit 104 acquires the association information managed in association with the object J for which the specification has been accepted.
Specifically, for example, the acquisition unit 104 acquires the association information managed in association with the object J for which the specification by the viewer W has been accepted.
The provision unit 105 executes control for providing object information to the viewer terminal 3 based on the linking information acquired by the acquisition unit 104.
The heat map unit 106 executes control for presenting, to the viewer terminal 3, a moving image L in which one or more objects J can be displayed.
Specifically, for example, it executes control for presenting the moving image L of FIG. 1 to the viewer terminal 3.
The branch generation unit 107 generates information indicating a branch point at which the moving image branches into a plurality of stories.
Specifically, for example, as shown in FIGS. 3 and 4, the branch generation unit 107 generates information indicating a branch point at which the moving image L branches into a plurality of stories.
The voice recognition unit 108 recognizes the voice of the viewer W.
Specifically, for example, as shown in FIG. 6, the voice recognition unit 108 recognizes the voice of the viewer W input using the microphone I.
The gesture recognition unit 109 executes control for recognizing the gesture of the viewer W.
Specifically, for example, the gesture recognition unit 109 executes control for recognizing the gesture of the viewer W using a proximity sensor or the like mounted on the viewer terminal 3.
The suggestion control unit 110 executes a control for displaying a predetermined advertisement on the moving image L based on the operation results of the viewer W.
Specifically, for example, as illustrated in FIG. 8, the suggestion control unit 110 executes control for displaying the advertisement K on the moving image L based on the operation results of the viewer W.
Next, specific examples of the GUI functions of the viewer terminal 3 will be described with reference to FIGS. 12 to 14.
The upper part of FIG. 12 shows two specific examples of the batch TIG function.
The "batch TIG function" is a function whereby, when a "predetermined operation" is performed at a "predetermined timing" during reproduction of the moving image L, all the objects J in the frame (still image) displayed at that timing are saved to the "TIG stock" at once (hereinafter referred to as "batch TIG").
Here, the content of the "predetermined operation" is not particularly limited. For example, in the example shown on the upper left of FIG. 12, the "predetermined operation" at the "predetermined timing" during reproduction of the moving image is an operation of tapping a button B1 for executing batch TIG. Then, all the objects J1 to J3 present in the frame (still image) displayed at the "predetermined timing" are batch-TIGed to the area D at the right end of the screen indicating the "TIG stock".
In the example shown on the upper right of FIG. 12, the "predetermined operation" at the "predetermined timing" during reproduction of the moving image is an operation of swiping with two of the viewer W's fingers toward the area D at the right end of the screen. Then, all the objects J1 to J3 present in the frame (still image) displayed at the "predetermined timing" are batch-TIGed to the area D at the right end of the screen indicating the "TIG stock".
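The batch TIG step itself reduces to collecting every object in the currently displayed frame into the stock, which could be sketched as below. The data shapes and function name are assumptions for illustration.

```python
def batch_tig(frame_objects, stock):
    """On the 'predetermined operation' (button B1 tap or two-finger swipe),
    move every object in the currently displayed frame into the TIG stock,
    skipping objects that are already stocked."""
    for obj in frame_objects:
        if obj not in stock:
            stock.append(obj)
    return stock

stock = ["J0"]                      # an object stocked earlier by a normal tap
batch_tig(["J1", "J2", "J3"], stock)
assert stock == ["J0", "J1", "J2", "J3"]
```

Either trigger (the button B1 tap or the two-finger swipe) would invoke the same collection step with the objects of the frame shown at that timing.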
The lower part of FIG. 12 shows a specific example of the similar-information presentation function.
The "similar-information presentation function" is a function of providing the viewer W with information on things similar to an object J displayed in the moving image (hereinafter referred to as "similar-item information").
Specifically, for example, as shown in the lower part of FIG. 12, when the object J1 appearing in the moving image is a "crew-neck long-sleeved shirt", an operation of flicking the object J1 toward the area E at the left edge of the screen is performed. The object J1 is then saved in the area E at the left edge of the screen.
Thereafter, as shown in the lower part of FIG. 12, when a tap operation is performed on the object J1 saved in the area E, the similar-item information for the object J1 is displayed. Specifically, for example, information on a "crew-neck long-sleeved shirt" of a brand different from that of the object J1 is displayed as similar-item information. Thus, even when the object J1 appearing in the moving image is something the viewer cannot easily obtain, such as a rare or expensive item, the viewer can obtain information for acquiring a "similar item".
On the other hand, when the viewer wants to acquire not the similar-item information but the object information of an object J appearing in the moving image L, an operation of flicking the object J2 toward the area D at the right edge of the screen is performed. The object J2 is then saved in the area D at the right edge of the screen.
Thereafter, as shown in the lower part of FIG. 12, when a tap operation is performed on the object J2 saved in the area D, the object information of the object J2 is displayed.
In this way, by distinguishing the method of acquiring information on things similar to an object J from the method of acquiring the object information of the object J itself, the convenience of the viewer W can be improved.
The similar-information presentation function shown in the lower part of FIG. 12 is merely one example of a function that saves an object J separately to the area E or the area D.
That is, this service basically allows an object J to be saved easily to the "TIG stock" simply by tapping it. However, as shown in the lower part of FIG. 12, for example, a plurality of areas representing saving to the "TIG stock" may be provided so that the viewer W can decide in which of them to save.
This allows part of the categorization performed within the "TIG stock" to be carried out already at the stage where the viewer W performs the operation of selecting the object J.
As a result, besides the example shown in the lower part of FIG. 12 (the similar-information presentation function), other divisions are possible; for example, an object J to be purchased immediately may be saved in the area D, while an object J merely placed in the cart for the time being may be saved in the area E.
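The routing of a flicked object to one of several stock areas can be sketched as follows. The direction-to-area mapping shown is the hypothetical one from the example above (left flick to area E, right flick to area D); any number of areas with any semantics could be configured:

```python
def route_flick(obj_id: str, direction: str, stocks: dict) -> None:
    """Save the flicked object into the stock area matching the flick direction.

    Hypothetical mapping: a left flick targets area E (similar-item information),
    a right flick targets area D (the object's own information).
    """
    area = {"left": "E", "right": "D"}[direction]
    stocks[area].append(obj_id)

stocks = {"D": [], "E": []}
route_flick("J1", "left", stocks)   # viewer wants similar-item information for J1
route_flick("J2", "right", stocks)  # viewer wants the object information of J2 itself
```

Because the area is chosen at flick time, the viewer performs part of the categorization before anything reaches the "TIG stock".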
The upper part of FIG. 13 shows a specific example of the screenshot function.
As described with reference to the upper part of FIG. 12, when the viewer W performs an operation of tapping the button B1 during reproduction of a moving image, all of the objects J1 to J3 included in that frame (still image) are saved to the "TIG stock". At the same time, the saved frames (still images) are displayed as a "screenshot list", for example as shown in the upper part of FIG. 13.
Here, when one or more objects J included in a saved frame (still image) have linking information linked to them, the viewer W can acquire the corresponding object information by the following operation.
That is, the viewer W can acquire the object information of an object J by tapping any position within the TIG area A of any of the one or more objects J included in that frame (still image).
Specifically, in the example in the upper part of FIG. 13, when an operation of tapping any position within any of the TIG areas A1 to A3 of the objects J1 to J3 is performed, the object information of the corresponding one of the objects J1 to J3 can be acquired.
Thus, when the viewer W finds an interesting scene during reproduction of a moving image, for example, the viewer W can save the frame (still image) for the time being using the screenshot function and take time to acquire the object information later.
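Resolving a tap on a saved frame to an object amounts to a hit test against the TIG areas of that frame. A minimal sketch, assuming each TIG area is an axis-aligned rectangle (the coordinates below are invented for illustration):

```python
from typing import Optional

# Each TIG area is a rectangle: (x_min, y_min, x_max, y_max), in frame coordinates.
tig_areas = {
    "J1": (10, 10, 60, 120),
    "J2": (80, 40, 140, 90),
    "J3": (150, 5, 200, 200),
}

def hit_test(x: float, y: float) -> Optional[str]:
    """Return the object whose TIG area contains the tapped point, if any."""
    for obj, (x0, y0, x1, y1) in tig_areas.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return obj
    return None

assert hit_test(30, 50) == "J1"    # tap inside A1 retrieves J1's object information
assert hit_test(300, 300) is None  # tap outside all TIG areas does nothing
```

The same test applies during live playback, except that the set of TIG areas then depends on the current frame.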
The lower part of FIG. 13 shows a specific example of the pinch-in function.
As shown in the lower part of FIG. 13, by performing a pinch-in operation on the screen M, the viewer W can display persons, music, locations, a controller, and the like outside the screen M.
Although not shown, when a pinch-in operation cannot be performed, for example because the viewer terminal 3 is a personal computer or the like, a separate button may be provided so that a function corresponding to the pinch-in function can be executed.
The upper part of FIG. 14 shows a specific example of the simple translation function.
The "simple translation function" is a function that displays a simple translation of a word when the viewer taps one of the one or more words contained in the text of a telop or caption displayed during reproduction of the moving image.
The one or more words contained in the characters or text of a telop or caption displayed during reproduction of the moving image can also be regarded as objects J. Therefore, similarly to the GUI shown in the upper part of FIG. 12 described above, a word as an object J can be saved to the "TIG stock" represented by the area D by tapping it.
Furthermore, as shown in the upper part of FIG. 14, a word saved as an object J in the "TIG stock" (area D) is recorded in a vocabulary book available to each viewer W. When the viewer W taps an icon representing a word recorded in the vocabulary book or in the "TIG stock" (area D), the correct pronunciation of that word is played back as audio. Then, when the viewer W pronounces the word, the accuracy of the viewer W's pronunciation is analyzed, and an accuracy rate indicating the degree of correctness of the pronunciation can also be displayed.
The lower part of FIG. 14 shows a specific example of the TIG stock.
The "TIG stock" described above can be displayed simply as a flat list of objects J, but, as shown in the lower part of FIG. 14, for example, it can also be displayed in a form categorized into one or more books. The work of categorizing the objects J saved in the "TIG stock" can be performed manually by the setter C, or can be performed automatically using techniques such as AI (artificial intelligence).
This allows the viewer W, when trying to acquire the object information of an object J after viewing the moving image, to quickly find the object information of the desired object J.
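The grouping of stocked objects into books can be sketched as a simple aggregation over category labels. Whether the label comes from manual tagging by the setter C or from an automatic classifier is immaterial to this sketch; here it is just a field on each stocked item:

```python
from collections import defaultdict

def categorize(stock: list) -> dict:
    """Group stocked objects into 'books' keyed by a category label."""
    books = defaultdict(list)
    for item in stock:
        books[item["category"]].append(item["name"])
    return dict(books)

stock = [
    {"name": "J1", "category": "clothing"},
    {"name": "J2", "category": "music"},
    {"name": "J3", "category": "clothing"},
]
books = categorize(stock)  # e.g. a "clothing" book and a "music" book
```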
One embodiment of the present invention has been described above. However, the present invention is not limited to the above-described embodiment, and modifications, improvements, and the like within a scope in which the object of the present invention can be achieved are included in the present invention.
For example, what is specified as an object J in the above-described embodiment is merely an example.
That is, the moving image L contains countless candidates that can correspond to objects J. Therefore, as described above, linking linking information in advance to as many objects J as possible can further improve the convenience of the viewer W.
In the above-described embodiment, when a designation is made by the viewer W, the linking information of the object J is saved in the "TIG stock" as stock information, but this is merely an example. The linking information of an object J designated by the viewer W may alternatively not be saved as stock information.
For example, FIG. 2 depicts only one setter C, but this is merely an example, and a plurality of setters C may exist.
The shape of a TIG area is not limited to a quadrangle, and may also be a free-form rectangle.
The moving images L shown in FIGS. 1 to 8 are merely examples and may have other configurations.
According to the present invention, in addition to the services and effects described above, the following services and effects can also be realized.
That is, the viewer W can, while operating the viewer terminal 3, obtain information on the spot about, or purchase, clothes and accessories worn by a person appearing in the moving image L (for example, the female talent in FIG. 1).
The extent to which the viewer W has performed tap operations on a TIG area A can also be fed back to the setter C or to the provider of the moving image L.
Since the degree to which TIG areas A are set in a moving image L is expected to vary with the ability of the setter C, an increase in the number of setters C working as highly skilled professionals can be expected.
Conventional sponsors have used CM (commercials) as an advertising medium, but an increase in the number of sponsors using moving images L as an advertising medium can be expected.
The viewer W may also be allowed to set TIG areas A in a moving image as a setter C. That is, the viewer W may be allowed to use the setting application described above.
Furthermore, the viewer W may be allowed to distribute, by himself or herself, a moving image L in which TIG areas A have been set.
When the TIG area A corresponding to one object J is tapped, a plurality of link destinations (jump destinations) may be made selectable.
The "TIG stock" can also be customized. In addition, the effect of an operation for designating an object J can be varied according to the specific mode of that operation, for example the angle of the operation, the duration of the tap, or a double tap. More specifically, when first linking information and second linking information are associated with an object J, the first linking information may be saved when a deep pressing operation is performed, while the second linking information may be saved when the object is touched twice (double-tapped).
The viewer W may also be allowed to stock an object J by means of eye movements.
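The gesture-dependent saving behavior mentioned above (deep press versus double tap selecting different pieces of linking information) can be sketched as a simple lookup. The mapping and the URLs are hypothetical, purely for illustration:

```python
def info_for_gesture(obj: dict, gesture: str) -> str:
    """Pick which piece of linking information to save, based on the gesture.

    Hypothetical mapping from the description: a deep press selects the first
    linking information, a double tap selects the second.
    """
    mapping = {"deep_press": "first_link", "double_tap": "second_link"}
    return obj[mapping[gesture]]

obj_j = {
    "first_link": "https://example.com/buy",     # e.g. a purchase page
    "second_link": "https://example.com/detail", # e.g. a detail page
}
```

Other operation attributes (operation angle, tap duration, and so on) would simply add keys to the same mapping.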
The system configuration shown in FIG. 9 and the hardware configuration of the management server 1 shown in FIG. 10 are merely examples for achieving the object of the present invention and are not particularly limited.
The functional block diagram shown in FIG. 11 is merely an example and is not particularly limited. That is, it suffices for the information processing system to have functions capable of executing the series of processes described above as a whole, and the functional blocks used to realize those functions are not particularly limited to the example of FIG. 11.
The locations of the functional blocks are also not limited to those in FIG. 11 and may be arbitrary.
For example, in the example of FIG. 11, the functional blocks necessary for executing the linking-information provision process and the linking-information setting support process are provided on the management server 1 side, but this is merely an example. For example, by installing a dedicated application on the setter terminal 2 or the viewer terminal 3, at least some of these functional blocks may be provided on the setter terminal 2 or viewer terminal 3 side.
Furthermore, one functional block may be configured by hardware alone, by software alone, or by a combination thereof.
When the processing of each functional block is executed by software, a program constituting that software is installed on a computer or the like from a network or a recording medium.
The computer may be a computer embedded in dedicated hardware. Alternatively, the computer may be a computer capable of executing various functions by installing various programs, for example a server, a general-purpose smartphone, or a personal computer.
A recording medium containing such a program is configured not only by removable media distributed separately from the apparatus main body in order to provide the program to each user, but also by a recording medium or the like provided to each user in a state of being pre-installed in the apparatus main body.
In this specification, the steps describing the program recorded on the recording medium include not only processes performed chronologically in the stated order, but also processes that are not necessarily processed chronologically and are executed in parallel or individually.
In this specification, the term "system" means an overall apparatus composed of a plurality of devices, a plurality of means, and the like.
In summary, it suffices for an information processing system to which the present invention is applied to have the following configuration, and various embodiments can be adopted.
That is, an information processing system to which the present invention is applied includes:
a management means (for example, the linking information management unit 101 in FIG. 11) that, for an object (for example, the object J1 representing the female talent in FIG. 1) present in a predetermined area (for example, the TIG area A1 in FIG. 1) that is designated by a first user (for example, the viewer W in FIG. 9) in a displayed moving image (for example, the moving image L in FIG. 1), links information for providing predetermined information (for example, object information) to the first user before that designation is made (for example, at a timing before the moving image is displayed on the viewer terminal 3), and manages that information as linking information;
a presentation control means (for example, the moving image presentation control unit 102 in FIG. 11) that executes control for presenting the moving image to the first user;
a first accepting means (for example, the designation accepting unit 103 in FIG. 11) that, when the first user to whom the moving image is presented (for example, displayed on the viewer terminal 3 in FIG. 2 and the like) performs a predetermined operation (for example, a tap operation) on the predetermined area, recognizes that the designation of the object present in that predetermined area has been made and accepts the designation;
an acquisition means (for example, the acquisition unit 104 in FIG. 11) that acquires the linking information managed in association with the object whose designation has been accepted; and
a provision control means (for example, the provision unit 105 in FIG. 11) that executes control for providing the predetermined information to the first user (for example, displaying it on the viewer terminal 3 in FIG. 2 and the like) based on the linking information acquired by the acquisition means.
Thus, the management means manages linking information linked to objects, and the presentation control means presents the moving image to the viewer. The first accepting means then accepts the viewer's designation of an object, the acquisition means acquires the linking information linked to that object, and the provision control means provides the viewer with the predetermined information based on that linking information.
As a result, the viewer can easily acquire the predetermined information on an object included in a moving image while viewing the moving image, or at a timing after viewing it.
Specifically, for example, the viewer W in the above-described embodiment can easily acquire the object information on an object J included in the moving image L while viewing the moving image L, or at a timing after viewing it.
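The manage / present / accept / acquire / provide flow summarized above can be sketched end to end. This is a minimal illustration under invented names (`TigSystem`, the example URL); it is not the claimed implementation:

```python
from typing import Optional

class TigSystem:
    """Minimal sketch of the manage / accept / acquire / provide flow."""

    def __init__(self) -> None:
        self.link_db = {}  # object -> linking information, managed in advance

    def manage(self, obj: str, link_info: str) -> None:
        """Management means: link info to an object before any designation is made."""
        self.link_db[obj] = link_info

    def accept_designation(self, obj: str) -> Optional[str]:
        """First accepting means + acquisition means: accept the designation of an
        object and fetch the linking information managed for it."""
        return self.link_db.get(obj)

    def provide(self, obj: str) -> str:
        """Provision control means: turn the linking information into the
        predetermined information shown to the viewer."""
        link = self.accept_designation(obj)
        return f"show:{link}" if link else "no info linked"

system = TigSystem()
system.manage("J1", "https://example.com/talent")  # done before the video is shown
result = system.provide("J1")                      # viewer taps TIG area A1
```

The essential ordering constraint is that `manage` runs before the moving image is presented, so a tap can be resolved immediately.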
The system may further include a second presentation control means (for example, the heat map unit 106) that generates information indicating positions on the moving image at which one or more users (for example, viewers W) have performed the predetermined operation (for example, a tap operation), and executes control for presenting that information in a predetermined format (for example, a heat map).
Since the second presentation control means generates information indicating positions on the moving image at which one or more users have performed the predetermined operation, and executes control for presenting it in a predetermined format, effects such as the following are obtained.
That is, it is possible to grasp at a glance specifically which positions within the predetermined area in which an object is present have received many predetermined operations, and which positions have received none. It is also possible to grasp at a glance which positions outside the predetermined area have received the predetermined operation.
As a result, for example, the position of a predetermined area set in the moving image can be corrected to a more suitable position. The initially set size of the predetermined area can also be corrected. Furthermore, among objects to which object information has not yet been linked, a new linking operation can be performed on objects on which users have performed many predetermined operations.
The second presentation control means may execute control for presenting, as the information indicating positions on the moving image at which the one or more users have performed the predetermined operation, a heat map generated based on the number of times the predetermined operation has been performed.
Since the second presentation control means generates information indicating positions on the moving image at which one or more users have performed the predetermined operation, and executes control for presenting it in the form of a heat map, effects such as the following are obtained.
That is, the heat map makes it possible to grasp at a glance specifically which positions within the predetermined area in which an object is present have received many predetermined operations, and which positions have received none. The heat map also makes it possible to grasp at a glance which positions outside the predetermined area have received the predetermined operation.
As a result, for example, the position of a predetermined area set in the moving image can be corrected to a more suitable position. The initially set size of the predetermined area can also be corrected. Furthermore, among objects to which object information has not yet been linked, a new linking operation can be performed on objects on which users have performed many predetermined operations.
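One minimal way to build such a heat map is to bucket tap coordinates into grid cells and count per cell; the cell counts then drive the colour intensity. The grid size and coordinates below are arbitrary illustrative values:

```python
from collections import Counter

def build_heatmap(taps: list, cell: int = 50) -> Counter:
    """Aggregate tap coordinates into a grid; each cell's count drives its colour."""
    return Counter((x // cell, y // cell) for x, y in taps)

# Taps collected from one or more viewers on the same moving image.
taps = [(12, 18), (20, 25), (48, 40), (30, 30), (160, 130), (155, 140), (158, 133)]
heat = build_heatmap(taps)
hottest_cell, count = heat.most_common(1)[0]  # cell with the most taps
```

Comparing the hottest cells against the configured TIG areas reveals both mis-positioned areas and untagged objects that attract many taps.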
The system may further include a branch generation means that generates information indicating branch points at which the moving image branches into a plurality of stories.
Since the branch generation means generates information indicating branch points at which a moving image branches into a plurality of stories, the story branching service described above can be provided.
The system may further include a voice recognition means that recognizes the user's voice.
Since the voice recognition means recognizes the user's voice, the voice TIG service described above can be provided.
The system may further include a gesture recognition means that recognizes the user's gestures.
Since the gesture recognition means recognizes the user's gestures, the gesture TIG service described above can be provided.
The system may further include a suggestion control means that executes control for displaying a predetermined advertisement (signage) on the moving image based on the user's operation history.
Since the suggestion control means executes control for displaying a predetermined advertisement (signage) on a moving image based on the user's operation history, the real-time suggestion service described above can be provided.
The management means may manage a plurality of pieces of the linking information linked to an object,
the acquisition means may acquire the plurality of pieces of linking information managed in association with the object whose designation has been accepted, and
the provision control means may execute control for providing the predetermined information to the user based on the plurality of pieces of linking information acquired by the acquisition means.
Thus, the management means manages a plurality of pieces of linking information linked to an object, the acquisition means acquires the plurality of pieces of linking information managed in association with an object whose designation by the user has been accepted, and the provision control means executes control for providing the predetermined information to the user based on that plurality of pieces of linking information. As a result, the multilink service described above can be provided.
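The multilink case reduces, in sketch form, to managing a list of linking information per object and returning the whole list on designation so the user can choose a destination. The URLs are hypothetical placeholders:

```python
link_db = {
    # One object J managed with multiple pieces of linking information.
    "J1": [
        "https://example.com/shop",
        "https://example.com/brand",
        "https://example.com/video",
    ],
}

def acquire_all(obj: str) -> list:
    """Acquisition means for the multilink case: fetch every piece of linking
    information managed for the designated object, so the user can pick one."""
    return link_db.get(obj, [])

links = acquire_all("J1")  # all link destinations offered when J1's TIG area is tapped
```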
1: management server, 2: setter terminal, 3: viewer terminal, 4: external server, 11: CPU, 12: ROM, 13: RAM, 14: bus, 15: input/output interface, 16: input unit, 17: output unit, 18: storage unit, 19: communication unit, 20: drive, 30: removable media, 101: linking information management unit, 102: moving image presentation control unit, 103: designation accepting unit, 104: acquisition unit, 105: provision unit, 106: heat map unit, 107: branch generation unit, 108: voice recognition unit, 109: gesture recognition unit, 110: suggestion control unit, 181: object DB, C: setter, W: viewer, L: moving image, J: object, A: TIG area, F: balloon, H: linking information, I: microphone, G: ON/OFF switch button, V: television, S: storefront signage, K: advertisement, B: button, D: area, E: area, M: screen, N: network

Claims (8)

  1.  An information processing system comprising:
     a management means that, for an object present in a predetermined area that is designated by a user in a displayed moving image, links information for providing predetermined information to the user before the designation is made, and manages the information as linking information;
     a first presentation control means that executes control for presenting the moving image to the user;
     a first accepting means that, when the user to whom the moving image is presented performs a predetermined operation on the predetermined area, recognizes that the designation of the object present in the predetermined area has been made and accepts the designation;
     an acquisition means that acquires the linking information managed in association with the object whose designation has been accepted; and
     a provision control means that executes control for providing the predetermined information to the user based on the linking information acquired by the acquisition means.
  2.  The information processing system according to claim 1, further comprising a second presentation control means that generates information indicating positions on the moving image at which the predetermined operation has been performed by one or more of the users, and executes control for presenting the information in a predetermined format.
  3.  The information processing system according to claim 2, wherein the second presentation control means executes control to present, as the information indicating the positions on the moving image at which the predetermined operation has been performed by the one or more users, a heat map generated based on the number of times the predetermined operation has been performed.
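As an illustration of the heat map in claim 3, tap positions can be binned into a coarse grid whose per-cell counts drive the rendering. The bin count and the use of normalized coordinates are assumptions made for this sketch:

```python
from typing import List, Tuple

def build_heat_map(taps: List[Tuple[float, float]], bins: int = 4) -> List[List[int]]:
    """Count taps per grid cell; taps are (x, y) pairs normalized to 0..1."""
    grid = [[0] * bins for _ in range(bins)]
    for x, y in taps:
        # Clamp so that x == 1.0 or y == 1.0 falls into the last cell.
        gx = min(int(x * bins), bins - 1)
        gy = min(int(y * bins), bins - 1)
        grid[gy][gx] += 1
    return grid
```

Cells with higher counts would be drawn in hotter colors over the moving image, showing where viewers' operations cluster.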
  4.  The information processing system according to any one of claims 1 to 3, further comprising branch generation means for generating information indicating a branch point at which the moving image branches into a plurality of stories.
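A branch point of the kind claim 4 describes could be represented minimally as a playback time plus a mapping from viewer choices to continuation videos. The identifiers below are invented for the sketch:

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class BranchPoint:
    time_sec: float          # playback position where the story can branch
    choices: Dict[str, str]  # choice label -> id of the next moving image

def next_video(branch: BranchPoint, selected_label: str) -> str:
    """Return the id of the moving image that continues the chosen story."""
    return branch.choices[selected_label]
```

At the branch time the player would pause, offer the choice labels, and continue with whichever moving image the viewer selects.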
  5.  The information processing system according to any one of claims 1 to 4, further comprising voice recognition means for recognizing the voice of the user.
  6.  The information processing system according to any one of claims 1 to 5, further comprising gesture recognition means for recognizing a gesture of the user.
  7.  The information processing system according to any one of claims 1 to 6, further comprising a suggestion control unit that executes control to display a predetermined advertisement on the moving image based on the user's operation history.
  8.  The information processing system according to any one of claims 1 to 7, wherein
     the management means links a plurality of pieces of the link information to the object and manages them,
     the acquisition means acquires the plurality of pieces of link information managed in association with the object whose designation has been accepted, and
     the provision control means executes control to provide the predetermined information to the user based on the plurality of pieces of link information acquired by the acquisition means.
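Claim 8 extends claim 1 so that one object can carry several pieces of link information. A minimal sketch of that one-to-many management (all names here are assumptions for illustration):

```python
from collections import defaultdict
from typing import Dict, List

class MultiLinkManager:
    """Toy 'management means' linking several pieces of information to one object."""

    def __init__(self) -> None:
        self._links: Dict[str, List[str]] = defaultdict(list)

    def link(self, object_id: str, info: str) -> None:
        self._links[object_id].append(info)

    def get_all(self, object_id: str) -> List[str]:
        """Toy 'acquisition means': every piece linked to the designated object."""
        return list(self._links[object_id])
```

On a designation, the provision control would then offer the viewer all linked pieces (for example, a spec page and a purchase page for one product) rather than a single destination.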
PCT/JP2019/039345 2018-10-04 2019-10-04 Information processing device WO2020071545A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020551117A JPWO2020071545A1 (en) 2018-10-04 2019-10-04 Information processing device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018189416 2018-10-04
JP2018-189416 2018-10-04

Publications (1)

Publication Number Publication Date
WO2020071545A1 true WO2020071545A1 (en) 2020-04-09

Family

ID=70055412

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/039345 WO2020071545A1 (en) 2018-10-04 2019-10-04 Information processing device

Country Status (2)

Country Link
JP (1) JPWO2020071545A1 (en)
WO (1) WO2020071545A1 (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010109773A * 2008-10-30 2010-05-13 Koichi Sumida Information providing system, content distribution apparatus and content viewing terminal device
JP2011209979A * 2010-03-30 2011-10-20 Brother Industries Ltd Merchandise recommendation method and merchandise recommendation system
JP2014100208A * 2012-11-19 2014-06-05 Konami Digital Entertainment Co Ltd Game control device, game control method, program, game system, and lottery device
JP2015082706A * 2013-10-21 2015-04-27 Canon Inc. Image forming apparatus, control method of the same, and computer program
JP2015115661A * 2013-12-09 2015-06-22 Pumo Co., Ltd. Interface device for designating link destination, interface device for viewer, and computer program
US20150262278A1 * 2013-03-15 2015-09-17 Catherine G. Lin-Hendel Method and System to Conduct Electronic Commerce Through Motion Pictures or Life Performance Events
JP2016119125A * 2016-03-16 2016-06-30 Yahoo Japan Corporation Advertisement distribution system, advertisement distribution method, terminal estimating device, terminal estimation method, and program
WO2017056229A1 * 2015-09-30 2017-04-06 Rakuten, Inc. Information processing device, information processing method, and program for information processing device
WO2017077751A1 * 2015-11-04 2017-05-11 Sony Corporation Information processing device, information processing method, and program
JP2017129640A * 2016-01-18 2017-07-27 Canon Inc. Image processing device, and control method and computer program of the same
JP2019080252A * 2017-10-26 2019-05-23 Ricoh Company, Ltd. Program, image display method, image display system, and information processing device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170147164A1 (en) * 2015-11-25 2017-05-25 Google Inc. Touch heat map


Also Published As

Publication number Publication date
JPWO2020071545A1 (en) 2021-06-03

Similar Documents

Publication Publication Date Title
US9743145B2 (en) Second screen dilemma function
US9583147B2 (en) Second screen shopping function
US20180152767A1 (en) Providing related objects during playback of video data
US20150172787A1 (en) Customized movie trailers
US9576334B2 (en) Second screen recipes function
CN103108248B Implementation method and system for interactive video
US11343595B2 (en) User interface elements for content selection in media narrative presentation
CN107735746A (en) Interactive media system and method
CN107852399A Streaming media presentation system
CN107430630A Methods, systems, and media for aggregating and presenting content related to a particular video game
US9578370B2 (en) Second screen locations function
US10440435B1 (en) Performing searches while viewing video content
US11277668B2 (en) Methods, systems, and media for providing media guidance
JP2021535656A (en) Video processing methods, equipment, devices and computer programs
US20180249206A1 (en) Systems and methods for providing interactive video presentations
CN112596694B (en) Method and device for processing house source information
US20170262991A1 (en) Browsing interface for item counterparts having different scales and lengths
US20240291784A1 (en) Methods, Systems, and Media for Identifying and Presenting Video Objects Linked to a Source Video
US20220415360A1 (en) Method and apparatus for generating synopsis video and server
CN114143572A (en) Live broadcast interaction method and device, storage medium and electronic equipment
CN112052315A (en) Information processing method and device
WO2019059207A1 (en) Display control device and computer program
CN112667333A (en) Singing list interface display control method and device, storage medium and electronic equipment
JP5821152B2 (en) Content providing server and content providing method
WO2020071545A1 (en) Information processing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19869682

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020551117

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19869682

Country of ref document: EP

Kind code of ref document: A1