WO2016111067A1 - Information processing apparatus, information processing method, and program - Google Patents
Information processing apparatus, information processing method, and program
- Publication number
- WO2016111067A1 (PCT/JP2015/079175)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- action
- information processing
- processing apparatus
- content
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07F—COIN-FREED OR LIKE APPARATUS
- G07F17/00—Coin-freed apparatus for hiring articles; Coin-freed facilities or services
- G07F17/32—Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
- G07F17/3225—Data transfer within a gaming system, e.g. data sent between gaming machines and users
- G07F17/323—Data transfer within a gaming system, e.g. data sent between gaming machines and users wherein the player is informed, e.g. advertisements, odds, instructions
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B69/00—Training appliances or apparatus for special sports
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B69/00—Training appliances or apparatus for special sports
- A63B69/18—Training appliances or apparatus for special sports for skiing
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/211—Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/213—Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/40—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
- A63F13/42—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
- A63F13/428—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/54—Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/67—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/80—Special adaptations for executing a specific game genre or game mode
- A63F13/807—Gliding or sliding on surfaces, e.g. using skis, skates or boards
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07F—COIN-FREED OR LIKE APPARATUS
- G07F17/00—Coin-freed apparatus for hiring articles; Coin-freed facilities or services
- G07F17/32—Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
- G07F17/3286—Type of games
- G07F17/3288—Betting, e.g. on live events, bookmaking
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/216—Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
Definitions
- the present disclosure relates to an information processing apparatus, an information processing method, and a program.
- Patent Literature 1 describes an information processing apparatus that has a plurality of behavior determination units, each specialized for a specific behavior among the user behaviors recognized by threshold processing of sensor data, and that generates behavior information based on the determination result of each behavior determination unit.
- the user actions detected by the technique described in Patent Literature 1 can be used in various ways, not only for the generation of information. However, such uses have not yet been sufficiently proposed.
- the present disclosure proposes a new and improved information processing apparatus, information processing method, and program capable of providing various benefits to the user based on the detection result of the user's action.
- according to the present disclosure, there is provided an information processing apparatus comprising: an information acquisition unit that acquires action information indicating a detected user action; and a content providing unit that provides content developed according to a temporal or spatial sequence of the actions.
- according to the present disclosure, there is also provided an information processing method including acquiring action information indicating a detected user action, and providing, by a processor, content developed according to a temporal or spatial sequence of the actions.
- according to the present disclosure, there is also provided a program that causes a computer to realize a function of acquiring action information indicating a detected user action and a function of providing content developed according to a temporal or spatial sequence of the actions.
- FIG. 2 is a block diagram illustrating a schematic functional configuration of an information processing apparatus according to an embodiment of the present disclosure.
- FIG. 2 is a flowchart illustrating a first example of a process for detecting a jump included in a user action according to an embodiment of the present disclosure. FIG. 3 is a flowchart showing an example of the high impact detection process shown in FIG. 2.
- FIG. 4 is a flowchart showing a first example of the free fall detection process shown in FIG. 2.
- FIG. 5 is a flowchart illustrating an example of a process for providing a virtual game course in an embodiment of the present disclosure. Further figures show first to fourth examples of the virtual game course provided in an embodiment of the present disclosure, and an example of the music rhythm game provided in an embodiment of the present disclosure.
- A further figure is a block diagram illustrating a hardware configuration example of an information processing apparatus according to an embodiment of the present disclosure.
- FIG. 1 is a block diagram illustrating a schematic functional configuration of an information processing apparatus according to an embodiment of the present disclosure.
- as illustrated in FIG. 1, an information processing apparatus 100 includes a transmission unit 101, a reception unit 102, a sensor device control unit 103, a sensor data analysis unit 104, an analysis result processing unit 107, a detection section information holding unit 110, an additional information holding unit 111, and a service control unit 112.
- the information processing apparatus 100 may be a single device, or a collection of devices that constitute a server on a network, as shown in some specific examples described later.
- the information processing apparatus 100 may be a terminal device that communicates with a server via a network or a terminal device that operates alone.
- the functions of the information processing apparatus 100 may be realized by being distributed to a server and a terminal device that communicate with each other via a network.
- the hardware configuration of the information processing apparatus 100, or of each of the plurality of apparatuses that implement its functions, will be described later.
- the transmission unit 101 and the reception unit 102 are realized by a communication device that communicates with the sensor device by various wired or wireless communication methods, for example.
- the sensor device includes at least one sensor mounted on a user or an instrument used by the user.
- the transmission unit 101 transmits a control signal output from the sensor device control unit 103 to the sensor device.
- the receiving unit 102 receives sensor data and time information (time stamp) from the sensor device, and inputs them to the sensor device control unit 103.
- the receiving unit 102 realizes a sensor data receiving unit that receives sensor data provided by a sensor attached to the user or to an instrument used by the user.
- when the information processing apparatus 100 is a terminal device including at least one sensor, more specifically a mobile device or a wearable device, the sensor data receiving unit may be realized by a processor such as a CPU (Central Processing Unit) that executes a driver program for receiving sensor data from the sensor.
- the information processing apparatus according to the present embodiment may include an acquisition unit that acquires sensor data from an external apparatus including a sensor, for example.
- the acquisition unit is realized by, for example, a processor such as a CPU that executes a driver program for receiving sensor data from an external device including a sensor, via the communication device that implements the transmission unit 101 and the reception unit 102.
- the information processing apparatus according to this embodiment may also take a configuration that does not include a sensor data receiving unit.
- the sensor device control unit 103 is realized by a processor such as a CPU operating according to a program stored in a memory.
- the sensor device control unit 103 acquires sensor data and time information from the reception unit 102.
- the sensor device control unit 103 provides these data to the sensor data analysis unit 104 and the analysis result processing unit 107.
- the sensor device control unit 103 may perform preprocessing of data as necessary.
- the sensor device control unit 103 outputs a control signal for the sensor device to the transmission unit 101.
- the sensor device control unit 103 may output a control signal based on feedback of a processing result in the sensor data analysis unit 104 or the analysis result processing unit 107.
- the sensor data analysis unit 104 is realized by a processor such as a CPU operating according to a program stored in a memory.
- the sensor data analysis unit 104 performs various analyzes using the sensor data provided from the sensor device control unit 103.
- the sensor data analysis unit 104 includes a feature amount extraction unit 105 and an action detection unit 106.
- the feature amount extraction unit 105 extracts various feature amounts from the sensor data.
- the action detection unit 106 detects a user action based on the feature amount extracted from the sensor data by the feature amount extraction unit 105.
- the user actions detected by the action detection unit 106 include the user's turns and/or jumps.
- the action detection unit 106 may detect other user actions such as walking, running, standing still, and moving by a vehicle.
- the user's action can be detected in association with time information (time stamp) indicating a section (action section) in which the user action occurred.
- the sensor data analysis unit 104 stores the analysis result, more specifically information including the user action sections detected by the action detection unit 106, in the detection section information holding unit 110. The sensor data analysis unit 104 also provides the analysis result to the analysis result processing unit 107.
- the analysis result processing unit 107 is realized by a processor such as a CPU operating according to a program stored in a memory. Based on the analysis result of the sensor data analysis unit 104, more specifically the information on the user actions detected by the action detection unit 106, the analysis result processing unit 107 generates various kinds of additional information used by the service control unit 112 in the subsequent stage.
- the analysis result processing unit 107 includes a clustering processing unit 108 and a scoring processing unit 109. For example, when the detected user actions include a plurality of actions of the same type, the clustering processing unit 108 may cluster these actions based on feature amounts (the feature amounts extracted by the feature amount extraction unit 105, or intermediate feature amounts calculated by the action detection unit 106).
- the scoring processing unit 109 may calculate a score indicating action evaluation based on the feature amount. Further, the clustering processing unit 108 and / or the scoring processing unit 109 may newly calculate a feature amount based on the sensor data provided from the sensor device control unit 103.
- the analysis result processing unit 107 stores the additional information resulting from the processing, more specifically the clustering results from the clustering processing unit 108 and the score information calculated by the scoring processing unit 109, in the additional information holding unit 111 together with time information (time stamps).
- the detection section information holding unit 110 and the additional information holding unit 111 are realized by various memories or storage devices, for example.
- the detection section information holding unit 110 and the additional information holding unit 111 temporarily or permanently store the information provided from the sensor data analysis unit 104 and the analysis result processing unit 107 as described above.
- the information stored in the detection section information holding unit 110 and the information stored in the additional information holding unit 111 can be associated with each other by, for example, time information (time stamp). Further, the detection section information holding unit 110 and the additional information holding unit 111 may store information regarding each of a plurality of users.
- the service control unit 112 is realized by a processor such as a CPU operating according to a program stored in a memory.
- the service control unit 112 controls the service 113 using information stored in the detection section information holding unit 110 and / or the additional information holding unit 111. More specifically, for example, the service control unit 112 generates information provided to the user in the service 113 based on the information read from the detection section information holding unit 110 and / or the additional information holding unit 111.
- when the information processing apparatus 100 is a server, the information output by the service control unit 112 can be transmitted to a terminal apparatus via the communication apparatus.
- the information output by the service control unit 112 can be provided to an output device such as a display, a speaker, or a vibrator included in the terminal device.
- a sensor device including an acceleration sensor, an angular velocity sensor, and the like may be directly attached to a user by being embedded in wear, or incorporated in a wearable terminal device or a mobile terminal device.
- the sensor device may be mounted on a snowboard tool, such as a board.
- the action detection process executed in the present embodiment is not limited to jumps and turns that occur on snowboards.
- the action detection process may be executed on jumps and turns that occur in sports other than snowboarding. Since jumps and turns are actions that can occur in common in various sports, it may be possible to detect jumps and turns regardless of the type of sports by, for example, detection processing described below.
- an action other than a jump or turn may be detected.
- various techniques used in the action recognition technique described in, for example, Japanese Patent Application Laid-Open No. 2010-198595 can be applied.
- FIG. 2 is a flowchart illustrating a first example of a process for detecting a jump included in a user action according to an embodiment of the present disclosure. The illustrated process is executed by, for example, the sensor data analysis unit 104 included in the information processing apparatus 100 described above.
- in the first example, the sensor data analysis unit 104 performs a high impact detection process (S110) and a free fall detection process (S120) for each predetermined time frame. Details of these processes will be described later. Based on their results, the action detection unit 106 included in the sensor data analysis unit 104 determines whether a section sandwiched between two high impact sections (estimated to be takeoff and landing) has occurred (S101). When such a section occurs, the action detection unit 106 determines whether the duration of the section is between two threshold values (TH1 and TH2) (S102). These threshold values are set, for example, to exclude sections that are too long or too short to be a jump.
- the action detection unit 106 further determines whether or not the ratio of the free fall section in the section exceeds the threshold (TH) (S103). When the ratio of the free fall section exceeds the threshold, it is detected that the section (section sandwiched between two high impact sections) is a jump section (S104).
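The decision logic of S101 to S104 can be sketched as follows. This is a minimal Python illustration, not the patent's implementation: the per-frame boolean representation of the high impact and free fall results, the function name, and all threshold values are assumptions, since the patent leaves them unspecified.

```python
def detect_jump_sections(high_impact, free_fall, th1, th2, th_ratio):
    """Sketch of S101-S104: a jump is a section sandwiched between two
    high-impact frames (estimated takeoff and landing) whose duration
    lies between TH1 and TH2 and whose free-fall ratio exceeds TH.

    `high_impact` and `free_fall` are per-frame booleans (an assumed
    representation; the patent does not specify the data layout)."""
    impacts = [i for i, flag in enumerate(high_impact) if flag]
    jumps = []
    for start, end in zip(impacts, impacts[1:]):  # S101: section occurred
        duration = end - start
        if not (th1 < duration < th2):            # S102: duration check
            continue
        inner = free_fall[start + 1:end]
        ratio = sum(inner) / len(inner) if inner else 0.0
        if ratio > th_ratio:                      # S103: free-fall ratio
            jumps.append((start, end))            # S104: jump section
    return jumps
```

For instance, with high impacts at two frames and free fall detected on every frame between them, the sandwiched section is reported as a jump provided its duration passes the TH1/TH2 check.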
- FIG. 3 is a flowchart showing an example of the high impact detection process (S110) shown in FIG.
- acceleration D111 included in the sensor data is used.
- the feature quantity extraction unit 105 included in the sensor data analysis unit 104 calculates a norm of acceleration (S112), and further smoothes the norm with an LPF (Low Pass Filter) (S113).
- the feature amount extraction unit 105 calculates the power of the amplitude in a predetermined time frame for the smoothed norm of acceleration (S114).
- the action detection unit 106 determines whether or not the power exceeds the threshold value (TH) (S115), and when the power exceeds the threshold value, detects that the time frame is a high impact section (S116).
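The norm, smoothing, and power-threshold pipeline of S111 to S116 can be sketched compactly. The exponential smoothing filter standing in for the LPF, the single-frame power computation, and the numeric defaults are all assumptions; the patent specifies none of them.

```python
import math

def high_impact_sections(acc_xyz, alpha=0.9, power_th=4.0):
    """Per-frame high-impact flags: acceleration norm (S112), low-pass
    smoothing (S113, here a simple exponential filter), amplitude power
    (S114) compared against a threshold (S115-S116)."""
    flags, smoothed = [], 0.0
    for ax, ay, az in acc_xyz:
        norm = math.sqrt(ax * ax + ay * ay + az * az)     # S112
        smoothed = alpha * smoothed + (1 - alpha) * norm  # S113
        power = smoothed * smoothed                       # S114 (one-frame power)
        flags.append(power > power_th)                    # S115-S116
    return flags
```

A quiet signal stays below the threshold, while a sustained large acceleration drives the smoothed power above it after a few frames.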
- FIG. 4 is a flowchart showing a first example of the free fall detection process (S120) shown in FIG.
- in this process, the acceleration (D121) and the angular velocity (D125) included in the sensor data are used.
- the feature quantity extraction unit 105 calculates the norm of acceleration (S122), and the action detection unit 106 determines whether or not the norm in each section is below a threshold value (TH) (S123).
- the action detection unit 106 detects that the section is a free fall section with respect to a section in which the norm of acceleration is below the threshold (S124).
- the feature quantity extraction unit 105 also calculates the norm for the angular velocity (S126), and further calculates the variance of the norm in a predetermined time frame (S127).
- the action detection unit 106 determines whether or not the variance of the norm of the angular velocity is lower than the threshold (TH) (S128), and when the variance is lower than the threshold, masks the free fall section detected in S124 (that is, cancels its determination as a free fall section) (S129).
- such mask processing based on angular velocity rests on the observation that a jump causes a change in angular velocity; a free fall section in which the change (variance) of the angular velocity is small is therefore attributed to some cause other than a jump.
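The first free fall detector (S121 to S129), including the angular-velocity mask just described, can be sketched as follows. The thresholds and the use of a single-window variance are assumptions made for illustration.

```python
import math
import statistics

def free_fall_sections(acc_xyz, gyro_xyz, acc_th=2.0, var_th=0.1):
    """Sketch of FIG. 4 (S121-S129): frames whose acceleration norm
    falls below a threshold are marked as free fall (S122-S124); if the
    variance of the angular-velocity norm over the window is small
    (S126-S128), the detection is masked, i.e. cancelled (S129)."""
    acc_norm = [math.sqrt(x * x + y * y + z * z) for x, y, z in acc_xyz]
    gyro_norm = [math.sqrt(x * x + y * y + z * z) for x, y, z in gyro_xyz]
    falls = [n < acc_th for n in acc_norm]          # S122-S124
    if statistics.pvariance(gyro_norm) < var_th:    # S126-S128
        falls = [False] * len(falls)                # S129: little rotation -> not a jump
    return falls
```

With a varying angular velocity the low-acceleration frames survive as free fall; with a constant angular velocity the whole window is masked.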
- the mask processing in S126 to S129 does not necessarily have to be executed after the free fall section determination processing in S121 to S124.
- the action detection unit 106 may perform the mask process in advance, and may not execute the free fall section determination process for the section specified as the section to be masked.
- the mask process may be executed after the jump section detection process (S104) shown in FIG. 2, and a section once detected as a jump section may be masked.
- the free fall detection process (S120) shown in FIG. 4 and elsewhere does not necessarily need to be executed before the section occurrence determination (S101) shown in FIG. 2; the free fall detection process may instead be executed for a section just before the determination regarding the free fall ratio of that section (S103).
- FIG. 5 is a flowchart showing a second example of the free fall detection process (S120) shown in FIG.
- acceleration D121 included in the sensor data provided by the acceleration sensor mounted on the user or an instrument used by the user is used.
- in S121 to S124, the feature amount extraction unit 105 and the action detection unit 106 execute the same processing as in the first example to detect free fall sections.
- on the other hand, the feature amount extraction unit 105 extracts the X-axis component and the Y-axis component of the acceleration (S132), and calculates the covariance between them (S133). More specifically, for example, when the user is walking or running on a reference plane (not limited to a horizontal plane; it may be an inclined plane), the feature amount extraction unit 105 takes, among the coordinate axes of the acceleration sensor, the X axis as the axis closest to the user's direction of travel and the Y axis as the axis closest to the normal direction of the reference plane, and calculates the covariance of the acceleration components (X-axis component and Y-axis component) in these axis directions.
- the action detection unit 106 determines whether or not the covariance is lower than the threshold (TH) (S134), and when the covariance is lower than the threshold, masks the free fall section detected in S124 (S129).
- such mask processing based on the covariance of acceleration is effective when, for example, the jump to be detected is not a so-called vertical jump, with displacement in the normal direction of the reference plane, but a jump with displacement in the user's direction of travel.
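The covariance-based mask of S132 to S134 can be sketched as follows. The threshold value and the exact axis assignment are assumptions; the patent only states that the X axis approximates the direction of travel and the Y axis the plane normal.

```python
def covariance(xs, ys):
    """Population covariance of two equally long sequences."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def mask_by_xy_covariance(falls, acc_x, acc_y, cov_th=0.5):
    """Sketch of S132-S134 and S129: if the covariance between the
    travel-direction (X) and plane-normal (Y) acceleration components
    stays below a threshold, cancel the free-fall detection."""
    if covariance(acc_x, acc_y) < cov_th:   # S134
        return [False] * len(falls)         # S129: mask
    return falls
```

When the two components rise and fall together, as in a travel-direction jump, the covariance is large and the free fall detection stands; when the Y component stays flat, the detection is cancelled.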
- FIG. 6 is a flowchart illustrating a second example of a process for detecting a jump included in a user action according to an embodiment of the present disclosure. The illustrated process is executed in the sensor data analysis unit 104 included in the information processing apparatus 100, for example, as in the first example.
- the sensor data analysis unit 104 executes candidate section detection processing (S140). Details of this process will be described later.
- the action detection unit 106 included in the sensor data analysis unit 104 determines whether a candidate section has occurred (S105). When a candidate section occurs, the action detection unit 106 determines whether the duration of the section is between two threshold values (TH1 and TH2), as in the first example (S102). When the duration is between the two threshold values, the action detection unit 106 further determines whether the mean values of the vertical and horizontal acceleration in the section exceed their respective thresholds (THs) (S106). When the mean acceleration values exceed the thresholds, the candidate section is detected as a jump section (S104).
- FIG. 7 is a flowchart showing an example of the candidate section detection process (S140) shown in FIG.
- in the candidate section detection process, first, the high impact detection process (S110) described above with reference to FIG. 3, a vertical acceleration calculation process (S141), and a horizontal acceleration calculation process (S142) are executed. The feature amount extraction unit 105 included in the sensor data analysis unit 104 then calculates, for each section, the difference between the vertical acceleration and the horizontal acceleration calculated in S141 and S142 (S143). After that, the action detection unit 106 determines whether a section sandwiched between two high impact sections (estimated to be takeoff and landing) has occurred (S144).
- when such a section occurs, the action detection unit 106 determines whether the difference between the vertical and horizontal acceleration calculated in S143 exceeds a threshold (TH) within the section (S145). When the difference exceeds the threshold, the section (sandwiched between two high impact sections) is detected as a candidate jump section (S146).
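A sketch of S143 to S146 follows. The patent does not state whether the threshold comparison is per frame or aggregated over the section; the mean used here, like the threshold value, is an assumption.

```python
def candidate_jump_sections(impact_frames, v_acc, h_acc, diff_th=3.0):
    """Sketch of S143-S146: between each pair of high-impact frames,
    compare the (mean) difference between vertical and horizontal
    acceleration against a threshold to nominate candidate sections."""
    candidates = []
    for start, end in zip(impact_frames, impact_frames[1:]):          # S144
        diffs = [v_acc[i] - h_acc[i] for i in range(start + 1, end)]  # S143
        if diffs and sum(diffs) / len(diffs) > diff_th:               # S145
            candidates.append((start, end))                           # S146
    return candidates
```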
- FIG. 8 is a flowchart showing an example of the vertical acceleration calculation process (S141) shown in FIG.
- the acceleration (D151) included in the sensor data is used.
- the feature amount extraction unit 105 included in the sensor data analysis unit 104 calculates an average value (mean) of acceleration (S152).
- the average value calculated here can be, for example, a moving average.
- the feature amount extraction unit 105 executes gravity component acceleration calculation processing (S153). Further, the feature amount extraction unit 105 calculates the norm of the calculated gravity component acceleration (S154).
- the gravity component acceleration may be calculated based on an average value such as a moving average, or may be calculated using a filter such as an LPF.
- the feature quantity extraction unit 105 processes the acceleration (D151) by BPF (Band Pass Filter) separately from the processing of S152 to S154 described above (S155).
- the BPF is used for the purpose of removing a DC component (that is, gravity component) included in acceleration by a filter in a low frequency region and further smoothing acceleration by a filter in a high frequency region.
- the BPF in S155 may be replaced by a combination of other types of filters such as LPF and HPF (High Pass Filter).
- the feature amount extraction unit 105 calculates the inner product of the acceleration processed by the BPF and the gravity component acceleration calculated in S153 (S156).
- the feature amount extraction unit 105 divides the inner product calculated in S156 by the norm of the gravity component acceleration calculated in S154 (S157). Thereby, the vertical acceleration (V158) is obtained.
- in this way, the vertical acceleration is calculated by projecting the acceleration from which the gravity component has been removed by the BPF (S155) onto the direction of the gravity component acceleration.
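The projection in S154 and S156 to S157 is a standard dot-product projection, sketched below. The gravity vector is assumed to have been estimated beforehand (S153), for example by a moving average; the function name is illustrative.

```python
import math

def vertical_acceleration(acc_filtered, gravity):
    """Sketch of S154, S156-S157: project the band-pass-filtered
    acceleration onto the gravity direction to obtain the vertical
    component."""
    dot = sum(a * g for a, g in zip(acc_filtered, gravity))  # S156
    g_norm = math.sqrt(sum(g * g for g in gravity))          # S154
    return dot / g_norm                                      # S157
```

Dividing the inner product by the norm of the gravity vector yields the signed length of the acceleration component along gravity.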
- FIG. 9 is a flowchart showing an example of the horizontal acceleration calculation process (S142) shown in FIG.
- the acceleration (D151) included in the sensor data is also used in the horizontal acceleration calculation process.
- the vertical acceleration calculated in the vertical acceleration calculation process (S141) described above with reference to FIG. 8 is used.
- in the horizontal acceleration calculation process, the feature amount extraction unit 105 included in the sensor data analysis unit 104 squares the vertical acceleration for later use (S161).
- meanwhile, the feature amount extraction unit 105 processes the acceleration (D151) with the BPF (S162) to remove the DC component contained in the acceleration and to smooth it.
- the BPF in S162 may also be replaced by a combination of other types of filters such as LPF and HPF.
- the feature amount extraction unit 105 calculates the norm of the BPF-processed acceleration (S163) and squares it (S164). It then calculates the difference between this squared norm and the square of the vertical acceleration calculated in S161 (S165), and takes the square root of the difference (S166) to obtain the horizontal acceleration (V167).
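S161 and S163 to S166 amount to the Pythagorean relation between the total filtered acceleration and its vertical component, sketched below. The zero clamp is an addition not in the source, guarding against tiny negative differences from numerical error.

```python
import math

def horizontal_acceleration(acc_filtered, vertical):
    """Sketch of S161, S163-S166: recover the horizontal component from
    the norm of the band-pass-filtered acceleration and its vertical
    component via the Pythagorean relation."""
    total_sq = sum(a * a for a in acc_filtered)       # S163-S164
    diff = max(total_sq - vertical * vertical, 0.0)   # S161, S165 (clamped)
    return math.sqrt(diff)                            # S166
```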
- in the jump detection according to the embodiment of the present disclosure, a total of three types of jump detection processing are therefore possible: the first example of jump detection (FIG. 2) with the first example of the free fall detection process (FIG. 4), the first example of jump detection with the second example of the free fall detection process (FIG. 5), and the second example of jump detection (FIG. 6).
- the sensor data analysis unit 106 included in the sensor data analysis unit 104 may detect the final jump section based on the results of executing these three types of jump detection processing. More specifically, for example, when a jump section is detected by at least one of the three types of jump detection processing, the action detection unit 106 may detect the section as a final jump section. Alternatively, the action detection unit 106 may detect the section as a final jump section when a jump section is detected by two or more of the three types of jump detection processing, or by all three.
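- the final decision over the three detectors can be sketched as a simple vote (an illustrative sketch only; `min_votes` of 1, 2, or 3 corresponds to the alternatives described above):

```python
def final_jump_detected(results, min_votes=1):
    """Combine the outcomes of the three jump detection processes.

    results   -- booleans, one per jump detection process
    min_votes -- 1: any detector suffices; 2: majority; 3: all must agree
    """
    return sum(bool(r) for r in results) >= min_votes
```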
- FIG. 10 is a flowchart illustrating an example of a process for detecting a turn section included in a user action according to an embodiment of the present disclosure.
- the illustrated process is executed by, for example, the sensor data analysis unit 104 included in the information processing apparatus 100 described above.
- the sensor data analysis unit 104 detects rotation included in the user's action (S210), and further detects non-turning rotation included in the rotation (S230).
- a turn is detected from the rotation other than the non-turning rotation (S250).
- the non-turning rotation includes, for example, rotation generated by the user's head shake when the sensor is mounted on the user's head or on an instrument mounted on the head.
- the non-turning rotation may also include other rotations generated by the user's body movement; more specifically, when the sensor is mounted on the user's arm or on an instrument mounted on the arm, it may include rotation generated by the user's arm swing or arm rotation.
- the sensor data analysis unit 104 can detect a turn section with higher accuracy by detecting a turn section after removing such non-turning rotation. In this sense, it can be said that the non-turning rotation is noise with respect to the detection target turn.
- in other words, it can be said that the sensor data analysis unit 104 detects the rotation included in the user action, detects the noise included in the rotation, and detects a turn from the rotation from which the noise has been removed.
- the sensor data analysis unit 104 executes a rotation section detection process (S210).
- the rotation section is defined as a section where the angular velocity in the horizontal plane direction exceeds a threshold value.
- the sensor data analysis unit 104 determines whether a rotation section has occurred (S201). When a rotation section occurs, first, the sensor data analysis unit 104 performs a head shake detection process (S230). Further, the sensor data analysis unit 104 determines whether or not the swing is detected (S203), and when the swing is not detected, further performs a turn detection process (S250).
- thereby, sections generated by the user's head shake (for example, when the sensor is mounted on a head-mounted wearable terminal device) are excluded from the rotation sections, and turn sections in which the rotation radius, angular velocity, duration, and the like satisfy desired conditions can be extracted.
- FIG. 11 is a flowchart showing an example of the rotation section detection process (S210) shown in FIG.
- acceleration D211
- angular velocity D214
- the feature amount extraction unit 105 included in the sensor data analysis unit 104 calculates an average value (mean) of acceleration (S212).
- the average value calculated here can be, for example, a moving average.
- the feature amount extraction unit 105 executes gravity component acceleration calculation processing (S213).
- the feature amount extraction unit 105 calculates the inner product of the gravity component acceleration calculated in S213 and the angular velocity (D214) (S215). Thereby, the projection of the angular velocity in the direction of the gravitational component acceleration, that is, the angular velocity (V216) in the horizontal plane direction (around the vertical axis) is obtained.
- the feature amount extraction unit 105 temporarily integrates the calculated angular velocity (S217), and calculates the angular displacement (V218) in the horizontal plane direction.
- the feature amount extraction unit 105 processes the angular displacement with the LPF (S219). Further, the feature amount extraction unit 105 differentiates the angular displacement (S220) to obtain the angular velocity (V221) in the horizontal plane direction.
- compared with the angular velocity (V216), the angular velocity (V221) has been integrated once in S217, and the angular displacement after integration has been smoothed by the LPF in S219, so that noise is removed from the waveform.
- the action detection unit 106 included in the sensor data analysis unit 104 determines whether or not the angular velocity (V221) in the horizontal plane direction exceeds a threshold (S222), and detects a section where the angular velocity exceeds the threshold as a rotation section (S223).
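- the pipeline of S212-S223 can be sketched as follows (an illustrative Python sketch, not the implementation of this specification; a moving average stands in for the gravity estimation and the LPF, and the gravity direction is normalized here for simplicity):

```python
import numpy as np

def rotation_sections(acc, gyro, dt, threshold, smooth_win=15):
    """Detect rotation sections from 3-axis acceleration and angular velocity.

    acc, gyro -- arrays of shape (N, 3); dt -- sample period in seconds.
    Returns a boolean mask marking samples inside a rotation section.
    """
    kernel = np.ones(smooth_win) / smooth_win
    # gravity component from a moving average of acceleration (S212-S213)
    gravity = np.stack([np.convolve(acc[:, i], kernel, mode="same")
                        for i in range(3)], axis=1)
    g_unit = gravity / np.linalg.norm(gravity, axis=1, keepdims=True)
    # projection of angular velocity onto the vertical axis (S215 -> V216)
    w_h = np.einsum("ij,ij->i", gyro, g_unit)
    # integrate to the angular displacement in the horizontal plane (S217 -> V218)
    disp = np.cumsum(w_h) * dt
    # smooth the displacement (stand-in for the LPF of S219),
    # then differentiate back to an angular velocity (S220 -> V221)
    disp_s = np.convolve(disp, kernel, mode="same")
    w_smooth = np.gradient(disp_s, dt)
    # sections where the smoothed angular velocity exceeds the threshold (S222-S223)
    return np.abs(w_smooth) > threshold
```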
- FIG. 12 is a flowchart showing an example of the swing detection process (S230) shown in FIG.
- the angular velocity (V221) in the horizontal direction after smoothing calculated in the rotation section detection process shown in FIG. 11 is used.
- the feature amount extraction unit 105 acquires the sign of the angular velocity (S231). Any sign convention may be adopted for the direction of rotation; in the illustrated example, clockwise rotation (V232) and counterclockwise rotation (V233) are defined by the sign of the angular velocity (V221). Further, the feature amount extraction unit 105 calculates the time interval at which reverse rotation occurred (S234).
- more specifically, the feature amount extraction unit 105 calculates the time interval from the occurrence of the clockwise rotation (V232) to the occurrence of the counterclockwise rotation (V233), and the time interval from the occurrence of the counterclockwise rotation (V233) to the occurrence of the clockwise rotation (V232).
- the action detection unit 106 determines whether or not the time interval calculated in S234 is below a threshold (TH) (S235), and detects that a swing has occurred when the time interval is below the threshold (S236).
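- the swing determination of S231-S236 can be sketched as follows (an illustrative sketch; `max_interval` plays the role of the threshold TH, and `omega_h` denotes the smoothed horizontal angular velocity V221):

```python
def detect_swing(omega_h, dt, max_interval):
    """Flag a swing when the rotation direction reverses within a short time."""
    last_sign = 0      # sign of the current rotation direction (S231)
    last_onset = None  # time at which that direction started
    for i, w in enumerate(omega_h):
        sign = 1 if w > 0 else (-1 if w < 0 else 0)
        if sign == 0:
            continue
        t = i * dt
        if last_sign != 0 and sign != last_sign:
            # time interval between opposite rotations (S234)
            if t - last_onset < max_interval:  # S235
                return True                    # swing detected (S236)
        if sign != last_sign:
            last_sign, last_onset = sign, t
    return False
```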
- FIG. 13 is a chart showing an example of the turn detection process (S250) shown in FIG.
- in the turn detection process, a plurality of feature amounts are calculated by the feature amount extraction unit 105, and the action detection unit 106 performs a determination based on a threshold for each feature amount.
- FIG. 13 shows a process for the feature quantity extraction unit 105 to calculate each feature quantity.
- although the calculation processing of each feature amount will be described in order below, the processing by the feature amount extraction unit 105 does not necessarily have to be executed in the described order; as long as the required quantities have been acquired or calculated, the processing can be executed in an arbitrary order.
- the feature quantity extraction unit 105 calculates a norm of acceleration (D251) included in the sensor data (S252), and further calculates an average value of norms in a predetermined time frame (S253).
- the acceleration norm average (V254) calculated in this way is used as one of the feature amounts for detecting the turn.
- the feature amount extraction unit 105 processes the acceleration (D251) with the first LPF (S273), and calculates the gravity component acceleration (V274). Further, the feature amount extraction unit 105 calculates the inner product of the angular velocity (D255) and the gravity component acceleration included in the sensor data (S256). Thereby, the projection of the angular velocity in the direction of the gravitational component acceleration, that is, the angular velocity (V257) in the horizontal plane direction (around the vertical axis) is obtained. The feature quantity extraction unit 105 integrates the calculated angular velocity (S258), and calculates the angular displacement (V259) in the horizontal plane direction. Angular displacement (V259) is also used as one of feature quantities for turn detection.
- the feature amount extraction unit 105 calculates the angular velocity (V261) based on the angular displacement (V259) and the duration (V260) of the rotation section to be processed.
- the angular velocity (V261) can be smoothed over a longer time frame (for example, over the entire rotation section) compared with the angular velocity (D255).
- the duration of the rotation section (V260) and the angular velocity (V261) are also used as feature amounts for turn detection.
- the feature quantity extraction unit 105 calculates several feature quantities by analyzing the angular displacement (V259) over a predetermined time frame (S262). More specifically, the feature amount extraction unit 105 calculates the maximum value (S263, V268), average value (S264, V269), variance (S265, V270), kurtosis (S266, V271), and skewness (S267, V272) in the time frame. These feature amounts are also used as feature amounts for turn detection.
- the feature quantity extraction unit 105 processes the acceleration (D251) with the second LPF (S275).
- the first LPF (S273) is used to extract the gravitational component acceleration (V274), which is a DC component included in the acceleration, whereas the second LPF (S275) Used to smooth acceleration by filtering the high frequency region. Therefore, the passband settings of these LPFs can be different.
- the feature amount extraction unit 105 calculates the inner product of the acceleration smoothed by the second LPF (S275) and the gravity component acceleration (V274) extracted by the first LPF (S273) (S276). Thereby, the vertical acceleration (V277) is obtained. Further, the feature amount extraction unit 105 calculates the difference between the acceleration vector obtained by combining the gravity component acceleration (V274) and the vertical acceleration (V277), and the acceleration smoothed by the second LPF (S275) (S278). Thereby, the horizontal acceleration (V279) is obtained. The feature amount extraction unit 105 calculates the average value of the horizontal acceleration (S280). The average value (V281) of the horizontal acceleration calculated in this way is also used as a feature amount for turn detection.
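- the windowed statistics of S262-S267 can be sketched as follows (an illustrative sketch; the kurtosis and skewness are computed as standardized moments, which is one common convention and assumes a non-constant window):

```python
import numpy as np

def turn_features(disp_window):
    """Statistics of the horizontal angular displacement over a time frame."""
    x = np.asarray(disp_window, dtype=float)
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma  # standardized values (sigma must be non-zero)
    return {
        "max": x.max(),               # S263 -> V268
        "mean": mu,                   # S264 -> V269
        "variance": sigma ** 2,       # S265 -> V270
        "kurtosis": (z ** 4).mean(),  # S266 -> V271
        "skewness": (z ** 3).mean(),  # S267 -> V272
    }
```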
- the action detection unit 106 determines whether or not a turn has occurred based on the feature amount extracted from the sensor data as described above, for example.
- for example, the action detection unit 106 performs the determination based on the duration of the rotation section (V260), the angular displacement in the horizontal plane (V259), the smoothed angular velocity (V261), the acceleration norm average (V254), the horizontal acceleration average value (V281), and the maximum value (V268), average value (V269), variance (V270), kurtosis (V271), and skewness (V272) of the angular velocity in the time frame.
- the feature amount used for the determination is not limited to the above example.
- a feature amount other than the above example may be used, or a part of the feature amount of the above example may not be used.
- the type of feature quantity used for turn detection may be determined by principal component analysis based on sensor data when a turn actually occurs.
- that is, the feature amounts used for the determination may be determined based on tendencies of the sensor data that appear when a turn actually occurs.
- the acceleration norm average (V254) and the horizontal acceleration average value (V281) are feature quantities related to the turning radius of the turn.
- the threshold of each feature amount applied in the determination by the action detection unit 106 is determined, for example, according to the result of machine learning based on sensor data acquired when a turn actually occurs. At this time, whether or not a turn has actually occurred may be determined manually, for example with reference to an action video acquired simultaneously with the sensor data. Further, not only whether a turn has occurred, but also a label indicating what kind of turn it was may be given. More specifically, for example, as a result of referring to the video, labels may be given that indicate attributes of an action as determined on the service provider side: actions the service provider wants to detect as a turn, does not want to detect as a turn, or is indifferent about detecting.
- the action detection processing executed in the present embodiment is not limited to jumps and turns that occur on snowboards.
- the action detection processing may be executed for jumps and turns that occur in sports other than snowboarding, or in scenes other than sports.
- an action other than a jump or turn may be detected.
- the action detection unit 106 may detect a fall that occurs on a snowboard or the like.
- for example, the feature quantity extraction unit 105 calculates the norm of acceleration in the same manner as in the jump and turn detection described above, and the action detection unit 106 may detect the occurrence of a fall when the norm exceeds a threshold value (for example, a large value that does not occur in normal downhill riding).
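- this fall determination can be sketched in one comparison (the 40 m/s² default threshold is an illustrative assumption, not a value from this specification):

```python
import numpy as np

def detect_fall(acc_sample, threshold=40.0):
    """Flag a fall when the acceleration norm exceeds a threshold
    larger than anything seen in normal downhill riding."""
    return bool(np.linalg.norm(acc_sample) > threshold)
```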
- the scoring processing unit 109 included in the analysis result processing unit 107 calculates a score (action score) for evaluating the generated action, for the action section including the jump section and/or the turn section detected by the processing described above with reference to FIGS. 2 to 13.
- the action score can be calculated, for example, by extracting physical quantities (feature quantities) representing good or bad actions and features from sensor data in the action section and weighting and adding them.
- the service control unit 112 generates information on the action (for example, jump or turn) based on the score calculated in this way.
- in the case of a jump section, for example, the duration of the section, the angular displacement around the X, Y, and Z axes in the section, the ratio of the free fall section, and the magnitude of the impact at takeoff and landing can be extracted as feature amounts for calculating the score.
- in the case of a turn section, for example, the duration of the section, the displacement angle, the average value, maximum value, and standard deviation of the angular velocity, and the maximum value and standard deviation of the angular acceleration can be extracted as feature amounts for calculating the score.
- the weighted addition coefficient can be set according to the nature of the action emphasized in the service 113 provided by the information processing apparatus 100, for example.
- the method for calculating the action score from the feature amount is not limited to the weighted addition, and other calculation methods may be used.
- the action score may be calculated by applying a machine learning algorithm such as a linear regression model.
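- the weighted addition described above can be sketched as follows (an illustrative sketch; the feature names and coefficients are hypothetical, to be set according to the nature of the action the service emphasizes):

```python
def action_score(features, weights):
    """Weighted addition of action-section feature amounts.

    features -- {name: value} extracted from the action section
    weights  -- {name: coefficient} reflecting the service's emphasis
    """
    return sum(weights[name] * value
               for name, value in features.items()
               if name in weights)
```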
- the clustering processing unit 108 included in the analysis result processing unit 107 applies a clustering algorithm such as the k-means method, using the feature amounts extracted for scoring, to the action sections including the jump sections and/or turn sections detected by the processing described above with reference to FIGS. 2 to 13, and classifies the detected actions into clusters.
- actions may be classified into clusters according to the length of the duration of the section or the magnitude of rotation.
- the result of clustering is used, for example, to extract action sections so that various types of actions such as jumps and turns are included in the moving image when a digest moving image is provided as a service. Also, by classifying good actions and bad actions into separate clusters, the user may look back on the actions or use them for coaching to improve the actions.
- the analysis result processing unit 107 may calculate the similarity between action sections based on the correlation coefficient of the feature amounts as a process similar to clustering (action sections with high similarity can be treated in the same way as action sections classified into the same cluster). In addition, for example, the analysis result processing unit 107 may prepare feature amount patterns of typical types of actions in advance and determine, by a k-NN method or the like, which type a newly generated action corresponds to.
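- the k-means grouping of action sections can be sketched as follows (a minimal illustration, not a production implementation; each row of `features` is assumed to be the score feature amounts of one detected action, and the first k rows serve as a simple deterministic initialization):

```python
import numpy as np

def kmeans(features, k, iters=20):
    """Group action sections into k clusters by their feature amounts."""
    x = np.asarray(features, dtype=float)
    centers = x[:k].copy()  # simple deterministic initialization
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        # assign each action section to its nearest cluster center
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned actions
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels, centers
```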
- FIG. 14 is a block diagram illustrating an example of processing for estimating a sensor mounting state according to an embodiment of the present disclosure. More specifically, the illustrated configuration determines whether a sensor that provides sensor data is mounted directly on the user's body or on an instrument used by the user. The illustrated process is executed by, for example, the sensor data analysis unit 104 included in the information processing apparatus 100 described above. In the illustrated example, the cut-off frequency (Fc) of each filter and the length of the time frame are specifically described; however, these numerical values are examples and can be changed as appropriate according to actual sensor characteristics.
- Fc: cut-off frequency
- the receiving unit 102 of the information processing apparatus 100 receives sensor data provided by a three-axis (u, v, w) acceleration sensor 121.
- the sensor data analysis unit 104 acquires this sensor data via the sensor device control unit 103.
- the above-described determination processing is based on the fact that when the sensor is directly attached to the user's body, the high-frequency component of acceleration is attenuated by the body functioning as an LPF.
- A: amplitude of the low frequency component that has passed through the LPF (124)
- B: amplitude of the high frequency component that has passed through the HPF
- in the threshold determination (130), if the value obtained by processing A / B with the HPF (129) is larger than the threshold, it is determined that the sensor is directly attached to the user's body; otherwise, it is determined that the sensor is attached to the instrument.
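- the final comparison can be sketched as follows (an illustrative sketch; the HPF applied to A / B in the figure is omitted here for brevity, and the threshold is a hypothetical parameter):

```python
def estimate_mounting(low_amp, high_amp, threshold):
    """Estimate the sensor mounting state from frequency-band amplitudes.

    The body acts as a low-pass filter, so a large ratio of low-frequency
    amplitude (A) to high-frequency amplitude (B) suggests the sensor is
    mounted directly on the body.
    """
    ratio = low_amp / high_amp  # A / B
    return "body" if ratio > threshold else "equipment"
```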
- the estimation result as described above may be used inside the sensor data analysis unit 104, for example.
- for example, the sensor data analysis unit 104 may change the threshold value, the filter setting value, and the like in the process of detecting the user action as described above, depending on whether the sensor is attached to the body or to the instrument.
- alternatively, the estimation result as described above may be fed back to the sensor device control unit 103 and used for setting parameters relating to measurement of the sensor device, for determining the sensor data preprocessing method by the sensor device control unit 103, and the like.
- adaptive control related to sensor data processing may be performed based on estimation related to the state of the sensor data providing side, such as estimation of the sensor mounting state described above.
- for example, the sensor data analysis unit 104 may estimate the type of sport in which an action has occurred, using an algorithm such as machine learning, from the impact strength or movement pattern detected by an acceleration sensor or the like.
- the sports may be estimated for each generally recognized event, or may be estimated for each system such as board sports, water sports, bicycle competitions, and motor sports.
- the sensor data analysis unit 104 may also estimate the type of instrument on which the sensor is mounted (for example, in the case of skiing, whether it is mounted on a ski or on a pole).
- the estimation result may be used, for example, for controlling a threshold value or a filter setting value in action detection, or may be fed back to the sensor device control unit 103 and used for device control and for determining the sensor data preprocessing method, similarly to the sensor mounting state estimation result described above.
- in an embodiment of the present disclosure, an information processing apparatus is realized that includes an information acquisition unit that acquires action information indicating a detected user action, and a content providing unit that provides content developed according to a temporal or spatial sequence of actions.
- the receiving unit 102 receives sensor data and time information (time stamp) from the sensor device.
- the action detection unit 106 included in the sensor data analysis unit 104 stores information in which the above time stamp is associated with the user action detected based on the sensor data in the detection section information holding unit 110. Accordingly, the service control unit 112 can acquire user action information and time coordinates (time stamp) associated with the action from the detection section information holding unit 110.
- alternatively, the action detection unit 106 may use the time information (time stamp) at the time when the action was detected, instead of the time stamp received together with the sensor data.
- the position information of the user is associated with the data received by the receiving unit 102 from the sensor device, and the action detection unit 106 stores, in the detection section information holding unit 110, information in which the position information is associated with the user action.
- the service control unit 112 can acquire user action information and spatial coordinates (position information) associated with the action from the detection section information holding unit 110.
- when the action detection unit 106 is realized in a terminal device that is carried or worn by the user, the position information acquired by the terminal device at the time the action was detected may be used instead of the position information received together with the sensor data.
- the service control unit 112 may acquire the spatial coordinates associated with the action by matching the action video acquired separately from the sensor data with the action information using the time stamp.
- the spatial coordinates associated with the action may be defined by an absolute coordinate system such as latitude and longitude, or by a relative coordinate system based on the environment in which the action is executed, such as a course, court, or field.
- the action information is not limited to information directly indicating the user action detected by the action detection unit 106 but may include various information related to the detected user action. Therefore, in the above example, not only the action detection result provided by the action detection unit 106 but also additional information generated by the analysis result processing unit 107 can be included in the action information.
- the service control unit 112 can acquire user action information and the time coordinates (time stamp) and/or spatial coordinates (position information) associated with the action from the detection section information holding unit 110.
- the analysis result processing unit 107 associates the time stamp and/or position information provided together with the action detection result from the action detection unit 106 with the additional information generated based on the action detection result, and stores the result in the additional information holding unit 111.
- in this case, the service control unit 112 can acquire the additional information regarding the action and the time coordinates (time stamp) and/or spatial coordinates (position information) associated with the action from the additional information holding unit 111.
- the service control unit 112 can provide content to be developed according to a temporal or spatial sequence of actions based on such information.
- the content includes, for example, video or audio.
- the content may be game content that progresses with user actions as input.
- the content development may reflect the content of the action, more specifically, the type of action and the action score.
- the temporal sequence of actions is defined by a sequence of time coordinates of detected actions; that is, by the occurrence order and intervals of a series of actions that occur in a certain temporal or spatial range. More specifically, for example, the temporal sequence of actions may be defined by the order and occurrence intervals of a series of jumps and turns that occur while the user is descending a slope on a snowboard (or by the time stamps of each action indicating them). Also, for example, the temporal sequence of actions may be defined by the occurrence order and intervals of a series of user actions that occur between 10 am and 10:30 am (or by the time stamps of each action indicating them).
- the spatial sequence of actions is defined by a series of spatial coordinates of detected actions; that is, by the occurrence positions of a series of actions that occur in a certain temporal or spatial range. More specifically, for example, the spatial sequence of actions may be defined by the positions of a series of jumps or turns that occur while the user is sliding down a slope on a snowboard (these positions can be, for example, relative to the slope). Also, for example, a spatial sequence of actions may be defined by the positions where a series of user actions occur between 10 am and 10:30 am (these positions can be, for example, absolute coordinates corresponding to latitude and longitude).
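- the two kinds of sequence can be sketched as a small data structure (an illustrative sketch; the field names and coordinate representation are assumptions, not part of this specification):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectedAction:
    kind: str                       # e.g. "jump" or "turn"
    timestamp: float                # time coordinate (time stamp)
    position: Tuple[float, float]   # spatial coordinate (e.g. course-relative)

def temporal_sequence(actions: List[DetectedAction]):
    """Occurrence order and intervals of a series of actions."""
    ordered = sorted(actions, key=lambda a: a.timestamp)
    intervals = [b.timestamp - a.timestamp
                 for a, b in zip(ordered, ordered[1:])]
    return [a.kind for a in ordered], intervals

def spatial_sequence(actions: List[DetectedAction]):
    """Occurrence positions of the same series of actions."""
    ordered = sorted(actions, key=lambda a: a.timestamp)
    return [a.position for a in ordered]
```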
- the service control unit 112 may define a temporal sequence and / or a spatial sequence for actions that have already been detected, and provide content that is developed according to them.
- the content may include, for example, content based on a series of jumps or turns that occur while the user slides down the slope on a snowboard.
- the content may include game content in which the development of a story or the like changes according to the time and place where a series of user actions that occurred during one day occurred.
- the content may further include other examples. Some of such examples will be described later.
- alternatively, the service control unit 112 may predict a temporal sequence and/or a spatial sequence based on, for example, the user's environmental state, and provide content that is developed in accordance with the predicted sequence and the detected actions.
- for example, taking a series of jumps and turns that occur while the user descends the slope on a snowboard as task actions, the content may designate the time or position at which each task action should be detected in the temporal or spatial sequence of actions.
- the content may include, for example, game content that designates a place or time at which a user's action is detected within a predetermined time in daily activities.
- the content may further include other examples. Some of such examples will be described later.
- FIG. 15 is a flowchart illustrating an example of processing for providing a virtual game course according to an embodiment of the present disclosure.
- game content is provided that specifies a position included in a spatial sequence of predicted user actions and a task action to be detected at that position.
- the service control unit 112 receives an input of a start trigger (S301).
- the start trigger may be, for example, a user operation, or may be that the user has reached a position in the real space that is set as the start point of the game course.
- the service control unit 112 recognizes the environmental state based on the sensor data received by the receiving unit 102, the image from the camera, and the like (S303).
- the environmental state may include, for example, the state of the course in which the user's action will be executed, the length, the width, the route to the goal, and obstacles existing in the course.
- the service control unit 112 predicts a spatial sequence of subsequent user actions according to the environmental state, and then sets a task action at one or a plurality of positions included in the spatial sequence (S305). A specific example of the task action will be described later. Further, the service control unit 112 designs a game course based on the environmental state recognized in S303 and the positions and task actions set in S305 (S307). The service control unit 112 virtually displays a virtual object representing the designed game course in the real space using a transmissive display such as an HMD (Head Mounted Display) worn by the user (S309), whereby the preparation of the game is completed (S311).
- HMD: Head Mounted Display
- the service control unit 112 starts a game (S313).
- the start timing of the game may be specified by a user operation or the start of an action by the user, or the game may be automatically started after completion of the game preparation.
- a message prompting to start the game or a message notifying that the game has started may be displayed together with the virtually displayed game course.
- during the game, the service control unit 112 updates the position of the user in the game course based on the sequentially updated spatial coordinates of the user, and determines whether or not the user has succeeded in the action designated at each position where a task action was set in S305 (S315). If the user succeeds in the task action, points are added (S317); if not, points are subtracted (S319). The service control unit 112 repeats this determination until the user reaches the goal of the game course (S321).
- the service control unit 112 calculates a total point based on the determination result in S315 (S323). In calculating the total points, the time required to reach the goal can be considered. The service control unit 112 presents the calculated total points to the user (S325). At this time, the service control unit 112 may present the user with a breakdown of points of individual task actions and points by elements other than the task actions together with the total points. Thus, the game ends (S327).
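- the point tally of S315-S323 can be sketched as follows (the point values and the time-bonus rule are illustrative assumptions, not values from this specification):

```python
def total_points(task_results, elapsed_s, per_task=100, par_s=60.0, per_sec=1):
    """Tally game points from task-action outcomes and elapsed time.

    task_results -- booleans: success/failure of each task action
    elapsed_s    -- time taken to reach the goal, in seconds
    """
    # add points for each successful task action (S317),
    # subtract for each failure (S319)
    points = sum(per_task if ok else -per_task for ok in task_results)
    # fold the time required to reach the goal into the total (S323)
    points += int((par_s - elapsed_s) * per_sec)
    return points
```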
- FIG. 16 is a diagram illustrating a first example of a virtual game course provided in an embodiment of the present disclosure.
- a game screen 1100 is displayed in a superimposed manner on a real space R including a snowboard course C.
- the game screen 1100 includes a game course 1101.
- in addition, objects that display the designated positions and the task actions to be detected at those positions, more specifically icons 1103 and text 1105, are superimposed on the real space R in the same manner as the game course 1101.
- the game screen 1100 may display a status 1107 including the current point and elapsed time.
- the service control unit 112 predicts a spatial sequence of user actions based on the environmental state of the user. More specifically, the service control unit 112 determines the length of the spatial sequence expressed as the game course 1101 based on the environmental state of the user at the start of the game content. For example, the service control unit 112 acquires an image around the user taken by a camera worn by the user or a camera installed in the course C, and recognizes the environmental state based on the image. In the above example, the service control unit 112 recognizes the state of the course C, such as the length and width of the course C, the route to the goal, and obstacles existing in the course C, as the user's environmental state, and sets the game course 1101 based on these. As a result, the game course 1101 is set along the course C while avoiding the obstacle B existing in the course C.
- AR: Augmented Reality
- for example, the user can wear a transmissive display such as an HMD and view the game screen 1100 transparently superimposed on the image of the real space. Alternatively, if the user's safety is ensured, for example because the course is closed to other users, the user may wear a shielded HMD or the like and play the game while watching a game screen 1100 that does not include the real space image. Further, with a shielded HMD, a smartphone, a tablet, or the like, an experience similar to the example shown in FIG. 16 may be possible by superimposing the game screen 1100 on the live view image of the camera. The same applies to the examples of FIGS. 18 to 22 described below.
- the icon 1103 includes an icon 1103a indicating a position where a turn task action is set and an icon 1103b indicating a position where a jump task action is set.
- the text 1105a indicates the details of the turn, for example, the rotation direction and the rotation angle of the turn.
- the text 1105a “R40°” indicates that a turn with a rotation angle of 40 degrees in the rightward (clockwise) direction is designated as the task action.
- the text 1105b indicating the details of the jump indicates, for example, that the task action is a jump and the height of the jump.
- a text 1105b “JUMP 1m” indicates that a jump with a height of 1 m is designated as a task action.
- icons indicating the rotation direction of the turn and the jump direction may be displayed together with the text 1105.
- FIGS. 17 and 18 are diagrams illustrating a second example of a virtual game course provided in an embodiment of the present disclosure.
- the game course is set in a place where a course is not necessarily set in the real space, more specifically, in an urban area.
- the service control unit 112 recognizes the user's environmental state based on map information. More specifically, based on the map information and the position information of the user at the start of the game content, the service control unit recognizes the length and width of the course that the user is expected to follow within a predetermined time, the route to the goal, the state of obstacles in the course, and the like, and based on these, sets a game course that includes points where task actions are specified.
- the game screen 1200 is superimposed on the real space R.
- a game course 1201 set along the road Rd in the real space R is displayed.
- since the road Rd is not necessarily used as a course in its entirety, it can be used as a course after points are arbitrarily extracted from it as shown in FIG.
- icons 1203 indicating designated positions and text 1205 indicating task actions that should be detected at those positions are superimposed and displayed in the real space R in the same manner as the game course 1201.
- the text 1205 may display different contents depending on whether the distance to the icon 1203 is far or near, as indicated by the text 1205a on the game screen 1200a and the text 1205b on the game screen 1200b.
- the text 1205a schematically indicates the existence of the task action
- the text 1205b indicates the details of the task action.
- a game course 1101 is set along a snowboard course C in real space. Since the user can slide down along the course C, the actual movement trajectory of the user is close to the game course 1101. Therefore, for example, a rule may be set such that points are deducted when the movement trajectory deviates from the game course 1101 beyond a predetermined range.
- the service control unit 112 may first determine the game course 1101 along the course C and then determine the position (indicated by the icon 1103) where the task action is set.
- alternatively, the service control unit 112 may first determine the positions for setting task actions, and then determine the game course 1101 along the course C so as to include those positions.
- the user sequentially passes the positions where task actions are set while continuously descending along the game course 1101. Therefore, it is possible to specify the timing for executing a task action, or to specify an interval between a plurality of task actions (for example, executing three or more actions at equal intervals).
- the game course 1201 is set in an urban area that is not necessarily a course in the real space.
- the service control unit 112 may determine the game course 1201 as a link connecting these positions after determining the position (indicated by the icon 1203) for setting the task action first.
- the game course 1201 is set as a link connecting the positions where task actions are set by the shortest distance, and the user may refer to the game course 1201 as a rough guide indicating the direction of the next destination, moving according to the actual shape of the road, traffic regulations, and the like.
- the game course 1201 may be set as a link connecting positions where task actions are set by a route that can move according to the shape of the road, traffic regulations, or the like. In this case, it may be set as a rule that the user moves along the game course 1201.
- the movement of the user on the game course 1201 and the execution of the task action at the designated position can occur discontinuously.
- for example, when jumping 50 times is specified by the text 1205b, the user interrupts the movement at the position indicated by the icon 1203 (one corner of the sidewalk), jumps 50 times, and then resumes moving toward the next position.
- FIG. 19 is a diagram illustrating a third example of a virtual game course provided in an embodiment of the present disclosure.
- as in the example shown in FIG. 16, the game course 1101 set along the course C in the real space R and the icon 1103 indicating the position where the task action is set are displayed on the game screen 1100 (not shown, but text 1105 indicating the details of the task action and a status 1107 may be displayed in the same manner).
- in addition, a game course 1301 of another user, an icon 1303, and an avatar 1309 are displayed on the game screen 1100. The avatar 1309 is an example of an object that displays the other user.
- the game course 1301 of the other user displayed on the first user's game screen 1100 is configured based on, for example, the environmental state of the second user who is in a different location from the first user.
- the game course 1301, the icon 1303, and the avatar 1309 displayed on the screen 1100 can be completely virtual. That is, the second user displayed by the avatar 1309 does not exist at the position of the real space R where the avatar 1309 is superimposed.
- the second user is provided with a screen similar to the game screen 1100 at a place different from the first user.
- the screen provided to the second user includes a display of a game course 1101 and an icon 1103 configured based on the environmental state for the second user.
- the screen provided to the second user includes the same display as the game course 1301, the icon 1303, and the avatar 1309 for the first user.
- the first user and the second user can battle each other on the game courses 1101 and 1301 that are virtually arranged in parallel.
- the game course 1101 and the game course 1301 may run parallel to or intersect each other at a short distance.
- the icon 1103 displayed on the game course 1101 and the icon 1303 displayed on the game course 1301 may indicate task actions that prompt the first user and the second user to compete or cooperate with each other. More specifically, a task action may be set such that points are added to the user who achieves the task action first, or such that points are added to both users when they succeed in the task action in synchronization (with a time difference less than a threshold).
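The synchronized-success rule mentioned above can be sketched as follows. The function name and point values are hypothetical; the disclosure only specifies that both users score when their task actions succeed with a time difference below a threshold, which is what this sketch implements (with a competitive first-achiever fallback).

```python
def score_synchronized(t_user1, t_user2, threshold, points=100):
    """Return (points_user1, points_user2) for a cooperative task action,
    given the detection times (seconds) of each user's task action."""
    if t_user1 is None or t_user2 is None:
        return (0, 0)  # one of the users did not perform the task action
    if abs(t_user1 - t_user2) < threshold:
        return (points, points)  # synchronized: both users score
    # otherwise only the earlier user scores (competitive fallback)
    return (points, 0) if t_user1 < t_user2 else (0, points)

result = score_synchronized(12.3, 12.6, threshold=0.5)  # within 0.5 s of each other
```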
- FIGS. 20 and 21 are diagrams illustrating a fourth example of a virtual game course provided in an embodiment of the present disclosure.
- the game screen 1200c is displayed for the first user and the game screen 1200d is displayed for the second user with respect to the first user and the second user who are in different locations.
- the game screens 1200c and 1200d include, for example, a game course 1201, icons 1203, and text 1205 displayed according to a course set in the urban area, similar to the example of the game screen 1200 described above with reference to FIGS. 17 and 18.
- in the game screen 1200c provided to the first user shown in FIG. 20, a game course 1201c, an icon 1203c, and a text 1205c are displayed. The icon 1203c and the text 1205c indicate a task action "get on the train with player B (the second user)".
- in the game screen 1200d provided to the second user shown in FIG. 21, a game course 1201d, an icon 1203d, and a text 1205d are displayed. The icon 1203d and the text 1205d indicate a task action "get on the train with player A (the first user)".
- such game screens 1200c and 1200d, provided to the first user and the second user respectively, also allow a plurality of users to play a game in cooperation with each other.
- the first user and the second user do not appear on each other's game screen 1200 until they meet at the designated station and get on the train together; when they do, the task action is achieved.
- the users may freely select their courses until they meet each other.
- the service control unit 112 may not present a completed game course 1201 to each user in advance; instead, the route that each user actually traveled until the task action was achieved may be recognized afterwards as the game course 1201.
- FIG. 22 is a diagram illustrating an example of a music rhythm game provided in an embodiment of the present disclosure.
- game content is provided that specifies a time included in a temporal sequence of predicted user actions and a task action to be detected at that time.
- the game screen 1400 is displayed superimposed on the real space R.
- on the game screen 1400, a rhythm score 1401, icons 1403 indicating the times when task actions are set, text 1405 indicating the details of the task actions, an approach display 1407 indicating that the time indicated by each icon 1403 is approaching moment by moment, and a status 1409 including the current points and elapsed time are displayed.
- a music rhythm game is provided that develops according to the user's actions while sliding down on a snowboard.
- the music associated with the temporal sequence of the user's actions is also provided to the user via headphones or speakers.
- the service control unit 112 predicts a temporal sequence of user actions, and selects a piece of music associated with the temporal sequence. Note that the music may be selected according to a user operation, and the music may be edited in accordance with the temporal sequence predicted by the service control unit 112.
- the service control unit 112 predicts a temporal sequence of user actions based on the environmental state of the user. More specifically, for example, the service control unit 112 determines the length of the temporal sequence in which the game content is developed based on the user's environmental state at the start of the game content. In this case, the service control unit 112 may acquire an image around the user taken by a camera worn by the user or a camera installed on the course C, and recognize the environmental state based on the image.
- the service control unit 112 recognizes the state of the course C, such as its length, its width, and the route to the goal, as the user's environmental state, and based on these determines the length of the temporal sequence over which the game content is developed, that is, the length of the music provided corresponding to the temporal sequence.
- the service control unit 112 may select a tempo or a tune according to the width of the course, the presence or absence of an obstacle, and the like.
- the service control unit 112 may set the task action in accordance with the content of the selected music, more specifically, the tempo and melody.
- the service control unit 112 estimates the length of the course C based on the position information of the user, action information of other users who have already slid down the course C, and the like. Further, the service control unit 112 edits the music according to the length of the course C and determines the times for setting task actions. After that, when the user starts downhill, the game starts, the music starts to be played, and the rhythm score 1401 and the icons 1403 start to move. More specifically, the icon 1403 and the text 1405 approach along the approach display 1407 toward the feet of the user sliding down the course C in the real space R. Note that the icon 1403 and the text 1405 are not necessarily associated with positions within the course C.
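The music-fitting step described above might look like the following sketch: estimate the play duration from the course length and an expected downhill speed, then place the task-action times within that duration. The function name, the even spacing, and all numeric values are illustrative assumptions, not the disclosed algorithm.

```python
def plan_rhythm_game(course_length_m, avg_speed_mps, n_tasks):
    """Edit the music to the expected downhill duration and spread the
    task-action playback times evenly through it (skipping start and end)."""
    duration_s = course_length_m / avg_speed_mps  # music is edited to this length
    step = duration_s / (n_tasks + 1)
    task_times = [round(step * (i + 1), 2) for i in range(n_tasks)]
    return duration_s, task_times

# a hypothetical 600 m course at an average 10 m/s with five task actions
duration, times = plan_rhythm_game(600.0, 10.0, n_tasks=5)
```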
- the icon 1403 and the text 1405 flow toward the user at a constant speed, for example, so as to reach the user's feet when the music reaches the playback time associated with the task action.
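The constant-speed icon motion can be expressed as a simple distance function, sketched below under assumed lead-time and lead-distance values (both hypothetical): the icon appears a few seconds ahead of its task time and flows toward the user so that it reaches the user's feet exactly when playback reaches that time.

```python
def icon_distance(task_time, playback_time, lead_time=3.0, lead_distance=30.0):
    """Distance (m) of the icon from the user's feet: icons appear
    `lead_time` seconds before their task time and approach at constant
    speed, arriving at distance 0 exactly at the task time."""
    remaining = task_time - playback_time
    if remaining <= 0.0:
        return 0.0  # the icon has reached the user
    speed = lead_distance / lead_time
    return min(lead_distance, remaining * speed)
```

For a task action set at 10 s of playback, the icon sits 30 m away until 7 s, then closes at 10 m/s and arrives at the user's feet at 10 s.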
- the task action can be, for example, a turn or a jump.
- as a task action, for example, a technique determined in accordance with the melody or character of the music may be specified by the icon 1403, the text 1405, or the like.
- the user may slide down the course freely.
- the user should execute the task action (a left turn or a right turn in the illustrated example) when the icon 1403 and the text 1405 reach his or her feet, more specifically, in time with the music being played during the downhill run.
- the service control unit 112 may recognize, for example, the width of the course C as the environmental state, and dynamically change the task action so that the user can correct the course on which the user slides down as necessary.
- when the task action is executed at a time close to the specified time (music playback time), a high point is added; when the task action is executed with a large deviation from the specified time, or when a different action is performed, points may be subtracted.
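A possible realization of this timing-based scoring, with assumed windows and point values (none of which are specified in the disclosure):

```python
def timing_points(specified_time, detected_time, action_ok,
                  full=100, good_window=0.2, max_window=1.0, penalty=-50):
    """Points for a task action detected at `detected_time` against the
    specified music playback time; `action_ok` is False when a different
    action was performed."""
    if not action_ok:
        return penalty  # a different action was performed
    deviation = abs(detected_time - specified_time)
    if deviation <= good_window:
        return full
    if deviation >= max_window:
        return penalty  # executed with too large a deviation
    # linear falloff between the good window and the maximum window
    frac = (max_window - deviation) / (max_window - good_window)
    return int(full * frac)
```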
- the service control unit 112 may provide a music rhythm game that is developed in accordance with a user's action during jogging.
- the service control unit 112 edits the music according to the scheduled duration and determines the time for setting the task action.
- the scheduled jogging duration (one of the user's environmental conditions) may be recognized based on, for example, a schedule input in advance by the user.
- the game starts, the reproduction of the music starts, and the rhythm score 1401 and the icons 1403 start to move. More specifically, the icon 1403 and the text 1405 approach along the approach display 1407 in time with the steps of the jogging user.
- a single game may be composed of a plurality of music pieces having different tempos, and a high point may be given when the user can jog while performing a task action in accordance with the tempo of the music that changes.
- the service control unit 112 may provide content in which a story develops according to user actions.
- for example, the service control unit 112 sets a task action for the user in the user's daily life, and may develop the story when the task action is executed in accordance with a designated temporal or spatial sequence. For example, when an event of jogging with a character appearing as a friend or lover of the user is generated and a jogging action is detected continuously for a predetermined time or more, the character's favorability toward the user may increase. Also, for example, when an event occurs in which the user is to pick up a character at a designated place by a certain time, and it is detected that the user has actually arrived at the designated place at the designated time, the favorability may likewise increase.
- the service control unit 112 may generate an event corresponding to a temporal or spatial sequence of actions indicated by a user action detection result. More specifically, for example, when the user has repeatedly been late, the service control unit 112 may generate a wake-up call event by a character with high favorability on the next morning.
- the service control unit 112 may extract a habitual pattern configured by a temporal or spatial sequence of actions, such as commuting and meals, from the user's past action detection history, and generate different events depending on whether a newly detected user action matches or differs from the pattern. For example, if it is detected that the user is commuting on a bus at a different time (for example, one bus later), an unusual character may appear in the game.
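The habitual-pattern idea could be sketched as below: derive the user's usual commute hour from past detections and trigger a different in-game event when a new detection deviates from it. The data shapes and event names are assumptions for illustration.

```python
from collections import Counter

def usual_hour(history_hours):
    """Most frequent hour of day at which the commute action was detected."""
    return Counter(history_hours).most_common(1)[0][0]

def commute_event(history_hours, detected_hour):
    """Generate a different event when the new detection breaks the pattern."""
    if detected_hour == usual_hour(history_hours):
        return "regular_event"
    return "rare_character_appears"  # an unusual character shows up

history = [8, 8, 8, 9, 8]  # past commute detections (hour of day)
event = commute_event(history, 9)  # today's commute is one hour later
```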
- the service control unit 112 may estimate a user attribute from the temporal or spatial sequence of actions indicated by the user's past action detection history, and select the attributes of characters appearing in the game based on the estimated attribute.
- similarly, the virtual personality of an agent program that provides a service on the user's terminal device or the like may be selected according to an attribute estimated from the temporal or spatial sequence of actions indicated by the user's past action detection history.
- the service control unit 112 may determine the content of a conversation between the user and a character or agent in the game based on information related to previously detected actions. For example, if an action such as a jump or turn occurring during a snowboard downhill run is detected for the first time in a year, the character or agent may talk about other actions detected when the user snowboarded a year ago, for example, that the travel by car took a long time or that there were many falls during the downhill run.
- the service control unit 112 may reflect the time and place related to the temporal or spatial sequence included in the action detection result in expressions in the game. For example, when it is estimated from the action detection result that the user is on a train, the stage setting in the game may be set on a train. When the route the user is riding is identified, an announcement of the next station in the game may be provided according to the actual user location. At this time, the setting in the game may be matched with the actual detection result for the time of day (morning, evening, night, etc.). Furthermore, the personality and appearance of characters appearing in the game may be changed according to the user's location. For example, if the user is in a city where people of a specific age group and social class gather, characters whose personality and appearance match the people in that city may appear in the game.
- the service control unit 112 may provide content that uses a user action detection result as a collection.
- here, “use as a collection” means, for example, that the detection results are treated as having tangible or intangible value and are collected or exchanged. More specifically, the service control unit 112 gives points to each user based on the user's action detection results; the users may collect the points and exchange them for valuables of some kind (for example, a physical item, or a virtual item such as an avatar usable on social media), or compete over the number of points held.
- the service control unit 112 may give the user points according to an action at different grant rates depending on the temporal or spatial sequence of the detected action. For example, when points are given for a movement action, for the same movement distance, higher points may be given to an action of moving by running (where the temporal sequence corresponding to the movement is shorter) than to an action of moving by walking (where the temporal sequence is longer). Further, for example, when points are given for a jump while snowboarding, higher points may be given to a jump with a large rotation angle (where the spatial sequence corresponding to the jump is large).
- the point grant rate may be further adjusted by a combination with context information such as time and place.
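As an illustration of sequence-dependent grant rates combined with context, the following hypothetical function awards higher points for the same distance when the temporal sequence is shorter (running rather than walking), with a contextual bonus. All rates, thresholds, and the context label are assumed values, not from the disclosure.

```python
def grant_points(distance_km, duration_min, context=None):
    """Award movement points: running (a shorter temporal sequence for the
    same distance) earns a higher grant rate than walking, and context
    information (time/place) can further adjust the rate."""
    pace = duration_min / distance_km       # minutes per km
    rate = 2.0 if pace < 8.0 else 1.0       # running vs. walking rate
    if context == "event":                  # hypothetical contextual bonus
        rate *= 1.5
    return int(distance_km * 10 * rate)

running = grant_points(5.0, 30.0)   # 5 km in 30 min
walking = grant_points(5.0, 60.0)   # the same distance in 60 min
```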
- a task action may be defined to include a temporal or spatial sequence, and the service control unit 112 may give points for task actions achieved by a team of a plurality of users in cooperation. More specifically, for example, suppose a task action of "consuming 5000 kcal or more in a week by jogging as a team of five" (in this case, "a week" is the temporal sequence) is given; if it is determined from the actions detected for each user that the calories consumed by the five users' jogging exceed 5000 kcal in total, points may be given to each of the five users.
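A minimal sketch of judging such a team task (the data shapes, goal, and point values are assumptions): sum the calories estimated from each member's detected jogging actions over the week and award points to every member if the total reaches the goal.

```python
def judge_team_task(member_calories, goal_kcal=5000, points=200):
    """member_calories: {user_id: kcal burned jogging during the week}.
    Returns the points awarded to each member: everyone scores if the
    team total reaches the goal, otherwise no one does."""
    total = sum(member_calories.values())
    if total >= goal_kcal:
        return {user: points for user in member_calories}
    return {user: 0 for user in member_calories}

team = {"A": 1200, "B": 900, "C": 1100, "D": 1000, "E": 950}  # total 5150 kcal
awards = judge_team_task(team)
```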
- for actions that can be competed between users, such as the number of steps taken in a day or the height of a jump while snowboarding, a user may bet points on a score calculated according to the temporal or spatial sequence of the action. More specifically, for example, a user bets points by predicting which of the other users will take first place in the action detection results. After a day, or after a predetermined number of downhill runs, of action detection, points may be awarded according to the odds to users whose predictions came true.
- such a bet may be performed among a plurality of users performing actions at the same place, for example, or many users may be able to participate through social media or the like.
- FIG. 23 is a block diagram illustrating a hardware configuration example of the information processing apparatus according to the embodiment of the present disclosure.
- the information processing apparatus 900 includes a CPU (Central Processing unit) 901, a ROM (Read Only Memory) 903, and a RAM (Random Access Memory) 905.
- the information processing apparatus 900 may include a host bus 907, a bridge 909, an external bus 911, an interface 913, an input device 915, an output device 917, a storage device 919, a drive 921, a connection port 923, and a communication device 925.
- the information processing apparatus 900 may include an imaging device 933 and a sensor 935 as necessary.
- the information processing apparatus 900 may include a processing circuit such as a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array) instead of or in addition to the CPU 901.
- the CPU 901 functions as an arithmetic processing device and a control device, and controls all or a part of the operation in the information processing device 900 according to various programs recorded in the ROM 903, the RAM 905, the storage device 919, or the removable recording medium 927.
- the ROM 903 stores programs and calculation parameters used by the CPU 901.
- the RAM 905 primarily stores programs used in the execution of the CPU 901, parameters that change as appropriate during the execution, and the like.
- the CPU 901, the ROM 903, and the RAM 905 are connected to each other by a host bus 907 configured by an internal bus such as a CPU bus. Further, the host bus 907 is connected to an external bus 911 such as a PCI (Peripheral Component Interconnect / Interface) bus via a bridge 909.
- the input device 915 is a device operated by the user, such as a mouse, a keyboard, a touch panel, a button, a switch, and a lever.
- the input device 915 may be, for example, a remote control device that uses infrared rays or other radio waves, or may be an external connection device 929 such as a mobile phone that supports the operation of the information processing device 900.
- the input device 915 includes an input control circuit that generates an input signal based on information input by the user and outputs the input signal to the CPU 901. The user operates the input device 915 to input various data and instruct processing operations to the information processing device 900.
- the output device 917 is configured by a device capable of notifying the acquired information to the user using a sense such as vision, hearing, or touch.
- the output device 917 can be, for example, a display device such as an LCD (Liquid Crystal Display) or an organic EL (Electro-Luminescence) display, an audio output device such as a speaker or headphones, or a vibrator.
- the output device 917 outputs the result obtained by the processing of the information processing device 900 as video such as text or images, audio such as voice or sounds, or vibration.
- the storage device 919 is a data storage device configured as an example of a storage unit of the information processing device 900.
- the storage device 919 includes, for example, a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, or a magneto-optical storage device.
- the storage device 919 stores, for example, programs executed by the CPU 901 and various data, and various data acquired from the outside.
- the drive 921 is a reader / writer for a removable recording medium 927 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and is built in or externally attached to the information processing apparatus 900.
- the drive 921 reads information recorded on the attached removable recording medium 927 and outputs the information to the RAM 905.
- the drive 921 writes a record in the attached removable recording medium 927.
- the connection port 923 is a port for connecting a device to the information processing apparatus 900.
- the connection port 923 can be, for example, a USB (Universal Serial Bus) port, an IEEE 1394 port, a SCSI (Small Computer System Interface) port, or the like.
- the connection port 923 may be an RS-232C port, an optical audio terminal, an HDMI (registered trademark) (High-Definition Multimedia Interface) port, or the like.
- the communication device 925 is a communication interface configured with, for example, a communication device for connecting to the communication network 931.
- the communication device 925 can be, for example, a communication card for a wired or wireless LAN (Local Area Network), Bluetooth (registered trademark), NFC (Near Field Communication), or WUSB (Wireless USB).
- the communication device 925 may be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), or a modem for various communication.
- the communication device 925 transmits and receives signals and the like to and from the Internet and other communication devices using a predetermined protocol such as TCP/IP, for example.
- the communication network 931 connected to the communication device 925 is a network connected by wire or wireless, and may include, for example, the Internet, a home LAN, infrared communication, radio wave communication, satellite communication, or the like.
- the imaging device 933 is a device that captures an image of real space and generates a captured image, using various members such as an imaging element, for example a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge Coupled Device) sensor, and a lens for controlling the formation of a subject image on the imaging element.
- the imaging device 933 may capture a still image or may capture a moving image.
- the sensor 935 is various sensors such as an acceleration sensor, an angular velocity sensor, a geomagnetic sensor, an illuminance sensor, a temperature sensor, an atmospheric pressure sensor, a pressure sensor, a distance sensor, or a sound sensor (microphone).
- the sensor 935 acquires, for example, information about the state of the information processing apparatus 900 itself, such as the posture of the information processing apparatus 900, and information about the surrounding environment of the information processing apparatus 900, such as the brightness and noise around the information processing apparatus 900.
- the sensor 935 may also include a GNSS receiver that receives a GNSS (Global Navigation Satellite System) signal and measures the latitude, longitude, and altitude of the device.
- Each component described above may be configured using a general-purpose member, or may be configured by hardware specialized for the function of each component. Such a configuration can be appropriately changed according to the technical level at the time of implementation.
- the embodiments of the present disclosure may include, for example, an information processing apparatus, a system, an information processing method executed by the information processing apparatus or system, a program for causing the information processing apparatus to function, and a non-transitory tangible medium on which the program is recorded.
- An information processing apparatus including: an information acquisition unit that acquires action information indicating a detected user action; and a content providing unit that provides content that develops according to a temporal or spatial sequence of the actions.
- the content providing unit provides game content that specifies a position included in the spatial sequence and a task action to be detected at the position.
- a game screen including an object that displays the position and the task action is presented to the user while being superimposed on a real space where the action is generated.
- the game screen includes an object that displays the position for the first user, an object that displays the position for a second user different from the first user, and an object that displays the second user; the information processing apparatus according to (3).
- the information processing apparatus recognizes the environmental state based on position information and map information of the user.
- the content providing unit provides game content that specifies a time included in the temporal sequence and a task action to be detected at the time; the information processing apparatus according to any one of (1) to (8).
- the information processing apparatus according to (9), wherein the content providing unit presents a game screen including an object that displays the time and the task action to the user while superimposing it on a real space where the action occurs.
- (11) The information processing apparatus according to (9) or (10), wherein the content providing unit predicts the temporal sequence based on an environmental state of the user.
- the information processing apparatus according to any one of (1) to (15), wherein the content providing unit provides content in which an attribute of a character or a virtual personality is determined according to a temporal or spatial sequence of the actions.
- the information processing apparatus according to any one of (1) to (16), wherein the content providing unit provides content whose stage setting is determined according to a temporal or spatial sequence of the actions.
- the information processing apparatus according to item 1. (19) An information processing method including: acquiring action information indicating a detected user action; and providing, by a processor, content that develops according to a temporal or spatial sequence of the actions. (20) A program for causing a computer to realize: a function of acquiring action information indicating a detected user action; and a function of providing content that develops according to a temporal or spatial sequence of the actions.
Abstract
Description
1. Functional configuration of the information processing apparatus
2. Examples of action detection processing
2-1. Jump detection - 1
2-2. Jump detection - 2
2-3. Turn detection
3. Examples of additional processing
3-1. Calculation of action scores
3-2. Clustering processing
3-3. Estimation of the sensor mounting state
4. Examples of content provision
4-1. Provision of a virtual game course
4-2. Provision of a music rhythm game
4-3. Provision of content in which a story develops according to actions
4-4. Provision of collection content
5. Hardware configuration
6. Supplement
FIG. 1 is a block diagram illustrating a schematic functional configuration of an information processing apparatus according to an embodiment of the present disclosure. Referring to FIG. 1, the information processing apparatus 100 includes a transmission unit 101, a reception unit 102, a sensor device control unit 103, a sensor data analysis unit 104, an analysis result processing unit 107, a detection section information holding unit 110, an additional information holding unit 111, and a service control unit 112.
In the following, examples of the action detection processing executed in an embodiment of the present disclosure will be described. In these examples, jumps and turns that occur while the user is snowboarding are detected. For example, in the case of snowboarding, a sensor device including an acceleration sensor, an angular velocity sensor, and the like may be worn directly by the user, by being embedded in clothing or incorporated into a wearable terminal device or a mobile terminal device. Alternatively, the sensor device may be mounted on snowboarding equipment, for example, the board.
FIG. 2 is a flowchart illustrating a first example of processing for detecting a jump included in a user's action in an embodiment of the present disclosure. The illustrated processing is executed, for example, in the sensor data analysis unit 104 included in the information processing apparatus 100 described above.
FIG. 6 is a flowchart illustrating a second example of processing for detecting a jump included in a user's action in an embodiment of the present disclosure. As in the first example above, the illustrated processing is executed, for example, in the sensor data analysis unit 104 included in the information processing apparatus 100.
FIG. 10 is a flowchart illustrating an example of processing for detecting a turn section included in a user's action in an embodiment of the present disclosure. The illustrated processing is executed, for example, in the sensor data analysis unit 104 included in the information processing apparatus 100 described above. In the following processing, the sensor data analysis unit 104 detects rotation included in the user's action (S210), further detects non-turn rotation included in that rotation (S230), and detects turns from the rotation that remains after the non-turn rotation is removed (S250).
Here, the non-turn rotation includes, for example, rotation caused by the user shaking his or her head when the sensor is attached to the user's head or to equipment worn on the head. The non-turn rotation may also include rotation caused by the user's body movement, more specifically rotation caused by the user swinging or rotating an arm when the sensor is attached to the user's arm or to equipment worn on the arm.
In the present embodiment, the sensor data analysis unit 104 detects turn sections after removing such non-turn rotation, which enables more accurate detection of turn sections. In that sense, the non-turn rotation can be regarded as noise with respect to the turns to be detected; in the present embodiment, the sensor data analysis unit 104 can be said to detect rotation included in the user's action, detect noise included in that rotation, and detect turns from the rotation that remains after the noise is removed.
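The three steps of FIG. 10 (S210, S230, S250) can be illustrated with a minimal sketch. The function name, thresholds, and the crude model of non-turn rotation (treating only very large instantaneous angular velocity as noise) are illustrative assumptions; the actual detection logic is defined by the flowchart.

```python
def detect_turn_sections(gyro_samples, rot_thresh=0.5, noise_thresh=5.0):
    """Sketch of the FIG. 10 pipeline: detect rotation (S210), flag
    non-turn rotation such as head shakes (S230), and keep what remains
    as turns (S250). Thresholds are illustrative, not from the disclosure."""
    # S210: samples whose angular velocity magnitude indicates rotation
    rotating = [abs(w) > rot_thresh for w in gyro_samples]
    # S230: non-turn rotation, crudely modeled as abnormally large
    # instantaneous angular velocity (e.g. a quick head shake)
    noise = [abs(w) > noise_thresh for w in gyro_samples]
    # S250: turns = rotation with the noise removed
    return [r and not n for r, n in zip(rotating, noise)]
```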
Several examples of action detection processing executed in an embodiment of the present disclosure have been described above. As already explained, the action detection processing executed in the present embodiment is not limited to jumps and turns occurring in snowboarding; it may also be executed for jumps and turns occurring in sports other than snowboarding, or in scenes other than sports. Actions other than jumps and turns may also be detected. As an example, the action detection unit 106 may detect a fall occurring during snowboarding or the like. In this case, the feature amount extraction unit 105 calculates the norm of the acceleration in the same manner as in the jump and turn detection described above, and the action detection unit 106 may detect the occurrence of a fall when the norm of the acceleration exceeds a threshold (which may be a value large enough not to occur during normal downhill riding).
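The fall-detection rule just described reduces to a norm-and-threshold check. The threshold value below is an illustrative assumption, since the disclosure only says it should exceed what normal riding produces:

```python
import math

def detect_fall(acc_xyz, threshold=60.0):
    """Sketch: report a fall when the acceleration norm exceeds a
    threshold large enough not to occur during normal riding.
    The threshold (m/s^2 here) is an illustrative assumption."""
    norm = math.sqrt(sum(a * a for a in acc_xyz))
    return norm > threshold
```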
(3-1. Calculation of action scores)
For example, the scoring processing unit 109 included in the analysis result processing unit 107 calculates, for an action section including a jump section and/or a turn section detected by the processing described above with reference to FIGS. 2 to 13, a score that evaluates the action that occurred (an action score). The action score can be calculated, for example, by extracting physical quantities (feature amounts) representing the quality and characteristics of the action from the sensor data in the action section and adding them together with weights. The service control unit 112 generates information about the action (for example, a jump or a turn) based on the score calculated in this way.
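The weighted addition of feature amounts can be sketched as follows. The feature names and weight values are hypothetical, chosen only to make the example concrete:

```python
def action_score(features, weights):
    """Weighted sum of feature amounts extracted from the sensor data
    of an action section. Feature names and weights are illustrative."""
    return sum(weights[name] * value for name, value in features.items())

# A jump might, for instance, be scored on airtime and rotation angle
# (hypothetical features; the disclosure does not fix a feature set).
jump_weights = {"airtime_s": 10.0, "rotation_deg": 0.1}
```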
Also, for example, the clustering processing unit 108 included in the analysis result processing unit 107 applies a clustering algorithm such as the k-means method, using the feature amounts extracted for scoring, to the action sections including jump sections and/or turn sections detected by the processing described above with reference to FIGS. 2 to 13, and classifies the detected actions into clusters. In the case of jump sections and turn sections, actions may be classified into clusters according to, for example, the length of the section duration or the magnitude of rotation. The clustering results are used, for example, when a digest video is provided as a service, to extract action sections so that various kinds of actions such as jumps and turns are included in the video. Classifying good actions and poor actions into separate clusters may also let the user review his or her own actions, or be useful for coaching aimed at improving them.
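A minimal k-means over per-action feature vectors (for example, section duration and rotation magnitude) might look like the following. This is a generic sketch of the algorithm the text names, not the embodiment's implementation; the deterministic initialization is an assumption for reproducibility:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means over per-action feature vectors, e.g.
    [section duration, rotation magnitude] for detected jumps/turns.
    Deterministic init (first k points) keeps the sketch reproducible."""
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)

    def dist2(p, c):
        return sum((pi - ci) ** 2 for pi, ci in zip(p, c))

    for _ in range(iters):
        # assignment step: nearest centroid per point
        assign = [min(range(k), key=lambda j: dist2(p, centroids[j]))
                  for p in points]
        # update step: centroid = mean of its members
        for j in range(k):
            members = [p for p, a in zip(points, assign) if a == j]
            if members:
                centroids[j] = [sum(col) / len(members)
                                for col in zip(*members)]
    return assign
```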
FIG. 14 is a block diagram illustrating an example of processing for estimating the sensor attachment state in an embodiment of the present disclosure. More specifically, the illustrated configuration determines whether the sensor providing the sensor data is worn directly on the user's body or mounted on equipment used by the user. The illustrated processing is executed, for example, in the sensor data analysis unit 104 included in the information processing apparatus 100 described above. Note that, although the cutoff frequency (Fc) of the filters and the lengths of the time frames are described concretely in the illustrated example, these values are merely examples and may be changed as appropriate according to the characteristics of the actual sensor.
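The concrete filter parameters of FIG. 14 are not reproduced in this text, so the following is only a plausible sketch under an assumption: an equipment-mounted sensor (e.g. on the board) tends to pick up more high-frequency vibration than a body-worn one, which the body damps. The filter design, Fc, and threshold below are all illustrative:

```python
import math

def estimate_attachment(samples, dt, fc=10.0, ratio_thresh=0.3):
    """Crude sketch (assumed approach, not the disclosed block diagram):
    high-pass filter the acceleration in a time frame and compare
    high-frequency energy to total energy. A large ratio suggests
    mounting on equipment; a small one, direct wear on the body."""
    rc = 1.0 / (2.0 * math.pi * fc)          # first-order HPF constant
    alpha = rc / (rc + dt)
    hp, prev_x, prev_y = [], samples[0], 0.0
    for x in samples[1:]:
        prev_y = alpha * (prev_y + x - prev_x)  # simple digital high-pass
        prev_x = x
        hp.append(prev_y)
    total = sum(x * x for x in samples[1:]) or 1.0
    high = sum(y * y for y in hp)
    return "equipment" if high / total > ratio_thresh else "body"
```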
Here, referring again to FIG. 1, the configuration of the information processing apparatus according to the present embodiment relating to content provision will be described. In the present embodiment, the information processing apparatus 100 implements an information acquisition unit that acquires action information indicating a detected user action, and a content providing unit that provides content that unfolds according to a temporal or spatial sequence of the actions.
FIG. 15 is a flowchart illustrating an example of processing for providing a virtual game course in an embodiment of the present disclosure. In the illustrated example, game content is provided that designates positions included in a predicted spatial sequence of the user's actions and task actions to be detected at those positions.
In the first example, a game course 1101 is set along a snowboarding course C in real space. Since the user can ride down along the course C, the user's actual movement trajectory stays close to the game course 1101. Accordingly, a rule may be set such that, for example, points are deducted when the movement trajectory deviates from the game course 1101 beyond a predetermined range. In the first example, the service control unit 112 may first determine the game course 1101 along the course C and then determine the positions (indicated by icons 1103) at which task actions are set. Alternatively, when there are a plurality of ridable courses C, the service control unit 112 may first determine the positions at which task actions are set, and then determine the game course 1101 along a course C that includes those positions.
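The deduction rule above amounts to a distance check between the trajectory and the course. The sketch below simplifies the course to sample points (a real implementation would measure distance to the course polyline) and the 10 m range is an assumed value:

```python
def deviates(track_points, course_points, max_dist=10.0):
    """Sketch of the deduction rule: a trajectory point deviates when
    it is farther than max_dist (illustrative, meters) from every
    sampled course point."""
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return any(min(d2(t, c) for c in course_points) > max_dist ** 2
               for t in track_points)
```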
Also, in the first example, since the user passes the positions at which task actions are set one after another while riding continuously down the game course 1101, it is also possible to designate the timing at which a task action is to be executed, or the intervals at which a plurality of task actions are to be executed (for example, executing three or more actions at equal intervals).
In the second example, on the other hand, a game course 1201 is set in an urban area or the like that is not necessarily a course in real space. In such a case, the service control unit 112 may first determine the positions at which task actions are set (indicated by icons 1203) and then determine the game course 1201 as links connecting those positions. In this case, for example, the game course 1201 may be set as links connecting the positions at which task actions are set by the shortest distance, and the user may move in accordance with the actual shape of the roads, traffic regulations, and so on, referring to the game course 1201 as a rough guide indicating the direction of the next destination. Alternatively, the game course 1201 may be set as links connecting the positions at which task actions are set by routes that can actually be traveled given the shape of the roads, traffic regulations, and so on. In this case, a rule may be set that the user moves along the game course 1201.
Also, in the second example, the user's movement along the game course 1201 and the execution of task actions at the designated positions can occur discontinuously. For example, on the game screen 1200b shown in FIG. 18, text 1205b designates jumping 50 times. In this case, the user interrupts the movement at the position indicated by the icon 1203 (a corner of the sidewalk), jumps 50 times, and then resumes moving toward the next position. Taking advantage of this discontinuity of execution, board-game-like mechanisms may be added to the game course 1201. For example, the task action to execute may be selected using dice or a roulette (when a 6 is rolled, the task action designated at the position six steps ahead is to be executed, and the positions designated along the way need only be passed through, or need not be passed through at all; alternatively, different task actions may be set at the same position depending on the roll of the dice or the roulette). Or the next position and its task action may be revealed only when the task action designated at the current position has been executed.
FIG. 22 is a diagram illustrating an example of a music rhythm game provided in an embodiment of the present disclosure. In the illustrated example, game content is provided that designates times included in a predicted temporal sequence of the user's actions and task actions to be detected at those times.
In an embodiment of the present disclosure, the service control unit 112 may provide content whose story unfolds according to the user's actions. Several examples of such content are described below.
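Judging whether a task action was detected at its designated time reduces to matching detections against the chart within a tolerance window. The chart format, window width, and scoring are illustrative assumptions:

```python
def judge(detected, chart, window=0.5):
    """Sketch: for each (time, action) in the chart, the task action
    counts as a hit if the same action was detected within +/- window
    seconds. 'detected' holds (action, time) pairs; all values are
    illustrative assumptions."""
    hits = 0
    for t, action in chart:
        if any(abs(dt - t) <= window for a, dt in detected if a == action):
            hits += 1
    return hits
```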
For example, the service control unit 112 may set task actions for the user in the user's daily life, and the in-game story may unfold as the user executes the task actions along a designated temporal or spatial sequence. For example, an event of jogging together with a character appearing as the user's friend or romantic partner may be generated, and the character's affinity toward the user may increase when a jogging action is detected continuously for a predetermined time or longer. Also, for example, an event of picking up the character at a designated place by a certain time may be generated, and the character's affinity may increase when it is detected that the user actually arrived at the designated place at the designated time.
For example, the service control unit 112 may generate events according to the temporal or spatial sequence of actions indicated by the user's action detection results. More specifically, when the user has repeatedly come home late, the service control unit 112 may generate a wake-up call event the next morning by a character for which the user's affinity is high.
For example, the service control unit 112 may extract, from the user's past action detection history, habitual patterns constituted by temporal or spatial sequences of actions such as commuting and meals, and generate different events depending on whether a newly detected user action matches the pattern or differs from it. For example, when it is detected that the user is commuting on a bus at a different time than usual (for example, one bus later), a character different from the usual one may appear in the game.
For example, the service control unit 112 may estimate the user's attributes from the temporal or spatial sequence of actions indicated by the user's past action detection history, and select the attributes of a character appearing in the game based on the estimated attributes. In an example other than a game, the virtual personality of an agent program that provides services on the user's terminal device or the like may be selected according to attributes estimated from the temporal or spatial sequence of actions indicated by the user's past action detection history.
For example, when an action having a temporal or spatial sequence similar to an action included in the user's past action detection history is detected, the service control unit 112 may determine the content of a conversation between the user and an in-game character or agent based on information related to the previously detected action. For example, when actions such as jumps and turns occurring in snowboarding are detected for the first time in a year, the character or agent may bring up other actions detected during the snowboarding trip a year earlier, for example that the drive there took a long time, or that there were many falls during the runs.
For example, the service control unit 112 may reflect the time and place related to the temporal or spatial sequence included in the action detection results in the in-game representation. For example, when the action detection results suggest that the user is riding a train, the in-game stage may be set inside a train. When the line the user is riding can be identified, an in-game announcement of the next station may be provided according to the user's actual location. At this time, the in-game time of day (morning, evening, night, and so on) may also be matched to the actual detection results. Furthermore, the personality and appearance of characters appearing in the game may be changed according to the place where the user is. For example, when the user is in a town where people of a characteristic age group or social group gather, characters with personalities and appearances matched to the people common in that town may appear in the game.
In an embodiment of the present disclosure, the service control unit 112 may provide content that uses the user's action detection results as a collection. Here, using results as a collection means, for example, treating them as objects of collection and exchange that have tangible or intangible value. More specifically, the service control unit 112 may award points to each user based on the user's action detection results, and users may collect the points and exchange them for some article of value (which may be a physical item, or a virtual item such as an avatar usable on social media), or compete over the number of points held. Several examples of such content are described below.
For example, the service control unit 112 may award points for an action to the user at different award rates according to the temporal or spatial sequence of the action to be detected. For example, when awarding points for a movement action over the same distance, more points may be awarded for moving by running (a shorter temporal sequence for the movement) than for moving by walking (a longer temporal sequence). Also, for example, when awarding points for jumps during a snowboarding run, more points may be awarded for a jump with a larger rotation angle (a larger spatial sequence for the jump). The point award rate may be further adjusted in combination with context information such as time and place.
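The running-versus-walking rate example can be sketched as follows. The pace boundary and the rates are hypothetical numbers, since the disclosure only states that the shorter temporal sequence earns more:

```python
def movement_points(distance_km, duration_min):
    """Sketch: the same distance earns more points when the temporal
    sequence is shorter (running beats walking). The 7 km/h boundary
    and the rates are illustrative assumptions."""
    pace = distance_km / (duration_min / 60.0)  # average speed in km/h
    rate = 20 if pace >= 7.0 else 10            # running vs. walking rate
    return int(distance_km * rate)
```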
For example, the service control unit 112 may award points for an action that is defined to include a temporal or spatial sequence of actions and is achieved cooperatively by a team of a plurality of users. More specifically, a task action such as "a team of five burns 5,000 kcal or more by jogging in one week" (here, the "in one week" part corresponds to the temporal sequence) may be given, and when it is determined from the actions detected for each user that the calories burned by the five users' jogging exceeded 5,000 kcal, points may be awarded to each of the five users.
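The cooperative task reduces to aggregating per-user detections over the period and awarding everyone on success. The goal and team size mirror the example in the text; the point value is an assumption:

```python
def team_goal_met(kcal_per_user, goal_kcal=5000.0, team_size=5):
    """The team meets the goal when its combined calories burned over
    the period (e.g. one week) reach goal_kcal. Numbers mirror the
    '5 users, 5000 kcal in one week' example."""
    return len(kcal_per_user) == team_size and sum(kcal_per_user) >= goal_kcal

def award(kcal_per_user, points=100):
    """Every member receives the same points (illustrative value)
    when the team goal is met; otherwise nobody scores."""
    n = len(kcal_per_user)
    return [points] * n if team_goal_met(kcal_per_user) else [0] * n
```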
For example, the service control unit 112 may allow users to bet points on action detection results that users can compete over, such as the number of steps walked in a day or the height of jumps during a snowboarding run, more specifically on scores calculated according to the temporal or spatial sequence of the actions. More specifically, a user predicts which of the other users will rank first in the action detection results and bets points on that user. After the action detection ends (for example, after a day, or after a predetermined number of runs), points may be awarded to users whose predictions were correct, according to the odds. Such bets may be placed, for example, among a plurality of users executing actions at the same place, or many users may be able to participate via social media or the like.
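Settling such bets against the odds once the winner is known can be sketched as below. The bet record format and the integer truncation of the payout are illustrative assumptions:

```python
def settle_bets(bets, winner, odds):
    """Sketch: each bet is (user, predicted_user, staked_points).
    After the day's actions, correct predictors receive
    stake * odds[winner]; others receive nothing. The data layout
    and odds handling are illustrative assumptions."""
    return {user: int(stake * odds[winner]) if pick == winner else 0
            for user, pick, stake in bets}
```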
Next, with reference to FIG. 23, the hardware configuration of the information processing apparatus according to the embodiment of the present disclosure will be described. FIG. 23 is a block diagram illustrating a hardware configuration example of the information processing apparatus according to the embodiment of the present disclosure.
Embodiments of the present disclosure may include, for example, the information processing apparatus and system described above, an information processing method executed in the information processing apparatus or system, a program for causing the information processing apparatus to function, and a non-transitory tangible medium on which the program is recorded.
(1) An information processing apparatus including:
an information acquisition unit configured to acquire action information indicating a detected user action; and
a content providing unit configured to provide content that unfolds according to a temporal or spatial sequence of the actions.
(2) The information processing apparatus according to (1), wherein the content providing unit provides game content designating a position included in the spatial sequence and a task action to be detected at the position.
(3) The information processing apparatus according to (2), wherein the content providing unit presents to the user a game screen including objects displaying the position and the task action, superimposed on the real space where the action occurs.
(4) The information processing apparatus according to (3), wherein the game screen includes an object displaying the position for a first user, an object displaying the position for a second user different from the first user, and an object displaying the second user.
(5) The information processing apparatus according to any one of (2) to (4), wherein the content providing unit predicts the spatial sequence based on an environmental state of the user.
(6) The information processing apparatus according to (5), wherein the content providing unit determines the length of the spatial sequence based on the environmental state of the user at the start of the game content.
(7) The information processing apparatus according to (5) or (6), wherein the content providing unit recognizes the environmental state based on an image capturing the user's surroundings.
(8) The information processing apparatus according to any one of (5) to (7), wherein the content providing unit recognizes the environmental state based on position information of the user and map information.
(9) The information processing apparatus according to any one of (1) to (8), wherein the content providing unit provides game content designating a time included in the temporal sequence and a task action to be detected at the time.
(10) The information processing apparatus according to (9), wherein the content providing unit presents to the user a game screen including objects displaying the time and the task action, superimposed on the real space where the action occurs.
(11) The information processing apparatus according to (9) or (10), wherein the content providing unit predicts the temporal sequence based on an environmental state of the user.
(12) The information processing apparatus according to (11), wherein the content providing unit determines the length of the temporal sequence based on the environmental state of the user at the start of the game content.
(13) The information processing apparatus according to (11) or (12), wherein the content providing unit recognizes the environmental state based on the user's schedule.
(14) The information processing apparatus according to any one of (11) to (13), wherein the temporal sequence is associated with a music track, and
the content providing unit provides the music track to the user together with the game content.
(15) The information processing apparatus according to any one of (1) to (14), wherein the content providing unit provides content whose story unfolds according to a temporal or spatial sequence of the actions.
(16) The information processing apparatus according to any one of (1) to (15), wherein the content providing unit provides content in which attributes of a character or a virtual personality are determined according to a temporal or spatial sequence of the actions.
(17) The information processing apparatus according to any one of (1) to (16), wherein the content providing unit provides content whose stage setting is determined according to a temporal or spatial sequence of the actions.
(18) The information processing apparatus according to any one of (1) to (17), wherein the content providing unit provides content that enables another user to bet on a score calculated according to a temporal or spatial sequence of the actions.
(19) An information processing method including:
acquiring action information indicating a detected user action; and
providing, by a processor, content that unfolds according to a temporal or spatial sequence of the actions.
(20) A program for causing a computer to realize:
a function of acquiring action information indicating a detected user action; and
a function of providing content that unfolds according to a temporal or spatial sequence of the actions.
101 Transmission unit
102 Reception unit
103 Sensor device control unit
104 Sensor data analysis unit
105 Feature amount extraction unit
106 Action detection unit
107 Analysis result processing unit
108 Clustering processing unit
109 Scoring processing unit
112 Service control unit
Claims (20)
- An information processing apparatus comprising:
an information acquisition unit configured to acquire action information indicating a detected user action; and
a content providing unit configured to provide content that unfolds according to a temporal or spatial sequence of the actions.
- The information processing apparatus according to claim 1, wherein the content providing unit provides game content designating a position included in the spatial sequence and a task action to be detected at the position.
- The information processing apparatus according to claim 2, wherein the content providing unit presents to the user a game screen including objects displaying the position and the task action, superimposed on the real space where the action occurs.
- The information processing apparatus according to claim 3, wherein the game screen includes an object displaying the position for a first user, an object displaying the position for a second user different from the first user, and an object displaying the second user.
- The information processing apparatus according to claim 2, wherein the content providing unit predicts the spatial sequence based on an environmental state of the user.
- The information processing apparatus according to claim 5, wherein the content providing unit determines the length of the spatial sequence based on the environmental state of the user at the start of the game content.
- The information processing apparatus according to claim 5, wherein the content providing unit recognizes the environmental state based on an image capturing the user's surroundings.
- The information processing apparatus according to claim 5, wherein the content providing unit recognizes the environmental state based on position information of the user and map information.
- The information processing apparatus according to claim 1, wherein the content providing unit provides game content designating a time included in the temporal sequence and a task action to be detected at the time.
- The information processing apparatus according to claim 9, wherein the content providing unit presents to the user a game screen including objects displaying the time and the task action, superimposed on the real space where the action occurs.
- The information processing apparatus according to claim 9, wherein the content providing unit predicts the temporal sequence based on an environmental state of the user.
- The information processing apparatus according to claim 11, wherein the content providing unit determines the length of the temporal sequence based on the environmental state of the user at the start of the game content.
- The information processing apparatus according to claim 11, wherein the content providing unit recognizes the environmental state based on the user's schedule.
- The information processing apparatus according to claim 11, wherein the temporal sequence is associated with a music track, and
the content providing unit provides the music track to the user together with the game content.
- The information processing apparatus according to claim 1, wherein the content providing unit provides content whose story unfolds according to a temporal or spatial sequence of the actions.
- The information processing apparatus according to claim 1, wherein the content providing unit provides content in which attributes of a character or a virtual personality are determined according to a temporal or spatial sequence of the actions.
- The information processing apparatus according to claim 1, wherein the content providing unit provides content whose stage setting is determined according to a temporal or spatial sequence of the actions.
- The information processing apparatus according to claim 1, wherein the content providing unit provides content that enables another user to bet on a score calculated according to a temporal or spatial sequence of the actions.
- An information processing method comprising:
acquiring action information indicating a detected user action; and
providing, by a processor, content that unfolds according to a temporal or spatial sequence of the actions.
- A program for causing a computer to realize:
a function of acquiring action information indicating a detected user action; and
a function of providing content that unfolds according to a temporal or spatial sequence of the actions.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016568284A JP6658545B2 (ja) | 2015-01-05 | 2015-10-15 | 情報処理装置、情報処理方法、およびプログラム |
EP15876940.6A EP3243557B1 (en) | 2015-01-05 | 2015-10-15 | Information processing device, information processing method, and program |
US15/527,068 US20170352226A1 (en) | 2015-01-05 | 2015-10-15 | Information processing device, information processing method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015000415 | 2015-01-05 | ||
JP2015-000415 | 2015-01-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016111067A1 true WO2016111067A1 (ja) | 2016-07-14 |
Family
ID=56342227
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/079175 WO2016111067A1 (ja) | 2015-01-05 | 2015-10-15 | 情報処理装置、情報処理方法、およびプログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US20170352226A1 (ja) |
EP (1) | EP3243557B1 (ja) |
JP (1) | JP6658545B2 (ja) |
CN (2) | CN105759953B (ja) |
WO (1) | WO2016111067A1 (ja) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018132913A (ja) * | 2017-02-15 | 2018-08-23 | 清水建設株式会社 | 構造物可視化装置及び構造物可視化システム |
JP2018171279A (ja) * | 2017-03-31 | 2018-11-08 | 株式会社愛和ライト | 情報処理装置及び情報処理システム |
WO2019240070A1 (ja) * | 2018-06-11 | 2019-12-19 | 本田技研工業株式会社 | 動作確認システム |
WO2019240062A1 (ja) * | 2018-06-11 | 2019-12-19 | 本田技研工業株式会社 | 報知システム |
CN110646227A (zh) * | 2018-06-27 | 2020-01-03 | 斯凯孚公司 | 具有近场通信调试硬件的无线状况监测传感器 |
JP2020092910A (ja) * | 2018-12-13 | 2020-06-18 | 株式会社ドリコム | 情報処理システム、情報処理方法および情報処理プログラム |
CN117232819A (zh) * | 2023-11-16 | 2023-12-15 | 湖南大用环保科技有限公司 | 基于数据分析的阀体综合性能测试系统 |
JP7506648B2 (ja) | 2020-03-27 | 2024-06-26 | 株式会社バンダイ | プログラム、端末、ゲームシステム及びゲーム管理装置 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11210772B2 (en) | 2019-01-11 | 2021-12-28 | Universal City Studios Llc | Wearable visualization device systems and methods |
JP7168757B2 (ja) * | 2019-02-26 | 2022-11-09 | マクセル株式会社 | 映像表示装置及び映像表示方法 |
CN110180189B (zh) * | 2019-06-14 | 2022-10-21 | 广州好酷信息科技有限公司 | 一种基于游戏游艺设备的积分排名方法及系统 |
EP3940586A1 (en) | 2020-07-17 | 2022-01-19 | Sony Group Corporation | An electronic device and a related method for detecting and counting an action |
JP7143872B2 (ja) * | 2020-08-14 | 2022-09-29 | カシオ計算機株式会社 | 情報処理装置、ランニング指標導出方法及びプログラム |
JP7185670B2 (ja) * | 2020-09-02 | 2022-12-07 | 株式会社スクウェア・エニックス | ビデオゲーム処理プログラム、及びビデオゲーム処理システム |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000353249A (ja) * | 1999-06-11 | 2000-12-19 | Mr System Kenkyusho:Kk | 複合現実空間における指示表示及び指示表示方法 |
JP2007007081A (ja) * | 2005-06-29 | 2007-01-18 | Namco Bandai Games Inc | ゲームシステム、楽音生成システム、曲選択システム、プログラム及び情報記憶媒体 |
JP2009018127A (ja) * | 2007-07-13 | 2009-01-29 | Panasonic Corp | 学習支援装置および学習支援方法 |
JP2010240185A (ja) * | 2009-04-07 | 2010-10-28 | Kanazawa Inst Of Technology | 動作学習支援装置 |
JP2012095914A (ja) * | 2010-11-04 | 2012-05-24 | Ns Solutions Corp | ゴルフプレイヤー支援システム、ユーザ端末装置、ゴルフプレイヤー支援方法及びプログラム |
JP2012239719A (ja) * | 2011-05-20 | 2012-12-10 | Konami Digital Entertainment Co Ltd | ゲーム装置、ゲーム制御方法、ならびに、プログラム |
JP2014054303A (ja) * | 2012-09-11 | 2014-03-27 | Casio Comput Co Ltd | 運動支援装置、運動支援方法及び運動支援プログラム |
JP2014174589A (ja) * | 2013-03-06 | 2014-09-22 | Mega Chips Corp | 拡張現実システム、プログラムおよび拡張現実提供方法 |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008538295A (ja) * | 2005-01-10 | 2008-10-23 | アイポイント リミテッド | 体力トレーニング用音楽ペースメーカー |
US20060288846A1 (en) * | 2005-06-27 | 2006-12-28 | Logan Beth T | Music-based exercise motivation aid |
US20070074619A1 (en) * | 2005-10-04 | 2007-04-05 | Linda Vergo | System and method for tailoring music to an activity based on an activity goal |
US20070254271A1 (en) * | 2006-04-28 | 2007-11-01 | Volodimir Burlik | Method, apparatus and software for play list selection in digital music players |
US20110183754A1 (en) * | 2010-01-25 | 2011-07-28 | Mansour Ali Saleh Alghamdi | Game system based on real time and location of user |
KR20130000401A (ko) * | 2010-02-28 | 2013-01-02 | 오스터하우트 그룹 인코포레이티드 | 대화형 머리장착식 아이피스 상의 지역 광고 컨텐츠 |
US9317660B2 (en) * | 2011-03-31 | 2016-04-19 | Adidas Ag | Group performance monitoring system and method |
US9155964B2 (en) * | 2011-09-14 | 2015-10-13 | Steelseries Aps | Apparatus for adapting virtual gaming with real world information |
US8847988B2 (en) * | 2011-09-30 | 2014-09-30 | Microsoft Corporation | Exercising applications for personal audio/visual system |
DE202013103775U1 (de) * | 2012-08-23 | 2013-09-05 | Jakub Jirus | Konditions- und/oder Fitnessgerät |
US8758127B2 (en) * | 2012-11-08 | 2014-06-24 | Audible, Inc. | In-vehicle gaming system for a driver |
CN104008296A (zh) * | 2014-06-08 | 2014-08-27 | 蒋小辉 | 将视频转化为游戏的方法和一种视频类游戏及其实现方法 |
-
2015
- 2015-10-15 EP EP15876940.6A patent/EP3243557B1/en active Active
- 2015-10-15 JP JP2016568284A patent/JP6658545B2/ja not_active Expired - Fee Related
- 2015-10-15 WO PCT/JP2015/079175 patent/WO2016111067A1/ja active Application Filing
- 2015-10-15 US US15/527,068 patent/US20170352226A1/en not_active Abandoned
- 2015-12-29 CN CN201511009188.7A patent/CN105759953B/zh active Active
- 2015-12-29 CN CN201521117699.6U patent/CN205730297U/zh not_active Expired - Fee Related
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018132913A (ja) * | 2017-02-15 | 2018-08-23 | 清水建設株式会社 | 構造物可視化装置及び構造物可視化システム |
JP2018171279A (ja) * | 2017-03-31 | 2018-11-08 | 株式会社愛和ライト | 情報処理装置及び情報処理システム |
JPWO2019240065A1 (ja) * | 2018-06-11 | 2021-06-24 | 本田技研工業株式会社 | 動作評価システム |
JP7424974B2 (ja) | 2018-06-11 | 2024-01-30 | 本田技研工業株式会社 | 動作評価システム |
WO2019240065A1 (ja) * | 2018-06-11 | 2019-12-19 | 本田技研工業株式会社 | 動作評価システム |
JP7523347B2 (ja) | 2018-06-11 | 2024-07-26 | 本田技研工業株式会社 | 動作確認システム |
JP7424973B2 (ja) | 2018-06-11 | 2024-01-30 | 本田技研工業株式会社 | 報知システム |
JPWO2019240062A1 (ja) * | 2018-06-11 | 2021-06-24 | 本田技研工業株式会社 | 報知システム |
WO2019240070A1 (ja) * | 2018-06-11 | 2019-12-19 | 本田技研工業株式会社 | 動作確認システム |
JPWO2019240070A1 (ja) * | 2018-06-11 | 2021-07-29 | 本田技研工業株式会社 | 動作確認システム |
WO2019240062A1 (ja) * | 2018-06-11 | 2019-12-19 | 本田技研工業株式会社 | 報知システム |
CN110646227B (zh) * | 2018-06-27 | 2024-05-28 | 斯凯孚公司 | 具有近场通信调试硬件的无线状况监测传感器 |
CN110646227A (zh) * | 2018-06-27 | 2020-01-03 | 斯凯孚公司 | 具有近场通信调试硬件的无线状况监测传感器 |
JP2020092910A (ja) * | 2018-12-13 | 2020-06-18 | 株式会社ドリコム | 情報処理システム、情報処理方法および情報処理プログラム |
JP7506648B2 (ja) | 2020-03-27 | 2024-06-26 | 株式会社バンダイ | プログラム、端末、ゲームシステム及びゲーム管理装置 |
CN117232819B (zh) * | 2023-11-16 | 2024-01-26 | 湖南大用环保科技有限公司 | 基于数据分析的阀体综合性能测试系统 |
CN117232819A (zh) * | 2023-11-16 | 2023-12-15 | 湖南大用环保科技有限公司 | 基于数据分析的阀体综合性能测试系统 |
Also Published As
Publication number | Publication date |
---|---|
US20170352226A1 (en) | 2017-12-07 |
JPWO2016111067A1 (ja) | 2017-10-12 |
EP3243557B1 (en) | 2020-03-04 |
JP6658545B2 (ja) | 2020-03-04 |
CN205730297U (zh) | 2016-11-30 |
EP3243557A1 (en) | 2017-11-15 |
CN105759953A (zh) | 2016-07-13 |
CN105759953B (zh) | 2020-04-21 |
EP3243557A4 (en) | 2018-05-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6658545B2 (ja) | 情報処理装置、情報処理方法、およびプログラム | |
CN105749548B (zh) | 信息处理装置、信息处理方法以及程序 | |
TWI786701B (zh) | 用於在hmd環境中利用傳至gpu之預測及後期更新的眼睛追蹤進行快速注視點渲染的方法及系統以及非暫時性電腦可讀媒體 | |
CN112400150B (zh) | 基于预测扫视着陆点的动态图形渲染 | |
JP6683134B2 (ja) | 情報処理装置、情報処理方法、およびプログラム | |
CN106659932B (zh) | 头戴式显示器中的传感刺激管理 | |
CN103357177B (zh) | 使用便携式游戏装置来记录或修改在主游戏系统上实时运行的游戏或应用 | |
US10317988B2 (en) | Combination gesture game mechanics using multiple devices | |
US20170216675A1 (en) | Fitness-based game mechanics | |
CN108379809A (zh) | 基于ar的滑雪场虚拟轨迹引导和训练控制方法 | |
US20180374270A1 (en) | Information processing device, information processing method, program, and server | |
WO2016092933A1 (ja) | 情報処理装置、情報処理方法およびプログラム | |
WO2020090223A1 (ja) | 情報処理装置、情報処理方法及び記録媒体 | |
US11173375B2 (en) | Information processing apparatus and information processing method | |
US20240303018A1 (en) | Head mounted processing apparatus | |
CN117122910A (zh) | 用于将真实世界声音添加到虚拟现实场景的方法和系统 | |
KR102433084B1 (ko) | 가상 현실 게임용 플레이어 감정 분석 방법, 플레이어 감정 기반 가상 현실 서비스 운용 방법 및 가상 현실 시스템 | |
US20230381649A1 (en) | Method and system for automatically controlling user interruption during game play of a video game | |
US20240045496A1 (en) | Improving accuracy of interactions for gaze-enabled ar objects when in motion | |
CN114995642A (zh) | 基于增强现实的运动训练方法、装置、服务器及终端设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15876940 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2016568284 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15527068 Country of ref document: US |
|
REEP | Request for entry into the european phase |
Ref document number: 2015876940 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |