WO2018122956A1 - Sport motion analysis support system, method and program - Google Patents

Sport motion analysis support system, method and program

Info

Publication number
WO2018122956A1
WO2018122956A1 · PCT/JP2016/088884 · JP2016088884W
Authority
WO
WIPO (PCT)
Prior art keywords
time
time segment
model
data
sports
Prior art date
Application number
PCT/JP2016/088884
Other languages
French (fr)
Japanese (ja)
Inventor
Yosuke Motohashi
Original Assignee
NEC Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corporation
Priority to JP2018558560A (patent JP6677319B2)
Priority to PCT/JP2016/088884 (WO2018122956A1)
Publication of WO2018122956A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion

Definitions

  • the present invention relates to a sports motion analysis support system, a sports motion analysis support method, and a sports motion analysis support program that support motion analysis in sports.
  • Motion capture is known as a technique that can be used for motion analysis in sports.
  • For example, there is application software that displays the trajectory of a specific part of the body (for example, an ankle) based on a moving image obtained by imaging a person playing a sport.
  • There is also application software that displays the difference between a model form and the form of the person playing the sport.
  • Patent Document 1 describes that a golf swing stage and key positions are identified from measured values of inter-frame differences of images using a rule-based model. Patent Document 1 further describes that a hidden Markov model, a state space model, a finite state machine, a regression method, a support vector machine, a neural network, fuzzy logic, and the like may also be used for this purpose.
  • Patent Document 2 describes collecting motion image data for 10 trials, using 5 trials as learning data to estimate the parameters of a hidden Markov model, and performing a recognition experiment using the remaining 5 trials as test data. Patent Document 2 also describes that discrete cosine transform coefficients of relatively low frequency components were found to be effective as feature amounts for image recognition of human motion.
  • However, the form and the result of the motion that includes that form do not always correspond completely. For example, even if a person's form is good, the person's condition may not have been good and the result may not have been good. Also, even if a person's form is good, a good result may not be achieved because of external factors such as rain or wind.
  • Against this background, the inventors of the present invention considered that the form in a certain time zone, within the time during which a person performs a motion, is an important form that greatly influences the result.
  • One example of the result of a motion is a numerical value indicating the sports performance.
  • An example of a result other than such a numerical value is an event.
  • Specific examples of such an event include matters relating to the movement of equipment used in the sport, such as which direction the ball flew after a soccer PK (penalty kick).
  • The present invention aims to provide a sports motion analysis support system, a sports motion analysis support method, and a sports motion analysis support program capable of assisting a user in grasping, within the time during which a series of motions is performed in a sport, the time zone in which the form has a great influence on the result.
  • The sports motion analysis support system according to the present invention includes: a data storage unit that stores a plurality of data in which moving image data representing a series of motions in a sport is associated with the result of the motions; a learning unit that, using the plurality of data, learns, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion; and an evaluation unit that calculates, for each time segment, the prediction accuracy of results predicted using the model.
  • Alternatively, the sports motion analysis support system according to the present invention includes: a data storage unit that stores a plurality of data in which image data of a moving image representing a series of motions in a sport is associated with the result of the motions; a learning unit that, using the plurality of data, learns, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion; and a specifying unit that specifies a time segment from which a time zone in which the degree of improvement in the prediction accuracy of results predicted using the model is large can be identified.
  • In the sports motion analysis support method according to the present invention, a computer including a data storage unit that stores a plurality of data in which image data of a moving image representing a series of motions in a sport is associated with the result of the motions uses the plurality of data to learn, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion, and calculates, for each time segment, the prediction accuracy of results predicted using the model.
  • Alternatively, in the sports motion analysis support method according to the present invention, a computer including such a data storage unit uses the plurality of data to learn, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion, and specifies a time segment from which a time zone in which the degree of improvement in the prediction accuracy of results predicted using the model is large can be identified.
  • The sports motion analysis support program according to the present invention is installed on a computer including a data storage unit that stores a plurality of data in which image data of a moving image representing a series of motions in a sport is associated with the result of the motions, and causes the computer to execute: a learning process of learning, using the plurality of data, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion; and an evaluation process of calculating, for each time segment, the prediction accuracy of results predicted using the model.
  • Alternatively, the sports motion analysis support program according to the present invention causes such a computer to execute: the learning process described above; and a specifying process of specifying a time segment from which a time zone in which the degree of improvement in the prediction accuracy of results predicted using the model is large can be identified.
  • According to the present invention, it is possible to assist the user in grasping, within the time during which a series of motions in a sport is performed, the time zone in which the form has a great influence on the result.
  • FIG. 1 is a block diagram illustrating an example of a sports motion analysis support system according to the first embodiment of the present invention. FIG. 2 is a schematic diagram showing a series of motions in a long jump.
  • In the following, a long jump will be described as an example.
  • However, the present invention is applicable to sports other than the long jump.
  • Here, the case where the result of the long jump motion is a performance value (in this example, the jump distance) will be described as an example.
  • In this case, the result is represented numerically.
  • the result of movement in sports may be an event such as in which direction the ball flew.
  • In that case, the result is represented by a value according to the content of the event (for example, a binary value of “0” or “1”).
  • Embodiment 1. FIG. 1 is a block diagram showing an example of the sports motion analysis support system according to the first embodiment of the present invention.
  • As shown in FIG. 1, the sports motion analysis support system 1 according to the first embodiment of the present invention includes a data storage unit 2, a time segment image extraction unit 3, a learning unit 4, a prediction unit 5, an evaluation unit 6, and a display unit 7.
  • the data storage unit 2 is a storage device that stores a plurality of data in which image data of a moving image representing a series of actions in sports and the results of the actions are associated with each other.
  • For example, moving image data representing a series of actions of a person performing a long jump is associated with the result (jump distance) obtained from that action.
  • the data storage unit 2 stores a plurality of such data.
  • the manner of associating the image data with the results is not particularly limited.
  • For example, the result may exist as data separate from the image data, with the image data and the result associated with each other.
  • Alternatively, the image data and the result may be associated in such a manner that the result is included in the moving image of the image data (in other words, the result is shown on the moving image). This also applies to the embodiments described later.
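  • As a concrete illustration of how such associated data might be organized in the data storage unit 2, the following is a minimal sketch; the record fields, the file paths, and the TrialRecord name are hypothetical, and the result values are taken from the example in FIG. 4 only for illustration.

```python
from dataclasses import dataclass

@dataclass
class TrialRecord:
    """One stored data item: a motion video associated with the result of that motion."""
    video_path: str  # moving image data representing the series of motions (hypothetical path)
    result: float    # result of the motion, e.g. the jump distance in metres

# The data storage unit 2 would then hold a collection of such records (values from FIG. 4).
data_storage = [
    TrialRecord("videos/trial_001.mp4", 5.8),
    TrialRecord("videos/trial_002.mp4", 6.5),
    TrialRecord("videos/trial_003.mp4", 7.1),
    TrialRecord("videos/trial_004.mp4", 6.2),
]
```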
  • FIG. 2 is a schematic diagram showing a series of actions of a person who performs a long jump (hereinafter referred to as a player).
  • In FIG. 2, to simplify the drawing, the same posture is illustrated for the player while running, but an actual player performs the series of actions while moving the limbs and so on.
  • This also applies to the schematic diagram shown in FIG.
  • In the long jump, the athlete makes a run-up, takes off at the takeoff board 11, and then lands.
  • By imaging this series of motions, moving image data representing the series of motions can be obtained.
  • The image data and the result obtained at that time, associated with each other, constitute one piece of data.
  • the data storage unit 2 stores a plurality of such data in advance.
  • In this way, a plurality of such data may be stored in the data storage unit 2.
  • the data storage unit 2 stores a plurality of data related to a specific player.
  • the specific player may be a player who uses the sports motion analysis support system 1 of the present invention, or a player who is instructed by a coach using the sports motion analysis support system 1.
  • the specific player may be a player who is a competitor for the user who uses the sports motion analysis support system 1 of the present invention. This also applies to other embodiments described later.
  • Since the moving image represents a series of movements of the player, the moving image also represents the player's form at each point in time.
  • A time segment in the present invention is a time segment determined with reference to the time point at which the moving image represents a specific motion of the player.
  • In this example, the time point at which the moving image represents the takeoff motion is used as the reference, and the time of this reference is set to 0.
  • The time before the reference (time 0) is expressed as negative, and the time after the reference is expressed as positive.
  • Each time segment can be determined by setting a start time and an end time, with the time at which the moving image represents the takeoff motion as the reference time 0.
  • a plurality of time segments are determined in advance.
  • the lengths of the time segments may be different from one another, and the plurality of time segments may include a common time zone.
  • FIG. 3 is a schematic diagram showing an example of a plurality of time segments.
  • a plurality of time segments a to g are shown.
  • “a” to “g” illustrated in FIG. 3 are the identification information of the time segments.
  • The time segment a is the range from time “−2.5” to time “−2.0” (see FIG. 3).
  • The time segment b is the range from time “−2.5” to time “−1.5” (see FIG. 3).
  • Similarly, the start time and the end time are determined for the other time segments c, d, e, and so on.
  • In this example, the start time of each time segment is “−2.5”, but the start time “−2.5” is only an example.
  • The start time of each time segment may be earlier than “−2.5” or later than “−2.5”.
  • In this example, the start time of each time segment is common, but the start times of the time segments do not have to be common.
  • For example, each time segment may be determined contiguously so as not to share a common time zone with the other time segments: the period from time “−2.5” to time “−2.0” may be set as the time segment a, the period from time “−2.0” to time “−1.5” as the time segment b, the period from time “−1.5” to time “−1.0” as the time segment c, and so on.
  • each time segment may be determined so that the start time is different with the end time being the same.
  • the reference time “0” may be set as a common end time.
  • the plurality of time segments are determined so as to be ordered based on time.
  • the start time of each time segment is common and the end time is different. Therefore, the time segments shown in FIG. 3 can be ordered in the order of a, b, c, d, e, f, and g in the order of end times.
  • Hereinafter, the case where the time segments are determined so as to be ordered based on time will be described as an example.
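  • The following is a minimal sketch of how such time segments might be represented, assuming (as in FIG. 3) a common start time of −2.5 and end times spaced 0.5 apart; only segments a, b, and c are given explicitly in the text above, so the end times of segments d to g are assumptions made purely for illustration.

```python
# Time segments defined relative to the reference time 0 (the takeoff).
# Segments a-c follow the description above; the end times of d-g are
# illustrative assumptions, spaced 0.5 apart and ordered by end time.
SEGMENTS = {
    "a": (-2.5, -2.0),
    "b": (-2.5, -1.5),
    "c": (-2.5, -1.0),
    "d": (-2.5, -0.5),
    "e": (-2.5,  0.0),
    "f": (-2.5,  0.5),
    "g": (-2.5,  1.0),
}

# Ordering the segments by end time reproduces the order a, b, c, d, e, f, g.
ordered_ids = sorted(SEGMENTS, key=lambda s: SEGMENTS[s][1])
```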
  • the time segment image extraction unit 3 identifies a range corresponding to each of a plurality of predetermined time segments for each image data stored in the data storage unit 2. Then, the time segment image extraction unit 3 extracts still images from the range corresponding to each time segment for each image data, and generates a set of still images for each time segment of each image data.
  • In the following, the first image data is referred to as image data #1, and an arbitrary n-th image data as image data #n.
  • the time segment image extraction unit 3 specifies, for each time segment, a range corresponding to the time segment in the image data # 1. That is, the time segment image extraction unit 3 specifies a range corresponding to the time segment a, a range corresponding to the time segment b, and the like in the image data # 1. Then, the time segment image extraction unit 3 extracts still images from the range corresponding to the time segment a in the image data # 1, and generates a set of still images corresponding to the time segment a in the image data # 1.
  • When extracting still images from the moving image data, the time segment image extraction unit 3 may extract a still image at every predetermined interval.
  • For example, the time segment image extraction unit 3 may extract a still image from the range corresponding to the time segment a every 0.1 second.
  • Although 0.1 second is given here as an example of the predetermined interval, the predetermined interval is not limited to 0.1 second.
  • In this way, the time segment image extraction unit 3 extracts still images from the range corresponding to each time segment in the image data #1, and generates a set of still images for each time segment.
  • the time-segment image extraction unit 3 performs the same processing on each image data other than the image data # 1, and generates a set of still images for each time segment in each image data.
  • the set of still images obtained for each time segment in each image data represents the player's action and form in the corresponding time segment.
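  • As a rough sketch of the still-image extraction performed by the time segment image extraction unit 3, the following uses OpenCV to take one frame every 0.1 second from the range of a video corresponding to one time segment; the function name and the ref_time_sec parameter (the absolute position of the reference time within the video) are hypothetical and not taken from the patent.

```python
import cv2  # OpenCV, assumed to be available


def extract_segment_frames(video_path, ref_time_sec, segment, step_sec=0.1):
    """Extract a still image every step_sec seconds from the part of the video that
    corresponds to segment = (start, end), both given relative to the reference time."""
    start, end = segment
    cap = cv2.VideoCapture(video_path)
    frames = []
    t = start
    while t <= end:
        # Seek to the absolute position (reference time + offset), in milliseconds.
        cap.set(cv2.CAP_PROP_POS_MSEC, (ref_time_sec + t) * 1000.0)
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
        t += step_sec
    cap.release()
    return frames  # the "set of still images" for this time segment
```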
  • FIG. 4 is an explanatory diagram showing identification information of a set of still images obtained for each time segment in each image data.
  • a set of still images extracted from the time segment a in the image data # 1 is represented as “# 1, a” (see FIG. 4).
  • the identification information of the other set of still images is also expressed by the same rule.
  • FIG. 4 also shows results corresponding to each image data.
  • The learning unit 4 learns (in other words, generates), for each of the plurality of time segments, a model representing the relationship between the motion in the time segment and the result (jump distance) corresponding to the motion.
  • In other words, this model is a model showing the relationship between the form in the time segment (the form represented by the set of still images corresponding to the time segment) and the result.
  • Specifically, the learning unit 4 learns a model in which the set of still images representing the form is the explanatory variable and the result is the objective variable. Therefore, the predicted value of the result can be calculated using the model obtained by learning and a set of still images.
  • The learning unit 4 may determine, for each image data, a combination of the set of still images corresponding to the time segment of interest and the result, and, using the combinations determined for the respective image data as learning data, learn the model corresponding to that time segment by machine learning.
  • the machine learning algorithm may be any algorithm that can learn a model for calculating a predicted value of a score as described above.
  • the learning unit 4 learns a model corresponding to the time segment a.
  • In that case, the learning unit 4 uses as learning data the combination of the still image set “#1, a” and the result “5.8 m”, the combination of the still image set “#2, a” and the result “6.5 m”, the combination of the still image set “#3, a” and the result “7.1 m”, the combination of the still image set “#4, a” and the result “6.2 m”, and so on (see FIG. 4), and learns a model indicating the relationship between the form in the time segment a (the set of still images corresponding to the time segment a) and the result.
  • The learning unit 4 similarly learns the models corresponding to the other time segments b, c, d, and so on.
  • a model is obtained for each time segment by the processing of the learning unit 4.
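  • The patent does not prescribe a particular feature representation or learning algorithm, so the following is only a sketch of per-segment model learning using scikit-learn ridge regression; the featurize function and the variable names are assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge  # any regressor able to predict the result would do


def featurize(frames):
    """Hypothetical feature extraction: average the flattened still images.
    The patent leaves the actual feature representation open."""
    return np.mean([f.astype(np.float32).ravel() for f in frames], axis=0)


def learn_models(segment_frames, results, segment_ids):
    """segment_frames[(n, s)]: set of still images for image data #n and time segment s.
    results[n]: true result associated with image data #n.
    Returns one learned model per time segment."""
    models = {}
    for s in segment_ids:
        X = np.array([featurize(segment_frames[(n, s)]) for n in range(len(results))])
        y = np.array(results)
        models[s] = Ridge().fit(X, y)  # model for time segment s
    return models
```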
  • The prediction unit 5 calculates the predicted value of the result for each image data, using the model corresponding to each time segment.
  • For example, the prediction unit 5 calculates the predicted value of the result for the image data #1 using the model corresponding to the time segment a.
  • Specifically, the prediction unit 5 calculates the predicted value of the result by applying the set of still images “#1, a” corresponding to the time segment a in the image data #1 to the model corresponding to the time segment a.
  • The prediction unit 5 similarly calculates, for the image data #1, the predicted value of the result with the model corresponding to each of the other time segments. That is, for the image data #1, the prediction unit 5 calculates one predicted value of the result per model, i.e. per time segment.
  • In other words, for one image data, the prediction unit 5 calculates as many predicted values of the result as there are time segments.
  • The prediction unit 5 similarly calculates, for each image data other than the image data #1, as many predicted values of the result as there are time segments.
  • Consequently, when the number of image data is N, the prediction unit 5 calculates N predicted values corresponding to the time segment a, and likewise N predicted values corresponding to each of the other time segments.
  • For each time segment, the evaluation unit 6 calculates the prediction accuracy using the plurality of predicted values corresponding to the time segment (as many predicted values as there are image data) and the true values of the result. It can be said that the evaluation unit 6 evaluates the prediction accuracy of the model for each time segment.
  • The true value of the result is the result value stored in the data storage unit 2 in association with the image data.
  • the evaluation unit 6 may calculate the average value of values obtained by dividing the predicted value of the result by the true value of the result for each time segment. This value can be said to be a value representing prediction accuracy. For example, the evaluation unit 6 divides the predicted value calculated based on the time section a of the image data # 1 by the result value (true value) associated with the image data # 1. The evaluation unit 6 also divides the predicted value calculated based on the time division a by the value of the grade (true value) associated with the image data of interest for each of the other image data. The evaluation unit 6 calculates the average value of the division results obtained for each image data as the prediction accuracy corresponding to the time segment a (more specifically, the prediction accuracy of the model corresponding to the time segment a). The evaluation unit 6 calculates the prediction accuracy corresponding to each time segment in the same manner for each other time segment.
  • the calculation method of the prediction accuracy is not limited to the above example.
  • the evaluation unit 6 may calculate the average value of values obtained by dividing the absolute value of the difference between the predicted value of the grade and the true value of the grade by the true value for each time segment. This value can also be said to be a value representing prediction accuracy.
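  • The two accuracy measures described above can be written compactly as follows; this is only a sketch, assuming the predicted and true results are available as numeric arrays.

```python
import numpy as np


def prediction_accuracy_ratio(predicted, true):
    """First measure above: the average of predicted value / true value."""
    predicted, true = np.asarray(predicted, float), np.asarray(true, float)
    return float(np.mean(predicted / true))


def mean_relative_error(predicted, true):
    """Second measure above: the average of |predicted - true| / true."""
    predicted, true = np.asarray(predicted, float), np.asarray(true, float)
    return float(np.mean(np.abs(predicted - true) / true))

# For each time segment, one of these would be applied to the N predicted values
# obtained with that segment's model and the N true results.
```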
  • the display unit 7 displays the relationship between the time segment and the prediction accuracy corresponding to the time segment (more specifically, the prediction accuracy of the model corresponding to the time segment).
  • the display unit 7 may display time information and text information in association with prediction accuracy of a model corresponding to the time information.
  • the display unit 7 displays the relationship between the time segment and the prediction accuracy of the model corresponding to the time segment in a graph from the viewpoint of easy understanding for the person viewing the displayed information.
  • FIG. 5 is an explanatory diagram illustrating an example of a graph displayed by the display unit 7, and illustrates a graph indicating a relationship between a time segment and a prediction accuracy of a model corresponding to the time segment.
  • the vertical axis of the graph shown in FIG. 5 represents the prediction accuracy of the model.
  • the horizontal axis of the graph represents time segments.
  • the start time of each time segment is common, and the end time is different. Therefore, each time segment can be ordered in the order of end time.
  • the display unit 7 orders the time segments in the order of the end time, and displays the identification information of each time segment along the horizontal axis in that order (see FIG. 5).
  • That is, the display unit 7 displays a graph showing the change in prediction accuracy corresponding to each time segment, as illustrated in FIG. 5, for example.
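  • A graph like the one in FIG. 5 could be produced, for example, with matplotlib; the function below is a sketch assuming the prediction accuracies have already been computed per time segment, and its name is hypothetical.

```python
import matplotlib.pyplot as plt


def plot_accuracy_by_segment(accuracy_by_segment):
    """accuracy_by_segment: dict mapping segment id -> prediction accuracy of its model,
    with the segment ids already ordered by end time (e.g. 'a' ... 'g')."""
    ids = list(accuracy_by_segment)
    values = [accuracy_by_segment[s] for s in ids]
    plt.plot(ids, values, marker="o")
    plt.xlabel("time segment (ordered by end time)")
    plt.ylabel("prediction accuracy of the model")
    plt.show()
```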
  • the time segment image extraction unit 3, the learning unit 4, the prediction unit 5, the evaluation unit 6, and the display unit 7 are realized by, for example, a CPU (Central Processing Unit) of a computer having a display device (not shown in FIG. 1).
  • The CPU may read the sports motion analysis support program from a program recording medium such as a program storage device (not shown in FIG. 1) of the computer and, according to the sports motion analysis support program, operate as the time segment image extraction unit 3, the learning unit 4, the prediction unit 5, the evaluation unit 6, and the display unit 7.
  • a part of the display unit 7 that determines display contents (for example, a graph) and displays the display contents on the display device is realized by the CPU.
  • the part that actually performs display is realized by a display device.
  • the computer may be a personal computer or a portable computer such as a smartphone. These points are the same in other embodiments described later.
  • the sports motion analysis support system 1 may have a configuration in which two or more physically separated devices are connected by wire or wirelessly.
  • the sports motion analysis support system 1 may be realized as a system in which a portable computer such as a smartphone and a server cooperate. This also applies to other embodiments described later.
  • FIG. 6 is a flowchart showing an example of processing progress of the first embodiment of the present invention. Since the details of the operation of each step shown below have already been described, detailed description thereof will be omitted here.
  • the time segment image extraction unit 3 identifies a range corresponding to each of a plurality of predetermined time segments for each image data stored in the data storage unit 2. Then, the time segment image extraction unit 3 extracts a still image from a range corresponding to each time segment for each image data, and generates a set of still images for each time segment of the individual image data ( Step S1).
  • the learning unit 4 learns, by machine learning, a model representing the relationship between the form and the grade in each time segment for each of the plurality of time segments (step S2).
  • the prediction unit 5 calculates the predicted value of the grade for each piece of image data using a model corresponding to each time segment (step S3).
  • the evaluation unit 6 calculates the prediction accuracy of the model corresponding to the time segment using a plurality of predicted values corresponding to the time segment and the true value of the grade (step S4).
  • the display unit 7 displays the relationship between the time segment and the prediction accuracy of the model corresponding to the time segment (step S5).
  • the display unit 7 may display time information and text information in association with the prediction accuracy of a model corresponding to the time information.
  • the display unit 7 displays the relationship between the time segment and the prediction accuracy of the model corresponding to the time segment in a graph as illustrated in FIG.
  • A model corresponding to one time segment is generated using a plurality of combinations of a set of still images and a result. Therefore, when the predicted value of the result is calculated with this model and the prediction accuracy of the model is good, it can be said that the predicted value statistically represents the tendency of the result according to the form.
  • The evaluation unit 6 calculates, for each time segment, the prediction accuracy of the model corresponding to the time segment.
  • the display unit 7 displays the relationship between the time segment and the prediction accuracy of the model corresponding to the time segment.
  • Therefore, the user of the sports motion analysis support system 1 (the player whose image data and results are stored in the data storage unit 2, or that player's coach) can confirm the prediction accuracy for each of the time segments ordered based on time, and can grasp, from the change in the prediction accuracy, the time zone in which the degree of improvement in the prediction accuracy of the results predicted using the models is large. As described above, if the prediction accuracy of a model is good, the predicted value of the result obtained using the model can be said to statistically represent the tendency of the result according to the form. Therefore, being able to grasp the time zone in which the degree of improvement in the prediction accuracy is large means being able to grasp the time zone in which the form has a great influence on the result.
  • Thus, according to this embodiment, the user can be assisted in grasping, within the time during which a series of motions in a sport is performed, the time zone in which the form has a great influence on the result.
  • For example, suppose that, in the graph illustrated in FIG. 5, the prediction accuracy of the corresponding model increases only slightly from the time segment a to the time segment b.
  • On the other hand, the prediction accuracy of the model corresponding to the time segment c improves greatly compared with the prediction accuracy of the model corresponding to the time segment b. In that case, the user can recognize the time zone corresponding to the difference between the time segment c and the time segment b (specifically, the time zone from time “−1.5” to time “−1.0”; see FIG. 3) as a time zone in which the degree of improvement in the prediction accuracy of the results predicted using the models is large. Therefore, the user can grasp that this time zone is a time zone in which the form has a great influence on the result, within the time during which the long jump is performed.
  • By having the user focus on and check the form in that time zone (in this example, the time zone from time “−1.5” to time “−1.0”), the system can contribute to the improvement of the player's performance.
  • Embodiment 2.
  • The sports motion analysis support system 1 according to the first embodiment displays the graph illustrated in FIG. 5, for example, so that the user can grasp the time zone in which the degree of improvement in the prediction accuracy of results predicted using the models is large.
  • In contrast, the sports motion analysis support system according to the second exemplary embodiment of the present invention specifies a time segment from which the time zone in which the degree of improvement in the prediction accuracy of results predicted using the models is large can be identified, and displays that time segment.
  • the sports motion analysis support system according to the second embodiment of the present invention can be represented by the blocks shown in FIG. 1 similarly to the sports motion analysis support system 1 according to the first embodiment.
  • a second embodiment will be described. Explanation of matters similar to those in the first embodiment will be omitted as appropriate.
  • the data storage unit 2, the time segment image extraction unit 3, the learning unit 4 and the prediction unit 5 are the same as the data storage unit 2, the time segment image extraction unit 3, the learning unit 4 and the prediction unit 5 of the first embodiment. The description is omitted.
  • the evaluation unit 6 of the second embodiment performs the same operation as the evaluation unit 6 of the first embodiment, and further performs the following operations.
  • the evaluation unit 6 identifies a time segment that can identify a time zone in which the degree of improvement in prediction accuracy of the results predicted using the model is large.
  • the evaluation unit 6 orders the time segments based on the time, and calculates the difference in prediction accuracy between the models corresponding to the two time segments whose order is adjacent. In this case, the evaluation unit 6 calculates a value obtained by subtracting the prediction accuracy of the model corresponding to the time segment a from the prediction accuracy of the model corresponding to the time segment b. The evaluation unit 6 also uses the prediction accuracy of the model corresponding to the time segment whose order is later, for other sets of time segments whose order is adjacent to each other, such as a set of time segments b and c and a set of time segments c and d. Then, a value obtained by subtracting the prediction accuracy of the model corresponding to the previous time segment in the order is calculated.
  • Then, the evaluation unit 6 may specify the later time segment of the pair of adjacent time segments having the largest difference in prediction accuracy as the time segment from which the time zone with a large degree of improvement in prediction accuracy can be identified. For example, suppose that the difference in prediction accuracy for the pair of time segments b and c is the largest among the differences in prediction accuracy. In this case, the evaluation unit 6 may specify the time segment c as the time segment from which the time zone with a large degree of improvement in prediction accuracy can be identified.
  • Alternatively, for each pair of adjacent time segments whose difference in prediction accuracy is equal to or greater than a threshold value, the later time segment in the order may be identified as a time segment from which a time zone with a large degree of improvement in prediction accuracy can be identified. The threshold value may be determined in advance.
  • the method of specifying a time segment that can specify a time zone in which the degree of improvement in prediction accuracy of results predicted using a model is large is not limited to the above method, and may be another method.
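  • The adjacent-difference rule described above can be sketched as follows; the accuracies in the usage example are made-up values used only to illustrate the selection of the segment with the largest jump.

```python
def segment_with_largest_improvement(accuracy_by_segment):
    """Given model accuracies keyed by segment ids ordered by end time, return the later
    segment of the adjacent pair whose difference in accuracy is largest."""
    ids = list(accuracy_by_segment)
    diffs = {
        later: accuracy_by_segment[later] - accuracy_by_segment[earlier]
        for earlier, later in zip(ids, ids[1:])
    }
    return max(diffs, key=diffs.get)


# Made-up accuracies: the jump from b to c is the largest, so 'c' is identified.
print(segment_with_largest_improvement({"a": 0.55, "b": 0.58, "c": 0.80, "d": 0.82}))
```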
  • the display unit 7 displays the time division specified by the evaluation unit 6.
  • FIG. 7 is a flowchart showing an example of processing progress of the second embodiment of the present invention. Steps S1 to S4 (see FIG. 7) in the second embodiment are the same as steps S1 to S4 (see FIG. 6) in the first embodiment, and a description thereof will be omitted.
  • After step S4, the evaluation unit 6 specifies a time segment from which a time zone in which the degree of improvement in the prediction accuracy of results predicted using the models is large can be identified (step S11). An example of this operation of the evaluation unit 6 has already been described, so its description is omitted here.
  • After step S11, the display unit 7 displays the time segment specified in step S11 (the time segment from which the time zone in which the degree of improvement in the prediction accuracy of results predicted using the models is large can be identified) (step S12).
  • FIG. 8 is an explanatory diagram showing an example of the display mode of the time segment in step S12.
  • the display unit 7 displays an icon 21 representing the time segment identification information for each time segment.
  • the display unit 7 also displays the start time and end time of the time segment.
  • The display unit 7 displays the icon 21 of the time segment specified in step S11 (in this example, the icon 21 of the time segment c) in a form different from the icons 21 of the other time segments.
  • For example, the display unit 7 indicates that the time segment c is the time segment specified in step S11 by displaying the frame line of the icon 21 of the time segment c thicker than the frame lines of the other icons 21.
  • FIG. 8 is an example of the display mode in step S12, and the display mode in step S12 is not limited to the example shown in FIG.
  • In the second embodiment, the evaluation unit 6 specifies a time segment from which the time zone in which the degree of improvement in the prediction accuracy of results predicted using the models is large can be identified, and the display unit 7 displays that time segment. Therefore, as in the first embodiment, the user can grasp the time zone in which the form has a great influence on the result, within the time during which the long jump is performed. For example, suppose that the evaluation unit 6 specifies the time segment c in step S11 and the display unit 7 displays the time segment c. In this case, the user can recognize that the time zone corresponding to the difference between the time segment c and the immediately preceding time segment b (specifically, the time zone from time “−1.5” to time “−1.0”; see FIG. 3) is a time zone in which the degree of improvement in the prediction accuracy of results predicted using the models is large.
  • Therefore, the user can grasp that this time zone is a time zone in which the form has a great influence on the result, within the time during which the long jump is performed.
  • In step S12, the display unit 7 may also perform the operation of step S5 in the first embodiment. That is, the display unit 7 may display the time segment specified in step S11 and also display the relationship between the time segments and the prediction accuracy of the models corresponding to the time segments. For example, the display unit 7 may display the time segment c in the manner illustrated in FIG. 8 together with the graph illustrated in FIG. 5.
  • FIG. 9 is a block diagram showing one modification of the second embodiment.
  • the sports motion analysis support system 1 shown in FIG. 9 further includes an operation unit 8 in addition to the components shown in FIG.
  • the operation up to step S12 is the same as the operation described in the second embodiment or the above-described modification.
  • A case where the display unit 7 displays the icon 21 of each time segment in the manner illustrated in FIG. 8 in step S12 will be described as an example.
  • the operation unit 8 is a user interface for the user to specify a time segment, and is realized by, for example, a mouse.
  • The user operates the operation unit 8 to designate an icon 21 displayed as illustrated in FIG. 8.
  • the display unit 7 accepts designation of a time segment according to the operation.
  • Here, description will be made assuming that the time segment specified in step S11 is designated. For example, it is assumed that an operation such as a click is performed on the icon 21 of the time segment c among the icons 21 shown in FIG. 8.
  • When the display unit 7 receives the designation of the time segment c from the outside, the display unit 7 displays the prediction accuracy of the model corresponding to the time segment c.
  • Furthermore, the display unit 7 specifies the image data for which the predicted value of the result calculated in step S3 using the model (the model corresponding to the time segment c) is the largest, and the image data for which the predicted value of the result calculated in step S3 using that model is the smallest.
  • That is, the display unit 7 specifies the image data with the largest predicted value of the result calculated using the model corresponding to the designated time segment, and the image data with the smallest predicted value calculated using that model.
  • the display unit 7 specifies the image data predicted to have the best result and the image data predicted to have the worst result.
  • Here, the largest means the largest among the predicted values calculated using the model corresponding to the designated time segment c, and the smallest means the smallest among the predicted values calculated using the model corresponding to the designated time segment c.
  • Then, the display unit 7 displays the moving image in the range corresponding to the designated time segment c in the image data having the largest predicted value, and the moving image in the range corresponding to the designated time segment c in the image data having the smallest predicted value.
  • FIG. 10 is a schematic diagram illustrating an example of a screen displayed by the display unit 7 when a time segment is designated after step S12.
  • the prediction accuracy display column 31 is a column for displaying the prediction accuracy of the model corresponding to the designated time segment.
  • the first image display field 32 is a field for displaying a moving image in a range corresponding to a designated time segment in the image data having the maximum predicted value.
  • the second image display field 32 is a field for displaying a moving image in a range corresponding to a designated time segment in the image data having the smallest predicted value.
  • When the time segment c is designated, the display unit 7 displays the screen illustrated in FIG. 10.
  • the display unit 7 specifies the image data having the maximum predicted value and the image data having the minimum predicted value.
  • For example, the display unit 7 may specify the image data whose predicted values of the result, calculated using the model corresponding to the designated time segment, rank from first to a predetermined rank from the top, and the image data whose predicted values, calculated using that model, rank from first to a predetermined rank from the bottom.
  • Then, the display unit 7 may display the moving image in the range corresponding to the designated time segment in each image data whose predicted value ranks from first to the predetermined rank from the top, and the moving image in the range corresponding to the designated time segment in each image data whose predicted value ranks from first to the predetermined rank from the bottom.
  • Values representing the upper predetermined number and the lower predetermined number may be determined in advance.
  • the sports motion analysis support system 1 may be provided with an input device (user interface) for the user to input values representing the upper predetermined order and the lower predetermined order into the sport motion analysis support system 1.
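  • Selecting the image data with the top and bottom predicted values, as described above, might look like the following sketch; the function name, predicted_values, and k are hypothetical names introduced only for illustration.

```python
import numpy as np


def select_extreme_image_data(predicted_values, k=1):
    """predicted_values[n]: predicted result for image data #n under the model of the
    designated time segment. Returns the indices of the image data with the k largest
    and the k smallest predicted values (k = 1 gives the best/worst pair above)."""
    order = np.argsort(predicted_values)
    bottom = order[:k].tolist()        # predicted to give the worst results
    top = order[::-1][:k].tolist()     # predicted to give the best results
    return top, bottom
```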
  • According to this modification, for the time zone in which the form has a great influence on the result, it is possible to display the moving image of the form predicted to give the best result and the moving image of the form predicted to give the worst result. Therefore, by analyzing or comparing these moving images, the user can work on improving the form, finding habits in the form, and the like.
  • the display unit 7 may perform the above display operation only when the time segment specified in step S11 is designated. In addition, when an arbitrary time segment is designated, the display unit 7 may perform the display operation according to the designated time segment.
  • In the above description, the case where the result of the motion is a performance value represented numerically has been described.
  • the result of movement in sports may be an event.
  • a case where a PK (penalty kick) scene in soccer is applied to the present invention will be described as an example.
  • the result of the PK action is one of two types of events: “the ball flew to the right” and “the ball flew to the left”. Description of the same matters as those in the first embodiment, the second embodiment, and the modifications thereof will be omitted.
  • In this example, the result (event) of the PK motion is represented by a binary value: for example, the event “the ball flew to the right” is represented by “1”, and the event “the ball flew to the left” is represented by “0”.
  • The data storage unit 2 stores in advance a plurality of data in which moving image data representing a series of actions of a person performing a PK (hereinafter referred to as a player) is associated with the result of the action (event “1” or event “0”).
  • the result of this operation can be said to be a nominal measure, for example.
  • a plurality of time segments may be determined based on the time point when the video represents the player's kicking motion.
  • the learning unit 4 learns, for each of a plurality of time segments, a model representing the relationship between the operation in the time segment and the probability that the event “1” occurs or the event “0” occurs.
  • the probability that event “1” will occur or the probability that event “0” will occur is represented by one objective variable.
  • The range that this objective variable can take is 0 to 1. If the value of the objective variable is larger than 0.5, the value represents the probability that the event “1” (that is, the event “the ball flies to the right”) will occur: the closer the value is to 1, the higher the probability that the event “1” occurs, and the closer the value is to 0.5, the lower that probability.
  • the learning unit 4 may use, for example, logistic regression analysis as a machine learning algorithm.
  • The learning unit 4 may determine, for each image data, a combination of the set of still images corresponding to the time segment of interest and the result (“1” or “0”) and, using the combinations determined for the respective image data as learning data, learn the model corresponding to that time segment by machine learning (for example, logistic regression analysis).
  • the learning unit 4 learns the model for each time segment.
  • the prediction unit 5 calculates a predicted value of the result for each image data using a model corresponding to each time segment.
  • the predicted value is the value of the objective variable and represents the probability that the event “1” will occur or the probability that the event “0” will occur.
  • Note that the probability (the value of the objective variable) calculated by the prediction unit 5 is a continuous value in the range of 0 to 1. Therefore, the value of the objective variable calculated by the prediction unit 5 can be said to be, for example, an ordinal scale.
  • the evaluation unit 6 may perform the following processing.
  • the true value of the result is “1” or “0”, and is stored in the data storage unit 2 in association with the image data.
  • For example, suppose that the evaluation unit 6 calculates the prediction accuracy of the model corresponding to the time segment a, focusing on the time segment a. The prediction unit 5 has calculated a predicted value for each image data using the model corresponding to the time segment a, so the number of predicted values calculated for the time segment a equals the number of image data. The evaluation unit 6 regards each predicted value as “1” or “0”: if a predicted value is greater than 0.5, the evaluation unit 6 regards it as “1”, and if it is smaller than 0.5, regards it as “0”. Then, the evaluation unit 6 determines whether or not each predicted value regarded as “1” or “0” matches the true value of the result, and counts the number of predicted values that match the true value.
  • For example, if a predicted value is “0.8” and the true value is “1”, the evaluation unit 6 regards the predicted value “0.8” as “1” and determines that it matches the true value “1”. If, in this case, the true value were “0”, the evaluation unit 6 would regard the predicted value “0.8” as “1” and determine that it does not match the true value “0”.
  • Similarly, if a predicted value is “0.3” and the true value is “0”, the evaluation unit 6 regards the predicted value “0.3” as “0” and determines that it matches the true value “0”. If, in this case, the true value were “1”, the evaluation unit 6 would regard the predicted value “0.3” as “0” and determine that it does not match the true value “1”.
  • In this way, the evaluation unit 6 regards each predicted value corresponding to each image data as “1” or “0”, and counts the number of predicted values that match the true value. Then, the evaluation unit 6 takes the value obtained by dividing this count by the number of predicted values calculated for the time segment a as the prediction accuracy of the model corresponding to the time segment a. The value obtained by dividing the count by the number of predicted values can also be referred to as the match rate.
  • the evaluation unit 6 calculates the prediction accuracy of the model corresponding to each time segment for each other time segment.
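  • A sketch of the event case using logistic regression and the match rate described above is given below; the feature matrix X is assumed to have been obtained from the sets of still images in the same way as in the regression sketch above, and the function names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def learn_event_model(X, events):
    """X: feature vectors for the sets of still images of one time segment.
    events: true results, 1 = "the ball flew to the right", 0 = "the ball flew to the left"."""
    return LogisticRegression().fit(X, events)


def match_rate(model, X, events):
    """Prediction accuracy as described above: predicted probabilities above 0.5 are
    regarded as event "1", those below as event "0", and the fraction that matches
    the true values is returned."""
    prob_event_1 = model.predict_proba(X)[:, 1]  # predicted probability of event "1"
    regarded = (prob_event_1 > 0.5).astype(int)
    return float(np.mean(regarded == np.asarray(events)))
```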
  • The result of the motion may be an event also in the case where, as in the modification described above, the designation of a time segment is received from the outside after step S12.
  • step S12 it is assumed that the display unit 7 displays the icon 21 of each time segment in the manner illustrated in FIG. 8, and then the time segment c is designated. In this case, the display unit 7 displays the prediction accuracy of the model corresponding to the time segment c.
  • the display unit 7 specifies image data having a maximum predicted value calculated using the model and image data having a minimum predicted value calculated using the model.
  • In this case, the higher the predicted value, the higher the probability that the ball will fly to the right after the kick, and the lower the predicted value, the higher the probability that the ball will fly to the left after the kick. That is, the display unit 7 identifies the image data of the form predicted to have the highest probability that the ball flies to the right after the kick and the image data of the form predicted to have the highest probability that the ball flies to the left after the kick.
  • Then, the display unit 7 displays the moving image in the range corresponding to the designated time segment c in the image data having the largest predicted value, and the moving image in the range corresponding to the designated time segment c in the image data having the smallest predicted value.
  • The screen displayed by the display unit 7 when the time segment is designated may be the same as the screen illustrated in FIG. 10.
  • However, the text information such as “score: good” and “score: bad” shown in FIG. 10 may be displayed instead as “ball direction: right”, “ball direction: left”, and so on, respectively.
  • the sports motion analysis support system 1 may include a data acquisition unit that acquires data to be stored in the data storage unit 2 from the outside.
  • FIG. 11 is a block diagram illustrating a configuration example in the case where a data acquisition unit is provided.
  • the data storage unit 2, the time segment image extraction unit 3, the learning unit 4, the prediction unit 5, the evaluation unit 6, and the display unit 7 are those elements in the first embodiment, and those in the second embodiment and its modifications. These are the same as those elements, and a description thereof will be omitted.
  • the data acquisition unit 9 acquires a plurality of pieces of data in which moving image data representing a series of motions in sports and the results of the motions are associated with each other, and stores them in the data storage unit 2.
  • For example, when the plurality of data are held in an external device, the data acquisition unit 9 may access that device, acquire the plurality of data from it, and store them in the data storage unit 2.
  • The processing after the data acquisition unit 9 stores the plurality of data in the data storage unit 2 is the same as the processing described in the first embodiment, or the processing described in the second embodiment or its modification.
  • the data acquisition unit 9 is realized by, for example, a CPU of a computer that operates according to a sports motion analysis support program.
  • The sports motions to which the present invention is applied are not limited to those described above.
  • For example, when the present invention is applied to a golf swing, the time point at which the ball is hit with the golf club may be set as the reference time.
  • Also, for example, the data storage unit 2 may store data in which image data of a tossing motion by a volleyball setter is associated with a result indicating whether the ball flew to the right or to the left.
  • In that case, the time point at which the setter tosses the ball may be set as the reference time.
  • the present invention can be applied to a pitcher pitching operation in baseball, a rugby formation, or an American football formation.
  • the present invention can be applied to various sports operations.
  • FIG. 12 is a schematic block diagram showing a configuration example of a computer according to each embodiment of the present invention.
  • the computer 1000 includes a CPU 1001, a main storage device 1002, an auxiliary storage device 1003, an interface 1004, a display device 1005, and an input device 1006.
  • the input device 1006 corresponds to the operation unit 8 illustrated in FIG.
  • the sports motion analysis support system 1 is implemented in a computer 1000.
  • the operation of the sports motion analysis support system 1 is stored in the auxiliary storage device 1003 in the form of a program (sport motion analysis support program).
  • the CPU 1001 reads out the program from the auxiliary storage device 1003, develops it in the main storage device 1002, and executes the above processing according to the program.
  • the auxiliary storage device 1003 is an example of a tangible medium that is not temporary.
  • Other examples of the non-temporary tangible medium include a magnetic disk, a magneto-optical disk, a CD-ROM, a DVD-ROM, and a semiconductor memory connected via the interface 1004.
  • When this program is distributed to the computer 1000 via a communication line, the computer 1000 that has received the distribution may load the program into the main storage device 1002 and execute the above processing.
  • the program may be for realizing a part of the above-described processing.
  • the program may be a differential program that realizes the above-described processing in combination with another program already stored in the auxiliary storage device 1003.
  • Part or all of each component may be realized by general-purpose or dedicated circuitry, processors, or combinations thereof. These may be configured by a single chip or may be configured by a plurality of chips connected via a bus. Part or all of each component may be realized by a combination of the above-described circuitry and the like and a program.
  • When part or all of each component is realized by a plurality of information processing devices, circuits, and the like, the plurality of information processing devices, circuits, and the like may be arranged in a centralized manner or in a distributed manner.
  • the information processing apparatus, the circuit, and the like may be realized as a form in which each is connected via a communication network, such as a client and server system and a cloud computing system.
  • FIG. 13 is a block diagram showing an outline of the present invention.
  • the sports motion analysis support system of the present invention includes a data storage unit 2, a learning unit 4, and an evaluation unit 6.
  • the data storage unit 2 stores a plurality of pieces of data in which image data of moving images representing a series of motions in sports and the motion results are associated with each other.
  • The learning unit 4 uses the plurality of data to learn, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion.
  • the evaluation unit 6 calculates the prediction accuracy of the result predicted using the model for each time segment.
  • FIG. 14 is another block diagram showing the outline of the present invention.
  • the sports motion analysis support system of the present invention may include a data storage unit 2, a learning unit 4, and a specifying unit 16.
  • the data storage unit 2 and the learning unit 4 are the same as the data storage unit 2 and the learning unit 4 shown in FIG.
  • the specifying unit 16 (for example, the evaluation unit 6 in the second embodiment) specifies a time segment in which a time zone in which the degree of improvement in prediction accuracy of a result predicted using a model is large can be specified.
  • A sports motion analysis support system comprising: a data storage unit that stores a plurality of data in which image data of a moving image representing a series of motions in a sport is associated with the results of the motions; a learning unit that, using the plurality of data, learns, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion; and an evaluation unit that calculates, for each time segment, the prediction accuracy of results predicted using the model.
  • The sports motion analysis support system according to any one of Supplementary Note 1 to Supplementary Note 3, wherein the display unit displays the prediction accuracy of the model corresponding to a time segment designated from the outside, specifies a predetermined number of image data based on the predicted values predicted using the model, and displays the moving image of the designated time segment in each of the specified image data.
  • The sports motion analysis support system according to any one of Supplementary Note 1 to Supplementary Note 4, wherein the result of the motion associated with the image data is a numerical value indicating the performance, and the learning unit learns, for each time segment, a model representing the relationship between the motion in the time segment and the numerical value indicating the result.
  • The sports motion analysis support system according to any one of Supplementary Note 1 to Supplementary Note 4, wherein the result of the motion associated with the image data is an event, and the learning unit learns, for each time segment, a model representing the relationship between the motion in the time segment and the probability that the event occurs.
  • a sports motion analysis support system comprising: a data storage unit that stores a plurality of data in which image data of a moving image representing a series of motions in sports and the results of the motions are associated with each other; a learning unit that uses the plurality of data to learn, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to that motion; and a specifying unit that specifies a time segment from which a time zone in which the degree of improvement in the prediction accuracy of results predicted using the models is large can be identified.
  • a sports motion analysis support method in which a computer comprising a data storage unit that stores a plurality of data in which image data of a moving image representing a series of motions in sports and the results of the motions are associated with each other uses the plurality of data to learn, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to that motion, and calculates, for each time segment, the prediction accuracy of results predicted using the model.
  • a sports motion analysis support method in which a computer comprising a data storage unit that stores a plurality of data in which image data of a moving image representing a series of motions in sports and the results of the motions are associated with each other uses the plurality of data to learn, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to that motion, and specifies a time segment from which a time zone in which the degree of improvement in the prediction accuracy of results predicted using the models is large can be identified.
  • a sports motion analysis support program installed in a computer including a data storage unit that stores a plurality of data in which image data of a moving image representing a series of motions in sports and the results of the motions are associated with each other, the program causing the computer to execute: a learning process of using the plurality of data to learn, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to that motion; and an evaluation process of calculating, for each time segment, the prediction accuracy of results predicted using the model.
  • a sports motion analysis support program installed in a computer including a data storage unit that stores a plurality of data in which image data of a moving image representing a series of motions in sports and the results of the motions are associated with each other, the program causing the computer to execute: a learning process of using the plurality of data to learn, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to that motion; and a specifying process of specifying a time segment from which a time zone in which the degree of improvement in the prediction accuracy of results predicted using the models is large can be identified.
  • the present invention can be suitably applied to a sports motion analysis support system that supports motion analysis in sports.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

In the present invention, a data storage unit 2 stores a plurality of items of video data showing a series of motions in a sport, associated with the results of the motions. A learning unit 4 uses the plurality of items of data to learn a model representing the relationship between motions in a time interval and results corresponding to said motions, for each of a plurality of time intervals stipulated with reference to a point in time that represents the predetermined motion. An evaluation unit 6 calculates, for each time interval, the prediction accuracy for results predicted using the model.

Description

Sports motion analysis support system, method, and program
The present invention relates to a sports motion analysis support system, a sports motion analysis support method, and a sports motion analysis support program that support motion analysis in sports.
Motion capture is known as a technique that can be used for motion analysis in sports.
There is also application software that displays the trajectory of a specific part of the body (for example, an ankle) based on a moving image obtained by imaging a person playing sports, and application software that displays the difference between a model form and the form of the person playing sports.
Patent Document 1 describes that golf swing stages and key positions are identified from measured values of frame differences of images using a rule-based model. Patent Document 1 further describes that, in this case, a hidden Markov model, a state space model, a finite state machine, a regression method, a support vector machine, a neural network, fuzzy theory, and the like may be used.
Patent Document 2 describes collecting motion image data for 10 trials, estimating the parameters of a hidden Markov model using 5 of the trials as learning data, and performing a recognition experiment using the remaining 5 trials as test data. Patent Document 2 also describes that discrete cosine transform coefficients of relatively low frequency components were found to be effective as feature amounts for image recognition of human motion.
Patent Document 1: Japanese Translation of PCT International Application Publication No. 2014-521139. Patent Document 2: Japanese Unexamined Patent Application Publication No. H10-013832.
Through the various application software described above, people who play sports can analyze their own and others' forms.
Here, the form and the result of the motion that includes that form do not always correspond completely. For example, even if a person's form is good, a poor score may result because the person's condition was not good. Likewise, even if a person's form is good, a good score may not be achieved because of external factors such as rain and wind.
Statistically, however, there are tendencies, such as a certain form being likely to produce good scores, or a certain form tending to make the ball fly in a particular direction.
The inventors of the present invention also considered that the form in a certain time zone, within the time during which a person performs a motion, is an important form that has a great influence on the result.
An example of the result is a numerical value indicating the sports score. An example of a result other than a numerical value is an event. Specific examples of events include matters relating to the movement of equipment used in sports, such as in which direction the ball flew after a soccer PK (penalty kick).
An object of the present invention is to provide a sports motion analysis support system, a sports motion analysis support method, and a sports motion analysis support program that can assist a user in grasping, within the time during which a series of motions in a sport is performed, the time zone in which the form has a great influence on the result.
A sports motion analysis support system according to the present invention includes: a data storage unit that stores a plurality of data in which image data of a moving image representing a series of motions in sports and the results of the motions are associated with each other; a learning unit that uses the plurality of data to learn, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to that motion; and an evaluation unit that calculates, for each time segment, the prediction accuracy of results predicted using the model.
Another sports motion analysis support system according to the present invention includes: a data storage unit that stores a plurality of data in which image data of a moving image representing a series of motions in sports and the results of the motions are associated with each other; a learning unit that uses the plurality of data to learn, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to that motion; and a specifying unit that specifies a time segment from which a time zone in which the degree of improvement in the prediction accuracy of results predicted using the models is large can be identified.
In a sports motion analysis support method according to the present invention, a computer including a data storage unit that stores a plurality of data in which image data of a moving image representing a series of motions in sports and the results of the motions are associated with each other uses the plurality of data to learn, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to that motion, and calculates, for each time segment, the prediction accuracy of results predicted using the model.
In another sports motion analysis support method according to the present invention, a computer including a data storage unit that stores a plurality of data in which image data of a moving image representing a series of motions in sports and the results of the motions are associated with each other uses the plurality of data to learn, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to that motion, and specifies a time segment from which a time zone in which the degree of improvement in the prediction accuracy of results predicted using the models is large can be identified.
A sports motion analysis support program according to the present invention is installed in a computer including a data storage unit that stores a plurality of data in which image data of a moving image representing a series of motions in sports and the results of the motions are associated with each other, and causes the computer to execute: a learning process of using the plurality of data to learn, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to that motion; and an evaluation process of calculating, for each time segment, the prediction accuracy of results predicted using the model.
Another sports motion analysis support program according to the present invention is installed in a computer including a data storage unit that stores a plurality of data in which image data of a moving image representing a series of motions in sports and the results of the motions are associated with each other, and causes the computer to execute: a learning process of using the plurality of data to learn, for each of a plurality of time segments determined with reference to a time point representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to that motion; and a specifying process of specifying a time segment from which a time zone in which the degree of improvement in the prediction accuracy of results predicted using the models is large can be identified.
According to the present invention, it is possible to assist the user in grasping, within the time during which a series of motions in a sport is performed, the time zone in which the form has a great influence on the result.
FIG. 1 is a block diagram showing an example of a sports motion analysis support system according to the first embodiment of the present invention.
FIG. 2 is a schematic diagram showing a series of motions of an athlete performing the long jump.
FIG. 3 is a schematic diagram showing an example of a plurality of time segments.
FIG. 4 is an explanatory diagram showing identification information of the sets of still images obtained for each time segment of each piece of image data.
FIG. 5 is an explanatory diagram showing an example of a graph displayed by the display unit.
FIG. 6 is a flowchart showing an example of the processing flow of the first embodiment of the present invention.
FIG. 7 is a flowchart showing an example of the processing flow of the second embodiment of the present invention.
FIG. 8 is an explanatory diagram showing an example of how the time segment is displayed in step S12.
FIG. 9 is a block diagram showing one modification of the second embodiment.
FIG. 10 is a schematic diagram showing an example of a screen displayed by the display unit.
FIG. 11 is a block diagram showing a configuration example in the case of including a data acquisition unit.
FIG. 12 is a schematic block diagram showing a configuration example of a computer according to each embodiment of the present invention.
FIG. 13 is a block diagram showing an outline of the present invention.
FIG. 14 is another block diagram showing an outline of the present invention.
Hereinafter, the long jump will be described as an example of a sport. However, the present invention is also applicable to sports other than the long jump. In the following description, the case where the result of the long jump motion is a score (in this example, the jump distance) will be described as an example. In this example, the result is represented by a numerical value.
However, as described above, the result of a motion in sports may be an event, such as in which direction the ball flew. When the present invention is applied to the case where the result of the motion is an event, the result is represented by a value corresponding to the content of the event (for example, a binary value of "0" and "1"). The case where the result of the motion is an event will be described later.
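As a minimal, non-authoritative sketch of the numeric encoding of an event-type result mentioned above: the patent does not prescribe any implementation, and the event names and the mapping below are purely illustrative assumptions.

```python
# Hypothetical encoding of an event-type result as a numeric target value.
# The event names and the 0/1 mapping are illustrative only.
EVENT_LABELS = {"ball_flew_left": 0, "ball_flew_right": 1}

def encode_event_result(event: str) -> int:
    """Map an observed event to the numeric value used as the model's target."""
    return EVENT_LABELS[event]
```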
Embodiment 1.
FIG. 1 is a block diagram showing an example of a sports motion analysis support system according to the first embodiment of the present invention. The sports motion analysis support system 1 according to the first embodiment of the present invention includes a data storage unit 2, a time segment image extraction unit 3, a learning unit 4, a prediction unit 5, an evaluation unit 6, and a display unit 7.
The data storage unit 2 is a storage device that stores a plurality of data in which image data of a moving image representing a series of motions in sports and the results of those motions are associated with each other. In this example, in each piece of data, image data of a moving image representing a series of motions of a person performing the long jump is associated with the score (jump distance) obtained as a result of those motions. The data storage unit 2 stores a plurality of such data.
The manner of associating the image data with the score is not particularly limited. For example, the score may exist as data separate from the image data, with the image data and the score associated with each other. Alternatively, the image data and the score may be associated in a manner in which the score is included in the moving image of the image data (in other words, the score appears in the moving image). The same applies to the embodiments described later.
FIG. 2 is a schematic diagram showing a series of motions of a person performing the long jump (hereinafter referred to as an athlete). In FIG. 2, to simplify the drawing, the same posture is shown for the athlete during the run-up, but an actual athlete performs the series of motions while moving the arms, legs, and so on; the same applies to the schematic diagram shown in FIG. 3. The athlete runs up, takes off from the takeoff board 11, and then lands. By capturing this series of motions with a video camera, image data of a moving image representing the series of motions can be obtained. The image data and the score obtained at that time are associated to form one piece of data. The data storage unit 2 stores a plurality of such data in advance.
When analysis is performed for a specific athlete, a plurality of data in which image data of moving images obtained by imaging that athlete during practice or competition is associated with the scores obtained at those times may be stored in the data storage unit 2 in advance. Hereinafter, the case where the data storage unit 2 stores a plurality of data related to a specific athlete will be described as an example. The specific athlete may be the athlete who uses the sports motion analysis support system 1 of the present invention, or an athlete coached by a coach or the like who uses the sports motion analysis support system 1. Alternatively, the specific athlete may be a competitor of the user of the sports motion analysis support system 1 of the present invention. The same applies to the other embodiments described later.
Since the moving image represents a series of motions of the athlete, it also represents the athlete's form at each point in time.
Here, the time segments in the present invention will be described. A time segment in the present invention is a segment of time determined with reference to a time point at which the moving image represents a specific motion of the athlete. In the long jump example, the time point at which the moving image represents the takeoff motion is used as the reference, and this reference time is set to 0. Times before the reference (time 0) are expressed as negative, and times after the reference are expressed as positive. A time segment can be defined by setting a start time and an end time, with the time at which the moving image represents the takeoff motion taken as reference time 0. A plurality of time segments are determined in advance. The lengths of the time segments may differ from one another, and a plurality of time segments may include a common time zone.
FIG. 3 is a schematic diagram showing an example of a plurality of time segments. The example shown in FIG. 3 shows a plurality of time segments a to g. The letters "a" to "g" illustrated in FIG. 3 are identification information of the time segments. In the example shown in FIG. 3, time segment a is the range from time "-2.5" to time "-2.0" (see FIG. 3), and time segment b is the range from time "-2.5" to time "-1.5" (see FIG. 3). Start times and end times are similarly determined for the other time segments c, d, e, and so on. In the example shown in FIG. 3, the start time of every time segment is "-2.5", but this is merely an example; the start time of each time segment may be earlier or later than "-2.5". Also, although the time segments in FIG. 3 share a common start time, the start times of the time segments need not be common.
Alternatively, the time segments may be defined consecutively so that no time segment shares a common time zone with another. For example, the time segments may be defined such that time segment a is from time "-2.5" to time "-2.0", time segment b is from time "-2.0" to time "-1.5", and time segment c is from time "-1.5" to time "-1.0".
The time segments may also be defined so that they share a common end time and have different start times. In this case, the reference time "0" may be set as the common end time.
It is preferable that the plurality of time segments be defined so that they can be ordered based on time. In the example shown in FIG. 3, the start time of each time segment is common and the end times differ, so the time segments shown in FIG. 3 can be ordered a, b, c, d, e, f, g in order of end time. Hereinafter, the case where the time segments are defined so as to be ordered based on time will be described as an example.
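As a rough sketch of how such time segments might be represented, the following Python fragment is given; the patent does not prescribe any data structure, the boundaries of segments a to c follow the example of FIG. 3, and the end times of segments d to g are assumptions made only for illustration.

```python
# Time segments defined in seconds relative to the takeoff time point (t = 0).
# Segments a-c follow the example in FIG. 3; the end times of d-g are assumed.
TIME_SEGMENTS = {
    "a": (-2.5, -2.0),
    "b": (-2.5, -1.5),
    "c": (-2.5, -1.0),
    "d": (-2.5, -0.5),
    "e": (-2.5, 0.0),
    "f": (-2.5, 0.5),
    "g": (-2.5, 1.0),
}

# Order the segments by end time, as described above.
ORDERED_SEGMENTS = sorted(TIME_SEGMENTS, key=lambda k: TIME_SEGMENTS[k][1])
```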
The time segment image extraction unit 3 identifies, for each piece of image data stored in the data storage unit 2, the range corresponding to each of the plurality of predetermined time segments. The time segment image extraction unit 3 then extracts still images from the range corresponding to each time segment of each piece of image data, and generates a set of still images for each time segment of each piece of image data.
For example, let the first piece of image data be image data #1 and an arbitrary n-th piece of image data be image data #n. The time segment image extraction unit 3 identifies, in image data #1, the range corresponding to each time segment, that is, the range corresponding to time segment a, the range corresponding to time segment b, and so on. The time segment image extraction unit 3 then extracts still images from the range of image data #1 corresponding to time segment a, and generates the set of still images corresponding to time segment a of image data #1. When extracting still images from the image data of a moving image, the time segment image extraction unit 3 may extract a still image at predetermined intervals. For example, when extracting the still images corresponding to time segment a of image data #1, the time segment image extraction unit 3 may extract a still image every 0.1 seconds from the range corresponding to time segment a. Here, 0.1 seconds is merely an example, and the predetermined interval is not limited to 0.1 seconds.
Similarly, the time segment image extraction unit 3 extracts still images from the range of image data #1 corresponding to each of the other time segments, and generates a set of still images for each time segment.
The time segment image extraction unit 3 performs the same processing on each piece of image data other than image data #1, and generates a set of still images for each time segment of each piece of image data.
The set of still images obtained for each time segment of each piece of image data represents the athlete's motion and form in the corresponding time segment.
Hereinafter, for convenience, a set of still images obtained for a time segment of a piece of image data is denoted by identification information consisting of the identification information of the image data followed by the identification information of the time segment. FIG. 4 is an explanatory diagram showing the identification information of the sets of still images obtained for each time segment of each piece of image data. For example, the set of still images extracted from time segment a of image data #1 is denoted "#1, a" (see FIG. 4). The identification information of the other sets of still images follows the same rule. FIG. 4 also shows the score corresponding to each piece of image data.
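The following is an illustrative sketch of the per-segment frame extraction described above, assuming OpenCV is available, that the takeoff time within each video is already known, and that frames are sampled every 0.1 seconds; none of these choices are specified by the patent.

```python
import cv2  # OpenCV; its availability is an assumption

def extract_segment_frames(video_path, takeoff_time, segment, step=0.1):
    """Extract still frames from the part of a video that falls inside one time
    segment, where segment = (start, end) in seconds relative to takeoff_time."""
    start, end = segment
    cap = cv2.VideoCapture(video_path)
    frames = []
    n_frames = int(round((end - start) / step)) + 1
    for i in range(n_frames):
        t = start + i * step
        # Seek to (takeoff_time + t) seconds and grab one frame.
        cap.set(cv2.CAP_PROP_POS_MSEC, (takeoff_time + t) * 1000.0)
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```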
The learning unit 4 learns (in other words, generates), by machine learning, for each of the plurality of time segments, a model representing the relationship between the motion in the time segment and the score (jump distance) corresponding to that motion.
More specifically, this model represents the relationship between the form in the time segment (the form represented by the set of still images corresponding to the time segment) and the score. The learning unit 4 learns, as this model, a model in which the set of still images representing the form is the explanatory variable and the score is the objective variable. A predicted value of the score can therefore be calculated using the model obtained by learning and a set of still images.
Focusing on one time segment, a set of still images corresponding to that time segment has been obtained for each piece of image data, and the corresponding score is stored in the data storage unit 2 for each piece of image data. The learning unit 4 determines, for each piece of image data, the combination of the set of still images corresponding to the time segment of interest and the score, and uses the combinations determined for the pieces of image data as learning data to learn, by machine learning, the model corresponding to the time segment of interest. Any machine learning algorithm may be used as long as it can learn a model that calculates a predicted value of the score as described above.
This will be described with reference to FIG. 4. Suppose, for example, that the learning unit 4 learns the model corresponding to time segment a. In this case, the learning unit 4 determines, for each piece of image data, combinations such as the set of still images "#1, a" and the score "5.8 m", the set "#2, a" and the score "6.5 m", the set "#3, a" and the score "7.1 m", and the set "#4, a" and the score "6.2 m" (see FIG. 4), and uses these combinations as learning data to learn a model representing the relationship between the form in time segment a (the sets of still images corresponding to time segment a) and the score.
The learning unit 4 similarly learns the models corresponding to the other time segments b, c, d, and so on.
Through the processing of the learning unit 4, a model is obtained for each time segment.
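A minimal sketch of per-segment model learning is given below, assuming scikit-learn. The choice of ridge regression, the crude pixel-mean feature representation, and the assumption that every frame set for a given segment contains the same number of frames are all assumptions for illustration; the patent leaves the learning algorithm and feature extraction open.

```python
import numpy as np
from sklearn.linear_model import Ridge  # the regression algorithm is an assumption

def frames_to_feature_vector(frames):
    """Very crude feature representation: the mean pixel intensity per channel
    of each frame, concatenated. A real system would use pose or silhouette
    features; all frame sets for a segment are assumed to have equal length."""
    return np.array([f.mean(axis=(0, 1)) for f in frames]).ravel()

def learn_segment_models(segment_frame_sets, scores):
    """segment_frame_sets: dict mapping segment id -> list (one entry per video)
    of frame lists; scores: list of true scores (e.g. jump distances), one per
    video. Returns a dict mapping segment id -> fitted regression model."""
    models = {}
    for seg_id, frame_sets in segment_frame_sets.items():
        X = np.stack([frames_to_feature_vector(fs) for fs in frame_sets])
        y = np.array(scores, dtype=float)
        models[seg_id] = Ridge(alpha=1.0).fit(X, y)
    return models
```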
The prediction unit 5 calculates, for each piece of image data, predicted values of the score using the models corresponding to the respective time segments.
For example, the prediction unit 5 calculates a predicted value of the score for image data #1 using the model corresponding to time segment a. In this case, the prediction unit 5 calculates the predicted value by applying the set of still images "#1, a" corresponding to time segment a of image data #1 to the model corresponding to time segment a. The prediction unit 5 similarly calculates, for image data #1, predicted values based on the models corresponding to the other time segments. That is, the prediction unit 5 calculates, for image data #1, a predicted value of the score for each model corresponding to each time segment.
Therefore, for one piece of image data, the prediction unit 5 calculates as many predicted values of the score as there are time segments.
The prediction unit 5 similarly calculates, for each piece of image data other than image data #1, as many predicted values as there are time segments.
If the number of pieces of image data is N, the prediction unit 5 calculates N predicted values corresponding to time segment a. Similarly, N predicted values are calculated for each of the other time segments.
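Continuing the sketch above (and reusing its assumed helpers), the per-segment prediction step could look as follows; this is illustrative only.

```python
def predict_scores(models, segment_frame_sets):
    """For every time segment, predict a score for each video using that
    segment's model. Returns a dict mapping segment id -> array of N predictions."""
    predictions = {}
    for seg_id, model in models.items():
        X = np.stack([frames_to_feature_vector(fs)
                      for fs in segment_frame_sets[seg_id]])
        predictions[seg_id] = model.predict(X)
    return predictions
```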
The evaluation unit 6 calculates, for each time segment, the prediction accuracy of the predicted values using the plurality of predicted values corresponding to the time segment (as many predicted values as there are pieces of image data) and the true values of the score. In other words, the evaluation unit 6 evaluates the prediction accuracy of the model for each time segment.
The true value of the score is the score value stored in the data storage unit 2 in association with the image data.
For example, the evaluation unit 6 may calculate, for each time segment, the average of the values obtained by dividing the predicted score by the true score. This value can be regarded as a value representing the prediction accuracy. For example, the evaluation unit 6 divides the predicted value calculated based on time segment a of image data #1 by the score value (true value) associated with image data #1. For each of the other pieces of image data as well, the evaluation unit 6 divides the predicted value calculated based on time segment a by the score value (true value) associated with the piece of image data of interest. The evaluation unit 6 calculates the average of the division results obtained for the pieces of image data as the prediction accuracy corresponding to time segment a (more specifically, the prediction accuracy of the model corresponding to time segment a). The evaluation unit 6 similarly calculates the prediction accuracy corresponding to each of the other time segments.
The method of calculating the prediction accuracy is not limited to the above example. For example, the evaluation unit 6 may calculate, for each time segment, the average of the values obtained by dividing the absolute value of the difference between the predicted score and the true score by the true score. This value can also be regarded as a value representing the prediction accuracy.
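The two accuracy measures described above can be sketched as follows; the function and parameter names are assumptions, and only the two example formulas given in the text are implemented.

```python
def prediction_accuracy(predicted, true_values, method="ratio"):
    """Per-segment accuracy measure, following the two examples in the text:
    'ratio'   : mean of (predicted / true)
    'rel_err' : mean of |predicted - true| / true (smaller means more accurate)."""
    predicted = np.asarray(predicted, dtype=float)
    true_values = np.asarray(true_values, dtype=float)
    if method == "ratio":
        return float(np.mean(predicted / true_values))
    return float(np.mean(np.abs(predicted - true_values) / true_values))

def evaluate_all_segments(predictions, true_values):
    """Compute the accuracy measure for every time segment."""
    return {seg_id: prediction_accuracy(preds, true_values)
            for seg_id, preds in predictions.items()}
```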
The display unit 7 displays the relationship between the time segments and the prediction accuracy corresponding to the time segments (more specifically, the prediction accuracy of the models corresponding to the time segments). The display unit 7 may, for example, display each time segment and the prediction accuracy of the corresponding model as associated text information. However, from the viewpoint of ease of understanding for the viewer, it is particularly preferable that the display unit 7 display the relationship between the time segments and the prediction accuracy of the corresponding models as a graph.
FIG. 5 is an explanatory diagram showing an example of a graph displayed by the display unit 7, illustrating the relationship between the time segments and the prediction accuracy of the models corresponding to the time segments. The vertical axis of the graph shown in FIG. 5 represents the prediction accuracy of the model, and the horizontal axis represents the time segments. In this example, the start time of each time segment is common and the end times differ, so the time segments can be ordered by end time. The display unit 7 orders the time segments by end time and displays the identification information of the time segments along the horizontal axis in that order (see FIG. 5). The display unit 7 then displays a graph showing the change in prediction accuracy corresponding to the time segments, for example as illustrated in FIG. 5.
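A small sketch of such a graph, assuming matplotlib, is shown below; it is only an illustration of the display idea, not the display unit 7 itself.

```python
import matplotlib.pyplot as plt  # availability is an assumption

def plot_accuracy_by_segment(accuracy_by_segment, ordered_segments):
    """Plot the prediction accuracy of each segment's model against the time
    segments ordered by end time, in the spirit of the graph of FIG. 5."""
    values = [accuracy_by_segment[s] for s in ordered_segments]
    plt.plot(ordered_segments, values, marker="o")
    plt.xlabel("time segment (ordered by end time)")
    plt.ylabel("prediction accuracy of the model")
    plt.show()
```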
The time segment image extraction unit 3, the learning unit 4, the prediction unit 5, the evaluation unit 6, and the display unit 7 are realized, for example, by a CPU (Central Processing Unit) of a computer having a display device (not shown in FIG. 1). In this case, the CPU reads a sports motion analysis support program from a program recording medium such as a program storage device of the computer (not shown in FIG. 1) and, in accordance with the sports motion analysis support program, operates as the time segment image extraction unit 3, the learning unit 4, the prediction unit 5, the evaluation unit 6, and the display unit 7. The part of the display unit 7 that determines the display contents (for example, a graph) and causes the display device to display them is realized by the CPU, while the part of the display unit 7 that actually performs the display is realized by the display device. The computer may be a personal computer or a portable computer such as a smartphone. The same applies to the other embodiments described later.
The sports motion analysis support system 1 may also be configured such that two or more physically separate devices are connected by wire or wirelessly. For example, the sports motion analysis support system 1 may be realized as a system in which a portable computer such as a smartphone cooperates with a server. This also applies to the other embodiments described later.
Next, the processing flow will be described. FIG. 6 is a flowchart showing an example of the processing flow of the first embodiment of the present invention. Since the details of the operations of the steps below have already been described, detailed description is omitted here.
It is assumed that a plurality of data are stored in the data storage unit 2 in advance.
The time segment image extraction unit 3 identifies, for each piece of image data stored in the data storage unit 2, the range corresponding to each of the plurality of predetermined time segments. The time segment image extraction unit 3 then extracts still images from the range corresponding to each time segment of each piece of image data, and generates a set of still images for each time segment of each piece of image data (step S1).
Next, the learning unit 4 learns, by machine learning, for each of the plurality of time segments, a model representing the relationship between the form in the time segment and the score (step S2).
Next, the prediction unit 5 calculates, for each piece of image data, predicted values of the score using the models corresponding to the respective time segments (step S3).
Next, the evaluation unit 6 calculates, for each time segment, the prediction accuracy of the model corresponding to the time segment using the plurality of predicted values corresponding to the time segment and the true values of the score (step S4).
Next, the display unit 7 displays the relationship between the time segments and the prediction accuracy of the models corresponding to the time segments (step S5). In step S5, the display unit 7 may display each time segment and the prediction accuracy of the corresponding model as associated text information. However, it is preferable that the display unit 7 display the relationship between the time segments and the prediction accuracy of the corresponding models as a graph as illustrated in FIG. 5.
The effects of this embodiment will now be described. In this embodiment, as described above, the model corresponding to one time segment is generated using a plurality of combinations of a set of still images and a score. Therefore, when a predicted score is calculated with this model and the prediction accuracy of the model is good, the predicted value can be said to statistically represent well the tendency of the score according to the form. According to this embodiment, the evaluation unit 6 calculates, for each time segment, the prediction accuracy of the model corresponding to the time segment, and the display unit 7 displays the relationship between the time segments and the prediction accuracy of the corresponding models. Therefore, the user of the sports motion analysis support system 1 (the athlete whose image data and scores are stored in the data storage unit 2, or his or her coach, for example) can check the prediction accuracy for each of the time segments ordered by time and, based on the change in prediction accuracy, grasp the time zone in which the degree of improvement in the prediction accuracy of the scores predicted using the models is large. As described above, if the prediction accuracy of a model is good, the predicted scores obtained with that model statistically represent well the tendency of the score according to the form. Being able to grasp the time zone in which the degree of improvement in prediction accuracy is large therefore means being able to grasp the time zone in which the form has a great influence on the result. Thus, according to this embodiment, the user can be assisted in grasping, within the time during which a series of motions in a sport (in this example, the long jump) is performed, the time zone in which the form has a great influence on the result.
For example, suppose the graph illustrated in FIG. 5 is displayed. For time segments a and b, the prediction accuracy of the corresponding models increases only slightly, and the same is true for time segments c to g. On the other hand, the prediction accuracy of the model corresponding to time segment c is greatly improved compared with that of the model corresponding to time segment b. Therefore, the user can recognize that the time zone corresponding to the difference between time segment c and time segment b (specifically, the time zone from time "-1.5" to time "-1.0"; see FIG. 3) is a time zone in which the degree of improvement in the prediction accuracy of the scores predicted using the models is large. The user can thus grasp that this time zone is, within the time during which the series of long jump motions is performed, a time zone in which the form has a great influence on the result.
If such a time zone (in this example, the time zone from time "-1.5" to time "-1.0") can be grasped, the user can contribute to improving the athlete's score by checking the form in that time zone with particular attention.
Embodiment 2.
The sports motion analysis support system 1 of the first embodiment enables the user to grasp the time zone in which the degree of improvement in the prediction accuracy of the scores predicted using the models is large, for example by displaying the graph illustrated in FIG. 5. The sports motion analysis support system according to the second embodiment of the present invention identifies a time segment from which such a time zone can be identified, and displays that time segment.
Like the sports motion analysis support system 1 of the first embodiment, the sports motion analysis support system according to the second embodiment of the present invention can be represented by the blocks shown in FIG. 1, so the second embodiment will be described using FIG. 1. Descriptions of matters similar to those of the first embodiment are omitted as appropriate.
The data storage unit 2, the time segment image extraction unit 3, the learning unit 4, and the prediction unit 5 are the same as those of the first embodiment, and their descriptions are omitted.
The evaluation unit 6 of the second embodiment performs the same operations as the evaluation unit 6 of the first embodiment, and additionally performs the operations described below.
The evaluation unit 6 identifies a time segment from which a time zone in which the degree of improvement in the prediction accuracy of the scores predicted using the models is large can be identified.
For example, the evaluation unit 6 orders the time segments based on time and calculates, for each pair of adjacent time segments, the difference between the prediction accuracies of the corresponding models. In this case, the evaluation unit 6 calculates the value obtained by subtracting the prediction accuracy of the model corresponding to time segment a from the prediction accuracy of the model corresponding to time segment b. For the other pairs of adjacent time segments, such as the pair of time segments b and c and the pair of time segments c and d, the evaluation unit 6 likewise calculates the value obtained by subtracting the prediction accuracy of the model corresponding to the earlier time segment from the prediction accuracy of the model corresponding to the later time segment.
The evaluation unit 6 may then, for example, identify, as the time segment from which a time zone with a large degree of improvement in prediction accuracy can be identified, the later time segment of the pair for which the difference in prediction accuracy is largest. For example, suppose the difference in prediction accuracy for the pair of time segments b and c is the largest of all the differences. In this case, the evaluation unit 6 identifies time segment c.
Alternatively, for each pair of time segments for which the difference in prediction accuracy is equal to or greater than a predetermined threshold, the later time segment may be identified as a time segment from which a time zone with a large degree of improvement in prediction accuracy can be identified. For example, suppose the difference in prediction accuracy for the pair of time segments b and c is equal to or greater than the threshold. In this case, the evaluation unit 6 identifies time segment c. The threshold may be determined in advance.
The method of identifying such a time segment is not limited to the above methods, and other methods may be used.
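The adjacent-segment comparison described above can be sketched as follows, reusing the ordering and accuracy values from the earlier sketches; the function name and the combined max/threshold interface are assumptions.

```python
def find_large_improvement_segments(accuracy_by_segment, ordered_segments, threshold=None):
    """Compute the accuracy difference between each pair of adjacent segments.
    With no threshold, return the single segment whose model improved most over
    the preceding segment; with a threshold, return every segment whose
    improvement is at least the threshold."""
    diffs = {
        later: accuracy_by_segment[later] - accuracy_by_segment[earlier]
        for earlier, later in zip(ordered_segments, ordered_segments[1:])
    }
    if threshold is None:
        return max(diffs, key=diffs.get)
    return [seg for seg, diff in diffs.items() if diff >= threshold]
```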
The display unit 7 displays the time segment identified by the evaluation unit 6.
FIG. 7 is a flowchart showing an example of the processing flow of the second embodiment of the present invention. Steps S1 to S4 in the second embodiment (see FIG. 7) are the same as steps S1 to S4 in the first embodiment (see FIG. 6), and their descriptions are omitted.
After step S4, the evaluation unit 6 identifies a time segment from which a time zone in which the degree of improvement in the prediction accuracy of the scores predicted using the models is large can be identified (step S11). An example of this operation of the evaluation unit 6 has already been described, so its description is omitted here.
After step S11, the display unit 7 displays the time segment identified in step S11 (step S12).
FIG. 8 is an explanatory diagram showing an example of how the time segment is displayed in step S12. Here, it is assumed that time segment c was identified in step S11. The display unit 7 displays, for each time segment, an icon 21 representing the identification information of the time segment, together with the start time and end time of the time segment. The display unit 7 then displays the icon 21 of the time segment identified in step S11 (in this example, the icon 21 of time segment c) in a manner different from the icons 21 of the other time segments. The example shown in FIG. 8 illustrates the case where the frame line of the icon 21 of time segment c is displayed thicker than the frame lines of the other icons 21; that is, by displaying the frame line of the icon 21 of time segment c thicker than the others, the display unit 7 indicates that time segment c is the time segment identified in step S11.
FIG. 8 shows one example of the display mode in step S12, and the display mode in step S12 is not limited to the example shown in FIG. 8.
In this embodiment, the evaluation unit 6 identifies a time segment from which a time zone in which the degree of improvement in the prediction accuracy of the scores predicted using the models is large can be identified, and the display unit 7 displays that time segment. Therefore, as in the first embodiment, the user can grasp, within the time during which the series of long jump motions is performed, the time zone in which the form has a great influence on the result. For example, suppose the evaluation unit 6 identifies time segment c in step S11 and the display unit 7 displays time segment c. In this case, the user can recognize that the time zone corresponding to the difference between time segment c and the immediately preceding time segment b (specifically, the time zone from time "-1.5" to time "-1.0"; see FIGS. 3 and 8) is a time zone in which the degree of improvement in the prediction accuracy of the scores predicted using the models is large. The user can thus grasp that this time zone is, within the time during which the series of long jump motions is performed, a time zone in which the form has a great influence on the result.
Next, various modifications of the second embodiment will be described.
In step S12, the display unit 7 may also perform the operation of step S5 of the first embodiment. That is, the display unit 7 may display the time segment identified in step S11 and also display the relationship between the time segments and the prediction accuracies of the models corresponding to those time segments. For example, the display unit 7 may display time segment c in the manner illustrated in FIG. 8 and also display the graph illustrated in FIG. 5.
When a time segment is designated from outside (for example, by the user), the sports motion analysis support system may further display information about that time segment. FIG. 9 is a block diagram showing one modification of the second embodiment. The sports motion analysis support system 1 shown in FIG. 9 includes an operation unit 8 in addition to the components shown in FIG. 1. The operation up to step S12 is the same as the operation described in the second embodiment or in the modification described above. In the following description, it is assumed that the display unit 7 displays the icons 21 of the time segments in step S12 in the manner illustrated in FIG. 8.
The operation unit 8 is a user interface with which the user designates a time segment, and is realized by, for example, a mouse. The user operates the operation unit 8 to designate an icon 21 displayed as illustrated in FIG. 8. The display unit 7 accepts the designation of the time segment in response to that operation.
Here it is assumed that the time segment identified in step S11 is designated, for example, that an operation such as a click is performed on the icon 21 of time segment c among the icons 21 shown in FIG. 8.
When the display unit 7 receives the designation of time segment c from outside, it displays the prediction accuracy of the model corresponding to time segment c.
Further, the display unit 7 identifies the image data for which the predicted value of the result calculated in step S3 using that model (the model corresponding to time segment c) is the largest, and the image data for which the predicted value of the result calculated in step S3 using that model is the smallest. In the long jump example, a larger result value (jump distance) means a better result, and a smaller result value means a worse result. Depending on the type of sport, a larger result value may mean a worse result and a smaller result value a better result. In either case, the display unit 7 identifies the image data for which the predicted value calculated using the model corresponding to the designated time segment is the largest and the image data for which the predicted value calculated using that model is the smallest. By identifying the image data in this way, the display unit 7 identifies the image data predicted to have the best result and the image data predicted to have the worst result. Here, the maximum means the maximum among the predicted values calculated using the model corresponding to the designated time segment c, and the minimum means the minimum among the predicted values calculated using the model corresponding to the designated time segment c.
The display unit 7 displays the portion of the moving image corresponding to the designated time segment c in the image data with the largest predicted value, and the portion of the moving image corresponding to the designated time segment c in the image data with the smallest predicted value.
FIG. 10 is a schematic diagram showing an example of the screen displayed by the display unit 7 when a time segment is designated after step S12. The prediction accuracy display field 31 is a field that displays the prediction accuracy of the model corresponding to the designated time segment. The first image display field 32 is a field that displays the portion of the moving image corresponding to the designated time segment in the image data with the largest predicted value. The second image display field 32 is a field that displays the portion of the moving image corresponding to the designated time segment in the image data with the smallest predicted value.
When a time segment is designated, the display unit 7 displays the screen illustrated in FIG. 10.
In this modification, the case where the display unit 7 identifies the image data with the largest predicted value and the image data with the smallest predicted value has been described. The display unit 7 may instead identify the image data whose predicted values, calculated using the model corresponding to the designated time segment, rank from first to a predetermined position from the top, and the image data whose predicted values rank from first to a predetermined position from the bottom. The display unit 7 may then display, for each of those pieces of image data, the portion of the moving image corresponding to the designated time segment. The values representing the predetermined top and bottom positions may be determined in advance. Alternatively, the sports motion analysis support system 1 may be provided with an input device (user interface) with which the user inputs the values representing the predetermined top and bottom positions into the sports motion analysis support system 1.
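A minimal sketch of this best/worst (or top-N/bottom-N) selection is given below, assuming the predicted values for the designated time segment are available as a dictionary keyed by an image-data identifier; the function and parameter names are hypothetical.

    def pick_extreme_clips(predicted, top_n=1, bottom_n=1):
        """Return the image-data IDs with the top-N largest and bottom-N
        smallest predicted values for one designated time segment.
        `predicted` maps image-data ID -> predicted result value,
        e.g. {"#1": 6.8, "#2": 5.9, "#3": 7.2} for long-jump distances."""
        ranked = sorted(predicted, key=predicted.get, reverse=True)
        best = ranked[:top_n]        # predicted to give the best results
        worst = ranked[-bottom_n:]   # predicted to give the worst results
        return best, worst

    best, worst = pick_extreme_clips({"#1": 6.8, "#2": 5.9, "#3": 7.2})
    print(best, worst)  # ['#3'] ['#2']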
According to this modification, it is possible to display the moving image predicted to give the best result, limited to the time zone in which the form has a large influence on the result, and the moving image predicted to give the worst result, likewise limited to that time zone. By analyzing and comparing these images, they can be used to improve the form, to discover habits in the form, and so on.
The display unit 7 may perform the above display operation only when the time segment identified in step S11 is designated. Alternatively, when an arbitrary time segment is designated, the display unit 7 may perform the above display operation according to the designated time segment.
In the first embodiment and in the second embodiment and its modifications described above, the result of the motion is a performance score expressed as a numerical value. The result of a motion in sports may instead be an event. Below, the case where the present invention is applied to a PK (penalty kick) scene in soccer is described as an example, and the result of the PK motion is assumed to be one of two events: “the ball flew to the right” or “the ball flew to the left.” Description of matters that are the same as in the first embodiment and in the second embodiment and its modifications is omitted.
The event “the ball flew to the right” is represented by “1”, and the event “the ball flew to the left” is represented by “0”. That is, the result (event) of the PK motion is represented by a binary value.
The data storage unit 2 stores in advance a plurality of pieces of data in which moving image data representing a series of motions of a person taking the PK (hereinafter referred to as a player) is associated with the result of that motion (event “1” or event “0”). The result of this motion can be regarded as, for example, a nominal scale.
In this example, a plurality of time segments may be defined with reference to the point in time at which the moving image shows the player's kicking motion.
In this example, the learning unit 4 learns, for each of the plurality of time segments, a model representing the relationship between the motion in the time segment and the probability that event “1” or event “0” occurs. The probability that event “1” occurs or that event “0” occurs is represented by a single objective variable. Here, the range of values the objective variable can take is 0 to 1. If the value of the objective variable is greater than 0.5, it represents the probability that event “1” (that is, the event “the ball flies to the right”) occurs: the closer the value is to 1, the higher the probability that event “1” occurs, and the closer the value is to 0.5, the lower that probability. If the value of the objective variable is less than 0.5, it represents the probability that event “0” (that is, the event “the ball flies to the left”) occurs: the closer the value is to 0, the higher the probability that event “0” occurs, and the closer the value is to 0.5, the lower that probability. When learning such a model, the learning unit 4 may use, for example, logistic regression analysis as the machine learning algorithm.
For each piece of image data, the learning unit 4 determines the combination of the set of still images corresponding to the time segment of interest and the result (“1” or “0”), and, using the combinations determined for the respective pieces of image data as training data, learns the model corresponding to the time segment of interest by machine learning (for example, logistic regression analysis).
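As a sketch of this learning step under stated assumptions, the example below trains one logistic regression model for one time segment with scikit-learn. Flattening the segment's still images into a single feature vector is a deliberately simple stand-in for a real feature extractor, and the use of scikit-learn itself is an assumption; the embodiment only requires some machine learning algorithm such as logistic regression analysis.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_segment_model(segment_frames, outcomes):
        """Train one model for one time segment.
        segment_frames: one entry per piece of image data; each entry is an
            array of the still images belonging to this time segment (every
            piece of image data is assumed to contribute the same number of
            frames, so the flattened vectors have equal length).
        outcomes: results, 1 ("ball flew right") or 0 ("ball flew left")."""
        X = np.array([np.asarray(f, dtype=float).ravel() for f in segment_frames])
        y = np.array(outcomes)
        model = LogisticRegression(max_iter=1000)
        model.fit(X, y)
        return model

    # The probability of event "1" for new image data is then obtained with
    # model.predict_proba(X_new)[:, 1], which plays the role of the objective
    # variable described above.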
The learning unit 4 learns a model for each time segment.
For each piece of image data, the prediction unit 5 calculates a predicted value of the result using the model corresponding to each time segment. The predicted value is the value of the objective variable and represents the probability that event “1” or event “0” occurs. The probability (the value of the objective variable) calculated by the prediction unit 5 is a continuous value in the range 0 to 1. The value of the objective variable calculated by the prediction unit 5 can therefore be regarded as, for example, an ordinal scale.
When the evaluation unit 6 calculates, for each time segment, the prediction accuracy of the model corresponding to that time segment, the evaluation unit 6 may perform the following processing. The true value of the result is “1” or “0” and is stored in the data storage unit 2 in association with the image data.
Here, focusing on time segment a, the case where the evaluation unit 6 calculates the prediction accuracy of the model corresponding to time segment a is described as an example. It is assumed that the prediction unit 5 has calculated a predicted value for each piece of image data using the model corresponding to time segment a, so the number of predicted values calculated for time segment a equals the number of pieces of image data. The evaluation unit 6 regards each predicted value as “1” or “0”: if a predicted value is greater than 0.5, it is regarded as “1”, and if it is less than 0.5, it is regarded as “0”. The evaluation unit 6 then determines whether each predicted value regarded as “1” or “0” matches the true value of the result, and counts the number of predicted values that match the true value.
For example, suppose that the predicted value calculated by applying the model corresponding to time segment a to image data #1 is “0.8”, and that the true value of the result corresponding to image data #1 is “1”. Since 0.8 > 0.5, the evaluation unit 6 regards the predicted value “0.8” as “1” and determines that it matches the true value “1”. If the true value had been “0”, the evaluation unit 6 would regard the predicted value “0.8” as “1” and determine that it does not match the true value “0”.
Also, for example, suppose that the predicted value calculated by applying the model corresponding to time segment a to image data #2 is “0.3”, and that the true value of the result corresponding to image data #2 is “0”. Since 0.3 < 0.5, the evaluation unit 6 regards the predicted value “0.3” as “0” and determines that it matches the true value “0”. If the true value had been “1”, the evaluation unit 6 would regard the predicted value “0.3” as “0” and determine that it does not match the true value “1”.
In this way, for each predicted value corresponding to a piece of image data, the evaluation unit 6 regards the predicted value as “1” or “0” and counts the number of predicted values that match the true value. The evaluation unit 6 then takes the value obtained by dividing that count by the number of predicted values calculated for time segment a as the prediction accuracy of the model corresponding to time segment a. The value obtained by dividing the count by the number of predicted values can thus also be called the match rate.
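The match-rate calculation can be written in a few lines, as sketched below; it assumes the predicted probabilities and the true labels for one time segment are held in two parallel lists, and a value of exactly 0.5 is assigned to “0” here, a borderline case the description above leaves open.

    def match_rate(predicted_probs, true_labels):
        """Prediction accuracy (match rate) of one time segment's model.
        predicted_probs: predicted probabilities of event "1", one per piece
            of image data; true_labels: true results (1 or 0), same order."""
        matches = 0
        for p, t in zip(predicted_probs, true_labels):
            label = 1 if p > 0.5 else 0  # a value of exactly 0.5 falls to "0"
            if label == t:
                matches += 1
        return matches / len(true_labels)

    print(match_rate([0.8, 0.3, 0.6], [1, 0, 0]))  # -> 0.666...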
The evaluation unit 6 likewise calculates, for each of the other time segments, the prediction accuracy of the model corresponding to that time segment.
The other points are the same as in the first and second embodiments described above. Therefore, the case where the result of the motion is an event is also applicable to the first and second embodiments, and the same effects as in the first and second embodiments can be obtained. That is, the user can grasp, within the time over which the series of PK motions is performed, the time zone in which the form has a large influence on the result (in this example, the direction in which the ball travels).
In particular, in this example, if data on competitors (players of the opposing team) is stored in the data storage unit 2, the user can grasp the time zone in which the form has a large influence on the result and then focus on checking the form in that time zone. This makes it easier for the goalkeeper to find the habits of the opposing team's players, and can contribute to improving the goalkeeper's PK save rate.
Also, as shown in one of the modifications of the second embodiment, the result of the motion may be an event even in the case where a time segment is designated from outside after step S12. For example, suppose that in step S12 the display unit 7 displays the icons 21 of the time segments in the manner illustrated in FIG. 8, and that time segment c is then designated. In this case, the display unit 7 displays the prediction accuracy of the model corresponding to time segment c.
Further, the display unit 7 identifies the image data with the largest predicted value calculated using that model and the image data with the smallest predicted value calculated using that model. In this example, the larger the value, the higher the probability that the ball flies to the right after the kick, and the smaller the value, the higher the probability that the ball flies to the left after the kick. That is, the display unit 7 identifies the image data of the form predicted to have the highest probability that the ball flies to the right after the kick and the image data of the form predicted to have the highest probability that the ball flies to the left after the kick.
The display unit 7 displays the portion of the moving image corresponding to the designated time segment c in the image data with the largest predicted value, and the portion of the moving image corresponding to the designated time segment c in the image data with the smallest predicted value.
The screen displayed by the display unit 7 when a time segment is designated may be the same as the screen illustrated in FIG. 10. Text such as “Result: good” and “Result: poor” shown in FIG. 10 may be displayed instead as “Ball direction: right” and “Ball direction: left”, respectively.
In this case, it is possible to display the moving image in which the ball is predicted to be most likely to fly to the right, limited to the time zone in which the form has a large influence on the result, and the moving image in which the ball is predicted to be most likely to fly to the left, likewise limited to that time zone. By analyzing and comparing these images, they can be used to improve the form, to discover habits in the form, and so on.
In each embodiment of the present invention and its modifications, the sports motion analysis support system 1 may include a data acquisition unit that acquires the data to be stored in the data storage unit 2 from outside. FIG. 11 is a block diagram showing a configuration example with a data acquisition unit. The data storage unit 2, time segment image extraction unit 3, learning unit 4, prediction unit 5, evaluation unit 6, and display unit 7 are the same as those elements in the first embodiment and in the second embodiment and its modifications, and their description is omitted.
The data acquisition unit 9 acquires a plurality of pieces of data in which moving image data representing a series of motions in a sport is associated with the result of that motion, and stores them in the data storage unit 2.
For example, suppose that a plurality of such pieces of data are stored in an external device (not shown). In this case, the data acquisition unit 9 accesses that device, acquires the pieces of data from it, and stores them in the data storage unit 2. The processing after the data acquisition unit 9 has stored the pieces of data in the data storage unit 2 is the same as the processing described in the first embodiment or in the second embodiment and its modifications.
The data acquisition unit 9 is realized by, for example, the CPU of a computer operating according to the sports motion analysis support program.
The above description used the long jump and the PK in soccer as examples, but the sports motions to which the present invention is applied are not limited to these. For example, a plurality of pieces of data associating image data of a swing motion in golf with the result (flight distance) may be stored in the data storage unit 2; in this case, the point in time at which the ball is hit with the golf club may be used as the reference time. Also, for example, a plurality of pieces of data associating image data of a volleyball setter's tossing motion with a result indicating whether the ball flew to the right or to the left may be stored in the data storage unit 2; in this case, the point in time at which the setter tosses may be used as the reference time. The present invention is also applicable to, for example, a pitcher's pitching motion in baseball, rugby formations, or American football formations. In this way, the present invention is applicable to motions in a variety of sports.
FIG. 12 is a schematic block diagram showing a configuration example of a computer according to each embodiment of the present invention. The computer 1000 includes a CPU 1001, a main storage device 1002, an auxiliary storage device 1003, an interface 1004, a display device 1005, and an input device 1006. The input device 1006 corresponds to the operation unit 8 shown in FIG. 9.
The sports motion analysis support system 1 of each embodiment of the present invention is implemented on the computer 1000. The operation of the sports motion analysis support system 1 is stored in the auxiliary storage device 1003 in the form of a program (the sports motion analysis support program). The CPU 1001 reads the program from the auxiliary storage device 1003, loads it into the main storage device 1002, and executes the above processing according to the program.
The auxiliary storage device 1003 is an example of a non-transitory tangible medium. Other examples of non-transitory tangible media include magnetic disks, magneto-optical disks, CD-ROMs, DVD-ROMs, and semiconductor memories connected via the interface 1004. When this program is distributed to the computer 1000 via a communication line, the computer 1000 that receives the distribution may load the program into the main storage device 1002 and execute the above processing.
The program may also be one for realizing part of the processing described above. Furthermore, the program may be a differential program that realizes the processing described above in combination with another program already stored in the auxiliary storage device 1003.
Part or all of the constituent elements may be realized by general-purpose or dedicated circuitry, processors, or combinations thereof. These may be configured by a single chip or by a plurality of chips connected via a bus. Part or all of the constituent elements may also be realized by a combination of the above-described circuitry and the like and a program.
When part or all of the constituent elements are realized by a plurality of information processing devices, circuits, or the like, the plurality of information processing devices, circuits, or the like may be arranged in a centralized or distributed manner. For example, the information processing devices, circuits, and the like may be realized in a form in which they are connected via a communication network, such as a client-and-server system or a cloud computing system.
Next, an outline of the present invention will be described. FIG. 13 is a block diagram showing an outline of the present invention. The sports motion analysis support system of the present invention includes a data storage unit 2, a learning unit 4, and an evaluation unit 6.
The data storage unit 2 stores a plurality of pieces of data in which image data of a moving image representing a series of motions in a sport is associated with the result of the motion.
The learning unit 4 uses the plurality of pieces of data to learn, for each of a plurality of time segments defined with reference to a point in time representing a predetermined motion, a model representing the relationship between the motion in that time segment and the result corresponding to the motion.
The evaluation unit 6 calculates, for each time segment, the prediction accuracy of the results predicted using the model.
With such a configuration, the user can be supported in grasping, within the time over which a series of motions in a sport is performed, the time zone in which the form has a large influence on the result.
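Putting the pieces together, the sketch below loops over the time segments, fits one model per segment, and reports a per-segment accuracy. Using linear regression on flattened frames and cross-validated R² as the accuracy measure are assumptions chosen for brevity; the embodiment does not prescribe a particular feature representation, learner, or accuracy metric beyond what is described above.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    def accuracy_per_segment(frames_by_segment, results):
        """Fit one model per time segment and estimate its prediction accuracy.
        frames_by_segment: dict mapping segment name -> list (one entry per
            piece of image data) of that segment's frame arrays; every entry
            is assumed to contain the same number of frames.
        results: numerical results (e.g., jump distances), one per image data.
        Returns a dict mapping segment name -> cross-validated R^2 score."""
        y = np.array(results)
        scores = {}
        for name, frame_sets in frames_by_segment.items():
            X = np.array([np.asarray(f, dtype=float).ravel() for f in frame_sets])
            model = LinearRegression()
            scores[name] = cross_val_score(model, X, y, cv=3).mean()
        return scores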
FIG. 14 is another block diagram showing an outline of the present invention. The sports motion analysis support system of the present invention may include a data storage unit 2, a learning unit 4, and a specifying unit 16.
The data storage unit 2 and the learning unit 4 are the same as the data storage unit 2 and the learning unit 4 shown in FIG. 13.
The specifying unit 16 (for example, the evaluation unit 6 in the second embodiment) identifies the time segment from which the time zone with a large improvement in the prediction accuracy of the results predicted using the models can be identified.
In this case as well, the user can be supported in grasping, within the time over which a series of motions in a sport is performed, the time zone in which the form has a large influence on the result.
The above embodiments can also be described as in the following supplementary notes, but are not limited to the following.
(Appendix 1)
A sports motion analysis support system comprising:
a data storage unit that stores a plurality of pieces of data, each associating image data of a moving image representing a series of motions in a sport with the result of the motion;
a learning unit that uses the plurality of pieces of data to learn, for each of a plurality of time segments defined with reference to a point in time representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion; and
an evaluation unit that calculates, for each time segment, the prediction accuracy of the results predicted using the model.
(Appendix 2)
The sports motion analysis support system according to Appendix 1, further comprising a display unit that displays a graph showing the relationship between the time segments and the prediction accuracies of the models corresponding to the time segments.
(Appendix 3)
The sports motion analysis support system according to Appendix 1 or 2, wherein the evaluation unit identifies the time segment from which the time zone with a large improvement in the prediction accuracy of the results predicted using the model can be identified.
(Appendix 4)
The sports motion analysis support system according to any one of Appendices 1 to 3, wherein the display unit displays the prediction accuracy of the model corresponding to a time segment designated from outside, identifies a predetermined number of pieces of image data on the basis of the predicted values predicted using the model, and displays the moving image of the time segment in each identified piece of image data.
(Appendix 5)
The sports motion analysis support system according to any one of Appendices 1 to 4, wherein the result of the motion associated with the image data is a numerical value indicating a performance score, and the learning unit learns, for each time segment, a model representing the relationship between the motion in the time segment and the numerical value indicating the performance score.
(Appendix 6)
The sports motion analysis support system according to any one of Appendices 1 to 4, wherein the result of the motion associated with the image data is an event, and the learning unit learns, for each time segment, a model representing the relationship between the motion in the time segment and the probability that the event occurs.
(Appendix 7)
A sports motion analysis support system comprising:
a data storage unit that stores a plurality of pieces of data, each associating image data of a moving image representing a series of motions in a sport with the result of the motion;
a learning unit that uses the plurality of pieces of data to learn, for each of a plurality of time segments defined with reference to a point in time representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion; and
a specifying unit that identifies the time segment from which the time zone with a large improvement in the prediction accuracy of the results predicted using the model can be identified.
(Appendix 8)
A sports motion analysis support method in which a computer comprising a data storage unit that stores a plurality of pieces of data, each associating image data of a moving image representing a series of motions in a sport with the result of the motion,
learns, using the plurality of pieces of data, for each of a plurality of time segments defined with reference to a point in time representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion, and
calculates, for each time segment, the prediction accuracy of the results predicted using the model.
(Appendix 9)
A sports motion analysis support method in which a computer comprising a data storage unit that stores a plurality of pieces of data, each associating image data of a moving image representing a series of motions in a sport with the result of the motion,
learns, using the plurality of pieces of data, for each of a plurality of time segments defined with reference to a point in time representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion, and
identifies the time segment from which the time zone with a large improvement in the prediction accuracy of the results predicted using the model can be identified.
(Appendix 10)
A sports motion analysis support program installed in a computer comprising a data storage unit that stores a plurality of pieces of data, each associating image data of a moving image representing a series of motions in a sport with the result of the motion, the program causing the computer to execute:
a learning process of learning, using the plurality of pieces of data, for each of a plurality of time segments defined with reference to a point in time representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion; and
an evaluation process of calculating, for each time segment, the prediction accuracy of the results predicted using the model.
(Appendix 11)
A sports motion analysis support program installed in a computer comprising a data storage unit that stores a plurality of pieces of data, each associating image data of a moving image representing a series of motions in a sport with the result of the motion, the program causing the computer to execute:
a learning process of learning, using the plurality of pieces of data, for each of a plurality of time segments defined with reference to a point in time representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion; and
a specifying process of identifying the time segment from which the time zone with a large improvement in the prediction accuracy of the results predicted using the model can be identified.
Although the present invention has been described above with reference to the embodiments, the present invention is not limited to the above embodiments. Various changes that can be understood by those skilled in the art can be made to the configuration and details of the present invention within the scope of the present invention.
Industrial applicability
The present invention can be suitably applied to a sports motion analysis support system that supports motion analysis in sports.
DESCRIPTION OF SYMBOLS
1 Sports motion analysis support system
2 Data storage unit
3 Time segment image extraction unit
4 Learning unit
5 Prediction unit
6 Evaluation unit
7 Display unit
8 Operation unit
9 Data acquisition unit

Claims (11)

1.  A sports motion analysis support system comprising:
a data storage unit that stores a plurality of pieces of data, each associating image data of a moving image representing a series of motions in a sport with the result of the motion;
a learning unit that uses the plurality of pieces of data to learn, for each of a plurality of time segments defined with reference to a point in time representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion; and
an evaluation unit that calculates, for each time segment, the prediction accuracy of the results predicted using the model.
2.  The sports motion analysis support system according to claim 1, further comprising a display unit that displays a graph showing the relationship between the time segments and the prediction accuracies of the models corresponding to the time segments.
3.  The sports motion analysis support system according to claim 1 or 2, wherein the evaluation unit identifies the time segment from which the time zone with a large improvement in the prediction accuracy of the results predicted using the model can be identified.
4.  The sports motion analysis support system according to any one of claims 1 to 3, wherein the display unit displays the prediction accuracy of the model corresponding to a time segment designated from outside, identifies a predetermined number of pieces of image data on the basis of the predicted values predicted using the model, and displays the moving image of the time segment in each identified piece of image data.
5.  The sports motion analysis support system according to any one of claims 1 to 4, wherein the result of the motion associated with the image data is a numerical value indicating a performance score, and the learning unit learns, for each time segment, a model representing the relationship between the motion in the time segment and the numerical value indicating the performance score.
6.  The sports motion analysis support system according to any one of claims 1 to 4, wherein the result of the motion associated with the image data is an event, and the learning unit learns, for each time segment, a model representing the relationship between the motion in the time segment and the probability that the event occurs.
7.  A sports motion analysis support system comprising:
a data storage unit that stores a plurality of pieces of data, each associating image data of a moving image representing a series of motions in a sport with the result of the motion;
a learning unit that uses the plurality of pieces of data to learn, for each of a plurality of time segments defined with reference to a point in time representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion; and
a specifying unit that identifies the time segment from which the time zone with a large improvement in the prediction accuracy of the results predicted using the model can be identified.
8.  A sports motion analysis support method in which a computer comprising a data storage unit that stores a plurality of pieces of data, each associating image data of a moving image representing a series of motions in a sport with the result of the motion,
learns, using the plurality of pieces of data, for each of a plurality of time segments defined with reference to a point in time representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion, and
calculates, for each time segment, the prediction accuracy of the results predicted using the model.
9.  A sports motion analysis support method in which a computer comprising a data storage unit that stores a plurality of pieces of data, each associating image data of a moving image representing a series of motions in a sport with the result of the motion,
learns, using the plurality of pieces of data, for each of a plurality of time segments defined with reference to a point in time representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion, and
identifies the time segment from which the time zone with a large improvement in the prediction accuracy of the results predicted using the model can be identified.
10.  A sports motion analysis support program installed in a computer comprising a data storage unit that stores a plurality of pieces of data, each associating image data of a moving image representing a series of motions in a sport with the result of the motion, the program causing the computer to execute:
a learning process of learning, using the plurality of pieces of data, for each of a plurality of time segments defined with reference to a point in time representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion; and
an evaluation process of calculating, for each time segment, the prediction accuracy of the results predicted using the model.
11.  A sports motion analysis support program installed in a computer comprising a data storage unit that stores a plurality of pieces of data, each associating image data of a moving image representing a series of motions in a sport with the result of the motion, the program causing the computer to execute:
a learning process of learning, using the plurality of pieces of data, for each of a plurality of time segments defined with reference to a point in time representing a predetermined motion, a model representing the relationship between the motion in the time segment and the result corresponding to the motion; and
a specifying process of identifying the time segment from which the time zone with a large improvement in the prediction accuracy of the results predicted using the model can be identified.
PCT/JP2016/088884 2016-12-27 2016-12-27 Sport motion analysis support system, method and program WO2018122956A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2018558560A JP6677319B2 (en) 2016-12-27 2016-12-27 Sports motion analysis support system, method and program
PCT/JP2016/088884 WO2018122956A1 (en) 2016-12-27 2016-12-27 Sport motion analysis support system, method and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2016/088884 WO2018122956A1 (en) 2016-12-27 2016-12-27 Sport motion analysis support system, method and program

Publications (1)

Publication Number Publication Date
WO2018122956A1 true WO2018122956A1 (en) 2018-07-05

Family

ID=62707143

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/088884 WO2018122956A1 (en) 2016-12-27 2016-12-27 Sport motion analysis support system, method and program

Country Status (2)

Country Link
JP (1) JP6677319B2 (en)
WO (1) WO2018122956A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7344510B2 (en) 2019-11-05 2023-09-14 テンソル・コンサルティング株式会社 Motion analysis system, motion analysis method, and motion analysis program
KR102284802B1 (en) * 2020-10-06 2021-08-02 김세원 Apparatus and method for providing condition information of player regarding sports game

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080312010A1 (en) * 2007-05-24 2008-12-18 Pillar Vision Corporation Stereoscopic image capture with performance outcome prediction in sporting environments
WO2015080063A1 (en) * 2013-11-27 2015-06-04 株式会社ニコン Electronic apparatus
JP2015119833A (en) * 2013-12-24 2015-07-02 カシオ計算機株式会社 Exercise support system, exercise support method, and exercise support program

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021177316A (en) * 2020-05-08 2021-11-11 株式会社電通 Game result prediction system
WO2021225007A1 (en) * 2020-05-08 2021-11-11 株式会社電通 Outcome prediction system
CN113939837A (en) * 2020-05-08 2022-01-14 株式会社电通 Win-loss prediction system
JP7078667B2 (en) 2020-05-08 2022-05-31 株式会社電通 Win / Loss Prediction System
WO2022049694A1 (en) * 2020-09-03 2022-03-10 日本電信電話株式会社 Training device, estimation device, training method, and training program
JPWO2022049694A1 (en) * 2020-09-03 2022-03-10
JP7393701B2 (en) 2020-09-03 2023-12-07 日本電信電話株式会社 Learning device, estimation device, learning method, and learning program
WO2022230504A1 (en) * 2021-04-28 2022-11-03 オムロン株式会社 Movement improvement device, movement improvement method, movement improvement program, and movement improvement system

Also Published As

Publication number Publication date
JP6677319B2 (en) 2020-04-08
JPWO2018122956A1 (en) 2019-03-28

Similar Documents

Publication Publication Date Title
WO2018122956A1 (en) Sport motion analysis support system, method and program
Blank et al. Sensor-based stroke detection and stroke type classification in table tennis
US10803762B2 (en) Body-motion assessment device, dance assessment device, karaoke device, and game device
US11967086B2 (en) Player trajectory generation via multiple camera player tracking
JP6704606B2 (en) Judgment system and judgment method
US11839805B2 (en) Computer vision and artificial intelligence applications in basketball
US11798318B2 (en) Detection of kinetic events and mechanical variables from uncalibrated video
US20210170230A1 (en) Systems and methods for training players in a sports contest using artificial intelligence
US10664691B2 (en) Method and system for automatic identification of player
WO2019106672A1 (en) Method of real time monitoring of a person during an event and event dynamics system thereof
JP2021531057A (en) Dynamic region determination
JP6677320B2 (en) Sports motion analysis support system, method and program
JP6653423B2 (en) Play section extracting method and play section extracting apparatus
JP2020054748A (en) Play analysis device and play analysis method
CN110314368B (en) Auxiliary method, device, equipment and readable medium for billiard ball hitting
RU2599699C1 (en) Method of detecting and analysing competition game activities of athletes
JP6760610B2 (en) Position measurement system and position measurement method
Tanaka et al. Automatic Edge Error Judgment in Figure Skating Using 3D Pose Estimation from a Monocular Camera and IMUs
Kuruppu et al. Comparison of different template matching algorithms in high speed sports motion tracking
CN114495254A (en) Action comparison method, system, equipment and medium
Malawski et al. Automatic analysis of techniques and body motion patterns in sport
US20220343649A1 (en) Machine learning for basketball rule violations and other actions
JP2023044410A (en) Tactical analyzer and method for controlling the same, and control program
EP4325448A1 (en) Data processing apparatus and method
EP4325443A1 (en) Data processing apparatus and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16924989

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018558560

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16924989

Country of ref document: EP

Kind code of ref document: A1