WO2023219049A1 - Motion analysis system and motion analysis method

Motion analysis system and motion analysis method

Info

Publication number
WO2023219049A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion analysis
motion
analysis system
unit
target event
Application number
PCT/JP2023/017219
Other languages
French (fr)
Japanese (ja)
Inventor
貴紀 生田
健汰 西村
義之 川口
Original Assignee
Kyocera Corporation
Application filed by Kyocera Corporation
Publication of WO2023219049A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B69/00Training appliances or apparatus for special sports

Definitions

  • the present disclosure relates to a motion analysis system and a motion analysis method that analyze body movements.
  • There is a known technique for improving a player's skill by attaching a sensor to a racket and analyzing the movement of the racket (for example, Patent Document 1 and Patent Document 2).
  • Patent Document 1 discloses a technique in which a table tennis racket is provided with an acceleration sensor and a contact sensor.
  • Patent Document 2 discloses a technique for displaying an image in which a subject is photographed and an index indicating the subject's movement that is generated based on a measurement result obtained by measuring the subject's movement using a sensor.
  • a motion analysis system includes: a first acquisition unit that acquires a first moving image capturing a user's motion; a second acquisition unit that acquires detection information regarding the motion detected by a motion sensor; a target event detection unit that detects, based on the detection information, a target event occurring during the motion; and a first extraction unit that extracts, from the first moving image, at least one first still image corresponding to the detected target event.
  • a motion analysis method includes: an acquisition step in which a computer acquires a first moving image capturing a user's motion and detection information regarding the motion detected by a motion sensor; a target event detection step in which the computer detects, based on the detection information, a target event occurring during the motion; and a first extraction step in which the computer extracts, from the first moving image, at least one first still image corresponding to the detected target event.
  • FIG. 1 is a diagram illustrating an example of a schematic configuration of a motion analysis system according to an aspect of the present disclosure.
  • FIG. 2 is a perspective view showing a configuration example of a racket to which a motion sensor device is attached.
  • FIG. 3 is a block diagram showing an example of a schematic configuration of the motion analysis system.
  • FIG. 4 is a block diagram showing an example of a schematic configuration of a user terminal.
  • FIG. 5 is a block diagram showing an example of a schematic configuration of a server device.
  • FIG. 6 is a sequence diagram showing an example of the flow of main processing in the motion analysis system.
  • FIGS. 7 to 14 are diagrams each showing an example of a display screen on the user terminal.
  • the motion analysis system 100 is a system that can analyze a predetermined target event included in a user's motion.
  • the "user's motion” may include any general motion that the user performs using his or her own body.
  • the "target event” is an event that can occur due to an action by a user, and may be any event that can be an analysis target.
  • for example, when the user's motion is a motion of moving a predetermined moving object by striking it with sports equipment, the target event may be that the equipment comes into contact with the moving object, or that the equipment assumes a predetermined posture.
  • "equipment" refers to a tool that the user moves when playing sports or the like, and a "moving object" refers to a moving object used in sports or the like.
  • Examples of equipment include rackets for tennis, badminton, table tennis, squash, lacrosse, etc., baseball and cricket bats, golf clubs, hockey and ice hockey sticks, gymnastics equipment, fishing rods, clothing worn by the user, and the like.
  • examples of moving objects include balls used in various ball games, badminton shuttles, and ice hockey pucks.
  • a cue in billiards is also a tool for hitting the ball, and can be included in the equipment.
  • the action of kicking a ball in rugby and soccer can be considered as the action of hitting the ball with a shoe. Therefore, shoes in rugby and soccer may also be included in the equipment.
  • the equipment may include clothes, gloves, socks, etc. worn by the user.
  • the equipment may also include tights-like clothing worn over all or part of the user's body. In this case, by analyzing the movement of the tool, it becomes possible to analyze the movement of the user's body.
  • the user's actions are not limited to sports.
  • the user's action may be work, dancing, etc. that requires a predetermined posture (or maintaining the predetermined posture).
  • the target event may be that the user assumes a predetermined posture.
  • the user's action may be an action of cooking predetermined ingredients using cooking utensils.
  • the cooking utensils may be, for example, knives and frying pans.
  • the target event may be that the tool comes into contact with a predetermined food material, or that the tool assumes a predetermined posture.
  • a motion analysis system 100 that analyzes a user's motion using a racket 6 will be described as an example in order to improve the user's table tennis skills.
  • the equipment is a table tennis racket 6
  • the moving object is, for example, a celluloid table tennis ball 7 (also called a ping pong ball).
  • the "target event" is intended to mean, for example, that the user hits the ball 7 using the racket 6 (that is, the racket 6 comes into contact with the ball 7).
  • FIG. 1 is a diagram showing an example of a schematic configuration of a motion analysis system 100.
  • the motion analysis system 100 includes a user terminal 2 capable of acquiring detection information regarding the user's motion from the motion sensor device 5 and of capturing a first moving image of the motion, and a server device 1 (motion analysis device) that can detect a target event and analyze the user's motion.
  • the server device 1 and the user terminal 2 are communicably connected to each other via a network 4, as shown in FIG. 1.
  • the network 4 may be the Internet or a local network provided within the facility.
  • the user terminal 2 is capable of transmitting an analysis instruction regarding the user's actions to the server device 1.
  • the transmitted analysis instruction includes a first moving image of the user's motion and detection information regarding the motion.
  • the user terminal 2 may include a touch panel 29 having the functions of an input section 23 and a display section 24, and an imaging section 26 capable of capturing a first moving image.
  • FIG. 1 shows how a user terminal 2 is used to capture an image of a user's motion of gripping a racket 6 to which a motion sensor device 5 is attached.
  • the server device 1 analyzes the detection information and the first video acquired from the user terminal 2 in accordance with the analysis instruction received from the user terminal 2.
  • the server device 1 transmits the analysis results to the user terminal 2.
  • the motion analysis system 100 can provide the user with an analysis result based on the first moving image and the detection information detected by the motion sensor device 5.
  • the analysis result includes suggestions regarding not only how the user moves his or her body but also how to handle the equipment used. The motion analysis system 100 can therefore help the user recognize corrections in how to move his or her body and how to handle his or her own equipment, so that the user can perform effective exercises to improve his or her skills.
  • Although FIG. 1 shows a motion analysis system 100 that includes one server device 1 and one user terminal 2, the configuration is not limited to this. That is, the motion analysis system 100 may include multiple server devices 1 and multiple user terminals 2.
  • a user using the motion analysis system 100 uses a racket 6 to which a motion sensor device 5 (motion sensor) is attached.
  • the motion sensor device 5 measures the motion of the racket 6 held by the user in real time.
  • FIG. 2 is a perspective view showing an example of the configuration of the racket 6 to which the motion sensor device 5 is attached.
  • Although a shakehand-type racket 6 is illustrated here as an example, the present disclosure is not limited to this.
  • the racket 6 may be a racket of other shapes, such as a pen holder type.
  • the racket 6 has a hitting part 62 for hitting the ball 7 and a grip 61 for the user to hold.
  • the hitting portion 62 has a hitting surface 9a for hitting the ball 7.
  • the racket 6 shown in FIG. 2 is of a shake-hand type and has a pair of front and back hitting surfaces 9a.
  • the hitting surface 9a of the racket 6 is often covered with rubber having various properties (for example, a resilient layer made up of a sponge sheet and a rubber sheet).
  • the grip 61 is a part that the user holds to manipulate the racket 6.
  • the striking portion 62 and the grip 61 may be formed integrally with each other.
  • the motion sensor device 5 is fixed to the racket 6 and functions as an inertial sensor that detects at least angular velocity.
  • the attachment position, shape, size, etc. of the motion sensor device 5 to the racket 6 may be set as appropriate.
  • the motion sensor device 5 is attached to the end of the grip 61 of the racket 6 on the opposite side from the hitting portion 62, although the attachment position is not limited thereto.
  • the motion sensor device 5 may be removably attached to the racket 6, or may be fixed to the racket 6 in a non-removable manner.
  • the motion sensor device 5 can also detect an impact when the ball 7 contacts the racket 6.
  • the motion sensor device 5 can also detect that the racket 6 has assumed a predetermined posture.
  • FIG. 3 is a block diagram showing an example of a schematic configuration of the motion analysis system 100.
  • the motion sensor device 5 includes a storage section 52, an angular velocity sensor 53 that detects the angular velocity of the racket 6, an acceleration sensor 54 that detects the acceleration of the racket 6, a communication section 55 that communicates with the user terminal 2, and a CPU (Central Processing Unit) 50.
  • the angular velocity sensor 53 is, for example, a gyro sensor that detects the angular velocity of the racket 6.
  • the CPU 50 processes signals input to or output from the angular velocity sensor 53, the acceleration sensor 54, and the communication unit 55.
  • the motion sensor device 5 may include a power source that supplies power to the various components described above. In FIG. 3, illustration of the power supply is omitted.
  • the storage unit 52 may include a ROM (Read Only Memory), a RAM (Random Access Memory), an external storage device, and the like.
  • the storage unit 52 stores predetermined application programs and the like executed by the CPU 50.
  • the angular velocity sensor 53 is constituted by a three-axis angular velocity sensor capable of detecting angular velocity in each of three axes: x-axis, y-axis, and z-axis.
  • Examples of such an angular velocity sensor include a combination of an angular velocity sensor that detects angular velocity about the x-axis, an angular velocity sensor that detects angular velocity about the y-axis, and an angular velocity sensor that detects angular velocity about the z-axis.
  • the acceleration sensor 54 is composed of a three-axis acceleration sensor that can detect acceleration in directions along each of the three axes: x-axis, y-axis, and z-axis. Any known configuration can be used as the configuration of the three-axis acceleration sensor.
  • the motion sensor device 5 can detect the moving direction and rotational movement of the racket 6 using the angular velocity sensor 53, and can measure the moving distance and speed of the racket 6 using the acceleration sensor 54. In this way, the motion of the racket 6 can be analyzed by the motion sensor device 5.
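  • As a rough illustration of this kind of processing (a sketch under assumed values, not the patent's actual implementation), the following Python snippet integrates hypothetical three-axis gyro and accelerometer samples to estimate the racket's accumulated rotation, speed, and moving distance; the sampling rate, sample values, and the simple Euler integration are all assumptions.

```python
import numpy as np

DT = 0.001  # assumed sampling interval [s] (1 kHz)

# Hypothetical three-axis samples: one row per sensor reading.
gyro = np.full((100, 3), [0.1, 2.0, 0.05])    # angular velocity [rad/s]
accel = np.full((100, 3), [0.0, 9.5, 1.2])    # acceleration [m/s^2]

# Rotational movement: accumulate angular velocity over time
# (simple Euler integration about each axis).
rotation = np.cumsum(gyro, axis=0) * DT        # [rad]

# Moving speed and distance: integrate acceleration once for
# velocity and again for distance travelled (gravity compensation
# is omitted here; a real implementation would subtract it).
velocity = np.cumsum(accel, axis=0) * DT       # [m/s]
distance = np.cumsum(np.linalg.norm(velocity, axis=1)) * DT  # [m]

print(rotation[-1], velocity[-1], distance[-1])
```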
  • the three axes, the x-axis, y-axis, and z-axis can be set arbitrarily.
  • For example, the axis perpendicular to the hitting surface 9a of the racket 6 may be the x-axis, the axis parallel to the hitting surface 9a and perpendicular to the axis of the grip 61 may be the y-axis, and the axis parallel to the hitting surface 9a and parallel to the axis of the grip 61 may be the z-axis, although the configuration is not limited to this.
  • the configuration may be such that the three axes, the x-axis, y-axis, and z-axis, are an axis parallel to the direction of gravity and two axes that are perpendicular to the direction of gravity and perpendicular to each other.
  • In the present disclosure, "parallel" does not strictly require being parallel, and "perpendicular" does not strictly require being perpendicular.
  • the communication unit 55 has a configuration that allows communication with at least the user terminal 2.
  • This communication may be wireless communication or wired communication.
  • As the wireless communication, communication using radio waves, communication using infrared rays, or the like can be applied.
  • Bluetooth (registered trademark), WiFi (registered trademark), and the like can be applied as communication that uses radio waves.
  • the user terminal 2 may include a computer.
  • the user terminal 2 may be a smartphone, a tablet, or the like. Any known computer hardware and OS (Operating System) may be used.
  • the user terminal 2 can be obtained by installing a predetermined application program on a general computer.
  • the user terminal 2 may include a CPU 20, a storage section 22, an input section 23 that receives various user operations, a display section 24, a communication section 25, and an imaging section 26.
  • the user terminal 2 may be configured to acquire a first moving image captured by a video camera or the like. Therefore, in the user terminal 2, the imaging unit 26 is not an essential component.
  • the user terminal 2 may be configured to output the analysis result to a display device (not shown) that is separate from the user terminal 2 and is used by the user. Therefore, in the user terminal 2, the display section 24 is not an essential component.
  • the CPU 20 processes signals input to or output from the input unit 23, display unit 24, communication unit 25, and imaging unit 26.
  • the storage unit 22 may include a ROM (Read Only Memory), a RAM (Random Access Memory), an external storage device, and the like.
  • the storage unit 22 stores various control programs executed by the CPU 20, data used for various processes executed by the CPU 20, and the like.
  • the user terminal 2 is capable of capturing an image of the user's motion while simultaneously acquiring detection information regarding the user's motion detected by the motion sensor device 5. For example, the user terminal 2 can start acquiring detection information regarding the user's motion and capturing an image of the user's motion at the same time (or almost simultaneously).
  • the server device 1 may include a computer. Any known computer hardware and OS (Operating System) may be used.
  • the server device 1 can be obtained by installing a predetermined application program on a general computer.
  • the server device 1 includes a CPU 10, a storage section 12, and a communication section 15.
  • the CPU 10 processes signals input to or output from the communication unit 15.
  • the storage unit 12 may include a ROM (Read Only Memory), a RAM (Random Access Memory), an external storage device, and the like.
  • the storage unit 12 stores analysis programs executed by the CPU 10 and the like.
  • the server device 1 detects the target event and executes the motion analysis in response to acquiring the detection information regarding the user's motion, the first video, and the analysis instruction from the user terminal 2.
  • the server device 1 provides the analysis results to the user terminal 2 that is the source of the analysis instruction.
  • the server device 1 may generate a second video showing the change in posture of the racket 6 used by the user during the operation based on the acquired detection information.
  • the second video may be an animation video showing changes in the posture of the racket 6.
  • the analysis results provided from the server device 1 to the user terminal 2 may include the following: the second video; the first still image and the second still image described later; the first related image group and the second related image group described later; the estimated trajectory and estimated rotation speed of the ball 7 launched by the racket 6; and a skeletal model showing the movement of the user's body.
  • FIG. 4 is a block diagram showing an example of a schematic configuration of the user terminal 2. The control unit 21 shown in FIG. 4 corresponds to the CPU 20 in FIG. 3.
  • the user terminal 2 may include a control section 21 , a storage section 22 , an imaging section 26 , an input section 23 , a display section 24 , and a communication section 25 .
  • the control unit 21 controls the user terminal 2 to execute processing of each function.
  • the storage unit 22 is a storage device that stores various control programs read by the control unit 21 and analysis target data used in various processes executed by the control unit 21.
  • the control unit 21 includes an imaging control unit 211 and a measurement value acquisition unit 212.
  • FIG. 4 shows a configuration further including a display control section 214 and a data management section 213.
  • the imaging control unit 211 controls the imaging unit 26 to acquire the first moving image.
  • the first video is a video capturing a user playing table tennis using a racket 6 to which a motion sensor device 5 is attached.
  • the first video may show the trajectory of the ball 7 actually hit by the racket 6 held by the user.
  • the imaging control unit 211 may store the first moving image captured by the imaging unit 26 in the analysis target data 222.
  • the control unit 21 may include a moving image acquisition unit (not shown) as an alternative to the imaging control unit 211.
  • the user terminal 2 equipped with this moving image acquisition unit can acquire, via the communication unit 25, a moving image captured by an external device (not shown) such as a digital camera or a digital video camera.
  • the acquired moving image may be stored in the analysis target data 222 of the storage unit 22 together with time information indicating when imaging started, in order to synchronize the moving image with the detection information (for example, measured values).
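  • As a minimal sketch of this synchronization, assuming both start times and the frame rate are recorded, a measured value on the sensor time axis can be mapped to the closest video frame as follows; the timestamps and frame rate below are illustrative assumptions.

```python
from datetime import datetime

# Assumed metadata recorded when the two streams started.
video_start = datetime(2022, 1, 9, 10, 0, 0)           # imaging start time
sensor_start = datetime(2022, 1, 9, 10, 0, 0, 250000)  # detection start time
FPS = 60.0                                             # assumed frame rate

def frame_index_at(seconds_after_sensor_start: float) -> int:
    """Map a moment on the sensor time axis to the index of the
    video frame captured closest to that moment."""
    offset = (sensor_start - video_start).total_seconds()
    return max(0, round((offset + seconds_after_sensor_start) * FPS))

# A measured value taken 1.5 s after detection started corresponds to:
print(frame_index_at(1.5))  # -> frame 105 with the values assumed above
```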
  • the measurement value acquisition unit 212 acquires the detection information detected by the motion sensor device 5 via the communication unit 25.
  • the detection information is information in any format output from the motion sensor device 5, and may be, for example, a graph representing changes over time in the position and posture of the racket 6, or measured values indicating such changes.
  • the detection information may include time information indicating the measured time.
  • the measurement value acquisition section 212 may be configured integrally with the imaging control section 211.
  • the measured value acquisition unit 212 may store the acquired detection information in the analysis target data 222.
  • the control unit 21 simultaneously starts capturing a moving image by the imaging unit 26 and acquiring detection information from the motion sensor device 5.
  • an example of how the measurement value acquisition unit 212 acquires measured values as the detection information from the motion sensor device 5 will be explained.
  • the data management unit 213 associates the captured first video with the acquired measured values and stores them in the analysis target data 222. Further, the data management unit 213 may store the first video and the measured values in the analysis target data 222 in association with the date on which the first video was captured, identification information corresponding to the person who performed the captured action, and the like. If this configuration is adopted, the user terminal 2 can receive the selection of a date on which an analysis request was sent, and display the analysis result corresponding to that analysis request on the display unit 24.
  • the data management unit 213 may be able to store analysis results related to multiple users in the storage unit 22.
  • the plurality of users may be, for example, a plurality of players who receive guidance from an instructor in order to improve their table tennis skills.
  • With the motion analysis system 100, it is possible to provide an instructor who coaches a plurality of players with the analysis results obtained by analyzing the motions of each player. This allows the instructor to understand the characteristics of each player's movements and provide guidance suited to each player.
  • the data management unit 213 generates an analysis request that includes the first video and detection information specified by the user.
  • the data management unit 213 may store the analysis request in the analysis request log 221 of the storage unit 22 in association with the date on which the analysis request was generated.
  • the data management unit 213 may store the analysis results received from the server device 1 in the analysis results 223. In this case, the data management unit 213 may associate each transmitted analysis request with the analysis result corresponding to that analysis request and store them in the analysis results 223. For example, the data management unit 213 may manage the first still image acquired from the server device 1 by associating it with the date on which the first video transmitted to the server device 1 was captured.
  • the display control unit 214 causes the display unit 24 to display the captured first video, a screen for allowing the user to input an analysis instruction to be sent to the server device 1, the received analysis results, and the like.
  • the display control unit 214 may display the analysis result and the second moving image at the same time.
  • the display screen that the display control unit 214 causes the display unit 24 to display will be described later using a specific example.
  • the communication unit 25 communicates with the motion sensor device 5 and the server device 1.
  • the user terminal 2 receives detection information from the motion sensor device 5, transmits an analysis request to the server device 1, receives analysis results from the server device 1, etc. via the communication unit 25.
  • FIG. 5 is a block diagram showing an example of a schematic configuration of the server device 1.
  • the control unit 11 shown in FIG. 5 corresponds to the CPU 10 in FIG. 3. That is, each part of the control unit 11 is a functional block realized by the CPU 10 in FIG. 3 executing the analysis program 121.
  • the server device 1 includes a control section 11, a storage section 12, and a communication section 15.
  • the control unit 11 controls the server device 1 to execute processing of each function.
  • the storage unit 12 is a storage device that stores various computer programs read by the control unit 11 and data used in various processes, such as the analysis program 121 executed by the control unit 11. As shown in FIG. 5, the storage unit 12 may store analysis results 122, user information 123, and the like.
  • the analysis result 122 stores the analysis result sent by the server device 1 in response to the acquired analysis request in association with the analysis request.
  • the user information 123 may store information regarding users who are registered as users of the motion analysis system 100. The information regarding a user may include, for example, contact information such as an address and an email address.
  • the user information 123 may store information about the user terminal 2 that is the sender of the analysis request, a history of sending analysis requests from each user, and the like.
  • the control unit 11 includes an analysis request reception unit 111 (first acquisition unit, second acquisition unit), a target event detection unit 112, and a first extraction unit 113.
  • the analysis request reception unit 111 acquires an analysis request from the user terminal 2.
  • the analysis request includes a first moving image of the user's motion and detection information regarding the motion detected by the motion sensor.
  • the analysis request may include information specific to the user terminal 2 that sent the analysis request, identification information of the user performing the action shown in the video to be analyzed, and the name of the action.
  • the information unique to the user terminal 2 may include information indicating the destination of the analysis result.
  • the user's identification information may be, for example, the user's name or identification information given to the user within a team to which the user belongs as a member.
  • the identification information may be a membership number assigned to each member of a team to which the user belongs as a member.
  • the name of the action may be the name of a target event to be detected in the analysis process (for example, "hit the ball”). Alternatively, it may be the name of an action that includes the target event.
  • the name of the operation may be, for example, "serve,” “drive,” “push,” and “serve return.”
  • the server device 1 may have a configuration including a first acquisition unit that acquires the first video and a second acquisition unit that acquires the detection information, and is not limited to the configuration shown in FIG. 5.
  • the analysis request reception unit 111 shown in FIG. 5 has a function as a first acquisition unit and a function as a second acquisition unit.
  • the server device 1 including the analysis request reception unit 111 will be described as an example.
  • the target event detection unit 112 detects a target event during the user's action based on the detection information.
  • the target event detection unit 112 may detect the target event based on at least one of the speed of posture change and the movement speed of the racket 6 during the motion, which are calculated based on the detection information. For example, when the racket 6 and the ball 7 come into contact, vibrations generated in the racket 6 can be detected. This vibration affects at least one of the speed of posture change and the movement speed of the racket 6 during operation. Therefore, the target event detection unit 112 can detect the target event based on the fact that at least one of the speed of attitude change and the movement speed of the racket 6 is influenced by the vibration.
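  • As a rough sketch of this idea (not the patent's actual detection logic), an impact can be treated as a short, sharp disturbance in the angular-velocity signal; the threshold, merge window, and function shape below are assumptions.

```python
import numpy as np

DT = 0.001  # assumed sampling interval [s]

def detect_hits(gyro: np.ndarray, threshold: float = 50.0) -> list[int]:
    """Return sample indices treated as ball-racket impacts.

    gyro: (N, 3) angular-velocity samples [rad/s].
    threshold: assumed minimum spike size in |d(omega)/dt| [rad/s^2].
    """
    # An impact appears as a short, sharp disturbance in the angular
    # velocity, i.e. a spike in its rate of change.
    d_omega = np.linalg.norm(np.diff(gyro, axis=0), axis=1) / DT
    spikes = np.flatnonzero(d_omega > threshold)

    # Merge spikes closer than 50 ms into a single target event.
    events: list[int] = []
    last = -np.inf
    for s in spikes:
        if s - last > 0.05 / DT:
            events.append(int(s))
        last = s
    return events
```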
  • the first extraction unit 113 extracts at least one first still image corresponding to the detected target event from the first moving image.
  • the first still image is a still image corresponding to a time point at which the target event detected by the target event detection unit 112 occurs or a time point in the vicinity thereof.
  • the first extraction unit 113 may be able to extract a first related image group, which is a plurality of still images before and after the first still image, from the first moving image.
  • the first extraction unit 113 may extract at least one first still image for each detected target event.
  • the first related image group will be explained later using a specific example of a display screen.
  • the first extraction unit 113 analyzes the movement of the racket 6 based on the acquired measurement values. For example, the first extraction unit 113 may synchronize the first video and the measured value and detect a target event included in the first video. The first extraction unit 113 stores the analysis result in the storage unit 22 (analysis result 223 in FIG. 4) in association with the measured values and the first video used for the analysis.
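  • A minimal sketch of this extraction step, assuming the detected event time has already been converted to a time on the video axis: it saves the still image at the event frame and a few surrounding frames (the related image group) using OpenCV; the frame rate, offsets, and file names are assumptions.

```python
import cv2

def extract_stills(video_path: str, event_time_s: float, fps: float = 60.0,
                   offsets_s=(-0.3, -0.2, -0.1, 0.0, 0.1)) -> None:
    """Save the first still image (offset 0.0) and the surrounding
    first related image group for one detected target event."""
    cap = cv2.VideoCapture(video_path)
    for off in offsets_s:
        idx = max(0, round((event_time_s + off) * fps))
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)   # seek to the frame
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(f"hit{off:+.1f}s.png", frame)
    cap.release()
```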
  • the server device 1 may analyze the movement of the racket 6 and create the second video using any known method.
  • the server device 1 may further include a second moving image generation unit 114 and a second extraction unit 115, as shown in FIG. 5.
  • the second video generation unit 114 generates a second video showing a change in the posture of the racket 6 during the user's movement based on the detection information.
  • the second extraction unit 115 may be able to extract a second related image group, which is a plurality of still images before and after the second still image, from the second moving image.
  • the second related image group will be explained later using a specific example of a display screen.
  • the second extraction unit 115 extracts at least one second still image corresponding to the target event detected by the target event detection unit from the second moving image.
  • the second still image is a still image corresponding to the time point at which the target event detected by the target event detection unit 112 based on the detection information occurs, or a time point in the vicinity thereof.
  • the second extraction unit 115 may extract a plurality of second still images corresponding to each of the detected target events.
  • the second video generation unit 114 may be able to generate a plurality of second videos corresponding to each of the detected target events.
  • the second extraction unit 115 may be able to extract at least one second still image from each of the generated second moving images.
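  • One conceivable way to build such a posture animation, sketched below under assumptions about the integration scheme and rendering hook, is to integrate the angular-velocity samples into one racket orientation per output frame and render the racket model at each orientation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

DT = 0.001  # assumed sensor sampling interval [s]

def orientations_per_frame(gyro: np.ndarray, fps: float = 30.0) -> list[Rotation]:
    """Integrate (N, 3) angular-velocity samples [rad/s] into one
    racket orientation per animation frame of the second video."""
    step = max(1, round(1.0 / (fps * DT)))  # sensor samples per frame
    frames: list[Rotation] = []
    r = Rotation.identity()
    for i, omega in enumerate(gyro):
        r = Rotation.from_rotvec(omega * DT) * r  # apply incremental rotation
        if i % step == 0:
            frames.append(r)  # render the racket model at orientation r
    return frames
```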
  • the communication unit 15 communicates with the user terminal 2.
  • the server device 1 receives analysis requests and transmits analysis results via the communication unit 15.
  • FIG. 6 is a sequence diagram showing an example of the main processing flow in the motion analysis system 100.
  • the motion sensor device 5 is activated by the user (step S10). Further, the imaging unit of the user terminal 2 is activated by the user (step S11).
  • the user terminal 2 outputs a detection start instruction to the motion sensor device 5 (step S12), starts imaging (step S13), and captures the first video over a predetermined period of time (for example, 30 seconds).
  • the motion sensor device 5 that has received the detection start instruction starts transmitting a measured value indicating the movement of the racket 6 to the user terminal 2.
  • the user terminal 2 acquires, from the motion sensor device 5, detection information indicating the user's motion while capturing the first video (step S15).
  • the motion sensor device 5 that has received the detection start instruction may collect measured values indicating the motion of the racket 6 measured over a predetermined period of time (for example, 30 seconds) and transmit them to the user terminal 2.
  • the user terminal 2 receives a predetermined operation by the user and transmits an analysis request to the server device 1 (step S16).
  • the analysis instruction includes a first moving image of the user's motion and detection information regarding the motion.
  • the server device 1 acquires an analysis instruction from the user terminal 2 (step S17: acquisition step), and analyzes the detection information and the first video (step S18: target event detection step, first extraction step). In step S18, the server device 1 detects a target event that occurs during operation based on the detection information, and extracts at least one first still image corresponding to the detected target event from the first video. The server device 1 transmits the analysis result to the user terminal 2 (step S19).
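  • Putting the earlier sketches together, the server-side handling of steps S17 to S19 could look roughly like the following; it reuses the hypothetical detect_hits, extract_stills, and DT defined in the sketches above, and the request and result shapes are assumptions.

```python
def handle_analysis_request(video_path: str, gyro, video_start, sensor_start,
                            fps: float = 60.0) -> list[dict]:
    """Sketch of steps S17 to S19: detect target events from the
    detection information, then extract the corresponding stills."""
    offset = (sensor_start - video_start).total_seconds()
    results = []
    for s in detect_hits(gyro):                   # target event detection step
        event_t = s * DT + offset                 # sensor time -> video time
        extract_stills(video_path, event_t, fps)  # first extraction step
        results.append({"event_time_s": event_t})
    return results                                # returned as the analysis result
```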
  • the user terminal 2 receives the analysis result from the server device 1 (step S20), and displays the analysis result on the display unit 24 (step S21).
  • the user terminal 2 that has received the analysis result may display the analysis result on the display unit 24 in any manner.
  • the user terminal 2 may read out the first moving image to be analyzed from the analysis target data 222 and display it on the display unit 24 together with the analysis result.
  • FIGS. 7 to 14 are diagrams showing examples of display screens on the user terminal 2.
  • FIGS. 7 to 11 show examples of display screens displayed on the user terminal 2 up to the point when the analysis request is sent to the server device 1.
  • FIGS. 12 to 14 show examples of display screens from when the user terminal 2 receives the analysis results from the server device 1 to when the received analysis results are displayed on the display unit 24.
  • each player and coach may be a user.
  • the coach can manage the practice diaries, analysis requests, and analysis results of each of a plurality of players in association with each other.
  • FIG. 7 is an example of a display screen (login screen) that is displayed when the user starts using the motion analysis system 100.
  • Display R1 of the login screen shown in FIG. 7 displays a column for allowing the user to input user information.
  • the user information may be, for example, an email address and a password.
  • the user terminal 2 may transmit the input user information to the server device 1.
  • the server device 1 may approve the user's use of the motion analysis system 100 by comparing the received user information with the user information 123 registered in advance.
  • FIGS. 8 and 9 are examples of display screens that display the practice diaries of each of a plurality of players.
  • Display R2 shown in FIG. 8 displays a list of players being coached by the instructor. This screen is a display screen that allows the coach or user to select a certain player from among the players displayed in the player list.
  • FIGS. 9 to 14 show the case where "Player A" is selected in FIG. 8.
  • FIG. 9 is an example of a display screen that displays player A's practice diary.
  • Display R3 displays the selected date and calendar.
  • FIG. 9 shows an example of player A's practice diary for January 9, 2022.
  • the selected date "January 9th (Sunday)" is displayed, and a circular mark is attached to January 9th on the calendar.
  • the date column and calendar display displayed in display R3 have a function that allows the instructor or user to select the date of the practice diary to be displayed.
  • a square mark is attached to January 8th on the calendar of display R3. This square mark indicates that January 8th was the day on which the request for analysis of Athlete A's movements was sent and the analysis results were received. A square mark will not be placed on days when no analysis request has been sent.
  • the user terminal 2 can display, on the display unit 24, information indicating the analysis request sent on January 8th and information indicating the received analysis results (for example, see FIG. 14).
  • the analysis request includes the first video and the detection information, and the analysis result includes the first still image extracted by the server device 1. Therefore, in display R3, days with which the first still image acquired from the server device 1 is associated are displayed so as to be distinguished from days with which it is not associated.
  • the mark indicating the date on which the analysis result was received is not limited to a square mark, and may be changed as appropriate.
  • FIG. 10 is an example of a display screen for inputting player A's practice diary for January 9th.
  • on the display screen for entering the practice diary, headings such as "Practice goals", "Practice content", "Important points and precautions (what the coach taught me)", "What I noticed (image, future goals)", and "Free entry field" are displayed.
  • Athlete A or the coach can input information into the practice diary, update the practice diary, etc. on this display screen.
  • the display control unit 214 causes the display unit 24 to display a display screen for accepting an operation for transmitting an analysis request.
  • FIG. 11 is an example of a display screen for accepting an operation for transmitting an analysis request.
  • Display R6 in FIG. 11 includes columns for allowing the user or instructor to input or prepare items and data necessary for transmitting the analysis request to be transmitted.
  • “Title (required)” in display R6 is a field that accepts input of the name of the analysis request.
  • FIG. 11 shows how "Serve 1" is input.
  • "Sensor data CSV (required)” in display R6 is a column for allowing the user or instructor to specify detection information to be analyzed by the server device 1 (uninputted columns are shown).
  • "Swing video (required)” in display R6 is a column for allowing the user or instructor to specify the first video to be transmitted to the server device 1 (an uninputted column is shown).
  • the "latest geomagnetic data registration date" in display R6 is a field for allowing the user or instructor to specify geomagnetic data that can be used when analyzing detected information in the server device 1 (unfilled fields are indicated). ). "Latest geomagnetic data registration date” and “Geomagnetic data CSV (required if there is no registered geomagnetic data)" in display R6 indicate geomagnetic data that can be used by the server device 1 when analyzing detection information. Or this is a field for the instructor to specify (unfilled fields are shown).
  • the data management unit 213 reads out the detection information and the first video specified in these fields from the analysis target data 222 and creates an analysis request. The created analysis request is then transmitted to the server device 1 via the communication unit 25.
  • the display control unit 214 causes the display unit 24 to display a display screen that displays a practice diary as shown in FIG. 12.
  • FIG. 12 is an example of a display screen when the practice diary for January 9 is displayed after receiving the analysis results from the server device 1. Unlike the display screen where the practice diary for January 9th is being created (see FIG. 10), "Serve 1" of display R9 is displayed. When “Serve 1” of display R9 is selected, the display control unit 214 reads the analysis result from the analysis result 223 and causes the display unit 24 to display the display screens shown in FIGS. 13 and 14.
  • FIG. 13 is a diagram showing a part of the display screen that displays the analysis results.
  • a first moving image (or a thumbnail of the first moving image) that is an analysis target is displayed in display R10.
  • display R11 may display buttons for displaying, on the display unit 24, the detection information ("sensor data" in the figure), the geomagnetic data, and the original of the first video ("captured video" in the figure) that were sent together with the analysis request. For example, when the user selects "captured video", the display control unit 214 starts a predetermined video playback application and plays the first video.
  • FIG. 14 is a diagram showing a part of the display screen that displays the analysis results for each target event detected by the server device 1.
  • FIG. 14 is a display screen that includes at least a display regarding "Hit No. 1" and a display regarding "Hit No. 2".
  • Displays R12 to R15 in FIG. 14 display analysis results regarding "Hit No. 1."
  • a second moving image is displayed on the display R12 ("racquet animation image” in the figure).
  • Display R13 displays a graph showing the movement of the racket 6 analyzed based on the detected information.
  • FIG. 14 shows a graph showing "speed by rotation direction” and a graph showing "swing speed.”
  • a second still image and a second related image group are displayed in display R14 (“racquet posture continuous images” in the figure).
  • in display R14, the image corresponding to the "hit time" is the second still image, and the image group corresponding to "0.3 seconds ago" to "0.1 seconds ago" and "0.1 seconds later" is the second related image group.
  • a first still image and a first related image group are displayed in display R15 (“skeleton extraction continuous images (0.1 second interval)” in the figure).
  • in display R15, the images corresponding to "No. 04" to "No. 05" are the first still images, and the other images are the first related image group.
  • the display control section 214 may display the first still image and the second still image on the display section 24 at the same time.
  • the motion analysis system 100 allows the user to check the first still image and the second still image simultaneously, thereby making it easier for the user to understand the characteristics of the user's own target event and points for improvement.
  • the server device 1 may be able to generate skeletal data of the skeletal model based on the first video.
  • the display control unit 214 may display the generated skeletal model on the first still image on the display unit 24, as shown in display R15.
  • the display control unit 214 may display the generated skeletal model on the display unit 24 by superimposing it on the first related image group.
  • the user's body movements can be clearly shown by using a skeletal model. Therefore, as shown in display R15, by displaying the first still image (and the first related image group) on which the skeletal model is superimposed, the motion analysis system 100 allows the user to clearly recognize the characteristics of his or her own motion and points for improvement. Thereby, the user can practice to improve his or her skill while being aware of the characteristics of his or her own movements and areas for improvement.
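  • As an illustration of this overlay (a sketch, not the patent's actual rendering), the snippet below draws a skeletal model onto an extracted still image with OpenCV; in practice the keypoint coordinates would come from a pose-estimation model, and the joint names and bone list here are hypothetical.

```python
import cv2

# Hypothetical 2D keypoints (pixel coordinates) that a pose-estimation
# model would produce for one first still image.
keypoints = {"shoulder": (320, 180), "elbow": (380, 260), "wrist": (430, 330)}
bones = [("shoulder", "elbow"), ("elbow", "wrist")]

img = cv2.imread("hit+0.0s.png")  # a first still image extracted earlier
for a, b in bones:
    cv2.line(img, keypoints[a], keypoints[b], (0, 255, 0), 2)  # draw bones
for pt in keypoints.values():
    cv2.circle(img, pt, 4, (0, 0, 255), -1)                    # draw joints
cv2.imwrite("hit_with_skeleton.png", img)
```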
  • the display control unit 214 may display at least one first still image and a first related image group in chronological order in the first moving image. Thereby, the motion analysis system 100 can make the user understand the analysis result regarding the target event in association with the flow of a series of motions performed by the user.
  • the display control unit 214 may display at least one first still image and first related image group in chronological order for each detected target event. For example, when “Hit No. 2" is detected as a target event in addition to "Hit No. 1", the display control unit 214 displays "Hit No. 2" in the same display mode as display R14 and R15. The first still image (and the first related image group) and the second still image (and the second related image group) may be displayed on the display unit 24.
  • Although FIGS. 13 and 14 show an example in which the first video and the second video are displayed on the display unit 24, the present disclosure is not limited thereto.
  • the display control unit 214 may display only the second still image and the second related image group without displaying the second moving image. That is, the display control unit 214 may display at least one of the first video and the second video.
  • Although display R15 in FIG. 14 displays images in which a skeletal model is superimposed on the first still image and the first related image group, display R15 is not limited to such a display mode.
  • the display control unit 214 may display the moving image of the skeletal model generated by the server device 1 based on the first moving image on the display R15.
  • the motion analysis system 100 has a configuration in which analysis results are transmitted from the server device 1 to the user terminal 2 (see FIG. 6).
  • the motion analysis system 100 is not limited to this configuration.
  • the server device 1 may be configured to create a dedicated web page for each acquired analysis request, and allow the analysis results to be viewed on this web page.
  • the server device 1 may transmit information for accessing the web page to the user terminal 2 that is the source of the analysis request. According to this configuration, the transmission load between the server device 1 and the user terminal 2 can be reduced.
  • the motion analysis system 100 has a configuration in which the server device 1 acquires an analysis request from the user terminal 2 and executes motion analysis (see FIGS. 4 and 5).
  • the motion analysis system 100 is not limited to this configuration.
  • the user terminal 2 may include all or part of the functions of the server device 1 (that is, the control unit 11 and the storage unit 12).
  • the user terminal 2 can acquire the first video and the detection information, detect the target event, and extract the first still image.
  • the user terminal 2 functions as a motion analysis device.
  • the motion analysis system 100 includes a first acquisition unit (analysis request reception unit 111) that acquires a first video capturing a user's motion, a second acquisition unit (analysis request reception unit 111) that acquires detection information regarding the motion detected by a motion sensor, a target event detection unit 112 that detects, based on the detection information, a target event that occurs during the motion, and a first extraction unit 113 that extracts at least one first still image corresponding to the detected target event from the first video.
  • the first extraction unit 113 may be able to extract a first related image group, which is a plurality of still images before and after the first still image, from the first video.
  • the first extraction unit 113 may extract at least one first still image for each detected target event.
  • the motion analysis system 100 may include a display control unit 214 that displays at least one first still image and the first related image group in chronological order in the first video.
  • the display control unit 214 may display at least one first still image and the first related image group in chronological order for each detected target event.
  • when the motion is a motion in which the user uses equipment (the racket 6), the motion analysis system 100 may include a second video generation unit 114 that generates, based on the detection information, a second video showing changes in the posture of the equipment (the racket 6) during the motion, and a second extraction unit 115 that extracts at least one second still image corresponding to the detected target event from the second video.
  • the second extraction unit 115 may be able to extract a plurality of second still images corresponding to each of the detected target events.
  • the second video generation unit 114 may be able to generate a plurality of second videos corresponding to each of the detected target events, and the second extraction unit 115 may be able to extract at least one second still image from each of the generated second videos.
  • the second extraction unit 115 may be able to extract a second related image group, which is a plurality of still images before and after the second still image, from the second video.
  • the motion analysis system 100 may include a display control unit 214 that simultaneously displays the first still image and the second still image in any of Aspects 6 to 9 above.
  • the motion analysis system 100 may include a display control unit 214 that displays at least one of the first video and the second video in any of the above aspects 6 to 10.
  • the target event detection unit 112 may detect a target event based on at least one of the speed of posture change and the movement speed of the equipment (the racket 6) during the motion, which are calculated based on the detection information.
  • in any of Aspects 1 to 12 above, the motion analysis system 100 may include a display control unit 214 that superimposes and displays a skeletal model, generated based on data of the first video, on the first still image.
  • the motion analysis system 100 includes at least one user terminal 2 capable of capturing the first video while acquiring the detection information, and at least one motion analysis device (server device 1) comprising a first acquisition unit (analysis request reception unit 111), a second acquisition unit (analysis request reception unit 111), a target event detection unit 112, and a first extraction unit 113.
  • the motion analysis device (server device 1) may detect the target event and extract the first still image in response to acquiring an analysis instruction including the detection information and the first video from the user terminal 2.
  • the user terminal 2 may manage the first still image acquired from the motion analysis device (server device 1) in association with the date on which the first video transmitted to the motion analysis device (server device 1) was captured.
  • in aspect 14 or 15, the motion analysis system 100 may include a display control unit 214 that displays days with which the first still image acquired from the motion analysis device (server device 1) is associated so as to be distinguished from days with which it is not associated.
  • the target event may be that the equipment (the racket 6) used by the user comes into contact with a predetermined moving body, or that the equipment (the racket 6) assumes a predetermined posture.
  • the motion analysis method includes an acquisition step (step S17) in which the computer (server device 1) acquires the first video and the detection information, a target event detection step (step S18) in which the computer (server device 1) detects, based on the detection information, a target event that occurred during the motion, and a first extraction step (step S18) in which the computer extracts at least one first still image corresponding to the detected target event from the first video.
  • 1 Server device (motion analysis device), 2 User terminal, 3 User terminal (motion analysis device), 4 Network, 5 Motion sensor device (motion sensor), 6 Racket, 8 Table tennis table, 21 Control unit, 22, 52 Storage unit, 23 Input unit, 24 Display unit, 26 Imaging unit, 53 Angular velocity sensor, 54 Acceleration sensor, 100 Motion analysis system, 111 Analysis request reception unit (first acquisition unit, second acquisition unit), 112 Target event detection unit, 113 First extraction unit, 114 Second video generation unit, 115 Second extraction unit, 211 Imaging control unit, 212 Measurement value acquisition unit, 213 Data management unit, 214 Display control unit


Abstract

In order to improve a user's skill, the present invention effectively presents the movement of the user's body. This motion analysis system comprises: an analysis request reception unit that acquires a first video capturing motions of a user and detection information, detected by a motion sensor, pertaining to the motions; a target event detection unit that detects, on the basis of the detection information, a target event that has occurred during the motions; and a first extraction unit that extracts, from the first video, at least one first still image corresponding to the detected target event.

Description

Motion analysis system and motion analysis method
Patent Document 1: Japanese Patent Application Publication No. 2009-183455
Patent Document 2: Japanese Patent Application Publication No. 2020-205608
 本開示の一態様に係る動作解析システムは、ユーザの動作を撮像した第1動画を取得する第1取得部と、動きセンサによって検知された、前記動作に関する検知情報を取得する第2取得部と、前記検知情報に基づいて、前記動作中に発生した対象イベントを検出する対象イベント検出部と、検出された前記対象イベントに対応する少なくとも1つの第1静止画像を、前記第1動画から抽出する第1抽出部と、を備える。 A motion analysis system according to an aspect of the present disclosure includes: a first acquisition unit that acquires a first moving image of a user's motion; and a second acquisition unit that acquires detection information regarding the motion detected by a motion sensor. , a target event detection unit that detects a target event occurring during the operation based on the detection information, and extracting at least one first still image corresponding to the detected target event from the first video. A first extractor.
 本開示の一態様に係る動作解析方法は、コンピュータが、ユーザの動作を撮像した第1動画、および、動きセンサによって検知された、前記動作に関する検知情報を取得する取得ステップと、コンピュータが、前記検知情報に基づいて、前記動作中に発生した対象イベントを検出する対象イベント検出ステップと、コンピュータが、検出された前記対象イベントに対応する少なくとも1つの第1静止画像を、前記第1動画から抽出する第1抽出ステップと、を含む。 A motion analysis method according to an aspect of the present disclosure includes an acquisition step in which a computer acquires a first moving image of a user's motion and detection information regarding the motion detected by a motion sensor; a target event detection step of detecting a target event occurring during the operation based on detection information; and a computer extracting at least one first still image corresponding to the detected target event from the first video. a first extraction step.
本開示の一態様に係る動作解析システムの概略構成の一例を示す図である。FIG. 1 is a diagram illustrating an example of a schematic configuration of a motion analysis system according to an aspect of the present disclosure. 動きセンサ装置が取り付けられているラケットの構成例を示す斜視図である。FIG. 2 is a perspective view showing a configuration example of a racket to which a motion sensor device is attached. 動作解析システムの概略構成の一例を示すブロック図である。FIG. 1 is a block diagram showing an example of a schematic configuration of a motion analysis system. ユーザ端末の概略構成の一例を示すブロック図である。FIG. 2 is a block diagram showing an example of a schematic configuration of a user terminal. サーバ装置の概略構成の一例を示すブロック図である。FIG. 2 is a block diagram showing an example of a schematic configuration of a server device. 動作解析システムにおける主要な処理の流れの一例を示すシーケンス図である。FIG. 2 is a sequence diagram showing an example of the flow of main processing in the motion analysis system. ユーザ端末における表示画面の一例を示す図である。It is a figure showing an example of a display screen in a user terminal. ユーザ端末における表示画面の一例を示す図である。It is a figure showing an example of a display screen in a user terminal. ユーザ端末における表示画面の一例を示す図である。It is a figure showing an example of a display screen in a user terminal. ユーザ端末における表示画面の一例を示す図である。It is a figure showing an example of a display screen in a user terminal. ユーザ端末における表示画面の一例を示す図である。It is a figure showing an example of a display screen in a user terminal. ユーザ端末における表示画面の一例を示す図である。It is a figure showing an example of a display screen in a user terminal. ユーザ端末における表示画面の一例を示す図である。It is a figure showing an example of a display screen in a user terminal. ユーザ端末における表示画面の一例を示す図である。It is a figure showing an example of a display screen in a user terminal.
 Hereinafter, an embodiment of the present disclosure will be described in detail with reference to the schematically illustrated FIG. 1 and the other figures.
 (Scope of application of the motion analysis system 100 according to the present disclosure)
 First, the scope of application of the motion analysis system 100 according to the present disclosure will be described. The motion analysis system 100 is a system capable of analyzing a predetermined target event included in a user's motion. Here, the "user's motion" may be any motion that the user performs using his or her own body. The "target event" is an event that can occur as a result of the user's motion, and may be any event that can be an analysis target.
 For example, if the user's motion is a motion of moving a predetermined moving object by striking it with a piece of sports equipment, the target event may be the equipment coming into contact with the moving object, or the equipment assuming a predetermined posture.
 "Equipment" here means a tool that the user moves when playing a sport or the like, and "moving object" means an object that is set in motion in such a sport or activity.
 Examples of equipment include rackets for tennis, badminton, table tennis, squash, lacrosse, and the like; baseball and cricket bats; golf clubs; hockey and ice hockey sticks; gymnastics apparatus; fishing rods; and wear worn by the user. Examples of moving objects include balls used in various ball games, badminton shuttlecocks, and ice hockey pucks.
 Furthermore, a billiard cue is also a tool for striking a ball and can be included in the equipment. The action of kicking a ball in rugby or soccer can likewise be regarded as striking the ball with a shoe; therefore, rugby and soccer shoes may also be included in the equipment.
 The equipment may also include clothes, gloves, socks, and the like worn by the user, as well as tights-like garments worn over all or part of the user's body. In this case, analyzing the movement of the equipment makes it possible to analyze the movement of the user's body.
 The user's motion is not limited to sports. For example, the user's motion may be work or dance that requires assuming (or maintaining) a predetermined posture; in this case, the target event may be the user assuming the predetermined posture. Alternatively, the user's motion may be cooking a predetermined ingredient with a cooking utensil, such as a knife or a frying pan; in this case, the target event may be the utensil coming into contact with the ingredient, or the utensil assuming a predetermined posture.
 In the following embodiment, a motion analysis system 100 that analyzes a user's motion with a racket 6 in order to improve the user's table tennis skill will be described as an example. In this case, the equipment is the table tennis racket 6, and the moving object is, for example, a celluloid table tennis ball 7 (also called a ping-pong ball). In the embodiment below, the "target event" means, for example, that the user hit the ball 7 with the racket 6 (that is, the racket 6 came into contact with the ball 7).
 [Embodiment 1]
 (Configuration of the motion analysis system 100)
 The configuration of the motion analysis system 100 will be described with reference to FIG. 1 and other figures as appropriate. FIG. 1 is a diagram showing an example of a schematic configuration of the motion analysis system 100.
 The motion analysis system 100 includes a user terminal 2 capable of acquiring detection information regarding the user's motion from a motion sensor device 5 and of capturing a first video of that motion, and a server device 1 (motion analysis device) that detects target events and analyzes the user's motion. As shown in FIG. 1, the server device 1 and the user terminal 2 are communicably connected to each other via a network 4. The network 4 may be the Internet or a local network provided within a facility.
 The user terminal 2 can transmit an analysis request regarding the user's motion to the server device 1. The transmitted analysis request includes the first video capturing the user's motion and the detection information regarding that motion. The user terminal 2 may include a touch panel 29 that provides the functions of an input unit 23 and a display unit 24, and an imaging unit 26 capable of capturing the first video. FIG. 1 shows the user terminal 2 capturing the motion of a user holding the racket 6 to which the motion sensor device 5 is attached.
 In response to the analysis request received from the user terminal 2, the server device 1 analyzes the detection information and the first video acquired from that user terminal 2, and transmits the analysis result to the user terminal 2. In this way, the motion analysis system 100 can provide the user with an analysis result based on the first video and on the detection information detected by the motion sensor device 5. For example, when the detection information indicates the movement and posture changes of the equipment, the analysis result contains suggestions not only on how the user moves his or her body but also on how the user handles the equipment. The motion analysis system 100 can therefore make the user aware of points to correct in body movement and equipment handling, and can support effective practice, allowing the user to practice effectively to improve his or her skill.
 Although FIG. 1 shows the motion analysis system 100 with one server device 1 and one user terminal 2, the system is not limited to this configuration and may include a plurality of server devices 1 and a plurality of user terminals 2.
 <Configuration of the racket 6 to which the motion sensor device 5 is attached>
 A user of the motion analysis system 100 uses a racket 6 to which a motion sensor device 5 (motion sensor) is attached. The motion sensor device 5 measures the movement of the racket 6 held by the user in real time.
 FIG. 2 is a perspective view showing a configuration example of the racket 6 to which the motion sensor device 5 is attached. A shakehand-type racket 6 is illustrated here as an example, but the racket is not limited to this; for example, the racket 6 may have another shape, such as a penholder type.
 The racket 6 has a hitting portion 62 for hitting the ball 7 and a grip 61 for the user to hold. The hitting portion 62 has hitting surfaces 9a for hitting the ball 7. The racket 6 shown in FIG. 2 is of the shakehand type and has a pair of front and back hitting surfaces 9a. The hitting surfaces 9a of the racket 6 are often covered with rubber having various properties (for example, a resilient sheet composed of a sponge sheet and a rubber sheet).
 The grip 61 is the part that the user holds to manipulate the racket 6. The hitting portion 62 and the grip 61 may be formed integrally with each other.
 The motion sensor device 5 is fixed to the racket 6 and functions as an inertial sensor that detects at least angular velocity. The attachment position, shape, size, and the like of the motion sensor device 5 on the racket 6 may be set as appropriate. In the illustrated example, the motion sensor device 5 is attached to the end of the grip 61 of the racket 6 opposite the hitting portion 62, although it is not limited to this position. The motion sensor device 5 may be detachable from the racket 6, or may be fixed to the racket 6 so as not to be detachable. The motion sensor device 5 can also detect the impact when the ball 7 contacts the racket 6, and can detect that the racket 6 has assumed a predetermined posture.
 Next, the configuration of the motion analysis system 100 will be described with reference to FIG. 3. FIG. 3 is a block diagram showing an example of a schematic configuration of the motion analysis system 100.
 <Schematic configuration of the motion sensor device 5>
 The motion sensor device 5 includes a storage unit 52, an angular velocity sensor 53 that detects the angular velocity of the racket 6, an acceleration sensor 54 that detects the acceleration of the racket 6, a communication unit 55 that communicates with the user terminal 2, and a CPU (Central Processing Unit) 50. The angular velocity sensor 53 is, for example, a gyro sensor that detects the angular velocity of the racket 6. The CPU 50 processes signals input to or output from the angular velocity sensor 53, the acceleration sensor 54, and the communication unit 55. The motion sensor device 5 may also include a power supply that supplies power to the above components; the power supply is omitted from FIG. 3.
 The storage unit 52 may include a ROM (Read Only Memory), a RAM (Random Access Memory), an external storage device, and the like. The storage unit 52 stores predetermined application programs and the like that are executed by the CPU 50.
 The angular velocity sensor 53 is a three-axis angular velocity sensor capable of detecting the angular velocity about each of three axes: the x-axis, the y-axis, and the z-axis. Such a sensor may be, for example, a combination of an angular velocity sensor that detects angular velocity about the x-axis, one that detects angular velocity about the y-axis, and one that detects angular velocity about the z-axis.
 The acceleration sensor 54 is a three-axis acceleration sensor capable of detecting acceleration in the directions along each of the x-axis, the y-axis, and the z-axis. Any known configuration can be used for the three-axis acceleration sensor.
 The motion sensor device 5 can detect the direction of movement, the rotational motion, and the like of the racket 6 with the angular velocity sensor 53, and can measure the distance moved, the speed, and the like of the racket 6 with the acceleration sensor 54. In this way, the motion sensor device 5 makes it possible to analyze the movement of the racket 6.
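 As a concrete illustration of how such measured values could be used, the following is a minimal sketch, not taken from the patent, of estimating the racket's speed by numerically integrating accelerometer samples. The sample format, field names, and the omission of gravity compensation are all simplifying assumptions.

from dataclasses import dataclass

@dataclass
class ImuSample:
    t: float                            # timestamp in seconds
    gyro: tuple[float, float, float]    # angular velocity (rad/s) about x, y, z
    accel: tuple[float, float, float]   # acceleration (m/s^2) along x, y, z

def estimate_speeds(samples: list[ImuSample]) -> list[float]:
    """Estimate racket speed over time by integrating acceleration magnitude.

    A real implementation would subtract gravity and correct for drift;
    this sketch only illustrates the integration step itself.
    """
    speeds = [0.0]
    for prev, cur in zip(samples, samples[1:]):
        dt = cur.t - prev.t
        ax, ay, az = cur.accel
        a_mag = (ax * ax + ay * ay + az * az) ** 0.5
        speeds.append(speeds[-1] + a_mag * dt)
    return speeds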
 Here, the three axes x, y, and z can be set arbitrarily. FIG. 2 shows an example in which the axis perpendicular to the hitting surface 9a of the racket 6 is the x-axis, the axis parallel to the hitting surface 9a and perpendicular to the axis of the grip 61 is the y-axis, and the axis parallel to the hitting surface 9a and parallel to the axis of the grip 61 is the z-axis; however, the axes are not limited to this. For example, the three axes may be an axis parallel to the direction of gravity and two mutually orthogonal axes perpendicular to the direction of gravity. In this specification, "parallel" does not require being strictly parallel, and "perpendicular" does not require being strictly perpendicular.
 The communication unit 55 is configured to be capable of communicating at least with the user terminal 2. This communication may be wireless or wired. Wireless communication may use radio waves or infrared rays; examples using radio waves include Bluetooth (registered trademark) and WiFi (registered trademark).
 <Schematic configuration of the user terminal 2>
 Next, the configuration of the user terminal 2 will be described. The user terminal 2 may be configured to include a computer; for example, it may be a smartphone or a tablet. Any known computer hardware and OS (Operating System) may be used. The user terminal 2 can be obtained by installing a predetermined application program on a general computer.
 As shown in FIG. 3, the user terminal 2 may include a CPU 20, a storage unit 22, an input unit 23 that receives various user operations, a display unit 24, a communication unit 25, and an imaging unit 26. The user terminal 2 may instead be configured to acquire the first video captured by a video camera or the like, so the imaging unit 26 is not an essential component of the user terminal 2. The user terminal 2 may also be configured to output the analysis result to a display device (not shown) that is separate from the user terminal 2 and used by the user, so the display unit 24 is not an essential component either.
 The CPU 20 processes signals input to or output from the input unit 23, the display unit 24, the communication unit 25, and the imaging unit 26. The storage unit 22 may include a ROM (Read Only Memory), a RAM (Random Access Memory), an external storage device, and the like. The storage unit 22 stores various control programs executed by the CPU 20, data used in the various processes executed by the CPU 20, and the like.
 The user terminal 2 can capture the user's motion while simultaneously acquiring the detection information regarding that motion detected by the motion sensor device 5. For example, the user terminal 2 can start acquiring the detection information and capturing the user's motion at the same time (or almost at the same time).
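 One simple way to realize this near-simultaneous start is to trigger both operations back to back and record a common reference time. The sketch below assumes hypothetical sensor.start() and camera.start_recording() interfaces, which the patent does not specify.

import time

def start_session(sensor, camera) -> float:
    """Start sensor streaming and video capture as close together as
    possible, and return a common reference time so that measured values
    and video frames can be aligned later. The sensor and camera objects
    and their methods are illustrative assumptions."""
    t0 = time.monotonic()
    sensor.start()            # begin streaming detection information
    camera.start_recording()  # begin capturing the first video
    return t0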
 <Schematic configuration of the server device 1>
 Next, the configuration of the server device 1 will be described. The server device 1 may be configured to include a computer. Any known computer hardware and OS (Operating System) may be used. The server device 1 can be obtained by installing a predetermined application program on a general computer.
 As shown in FIG. 3, the server device 1 includes a CPU 10, a storage unit 12, and a communication unit 15.
 The CPU 10 processes signals input to or output from the communication unit 15. The storage unit 12 may include a ROM (Read Only Memory), a RAM (Random Access Memory), an external storage device, and the like. The storage unit 12 stores an analysis program executed by the CPU 10 and the like.
 In response to acquiring the detection information regarding the user's motion, the first video, and the analysis request from the user terminal 2, the server device 1 detects target events and performs the motion analysis. The server device 1 provides the analysis result to the user terminal 2 from which the analysis request was sent. The server device 1 may generate, based on the acquired detection information, a second video showing the posture changes of the racket 6 used by the user during the motion. The second video may be an animation showing the posture changes of the racket 6.
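 As one plausible way such an animation could be driven, the sketch below integrates the three-axis angular velocity into a rough Euler-angle orientation history from which racket poses could be rendered frame by frame. A production system would more likely use quaternions and drift correction, so treat this as an illustrative assumption rather than the patent's method.

def integrate_orientation(times: list[float],
                          gyro_xyz: list[tuple[float, float, float]]
                          ) -> list[tuple[float, float, float]]:
    """Accumulate angular velocity into a rough (roll, pitch, yaw) history.

    Each returned tuple is an orientation from which one frame of the
    racket-posture animation (the second video) could be rendered.
    """
    angles = [(0.0, 0.0, 0.0)]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        wx, wy, wz = gyro_xyz[i]
        rx, ry, rz = angles[-1]
        angles.append((rx + wx * dt, ry + wy * dt, rz + wz * dt))
    return angles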
 The analysis result provided from the server device 1 to the user terminal 2 may include the following:
- the second video;
- the first still image and the second still image, described later;
- the first related image group and the second related image group, described later;
- the estimated trajectory and estimated spin rate of the ball 7 launched by the racket 6;
- a skeletal model showing the movement of the user's body.
 (Configuration of the user terminal 2)
 The configuration of the user terminal 2 will be described more specifically with reference to FIG. 4. FIG. 4 is a block diagram showing an example of a schematic configuration of the user terminal 2. The control unit 21 shown in FIG. 4 corresponds to the CPU 20 in FIG. 3.
 The user terminal 2 may include the control unit 21, the storage unit 22, the imaging unit 26, the input unit 23, the display unit 24, and the communication unit 25. The control unit 21 performs control so that the processing of each function of the user terminal 2 is executed.
 The storage unit 22 is a storage device that stores the various control programs read by the control unit 21, the analysis target data used in the various processes executed by the control unit 21, and the like.
 The control unit 21 includes an imaging control unit 211 and a measurement value acquisition unit 212. FIG. 4 further shows a configuration that includes a display control unit 214 and a data management unit 213.
 The imaging control unit 211 controls the imaging unit 26 to acquire the first video. The first video captures the user playing table tennis with the racket 6 to which the motion sensor device 5 is attached, and may show the trajectory of the ball 7 actually hit by the racket 6 held by the user. The imaging control unit 211 may store the first video captured by the imaging unit 26 in the analysis target data 222.
 The control unit 21 may include a video acquisition unit (not shown) in place of the imaging control unit 211. A user terminal 2 equipped with this video acquisition unit can acquire, via the communication unit 25, a video captured by an external device (not shown) such as a digital camera or a digital video camera. In this case, however, the detection information (for example, measured values) received from the motion sensor device 5 must be synchronized with the video captured by the external device. For this purpose, the acquired video may be stored in the analysis target data 222 of the storage unit 22 together with time information indicating when capture started, so that it can be synchronized with the measured values.
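 For example, if the recording start time and the frame rate are known, a sensor timestamp can be mapped to a frame index. The function below is a minimal sketch under that assumption.

def frame_index_for(sensor_time: float, video_start: float, fps: float) -> int:
    """Map a sensor timestamp (seconds) to the nearest video frame index,
    given the time at which the external device started recording."""
    return max(0, round((sensor_time - video_start) * fps))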
 The measurement value acquisition unit 212 acquires, via the communication unit 25, the detection information detected by the motion sensor device 5. The detection information is information in any format output by the motion sensor device 5; it may be, for example, a graph representing the change in position of the racket 6 over time, or measured values representing the changes in position and posture of the racket 6 over time. The detection information may include time information indicating when the measurements were taken. The measurement value acquisition unit 212 may be configured integrally with the imaging control unit 211, and may store the acquired detection information in the analysis target data 222.
 For example, the control unit 21 may start the capture of the video by the imaging unit and the acquisition of the detection information from the motion sensor device 5 at the same time. In the following, the case where the measurement value acquisition unit 212 acquires measured values as the detection information from the motion sensor device 5 will be described as an example.
 The data management unit 213 stores the captured first video and the acquired measured values in the analysis target data 222 in association with each other. The data management unit 213 may also store the first video and the measured values in the analysis target data 222 in association with the date on which the first video was captured, identification information corresponding to the person who performed the captured motion, and the like. With this configuration, the user terminal 2 can accept the selection of a date on which an analysis request was sent and display the analysis result corresponding to that request on the display unit 24.
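 A record associating a first video with its measured values might look like the following; the keys and values are purely illustrative assumptions, since the patent does not define a storage format.

# One illustrative analysis-target record; none of these field names
# come from the patent itself.
analysis_target_record = {
    "user_id": "player_A",          # identification information of the player
    "captured_on": "2022-01-09",    # date the first video was captured
    "video_path": "videos/serve_1.mp4",
    "sensor_csv_path": "sensors/serve_1.csv",
}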
 The data management unit 213 may be capable of storing analysis results related to a plurality of users in the storage unit 22. The plurality of users may be, for example, a plurality of players who receive instruction from a coach in order to improve their table tennis skills. With the motion analysis system 100, a coach who instructs a plurality of players can be provided with analysis results of each player's motion. This allows the coach to understand the characteristics of each player's motion and to provide instruction suited to each player.
 The data management unit 213 generates an analysis request including the first video and the detection information specified by the user. The data management unit 213 may store the analysis request in the analysis request log 221 of the storage unit 22 in association with the date on which it was generated.
 The data management unit 213 may store the analysis result received from the server device 1 in the analysis results 223. In this case, the data management unit 213 may store the transmitted analysis request and the corresponding analysis result in the analysis results 223 in association with each other. For example, the data management unit 213 may manage the first still image acquired from the server device 1 in association with the date on which the first video transmitted to the server device 1 was captured.
 The display control unit 214 causes the display unit 24 to display the captured first video, a screen for letting the user input the analysis request to be sent to the server device 1, the received analysis result, and the like. The display control unit 214 may display the analysis result and the second video at the same time. Specific examples of the display screens that the display control unit 214 causes the display unit 24 to display will be given later.
 The communication unit 25 communicates with the motion sensor device 5 and with the server device 1. Via the communication unit 25, the user terminal 2 receives the detection information from the motion sensor device 5, transmits analysis requests to the server device 1, receives analysis results from the server device 1, and so on.
 (Configuration of the server device 1)
 The configuration of the server device 1 will be described more specifically with reference to FIG. 5. FIG. 5 is a block diagram showing an example of a schematic configuration of the server device 1. The control unit 11 shown in FIG. 5 corresponds to the CPU 10 in FIG. 3; that is, each unit of the control unit 11 is a functional block realized by the CPU 10 in FIG. 3 executing the analysis program 121.
 The server device 1 includes the control unit 11, the storage unit 12, and the communication unit 15. The control unit 11 performs control so that the processing of each function of the server device 1 is executed.
 The storage unit 12 is a storage device that stores the various computer programs read by the control unit 11 and the data used in various processes, such as the analysis program 121 executed by the control unit 11. As shown in FIG. 5, the storage unit 12 may store analysis results 122, user information 123, and the like. The analysis results 122 store the analysis results that the server device 1 transmitted in response to acquired analysis requests, in association with those requests. The user information 123 may store information on users registered to use the motion analysis system 100; this may include, for example, an address and contact information such as an email address. The user information 123 may also store information on the user terminals 2 from which analysis requests were sent, a history of the analysis requests sent by each user, and the like.
 The control unit 11 includes an analysis request reception unit 111 (first acquisition unit, second acquisition unit), a target event detection unit 112, and a first extraction unit 113.
 The analysis request reception unit 111 acquires analysis requests from the user terminal 2. An analysis request includes the first video capturing the user's motion and the detection information regarding that motion detected by the motion sensor. The analysis request may also include information specific to the user terminal 2 that sent the request, identification information of the user performing the motion shown in the video to be analyzed, the name of the motion, and the like. The information specific to the user terminal 2 may include information indicating the destination to which the analysis result should be sent. The user's identification information may be, for example, the user's name, or identification information assigned to the user within a team to which the user belongs as a member, such as a membership number assigned to each member. The name of the motion may be the name of the target event to be detected in the analysis process (for example, "ball hit"), or the name of a motion that includes the target event, such as "serve," "drive," "push," or "serve return."
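 Put together, an analysis request might be serialized as something like the dictionary below; the field names are illustrative assumptions, not a wire format defined by the patent.

# An illustrative analysis-request payload (all field names are assumptions).
analysis_request = {
    "title": "Serve 1",               # name of the analysis request
    "action_name": "serve",           # e.g. "serve", "drive", "push", "serve return"
    "user_id": "player_A",            # e.g. a membership number within the team
    "reply_to": "user-terminal-42",   # destination for the analysis result
    "video": "serve_1.mp4",           # the first video
    "sensor_data": "serve_1.csv",     # detection information (measured values)
}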
 For example, the server device 1 may be configured to include a first acquisition unit that acquires the first video and a second acquisition unit that acquires the detection information; the configuration is not limited to that shown in FIG. 5. The analysis request reception unit 111 shown in FIG. 5 functions both as the first acquisition unit and as the second acquisition unit. The following description takes the server device 1 including the analysis request reception unit 111 as an example.
 The target event detection unit 112 detects target events during the user's motion based on the detection information. The target event detection unit 112 may detect a target event based on at least one of the speed of posture change and the movement speed of the racket 6 during the motion, both calculated from the detection information. For example, when the racket 6 and the ball 7 come into contact, the vibration generated in the racket 6 can be detected. This vibration affects at least one of the speed of posture change and the movement speed of the racket 6 during the motion. The target event detection unit 112 can therefore detect a target event based on the fact that at least one of these was affected by the vibration.
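 One way such vibration-induced disturbances could be detected is to flag abrupt sample-to-sample jumps in the angular-velocity magnitude. The sketch below does exactly that; the threshold and refractory period are chosen purely for illustration, not taken from the patent.

def detect_hits(times: list[float], gyro_mag: list[float],
                jump_threshold: float = 50.0, refractory: float = 0.2
                ) -> list[float]:
    """Return timestamps of candidate ball impacts.

    A racket-ball contact shows up as an abrupt change in the measured
    angular-velocity magnitude; this flags rates of change above a
    threshold and ignores further candidates within a short refractory
    window. Both parameter values are illustrative assumptions.
    """
    hits: list[float] = []
    last_hit = float("-inf")
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        if dt <= 0:
            continue
        rate = abs(gyro_mag[i] - gyro_mag[i - 1]) / dt
        if rate > jump_threshold and times[i] - last_hit > refractory:
            hits.append(times[i])
            last_hit = times[i]
    return hits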
 The first extraction unit 113 extracts, from the first video, at least one first still image corresponding to a detected target event. Here, the first still image is a still image corresponding to the point in time at which the target event detected by the target event detection unit 112 occurred, or to a point in time near it.
 The first extraction unit 113 may be capable of extracting, from the first video, a first related image group, which is a plurality of still images before and after the first still image. When a plurality of target events are detected from the detection information, the first extraction unit 113 may extract at least one first still image for each detected target event. The first related image group will be described later using a specific example of a display screen.
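 As one concrete way to pull these images out of the first video, the sketch below uses OpenCV to grab the frame at the event time plus neighboring frames. The 0.1-second spacing mirrors the example screens described later and is an assumption, as is the use of OpenCV itself.

import cv2  # OpenCV, used here as one plausible frame-extraction backend

def extract_event_frames(video_path: str, event_time: float,
                         video_start: float,
                         offsets=(-0.3, -0.2, -0.1, 0.0, 0.1)) -> dict:
    """Extract the still image at the event time (offset 0.0) and the
    surrounding related image group from the first video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames = {}
    for off in offsets:
        idx = max(0, round((event_time - video_start + off) * fps))
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            frames[off] = frame  # offset in seconds -> BGR image array
    cap.release()
    return frames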
 The first extraction unit 113 analyzes the movement of the racket 6 based on the acquired measured values. For example, the first extraction unit 113 may synchronize the first video with the measured values and detect the target events contained in the first video. The first extraction unit 113 stores the analysis result in the storage unit 12 (analysis results 122 in FIG. 5) in association with the measured values and the first video used for the analysis.
 In response to an analysis request, the server device 1 may analyze the movement of the racket 6 and create the second video by any known method. In this case, as shown in FIG. 5, the server device 1 may further include a second video generation unit 114 and a second extraction unit 115.
 The second video generation unit 114 generates, based on the detection information, the second video showing the posture changes of the racket 6 during the user's motion. The second extraction unit 115 may be capable of extracting, from the second video, a second related image group, which is a plurality of still images before and after the second still image. The second related image group will be described later using a specific example of a display screen.
 The second extraction unit 115 extracts, from the second video, at least one second still image corresponding to a target event detected by the target event detection unit. Here, the second still image is a still image corresponding to the point in time at which the target event detected based on the detection information occurred, or to a point in time near it. When a plurality of target events are detected from the detection information, the second extraction unit 115 may extract a plurality of second still images, one corresponding to each detected target event.
 The second video generation unit 114 may be capable of generating a plurality of second videos, one corresponding to each detected target event. In this case, the second extraction unit 115 may be capable of extracting at least one second still image from each of the generated second videos.
 The communication unit 15 communicates with the user terminal 2. Via the communication unit 15, the server device 1 receives analysis requests and transmits analysis results.
 (Main processing flow in the motion analysis system 100)
 Next, the main processing flow in the motion analysis system 100 will be described with reference to FIG. 6. FIG. 6 is a sequence diagram showing an example of the main processing flow in the motion analysis system 100.
 First, the user activates the motion sensor device 5 (step S10) and the imaging unit of the user terminal 2 (step S11).
 Next, in response to the user's operation, the user terminal 2 outputs a detection start instruction to the motion sensor device 5 (step S12) and starts imaging (step S13), capturing the first video for a predetermined time (for example, 30 seconds). On receiving the detection start instruction, the motion sensor device 5 starts transmitting measured values indicating the movement of the racket 6 to the user terminal 2. The user terminal 2 acquires from the motion sensor device 5 the detection information indicating the user's motion while the first video is being captured (step S15). Alternatively, the motion sensor device 5 that has received the detection start instruction may collect the measured values indicating the movement of the racket 6 over the predetermined time (for example, 30 seconds) and transmit them to the user terminal 2 together.
 The user terminal 2 accepts a predetermined operation by the user and transmits an analysis request to the server device 1 (step S16). The analysis request includes the first video capturing the user's motion and the detection information regarding that motion.
 The server device 1 acquires the analysis request from the user terminal 2 (step S17: acquisition step) and analyzes the detection information and the first video (step S18: target event detection step, first extraction step). In step S18, the server device 1 detects, based on the detection information, the target events that occurred during the motion, and extracts from the first video at least one first still image corresponding to each detected target event. The server device 1 then transmits the analysis result to the user terminal 2 (step S19).
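 Combining the sketches above, the server-side portion of steps S17 to S19 could be outlined as follows. The request structure and helper functions are the illustrative assumptions introduced earlier, not the patent's actual implementation.

def handle_analysis_request(request: dict) -> dict:
    """Outline of steps S17 to S19: detect target events from the
    detection information, extract still images for each event from the
    first video, and return the results. Uses the hypothetical
    detect_hits() and extract_event_frames() sketches defined earlier."""
    hits = detect_hits(request["times"], request["gyro_magnitude"])
    frames_per_hit = {
        t: extract_event_frames(request["video"], t, request["video_start"])
        for t in hits
    }
    return {"hits": hits, "frames": frames_per_hit}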
 The user terminal 2 receives the analysis result from the server device 1 (step S20) and displays it on the display unit 24 (step S21). The user terminal 2 that has received the analysis result may display it on the display unit 24 in any manner; for example, it may read the analyzed first video from the analysis target data 222 and display it on the display unit 24 together with the analysis result.
 (Display screen examples)
 Next, examples of display screens that the display control unit 214 causes the display unit 24 to display will be described with reference to FIGS. 7 to 14. FIGS. 7 to 14 are diagrams showing examples of display screens on the user terminal 2. FIGS. 7 to 11 show examples of display screens up to the point where the user terminal 2 transmits an analysis request to the server device 1. FIGS. 12 to 14 show examples of display screens from when the user terminal 2 receives the analysis result from the server device 1 to when the received analysis result is displayed on the display unit 24.
 Here, the description takes as an example a user terminal 2 shared by a coach who is coaching a plurality of players and by those players. In this case, each player and the coach can be a user. By using the motion analysis system 100 in this way, the coach can manage each player's practice diary, analysis requests, and analysis results in association with one another.
 <Sending an analysis request>
 FIG. 7 is an example of the display screen (login screen) displayed when the user starts using the motion analysis system 100. Display R1 of the login screen shown in FIG. 7 provides fields for the user to input user information, which may be, for example, an email address and a password. The user terminal 2 may transmit the input user information to the server device 1. In this case, the server device 1 may approve the user's use of the motion analysis system 100 by checking the received user information against the pre-registered user information 123.
 Next, FIGS. 8 and 9 are examples of display screens that display the practice diaries of the players. Display R2 shown in FIG. 8 shows a list of the players being coached by the coach. This screen allows the coach or a user to select a player from the list. FIGS. 9 to 14 show the case where "Player A" is selected in FIG. 8.
 FIG. 9 is an example of a display screen showing Player A's practice diary. Display R3 shows the selected date and a calendar. FIG. 9 shows an example in which Player A's practice diary for January 9, 2022 is displayed: the selected date, "Sunday, January 9," is shown, and a circular mark is placed on January 9 in the calendar. The date field and the calendar shown in display R3 allow the coach or user to select the date of the practice diary to be displayed.
 A square mark is placed on January 8 in the calendar of display R3. This square mark indicates that January 8 was a day on which an analysis request for Player A's motion was sent and the analysis result was received; no square mark is placed on days when no analysis request was sent. When the coach or user selects "January 8" in display R3, the user terminal 2 displays on the display unit 24 information indicating the analysis request sent on January 8 and information indicating the received analysis result (see, for example, FIG. 14). Here, the analysis request includes the first video and the detection information, and the analysis result includes the first still image extracted by the server device 1. Display R3 therefore distinguishes between days with which a first still image acquired from the server device 1 is associated and days with which none is associated. The mark indicating a day on which an analysis result was received is not limited to a square mark and may be changed as appropriate.
 When "Create a practice diary" in display R4 of FIG. 9 is selected, the display control unit 214 causes the display unit 24 to display a screen that accepts operations for creating a new practice diary or updating an existing one.
 FIG. 10 is an example of a display screen for entering Player A's practice diary for January 9. The screen for entering the practice diary shows headings such as "Practice goals," "Practice content," "Important points and cautions (what the coach taught me)," "Things I noticed (images, future goals)," and a free entry field. Player A or the coach can enter and update the practice diary on this screen.
 When "Register data to be analyzed" in display R5 of FIG. 10 is selected, the display control unit 214 causes the display unit 24 to display a screen that accepts operations for sending an analysis request.
 FIG. 11 is an example of a display screen that accepts operations for sending an analysis request. Display R6 in FIG. 11 provides fields for the user or coach to enter or prepare the items and data needed to send the analysis request.
 "Title (required)" in display R6 is a field that accepts the name of the analysis request; FIG. 11 shows "Serve 1" entered. "Sensor data CSV (required)" is a field for the user or coach to specify the detection information to be analyzed by the server device 1 (shown blank). "Swing video (required)" is a field for specifying the first video to be transmitted to the server device 1 (shown blank). "Latest geomagnetic data registration date" and "Geomagnetic data CSV (required if no geomagnetic data is registered)" are fields for specifying geomagnetic data that can be used when the server device 1 analyzes the detection information (shown blank).
 When "Register" in display R7 is selected, the data management unit 213 reads the detection information and the first video specified in these fields from the analysis target data 222 and creates an analysis request, which is then transmitted to the server device 1 via the communication unit 25. When "Back" in display R8 is selected, the display control unit 214 causes the display unit 24 to display the practice diary screen shown in FIG. 10.
 <Displaying the analysis result>
 Next, the display screens from when the user terminal 2 receives the analysis result from the server device 1 to when the received analysis result is displayed on the display unit 24 will be described with reference to FIGS. 12 to 14.
 FIG. 12 is an example of the display screen when the practice diary for January 9 is displayed after the analysis result has been received from the server device 1. Unlike the screen shown while the January 9 practice diary was being created (see FIG. 10), "Serve 1" is shown in display R9. When "Serve 1" in display R9 is selected, the display control unit 214 reads the analysis result from the analysis results 223 and causes the display unit 24 to display the screens shown in FIGS. 13 and 14.
 FIG. 13 shows part of a display screen showing the analysis result. In FIG. 13, display R10 shows the analyzed first video (or a thumbnail of it). Display R11 may show buttons for displaying on the display unit 24 the detection information ("sensor data" in the figure) and the geomagnetic data sent together with the first video at the time of the analysis request, as well as the original first video ("captured video" in the figure). For example, when the user selects "captured video," the display control unit 214 launches a predetermined video playback application and plays the first video.
 FIG. 14 shows part of a display screen showing the analysis result for each target event detected by the server device 1. Here, an example is shown in which two or more target events ("hits") were detected from the first video and the detection information. FIG. 14 is therefore a display screen that includes at least a display for "Hit No. 1" and a display for "Hit No. 2."
 Displays R12 to R15 in FIG. 14 show the analysis result for "Hit No. 1." Display R12 ("racket animation image" in the figure) shows the second video. Display R13 shows graphs of the movement of the racket 6 analyzed from the detection information; FIG. 14 shows a graph of "speed by rotation direction" and a graph of "swing speed." Display R14 ("racket posture: sequential images" in the figure) shows the second still image and the second related image group: the image corresponding to the "hit time" is the second still image, and the images corresponding to "0.3 seconds before" through "0.1 seconds before" and "0.1 seconds after" are the second related image group.
 Display R15 ("skeleton extraction: sequential images (0.1-second intervals)" in the figure) shows the first still image and the first related image group: the images corresponding to "No. 04" and "No. 05" are the first still images, and the remaining images are the first related image group. In this way, the display control unit 214 may cause the display unit 24 to display the first still image and the second still image at the same time. With this configuration, the motion analysis system 100 lets the user check the first and second still images side by side, making it easier to understand the characteristics of the target event and the points to improve.
 The server device 1 may be capable of generating skeletal data for a skeletal model based on the first video. In this case, the display control unit 214 may cause the display unit 24 to display the generated skeletal model superimposed on the first still image, as shown in display R15. The display control unit 214 may likewise display the generated skeletal model superimposed on the first related image group. The movements of the user's body can be shown clearly by using a skeletal model. Therefore, by displaying the first still image (and the first related image group) with the skeletal model superimposed, as shown in display R15, the motion analysis system 100 can make the user clearly recognize the characteristics of his or her own movement and the points to improve. The user can then practice to improve his or her skill while being conscious of those characteristics and improvement points.
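 For illustration only, a superimposition like that of display R15 could be rendered roughly as below. The joint names, the edge list, and the OpenCV drawing calls are assumptions; the skeletal data itself would come from whatever pose estimation the server device 1 applies to the first video.

```python
import cv2

# Hypothetical joint pairs for a simple upper-body skeleton.
EDGES = [("shoulder_r", "elbow_r"), ("elbow_r", "wrist_r"),
         ("shoulder_l", "elbow_l"), ("elbow_l", "wrist_l"),
         ("shoulder_r", "shoulder_l")]


def overlay_skeleton(frame, keypoints):
    """Draw a skeletal model (joints and bones) over a still image.
    `keypoints` maps joint names to (x, y) pixel coordinates."""
    out = frame.copy()
    for a, b in EDGES:
        if a in keypoints and b in keypoints:
            cv2.line(out, keypoints[a], keypoints[b], (0, 255, 0), 2)
    for (x, y) in keypoints.values():
        cv2.circle(out, (x, y), 4, (0, 0, 255), -1)
    return out
```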
 As shown in FIG. 14, the display control unit 214 may display the at least one first still image and the first related image group in the chronological order they occupy in the first video. This allows the motion analysis system 100 to have the user understand the analysis results for the target event in association with the flow of the user's series of motions.
 The display control unit 214 may display at least one first still image and the first related image group in chronological order for each detected target event. For example, when "Hit No. 2" is detected as a target event in addition to "Hit No. 1", the display control unit 214 may cause the display unit 24 to display the first still image (and the first related image group) and the second still image (and the second related image group) for "Hit No. 2" in the same display format as displays R14 and R15.
 (Another Embodiment 1)
 FIGS. 13 and 14 show an example in which the first video and the second video are displayed on the display unit 24, but the present invention is not limited to this. For example, the display control unit 214 may display only the second still image and the second related image group without displaying the second video. That is, the display control unit 214 may display at least one of the first video and the second video.
 (Another Embodiment 2)
 Display R15 in FIG. 14 shows images in which the skeletal model is superimposed on the first still image and the first related image group, but the display format is not limited to this. For example, the display control unit 214 may display, in display R15, a video of the skeletal model generated by the server device 1 based on the first video.
 [Embodiment 2]
 The motion analysis system 100 according to Embodiment 1 of the present disclosure is configured such that the analysis results are transmitted from the server device 1 to the user terminal 2 (see FIG. 6). However, the motion analysis system 100 is not limited to this configuration. For example, the server device 1 may create a dedicated web page for each analysis request it receives and let the analysis results be viewed on that web page. In this case, the server device 1 only has to transmit, to the user terminal 2 that sent the analysis request, information for accessing that web page. This configuration can reduce the transmission load between the server device 1 and the user terminal 2.
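 A minimal sketch of this per-request publication, assuming an in-memory page store and a hypothetical base URL; a real deployment would persist the pages and restrict access to the requesting user.

```python
import uuid

RESULT_PAGES = {}  # in-memory stand-in for per-request result pages


def publish_results(analysis_result, base_url="https://example.com/results"):
    """Create a dedicated page for one analysis request and return the
    access information to send back to the user terminal."""
    page_id = uuid.uuid4().hex
    RESULT_PAGES[page_id] = analysis_result  # served later on demand
    return f"{base_url}/{page_id}"
```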
 [Embodiment 3]
 The motion analysis system 100 according to Embodiment 1 of the present disclosure is configured such that the server device 1 acquires an analysis request from the user terminal 2 and executes the motion analysis (see FIGS. 4 and 5). However, the motion analysis system 100 is not limited to this configuration. For example, the user terminal 2 may be provided with all or some of the functions of the server device 1 (that is, the control unit 11 and the storage unit 12). For example, when the user terminal 2 has all of the functions of the server device 1, the user terminal 2 can acquire the first video and the detection information, detect the target event, and extract the first still image, as in the sketch below. In this case, the user terminal 2 functions as a motion analysis device.
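 The flow the terminal runs in that case can be pictured as follows; `detect_events` and `extract_frames` stand in for a detector and a frame extractor like those sketched elsewhere in this section, and all names are illustrative.

```python
def analyze_on_terminal(video_path, gyro, accel, detect_events,
                        extract_frames, fs=200.0):
    """Hypothetical on-terminal pipeline: the terminal itself detects
    target events from the detection information and extracts the
    corresponding still images, with no server round trip."""
    results = []
    for t in detect_events(gyro, accel, fs):    # target event detection
        frames = extract_frames(video_path, t)  # first still image + related group
        results.append({"event_time_s": t, "frames": frames})
    return results
```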
 The invention according to the present disclosure has been described above with reference to the drawings and embodiments. However, the invention according to the present disclosure is not limited to the embodiments described above. That is, the invention according to the present disclosure can be modified in various ways within the scope indicated in the present disclosure, and embodiments obtained by appropriately combining the technical means disclosed in the different embodiments are also included in the technical scope of the invention according to the present disclosure. In other words, it should be noted that a person skilled in the art can easily make various variations or modifications based on the present disclosure, and that such variations or modifications are included within the scope of the present disclosure.
 [Summary]
 A motion analysis system 100 according to Aspect 1 of the present disclosure includes: a first acquisition unit (analysis request reception unit 111) that acquires a first video capturing a user's motion; a second acquisition unit (analysis request reception unit 111) that acquires detection information regarding the motion detected by a motion sensor; a target event detection unit 112 that detects, based on the detection information, a target event that occurred during the motion; and a first extraction unit 113 that extracts, from the first video, at least one first still image corresponding to the detected target event.
 In the motion analysis system 100 according to Aspect 2 of the present disclosure, in Aspect 1, the first extraction unit 113 may be capable of extracting, from the first video, a first related image group, which is a plurality of still images before and after the first still image.
 In the motion analysis system 100 according to Aspect 3 of the present disclosure, in Aspect 1 or 2, when a plurality of target events are detected, the first extraction unit 113 may extract the at least one first still image for each detected target event.
 The motion analysis system 100 according to Aspect 4 of the present disclosure, in Aspect 2, may include a display control unit 214 that displays the at least one first still image and the first related image group in chronological order in the first video.
 In the motion analysis system 100 according to Aspect 5 of the present disclosure, in Aspect 4, the display control unit 214 may display the at least one first still image and the first related image group in chronological order for each detected target event.
 In the motion analysis system 100 according to Aspect 6 of the present disclosure, in any one of Aspects 1 to 5, the motion is a motion in which the user uses a tool (racket 6), and the system may include: a second video generation unit 114 that generates, based on the detection information, a second video showing changes in the posture of the tool (racket 6) during the motion; and a second extraction unit 115 that extracts, from the second video, at least one second still image corresponding to the detected target event.
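 As a hedged illustration of how a posture animation such as the second video could be driven from the detection information, the sketch below integrates angular-velocity samples into an orientation track for rendering. The sampling rate, the quaternion bookkeeping, and the omission of drift correction are simplifying assumptions, not the disclosed method.

```python
import numpy as np


def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])


def integrate_orientation(gyro, fs=200.0):
    """Integrate body-frame angular velocity (rad/s, shape (N, 3)) into
    a rough orientation track; each step applies dq = 0.5 * q * (0, w)."""
    q = np.array([1.0, 0.0, 0.0, 0.0])  # identity orientation
    track = [q.copy()]
    dt = 1.0 / fs
    for w in gyro:
        dq = 0.5 * quat_mul(q, np.array([0.0, *w]))
        q = q + dq * dt
        q /= np.linalg.norm(q)  # renormalize to a unit quaternion
        track.append(q.copy())
    return np.array(track)
```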
 In the motion analysis system 100 according to Aspect 7 of the present disclosure, in Aspect 6, when a plurality of target events are detected, the second extraction unit 115 may be capable of extracting a plurality of second still images corresponding to the respective detected target events.
 In the motion analysis system 100 according to Aspect 8 of the present disclosure, in Aspect 6 or 7, the second video generation unit 114 may be capable of generating a plurality of second videos corresponding to the respective detected target events, and the second extraction unit 115 may be capable of extracting at least one second still image from each of the generated second videos.
 In the motion analysis system 100 according to Aspect 9 of the present disclosure, in any one of Aspects 6 to 8, the second extraction unit 115 may be capable of extracting, from the second video, a second related image group, which is a plurality of still images before and after the second still image.
 The motion analysis system 100 according to Aspect 10 of the present disclosure, in any one of Aspects 6 to 9, may include a display control unit 214 that displays the first still image and the second still image simultaneously.
 The motion analysis system 100 according to Aspect 11 of the present disclosure, in any one of Aspects 6 to 10, may include a display control unit 214 that displays at least one of the first video and the second video.
 In the motion analysis system 100 according to Aspect 12 of the present disclosure, in any one of Aspects 6 to 11, the target event detection unit 112 may detect the target event based on at least one of the speed of posture change and the movement speed of the tool (racket 6) during the motion, both calculated based on the detection information.
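 A minimal sketch of a detection rule in the spirit of this aspect, thresholding the posture-change speed (gyro magnitude) and movement intensity (acceleration magnitude) computed from the detection information. The thresholds, sampling rate, and refractory window are invented for illustration.

```python
import numpy as np


def detect_hits(gyro, accel, fs=200.0, gyro_thresh=15.0,
                accel_thresh=40.0, refractory_s=0.5):
    """Return event times (s) where the gyro magnitude (rad/s) or the
    acceleration magnitude (m/s^2) spikes above a threshold; detections
    within the refractory window are merged into one event."""
    g_mag = np.linalg.norm(gyro, axis=1)   # gyro: shape (N, 3)
    a_mag = np.linalg.norm(accel, axis=1)  # accel: shape (N, 3)
    candidates = np.where((g_mag > gyro_thresh) | (a_mag > accel_thresh))[0]
    events, last_t = [], -np.inf
    for i in candidates:
        t = i / fs
        if t - last_t >= refractory_s:
            events.append(t)
            last_t = t
    return events
```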
 The motion analysis system 100 according to Aspect 13 of the present disclosure, in any one of Aspects 1 to 12, may include a display control unit 214 that displays a skeletal model generated based on the data of the first video superimposed on the first still image.
 The motion analysis system 100 according to Aspect 14 of the present disclosure, in any one of Aspects 1 to 13, may include: at least one user terminal 2 capable of capturing the first video while acquiring the detection information; and at least one motion analysis device (server device 1) including the first acquisition unit (analysis request reception unit 111), the second acquisition unit (analysis request reception unit 111), the target event detection unit 112, and the first extraction unit 113. The motion analysis device (server device 1) may execute the detection of the target event and the extraction of the first still image in response to acquiring, from the user terminal 2, an analysis instruction including the detection information and the first video.
 In the motion analysis system 100 according to Aspect 15 of the present disclosure, in Aspect 14, the user terminal 2 may manage the first still image acquired from the motion analysis device (server device 1) in association with the date on which the first video transmitted to the motion analysis device (server device 1) was captured.
 The motion analysis system 100 according to Aspect 16 of the present disclosure, in Aspect 14 or 15, may include a display control unit 214 that displays, in a distinguishable manner, days with which a first still image acquired from the motion analysis device (server device 1) is associated and days with which none is associated.
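 One way to picture the date-keyed management described in Aspects 15 and 16 is the log sketched below; the class and method names are assumptions, not the disclosed design.

```python
from collections import defaultdict
from datetime import date


class StillImageLog:
    """Associate first still images returned by the motion analysis
    device with the date the source video was captured."""

    def __init__(self):
        self._by_date = defaultdict(list)

    def add(self, capture_date: date, image_path: str):
        self._by_date[capture_date].append(image_path)

    def has_results(self, day: date) -> bool:
        # Lets a calendar view distinguish days with and without results.
        return bool(self._by_date.get(day))
```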
 In the motion analysis system 100 according to Aspect 17 of the present disclosure, in any one of Aspects 1 to 16, the target event may be the tool (racket 6) used by the user coming into contact with a predetermined moving body, or the tool (racket 6) assuming a predetermined posture.
 A motion analysis method according to Aspect 18 of the present disclosure includes: an acquisition step (step S17) in which a computer (server device 1) acquires a first video capturing a user's motion and detection information regarding the motion detected by a motion sensor; a target event detection step (step S18) in which the computer (server device 1) detects, based on the detection information, a target event that occurred during the motion; and a first extraction step (step S18) in which the computer extracts, from the first video, at least one first still image corresponding to the detected target event.
1 Server device (motion analysis device)
2 User terminal
3 User terminal (motion analysis device)
4 Network
5 Motion sensor device (motion sensor)
6 Racket
8 Table tennis table
21 Control unit
22, 52 Storage unit
23 Input unit
24 Display unit
26 Imaging unit
53 Angular velocity sensor
54 Acceleration sensor
100 Motion analysis system
111 Analysis request reception unit (first acquisition unit, second acquisition unit)
112 Target event detection unit
113 First extraction unit
114 Second video generation unit
115 Second extraction unit
211 Imaging control unit
212 Measurement value acquisition unit
213 Display control unit
214 Data management unit

Claims (18)

  1.  A motion analysis system comprising:
     a first acquisition unit that acquires a first video capturing a motion of a user;
     a second acquisition unit that acquires detection information regarding the motion detected by a motion sensor;
     a target event detection unit that detects, based on the detection information, a target event that occurred during the motion; and
     a first extraction unit that extracts, from the first video, at least one first still image corresponding to the detected target event.
  2.  The motion analysis system according to claim 1, wherein the first extraction unit is capable of extracting, from the first video, a first related image group, which is a plurality of still images before and after the first still image.
  3.  The motion analysis system according to claim 1 or 2, wherein, when a plurality of the target events are detected, the first extraction unit extracts the at least one first still image for each detected target event.
  4.  The motion analysis system according to claim 2, comprising a display control unit that displays the at least one first still image and the first related image group in chronological order in the first video.
  5.  The motion analysis system according to claim 4, wherein the display control unit displays the at least one first still image and the first related image group in chronological order for each detected target event.
  6.  The motion analysis system according to any one of claims 1 to 5, wherein the motion is a motion in which the user uses a tool, the system comprising:
     a second video generation unit that generates, based on the detection information, a second video showing changes in the posture of the tool during the motion; and
     a second extraction unit that extracts, from the second video, at least one second still image corresponding to the detected target event.
  7.  The motion analysis system according to claim 6, wherein, when a plurality of the target events are detected, the second extraction unit is capable of extracting a plurality of second still images corresponding to the respective detected target events.
  8.  The motion analysis system according to claim 6 or 7, wherein the second video generation unit is capable of generating a plurality of the second videos corresponding to the respective detected target events, and the second extraction unit is capable of extracting at least one second still image from each of the generated second videos.
  9.  The motion analysis system according to any one of claims 6 to 8, wherein the second extraction unit is capable of extracting, from the second video, a second related image group, which is a plurality of still images before and after the second still image.
  10.  The motion analysis system according to any one of claims 6 to 9, comprising a display control unit that displays the first still image and the second still image simultaneously.
  11.  The motion analysis system according to any one of claims 6 to 10, comprising a display control unit that displays at least one of the first video and the second video.
  12.  The motion analysis system according to any one of claims 6 to 11, wherein the target event detection unit detects the target event based on at least one of the speed of posture change and the movement speed of the tool during the motion, both calculated based on the detection information.
  13.  The motion analysis system according to any one of claims 1 to 12, comprising a display control unit that displays a skeletal model generated based on data of the first video superimposed on the first still image.
  14.  The motion analysis system according to any one of claims 1 to 13, comprising:
     at least one user terminal capable of capturing the first video while acquiring the detection information; and
     at least one motion analysis device including the first acquisition unit, the second acquisition unit, the target event detection unit, and the first extraction unit,
     wherein the motion analysis device executes the detection of the target event and the extraction of the first still image in response to acquiring, from the user terminal, an analysis instruction including the detection information and the first video.
  15.  The motion analysis system according to claim 14, wherein the user terminal manages the first still image acquired from the motion analysis device in association with the date on which the first video transmitted to the motion analysis device was captured.
  16.  The motion analysis system according to claim 14 or 15, comprising a display control unit that displays, in a distinguishable manner, days with which the first still image acquired from the motion analysis device is associated and days with which it is not associated.
  17.  The motion analysis system according to any one of claims 1 to 16, wherein the target event is the tool used by the user coming into contact with a predetermined moving body, or the tool assuming a predetermined posture.
  18.  A motion analysis method comprising:
     an acquisition step in which a computer acquires a first video capturing a motion of a user and detection information regarding the motion detected by a motion sensor;
     a target event detection step in which the computer detects, based on the detection information, a target event that occurred during the motion; and
     a first extraction step in which the computer extracts, from the first video, at least one first still image corresponding to the detected target event.
PCT/JP2023/017219 2022-05-13 2023-05-08 Motion analysis system and motion analysis method WO2023219049A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022079562 2022-05-13
JP2022-079562 2022-05-13

Publications (1)

Publication Number Publication Date
WO2023219049A1 true WO2023219049A1 (en) 2023-11-16

Family

ID=88730452

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/017219 WO2023219049A1 (en) 2022-05-13 2023-05-08 Motion analysis system and motion analysis method

Country Status (1)

Country Link
WO (1) WO2023219049A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080094472A1 (en) * 2005-07-12 2008-04-24 Serge Ayer Method for analyzing the motion of a person during an activity
WO2017119403A1 (en) * 2016-01-08 2017-07-13 ソニー株式会社 Information processing device
WO2018008259A1 (en) * 2016-07-05 2018-01-11 ソニー株式会社 Information processing device, sensor device, and information processing system
JP2020175047A (en) * 2019-04-23 2020-10-29 カシオ計算機株式会社 Operation analyzing device, operation analyzing method and program
JP2020195573A (en) * 2019-06-03 2020-12-10 Kddi株式会社 Training evaluation device, method, and program

Similar Documents

Publication Publication Date Title
EP2973215B1 (en) Feedback signals from image data of athletic performance
US10121065B2 (en) Athletic attribute determinations from image data
CN104023799B (en) Method and system to analyze sports motions using motion sensors of mobile device
CN104488022B (en) Method for the physical education for providing Dynamic Customization in response to the action of mobile device
US20150105882A1 (en) Inertial measurement of sports motion
KR20160106671A (en) Movement analysis device, movement analysis system, movement analysis method, display method for movement analysis information, and program
CN106422211B (en) Statistical method and device for ball training technology
WO2023085333A1 (en) Display device and display method
WO2023219049A1 (en) Motion analysis system and motion analysis method
US20170312577A1 (en) System and Method for Sport Performance Monitoring, Analysis, and Coaching
KR20180085843A (en) Swing analyzing device capable of correcting a swing posture and playing a game, and operation method thereof
WO2022224998A1 (en) Display device and display method
WO2021153573A1 (en) Movement analysis system, server, movement analysis method, control program, and recording medium
JP2009183455A (en) Competition information input device
KR101654846B1 (en) Device for game with function analyzing user's swing and analyzing method
WO2014123419A1 (en) Motion tracking method and device
WO2021172459A1 (en) Movement analysis system, server, movement analysis method, control program, and recording medium
JP2018027188A (en) System and method for determination of sports hitting tool
JP2016016002A (en) Exercise information output method, exercise information output device, exercise information output system, and exercise information output program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23803526

Country of ref document: EP

Kind code of ref document: A1