WO2019235350A1 - Information processing system, information processing method, and storage medium

Information processing system, information processing method, and storage medium

Info

Publication number
WO2019235350A1
Authority
WO
WIPO (PCT)
Prior art keywords
person
behavior
information
processing system
information processing
Prior art date
Application number
PCT/JP2019/021505
Other languages
French (fr)
Japanese (ja)
Inventor
典孝 志村
Original Assignee
日本電気株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気株式会社
Priority to JP2020523665A (JP6939999B2)
Publication of WO2019235350A1 publication Critical patent/WO2019235350A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Definitions

  • the present invention relates to an information processing system, an information processing method, and a storage medium.
  • Patent Document 1 discloses a navigation device that detects a driver's line of sight and detects a gaze target based on the movement of the line of sight.
  • the navigation device has a function of predicting driving behavior based on a gaze behavior pattern.
  • In the navigation device exemplified in Patent Document 1, the prediction of behavior based on the gaze target sometimes could not sufficiently take into account that the tendency of behavior differs depending on the target person. The present invention has been made in view of this problem, and an object of the present invention is to provide an information processing system, an information processing method, and a storage medium that can perform behavior prediction taking into account that behavior tendencies differ from person to person.
  • According to one aspect of the present invention, there is provided an information processing system comprising: an identification information acquisition unit that acquires, from an image, identification information of a person specified by image recognition; an attention information acquisition unit that acquires, from a face image of the person, attention information indicating an attention state of the person; and a behavior prediction unit that predicts the behavior of the person based on the identification information and the attention information.
  • According to another aspect of the present invention, there is provided an information processing method comprising: acquiring, from an image, identification information of a person specified by image recognition; acquiring, from a face image of the person, attention information indicating an attention state of the person; and predicting the behavior of the person based on the identification information and the attention information.
  • According to another aspect of the present invention, there is provided a storage medium storing a program that causes a computer to execute an information processing method comprising: acquiring, from an image, identification information of a person specified by image recognition; acquiring, from a face image of the person, attention information indicating an attention state of the person; and predicting the behavior of the person based on the identification information and the attention information.
  • According to the present invention, it is possible to provide an information processing system, an information processing method, and a storage medium that can perform behavior prediction considering that the tendency of behavior differs for each person.
  • The information processing system according to the present embodiment will be described with reference to FIGS. 1 to 7.
  • the information processing system of this embodiment is a system used for predicting the behavior of a person who appears in an image.
  • This image is, for example, a video of a sports game, and the person is a sports player.
  • By displaying the player's behavior prediction, it is possible to visualize the player's psychology, strategy, and so on, and to improve the satisfaction of viewers watching the sport.
  • an image is a concept including a still image, a moving image, and a frame image included in the moving image. If the image is a moving image, it may further include audio data.
  • In this specification, sports include physical sports involving bodily exercise, but are not limited thereto, and may also include competitions with few physical elements and competitions that do not involve physical movement. That is, sports may include motor sports such as car racing, motorcycle racing, and motorboat racing, e-sports (electronic sports) in which computer games are played, and mind sports such as go, shogi, chess, mahjong, poker, and backgammon.
  • FIG. 1 is a block diagram illustrating a hardware configuration example of the information processing system 100.
  • the information processing system 100 may be a computer such as a desktop PC (Personal Computer), a notebook PC, or a tablet PC.
  • the information processing system 100 includes a CPU (Central Processing Unit) 151, a RAM (Random Access Memory) 152, a ROM (Read Only Memory) 153, and an HDD (Hard Disk Drive) 154 as computers that perform calculation, control, and storage.
  • the information processing system 100 includes a communication I / F (interface) 155, a display device 156, and an input device 157.
  • the CPU 151, RAM 152, ROM 153, HDD 154, communication I / F 155, display device 156, and input device 157 are connected to each other via a bus 158.
  • the display device 156 and the input device 157 may be connected to the bus 158 via a driving device (not shown) for driving these devices.
  • In FIG. 1, the units constituting the information processing system 100 are illustrated as an integrated device, but some of these functions may be provided by an external device.
  • the display device 156 and the input device 157 may be external devices that are different from the parts that constitute the functions of the computer including the CPU 151 and the like.
  • the CPU 151 is a processor that performs a predetermined operation according to a program stored in the ROM 153, the HDD 154, and the like and also has a function of controlling each part of the information processing system 100.
  • the RAM 152 is composed of a volatile storage medium and provides a temporary memory area necessary for the operation of the CPU 151.
  • the ROM 153 is composed of a nonvolatile storage medium and stores necessary information such as a program used for the operation of the information processing system 100.
  • the HDD 154 is a storage device that includes a nonvolatile storage medium and stores data necessary for processing, an operation program for the information processing system 100, and the like.
  • the communication I / F 155 is a communication interface based on standards such as Ethernet (registered trademark), Wi-Fi (registered trademark), and 4G, and is a module for performing communication with other devices.
  • the display device 156 is a liquid crystal display, an OLED display, or the like, and is used for displaying images, characters, interfaces, and the like.
  • the input device 157 is a keyboard, a pointing device, or the like, and is used for a user to operate the information processing system 100. Examples of pointing devices include a mouse, a trackball, a touch panel, and a pen tablet.
  • the display device 156 and the input device 157 may be integrally formed as a touch panel.
  • the hardware configuration shown in FIG. 1 is an exemplification, and other devices may be added, or some devices may not be provided. Some devices may be replaced with another device having the same function. Furthermore, a part of the functions of the present embodiment may be provided by another device via a network, and the functions of the present embodiment may be realized by being distributed to a plurality of devices.
  • the HDD 154 may be replaced with an SSD (Solid State Drive) using a semiconductor memory, or may be replaced with a cloud storage.
  • FIG. 2 is a functional block diagram of the information processing system 100 according to the present embodiment.
  • The information processing system 100 includes an image acquisition unit 101, a face detection unit 102, a feature amount extraction unit 103, a collation unit 104, a state determination unit 105, an identification information acquisition unit 106, an attention information acquisition unit 107, a behavior prediction unit 108, an image generation unit 109, and a storage unit 110.
  • The CPU 151 loads a program stored in the ROM 153, the HDD 154, or the like into the RAM 152 and executes it. The CPU 151 thereby realizes the functions of the image acquisition unit 101, the face detection unit 102, the feature amount extraction unit 103, the collation unit 104, the state determination unit 105, the identification information acquisition unit 106, the attention information acquisition unit 107, the behavior prediction unit 108, and the image generation unit 109. The processing performed by each of these units will be described later.
  • the CPU 151 realizes the function of the storage unit 110 by controlling the HDD 154.
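  • As an illustration only (not part of the patent disclosure), the way these functional units cooperate can be sketched as a simple pipeline; all class and function names below are hypothetical placeholders for the units shown in FIG. 2.

```python
class BehaviorPredictionPipeline:
    """Hypothetical composition of the functional units realized by the CPU 151."""

    def __init__(self, face_detector, feature_extractor, matcher, gaze_estimator, predictor):
        self.face_detector = face_detector          # face detection unit 102
        self.feature_extractor = feature_extractor  # feature amount extraction unit 103
        self.matcher = matcher                      # collation unit 104
        self.gaze_estimator = gaze_estimator        # attention information acquisition unit 107
        self.predictor = predictor                  # behavior prediction unit 108

    def process_frame(self, frame):
        face_region = self.face_detector(frame)
        if face_region is None:
            return None
        features = self.feature_extractor(frame, face_region)
        player_id = self.matcher(features)              # identification information
        if player_id is None:
            return None
        gaze = self.gaze_estimator(frame, face_region)  # attention information
        return self.predictor(player_id, gaze)          # predicted behavior
```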
  • FIG. 3 is a flowchart showing an outline of processing performed by the information processing system 100 according to the present embodiment.
  • the behavior prediction process performed by the information processing system 100 will be described with reference to FIG.
  • In the following description, the information processing system 100 detects the player who is about to hit a serve (the server) from a video of a volleyball game, predicts the direction in which the ball will fly, and displays the prediction result.
  • This processing is desirably performed in real time during live broadcasting from the viewpoint of improving the satisfaction of the viewer by providing a behavior prediction of the player. This is because live broadcasts have stronger needs for providing behavior predictions than recorded broadcasts.
  • In this specification, live broadcasting refers not only to broadcasting video at substantially the same time as it is shot, but also to broadcasting delayed by several seconds to several minutes from shooting using a delayed transmission system.
  • In step S101, the image acquisition unit 101 acquires video data of a volleyball game (frame image data constituting the video).
  • The moving image data may be received from, for example, a device external to the information processing system 100 such as a video camera, or may be read from moving image data temporarily stored in the storage unit 110.
  • In step S102, the face detection unit 102 detects a region containing a human face in the frame image data included in the moving image data.
  • FIG. 4 is a diagram schematically illustrating an example of detecting a human face from video data of a volleyball game.
  • In the frame image data 301, a player 302 holding a ball and preparing to hit a serve is shown.
  • the face detection unit 102 automatically searches for a human face from the frame image data 301 and detects the face. Thereby, as shown in FIG. 4, a rectangular face region R1 including the face of the player 302 is extracted.
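  • For illustration only: the patent does not name a particular face detection algorithm, so the following is a minimal sketch of step S102 using a generic off-the-shelf detector (OpenCV's bundled Haar cascade); the model file and thresholds are assumptions.

```python
import cv2

# Hypothetical sketch of step S102: detect candidate face regions R1 in one frame.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_regions(frame_bgr):
    """Return a list of (x, y, w, h) rectangles that contain faces."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```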
  • In step S103, the feature amount extraction unit 103 extracts a feature amount from the image in the face region R1 extracted by the face detection unit 102.
  • The feature amount may be a quantity representing facial features, such as the positions of characteristic parts including the pupils, the nose, and the corners of the mouth.
  • In step S104, the collation unit 104 performs face matching by collating the feature amount extracted in step S103 with the feature amounts to be collated included in the player database (player DB) and determining whether there is a matching combination. If the player database contains a feature amount that matches the feature amount extracted by the feature amount extraction unit 103 (YES in step S104), the detected person is determined to be a player participating in the game, and the process proceeds to step S105. If the database contains no matching feature amount (NO in step S104), the detected person is determined to be someone other than a player participating in the game, such as a coach, and the process ends.
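  • A minimal sketch of the matching in step S104, assuming face feature vectors are compared by cosine similarity against a fixed threshold; the threshold and the database layout are illustrative assumptions, not part of the patent.

```python
import numpy as np

MATCH_THRESHOLD = 0.6  # assumed similarity threshold

def match_player(query_feature, player_db):
    """player_db maps player_id -> reference feature vector (np.ndarray).
    Returns the best-matching player_id, or None if no entry exceeds the threshold."""
    best_id, best_score = None, MATCH_THRESHOLD
    for player_id, ref in player_db.items():
        score = float(np.dot(query_feature, ref) /
                      (np.linalg.norm(query_feature) * np.linalg.norm(ref)))
        if score > best_score:
            best_id, best_score = player_id, score
    return best_id
```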
  • FIG. 5 is a table showing a configuration example of the player database.
  • As shown in FIG. 5, the items of the player database include the names of players who can participate in the game, player IDs (identifiers), feature amounts of the players' face images, the players' uniform numbers, the players' teams, and the players' positions. In the player database, these items are associated with each other.
  • FIG. 5 shows a feature amount ID for identifying the feature amount, but actually, a feature amount corresponding to the feature amount ID is also included in the player database.
  • the player database may include the face image itself used for extracting the feature amount.
  • Since the processing of this embodiment is possible as long as the feature amount and the information identifying the player are associated with each other, items shown in FIG. 5 such as the player name, uniform number, team, and position may be omitted.
  • the player database may be stored in the storage unit 110 in the information processing system 100, or may be stored in a device external to the information processing system 100 such as a data server.
  • the player database may be owned by the broadcasting station or the like to which the user of the information processing system 100 belongs. However, the player database may be provided in a cloud environment so that a plurality of broadcasting stations and the like can be used jointly.
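  • Purely as an illustration of the kind of record FIG. 5 describes (the field names and values below are hypothetical), a database entry could be modeled as follows.

```python
from dataclasses import dataclass

@dataclass
class PlayerRecord:
    player_id: str
    name: str
    feature_id: str        # key of the stored face feature vector
    uniform_number: int
    team: str
    position: str          # e.g. "setter", "libero"

# Hypothetical entry; the feature vector itself would be stored alongside feature_id.
example = PlayerRecord("P001", "Player A", "F001", 7, "Team X", "setter")
```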
  • In step S105, the state determination unit 105 determines whether or not the player extracted by the face detection unit 102 is the server. If the player is the server (YES in step S105), the process proceeds to step S106. If the player is not the server (NO in step S105), the process ends.
  • The determination algorithm of the state determination unit 105 may be based on, for example, the position of the player on the court, the posture of the player, whether or not the player is holding the ball, and the like.
  • Alternatively, the algorithm of the state determination unit 105 may use a learning model generated by machine learning whose training data associates player images with information indicating whether or not the player is the server.
  • The determination algorithm of the state determination unit 105 may also refer to the player database and determine that the player is not the server when the player's position is libero. This is because the libero cannot serve under the rules, so if the extracted player is the libero, that player is not the server.
  • The determination algorithm of the state determination unit 105 may also make the determination with reference to court position information. By computing the change in court positions (the rotation) that accompanies the transfer of the right to serve, the state determination unit 105 can identify the server.
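  • As a rough sketch of the rotation bookkeeping just described, under the assumption that a team's positions are tracked as an ordered list, the server can be identified by rotating the list each time the team wins back the right to serve.

```python
class RotationTracker:
    """Track a team's rotation; index 0 is the player in the serving position."""

    def __init__(self, lineup):
        self.lineup = list(lineup)  # player IDs in initial rotation order

    def on_side_out(self):
        # When the team regains the right to serve, players rotate one position.
        self.lineup = self.lineup[1:] + self.lineup[:1]

    def current_server(self):
        return self.lineup[0]

tracker = RotationTracker(["P001", "P002", "P003", "P004", "P005", "P006"])
tracker.on_side_out()
assert tracker.current_server() == "P002"
```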
  • It is not essential that step S105 be performed after step S104; for example, it may be performed after step S102 or after step S103.
  • In step S106, the identification information acquisition unit 106 acquires the identification information of the person identified as the server.
  • This identification information may be a player ID in the player database.
  • However, the identification information may be information other than the player ID, such as a feature amount ID, player name, uniform number, or team name, as long as the information can be used to identify the player.
  • In step S107, the attention information acquisition unit 107 detects the direction of the player's line of sight from the face image in the player's face region R1.
  • FIG. 6 is a diagram schematically illustrating a gaze detection example from the player's face image. As illustrated in FIG. 6, the attention information acquisition unit 107 detects the orientation of the player's line of sight 303 based on the face image.
  • This line-of-sight detection algorithm can acquire the direction of the line of sight by, for example, acquiring the relative positions of the iris, pupil, and the like from the image with the eyes and the corners of the eyes as reference points. Alternatively, the algorithm may acquire the direction of the line of sight by acquiring the position of the pupil from the image based on the corneal reflection of the light emitted from the light source.
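  • For illustration, the relative-position approach mentioned above (iris position measured against the eye corners used as reference points) could be sketched as follows; the landmark inputs and the mapping to a scalar are assumptions, since the patent leaves the concrete algorithm open.

```python
import numpy as np

def horizontal_gaze_ratio(inner_corner, outer_corner, iris_center):
    """Return a value in [0, 1]: ~0 means the iris sits at the inner eye corner,
    ~1 at the outer corner, ~0.5 roughly straight ahead.
    All arguments are (x, y) landmark coordinates taken from the face image."""
    eye_vec = np.asarray(outer_corner, dtype=float) - np.asarray(inner_corner, dtype=float)
    iris_vec = np.asarray(iris_center, dtype=float) - np.asarray(inner_corner, dtype=float)
    t = float(np.dot(iris_vec, eye_vec) / np.dot(eye_vec, eye_vec))
    return min(max(t, 0.0), 1.0)
```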
  • the attention information acquisition unit 107 acquires attention information indicating the attention state of the player.
  • the attention information acquisition unit 107 may be one that detects the orientation of the player's face, or may be a combination of detection of the direction of the line of sight and detection of the orientation of the face.
  • the attention information may include at least one of a gaze direction and a face direction. Since the face has a larger area than the eyes, there is an advantage that the orientation can be acquired with relatively high accuracy even when the resolution of the image is low.
  • Other examples that can be included in the attention information include the player's facial expression, the player's mouth shape, and the like.
  • In step S108, the behavior prediction unit 108 predicts the player's behavior based on the identification information acquired by the identification information acquisition unit 106 and the attention information acquired by the attention information acquisition unit 107.
  • the prediction result obtained by this process may be called action prediction information.
  • An example of an algorithm for behavior prediction performed by the behavior prediction unit 108 will be described.
  • Attention information such as the line of sight is correlated with the player's behavior, but the nature of the correlation differs from player to player. For example, the player's line of sight often points in the direction in which the player intends to hit the serve. However, some players intentionally or unconsciously look in a direction different from the one in which they intend to hit, in order to avoid having the serve read by the opponent. For this reason, prediction accuracy can be improved by performing behavior prediction using a different model for each player.
  • Since the behavior prediction unit 108 of the present embodiment performs prediction using a different model for each player, it identifies the player using the identification information and predicts the player's behavior from the attention information using the model of the identified player. This makes it possible to perform prediction that takes into account individual differences in the correlation between attention information and behavior, improving prediction accuracy.
  • The individual differences in the correlation between attention information and behavior can be obtained by analyzing each player's past game information. For example, a learning model that predicts behavior differently for each player can be obtained by machine learning that takes as input learning data in which the relationship between attention information and behavior is stored in a database for each player. Behavior prediction that takes into account that behavior tendencies differ for each player is thereby realized. Moreover, because the labor involved is smaller than when building a model by hand, a highly accurate learning model that uses a large amount of data can be constructed easily.
  • the model that can be used in the behavior prediction unit 108 is not limited to that obtained by machine learning, and other models may be used.
  • the data analyst may classify the direction of the line of sight included in the attention information and manually construct the behavior prediction model by deriving the most likely behavior for each classification from past data.
  • the process performed by the behavior prediction unit 108 may refer to a database such as a table in which attention information and a player's behavior are associated with each other.
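  • As a minimal sketch of the per-player prediction just described (not the patent's actual model), each player ID could map to its own lookup table from gaze direction to most likely serve course, falling back to a shared default; all bucket names and values are hypothetical.

```python
from collections import defaultdict

# Hypothetical per-player tables: gaze bucket -> predicted serve course.
DEFAULT_MODEL = {"left": "left corner", "center": "deep middle", "right": "right corner"}
PLAYER_MODELS = defaultdict(lambda: DEFAULT_MODEL, {
    # This player tends to serve opposite to the gaze direction.
    "P001": {"left": "right corner", "center": "deep middle", "right": "left corner"},
})

def bucket_gaze(gaze_ratio):
    """Map a horizontal gaze ratio in [0, 1] to a coarse direction bucket."""
    if gaze_ratio < 0.4:
        return "left"
    if gaze_ratio > 0.6:
        return "right"
    return "center"

def predict_serve_course(player_id, gaze_ratio):
    return PLAYER_MODELS[player_id][bucket_gaze(gaze_ratio)]

print(predict_serve_course("P001", 0.2))  # -> "right corner"
```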
  • In step S109, the image generation unit 109 generates a predicted image including the behavior prediction information predicted by the behavior prediction unit 108.
  • This predicted image is provided to viewers, for example, by being incorporated in a broadcast image of a volleyball game.
  • FIG. 7 is a diagram illustrating a display example of a predicted image.
  • the prediction image 305 is generated by superimposing the behavior prediction information 304 on the frame image of the volleyball game.
  • a broken line in the behavior prediction information 304 indicates a trajectory of the ball hit by the server. The viewer can visually understand that the server is aiming at the lower left corner of the court by looking at the predicted image 305.
  • the display method of the behavior prediction information 304 does not have to be based on a figure such as an arrow shown in FIG. 7, and may be a display based on a change in color, a display based on text, or the like.
  • the prediction image 305 may be a schematic diagram in which a serve course is displayed on a picture of a volleyball court.
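  • A sketch of the kind of overlay step S109 describes, assuming OpenCV is used for drawing; the coordinates, colors, and caption are illustrative only.

```python
import cv2

def draw_prediction_overlay(frame_bgr, start_xy, end_xy, label="predicted serve"):
    """Superimpose behavior prediction information (an arrow and a caption) on a frame.
    start_xy and end_xy are (x, y) pixel coordinates of the arrow."""
    out = frame_bgr.copy()
    cv2.arrowedLine(out, start_xy, end_xy, color=(0, 0, 255), thickness=3, tipLength=0.05)
    cv2.putText(out, label, (start_xy[0], start_xy[1] - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
    return out
```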
  • The result of behavior prediction may also be used for purposes other than providing it to viewers by incorporating it into a broadcast image; for example, it may be provided to a person involved in a sports team, such as the coach, for strategy planning.
  • In that case, the behavior prediction result can be transmitted from a terminal of the information processing system 100 to, for example, a tablet terminal owned by the coach.
  • The coach can formulate a strategy using the behavior prediction result and send instructions to the players.
  • As described above, in the present embodiment, a person is identified by image recognition, and behavior prediction is performed based on the identification information of the identified person and the attention information of the person.
  • This provides an information processing system 100 that can perform behavior prediction taking into account that behavior tendencies differ for each person.
  • Note that the behavior prediction performed by the behavior prediction unit 108 in the above embodiment is based on uncertain information such as the player's line of sight, and may not always predict the player's behavior correctly. For example, when a player consciously takes an action that departs from his or her usual tendency, the behavior prediction may miss. However, viewers watch the behavior prediction results in order to enjoy the game while guessing whether the prediction will match the actual behavior, and they do not necessarily require the prediction to match the actual behavior. Therefore, in the information processing system 100 of this embodiment, high prediction accuracy is not essential.
  • FIG. 8 is a functional block diagram of the information processing system 200 according to the second embodiment.
  • the information processing system 200 includes an identification information acquisition unit 201, an attention information acquisition unit 202, and an action prediction unit 203.
  • the identification information acquisition unit 201 acquires identification information of a person specified by image recognition from an image.
  • the attention information acquisition unit 202 acquires attention information indicating the attention state of the person from the face image of the person.
  • the behavior prediction unit 203 predicts the behavior of the person based on the identification information and the attention information.
  • an information processing system 200 capable of performing behavior prediction considering that the tendency of behavior differs for each person is provided.
  • In the embodiments described above, the direction of a serve is predicted from the line of sight of the server in a volleyball game, but the behavior prediction of the embodiments is applicable to other scenes as well.
  • For example, the direction in which the setter raises the toss, the type of toss, the position from which the attack is hit, and the like may be predicted based on the direction of the line of sight of the setter or the attacker.
  • the behavior prediction of this embodiment can be applied to behavior prediction of sports other than volleyball.
  • the present embodiment can be applied to a soccer free kick or penalty kick scene.
  • For example, the direction in which the ball will be kicked, the direction in which the goalkeeper will dive, and the like can be predicted from the direction of the line of sight of the kicker or of the goalkeeper.
  • this embodiment can be applied to baseball.
  • For example, the pitch type, course, speed, and the like of the ball thrown by the pitcher can be predicted from the direction of the line of sight of the pitcher or of the catcher.
  • the direction in which the batter is about to strike can be predicted from the direction of the line of sight of the batter.
  • this embodiment can be applied to scenes other than sports.
  • the behavior prediction of this embodiment may be applied to a person shown in an image taken with a security camera.
  • In the above embodiments, the behavior prediction unit 108 uses identification information identifying a person and attention information as input information for behavior prediction, but it may perform behavior prediction using additional information as well. Examples of such information are given below.
  • a history of past actions in the game may be used for action prediction.
  • For example, the server may decide the direction of the serve in view of past serve directions and their success or failure. If a serve to the left corner succeeded (scored a point), the server will tend to hit the next serve to the left corner as well. If the server had been serving to the left corner but the previous serve failed (conceded a point), the server will tend to change direction and hit the next serve to the right corner. Likewise, if the server deliberately made the direction of the line of sight inconsistent with the direction of the serve and this succeeded, the server will tend to serve in a direction that does not match the line of sight the next time as well; if it failed, the server will tend to change the direction of the next serve so that the direction of the line of sight and the direction of the serve match.
  • By using the past behavior history for behavior prediction in this way, the accuracy of behavior prediction can be further improved.
  • Note that the tendencies described above are examples, and some players have other tendencies; therefore, the way past history is taken into account in the behavior prediction may be varied from player to player.
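  • As a hedged sketch of how such a history could enter the prediction (an assumption about the representation, not the patent's design), the outcomes of recent serves can be encoded as extra features alongside the gaze direction.

```python
def history_features(recent_serves):
    """recent_serves: list of (direction, scored) tuples, most recent last.
    Returns simple features summarizing the last serve and the recent success rate."""
    if not recent_serves:
        return {"last_direction": None, "last_scored": None, "success_rate": 0.0}
    last_direction, last_scored = recent_serves[-1]
    success_rate = sum(1 for _, scored in recent_serves if scored) / len(recent_serves)
    return {"last_direction": last_direction,
            "last_scored": last_scored,
            "success_rate": success_rate}

print(history_features([("left corner", True), ("left corner", False)]))
```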
  • The combination of players involved in a cooperative play, such as a toss and an attack in volleyball, may also be taken into account in behavior prediction. For example, with a certain combination of setter and attacker the right corner is often targeted, while with another combination the left corner is often targeted. Similarly, with a certain combination the direction of the line of sight often coincides with the direction of the attack, while with another combination the direction of the line of sight and the direction of the attack often differ. Since such tendencies may exist, taking the combination of players into account can improve behavior prediction.
  • The result of facial expression analysis based on the movement of the corners of the mouth or the like may also be used for behavior prediction. When the expression is normal, a player often has room for a strategic action such as hitting a serve in a direction different from the line of sight; when the expression is not normal, such as when fatigue has accumulated, there is often no room for such strategic action. For this reason, the facial expression and the behavior may be correlated: when the expression is normal, the player often aims in a direction different from the line of sight, and when a strained expression can be seen around the corners of the mouth or elsewhere, the player often aims in the same direction as the line of sight. Therefore, using the result of facial expression analysis for behavior prediction can further improve the accuracy of behavior prediction.
  • The situation of the game may also be used for behavior prediction. The situation of the game includes the scores of both teams and the situation of players other than the player whose behavior is to be predicted, such as the positions of the setters or ace attackers of both teams and the positional relationships among the players.
  • For example, a strategy of aiming a serve at the opposing ace attacker is known. Therefore, the prediction accuracy for the direction of a serve can be improved by performing behavior prediction in consideration of the position of the ace attacker.
  • In this way, the behavior prediction unit 108 can improve the accuracy of behavior prediction by using, as input information for behavior prediction, not only the identification information identifying a person and the attention information but also other information. Note that several of the above examples may be combined and used for behavior prediction.
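  • As an illustrative sketch only, combining the inputs named above into a single feature record might look like the following; which features the actual system uses is not specified by the patent, and all keys here are hypothetical.

```python
def build_prediction_input(player_id, gaze_ratio, history, game_situation):
    """Combine identification, attention, and contextual information into one record.
    `history` and `game_situation` are dictionaries like those sketched earlier."""
    features = {"player_id": player_id, "gaze_ratio": gaze_ratio}
    features.update({f"history_{k}": v for k, v in history.items()})
    features.update({f"game_{k}": v for k, v in game_situation.items()})
    return features

record = build_prediction_input(
    "P001", 0.2,
    {"last_direction": "left corner", "last_scored": True, "success_rate": 0.5},
    {"own_score": 20, "opponent_score": 18, "ace_attacker_position": "back left"})
```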
  • In the above embodiments, a person is identified by face matching, but other techniques may be adopted as long as the person can be identified by image recognition.
  • For example, a person may be identified by using character recognition to read the uniform number, name, and the like printed on the player's uniform.
  • the installation position of an imaging device (camera) that acquires an image for performing face detection, line-of-sight detection, and the like is not particularly limited, but is preferably close to the player in order to improve the accuracy of face detection, line-of-sight detection, and the like.
  • The scope of each embodiment also includes a processing method in which a program that operates the configuration of the embodiment so as to realize the functions of the above-described embodiments is recorded on a storage medium, and the program recorded on the storage medium is read out as code and executed on a computer. That is, a computer-readable storage medium is also included in the scope of each embodiment. In addition to the storage medium on which the above-described program is recorded, the program itself is included in each embodiment. Furthermore, one or more components included in the above-described embodiments may be a circuit, such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array), configured to realize the function of each component.
  • As the storage medium, for example, a floppy (registered trademark) disk, a hard disk, an optical disk, a magneto-optical disk, a CD (Compact Disk)-ROM, a magnetic tape, a nonvolatile memory card, or a ROM can be used.
  • The scope of each embodiment also includes the case where the program runs on an OS (Operating System) and performs processing in cooperation with other software or the functions of an expansion board.
  • The functions of each embodiment may also be provided in the form of SaaS (Software as a Service).
  • Appendix 1 An information processing system comprising: an identification information acquisition unit for acquiring identification information of a person identified by image recognition from the image; an attention information acquisition unit that acquires attention information indicating the attention state of the person from the face image of the person; and an action prediction unit that predicts the action of the person based on the identification information and the attention information.
  • Appendix 2 The information processing system according to appendix 1, wherein the attention information includes information extracted from features of the person's face or eyes.
  • Appendix 3 The information processing system according to appendix 1 or 2, wherein the attention information includes at least one of a gaze direction of the person and a face direction of the person.
  • Appendix 4 The information processing system according to any one of appendices 1 to 3, wherein the image recognition includes at least one of face matching and character recognition.
  • Appendix 5 The information processing system according to any one of appendices 1 to 4, wherein the behavior prediction unit predicts the behavior of the person using a learning model generated by machine learning that receives, as input, learning data including the attention information and the behavior of the person.
  • Appendix 6 The information processing system according to any one of appendices 1 to 4, wherein the behavior prediction unit predicts the behavior of the person using a database in which the attention information and the behavior of the person are associated with each other.
  • Appendix 7 The information processing system according to any one of appendices 1 to 6, further comprising an image generation unit that generates a predicted image including behavior prediction information including the behavior of the person predicted by the behavior prediction unit.
  • Appendix 8 The information processing system according to appendix 7, wherein the image generation unit generates the predicted image by superimposing the behavior prediction information on the image.
  • Appendix 9 The information processing system according to any one of appendices 1 to 8, wherein the image is a video of a sports game, and the person is a player of the sport.
  • Appendix 10 The information processing system according to appendix 9, wherein the behavior prediction unit further predicts the behavior of the person based on the situation of the game.
  • Appendix 11 The information processing system according to appendix 9 or 10, wherein the behavior prediction unit further predicts the behavior of the person based on a situation of a player other than the person.
  • Appendix 12 The information processing system according to any one of appendices 9 to 11, wherein the behavior prediction unit further predicts the behavior of the person based on a history of the behavior of the person in the game.
  • Appendix 13 The information processing system according to any one of appendices 9 to 12, wherein the behavior prediction unit further predicts the behavior of the person based on a combination of the person and a player who cooperates with the person.
  • Appendix 14 The information processing system according to any one of appendices 9 to 13, wherein the behavior prediction unit further predicts the behavior of the person based on the facial expression of the person.
  • Appendix 15 The information processing system according to any one of appendices 9 to 14, wherein the behavior prediction unit further predicts the behavior of the person based on the score of the game.
  • Appendix 16 The information processing system according to any one of appendices 9 to 15, wherein the behavior prediction unit further predicts the behavior of the person based on past game information of players of the opponent team of the person.
  • An information processing method comprising: acquiring, from an image, identification information of a person specified by image recognition; acquiring, from a face image of the person, attention information indicating an attention state of the person; and predicting the behavior of the person based on the identification information and the attention information.
  • Reference signs: 100, 200 Information processing system; 101 Image acquisition unit; 102 Face detection unit; 103 Feature amount extraction unit; 104 Collation unit; 105 State determination unit; 106, 201 Identification information acquisition unit; 107, 202 Attention information acquisition unit; 108, 203 Behavior prediction unit; 109 Image generation unit; 110 Storage unit; 151 CPU; 152 RAM; 153 ROM; 154 HDD; 155 Communication I/F; 156 Display device; 157 Input device; 158 Bus; 301 Frame image data; 302 Player; 303 Line of sight; 304 Behavior prediction information; 305 Predicted image; R1 Face region

Abstract

Provided is an information processing system comprising: an identification information acquisition unit for acquiring, from an image, identification information for a person specified by means of image recognition; an attention information acquisition unit for acquiring, from a facial image of the person, attention information indicating an attention state of the person; and a behavior prediction unit for predicting a behavior of the person on the basis of the identification information and the attention information.

Description

Information processing system, information processing method, and storage medium

 The present invention relates to an information processing system, an information processing method, and a storage medium.

 Patent Document 1 discloses a navigation device that detects a driver's line of sight and detects a gaze target based on the movement of the line of sight. The navigation device has a function of predicting driving behavior based on a gaze behavior pattern.

JP 2008-232912 A

 In the navigation device exemplified in Patent Document 1, the prediction of behavior based on the gaze target sometimes could not sufficiently take into account that the tendency of behavior differs depending on the target person.

 The present invention has been made in view of the above problem, and an object of the present invention is to provide an information processing system, an information processing method, and a storage medium that can perform behavior prediction taking into account that behavior tendencies differ from person to person.

 According to one aspect of the present invention, there is provided an information processing system comprising: an identification information acquisition unit that acquires, from an image, identification information of a person specified by image recognition; an attention information acquisition unit that acquires, from a face image of the person, attention information indicating an attention state of the person; and a behavior prediction unit that predicts the behavior of the person based on the identification information and the attention information.

 According to another aspect of the present invention, there is provided an information processing method comprising: acquiring, from an image, identification information of a person specified by image recognition; acquiring, from a face image of the person, attention information indicating an attention state of the person; and predicting the behavior of the person based on the identification information and the attention information.

 According to another aspect of the present invention, there is provided a storage medium storing a program that causes a computer to execute an information processing method comprising: acquiring, from an image, identification information of a person specified by image recognition; acquiring, from a face image of the person, attention information indicating an attention state of the person; and predicting the behavior of the person based on the identification information and the attention information.

 According to the present invention, it is possible to provide an information processing system, an information processing method, and a storage medium that can perform behavior prediction considering that the tendency of behavior differs for each person.
 The drawings are as follows.
 FIG. 1 is a block diagram showing a hardware configuration example of the information processing system according to the first embodiment.
 FIG. 2 is a functional block diagram of the information processing system according to the first embodiment.
 FIG. 3 is a flowchart showing an outline of processing performed by the information processing system according to the first embodiment.
 FIG. 4 is a diagram schematically showing an example of detection of a person's face from a video of a game.
 FIG. 5 is a table showing a configuration example of the player database.
 FIG. 6 is a diagram schematically showing an example of gaze detection from a player's face image.
 FIG. 7 is a diagram showing a display example of a predicted image.
 FIG. 8 is a functional block diagram of the information processing system according to the second embodiment.
 Hereinafter, exemplary embodiments of the present invention will be described with reference to the drawings. In the drawings, the same or corresponding elements are denoted by the same reference numerals, and their description may be omitted or simplified.

 [First Embodiment]
 The information processing system according to the present embodiment will be described with reference to FIGS. 1 to 7. The information processing system of this embodiment is a system used for predicting the behavior of a person appearing in an image. This image is, for example, a video of a sports game, and the person is a player of the sport. By displaying the player's behavior prediction, it is possible to visualize the player's psychology, strategy, and so on, and to improve the satisfaction of viewers watching the sport.

 In this specification, an image is a concept encompassing a still image, a moving image, and the frame images included in a moving image. If the image is a moving image, it may further include audio data. Also, in this specification, sports include physical sports involving bodily exercise, but are not limited thereto, and may also include competitions with few physical elements and competitions that do not involve physical movement. That is, sports may include motor sports such as car racing, motorcycle racing, and motorboat racing, e-sports (electronic sports) in which computer games are played, and mind sports such as go, shogi, chess, mahjong, poker, and backgammon.

 In the embodiment described below, as an example, a case is described in which the line of sight and the like of a player appearing in a video of a volleyball game are detected to predict the player's behavior and display the prediction result; however, the present invention is not limited to this.
 FIG. 1 is a block diagram showing a hardware configuration example of the information processing system 100. The information processing system 100 may be a computer such as a desktop PC (Personal Computer), a notebook PC, or a tablet PC.

 The information processing system 100 includes, as a computer that performs computation, control, and storage, a CPU (Central Processing Unit) 151, a RAM (Random Access Memory) 152, a ROM (Read Only Memory) 153, and an HDD (Hard Disk Drive) 154. The information processing system 100 also includes a communication I/F (interface) 155, a display device 156, and an input device 157. The CPU 151, the RAM 152, the ROM 153, the HDD 154, the communication I/F 155, the display device 156, and the input device 157 are connected to one another via a bus 158. The display device 156 and the input device 157 may be connected to the bus 158 via a driving device (not shown) for driving these devices.

 In FIG. 1, the units constituting the information processing system 100 are illustrated as an integrated device, but some of these functions may be provided by an external device. For example, the display device 156 and the input device 157 may be external devices separate from the part that constitutes the functions of the computer, including the CPU 151 and the like.

 The CPU 151 is a processor that performs predetermined operations according to programs stored in the ROM 153, the HDD 154, and the like, and that also has a function of controlling each part of the information processing system 100. The RAM 152 is composed of a volatile storage medium and provides a temporary memory area necessary for the operation of the CPU 151. The ROM 153 is composed of a nonvolatile storage medium and stores necessary information such as programs used for the operation of the information processing system 100. The HDD 154 is a storage device composed of a nonvolatile storage medium that stores data necessary for processing, the operation program of the information processing system 100, and the like.

 The communication I/F 155 is a communication interface based on standards such as Ethernet (registered trademark), Wi-Fi (registered trademark), and 4G, and is a module for communicating with other devices. The display device 156 is a liquid crystal display, an OLED display, or the like, and is used for displaying images, characters, interfaces, and the like. The input device 157 is a keyboard, a pointing device, or the like, and is used by the user to operate the information processing system 100. Examples of pointing devices include a mouse, a trackball, a touch panel, and a pen tablet. The display device 156 and the input device 157 may be integrally formed as a touch panel.

 Note that the hardware configuration shown in FIG. 1 is an example; devices other than these may be added, and some devices may be omitted. Some devices may be replaced with other devices having similar functions. Furthermore, some functions of the present embodiment may be provided by other devices via a network, and the functions of the present embodiment may be realized by being distributed among a plurality of devices. For example, the HDD 154 may be replaced with an SSD (Solid State Drive) using semiconductor memory, or with cloud storage.
 FIG. 2 is a functional block diagram of the information processing system 100 according to the present embodiment. The information processing system 100 includes an image acquisition unit 101, a face detection unit 102, a feature amount extraction unit 103, a collation unit 104, a state determination unit 105, an identification information acquisition unit 106, an attention information acquisition unit 107, a behavior prediction unit 108, an image generation unit 109, and a storage unit 110.

 The CPU 151 loads programs stored in the ROM 153, the HDD 154, and the like into the RAM 152 and executes them. The CPU 151 thereby realizes the functions of the image acquisition unit 101, the face detection unit 102, the feature amount extraction unit 103, the collation unit 104, the state determination unit 105, the identification information acquisition unit 106, the attention information acquisition unit 107, the behavior prediction unit 108, and the image generation unit 109. The processing performed by each of these units will be described later. The CPU 151 realizes the function of the storage unit 110 by controlling the HDD 154.

 FIG. 3 is a flowchart showing an outline of the processing performed by the information processing system 100 according to the present embodiment. The behavior prediction processing performed by the information processing system 100 will be described with reference to FIG. 3. In the following description, the information processing system 100 detects the player who is about to hit a serve (the server) from a video of a volleyball game, predicts the direction in which the ball will fly, and displays the prediction result. This processing is desirably performed in real time during live broadcasting, from the viewpoint of improving viewer satisfaction by providing a behavior prediction of the player, because the need to provide behavior predictions is stronger for live broadcasts than for recorded broadcasts. In this specification, live broadcasting refers not only to broadcasting video at substantially the same time as it is shot, but also to broadcasting delayed by several seconds to several minutes from shooting using a delayed transmission system.
 In step S101, the image acquisition unit 101 acquires video data of a volleyball game (frame image data constituting the video). The moving image data may be received from, for example, a device external to the information processing system 100 such as a video camera, or may be read from moving image data temporarily stored in the storage unit 110.

 In step S102, the face detection unit 102 detects a region containing a human face in the frame image data included in the moving image data. FIG. 4 is a diagram schematically showing an example of detecting a person's face from video data of a volleyball game. In the frame image data 301, a player 302 holding a ball and preparing to hit a serve is shown. The face detection unit 102 automatically searches the frame image data 301 for a human face and detects the face. As shown in FIG. 4, a rectangular face region R1 containing the face of the player 302 is thereby extracted.

 In step S103, the feature amount extraction unit 103 extracts a feature amount from the image in the face region R1 extracted by the face detection unit 102. The feature amount may be a quantity representing facial features, such as the positions of characteristic parts including the pupils, the nose, and the corners of the mouth.

 In step S104, the collation unit 104 performs face matching by collating the feature amount extracted in step S103 with the feature amounts to be collated included in the player database (player DB) and determining whether there is a matching combination. If the player database contains a feature amount that matches the feature amount extracted by the feature amount extraction unit 103 (YES in step S104), the detected person is determined to be a player participating in the game, and the process proceeds to step S105. If the database contains no matching feature amount (NO in step S104), the detected person is determined to be someone other than a player participating in the game, such as a coach, and the process ends.

 Here, the configuration of the player database will be described with reference to FIG. 5. FIG. 5 is a table showing a configuration example of the player database. As shown in FIG. 5, the items of the player database include the names of players who can participate in the game, player IDs (identifiers), feature amounts of the players' face images, the players' uniform numbers, the players' teams, and the players' positions. In the player database, these items are associated with each other. Although FIG. 5 shows feature amount IDs for identifying the feature amounts, the feature amounts corresponding to the feature amount IDs are actually also included in the player database.

 In addition to what is shown in FIG. 5, the player database may include the face images themselves used for extracting the feature amounts. Also, since the processing of this embodiment is possible as long as the feature amounts and the information identifying the players are associated with each other, items shown in FIG. 5 such as the player name, uniform number, team, and position may be omitted.

 The player database may be stored in the storage unit 110 in the information processing system 100, or may be stored in a device external to the information processing system 100, such as a data server. The player database may be owned in-house by the broadcasting station or the like to which the user of the information processing system 100 belongs; however, the player database may also be provided in a cloud environment so that a plurality of broadcasting stations and the like can use it jointly.
 ステップS105において、状態判定部105は、顔検出部102により抽出されたプレイヤーがサーバーであるか否かを判定する。当該プレイヤーがサーバーである場合(ステップS105においてYES)、処理はステップS106に移行する。当該プレイヤーがサーバーではない場合(ステップS105においてNO)、処理は終了する。 In step S105, the state determination unit 105 determines whether or not the player extracted by the face detection unit 102 is a server. If the player is a server (YES in step S105), the process proceeds to step S106. If the player is not a server (NO in step S105), the process ends.
 Here, the determination algorithm in the state determination unit 105 may be based on, for example, the player's position on the court, the player's posture, whether or not the player is holding the ball, and so on. Alternatively, the algorithm of the state determination unit 105 may use a learning model generated by machine learning that takes as input learning data in which images of players are associated with information indicating whether or not each player is the server.
 The determination algorithm in the state determination unit 105 may also refer to the player database and determine that the player is not the server when the player's position is libero. This is because a libero cannot serve under the rules, so if the extracted player is a libero, that player is not the server.
 The determination algorithm in the state determination unit 105 may also make its determination by referring to court position information. By calculating the change in court positions (rotation) that accompanies each transfer of the right to serve, the state determination unit 105 can identify the server.
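 The rotation-based determination can be illustrated with a simplified sketch: given the starting service order and the number of times the team has regained the right to serve, the expected server is obtained by stepping through the rotation. The function and the player IDs are hypothetical, and real rotation tracking would also have to handle substitutions and the libero.

```python
def current_server(rotation_order, serve_right_changes):
    """Return the player expected to serve, given the starting rotation order
    (the player in position 1 first) and the number of times the team has
    regained the right to serve. A simplified model for illustration only."""
    # Each time the team regains the right to serve, the line-up rotates by one,
    # and the next player in the rotation order becomes the server.
    index = serve_right_changes % len(rotation_order)
    return rotation_order[index]

# Example: a starting line-up of six players identified by hypothetical player IDs.
lineup = ["P001", "P002", "P003", "P004", "P005", "P006"]
print(current_server(lineup, 0))  # "P001" serves first
print(current_server(lineup, 2))  # after regaining the serve twice -> "P003"
```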
 It is not essential that the process of step S105 be performed after step S104; it may be performed, for example, after step S102, after step S103, or the like.
 In step S106, the identification information acquisition unit 106 acquires the identification information of the person identified as the server. This identification information may be the player ID in the player database. However, the identification information may be something other than the player ID, such as a feature amount ID, a player name, a uniform number, or a team name, as long as it can be used to identify the player.
 In step S107, the attention information acquisition unit 107 detects the direction of the player's line of sight from the face image within the player's face region R1. FIG. 6 is a diagram schematically illustrating an example of gaze detection from the player's face image. As illustrated in FIG. 6, the attention information acquisition unit 107 detects the direction of the player's line of sight 303 based on the face image. The gaze detection algorithm may, for example, acquire the direction of the line of sight by obtaining from the image the positions of the iris, pupil, and the like relative to reference points such as the inner and outer corners of the eyes. Alternatively, the algorithm may acquire the direction of the line of sight by obtaining the position of the pupil from the image with reference to the corneal reflection of light emitted from a light source. Since the direction of the player's line of sight 303 indicates the direction in which the player is paying attention, it is typical information indicating the player's attention state. In this way, the attention information acquisition unit 107 acquires attention information indicating the attention state of the player.
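 A rough sketch of the first gaze-estimation approach (iris position relative to the eye corners) is shown below. The landmark coordinates are assumed to come from an external face-landmark detector, and the normalization is a simplification for illustration only, not the embodiment's actual algorithm.

```python
import numpy as np

def estimate_gaze_direction(inner_corner, outer_corner, iris_center):
    """Very rough 2D gaze estimate: where the iris centre lies between the
    inner and outer eye corners. Returns a per-axis value in [-1, 1], where
    0 means the iris is centred. Landmark coordinates are assumed to come
    from an external face-landmark detector (not shown)."""
    inner = np.asarray(inner_corner, dtype=float)
    outer = np.asarray(outer_corner, dtype=float)
    iris = np.asarray(iris_center, dtype=float)
    eye_center = (inner + outer) / 2.0
    half_width = np.linalg.norm(outer - inner) / 2.0
    # Offset of the iris from the eye centre, normalised by the eye half-width.
    offset = (iris - eye_center) / max(half_width, 1e-6)
    return np.clip(offset, -1.0, 1.0)

# Example: iris shifted toward the outer corner -> positive horizontal component.
print(estimate_gaze_direction((100, 50), (140, 50), (125, 50)))
```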
 The attention information acquisition unit 107 may instead detect the orientation of the player's face, or may use detection of the gaze direction and detection of the face orientation together. In other words, the attention information may include at least one of the gaze direction and the face orientation. Since the face has a larger area than the eyes, it has the advantage that its orientation can be acquired with relatively high accuracy even when the resolution of the image is low. Other examples of information that may be included in the attention information are the player's facial expression, the shape of the player's mouth, and the like.
 In step S108, the behavior prediction unit 108 predicts the player's behavior based on the identification information acquired by the identification information acquisition unit 106 and the attention information acquired by the attention information acquisition unit 107. The prediction result obtained by this processing may also be called behavior prediction information.
 An example of the behavior prediction algorithm performed by the behavior prediction unit 108 will be described. In general, there is a correlation between attention information such as the line of sight and a player's behavior. In the serve example, the player's line of sight is often directed toward the spot the player intends to serve to. However, to avoid having the intended target read by the opponent, some players intentionally or unconsciously gaze in a direction different from the direction in which they intend to serve. Thus, the correlation between attention information such as the player's line of sight and the player's actual behavior differs from person to person; therefore, prediction accuracy can be improved by performing behavior prediction using a different model for each player.
 Accordingly, in order to perform prediction with a different model for each player, the behavior prediction unit 108 of this embodiment identifies the player using the identification information and then predicts the player's behavior from the attention information using the model for the identified player. This enables prediction that takes into account individual differences in the correlation between attention information and behavior, improving prediction accuracy.
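 Conceptually, the per-player prediction could be organized as a dispatch from the identification information to a player-specific model, as in the following sketch. The class, the stand-in models, and the attention-information format are assumptions, not the embodiment's actual implementation.

```python
class BehaviorPredictor:
    """Keeps one prediction model per player and dispatches on the player ID.
    The per-player models are assumed to have been prepared beforehand,
    e.g. by the machine learning described in the text."""

    def __init__(self, models_by_player, fallback_model=None):
        self.models_by_player = models_by_player
        self.fallback_model = fallback_model

    def predict(self, player_id, attention_info):
        # Select the model that reflects this player's individual tendencies;
        # fall back to a generic model if the player is unknown.
        model = self.models_by_player.get(player_id, self.fallback_model)
        if model is None:
            return None
        return model(attention_info)

# Example with trivial stand-in "models":
predictor = BehaviorPredictor(
    models_by_player={
        # Player P001 tends to serve where they look.
        "P001": lambda attention: attention["gaze_direction"],
        # Player P002 tends to serve opposite to the gaze direction.
        "P002": lambda attention: "left" if attention["gaze_direction"] == "right" else "right",
    },
    fallback_model=lambda attention: attention["gaze_direction"],
)
print(predictor.predict("P002", {"gaze_direction": "right"}))  # -> "left"
```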
 The individual differences in the correlation between attention information and behavior can be obtained by analyzing each player's past match information. Specifically, a learning model for behavior prediction that differs for each player can be obtained by machine learning that takes as input learning data in which the relationship between attention information and behavior is compiled into a database for each player. By incorporating this learning model into the behavior prediction unit 108, behavior prediction that takes into account the fact that behavioral tendencies differ from player to player is realized. Generating the learning model by machine learning reduces the effort compared with building a model manually, so a highly accurate learning model using a large amount of data can be constructed easily.
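 One way such per-player models could be trained is sketched below, assuming scikit-learn is available. The record format, the feature encoding, and the choice of logistic regression are all assumptions for illustration; the embodiment does not specify a particular learning algorithm.

```python
from collections import defaultdict
from sklearn.linear_model import LogisticRegression

def train_per_player_models(records):
    """Train one simple classifier per player from past match records.

    records: iterable of (player_id, feature_vector, observed_action) tuples,
    where the features are assumed to be derived from attention information
    and the label is the action actually taken (e.g. the serve course)."""
    grouped = defaultdict(lambda: ([], []))
    for player_id, features, action in records:
        grouped[player_id][0].append(features)
        grouped[player_id][1].append(action)
    models = {}
    for player_id, (X, y) in grouped.items():
        # One model per player captures that player's individual tendency.
        models[player_id] = LogisticRegression(max_iter=1000).fit(X, y)
    return models

# Example: toy records for one player with two observed serve courses.
toy = [("P001", [1.0, 0.0], "left"), ("P001", [-1.0, 1.0], "right"),
       ("P001", [0.9, 0.1], "left"), ("P001", [-0.8, 0.9], "right")]
models = train_per_player_models(toy)
print(models["P001"].predict([[1.0, 0.0]]))  # likely -> ["left"]
```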
 The model that can be used in the behavior prediction unit 108 is not limited to one obtained by machine learning; other models may also be used. For example, a data analyst may classify the gaze directions and other elements included in the attention information and manually construct a behavior prediction model by deriving, for each class, the most likely behavior from past data. Building the model manually makes it possible to construct a highly interpretable model. In this case, the processing performed by the behavior prediction unit 108 may be to refer to a database such as a table in which attention information and the player's behavior are associated with each other.
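 The manually built variant could be as simple as a lookup table keyed by player ID and a discretized gaze direction, as sketched below; the table entries are invented placeholders standing in for values an analyst would derive from past match data.

```python
# Hypothetical hand-built table: (player_id, gaze direction bin) -> most likely serve course.
PREDICTION_TABLE = {
    ("P001", "left"):  "serve to left corner",
    ("P001", "right"): "serve to right corner",
    ("P002", "left"):  "serve to right corner",  # this player tends to look away from the target
    ("P002", "right"): "serve to left corner",
}

def predict_from_table(player_id, gaze_bin, default="unknown"):
    """Look up the most likely action for this player and gaze direction."""
    return PREDICTION_TABLE.get((player_id, gaze_bin), default)

print(predict_from_table("P002", "right"))  # -> "serve to left corner"
```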
 In step S109, the image generation unit 109 generates a predicted image including the behavior prediction information predicted by the behavior prediction unit 108. This predicted image is provided to viewers, for example, by being incorporated into a broadcast image of the volleyball game. FIG. 7 is a diagram showing a display example of the predicted image. In the example shown in FIG. 7, a predicted image 305 is generated by superimposing behavior prediction information 304 on a frame image of the volleyball game. The broken line of the behavior prediction information 304 indicates the predicted trajectory of the ball the server will hit. By looking at the predicted image 305, the viewer can visually grasp that the server is aiming at the lower left corner of the court.
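 A possible way to superimpose the behavior prediction information on a frame, assuming OpenCV is available, is sketched below; the trajectory endpoints, colors, and label are placeholders, since the embodiment does not specify how the overlay is drawn.

```python
import cv2
import numpy as np

def draw_prediction(frame, start_xy, end_xy, label="predicted serve course"):
    """Superimpose a predicted ball trajectory on a frame as an arrow plus label.
    The start/end coordinates would come from the behavior prediction result;
    here they are placeholders."""
    out = frame.copy()
    cv2.arrowedLine(out, start_xy, end_xy, color=(0, 0, 255), thickness=3, tipLength=0.05)
    cv2.putText(out, label, (start_xy[0], start_xy[1] - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
    return out

# Example with a dummy black frame standing in for broadcast video.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
overlay = draw_prediction(frame, (900, 300), (200, 650))
```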
 The display method of the behavior prediction information 304 need not be a figure such as the arrow shown in FIG. 7; it may be, for example, a display using a change in color, a textual display, or the like. Moreover, it is not essential to superimpose the behavior prediction information 304 on the image of the game; for example, the predicted image 305 may be a schematic diagram in which the serve course is displayed on a drawing of a volleyball court.
 The result of the behavior prediction may also be used for purposes other than being incorporated into a broadcast image and provided to viewers; for example, it may be provided to persons involved with a sports team, such as the coach, for strategy planning. In this case, the behavior prediction result can be transmitted, for example, from a terminal of the information processing system 100 to a tablet terminal held by the coach. The coach can formulate a strategy using the behavior prediction result and send instructions to the players.
 As described above, in this embodiment, a person is identified by image recognition, and behavior prediction is performed based on the identification information of the identified person and the attention information of that person. This provides an information processing system 100 capable of behavior prediction that takes into account the fact that behavioral tendencies differ from person to person.
 Note that the behavior prediction performed by the behavior prediction unit 108 in the embodiment described above is based on uncertain information such as the player's line of sight, and does not necessarily predict the player's behavior with certainty. For example, when a player deliberately behaves differently from his or her usual tendency, the behavior prediction may turn out to be wrong. However, viewers look at the behavior prediction results for purposes such as enjoying the game while guessing whether the prediction will match the actual play; they do not demand that the prediction always match the actual behavior. Therefore, in the information processing system 100 of this embodiment, high accuracy of the behavior prediction is not essential.
 The system described in the above embodiment can also be configured as in the following second embodiment.
 [Second Embodiment]
 FIG. 8 is a functional block diagram of an information processing system 200 according to the second embodiment. The information processing system 200 includes an identification information acquisition unit 201, an attention information acquisition unit 202, and a behavior prediction unit 203. The identification information acquisition unit 201 acquires identification information of a person identified by image recognition from an image. The attention information acquisition unit 202 acquires attention information indicating the attention state of the person from a face image of the person. The behavior prediction unit 203 predicts the behavior of the person based on the identification information and the attention information.
 According to this embodiment, an information processing system 200 capable of behavior prediction that takes into account the fact that behavioral tendencies differ from person to person is provided.
 [Modified Embodiments]
 The present invention is not limited to the above-described embodiments and can be modified as appropriate without departing from the spirit of the present invention.
 The above embodiment shows an example in which the direction of a serve is predicted from the server's line of sight in a volleyball game, but the behavior prediction of this embodiment is also applicable to other scenes. For example, the direction in which the setter will toss, the type of toss, the position where the attack will be hit, and the like may be predicted based on the direction of the line of sight of the setter or the attacker.
 The behavior prediction of this embodiment is also applicable to sports other than volleyball. For example, this embodiment can be applied to a free kick or penalty kick scene in soccer. In this case, the direction in which the ball will be kicked, the direction in which the goalkeeper will dive, and the like can be predicted from the direction of the kicker's line of sight or the direction of the goalkeeper's line of sight.
 This embodiment can also be applied to baseball. In this case, the type, course, speed, and the like of the ball the pitcher will throw can be predicted from the direction of the pitcher's line of sight or the direction of the catcher's line of sight. Similarly, the direction in which the batter intends to hit can be predicted from the direction of the batter's line of sight.
 This embodiment can also be applied to scenes other than sports. For example, the behavior prediction of this embodiment may be applied to a person appearing in an image captured by a security camera.
 In the above embodiment, the behavior prediction unit 108 uses identification information specifying a person and attention information as input information for behavior prediction, but behavior prediction may also be performed using further information in addition to these. Examples of such information are listed below.
 For example, the history of past actions in the current game may be used for behavior prediction. In the volleyball serve example, a server may decide where to serve in consideration of the directions of past serves and whether they succeeded. A server who has served to the left corner and succeeded repeatedly (scored points) tends to hit the next serve to the left corner as well. A server who has kept serving to the left corner but failed on the previous serve (lost the point) tends to change direction and serve to the right corner next. Alternatively, a server whose gaze direction has repeatedly differed from the serve direction, and who has been successful with this, tends to serve in a direction that again does not match the gaze direction on the next serve. On the other hand, a server whose gaze direction did not match the serve direction but who failed on the previous serve tends to change approach on the next serve and make the gaze direction match the serve direction. Since the history of past actions in the game can thus influence the next action, using the history of past actions for behavior prediction can further improve prediction accuracy. Note that these tendencies are examples, and there are also players with other tendencies. Therefore, in behavior prediction, the way the past history is taken into account may be varied from player to player.
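 One way to feed such history into a per-player model is to append the recent serve outcomes to the attention-derived features, as in the following sketch; the encoding and the two-serve window are assumptions chosen only for illustration.

```python
def build_feature_vector(gaze_direction, serve_history):
    """Combine the current attention information with a short history of the
    player's serves in this match. Encodings are illustrative assumptions.

    serve_history: list of (direction, succeeded) tuples, most recent last.
    """
    direction_code = {"left": -1.0, "center": 0.0, "right": 1.0}
    features = [direction_code.get(gaze_direction, 0.0)]
    # Append the last two serves (direction and success), padding if needed.
    recent = serve_history[-2:]
    while len(recent) < 2:
        recent.insert(0, ("center", False))
    for direction, succeeded in recent:
        features.append(direction_code.get(direction, 0.0))
        features.append(1.0 if succeeded else 0.0)
    return features

# Example: player looked right; the last serve went left and scored.
print(build_feature_vector("right", [("left", True)]))
# -> [1.0, 0.0, 0.0, -1.0, 1.0]
```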
 A combination of players involved in coordinated play may also be taken into account in behavior prediction. When there is coordination between players, as with a toss and an attack in volleyball, behavior may show tendencies that depend on the combination of players. For example, one combination of setter and attacker may often aim for the right corner, while another combination may often aim for the left corner. Likewise, for one combination of setter and attacker the gaze direction and the attack direction may often match, while for another combination they may often differ. When such tendencies dependent on the combination of players exist, it is desirable to perform behavior prediction that takes these tendencies into account. Therefore, using the combination of players involved in coordinated play for behavior prediction can further improve prediction accuracy.
 The result of facial expression analysis based on the movement of the corners of the mouth or the like may also be used for behavior prediction. When a player's expression is calm, the player often has the composure to take strategic actions such as serving in a direction different from the gaze direction. However, when the expression is not calm, for example because fatigue has accumulated, the player often lacks the composure for such strategic actions. For this reason, facial expression and behavior can be correlated: a player with a calm expression often aims in a direction different from the gaze direction, whereas a player showing a strained expression around the mouth often aims in the same direction as the gaze. Therefore, using the result of facial expression analysis for behavior prediction can further improve prediction accuracy.
 Information about the game situation may also be used for behavior prediction. In the volleyball example, the game situation includes the scores of both teams, the situation of players other than the player whose behavior is being predicted, such as the positions of both teams' setters or ace attackers, and the positional relationships between players. For example, a known strategy is to aim the serve at the ace attacker so as to make it harder for the ace attacker to attack. Therefore, performing behavior prediction that takes the position of the ace attacker into account can improve the accuracy of predicting the serve direction. In addition, the score and the behavior may be correlated: when the score is unfavorable to the player's own team, the player has less mental leeway, so the gaze direction and the behavior tend to match more easily. In other words, when the player's own team is leading, the player often aims in a direction different from the gaze, and when the team is losing, the player often aims in the same direction as the gaze. Therefore, prediction accuracy can be improved by performing behavior prediction that takes the game situation, such as the score, into account.
 Past game information about players of the opposing team of the identified person may also be used for behavior prediction. If the opposing team has a player who is skilled at feints, the player's own team's feints are more likely to be read, so a possible strategy is to deliberately not feint and to make the gaze direction match the direction in which the ball is hit. In other words, a player may decide the strategy in consideration of past game information about players of the opposing team: aiming in the same direction as the gaze when the opposing team has a player skilled at feints, and aiming in a direction different from the gaze when it does not. Therefore, using past game information about players of the opposing team for behavior prediction can further improve prediction accuracy.
 As described above, the behavior prediction unit 108 can improve the accuracy of behavior prediction by performing behavior prediction using, as input information, not only identification information specifying the person and attention information but also further information such as the above. Some of the above examples may also be combined and used for behavior prediction.
 The above embodiment shows an example in which a person is identified by face matching, but other methods may be adopted as long as the person can be identified by image recognition. For example, the person may be identified by acquiring, through character recognition, the uniform number, name, or the like printed on the player's uniform.
 The installation position of the imaging device (camera) that acquires images for face detection, gaze detection, and the like is not particularly limited, but it is preferably close to the players in order to increase the accuracy of face detection, gaze detection, and the like. For example, in volleyball, it may be desirable to use images captured by a camera installed at the net rather than a camera outside the court.
 A processing method in which a program that operates the configuration of an embodiment so as to realize the functions of that embodiment is recorded on a storage medium, and the program recorded on the storage medium is read out as code and executed on a computer, is also included in the scope of each embodiment. That is, a computer-readable storage medium is also included in the scope of each embodiment. In addition to the storage medium on which the above program is recorded, the program itself is also included in each embodiment. Furthermore, one or more of the components included in the above embodiments may be a circuit, such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array), configured to realize the function of each component.
 As the storage medium, for example, a floppy (registered trademark) disk, a hard disk, an optical disk, a magneto-optical disk, a CD (Compact Disk)-ROM, a magnetic tape, a nonvolatile memory card, or a ROM can be used. The scope of each embodiment is not limited to cases in which the processing is executed by the program recorded on the storage medium alone; cases in which the processing is executed by the program operating on an OS (Operating System) in cooperation with other software or with the functions of an expansion board are also included.
 The services realized by the functions of the above embodiments can also be provided to users in the form of SaaS (Software as a Service).
 The above embodiments are merely concrete examples for carrying out the present invention, and the technical scope of the present invention should not be interpreted restrictively because of them. That is, the present invention can be carried out in various forms without departing from its technical idea or its main features.
 Some or all of the above embodiments can also be described as in the following supplementary notes, but are not limited to the following.
 (Appendix 1)
 An information processing system comprising:
 an identification information acquisition unit that acquires identification information of a person identified by image recognition from an image;
 an attention information acquisition unit that acquires, from a face image of the person, attention information indicating an attention state of the person; and
 a behavior prediction unit that predicts behavior of the person based on the identification information and the attention information.
 (Appendix 2)
 The information processing system according to Appendix 1, wherein the attention information includes information extracted from features of the face or eyes of the person.
 (Appendix 3)
 The information processing system according to Appendix 1 or 2, wherein the attention information includes at least one of a direction of the person's line of sight and an orientation of the person's face.
 (Appendix 4)
 The information processing system according to any one of Appendices 1 to 3, wherein the image recognition includes at least one of face matching and character recognition.
 (Appendix 5)
 The information processing system according to any one of Appendices 1 to 4, wherein the behavior prediction unit predicts the behavior of the person using a learning model generated by machine learning that takes as input learning data including the attention information and the behavior of the person.
 (Appendix 6)
 The information processing system according to any one of Appendices 1 to 4, wherein the behavior prediction unit predicts the behavior of the person using a database in which the attention information and the behavior of the person are associated with each other.
 (Appendix 7)
 The information processing system according to any one of Appendices 1 to 6, further comprising an image generation unit that generates a predicted image including behavior prediction information that includes the behavior of the person predicted by the behavior prediction unit.
 (Appendix 8)
 The information processing system according to Appendix 7, wherein the image generation unit generates the predicted image by superimposing the behavior prediction information on the image.
 (Appendix 9)
 The information processing system according to any one of Appendices 1 to 8, wherein the image is a video of a sports game, and the person is a player of the sport.
 (Appendix 10)
 The information processing system according to Appendix 9, wherein the behavior prediction unit further predicts the behavior of the person based on a situation of the game.
 (Appendix 11)
 The information processing system according to Appendix 9 or 10, wherein the behavior prediction unit further predicts the behavior of the person based on a situation of a player other than the person.
 (Appendix 12)
 The information processing system according to any one of Appendices 9 to 11, wherein the behavior prediction unit further predicts the behavior of the person based on a history of the behavior of the person in the game.
 (Appendix 13)
 The information processing system according to any one of Appendices 9 to 12, wherein the behavior prediction unit further predicts the behavior of the person based on a combination of the person and a player who cooperates with the person.
 (Appendix 14)
 The information processing system according to any one of Appendices 9 to 13, wherein the behavior prediction unit further predicts the behavior of the person based on a facial expression of the person.
 (Appendix 15)
 The information processing system according to any one of Appendices 9 to 14, wherein the behavior prediction unit further predicts the behavior of the person based on a score of the game.
 (Appendix 16)
 The information processing system according to any one of Appendices 9 to 15, wherein the behavior prediction unit further predicts the behavior of the person based on past game information of players of the opposing team of the person.
 (Appendix 17)
 An information processing method comprising:
 acquiring identification information of a person identified by image recognition from an image;
 acquiring, from a face image of the person, attention information indicating an attention state of the person; and
 predicting behavior of the person based on the identification information and the attention information.
 (Appendix 18)
 A storage medium storing a program for causing a computer to execute an information processing method comprising:
 acquiring identification information of a person identified by image recognition from an image;
 acquiring, from a face image of the person, attention information indicating an attention state of the person; and
 predicting behavior of the person based on the identification information and the attention information.
 This application claims priority based on Japanese Patent Application No. 2018-108332 filed on June 6, 2018, the entire disclosure of which is incorporated herein.
100, 200 Information processing system
101 Image acquisition unit
102 Face detection unit
103 Feature amount extraction unit
104 Collation unit
105 State determination unit
106, 201 Identification information acquisition unit
107, 202 Attention information acquisition unit
108, 203 Behavior prediction unit
109 Image generation unit
110 Storage unit
151 CPU
152 RAM
153 ROM
154 HDD
155 Communication I/F
156 Display device
157 Input device
158 Bus
301 Frame image data
302 Player
303 Line of sight
304 Behavior prediction information
305 Predicted image
R1 Face region

Claims (18)

  1.  An information processing system comprising:
      an identification information acquisition unit that acquires identification information of a person identified by image recognition from an image;
      an attention information acquisition unit that acquires, from a face image of the person, attention information indicating an attention state of the person; and
      a behavior prediction unit that predicts behavior of the person based on the identification information and the attention information.
  2.  The information processing system according to claim 1, wherein the attention information includes information extracted from features of the face or eyes of the person.
  3.  The information processing system according to claim 1 or 2, wherein the attention information includes at least one of a direction of the person's line of sight and an orientation of the person's face.
  4.  The information processing system according to any one of claims 1 to 3, wherein the image recognition includes at least one of face matching and character recognition.
  5.  The information processing system according to any one of claims 1 to 4, wherein the behavior prediction unit predicts the behavior of the person using a learning model generated by machine learning that takes as input learning data including the attention information and the behavior of the person.
  6.  The information processing system according to any one of claims 1 to 4, wherein the behavior prediction unit predicts the behavior of the person using a database in which the attention information and the behavior of the person are associated with each other.
  7.  The information processing system according to any one of claims 1 to 6, further comprising an image generation unit that generates a predicted image including behavior prediction information that includes the behavior of the person predicted by the behavior prediction unit.
  8.  The information processing system according to claim 7, wherein the image generation unit generates the predicted image by superimposing the behavior prediction information on the image.
  9.  The information processing system according to any one of claims 1 to 8, wherein the image is a video of a sports game, and the person is a player of the sport.
  10.  The information processing system according to claim 9, wherein the behavior prediction unit further predicts the behavior of the person based on a situation of the game.
  11.  The information processing system according to claim 9 or 10, wherein the behavior prediction unit further predicts the behavior of the person based on a situation of a player other than the person.
  12.  The information processing system according to any one of claims 9 to 11, wherein the behavior prediction unit further predicts the behavior of the person based on a history of the behavior of the person in the game.
  13.  The information processing system according to any one of claims 9 to 12, wherein the behavior prediction unit further predicts the behavior of the person based on a combination of the person and a player who cooperates with the person.
  14.  The information processing system according to any one of claims 9 to 13, wherein the behavior prediction unit further predicts the behavior of the person based on a facial expression of the person.
  15.  The information processing system according to any one of claims 9 to 14, wherein the behavior prediction unit further predicts the behavior of the person based on a score of the game.
  16.  The information processing system according to any one of claims 9 to 15, wherein the behavior prediction unit further predicts the behavior of the person based on past game information of players of the opposing team of the person.
  17.  An information processing method comprising:
      acquiring identification information of a person identified by image recognition from an image;
      acquiring, from a face image of the person, attention information indicating an attention state of the person; and
      predicting behavior of the person based on the identification information and the attention information.
  18.  A storage medium storing a program for causing a computer to execute an information processing method comprising:
      acquiring identification information of a person identified by image recognition from an image;
      acquiring, from a face image of the person, attention information indicating an attention state of the person; and
      predicting behavior of the person based on the identification information and the attention information.
PCT/JP2019/021505 2018-06-06 2019-05-30 Information processing system, information processing method, and storage medium WO2019235350A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020523665A JP6939999B2 (en) 2018-06-06 2019-05-30 Information processing system, information processing method and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-108332 2018-06-06
JP2018108332 2018-06-06

Publications (1)

Publication Number Publication Date
WO2019235350A1 true WO2019235350A1 (en) 2019-12-12

Family

ID=68770348

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/021505 WO2019235350A1 (en) 2018-06-06 2019-05-30 Information processing system, information processing method, and storage medium

Country Status (2)

Country Link
JP (1) JP6939999B2 (en)
WO (1) WO2019235350A1 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005202653A (en) * 2004-01-15 2005-07-28 Canon Inc Behavior recognition device and method, animal object recognition device and method, equipment control device and method, and program
JP2008126818A (en) * 2006-11-20 2008-06-05 Denso Corp User hospitality system for automobile
US20150332450A1 (en) * 2007-05-24 2015-11-19 Pillar Vision, Inc. Stereoscopic Image Capture with Performance Outcome Prediction in Sporting Environments
JP2009009413A (en) * 2007-06-28 2009-01-15 Sanyo Electric Co Ltd Operation detector and operation detection program, and operation basic model generator and operation basic model generation program
JP2010123019A (en) * 2008-11-21 2010-06-03 Fujitsu Ltd Device and method for recognizing motion

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022520498A (en) * 2019-12-30 2022-03-30 シャンハイ センスタイム リンガン インテリジェント テクノロジー カンパニー リミテッド Image processing methods, devices, storage media and electronic devices
JP7105383B2 (en) 2019-12-30 2022-07-22 シャンハイ センスタイム リンガン インテリジェント テクノロジー カンパニー リミテッド Image processing method, device, storage medium and electronic equipment

Also Published As

Publication number Publication date
JPWO2019235350A1 (en) 2021-02-12
JP6939999B2 (en) 2021-09-22

Similar Documents

Publication Publication Date Title
US11715303B2 (en) Dynamically predicting shot type using a personalized deep neural network
US11040287B2 (en) Experience-oriented virtual baseball game apparatus and virtual baseball game control method using the same
CN107530585B (en) Method, apparatus and server for determining cheating in dart game
US11458405B1 (en) Systems and methods for cheat detection in electronic games
US10709972B2 (en) Sports-based card game systems and methods
US10245505B2 (en) Generating custom recordings of skeletal animations
US20230330485A1 (en) Personalizing Prediction of Performance using Data and Body-Pose for Analysis of Sporting Performance
JP2023533078A (en) Automatic harassment monitoring system
JP6939999B2 (en) Information processing system, information processing method and storage medium
JP2020108795A (en) Game system, game terminal, and program
DK180109B1 (en) Method and device for user interaction with a video stream
WO2020039473A1 (en) Image management system, image management method, program, and image management device
US11235244B2 (en) Gaming system, gaming method, server device, terminal device, and program
US20230112232A1 (en) System for wagering on event outcomes based on two timings during an event
KR102120711B1 (en) A system for management and assistance of billiard game
US20220072430A1 (en) System and method for fraud prevention in esports
US20240087072A1 (en) Live event information display method, system, and apparatus
JP6312269B2 (en) GAME CONTROL DEVICE, GAME SYSTEM, AND PROGRAM
US20230302357A1 (en) Systems and methods for analyzing video data of predictive movements
JP6940168B2 (en) Game system, game control device, program, and game control method
US20220148366A1 (en) On deck wagering
WO2023106201A1 (en) Play analysis device, play analysis method, and computer-readable storage medium
JP7105606B2 (en) Information processing device and program
US20230053181A1 (en) Motion learning system
JP2018068926A (en) Game control device, game system, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19816075

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020523665

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19816075

Country of ref document: EP

Kind code of ref document: A1