WO2009122510A1 - Computer graphics image display system and method - Google Patents

Computer graphics image display system and method

Info

Publication number
WO2009122510A1
Authority
WO
WIPO (PCT)
Prior art keywords
computer graphics
frame
video
player
coordinates
Prior art date
Application number
PCT/JP2008/056366
Other languages
English (en)
Japanese (ja)
Inventor
カールハインツ フーゲル
Original Assignee
堺市
アンリミテッド ゲーエムベーハー
Priority date
Filing date
Publication date
Application filed by 堺市, アンリミテッド ゲーエムベーハー
Publication of WO2009122510A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics

Definitions

  • the present invention relates to a computer graphics image display system and method for producing a three-dimensional computer graphics (hereinafter abbreviated as “CG”) moving image that can be reproduced from a free viewpoint using a video of a ball sport such as soccer as a material.
  • the depth information of the target object viewed from the observer's position is acquired and reflected in the shadow expression on the target object.
  • multiple viewpoint cameras are set at the observer's position, shadow information is obtained from the target object image captured by each camera, and light source position information is obtained from the plurality of shadow information.
  • the intensity information of the light source is calculated and stored in the storage device, the surface image information obtained by viewing the target object from all directions is acquired from the target object video captured by each camera, and, based on the installation position information of each camera, the three-dimensional coordinates of the acquired entire surface image information are calculated and stored in the storage device.
  • a shadow representation of the object surface is calculated and displayed based on the light source information and the three-dimensional coordinate information of the entire surface.
  • the device worn on the player's body is required to be durable enough that it does not break or fall off during the game.
  • the game organizer and the players must approve the wearing of the device, and various requirements must be satisfied, such as not obstructing the movement of the players and not obstructing the progress of the game in any way.
  • Non-Patent Document 1 discloses a technique for recognizing two-dimensional coordinates on the field from the image of a television camera fixed in one place, as used in televised sports broadcasts, without using any coordinate-grasping device or system.
  • in the following, the coordinates of the trapezoidal sports field as it appears on the screen are referred to as "visual coordinates", and the coordinates of the actual rectangular field are referred to as "absolute coordinates".
  • while the "forced method" can obtain an accurate conversion result, it requires a great deal of calculation time; the "quick method" can obtain a result quickly, but when, for example, only a part of the court is captured, the material information contains errors and a correct result may not be obtained.
  • such processing is possible only when the focus of the camera does not move, that is, when the shape of the visual coordinates does not change, as in the case of extracting a single video frame of video material in which the entire surface of the court is captured; when the TV camera follows the player or the ball, as seen in actual TV broadcast video, and the lines move outside the screen, the so-called "frame-out" state, a correct conversion processing result cannot be obtained.
  • a player animation assignment operation is performed.
  • Non-Patent Document 2 discloses a technique in which a model is created by extracting only the two-dimensional video information of the model from the video frame, without attaching points to the model.
  • in the technique of Non-Patent Document 2, first, in order to create video that captures the court from multiple viewpoints, eight cameras are arranged uniformly so as to surround the court, and the height of each camera is set at an angle of about 45 degrees with respect to the horizontal surface of the court to avoid players overlapping in the captured video. Next, with each camera, a court image with no players on the court is acquired in advance as a background image and stored in a storage device. Each camera then shoots a moving image of an actual game, difference information from the background image is extracted in each video frame, and the player or ball video information is extracted as texture information.
  • a rectangular polygon is placed perpendicular to the viewpoint direction at the place where the player or ball exists on the background information, and this polygon is subjected to mapping processing using the texture extracted in the above processing. If the viewpoint direction is set at an intermediate point between the eight shooting cameras described above, the rectangular polygon is approximated by a two-dimensional image rotated so as to be perpendicular to the viewpoint direction.
  • an object of the present invention is to provide a computer graphics image display system and method that can accurately set the correspondence between visual coordinates and absolute coordinates using, as material, a moving image from a camera installed in one place, and that can produce a smooth animation using a high-quality three-dimensional human body model.
  • a computer graphics image display system generates and displays a three-dimensional computer graphics image based on an actual image of a player shown in a video material.
  • the system comprises wire frame generating means for generating a wire frame serving as a reference for absolute coordinates, in which a plurality of coordinate setting points are set on line segments that characterize the location of the video material; input means for designating, on the screen, the image portions corresponding to at least four of the plurality of coordinate setting points set on the wire frame and inputting the visual coordinates of those coordinate setting points; and calculation means for calculating conversion parameters based on the correspondence between the input visual coordinates and the absolute coordinates of the at least four coordinate setting points.
  • for video frames that have not been set manually, a conversion parameter obtained by linearly interpolating the conversion parameters obtained for the preceding and subsequent video frames is recorded.
  • the visual coordinates of the coordinate setting points are input by displaying the wire frame from the wire frame generation unit so that it overlaps the real video screen and superimposing the coordinate setting points on the wire frame onto the corresponding parts of the real video screen.
  • the computer graphics image display system is a computer graphics image display system that generates and displays a three-dimensional computer graphics image based on an actual image of a player shown in a video material.
  • a figure record table that pre-stores computer graphics data of a three-dimensional human body model that expresses a unique finite number of motions, and means for designating a start frame and an end frame for a series of video frames that set a data record;
  • control means for allocating, from the figure recording table, computer graphics data of a three-dimensional human body model that expresses an optimum action in accordance with the movement of the player object from the start point frame to the end point frame.
  • the control means determines the computer graphics data of a three-dimensional human body model expressing an optimal action based on the moving distance of the player object from the start point frame to the end point frame and the number of video frames from the start point frame to the end point frame.
  • control means manages a series of figure data indicating a motion change from the start to the end of the individual motion as a data group expressing the individual motion of the player.
  • control means manages a continuum of a series of data groups indicating a motion change from the start to the end of the continuous motion as a sequence expressing the continuous motion of the player.
  • the computer graphics image display method, in a computer graphics image display system that generates and displays a three-dimensional computer graphics image based on an actual image of a player shown in a video material, comprises a step of generating a wire frame serving as a reference for absolute coordinates by setting a plurality of coordinate setting points on line segments that characterize the location of the video material; a step of designating, on the screen, the image portions corresponding to at least four of the plurality of coordinate setting points set on the wire frame and inputting the visual coordinates of those coordinate setting points; a step of calculating conversion parameters based on the correspondence between the input visual coordinates and the absolute coordinates of the at least four coordinate setting points and recording the parameters in correspondence with each video frame; and a step of recording the correspondence between the visual coordinates and the absolute coordinates for the plurality of coordinate setting points based on the calculated conversion parameters.
  • the computer graphics image display system of the present invention generates and displays a three-dimensional computer graphics image based on an actual image of a player shown in a video material. Because it comprises wire frame generating means for generating a wire frame serving as a reference for absolute coordinates, in which a plurality of coordinate setting points are set on line segments characterizing the location of the video material; means for designating, on the screen, the image portions corresponding to at least four of the coordinate setting points set on the wire frame and inputting their visual coordinates; calculation means for calculating conversion parameters between the visual coordinates and the absolute coordinates; a conversion parameter table in which the conversion parameters are recorded in correspondence with each video frame; and a coordinate setting point recording table describing the correspondence between visual coordinates and absolute coordinates for the plurality of coordinate setting points based on the obtained conversion parameters, the processing is simple.
  • the visual coordinates of any point on the screen can be converted into absolute coordinates and captured, and a graphics image specified on the absolute coordinates can be converted into arbitrary visual coordinates and displayed on the screen.
  • for video frames that have not been set manually, a conversion parameter obtained by linear interpolation of the conversion parameters obtained for the preceding and following video frames is recorded, so conversion parameters representing natural motion can be generated.
  • the wire frame from the wire frame generation unit is displayed so as to be superimposed on the real video screen, and the coordinate setting point on the wire frame is superimposed on the corresponding part of the real video screen.
  • because the system is provided with a figure recording table that pre-stores computer graphics data of a three-dimensional human body model expressing a finite number of motions unique to the players of each sport, means for designating a start frame and an end frame in a series of video frames to set a data record, and control means for allocating, from the figure recording table, the computer graphics data of the three-dimensional human body model expressing the optimal motion in accordance with the movement of the player object from the start frame to the end frame, computer graphics of a three-dimensional human body model with natural movement can be displayed with simple processing.
  • because the computer graphics data of the three-dimensional human body model expressing the optimal motion is determined based on the moving distance of the player object from the start point frame to the end point frame and the number of video frames from the start point frame to the end point frame, computer graphics of a three-dimensional human body model with natural movement can be displayed.
  • because a series of figure data indicating the change in motion from the start to the end of an individual motion is managed as a data group representing an individual motion of the player, the motion can be displayed smoothly.
  • the computer graphics image display method of the present invention comprises a step of generating a wire frame serving as a reference for absolute coordinates by setting a plurality of coordinate setting points on line segments characterizing the location of the video material; a step of designating, on the screen, the image portions corresponding to at least four of the plurality of coordinate setting points set on the wire frame and inputting the visual coordinates of those coordinate setting points; a step of calculating conversion parameters between the visual coordinates and the absolute coordinates based on the correspondence between the input visual coordinates and the absolute coordinates of the at least four coordinate setting points and recording the conversion parameters in correspondence with each video frame; and a step of recording the correspondence between the visual coordinates and the absolute coordinates for the plurality of coordinate setting points based on the conversion parameters, so that the processing is simple.
  • because a figure recording table pre-storing computer graphics data of a three-dimensional human body model expressing a finite number of motions unique to the players of each sport is provided, and the computer graphics data of the three-dimensional human body model expressing the optimal motion is allocated from the figure recording table according to the movement of the player object, computer graphics of a three-dimensional human body model with natural movement can be displayed with simple processing.
  • FIG. 1 shows the configuration of a computer graphics image display system according to an embodiment of the present invention.
  • the computer graphics image display system includes an arithmetic device operating device 1, an arithmetic device 2, a video data playback device 4, a video data input interface 5, a video data memory 6, a display data processing device 7, a VRAM (Video Random Access Memory) 7, a work display 9, a wire frame generation unit 10, a conversion parameter table 11, a coordinate setting point recording table 12, a figure recording table 14, and an operation determination table 15.
  • the computing device operating device 1 is an input device for an operator to perform various operations on the computing device 2.
  • the arithmetic device operating device 1 is used to select and operate the various processes of the arithmetic device 2, to manipulate the placement of the coordinate setting points displayed as a wire frame on the work display 9 with respect to the corresponding points of the visual coordinates displayed as the video screen, and to input the visual coordinate position of a player object on the video screen so that it can be captured as a parameter on the absolute coordinates; a device with good operability for inputting coordinates, such as a mouse, tablet, or touch panel, is therefore desirable.
  • the computing device 2 performs processing such as various video information processing and numerical calculation.
  • the arithmetic device 2 displays a wire frame for the video frame that is the target of the visual coordinate setting work, acquires the visual coordinates on the screen corresponding to the coordinate setting points of the wire frame, and calculates conversion parameters from the correspondence between the visual coordinates and the absolute coordinates of the coordinate setting points.
  • at the time of CG creation, the arithmetic device 2 selects an optimum sequence from the figure recording table 14 based on the selected player's position, moving direction, and the number of recorded video frames, and assigns the figure data constituting the player's motion animation based on that sequence to the player position in each video frame.
  • the arithmetic device 2 further performs various calculations and processes.
  • the video data reproduction device 4 is a device that reproduces the actual video data used as material. More specifically, the video data playback device 4 may be a video camera, a VCR (Video Cassette Recorder), a DVD (Digital Versatile Disc) player, or an HDD (Hard Disk Drive) video player.
  • One feature of the embodiment of the present invention is that it does not depend on the format or quality of the actual video data as the material. Any video can be used as a material, such as a video of a soccer game broadcast on television, a video of a soccer game taken by an amateur from the audience, or a video of a soccer game aired several years ago.
  • the video data input interface 5 is an interface for capturing video data from the video data playback device 4 into the system.
  • the video data memory 6 is a memory for storing video data captured from the video data playback device 4. It is desirable that the video data memory 6 has a sufficient capacity to store video images of a large number of video frames.
  • the display data processing device 7 performs a process of collecting the work results of the operator and the processing results of the arithmetic device and sending them to the VRAM 7.
  • the VRAM 7 stores the processing result of the display data processing device as image information.
  • the work display 9 is a display for displaying the contents of the VRAM 7.
  • an LCD (Liquid Crystal Display) display or a CRT (Cathode-Ray Tube) display is used as the work display 9.
  • the wire frame generation unit 10 generates a wire frame image for setting a plurality of coordinate setting points serving as a reference for absolute coordinates (see FIG. 3).
  • a plurality of coordinate setting points are set on line segments that characterize the location of the video material (the soccer court), such as the center circle, the side lines, the penalty area lines, and the penalty arcs. The correspondence between visual coordinates and absolute coordinates is obtained for these coordinate setting points.
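  • as an illustration of what such a wire frame reference might look like in absolute coordinates, the sketch below lays out a few coordinate setting points on court lines. The patent text gives neither the court dimensions nor the exact placement of P1 to P33; standard pitch dimensions (105 m by 68 m, 9.15 m center circle) are assumed here purely for illustration.

```python
import math

# Assumed court dimensions (not specified in the patent text).
LENGTH, WIDTH = 105.0, 68.0          # side line / goal line lengths in metres
CENTER_CIRCLE_RADIUS = 9.15

def court_reference_points():
    """A few absolute-coordinate setting points on line segments that characterize the court."""
    return {
        "corner_1": (0.0, 0.0),
        "corner_2": (LENGTH, 0.0),
        "corner_3": (LENGTH, WIDTH),
        "corner_4": (0.0, WIDTH),
        "center":   (LENGTH / 2.0, WIDTH / 2.0),
    }

def center_circle_points(n=16):
    """Additional setting points sampled along the center circle."""
    cx, cy = LENGTH / 2.0, WIDTH / 2.0
    return [(cx + CENTER_CIRCLE_RADIUS * math.cos(2.0 * math.pi * k / n),
             cy + CENTER_CIRCLE_RADIUS * math.sin(2.0 * math.pi * k / n))
            for k in range(n)]
```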
  • in the conversion parameter table 11, conversion parameters are recorded for each video frame (see FIG. 5).
  • in the coordinate setting point recording table 12, the relationship between the visual coordinates and the absolute coordinates is recorded for each coordinate setting point (see FIG. 4).
  • the figure recording table 14 is a database for storing figure data.
  • the figure refers to a three-dimensional human body model used on a computer graphics screen.
  • data groups expressing operations such as “walking”, “starting running”, “sprinting all the way”, “stopping suddenly”, and the like are prepared in advance.
  • video frames in which the moving direction and moving speed of the position recording target player change rapidly are selected while frame-by-frame playback of the actual video data is performed, and the foot portion of the player video is marked.
  • the figure recording table 14 records, for each player, the arrangement of the player, the result of the rotation correction of the figure data, a flag identifying manually set video frames, the type of data group, and the figure data record ID (see FIG. 10).
  • the operation determination table 15 is a determination table for determining the operation of the player based on the selected player position and moving direction and the number of recorded video frames at the time of CG creation (see FIG. 12). These tables will be described later.
  • the computer graphics image display system according to the embodiment of the present invention configured as described above can be used, with actual video data as material, for displaying a highlight scene from a changed viewpoint, for tactical analysis, and the like.
  • a plurality of coordinate setting points are set on the line segments characterizing the court, and a wire frame image based on the court lines is generated.
  • the coordinate setting points of the wire frame are dragged to the corresponding positions on the court shown on the screen and superimposed, whereby the visual coordinates of the coordinate setting points are obtained and the conversion parameters are determined.
  • figure data of three-dimensional human body model computer graphics expressing a finite number of movements specific to the players of each sport are created in advance as a database in the figure recording table 14, and an optimum data record is set for the player object displayed in the actual video frame that is the target of CG production.
  • smooth motion animation is realized by handling motion as a series of figure data, and by treating continuous motion as a sequence.
  • a wire frame containing the line segments that characterize the court in absolute coordinates viewed from directly above, such as the center circle, the side lines, the penalty area lines, and the penalty arcs, is displayed so as to overlap the screen showing the actual video image that is the target of CG production, and four coordinate setting points set on the wire frame are selected and dragged to the same points shown on the actual video frame. In this way, visual coordinates that correspond one-to-one with absolute coordinates can be set.
  • FIG. 2 explains the conversion between visual coordinates and absolute coordinates.
  • the court is rectangular when viewed from directly above, but appears trapezoidal from the camera's viewpoint.
  • an image viewed from directly above when the original rectangle is formed is indicated by absolute coordinates, and an image viewed from the viewpoint of the camera is defined as visual coordinates.
  • various kinds of video data are used, such as a video of a soccer game shot by an amateur from the audience seats or a video of a soccer game broadcast several years ago.
  • because the camera pans and tilts, zooms in and out, and scenes change, the four reference points are not always displayed on the screen.
  • for this reason, a wire frame containing line segments that characterize the court, such as the center circle, the side lines, the penalty area lines, and the penalty arcs, as shown in FIG. 3, is used.
  • the soccer court has line segments that characterize the court, such as the center circle, side line, penalty area line, and penalty arc.
  • a plurality of coordinate setting points P1 to P33 are set on these line segments.
  • a coordinate setting point recording table 12 for recording the correspondence between visual coordinates and absolute coordinates is prepared.
  • the absolute coordinates of the coordinate setting points P1 to P33 can be set based on this. If the visual coordinates of at least four of the coordinate setting points P1 to P33 are obtained from the video screen, the correspondence between the visual coordinates and the absolute coordinates can be determined as described above, and based on this the visual coordinates of the other coordinate setting points can be obtained by calculation.
  • the visual coordinates of the coordinate setting points P1 to P33 are input by displaying the wire frame shown in FIG. 3 on the real screen and dragging the points on the wire frame to the corresponding points on the screen. For example, when the coordinate setting point P1 on the wire frame is dragged so as to overlap the corresponding corner of the court on the real screen, the visual coordinates of the coordinate setting point P1 are input and the correspondence between the visual coordinates and the absolute coordinates of the coordinate setting point P1 is acquired. The visual coordinates of the corresponding points are acquired in the same way for the other coordinate setting points.
  • the visual coordinates based on the calculated conversion parameters are displayed overlaid on the selected actual video frame, the deviation between the actual video frame and the displayed visual coordinates is checked, and an actual video frame with a large deviation is selected as a setting target; the conversion parameter can then be corrected by selecting a coordinate setting point whose visual coordinates deviate greatly from the actual video frame and dragging it to the corresponding point on the actual video frame.
  • conversion parameters change continuously when the camera is panned or tilted, zoomed up or down. For this reason, when visual coordinates are corrected, conversion parameters in the meantime can be obtained by interpolation.
  • the conversion parameters obtained for each video frame are recorded in the conversion parameter table 11 as shown in FIG.
  • conversion parameters (h00 to h21) and video frame IDs to be set are recorded in association with each other.
  • the manual setting flag “1” is added to the conversion parameters obtained in the actual video frame.
  • the conversion parameter in the meantime is obtained by interpolation, and the manual setting flag is “0”.
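  • a minimal sketch of how one record of the conversion parameter table 11 might be held in memory is shown below; the field names are assumptions, since the text only states that the frame ID, the eight parameters h00 to h21, and the manual setting flag are recorded, and the parameter values shown are placeholders rather than real data.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ConversionParameterRecord:
    """One row of the conversion parameter table 11 (illustrative layout)."""
    frame_id: int                                   # video frame ID to be set
    h: Tuple[float, float, float, float,
             float, float, float, float]            # h00, h01, h02, h10, h11, h12, h20, h21
    manual_flag: int                                # 1 = set by the operator, 0 = interpolated

table_11 = [
    ConversionParameterRecord(120, (1.2, 0.1, -35.0, 0.0, 1.4, -20.0, 0.0, 0.001), 1),
    ConversionParameterRecord(121, (1.2, 0.1, -35.2, 0.0, 1.4, -20.1, 0.0, 0.001), 0),
]
```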
  • FIG. 6 is a flowchart describing a work procedure in the coordinate setting described above.
  • the operator operates the video data playback device 4 through the arithmetic device 2, and the video data serving as the material of the three-dimensional computer graphics image is transferred to the video data memory 6 through the video data input interface 5 (step S21). The data reproduced by the video data reproduction device 4 is only required to support normal playback, frame-by-frame playback, and rewinding for each video frame on the work display 9; the data type and data format are not limited.
  • the operator performs frame-by-frame playback or rewinding of the video data stored in the video data memory 6 through the operation of the arithmetic device 2 (step S22) and selects a video frame that is the target of the visual coordinate setting work (step S23).
  • the operator causes the wire frame generation unit 10 to display the wire frame with the absolute coordinates shown in FIG. 3, causes the arithmetic device 2 to calculate the conversion parameter H, and performs the visual coordinate setting work using this result (step S24). The set visual coordinates are then displayed on the screen superimposed on the video frame that is the work target, the positions are finely corrected, and the arithmetic device 2 is caused to recalculate the conversion parameter H for every fine correction (step S25).
  • the arithmetic device 2 records the conversion parameters H obtained as a result of steps S24 and S25 in the conversion parameter table 11 shown in FIG. 5 in association with the corresponding video frame IDs, and sets the manual setting flag of the setting target video frame ID in the conversion parameter table 11 to "1" (step S26).
  • the arithmetic device 2 counts the number of video frames for which the operator has set visual coordinates and determines whether that number is greater than 1 (step S27).
  • if the number of video frames is 1, the operator performs visual coordinate setting work for another video frame (step S22).
  • if the number is greater than 1, for each video frame lying between the manually set video frames for which visual coordinates have not been set, the conversion parameter is automatically calculated by linear interpolation with the elapsed time as the variable and recorded in the conversion parameter table 11 in association with the interpolation target video frame ID (step S28). At this time, the manual setting flag is set to "0".
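  • a minimal sketch of the interpolation in step S28, assuming that each of the eight parameters h00 to h21 is interpolated linearly between the two nearest manually set video frames, with the frame index standing in for elapsed time.

```python
import numpy as np

def interpolate_conversion_parameter(frame_a, H_a, frame_b, H_b, frame_t):
    """Linearly interpolate the conversion parameter for an intermediate frame.

    H_a and H_b are the 3x3 parameter matrices (H[2, 2] == 1) recorded for the
    manually set frames frame_a < frame_t < frame_b.
    """
    t = (frame_t - frame_a) / float(frame_b - frame_a)
    return (1.0 - t) * H_a + t * H_b   # element-wise over h00..h21; H[2, 2] stays 1
```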
  • the operator performs the processing from step S22 to step S28 on a plurality of video frames in the processing target video data, finally checks the overall setting state of the visual coordinates, and, if necessary, makes fine adjustments to any video frame that needs them (step S29).
  • FIG. 7 is a flowchart showing details of step S24 of the present invention.
  • the operator causes the wire frame generation unit 10 to display a wire frame of absolute coordinates on the selected actual video frame through the operation of the arithmetic device 2 (step S241).
  • the operator selects, from among the coordinate setting points P1 to P33 (see FIG. 3) shown on the wire frame, four points whose positions can be identified on the processing target video data displayed in the actual video frame, and drags each of these coordinate setting points on the wire frame to the same point on the actual video frame (step S242).
  • the arithmetic device 2 calculates the coordinate conversion parameter H between the absolute coordinates and the visual coordinates and, based on this, automatically calculates the visual coordinates of the remaining coordinate setting points not selected in step S242, records them in the coordinate setting point recording table 12 shown in FIG. 4, draws lines between the points, and displays them on the screen as the visual coordinates of the wire frame (step S243).
  • the operator compares the visual coordinates of the wire frame displayed as a result of the automatic calculation of the conversion parameter H with the lines shown on the actual video frame, finds the portions where the lines and the coordinate setting points are shifted, and finely adjusts the coordinates of the coordinate setting points where the deviation occurs. The arithmetic device 2 recalculates the coordinate conversion parameter H every time a fine adjustment is made and redisplays the coordinate setting points and lines of the visual coordinate court according to the calculation result, so that the operator can make corrections while watching the display.
  • FIG. 8 is a flowchart showing details of step S243 of the present invention.
  • a correspondence table between visual coordinates and absolute coordinates of the wire frame is prepared in advance on the coordinate setting point recording table 12.
  • the arithmetic device 2 acquires the visual coordinates (X1, Y1) to (X4, Y4) of the four points that the operator has dragged into position on the work display 9 (step S2431).
  • the calculation device 2 calculates the conversion parameter H from the visual coordinates and absolute coordinates of each coordinate setting point by the calculation shown in the equation (1).
  • the conversion parameter H and the coordinates (X′n, Y′n) on the absolute coordinates corresponding to the selected coordinate setting points are substituted into equation (2) to calculate the visual coordinates (Xn, Yn) (step S2432).
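  • equations (1) and (2) themselves are not reproduced in this text. Given the eight parameters h00 to h21 recorded in the conversion parameter table, a standard planar projective transform with h22 fixed to 1 is a plausible reading; the sketch below estimates such a parameter set from the four dragged correspondences and applies it, and is offered only as an interpretation, not as the patent's exact formulas.

```python
import numpy as np

def estimate_conversion_parameter(absolute_pts, visual_pts):
    """Solve for H (h22 = 1) from four correspondences, in the spirit of equation (1).

    absolute_pts: [(X'1, Y'1), ..., (X'4, Y'4)] on the absolute-coordinate court
    visual_pts:   [(X1, Y1), ..., (X4, Y4)] measured on the screen
    """
    A, b = [], []
    for (xp, yp), (x, y) in zip(absolute_pts, visual_pts):
        # x = (h00*X' + h01*Y' + h02) / (h20*X' + h21*Y' + 1)
        # y = (h10*X' + h11*Y' + h12) / (h20*X' + h21*Y' + 1)
        A.append([xp, yp, 1, 0, 0, 0, -x * xp, -x * yp]); b.append(x)
        A.append([0, 0, 0, xp, yp, 1, -y * xp, -y * yp]); b.append(y)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def absolute_to_visual(H, absolute_pt):
    """Project an absolute-coordinate point into visual coordinates, in the spirit of equation (2)."""
    xp, yp = absolute_pt
    w = H[2, 0] * xp + H[2, 1] * yp + H[2, 2]
    return ((H[0, 0] * xp + H[0, 1] * yp + H[0, 2]) / w,
            (H[1, 0] * xp + H[1, 1] * yp + H[1, 2]) / w)
```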
  • the arithmetic device 2 reads the line segment configuration information in the coordinate setting point recording table 12 shown in FIG. 4 and draws line segments connecting all the calculated coordinate setting points on the work display 9, so that the visual coordinates reflecting the conversion parameter H are drawn on the work display 9 (step S2433).
  • the arithmetic device 2 then records the obtained conversion parameter H in the conversion parameter table 11 shown in FIG. 5 in association with the actual video frame ID to be set (steps S2434 to S2435).
  • FIG. 9 is a flowchart showing details of step S25 of the present invention.
  • the arithmetic device 2 displays the visual coordinates based on the conversion parameter H recorded in the conversion parameter table 11 shown in FIG. 5 in association with the displayed actual video frame ID, superimposed on the selected actual video frame (step S251).
  • the worker confirms the deviation between the actual video frame on the display and the visual coordinates, and selects the actual video frame having a large deviation as a setting target (step S252).
  • the operator selects a coordinate setting point with a large deviation between visual coordinates and the actual video frame, and drags and corrects the point to a point on the actual video frame (step S253).
  • the arithmetic device 2 acquires the coordinates of the drag-corrected coordinate setting point as (X′4, Y′4) (step S254), and acquires the coordinates of three coordinate setting points on the visual coordinates other than the drag-corrected coordinate setting point ID as (X1, Y1) to (X3, Y3) (step S255).
  • the calculation device 2 calculates the conversion parameter H from the visual coordinates and absolute coordinates of each coordinate setting point by the calculation shown in the equation (1).
  • the conversion parameter H and the coordinates (X′n, Y′n) on the absolute coordinates corresponding to the selected coordinate setting points are substituted into equation (2) to calculate the visual coordinates (Xn, Yn) (step S256).
  • the arithmetic unit 2 reads the line segment configuration information in the coordinate setting point recording table 12 shown in FIG. 4 and draws the line segments on the work display 9 for all the calculated coordinate setting points. Thus, the visual coordinates reflecting the conversion parameter H are drawn on the work display 9 (step S257).
  • the arithmetic device 2 then records the obtained conversion parameter H in the conversion parameter table 11 shown in FIG. 5 in association with the actual video frame ID to be set (step S259).
  • steps S256 to S259 are the same as steps S2432 to S2435 in FIG.
  • figure data of three-dimensional human body model computer graphics expressing a finite number of movements specific to the players of each sport are stored in advance in the figure recording table 14 as a database, and an optimal figure data record is assigned to the player object displayed in the actual video frame that is the target of CG production.
  • human motion is complex, but if it is limited to the motion of a soccer player, it can be aggregated into several motions such as standing still, walking, running, dribbling, trapping, and heading. Therefore, series of figure data expressing these actions are determined in advance, the most appropriate of these actions is determined for the player object displayed in the actual video frame that is the target of CG production, and the corresponding figure data are read from the database to generate a CG image.
  • each motion is expressed as a collection of a plurality of figure data so that motions such as "walking", "starting to run", "sprinting at full speed", and "stopping suddenly" can be expressed smoothly.
  • furthermore, a series of actions of the player is expressed as a smooth moving image by composing a sequence from a plurality of data groups. This is described below.
  • first, identification IDs are assigned to all displayed players. Then, the position recording target player is selected on the screen and, while the actual video data is played back frame by frame, the video frames in which the moving direction and moving speed of the position recording target player change rapidly are selected and the foot portion of the player image is marked. The start point video frame and the end point video frame are thereby designated. At this time, the state of the figure data of each player on the absolute coordinate court is recorded.
  • FIG. 10 shows the structure of the figure recording table 14 in which the state of the figure data of each player on the absolute coordinate court at this time is recorded.
  • the figure recording table 14 records, for each video frame, the arrangement of each player on the absolute coordinates, the rotation of the figure data, a flag identifying manually set video frames, the type of data group, and the figure data record ID.
  • the figure data rotation item is the result of the figure data rotation correction by the 3D direction correction tool.
  • the three-dimensional direction correction tool is a correction tool for displaying a player and a ball in a three-dimensional space. That is, the display screen on the work display 9 is two-dimensional, and it is difficult to operate the ball and the player image in the three-dimensional space shown on the absolute coordinate court with a general pointer display. Therefore, a simple operation is realized by using the three-dimensional direction correction tool shown in FIG.
  • FIG. 13A shows a three-dimensional traveling direction pointer.
  • This pointer has a shape of a Cartesian coordinate axis in a three-dimensional direction, shows the traveling direction of the ball and the player image in an easy-to-see manner, and realizes simple correction work. Further, the pointer can be tilted in any of the three-dimensional directions. As shown in FIG. 13B, the moving direction of the ball and the player image can be easily seen even in the tilted state, and simple correction is possible. Realize work.
  • FIG. 13C shows a three-dimensional rotation pointer. This pointer indicates a three-dimensional rotation plane and rotation direction, and realizes a simple correction work when rotating the player image.
  • the manual setting flag indicates whether or not recording by marking has actually been performed. As described above, a video frame in which the moving direction and moving speed of the position recording target player changes rapidly is selected, and the player video is marked. In this case, the manual setting flag is “1”. Since the movement is continuous between the start video frame and the end video frame, each item is obtained by interpolation, and the manually set flag is “0” for the interpolated video frame.
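  • a minimal sketch of how the placement for the interpolated (flag 0) frames could be filled in, assuming linear interpolation of the absolute-coordinate position over the frame indices between the marked start and end frames.

```python
def interpolate_player_position(start_frame, start_pos, end_frame, end_pos, frame):
    """Return the interpolated absolute-coordinate placement for an in-between frame."""
    t = (frame - start_frame) / float(end_frame - start_frame)
    return (start_pos[0] + t * (end_pos[0] - start_pos[0]),
            start_pos[1] + t * (end_pos[1] - start_pos[1]))
```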
  • the data group is a collection of a plurality of figure data for expressing a series of individual actions such as “walking”, “running”, “trapping”, and “heading”.
  • the "walking" data group, for example, can be expressed as a collection of figure data of a human body model showing a "right foot forward" state, figure data showing a "right foot placed on the ground" state, figure data showing a "left foot forward" state, and figure data showing a "left foot placed on the ground" state.
  • each figure data item is managed by a wire frame ID. The direction and intensity of the light rays that strike the wire frame are set, and surface image mapping processing is performed taking into account the shadows formed by the light rays, so that a three-dimensional human body model computer graphics image of the figure is obtained.
  • the figure data record ID is a record ID assigned based on the sequence derived from the order of the data groups.
  • FIG. 11 shows the structure of a sequence for expressing a smooth operation.
  • the sequence is configured as a continuum of data groups indicating individual operations, and the data group is configured as a continuum of figure data record IDs indicating operation changes from the start to the end of individual operations.
  • Figure data records are selected in advance so that a smooth transition can be made from one data group to the next. In addition, the breakdown of a sequence, that is, the composition of its data groups, is prepared so that, even at the same moving speed, a data group expressing a movement other than simple running, such as "falling down" or "trapping", can be chosen.
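  • the sketch below illustrates one possible in-memory layout of figure data records, data groups, and sequences as described above; the class and field names are assumptions, not the patent's schema, and the IDs are placeholders.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FigureDataRecord:
    record_id: int        # figure data record ID
    wire_frame_id: int    # each figure (pose of the 3D human body model) is managed by a wire frame ID

@dataclass
class DataGroup:
    name: str                        # individual motion, e.g. "walking", "trapping"
    records: List[FigureDataRecord]  # figure data from the start to the end of the motion

@dataclass
class Sequence:
    name: str
    groups: List[DataGroup]          # continuum of data groups expressing continuous motion

# The "walking" data group built from the four poses mentioned in the text.
walking = DataGroup("walking", [
    FigureDataRecord(1, 101),   # right foot forward
    FigureDataRecord(2, 102),   # right foot placed on the ground
    FigureDataRecord(3, 103),   # left foot forward
    FigureDataRecord(4, 104),   # left foot placed on the ground
])
```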
  • an optimal data group is selected based on the position and moving direction of the selected player and the number of recorded video frames, and an optimal sequence is selected based on a continuous data group configuration for the player.
  • the figure data constituting the motion animation of the player is assigned to the player position on each video frame. For example, when the player's movement with respect to the number of recorded video frames is calculated, the speed of the player's movement is obtained. This makes it possible to determine the operation of the player, such as “still”, “walking”, or “running”.
  • FIG. 12 is an example of the operation determination table 15 that determines the operation as described above. In this example, an operation is assigned corresponding to the calculated speed, but a direction or the like may be further used.
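  • a minimal sketch of such a speed-based determination is given below; the threshold values and the frame rate are placeholders, since the actual contents of the operation determination table 15 in FIG. 12 are not given in this text.

```python
# Placeholder thresholds standing in for the operation determination table 15 (FIG. 12).
OPERATION_TABLE = [
    (0.2, "still"),
    (2.0, "walking"),
    (5.0, "running"),
    (float("inf"), "sprinting"),
]

def determine_operation(start_pos, end_pos, n_frames, fps=30.0):
    """Derive the player's speed from the movement between the start and end frames
    and the number of video frames, then look up the corresponding operation."""
    dx = end_pos[0] - start_pos[0]
    dy = end_pos[1] - start_pos[1]
    distance = (dx * dx + dy * dy) ** 0.5     # movement on the absolute-coordinate court
    speed = distance / (n_frames / fps)       # distance per second
    for threshold, operation in OPERATION_TABLE:
        if speed <= threshold:
            return operation
```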
  • between "walking" and "running", the figure data of the "starting to run" data group is read and replayed, so that the movement as a whole is not discontinuous.
  • FIG. 14 is a flowchart showing processing of figure data according to the present invention.
  • the worker assigns identification IDs to all displayed players through the entire real video data to be produced by CG (step S31).
  • next, a position recording target player is selected on the screen (step S33) and, while the actual video data is played back frame by frame, video frames in which the moving direction and moving speed of the position recording target player change abruptly are selected and the foot portion of the player image on the work display 9 is marked (step S34).
  • when the position recording of all the players is completed (step S32), the operator again displays the first video frame of the real video data and, while playing the real video data back frame by frame, selects video frames in which the moving direction and moving speed of the ball change rapidly with respect to the trajectory of the ball projected on the ground, and marks the locus of the ball projected on the ground on the work display 9 (step S35).
  • when this work is completed for all the video frames, the operator again displays the first video frame of the real video data and, based on the trajectory of the ball projected on the ground recorded in step S35, selects video frames in which the moving direction and moving speed of the ball in the height direction change rapidly, and records a ball trajectory including a height component using the three-dimensional direction correction tool (step S36).
  • when this work is completed for all the video frames, the operator operates the arithmetic device 2 to reproduce the three-dimensional computer graphics animation based on the recorded player and ball positions, and corrects the assignment of sequences and data groups (step S37).
  • the arithmetic device 2 selects an optimal data group based on the position and moving direction of the selected player and the number of recorded video frames, selects an optimal sequence based on the continuous data group configuration for that player, and assigns the figure data constituting the player's motion animation based on the sequence to the player position in each video frame.
  • the operator displays the actual video data and the assigned data superimposed on each other, confirms the motion, direction, and inclination of the assigned figure data during frame-by-frame playback, and, by making fine adjustments with the 3D direction correction tool, such as correcting the start or end position of the sequence animation or correcting its scale, produces the player's motion as a CG animation.
  • FIG. 15 is a flowchart showing details of step S31 of the present invention.
  • the operator operates the arithmetic device 2 to display, on the work display 9, the first video frame of the real video data for which the visual coordinates have already been set (step S311).
  • the operator operates the arithmetic device 2 through the arithmetic device operating device 1 to assign an identification ID to every player displayed on the first video frame of the video data on the work display 9 (step S312).
  • the operator then plays back the video data (steps S313 and S314) and determines whether a new player to whom no ID was assigned in step S312 is displayed in the video frame (step S315). If such a player is displayed, an ID is assigned to that player using the same procedure as in step S312 (step S316). The operations from step S313 to step S316 are repeated until the last video frame.
  • FIG. 16 is a flowchart showing details of step S33 of the present invention.
  • the operator operates the arithmetic device 2 to display the first video frame of the position setting start work on the work display 9 (step S331).
  • the operator operates the arithmetic device 2 through the arithmetic device operating device 1, and selects a position setting target player from the players displayed on the work display 9 (step S332).
  • FIG. 17 is a flowchart showing details of step S34 of the present invention.
  • the operator operates the computing device 2 through the computing device operating device 1 and puts a mark indicating the player's position on the visual coordinates on the displayed position setting target player (step S341).
  • a detailed flowchart of step S341 is shown in FIG.
  • the operator plays back the video data and determines whether the current frame is the final video frame (step S342). If it is not the final video frame, the frame is advanced (step S343), and a video frame in which the running direction or running speed of the position setting target player changes abruptly is searched for by frame advance (step S344). If such a video frame is found, the process returns to step S341; if no such frame is found before the final video frame, the position of the position setting target player is marked on the final video frame (step S345).
  • FIG. 18 is a flowchart showing details of step S35 of the present invention.
  • the worker operates the arithmetic device 2 to display the first video frame of the ball position setting start work on the work display 9 (step S351).
  • the operator operates the computing device 2 through the computing device operating device 1 and puts a mark indicating the position of the ball on the visual coordinates at the portion of the ball image displayed on the work display 9 corresponding to the locus projected on the ground (step S352).
  • the operator plays back the video data and determines whether the current frame is the final video frame (step S353). If it is not the final video frame, the frame is advanced (step S354), and a video frame in which the moving direction or moving speed of the ball changes rapidly is searched for by frame advance (step S355). If such a video frame is found, the process returns to step S352; if no such frame is found, the ball position is marked on the final video frame (step S356).
  • FIG. 19 is a flowchart showing details of step S36 of the present invention.
  • the operator operates the arithmetic device 2 to display on the work display 9 a soccer court screen based on absolute coordinates having coordinate axes in the vertical, horizontal, and height directions (step S361).
  • the operator operates the arithmetic device 2 to display the movement trajectory information of the ball recorded through step S34 on the soccer court screen as line segments and a point indicating the ball (step S362).
  • when the operator instructs the system to display the figure animation of the player involved with the ball together with the movement trajectory information of the ball, the arithmetic device 2 assigns and displays a sequence prepared as the figure animation of the player involved with the ball (step S363).
  • the operator compares the movement of the point indicating the ball with the movement of the figure animation, sets correction points on the soccer court screen so that they are in a natural positional relationship with each other (step S364), and corrects the height direction component of the line segments of the ball trajectory using the three-dimensional direction correction tool (step S365).
  • after the correction is completed, the movement of the player and the ball is reproduced as an animation on the soccer court screen (step S366) and the mutual movement and positional relationship are reconfirmed (step S367). This process is repeated until there is no discrepancy in the positional relationship between the player and the ball throughout the animation.
  • FIG. 20 is a flowchart showing details of step S37 of the present invention.
  • the operator operates the arithmetic device 2 to display the actual video data video frame to be processed on the work display 9 (step S371).
  • the arithmetic device 2 converts the movement trajectory of the ball and the player figure animation, which are set on the absolute coordinates, into visual coordinates viewed from the same viewpoint as the displayed real video frame based on equation (2), superimposes them on the actual video data video frame displayed on the work display 9 (step S372), and reproduces the video data and the animation in synchronization (step S373).
  • the operator operates the arithmetic device 2, compares the actual image on the video data video frame with the player figure (step S374), and selects a point where a deviation occurs (step S375).
  • if there is a discrepancy in the player's movement (step S376), the figure data constituting the data group are corrected by operating the three-dimensional direction correction tool and the arithmetic device 2, and, if necessary, another figure data record is selected (step S377).
  • the operator then reproduces only the corrected ball and player animation (step S378) and confirms whether the ball trajectory and the player's motion are reproduced smoothly (step S379). If they are not smooth, the process returns to step S376; if they are smooth, the process returns to step S373. This is repeated until the ball trajectory and the player's motion are reproduced smoothly throughout the animation.
  • FIG. 21 is a flowchart showing details of step S341 of the present invention.
  • the figure recording table 14 shown in FIG. 11 is made writable (step S34101).
  • in step S341 shown in FIG. 17, when the operator records a mark at the foot of the player's position in the first video frame, the arithmetic device 2 detects the mark coordinates on the visual coordinates (step S34102), converts them from visual coordinates to absolute coordinates using equation (2) (step S34103), writes the actual video frame ID and the absolute coordinates into the figure recording table 14 of FIG. 11, and records "1", indicating that the position mark was set by the operator, in the manual setting flag (step S34104).
  • the operator and the arithmetic device 2 perform the operations and processes from step S34101 to step S34104 for all real video frames for which position marks should be set (step S34105).
  • the arithmetic device 2 then extracts the data records in which "1" is recorded in the manual setting flag (step S34106), extracts the video frame IDs and the absolute coordinates recorded in those records, and counts the number of video frames existing between adjacent extracted video frames, that is, the elapsed time between those video frames (step S34107).
  • next, the difference in the absolute coordinates of the player's arrangement between adjacent extracted video frames, that is, the distance the player has moved between those video frames, is calculated (step S34108).
  • the result of step S34108 is divided by the result of step S34107 to calculate the player's moving speed between those video frames (step S34109).
  • data group candidates corresponding to the calculated speed are displayed (step S34110), and the operator sets the appropriate data group (step S34111).
  • the arithmetic device 2 sets the corresponding sequence from the arrangement of the data groups (step S34112), records the figure data record IDs recorded in the sequence for each real video frame record in the figure recording table 14 (step S34113), and closes the figure recording table 14 (step S34114).
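  • a minimal sketch of steps S34106 to S34109, assuming the figure recording table is held as a list of per-frame records; the record layout and the frame rate are illustrative only.

```python
def moving_speeds(figure_table, fps=30.0):
    """Compute the player's moving speed between adjacent manually marked frames."""
    marked = sorted((r for r in figure_table if r["manual_flag"] == 1),
                    key=lambda r: r["frame_id"])
    speeds = []
    for prev, cur in zip(marked, marked[1:]):
        n_frames = cur["frame_id"] - prev["frame_id"]            # step S34107
        dx = cur["abs_x"] - prev["abs_x"]
        dy = cur["abs_y"] - prev["abs_y"]
        distance = (dx * dx + dy * dy) ** 0.5                    # step S34108
        speeds.append(distance / (n_frames / fps))               # step S34109
    return speeds
```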
  • a plurality of coordinate setting points are set on the line segments that characterize the court, and a wire frame image based on the court lines is generated.
  • the coordinate setting points are dragged to the corresponding positions on the court shown on the screen and superimposed, whereby the visual coordinates of the coordinate setting points are obtained and the conversion parameters are determined.
  • if the correspondence between the visual coordinates and the absolute coordinates becomes shifted, the point on the wire frame may be dragged again to the corresponding point on the screen for correction.
  • for example, when the camera pans from the court on one's own side to the opponent's court following the movement of the ball and a coordinate setting point such as P8 goes off the screen, the coordinate setting points P3, P6, P7, and P11 on the opponent's court may be used instead.
  • conversion parameters change continuously when the camera is panned or tilted, zoomed up or down. For this reason, when visual coordinates are corrected, conversion parameters in the meantime can be obtained by interpolation.
  • figure data of three-dimensional human body model computer graphics expressing a finite number of movements specific to the players of each sport are created in advance as a database in the figure recording table 14, and an optimum data record is set for the player object displayed in the actual video frame that is the target of CG production.
  • smooth motion animation is realized by handling motion as a series of figure data, and by treating continuous motion as a sequence.
  • as described above, even with a moving image from a camera installed in one place, in which the entire subject space cannot be captured and the focus of the video constantly changes, it is possible to easily and accurately set the two-dimensional coordinate axes on the court that are necessary for producing a high-quality three-dimensional image whose viewpoint the user can set freely.
  • furthermore, using as material television broadcast video data of a sports game in which the players do not wear any points, a smooth animation using a high-quality three-dimensional human body model can be produced through short and simple work, even when the display viewpoint is set at a position very close to the target object. Accordingly, by preparing a figure recording table, data groups, and sequences specific to each sport, a high-quality three-dimensional CG animation can be produced from broadcast video of any sports game.
  • beyond sports, the present invention can also be used to set the accurate two-dimensional coordinate axes, including the depth direction, of the object space that are necessary for producing 3D images using video data of broadcasting studios, theaters, concert halls, and the like.
  • for example, it is possible to create 3D computer graphics as a new video product from past entertainment video data, to create 3D computer graphics of an accident scene from the camera images of a drive recorder installed in a commercial vehicle, and to produce 3D computer graphics of crime scenes from security camera images.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method for accurately setting the correspondence between visual coordinates and absolute coordinates using, as material, time-varying images from a camera installed at a single location, thereby creating smooth animations with a high-quality three-dimensional model. Conversion parameters are determined by setting a plurality of coordinate setting points on line segments characterizing a court, creating a wire frame image based on the court lines, and dragging the coordinate setting points of the wire frame to the corresponding court positions on a screen, thereby acquiring the visual coordinates of the coordinate setting points. Figure data of three-dimensional human body model computer graphics, expressing a finite number of actions intrinsic to the players of each sporting event, are built in advance into a database via a figure recording table (14), so that the optimal data record is set for a player object displayed in an actual video frame that is the target of CG production.
PCT/JP2008/056366 2008-03-31 2008-03-31 Système et procédé d'affichage d'image graphique par ordinateur WO2009122510A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2008089438A JP2009245060A (ja) 2008-03-31 2008-03-31 コンピュータグラフィックス画像表示システム及び方法
JP2008-089438 2008-03-31

Publications (1)

Publication Number Publication Date
WO2009122510A1 true WO2009122510A1 (fr) 2009-10-08

Family

ID=41134931

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2008/056366 WO2009122510A1 (fr) 2008-03-31 2008-03-31 Système et procédé d'affichage d'image graphique par ordinateur

Country Status (2)

Country Link
JP (1) JP2009245060A (fr)
WO (1) WO2009122510A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6583923B2 (ja) * 2016-08-19 2019-10-02 Kddi株式会社 カメラのキャリブレーション装置、方法及びプログラム

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006230630A (ja) * 2005-02-23 2006-09-07 Nihon Knowledge Kk 実技分析システム及びプログラム

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006230630A (ja) * 2005-02-23 2006-09-07 Nihon Knowledge Kk 実技分析システム及びプログラム

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HIDEHIRO OKI ET AL.: "Soccer Kyogi o Taisho to Shita Digital Scorebook no Tameno Gazo Shori Shien", THE JOURNAL OF THE INSTITUTE OF IMAGE ELECTRONICS ENGINEERS OF JAPAN, vol. 34, no. 5, 25 September 2005 (2005-09-25), pages 567 - 577 *

Also Published As

Publication number Publication date
JP2009245060A (ja) 2009-10-22

Similar Documents

Publication Publication Date Title
US7075556B1 (en) Telestrator system
US6741250B1 (en) Method and system for generation of multiple viewpoints into a scene viewed by motionless cameras and for presentation of a view path
US9747870B2 (en) Method, apparatus, and computer-readable medium for superimposing a graphic on a first image generated from cut-out of a second image
JP5861499B2 (ja) 動画提示装置
US9747714B2 (en) Method, device and computer software
US9773523B2 (en) Apparatus, method and computer program
JP4700476B2 (ja) 多視点映像合成装置及び多視点映像合成システム
US20160381290A1 (en) Apparatus, method and computer program
US20230028531A1 (en) Information processing apparatus that changes viewpoint in virtual viewpoint image during playback, information processing method and storage medium
JP2024079736A (ja) 画像表示装置、制御方法、およびプログラム
US20230353717A1 (en) Image processing system, image processing method, and storage medium
JP2024019537A (ja) 画像処理装置、画像処理方法、及びプログラム
WO2009122510A1 (fr) Système et procédé d'affichage d'image graphique par ordinateur
JP3325859B2 (ja) データ入力方法、データ入力装置、およびデータ入力プログラムを格納する記録媒体、ならびに映像データ操作方法、映像データ操作装置、および映像データ操作プログラムを格納する記録媒体
WO2023100703A1 (fr) Système de production d'image, procédé de production d'image et programme
JPH0926966A (ja) 映像再生方法および映像再生装置
JP2024124310A (ja) 画像処理装置、画像処理方法、及び、プログラム
JP2021179687A (ja) 情報処理装置、情報処理方法およびプログラム
CN118302796A (zh) 图像制作系统、图像制作方法和程序
JPH11175745A (ja) 被写体映像統合方法及び装置及びその方法を記録した記録媒体
GB2539896A (en) Apparatus, method and computer program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08739479

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08739479

Country of ref document: EP

Kind code of ref document: A1