WO2013125098A1 - Computer graphics image processing system and method using AR technology - Google Patents

Computer graphics image processing system and method using AR technology

Info

Publication number
WO2013125098A1
WO2013125098A1 (PCT/JP2012/078175)
Authority
WO
WIPO (PCT)
Prior art keywords
image
computer graphics
camera
marker
virtual camera
Prior art date
Application number
PCT/JP2012/078175
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
伊藤 和彦 (Kazuhiko Ito)
Original Assignee
株式会社マイクロネット (Micronet Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社マイクロネット (Micronet Co., Ltd.)
Publication of WO2013125098A1 publication Critical patent/WO2013125098A1/ja

Links

Images

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Definitions

  • the present invention relates to a computer graphics image processing system and method using augmented reality (AR).
  • AR: augmented reality; CG: computer graphics
  • General AR technology basically has the processing contents shown in FIG. 22. In STEP 1 and STEP 2, a scene including the AR marker 101 is captured by a camera 103, such as a web camera or digital video camera, to acquire a camera frame 105. In STEP 3, the position of the AR marker image 101 within the camera frame 105 is detected by spatial image recognition, and in STEP 4 the CG object 107 previously associated with the AR marker image 101 is synthesized and displayed with the same position, posture, and scale as the AR marker image 101.
  • In computer vision that performs such AR processing, the camera is generally approximated using a pinhole camera model. The idea of the pinhole camera model is that all light reaching the image plane passes through a single focal point, the pinhole, and forms an image at the position where it intersects the image plane. Such a projection is called a central projection.
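  • For reference, the central projection described above can be written explicitly. With (X, Y, Z) a point in the camera coordinate system defined below and f the distance from the pinhole O2 to the image plane, the projected image coordinates are (this is the standard pinhole relation, not a formula reproduced from the patent):

$$x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z}$$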
  • the intersection of the optical axis 111 and the image plane 113 is set as the origin O1, and the x axis and the y axis are taken on the image plane 113 in accordance with the image sensor arrangement direction of the camera 103.
  • the coordinate system is called an image coordinate system.
  • A coordinate system in which the pinhole O2 is regarded as the center of the camera 103, the direction of the optical axis 111 is the Z axis, and the X axis and Y axis are parallel to the x axis and y axis of the image coordinate system is called the camera coordinate system.
  • Digital images that are actually captured are recorded through correction by the lens and by a computer. The origin of the image, the aspect ratio of the pixels, and so on depend on the mechanical characteristics of the camera 103, lens distortion, image sensor characteristics, and the like, and therefore do not match those of the ideal image coordinate system. Accordingly, for the digital image a coordinate system is defined whose origin is at the upper left, with the u axis in the rightward direction and the v axis in the downward direction; this is called the digital image coordinate system.
  • Such a projective transformation matrix P is given by the camera internal parameter matrix A, the rotation matrix R, and the translation vector t. The rotation matrix R is a 3×3 matrix, and [R | t] is a homogeneous-coordinate representation expressed as a 3×4 matrix.
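  • Written out in the conventional form (the symbols f_x, f_y, s, c_x, c_y are standard intrinsic-parameter notation, not taken from the patent's own formulas), a world point M in homogeneous coordinates maps to an image point m ≃ P M with

$$P = A\,[R \mid t], \qquad A = \begin{pmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}$$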
  • Such determination of the internal parameter matrix A, rotation matrix R, and translation vector t of the camera 103 is called camera internal parameter estimation or camera calibration.
  • For camera calibration, calibration pattern examples P1 and P2 such as those shown in FIG. 24 are used; the solution of the equations is obtained from the correspondences found in a plurality of images, and the camera parameters are determined.
  • For this calculation, Zhang's method is used. This technique is described in Non-Patent Document 1 below.
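  • As a concrete illustration, Zhang's method is what OpenCV's calibration routines implement. A minimal sketch follows; the file names and board dimensions are placeholders, not values from the patent:

```python
import glob
import cv2
import numpy as np

# Inner-corner count of the printed checkerboard (placeholder values).
BOARD_COLS, BOARD_ROWS = 9, 6

# Model points of the board corners on the Z = 0 plane, in board units.
obj_grid = np.zeros((BOARD_ROWS * BOARD_COLS, 3), np.float32)
obj_grid[:, :2] = np.mgrid[0:BOARD_COLS, 0:BOARD_ROWS].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):  # several views of the pattern
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (BOARD_COLS, BOARD_ROWS))
    if found:
        obj_points.append(obj_grid)
        img_points.append(corners)

# Zhang's method: recover the intrinsic matrix A, distortion coefficients,
# and per-view rotation/translation (the R and t of the text).
ret, A, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("intrinsic matrix A:\n", A)
```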
  • the system that detects the position of the AR marker image 101 from the digital image 105 actually captured by the camera 103 shown in FIG. 22 by image recognition using the camera parameters obtained in this way is called an AR analyzer.
  • The detected orientation of the AR marker image 101 is drawn with three-dimensional CG, so the 4×4 projection matrix Pa and the 4×4 model view matrix Ma used in general three-dimensional computer graphics calculations are calculated, and an arbitrary point in three-dimensional space is projectively transformed and displayed with reference to the position of the AR marker image 101 on the digital image 105.
  • The projection matrix Pa is defined when the pinhole camera model is expressed as the frustum 121 shown in FIG. 25, or as the transpose of that matrix; the notation above is used here because the direction of matrix multiplication may be reversed. The upper left vertex of the near plane SCR-A at the front of the frustum 121 is (l, t, −n), the lower left vertex is (l, b, −n), the upper right vertex is (r, t, −n), the lower right vertex is (r, b, −n), and the distance to the far plane is f.
  • the projection matrix Pa becomes a fixed value in the imaging system of the AR analyzer
  • the model view matrix Ma represents the detection position, orientation, and scale of the AR marker image.
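  • The components of Pa (FIG. 25, Formulas 5 and 6) are not reproduced in this text, but with the frustum bounds l, r, b, t, n, f defined above, the standard perspective projection matrix of this kind is, up to the transpose noted earlier:

$$Pa = \begin{pmatrix} \frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\ 0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\ 0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{pmatrix}$$

The terms (r+l)/(r−l) and (t+b)/(t−b) are the optical-center deflection components discussed later.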
  • The model view matrix Ma cannot be determined precisely when the AR marker image cannot be recognized. This occurs when the AR marker image 101 is not included in the range that the AR analyzer can recognize within the digital image 105 captured by the camera 103, as shown in FIG. 26, or when the AR marker image 101 is blurred in the captured digital image 105 because of rapid changes in movement speed or angle of view relative to the performance of the camera's image sensor.
  • a so-called markerless AR technology that does not use a dedicated AR marker has been developed.
  • This is an AR technology that detects a feature point from the shape of a real-world object such as a mountain or a human face in a frame photographed by a camera, and performs posture detection and solid identification.
  • Even in markerless AR systems that use a feature point group as the identification target, the same problem occurs when the feature point group leaves the camera frame or becomes difficult to identify because of a change in the angle of view.
  • The present invention has been made to solve the above-described problems of the prior art that occur when CG is synthesized and displayed using image recognition AR technology; its purpose is to provide AR technology that imposes fewer restrictions on camera position, angle of view, and camera work.
  • To that end, the camera that detects the position of the AR marker is always a camera observing a fixed point, while a virtual camera is defined in the CG space and the angle of view and position are changed on the virtual camera side.
  • One feature of the present invention is a computer graphics image processing system using AR technology, comprising: a fixed camera, fixed in position, that captures the AR marker; a parameter setting unit that stores camera parameters of the fixed camera; an AR marker attitude analysis unit that analyzes the position, posture, and scale of the AR marker image in the image frame captured by the fixed camera, using the camera parameters stored in the parameter setting unit; an object image generation unit that, based on the analysis result of the AR marker attitude analysis unit, generates the object corresponding to the AR marker as a computer graphics image placed at the position in the computer graphics image space corresponding to the position of the AR marker image on the image frame, with the posture and scale corresponding to those of the AR marker image; a virtual camera observation image generation unit that determines the appearance of the computer graphics image of the object generated by the object image generation unit when viewed from a virtual camera installed at a predetermined coordinate position in the computer graphics image space, and generates a virtual camera observation image; a computer graphics image composition unit that synthesizes a background image with the computer graphics image of the object viewed from the virtual camera, as generated by the virtual camera observation image generation unit; and a display unit that displays the computer graphics composite image synthesized by the computer graphics image composition unit.
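  • To make the division of labor among these units concrete, a minimal structural sketch follows; all class and method names are illustrative, not taken from the patent:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ParameterSettingUnit:
    """Holds the camera parameters of the fixed camera 5 (intrinsics, distortion)."""
    A: np.ndarray
    dist: np.ndarray

class ARMarkerAttitudeAnalysisUnit:
    """Analyzes position, posture, and scale of the AR marker image."""
    def __init__(self, params: ParameterSettingUnit):
        self.params = params
    def analyze(self, image_frame: np.ndarray) -> np.ndarray:
        """Return the 4x4 model view matrix Ma for the marker in image_frame."""
        raise NotImplementedError

class ObjectImageGenerationUnit:
    """Places the CG object in the CG image space at the pose given by Ma."""
    def place(self, Ma: np.ndarray) -> object:
        raise NotImplementedError

class VirtualCameraObservationImageGenerationUnit:
    """Renders the placed object as seen from the virtual camera (Mq, Pb)."""
    def render(self, scene: object, Mq: np.ndarray, Pb: np.ndarray) -> np.ndarray:
        raise NotImplementedError

class ComputerGraphicsImageCompositionUnit:
    """Composites the virtual camera observation image over a background image."""
    def composite(self, observation: np.ndarray, background: np.ndarray) -> np.ndarray:
        raise NotImplementedError
```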
  • In another aspect of the present invention, a computer is used to capture an image frame including an AR marker image photographed by a fixed camera; the image frame is analyzed using camera parameters stored in advance in the computer, and the position and orientation of the AR marker image are determined; based on this analysis result, the object corresponding to the AR marker is generated as a computer graphics image in the computer graphics image space, at the position corresponding to the AR marker image on the image frame and with the posture and scale corresponding to those of the AR marker image; the appearance of the computer graphics image of the object when viewed from a virtual camera installed at a predetermined coordinate position in the computer graphics image space is then determined and generated as a virtual camera observation image; and this virtual camera observation image is synthesized with a background image and displayed.
  • According to the present invention, problems such as part of the AR marker leaving the camera frame, or the AR marker image in the camera frame being too small to identify so that its position cannot be detected correctly, do not arise, and it is possible to provide computer graphics image processing technology using AR technology that imposes no restrictions on camera position, angle of view, or camera work.
  • FIG. 1 is a functional block diagram of a computer graphics image processing system according to an embodiment of the present invention.
  • FIG. 2 is a flowchart of computer graphics image processing executed by the computer graphics image processing system.
  • FIG. 3 is a diagram for explaining the principle of computer graphics image processing by the system of the embodiment.
  • FIG. 4 is an explanatory diagram showing a relationship between a real camera screen and a virtual camera screen in computer graphics image processing by the system of the above embodiment.
  • FIG. 5 is an explanatory diagram of a digital image projected by the AR analyzer in the system of the above embodiment.
  • FIG. 6 is an explanatory diagram showing a spatial coordinate arrangement between a projection volume and a CG object in an actual camera photographing system in computer graphics image processing by the system of the above embodiment.
  • FIG. 8 is a diagram illustrating projection of a CG object on the first screen plane (when t + b ⁇ 0) in the YZ plane of the real camera photographing coordinate system in the computer graphics image processing by the system of the above embodiment.
  • FIG. 9 is an explanatory diagram showing a process of projecting a CG object onto the first screen surface in the real camera photographing coordinate system in the computer graphics image processing by the system of the above embodiment.
  • FIG. 10 is an explanatory diagram of a state in which a CG object is projected onto the first screen surface in the real camera photographing coordinate system in the computer graphics image processing by the system of the above embodiment.
  • FIG. 11 is an explanatory diagram of a state in which a CG object is projected onto the first screen surface in the actual camera photographing coordinate system by the conventional AR analysis technique as viewed from the second screen.
  • FIG. 12 is an explanatory diagram of a state in which the CG object is projected onto the first screen surface in the real camera photographing coordinate system in the computer graphics image processing by the system according to the above-described embodiment, as viewed from the second screen.
  • FIG. 13 is an explanatory diagram of a state in which a CG object is displayed at the position of the AR marker image on the first screen surface in the real camera photographing coordinate system in the computer graphics image processing by the system of the above embodiment.
  • FIG. 14 shows the state in which the CG object is displayed at the position of the AR marker image on the first screen surface in the real camera photographing coordinate system in the computer graphics image processing by the system of the above embodiment, together with the second screen surface of the virtual camera.
  • FIG. 15 is an explanatory diagram of image processing when a virtual camera is moved in computer graphics image processing by the system of the embodiment.
  • FIG. 16 is an explanatory diagram showing the movement of the AR marker image on the digital image of the real camera when the AR marker is moved in the computer graphics image processing by the system of the embodiment.
  • FIG. 17 is an explanatory diagram showing movement of a CG object viewed from a virtual camera when an AR marker is moved in computer graphics image processing by the system of the above embodiment.
  • FIG. 18 is a photograph of an image obtained by synthesizing a CG object at the position of the AR marker image with respect to a fixed-point image captured by the AR analyzer of the comparative example.
  • FIG. 19 is a photograph of a projective transformation image from a virtual camera observation system on a CG by the computer graphics image processing system according to the first embodiment of the present invention.
  • FIG. 20 is an explanatory diagram of a soccer commentary image (CG for moving two corresponding players by moving two AR markers) by the computer graphics image processing system according to the second embodiment of the present invention.
  • FIG. 21 is a photograph of a captured image of a scene including a general AR marker.
  • FIG. 22 is an explanatory diagram of a conventional AR analysis process.
  • FIG. 23 is an explanatory diagram showing a relationship between a camera coordinate system (X, Y, Z) and an image coordinate system (x, y, z) in a pinhole camera model in a general AR analysis process.
  • FIG. 24 is an explanatory diagram of a pattern example used for general camera calibration.
  • FIG. 25 is an explanatory diagram of the definition of a viewing frustum in a general pinhole camera model.
  • FIG. 26 is an explanatory diagram of problems of the conventional AR analysis technique.
  • A computer graphics image processing system using AR technology has the configuration shown in FIG. 1 and includes an AR analyzer 1, a computer graphics (CG) rendering unit 2, and a display 3. It further includes a camera calibration unit 4, a fixed camera 5 serving as the real camera CAM-A placed at a fixed position, an offset matrix setting unit 6, and a chroma key device 7 that performs chroma key composition processing on the captured image of the fixed camera 5 when necessary.
  • The AR analyzer 1, the CG rendering unit 2, the camera calibration unit 4, and the offset matrix setting unit 6 are implemented by executing the necessary software programs installed in one computer system; here, each processing function required for the implementation is broken out and described as a separate processing unit.
  • The AR analyzer 1 includes a storage unit 11 that stores the projection matrix Pa, the model view matrix Ma, camera parameters, and other necessary data described later; an AR marker image analysis unit 13 that finds the AR marker image in the video from the fixed camera 5, analyzes its position, posture, and scale, and registers the model view matrix Ma in the storage unit 11; and an Mq matrix determination unit 15 that calculates an affine transformation matrix Mq from the analysis result of the AR marker image.
  • The CG rendering unit 2 is composed of, for example, a CG graphics card, and includes a storage unit 21 that stores the digital image of the object to be displayed at the AR marker image position, the background image, and other necessary data; a Pb matrix setting unit 23 for setting the Pb matrix; an object posture determination unit 25 that determines the display position, posture, and scale of the object image; and a CG image synthesizing unit 27 that synthesizes the object image stored in the storage unit 21 onto the captured image of the fixed camera 5 at the position of the AR marker image, with the posture determined by the object posture determination unit 25, and creates a CG composite image.
  • the CG rendering unit 2 also includes a background image input unit 29 for inputting a background image in order to synthesize an object image on the background image.
  • the fixed camera 5 uses a web camera or a video camera capable of digital output of video.
  • the chroma key device 7 is used to input the chroma key composite image to the CG rendering unit 2.
  • the camera calibration unit 4 calculates camera internal parameters and external parameters by camera calibration of the fixed camera 5 and registers them in the storage unit 11 of the AR analyzer 1.
  • The fixed camera 5 has its position and angle of view held fixed, mainly so that the AR marker 101 can be photographed clearly.
  • the offset matrix setting unit 6 sets an offset matrix Mp, and the data of the matrix Mp set here is registered in the storage unit 21 of the CG rendering unit 2.
  • As shown in FIG. 3, an observation system by the virtual camera VRCAM-B is defined separately from the photographing system of the fixed camera 5, which is the real camera CAM-A. The AR marker image MRK1, obtained by projective transformation by the AR analyzer 1 of the image photographed by the real camera CAM-A (5), is affine transformed onto the first screen surface SCR-A in the CG space 20. Since the first screen surface SCR-A is arranged in the same CG space 20, the AR marker image MRK1 affine transformed to the position corresponding to the first screen surface SCR-A can be projectively transformed onto the second screen surface SCR-B viewed from the virtual camera VRCAM-B arranged in that space, and can therefore be observed from the virtual camera VRCAM-B system at a free position and angle of view.
  • When the first screen surface SCR-A is observed from the virtual camera VRCAM-B, the CG image photographed by the real camera CAM-A and projectively transformed by the AR analyzer 1 can easily be projectively transformed onto the first screen surface SCR-A; with this method, however, only a planar CG image projected onto the screen surface SCR-A can be observed. That is, a rectangular area SQ-A having the same aspect ratio as the projected image of the real camera CAM-A photographing system is defined in the world space 20 of the virtual camera VRCAM-B at the position corresponding to the first screen surface SCR-A, and the digital image obtained by projective transformation by the AR analyzer 1 is texture mapped onto the rectangular area SQ-A. The observation result from the virtual camera VRCAM-B, that is, the result of projection onto the second screen surface SCR-B, then simply transfers the rectangular area SQ-A onto the second screen surface SCR-B, so the CG object OBJ1 designed as a three-dimensional shape is merely flattened into a plane and projected onto the second screen surface SCR-B.
  • Therefore, in the virtual camera VRCAM-B observation system, the CG object OBJ1 is placed at the position of the AR marker image MRK1 that was projectively transformed onto the first screen surface SCR-A by the AR analyzer 1, and this is projectively transformed onto the second screen surface SCR-B. Because the CG object OBJ1 is correctly placed, in the coordinates of the virtual camera VRCAM-B system, at the position of the AR marker image MRK1 projected onto the first screen surface SCR-A, and because this projection is a projective transformation of the virtual camera VRCAM-B system itself, the object is projected onto the second screen surface SCR-B while maintaining its three-dimensional shape.
  • STEP 11 The AR marker 101 is created in advance, and the CG of the object OBJ1 corresponding to the AR marker 101 is created and stored.
  • STEP 13 In addition, the internal parameter matrix A, the rotation matrix R, and the translation vector t, which are camera parameters, are determined and stored in advance by camera calibration of the fixed camera 5.
  • STEP 15 The shooting space of the AR analyzer 1 corresponding to the image shot by the fixed camera 5 is determined and stored. That is, the projection matrix Pa is determined and stored.
  • STEP 17 A scene in which the AR marker 101 exists is photographed by the fixed camera 5, and a photographed image in which the AR marker image MRK1 is captured is obtained.
  • STEP 19 The AR marker image MRK1 is found from the photographed digital image.
  • STEP 21 The position (depth), orientation (posture), and size (scale) of the AR marker image MRK1 are determined, and the view model matrix Ma is determined and stored.
  • STEP 23 The appearance of the CG object OBJ1 corresponding to the AR marker image MRK1 on the real camera screen SCR-A is calculated using the matrices Pa and Ma of the storage unit 11.
  • STEP 25 The appearance when projected onto the virtual camera (second) screen SCR-B is determined for the CG object OBJ1 whose appearance on the real camera (first) screen SCR-A has been determined.
  • STEP 27 A digital image as a background and a CG object OBJ1 on the virtual camera screen are synthesized.
  • STEP 29 A composite image of a digital image as a background and the CG object OBJ1 on the virtual camera screen is displayed.
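  • A minimal sketch of this runtime loop (STEPs 17 to 29) is shown below, assuming an ArUco-style square marker stands in for the AR marker 101 and using the OpenCV 4.7+ ArUco API for detection; the detector choice, file names, and the delegated CG rendering are assumptions, not the patent's implementation:

```python
import cv2
import numpy as np

A = np.load("intrinsics.npy")          # STEP 13: calibrated intrinsic matrix
dist = np.load("distortion.npy")       # (placeholder file names)
MARKER_SIZE = 0.08                     # marker edge length in metres (placeholder)

# Object-space corners of a square marker centred at the origin, Z = 0,
# in ArUco corner order: top-left, top-right, bottom-right, bottom-left.
corners_3d = MARKER_SIZE / 2 * np.array(
    [[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]], np.float32)

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))

cap = cv2.VideoCapture(0)              # the fixed camera 5 (CAM-A)
while True:
    ok, frame = cap.read()             # STEP 17: photograph the scene
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(frame)   # STEP 19: find MRK1
    if ids is not None:
        # STEP 21: marker pose -> 4x4 model view matrix Ma.
        ok, rvec, tvec = cv2.solvePnP(corners_3d, corners[0][0], A, dist)
        Ma = np.eye(4)
        Ma[:3, :3], _ = cv2.Rodrigues(rvec)
        Ma[:3, 3] = tvec.ravel()
        # STEPs 23-27: place the CG object at Ma in CG space, render it from
        # the virtual camera (matrices Mq, Pb), and composite it over the
        # background -- the rendering is delegated to a CG API and omitted here.
    cv2.imshow("composite", frame)     # STEP 29: display
    if cv2.waitKey(1) == 27:           # Esc to quit
        break
```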
  • Ma represents the 4×4 model view matrix in the real camera CAM-A photographing system and is itself an affine transformation of spatial coordinates in the coordinates of that system. As described above, it is a relative value calculated from the camera parameters of the real camera CAM-A, and unless it is always multiplied by the projection matrix Pa, a CG object cannot be displayed correctly at the position of the AR marker image MRK1 in the digital image of the real camera CAM-A photographing system. However, since the projective transformation by the projection matrix Pa is equivalent to the mapping onto the first screen surface SCR-A in the real camera CAM-A system, the projective transformation matrix Pa cannot be applied as it is.
  • a frustum shape corresponding to the view volume in the coordinates of the real camera CAM-A imaging system is defined on the coordinates of the virtual camera VRCAM-B observation system and arranged at the position of the AR marker image MRK1.
  • An affine transformation is performed to project the coordinates of the CG object OBJ1 onto the position of the AR marker image MRK1 on the first screen surface SCR-A.
  • The known parameters are the 4×4 projection matrix Pa defined by the AR analyzer 1 and the 4×4 model view matrix Ma in the coordinates of the real camera CAM-A imaging system; from these, the transformation is determined as follows.
  • FIG. 5 shows the CG object OBJ1 projected onto the first screen surface SCR-A by the AR analyzer 1. This projection is the mapping of translation, rotation, and scaling in the spatial coordinate system by the affine transformation with the model view matrix Ma in the spatial coordinates of the real camera CAM-A photographing system, followed by the projective transformation with the projection matrix Pa; an arbitrary spatial coordinate representing the CG object OBJ1 is therefore placed on the AR marker image MRK1 by the affine transformation using the model view matrix Ma and then projected by Pa.
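  • In symbols (writing m for a homogeneous object-space point, matching the mb′ notation used later), the appearance of the CG object OBJ1 on the first screen surface SCR-A is

$$m_a' = Pa \cdot Ma \cdot m$$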
  • the geometric elements constituting the real camera CAM-A system view volume are derived from the projective transformation matrix Pa.
  • The components of this matrix Pa are defined in FIG. 25 and in Formulas 5 and 6, as in the conventional case. For the near-plane distance, n = 1 is often given.
  • the projective transformation matrix Pa includes an optical center deflection component.
  • Ma is the model view matrix that coincides in appearance with the AR marker image MRK1 within the real camera CAM-A view volume determined by the projection matrix Pa, that is, by the camera parameters of the real camera CAM-A.
  • As noted earlier, the actually captured digital image is recorded through correction by the lens and by a computer; the origin of the image, the aspect ratio of the pixels, and so on vary with the mechanical characteristics of the fixed camera 5, lens distortion, the characteristics of the image sensor, and the like.
  • camera parameters are estimated as r + l ⁇ 0 or t + b ⁇ 0.
  • a scaling parameter considering the projection transformation by the projection matrix Pa is defined as follows.
  • Vb is a constant and is the height scale of the first screen surface SCR-A in the virtual camera VRCAM-B observation system.
  • the movement amount Tp at the position of the first screen surface SCR-A in consideration of the deflection component of the optical center axis Oax is defined as follows.
  • Ax is a constant indicating the horizontal aspect ratio of the first screen surface SCR-A: if the digital image of the real camera CAM-A system is a 16:9 image the value is 16/9, and if it is a 4:3 image the value is 4/3.
  • This matrix Mp is set in advance by the offset matrix setting unit 6 and stored in the storage unit 21. The data is variable.
  • [Tp] and [Tr] are 4 ⁇ 4 matrix homogeneous coordinate expressions of the respective translation vectors.
  • Let Mq be an arbitrary affine transformation matrix that arranges the coordinates of the real camera CAM-A photographing system within the coordinates of the virtual camera VRCAM-B observation system, and let Pb be the 4×4 matrix that projects coordinates of the virtual camera VRCAM-B observation system onto the second screen surface SCR-B. Then, for an arbitrary spatial coordinate that is the spatial-coordinate representation of the CG object OBJ1, the projective transformation mb′ onto the second screen surface SCR-B can be expressed by the following equation.
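  • The numbered formula itself is not reproduced in this text; composing the matrices defined above in their stated roles (Ma places the object in the real camera system, the offset matrix Mp maps it onto the first screen surface SCR-A, Mq arranges that system in the virtual camera coordinates, and Pb projects onto SCR-B) suggests, as a reconstruction to be checked against the original formula:

$$mb' = Pb \cdot Mq \cdot Mp \cdot Ma \cdot m$$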
  • In this way, the CG object OBJ1 can be projectively transformed onto the second screen surface SCR-B of the virtual camera VRCAM-B system, while maintaining its three-dimensional shape, at the photographing position of the AR marker image MRK1 observed in the digital image of the real camera CAM-A system, using the angle of view of the virtual camera VRCAM-B coordinate system arbitrarily set in the CG space. Furthermore, the real camera CAM-A coordinate system observed from the virtual camera VRCAM-B system can be moved, rotated, and scaled to an arbitrary position, and the drawing angle of view can be changed.
  • the following operations and effects can be achieved.
  • In the prior art, the digital image obtained by the fixed camera 5 drew the CG object OBJ1 only as a plane; as shown in FIG. 11, when the virtual camera VRCAM-B system is arranged beside the real camera CAM-A photographing system, the object therefore collapses into a straight line. In the present embodiment, by contrast, the CG object OBJ1 can be drawn as seen from the side, as shown in FIG. 12.
  • Since the virtual camera VRCAM-B can change its angle of view, the problem that the AR marker 101 cannot be recognized when it protrudes from the frame 105, as shown in FIG. 26, is also solved: with the real camera CAM-A system set and fixed at an angle of view at which the AR marker can be recognized, as shown in FIG. 13, the virtual camera VRCAM-B can zoom in and out as shown in FIG. 14. The case, also shown in FIG. 26, in which the AR marker 101 is difficult to identify is solved in the same way.
  • The blur that occurs when the AR marker 101 moves, shown in FIG. 26C, is the problem that the AR analyzer 1 cannot recognize the AR marker 101 when it moves at high speed. In the present embodiment, camera movement can instead be expressed by changing the position of the virtual camera VRCAM-B as shown in FIG. 15; the real camera CAM-A system remains fixed, so the AR analyzer 1 can express high-speed camera work with the virtual camera VRCAM-B system while still correctly detecting the position of the AR marker image MRK1.
  • FIGS. 16 and 17 show how the movement of the AR marker image MRK1 observed by the AR analyzer 1 is reflected on the movement of the CG object OBJ1 viewed from the virtual camera VRCAM-B system.
  • The movement of the AR marker image MRK1 is expressed on a plane on the first screen surface SCR-A; however, since the first screen surface SCR-A arranged in the virtual camera VRCAM-B coordinate system can be affine transformed into an arbitrary plane in space, movement on the plane of the real camera CAM-A imaging system can be converted into movement in space in the virtual camera VRCAM-B coordinate system. The affine transformation by Mq is nothing other than the transformation into the virtual camera VRCAM-B coordinate system.
  • As a comparative example, FIG. 18 shows a CG image in which a three-dimensional CG object is synthesized and displayed at the position of the AR marker image in a fixed-point image captured by the AR analyzer 1. FIG. 19 shows, as Example 1, a CG image obtained by applying chroma key processing to a fixed-point image captured by the same real camera CAM-A and projectively transforming it from the virtual camera VRCAM-B system. In the comparative example, when the AR marker 101 is too small to be resolved, the CG object OBJ1 cannot be displayed at the position of the AR marker image MRK1; in Example 1, the CG object OBJ1 can be displayed with the position, posture, and scale of the AR marker image MRK1 at an arbitrarily narrow angle of view.
  • FIG. 20 shows a second embodiment in which a commentary image of a soccer game is generated by chroma key processing.
  • Two AR markers prepared for the soccer commentary are arranged in front of the soccer field board, as shown in FIG. 20. By chroma key processing and CG image composition processing, the CG images of two soccer players are placed on the AR marker images on the soccer field, also as shown in FIG. 20. By moving the two AR markers, a realistic CG image in which the soccer player images move is produced, and the soccer field can be shown not only with the player images viewed from above but also as video viewed from an angle and direction matched to the image.
  • As described above, in the present invention the fixed camera 5 that detects the position of the AR marker is the real camera CAM-A, which always observes a fixed point, while the virtual camera VRCAM-B is defined on the CG space side and the angle of view and position are changed on the virtual camera side. This solves the problems that occur when CG is synthesized and displayed using image recognition AR (AR markers), and makes it possible to create and display CG composite images using image recognition AR that are impossible with conventional systems. The present invention therefore lends itself to use in the television broadcasting field.
  • the technical scope of the present invention also includes a program for causing the computer system to perform the above-described series of processing and a recording medium on which the program is recorded.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
PCT/JP2012/078175 2012-02-22 2012-10-31 Computer graphics image processing system and method using AR technology WO2013125098A1 (ja)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012-036627 2012-02-22
JP2012036627A JP5847610B2 (ja) 2012-02-22 2012-02-22 Computer graphics image processing system and method using AR technology

Publications (1)

Publication Number Publication Date
WO2013125098A1 (ja) 2013-08-29

Family

ID=49005299

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/078175 WO2013125098A1 (ja) 2012-02-22 2012-10-31 Computer graphics image processing system and method using AR technology

Country Status (3)

Country Link
JP (1) JP5847610B2 (ja)
TW (1) TWI501193B (zh)
WO (1) WO2013125098A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115460395A (zh) * 2022-06-24 2022-12-09 北京电影学院 一种基于led背景墙分时复用的摄影机注册跟踪方法

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6585665B2 (ja) * 2017-06-29 2019-10-02 ファナック株式会社 仮想オブジェクト表示システム
JP6781201B2 (ja) 2018-06-05 2020-11-04 ファナック株式会社 仮想オブジェクト表示システム
US11538574B2 (en) 2019-04-04 2022-12-27 Centerline Biomedical, Inc. Registration of spatial tracking system with augmented reality display
JP7404137B2 (ja) 2020-04-01 2023-12-25 株式会社豊田中央研究所 顔画像処理装置及び顔画像処理プログラム
JP7404282B2 (ja) 2021-02-10 2023-12-25 株式会社豊田中央研究所 顔モデルパラメータ推定装置、顔モデルパラメータ推定方法及び顔モデルパラメータ推定プログラム

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000057350A (ja) * 1998-08-10 2000-02-25 Toshiba Corp 画像処理装置と方法及び画像送信装置と方法
JP2011141828A (ja) * 2010-01-08 2011-07-21 Sony Corp 情報処理装置、情報処理システム及び情報処理方法
JP2012003598A (ja) * 2010-06-18 2012-01-05 Riso Kagaku Corp 拡張現実感表示システム

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5476036B2 (ja) * 2009-04-30 2014-04-23 国立大学法人大阪大学 網膜投影型ヘッドマウントディスプレイ装置を用いた手術ナビゲーションシステムおよびシミュレーションイメージの重ね合わせ方法
TWI419081B (zh) * 2009-12-29 2013-12-11 Univ Nat Taiwan Science Tech 提供擴增實境的標籤追蹤方法、系統與電腦程式產品
KR101082285B1 (ko) * 2010-01-29 2011-11-09 주식회사 팬택 증강 현실 제공 단말기 및 방법
JP5573238B2 (ja) * 2010-03-04 2014-08-20 ソニー株式会社 情報処理装置、情報処理法方法およびプログラム
TW201126451A (en) * 2011-03-29 2011-08-01 Yuan-Hong Li Augmented-reality system having initial orientation in space and time and method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000057350A (ja) * 1998-08-10 2000-02-25 Toshiba Corp 画像処理装置と方法及び画像送信装置と方法
JP2011141828A (ja) * 2010-01-08 2011-07-21 Sony Corp 情報処理装置、情報処理システム及び情報処理方法
JP2012003598A (ja) * 2010-06-18 2012-01-05 Riso Kagaku Corp 拡張現実感表示システム

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Augmented Reality(AR: Kakucho Genjitsu) System AR-NIXUS.M1", November 2011 (2011-11-01), Retrieved from the Internet <URL:http://www.nixus.jp/ar-nixus/images/20111101_ar_01.pdf> [retrieved on 20121204] *
"Real Time 3DCG System 3D-NIXUS xf", November 2011 (2011-11-01), Retrieved from the Internet <URL:http://www.nixus.jp/3d-nixus/images/201111013dnixus.pdf> [retrieved on 20121204] *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115460395A (zh) * 2022-06-24 2022-12-09 北京电影学院 一种基于led背景墙分时复用的摄影机注册跟踪方法

Also Published As

Publication number Publication date
JP5847610B2 (ja) 2016-01-27
TW201335884A (zh) 2013-09-01
TWI501193B (zh) 2015-09-21
JP2013171522A (ja) 2013-09-02

Similar Documents

Publication Publication Date Title
JP5872923B2 (ja) Ar画像処理装置及び方法
EP3057066B1 (en) Generation of three-dimensional imagery from a two-dimensional image using a depth map
CN110809786B (zh) 校准装置、校准图表、图表图案生成装置和校准方法
Forssén et al. Rectifying rolling shutter video from hand-held devices
WO2013125098A1 (ja) Computer graphics image processing system and method using AR technology
US7782320B2 (en) Information processing method and information processing apparatus
Tomioka et al. Approximated user-perspective rendering in tablet-based augmented reality
JP2013521544A (ja) 拡張現実のポインティング装置
EP3572916B1 (en) Apparatus, system, and method for accelerating positional tracking of head-mounted displays
JP7164968B2 (ja) 画像処理装置、画像処理装置の制御方法及びプログラム
US10825249B2 (en) Method and device for blurring a virtual object in a video
JP2004213355A (ja) 情報処理方法
KR20140121529A (ko) 광 필드 영상을 생성하는 방법 및 장치
KR20110132260A (ko) 모니터 기반 증강현실 시스템
CN110969706B (zh) 增强现实设备及其图像处理方法、系统以及存储介质
JP6061334B2 (ja) 光学式シースルー型hmdを用いたarシステム
JP2008217593A (ja) 被写体領域抽出装置及び被写体領域抽出プログラム
CN114913308A (zh) 摄像机跟踪方法、装置、设备及存储介质
KR101529820B1 (ko) 월드 좌표계 내의 피사체의 위치를 결정하는 방법 및 장치
JPWO2018189880A1 (ja) 情報処理装置、情報処理システム、および画像処理方法
JP2013231607A (ja) 校正器具表示装置、校正器具表示方法、校正装置、校正方法、校正システム及びプログラム
JP2008203538A (ja) 画像表示システム
US20230306676A1 (en) Image generation device and image generation method
Zheng et al. Pixel-wise closed-loop registration in video-based augmented reality
JP2018032991A (ja) 画像表示装置、画像表示方法及び画像表示用コンピュータプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12869107

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12869107

Country of ref document: EP

Kind code of ref document: A1