US7487468B2 - Video combining apparatus and method - Google Patents

Video combining apparatus and method

Info

Publication number
US7487468B2
Authority
US
United States
Prior art keywords
image
user
video
unit
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US10/671,611
Other languages
English (en)
Other versions
US20040070611A1 (en)
Inventor
Rika Tanaka
Toshikazu Ohshima
Kaname Tomite
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Assigned to CANON KABUSHIKI KAISHA reassignment CANON KABUSHIKI KAISHA ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: OHSHIMA, TOSHIKAZU, TANAKA, RIKA, TOMITE, KANAME
Publication of US20040070611A1 publication Critical patent/US20040070611A1/en
Application granted granted Critical
Publication of US7487468B2 publication Critical patent/US7487468B2/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/44Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/445Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
    • H04N5/44504Circuit details of the additional information generator, e.g. details of the character or graphics signal generator, overlay mixing circuits
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4122Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/4143Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a Personal Computer [PC]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/426Internal components of the client ; Characteristics thereof
    • H04N21/42653Internal components of the client ; Characteristics thereof for processing graphics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/42206User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
    • H04N21/42222Additional components integrated in the remote control device, e.g. timer, speaker, sensors for detecting position, direction or movement of the remote control, microphone or battery charging device

Definitions

  • the present invention relates to a video combining apparatus and method for superimposing an image and information generated by a computer (CG: Computer Graphics) on a video image of the real world, and more particularly, to CG image display control over an area of real space to which the user pays attention.
  • CG: Computer Graphics
  • AR: Augmented Reality
  • MR: Mixed Reality
  • a technique to display a particular real object (e.g., a user's hand) as an always-visible object (mask processing technique)
  • the present invention has been made in consideration of the problems of the conventional techniques, and has as a main object to realize a video combining apparatus for superimposition of a computer-generated image on the real world observed by a user, in which, with a simple setting, no CG image is displayed in a particular real-space area to which the user pays attention.
  • a video combining method for superimposing a virtual image generated by a computer on the real world observed by a user, comprising the steps of: inputting an image obtained by image sensing the real world; inputting position and orientation information of the user's view point; generating a virtual image based on the position and orientation information; extracting a virtual image elimination area of the virtual image; and combining the virtual image with the image obtained by image sensing, based on the virtual image elimination area.
  • a video combining apparatus for superimposing a virtual image generated by a computer on the real world observed by a user, comprising: an image input unit adapted to input an image obtained by image sensing the real world; a position and orientation information input unit adapted to input position and orientation information of the user's view point; a virtual image generation unit adapted to generate a virtual image based on the position and orientation information; an elimination area extraction unit adapted to extract a virtual image elimination area of the virtual image; and a combining unit adapted to combine the virtual image with the image obtained by image sensing, based on the virtual image elimination area.
  • a video combining method for superimposing a virtual image on a video image of the real world observed by a user, comprising: an image input step of inputting a video image of the real world observed by the user; a position and orientation information input step of inputting position and orientation information of the user's view point; a virtual image generation step of generating a virtual image based on the position and orientation information; a designated area detection step of detecting a predetermined area designated by the user; and a superimposition step of superimposing the virtual image on the video image except a portion corresponding to the area in the video image detected at the designated area detection step.
  • a computer-readable medium holding program code to realize, by a computer, a video combining method for superimposing a virtual image generated by a computer on the real world observed by a user, comprising: process procedure code for inputting an image of the real world obtained by image sensing; process procedure code for inputting position and orientation information of the user's view point; process procedure code for generating a virtual image based on the position and orientation information; process procedure code for extracting a virtual image elimination area; and process procedure code for combining the virtual image with the image obtained by image sensing, based on the information on the virtual image elimination area.
  • a computer-readable medium holding program code to realize a video combining method for superimposing a virtual image on a video image of the real world observed by a user, comprising: process procedure code for inputting a video image of the real world observed by the user, obtained by image sensing; process procedure code for inputting position and orientation information of the user's view point; process procedure code for generating a virtual image based on the position and orientation information; process procedure code for detecting a predetermined area designated by the user; and process procedure code for superimposing the virtual image on the video image obtained by image sensing, except a portion corresponding to the area in the video image detected in the detection process.
  • FIGS. 1A and 1B are explanatory views showing the conception of the present invention for designation of a CG elimination area using a frame;
  • FIGS. 2A to 2D are examples of a CG elimination frame;
  • FIG. 3 is a block diagram showing an example of the construction of a video combining apparatus according to a first embodiment of the present invention;
  • FIG. 4 is a flowchart showing an operation of the video combining apparatus according to the first embodiment;
  • FIGS. 5A and 5B are explanatory views of a stylus used in the video combining apparatus according to a second embodiment of the present invention;
  • FIG. 6 is a block diagram showing an example of the construction of the video combining apparatus according to the second embodiment;
  • FIG. 7 is a flowchart showing the operation of the video combining apparatus according to the second embodiment;
  • FIG. 8 is an explanatory view showing a method of designation of a CG elimination area in the video combining apparatus according to a third embodiment of the present invention;
  • FIGS. 9A and 9B are explanatory views showing the method of designation of a CG elimination area, by a user's hand(s), in the video combining apparatus according to the third embodiment;
  • FIG. 10 is a block diagram showing an example of the construction of the video combining apparatus according to the third embodiment;
  • FIG. 11 is a flowchart showing the operation of the video combining apparatus according to the third embodiment;
  • FIG. 12 is a flowchart showing the operation of CG elimination area extraction according to the third embodiment; and
  • FIG. 13 is a flowchart showing the operation of CG elimination area extraction according to the third embodiment.
  • An example of a video combining apparatus is an MR system for auxiliary display of position information and names in correspondence with a landscape viewed by a user wearing a display device.
  • a video see-through HMD capable of position and orientation measurement is employed as the display device. That is, the HMD includes a position and orientation sensor and a camera, and a video image from the user's approximate view point position can be obtained based on position and orientation information (strictly, the position and orientation of the camera) of the user's head.
  • a "CG elimination area" is an area on which the user does not want CG images to be superimposed.
  • a user interface (hereinbelow referred to as a "CG elimination frame") with markers associated with the CG elimination area is employed.
  • the CG elimination area is extracted from the video image from the user's view point position by extracting a marker provided in the CG elimination frame.
  • FIGS. 2A to 2D show examples of the CG elimination frame (association between the markers and the CG elimination area).
  • the CG elimination frame is used under the constraint that it is held parallel to the image sensing plane of the camera (image sensing unit) provided in the HMD.
  • Small circles indicate the markers, and a hatched portion indicates the CG elimination area.
  • the hatched portion may or may not physically exist. If the hatched portion exists, it should be made of a transparent or semi-transparent material, or else an input means for inputting an image of the real world corresponding to the hatched portion is required.
  • the CG elimination frame has a handgrip 21 which the user holds and a frame 22. When the user observes a superimposed image as shown in FIG.
  • in a case where the display device is an optical see-through type device, the arrangement is the same except that the real world is directly observed through the display device.
  • in FIG. 2A, three markers as one set are provided at each of the four corners of a rectangular frame.
  • the CG elimination area can be calculated even if only one of the four marker sets is extracted.
  • in FIG. 2B, markers are provided around a circular frame. Since a circle can be defined by three points, if any three markers are extracted, the internal area thereof can be calculated as a CG elimination area.
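Because any three detected markers on the circular frame determine the circle, the elimination area can be recovered as the circumscribed circle of the three marker centers. The following is a minimal sketch of that geometry (image-coordinate marker centers and the rasterized mask are illustrative assumptions, not an implementation specified by the patent):

```python
import numpy as np

def circle_from_three_points(p1, p2, p3):
    """Circumscribed circle (center, radius) of three non-collinear 2D points."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:
        raise ValueError("markers are collinear; the circle is undefined")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r = np.hypot(ax - ux, ay - uy)
    return (ux, uy), r

def circle_mask(shape, center, radius):
    """Boolean mask (True inside the circle), usable as the CG elimination area."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return (xx - center[0])**2 + (yy - center[1])**2 <= radius**2
```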
  • FIG. 2C shows a variation in which only three of the markers in FIG. 2B are used. This arrangement is effective when the markers attached to the CG elimination frame make the appearance of the frame troublesome.
  • in FIG. 2D, an area (a circle in this figure) ahead of a marker having directionality is defined as a CG elimination area. This arrangement is effective in a case where a marker should not be placed on the boundary between a CG image drawing portion and a CG elimination area.
  • the color of the frame can be determined arbitrarily; however, considering that a fluorescent color or the like not used in real objects is generally used for the markers to assist detection, a color contrasting with the markers is preferably used for the frame.
  • the size of the frame (the size of the CG elimination area) can also be determined arbitrarily; however, if the frame is too large, the CG elimination area becomes too large and most of the CG image included in the field of view cannot be displayed, while if the frame is too small, position control of the frame becomes difficult. Accordingly, an appropriate frame size is set in consideration of a general hand length (and of a variable range, since the percentage of the frame in the image changes in correspondence with the distance from the camera to the frame).
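The variable range mentioned above follows a simple pinhole relation: the fraction of the image occupied by the frame shrinks roughly in proportion to its distance from the camera. A rough illustration (the focal length and frame size below are made-up values):

```python
def apparent_width_px(frame_width_m: float, distance_m: float, focal_px: float) -> float:
    """Pinhole approximation of the on-image width of a frame held parallel
    to the image sensing plane at the given distance."""
    return focal_px * frame_width_m / distance_m

# A 0.20 m frame seen through a camera with an 800 px focal length:
print(apparent_width_px(0.20, 0.60, 800))  # ~267 px at full arm extension
print(apparent_width_px(0.20, 0.30, 800))  # ~533 px when held close
```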
  • FIG. 3 is a block diagram showing an example of the construction of the video combining apparatus according to the first embodiment of the present invention.
  • an image sensing unit 1 is a camera included in the HMD.
  • the image sensing unit 1 obtains video images of real space observed by the user's right eye and left eye, and outputs the obtained video images as video signals to a video capturing unit 2 .
  • in the following description, processing for the right-eye image and processing for the left-eye image are not described separately; in practice, both processing for the right eye and processing for the left eye are performed.
  • the video capturing unit 2 converts the video signal inputted from the image sensing unit 1 into a signal of a format suitable for processing in a video combining unit 6 and a CG elimination area extraction unit 3, and outputs the converted signal to the video combining unit 6 and the CG elimination area extraction unit 3.
  • the CG elimination area extraction unit 3 extracts the markers provided on the CG elimination frame from the video image inputted from the video capturing unit 2, and thereby extracts the CG elimination area (the corresponding area on the video image). Then the CG elimination area extraction unit 3 outputs the extracted CG elimination area to the video combining unit 6.
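The patent does not prescribe a particular marker detection algorithm. Assuming the markers use a distinctive (e.g., fluorescent) color, a minimal sketch of the extraction performed by the CG elimination area extraction unit could look as follows (the HSV range and the use of OpenCV are illustrative assumptions):

```python
import cv2
import numpy as np

# Hypothetical HSV range for fluorescent-colored markers (tune for the actual markers).
MARKER_HSV_LO = np.array([40, 120, 120])
MARKER_HSV_HI = np.array([80, 255, 255])

def detect_marker_centers(frame_bgr):
    """Return (x, y) centroids of marker-colored blobs in the captured frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, MARKER_HSV_LO, MARKER_HSV_HI)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers
```

The returned centers would then be grouped into corner sets or fitted to a circle, as described above for FIGS. 2A to 2D, to yield the elimination area.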
  • an image-sensing position and orientation measurement unit 4, included in the HMD in this embodiment, transmits position and orientation information of the image sensing unit 1 to a CG generation unit 5, with or without a request from the CG generation unit 5.
  • a geomagnetic sensor, a gyro sensor, an optical sensor, or the like may be utilized as the image-sensing position and orientation measurement unit 4.
  • the CG generation unit 5 obtains the position and orientation information of the image sensing unit 1 from the image-sensing position and orientation measurement unit 4 , and estimates the position and image sensing direction of the image sensing unit 1 . Since the field of view can be obtained from a lens parameter of the image sensing unit 1 if the position and image sensing direction of the image sensing unit 1 are estimated, the CG generation unit 5 reads data included in the field of view of the image sensing unit 1 from a data unit 7 , generates a CG image to be superimposed on the video image obtained by the image sensing unit 1 , and outputs the CG image to the video combining unit 6 .
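As a rough sketch of how the field of view follows from a lens parameter and how the data read from the data unit can be limited to it (the symmetric pinhole frustum below is an assumption; the patent only states that the field of view is obtained from a lens parameter):

```python
import math
import numpy as np

def horizontal_fov(focal_length_mm, sensor_width_mm):
    """Field of view (radians) obtained from the lens focal length and sensor width."""
    return 2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm))

def in_view(point_world, cam_pos, cam_rot, fov_h, fov_v, near=0.1, far=1000.0):
    """True if a world point lies inside the camera frustum.
    cam_rot is a 3x3 world-to-camera rotation; the camera looks down +z."""
    p = cam_rot @ (np.asarray(point_world, float) - np.asarray(cam_pos, float))
    if not (near < p[2] < far):
        return False
    return (abs(p[0]) <= p[2] * math.tan(fov_h / 2.0) and
            abs(p[1]) <= p[2] * math.tan(fov_v / 2.0))
```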
  • the video combining unit 6 reads the video image from the video capturing unit 2 , the CG image from the CG generation unit 5 , and the CG elimination area from the CG elimination area extraction unit 3 . Then the video combining unit 6 combines the CG image from the CG generation unit 5 with the video image from the video capturing unit 2 . At this time, the CG image is not drawn in a portion overlapped with the CG elimination area obtained by the CG elimination area extraction unit 3 . In this manner, a combined video image where only the CG image is eliminated from the CG elimination area is generated.
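The combining rule described above can be sketched as a per-pixel overlay that is suppressed inside the elimination area. A minimal version, assuming the CG image carries an alpha channel and the elimination area is a boolean mask:

```python
import numpy as np

def combine(video_rgb, cg_rgba, elim_mask):
    """Overlay the CG image on the captured video, except inside the CG elimination area.

    video_rgb : HxWx3 uint8 captured frame
    cg_rgba   : HxWx4 uint8 rendered CG image with alpha
    elim_mask : HxW bool, True where no CG must be drawn
    """
    alpha = cg_rgba[..., 3:4].astype(np.float32) / 255.0
    alpha[elim_mask] = 0.0            # suppress CG inside the elimination area
    cg = cg_rgba[..., :3].astype(np.float32)
    out = alpha * cg + (1.0 - alpha) * video_rgb.astype(np.float32)
    return out.astype(np.uint8)
```

Drawing a semi-transparent or blinking CG image instead, as mentioned in the next paragraph, would amount to scaling the alpha values in the masked region rather than zeroing them.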
  • a CG elimination frame whose appearance corresponds to its function is more preferable as a user interface. Further, it may be arranged such that, instead of suppressing CG image drawing in the CG elimination area, a CG image with high transparency is drawn (by controlling an alpha (α) component value indicating transparency), or the CG image is blinked, in correspondence with the type of the CG elimination frame.
  • the combined video image generated by the video combining unit 6 is transmitted to the display unit 8 (the HMD in the present embodiment).
  • the data unit 7, e.g., a hard disk, holds data to be delivered to the CG generation unit 5.
  • as data stored in the data unit 7, text information, panoramic video images, three-dimensional CG data, and the like are stored.
  • the data unit 7 transmits appropriate data to the CG generation unit 5 .
  • the data unit 7 sends three-dimensional CG data included in the field of view of the image sensing unit 1 to the CG generation unit 5 .
  • the data unit 7 is not limited to a hard disk but any storage medium such as a tape or a memory device can be used as long as it can store data.
  • the display unit 8 which is an HMD in the present embodiment displays the combined video image signal transmitted from the video combining unit 6 .
  • the HMD has a right-eye image display unit and a left-eye image display unit.
  • the video combining unit 6 generates a display image for right eye and a display image for left eye and supplies them to the HMD, thereby the user can experience three-dimensional CG image display.
  • the operation of the MR system as an example of the video combining apparatus according to the present embodiment having the above construction will be described with reference to the flowchart of FIG. 4 .
  • the data unit 7 holds necessary data in advance.
  • at step S1, the system is started.
  • a video image is obtained from the image sensing unit 1 .
  • the video image is converted to an appropriate format image by the video capturing unit 2 , and sent to the video combining unit 6 and the CG elimination area extraction unit 3 .
  • the markers are extracted from the video image input in the CG elimination area extraction unit 3 , and a CG elimination area is calculated. Then the obtained CG elimination area is sent to the video combining unit 6 .
  • the image-sensing position and orientation measurement unit 4 measures the position and orientation of the image sensing unit 1 .
  • the measured position and orientation information is sent to the CG generation unit 5 .
  • the CG generation unit 5 estimates the field of view of the image sensing unit 1 from the position and orientation information transmitted from the image-sensing position and orientation measurement unit 4 , and obtains data in a range included in the field of view of the image sensing unit 1 , from the data unit 7 .
  • the CG generation unit 5 generates a CG image using the data obtained from the data unit 7 , and sends the generated video image to the video combining unit 6 .
  • the video combining unit 6 combines the video image transmitted from the video capturing unit 2 with the CG image transmitted from the CG generation unit 5 .
  • the CG image is not combined with the portion of the CG elimination area from the CG elimination area extraction unit 3 .
  • a combined video image where the CG image is eliminated from the CG elimination area is generated.
  • the combined video image is sent to the display unit 8 .
  • at step S8, the video image information transmitted from the video combining unit 6 is displayed on the display unit 8.
  • at step S9, it is checked whether or not the system is to be ended. If YES, the system is ended at step S10; otherwise, the process returns to step S2 to repeat the above-described processing.
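The above steps form a per-frame loop; a schematic sketch is shown below (the function names stand in for the units of FIG. 3 and are not APIs defined by the patent):

```python
def run_frame_loop(sense, capture, extract_elimination_area,
                   measure_pose, generate_cg, combine, display, end_requested):
    """Schematic per-frame loop corresponding to FIG. 4 (first embodiment)."""
    while True:
        frame = capture(sense())                     # obtain and convert the video image
        elim_mask = extract_elimination_area(frame)  # markers -> CG elimination area
        pose = measure_pose()                        # position/orientation of image sensing unit 1
        cg = generate_cg(pose)                       # CG for the estimated field of view
        display(combine(frame, cg, elim_mask))       # combine and show (step S8)
        if end_requested():                          # step S9
            break                                    # step S10: end the system
```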
  • thus, in an MR system which, when a user wearing the HMD looks at, e.g., a landscape, displays position information and names in correspondence with the landscape, even when an object of interest is hidden by CG, the object can be observed by holding the CG elimination frame at a corresponding position.
  • an example of the video combining apparatus is a medical assistant system which presents to a doctor an image as if the inside of the patient's body were visible.
  • an optical see-through HMD is used as the display device, since the display resolution of a video see-through HMD is limited.
  • FIGS. 5A and 5B show an example of the stylus. Note that in the following description it is assumed that the view point position and orientation of the user are fixed; in practice, the relative relation between the view point position and orientation of the user and those of the stylus is considered.
  • a stylus 51 has e.g. a pen shape, and includes a position and orientation sensor.
  • a stylus end position is estimated from a distance d between the position detected by the position and orientation sensor and a distal end of the stylus, and an area designated by the end of the stylus is obtained from the stylus end position and a detected inclination ⁇ of the stylus.
  • an area corresponding to a virtual circle 52 in contact with the end of the stylus is defined as a CG elimination designation area.
  • an elliptic area obtained from the virtual circle 52 in correspondence with the inclination ⁇ of the stylus is a CG elimination area. Note that if the inclination of the stylus (orientation information) cannot be obtained, the virtual circle 52 can be utilized.
  • the position and orientation information of the stylus, and information on an ON-OFF switch (not shown) can be obtained from the outside via a signal line connected to the stylus or a communicator.
  • the position and orientation input device is employed as a user interface because:
  • the ON-OFF button of the stylus can be allocated to ON-OFF of CG elimination area definition.
  • the CG image display method can be easily selected by simply choosing the "surgical tool with sensor" or a "surgical tool without sensor".
  • FIG. 6 is a block diagram showing an example of the construction of the MR system according to the second embodiment.
  • a head position and orientation measurement unit 14, included in the HMD serving as the display unit 18 to be described later, transmits head position and orientation information of the user to a CG elimination area extraction unit 13 and a CG generation unit 15, with or without a request from the CG elimination area extraction unit 13 and the CG generation unit 15.
  • a geomagnetic sensor, a gyro sensor, an optical sensor, or the like may be utilized as the head position and orientation measurement unit 14.
  • a stylus state detection unit 19 obtains stylus information (position, orientation, button ON/OFF state, and the like) from a stylus 20 and, with or without a request from the CG elimination area extraction unit 13, transmits the information to the CG elimination area extraction unit 13.
  • the CG elimination area extraction unit 13 calculates a CG elimination area from the position and orientation data inputted from the head position and orientation measurement unit 14 and the stylus information inputted from the stylus state detection unit 19 .
  • an end position of the stylus on an image plane and the orientation of the stylus to the image plane can be calculated from the position and orientation of the head and the position and orientation of the stylus.
  • an elliptic area (whose ellipticity is determined from the information on the orientation of the stylus relative to the image plane), spread on the image plane around the end of the stylus, is defined as the CG elimination area.
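A sketch of this geometry follows (the coordinate conventions, the pinhole projection, and the flattening rule are assumptions; the patent only states that the tip position and the orientation of the stylus relative to the image plane are derived from the head and stylus poses). The stylus tip is offset by the distance d from the sensed position along the stylus axis, projected into the view-point image, and a circle of fixed radius is flattened into an ellipse as the stylus tilts:

```python
import numpy as np

def stylus_elimination_ellipse(stylus_pos, stylus_dir, d, head_pos, head_rot,
                               focal_px, principal_point, circle_radius_px=60.0):
    """Project the stylus tip into the view-point image and return the
    elliptical CG elimination area as (center, semi_axis_a, semi_axis_b).
    The ellipse orientation is omitted for brevity.

    stylus_pos : sensed stylus position (3-vector, world coordinates)
    stylus_dir : unit vector along the stylus toward its distal end
    d          : offset from the sensed position to the distal end
    head_rot   : 3x3 world-to-camera rotation of the user's view point
                 (camera assumed to look down +z)
    """
    tip = np.asarray(stylus_pos, float) + d * np.asarray(stylus_dir, float)
    tip_cam = head_rot @ (tip - np.asarray(head_pos, float))
    u = focal_px * tip_cam[0] / tip_cam[2] + principal_point[0]
    v = focal_px * tip_cam[1] / tip_cam[2] + principal_point[1]
    # Assume the virtual circle lies in the plane perpendicular to the stylus;
    # its projection flattens by |cos| of the angle between the stylus axis and
    # the viewing direction (a full circle when the stylus points at the camera).
    flatten = abs(float((head_rot @ np.asarray(stylus_dir, float))[2]))
    return (u, v), circle_radius_px, circle_radius_px * flatten
```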
  • the CG elimination area extraction unit 13 outputs the extracted CG elimination area to a video combining unit 16 .
  • the CG generation unit 15 inputs the head position and orientation information from the head position and orientation measurement unit 14 and estimates the position and direction of the head. As the field of view of the user can be obtained if the position and orientation are estimated, the CG generation unit 15 inputs data corresponding to a portion included in the field of view of the user from a data unit 17 , generates a CG image overlapped with the field of view of the user, and outputs it to the video combining unit 16 .
  • the video combining unit 16 reads the CG image from the CG generation unit 15 and the CG elimination area from the CG elimination area extraction unit 13 . Then the video combining unit 16 processes the CG image based on the CG elimination area data from the CG elimination area extraction unit 13 , and transmits the CG image to the display unit 18 .
  • the data unit 17, e.g., a hard disk, holds data to be delivered to the CG generation unit 15.
  • as data stored in the data unit 17, text information, panoramic video images, three-dimensional CG data, and the like are stored.
  • the data unit 17 transmits appropriate data to the CG generation unit 15 .
  • the data unit 17 sends three-dimensional CG data included in the field of view of the user to the CG generation unit 15 .
  • the data unit 17 is not limited to a hard disk but any storage medium such as a tape or a memory can be used as long as it can store data.
  • the display unit 18 here is an optical see-through HMD.
  • the display unit 18 displays the video image signal transmitted from the video combining unit 16 so that the video image overlaps on the real world seen through a half mirror by, e.g., projecting the video image signal on the half mirror.
  • at step S11, the system is started.
  • the stylus state detection unit 19 detects the state of the stylus.
  • the detected information is sent to the CG elimination area extraction unit 13 .
  • the head position and orientation measurement unit 14 measures the position and orientation of the user.
  • the measured position and orientation information is sent to the CG elimination area extraction unit 13 and the CG generation unit 15 .
  • the CG elimination area extraction unit 13 calculates a CG elimination area based on the stylus position and orientation information inputted from the stylus state detection unit 19 and the head position and orientation information inputted from the head position and orientation measurement unit 14.
  • the CG elimination area is sent to the video combining unit 16 .
  • the CG generation unit 15 estimates the field of view of the user from the head position and orientation information transmitted from the head position and orientation measurement unit 14 , and obtains data in a range included in the field of view of the user, from the data unit 17 .
  • the CG generation unit 15 generates a CG image using the data obtained from the data unit 17 , and sends the generated video image to the video combining unit 16 .
  • the video combining unit 16 processes the CG image transmitted from the CG generation unit 15 based on the CG elimination area data from the CG elimination area extraction unit 13 (the CG image is not drawn in a portion of the CG elimination area transmitted from the CG elimination area extraction unit 13 ).
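For the optical see-through display there is no captured video to composite with; the video combining unit only needs to blank the CG image inside the CG elimination area so that the real world stays visible through the half mirror there. A minimal sketch (RGBA array layout assumed):

```python
import numpy as np

def mask_cg_for_optical_see_through(cg_rgba, elim_mask):
    """Zero the CG pixels (color and alpha) inside the CG elimination area,
    leaving the real world visible through the optical see-through HMD there."""
    out = cg_rgba.copy()
    out[elim_mask] = 0
    return out
```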
  • the video image is sent to the display unit 18 .
  • at step S18, the video image information transmitted from the video combining unit 16 is displayed on the display unit 18, an optical see-through HMD, whereby the user perceives the CG image superimposed on real space.
  • at step S19, it is checked whether or not the system is to be ended. If YES, the system is ended; otherwise, the process returns to step S12 to repeat the above-described processing.
  • whether or not a CG image is displayed very near the hands can thus be easily selected.
  • the frame as shown in FIGS. 2A to 2D is employed for designation of CG elimination area.
  • in the third embodiment, a CG elimination area can be designated with the user's hands instead of the frame, using the video combining apparatus according to the first embodiment.
  • as shown in FIGS. 9A and 9B, an area surrounded with the user's hands (the hatched area) is recognized as a CG elimination area. That is, the user forms an eye hole with his/her hand(s), thereby designating a desired area as a CG elimination area.
  • FIG. 9A shows an example of designation of a CG elimination area with both hands;
  • FIG. 9B shows an example of designation of a CG elimination area with a single hand. In this manner, the CG elimination area can be designated with the hand(s), and the frame is not necessary. Further, the designation of the CG elimination area can be made by the user's natural action.
  • the video combining apparatus of the present embodiment has a construction to extract the area of the user's hand(s) from a video image from the user's view point position such that the hand(s) is always visible, and to perform mask processing of not drawing a CG image in that area (so that an object hidden by the CG image in the user's sight is made visible).
  • This construction to perform the mask processing is realized by using e.g. a chroma key technique proposed in Japanese Published Unexamined Patent Application No. 2002-95535.
  • since the mask processing on the hand includes a process of extracting the area of the user's hand(s), the internal area of the hand(s) can be easily extracted from a video image from the user's view point.
  • the video combining apparatus of the present embodiment can be realized only by adding processing of extracting the internal area of the hand(s) (the hatched area in FIGS. 9A and 9B ) in the video image from the user's view point to the MR system capable of hand mask processing.
  • the area of the user's hand(s) is extracted from the video image from the user's view point position, and further, the hand internal area (eye hole area) is extracted, thereby a designated area is obtained.
  • the extraction of the user's hand area in the video image from the user's view point and the restraint of drawing in the hand internal area solve the problem that the hand(s) positioned in the user's sight is hidden by the CG image, and further, enable clear visualization of a predetermined area in real space without obstruction of a CG image.
  • processing is simplified by handling the hand area and the hand internal area as a CG elimination area.
  • however, the hand area may also be handled separately from the CG elimination area (the hand internal area).
  • for example, a flesh color portion is extracted as a hand area, and its internal area is detected as a CG elimination area.
  • it may be arranged such that the user wears a blue glove, then a blue area is extracted as a hand area and a flesh color CG image is combined with the hand area, and a CG image is not displayed in the CG elimination area.
  • FIG. 10 is a block diagram showing the construction of an MR system as an example of the video combining apparatus according to the third embodiment.
  • constituent elements corresponding to those in FIG. 3 have the same reference numerals and explanations thereof will be omitted.
  • the CG elimination area extraction unit 3′ extracts a hand area and an area surrounded with the hand area (a hand internal area) from a video image inputted from the video capturing unit 2, using, if necessary, data on hand area extraction from the data unit 7 (for example, data defining the above-described particular color).
  • the CG elimination area extraction unit 3 ′ extracts at least the hand internal area as a CG elimination area, and outputs the extracted CG elimination area to the video combining unit 6 .
  • the image-sensing position and orientation measurement unit 4 is included in the HMD. In accordance with or without a request from the CG generation unit 5 , the image-sensing position and orientation measurement unit 4 transmits the position and orientation information of the image sensing unit 1 to the CG generation unit 5 .
  • a geomagnetic sensor, a gyro sensor, an optical sensor, or the like may be utilized as the image-sensing position and orientation measurement unit 4.
  • the CG generation unit 5 obtains the position and orientation information of the image sensing unit 1 from the image-sensing position and orientation measurement unit 4 , and estimates the position and image sensing direction of the image sensing unit 1 .
  • the CG generation unit 5 reads data included in the field of view of the image sensing unit 1 from a data unit 7 , generates a CG image to be combined with the video image obtained by the image sensing unit 1 , and outputs the CG image to the video combining unit 6 .
  • the video combining unit 6 reads the video image from the video capturing unit 2 , the CG image from the CG generation unit 5 , and the CG elimination area from the CG elimination area extraction unit 3 . Then the video combining unit 6 combines the CG image from the CG generation unit 5 with the video image from the video capturing unit 2 . At this time, the CG image is not drawn in a portion overlapped with the CG elimination area obtained by the CG elimination area extraction unit 3 . In this manner, a combined video image where only the CG image is eliminated from the CG elimination area is generated.
  • the combined video image generated by the video combining unit 6 is transmitted to the display unit 8 (the HMD in the present embodiment).
  • the data unit 7, e.g., a hard disk, holds data to be delivered to the CG generation unit 5 and the CG elimination area extraction unit 3′.
  • as data stored in the data unit 7, text information, panoramic video images, three-dimensional CG data, and further data necessary for extraction of a particular area such as a hand area or a hand internal area (data defining a particular color or the like) are stored.
  • the data unit 7 transmits appropriate data to the CG generation unit 5 . For example, if a request for three-dimensional CG data to be combined in the field of view of the image sensing unit 1 is received from the CG generation unit 5 , the data unit 7 sends three-dimensional CG data included in the field of view of the image sensing unit 1 to the CG generation unit 5 .
  • the data unit 7 transmits appropriate data to the CG elimination area extraction unit 3 ′.
  • the display unit 8 which is an HMD in the present embodiment displays the combined video image signal transmitted from the video combining unit 6 .
  • the HMD has a right-eye image display unit and a left-eye image display unit.
  • the video combining unit 6 generates a display image for right eye and a display image for left eye and supplies them to the HMD, thereby the user can experience three-dimensional CG image display.
  • the operation of the MR system as an example of the video combining apparatus according to the third embodiment having the above construction will be described with reference to the flowchart of FIG. 11 .
  • the operation of the video combining apparatus of the present embodiment is the same as that in the first embodiment except that the order of the step of acquisition of image-sensing position and orientation information and the step of extraction of CG elimination area is inverted and that the content of the CG elimination area extraction processing is different.
  • the data unit 7 holds necessary data in advance.
  • at step S1, the system is started.
  • a video image is obtained from the image sensing unit 1 .
  • the video image is converted to an appropriate format image by the video capturing unit 2 , and sent to the video combining unit 6 and the CG elimination area extraction unit 3 .
  • the image-sensing position and orientation measurement unit 4 measures the position and orientation of the image sensing unit 1 .
  • the measured position and orientation information is sent to the CG generation unit 5 .
  • a CG elimination area is calculated from the video image inputted into the CG elimination area extraction unit 3 ′.
  • step S4′, which is a characteristic step of the present embodiment, will be described with reference to the flowchart of FIG. 12.
  • here, an example of step S4′ will be described for a case where the hand area is extracted by using image processing to extract a particular color.
  • the CG elimination area extraction unit 3 ′ reads data on a hand area, if necessary, from the data unit 7 .
  • as the data on a hand area, information on the flesh color of the hand or the like is used.
  • the data on the hand area need only be read once; however, in a case where the position of a light source changing in real time is measured and flesh color data corresponding to the changing light source position is required, this step is necessary.
  • a hand area is extracted, based on the data on the hand area, from the video image input into the CG elimination area extraction unit 3′.
  • at step S4a-3, an internal area of the hand area on the video image is extracted.
  • the hand area and the internal area of the hand area on the video image are extracted as a CG elimination area.
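A minimal sketch of these steps, under the assumption that the hand area is found by flesh-color thresholding (the HSV range is illustrative, and OpenCV morphology plus a border flood fill stand in for whatever extraction the hand mask processing actually uses):

```python
import cv2
import numpy as np

FLESH_HSV_LO = np.array([0, 30, 60])     # illustrative flesh-color range (data read at S4a-1)
FLESH_HSV_HI = np.array([25, 180, 255])

def extract_cg_elimination_area(frame_bgr):
    """Return (hand_mask, internal_mask); their union is the CG elimination area."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hand = cv2.inRange(hsv, FLESH_HSV_LO, FLESH_HSV_HI)          # hand area (S4a-2)
    hand = cv2.morphologyEx(hand, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    # The "eye hole" (S4a-3) is the region enclosed by the hand area, i.e. the
    # background pixels not reachable from the image border without crossing
    # the hand. Assumes the top-left pixel is background.
    h, w = hand.shape
    flood = hand.copy()
    ff_mask = np.zeros((h + 2, w + 2), np.uint8)
    cv2.floodFill(flood, ff_mask, (0, 0), 255)                   # fill the outside region
    internal = cv2.bitwise_and(cv2.bitwise_not(flood), cv2.bitwise_not(hand))
    return hand > 0, internal > 0
```

The union of the two returned masks would then be handed to the video combining unit 6 as the CG elimination area.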
  • step S4′ may also be implemented using any hand area extraction method used in hand mask processing other than the above processing.
  • the calculated CG elimination area is sent to the video combining unit 6 .
  • the CG generation unit 5 estimates the field of view of the image sensing unit 1 from the position and orientation information transmitted from the image-sensing position and orientation measurement unit 4 , and obtains data in a range included in the field of view of the image sensing unit 1 , from the data unit 7 .
  • the CG generation unit 5 generates a CG image using the data obtained from the data unit 7 , and sends the generated video image to the video combining unit 6 .
  • the video combining unit 6 combines the video image transmitted from the video capturing unit 2 with the CG image transmitted from the CG generation unit 5 .
  • the CG image is not combined with the portion of the CG elimination area from the CG elimination area extraction unit 3 .
  • a combined video image where the CG image is eliminated from the CG elimination area is generated.
  • the combined video image is sent to the display unit 8 .
  • at step S8, the video image information transmitted from the video combining unit 6 is displayed on the display unit 8.
  • at step S9, it is checked whether or not the system is to be ended. If YES, the system is ended at step S10; otherwise, the process returns to step S2 to repeat the above-described processing.
  • thus, in an MR system which, when a user wearing the HMD looks at, e.g., a landscape, displays position information and names in correspondence with the landscape, even if an object of interest is hidden by a CG image, the object can be observed by forming an eye hole with the user's hand(s) at a corresponding position.
  • in the above embodiments, the HMD is employed as the display unit; however, the present invention is also applicable to an optical see-through AR system using a head up display (HUD), as disclosed in Japanese Published Unexamined Patent Application No. 10-051711, in which a superimposed image is generated in correspondence with a display device and a view point position.
  • HUD: head up display
  • in the second embodiment, the optical see-through HMD is employed; however, the second embodiment is also applicable to a system using a video see-through HMD as described in the first embodiment.
  • in the third embodiment, a CG elimination area is designated utilizing mask processing without any tool such as a frame; however, mask processing can also be utilized in designation of a CG elimination area using a frame as shown in FIGS. 2A to 2D or the like.
  • for example, a frame having a shape as shown in FIGS. 2A to 2D is given a particular color, and the CG elimination area is defined as the "internal area of the particular color"; thereby, CG elimination processing similar to that of the third embodiment can be performed.
  • the frame is not necessarily provided with markers.
  • the particular color of the frame is not limited to a flesh color but may be blue, red, or any other color; however, it is preferable that the color not be included in the background.
  • in the third embodiment, a hand area is extracted by utilizing mask processing; however, the hand area may be extracted by processing other than the mask processing. For example, it may be arranged such that the user wears a glove provided with plural position sensors, and the hand area is extracted from the outputs of the sensors.
  • at step S4b-1, the position of the hand is measured.
  • a hand area on a video image from the view point position of the user is calculated from the measured hand position information and view-point position information of the user.
  • at step S4b-3, an internal area of the hand area on the video image from the view point position of the user is extracted.
  • a CG elimination area is calculated from the hand area and the internal area of the hand area on the video image from the view point position of the user.
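A sketch of this sensor-based variant of steps S4b-1 to S4b-4 follows. The pinhole projection and the use of a convex hull are illustrative assumptions; the patent only states that the hand area is calculated from the measured hand positions and the view-point position of the user.

```python
import cv2
import numpy as np

def hand_area_from_glove_sensors(sensor_points_world, view_pos, view_rot,
                                 focal_px, principal_point, image_shape):
    """Project glove sensor positions into the view-point image (S4b-2) and
    fill the region they enclose (S4b-3/S4b-4) as a CG elimination mask."""
    pts = []
    for p in np.asarray(sensor_points_world, dtype=np.float64):
        c = view_rot @ (p - np.asarray(view_pos, dtype=np.float64))  # world -> camera
        if c[2] <= 0:
            continue                                                 # behind the view point
        u = focal_px * c[0] / c[2] + principal_point[0]
        v = focal_px * c[1] / c[2] + principal_point[1]
        pts.append([u, v])
    mask = np.zeros(image_shape[:2], np.uint8)
    if len(pts) >= 3:
        hull = cv2.convexHull(np.array(pts, dtype=np.float32)).astype(np.int32)
        cv2.fillConvexPoly(mask, hull, 255)
    return mask > 0
```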
  • the present invention includes a case where a software program to realize the functions of the above-described embodiments is supplied directly from a recording medium or via cable/radio communication to a system or apparatus having a computer capable of execution of the program, and the computer of the system or apparatus executes the supplied program thereby achieves equivalent functions.
  • the program code itself supplied and installed into the computer realizes the present invention. That is, the computer program itself to realize the functional processing of the present invention is included in the present invention.
  • the program having any form such as object code, an interpreter-executable program and script data supplied to an OS can be employed as long as it has a program function.
  • the storage medium such as a flexible disk, a hard disk, a magnetic recording medium such as a magnetic tape, an optical/magneto-optical storage medium such as an MO, a CD-ROM, a CD-R, a CD-RW, a DVD-ROM, a DVD-R and a DVD-RW, a nonvolatile semiconductor memory, and the like, can be used for providing the program code.
  • a data file (program data file) of a computer program itself or a compressed file having automatic installation function which can be a computer program forming the present invention on a client computer, is stored on a server on a computer network, and the program data file is downloaded to a connected client computer.
  • the program data file may be divided into plural segment files and stored on different servers.
  • the server apparatus for downloading the program data file to realize the functional processing of the present invention to plural users is included in the present invention.
  • the program of the present invention may be encrypted and stored on a storage medium such as a CD-ROM delivered to users, such that a user who satisfies a predetermined condition is allowed to download key information for decryption from a homepage via, e.g., the Internet; the program is then decrypted with the key information and installed into a computer, whereby the present invention is realized.
  • the present invention includes a case where an OS or the like working on the computer performs a part or entire actual processing in accordance with designations of the program code and realizes the functions of the above embodiments.
  • the present invention also includes a case where, after the program code read from the storage medium is written in a function expansion card which is inserted into the computer or in a memory provided in a function expansion unit which is connected to the computer, CPU or the like contained in the function expansion card or unit performs a part or entire process in accordance with designations of the program code and realizes the functions of the above embodiments.
  • as described above, in an MR system that superimposes a CG image on real space, if a real-space portion to be observed is hidden by the CG image, an area where the CG image is not to be displayed is simply designated and the CG image in that area is partially not displayed (deleted), whereby the real space of interest can be observed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Circuits (AREA)
  • Controls And Circuits For Display Device (AREA)
US10/671,611 2002-09-30 2003-09-29 Video combining apparatus and method Expired - Fee Related US7487468B2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2002-287054 2002-09-30
JP2002287054 2002-09-30
JP2003-204813 2003-07-31
JP2003204813A JP4298407B2 (ja) 2002-09-30 2003-07-31 Video combining apparatus and video combining method

Publications (2)

Publication Number Publication Date
US20040070611A1 US20040070611A1 (en) 2004-04-15
US7487468B2 true US7487468B2 (en) 2009-02-03

Family

ID=31980657

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/671,611 Expired - Fee Related US7487468B2 (en) 2002-09-30 2003-09-29 Video combining apparatus and method

Country Status (5)

Country Link
US (1) US7487468B2 (ja)
EP (1) EP1404126B1 (ja)
JP (1) JP4298407B2 (ja)
CN (1) CN1324534C (ja)
DE (1) DE60313412T2 (ja)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060097985A1 (en) * 2004-11-08 2006-05-11 Samsung Electronics Co., Ltd. Portable terminal and data input method therefor
US20080005703A1 (en) * 2006-06-28 2008-01-03 Nokia Corporation Apparatus, Methods and computer program products providing finger-based and hand-based gesture commands for portable electronic device applications
US20110242133A1 (en) * 2010-03-30 2011-10-06 Allen Greaves Augmented reality methods and apparatus
US20110260967A1 (en) * 2009-01-16 2011-10-27 Brother Kogyo Kabushiki Kaisha Head mounted display
US20110279355A1 (en) * 2009-01-27 2011-11-17 Brother Kogyo Kabushiki Kaisha Head mounted display
US20130201178A1 (en) * 2012-02-06 2013-08-08 Honeywell International Inc. System and method providing a viewable three dimensional display cursor
US8558759B1 (en) * 2011-07-08 2013-10-15 Google Inc. Hand gestures to signify what is important
US20140179369A1 (en) * 2012-12-20 2014-06-26 Nokia Corporation Apparatus and method for providing proximity-based zooming
US8847850B1 (en) * 2014-02-17 2014-09-30 Lg Electronics Inc. Head mounted display device for displaying augmented reality image capture guide and control method for the same
US8861797B2 (en) 2010-11-12 2014-10-14 At&T Intellectual Property I, L.P. Calibrating vision systems
US9052804B1 (en) * 2012-01-06 2015-06-09 Google Inc. Object occlusion to initiate a visual search
US20150271396A1 (en) * 2014-03-24 2015-09-24 Samsung Electronics Co., Ltd. Electronic device and method for image data processing
US9230171B2 (en) 2012-01-06 2016-01-05 Google Inc. Object outlining to initiate a visual search
US20160048024A1 (en) * 2014-08-13 2016-02-18 Beijing Lenovo Software Ltd. Information processing method and electronic device
US9489774B2 (en) * 2013-05-16 2016-11-08 Empire Technology Development Llc Three dimensional user interface in augmented reality
US11194438B2 (en) * 2019-05-09 2021-12-07 Microsoft Technology Licensing, Llc Capture indicator for a virtual world
US11563915B2 (en) 2019-03-11 2023-01-24 JBF Interlude 2009 LTD Media content presentation
US11997413B2 (en) 2019-03-11 2024-05-28 JBF Interlude 2009 LTD Media content presentation

Families Citing this family (80)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4298407B2 (ja) * 2002-09-30 2009-07-22 キヤノン株式会社 映像合成装置及び映像合成方法
KR100611182B1 (ko) * 2004-02-27 2006-08-10 Samsung Electronics Co., Ltd. Portable electronic device that changes its menu display state according to its rotation state, and method therefor
JP4367926B2 (ja) * 2004-05-17 2009-11-18 Canon Inc Image composition system, image composition method, and image composition apparatus
US7657125B2 (en) * 2004-08-02 2010-02-02 Searete Llc Time-lapsing data methods and systems
US9155373B2 (en) * 2004-08-02 2015-10-13 Invention Science Fund I, Llc Medical overlay mirror
US20060044399A1 (en) * 2004-09-01 2006-03-02 Eastman Kodak Company Control system for an image capture device
JP4500632B2 (ja) * 2004-09-07 2010-07-14 Canon Inc Virtual reality presentation apparatus and information processing method
US20060050070A1 (en) * 2004-09-07 2006-03-09 Canon Kabushiki Kaisha Information processing apparatus and method for presenting image combined with virtual image
DE102004046430A1 (de) * 2004-09-24 2006-04-06 Siemens Ag System for visual, situation-dependent, real-time-based support of a surgeon and real-time-based documentation and archiving of the support-based impressions visually perceived by the surgeon during the operation
JP4137078B2 (ja) 2005-04-01 2008-08-20 Canon Inc Mixed reality information generating apparatus and method
DE602005013752D1 (de) * 2005-05-03 2009-05-20 Seac02 S R L Augmented reality system with identification of the real marking of the object
US20070065143A1 (en) * 2005-09-16 2007-03-22 Richard Didow Chroma-key event photography messaging
US20070064125A1 (en) * 2005-09-16 2007-03-22 Richard Didow Chroma-key event photography
JP4804256B2 (ja) * 2006-07-27 2011-11-02 Canon Inc Information processing method
JP4789745B2 (ja) * 2006-08-11 2011-10-12 Canon Inc Image processing apparatus and method
EP1887526A1 (en) * 2006-08-11 2008-02-13 Seac02 S.r.l. A digitally-augmented reality video system
US20080266323A1 (en) * 2007-04-25 2008-10-30 Board Of Trustees Of Michigan State University Augmented reality user interaction system
JP2008278103A (ja) * 2007-04-27 2008-11-13 Nippon Hoso Kyokai &lt;Nhk&gt; Video composition apparatus and video composition program
JP4909176B2 (ja) * 2007-05-23 2012-04-04 Canon Inc Mixed reality presentation apparatus, control method therefor, and computer program
US7724322B2 (en) * 2007-09-20 2010-05-25 Sharp Laboratories Of America, Inc. Virtual solar liquid crystal window
US9703369B1 (en) * 2007-10-11 2017-07-11 Jeffrey David Mullen Augmented reality video game systems
US8189035B2 (en) * 2008-03-28 2012-05-29 Sharp Laboratories Of America, Inc. Method and apparatus for rendering virtual see-through scenes on single or tiled displays
JP4725595B2 (ja) * 2008-04-24 2011-07-13 Sony Corp Video processing apparatus, video processing method, program, and recording medium
WO2010032079A2 (en) 2008-09-17 2010-03-25 Nokia Corp. User interface for augmented reality
US9204050B2 (en) 2008-12-25 2015-12-01 Panasonic Intellectual Property Management Co., Ltd. Information displaying apparatus and information displaying method
JP5201015B2 (ja) 2009-03-09 2013-06-05 Brother Kogyo Kabushiki Kaisha Head mounted display
KR20110006022A (ko) * 2009-07-13 2011-01-20 Samsung Electronics Co., Ltd. Method and apparatus for virtual-object-based image processing
KR100957575B1 (ko) * 2009-10-01 2010-05-11 Olaworks Inc. Method, terminal, and computer-readable recording medium for performing a visual search based on movement or posture of the terminal
JP2011118842A (ja) * 2009-12-07 2011-06-16 Canon Inc Information processing apparatus, display control method, and program
US20110202603A1 (en) * 2010-02-12 2011-08-18 Nokia Corporation Method and apparatus for providing object based media mixing
US9182596B2 (en) 2010-02-28 2015-11-10 Microsoft Technology Licensing, Llc See-through near-eye display glasses with the optical assembly including absorptive polarizers or anti-reflective coatings to reduce stray light
US20150309316A1 (en) 2011-04-06 2015-10-29 Microsoft Technology Licensing, Llc Ar glasses with predictive control of external device based on event input
US8482859B2 (en) 2010-02-28 2013-07-09 Osterhout Group, Inc. See-through near-eye display glasses wherein image light is transmitted to and reflected from an optically flat film
US9223134B2 (en) 2010-02-28 2015-12-29 Microsoft Technology Licensing, Llc Optical imperfections in a light transmissive illumination system for see-through near-eye display glasses
US20120249797A1 (en) 2010-02-28 2012-10-04 Osterhout Group, Inc. Head-worn adaptive display
US9366862B2 (en) 2010-02-28 2016-06-14 Microsoft Technology Licensing, Llc System and method for delivering content to a group of see-through near eye display eyepieces
US9097891B2 (en) 2010-02-28 2015-08-04 Microsoft Technology Licensing, Llc See-through near-eye display glasses including an auto-brightness control for the display brightness based on the brightness in the environment
US8488246B2 (en) 2010-02-28 2013-07-16 Osterhout Group, Inc. See-through near-eye display glasses including a curved polarizing film in the image source, a partially reflective, partially transmitting optical element and an optically flat film
US9285589B2 (en) 2010-02-28 2016-03-15 Microsoft Technology Licensing, Llc AR glasses with event and sensor triggered control of AR eyepiece applications
US9091851B2 (en) 2010-02-28 2015-07-28 Microsoft Technology Licensing, Llc Light control in head mounted displays
US9128281B2 (en) 2010-09-14 2015-09-08 Microsoft Technology Licensing, Llc Eyepiece with uniformly illuminated reflective display
US9134534B2 (en) 2010-02-28 2015-09-15 Microsoft Technology Licensing, Llc See-through near-eye display glasses including a modular image source
US9129295B2 (en) 2010-02-28 2015-09-08 Microsoft Technology Licensing, Llc See-through near-eye display glasses with a fast response photochromic film system for quick transition from dark to clear
US8472120B2 (en) 2010-02-28 2013-06-25 Osterhout Group, Inc. See-through near-eye display glasses with a small scale image source
AU2011220382A1 (en) 2010-02-28 2012-10-18 Microsoft Corporation Local advertising content on an interactive head-mounted eyepiece
US9097890B2 (en) 2010-02-28 2015-08-04 Microsoft Technology Licensing, Llc Grating in a light transmissive illumination system for see-through near-eye display glasses
US9229227B2 (en) 2010-02-28 2016-01-05 Microsoft Technology Licensing, Llc See-through near-eye display glasses with a light transmissive wedge shaped illumination system
US9759917B2 (en) 2010-02-28 2017-09-12 Microsoft Technology Licensing, Llc AR glasses with event and sensor triggered AR eyepiece interface to external devices
US10180572B2 (en) 2010-02-28 2019-01-15 Microsoft Technology Licensing, Llc AR glasses with event and user action control of external applications
US9341843B2 (en) 2010-02-28 2016-05-17 Microsoft Technology Licensing, Llc See-through near-eye display glasses with a small scale image source
US8477425B2 (en) 2010-02-28 2013-07-02 Osterhout Group, Inc. See-through near-eye display glasses including a partially reflective, partially transmitting optical element
US8467133B2 (en) 2010-02-28 2013-06-18 Osterhout Group, Inc. See-through display with an optical assembly including a wedge-shaped illumination system
JP5564300B2 (ja) * 2010-03-19 2014-07-30 Fujifilm Corp Head-mounted augmented reality video presentation apparatus and virtual display object operation method therefor
US20110310227A1 (en) * 2010-06-17 2011-12-22 Qualcomm Incorporated Mobile device based content mapping for augmented reality environment
JP5211120B2 (ja) 2010-07-30 2013-06-12 Kabushiki Kaisha Toshiba Information display apparatus and information display method
KR101357262B1 (ko) * 2010-08-13 2014-01-29 Pantech Co., Ltd. Apparatus and method for object recognition using filter information
KR101690955B1 (ko) * 2010-10-04 2016-12-29 Samsung Electronics Co., Ltd. Method of generating and reproducing image data using augmented reality, and photographing apparatus using the same
JP5691632B2 (ja) * 2011-02-24 2015-04-01 Obayashi Corp Image composition method
JP5776218B2 (ja) * 2011-02-24 2015-09-09 Obayashi Corp Image composition method
US8601380B2 (en) * 2011-03-16 2013-12-03 Nokia Corporation Method and apparatus for displaying interactive preview information in a location-based user interface
JP5683402B2 (ja) * 2011-07-29 2015-03-11 Mitsubishi Electric Corp Image composition apparatus and image composition method
US9497501B2 (en) 2011-12-06 2016-11-15 Microsoft Technology Licensing, Llc Augmented reality virtual monitor
JP5970872B2 (ja) * 2012-03-07 2016-08-17 Seiko Epson Corp Head-mounted display device and method of controlling a head-mounted display device
JP2013235080A (ja) * 2012-05-08 2013-11-21 Sony Corp Image display apparatus, image display program, and image display method
JP5687670B2 (ja) * 2012-08-31 2015-03-18 Konami Digital Entertainment Co., Ltd. Display control system, game system, display control apparatus, and program
JP6007079B2 (ja) * 2012-11-22 2016-10-12 Hitachi Systems, Ltd. Virtual peephole image generation system
TWI649675B (zh) 2013-03-28 2019-02-01 Sony Corp Display device
DE112014006745T5 (de) 2014-06-13 2017-05-18 Mitsubishi Electric Corp Information processing device, display device for an image with superimposed information, marker display program, display program for an image with superimposed information, marker display method, and display method for an image with superimposed information
JP6335696B2 (ja) * 2014-07-11 2018-05-30 Mitsubishi Electric Corp Input device
JP6186457B2 (ja) * 2016-01-19 2017-08-23 Yahoo Japan Corp Information display program, information display apparatus, information display method, and distribution apparatus
JP6217772B2 (ja) * 2016-02-12 2017-10-25 Seiko Epson Corp Head-mounted display device and method of controlling a head-mounted display device
KR102281400B1 (ko) * 2017-03-23 2021-07-23 Toshiba Mitsubishi-Electric Industrial Systems Corp Analysis support apparatus for a steel plant
US9892564B1 (en) 2017-03-30 2018-02-13 Novarad Corporation Augmenting real-time views of a patient with three-dimensional data
JP6419278B1 (ja) * 2017-09-19 2018-11-07 Canon Inc Control apparatus, control method, and program
TWI692968B (zh) * 2018-04-26 2020-05-01 Industrial Technology Research Institute Three-dimensional modeling apparatus and calibration method applied thereto
JP6559870B1 (ja) * 2018-11-30 2019-08-14 Dwango Co., Ltd. Video composition apparatus, video composition method, and video composition program
JP7330507B2 (ja) * 2019-12-13 2023-08-22 Agama-X Co., Ltd. Information processing apparatus, program, and method
EP4131923A4 (en) * 2020-03-30 2023-08-23 Sony Group Corporation IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, AND PROGRAM
CN114885090A (zh) * 2021-04-27 2022-08-09 Qingdao Haier Refrigerator Co., Ltd. Method for acquiring images inside a refrigerator, refrigerator, and computer storage medium
CN114979487B (zh) * 2022-05-27 2024-06-18 Lenovo (Beijing) Ltd. Image processing method and apparatus, electronic device, and storage medium

Citations (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4270848A (en) * 1975-11-21 1981-06-02 Pierre Angenieux Image enlarging optical variator
US5319747A (en) * 1990-04-02 1994-06-07 U.S. Philips Corporation Data processing system using gesture-based input data
EP0772350A2 (en) 1995-10-30 1997-05-07 Kabushiki Kaisha Photron Keying system and composite image producing method
JPH1051711A (ja) 1996-08-05 1998-02-20 Sony Corp Three-dimensional virtual object display apparatus and method
US5765561A (en) * 1994-10-07 1998-06-16 Medical Media Systems Video-based surgical targeting system
US5790104A (en) * 1996-06-25 1998-08-04 International Business Machines Corporation Multiple, moveable, customizable virtual pointing devices
US6002808A (en) 1996-07-26 1999-12-14 Mitsubishi Electric Information Technology Center America, Inc. Hand gesture control system
JP2000095535A (ja) 1998-07-15 2000-04-04 Shinetsu Quartz Prod Co Ltd Method of manufacturing an optical member for an excimer laser
JP2000101898A (ja) 1998-09-21 2000-04-07 Fuji Photo Film Co Ltd Electronic camera
US6084557A (en) 1997-05-23 2000-07-04 Minolta Co., Ltd. System for displaying combined imagery
US6084594A (en) * 1997-06-24 2000-07-04 Fujitsu Limited Image presentation apparatus
JP2000276610A (ja) 1999-03-26 2000-10-06 Mr System Kenkyusho:Kk User interface method, information processing apparatus, and program storage medium
US6181302B1 (en) * 1996-04-24 2001-01-30 C. Macgill Lynde Marine navigation binoculars with virtual display superimposing real world image
US6184932B1 (en) 1995-03-27 2001-02-06 Canon Kabushiki Kaisha Lens control apparatus
US20010005218A1 (en) 1998-09-04 2001-06-28 Sportvision, Inc. System for enhancing a video presentation of a live event
US6317128B1 (en) 1996-04-18 2001-11-13 Silicon Graphics, Inc. Graphical user interface with anti-interference outlines for enhanced variably-transparent applications
US6346929B1 (en) * 1994-04-22 2002-02-12 Canon Kabushiki Kaisha Display apparatus which detects an observer body part motion in correspondence to a displayed element used to input operation instructions to start a process
WO2002015110A1 (en) 1999-12-07 2002-02-21 Fraunhofer Crcg, Inc. Virtual showcases
US20020044152A1 (en) * 2000-10-16 2002-04-18 Abbott Kenneth H. Dynamic integration of computer generated and real world images
USRE37668E1 (en) * 1994-10-19 2002-04-23 Matsushita Electric Industrial Co., Ltd. Image encoding/decoding device
JP2002269593A (ja) 2001-03-13 2002-09-20 Canon Inc Image processing apparatus and method, and storage medium
US20020186228A1 (en) * 2001-06-11 2002-12-12 Honda Giken Kogyo Kabushiki Kaisha Display device for vehicle
US20030012409A1 (en) * 2001-07-10 2003-01-16 Overton Kenneth J. Method and system for measurement of the duration an area is included in an image stream
US20030029464A1 (en) * 2000-12-21 2003-02-13 Chen David T. Video-based surgical targeting system
US20030081237A1 (en) 2001-10-31 2003-05-01 Canon Kabushiki Kaisha Imaging apparatus, system having imaging apparatus and printing apparatus, and control method therefor
US20030081251A1 (en) 2001-10-31 2003-05-01 Canon Kabushiki Kaisha Imaging apparatus, system having imaging apparatus and printing apparatus, and control method therefor
US20030081235A1 (en) 2001-10-31 2003-05-01 Canon Kabushiki Kaisha Imaging apparatus, system having imaging apparatus and printing apparatus, and control method therefor
US6559813B1 (en) * 1998-07-01 2003-05-06 Deluca Michael Selective real image obstruction in a virtual reality display apparatus and method
US20030154476A1 (en) * 1999-12-15 2003-08-14 Abbott Kenneth H. Storing and recalling information to augment human memories
US20040068758A1 (en) * 2002-10-02 2004-04-08 Mike Daily Dynamic video annotation
US20040070611A1 (en) * 2002-09-30 2004-04-15 Canon Kabushiki Kaisha Video combining apparatus and method
US20040139156A1 (en) * 2001-12-21 2004-07-15 Matthews W. Donald Methods of providing direct technical support over networks
US6771294B1 (en) * 1999-12-29 2004-08-03 Petri Pulli User interface
US6803928B2 (en) * 2000-06-06 2004-10-12 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Extended virtual table: an optical extension for table-like projection systems
US6850252B1 (en) * 1999-10-05 2005-02-01 Steven M. Hoffberg Intelligent electronic appliance system and method
US7046263B1 (en) * 1998-12-18 2006-05-16 Tangis Corporation Requesting computer user's context data
US7055101B2 (en) * 1998-12-18 2006-05-30 Tangis Corporation Thematic response to a computer user's context, such as by a wearable personal computer
US7058894B2 (en) * 1998-12-18 2006-06-06 Tangis Corporation Managing interactions between computer users' context models
US7062715B2 (en) * 1998-12-18 2006-06-13 Tangis Corporation Supplying notifications related to supply and consumption of user context data
US7073129B1 (en) * 1998-12-18 2006-07-04 Tangis Corporation Automated selection of appropriate information based on a computer user's context
US7076737B2 (en) * 1998-12-18 2006-07-11 Tangis Corporation Thematic response to a computer user's context, such as by a wearable personal computer
US7080322B2 (en) * 1998-12-18 2006-07-18 Tangis Corporation Thematic response to a computer user's context, such as by a wearable personal computer
US7107539B2 (en) * 1998-12-18 2006-09-12 Tangis Corporation Thematic response to a computer user's context, such as by a wearable personal computer
US7167779B2 (en) * 2003-03-28 2007-01-23 Denso Corporation Display method and apparatus for changing display position based on external environment
US7225229B1 (en) * 1998-12-18 2007-05-29 Tangis Corporation Automated pushing of computer user's context data to clients

Patent Citations (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4270848A (en) * 1975-11-21 1981-06-02 Pierre Angenieux Image enlarging optical variator
US5319747A (en) * 1990-04-02 1994-06-07 U.S. Philips Corporation Data processing system using gesture-based input data
US6346929B1 (en) * 1994-04-22 2002-02-12 Canon Kabushiki Kaisha Display apparatus which detects an observer body part motion in correspondence to a displayed element used to input operation instructions to start a process
US5765561A (en) * 1994-10-07 1998-06-16 Medical Media Systems Video-based surgical targeting system
US20030032876A1 (en) * 1994-10-07 2003-02-13 Chen David T. Video-based surgical targeting system
USRE37668E1 (en) * 1994-10-19 2002-04-23 Matsushita Electric Industrial Co., Ltd. Image encoding/decoding device
US20010000435A1 (en) 1995-03-27 2001-04-26 Taeko Tanaka Video camera system
US6184932B1 (en) 1995-03-27 2001-02-06 Canon Kabushiki Kaisha Lens control apparatus
EP0772350A2 (en) 1995-10-30 1997-05-07 Kabushiki Kaisha Photron Keying system and composite image producing method
US6317128B1 (en) 1996-04-18 2001-11-13 Silicon Graphics, Inc. Graphical user interface with anti-interference outlines for enhanced variably-transparent applications
US6181302B1 (en) * 1996-04-24 2001-01-30 C. Macgill Lynde Marine navigation binoculars with virtual display superimposing real world image
US5790104A (en) * 1996-06-25 1998-08-04 International Business Machines Corporation Multiple, moveable, customizable virtual pointing devices
US6002808A (en) 1996-07-26 1999-12-14 Mitsubishi Electric Information Technology Center America, Inc. Hand gesture control system
JPH1051711A (ja) 1996-08-05 1998-02-20 Sony Corp Three-dimensional virtual object display apparatus and method
US6084557A (en) 1997-05-23 2000-07-04 Minolta Co., Ltd. System for displaying combined imagery
US6084594A (en) * 1997-06-24 2000-07-04 Fujitsu Limited Image presentation apparatus
US6559813B1 (en) * 1998-07-01 2003-05-06 Deluca Michael Selective real image obstruction in a virtual reality display apparatus and method
JP2000095535A (ja) 1998-07-15 2000-04-04 Shinetsu Quartz Prod Co Ltd Method of manufacturing an optical member for an excimer laser
US20010005218A1 (en) 1998-09-04 2001-06-28 Sportvision, Inc. System for enhancing a video presentation of a live event
JP2000101898A (ja) 1998-09-21 2000-04-07 Fuji Photo Film Co Ltd Electronic camera
US7055101B2 (en) * 1998-12-18 2006-05-30 Tangis Corporation Thematic response to a computer user's context, such as by a wearable personal computer
US7089497B2 (en) * 1998-12-18 2006-08-08 Tangis Corporation Managing interactions between computer users' context models
US7225229B1 (en) * 1998-12-18 2007-05-29 Tangis Corporation Automated pushing of computer user's context data to clients
US7203906B2 (en) * 1998-12-18 2007-04-10 Tangis Corporation Supplying notifications related to supply and consumption of user context data
US20060277474A1 (en) * 1998-12-18 2006-12-07 Tangis Corporation Automated selection of appropriate information based on a computer user's context
US7137069B2 (en) * 1998-12-18 2006-11-14 Tangis Corporation Thematic response to a computer user's context, such as by a wearable personal computer
US7107539B2 (en) * 1998-12-18 2006-09-12 Tangis Corporation Thematic response to a computer user's context, such as by a wearable personal computer
US7046263B1 (en) * 1998-12-18 2006-05-16 Tangis Corporation Requesting computer user's context data
US7080322B2 (en) * 1998-12-18 2006-07-18 Tangis Corporation Thematic response to a computer user's context, such as by a wearable personal computer
US7058894B2 (en) * 1998-12-18 2006-06-06 Tangis Corporation Managing interactions between computer users' context models
US7076737B2 (en) * 1998-12-18 2006-07-11 Tangis Corporation Thematic response to a computer user's context, such as by a wearable personal computer
US7073129B1 (en) * 1998-12-18 2006-07-04 Tangis Corporation Automated selection of appropriate information based on a computer user's context
US7062715B2 (en) * 1998-12-18 2006-06-13 Tangis Corporation Supplying notifications related to supply and consumption of user context data
US7058893B2 (en) * 1998-12-18 2006-06-06 Tangis Corporation Managing interactions between computer users' context models
US6559870B1 (en) 1999-03-26 2003-05-06 Canon Kabushiki Kaisha User interface method for determining a layout position of an agent, information processing apparatus, and program storage medium
JP2000276610A (ja) 1999-03-26 2000-10-06 Mr System Kenkyusho:Kk User interface method, information processing apparatus, and program storage medium
US6850252B1 (en) * 1999-10-05 2005-02-01 Steven M. Hoffberg Intelligent electronic appliance system and method
WO2002015110A1 (en) 1999-12-07 2002-02-21 Fraunhofer Crcg, Inc. Virtual showcases
US20030154476A1 (en) * 1999-12-15 2003-08-14 Abbott Kenneth H. Storing and recalling information to augment human memories
US7155456B2 (en) * 1999-12-15 2006-12-26 Tangis Corporation Storing and recalling information to augment human memories
US6771294B1 (en) * 1999-12-29 2004-08-03 Petri Pulli User interface
US6803928B2 (en) * 2000-06-06 2004-10-12 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Extended virtual table: an optical extension for table-like projection systems
US20020044152A1 (en) * 2000-10-16 2002-04-18 Abbott Kenneth H. Dynamic integration of computer generated and real world images
US20030029464A1 (en) * 2000-12-21 2003-02-13 Chen David T. Video-based surgical targeting system
US20050027186A1 (en) * 2000-12-21 2005-02-03 Chen David T. Video-based surgical targeting system
US6690960B2 (en) * 2000-12-21 2004-02-10 David T. Chen Video-based surgical targeting system
JP2002269593A (ja) 2001-03-13 2002-09-20 Canon Inc Image processing apparatus and method, and storage medium
US6741223B2 (en) * 2001-06-11 2004-05-25 Honda Giken Kogyo Kabushiki Kaisha Display device for vehicle
US20020186228A1 (en) * 2001-06-11 2002-12-12 Honda Giken Kogyo Kabushiki Kaisha Display device for vehicle
US20030012409A1 (en) * 2001-07-10 2003-01-16 Overton Kenneth J. Method and system for measurement of the duration an area is included in an image stream
US20030081235A1 (en) 2001-10-31 2003-05-01 Canon Kabushiki Kaisha Imaging apparatus, system having imaging apparatus and printing apparatus, and control method therefor
US20030081251A1 (en) 2001-10-31 2003-05-01 Canon Kabushiki Kaisha Imaging apparatus, system having imaging apparatus and printing apparatus, and control method therefor
US20030081237A1 (en) 2001-10-31 2003-05-01 Canon Kabushiki Kaisha Imaging apparatus, system having imaging apparatus and printing apparatus, and control method therefor
US20040139156A1 (en) * 2001-12-21 2004-07-15 Matthews W. Donald Methods of providing direct technical support over networks
US20040070611A1 (en) * 2002-09-30 2004-04-15 Canon Kabushiki Kaisha Video combining apparatus and method
US20040068758A1 (en) * 2002-10-02 2004-04-08 Mike Daily Dynamic video annotation
US7167779B2 (en) * 2003-03-28 2007-01-23 Denso Corporation Display method and apparatus for changing display position based on external environment

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Alberto Valinetti, et al., "Model Tracking for Video-Based Virtual Reality", IEEE pp. 372-377 (Sep. 2001).
G. Reitmayr, et al., "Mobile Collaborative Augmented Reality", Proc. IEEE Virtual Reality 2001, pp. 114-123 (2001).
Kiyohide Satoh, et al., "A Hybrid Registration Method for Outdoor Augmented Reality", Proceedings of the IEEE and ACM International Symposium on Augmented Reality, New York, pp. 67-76 (Oct. 2001).
Masayuki Kanbara, et al., "Real-Time Composition of Stereo Images for Mixed Reality", ITE Technical Report, vol. 22, No. 33, pp. 31-36 (Jun. 19, 1998).

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060097985A1 (en) * 2004-11-08 2006-05-11 Samsung Electronics Co., Ltd. Portable terminal and data input method therefor
US8311370B2 (en) * 2004-11-08 2012-11-13 Samsung Electronics Co., Ltd Portable terminal and data input method therefor
US8086971B2 (en) * 2006-06-28 2011-12-27 Nokia Corporation Apparatus, methods and computer program products providing finger-based and hand-based gesture commands for portable electronic device applications
US20080005703A1 (en) * 2006-06-28 2008-01-03 Nokia Corporation Apparatus, Methods and computer program products providing finger-based and hand-based gesture commands for portable electronic device applications
US20110260967A1 (en) * 2009-01-16 2011-10-27 Brother Kogyo Kabushiki Kaisha Head mounted display
US20110279355A1 (en) * 2009-01-27 2011-11-17 Brother Kogyo Kabushiki Kaisha Head mounted display
US8928556B2 (en) * 2009-01-27 2015-01-06 Brother Kogyo Kabushiki Kaisha Head mounted display
US9158777B2 (en) * 2010-03-30 2015-10-13 Gravity Jack, Inc. Augmented reality methods and apparatus
US20110242133A1 (en) * 2010-03-30 2011-10-06 Allen Greaves Augmented reality methods and apparatus
US11003253B2 (en) 2010-11-12 2021-05-11 At&T Intellectual Property I, L.P. Gesture control of gaming applications
US9933856B2 (en) 2010-11-12 2018-04-03 At&T Intellectual Property I, L.P. Calibrating vision systems
US8861797B2 (en) 2010-11-12 2014-10-14 At&T Intellectual Property I, L.P. Calibrating vision systems
US9483690B2 (en) 2010-11-12 2016-11-01 At&T Intellectual Property I, L.P. Calibrating vision systems
US8558759B1 (en) * 2011-07-08 2013-10-15 Google Inc. Hand gestures to signify what is important
US9024842B1 (en) 2011-07-08 2015-05-05 Google Inc. Hand gestures to signify what is important
US9536354B2 (en) 2012-01-06 2017-01-03 Google Inc. Object outlining to initiate a visual search
US10437882B2 (en) 2012-01-06 2019-10-08 Google Llc Object occlusion to initiate a visual search
US9230171B2 (en) 2012-01-06 2016-01-05 Google Inc. Object outlining to initiate a visual search
US9052804B1 (en) * 2012-01-06 2015-06-09 Google Inc. Object occlusion to initiate a visual search
US20130201178A1 (en) * 2012-02-06 2013-08-08 Honeywell International Inc. System and method providing a viewable three dimensional display cursor
US20140179369A1 (en) * 2012-12-20 2014-06-26 Nokia Corporation Apparatus and method for providing proximity-based zooming
US9489774B2 (en) * 2013-05-16 2016-11-08 Empire Technology Development Llc Three dimensional user interface in augmented reality
US8847850B1 (en) * 2014-02-17 2014-09-30 Lg Electronics Inc. Head mounted display device for displaying augmented reality image capture guide and control method for the same
US9560272B2 (en) * 2014-03-24 2017-01-31 Samsung Electronics Co., Ltd. Electronic device and method for image data processing
US20150271396A1 (en) * 2014-03-24 2015-09-24 Samsung Electronics Co., Ltd. Electronic device and method for image data processing
US20160048024A1 (en) * 2014-08-13 2016-02-18 Beijing Lenovo Software Ltd. Information processing method and electronic device
US9696551B2 (en) * 2014-08-13 2017-07-04 Beijing Lenovo Software Ltd. Information processing method and electronic device
US11563915B2 (en) 2019-03-11 2023-01-24 JBF Interlude 2009 LTD Media content presentation
US11997413B2 (en) 2019-03-11 2024-05-28 JBF Interlude 2009 LTD Media content presentation
US11194438B2 (en) * 2019-05-09 2021-12-07 Microsoft Technology Licensing, Llc Capture indicator for a virtual world

Also Published As

Publication number Publication date
EP1404126A2 (en) 2004-03-31
JP2004178554A (ja) 2004-06-24
US20040070611A1 (en) 2004-04-15
DE60313412T2 (de) 2008-01-17
CN1497504A (zh) 2004-05-19
DE60313412D1 (de) 2007-06-06
EP1404126B1 (en) 2007-04-25
JP4298407B2 (ja) 2009-07-22
EP1404126A3 (en) 2004-12-15
CN1324534C (zh) 2007-07-04

Similar Documents

Publication Publication Date Title
US7487468B2 (en) Video combining apparatus and method
JP4137078B2 (ja) Mixed reality information generation apparatus and method
US9824497B2 (en) Information processing apparatus, information processing system, and information processing method
US8098263B2 (en) Image processing method and image processing apparatus
US20070002037A1 (en) Image presentation system, image presentation method, program for causing computer to execute the method, and storage medium storing the program
JP2004062756A (ja) Information presentation apparatus and information processing method
JP2006059136A (ja) Viewer apparatus and program therefor
KR20020025301A (ko) Apparatus and method for providing augmented reality images using panoramic images that support multiple users
JP2006072903A (ja) Image composition method and apparatus
JP2005107971A (ja) Mixed reality space image generation method and mixed reality system
JP2007042055A (ja) Image processing method and image processing apparatus
JP2004151085A (ja) Information processing method and information processing apparatus
CN110199324B (zh) Display device and control method thereof
JP4367926B2 (ja) Image composition system, image composition method, and image composition apparatus
JP2007064684A (ja) Marker placement assistance method and apparatus
CN112667179B (zh) Remote synchronous collaboration system based on mixed reality
JP2016122392A (ja) Information processing apparatus, information processing system, control method therefor, and program
JP2007233971A (ja) Image composition method and apparatus
KR20190100133A (ko) Apparatus and method for providing content for augmented reality
JP2019009816A (ja) Information processing apparatus, information processing system, control method therefor, and program
JP2018142230A (ja) Information processing apparatus, information processing system, information processing method, and program
JP2006012042A (ja) Image generation method and apparatus
US20240288948A1 (en) Wearable terminal apparatus, program, and image processing method
JP7163498B2 (ja) Display control apparatus, display control method, and program
JP2018124746A (ja) Information processing apparatus, information processing system, information processing method, and program

Legal Events

Date Code Title Description
AS Assignment

Owner name: CANON KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TANAKA, RIKA;OHSHIMA, TOSHIKAZU;TOMITE, KANAME;REEL/FRAME:014558/0754

Effective date: 20030924

FPAY Fee payment

Year of fee payment: 4

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20170203