WO2013043289A1 - Augmenting a video conference - Google Patents

Augmenting a video conference

Info

Publication number
WO2013043289A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
user
video
video conference
incorporated
Prior art date
Application number
PCT/US2012/051595
Other languages
English (en)
Inventor
Eric Setton
Original Assignee
Tangome, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/241,918 external-priority patent/US9544543B2/en
Application filed by Tangome, Inc. filed Critical Tangome, Inc.
Priority to CN201280045938.4A priority Critical patent/CN103814568A/zh
Priority to EP12834018.9A priority patent/EP2759127A4/fr
Priority to KR1020147006144A priority patent/KR20140063673A/ko
Priority to JP2014531822A priority patent/JP2014532330A/ja
Publication of WO2013043289A1 publication Critical patent/WO2013043289A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/147 Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/15 Conference systems
    • H04N7/157 Conference systems defining a virtual conference space and using avatars or agents

Definitions

  • Participants in a video conference communicate with one another by transmitting audio/video signals to one another. For example, participants are able to interact via two-way video and audio transmissions simultaneously. However, the participants may not be able to completely articulate what they are attempting to communicate to one another based solely on the audio signals captured by microphones and the video signals captured by video cameras.
  • this writing presents a computer-implemented method for augmenting a video conference between a first device and a second device.
  • the method includes receiving a virtual object at the first device, wherein the virtual object is configured to augment the video conference and wherein the virtual object is specifically related to an event.
  • the method also includes incorporating said virtual object into said video conference.
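  • By way of illustration only, the two claimed steps can be sketched in Python; the VirtualObject and VideoConference types below are hypothetical stand-ins, not structures named in the patent:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical stand-in types; the patent names no data structures.
@dataclass
class VirtualObject:
    name: str                            # e.g., "fireworks"
    event: str                           # the event the object is specifically related to
    position: Tuple[int, int] = (0, 0)   # placement within the video frame

@dataclass
class VideoConference:
    overlays: List[VirtualObject] = field(default_factory=list)

def receive_virtual_object(name: str, event: str) -> VirtualObject:
    """Step 1: receive a virtual object configured to augment the conference."""
    return VirtualObject(name=name, event=event)

def incorporate(conference: VideoConference, obj: VirtualObject,
                position: Tuple[int, int]) -> None:
    """Step 2: incorporate the object, here modeled as registering an overlay."""
    obj.position = position
    conference.overlays.append(obj)

conference = VideoConference()
obj = receive_virtual_object("fireworks", event="Fourth of July")
incorporate(conference, obj, position=(120, 40))
print([o.name for o in conference.overlays])  # ['fireworks']
```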
  • FIGs. 1, 2 and 6 illustrate examples of devices, in accordance with embodiments of the present invention.
  • FIGs. 3 and 7 illustrate embodiments of a method for providing an augmented video conference.
  • FIGs. 4, 5, 8 and 9 illustrate embodiments of a method for augmenting a video conference.
  • Figure 1 depicts an embodiment of device 100.
  • Device 100 is configured for participation in a video conference.
  • Figure 2 depicts devices 100 and 200
  • video conferencing allows two or more locations to interact via two-way video and audio transmissions simultaneously.
  • Devices 100 and 200 are any communication devices (e.g., laptop, desktop, etc.) capable of participating in a video conference.
  • device 100 is a hand-held mobile device, such as a smart phone, a personal digital assistant (PDA), and the like.
  • device 200 operates in a similar fashion as device 100.
  • device 200 is the same as device 100 and includes the same components as device 100.
  • Device 100 includes display 110, virtual object receiver 120, virtual object incorporator 130, transmitter 140, camera 150, microphone 152 and speaker 154.
  • Device 100 optionally includes global positioning system 160 and virtual object generator 170.
  • Display 110 is configured for displaying video captured at device 200. In another embodiment, display 110 is further configured for displaying video captured at device 100.
  • Virtual object receiver 120 is configured to access a virtual object.
  • a virtual object is configured for augmenting a video conference, which will be described in detail below.
  • Virtual object incorporator 130 is configured for incorporating the virtual object into the video conference.
  • virtual object incorporator 130 is configured for incorporating the virtual object into a video captured at device 100 and/or device 200.
  • Transmitter 140 is for transmitting data (e.g., virtual object control code).
  • Virtual object manipulator 135 is configured to enable manipulation of the virtual object in the video conference.
  • Camera 150 is for capturing video at device 100.
  • Microphone 152 is for capturing audio at device 100.
  • Speaker 154 is for generating an audible signal at device 100.
  • Global positioning system 160 is for determining a location of device 100.
  • Virtual object generator 170 is for generating a virtual object.
  • devices 100 and 200 are participating in a video conference with one another. In various embodiments, more than two devices participate in a video conference with one another.
  • video camera 250 captures video at device 200.
  • video camera 250 captures video of user 205 of device 200.
  • Video camera 150 captures video at device 100.
  • video camera 150 captures video of user 105. It should be appreciated that video cameras 150 and 250 capture any objects that are within the respective viewing ranges of cameras 150 and 250.
  • Microphone 152 captures audio signals corresponding to the captured video signal at device 100. Similarly, a microphone of device 200 captures audio signals corresponding to the captured video signal at device 200.
  • the video captured at device 200 is transmitted to and displayed on display 110 of device 100.
  • a video of user 205 is displayed on a first view 112 of display 110.
  • the video of user 205 is displayed on a second view 214 of display 210.
  • the video captured at device 100 is transmitted to and displayed on display 210 of device 200.
  • a video of user 105 is displayed on first view 212 of display 210.
  • the video of user 105 is displayed on a second view 114 of display 110.
  • the audio signals captured at devices 100 and 200 are incorporated into the captured video. In another embodiment, the audio signals are transmitted separate from the transmitted video.
  • first view 112 is the primary view displayed on display 110 and second view 114 is the smaller secondary view displayed on display 110.
  • the size of both first view 112 and second view 114 is adjustable.
  • second view 114 can be enlarged to be the primary view and first view 112 can be diminished in size to be a secondary view.
  • either one of views 112 and 114 can be closed or fully diminished such that it is not viewable.
  • Virtual object receiver 120 receives virtual object 190 for augmenting the video conference.
  • Virtual objects can be received from a server or device 200.
  • Virtual objects can be received at different times. For example, virtual objects can be received when an augmenting application is downloaded onto device 100, during login, or in real-time, when the virtual objects are instructed to be incorporated into the video conference.
  • Virtual objects 191 that are depicted in Figs. 2 and 6 are merely a few examples of any number of possible virtual objects.
  • a virtual object can be any object that is capable of augmenting a video conference.
  • a virtual object can be any object that is able to supplement the communication between participants in a video conference.
  • virtual objects can be, but are not limited to, a kiss, heart, emoticon, high-five, background (photo-booth type of effects), color space changes, and/or image process changes (e.g., thinning, fattening).
  • a virtual object is not limited to a viewable virtual object.
  • a virtual object can be one of a plurality of sounds.
  • virtual objects 191 are displayed on display 110 for viewing by user 105.
  • virtual objects 191 are displayed on virtual object bar 192.
  • virtual object bar 192 is overlayed with first view 112.
  • virtual object bar 192 is displayed concurrently with first view 112 and/or second view 114.
  • virtual object bar 192 is displayed in response to user input, such as, but not limited to key stroke, cursor movement, a detected touch on a touch screen, and designated movement by a user (e.g., expressions, winking, blowing a kiss, hand gesture and the like).
  • Virtual object incorporator 130 facilitates in incorporating virtual object 190 into the video conference.
  • virtual object incorporator 130 incorporates virtual object 190 into the video captured at device 200.
  • virtual object 190 is incorporated above the head of user 205.
  • video captured at device 200 is incorporated with object 190 and the augmented video is displayed at least at device 200. Also, the augmented video with incorporated virtual object 190 is displayed at device 100.
  • user 105 selects virtual object 190 in virtual object bar 192 and drags virtual object 190 to and places it at a location designated by user 105 (e.g., above the head of user 205, as displayed on first view 112). Once placed at the designated location, virtual object incorporator 130 incorporates the virtual object at the designated location.
  • virtual object incorporator 130 generates control code.
  • the control code instructs how virtual object 190 is to be incorporated into the video captured at device 200.
  • control code can be transmitted directly to device 200 to instruct device 200 how virtual object 190 is to be incorporated into video displayed at device 200.
  • control code signals or instructs device 200 that virtual object 190 is to be displayed in the video conference.
  • the control code is sent to a server; device 200 then receives the control code from the server.
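  • The patent does not specify a wire format for the control code; the following is a minimal sketch assuming a JSON payload with hypothetical field names:

```python
import json

# A hypothetical control-code payload; the patent does not specify a format.
control_code = {
    "virtual_object_id": "object-190",  # which object to incorporate
    "action": "incorporate",            # e.g., incorporate / manipulate / remove
    "anchor": {"x": 0.42, "y": 0.08},   # normalized frame coordinates
    "track": "head",                    # optional feature the object should follow
}

message = json.dumps(control_code)
# The message could be sent directly to device 200, or to a server that
# relays it to device 200, as described above.
received = json.loads(message)
assert received["action"] == "incorporate"
```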
  • Figure 2 depicts virtual object 190 incorporated into the video conference.
  • any number of virtual objects can be incorporated into the video conference at any time.
  • five different virtual objects may be concurrently incorporated into the video conference.
  • the term "incorporate" as used herein describes a virtual object being displayed along with some portion of the video conference. As such, the virtual object is merely displayed concurrently with some portion of the video conference. Accordingly, a video described as comprising the virtual object is understood to be a video displayed concurrently with the virtual object; it is not required that the virtual object be integrated with or made part of the video stream.
  • the virtual object is superimposed as an overlay on a video.
  • a virtual object is concurrently superimposed as an overlay displayed on devices 100 and 200.
  • a virtual object is concurrently overlayed on video displayed in view 112 and view 214 (as depicted in Fig. 2), and a virtual object can be concurrently overlayed on video displayed in view 114 and view 212 (as depicted in Fig. 6).
  • the virtual object is integrated into the bit stream of the video conference.
  • a virtual object is concurrently overlayed on video displayed in view 112 and view 212. Also, the virtual object is displayed in a portion of a display independent of the views at the devices and does not require a two-way video to be active (e.g., a one-way video could be active).
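  • The overlay embodiment can be illustrated with a standard alpha-blending step; this sketch assumes RGBA overlay images and NumPy arrays, neither of which is specified in the patent:

```python
import numpy as np

def superimpose(frame: np.ndarray, overlay_rgba: np.ndarray,
                x: int, y: int) -> np.ndarray:
    """Alpha-blend an RGBA overlay onto an RGB frame at (x, y)."""
    h, w = overlay_rgba.shape[:2]
    region = frame[y:y + h, x:x + w].astype(float)
    rgb = overlay_rgba[..., :3].astype(float)
    alpha = overlay_rgba[..., 3:4].astype(float) / 255.0
    frame[y:y + h, x:x + w] = (alpha * rgb + (1.0 - alpha) * region).astype(np.uint8)
    return frame

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a captured frame
star = np.full((32, 32, 4), 255, dtype=np.uint8)  # opaque white square as a stand-in object
frame = superimpose(frame, star, x=300, y=20)     # e.g., above a user's head
```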
  • transmitter 140 then transmits the video captured at device 200, which now includes virtual object 190, to second device 200 such that the video including virtual object 190 is displayed on display 210.
  • transmitter 140 transmits control code to device 200 (or a server) to instruct device 200 how virtual object 190 is to be incorporated into the video conference.
  • Virtual object manipulator 135 manipulates incorporated virtual object 190.
  • virtual object 190 is manipulated at device 100.
  • user 105 rotates virtual object 190 clockwise.
  • video captured at device 200 (and displayed on device 100 and/or device 200) is augmented such that virtual object 190 spins clockwise.
  • virtual object 190 is manipulated at device 200.
  • virtual object 190 is manipulated (via a virtual object manipulator of device 200) such that it moves from left to right with respect to the head movement of user 205.
  • video captured at device 200 (and displayed on device 100 and/or device 200) is augmented such that virtual object 190 is moved from left to right.
  • virtual object 190 is concurrently manipulated at device 100 and device 200.
  • virtual object 190 is manipulated such that it concurrently moves from left to right with respect to the head movement of user 205 and spins in response to input from user 105.
  • video captured at device 200 and displayed on device 100 and/or device 200 is augmented such that virtual object 190 is moved from left to right while spinning clockwise.
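  • One way to realize such concurrent manipulation is to keep a per-object state that both devices' inputs update on every frame; the sketch below is a hypothetical reading of this behavior, not the patent's implementation:

```python
from dataclasses import dataclass

@dataclass
class ObjectState:
    x: float = 0.0      # frame position, e.g., tracking a head
    y: float = 0.0
    angle: float = 0.0  # rotation applied by user input, in degrees

def apply_head_tracking(state: ObjectState, head_x: float, head_y: float) -> None:
    # Manipulation driven by user 205's movement: the object follows the head.
    state.x, state.y = head_x, head_y - 40.0   # hover 40 px above the head

def apply_user_rotation(state: ObjectState, degrees_per_frame: float) -> None:
    # Manipulation driven by user 105's input: the object spins clockwise.
    state.angle = (state.angle + degrees_per_frame) % 360.0

state = ObjectState()
for frame_idx in range(3):   # both manipulations are applied concurrently
    apply_head_tracking(state, head_x=320.0 + 5 * frame_idx, head_y=100.0)
    apply_user_rotation(state, degrees_per_frame=15.0)
print(state)  # final position and accumulated rotation
```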
  • virtual object 190 is directionally manipulated.
  • user 105 sends a "punch" virtual object (e.g., fist, boxing glove) to user 205.
  • a "punch” virtual object e.g., fist, boxing glove
  • user 105 views the "punch” virtual object going into display 1 10 and user 205 views the "punch” virtual object coming out of display 210.
  • virtual objects are manipulated in response to a variety of inputs.
  • virtual objects can be manipulated via sounds, gestures, expressions, movements, etc.
  • for example, in response to a kiss by a user, red lips fly out of the mouth of the user.
  • virtual objects 191 are not displayed on display 110 and/or virtual object bar 192 until there is at least one of a variety of inputs, as described above.
  • a virtual object of a heart is not displayed until there is a double-tapping on a touch screen.
  • virtual objects 191 are geographical-related virtual objects.
  • virtual objects 191 are based on a location of devices 100 and/or 200.
  • virtual objects 191 are related to that location.
  • geographical-related virtual objects, based on a location in Hawaii determined from GPS 160, can be, but are not limited to, a surfboard, sun, palm tree, coconut, etc.
  • the determination of location can be provided in a variety of ways.
  • the determination of a location of a device can be based on information provided by a user upon registration, an IP address of the device, or any other method that can be used to determine location.
  • virtual objects 191 are temporal-related virtual objects based on a time of the video conference. For example, if the video conference occurs on or around Christmas, then virtual objects would be Christmas related (e.g., stocking, Christmas tree, candy cane, etc.). In another example, if the video conference occurs in the evening, then virtual objects would be associated with the evening (e.g., moon, stars, pajamas, etc.)
  • virtual objects 191 are culturally-related virtual objects. For example, if user 105 and/or user 205 are located in Canada, then virtual objects 191 could be, but are not limited to, a Canadian flag, hockey puck, curling stone, etc.
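  • A simple selection routine can combine the geographical and temporal criteria described above; the catalogs below are hypothetical and only echo the patent's own examples:

```python
import datetime

# Hypothetical catalogs that only echo the examples given above.
GEO_OBJECTS = {
    "Hawaii": ["surfboard", "sun", "palm tree", "coconut"],
    "Canada": ["Canadian flag", "hockey puck", "curling stone"],
}
SEASONAL_OBJECTS = {12: ["stocking", "Christmas tree", "candy cane"]}

def suggest_objects(location: str, when: datetime.datetime) -> list:
    """Offer virtual objects related to the device's location and the call's time."""
    suggestions = list(GEO_OBJECTS.get(location, []))
    suggestions += SEASONAL_OBJECTS.get(when.month, [])
    if when.hour >= 20:  # an evening call
        suggestions += ["moon", "stars", "pajamas"]
    return suggestions

print(suggest_objects("Hawaii", datetime.datetime(2012, 12, 24, 21, 0)))
```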
  • virtual objects 191 are user-created virtual objects.
  • users 105 and/or 205 manually create the virtual objects then virtual object generator 170 utilizes the creation to generate user-created virtual objects.
  • virtual objects 191 are available and/or accessed based on account status. For example, user 105 has a payable account to have access to virtual objects 191. If user 105 has provided adequate payment to the account, then user 105 is able to access virtual objects 191. In contrast, if user 105 has not provided adequate payment to the account, then user 105 is unable to access virtual objects 191.
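  • The account-status gating described above reduces to a simple check before virtual objects are offered; a minimal sketch, assuming a hypothetical paid-up flag:

```python
def accessible_objects(user_paid_up: bool, catalog: list) -> list:
    """Return the virtual objects a user may access, based on account status."""
    return list(catalog) if user_paid_up else []

catalog = ["kiss", "heart", "emoticon", "high-five"]
print(accessible_objects(True, catalog))   # full access with adequate payment
print(accessible_objects(False, catalog))  # [] without adequate payment
```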
  • Holidays can be, but are not limited to, religious holidays (e.g., Christmas, Easter, Yom Kippur, etc.), national holidays (e.g., New Year's Day, Presidents' Day, Memorial Day, etc.) or any other observed holiday (official or unofficial).
  • Events or special occasions can be, but are not limited to, birthdays, anniversaries, graduation, weddings, new job, retirement and the like.
  • a user is prompted to utilize a virtual object specifically related to events, holidays, special occasions and the like. For example, on or around the Fourth of July, a user is prompted to select and/or use virtual objects (e.g., fireworks) specifically related to the Fourth of July.
  • the virtual objects are presented to a user and the user is prompted to send the virtual objects to another user in the videoconference. In other words, the user is prompted to have the virtual objects incorporated into the video conference.
  • a user can be prompted to send a virtual object to another user where a relationship between the parties is suspected, known, or inferred. For example, a mother is speaking with her son over a videoconference. If the mother/son relationship is suspected, known, or inferred, then the son is prompted to utilize virtual objects (e.g., flowers) specifically related to Mother's Day.
  • the relationship can be determined in a variety of ways. For example, the relationship can be determined based on, but not limited to, surname, location of users, call logs, etc. Moreover, the son may be prompted with a message, such as "This appears to be your mother. Is this correct?" As such, if the son responds that he is speaking with his mother, then the son is prompted to utilize virtual objects (e.g., flowers) specifically related to Mother's Day.
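  • The relationship-based prompting can be read as a heuristic gate ahead of the holiday prompt; the surname and call-log signals below come from the examples above, while the threshold and field names are assumptions:

```python
from typing import Optional

def relationship_suspected(caller: dict, callee: dict, call_count: int) -> bool:
    """Heuristic guess that two participants are related, e.g., mother and son."""
    same_surname = caller["surname"] == callee["surname"]
    frequent_calls = call_count > 10          # assumed threshold
    return same_surname and frequent_calls

def holiday_prompt(caller: dict, callee: dict, call_count: int,
                   holiday: str) -> Optional[str]:
    if holiday == "Mother's Day" and relationship_suspected(caller, callee, call_count):
        # The user confirms before holiday-specific objects are offered.
        return "This appears to be your mother. Is this correct?"
    return None

print(holiday_prompt({"surname": "Smith"}, {"surname": "Smith"},
                     call_count=25, holiday="Mother's Day"))
```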
  • virtual objects can enhance a revenue stream. For example, if 100,000 virtual objects are used on Valentine's Day and there is a $0.50 fee for each virtual object, then $50,000 in fees is accumulated on Valentine's Day.
  • Figures 3-5 depict embodiments of methods 300-500, respectively.
  • methods 300-500 are carried out by processors and electrical components under the control of computer readable and computer executable instructions.
  • the computer readable and computer executable instructions reside, for example, in a data storage medium such as computer usable volatile and non-volatile memory. However, the computer readable and computer executable instructions may reside in any type of computer readable storage medium.
  • methods 300-500 are performed by device 100 and/or device 200, as described in Figures 1 and 2.
  • a virtual object is enabled to be accessed by a first device, wherein the first device is configured for participating in the video conference with a second device.
  • virtual object 190 is enabled to be accessed by device 100, wherein device 100 is configured for participating in a video conference with at least device 200.
  • the virtual object is enabled to be incorporated into a video of the video conference captured at the second device, wherein the video comprising the virtual object is configured to be displayed at the second device.
  • virtual object 190 is enabled to be incorporated into the video captured of user 205 at device 200 and also displayed at device 200.
  • the video comprising the virtual object is enabled to be transmitted from the first device to the second device.
  • transmission of the video comprising any one of virtual objects 191 is enabled to be transmitted by transmitter 140 to device 200.
  • concurrent display of the video comprising the virtual object is enabled at the first device and the second device.
  • the video comprising object 190 is enabled to be simultaneously displayed at devices 100 and 200.
  • instructions are received to access a virtual object.
  • for example, in response to user input (e.g., key stroke, cursor movement, a detected touch on a touch screen, etc.), instructions are received to access virtual object 190.
  • the virtual object(s) can be, but is not limited to, a geographical-related virtual object, a temporal-related virtual object, a culturally-related virtual object, and/or a user-created virtual object.
  • the virtual object is incorporated into the video conference, wherein the virtual object is accessed by the first device and configured to be displayed at the second device.
  • virtual object 190 is accessed at device 100 and incorporated, at device 100, into the video captured at device 200.
  • the video comprising incorporated virtual object 190 is configured to be displayed at device 200.
  • user 105 is able to place a virtual object of lips (to signify a kiss) on the cheek of user 205 by designating a location for the lips on the cheek of user 205 as displayed in first view 112. Accordingly, the virtual object of lips is incorporated into the video captured at device 200 and displayed on devices 100 and 200. The virtual object of lips can be incorporated for the duration of the video conference or for a designated time duration.
  • in response to user input on a touch screen display, the virtual object is incorporated into the video conference. For example, in response to user input on a touch screen display of device 100, the virtual object is incorporated into the video conference.
  • a video of the video conference comprising the incorporated virtual object is transmitted to the second device. For example, video that includes the virtual object is transmitted to device 200 via transmitter 140.
  • a video of the video conference captured at the second device is displayed at the first device.
  • video of user 205 at device 200 is captured at device 200 and displayed at device 100.
  • the virtual object incorporated into the video conference is manipulated at the second device.
  • user 205 interacts with virtual object 190 displayed in second view 214 by rotating virtual object 190.
  • the virtual object incorporated into the video conference is manipulated at the first device. For example, user 105 interacts with virtual object 190 displayed in first view 112 by reducing the size of virtual object 190.
  • the virtual object incorporated into the video conference is manipulated.
  • device 100 is a hand-held device (e.g., cell phone) with a touch screen display. Accordingly, in response to user 105 touching the touch screen display, the size of virtual object 190 is reduced.
  • the virtual object incorporated into the video conference is cooperatively manipulated at the first device and the second device. For example, user 205 moves his head from left to right such that virtual object 190 tracks with the head movement. Also, user 105 cooperatively rotates virtual object 190 while virtual object 190 is tracking with the head movement of user 205.
  • a video of the video conference captured at the second device and the virtual object are concurrently displayed at the first device.
  • video captured at second device 200, including incorporated virtual object 190, is concurrently displayed on first view 112.
  • a first video captured at the first device and a second video captured at the second device are concurrently displayed at the first device.
  • video captured at device 200 is displayed on first view 112 and video captured at device 100 is concurrently displayed on second view 114.
  • a virtual object is received at the first device, wherein the virtual object is configured to augment the video conference.
  • the virtual object(s) can be, but is not limited to, a geographical-related virtual object, a temporal-related virtual object, a culturally-related virtual object, and/or a user-created virtual object.
  • the virtual object is incorporated into the video captured at the second device.
  • virtual object 190 is incorporated into the video captured at device 200, such that virtual object 190 is placed above the head of user 205 and tracks with movements of the head of user 205.
  • in response to user input at a touch screen display, the virtual object is incorporated into the video captured at the second device.
  • any number of virtual objects are incorporated into the video captured at device 200.
  • the video comprising the virtual object is enabled to be displayed at the second device.
  • the video comprising the virtual object is transmitted to the second device.
  • the virtual object incorporated into the video captured at the second device is manipulated at the second device. For example, user 205 changes the color of virtual object 190, displayed in second view 214, to red.
  • the virtual object incorporated into the video captured at the second device is manipulated at the first device.
  • user 105 changes the location of virtual object 190 from the top of the head of user 205 to the left hand of user 205.
  • the virtual object incorporated into the video captured at the second device is manipulated.
  • user 105 changes virtual object 190 from a star (as depicted) to a light bulb (not shown).
  • the virtual object incorporated into the video captured at the second device is cooperatively manipulated at the first device and the second device.
  • user 205 manipulates virtual object 190 in second view 214 and user 105 cooperatively manipulates virtual object 190 in first view 112.
  • the virtual object and the video captured at the second device are concurrently displayed at the first device.
  • a video captured at the first device and the video captured at the second device are concurrently displayed at the first device.
  • Figure 6 depicts an embodiment of devices 100 and 200 participating in a video conference with one another. Devices 100 and 200 operate in a similar fashion, as described above.
  • video camera 150 captures video at device 100.
  • video camera 150 captures video of user 105 of device 100.
  • Video camera 250 captures video at device 200.
  • video camera 250 captures video of user 205, who is the user of device 200.
  • the video captured at device 100 is displayed on display 110 of device 100.
  • a video of user 105 is displayed on a second view 114 displayed on display 110.
  • the video of user 205 is displayed on first view 112 on display 110.
  • Virtual object receiver 120 receives virtual object 190 for augmenting the video conference between users 105 and 205.
  • Virtual objects 191 are displayed on display 110 for viewing by user 105.
  • virtual objects 191 are displayed on virtual object bar 192.
  • virtual object bar 192 is overlayed with first view 112.
  • virtual object bar 192 is displayed concurrently with first view 112 and/or second view 114.
  • Virtual object incorporator 130 incorporates virtual object 190 into the video conference.
  • virtual object 190 is incorporated into the video captured at device 100.
  • virtual object 190 is incorporated above the head of user 105. Therefore, as depicted, video captured at device 100 is incorporated with object 190 and the augmented video is displayed at least at device 200. Also, the augmented video with incorporated virtual object 190 is concurrently displayed at device 100.
  • user 105 selects virtual object 190 in virtual object bar 192 and drags virtual object 190 to and places it at a location designated by user 105 (e.g., above the head of user 105, as depicted). Once placed at the designated location, virtual object incorporator 130 incorporates the virtual object at the designated location. Transmitter 140 then transmits the video captured at device 100, which now includes virtual object 190, to second device 200 such that the video including virtual object 190 is displayed on display 210.
  • a virtual object manipulator of device 200 manipulates incorporated virtual object 190. For example, in response to user input of user 205 at a touch screen, user 205 rotates virtual object 190 clockwise. Accordingly, video captured at device 100 (and displayed on device 200 and/or device 100) is augmented such that virtual object 190 spins clockwise.
  • virtual object 190 is manipulated at device 100.
  • virtual object 190 is manipulated (via virtual object manipulator 135) such that it moves from left to right with respect to the head movement of user 105. Accordingly, video captured at device 100 (and displayed on device 100 and/or device 200) is augmented such that virtual object 190 is moved from left to right.
  • virtual object 190 is concurrently manipulated at device 100 and device 200.
  • FIG. 7-9 depict embodiments of methods 700-900, respectively. In various embodiments, methods 700-900 are carried out by processors and electrical components under the control of computer readable and computer executable instructions.
  • the computer readable and computer executable instructions reside, for example, in a data storage medium such as computer usable volatile and non-volatile memory. However, the computer readable and computer executable instructions may reside in any type of computer readable storage medium. In some embodiments, methods 700-900 are performed by device 100 and/or device 200, as described in Figures 1 and 6.
  • a virtual object is enabled to be accessed by a first device, wherein the first device is configured for participating in the video conference with a second device.
  • virtual object 190 is enabled to be accessed by device 100, wherein device 100 is configured for participating in a video conference with at least device 200.
  • the virtual object is enabled to be incorporated into a video of the video conference captured at the first device, wherein the video comprising the virtual object is configured to be displayed at the second device.
  • virtual object 190 is enabled to be incorporated into the video captured at device 100 of user 105 and displayed at devices 100 and 200.
  • the video comprising the virtual object is enabled to be transmitted from the first device to the second device.
  • transmission of the video comprising any one of virtual objects 191 is enabled to be transmitted by transmitter 140 to device 200.
  • concurrent display of the video comprising said virtual object is enabled at the first device and the second device.
  • the video comprising object 190 is enabled to be simultaneously displayed at devices 100 and 200.
  • cooperative manipulation of the incorporated virtual object at the first device and the second device is enabled.
  • user 205 interacts with virtual object 190 in first view 212 and user 105 also cooperatively (or simultaneously) interacts with virtual object 190 in second view 114.
  • instructions are received to access a virtual object. For example, in response to user input at a touch screen display, instructions are received to access virtual object 190.
  • the virtual object is incorporated into the video conference, wherein the virtual object is to be manipulated by a user of the second device.
  • virtual object 190 is accessed at device 100 and incorporated, at device 100, into the video captured at device 100.
  • the video comprising incorporated virtual object 190 is configured to be displayed and manipulated at device 200.
  • the virtual object is incorporated into the video conference.
  • a video of the video conference comprising the incorporated virtual object is transmitted to the second device.
  • the video conference captured at the first device is displayed at the second device.
  • video of user 105 at device 100 is captured at device 100 and displayed at device 200.
  • the virtual object incorporated into the video conference is manipulated at the second device.
  • user 205 interacts with virtual object 190 displayed in first view 212 by rotating virtual object 190.
  • the virtual object incorporated into the video conference is manipulated at a hand-held mobile device.
  • device 200 is a hand-held device (e.g., cell phone) with a touch screen display. Accordingly, in response to user 205 touching the touch screen display, the size of virtual object 190 is reduced.
  • the virtual object incorporated into the video conference is manipulated at the first device. For example, user 105 interacts with virtual object 190 displayed in second view 114 by reducing the size of virtual object 190.
  • the virtual object incorporated into the video conference is cooperatively manipulated at the first device and the second device. For example, user 105 moves his head from left to right such that virtual object 190 tracks with the head movement. Also, user 205 cooperatively rotates virtual object 190 while virtual object 190 is tracking with the head movement of user 105.
  • a video of the video conference captured at the second device and the virtual object are concurrently displayed at the first device.
  • video captured at second device 200 is displayed on first view 112 and video captured at device 100, including incorporated virtual object 190, is concurrently displayed on second view 114.
  • a first video captured at the first device and a second video captured at the second device are concurrently displayed at the first device.
  • a virtual object is received at the first device, wherein the virtual object is configured to augment the video conference.
  • the virtual object(s) can be, but is not limited to, a geographical-related virtual object, a temporal-related virtual object, a culturally-related virtual object, and/or a user-created virtual object.
  • the virtual object is incorporated into the video captured at the first device.
  • virtual object 190 is incorporated into the video captured at device 100, such that virtual object 190 is placed above the head of user 105 and tracks with movements of the head of user 105.
  • the virtual object is incorporated into the video captured at the first device.
  • any number of virtual objects are incorporated into the video captured at device 100.
  • the video comprising the virtual object is enabled to be displayed at the second device, such that the virtual object is manipulated at the second device.
  • the video comprising the virtual object is transmitted to the second device.
  • the virtual object incorporated into the video captured at the first device is manipulated at the second device. For example, user 205 changes the color of virtual object 190, displayed in first view 212, to red.
  • the virtual object incorporated into the video captured at the first device is manipulated.
  • user 205 changes virtual object 190 from a star (as depicted) to a light bulb (not shown).
  • the virtual object incorporated into the video captured at the first device is manipulated at the first device.
  • user 105 changes the location of virtual object 190 from the top of the head of user 105 to the left hand of user 105.
  • the virtual object incorporated into the video captured at the first device is cooperatively manipulated at the first device and the second device.
  • user 205 manipulates virtual object 190 in first view 212 and user 105 cooperatively manipulates virtual object 190 in second view 114.
  • the virtual object and the video captured at the first device are concurrently displayed at the first device.
  • a video captured at the first device and the video captured at the second device are concurrently displayed at the first device.
  • a decision step could be carried out by a decision-making unit in a processor by implementing a decision algorithm.
  • this decision-making unit can exist physically or effectively, for example in a computer's processor when carrying out the aforesaid decision algorithm.
  • a computer-implemented method for augmenting a video conference between a first device and a second device comprising:
  • a tangible computer-readable storage medium having instructions stored thereon which, when executed, cause a computer processor to perform a method of:

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Processing Or Creating Images (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A computer-implemented method for augmenting a video conference between a first device and a second device is disclosed. The method includes receiving a virtual object at the first device, the virtual object being configured to augment the video conference and being specifically related to an event. The method also includes incorporating said virtual object into said video conference.
PCT/US2012/051595 2011-09-23 2012-08-20 Augmenting a video conference WO2013043289A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201280045938.4A CN103814568A (zh) 2011-09-23 2012-08-20 Augmenting a video conference
EP12834018.9A EP2759127A4 (fr) 2011-09-23 2012-08-20 Augmenting a video conference
KR1020147006144A KR20140063673A (ko) 2011-09-23 2012-08-20 Augmenting a video conference
JP2014531822A JP2014532330A (ja) 2011-09-23 2012-08-20 Augmenting a video conference

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/241,918 US9544543B2 (en) 2011-02-11 2011-09-23 Augmenting a video conference
US13/241,918 2011-09-23

Publications (1)

Publication Number Publication Date
WO2013043289A1 (fr) 2013-03-28

Family

ID=47914747

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/051595 WO2013043289A1 (fr) Augmenting a video conference

Country Status (5)

Country Link
EP (1) EP2759127A4 (fr)
JP (1) JP2014532330A (fr)
KR (1) KR20140063673A (fr)
CN (1) CN103814568A (fr)
WO (1) WO2013043289A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11182615B2 (en) 2017-08-04 2021-11-23 Tencent Technology (Shenzhen) Company Limited Method and apparatus, and storage medium for image data processing on real object and virtual object

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101751620B1 * (ko) 2015-12-15 2017-07-11 LINE Corporation Method and system for video call using two-way delivery of visual or auditory effects
CN107613242A * (zh) 2017-09-12 2018-01-19 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Video conference processing method, terminal and server
KR102271308B1 (ko) 2017-11-21 2021-06-30 Hyperconnect Inc. Method for providing an interactive visual object during a video call, and system performing the same
US10681310B2 (en) * 2018-05-07 2020-06-09 Apple Inc. Modifying video streams with supplemental content for video conferencing
US11012389B2 (en) 2018-05-07 2021-05-18 Apple Inc. Modifying images with supplemental content for messaging
CN110716641B * (zh) 2019-08-28 2021-07-23 Beijing SenseTime Technology Development Co., Ltd. Interaction method, apparatus, device and storage medium
CN113766168A * (zh) 2021-05-31 2021-12-07 Tencent Technology (Shenzhen) Co., Ltd. Interaction processing method, apparatus, terminal and medium
KR102393042B1 (ko) 2021-06-15 2022-04-29 V-On Inc. Video conference system
CN113938336A (zh) 2021-11-15 2022-01-14 NetEase (Hangzhou) Network Co., Ltd. Conference control method and apparatus, and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060088038A1 (en) * 2004-09-13 2006-04-27 Inkaar, Corporation Relationship definition and processing system and method
US20070120954A1 (en) * 1994-09-19 2007-05-31 Destiny Conferencing Llc Teleconferencing method and system
US20070242066A1 (en) * 2006-04-14 2007-10-18 Patrick Levy Rosenthal Virtual video camera device with three-dimensional tracking and virtual object insertion
US20110063404A1 (en) * 2009-09-17 2011-03-17 Nokia Corporation Remote communication system and method
KR20110042447A (ko) 2009-10-19 2011-04-27 Electronics and Telecommunications Research Institute Terminal, relay node, and stream processing method for a video conference system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4378072B2 * (ja) 2001-09-07 2009-12-02 Canon Inc. Electronic device, imaging device, mobile communication device, video display control method, and program
JP2003244425A * (ja) 2001-12-04 2003-08-29 Fuji Photo Film Co., Ltd. Method and apparatus for registering modification patterns of a transmitted image, and method and apparatus for reproducing the same
US6731323B2 (en) * 2002-04-10 2004-05-04 International Business Machines Corporation Media-enhanced greetings and/or responses in communication systems
US7003040B2 (en) * 2002-09-24 2006-02-21 Lg Electronics Inc. System and method for multiplexing media information over a network using reduced communications resources and prior knowledge/experience of a called or calling party
JP4352380B2 * (ja) 2003-08-29 2009-10-28 Sega Corporation Two-way moving-image communication terminal, computer program, and call control method
JP2006173879A * (ja) 2004-12-14 2006-06-29 Hitachi Ltd Communication system
US8373799B2 (en) * 2006-12-29 2013-02-12 Nokia Corporation Visual effects for video calls
US8373742B2 (en) * 2008-03-27 2013-02-12 Motorola Mobility Llc Method and apparatus for enhancing and adding context to a video call image
KR101533065B1 * (ko) 2008-12-01 2015-07-01 Samsung Electronics Co., Ltd. Method and apparatus for providing animation effects during a video call
US8665307B2 (en) * 2011-02-11 2014-03-04 Tangome, Inc. Augmenting a video conference

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070120954A1 (en) * 1994-09-19 2007-05-31 Destiny Conferencing Llc Teleconferencing method and system
US20060088038A1 (en) * 2004-09-13 2006-04-27 Inkaar, Corporation Relationship definition and processing system and method
US20070242066A1 (en) * 2006-04-14 2007-10-18 Patrick Levy Rosenthal Virtual video camera device with three-dimensional tracking and virtual object insertion
US20110063404A1 (en) * 2009-09-17 2011-03-17 Nokia Corporation Remote communication system and method
KR20110042447A (ko) 2009-10-19 2011-04-27 Electronics and Telecommunications Research Institute Terminal, relay node, and stream processing method for a video conference system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2759127A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11182615B2 (en) 2017-08-04 2021-11-23 Tencent Technology (Shenzhen) Company Limited Method and apparatus, and storage medium for image data processing on real object and virtual object

Also Published As

Publication number Publication date
KR20140063673A (ko) 2014-05-27
EP2759127A4 (fr) 2014-10-15
EP2759127A1 (fr) 2014-07-30
JP2014532330A (ja) 2014-12-04
CN103814568A (zh) 2014-05-21

Similar Documents

Publication Publication Date Title
US9544543B2 (en) Augmenting a video conference
US9253440B2 (en) Augmenting a video conference
US8767034B2 (en) Augmenting a video conference
US9262753B2 (en) Video messaging
EP2759127A1 (fr) Augmenting a video conference
US9911222B2 (en) Animation in threaded conversations
TWI720462B (zh) Modifying video streams with supplemental content for video conferencing
CN105320262A (zh) Method and apparatus for operating a computer and a mobile phone in a virtual world, and glasses using the same
US20210318749A1 (en) Information processing system, information processing method, and program
US11456887B1 (en) Virtual meeting facilitator
WO2018207046A1 (fr) Procédés, systèmes et dispositifs prenant en charge des interactions en temps réel dans des environnements de réalité augmentée
JP2011239397A (ja) Intuitive virtual interaction method
KR20170127354A (ko) Apparatus and method for face-transforming video chat using facial motion capture
TW202111480A (zh) Interactive system and method for virtual reality and augmented reality
CN116055638A (zh) Video ring back tone interaction method, server, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12834018

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20147006144

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2014531822

Country of ref document: JP

Kind code of ref document: A

REEP Request for entry into the european phase

Ref document number: 2012834018

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2012834018

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE