EP2759127A1 - Augmenting a video conference - Google Patents

Augmenting a video conference

Info

Publication number
EP2759127A1
Authority
EP
European Patent Office
Prior art keywords
virtual object
user
video
video conference
incorporated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP12834018.9A
Other languages
German (de)
French (fr)
Other versions
EP2759127A4 (en)
Inventor
Eric Setton
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TangoMe Inc
Original Assignee
TangoMe Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 13/241,918 (US9544543B2)
Application filed by TangoMe Inc filed Critical TangoMe Inc
Publication of EP2759127A1
Publication of EP2759127A4

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/147 Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • H04N 7/15 Conference systems
    • H04N 7/157 Conference systems defining a virtual conference space and using avatars or agents

Definitions

  • Participants in a video conference communicate with one another by transmitting audio/video signals to one another. For example, participants are able to interact via two-way video and audio transmissions simultaneously. However, the participants may not be able to completely articulate what they are attempting to communicate to one another based solely on the audio captured by microphones and the video captured by video cameras.
  • This writing presents a computer-implemented method for augmenting a video conference between a first device and a second device.
  • the method includes receiving a virtual object at the first device, wherein the virtual object is configured to augment the video conference and wherein the virtual object is specifically related to an event.
  • the method also includes incorporating said virtual object into said video conference.
  • FIGs. 1, 2 and 6 illustrate examples of devices, in accordance with embodiments of the present invention.
  • FIGs. 3 and 7 illustrate embodiments of a method for providing an augmented video conference.
  • FIGs. 4, 5, 8 and 9 illustrate embodiments of a method for augmenting a video conference.
  • Figure 1 depicts an embodiment of device 100.
  • Device 100 is configured for participation in a video conference.
  • Figure 2 depicts devices 100 and 200
  • video conferencing allows two or more locations to interact via two-way video and audio transmissions simultaneously.
  • Devices 100 and 200 are any communication devices (e.g., laptop, desktop, etc.) capable of participating in a video conference.
  • device 100 is a hand-held mobile device, such as a smart phone, personal digital assistant (PDA), and the like.
  • device 200 operates in a similar fashion as device 100.
  • device 200 is the same as device 100 and includes the same components as device 100.
  • Device 100 includes display 110, virtual object receiver 120, virtual object incorporator 130, transmitter 140, camera 150, microphone 152 and speaker 154.
  • Device 100 optionally includes global positioning system 160 and virtual object generator 170.
  • Display 110 is configured for displaying video captured at device 200. In another embodiment, display 110 is further configured for displaying video captured at device 100.
  • Virtual object receiver 120 is configured to access a virtual object.
  • a virtual object is configured for augmenting a video conference, which will be described in detail below.
  • Virtual object incorporator 130 is configured for incorporating the virtual object into the video conference.
  • virtual object incorporator 130 is configured for incorporating the virtual object into a video captured at device 100 and/or device 200.
  • Transmitter 140 is for transmitting data (e.g., virtual object control code).
  • Virtual object manipulator 135 is configured to enable manipulation of the virtual object in the video conference.
  • Camera 150 is for capturing video at device 100.
  • Microphone 152 is for capturing audio at device 100.
  • Speaker 154 is for generating an audible signal at device 100.
  • Global positioning system 160 is for determining a location of device 100.
  • Virtual object generator 170 is for generating a virtual object.
  • devices 100 and 200 are participating in a video conference with one another. In various embodiments, more than two devices participate in a video conference with one another.
  • video camera 250 captures video at device 200.
  • video camera 250 captures video of user 205 of device 200.
  • Video camera 150 captures video at device 100.
  • video camera 150 captures video of user 105. It should be appreciated that video cameras 150 and 250 capture any objects that are within the respective viewing ranges of cameras 150 and 250.
  • Microphone 152 captures audio signals corresponding to the captured video signal at device 100. Similarly, a microphone of device 200 captures audio signals corresponding to the captured video signal at device 200.
  • the video captured at device 200 is transmitted to and displayed on display 110 of device 100.
  • a video of user 205 is displayed on a first view 112 of display 110.
  • the video of user 205 is displayed on a second view 214 of display 210.
  • the video captured at device 100 is transmitted to and displayed on display 210 of device 200.
  • a video of user 105 is displayed on first view 212 of display 210.
  • the video of user 105 is displayed on a second view 114 of display 110.
  • the audio signals captured at devices 100 and 200 are incorporated into the captured video. In another embodiment, the audio signals are transmitted separate from the transmitted video.
  • first view 112 is the primary view displayed on display 110 and second view 114 is the smaller secondary view displayed on display 110.
  • the sizes of both first view 112 and second view 114 are adjustable.
  • second view 114 can be enlarged to be the primary view and view 112 can be diminished in size to be a secondary view.
  • either one of views 112 and 114 can be closed or fully diminished such that it is not viewable.
  • Virtual object receiver 120 receives virtual object 190 for augmenting the video conference.
  • Virtual objects can be received from a server or device 200.
  • Virtual objects can be received at different times. For example, virtual objects can be received when an augmenting application is downloaded onto device 100, during login, or in real-time, when the virtual objects are instructed to be incorporated into the video conference.
  • Virtual objects 191 that are depicted in Figs. 2 and 6 are merely a few examples of the many possible virtual objects.
  • a virtual object can be any object that is capable of augmenting a video conference.
  • a virtual object can be any object that is able to supplement the communication between participants in a video conference.
  • virtual objects can be, but are not limited to, a kiss, heart, emoticon, high-five, background (photo-booth type of effects), color space changes, and/or image process changes (e.g., thinning, fattening).
  • a virtual object is not limited to a viewable virtual object.
  • a virtual object can be one of a plurality of sounds.
  • virtual objects 191 are displayed on display 110 for viewing by user 105.
  • virtual objects 191 are displayed on virtual object bar 192.
  • virtual object bar 192 is overlayed with first view 112.
  • virtual object bar 192 is displayed concurrently with first view 112 and/or second view 114.
  • virtual object bar 192 is displayed in response to user input, such as, but not limited to, a key stroke, cursor movement, a detected touch on a touch screen, and designated movement by a user (e.g., expressions, winking, blowing a kiss, hand gesture and the like).
  • Virtual object incorporator 130 facilitates in incorporating virtual object 190 into the video conference.
  • virtual object incorporator 130 incorporates virtual object 190 into the video captured at device 200.
  • virtual object 190 is incorporated above the head of user 205.
  • video captured at device 200 incorporates virtual object 190 and the augmented video is displayed at least at device 200. Also, the augmented video with incorporated virtual object 190 is displayed at device 100.
  • user 105 selects virtual object 190 in virtual object bar 192 and drags virtual object 190 to and places it at a location designated by user 105 (e.g., above the head of user 205, as displayed on first view 112). Once placed at the designated location, virtual object incorporator 130 incorporates virtual object 190 at the designated location.
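The drag-and-place interaction above implies translating a touch position in one device's view into a placement that the other device can reproduce. One way to sketch this (the function names below are hypothetical illustrations, not from the patent) is to store the designated location as normalized coordinates so it is independent of view and frame resolution:

```python
def normalize_placement(touch_x, touch_y, view_width, view_height):
    """Convert an absolute touch position inside a view into
    resolution-independent coordinates in the range [0, 1]."""
    return touch_x / view_width, touch_y / view_height

def resolve_placement(norm_x, norm_y, frame_width, frame_height):
    """Map normalized coordinates back to pixel coordinates in the
    receiving device's video frame, which may have a different size."""
    return int(norm_x * frame_width), int(norm_y * frame_height)
```

A placement made on a 640x480 view can then be reproduced on a 1280x720 frame without either device knowing the other's resolution.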
  • virtual object incorporator 130 generates control code.
  • the control code instructs how virtual object 190 is to be incorporated into the video captured at device 200.
  • control code can be transmitted directly to device 200 to instruct device 200 how virtual object 190 is to be incorporated into video displayed at device 200.
  • control code signals or instructs device 200 that virtual object 190 is to be displayed in the video conference.
  • the control code is sent to a server; device 200 then receives the control code from the server.
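The control code described above can be pictured as a small serialized message that names the virtual object and where to draw it, sent either directly to the peer device or via a relay server. The message fields and function names below are hypothetical, chosen for illustration only:

```python
import json

def make_control_code(object_id, x, y, action="incorporate"):
    """Build a control-code message telling the peer (or a relay server)
    how a virtual object is to be incorporated into its video."""
    return json.dumps({
        "action": action,               # e.g. "incorporate", "move", "remove"
        "object_id": object_id,         # identifies the virtual object
        "position": {"x": x, "y": y},   # normalized placement coordinates
    })

def apply_control_code(message):
    """Decode a received control-code message into an instruction dict
    that the rendering side can act on."""
    return json.loads(message)
```

Because only a short instruction travels rather than modified video, this fits the variant where the receiving device composites the object locally.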
  • Figure 2 depicts virtual object 190 incorporated into the video conference.
  • any number of virtual objects can be incorporated into the video conference at any time.
  • five different virtual objects may be concurrently incorporated into the video conference.
  • the term "incorporate" as used herein describes that a virtual object is merely displayed along with some portion of the video conference. As such, the virtual object is displayed concurrently with some portion of the video conference, and the video is understood to comprise the virtual object. However, it is not to be understood that the virtual object is integrated with or made part of the video stream.
  • the virtual object is superimposed as an overlay on a video.
  • a virtual object is concurrently superimposed as an overlay displayed on devices 100 and 200.
  • a virtual object is concurrently overlayed on video displayed in view 112 and view 214 (as depicted in Fig. 2), and a virtual object can be concurrently overlayed on video displayed in view 114 and view 212 (as depicted in Fig. 6).
  • the virtual object is integrated into the bit stream of the video conference.
  • a virtual object is concurrently overlayed on video displayed in view 112 and view 212. Also, the virtual object is displayed in a portion of a display independent of the views at the devices and does not require a two-way video to be active (e.g., a one-way video could be active).
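An overlay in this sense can be sketched as alpha blending a small sprite over the decoded frame at display time, leaving the underlying video untouched. The function below is a hypothetical illustration of that idea on plain 2-D arrays of single-channel pixel values, not an implementation from the patent:

```python
def overlay(frame, sprite, top, left, alpha=1.0):
    """Superimpose a sprite (2-D list of pixel values) onto a video frame
    at (top, left) by alpha blending. The input frame is copied, not
    modified, mirroring the idea that the overlay is displayed along with
    the video rather than integrated into the video stream."""
    out = [row[:] for row in frame]
    for i, sprite_row in enumerate(sprite):
        for j, value in enumerate(sprite_row):
            y, x = top + i, left + j
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                out[y][x] = round(alpha * value + (1 - alpha) * out[y][x])
    return out
```

A real renderer would blend per channel on GPU textures, but the compositing step is the same shape.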
  • transmitter 140 then transmits the video captured at device 200, which now includes virtual object 190, to second device 200 such that the video including virtual object 190 is displayed on display 210.
  • transmitter 140 transmits control code to device 200 (or a server) to instruct device 200 how virtual object 190 is to be incorporated into the video conference.
  • Virtual object manipulator 135 manipulates incorporated virtual object 190.
  • virtual object 190 is manipulated at device 100.
  • user 105 rotates virtual object 190 clockwise.
  • video captured at device 200 (and displayed on device 100 and/or device 200) is augmented such that virtual object 190 spins clockwise.
  • virtual object 190 is manipulated at device 200.
  • virtual object 190 is manipulated (via a virtual object manipulator of device 200) such that it moves from left to right with respect to the head movement of user 205.
  • video captured at device 200 (and displayed on device 100 and/or device 200) is augmented such that virtual object 190 is moved from left to right.
  • virtual object 190 is concurrently manipulated at device 100 and device 200.
  • virtual object 190 is manipulated such that it concurrently moves from left to right with respect to the head movement of user 205 and spins in response to input from user 105.
  • video captured at device 200 and displayed on device 100 and/or device 200 is augmented such that virtual object 190 is moved from left to right while spinning clockwise.
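The concurrent manipulation described above, head tracking from one device combined with a spin gesture from the other, amounts to updating the object's per-frame position and rotation from two input streams. A minimal sketch with a simplified state model (the function name and state fields are hypothetical):

```python
def step_object_state(state, head_dx=0.0, spin_deg=0.0):
    """Advance a virtual object's per-frame state from two concurrent
    inputs: head-tracking translation from one device and a spin gesture
    from the other. Returns a new state dict; angles wrap at 360."""
    return {
        "x": state["x"] + head_dx,                      # follows head movement
        "y": state["y"],
        "angle": (state["angle"] + spin_deg) % 360.0,   # clockwise spin
    }
```

Applying this each frame lets the object move left to right with the head of user 205 while spinning in response to input from user 105.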
  • virtual object 190 is directionally manipulated.
  • user 105 sends a "punch" virtual object (e.g., fist, boxing glove) to user 205.
  • user 105 views the "punch" virtual object going into display 110 and user 205 views the "punch" virtual object coming out of display 210.
  • virtual objects are manipulated in response to a variety of inputs.
  • virtual objects can be manipulated via sounds, gestures, expressions, movements, etc.
  • a virtual object (e.g., a star) can be displayed in response to a particular sound or gesture. For example, in response to a kiss by a user, red lips fly out of the mouth of the user.
  • virtual objects 191 are not displayed on display 110 and/or virtual object bar 192 until there is at least one of a variety of inputs, as described above.
  • a virtual object of a heart is not displayed until there is a double-tapping on a touch screen.
  • virtual objects 191 are geographical-related virtual objects.
  • virtual objects 191 are based on a location of devices 100 and/or 200.
  • virtual objects 191 are related to that location.
  • geographical-related virtual objects based on a location in Hawaii determined from GPS 160, could be, but are not limited to, a surfboard, sun, palm tree, coconut, etc.
  • the determination of location can be provided in a variety of ways.
  • the determination of a location of a device can be based on information provided by a user upon registration, an IP address of the device, or any other method that can be used to determine location.
  • virtual objects 191 are temporal-related virtual objects based on a time of the video conference. For example, if the video conference occurs on or around Christmas, then virtual objects would be Christmas related (e.g., stocking, Christmas tree, candy cane, etc.). In another example, if the video conference occurs in the evening, then virtual objects would be associated with the evening (e.g., moon, stars, pajamas, etc.).
  • virtual objects 191 are culturally-related virtual objects. For example, if user 105 and/or user 205 are located in Canada, then virtual objects 191 could be, but are not limited to, a Canadian flag, hockey puck, curling stone, etc.
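The geographical-, temporal-, and culturally-related selection described above can be pictured as a simple lookup keyed on device location and call time. The catalogue contents, region codes, and function below are hypothetical illustrations, not from the patent:

```python
import datetime

# Hypothetical catalogue mapping context keys to virtual-object names.
GEO_OBJECTS = {
    "HI": ["surfboard", "sun", "palm tree", "coconut"],
    "CA": ["Canadian flag", "hockey puck", "curling stone"],
}
SEASONAL_OBJECTS = {12: ["stocking", "Christmas tree", "candy cane"]}

def suggest_objects(region_code, when):
    """Combine geographical and temporal suggestions for the object bar,
    given a region code (e.g. from GPS 160) and the call date."""
    suggestions = list(GEO_OBJECTS.get(region_code, []))
    suggestions += SEASONAL_OBJECTS.get(when.month, [])
    return suggestions
```

A device in Hawaii calling in December would be offered both the Hawaii set and the Christmas set; an unknown region with no seasonal match yields an empty list.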
  • virtual objects 191 are user-created virtual objects.
  • users 105 and/or 205 manually create the virtual objects then virtual object generator 170 utilizes the creation to generate user-created virtual objects.
  • virtual objects 191 are available and/or accessed based on account status. For example, user 105 has a payable account to have access to virtual objects 191. If user 105 has provided adequate payment to the account, then user 105 is able to access virtual objects 191. In contrast, if user 105 has not provided adequate payment to the account, then user 105 is unable to access virtual objects 191.
  • Holidays can be, but are not limited to, religious holidays (e.g., Christmas, Easter, Yom Kippur, etc.), national holidays (e.g., New Years, Presidents Day, Memorial Day, etc.) or any other observed holiday (official or unofficial).
  • Events or special occasions can be, but are not limited to, birthdays, anniversaries, graduation, weddings, new job, retirement and the like.
  • a user is prompted to utilize a virtual object specifically related to events, holidays, special occasions and the like. For example, on or around the Fourth of July, a user is prompted to select and/or use virtual objects (e.g., fireworks) specifically related to the Fourth of July.
  • the virtual objects are presented to a user and the user is prompted to send the virtual objects to another user in the videoconference. In other words, the virtual objects are incorporated into the video conference.
  • a user can be prompted to send a virtual object to another user where a relationship between the parties is suspected, known, or inferred. For example, a mother is speaking with her son over a videoconference. If the mother/son relationship is suspected, known, or inferred, then the son is prompted to utilize virtual objects (e.g., flowers) specifically related to Mother's Day.
  • the relationship can be determined in a variety of ways. For example, the relationship can be determined based on, but not limited to, surname, location of users, call logs, etc.
  • moreover, the son may be prompted with a message, such as "This appears to be your mother. Is this correct?" As such, if the son responds that he is speaking with his mother, then the son is prompted to utilize virtual objects (e.g., flowers) specifically related to Mother's Day.
  • virtual objects can enhance revenue stream. For example, 100,000 virtual objects are used on Valentine's Day, and there is a $0.50 fee for each virtual object. As a result, $50,000 in fees is accumulated on Valentine's Day.
  • Figures 3-5 depict embodiments of methods 300-500, respectively.
  • methods 300-500 are carried out by processors and electrical components under the control of computer readable and computer executable instructions.
  • the computer readable and computer executable instructions reside, for example, in a data storage medium such as computer usable volatile and non-volatile memory. However, the computer readable and computer executable instructions may reside in any type of computer readable storage medium.
  • methods 300-500 are performed by device 100 and/or device 200, as described in Figures 1 and 2.
  • a virtual object is enabled to be accessed by a first device, wherein the first device is configured for participating in the video conference with a second device.
  • virtual object 190 is enabled to be accessed by device 100, wherein device 100 is configured for participating in a video conference with at least device 200.
  • the virtual object is enabled to be incorporated into a video of the video conference captured at the second device, wherein the video comprising the virtual object is configured to be displayed at the second device.
  • virtual object 190 is enabled to be incorporated into the video captured of user 205 at device 200 and also displayed at device 200.
  • the video comprising the virtual object is enabled to be transmitted from the first device to the second device.
  • the video comprising any one of virtual objects 191 is enabled to be transmitted by transmitter 140 to device 200.
  • concurrent display of the video comprising the virtual object is enabled at the first device and the second device.
  • the video comprising object 190 is enabled to be simultaneously displayed at devices 100 and 200.
  • instructions are received to access a virtual object.
  • for example, in response to user input (e.g., key stroke, cursor movement, a detected touch on a touch screen, etc.), instructions are received to access virtual object 190.
  • the virtual object(s) can be, but is not limited to, a geographical-related virtual object, a temporal-related virtual object, a culturally-related virtual object, and/or a user-created virtual object.
  • the virtual object is incorporated into the video conference, wherein the virtual object is accessed by the first device and configured to be displayed at the second device.
  • virtual object 190 is accessed at device 100 and incorporated, at device 100, into the video captured at device 200.
  • the video comprising incorporated virtual object 190 is configured to be displayed at device 200.
  • user 105 is able to place a virtual object of lips (to signify a kiss) on the cheek of user 205 by designating a location of the lips on the cheek of user 205 in first view 112. Accordingly, the virtual object of lips is incorporated into the video captured at device 200 and displayed on devices 100 and 200. The virtual object of lips can be incorporated for the duration of the video conference or can be incorporated for a designated time duration.
  • in response to user input on a touch screen display, the virtual object is incorporated into the video conference. For example, in response to user input on a touch screen display of device 100, the virtual object is incorporated into the video conference.
  • a video of the video conference comprising the incorporated virtual object is transmitted to the second device. For example, video that includes the virtual object is transmitted to device 200 via transmitter 140.
  • a video of the video conference captured at the second device is displayed at the first device.
  • video of user 205 at device 200 is captured at device 200 and displayed at device 100.
  • the virtual object incorporated into the video conference is manipulated at the second device.
  • user 205 interacts with virtual object 190 displayed in second view 214 by rotating virtual object 190.
  • the virtual object incorporated into the video conference is manipulated at the first device. For example, user 105 interacts with virtual object 190 displayed in first view 112 by reducing the size of virtual object 190.
  • the virtual object incorporated into the video conference is manipulated.
  • device 100 is a hand-held device (e.g., cell phone) with a touch screen display. Accordingly, in response to user 105 touching the touch screen display, the size of virtual object 190 is reduced.
  • the virtual object incorporated into the video conference is cooperatively manipulated at the first device and the second device. For example, user 205 moves his head from left to right such that virtual object 190 tracks with the head movement. Also, user 105 cooperatively rotates virtual object 190 while virtual object 190 is tracking with the head movement of user 205.
  • a video of the video conference captured at the second device and the virtual object are concurrently displayed at the first device.
  • video captured at second device 200, including incorporated virtual object 190, is concurrently displayed on first view 112.
  • a first video captured at the first device and a second video captured at the second device are concurrently displayed at the first device.
  • video captured at device 200 is displayed on first view 112 and video captured at device 100 is concurrently displayed on second view 114.
  • a virtual object is received at the first device, wherein the virtual object is configured to augment the video conference.
  • the virtual object(s) can be, but is not limited to, a geographical-related virtual object, a temporal-related virtual object, a culturally-related virtual object, and/or a user-created virtual object.
  • the virtual object is incorporated into the video captured at the second device.
  • virtual object 190 is incorporated into the video captured at device 200, such that virtual object 190 is placed above the head of user 205 and tracks with movements of the head of user 205.
  • in response to user input at a touch screen display, the virtual object is incorporated into the video captured at the second device.
  • any number of virtual objects are incorporated into the video captured at device 200.
  • the video comprising the virtual object is enabled to be displayed at the second device.
  • the video comprising the virtual object is transmitted to the second device.
  • the virtual object incorporated into the video captured at the second device is manipulated at the second device. For example, user 205 changes the color of virtual object 190, displayed in second view 214, to red.
  • the virtual object incorporated into the video captured at the second device is manipulated at the first device.
  • user 105 changes the location of virtual object 190 from the top of the head of user 205 to the left hand of user 205.
  • the virtual object incorporated into the video captured at the second device is manipulated.
  • user 105 changes virtual object 190 from a star (as depicted) to a light bulb (not shown).
  • the virtual object incorporated into the video captured at the second device is cooperatively manipulated at the first device and the second device.
  • user 205 manipulates virtual object 190 in second view 214 and user 105 cooperatively manipulates virtual object 190 in first view 112.
  • the virtual object and the video captured at the second device are concurrently displayed at the first device.
  • a video captured at the first device and the video captured at the second device are concurrently displayed at the first device.
  • Figure 6 depicts an embodiment of devices 100 and 200 participating in a video conference with one another. Devices 100 and 200 operate in a similar fashion, as described above.
  • video camera 150 captures video at device 100.
  • video camera 150 captures video of user 105 of device 100.
  • Video camera 250 captures video at device 200.
  • video camera 250 captures video of user 205, who is the user of device 200.
  • the video captured at device 100 is displayed on display 1 10 of device 100.
  • a video of user 105 is displayed on a second view 114 displayed on display 110.
  • the video of user 205 is displayed on first view 112 on display 110.
  • Virtual object receiver 120 receives virtual object 190 for augmenting the video conference between users 105 and 205 participating in the video conference.
  • Virtual objects 191 are displayed on display 110 for viewing by user 105.
  • virtual objects 191 are displayed on virtual object bar 192.
  • virtual object bar 192 is overlayed with first view 112.
  • virtual object bar 192 is displayed concurrently with first view 112 and/or second view 114.
  • Virtual object incorporator 130 incorporates virtual object 190 into the video conference.
  • virtual object 190 is incorporated into the video captured at device 100.
  • virtual object 190 is incorporated above the head of user 105. Therefore, as depicted, video captured at device 100 incorporates virtual object 190 and the augmented video is displayed at least at device 200. Also, the augmented video with incorporated virtual object 190 is concurrently displayed at device 100.
  • user 105 selects virtual object 190 in virtual object bar 192 and drags virtual object 190 to and places it at a location designated by user 105 (e.g., above the head of user 105, as depicted). Once placed at the designated location, virtual object incorporator 130 incorporates virtual object 190 at the designated location.
  • Transmitter 140 then transmits the video captured at device 100, which now includes virtual object 190, to second device 200 such that the video including virtual object 190 is displayed on display 210.
  • a virtual object manipulator of device 200 manipulates incorporated virtual object 190. For example, in response to user input of user 205 at a touch screen, user 205 rotates virtual object 190 clockwise. Accordingly, video captured at device 100 (and displayed on device 200 and/or device 100) is augmented such that virtual object 190 spins clockwise.
  • virtual object 190 is manipulated at device 100.
  • virtual object 190 is manipulated (via virtual object manipulator 135) such that it moves from left to right with respect to the head movement of user 105. Accordingly, video captured at device 100 (and displayed on device 100 and/or device 200) is augmented such that virtual object 190 is moved from left to right.
  • virtual object 190 is concurrently manipulated at device 100 and device 200.
  • FIG. 7-9 depict embodiments of methods 700-900, respectively. In various embodiments, methods 700-900 are carried out by processors and electrical components under the control of computer readable and computer executable instructions.
  • the computer readable and computer executable instructions reside, for example, in a data storage medium such as computer usable volatile and non-volatile memory. However, the computer readable and computer executable instructions may reside in any type of computer readable storage medium. In some embodiments, methods 700-900 are performed by device 100 and/or device 200, as described in Figures 1 and 6.
  • a virtual object is enabled to be accessed by a first device, wherein the first device is configured for participating in the video conference with a second device.
  • virtual object 190 is enabled to be accessed by device 100, wherein device 100 is configured for participating in a video conference with at least device 200.
  • the virtual object is enabled to be incorporated into a video of the video conference captured at the first device, wherein the video comprising the virtual object is configured to be displayed at the second device.
  • virtual object 190 is enabled to be incorporated into the video captured at device 100 of user 105 and displayed at devices 100 and 200.
  • the video comprising the virtual object is enabled to be transmitted from the first device to the second device.
  • the video comprising any one of virtual objects 191 is enabled to be transmitted by transmitter 140 to device 200.
  • concurrent display of the video comprising said virtual object is enabled at the first device and the second device.
  • the video comprising object 190 is enabled to be simultaneously displayed at devices 100 and 200.
  • cooperative manipulation of the incorporated virtual object at the first device and the second device is enabled.
  • user 205 interacts with virtual object 190 in first view 212 and user 105 also cooperatively (or simultaneously) interacts with virtual object 190 in second view 114.
  • instructions are received to access a virtual object. For example, in response to user input at a touch screen display, instructions are received to access virtual object 190.
  • the virtual object is incorporated into the video conference, wherein the virtual object is to be manipulated by a user of the second device.
  • virtual object 190 is accessed at device 100 and incorporated, at device 100, into the video captured at device 100.
  • the video comprising incorporated virtual object 190 is configured to be displayed and manipulated at device 200.
  • the virtual object is incorporated into the video conference.
  • a video of the video conference comprising the incorporated virtual object is transmitted to the second device.
  • the video conference captured at the first device is displayed at the second device.
  • video of user 105 at device 100 is captured at device 100 and displayed at device 200.
  • the virtual object incorporated into the video conference is manipulated at the second device.
  • user 205 interacts with virtual object 190 displayed in first view 212 by rotating virtual object 190.
  • the virtual object incorporated into the video conference is manipulated at a hand-held mobile device.
  • device 200 is a hand-held device (e.g., cell phone) with a touch screen display. Accordingly, in response to user 205 touching the touch screen display, the size of virtual object 190 is reduced.
  • the virtual object incorporated into the video conference is manipulated at the first device. For example, user 105 interacts with virtual object 190 displayed in second view 114 by reducing the size of virtual object 190.
  • the virtual object incorporated into the video conference is cooperatively manipulated at the first device and the second device. For example, user 105 moves his head from left to right such that virtual object 190 tracks with the head movement. Also, user 205 cooperatively rotates virtual object 190 while virtual object 190 is tracking with the head movement of user 105.
  • a video of the video conference captured at the second device and the virtual object are concurrently displayed at the first device.
  • video captured at second device 200 is displayed on first view 112 and video captured at device 100 including incorporated virtual object 190 is concurrently displayed on second view 114.
  • a first video captured at the first device and a second video captured at the second device are concurrently displayed at the first device.
  • a virtual object is received at the first device, wherein the virtual object is configured to augment the video conference.
  • the virtual object(s) can be, but is not limited to, a geographical-related virtual object, a temporal-related virtual object, a culturally-related virtual object, and/or a user-created virtual object.
  • the virtual object is incorporated into the video captured at the first device.
  • virtual object 190 is incorporated into the video captured at device 100, such that virtual object 190 is placed above the head of user 105 and tracks with movements of the head of user 105.
  • the virtual object is incorporated into the video captured at the device.
  • any number of virtual objects are incorporated into the video captured at device 100.
  • the video comprising the virtual object is enabled to be displayed at the second device, such that the virtual object is manipulated at the second device.
  • the video comprising the virtual object is transmitted to the second device.
  • the virtual object incorporated into the video captured at the first device is manipulated at the second device. For example, user 205 changes the color of virtual object 190, displayed in first view 212, to red.
  • the virtual object incorporated into the video captured at the first device is manipulated.
  • user 205 changes virtual object 190 from a star (as depicted) to a light bulb (not shown).
  • the virtual object incorporated into the video captured at the first device is manipulated at the first device.
  • user 105 changes the location of virtual object 190 from the top of the head of user 105 to the left hand of user 105.
  • the virtual object incorporated into the video captured at the first device is cooperatively manipulated at the first device and the second device.
  • user 205 manipulates virtual object 190 in first view 212 and user 105 cooperatively manipulates virtual object 190 in second view 114.
  • the virtual object and the video captured at the first device are concurrently displayed at the first device.
  • a video captured at the first device and the video captured at the second device are concurrently displayed at the first device.
  • a decision step could be carried out by a decision-making unit in a processor by implementing a decision algorithm.
  • this decision-making unit can exist physically or effectively, for example in a computer's processor when carrying out the aforesaid decision algorithm.
  • a computer-implemented method for augmenting a video conference between a first device and a second device comprising:
  • a tangible computer-readable storage medium having instructions stored thereon which, when executed, cause a computer processor to perform a method of:

Abstract

A computer-implemented method for augmenting a video conference between a first device and a second device. The method includes receiving a virtual object at the first device, wherein the virtual object is configured to augment the video conference and wherein the virtual object is specifically related to an event. The method also includes incorporating said virtual object into said video conference.

Description

AUGMENTING A VIDEO CONFERENCE
RELATED U.S. APPLICATION
[0001] The present Application is a Continuation-in-Part of pending U.S. Patent application number 13/025,943, Attorney Docket No. TNGO-008, entitled "AUGMENTING A VIDEO CONFERENCE" with the filing date of February 11, 2011, assigned to the assignee of the present invention, and which is herein incorporated by reference in its entirety.
BACKGROUND
[0002] Participants in a video conference communicate with one another by transmitting audio/video signals to one another. For example, participants are able to interact via two-way video and audio transmissions simultaneously. However, the participants may not be able to completely articulate what they are attempting to communicate to one another based solely on the audio captured by microphones and the video signals captured by video cameras.
SUMMARY
[0003] In summary, this writing presents a computer-implemented method for augmenting a video conference between a first device and a second device. The method includes receiving a virtual object at the first device, wherein the virtual object is configured to augment the video conference and wherein the virtual object is specifically related to an event. The method also includes incorporating said virtual object into said video conference.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] Figs. 1, 2 and 6 illustrate examples of devices, in accordance with embodiments of the present invention.
[0005] Figs. 3 and 7 illustrate embodiments of a method for providing an augmented video conference.
[0006] Figs. 4, 5, 8 and 9 illustrate embodiments of a method for augmenting a video conference.
[0007] The drawings referred to in this description should be understood as not being drawn to scale except if specifically noted.
DESCRIPTION OF EMBODIMENTS
[0008] Reference will now be made in detail to embodiments of the present technology, examples of which are illustrated in the accompanying drawings. While the technology will be described in conjunction with various embodiment(s), it will be understood that they are not intended to limit the present technology to these embodiments. On the contrary, the present technology is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the various embodiments as defined by the appended claims.
[0009] Furthermore, in the following description of embodiments, numerous specific details are set forth in order to provide a thorough understanding of the present technology. However, the present technology may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily obscure aspects of the present embodiments.
[0010] Figure 1 depicts an embodiment of device 100. Device 100 is configured for participation in a video conference. Figure 2 depicts devices 100 and 200
participating in a video conference. In general, video conferencing allows two or more locations to interact via two-way video and audio transmissions simultaneously.
[0011] The discussion below will first describe the components of device 100. The discussion will then describe the functionality of the components of device 100 during a video conference between devices 100 and 200. Devices 100 and 200 are any communication devices (e.g., laptop, desktop, etc.) capable of participating in a video conference. In various embodiments, device 100 is a hand-held mobile device, such as a smart phone, a personal digital assistant (PDA), and the like.
[0012] Moreover, for clarity and brevity, the discussion will focus on the components and functionality of device 100. However, device 200 operates in a similar fashion as device 100. In one embodiment, device 200 is the same as device 100 and includes the same components as device 100.
[0013] Device 100 includes display 110, virtual object receiver 120, virtual object incorporator 130, transmitter 140, camera 150, microphone 152 and speaker 154. Device 100 optionally includes global positioning system 160 and virtual object generator 170.
[0014] Display 110 is configured for displaying video captured at device 200. In another embodiment, display 110 is further configured for displaying video captured at device 100.
[0015] Virtual object receiver 120 is configured to access a virtual object. A virtual object is configured for augmenting a video conference, which will be described in detail below.
[0016] Virtual object incorporator 130 is configured for incorporating the virtual object into the video conference. For example, virtual object incorporator 130 is configured for incorporating the virtual object into a video captured at device 100 and/or device 200.
[0017] Transmitter 140 is for transmitting data (e.g., virtual object control code).
[0018] Virtual object manipulator 135 is configured to enable manipulation of the virtual object in the video conference.
[0019] Camera 150 is for capturing video at device 100. Microphone 152 is for capturing audio at device 100. Speaker 154 is for generating an audible signal at device 100.
[0020] Global positioning system 160 is for determining a location of device 100.
[0021] Virtual object generator 170 is for generating a virtual object.
[0022] Referring now to Figure 2, devices 100 and 200 are participating in a video conference with one another. In various embodiments, more than two devices participate in a video conference with one another.
[0023] During the video conference, video camera 250 captures video at device 200. For example, video camera 250 captures video of user 205 of device 200.
[0024] Video camera 150 captures video at device 100. For example, video camera 150 captures video of user 105. It should be appreciated that video cameras 150 and 250 capture any objects that are within the respective viewing ranges of cameras 150 and 250.
[0025] Microphone 152 captures audio signals corresponding to the captured video signal at device 100. Similarly, a microphone of device 200 captures audio signals corresponding to the captured video signal at device 200.
[0026] The video captured at device 200 is transmitted to and displayed on display 110 of device 100. For example, a video of user 205 is displayed on a first view 112 of display 110. Moreover, the video of user 205 is displayed on a second view 214 of display 210.
[0027] The video captured at device 100 is transmitted to and displayed on display 210 of device 200. For example, a video of user 105 is displayed on first view 212 of display 210. Moreover, the video of user 105 is displayed on a second view 114 of display 110.
[0028] In one embodiment, the audio signals captured at devices 100 and 200 are incorporated into the captured video. In another embodiment, the audio signals are transmitted separate from the transmitted video.
[0029] As depicted, first view 112 is the primary view displayed on display 110 and second view 114 is the smaller secondary view displayed on display 110. In various embodiments, the sizes of both first view 112 and second view 114 are adjustable. For example, second view 114 can be enlarged to be the primary view and view 112 can be diminished in size to be a secondary view. Moreover, either one of views 112 and 114 can be closed or fully diminished such that it is not viewable.
[0030] Virtual object receiver 120 receives virtual object 190 for augmenting the video conference. Virtual objects can be received from a server or device 200.
Virtual objects can be received at different times. For example, virtual objects can be received when an augmenting application is downloaded onto device 100, during login, or in real-time, when the virtual objects are instructed to be incorporated into the video conference.
[0031] Virtual objects 191 that are depicted in Figs. 2 and 6 (e.g., star, palm tree, flower, rain cloud) are merely a few of any number of examples of virtual objects. It should be appreciated that a virtual object can be any object that is capable of augmenting a video conference. In other words, a virtual object can be any object that is able to supplement the communication between participants in a video conference. For example, virtual objects can be, but are not limited to, a kiss, heart, emoticon, high-five, background (photo-booth type of effects), color space changes, and/or image process changes (e.g., thinning, fattening).
[0032] It should also be appreciated that a virtual object is not limited to a viewable virtual object. For example, a virtual object can be one of a plurality of sounds.
[0033] In one embodiment, virtual objects 191 are displayed on display 110 for viewing by user 105. For example, virtual objects 191 are displayed on virtual object bar 192. In one embodiment, virtual object bar 192 is overlaid with first view 112. In another embodiment, virtual object bar 192 is displayed concurrently with first view 112 and/or second view 114.
[0034] In various embodiments, virtual object bar 192 is displayed in response to user input, such as, but not limited to, a keystroke, cursor movement, a detected touch on a touch screen, and designated movement by a user (e.g., expressions, winking, blowing a kiss, hand gesture and the like).
[0035] Virtual object incorporator 130 facilitates in incorporating virtual object 190 into the video conference. In one embodiment, at device 100, virtual object incorporator 130 incorporates virtual object 190 into the video captured at device 200. For example, virtual object 190 is incorporated above the head of user 205.
Therefore, as depicted, video captured at device 200 is incorporated with object 190 and the augmented video is displayed at least at device 200. Also, the augmented video with incorporated virtual object 190 is displayed at device 100.
[0036] In one embodiment, user 105 selects virtual object 190 in virtual object bar 192 and drags virtual object 190 to and places it at a location designated by user 105 (e.g., above the head of user 205, as displayed on first view 112). Once placed at the designated location, virtual object incorporator 130 incorporates virtual object 190 at the designated location.
[0037] In another embodiment, virtual object incorporator 130 generates control code. The control code instructs how virtual object 190 is to be incorporated into the video captured at device 200.
[0038] For example, control code can be transmitted directly to device 200 to instruct device 200 how virtual object 190 is to be incorporated into video displayed at device 200. In such an example, the control code signals or instructs device 200 that virtual object 190 is to be displayed in the video conference. In another example, the control code is sent to a server; device 200 then receives the control code from the server.
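The control-code approach above can be sketched as a small serialized message that one device sends and the other decodes. This is a hedged illustration only: the patent does not specify a wire format, so the use of JSON and the field names (`object_id`, `position`, `scale`, `rotation_deg`) are assumptions.

```python
import json

def make_control_code(object_id, x, y, scale=1.0, rotation_deg=0.0):
    """Build a control-code message telling a remote device how a
    virtual object is to be incorporated into its displayed video."""
    return json.dumps({
        "type": "incorporate_virtual_object",
        "object_id": object_id,          # e.g., an identifier for virtual object 190
        "position": {"x": x, "y": y},    # normalized display coordinates
        "scale": scale,
        "rotation_deg": rotation_deg,
    })

def apply_control_code(message):
    """Decode a control-code message on the receiving device (or server)."""
    msg = json.loads(message)
    if msg["type"] != "incorporate_virtual_object":
        raise ValueError("unknown control code")
    return msg["object_id"], msg["position"], msg["scale"]
```

In this sketch, device 100 would transmit such a message (directly or via a server), and device 200 would decode it and render the named object at the indicated position, matching the two delivery paths the text describes.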
[0039] Figure 2 depicts virtual object 190 incorporated into the video conference. However, it should be appreciated that any number of virtual objects can be incorporated into the video conference at any time. For example, five different virtual objects may be concurrently incorporated into the video conference.
[0040] It should be appreciated that the term "incorporate," as used herein, describes a virtual object that is merely displayed along with some portion of the video conference. As such, the virtual object is simply displayed concurrently with some portion of the video conference; accordingly, the video is understood to be incorporated with, and to comprise, the virtual object. However, it is not understood that the virtual object is integrated with or made part of the video stream.
[0041] In one embodiment, the virtual object is superimposed as an overlay on a video. As such, a virtual object is concurrently superimposed as an overlay displayed on devices 100 and 200. For example, a virtual object is concurrently overlaid on video displayed in view 112 and view 214 (as depicted in Fig. 2), and a virtual object can be concurrently overlaid on video displayed in view 114 and view 212 (as depicted in Fig. 6).
[0042] In another embodiment, the virtual object is integrated into the bit stream of the video conference.
[0043] In another example, a virtual object is concurrently overlaid on video displayed in view 112 and view 212. Also, the virtual object is displayed in a portion of a display independent of the views at the devices and does not require a two-way video to be active (e.g., a one-way video could be active).
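The overlay embodiment amounts to alpha-compositing the virtual object over each video frame at display time, leaving the underlying stream untouched. A minimal sketch using NumPy arrays as frames (an assumption; the patent prescribes no particular image representation):

```python
import numpy as np

def overlay_virtual_object(frame, overlay, top, left):
    """Alpha-composite an RGBA overlay onto an RGB video frame in place.

    frame:   (H, W, 3) uint8 video frame
    overlay: (h, w, 4) uint8 virtual object with an alpha channel
    top, left: where the overlay's top-left corner lands in the frame
    (this sketch assumes the overlay fits entirely within the frame)
    """
    h, w = overlay.shape[:2]
    region = frame[top:top + h, left:left + w].astype(float)
    rgb = overlay[..., :3].astype(float)
    alpha = overlay[..., 3:4].astype(float) / 255.0   # broadcast over channels
    frame[top:top + h, left:left + w] = (
        alpha * rgb + (1 - alpha) * region
    ).astype(np.uint8)
    return frame
```

Because the blend happens per displayed frame, the same overlay can be drawn concurrently at both devices without the virtual object ever being written into the transmitted bit stream.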
[0044] It should be noted that the various embodiments described herein can also be used in combination with one another. That is, one described embodiment can be used in combination with one or more other described embodiments.
[0045] In one embodiment, transmitter 140 then transmits the video captured at device 200, which now includes virtual object 190, to second device 200 such that the video including virtual object 190 is displayed on display 210. In another embodiment, transmitter 140 transmits control code to device 200 (or a server) to instruct device 200 how virtual object 190 is to be incorporated into the video conference.
[0046] Virtual object manipulator 135 manipulates incorporated virtual object 190. In one embodiment, virtual object 190 is manipulated at device 100. For example, in response to user input at a touch screen, user 105 rotates virtual object 190 clockwise. Accordingly, video captured at device 200 (and displayed on device 100 and/or device 200) is augmented such that virtual object 190 spins clockwise.
[0047] In another embodiment, virtual object 190 is manipulated at device 200. For example, in response to user 205 moving his head from left to right, virtual object 190 is manipulated (via a virtual object manipulator of device 200) such that it moves from left to right with respect to the head movement of user 205. Accordingly, video captured at device 200 (and displayed on device 100 and/or device 200) is augmented such that virtual object 190 is moved from left to right.
[0048] In a further embodiment, virtual object 190 is concurrently manipulated at device 100 and device 200. For example, in response to user 205 moving his head from left to right and user 105 spinning virtual object 190 (as described above), virtual object 190 is manipulated such that it concurrently moves from left to right with respect to the head movement of user 205 and spins in response to input from user 105. Accordingly, video captured at device 200 (and displayed on device 100 and/or device 200) is augmented such that virtual object 190 is moved from left to right while spinning clockwise.
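The concurrent manipulation described above can be modeled as independent updates to a shared object state: head tracking at one device drives position while touch input at the other drives rotation. A sketch under that assumption (the patent does not describe an internal state model, so the `ObjectState` fields are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class ObjectState:
    x: float = 0.0        # horizontal position (e.g., tracks user 205's head)
    y: float = 0.0        # vertical position
    angle: float = 0.0    # rotation in degrees (e.g., driven by user 105)
    scale: float = 1.0

def apply_head_tracking(state, head_dx, head_dy):
    """Device 200: the virtual object follows the user's head movement."""
    state.x += head_dx
    state.y += head_dy
    return state

def apply_rotation(state, degrees):
    """Device 100: the user spins the virtual object."""
    state.angle = (state.angle + degrees) % 360
    return state
```

Because the two manipulations touch disjoint fields, they compose in either order each frame, which is one simple way the object could "move from left to right while spinning clockwise."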
[0049] In a further embodiment, virtual object 190 is directionally manipulated. For example, user 105 sends a "punch" virtual object (e.g., fist, boxing glove) to user 205. Accordingly, user 105 views the "punch" virtual object going into display 110 and user 205 views the "punch" virtual object coming out of display 210.
[0050] It should be appreciated that virtual objects are manipulated in response to a variety of inputs. For example, virtual objects can be manipulated via sounds, gestures, expressions, movements, etc. Various examples are: in response to a wink of a user, a virtual object (e.g., a star) comes out of the eye of the user; and in response to a kiss by a user, red lips fly out of the mouth of the user.
[0051] In one embodiment, virtual objects 191 are not displayed on display 110 and/or virtual object bar 192 until there is at least one of a variety of inputs, as described above. For example, a virtual object of a heart is not displayed until there is a double-tapping on a touch screen.
[0052] Any number of virtual objects can be accessed and/or selected to be incorporated into the video conference. In one embodiment, virtual objects 191 are geographical-related virtual objects. For example, virtual objects 191 are based on a location of devices 100 and/or 200.
[0053] In particular, if device 100 is located in Hawaii, then virtual objects 191 are related to that location. For example, geographical-related virtual objects, based on a location in Hawaii determined from GPS 160, could be, but are not limited to, a surfboard, sun, palm tree, coconut, etc.
[0054] It should be appreciated that the determination of location can be provided in a variety of ways. For example, the determination of a location of a device can be based on information provided by a user upon registrations, an IP address of the device or any other method that can be used to determine location.
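Selecting geographical-related virtual objects then reduces to resolving the best available location source (GPS, user-provided registration data, IP geolocation) and looking up an object set. The region keys, catalog, and preference order below are illustrative assumptions:

```python
# Illustrative catalog; a real one would likely be served from a server.
GEO_OBJECTS = {
    "hawaii": ["surfboard", "sun", "palm tree", "coconut"],
    "canada": ["canadian flag", "hockey puck", "curling stone"],
}

def resolve_location(gps_fix=None, registration_region=None, ip_region=None):
    """Prefer a GPS fix, then registration data, then IP geolocation."""
    return gps_fix or registration_region or ip_region

def geographical_virtual_objects(region):
    """Return the virtual objects associated with a resolved region."""
    return GEO_OBJECTS.get(region, [])
```

For a device 100 located in Hawaii, `geographical_virtual_objects(resolve_location(gps_fix="hawaii"))` would yield the surfboard/sun/palm-tree set described in the text.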
[0055] In another embodiment, virtual objects 191 are temporal-related virtual objects based on a time of the video conference. For example, if the video conference occurs on or around Christmas, then virtual objects would be Christmas related (e.g., stocking, Christmas tree, candy cane, etc.). In another example, if the video conference occurs in the evening, then virtual objects would be associated with the evening (e.g., moon, stars, pajamas, etc.).
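Temporal-related selection can be sketched the same way, keyed on the date and hour of the conference. The specific dates and the evening cutoff below are illustrative assumptions, not part of the described method:

```python
import datetime

# Illustrative date-to-object mapping.
HOLIDAY_OBJECTS = {
    (12, 25): ["stocking", "christmas tree", "candy cane"],   # Christmas
    (3, 17): ["shamrock", "pot of gold", "leprechaun"],       # St. Patrick's Day
}

def temporal_virtual_objects(when):
    """Return virtual objects tied to the date/time of the video conference."""
    objects = list(HOLIDAY_OBJECTS.get((when.month, when.day), []))
    if when.hour >= 20 or when.hour < 6:   # assumed evening/night window
        objects += ["moon", "stars", "pajamas"]
    return objects
```

A conference on Christmas Day would surface the Christmas set; one held late in the evening would additionally surface the moon/stars/pajamas set.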
[0056] In a further embodiment, virtual objects 191 are culturally-related virtual objects. For example, if user 105 and/or user 205 are located in Canada, then virtual objects 191 could be, but are not limited to, a Canadian flag, hockey puck, curling stone, etc.
[0057] In another embodiment, virtual objects 191 are user-created virtual objects. For example, users 105 and/or 205 manually create the virtual objects; virtual object generator 170 then utilizes the creations to generate user-created virtual objects.
[0058] In yet another embodiment, virtual objects 191 are available and/or accessed based on account status. For example, user 105 has a payable account to have access to virtual objects 191. If user 105 has provided adequate payment to the account, then user 105 is able to access virtual objects 191. In contrast, if user 105 has not provided adequate payment to the account, then user 105 is unable to access virtual objects 191.
[0059] Moreover, use and selection of virtual objects can be specifically related to events, holidays, special occasions and the like. Holidays can be, but are not limited to, religious holidays (e.g., Christmas, Easter, Yom Kippur, etc.), national holidays (e.g., New Years, Presidents Day, Memorial Day, etc.) or any other observed holiday (official or unofficial). Events or special occasions can be, but are not limited to, birthdays, anniversaries, graduation, weddings, new job, retirement and the like.
[0060] For example, on or around Thanksgiving, virtual objects of a turkey, pumpkin pie, a Pilgrim and the like are selected and/or used. In another example, on or around St. Patrick's Day, virtual objects of a shamrock, a pot of gold, and a leprechaun are selected and/or used. In a further example, on or around Easter, virtual objects of an Easter bunny and Easter eggs are selected and/or used.
[0061] In one embodiment, a user is prompted to utilize a virtual object specifically related to events, holidays, special occasions and the like. For example, on or around the Fourth of July, a user is prompted to select and/or use virtual objects (e.g., fireworks) specifically related to the Fourth of July. In particular, the virtual objects are presented to a user and the user is prompted to send the virtual objects to another user in the video conference. In other words, the virtual objects are incorporated into the video conference.
[0062] In another embodiment, a user can be prompted to send a virtual object to another user where a relationship between the parties is suspected, known, or inferred. For example, a mother is speaking with her son over a video conference. If the mother/son relationship is suspected, known, or inferred, then the son is prompted to utilize virtual objects (e.g., flowers) specifically related to Mother's Day.
[0063] The relationship can be determined in a variety of ways. For example, the relationship can be determined based on, but not limited to, surname, location of users, call logs, etc.
[0064] Moreover, the son may be prompted with a message, such as "This appears to be your mother. Is this correct?" As such, if the son responds that he is speaking with his mother, then the son is prompted to utilize virtual objects (e.g., flowers) specifically related to Mother's Day.
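The surname/call-log inference described above might be sketched as a simple heuristic score. The signals, weights, and threshold here are illustrative assumptions, not the patent's method:

```python
def suspect_relationship(surname_a, surname_b, call_count, shared_location=False):
    """Heuristically decide whether two users are likely related, so that
    one can be prompted with occasion-specific virtual objects."""
    score = 0
    if surname_a and surname_a.lower() == surname_b.lower():
        score += 2                      # shared surname: strong signal
    if call_count >= 5:
        score += 1                      # frequent calls in the call logs
    if shared_location:
        score += 1                      # users associated with one location
    return score >= 2                   # assumed prompting threshold

def prompt_for_occasion(is_related, occasion_objects):
    """If a relationship is suspected, build the confirmation prompt."""
    if is_related:
        return f"Send {', '.join(occasion_objects)}?"
    return None
```

On Mother's Day, a positive inference would trigger a prompt such as "Send flowers?", matching the confirmation-style flow ("This appears to be your mother. Is this correct?") described in the text.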
[0065] It should also be appreciated that virtual objects can enhance a revenue stream. For example, 100,000 virtual objects are used on Valentine's Day, and there is a $0.50 fee for each virtual object. As a result, $50,000 in fees is accumulated on Valentine's Day.
[0066] Figures 3-5 depict embodiments of methods 300-500, respectively. In various embodiments, methods 300-500 are carried out by processors and electrical components under the control of computer readable and computer executable instructions. The computer readable and computer executable instructions reside, for example, in a data storage medium such as computer usable volatile and non-volatile memory. However, the computer readable and computer executable instructions may reside in any type of computer readable storage medium. In some embodiments, methods 300-500 are performed by device 100 and/or device 200, as described in Figures 1 and 2.
[0067] Now referring to Figure 3, at 310 of method 300, a virtual object is enabled to be accessed by a first device, wherein the first device is configured for participating in the video conference with a second device. For example, virtual object 190 is enabled to be accessed by device 100, wherein device 100 is configured for participating in a video conference with at least device 200.
[0068] At 320, the virtual object is enabled to be incorporated into a video of the video conference captured at the second device, wherein the video comprising the virtual object is configured to be displayed at the second device. For example, virtual object 190 is enabled to be incorporated into the video captured of user 205 at device 200 and also displayed at device 200.
[0069] At 330, the video comprising the virtual object is enabled to be transmitted from the first device to the second device. For example, the video comprising any one of virtual objects 191 is enabled to be transmitted by transmitter 140 to device 200.
[0070] At 340, concurrent display of the video comprising the virtual object is enabled at the first device and the second device. For example, the video comprising object 190 is enabled to be simultaneously displayed at devices 100 and 200.
[0071] At 350, cooperative manipulation of the incorporated virtual object at the first device and the second device is enabled. For example, user 205 interacts with virtual object 190 in second view 214 and user 105 also cooperatively interacts with virtual object 190 in first view 112.
[0072] Now referring to Figure 4, at 410 of method 400, instructions are received to access a virtual object. For example, in response to user input (e.g., a keystroke, cursor movement, a detected touch on a touch screen, etc.), instructions are received to access virtual object 190. In various embodiments, the virtual object(s) can be, but is not limited to, a geographical-related virtual object, a temporal-related virtual object, a culturally-related virtual object, and/or a user-created virtual object.
[0073] At 420, the virtual object is incorporated into the video conference, wherein the virtual object is accessed by the first device and configured to be displayed at the second device. For example, virtual object 190 is accessed at device 100 and incorporated, at device 100, into the video captured at device 200. The video comprising incorporated virtual object 190 is configured to be displayed at device 200.
[0074] In another example, user 105 is able to place a virtual object of lips (to signify a kiss) on the cheek of user 205 by designating a location of the lips on the cheek of user 205 in first view 112. Accordingly, the virtual object of lips is incorporated into the video captured at device 200 and displayed on devices 100 and 200. The virtual object of lips can be incorporated for the duration of the video conference or can be incorporated for a designated time duration.
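Incorporation "for a designated time duration" can be sketched by stamping each incorporated object with an optional expiry and filtering per displayed frame. The class and field names are hypothetical:

```python
import time

class IncorporatedObject:
    """A virtual object incorporated into the video, optionally time-limited."""
    def __init__(self, name, duration=None, now=None):
        self.name = name
        start = time.time() if now is None else now
        # duration=None: the object persists for the whole video conference
        self.expires_at = None if duration is None else start + duration

    def active(self, now=None):
        t = time.time() if now is None else now
        return self.expires_at is None or t < self.expires_at

def visible_objects(objects, now=None):
    """The objects still to be composited into the current frame."""
    return [o for o in objects if o.active(now)]
```

A "lips" object created with `duration=5` would disappear from the composited video five seconds later, while an object created without a duration would persist until the conference ends.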
[0075] In one embodiment, at 422, in response to user input on a touch screen display, the virtual object is incorporated into the video conference. For example, in response to user input on a touch screen display of device 100, the virtual object is incorporated into the video conference.
[0076] At 430, a video of the video conference comprising the incorporated virtual object is transmitted to the second device. For example, video that includes the virtual object is transmitted to device 200 via transmitter 140.
[0077] At 440, a video of the video conference captured at the second device is displayed at the first device. For example, video of user 205 at device 200 is captured at device 200 and displayed at device 100.
[0078] At 450, the virtual object incorporated into the video conference is manipulated at the second device. For example, user 205 interacts with virtual object 190 displayed in second view 214 by rotating virtual object 190.
[0079] At 460, the virtual object incorporated into the video conference is manipulated at the first device. For example, user 105 interacts with virtual object 190 displayed in first view 112 by reducing the size of virtual object 190.
[0080] In one embodiment, at 465, in response to user input received at a touch screen display of a hand-held device, the virtual object incorporated into the video conference is manipulated. For example, device 100 is a hand-held device (e.g., cell phone) with a touch screen display. Accordingly, in response to user 105 touching the touch screen display, the size of virtual object 190 is reduced.
[0081] At 470, the virtual object incorporated into the video conference is cooperatively manipulated at the first device and the second device. For example, user 205 moves his head from left to right such that virtual object 190 tracks with the head movement. Also, user 105 cooperatively rotates virtual object 190 while virtual object 190 is tracking with the head movement of user 205.
[0082] At 480, a video of the video conference captured at the second device and the virtual object are concurrently displayed at the first device. For example, video captured at second device 200 including incorporated virtual object 190 is concurrently displayed on first view 112.
[0083] At 490, a first video captured at the first device and a second video captured at the second device are concurrently displayed at the first device. For example, video captured at device 200 is displayed on first view 112 and video captured at device 100 is concurrently displayed on second view 114.
[0084] Now referring to Figure 5, at 510 of method 500, video captured at a second device is displayed on a first device.
[0085] At 515, a virtual object is received at the first device, wherein the virtual object is configured to augment the video conference. In various embodiments, the virtual object(s) can be, but are not limited to, a geographical-related virtual object, a temporal-related virtual object, a culturally-related virtual object, and/or a user-created virtual object.
[0086] At 520, the virtual object is incorporated into the video captured at the second device. For example, virtual object 190 is incorporated into the video captured at device 200, such that virtual object 190 is placed above the head of user 205 and tracks with movements of the head of user 205.
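The placement-and-tracking behavior described above (an object anchored above a user's head that follows head movement) can be sketched as follows. This is an illustrative sketch only; the `Face`, `VirtualObject`, and `track_above_head` names are hypothetical and not part of the disclosed system, and real face coordinates would come from a face detector running on each video frame.

```python
from dataclasses import dataclass

@dataclass
class Face:
    x: int        # left edge of the face bounding box, in pixels
    y: int        # top edge of the bounding box
    width: int    # bounding-box width

@dataclass
class VirtualObject:
    x: int = 0
    y: int = 0

def track_above_head(obj: VirtualObject, face: Face, margin: int = 20) -> VirtualObject:
    """Center the object horizontally over the face, `margin` pixels above it."""
    obj.x = face.x + face.width // 2
    obj.y = face.y - margin
    return obj

# As the detected face moves from frame to frame, the object follows it.
star = VirtualObject()
for face in [Face(100, 200, 80), Face(120, 195, 80)]:
    track_above_head(star, face)
# star now sits above the most recently detected head position
```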
[0087] In one embodiment, at 522, in response to user input at a touch screen display, the virtual object is incorporated into the video captured at the second device. For example, in response to input of user 105 at a touch screen display of device 100, any number of virtual objects are incorporated into the video captured at the device 200.
[0088] At 530, the video comprising the virtual object is enabled to be displayed at the second device. At 535, the video comprising the virtual object is transmitted to the second device.
[0089] At 540, the virtual object incorporated into the video captured at the second device is manipulated at the second device. For example, user 205 changes the color of virtual object 190, displayed in second view 214, to red.
[0090] At 545, the virtual object incorporated into the video captured at the second device is manipulated at the first device. For example, user 105 changes the location of virtual object 190 from the top of the head of user 205 to the left hand of user 205.
[0091] In one embodiment, at 547, in response to user input received at a touch screen display of a hand-held mobile device, the virtual object incorporated into the video captured at the second device is manipulated. For example, in response to user input at a touch screen display of device 100, user 105 changes virtual object 190 from a star (as depicted) to a light bulb (not shown). [0092] At 550, the virtual object incorporated into the video captured at the second device is cooperatively manipulated at the first device and the second device. For example, user 205 manipulates virtual object 190 in second view 214 and user 105 cooperatively manipulates virtual object 190 in first view 112.
[0093] At 555, the virtual object and the video captured at the second device are concurrently displayed at the first device. At 560, a video captured at the first device and the video captured at the second device are concurrently displayed at the first device.
[0094] Figure 6 depicts an embodiment of devices 100 and 200 participating in a video conference with one another. Devices 100 and 200 operate in a similar fashion, as described above.
[0095] During the video conference, video camera 150 captures video at device 100. For example, video camera 150 captures video of user 105 of device 100.
[0096] Video camera 250 captures video at device 200. For example, video camera 250 captures video of user 205, who is the user of device 200.
[0097] The video captured at device 100 is displayed on display 110 of device 100. For example, a video of user 105 is displayed on a second view 114 displayed on display 110. Moreover, the video of user 205 is displayed on first view 112 on display 110. [0098] Virtual object receiver 120 receives virtual object 190 for augmenting the video conference between users 105 and 205 participating in the video conference.
[0099] Virtual objects 191 are displayed on display 110 for viewing by user 105. For example, virtual objects 191 are displayed on virtual object bar 192. In one embodiment, virtual object bar 192 is overlaid with first view 112. In another embodiment, virtual object bar 192 is displayed concurrently with first view 112 and/or second view 114.
[00100] Virtual object incorporator 130 incorporates virtual object 190 into the video conference. In particular, at device 100, virtual object 190 is incorporated into the video captured at device 100. For example, virtual object 190 is incorporated above the head of user 105. Therefore, as depicted, video captured at device 100 is incorporated with virtual object 190 and the augmented video is displayed at least at device 200. Also, the augmented video with incorporated virtual object 190 is concurrently displayed at device 100.
[00101] In one embodiment, user 105 selects virtual object 190 in virtual object bar 192 and drags virtual object 190 to a location designated by user 105 (e.g., above the head of user 105, as depicted). Once placed at the designated location, virtual object incorporator 130 incorporates virtual object 190 at the designated location. [00102] Transmitter 140 then transmits the video captured at device 100, which now includes virtual object 190, to second device 200 such that the video including virtual object 190 is displayed on display 210.
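The select-drag-place flow above can be sketched as a minimal data structure mapping each placed object to its user-designated position; the frames composited with this mapping are then handed to the transmitter. The `place_object` helper and the `scene` dictionary are hypothetical illustration names, not elements disclosed in this application.

```python
# scene maps an object identifier to the position the user dropped it at
scene: dict[str, tuple[int, int]] = {}

def place_object(scene: dict, obj_id: str, drop_x: int, drop_y: int) -> dict:
    """Record where the user dropped an object dragged from the object bar."""
    scene[obj_id] = (drop_x, drop_y)
    return scene

# e.g., the user drags a star from the bar and drops it above their head
place_object(scene, "star", 320, 80)

# a transmitter would then send each captured frame composited with `scene`
```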
[00103] A virtual object manipulator of device 200 manipulates incorporated virtual object 190. For example, in response to user input of user 205 at a touch screen, user 205 rotates virtual object 190 clockwise. Accordingly, video captured at device 100 (and displayed on device 200 and/or device 100) is augmented such that virtual object 190 spins clockwise.
[00104] In another embodiment, virtual object 190 is manipulated at device
100. For example, in response to user 105 moving his head from left to right, virtual object 190 is manipulated (via virtual object manipulator 135) such that it moves from left to right with respect to the head movement of user 105. Accordingly, video captured at device 100 (and displayed on device 100 and/or device 200) is augmented such that virtual object 190 is moved from left to right.
[00105] In a further embodiment, virtual object 190 is concurrently
manipulated at device 100 and device 200. For example, in response to user 105 moving his head from left to right and user 205 spinning virtual object 190, virtual object 190 is manipulated such that it concurrently moves from left to right with respect to the head movement of user 105 and spins in response to input from user 205. Accordingly, video captured at device 100 (and displayed on device 100 and/or device 200) is augmented such that virtual object 190 is moved from left to right while spinning clockwise. [00106] Figures 7-9 depict embodiments of methods 700-900, respectively. In various embodiments, methods 700-900 are carried out by processors and electrical components under the control of computer readable and computer executable instructions. The computer readable and computer executable instructions reside, for example, in a data storage medium such as computer usable volatile and non-volatile memory. However, the computer readable and computer executable instructions may reside in any type of computer readable storage medium. In some embodiments, methods 700-900 are performed by device 100 and/or device 200, as described in Figures 1 and 6.
[00107] Now referring to Figure 7, at 710 of method 700, a virtual object is enabled to be accessed by a first device, wherein the first device is configured for participating in the video conference with a second device. For example, virtual object 190 is enabled to be accessed by device 100, wherein device 100 is configured for participating in a video conference with at least device 200.
[00108] At 720, the virtual object is enabled to be incorporated into a video of the video conference captured at the first device, wherein the video comprising the virtual object is configured to be displayed at the second device. For example, virtual object 190 is enabled to be incorporated into the video captured at device 100 of user 105 and displayed at devices 100 and 200.
[00109] At 730, the video comprising the virtual object is enabled to be transmitted from the first device to the second device. For example, the video comprising any one of virtual objects 191 is enabled to be transmitted by transmitter 140 to device 200.
[00110] At 740, concurrent display of the video comprising said virtual object is enabled at the first device and the second device. For example, the video comprising virtual object 190 is enabled to be simultaneously displayed at devices 100 and 200.
[00111] At 750, cooperative manipulation of the incorporated virtual object at the first device and the second device is enabled. For example, user 205 interacts with virtual object 190 in first view 212 and user 105 also cooperatively (or simultaneously) interacts with virtual object 190 in second view 114.
[00112] Now referring to Figure 8, at 810 of method 800, instructions are received to access a virtual object. For example, in response to user input at a touch screen display, instructions are received to access virtual object 190.
[00113] At 820, the virtual object is incorporated into the video conference, wherein the virtual object is to be manipulated by a user of the second device. For example, virtual object 190 is accessed at device 100 and incorporated, at device 100, into the video captured at device 100. The video comprising incorporated virtual object 190 is configured to be displayed and manipulated at device 200. In one embodiment, at 822, in response to user input on a touch screen display, the virtual object is incorporated into the video conference. [00114] At 830, a video of the video conference comprising the incorporated virtual object is transmitted to the second device.
[00115] At 840, video of the video conference captured at the first device is displayed at the second device. For example, video of user 105 at device 100 is captured at device 100 and displayed at device 200.
[00116] At 850, the virtual object incorporated into the video conference is manipulated at the second device. For example, user 205 interacts with virtual object 190 displayed in first view 212 by rotating virtual object 190.
[00117] In one embodiment, at 855, in response to user input received at a touch screen display, the virtual object incorporated into the video conference is manipulated at a hand-held mobile device. For example, device 200 is a hand-held device (e.g., cell phone) with a touch screen display. Accordingly, in response to user 205 touching the touch screen display, the size of virtual object 190 is reduced.
[00118] At 860, the virtual object incorporated into the video conference is manipulated at the first device. For example, user 105 interacts with virtual object 190 displayed in second view 114 by reducing the size of virtual object 190.
[00119] At 870, the virtual object incorporated into the video conference is cooperatively manipulated at the first device and the second device. For example, user 105 moves his head from left to right such that virtual object 190 tracks with the head movement. Also, user 205 cooperatively rotates virtual object 190 while virtual object 190 is tracking with the head movement of user 105.
[00120] At 880, a video of the video conference captured at the second device and the virtual object are concurrently displayed at the first device. For example, video captured at second device 200 is displayed on first view 112 and video captured at device 100, including incorporated virtual object 190, is concurrently displayed on second view 114.
[00121] At 890, a first video captured at the first device and a second video captured at the second device are concurrently displayed at the first device.
[00122] Now referring to Figure 9, at 910 of method 900, video captured at a first device is displayed on the first device.
[00123] At 915, a virtual object is received at the first device, wherein the virtual object is configured to augment the video conference. In various embodiments, the virtual object(s) can be, but are not limited to, a geographical-related virtual object, a temporal-related virtual object, a culturally-related virtual object, and/or a user-created virtual object.
[00124] At 920, the virtual object is incorporated into the video captured at the first device. For example, virtual object 190 is incorporated into the video captured at device 100, such that virtual object 190 is placed above the head of user 105 and tracks with movements of the head of user 105. [00125] In one embodiment, at 922, in response to user input at a touch screen display, the virtual object is incorporated into the video captured at the device. For example, in response to input of user 105 at a touch screen display of device 100, any number of virtual objects are incorporated into the video captured at device 100.
[00126] At 930, the video comprising the virtual object is enabled to be displayed at the second device, such that the virtual object is manipulated at the second device. At 935, the video comprising the virtual object is transmitted to the second device.
[00127] At 940, the virtual object incorporated into the video captured at the first device is manipulated at the second device. For example, user 205 changes the color of virtual object 190, displayed in first view 212, to red.
[00128] In one embodiment, at 942, in response to user input received at a touch screen display of a hand-held mobile device, the virtual object incorporated into the video captured at the first device is manipulated. For example, in response to user input at a touch screen display of device 200, user 205 changes virtual object 190 from a star (as depicted) to a light bulb (not shown).
[00129] At 945, the virtual object incorporated into the video captured at the first device is manipulated at the first device. For example, user 105 changes the location of virtual object 190 from the top of the head of user 105 to the left hand of user 105. [00130] At 950, the virtual object incorporated into the video captured at the first device is cooperatively manipulated at the first device and the second device. For example, user 205 manipulates virtual object 190 in first view 212 and user 105 cooperatively manipulates virtual object 190 in second view 114.
[00131] At 955, the virtual object and the video captured at the first device are concurrently displayed at the first device. At 960, a video captured at the first device and the video captured at the second device are concurrently displayed at the first device.
[00132] Various embodiments of the present invention are thus described. While the present invention has been described in particular embodiments, it should be appreciated that the present invention should not be construed as limited by such embodiments, but rather construed according to the following claims.
[00133] All elements, parts and steps described herein are preferably included. It is to be understood that any of these elements, parts and steps may be replaced by other elements, parts and steps or deleted altogether as will be obvious to those skilled in the art.
[00134] The person skilled in the art will understand that the method steps mentioned in this description may be carried out by hardware including but not limited to processors; input devices comprising at least keyboards, mice, scanners, and cameras; and output devices comprising at least monitors and printers. The method steps are to be carried out with the appropriate devices when needed. For example, a decision step could be carried out by a decision-making unit in a processor by implementing a decision algorithm. The person skilled in the art will understand that this decision-making unit can exist physically or effectively, for example in a computer's processor when carrying out the aforesaid decision algorithm. The above analysis is to be applied to other steps described herein.
[00135] CONCEPTS
This writing also discloses at least the following concepts.
Concept 1. A computer-implemented method for augmenting a video conference between a first device and a second device, said method comprising:
receiving a virtual object at said first device, wherein said virtual object is configured to augment said video conference and wherein said virtual object is specifically related to an event; and
incorporating said virtual object into said video conference.
Concept 2. The computer-implemented method of Concept 1, wherein said event is selected from a group consisting of: a holiday and a special occasion.
Concept 3. The computer-implemented method of Concept 1 or 2, further comprising:
prompting a user of said first device to incorporate said virtual object into said video conference.
Concept 4. The computer-implemented method of Concept 3, wherein said prompting a user of said first device to incorporate said virtual object into said video conference further comprises:
prompting said user of said first device to incorporate said virtual object into said video conference during a day when said event occurs.
Concept 5. The computer-implemented method of any one of the preceding concepts, further comprising:
determining a possible relationship between a user of said first device and a user of said second device.
Concept 6. The computer-implemented method of Concept 5, further comprising: prompting a user of said first device to confirm said determined possible relationship. Concept 7. The computer-implemented method of any one of the preceding concepts, further comprising:
prompting a user of said first device to incorporate said virtual object into said video conference based on a relationship between said first user and a second user of said second device.
Concept 8. The computer-implemented method of any one of the preceding concepts, further comprising:
manipulating, at said second device, said virtual object incorporated into said video conference.
Concept 9. The computer-implemented method of any one of the preceding concepts, further comprising:
manipulating, at said first device, said virtual object incorporated into said video conference.
Concept 10. A tangible computer-readable storage medium having instructions stored thereon which, when executed, cause a computer processor to perform a method of:
receiving a virtual object at said first device, wherein said virtual object is configured to augment said video conference and wherein said virtual object is specifically related to an event; and
incorporating said virtual object into said video conference.
Concept 11. The tangible computer-readable storage medium of Concept 10, wherein said event is selected from a group consisting of: a holiday and a special occasion.
Concept 12. The tangible computer-readable storage medium of Concept 10 or 11, further comprising instructions for:
prompting a user of said first device to incorporate said virtual object into said video conference. Concept 13. The tangible computer-readable storage medium of Concept 12, wherein said prompting a user of said first device to incorporate said virtual object into said video conference further comprises:
prompting said user of said first device to incorporate said virtual object into said video conference during a day when said event occurs.
Concept 14. The tangible computer-readable storage medium of Concept 12, further comprising instructions for:
determining a possible relationship between a user of said first device and a user of said second device.
Concept 15. The tangible computer-readable storage medium of Concept 14, further comprising instructions for:
prompting a user of said first device to confirm said determined possible relationship.
Concept 16. The tangible computer-readable storage medium of any one of Concepts 10-15, further comprising instructions for:
prompting a user of said first device to incorporate said virtual object into said video conference based on a relationship between said first user and a second user of said second device.
Concept 17. The tangible computer-readable storage medium of any one of Concepts 10-16, further comprising instructions for:
manipulating, at said second device, said virtual object incorporated into said video conference.
Concept 18. The tangible computer-readable storage medium of any one of Concepts 10-17, further comprising instructions for:
manipulating, at said first device, said virtual object incorporated into said video conference.

Claims

1. A computer-implemented method for augmenting a video conference between a first device and a second device, said method comprising:
receiving a virtual object at said first device, wherein said virtual object is configured to augment said video conference and wherein said virtual object is specifically related to an event; and
incorporating said virtual object into said video conference.
2. The computer-implemented method of Claim 1, wherein said event is selected from a group consisting of: a holiday and a special occasion.
3. The computer-implemented method of Claim 1, further comprising:
prompting a user of said first device to incorporate said virtual object into said video conference.
4. The computer-implemented method of Claim 3, wherein said prompting a user of said first device to incorporate said virtual object into said video conference further comprises:
prompting said user of said first device to incorporate said virtual object into said video conference during a day when said event occurs.
5. The computer-implemented method of Claim 1, further comprising:
determining a possible relationship between a user of said first device and a user of said second device.
6. The computer-implemented method of Claim 5, further comprising:
prompting a user of said first device to confirm said determined possible relationship.
7. The computer-implemented method of Claim 1, further comprising:
prompting a user of said first device to incorporate said virtual object into said video conference based on a relationship between said first user and a second user of said second device.
8. The computer-implemented method of Claim 1, further comprising:
manipulating, at said second device, said virtual object incorporated into said video conference.
9. The computer-implemented method of Claim 1, further comprising:
manipulating, at said first device, said virtual object incorporated into said video conference.
10. A tangible computer-readable storage medium having instructions stored thereon which, when executed, cause a computer processor to perform a method of: receiving a virtual object at said first device, wherein said virtual object is configured to augment said video conference and wherein said virtual object is specifically related to an event; and
incorporating said virtual object into said video conference.
11. The tangible computer-readable storage medium of Claim 10, wherein said event is selected from a group consisting of: a holiday and a special occasion.
12. The tangible computer-readable storage medium of Claim 10, further comprising instructions for:
prompting a user of said first device to incorporate said virtual object into said video conference.
13. The tangible computer-readable storage medium of Claim 12, wherein said prompting a user of said first device to incorporate said virtual object into said video conference further comprises:
prompting said user of said first device to incorporate said virtual object into said video conference during a day when said event occurs.
14. The tangible computer-readable storage medium of Claim 12, further comprising instructions for:
determining a possible relationship between a user of said first device and a user of said second device.
15. The tangible computer-readable storage medium of Claim 14, further comprising instructions for:
prompting a user of said first device to confirm said determined possible relationship.
16. The tangible computer-readable storage medium of Claim 10, further comprising instructions for:
prompting a user of said first device to incorporate said virtual object into said video conference based on a relationship between said first user and a second user of said second device.
17. The tangible computer-readable storage medium of Claim 10, further comprising instructions for:
manipulating, at said second device, said virtual object incorporated into said video conference.
18. The tangible computer-readable storage medium of Claim 10, further comprising instructions for:
manipulating, at said first device, said virtual object incorporated into said video conference.
EP12834018.9A 2011-09-23 2012-08-20 Augmenting a video conference Withdrawn EP2759127A4 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/241,918 US9544543B2 (en) 2011-02-11 2011-09-23 Augmenting a video conference
PCT/US2012/051595 WO2013043289A1 (en) 2011-09-23 2012-08-20 Augmenting a video conference

Publications (2)

Publication Number Publication Date
EP2759127A1 true EP2759127A1 (en) 2014-07-30
EP2759127A4 EP2759127A4 (en) 2014-10-15

Family

ID=47914747

Family Applications (1)

Application Number Title Priority Date Filing Date
EP12834018.9A Withdrawn EP2759127A4 (en) 2011-09-23 2012-08-20 Augmenting a video conference

Country Status (5)

Country Link
EP (1) EP2759127A4 (en)
JP (1) JP2014532330A (en)
KR (1) KR20140063673A (en)
CN (1) CN103814568A (en)
WO (1) WO2013043289A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101751620B1 (en) * 2015-12-15 2017-07-11 라인 가부시키가이샤 Method and system for video call using two-way communication of visual or auditory effect
CN108305317B (en) 2017-08-04 2020-03-17 腾讯科技(深圳)有限公司 Image processing method, device and storage medium
CN107613242A (en) * 2017-09-12 2018-01-19 宇龙计算机通信科技(深圳)有限公司 Video conference processing method and terminal, server
KR102271308B1 (en) * 2017-11-21 2021-06-30 주식회사 하이퍼커넥트 Method for providing interactive visible object during video call, and system performing the same
US10681310B2 (en) 2018-05-07 2020-06-09 Apple Inc. Modifying video streams with supplemental content for video conferencing
US11012389B2 (en) 2018-05-07 2021-05-18 Apple Inc. Modifying images with supplemental content for messaging
CN110716641B (en) * 2019-08-28 2021-07-23 北京市商汤科技开发有限公司 Interaction method, device, equipment and storage medium
CN113766168A (en) * 2021-05-31 2021-12-07 腾讯科技(深圳)有限公司 Interactive processing method, device, terminal and medium
KR102393042B1 (en) 2021-06-15 2022-04-29 주식회사 브이온 Video conferencing system
CN113938336A (en) * 2021-11-15 2022-01-14 网易(杭州)网络有限公司 Conference control method and device and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030193558A1 (en) * 2002-04-10 2003-10-16 International Business Machines Corporation Media-enhanced greetings and/or responses in communication systems
US20040056887A1 (en) * 2002-09-24 2004-03-25 Lg Electronics Inc. System and method for multiplexing media information over a network using reduced communications resources and prior knowledge/experience of a called or calling party
JP2005079882A (en) * 2003-08-29 2005-03-24 Sega Corp Dynamic image two-way communication terminal, video-phone system, and computer program
US20080158334A1 (en) * 2006-12-29 2008-07-03 Nokia Corporation Visual Effects For Video Calls
US20090244256A1 (en) * 2008-03-27 2009-10-01 Motorola, Inc. Method and Apparatus for Enhancing and Adding Context to a Video Call Image
US20100134588A1 (en) * 2008-12-01 2010-06-03 Samsung Electronics Co., Ltd. Method and apparatus for providing animation effect on video telephony call
US20120206558A1 (en) * 2011-02-11 2012-08-16 Eric Setton Augmenting a video conference

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5572248A (en) * 1994-09-19 1996-11-05 Teleport Corporation Teleconferencing method and system for providing face-to-face, non-animated teleconference environment
JP4378072B2 (en) * 2001-09-07 2009-12-02 キヤノン株式会社 Electronic device, imaging device, portable communication device, video display control method and program
JP2003244425A (en) * 2001-12-04 2003-08-29 Fuji Photo Film Co Ltd Method and apparatus for registering on fancy pattern of transmission image and method and apparatus for reproducing the same
US20060088038A1 (en) * 2004-09-13 2006-04-27 Inkaar, Corporation Relationship definition and processing system and method
JP2006173879A (en) * 2004-12-14 2006-06-29 Hitachi Ltd Communication system
US20070242066A1 (en) * 2006-04-14 2007-10-18 Patrick Levy Rosenthal Virtual video camera device with three-dimensional tracking and virtual object insertion
US8908003B2 (en) * 2009-09-17 2014-12-09 Nokia Corporation Remote communication system and method
KR101234495B1 (en) * 2009-10-19 2013-02-18 한국전자통신연구원 Terminal, node device and method for processing stream in video conference system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of WO2013043289A1 *

Also Published As

Publication number Publication date
EP2759127A4 (en) 2014-10-15
CN103814568A (en) 2014-05-21
KR20140063673A (en) 2014-05-27
WO2013043289A1 (en) 2013-03-28
JP2014532330A (en) 2014-12-04

Similar Documents

Publication Publication Date Title
US9544543B2 (en) Augmenting a video conference
US9253440B2 (en) Augmenting a video conference
US8767034B2 (en) Augmenting a video conference
US9262753B2 (en) Video messaging
WO2013043289A1 (en) Augmenting a video conference
US9911222B2 (en) Animation in threaded conversations
TWI720462B (en) Modifying video streams with supplemental content for video conferencing
CN105320262A (en) Method and apparatus for operating computer and mobile phone in virtual world and glasses thereof
US20210318749A1 (en) Information processing system, information processing method, and program
US11456887B1 (en) Virtual meeting facilitator
WO2018207046A1 (en) Methods, systems and devices supporting real-time interactions in augmented reality environments
CN106203288A (en) A kind of photographic method based on augmented reality, device and mobile terminal
JP2011239397A (en) Intuitive virtual interaction method
WO2023080843A2 (en) Role information interaction method and device, storage medium and program product
TW202111480A (en) Virtual reality and augmented reality interaction system and method respectively playing roles suitable for an interaction technology by an augmented reality user and a virtual reality user
CN116055638A (en) Video color ring interaction method, server and storage medium

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20140331

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

A4 Supplementary search report drawn up and despatched

Effective date: 20140915

RIC1 Information provided on ipc code assigned before grant

Ipc: H04N 7/15 20060101AFI20140909BHEP

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20160301