WO2005055602A1 - Video Application Node (Noeud d'application vidéo) - Google Patents

Video Application Node (Noeud d'application vidéo)

Info

Publication number
WO2005055602A1
WO2005055602A1 (PCT/SE2003/001883)
Authority
WO
WIPO (PCT)
Prior art keywords
video
data stream
encoded
video data
client
Prior art date
Application number
PCT/SE2003/001883
Other languages
English (en)
Other versions
WO2005055602A8 (fr)
Inventor
Norishige Sawada
Naoya Ori
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/SE2003/001883 priority Critical patent/WO2005055602A1/fr
Priority to AU2003304675A priority patent/AU2003304675A1/en
Publication of WO2005055602A1 publication Critical patent/WO2005055602A1/fr
Publication of WO2005055602A8 publication Critical patent/WO2005055602A8/fr

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working
    • H04N7/141: Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/147: Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals

Definitions

  • The present invention relates to a video conversation application in a communication network.
  • Video telephony is an extended peer-to-peer communication application using simultaneous, synchronized voice and video. This application enables a user to talk to the other party while seeing his/her face on the display of the mobile phone.
  • The standard is described in ITU-T Recommendation H.324 (Terminal for low bit-rate multimedia communication) and in the 3GPP specification TS 26.111.
  • H.324 covers the technical requirements for very low bit-rate multimedia telephone terminals operating over the General Switched Telephone Network, that is, the fixed and/or the mobile telephony networks.
  • The scope of H.324 includes a number of other relevant ITU-T Recommendations such as H.223 multiplex/demultiplex, H.245 control, H.263 video codec, and G.723.1 audio codec.
  • The 3GPP specification TS 26.111 is based on H.324, but with additions to fully specify a multimedia codec for use in the 3rd Generation Mobile System.
  • A video terminal can detect facial movements and characteristics in real time, transform these movements and characteristics into a data stream and send it to another video terminal.
  • The latter terminal receives the information and generates a synthesized and animated image of a human face.
  • This image is also referred to as an avatar.
  • US patent 6,044,168, 'Model based faced coding and decoding using feature detection and eigenface coding', discloses an algorithm for sensing facial features and for synthesizing a face image at the receiving end by mapping the received facial feature locations onto a 3D face model.
  • A problem that is addressed by the current invention is to make it possible to display an animated synthesized graphical representation of the user's face on the screen in lieu of displaying a near authentic image.
  • Another problem addressed by the current invention is that the users involved in the video conversation normally do not have the possibility to control the behaviour of the video coding algorithms.
  • The current invention resolves these problems by moving the implementation of said algorithms from the video terminal (the client) to a centralised server in the communication network, called a Video Application Node (VAN).
  • When establishing a video channel between a first video terminal and a second video terminal, the video channel passes through the VAN.
  • The video data stream, which is encoded by the first video terminal using conventional video coding techniques, is received by the VAN and decoded by a first video codec.
  • The decoded video data stream is sent to a Real Time Video Effecter part in the VAN.
  • This Real Time Video Effecter part performs the transformation of the facial characteristics and movements but also allows for additional graphical effects.
  • The resulting synthesized graphical representation is sent to a second video codec, where it is encoded using conventional video coding techniques.
  • The encoded video data stream is sent from the VAN towards the second video terminal.
  • The video channel can be bi-directional, and the VAN can process the video data stream received from the second video terminal in the same way as it processes the video data stream received from the first video terminal (a minimal sketch of this chain follows below).
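As an illustration of the processing chain just described, the following Python sketch models one direction of the video channel inside the VAN. The class and method names (decoder, effecter, encoder, process_direction) are assumptions made for illustration only; the patent does not prescribe any particular implementation or API.

```python
# Minimal, illustrative sketch of the per-direction processing chain in the VAN:
# decode -> Real Time Video Effecter -> re-encode. All objects are placeholders.

class VideoApplicationNode:
    def __init__(self, decoder, effecter, encoder):
        self.decoder = decoder    # first video codec (decoding side)
        self.effecter = effecter  # Real Time Video Effecter part
        self.encoder = encoder    # second video codec (encoding side)

    def process_direction(self, encoded_video_in):
        """Process the video data stream in one direction of the channel.

        The channel is bi-directional; the same chain is applied to the
        stream received from either video terminal.
        """
        decoded = self.decoder.decode(encoded_video_in)
        synthesized = self.effecter.transform(decoded)   # facial movements -> avatar
        return self.encoder.encode(synthesized)
```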
  • The VAN further comprises a subscriber database, which stores data for each video terminal user subscribing to the feature of displaying synthesized animated graphical representations (the 'animation feature').
  • The data in the subscriber database comprises, for example, parameters that indicate which synthesized representation is to be displayed and under which conditions.
  • The subscribers have access to the subscriber database via a data communication link from the subscriber's video terminal to the database interface in the VAN. By sending control signalling over this interface, the subscriber can alter the parameters for the synthesized representation (see the sketch after this paragraph). Accessing this database can be done either when the video connection is established or at any other time at the leisure of the subscriber.
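The following is a hypothetical sketch of a subscriber record and of a parameter update over the database interface. The field names mirror the parameters later described for Figure 9 (subscriber ID, password, default effect pattern, specific effect pattern conditions); everything else, including the login check, is illustrative rather than taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class AnimationSubscriber:
    msisdn: str                     # key parameter identifying the subscriber
    subscriber_id: str
    password: str
    default_effect_pattern: str     # default synthesized representation
    specific_conditions: dict = field(default_factory=dict)  # e.g. {called MSISDN: pattern}

class SubscriberDatabase:
    """Toy in-memory stand-in for the subscriber database in the VAN."""

    def __init__(self):
        self._records = {}

    def add(self, record: AnimationSubscriber):
        self._records[record.msisdn] = record

    def update_parameters(self, msisdn, subscriber_id, password, **changes):
        """Alter parameters after a login with subscriber ID and password/keyword."""
        record = self._records[msisdn]
        if (record.subscriber_id, record.password) != (subscriber_id, password):
            raise PermissionError("login failed")
        for name, value in changes.items():
            setattr(record, name, value)
        return record
```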
  • One object of the invention is to use the transformation of facial characteristics and movements to provide amusement features for users involved in a video conversation.
  • One example is to display an animated cartoon or some other synthesized representation which mimics the face of the user. This synthesized representation could be unique for each user involved in the video conversation. The appearance of the synthesized representation can also depend on the combination of the identities of the involved users.
  • Another important object of the invention is to enhance the amusement features by allowing the subscriber to alter and update the parameters that control said synthesized representations.
  • An overall advantage of the invention is that the amusement aspect can boost the usage of video conversation applications, which will increase the traffic in the communication network, thereby increasing the revenues for communication network operators.
  • Since the network operators control the centralised VAN, the VAN can continuously offer new features, which easily can be made available to the users without software updates in the video terminals.
  • Network operators can also apply different charging schemes to these features.
  • Figure 1 is a block diagram showing a typical video telephony application between two mobile video telephones.
  • Figure 2 is a block diagram showing typical video telephony clients according to the ITU-T Recommendation H.324.
  • Figure 3 is a block diagram showing a video telephony client with local implementation of the transformation algorithms.
  • Figure 4 is a block diagram showing the transformation algorithms implemented centrally in the Video Application Node.
  • Figure 5 is a block diagram showing an example of how the 'animation feature' can be perceived by mobile video telephone users.
  • Figure 6 is a flow chart showing the steps of establishing and processing a video telephony call over a VAN.
  • Figure 7 is a block diagram showing the involved network elements in a call setup.
  • Figure 8 is a block diagram showing the involved network elements when a subscriber accesses the subscriber database in the VAN.
  • Figure 9 is a table of subscriber parameters in the subscriber database.
  • The current invention is, in a preferred embodiment, applied to a video telephony application in a mobile telephony network.
  • In Figure 1, a video telephony application between two mobile video telephones is shown.
  • A first video telephony user 101 is calling a second video telephony user 102, each user using a mobile video telephone (111 and 112 respectively).
  • Each mobile video telephone 111, 112 is equipped with an inbuilt video camera and a display and enables one of the users to talk to the other user while simultaneously seeing his/her face on the display.
  • Figure 2 (prior art) shows a block diagram with two video telephony clients 210, 220.
  • A typical video telephony client 210 comprises a number of functional elements:
  • A video I/O equipment element 211 includes, for example, a video camera and a display. This element is connected to a video codec element 214, which carries out redundancy reduction encoding and decoding for video data streams.
  • An audio I/O equipment element 212 includes, for example, a microphone and a speaker. This element is connected to an audio codec element 215, which encodes the audio signal from the microphone for transmission and decodes the audio code, which is output to the speaker.
  • A system control element 213 is an entity that uses a control protocol element 216 for end-to-end signalling for proper operation of the video telephony client (such as reversion to speech-only telephony mode, etc.).
  • A multiplex/demultiplex element 217 connected to the elements 211, 212 and 213 multiplexes the video data stream, the audio signal and the control signal into a single bit stream, and demultiplexes a received single bit stream into a video data stream, an audio signal and a control signal. In addition, it performs logical framing, sequence numbering, error detection, and error correction by means of retransmission, as appropriate to each stream or signal (an illustrative framing sketch follows below).
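To illustrate the role of the multiplex/demultiplex element, the toy sketch below frames three logical channels into one byte stream with sequence numbering and CRC-based error detection. It is explicitly not the H.223 frame format; the field layout is an assumption chosen only to make the idea concrete.

```python
# Toy framing sketch: channel id, sequence number, length, payload, CRC32.
# Not H.223; purely illustrative of multiplexing with logical framing,
# sequence numbering and error detection.

import struct
import zlib

CHANNEL_VIDEO, CHANNEL_AUDIO, CHANNEL_CONTROL = 0, 1, 2

def frame(channel: int, seq: int, payload: bytes) -> bytes:
    header = struct.pack("!BHI", channel, seq & 0xFFFF, len(payload))
    crc = struct.pack("!I", zlib.crc32(header + payload) & 0xFFFFFFFF)
    return header + payload + crc

def deframe(data: bytes):
    channel, seq, length = struct.unpack("!BHI", data[:7])
    payload, crc = data[7:7 + length], data[7 + length:7 + length + 4]
    if struct.unpack("!I", crc)[0] != zlib.crc32(data[:7 + length]) & 0xFFFFFFFF:
        raise ValueError("error detected, request retransmission")
    return channel, seq, payload
```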
  • Figure 3 shows a first video telephony client 310 that is equipped with a video camera 311 and a video transformation functional block 312, both connected to each other.
  • The video transformation functional block 312 detects facial characteristics and movements in a first video data stream 313 received from the video camera 311 and transforms these movements into a synthesized animated graphical representation 314, which is transmitted in a second video data stream 315 to a second video telephony client 320.
  • In this arrangement, the video transformation is made locally in the video telephony clients 310, 320.
  • The algorithms for this video transformation require a large amount of computing capacity, which makes this approach unsuitable for video terminals with limited processing capacity such as mobile video telephones.
  • A block diagram of a Video Application Node (VAN) 4100 is found in Figure 4.
  • The VAN 4100 comprises a number of functional elements such as multiplex/demultiplex elements 4101, 4102, video codec elements 4103, 4104, control protocol elements 4105, 4106 and a Real Time Video Effecter element 4107.
  • Two video telephony clients 4200 and 4300 correspond to the video telephony client 210 described in Figure 2.
  • The multiplex/demultiplex element 4101 is connected on one side towards the video telephony client 4200 and on the other side to two elements, the video codec element 4103 and the control protocol element 4105 respectively.
  • The video codec element 4103 is also connected to the Real Time Video Effecter element 4107.
  • The Real Time Video Effecter element 4107 is further connected to the video codec element 4104.
  • The control protocol element 4105 is connected to another control protocol element 4106.
  • The video codec element 4104 and the control protocol element 4106 are both connected to the multiplex/demultiplex element 4102, which in turn is connected towards the video telephony client 4300.
  • The video telephony client 4200 sends a multiplexed video telephony data stream 4400 to the VAN 4100.
  • The multiplexed video telephony data stream 4400 is demultiplexed into a first video data stream 4109, an audio signal 4108 and a control signal 4110 by the multiplex/demultiplex element 4101.
  • The first video data stream 4109 is sent to the video codec 4103 and the control signal 4110 is sent to the control protocol element 4105.
  • The audio signal 4108 is sent to the multiplex/demultiplex element 4102.
  • A decoded first video data stream from the video codec 4103 is sent to the Real Time Video Effecter element 4107.
  • This element detects facial characteristics and movements and transforms, in real time, the decoded first video data stream into a synthesized animated graphical representation.
  • The synthesized animated graphical representation is sent to the video codec 4104, where it is encoded into an encoded second video data stream 4111, which in turn is sent to the multiplex/demultiplex element 4102.
  • The encoded second video data stream is multiplexed together with the audio signal and the control signal and sent as a second video telephony data stream 4500 towards the receiving video telephony client 4300.
  • The block diagram in Figure 4 is symmetrical and the notations 'sending' and 'receiving' video telephony clients are reciprocal. That is, a video telephony data stream sent from the video telephony client 4300 to the video telephony client 4200 is treated in the same way in the VAN as a video telephony data stream sent from 4200 to 4300.
  • The video transformation process implemented in the Real Time Video Effecter element 4107 can, for example, use one of the several facial-sensing algorithms known from the prior art; a per-frame sketch follows below.
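A per-frame sketch of what the Real Time Video Effecter element 4107 could do is given below. The functions detect_facial_landmarks and render_avatar are placeholders standing in for a prior-art facial-sensing algorithm and a graphics back end; neither is specified by the patent.

```python
# Illustrative per-frame operation of the Real Time Video Effecter element:
# detect facial characteristics in each decoded frame and render them onto a
# synthesized animated representation (the effect pattern).

class RealTimeVideoEffecter:
    def __init__(self, detect_facial_landmarks, render_avatar, effect_pattern):
        self.detect = detect_facial_landmarks   # e.g. a feature/eigenface-style detector
        self.render = render_avatar             # draws the animated representation
        self.effect_pattern = effect_pattern    # selected via the subscriber database

    def transform(self, decoded_frames):
        """Yield synthesized frames for a stream of decoded input frames."""
        last_landmarks = None
        for frame in decoded_frames:
            landmarks = self.detect(frame) or last_landmarks  # reuse last pose if detection fails
            last_landmarks = landmarks
            yield self.render(self.effect_pattern, landmarks)
```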
  • In Figure 5, a user 501 using a mobile video telephone 503 is calling another user 502 with a mobile video telephone 504.
  • The video telephony call passes through a VAN 505, which has the same functionality as the VAN 4100 of Figure 4.
  • The screen on the mobile video telephone 503 displays an animated cartoon 506 mimicking user 502.
  • The screen on the mobile video telephone 504 displays an animated cartoon 507 mimicking user 501.
  • An example of the call establishment of a video telephony call, as seen by e.g. the VAN 4100 of Figure 4, is shown in the flow chart in Figure 6 and sketched in code after the step list below.
  • Step 601 The VAN receives a call setup from a call originating video telephony client.
  • Step 602 The VAN sends a call setup to a destination video telephony client.
  • Step 603 A video telephony call including a bi-directional video channel is established between the originating and the destination video telephony clients.
  • Step 604 The VAN receives an encoded first video data stream from a first one of the video telephony clients, which can be any of the originating or destination video telephony clients.
  • Step 605 The encoded first video data stream is decoded by a video codec in the VAN.
  • Step 606 Facial characteristics and movements are detected in the decoded first video data stream and a synthesized animated graphical representation is generated.
  • Step 607 The synthesized animated graphical representation is encoded into an encoded second video data stream.
  • Step 608 The encoded second video data stream is sent to a second one of the video telephony clients.
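Steps 601 to 608 can be summarized in code as follows, reusing the placeholder VAN object sketched earlier. The signalling and channel objects and their methods are assumptions used only to tie the steps together; they are not part of the patent.

```python
# Compact sketch of steps 601-608 as they could be driven inside the VAN.
# All signalling/channel objects are illustrative placeholders.

def handle_video_call(van, signalling):
    setup = signalling.receive_call_setup()               # step 601
    signalling.send_call_setup(setup.destination)         # step 602
    channel = signalling.establish_bidirectional_video()  # step 603

    for encoded_first_stream, sender in channel.incoming():      # step 604 (either client)
        decoded = van.decoder.decode(encoded_first_stream)       # step 605
        synthesized = van.effecter.transform(decoded)            # step 606
        encoded_second_stream = van.encoder.encode(synthesized)  # step 607
        channel.send_to_other_party(sender, encoded_second_stream)  # step 608
```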
  • Figure 7 shows a system overview of involved network elements when a video telephony call is established between two mobile video telephones:
  • Two mobile video telephones 701 and 702, two radio base stations 703 and 704, two MSCs 705 and 706, a GMSC 707 and a Video Application Node, VAN 708.
  • The call establishment uses a signalling protocol such as the ISUP (ISDN User Part), carried over the Standardised Signalling System Number 7 (SS7).
  • The VAN is equipped with a signalling device 709 in order to receive and send this SS7 signalling.
  • In a first option, a number of signalling information elements are sent from the mobile video telephone 701 at call setup, including:
  • A 'Call Type' information element; a 'Calling Party Number' information element comprising the telephone number of the call originating mobile video telephone 701; and a 'Called Party Number' information element comprising the telephone number of the call destination mobile video telephone 702.
  • The call is routed over the radio base station 703 to the MSC 705.
  • The MSC 705 analyses the 'Call Type' information element received from the mobile video telephone 701. If the value in the information element is set to 'video telephony', the call is routed via the GMSC 707 to the VAN 708.
  • The VAN 708 analyses the received 'Calling Party Number' and 'Called Party Number' information elements and checks whether the call originating mobile video telephone 701 and the call destination mobile video telephone 702 are subscribers to the 'animation feature'. If one or both mobile video telephones are subscribers, the VAN routes the call back to the GMSC 707, which further routes the call towards the call destination mobile video telephone 702, via the MSC 706 and the radio base station 704. The video data stream between the two mobile video telephones now passes through the VAN and is processed as illustrated in Figure 4.
  • If the VAN 708 concludes that none of the mobile video telephones is a subscriber to the 'animation feature', the VAN will instead return the call control to the GMSC 707, which will route the call as an ordinary video telephony call to the call destination mobile video telephone 702. The VAN will not take any further part in the call and the video data stream will not pass through the VAN.
  • In a second option, the 'Called Party Number' information element sent from the originating mobile video telephone 701 comprises a telephone number to the VAN.
  • A 'User-to-user' information element is sent from the originating mobile video telephone 701 comprising a Subscriber ID of the call destination mobile video telephone 702.
  • The call is routed to the VAN over the radio base station 703, the MSC 705 and the GMSC 707 respectively.
  • The VAN translates the Subscriber ID received in the 'User-to-user' information element into a telephone number to the call destination mobile video telephone 702.
  • The VAN routes the call back to the GMSC 707, which further routes the call towards the call destination mobile video telephone 702, via the MSC 706 and the radio base station 704.
  • The video data stream between the two mobile video telephones passes through the VAN and is processed as illustrated in Figure 4.
  • The advantage of the second option is that it does not require any upgrades of the nodes in the existing core network (the radio base stations, the MSCs and the GMSCs). The routing decision made by the VAN is sketched in code below.
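The routing decision described for the first option can be sketched as follows. The van_db and gmsc objects and their methods are illustrative placeholders, not an actual node interface.

```python
# Sketch of the VAN routing decision: keep the media path anchored in the VAN
# if either party subscribes to the 'animation feature', otherwise hand the
# call back as an ordinary video telephony call.

def route_video_call(van_db, calling_number, called_number, gmsc):
    calling_subscribes = van_db.is_animation_subscriber(calling_number)
    called_subscribes = van_db.is_animation_subscriber(called_number)

    if calling_subscribes or called_subscribes:
        # Route the call back to the GMSC but keep the video data stream
        # passing through the VAN so it can be processed as in Figure 4.
        return gmsc.route_towards(called_number, media_anchor="VAN")

    # Neither party subscribes: return call control; media bypasses the VAN.
    return gmsc.route_towards(called_number, media_anchor=None)
```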
  • The VAN comprises a subscriber database where data for each video telephony user subscribing to the 'animation feature' is stored, including parameters that control the graphical representations.
  • The subscribers have access to the subscriber database via a data communication link from the mobile video telephone to an interface in the VAN.
  • Figure 8 illustrates two types of database update signalling for accessing the subscriber database.
  • The first type is carried on a separate packet switched network such as, for example, the GPRS (General Packet Radio Service) network.
  • The second type uses in-band signalling in the established video telephony data stream.
  • Involved network elements are:
  • A mobile video telephone 803 and a radio base station 804,
  • A Serving GPRS Support Node, SGSN 805, and a Gateway GPRS Support Node, GGSN 806,
  • A packet router 807, an MSC 808 and a GMSC 809,
  • A VAN 800 including, in addition to the functionality already described in Figure 4, a subscriber database 801, a web server 810 and a DTMF receiver 811.
  • The data link in the GPRS network from the mobile video telephone 803 to an interface in the VAN 800 is established over the radio base station 804, the SGSN 805, the GGSN 806 and the packet router 807.
  • The web server 810 in the VAN 800 has a peer-to-peer web interface towards the mobile video telephone 803.
  • The user can access and alter the contents of the subscriber database 801 using an inbuilt web browser in the mobile video telephone 803.
  • The database update signalling between the mobile video telephone 803 web client and the web server 810 is carried on an Internet protocol such as the Hypertext Transfer Protocol, HTTP, or the Wireless Application Protocol, WAP.
  • The web interface is normally used when the mobile video telephone user is not engaged in any call.
  • Alternatively, a simplified user interface can be applied. Instead of accessing the database using a web browser, the mobile video telephone's numeric keypad can be used to send DTMF (Dual Tone Multi-Frequency) signals.
  • DTMF uses tone signalling, which is sent in-band on the already established voice channel in a call. To receive this signalling, the VAN is equipped with the DTMF receiver 811.
  • Figure 8 shows a voice channel established over the base station 804, the MSC 808 and the GMSC 809 to the VAN 800. A hypothetical sketch of such a DTMF-driven update follows below.
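The sketch below shows how the DTMF receiver 811 could translate an in-band digit sequence into a database update. The command syntax ('*<pattern code>#') is purely an assumption made for illustration; the patent does not define any DTMF command format.

```python
# Hypothetical mapping of a received DTMF digit sequence to a parameter update.
# `records` is assumed to be a dict of MSISDN -> subscriber record objects that
# expose a `default_effect_pattern` attribute.

def handle_dtmf_update(records, msisdn, digits: str) -> bool:
    """Interpret '*<code>#' as 'set the default effect pattern to <code>'."""
    if not (digits.startswith("*") and digits.endswith("#")):
        return False
    pattern_code = digits[1:-1]
    record = records.get(msisdn)
    if record is None:
        return False
    record.default_effect_pattern = pattern_code
    return True
```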
  • The subscriber's access to the database can preferably be restricted in the sense that it requires a standard login procedure using a unique subscriber ID and a keyword for each subscriber.
  • In Figure 9, each unique subscriber is identified by a key parameter (column 91), here the mobile video telephone's MSISDN (Mobile Subscriber ISDN) number.
  • The MSISDN is basically the telephone number of a mobile video telephone.
  • For each MSISDN a number of unique parameters are assigned.
  • The rows in the list in Figure 9 show, for each MSISDN, a subscriber (or user) ID (column 92), a Password (column 93), a default effect pattern (i.e. the default synthesized graphical representation of the user) (column 94) and Specific Effect Pattern Conditions (column 95).
  • The latter parameter controls which synthesized graphical representation is to be displayed as a function of one or several conditions.
  • For both subscribers there is a Password defined.
  • When Anne is calling another video telephone, an animated cartoon of a princess that mimics Anne's facial movements is displayed on the called party's video telephone by default. However, if Anne calls Bob, who has the MSISDN number 8190222, the animated cartoon is modified and eyeglasses are added to the cartoon according to the Specific Effect Pattern Conditions (column 95). A sketch of this selection logic follows below.
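The selection between the default effect pattern (column 94) and the Specific Effect Pattern Conditions (column 95) can be sketched as follows. The Anne/Bob values come from the example above; the function itself is illustrative, not taken from the patent.

```python
# Choose the synthesized representation for a call: a condition keyed on the
# called MSISDN overrides the subscriber's default effect pattern.

def select_effect_pattern(default_pattern, specific_conditions, called_msisdn):
    """Return the synthesized representation to display for this call."""
    return specific_conditions.get(called_msisdn, default_pattern)

# Example from the description: Anne's default is a princess cartoon, but
# calls to Bob (MSISDN 8190222) use the cartoon with eyeglasses added.
anne_conditions = {"8190222": "princess_with_eyeglasses"}
print(select_effect_pattern("princess", anne_conditions, "8190222"))  # princess_with_eyeglasses
print(select_effect_pattern("princess", anne_conditions, "8190333"))  # princess
```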
  • The current invention is, in a preferred embodiment, applied to a mobile telephony network. It is, however, obvious to a person skilled in the art to also apply the inventive concept to video conversation applications in the fixed telephony network or on the internet, where the video terminals can be, for example, personal computers (PCs) or similar.

Abstract

The invention relates to a video conversation application, which comprises the detection of the facial characteristics and movements of a human being by a video terminal and the generation of a synthesized animated graphical representation, which is sent to another video terminal. The coding algorithms of this application require a high processing capacity, which makes it unsuitable for video terminals with limited processing capacity (such as mobile telephones). The invention solves this problem by executing the algorithms in a centralised video application server (4100) instead of in the video terminals (4200, 4300). The users can themselves select and modify different graphical representations from their video terminals. The invention finds a particular application as an amusement feature in mobile video telephony conversations.
PCT/SE2003/001883 2003-12-04 2003-12-04 Noeud d'application video WO2005055602A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/SE2003/001883 WO2005055602A1 (fr) 2003-12-04 2003-12-04 Noeud d'application video
AU2003304675A AU2003304675A1 (en) 2003-12-04 2003-12-04 Video application node

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/SE2003/001883 WO2005055602A1 (fr) 2003-12-04 2003-12-04 Noeud d'application video

Publications (2)

Publication Number Publication Date
WO2005055602A1 (fr) 2005-06-16
WO2005055602A8 WO2005055602A8 (fr) 2006-09-08

Family

ID=34651610

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2003/001883 WO2005055602A1 (fr) 2003-12-04 2003-12-04 Noeud d'application video

Country Status (2)

Country Link
AU (1) AU2003304675A1 (fr)
WO (1) WO2005055602A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9532069B2 (en) 2004-07-30 2016-12-27 Euclid Discoveries, Llc Video compression repository and model reuse
US8902971B2 (en) 2004-07-30 2014-12-02 Euclid Discoveries, Llc Video compression repository and model reuse

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003091902A1 (fr) * 2002-04-26 2003-11-06 Nokia Corporation Procede et appareil permettant d'acheminer des messages et des modeles simples dans un reseau de communication

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHANDRASIRI N.P. ET AL.: "Communication over the Internet using a 3D agent with real-time facial expression analysis, synthesis and text to speech capabilities", COMMUNICATION SYSTEMS, 2002. ICCS 2002. THE 8TH INTERNATIONAL CONFERENCE ON 25-28 NOV. 2002, vol. 1, 25 November 2002 (2002-11-25) - 28 November 2002 (2002-11-28), pages 480 - 484, XP010629265 *
KUO C.-C. ET AL.: "Design and implementation of a network application architecture for thin clients", PROCEEDINGS. 26TH ANNUAL INTERNATIONAL COMPUTER SOFTWARE AND APPLICATIONS CONFERENCE, 2002. COMPSAC 2002., 2002, pages 193 - 198, XP010611116 *
PANDZIC I.S.: "3D technologies for the World Wide Web Proceeding of the Seventh international conference on 3D Web technology", 2002, TEMPE, ARIZONA, USA, ISBN: 1-58113-468-1, pages: 27 - 34, XP002979231 *
SUGAWARA S. ET AL.: "A communication environment based on a shared virtual space - High-quality interspace -", 1999 IEEE SMC'99 CONFERENCE PROCEEDINGS. 1999 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS, vol. 6, 1999, pages 54 - 58, XP010363181 *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7508990B2 (en) 2004-07-30 2009-03-24 Euclid Discoveries, Llc Apparatus and method for processing video data
US9743078B2 (en) 2004-07-30 2017-08-22 Euclid Discoveries, Llc Standards-compliant model-based video encoding and decoding
US8908766B2 (en) 2005-03-31 2014-12-09 Euclid Discoveries, Llc Computer method and apparatus for processing image data
US8964835B2 (en) 2005-03-31 2015-02-24 Euclid Discoveries, Llc Feature-based video compression
US8942283B2 (en) 2005-03-31 2015-01-27 Euclid Discoveries, Llc Feature-based hybrid video codec comparing compression efficiency of encodings
US9578345B2 (en) 2005-03-31 2017-02-21 Euclid Discoveries, Llc Model-based video encoding and decoding
US9106977B2 (en) 2006-06-08 2015-08-11 Euclid Discoveries, Llc Object archival systems and methods
US8243118B2 (en) 2007-01-23 2012-08-14 Euclid Discoveries, Llc Systems and methods for providing personal video services
US8553782B2 (en) 2007-01-23 2013-10-08 Euclid Discoveries, Llc Object archival systems and methods
US8842154B2 (en) 2007-01-23 2014-09-23 Euclid Discoveries, Llc Systems and methods for providing personal video services
CN102685441A (zh) * 2007-01-23 2012-09-19 欧几里得发现有限责任公司 用于提供个人视频服务的系统和方法
WO2008091485A3 (fr) * 2007-01-23 2008-11-13 Euclid Discoveries Llc Systèmes et procédés permettant de fournir des services vidéo personnels
DE102007010662A1 (de) * 2007-03-02 2008-09-04 Deutsche Telekom Ag Verfahren und Videokommunikationssystem zur Gestik-basierten Echtzeit-Steuerung eines Avatars
DE102007010664A1 (de) * 2007-03-02 2008-09-04 Deutsche Telekom Ag Verfahren und Videokommunikationssystem zur Einspeisung von Avatar-Informationen in einem Videodatenstrom
WO2009101153A3 (fr) * 2008-02-13 2009-10-08 Ubisoft Entertainment S.A. Capture d'image en prises réelles
WO2009101153A2 (fr) * 2008-02-13 2009-08-20 Ubisoft Entertainment S.A. Capture d'image en prises réelles
US9325936B2 (en) 2013-08-09 2016-04-26 Samsung Electronics Co., Ltd. Hybrid visual communication
US9621917B2 (en) 2014-03-10 2017-04-11 Euclid Discoveries, Llc Continuous block tracking for temporal prediction in video encoding
US10091507B2 (en) 2014-03-10 2018-10-02 Euclid Discoveries, Llc Perceptual optimization for model-based video encoding
US10097851B2 (en) 2014-03-10 2018-10-09 Euclid Discoveries, Llc Perceptual optimization for model-based video encoding
WO2015145219A1 (fr) * 2014-03-28 2015-10-01 Navaratnam Ratnakumar Systèmes de service à distance de clients au moyen de mannequins virtuels et physiques
US10152719B2 (en) 2014-03-28 2018-12-11 Ratnakumar Navaratnam Virtual photorealistic digital actor system for remote service of customers
US10163111B2 (en) 2014-03-28 2018-12-25 Ratnakumar Navaratnam Virtual photorealistic digital actor system for remote service of customers

Also Published As

Publication number Publication date
WO2005055602A8 (fr) 2006-09-08
AU2003304675A8 (en) 2005-06-24
AU2003304675A1 (en) 2005-06-24

Similar Documents

Publication Publication Date Title
WO2005055602A1 (fr) Noeud d'application video
EP1768406B1 (fr) Appareil d'appel vidéo pour un terminal de communication mobile et méthode correspondant
US20050009519A1 (en) Communication apparatus and operation control method therefor
CN103179373B (zh) 可视通信系统、终端网关、视频网关以及可视通信方法
EP1819097B1 (fr) Système de surveillance d'appel vidéo
US8411827B2 (en) Method and system for implementing multimedia ring back tone service
GB2336974B (en) Singlecast interactive radio system
EP0948860A1 (fr) Systeme d'acces general
CN1311599A (zh) 用于无线VoIP和VoATM呼叫的召开会议和通告生成方法
CN101352039A (zh) 具有自动用户检测和识别能力的视频电话设备
FI106510B (fi) Järjestelmä puheen siirtämiseksi matkapuhelinverkon ja kiinteän verkon päätelaitteen välillä
TWI332792B (en) System and method for video teleconferencing via a video bridge
KR100853122B1 (ko) 이동통신망을 이용한 실시간 대체 영상 서비스 방법 및시스템
CN100579105C (zh) 一种数据流处理的方法和装置
KR100703421B1 (ko) 트랜스코딩을 이용한 동영상메일 통신장치 및 방법
RU2321960C2 (ru) Связь в реальном времени между телефоном и пользователями сети интернет
CN101383940B (zh) 可视电话的实现方法及装置
JP4089596B2 (ja) 電話交換装置
JP4241916B2 (ja) 電話通信システム
WO2007100178A1 (fr) Procédé, signal et appareil de service conçus pour fournir un service de remplacement de retour d'appel multimedia en fonction de la capacité d'un terminal de communication mobile
JP4513859B2 (ja) 災害地域通信回線捕捉システム、および移動通信システム
Ohira et al. A world first development of a multipoint videophone system over 3G-324M protocol
US9491301B2 (en) Multimedia providing service
KR20080047683A (ko) 휴대용 단말기에서 스트리밍 서비스 전송 방법 및 장치
WO2012155761A1 (fr) Procédé de mise en œuvre d'un cadre photo dynamique visiophonique et terminal mobile

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP