US20030058939A1 - Video telecommunication system - Google Patents
Video telecommunication system
- Publication number
- US20030058939A1 (application US10/252,409)
- Authority
- US
- United States
- Prior art keywords
- region
- picture
- background
- background scene
- video telecommunication
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/20—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
- H04N19/23—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding with coding of regions that are present throughout a whole video segment, e.g. sprites, background or mosaic
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/147—Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/74—Circuits for processing colour signals for obtaining special effects
- H04N9/75—Chroma key
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- the present invention relates to a video communication system based on background and object separation, which is capable of separating a background from an object and dynamically synthesizing the separated background and object so that they can be used for a video telecommunication in accordance with a user's request or the communication environment.
- MPEG-4 allows object-based picture compression coding.
- an object separation technology is a prerequisite to the object-based picture compression coding.
- it is difficult for current technologies to accomplish an object separation that is fast enough to separate a required object from the background and then compression-code the object and the background under environments (video telecommunication/video conversation) that require compressing and transmitting video signals in real time.
- as a face region separation technology, there is a technology for extracting the face region by use of the characteristic that the face region falls within a range of human skin colors.
- human skin color exists within a specific range in a color space. Therefore, this technology is a method for extracting the face region by use of only the pixels satisfying such a skin color condition.
- an object of the present invention is to provide a video telecommunication system, which is capable of automatically separating an object and a background scene and changing the separated background scene into a different scene in a video telecommunication.
- Another object of the present invention is to provide a video telecommunication system, which is capable of realizing a more effective background scene separation by constructing a face region extraction means and a general region extraction means separately and combining them in a video telecommunication system enabling a video telecommunication in which a background scene is automatically changed into a different scene.
- Still another object of the present invention is to provide a video telecommunication system, which is capable of performing a background scene separation and synthesis at terminals by constructing a background scene separation means for separating a background scene and a background scene synthesis means for synthesizing a different background scene and placing the background scene separation means and the background synthesis means at a terminal for performing a video telecommunication in the video telecommunication system enabling the video telecommunication in which a background scene is automatically changed into a different scene.
- Still another object of the present invention is to provide a video telecommunication system, which is capable of realizing a more effective background scene separation and synthesis by constructing a background scene separation means for separating a background scene and a background scene synthesis means for synthesizing a different background scene and placing the background scene separation means at a video telecommunication terminal and the background scene synthesis means at a server for providing services in the video telecommunication system enabling the video telecommunication in which a background scene is automatically changed into a different scene.
- Still another object of the present invention is to provide a video telecommunication system, which is capable of synthesizing a background scene provided by a server into any background scene aiming at an advertisement and so on.
- Still another object of the present invention is to provide a video telecommunication system for separating an object and a background scene and synthesizing the separated background scene into a different background scene replacing the separated background scene, which can be applied to a communication system including pictures, for example, video mail, as well as a video telecommunication including voice.
- Still another object of the present invention is to provide a video telecommunication system for separating an object and a background scene and synthesizing a different background scene replacing the separated background scene, which, when applied to a communication system including pictures (for example, video mail) as well as a video telecommunication including voice, is capable of transmitting a video mail after easily changing and editing the background scene whenever a user wishes to transmit a different background scene, by separating the object (face region), recording information on the boundary between the separated object and the background scene, and later synthesizing only the background scene by means of the boundary information, without repeating the background scene separation.
- a video telecommunication system comprising a background scene separation means for separating an object to be transmitted and a background scene except for the object in a picture in a process of transmitting/receiving data including at least a picture; a background picture database for providing a background picture to be transmitted instead of the background scene; a background picture synthesis means for synthesizing the separated object and a new background picture which is selected from said background picture database; and a picture transmission means for transmitting a synthesized picture synthesized by the separated object and the new background picture.
- a video telecommunication system comprising a background scene separation means for separating an object to be transmitted and a background scene except for the object in a picture in a process of transmitting/receiving data including at least a picture; a boundary region description means for describing a boundary region between the separated object and background scene; a background picture database for providing a background picture to be transmitted instead of the separated background scene; a background picture synthesis means for synthesizing the separated object and a new background picture from said background picture database by use of the information on the boundary region description; and a picture transmission means for transmitting a synthesized picture synthesized by the separated object and the new background picture.
- a video telecommunication control method comprising the steps of: separating an object and a background scene in a picture to be transmitted; selecting a background scene to be transmitted instead of the separated background scene; synthesizing the separated object and the selected new background; and transmitting a synthesized picture synthesized by the separated object and the new background picture.
- FIG. 1 is a view for explaining a concept of picture separation and synthesis for a video telecommunication in a video telecommunication system according to the present invention
- FIG. 2 is a view for explaining a concept of picture separation and synthesis for a video mail in a video telecommunication system according to the present invention
- FIG. 3 is a view for showing a system configuration in which a background scene separation means and a background scene synthesis means are located at a terminal in a video telecommunication system according to the present invention
- FIG. 4 is a view for showing a system configuration in which a background scene separation is achieved in a terminal and a background scene synthesis is achieved at a server in a video telecommunication system according to the present invention
- FIG. 5 is a view for showing a system configuration in which background scene separation and synthesis are achieved at a terminal and a background scene search engine is provided at a server in a video telecommunication system according to the present invention
- FIG. 6 is a view for showing a system configuration in which a background scene separation is achieved at a terminal and a background scene synthesis and a background scene search engine are provided at a server in a video telecommunication system according to the present invention
- FIG. 7 is a flow chart for explaining an operation of a video telecommunication system according to the embodiment of FIG. 3;
- FIG. 8 is a view showing a face region extraction process applied to a video telecommunication system according to the present invention.
- FIG. 9 through FIG. 14 are views showing examples of images for explaining gridding and grid-grouping of skin region pixel image in a face region extraction process applied to a video telecommunication system according to the present invention
- FIG. 15 is a view showing a homogeneous color/texture region segmentation procedure in a face region extraction process applied to a video telecommunication system according to the present invention
- FIG. 16 is a view showing an example of segmentation region image generated in the homogeneous color/texture region segmentation procedure of the FIG. 15;
- FIG. 17 is a flow chart for explaining a procedure of a video telecommunication according to the embodiment of FIG. 4;
- FIG. 18 is a view for showing a system configuration in which background scene separation and synthesis are achieved at a terminal for a video mail in a video telecommunication system according to the present invention
- FIG. 19 is a view for showing a system configuration in which a background scene separation is achieved at a terminal and a background scene synthesis is achieved at a server for a video mail in a video telecommunication system according to the present invention
- FIG. 20 is a view for showing a system configuration in which background scene separation and synthesis are achieved at a server for a video mail in a video telecommunication system according to the present invention
- FIG. 21 is a view for showing a system configuration in which background scene separation and synthesis are achieved at a terminal and the server for providing background scenes pays a cost to the user, as an application of the present invention.
- FIG. 22 is a view for showing a system configuration in which background scene separation and synthesis are achieved at a server and the server for providing background scenes pays a cost to the user, as another application of the present invention.
- a technology for automatically changing a background scene in a video telecommunication system of the present invention can have two applications: one is a real-time video telecommunication including voice with the other party, and the other is the transmission of not only a picture but also other information such as text.
- one is a video telecommunication field in which a background scene is changed into an advertisement background scene or a different background scene desired by a user at the time of video telecommunication.
- the other is a video mail field in which a video mail is transmitted after the background scene is changed into a different background scene desired by the user and video mail editing, including the addition of messages and so on, is performed. Both cases will be considered as the video telecommunication system of the present invention.
- FIG. 1 is a view for explaining a concept of a background scene change in a video telecommunication.
- FIG. 2 is a view for explaining a concept of a background scene change in a video mail.
- the characters 3 can be inserted with a designation of a character display method, including the insertion position, the font and size of the characters, fixed or moving characters, etc.
- the video telecommunication system of the present invention can be implemented as various embodiments depending on positions of a means for separating the background and the object from the picture and a means for synthesizing the separated object and a new background scene.
- FIG. 3 is a view for showing a configuration of an embodiment in which a background scene separation means and a background scene synthesis means are located at a terminal in a video telecommunication system according to the present invention.
- the video telecommunication system generally consists of a terminal 4 for performing a video telecommunication and a server 5 for providing services.
- the terminal 4 includes a background separation unit 6 for separating a background scene and an object from each other in a picture and a background scene synthesis unit 7 for synthesizing the separated object and a new background scene.
- the background separation unit 6 includes a face region extraction unit 8 for extracting a face region from the picture, a general region separation unit 9 for separating a general region except for the face region, a region synthesis unit 10 for synthesizing regions, which are determined as a human region by use of the extracted face region, and a region track unit 11 for tracking a concerned region in next successive frames by use of information on the extracted face region.
- the terminal 4 further includes a picture encoder 12 for encoding transmission picture signals for telecommunication, a picture decoder 13 for decoding reception picture signals for telecommunication, a buffer 14 for processing telecommunication signals, and a telecommunication device 15 for transmitting and receiving the picture signals according to communication protocol.
- the server 5 includes a buffer 16 for processing picture signals to be used for telecommunication and background scene, a background scene database 17 for storing information on pictures to be provided for the background scene, and a telecommunication device 18 for transmitting and receiving the picture signals according to prescribed communication protocol in order to provide the picture information stored in the background scene database to the terminal.
- the terminal 4 can be a PC on which a PC camera is mounted, a video phone, etc.
- the background scene database 17 for providing the background picture can be placed at either the server 5 or the terminal 4 .
- If the database 17 is placed at the server 5, when a background scene is changed into a different background scene desired by a user, the desired background scene is received from the server 5. If the database 17 is placed at the terminal 4, background pictures in the terminal 4 are used.
- the face region extraction unit 8 extracts a face region from an original picture to be transmitted. A method for extracting the face region will be described in detail with reference to FIG. 8.
- the general region separation unit 9 identifies and separates regions having similar colors and textures as single regions by use of color/texture information, and separates the face region as a portion of the general regions.
- the region synthesis unit 10 synthesizes those of the separated regions which are determined to be human regions with reference to the position of the face region extracted in the face region extraction unit 8. For example, since a neck, a body, an arm, and a leg are typically positioned below a face, when the face region is known, a region determined to be a human can be extracted from the known face region. Motion information is additionally used for such an extraction.
- a human region can be extracted by a simpler method from the next successive picture frames by using the assumption that the separated human region moves continuously.
- the region track unit 11 takes responsibility for this task.
- the background scene in the next frame can be separated only by slightly changing and expanding the previously extracted human region. For example, when the motion information indicates a specific direction, an easier background scene separation can be accomplished by examining pixels having the same color information as the human region in the previous frame in the direction indicated by the motion information, and moving or expanding the region. This method also reflects that the size of the human region depends on the distance between the camera and the human being photographed. As described above, the background scene separation unit 6 can separate the background scene and the object (human).
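The frame-to-frame tracking idea above can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the function name, the single-channel stand-in for "color information", and the tolerance value are all assumptions; the patent only requires shifting and expanding the previous human region along the motion direction while checking color similarity.

```python
def track_human_region(prev_mask, prev_frame, cur_frame, motion, tol=16):
    """Roughly track the human region into the next frame by shifting the
    previous mask along the dominant motion vector and keeping the pixels
    whose color stays close to the previous frame's color at the source spot.

    prev_mask  : 2-D list of 0/1 flags (1 = human region)
    prev_frame : 2-D list of grey values (stand-in for color information)
    cur_frame  : 2-D list of grey values for the next frame
    motion     : (dy, dx) dominant motion of the human region
    """
    h, w = len(prev_mask), len(prev_mask[0])
    dy, dx = motion
    new_mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx          # where this pixel came from
            if 0 <= sy < h and 0 <= sx < w and prev_mask[sy][sx]:
                # keep the pixel if its color is still similar to the
                # previous frame's human-region color at the source spot
                if abs(cur_frame[y][x] - prev_frame[sy][sx]) <= tol:
                    new_mask[y][x] = 1
    return new_mask
```

In a real system the mask would also be dilated to absorb small changes in the apparent size of the human region.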
- the background scene synthesis unit 7 synthesizes, into the regions other than the human region, a background scene desired by the user or designated by the server. More particularly, the server 5 sends the user background pictures, stored in the background scene database 17 and selected by the user or designated by the server, through the buffer 16 and the telecommunication device 18, and the user can obtain a synthesized picture having a different background scene by selecting a desired one of the background pictures or by synthesizing the background picture designated by the server. On the other hand, if the database 17 related to the background scene is stored in advance in the terminal 4, the user can conveniently and directly select the background picture without a sending process for the background picture.
- the picture synthesized with the background scene is encoded into a prescribed format by the picture encoder 12 and transmitted to the terminal of the other party through the buffer 14 and the telecommunication device 15.
- when the terminal 4 receives a picture from the other party, the received picture is decoded by the picture decoder 13 and displayed on a screen, so that a video telecommunication is accomplished.
- FIG. 4 is a view for showing a system configuration in which a background scene separation is achieved in a terminal and a background scene synthesis is achieved at a server in a video telecommunication system according to the present invention.
- the background scene separation unit 6 is placed at the terminal 4 and the background scene synthesis unit 22 is placed at the server 5 .
- the background separation unit 6 includes the face region extraction unit 8 , the general region separation unit 9 , the region synthesis unit 10 and a face track unit 20 .
- the face track unit 20 performs the same function as the region track unit 11 of FIG. 3.
- the terminal 4 includes a region boundary description unit 19 for describing information on a boundary between the separated background scene and the human region, the picture encoder 12 , the picture decoder 13 , the buffer 14 , and the telecommunication device 15 .
- the server 5 includes the buffer 16 , the background scene database 17 , the telecommunication device 18 , a region boundary analysis unit 21 for analyzing the information on the boundary between the separated background scene and the human region provided from the region boundary description unit 19 , a background scene synthesis unit 22 for synthesizing a background scene by using boundary analysis information, and a picture encoder 23 and a picture decoder 24 for transmitting and receiving a picture synthesized with a new background scene.
- the terminal 4 performs only the background scene separation and transmits the separated picture, with only the boundary region of the separated regions described by the region boundary description unit 19.
- the server 5 receives the separated picture, synthesizes it with a background picture stored in the database 17, and then resends the synthesized picture to the terminal 4. Such operations will be described in more detail below.
- the background scene separation unit 6 separates the background scene region and the human region. At that time, only the information on the boundary region between the separated human region and the background scene region is described by the region boundary description unit 19, and the region boundary information, together with the picture information on the human region, is transmitted to the server 5 by use of the picture encoder 12, the buffer 14 and the telecommunication device 15.
- when the region boundary information and the picture information on the human region are received through the telecommunication device 18 and the picture decoder 24, the region boundary analysis unit 21 recognizes the boundary between the human region and the background scene by analyzing the received region boundary information, and the background scene synthesis unit 22 selects the background picture stored in the database that is designated by the user, or a background picture designated optionally by the server, and then synthesizes the selected background picture with the picture information on the human region.
- the picture signals synthesized with such a new background scene (or picture) are encoded by the picture encoder 23 and transmitted again through the telecommunication device 18 .
- a first method is that pixels of the background region, except for the human region, are filled with a pixel value such as 'NULL' that is distinguished from meaningful pixel values before being transmitted to the server; the server then fills the remaining regions, except the regions having meaningful pixel values, with pixels of a new background scene.
- This method allows a fast background scene synthesis, since the background scene can be synthesized by bit operations alone, and also allows a detailed level of boundary expression, since the boundary can be expressed at the pixel level.
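The first method can be sketched as below. This is a minimal sketch: the `NULL` sentinel value and the function name are assumptions, since the patent only requires a reserved pixel value distinguishable from meaningful values.

```python
NULL = -1  # sentinel for background pixels; a real codec would reserve a color value

def synthesize_background(received, background):
    """Fill every NULL pixel of the received picture with the corresponding
    pixel of the new background; meaningful pixels (the human region) pass
    through unchanged.  Pixel-accurate, and cheap enough for real time."""
    return [
        [bg if px == NULL else px for px, bg in zip(row, bg_row)]
        for row, bg_row in zip(received, background)
    ]
```

With a reserved bit pattern instead of `-1`, the same operation reduces to per-pixel mask-and-merge bit operations, which is where the speed of this method comes from.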
- a second method is that, under the assumption that the separated regions can be expressed by a polygon, the separated regions are expressed by a sequence of points corresponding to the apexes of the polygon.
- the second method has the merit that the size of the data expressing the boundary region becomes very small.
- since the second method expresses the boundary region as a polygon rather than at the pixel level, however, it is difficult to express a detailed level of boundary.
- the second method also requires a long synthesis time, because the background scene cannot be synthesized by simple bit operations.
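For contrast, the polygon method can be sketched with an even-odd point-in-polygon test; the function names and the particular test are illustrative assumptions, not the patent's specification. The per-pixel geometric test is what makes this method slower than the bit-mask approach, even though the boundary itself is only a handful of points.

```python
def point_in_polygon(x, y, poly):
    """Even-odd rule test; poly is a list of (x, y) apexes."""
    inside = False
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        if (y0 > y) != (y1 > y):
            # x-coordinate where the edge crosses the scanline at height y
            if x < x0 + (y - y0) * (x1 - x0) / (y1 - y0):
                inside = not inside
    return inside

def synthesize_with_polygon(human, background, poly):
    """Rebuild the picture from a polygon boundary: inside the polygon the
    human pixels are kept, outside the new background is used."""
    h, w = len(human), len(human[0])
    return [
        [human[y][x] if point_in_polygon(x, y, poly) else background[y][x]
         for x in range(w)]
        for y in range(h)
    ]
```

The boundary data is just the apex list, so it is tiny to transmit, but every pixel pays the cost of the polygon test at synthesis time.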
- the video telecommunication system of FIG. 4 as described above is particularly useful for cases in which the amount of information on a picture to be used as a background scene is enormous.
- while the video telecommunication system of FIG. 3 requires much time to transmit a concerned background picture from the server to the terminal for the background scene synthesis performed at the terminal, the video telecommunication system of FIG. 4 can be particularly effective in the case of an enormous amount of background picture information, since the server can synthesize the background scene directly without transmitting it to the terminal.
- FIG. 5 is a view for showing a configuration of the video telecommunication system on which a content-based background picture search means is mounted.
- the terminal includes the background scene separation unit and the background scene synthesis unit.
- the terminal 4 includes the background scene separation unit 6 , the background scene synthesis unit 7 , the picture encoder 12 , the picture decoder 13 , the buffer 14 and the telecommunication device 15 , and the background scene separation unit 6 includes the face region extraction unit 8 , the general region separation unit 9 , the region synthesis unit 10 and the face track unit 20 .
- the server 5 includes the buffer 16, the background picture database 17, the telecommunication device 18, a background scene search engine 25, and a background scene search interface 26.
- the background scene search engine 25 allows a user to search for and use a background scene through a content-based search when the user wishes to communicate or send a video mail with a desired different background scene.
- the user can search the background picture database 17 for a desired background scene by use of the content-based background scene search engine 25 through the background scene search interface 26.
- FIG. 6 is a view for showing a configuration of the video telecommunication system on which the content-based background picture search means, that is, the background scene search engine 25 and the background scene search interface 26 , is mounted. Particularly, it is shown that the terminal 4 includes the background scene separation unit 6 and the region boundary description unit 19 and the server 5 includes the background scene synthesis unit 22 and the boundary region analysis unit 21 .
- The operation of the video telecommunication system of FIG. 6 can be understood in the same way as FIGS. 3 and 5.
- FIG. 7 is a flow chart for explaining an automatic background scene change video telecommunication in the video telecommunication system of the present invention, which includes procedures of picture input, background scene segmentation, background scene change, picture compression and picture transmission.
- In the picture input procedure S1, when a video telecommunication begins, a picture to be transmitted is inputted as a system input.
- In the background scene segmentation procedure S2-S5, the background scene segmentation of the inputted picture is carried out according to the following steps.
- In the face region extraction step S2, the position of a region determined to be a face is extracted by use of color information and the like.
- In the homogeneous color/texture region segmentation step S3, regions having similar colors and textures are segmented.
- In the region merging step S4, regions determined to be human regions are merged (i.e., synthesized) by use of information on regions having homogeneous motions and the position of the face region.
- In the region boundary refine step S5, the boundary portions of the merged region are smoothed in order to improve the picture quality.
- In the background scene change procedure S6, the remaining regions, except the segmented human region, are changed into a new desired background scene.
- In the compression coding procedure S7, encoding for transmitting the picture having the newly changed background scene is performed.
- In the picture transmission procedure S8, the compressed picture signals are transmitted.
- the face region extraction step S2 and the homogeneous color/texture region segmentation step S3 may be performed in reverse order.
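The S1-S8 flow can be summarized as a pipeline skeleton. Every function name here is a placeholder, since the patent specifies what each step does rather than a concrete API; the steps are passed in as functions to keep the sketch neutral about their implementation.

```python
def telecom_pipeline(frame, new_background,
                     extract_face, segment_regions, merge_human,
                     refine_boundary, change_background, encode, transmit):
    """High-level sketch of the S1-S8 flow of FIG. 7.  S2 and S3 are
    independent of each other, so their order may be swapped."""
    face = extract_face(frame)                 # S2: face position from color cues
    regions = segment_regions(frame)           # S3: homogeneous color/texture regions
    human = merge_human(regions, face)         # S4: merge regions judged human
    human = refine_boundary(human)             # S5: smooth the region boundary
    out = change_background(frame, human, new_background)  # S6: swap background
    bits = encode(out)                         # S7: compression coding
    return transmit(bits)                      # S8: send the compressed signals
```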
- FIG. 8 is a view for explaining an embodied example of the face region extraction step S2 of FIG. 7.
- the skin color regions are extracted before the face region is extracted (S1). Namely, after it is determined whether the color of each pixel in the inputted picture falls within the skin color range, only the pixels corresponding to skin color are marked as face region candidate pixels. Assuming that the color of a given pixel is expressed by three values in the YCrCb color space, if the three values Y, Cr, Cb satisfy prescribed values, the given pixel is determined to belong to a skin color region. Examples showing only skin color regions are given in FIG. 9 and FIG. 10; FIG. 10 shows an image formed by extracting only the skin color corresponding to the face from the original image of FIG. 9. The reason the color is expressed in the YCrCb color space is that the color information obtained by decoding MPEG files is YCrCb.
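A skin-pixel test of this kind might look as follows. The numeric Y/Cr/Cb windows are assumed values of the sort used in the skin-detection literature; the patent only says "prescribed values".

```python
def is_skin(y, cr, cb):
    """Mark a pixel as a face-region candidate if its YCrCb values fall
    inside a skin-color window (the ranges here are illustrative)."""
    return 60 <= y <= 255 and 133 <= cr <= 173 and 77 <= cb <= 127

def skin_mask(ycrcb_image):
    """Return a 0/1 mask with 1 at every skin-colored pixel."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in ycrcb_image]
```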
- In the next step of FIG. 8, gridding of the skin region pixel image is performed (S2). After the image having only skin color pixels is divided into M*M cells, only the cells whose skin color pixel percentage is above a prescribed threshold value are set to "1", and the remaining cells are set to "0". The skin color pixel percentage tells how much of a cell the skin color pixels occupy, i.e., (the number of skin color pixels in the cell)/(the total number of pixels in the cell). An example of an image formed by such gridding of FIG. 10 is shown in FIG. 11.
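The gridding step can be sketched as below; the 0.5 threshold is an assumption standing in for the "prescribed threshold value", and the function name is illustrative.

```python
def grid_skin_mask(mask, m, threshold=0.5):
    """Divide the skin mask into m x m cells; set a cell to 1 when the
    fraction of skin pixels inside it exceeds the threshold."""
    h, w = len(mask), len(mask[0])
    gh, gw = h // m, w // m
    grid = [[0] * gw for _ in range(gh)]
    for gy in range(gh):
        for gx in range(gw):
            # count skin pixels inside this m x m cell
            cell = [mask[gy * m + dy][gx * m + dx]
                    for dy in range(m) for dx in range(m)]
            if sum(cell) / (m * m) > threshold:
                grid[gy][gx] = 1
    return grid
```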
- In the next step, grouping of connected grids is performed (S3). Namely, if grids set to "1" are adjacent to each other, they are determined to be in the same region and are grouped.
- a determination on whether the grids are adjacent to each other is made according to either an 8-directional way or a 4-directional way.
- the 4-directional way means that grids adjacent in the top, bottom, left and right directions, as shown in FIG. 12, are determined to be adjacent to each other.
- the 8-directional way further considers the diagonal directions, as shown in FIG. 13, in addition to the directions of FIG. 12.
- the 8-directional way is used in this embodiment.
- Such grouped grids are indicated as a single region.
- An example of the grouping of the grid image of FIG. 11 is shown in FIG. 14. As shown in FIG. 14, the connected grids are tied into 7 groups.
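The grid grouping of step S3 amounts to connected-component labeling with 8-connectivity. The sketch below uses an explicit stack-based flood fill; the data layout and names are illustrative, not taken from the patent.

```python
def group_grids(grid):
    """Label 8-connected groups of '1' grids; returns a label map (0 means
    background, 1, 2, ... are group numbers) and the number of groups."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    # the 8 neighbor offsets (the 4-directional way would drop the diagonals)
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    for y in range(h):
        for x in range(w):
            if grid[y][x] == 1 and labels[y][x] == 0:
                next_label += 1
                labels[y][x] = next_label
                stack = [(y, x)]
                while stack:                      # flood-fill one group
                    cy, cx = stack.pop()
                    for dy, dx in nbrs:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and grid[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
    return labels, next_label
```

With 8-connectivity, two diagonally touching grids fall into one group; with 4-connectivity they would be two groups.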
- Next, face region candidates are detected (S4). Namely, when each grid group shown in FIG. 14 is considered as a single region, only the candidates determined to be possible face regions, by use of the ratio of the width to the length of the region and the like, are left.
- An embodied example of determining the candidates is that if the ratio of the number of width pixels to length pixels in a face candidate region is within a prescribed range, the candidate region is determined as the face region.
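The width-to-length test might be sketched as below; the 0.4-1.6 window is a made-up stand-in for the "prescribed range", chosen only because faces are roughly as wide as they are long.

```python
def plausible_face(region_cells):
    """Keep a grid group as a face candidate only if the width-to-length
    ratio of its bounding box lies within an assumed range.

    region_cells : list of (y, x) coordinates belonging to the group
    """
    ys = [y for y, x in region_cells]
    xs = [x for y, x in region_cells]
    width = max(xs) - min(xs) + 1
    length = max(ys) - min(ys) + 1
    return 0.4 <= width / length <= 1.6
```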
- The face region is confirmed (S5). Whether the extracted face region candidates are actual face regions is confirmed by use of a face region template prepared in advance.
- The face region template, which captures the characteristics of a face region by combining a great number of face region images prepared in advance, is compared with each candidate region; a candidate is confirmed as a face region if its similarity to the template is above a prescribed threshold value.
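The template comparison can be sketched as below. Normalized cross-correlation is used here as one plausible similarity measure, and the candidate is assumed to already be resampled to the template's size; the patent specifies neither choice:

```python
import numpy as np

def confirm_face(candidate: np.ndarray, template: np.ndarray, threshold: float = 0.8) -> bool:
    """Step S5 sketch: confirm a candidate as a face region when its similarity
    to an averaged face template exceeds a threshold. Similarity is computed as
    normalized cross-correlation (an assumed measure), giving a value in [-1, 1]."""
    a = candidate.astype(float).ravel()
    b = template.astype(float).ravel()
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    similarity = float(np.dot(a, b) / a.size)
    return similarity >= threshold
```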
- FIG. 15 is a view for explaining a color-based general region segmentation method for segmenting the homogeneous color/texture regions in FIG. 7.
- First, a color space segmentation is performed (S1). Particularly, the YCrCb color space is segmented into N partial spaces determined by experiment, so that any pixel color can be mapped to one of the N colors (color quantization). Subsequently, a picture is inputted (S2), and a smoothing process that averages the values of each pixel's m adjacent pixels is performed to remove noise from the inputted picture (S3). Next, the smoothed picture is quantized into the N colors of the color space segmentation step (S4), and a region generation step considers adjacent pixels having the same quantized color value to be in the same region (S5).
- FIG. 16 shows an example of a segmentation region image generated in this way.
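Steps S3 and S4 above (smoothing and color quantization) can be sketched as follows. Uniform per-channel binning is an assumption made here for illustration; the patent only states that the YCrCb space is segmented into N partial spaces determined by experiment:

```python
import numpy as np

def smooth_and_quantize(ycrcb: np.ndarray, m: int = 3, levels: int = 4) -> np.ndarray:
    """Sketch of steps S3-S4: average each pixel over its m x m neighborhood,
    then map each channel onto `levels` uniform bins, giving levels**3
    quantized colors. Returns one integer color label per pixel."""
    h, w, _ = ycrcb.shape
    pad = m // 2
    padded = np.pad(ycrcb.astype(float), ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    smoothed = np.zeros_like(ycrcb, dtype=float)
    for dy in range(m):
        for dx in range(m):
            smoothed += padded[dy:dy + h, dx:dx + w]
    smoothed /= m * m
    # Quantize each channel into `levels` equal-width bins over [0, 255].
    bins = np.clip((smoothed / 256.0 * levels).astype(int), 0, levels - 1)
    # Combine the three channel indices into a single color label per pixel.
    return bins[..., 0] * levels * levels + bins[..., 1] * levels + bins[..., 2]
```

The region generation of step S5 then amounts to connected-component grouping of pixels sharing the same label, analogous to the grid grouping of step S3 in the face detection procedure.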
- FIG. 17 is a flow chart for explaining a video telecommunication procedure according to the embodiment of FIG. 4.
- A position of the region determined to be a face is extracted by use of color information and the like.
- Regions having similar colors and textures are segmented.
- Regions determined to be human regions are merged (i.e., synthesized) by use of information on regions having homogeneous motions and the position of the face region.
- In the region boundary refinement step (S5), the boundary portions of the merged region are smoothed in order to prevent deterioration of picture quality due to the roughness of the boundary portions.
- In the subsequent background scene boundary region description step (S6), the information on the boundary between the human region and the background scene region is described, as discussed earlier.
- In the compression coding step, encoding for transmitting the telecommunication picture, for example the human picture and the information on the boundary of the segmented regions, is performed.
- In the picture transmission step (S8), the compressed picture signals are transmitted.
- The procedures from the picture input (S1) to the picture transmission (S8) are performed in the terminal.
- The server performs the remaining procedures, starting with receiving the data transmitted from the terminal.
- In the picture reception step (S9), the picture data of the human region and the region boundary information are received and decoded.
- In the boundary region analysis step (S10), the received boundary region information is analyzed.
- In the background scene synthesis step (S11), a new background scene is synthesized with the human picture by use of the analyzed segmentation region boundary information.
- The picture synthesized with the new background scene is compression-coded (S12), and the compressed picture having the new background scene is transmitted to the receiving side (S13).
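The background scene synthesis of step S11 can be sketched as a mask-guided composite (an illustration only; the binary mask stands in for the decoded boundary information, and a real system would feather the boundary, cf. the refinement of step S5):

```python
import numpy as np

def synthesize_background(frame: np.ndarray, human_mask: np.ndarray,
                          new_background: np.ndarray) -> np.ndarray:
    """Step S11 sketch: keep pixels inside the human region from the received
    frame and take every other pixel from the new background scene."""
    mask = human_mask.astype(bool)[..., None]  # broadcast over color channels
    return np.where(mask, frame, new_background)
```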
- The video mail transmission system is a system in which a user records a message as a picture and transmits the picture by electronic mail.
- In such a video mail, the user can edit the message picture with a desired background scene.
- The background scene separation and automatic change technology of the present invention enables this kind of editing of the video mail.
- Since video mail, unlike the video telecommunication environment, does not require the background scene to be changed in real time, there is enough time to separate and synthesize the background scene after the picture is acquired.
- Accordingly, both the background scene separation means and the background scene synthesis means can be provided in the terminal; or the background scene separation means can be provided in the terminal and the background scene synthesis means in the server; or both can be provided in the server.
- The user can also edit the picture so that a character string desired by the user is included in it.
- The user can set the font or size of the characters, select a position at which the characters are shown in the picture, or choose a mode by which the characters are displayed.
- The display mode can be expected to include, for example, an effect that displays the characters at a fixed position or an effect that moves the characters.
- Such a character string synthesis means can be located in either the terminal or the server.
- FIG. 18 is a view for explaining a case in which the terminal includes the background scene separation means, the background scene synthesis means, the character synthesis means, and the character input interface.
- The terminal 4 further includes a character synthesis unit 27 for preparing the video mail and a character input interface 28 for inputting characters.
- A user prepares and inputs the messages to be transmitted by use of the character input interface 28, and selects the display position, display format, etc. of the messages.
- The character synthesis unit 27 synthesizes the characters inputted by the user.
- The synthesized characters, together with the user picture having the new background scene synthesized by the background scene synthesis unit 7, are transformed into a video mail format and then transmitted.
- Elements of FIG. 18 that are not described here have the same reference numerals as the corresponding elements in the video telecommunication system. Therefore, descriptions of the separation and synthesis of the background scene and the object, the background scene search, and the transmission/reception operation are omitted for brevity.
- FIG. 19 is a view for explaining a case in which the terminal includes the background scene separation means and the region boundary description means, while the server includes the background scene synthesis means, the region boundary analysis means, the character synthesis unit, and the character input interface.
- The construction of FIG. 19 is the same as that of FIG. 18, except that in FIG. 19 the server 5 includes the character synthesis unit 27 for preparing the video mail and the character input interface 28 for inputting characters.
- FIG. 20 is a view for explaining a case in which the server includes the background scene separation means, the background scene synthesis means, the character synthesis unit, and the character input interface.
- The construction of FIG. 20 is the same as those of FIGS. 18 and 19, except that the server 5 includes the background scene separation means, the background scene synthesis means, the character synthesis unit, and the character input interface.
- FIG. 21 is a view for explaining an example to which the video telecommunication system of the present invention is applicable.
- In this example, a service provider optionally designates a background scene at the time of video telecommunication, and the user receives a benefit such as a fee discount.
- Here, the video telecommunication includes the video mail system in a wide sense.
- A service provider 30a optionally designates a background scene and offers a fee discount to users as compensation for the background scene designation.
- Reference numeral 31 indicates a gateway.
- The terminals 29a and 29b include picture input units 32a and 32b, background scene separation and synthesis units 33a and 33b, buffers 34a and 34b, etc., respectively.
- FIG. 22 is a view for explaining another example to which the video telecommunication system of the present invention is applicable.
- In this example as well, a service provider optionally designates a background scene at the time of video telecommunication, and the user receives a benefit such as a fee discount.
- Here, the video telecommunication includes the video mail system in a wide sense.
- The terminals 29a and 29b include only the elements required for the transmission/reception of picture signals, for example the picture input units 32a and 32b, respectively, while the server 30 includes the background scene separation and synthesis unit 35 and the background scene database 36.
- The operation related to the background scene change is the same as in the video telecommunication system described above, so its detailed description is omitted for brevity.
- The background picture optionally selectable by the service provider may be an advertisement.
- When an advertisement is to be the background picture, a still picture or a moving picture giving an advertising effect can serve as the background picture, or only a partial region of the original background picture can be edited so that object pictures or characters giving an advertising effect are inserted.
- As described above, the present invention enables video telecommunication with a background scene that is desired by the user and changed automatically in real time.
- Here, the video telecommunication can include both video telephony and video mail transmission.
- In addition, the user can save on telecommunication fees by accepting a background scene designated by the service provider, based on an agreement between the service provider and the user.
- Furthermore, the user can converse with other persons against a freely set, desired background scene. Accordingly, the privacy of individuals can be more reliably protected.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/520,587 US8798168B2 (en) | 2001-09-26 | 2006-09-14 | Video telecommunication system for synthesizing a separated object with a new background picture |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR59567/2001 | 2001-09-26 | ||
KR10-2001-0059567A KR100516638B1 (ko) | 2001-09-26 | 2001-09-26 | Video telecommunication system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/520,587 Division US8798168B2 (en) | 2001-09-26 | 2006-09-14 | Video telecommunication system for synthesizing a separated object with a new background picture |
Publications (1)
Publication Number | Publication Date |
---|---|
US20030058939A1 true US20030058939A1 (en) | 2003-03-27 |
Family
ID=19714689
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/252,409 Abandoned US20030058939A1 (en) | 2001-09-26 | 2002-09-24 | Video telecommunication system |
US11/520,587 Expired - Fee Related US8798168B2 (en) | 2001-09-26 | 2006-09-14 | Video telecommunication system for synthesizing a separated object with a new background picture |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/520,587 Expired - Fee Related US8798168B2 (en) | 2001-09-26 | 2006-09-14 | Video telecommunication system for synthesizing a separated object with a new background picture |
Country Status (4)
Country | Link |
---|---|
US (2) | US20030058939A1 (de) |
EP (1) | EP1298933A3 (de) |
KR (1) | KR100516638B1 (de) |
CN (1) | CN100370829C (de) |
Cited By (59)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040161163A1 (en) * | 2003-02-14 | 2004-08-19 | Kenya Takamidoh | Portrait image processing method and apparatus |
US20050130108A1 (en) * | 2003-12-12 | 2005-06-16 | Kurzweil Raymond C. | Virtual encounters |
US20060056506A1 (en) * | 2004-09-13 | 2006-03-16 | Ho Chia C | System and method for embedding multimedia compression information in a multimedia bitstream |
US20060078292A1 (en) * | 2004-10-12 | 2006-04-13 | Huang Jau H | Apparatus and method for embedding content information in a video bit stream |
US20070002360A1 (en) * | 2005-07-01 | 2007-01-04 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Modifying restricted images |
US20070005422A1 (en) * | 2005-07-01 | 2007-01-04 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Techniques for image generation |
US20070266049A1 (en) * | 2005-07-01 | 2007-11-15 | Searete Llc, A Limited Liability Corportion Of The State Of Delaware | Implementation of media content alteration |
US20070263865A1 (en) * | 2005-07-01 | 2007-11-15 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Authorization rights for substitute media content |
US20070274519A1 (en) * | 2005-07-01 | 2007-11-29 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Authorization for media content alteration |
US20070276757A1 (en) * | 2005-07-01 | 2007-11-29 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Approval technique for media content alteration |
US20070279410A1 (en) * | 2004-05-14 | 2007-12-06 | Tencent Technology (Shenzhen) Company Limited | Method For Synthesizing Dynamic Virtual Figures |
US20070287477A1 (en) * | 2006-06-12 | 2007-12-13 | Available For Licensing | Mobile device with shakeable snow rendering |
US20070294720A1 (en) * | 2005-07-01 | 2007-12-20 | Searete Llc | Promotional placement in media works |
US20070299877A1 (en) * | 2005-07-01 | 2007-12-27 | Searete Llc | Group content substitution in media works |
US20080013859A1 (en) * | 2005-07-01 | 2008-01-17 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Implementation of media content alteration |
US20080028422A1 (en) * | 2005-07-01 | 2008-01-31 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Implementation of media content alteration |
US20080044035A1 (en) * | 2006-08-14 | 2008-02-21 | Kapil Agrawal | Mixing background effects with real communication data to enhance personal communications |
US20080052161A1 (en) * | 2005-07-01 | 2008-02-28 | Searete Llc | Alteration of promotional content in media works |
US20080052104A1 (en) * | 2005-07-01 | 2008-02-28 | Searete Llc | Group content substitution in media works |
US20080059530A1 (en) * | 2005-07-01 | 2008-03-06 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Implementing group content substitution in media works |
US20080077954A1 (en) * | 2005-07-01 | 2008-03-27 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Promotional placement in media works |
US20080086380A1 (en) * | 2005-07-01 | 2008-04-10 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Alteration of promotional content in media works |
US20080180539A1 (en) * | 2007-01-31 | 2008-07-31 | Searete Llc, A Limited Liability Corporation | Image anonymization |
US20080181533A1 (en) * | 2007-01-31 | 2008-07-31 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Targeted obstrufication of an image |
US20080180459A1 (en) * | 2007-01-31 | 2008-07-31 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Anonymization pursuant to a broadcasted policy |
US20080244755A1 (en) * | 2007-03-30 | 2008-10-02 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Authorization for media content alteration |
US20080246777A1 (en) * | 2007-04-03 | 2008-10-09 | Richard Lee Swanson | Method and apparatus for background replacement in still photographs |
US20080270161A1 (en) * | 2007-04-26 | 2008-10-30 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Authorization rights for substitute media content |
US20090037243A1 (en) * | 2005-07-01 | 2009-02-05 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Audio substitution options in media works |
US20090037278A1 (en) * | 2005-07-01 | 2009-02-05 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Implementing visual substitution options in media works |
US20090046106A1 (en) * | 2007-08-14 | 2009-02-19 | Samsung Techwin Co., Ltd. | Method of displaying images and display apparatus applying the same |
US20090150444A1 (en) * | 2005-07-01 | 2009-06-11 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Media markup for audio content alteration |
US20090151008A1 (en) * | 2005-07-01 | 2009-06-11 | Searete Llc. A Limited Liability Corporation Of The State Of Delaware | Media markup system for content alteration in derivative works |
US20090210946A1 (en) * | 2005-07-01 | 2009-08-20 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Media markup for promotional audio content |
US20140002866A1 (en) * | 2012-06-28 | 2014-01-02 | Xerox Corporation | Method and apparatus for object assisted image editing and transmission of scanned documents |
US9064295B2 (en) * | 2013-02-04 | 2015-06-23 | Sony Corporation | Enhanced video encoding using depth information |
US9153031B2 (en) | 2011-06-22 | 2015-10-06 | Microsoft Technology Licensing, Llc | Modifying video regions using mobile device input |
US9215512B2 (en) | 2007-04-27 | 2015-12-15 | Invention Science Fund I, Llc | Implementation of media content alteration |
US9583141B2 (en) | 2005-07-01 | 2017-02-28 | Invention Science Fund I, Llc | Implementing audio substitution options in media works |
US20170200299A1 (en) * | 2012-11-12 | 2017-07-13 | Sony Corporation | Image processing device, image processing method and program |
US20180129898A1 (en) * | 2005-09-30 | 2018-05-10 | Facebook, Inc. | Apparatus, method and program for image search |
US20180254064A1 (en) * | 2017-03-02 | 2018-09-06 | Ricoh Company, Ltd. | Decomposition of a Video Stream into Salient Fragments |
US20180309973A1 (en) * | 2011-03-25 | 2018-10-25 | Semiconductor Energy Laboratory Co., Ltd. | Image processing method and display device |
US10708635B2 (en) | 2017-03-02 | 2020-07-07 | Ricoh Company, Ltd. | Subsumption architecture for processing fragments of a video stream |
US10713391B2 (en) | 2017-03-02 | 2020-07-14 | Ricoh Co., Ltd. | Tamper protection and video source identification for video processing pipeline |
US10719552B2 (en) | 2017-03-02 | 2020-07-21 | Ricoh Co., Ltd. | Focalized summarizations of a video stream |
US10728499B2 (en) | 2017-11-02 | 2020-07-28 | Hyperconnect Inc. | Electronic apparatus and communication method thereof |
US10789685B2 (en) | 2012-12-20 | 2020-09-29 | Microsoft Technology Licensing, Llc | Privacy image generation |
US10929685B2 (en) | 2017-03-02 | 2021-02-23 | Ricoh Company, Ltd. | Analysis of operator behavior focalized on machine events |
CN112399128A (zh) * | 2020-11-16 | 2021-02-23 | 明磊 | Video communication system and method based on big data |
US10929707B2 (en) | 2017-03-02 | 2021-02-23 | Ricoh Company, Ltd. | Computation of audience metrics focalized on displayed content |
US10943122B2 (en) | 2017-03-02 | 2021-03-09 | Ricoh Company, Ltd. | Focalized behavioral measurements in a video stream |
US10949705B2 (en) | 2017-03-02 | 2021-03-16 | Ricoh Company, Ltd. | Focalized behavioral measurements in a video stream |
US10949463B2 (en) | 2017-03-02 | 2021-03-16 | Ricoh Company, Ltd. | Behavioral measurements in a video stream focalized on keywords |
US10956495B2 (en) | 2017-03-02 | 2021-03-23 | Ricoh Company, Ltd. | Analysis of operator behavior focalized on machine events |
US10956773B2 (en) | 2017-03-02 | 2021-03-23 | Ricoh Company, Ltd. | Computation of audience metrics focalized on displayed content |
US10956494B2 (en) | 2017-03-02 | 2021-03-23 | Ricoh Company, Ltd. | Behavioral measurements in a video stream focalized on keywords |
US20220122344A1 (en) * | 2019-01-09 | 2022-04-21 | Samsung Electronics Co., Ltd | Image optimization method and system based on artificial intelligence |
US20220201245A1 (en) * | 2020-07-31 | 2022-06-23 | Zoom Video Communications, Inc. | Methods and system for transmitting content during a networked conference |
Families Citing this family (61)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100459376B1 (ko) * | 2001-12-14 | 2004-12-03 | 박유상 | 영상 객체 합성기 |
KR100451652B1 (ko) * | 2002-05-28 | 2004-10-08 | 엘지전자 주식회사 | 적응적 칼라 클러스터링을 이용한 얼굴 영역 추출 방법 |
AU2003264402A1 (en) * | 2003-09-10 | 2005-04-06 | Fujitsu Limited | Information processing device for setting background image, information display method, and program |
KR101119067B1 (ko) * | 2004-08-24 | 2012-06-12 | 주식회사 비즈모델라인 | 무선 단말기 |
JP2006215685A (ja) * | 2005-02-02 | 2006-08-17 | Funai Electric Co Ltd | フォトプリンタ |
US20060284895A1 (en) * | 2005-06-15 | 2006-12-21 | Marcu Gabriel G | Dynamic gamma correction |
US8085318B2 (en) * | 2005-10-11 | 2011-12-27 | Apple Inc. | Real-time image capture and manipulation based on streaming data |
US7663691B2 (en) * | 2005-10-11 | 2010-02-16 | Apple Inc. | Image capture using display device as light source |
KR100748059B1 (ko) * | 2005-08-11 | 2007-08-09 | 주식회사 오아시스미디어 | 실시간 다층 동영상 합성보드 |
KR100912230B1 (ko) * | 2005-09-16 | 2009-08-14 | 주식회사 인스프리트 | 대체 영상 통화 서비스 제공 방법 및 이를 위한 시스템 |
KR100728296B1 (ko) | 2006-01-02 | 2007-06-13 | 삼성전자주식회사 | 아이피 네트워크에서의 배경 스킨 서비스 방법 및 그 장치 |
KR100813936B1 (ko) * | 2006-04-14 | 2008-03-14 | 텔미정보통신 주식회사 | 동영상의 동적 피사체 추출 및 영상합성 서비스 방법 |
KR101287843B1 (ko) * | 2006-11-01 | 2013-07-18 | 엘지전자 주식회사 | 단말기 및 구도화면 제공방법 |
US8340398B2 (en) * | 2006-12-02 | 2012-12-25 | Electronics And Telecommunications Research Institute | Correlation extract method for generating 3D motion data, and motion capture system and method for easy composition of humanoid character on real background image using the same |
US8122378B2 (en) * | 2007-06-08 | 2012-02-21 | Apple Inc. | Image capture and manipulation |
US20080303949A1 (en) * | 2007-06-08 | 2008-12-11 | Apple Inc. | Manipulating video streams |
JP2009053815A (ja) * | 2007-08-24 | 2009-03-12 | Nikon Corp | 被写体追跡プログラム、および被写体追跡装置 |
DE112007003641T5 (de) | 2007-08-28 | 2010-10-14 | LSI Corp., Milpitas | Datenübertragung über ein Mobilfunknetz |
CN101201895B (zh) * | 2007-09-20 | 2010-06-02 | 北京清大维森科技有限责任公司 | 嵌入式人脸识别监测器及其检测方法 |
US8437514B2 (en) * | 2007-10-02 | 2013-05-07 | Microsoft Corporation | Cartoon face generation |
JP2009205283A (ja) * | 2008-02-26 | 2009-09-10 | Olympus Corp | 画像処理装置、画像処理方法及び画像処理プログラム |
US8831379B2 (en) * | 2008-04-04 | 2014-09-09 | Microsoft Corporation | Cartoon personalization |
US8275197B2 (en) | 2008-06-14 | 2012-09-25 | Microsoft Corporation | Techniques to manage a whiteboard for multimedia conference events |
US8115613B2 (en) | 2008-07-18 | 2012-02-14 | Ford Global Technologies | Tire pressure monitoring system auto learn algorithm |
US8943398B2 (en) * | 2009-04-02 | 2015-01-27 | Vistaprint Schweiz Gmbh | System and method for generating colored table from variable height background content imagery to support rich content in multiple email readers |
WO2011152841A1 (en) * | 2010-06-01 | 2011-12-08 | Hewlett-Packard Development Company, L.P. | Replacement of a person or object in an image |
KR101687613B1 (ko) * | 2010-06-21 | 2016-12-20 | 엘지전자 주식회사 | 이동 단말기 및 이것의 그룹 생성 방법 |
CN102387311A (zh) * | 2010-09-02 | 2012-03-21 | 深圳Tcl新技术有限公司 | 一种合成视频图像的装置以及合成视频图像的方法 |
CN102025973B (zh) * | 2010-12-17 | 2014-07-02 | 广东威创视讯科技股份有限公司 | 视频合成方法及视频合成系统 |
KR101461149B1 (ko) * | 2011-12-28 | 2014-11-17 | 주식회사 비즈모델라인 | 화상 중첩 방법 |
KR101862128B1 (ko) * | 2012-02-23 | 2018-05-29 | 삼성전자 주식회사 | 얼굴을 포함하는 영상 처리 방법 및 장치 |
US8878663B2 (en) | 2013-01-29 | 2014-11-04 | Ford Global Technologies, Llc | Automatic sensor detection |
KR102056633B1 (ko) * | 2013-03-08 | 2019-12-17 | 삼성전자 주식회사 | 다자간 영상 통화 단말기 및 그의 ui 운용 방법 |
CN104079860B (zh) * | 2013-03-26 | 2019-06-25 | 联想(北京)有限公司 | 一种信息处理方法及电子设备 |
WO2015044994A1 (ja) * | 2013-09-24 | 2015-04-02 | 日立マクセル株式会社 | テレビ通話装置、及びテレビ通話処理方法 |
US10283162B2 (en) | 2014-02-05 | 2019-05-07 | Avatar Merger Sub II, LLC | Method for triggering events in a video |
US9589363B2 (en) | 2014-03-25 | 2017-03-07 | Intel Corporation | Object tracking in encoded video streams |
CN104134225B (zh) | 2014-08-06 | 2016-03-02 | 深圳市中兴移动通信有限公司 | 图片的合成方法及装置 |
US10116901B2 (en) * | 2015-03-18 | 2018-10-30 | Avatar Merger Sub II, LLC | Background modification in video conferencing |
EP3099081B1 (de) * | 2015-05-28 | 2020-04-29 | Samsung Electronics Co., Ltd. | Anzeigevorrichtung und steuerungsverfahren dafür |
CN105430317B (zh) * | 2015-10-23 | 2018-10-26 | 东莞酷派软件技术有限公司 | 一种视频背景设置方法及终端设备 |
CN105872408A (zh) * | 2015-12-04 | 2016-08-17 | 乐视致新电子科技(天津)有限公司 | 图像处理方法及设备 |
TWI616102B (zh) * | 2016-06-24 | 2018-02-21 | 和碩聯合科技股份有限公司 | 視訊影像生成系統及其視訊影像生成之方法 |
CN107707833A (zh) * | 2017-09-11 | 2018-02-16 | 广东欧珀移动通信有限公司 | 图像处理方法和装置、电子装置和计算机可读存储介质 |
EP3680853A4 (de) | 2017-09-11 | 2020-11-04 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Bildverarbeitungsverfahren und -vorrichtung, elektronische vorrichtung und computerlesbares speichermedium |
CN107623824B (zh) * | 2017-09-11 | 2019-08-20 | Oppo广东移动通信有限公司 | 背景图像处理方法、装置和电子设备 |
CN107623823B (zh) * | 2017-09-11 | 2020-12-18 | Oppo广东移动通信有限公司 | 视频通信背景显示方法和装置 |
CN107613239B (zh) * | 2017-09-11 | 2020-09-11 | Oppo广东移动通信有限公司 | 视频通信背景显示方法和装置 |
CN107734264B (zh) * | 2017-09-11 | 2020-12-22 | Oppo广东移动通信有限公司 | 图像处理方法和装置 |
CN107707838A (zh) * | 2017-09-11 | 2018-02-16 | 广东欧珀移动通信有限公司 | 图像处理方法和装置 |
CN107682656B (zh) * | 2017-09-11 | 2020-07-24 | Oppo广东移动通信有限公司 | 背景图像处理方法、电子设备和计算机可读存储介质 |
CN107592491B (zh) * | 2017-09-11 | 2019-12-27 | Oppo广东移动通信有限公司 | 视频通信背景显示方法和装置 |
CN107707837B (zh) * | 2017-09-11 | 2021-06-29 | Oppo广东移动通信有限公司 | 图像处理方法及装置、电子装置和计算机可读存储介质 |
CN108737765B (zh) * | 2018-08-02 | 2021-05-11 | 广东小天才科技有限公司 | 一种视频通话处理方法、装置、终端设备及存储介质 |
CN109254712B (zh) * | 2018-09-30 | 2022-05-31 | 联想(北京)有限公司 | 信息处理方法及电子设备 |
CN110049378A (zh) * | 2019-04-17 | 2019-07-23 | 珠海格力电器股份有限公司 | 一种视频模式下的互动方法、控制系统及终端 |
US11593947B2 (en) | 2020-03-10 | 2023-02-28 | Cisco Technology, Inc. | Automatic adjusting background |
DE102022119217A1 (de) * | 2021-08-04 | 2023-02-09 | Motional Ad Llc | Trainieren eines neuronalen Netzwerks unter Verwendung einer Datenmenge mit Labeln mehrerer Granularitäten |
US20220109838A1 (en) * | 2021-12-17 | 2022-04-07 | Intel Corporation | Methods and apparatus to process video frame pixel data using artificial intelligence video frame segmentation |
CN115514989B (zh) * | 2022-08-16 | 2024-04-09 | 如你所视(北京)科技有限公司 | 一种数据传输方法、系统及存储介质 |
CN115550704B (zh) * | 2022-12-01 | 2023-03-14 | 成都掌声如雷网络科技有限公司 | 一种基于多功能家电的远程家人互动活动方法 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5469536A (en) * | 1992-02-25 | 1995-11-21 | Imageware Software, Inc. | Image editing system including masking capability |
US6698943B2 (en) * | 1997-10-22 | 2004-03-02 | Media Technologies Licensing, Llc. | Imaging system and method |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH1013799A (ja) * | 1996-06-19 | 1998-01-16 | Mega Chips:Kk | テレビ電話装置 |
WO1998010331A1 (fr) * | 1996-09-02 | 1998-03-12 | Snk Corporation | Appareil de prises de vues |
US6078619A (en) * | 1996-09-12 | 2000-06-20 | University Of Bath | Object-oriented video system |
EP0866606B1 (de) * | 1996-10-04 | 2004-12-29 | Nippon Telegraph And Telephone Corporation | Vorrichtung und verfahren zur zeitlichen und räumlichen integration und verwaltung einer vielzahl von videos sowie speichermedium zur speicherung eines programms dafür |
GB2324428A (en) * | 1997-04-17 | 1998-10-21 | Sharp Kk | Image tracking; observer tracking stereoscopic display |
RO116685B1 (ro) * | 1997-08-29 | 2001-04-30 | Fx Design Srl | Procedeu de inlocuire a fundalului intr-un material filmat |
US6483521B1 (en) * | 1998-02-02 | 2002-11-19 | Matsushita Electric Industrial Co., Ltd. | Image composition method, image composition apparatus, and data recording media |
KR100316639B1 (ko) * | 1998-05-22 | 2002-01-16 | 윤종용 | 다지점 영상회의 시스템 및 그에 따른 구현방법 |
US6483851B1 (en) * | 1998-11-13 | 2002-11-19 | Tektronix, Inc. | System for network transcoding of multimedia data flow |
KR100350790B1 (ko) * | 1999-01-22 | 2002-08-28 | 엘지전자 주식회사 | 오브젝트 의존적인 특징소 학습에 의한 적응적 오브젝트 추출방법 |
KR200167992Y1 (ko) | 1999-08-17 | 2000-02-15 | 김장배 | 미싱용 피드도그 |
KR100353258B1 (ko) * | 1999-10-21 | 2002-09-28 | 김문겸 | 전자 메일 서비스 장치 및 서비스 방법 |
GB2358098A (en) * | 2000-01-06 | 2001-07-11 | Sharp Kk | Method of segmenting a pixelled image |
KR20000049561A (ko) * | 2000-04-11 | 2000-08-05 | 권진석 | 이미지가 포함된 화면 전송 방법 및 장치 |
KR20010067992A (ko) * | 2001-04-13 | 2001-07-13 | 장민근 | 배경화상 추출 및 삽입이 가능한 동화상 제공 이동통신단말기 및 이를 이용한 배경화상 분리 방법 |
- 2001-09-26: KR application KR10-2001-0059567A, patent KR100516638B1 (ko), not active (IP right cessation)
- 2002-09-24: EP application EP02021378A, publication EP1298933A3 (de), not active (withdrawn)
- 2002-09-24: US application US10/252,409, publication US20030058939A1 (en), not active (abandoned)
- 2002-09-26: CN application CNB021433739A, patent CN100370829C (zh), not active (expired, fee related)
- 2006-09-14: US application US11/520,587, patent US8798168B2 (en), not active (expired, fee related)
Cited By (90)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040161163A1 (en) * | 2003-02-14 | 2004-08-19 | Kenya Takamidoh | Portrait image processing method and apparatus |
US20050130108A1 (en) * | 2003-12-12 | 2005-06-16 | Kurzweil Raymond C. | Virtual encounters |
US20070279410A1 (en) * | 2004-05-14 | 2007-12-06 | Tencent Technology (Shenzhen) Company Limited | Method For Synthesizing Dynamic Virtual Figures |
US10032290B2 (en) * | 2004-05-14 | 2018-07-24 | Tencent Technology (Shenzhen) Company Limited | Method for synthesizing dynamic virtual figures |
US20060056506A1 (en) * | 2004-09-13 | 2006-03-16 | Ho Chia C | System and method for embedding multimedia compression information in a multimedia bitstream |
US20060078292A1 (en) * | 2004-10-12 | 2006-04-13 | Huang Jau H | Apparatus and method for embedding content information in a video bit stream |
US7706663B2 (en) * | 2004-10-12 | 2010-04-27 | Cyberlink Corp. | Apparatus and method for embedding content information in a video bit stream |
US20080052104A1 (en) * | 2005-07-01 | 2008-02-28 | Searete Llc | Group content substitution in media works |
US20070005422A1 (en) * | 2005-07-01 | 2007-01-04 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Techniques for image generation |
US20070263865A1 (en) * | 2005-07-01 | 2007-11-15 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Authorization rights for substitute media content |
US20070274519A1 (en) * | 2005-07-01 | 2007-11-29 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Authorization for media content alteration |
US20070276757A1 (en) * | 2005-07-01 | 2007-11-29 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Approval technique for media content alteration |
US20070005651A1 (en) * | 2005-07-01 | 2007-01-04 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Restoring modified assets |
US8126938B2 (en) | 2005-07-01 | 2012-02-28 | The Invention Science Fund I, Llc | Group content substitution in media works |
US20070294720A1 (en) * | 2005-07-01 | 2007-12-20 | Searete Llc | Promotional placement in media works |
US20070299877A1 (en) * | 2005-07-01 | 2007-12-27 | Searete Llc | Group content substitution in media works |
US20080013859A1 (en) * | 2005-07-01 | 2008-01-17 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Implementation of media content alteration |
US20080028422A1 (en) * | 2005-07-01 | 2008-01-31 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Implementation of media content alteration |
US7860342B2 (en) | 2005-07-01 | 2010-12-28 | The Invention Science Fund I, Llc | Modifying restricted images |
US20080052161A1 (en) * | 2005-07-01 | 2008-02-28 | Searete Llc | Alteration of promotional content in media works |
US20070005423A1 (en) * | 2005-07-01 | 2007-01-04 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Providing promotional content |
US20080059530A1 (en) * | 2005-07-01 | 2008-03-06 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Implementing group content substitution in media works |
US20080077954A1 (en) * | 2005-07-01 | 2008-03-27 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Promotional placement in media works |
US20080086380A1 (en) * | 2005-07-01 | 2008-04-10 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Alteration of promotional content in media works |
US20070266049A1 (en) * | 2005-07-01 | 2007-11-15 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Implementation of media content alteration |
US8732087B2 (en) | 2005-07-01 | 2014-05-20 | The Invention Science Fund I, Llc | Authorization for media content alteration |
US20080180538A1 (en) * | 2005-07-01 | 2008-07-31 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Image anonymization |
US9583141B2 (en) | 2005-07-01 | 2017-02-28 | Invention Science Fund I, Llc | Implementing audio substitution options in media works |
US9426387B2 (en) | 2005-07-01 | 2016-08-23 | Invention Science Fund I, Llc | Image anonymization |
US9230601B2 (en) | 2005-07-01 | 2016-01-05 | Invention Science Fund I, Llc | Media markup system for content alteration in derivative works |
US9092928B2 (en) | 2005-07-01 | 2015-07-28 | The Invention Science Fund I, Llc | Implementing group content substitution in media works |
US9065979B2 (en) | 2005-07-01 | 2015-06-23 | The Invention Science Fund I, Llc | Promotional placement in media works |
US8910033B2 (en) | 2005-07-01 | 2014-12-09 | The Invention Science Fund I, Llc | Implementing group content substitution in media works |
US20090037243A1 (en) * | 2005-07-01 | 2009-02-05 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Audio substitution options in media works |
US20090037278A1 (en) * | 2005-07-01 | 2009-02-05 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Implementing visual substitution options in media works |
US8792673B2 (en) | 2005-07-01 | 2014-07-29 | The Invention Science Fund I, Llc | Modifying restricted images |
US20090150444A1 (en) * | 2005-07-01 | 2009-06-11 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Media markup for audio content alteration |
US20090151008A1 (en) * | 2005-07-01 | 2009-06-11 | Searete Llc. A Limited Liability Corporation Of The State Of Delaware | Media markup system for content alteration in derivative works |
US20090210946A1 (en) * | 2005-07-01 | 2009-08-20 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Media markup for promotional audio content |
US20070002360A1 (en) * | 2005-07-01 | 2007-01-04 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Modifying restricted images |
US20180129898A1 (en) * | 2005-09-30 | 2018-05-10 | Facebook, Inc. | Apparatus, method and program for image search |
US10810454B2 (en) * | 2005-09-30 | 2020-10-20 | Facebook, Inc. | Apparatus, method and program for image search |
US20070287477A1 (en) * | 2006-06-12 | 2007-12-13 | Available For Licensing | Mobile device with shakeable snow rendering |
US7973818B2 (en) * | 2006-08-14 | 2011-07-05 | Broadcom Corporation | Mixing background effects with real communication data to enhance personal communications |
US20080044035A1 (en) * | 2006-08-14 | 2008-02-21 | Kapil Agrawal | Mixing background effects with real communication data to enhance personal communications |
US8203609B2 (en) | 2007-01-31 | 2012-06-19 | The Invention Science Fund I, Llc | Anonymization pursuant to a broadcasted policy |
US8126190B2 (en) | 2007-01-31 | 2012-02-28 | The Invention Science Fund I, Llc | Targeted obstrufication of an image |
US20080180539A1 (en) * | 2007-01-31 | 2008-07-31 | Searete Llc, A Limited Liability Corporation | Image anonymization |
US20080181533A1 (en) * | 2007-01-31 | 2008-07-31 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Targeted obstrufication of an image |
US20080180459A1 (en) * | 2007-01-31 | 2008-07-31 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Anonymization pursuant to a broadcasted policy |
US20080244755A1 (en) * | 2007-03-30 | 2008-10-02 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Authorization for media content alteration |
US20080246777A1 (en) * | 2007-04-03 | 2008-10-09 | Richard Lee Swanson | Method and apparatus for background replacement in still photographs |
US20110134141A1 (en) * | 2007-04-03 | 2011-06-09 | Lifetouch Inc. | Method and apparatus for background replacement in still photographs |
US8134576B2 (en) | 2007-04-03 | 2012-03-13 | Lifetouch Inc. | Method and apparatus for background replacement in still photographs |
US8319797B2 (en) | 2007-04-03 | 2012-11-27 | Lifetouch Inc. | Method and apparatus for background replacement in still photographs |
US7834894B2 (en) | 2007-04-03 | 2010-11-16 | Lifetouch Inc. | Method and apparatus for background replacement in still photographs |
US20080270161A1 (en) * | 2007-04-26 | 2008-10-30 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Authorization rights for substitute media content |
WO2008134065A2 (en) * | 2007-04-26 | 2008-11-06 | Searete Llc | Implementing group content substitution in media works |
WO2008134065A3 (en) * | 2007-04-26 | 2008-12-18 | Searete Llc | Implementing group content substitution in media works |
US9215512B2 (en) | 2007-04-27 | 2015-12-15 | Invention Science Fund I, Llc | Implementation of media content alteration |
US20090046106A1 (en) * | 2007-08-14 | 2009-02-19 | Samsung Techwin Co., Ltd. | Method of displaying images and display apparatus applying the same |
US10484660B2 (en) * | 2011-03-25 | 2019-11-19 | Semiconductor Energy Laboratory Co., Ltd. | Image processing method and display device |
US20180309973A1 (en) * | 2011-03-25 | 2018-10-25 | Semiconductor Energy Laboratory Co., Ltd. | Image processing method and display device |
US9153031B2 (en) | 2011-06-22 | 2015-10-06 | Microsoft Technology Licensing, Llc | Modifying video regions using mobile device input |
US20140002866A1 (en) * | 2012-06-28 | 2014-01-02 | Xerox Corporation | Method and apparatus for object assisted image editing and transmission of scanned documents |
US8824031B2 (en) * | 2012-06-28 | 2014-09-02 | Xerox Corporation | Method and apparatus for object assisted image editing and transmission of scanned documents |
US9842420B2 (en) * | 2012-11-12 | 2017-12-12 | Sony Corporation | Image processing device and method for creating a reproduction effect by separating an image into a foreground image and a background image |
US20170200299A1 (en) * | 2012-11-12 | 2017-07-13 | Sony Corporation | Image processing device, image processing method and program |
US10789685B2 (en) | 2012-12-20 | 2020-09-29 | Microsoft Technology Licensing, Llc | Privacy image generation |
US9064295B2 (en) * | 2013-02-04 | 2015-06-23 | Sony Corporation | Enhanced video encoding using depth information |
US20180254064A1 (en) * | 2017-03-02 | 2018-09-06 | Ricoh Company, Ltd. | Decomposition of a Video Stream into Salient Fragments |
US10943122B2 (en) | 2017-03-02 | 2021-03-09 | Ricoh Company, Ltd. | Focalized behavioral measurements in a video stream |
US10719552B2 (en) | 2017-03-02 | 2020-07-21 | Ricoh Co., Ltd. | Focalized summarizations of a video stream |
US11398253B2 (en) | 2017-03-02 | 2022-07-26 | Ricoh Company, Ltd. | Decomposition of a video stream into salient fragments |
US10713391B2 (en) | 2017-03-02 | 2020-07-14 | Ricoh Co., Ltd. | Tamper protection and video source identification for video processing pipeline |
US10708635B2 (en) | 2017-03-02 | 2020-07-07 | Ricoh Company, Ltd. | Subsumption architecture for processing fragments of a video stream |
US10929685B2 (en) | 2017-03-02 | 2021-02-23 | Ricoh Company, Ltd. | Analysis of operator behavior focalized on machine events |
US10956494B2 (en) | 2017-03-02 | 2021-03-23 | Ricoh Company, Ltd. | Behavioral measurements in a video stream focalized on keywords |
US10929707B2 (en) | 2017-03-02 | 2021-02-23 | Ricoh Company, Ltd. | Computation of audience metrics focalized on displayed content |
US10720182B2 (en) * | 2017-03-02 | 2020-07-21 | Ricoh Company, Ltd. | Decomposition of a video stream into salient fragments |
US10949705B2 (en) | 2017-03-02 | 2021-03-16 | Ricoh Company, Ltd. | Focalized behavioral measurements in a video stream |
US10949463B2 (en) | 2017-03-02 | 2021-03-16 | Ricoh Company, Ltd. | Behavioral measurements in a video stream focalized on keywords |
US10956495B2 (en) | 2017-03-02 | 2021-03-23 | Ricoh Company, Ltd. | Analysis of operator behavior focalized on machine events |
US10956773B2 (en) | 2017-03-02 | 2021-03-23 | Ricoh Company, Ltd. | Computation of audience metrics focalized on displayed content |
US10728499B2 (en) | 2017-11-02 | 2020-07-28 | Hyperconnect Inc. | Electronic apparatus and communication method thereof |
US20220122344A1 (en) * | 2019-01-09 | 2022-04-21 | Samsung Electronics Co., Ltd | Image optimization method and system based on artificial intelligence |
US11830235B2 (en) * | 2019-01-09 | 2023-11-28 | Samsung Electronics Co., Ltd | Image optimization method and system based on artificial intelligence |
US20220201245A1 (en) * | 2020-07-31 | 2022-06-23 | Zoom Video Communications, Inc. | Methods and system for transmitting content during a networked conference |
US11818501B2 (en) * | 2020-07-31 | 2023-11-14 | Zoom Video Communications, Inc. | Transmitting content during a networked conference |
CN112399128A (zh) * | 2020-11-16 | 2021-02-23 | 明磊 | Big-data-based video communication system and method |
Also Published As
Publication number | Publication date |
---|---|
KR20030026528A (ko) | 2003-04-03 |
CN100370829C (zh) | 2008-02-20 |
CN1411277A (zh) | 2003-04-16 |
EP1298933A3 (de) | 2006-10-04 |
US8798168B2 (en) | 2014-08-05 |
EP1298933A2 (de) | 2003-04-02 |
US20070009028A1 (en) | 2007-01-11 |
KR100516638B1 (ko) | 2005-09-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8798168B2 (en) | Video telecommunication system for synthesizing a separated object with a new background picture | |
US6961446B2 (en) | Method and device for media editing | |
KR101167432B1 (ko) | Communication method and communication system | |
US7760156B2 (en) | Transmitting apparatus, transmitting method, receiving apparatus, receiving method, transmitting and receiving apparatus, transmitting and receiving method, record medium, and signal | |
US7203356B2 (en) | Subject segmentation and tracking using 3D sensing technology for video compression in multimedia applications | |
US11057646B2 (en) | Image processor and image processing method | |
US20080235724A1 (en) | Face Annotation In Streaming Video | |
KR100669837B1 (ko) | Method for extracting foreground information for stereoscopic video coding | |
JP2003087785A (ja) | Method and apparatus for format conversion of moving picture encoded data | |
CN1167276C (zh) | Method and apparatus for generating lip movement parameters in a 3D model-based coding system | |
JP2001507541A (ja) | Sprite-based video coding system | |
JP2001188910A (ja) | Method for extracting image contours, method for extracting objects from an image, and image transmission system using the object extraction method | |
CN114419702B (zh) | Digital human generation model, method for training the model, and digital human generation method | |
CN112954398B (zh) | Encoding method, decoding method, apparatus, storage medium, and electronic device | |
US20020164068A1 (en) | Model switching in a communication system | |
JP2002230574A (ja) | Image generation method, apparatus, and system | |
JP2002051315A (ja) | Data transmission method and apparatus, and data transmission system | |
KR100464079B1 (ko) | Face detection and tracking system for video communication | |
JP3859989B2 (ja) | Image matching method, and image processing method and apparatus capable of using the method | |
JP2004537931A (ja) | Method and apparatus for encoding a scene | |
KR100460221B1 (ko) | Video communication system | |
JP2003061098A (ja) | Image processing apparatus, image processing method, recording medium, and program | |
JPH10243389A (ja) | Moving-picture quick-view image creation apparatus, moving-picture quick-view image creation method, and moving-picture data retrieval system | |
JPH0767107A (ja) | Image encoding apparatus | |
Sarris et al. | MPEG-4 Facial Animation and its Application to a Videophone System for the Deaf |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: LG ELECTRONICS INC., KOREA, REPUBLIC OF Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, JIN SOO;LEE, JI EUN;REEL/FRAME:013326/0302 Effective date: 20020814 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |