WO2008116400A1 - Terminal, method and system for realizing video communication - Google Patents

Terminal, method and system for realizing video communication

Info

Publication number
WO2008116400A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
image
video image
unit
quality parameter
Prior art date
Application number
PCT/CN2008/070237
Other languages
English (en)
Chinese (zh)
Inventor
Jing Lv
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2008116400A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/142Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N2007/145Handheld terminals

Definitions

  • Terminal, method and system for realizing video communication
  • The present invention relates to the field of computer graphics technology, and more particularly to a terminal, method, and system for implementing video communication.
  • Background of the Invention
  • The above digital image processing techniques include image segmentation, image description, and image recognition.
  • Image segmentation is the extraction of meaningful features from an image.
  • The meaningful features include edges, regions, and the like in the image, and they form the basis for further image recognition, analysis, and understanding.
  • Image description is a necessary prerequisite for image recognition and understanding.
  • The general image description method uses two-dimensional shape description, which has two kinds of methods: boundary description and region description.
  • Image classification and recognition belong to the category of pattern recognition. Their main content is to perform image segmentation and feature extraction after some preprocessing (enhancement, restoration, compression), or to carry out matching and classification judgments based on certain a priori features.
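  • For illustration only (not part of the patent), the edge-extraction step of image segmentation described above can be sketched as follows in Python; OpenCV, the synthetic frame, and the thresholds are assumptions of the sketch:

      import cv2
      import numpy as np

      # Toy frame: a bright rectangular "object" on a dark background.
      frame = np.zeros((240, 320), dtype=np.uint8)
      cv2.rectangle(frame, (80, 60), (240, 180), 200, -1)

      # Edge features are one kind of "meaningful feature" used for segmentation.
      edges = cv2.Canny(frame, 100, 200)

      # Connected regions derived from the edges can then serve as the basis
      # for further image description and recognition.
      num_labels, labels = cv2.connectedComponents(255 - edges)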
  • The transmitting terminal picks up a video image through a camera, encodes the captured video image into video data, and transmits the encoded video data to a receiving terminal; the receiving terminal decodes and plays the received encoded data.
  • The embodiments of the present invention provide a terminal, a method, and a system for implementing video communication, which solve the problem that the prior art cannot apply different resolution processing to different areas of the video.
  • An image analyzing unit configured to divide the video image in the video sequence into at least two parts
  • An image processing unit configured to respectively encode each of the at least two portions of the video image
  • a data transmission unit configured to output encoded data of the video image.
  • Each of the at least two portions of the video image is encoded separately; the encoded data of the video image is output.
  • a transmitting terminal configured to divide a video image in the video sequence into at least two parts; respectively encode each of the at least two portions of the video image, and output encoded data of the video image;
  • In view of the characteristics of existing video communication, the terminal, method, and system for implementing video communication provided by the embodiments of the present invention divide the video image into different regions and apply different resolution processing to the different regions. This satisfies individualized requirements in the video communication process, such as privacy protection, and solves the problem that the prior art cannot apply different resolution processing to different areas of the video.
  • FIG. 1 is a schematic structural diagram of a terminal for implementing video communication according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of a method for implementing video communication according to an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of a terminal for implementing video communication according to Embodiment 1 of the present invention
  • FIG. 4 is a schematic flowchart of a method for implementing video communication according to Embodiment 1 of the present invention
  • FIG. 5 is a schematic structural diagram of an image analyzing unit according to Embodiment 2 of the present invention
  • FIG. 6 is a schematic flowchart of a method for implementing video communication according to Embodiment 2 of the present invention
  • FIG. 7 is a schematic structural diagram of an image analyzing unit according to Embodiment 3 of the present invention.
  • FIG. 8 is a schematic flowchart of a method for implementing video communication according to Embodiment 3 of the present invention
  • FIG. 9 is a schematic structural diagram of an image processing unit according to Embodiment 4 of the present invention.
  • FIG. 10 is a schematic flowchart of a method for implementing video communication according to Embodiment 4 of the present invention
  • FIG. 11 is a schematic structural diagram of an image processing unit according to Embodiment 5 of the present invention
  • FIG. 12 is a flowchart of a method for implementing video communication according to Embodiment 5 of the present invention
  • FIG. 13 is a schematic structural diagram of a terminal for implementing video communication according to an embodiment of the present invention
  • FIG. 14 is a schematic flowchart of a method for implementing video communication according to an embodiment of the present invention;
  • FIG. 15 is a schematic structural diagram of a system for implementing video communication according to an embodiment of the present invention;
  • FIG. 16 is a schematic flowchart of a method of a system for implementing video communication according to an embodiment of the present invention.
  • Mode for Carrying Out the Invention
  • After acquiring the video sequence, the terminal, method, and system for implementing video communication provided by the embodiments of the present invention divide the video image in the video sequence into at least two parts; each of the at least two parts of the video image is encoded separately, and the encoded video image data is output.
  • FIG. 1 is a schematic structural diagram of a terminal for implementing video communication according to an embodiment of the present invention.
  • the terminal includes: an image analyzing unit 102, an image processing unit 103, and a data transfer unit 104.
  • the image analyzing unit 102 is configured to divide the video image in the video sequence into at least two parts.
  • the image processing unit 103 is for encoding each of the at least two portions of the video image, respectively.
  • the data transmission unit 104 is for outputting encoded data of the above video image.
  • FIG. 2 is a schematic flowchart of a method for implementing video communication in an embodiment of the present invention. As shown in Figure 2, the method includes:
  • Step 202 Divide the video image in the video sequence into at least two parts.
  • Step 203 Encode each of the at least two parts of the video image separately.
  • Step 204 Output encoded data of the video image.
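  • For illustration only, steps 202 to 204 and the three units of FIG. 1 can be sketched as the following Python structure; the class and method names are illustrative assumptions, not an API defined by the patent:

      class VideoCommunicationTerminal:
          """Structural sketch of the terminal in FIG. 1."""

          def __init__(self, image_analyzer, image_processor, data_transmitter):
              self.image_analyzer = image_analyzer      # divides the image (step 202)
              self.image_processor = image_processor    # encodes each part (step 203)
              self.data_transmitter = data_transmitter  # outputs encoded data (step 204)

          def handle_video_image(self, image):
              parts = self.image_analyzer.split(image)                    # at least two parts
              encoded = [self.image_processor.encode(p) for p in parts]   # encoded separately
              self.data_transmitter.send(encoded)                         # output encoded data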
  • FIG. 3 is a schematic structural diagram of a terminal for implementing video communication according to Embodiment 1 of the present invention.
  • the terminal includes: an image analysis unit 302, an image processing unit 303, and a data transmission unit 304.
  • the terminal further includes: a video collection unit 301.
  • the video collection unit 301 is configured to collect a video sequence, and send the video sequence to the image analysis unit 302.
  • The image analysis unit 302 is configured to divide the video image in the video sequence into at least two parts.
  • the image processing unit 303 is for encoding each of the at least two portions of the video image, respectively.
  • the data transmission unit 304 is for outputting encoded data of the above video image.
  • FIG. 4 is a schematic flowchart of a method for implementing video communication according to Embodiment 1 of the present invention. As shown in Figure 4, the method includes:
  • Step 401 Collect a video sequence.
  • Step 402 Analyze a video image in the video sequence, and divide the video image into at least two parts.
  • Step 403 Encode each of the at least two parts of the video image separately.
  • Step 404 Output encoded data of the video image.
  • the terminal for implementing video communication in the second embodiment of the present invention includes: a video capture unit 301, an image analysis unit 302, an image processing unit 303, and a data transmission unit 304.
  • the video collection unit 301 is configured to collect a video sequence.
  • the video capture unit 301 can include a video capture device (eg, a camera connected to a computer, etc.) and corresponding video image processing software.
  • the image analyzing unit 302 is configured to acquire a video image from the video sequence collected by the video capturing unit 301, analyze and identify the video image, and divide the video image into a first portion and a second portion.
  • FIG. 5 is a schematic structural diagram of an image analyzing unit according to Embodiment 2 of the present invention.
  • The image analyzing unit 302 includes an image recognition unit 502, configured to analyze a video image in the video sequence and extract a foreground portion and a background portion from the video image, where, for example, the foreground portion is the first part and the background portion is the second part.
  • The image recognition unit 502 can obtain the foreground portion and the background portion of the video image by comparing successive image frames (usually bitmaps) in the video sequence and analyzing the pixel changes between them.
  • The image analyzing unit 302 compares the current image frame with the preceding 3 to 5 image frames in the video sequence. If the pixel at a given position is unchanged across these frames, that position is a stationary point; if the pixels at the same position differ, that position is a moving point. The image analyzing unit 302 takes the moving points, together with the stationary points surrounded by them, as the foreground portion; the remaining part is the background portion.
  • Image analysis unit 302 can also use any of the existing foreground and background recognition methods to obtain the foreground portion and the background portion of the video image.
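  • For illustration only, the frame-comparison idea above can be sketched as follows; the difference threshold, the use of grayscale frames, and the morphological closing used to absorb enclosed stationary points are assumptions of the sketch, not requirements of the patent:

      import cv2
      import numpy as np

      def foreground_mask(current, previous_frames, threshold=10):
          """current: grayscale frame (uint8); previous_frames: the preceding
          3-5 grayscale frames. Returns a boolean mask of the foreground."""
          moving = np.zeros(current.shape, dtype=np.uint8)
          for prev in previous_frames[-5:]:
              diff = cv2.absdiff(current, prev)  # pixel change between frames
              moving = cv2.bitwise_or(moving, (diff > threshold).astype(np.uint8))
          # Stationary points surrounded by moving points are merged into the
          # foreground by closing small holes; everything else is background.
          kernel = np.ones((9, 9), np.uint8)
          closed = cv2.morphologyEx(moving * 255, cv2.MORPH_CLOSE, kernel)
          return closed > 0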
  • the image processing unit 303 is for encoding the first portion and the second portion of the video image transmitted by the image analyzing unit 302.
  • the data transmission unit 304 is for outputting the encoded data of the video image transmitted by the image processing unit 303.
  • FIG. 6 is a schematic flowchart diagram of a method for implementing video communication according to Embodiment 2 of the present invention. As shown in FIG. 6, the method includes: Step 601: The terminal collects a video sequence by using a camera or the like.
  • Step 602 The video image in the video sequence is divided into a first part and a second part by a foreground and a background recognition manner.
  • the foreground portion and the background portion are extracted from the above video image.
  • the first portion is the foreground portion of the video image and the second portion is the background portion of the video image.
  • The foreground and background portions of the video image can be obtained by comparing successive image frames (usually bitmaps) in the video sequence and analyzing the pixel changes between them.
  • any existing foreground and background recognition methods can be used to obtain the foreground and background portions of the video image.
  • Step 603 Encode the first part and the second part of the video image.
  • Step 604 Output the above encoded data of the video image.
  • The terminal and method for realizing video communication in this embodiment segment the video image by foreground and background recognition, which satisfies the individualized requirements in the video communication process and solves the problem that the prior art cannot apply different resolution processing to different areas of the video.
  • the terminal for implementing video communication in Embodiment 3 of the present invention includes: a video capture unit 301, an image analysis unit 302, an image processing unit 303, and a data transmission unit 304.
  • the video collection unit 301 is the same as the first embodiment.
  • the image analyzing unit 302 is configured to obtain a video image from the video sequence acquired by the video capturing unit 301, and divide the video image into the first portion and the second portion by manual selection.
  • FIG. 7 is a schematic structural diagram of an image analyzing unit according to Embodiment 3 of the present invention.
  • the image analyzing unit 302 includes an image selecting unit 702.
  • The image selection unit 702 is configured to select a partial region from the video image of the video sequence, for example by framing a rectangular, circular, or irregularly shaped region with the mouse. The selected region is then taken, for example, as the first part, and the unselected region as the second part.
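  • For illustration only, a mouse-framed rectangular selection can be turned into the first and second parts as follows; the rectangle is just one example shape, and the function name is an assumption of the sketch:

      import numpy as np

      def selection_mask(image_shape, rect):
          """rect = (x, y, width, height) framed by the user with the mouse.
          Returns a boolean mask: True inside the selection (first part),
          False outside it (second part)."""
          height, width = image_shape[:2]
          x, y, w, h = rect
          mask = np.zeros((height, width), dtype=bool)
          mask[y:y + h, x:x + w] = True
          return mask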
  • the image processing unit 303 is for encoding the first portion and the second portion of the video image transmitted by the image analyzing unit 302.
  • the data transmission unit 304 is for outputting the encoded data of the video image transmitted by the image processing unit 303.
  • FIG. 8 is a schematic flowchart of a method for implementing video communication according to Embodiment 3 of the present invention. As shown in Figure 8, the method includes:
  • Step 801 The terminal collects a video sequence by using a camera or the like.
  • Step 802 The area selected in the video image is taken as the first part, and the unselected area is taken as the second part. That is, a partial region is selected from the video images of the above video sequence, and the selected region can be taken as the first portion and the unselected region as the second portion.
  • Step 803 Encode the first part and the second part of the video image.
  • Step 804 Output the above encoded data of the video image.
  • By segmenting the video image through manual selection, the terminal and method for realizing video communication in this embodiment satisfy the individualized requirements in the video communication process and solve the problem existing in the prior art.
  • the terminal for implementing video communication in Embodiment 4 of the present invention includes: a video collection unit 301, an image analysis unit 302, an image processing unit 303, and a data transmission unit 304.
  • the video collection unit 301 is the same as the first embodiment.
  • the image analyzing unit 302 is configured to obtain a video image from the video sequence acquired by the video capturing unit 301, and divide the video image into a first portion and a second portion.
  • the image processing unit 303 is for encoding the first portion and the second portion of the video image transmitted by the image analyzing unit 302.
  • FIG. 9 is a schematic structural diagram of an image processing unit according to Embodiment 4 of the present invention.
  • the image processing unit 303 includes an image encoding unit 903.
  • The image encoding unit 903 is configured to encode the video image divided by the image analyzing unit 302: the first portion of the video image is encoded using a first quality parameter, and the second portion of the video image is encoded using a second quality parameter.
  • the first quality parameter and the second quality parameter are used to control the compression ratio at the time of encoding.
  • The quality parameter strategy can use default values, or the user can set the parameters within a certain range, that is, the user controls the clarity of the foreground and the background separately. Therefore, the image encoding unit 903 may further include a parameter setting unit (not shown) for setting the first quality parameter and the second quality parameter.
  • the image coding unit 903 performs video coding of different quality levels on the foreground and the background according to different quality parameters.
  • the corresponding code stream of each area carries the quality level parameter corresponding to the area.
  • If the code stream corresponding to an area does not carry the quality level information of that area, it is only necessary to restore the image according to the corresponding processing parameters, following the general processing flow.
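  • For illustration only, the following sketch encodes the two parts with different quality parameters and carries each area's quality parameter in a simple length-prefixed code stream; JPEG as the stand-in codec, the container layout, and the mapping of the quality parameter to JPEG quality are all assumptions, not mandated by the patent:

      import struct
      import cv2
      import numpy as np

      def encode_regions(image, fg_mask, qp_foreground=10, qp_background=60):
          """image: BGR frame; fg_mask: boolean mask of the first (foreground) part.
          In the convention described above, a larger quality parameter means a
          higher compression ratio, so it is mapped here to JPEG quality 100 - qp."""
          stream = b""
          for mask, qp in ((fg_mask, qp_foreground), (~fg_mask, qp_background)):
              region = np.where(mask[..., None], image, 0)
              ok, buf = cv2.imencode(".jpg", region,
                                     [cv2.IMWRITE_JPEG_QUALITY, max(1, 100 - qp)])
              payload = buf.tobytes()
              # Each area's code stream carries its own quality level parameter.
              stream += struct.pack(">BI", qp, len(payload)) + payload
          return stream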
  • FIG. 10 is a schematic flowchart diagram of a method for implementing video communication according to Embodiment 4 of the present invention. As shown in FIG. 10, the method includes:
  • Step 1001 The terminal collects a video sequence by using a camera or the like.
  • Step 1002 The video image in the video sequence is divided into a first part and a second part.
  • Step 1003 Encode the first portion of the video image using the first quality parameter and encode the second portion of the video image using the second quality parameter.
  • the quality parameter strategy can be processed with default values or by the user within a certain range. Therefore, preferably, the step further includes: setting the first quality parameter and the second quality parameter.
  • Step 1004 Output the above encoded data of the video image.
  • In view of the characteristics of existing video communication, the terminal and method for realizing video communication in this embodiment identify and segment the video sequence collected by, for example, a camera, and encode different parts at different quality levels, thereby satisfying the personalized requirements of video communication users and solving the problem that the prior art cannot apply different resolution processing to different areas of the video.
  • the terminal for implementing video communication in Embodiment 5 of the present invention includes: a video capture unit 301, an image analysis unit 302, an image processing unit 303, and a data transmission unit 304.
  • the video collection unit 301 is the same as the first embodiment.
  • the image analyzing unit 302 is configured to obtain a video image from the video sequence acquired by the video capturing unit 301, and divide the video image into a first portion and a second portion.
  • the image processing unit 303 is for encoding the first portion and the second portion of the video image transmitted by the image analyzing unit 302.
  • FIG. 11 is a schematic structural diagram of an image processing unit according to Embodiment 5 of the present invention.
  • the image processing unit 303 includes: a pre-processing unit 11031, and an image encoding unit 11032.
  • the pre-processing unit 11031 is configured to perform fuzzy pre-processing on the second part of the video image provided by the image analyzing unit 302. That is, after reading the video image, the pre-processing unit 11031 performs blurring processing on the second portion before encoding, and sends the processed image to the image encoding unit 11032 for encoding at a uniform quality level.
  • The image encoding unit 11032 is configured to encode, at a uniform quality level (the same quality parameter), the first portion of the video image supplied by the image analyzing unit 302 and the second portion that has been blurred by the pre-processing unit 11031.
  • The quality parameter is carried in the encoded data, so the video image can be decoded according to the general operation procedure.
  • the data transmission unit 304 is for outputting the encoded data of the video image transmitted by the image processing unit 303.
  • FIG. 12 is a schematic flowchart of a method for implementing video communication according to Embodiment 5 of the present invention. As shown in Figure 12, the method includes:
  • Step 1201 The terminal collects a video sequence by using a camera or the like.
  • Step 1202 Divide the video image in the video sequence into a first part and a second part.
  • Step 1203 Perform blur preprocessing on the second part of the video image, and encode the first part of the video image and the second part of the video image after the blur preprocessing using the same quality parameter.
  • Blur preprocessing is prior art, similar to the common "mosaic" technique: the part that needs to be processed is handled by a pre-selected algorithm. The simplest approach is to replace each pixel in that part with a weighted mean of the pixel and its surrounding pixels, thereby achieving the blurring effect.
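  • For illustration only, the weighted-mean blur described above, applied only to the second (background) part, can be sketched as follows; the Gaussian kernel and its size are illustrative choices, not requirements of the patent:

      import cv2
      import numpy as np

      def blur_background(image, fg_mask, ksize=15):
          """Blur preprocessing: replace each background pixel with a weighted
          mean of itself and its neighbours; the foreground is left untouched."""
          blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)
          return np.where(fg_mask[..., None], image, blurred)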
  • the present invention is described by way of example only, and is not intended to limit the scope of the present invention.
  • the same quality parameter is used to encode the first part of the video image and the second part of the video image after blur preprocessing.
  • Step 1204 Output the above encoded data of the video image.
  • In view of the characteristics of existing video communication, the terminal and method for implementing video communication in this embodiment identify and segment the video sequence collected by, for example, a camera, perform blur preprocessing on some parts, and then encode the video sequence at the same quality level, thereby satisfying the individualized requirements of video communication users and solving the problem that the prior art cannot apply different resolution processing to different areas of the video.
  • IM video communication adopts the traditional video coding mode: the same quality parameters are used for a single frame (a single image), or even for the entire video sequence, so the whole picture is presented at the same definition, without treating different regions differently. As a result, when users are unwilling to expose their surroundings to each other but still want to use the video function, there is no suitable solution.
  • FIG. 13 is a schematic structural diagram of a terminal for implementing video communication according to an embodiment of the present invention.
  • the terminal includes: a video collection unit 1301, an image analysis unit 1302, an image processing unit 1303, and a data transmission unit 1304.
  • the video collection unit 1301 is configured to collect a video sequence.
  • Video capture unit 1301 may include a video capture device (eg, a camera coupled to a computer, etc.) and corresponding video image processing software.
  • the image analyzing unit 1302 is for dividing the video image in the video sequence into a foreground portion and a background portion, wherein the foreground portion is the first portion and the background portion is the second portion.
  • The division of the video image in the video sequence into the foreground portion and the background portion can be implemented in two ways: automatically according to settings, or through user interaction during the video session.
  • This scheme provides the option "Automatic Blur Background" in the video settings.
  • the basic idea is to highlight only the foreground information (persons) in the user's video process, and to blur the background information.
  • the software automatically divides the foreground and background, and handles the foreground and background differently.
  • This scheme provides the option "Manual Blur Background" in the video settings.
  • The basic idea is to accept real-time user input during the interaction and to adjust the foreground and background regions according to that input.
  • The image analyzing unit 1302 detects whether the user has selected the "Automatic Blur Background" setting or the "Manual Blur Background" setting. If the user checks neither option, the video image is encoded at the same quality level according to the general operation flow, and the quality parameters are transmitted along with the code stream.
  • In the automatic mode, the image analysis unit 1302 can obtain the foreground portion and the background portion of the video image by comparing successive image frames (usually bitmaps) in the video sequence and analyzing the pixel changes between them.
  • The image analyzing unit 1302 compares the current image frame with the preceding 3 to 5 image frames in the video sequence. If the pixel at a given position is unchanged across these frames, that position is a stationary point; if the pixels at the same position differ, that position is a moving point. The image analyzing unit 1302 takes the moving points, together with the stationary points surrounded by them, as the foreground portion; the remaining part is the background portion.
  • image analysis unit 1302 can also use any existing foreground and background recognition methods to obtain foreground and background portions of the video image.
  • In the manual mode, the user can select the background area by using the selection tool provided by the image analysis unit 1302 (framing a rectangle, a circle, or a custom shape); the rest of the video image is the foreground portion.
  • the image analysis unit 1302 may further provide a reset option for canceling the background area selected by the user and restoring the default video output.
  • the image processing unit 1303 is for encoding the video image divided by the image analyzing unit 1302.
  • the encoding method is either of the following two:
  • the image processing unit 1303 can determine different quality parameters for the foreground and the background according to a pre-defined strategy, and perform different processing on different regions. That is, the first portion of the video image is encoded using the first quality parameter and the second portion of the video image is encoded using the second quality parameter.
  • the first quality parameter and the second quality parameter are used to control the compression ratio at the time of encoding. The larger the quality parameter value is, the smaller the data amount is, and the corresponding image quality is degraded.
  • When the first quality parameter value equals the second quality parameter value, the sharpness of the decoded video sequence is uniform; when the first quality parameter value is lower than the second quality parameter value, the sharpness of the first part in the decoded video sequence is higher than that of the second part; when the first quality parameter value is higher than the second quality parameter value, the sharpness of the first part in the decoded video sequence is lower than that of the second part.
  • Generally, the first quality parameter is lower than the second quality parameter, so that the background portion (i.e., the portion corresponding to the second quality parameter) has lower sharpness, hiding the surrounding environment during the video chat.
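  • For illustration only, the stated relationship (a larger quality parameter value gives a higher compression ratio, a smaller data amount, and lower sharpness) can be demonstrated with a stand-in codec; JPEG and the 100 - qp mapping are assumptions of the sketch:

      import cv2
      import numpy as np

      # Simple synthetic test image (a horizontal gradient).
      image = np.tile(np.arange(256, dtype=np.uint8), (240, 1))
      for qp in (10, 40, 80):
          ok, buf = cv2.imencode(".jpg", image, [cv2.IMWRITE_JPEG_QUALITY, 100 - qp])
          print(f"quality parameter {qp}: {len(buf)} bytes")  # larger qp -> fewer bytes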
  • The above quality parameter strategy can use default values.
  • Alternatively, the user can select the parameters within a certain range, that is, the user can control the clarity of the foreground and the background separately; in that case, the above system may further include a parameter setting module for setting the values of the first quality parameter and the second quality parameter.
  • the image processing unit 1303 may also pre-process the background area after reading the video image, that is, perform blur processing before encoding, and perform uniform quality level encoding on the processed background image and the unprocessed foreground image.
  • any other existing encoding methods may be used to separately encode the foreground portion and the background portion of the video image.
  • If the code stream corresponding to an area does not carry the quality level information of that area, it is only necessary to restore the image according to the corresponding parameters of each area, following the general processing flow.
  • When decoding, the receiving end can record whether the quality parameters carried by different areas differ, and appropriately perform transition processing at the boundary between the foreground area and the background area.
  • This transition processing smooths the boundary and avoids the visually jarring effect of the boundary of the blurred area being too obvious.
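  • For illustration only, the transition processing at the foreground/background boundary can be sketched as a feathered alpha blend of the two decoded regions; the feather width and the Gaussian feathering are assumptions of the sketch:

      import cv2
      import numpy as np

      def blend_with_transition(fg_image, bg_image, fg_mask, feather=21):
          """fg_image / bg_image: the two decoded regions as full-size BGR frames;
          fg_mask: boolean foreground mask. Feathering the mask smooths the
          boundary so that the edge of the blurred area is not too obvious."""
          alpha = cv2.GaussianBlur(fg_mask.astype(np.float32), (feather, feather), 0)
          alpha = alpha[..., None]  # broadcast over colour channels
          return (alpha * fg_image + (1.0 - alpha) * bg_image).astype(np.uint8)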
  • the data transmission unit 1304 is for outputting the encoded data of the video image transmitted by the image processing unit 1303.
  • FIG. 14 is a schematic flowchart of a method for implementing video communication according to an embodiment of the present invention. As shown in FIG. 14, the method is used to implement video sequence transmission between terminals and specifically includes the following steps:
  • Step 1401 The terminal collects a video sequence through a camera or the like.
  • Step 1402 The video image in the video sequence acquired in step 1401 is divided, by the automatic identification mode or the manual selection mode, into a foreground portion and a background portion, where the foreground portion is the first portion and the background portion is the second portion.
  • Step 1403 Encode the foreground portion and the background portion of the video image separately.
  • the encoding method can be either of the following two:
  • this step may further include a parameter setting step for setting specific values of the first quality parameter and the second quality parameter.
  • Alternatively, pre-process the background area, that is, perform blurring processing before encoding, and then perform uniform quality level encoding on the processed background image and the unprocessed foreground image.
  • Step 1404 Send the encoded data of the video image to the receiving end for decoding and playback.
  • When the first quality parameter value equals the second quality parameter value and neither part is blur-preprocessed, the sharpness of the decoded video sequence is uniform; when the first quality parameter value is lower than the second quality parameter value, or the second part (background part) is blur-preprocessed, the sharpness of the first part in the decoded video sequence is higher than that of the second part; when the first quality parameter value is higher than the second quality parameter value, or the first part (foreground part) is blur-preprocessed, the sharpness of the first part in the decoded video sequence is lower than that of the second part.
  • Preferably, a boundary equalization step may further be included to perform transition processing on the foreground part and the background part of the video image when the video image is output. In this way, when the video sequence is decoded, the boundary between the first part and the second part is smoothed, preventing the boundary of the blurred area from being too obvious.
  • In view of the characteristics of existing video communication, the terminal and method for realizing video communication in this embodiment segment the video image collected by the camera through automatic identification or manual selection and encode the different parts separately, thereby satisfying the personalized needs of video communication users and solving the problem that the prior art cannot apply different resolution processing to different areas of the video.
  • the present invention also provides a system for implementing video communication.
  • the system will be described in detail by way of examples.
  • FIG. 15 is a schematic structural diagram of a system for implementing video communication according to an embodiment of the present invention. As can be seen from Fig. 15, the system includes: a transmitting terminal 1501 and a receiving terminal 1502.
  • the transmitting terminal 1501 is configured to divide the video image in the video sequence into at least two parts; separately encode each of the at least two parts of the video image, and output the encoded data of the video image.
  • the sending terminal 1501 is further configured to collect a video sequence.
  • The receiving terminal 1502 is configured to receive the encoded data of the video image from the transmitting terminal 1501, and to decode and play the video sequence.
  • the video transmission described above can be bidirectional.
  • The transmitting terminal 1501 is configured to collect a video sequence by means of a camera or the like, encode the video sequence, and transmit it; the receiving terminal 1502 can receive the video sequence from the transmitting terminal 1501, for example through the network, and decode and play it.
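  • For illustration only, the transmit/receive step can be sketched with a length-prefixed message over a TCP socket; the patent does not specify a transport protocol, so this framing is purely an assumption:

      import socket
      import struct

      def send_encoded(sock, encoded):
          """Sending terminal: prefix the encoded video data with its length."""
          sock.sendall(struct.pack(">I", len(encoded)) + encoded)

      def recv_encoded(sock):
          """Receiving terminal: read one length-prefixed encoded video image,
          which is then decoded and played."""
          (length,) = struct.unpack(">I", _recv_exact(sock, 4))
          return _recv_exact(sock, length)

      def _recv_exact(sock, n):
          data = b""
          while len(data) < n:
              chunk = sock.recv(n - len(data))
              if not chunk:
                  raise ConnectionError("socket closed before the full message arrived")
              data += chunk
          return data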
  • Preferably, the receiving terminal 1502 may further include a boundary equalization unit (not shown) for performing transition processing on the at least two parts of the video image when outputting the video image. Thus, when the video sequence is decoded, the boundary between the at least two parts is smoothed, preventing the boundary of the blurred area from being too obvious.
  • FIG. 16 is a schematic flowchart diagram of a system method for implementing video communication according to an embodiment of the present invention. As shown in Figure 16, the method includes:
  • Step 1602 Divide the video image in the video sequence into at least two parts.
  • Step 1603 Encode each of the at least two portions of the video image separately.
  • Step 1604 Output encoded data of the video image.
  • Step 1605 Decode and play the above video image.
  • Preferably, before step 1602, the method further includes: Step 1601 Acquire a video sequence.
  • Preferably, in step 1605, the method further includes: performing transition processing on the at least two portions of the video image when outputting the video image. Thus, when the video sequence is decoded, the boundary between the at least two parts is smoothed, preventing the boundary of the blurred area from being too obvious.
  • When decoding, the receiving terminal needs to record whether the quality parameters carried by different areas differ, and appropriately perform transition processing at the boundary between the foreground area and the background area.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a terminal for realizing video communication, comprising: an image analysis unit adapted to divide the video images of a video sequence into at least two parts (202); an image processing unit adapted to respectively encode each of the two or more parts of the video images (203); and a data transmission unit adapted to output the encoded data of the video images (204). The present invention also relates to a method and a system for realizing video communication.
PCT/CN2008/070237 2007-03-28 2008-02-01 Terminal, procédé et système pour réaliser une communication vidéo WO2008116400A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2007100737198A CN101193261B (zh) 2007-03-28 2007-03-28 一种视频通信系统及方法
CN200710073719.8 2007-03-28

Publications (1)

Publication Number Publication Date
WO2008116400A1 true WO2008116400A1 (fr) 2008-10-02

Family

ID=39487965

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2008/070237 WO2008116400A1 (fr) 2007-03-28 2008-02-01 Terminal, procédé et système pour réaliser une communication vidéo

Country Status (2)

Country Link
CN (1) CN101193261B (fr)
WO (1) WO2008116400A1 (fr)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101998104B (zh) * 2009-08-31 2013-05-29 中国移动通信集团公司 一种视频电话及其替代视频的生成方法
CN101668157B (zh) * 2009-09-24 2011-09-21 中兴通讯股份有限公司 用于视频通话中隐私保护的方法、应用服务器及系统
JP5375490B2 (ja) * 2009-09-29 2013-12-25 ソニー株式会社 送信装置、受信装置、通信システム及びプログラム
CN102630043B (zh) * 2012-04-01 2014-11-12 北京捷成世纪科技股份有限公司 一种基于对象的视频转码方法和装置
CN103428483B (zh) * 2012-05-16 2017-10-17 华为技术有限公司 一种媒体数据处理方法及设备
CN105100671A (zh) * 2014-05-20 2015-11-25 西安中兴新软件有限责任公司 一种基于视频通话的图像处理方法和装置
CN107295360B (zh) * 2016-04-13 2020-08-18 成都鼎桥通信技术有限公司 视频传输方法及装置
CN105872448A (zh) * 2016-05-31 2016-08-17 宇龙计算机通信科技(深圳)有限公司 一种视频通话中视频图像展示方法及装置
CN106550243A (zh) * 2016-12-09 2017-03-29 武汉斗鱼网络科技有限公司 直播视频处理方法、装置及电子设备
CN106851171A (zh) * 2017-02-21 2017-06-13 福建江夏学院 视频通话中实现隐私保护系统及方法
CN107054937A (zh) * 2017-03-23 2017-08-18 广东数相智能科技有限公司 一种基于图像识别的垃圾分类提示装置和系统
CN107493440A (zh) * 2017-09-14 2017-12-19 光锐恒宇(北京)科技有限公司 一种在应用中显示图像的方法和装置
CN109862365B (zh) * 2019-01-30 2022-01-11 西安万像电子科技有限公司 图像数据处理方法及装置
CN111416939A (zh) * 2020-03-30 2020-07-14 咪咕视讯科技有限公司 一种视频处理方法、设备及计算机可读存储介质


Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2180934T3 (es) * 1996-03-28 2003-02-16 Koninkl Philips Electronics Nv Metodo y disposicion para codificar y decodificar imagenes.
US5764803A (en) * 1996-04-03 1998-06-09 Lucent Technologies Inc. Motion-adaptive modelling of scene content for very low bit rate model-assisted coding of video sequences
CN100369488C (zh) * 1998-05-22 2008-02-13 松下电器产业株式会社 数据块噪声消除装置及点时钟信号控制装置
EP1311124A1 (fr) * 2001-11-13 2003-05-14 Matsushita Electric Industrial Co., Ltd. Méthode de protection sélective pour la transmission d'images
US7167519B2 (en) * 2001-12-20 2007-01-23 Siemens Corporate Research, Inc. Real-time video object generation for smart cameras
KR100608810B1 (ko) * 2004-07-09 2006-08-08 엘지전자 주식회사 휴대단말기의 화상통신 화질 개선장치 및 방법
CN100469132C (zh) * 2004-07-28 2009-03-11 C&S技术有限公司 一种用于可视电话间保密通话的方法
CN100414997C (zh) * 2004-09-29 2008-08-27 腾讯科技(深圳)有限公司 一种视频数据压缩的量化方法
CN1816149A (zh) * 2005-02-06 2006-08-09 腾讯科技(深圳)有限公司 去除视频图像中块效应的滤波方法及环路滤波器

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1672164A (zh) * 2002-06-26 2005-09-21 摩托罗拉公司 用于限制可视信息的存储或发送的方法和设备
CN1875636A (zh) * 2003-11-04 2006-12-06 松下电器产业株式会社 视频发送装置以及视频接收装置
CN1717058A (zh) * 2004-06-29 2006-01-04 三洋电机株式会社 图像编码方法及装置、以及图像译码方法及装置

Also Published As

Publication number Publication date
CN101193261A (zh) 2008-06-04
CN101193261B (zh) 2010-07-21

Similar Documents

Publication Publication Date Title
WO2008116400A1 (fr) Terminal, procédé et système pour réaliser une communication vidéo
CN105959700B (zh) 视频图像编码的方法、装置、存储介质和终端设备
EP1680928B1 (fr) Procedes de traitement de donnees video et/ou d'image numerique, comprenant le filtrage de luminance base sur des donnees de chrominance
JP2002531020A (ja) 立体画像符号化処理におけるフォアグラウンド情報抽出方法
US6539099B1 (en) System and method for visual chat
US20030058939A1 (en) Video telecommunication system
WO2021164216A1 (fr) Procédé et appareil de codage vidéo, dispositif et support
WO2005025219A2 (fr) Procede et systeme de communications video
CN101141608A (zh) 一种视频即时通讯系统及方法
CN109640169B (zh) 视频增强控制方法、装置以及电子设备
US9619887B2 (en) Method and device for video-signal processing, transmitter, corresponding computer program product
CN110139147B (zh) 一种视频处理方法、系统、移动终端、服务器及存储介质
CN111476866B (zh) 视频优化与播放方法、系统、电子设备及存储介质
US7388966B2 (en) System and method for visual chat
Hadizadeh et al. Saliency-cognizant error concealment in loss-corrupted streaming video
CN111131852A (zh) 视频直播方法、系统及计算机可读存储介质
CN116366852A (zh) 面向机器视觉任务的视频编解码方法、装置、设备及介质
Loh et al. Quality assessment for natural and screen visual contents
JP2919236B2 (ja) 画像符号化装置
Zaghetto et al. Iterative pre-and post-processing for MRC layers of scanned documents
CN108810537B (zh) 一种图片转码方法、装置及图像处理设备
CN110784716B (zh) 媒体数据处理方法、装置及介质
Strutz Improved probability modelling for exception handling in lossless screen content coding
WO2023051705A1 (fr) Procédé et appareil de communication vidéo, dispositif électronique et support lisible par ordinateur
Watanabe et al. Traffic reduction in video call and chat using dnn-based image reconstruction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08706614

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC, EPO FORM 1205A DATED 01.02.2010

122 Ep: pct application non-entry in european phase

Ref document number: 08706614

Country of ref document: EP

Kind code of ref document: A1