GB2325582A - Encoding contour of an object in a video signal - Google Patents


Info

Publication number
GB2325582A
Authority
GB
United Kingdom
Prior art keywords
contour
bounding rectangle
information
points
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9807621A
Other versions
GB9807621D0 (en)
Inventor
Seong-Beom Kim
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WiniaDaewoo Co Ltd
Original Assignee
Daewoo Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Daewoo Electronics Co Ltd filed Critical Daewoo Electronics Co Ltd
Publication of GB9807621D0 publication Critical patent/GB9807621D0/en
Publication of GB2325582A publication Critical patent/GB2325582A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G06T9/20 Contour coding, e.g. using detection of edges
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/60 Memory management
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/20 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/144 Movement detection
    • H04N5/145 Movement estimation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Analysis (AREA)

Abstract

A method detects portions of an edge of a preset bounding rectangle surrounding a video object plane (VOP), which intersect with a contour of an object in a video signal. VOP data having contour data representing positions of contour pixels of the VOP and information relating to the preset bounding rectangle is received; and it is detected whether or not a contour in the VOP data intersects with an edge of the bounding rectangle. Intersection points A,B on the edge of the bounding rectangle where the contour intersects with the bounding rectangle are found. Position information of points located along sides of the preset bounding rectangle between the intersection points A,B is generated and an extra contour is derived based on the position information. The contour information is combined with the extra contour information to provide a closed contour 115' within the bounding rectangle.

Description

METHOD AND APPARATUS FOR ENCODING A CONTOUR OF AN OBJECT IN A VIDEO SIGNAL

The present invention relates to a contour coding system for encoding a contour of an object expressed in a video signal; and, more particularly, to a method and apparatus for detecting portions of an edge of a preset bounding rectangle surrounding a video object plane which intersect with a contour of an object within the preset bounding rectangle.
In a digital television system such as a video-telephone, high definition television or teleconference system, a large amount of digital data is needed to define each image frame signal since each line in the image frame signal comprises a sequence of digital data referred to as pixels. Since, however, the available frequency bandwidth of a conventional transmission channel is limited, transmitting the substantial amount of digital data therethrough requires compressing or reducing the volume of data through the use of various data compression techniques, especially in such low bit-rate image signal encoding systems as videotelephone and teleconference systems.
One such method for encoding image signals for a low bit-rate encoding system is the so-called object-oriented analysis-synthesis coding technique, wherein an input video image is divided into objects and three sets of parameters for defining the motion, contour and pixel data of each object are processed through different encoding channels.
One example of such an object-oriented coding scheme is the so-called MPEG (Moving Picture Experts Group) phase 4 (MPEG-4), which is designed to provide an audio-visual coding standard allowing content-based interactivity, improved coding efficiency and/or universal accessibility in such applications as low bit-rate communication, interactive multimedia (e.g., games, interactive TV, etc.) and area surveillance (see, for instance, MPEG-4 Video Verification Model Version 2.0, International Organization for Standardization, ISO/IEC JTC1/SC29/WG11 N1260, March 1996).
According to MPEG-4, an input video frame is divided into a plurality of video object planes(VOP's), which correspond to entities in a bitstream that a user can access and manipulate.
A VOP can be referred to as an object and is represented by a bounding rectangle whose width and height may be the smallest multiples of 16 pixels (a macroblock size) surrounding each object, so that the encoder can process the input video image on a VOP-by-VOP basis, i.e., an object-by-object basis.
That is, each VOP is represented by means of a bounding rectangle; and the phase difference between the luminance (Y) and chrominance (U, V) data of the bounding rectangle has to be correctly set according to a 4:2:0 format as shown in Fig. 1, wherein the luminance and the chrominance data are represented by symbols X and O, respectively. Specifically, in an absolute (frame) coordinate system as depicted in Fig. 2, the top-left coordinates of a bounding rectangle 10 should first be rounded to the nearest even numbers, e.g., (2n, 2m), not greater than the top-left coordinates, e.g., (2n+1, 2m+1), of the tightest rectangle 20 surrounding an object 30, n and m being integers. The bottom-right corner of the bounding rectangle 10 is then extended so that the width and the height of the bounding rectangle in the chrominance data are the smallest multiples of 16 pixels. Accordingly, the top-left coordinates of the bounding rectangle in the chrominance data are those of the luminance data divided by two.
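The rounding and extension rules above can be sketched as follows. This is an illustrative reading of the text, not code from the patent; the function and variable names, and the convention of working directly in luminance-plane pixel coordinates, are my own assumptions.

```python
def luma_bounding_rectangle(tight_left, tight_top, tight_right, tight_bottom):
    """Round the tight rectangle's top-left down to even coordinates,
    then extend the bottom-right so that width and height are the
    smallest multiples of 16 pixels (a macroblock size)."""
    left = tight_left - (tight_left % 2)   # nearest even number not greater
    top = tight_top - (tight_top % 2)
    width = tight_right - left
    height = tight_bottom - top
    width = ((width + 15) // 16) * 16      # pad up to a multiple of 16
    height = ((height + 15) // 16) * 16
    return left, top, left + width, top + height

# e.g. a tight rectangle with odd top-left (5, 7):
# the bounding rectangle starts at the even point (4, 6) and is 16x16.
```

The chrominance bounding rectangle then follows from the text's final sentence: its top-left coordinates are the luminance coordinates divided by two.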
A VOP disclosed in MPEG-4 includes shape information and color information consisting of luminance and chrominance data, wherein the shape information is represented by, e.g., a binary mask: one binary value, e.g., 0, is used to designate a pixel located outside the object in the VOP and the other binary value, e.g., 1, is used to indicate a pixel inside the object, as shown in Fig. 3.
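A binary shape mask of the kind just described can be written down directly; the size and values below are illustrative, not taken from Fig. 3.

```python
# A small binary shape mask: 1 marks a pixel inside the object,
# 0 marks a pixel outside the object in the VOP.
mask = [
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 1, 0, 0],
]
# Counting the 1-valued entries gives the number of object pixels.
object_pixels = sum(v for row in mask for v in row)
```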
In the VOP encoder, on the other hand, a current contour in a VOP is motion-estimated first. Specifically, an optimum contour most similar to the current contour is detected among predicted contours residing within a preset search region.
After detecting the optimum contour, a motion vector representing a displacement between the current contour and the optimum contour and index data of the optimum contour are obtained. Thereafter, a predicted current contour is provided based on the motion vector and the index data; and intercoding for the current contour and the predicted current contour is carried out together with the motion vector and the index data through the use of a conventional contour intercoding technique.
However, in the VOP encoder, it is difficult to obtain an optimum contour which most closely matches the current contour in the VOP when a portion of the current contour is located outside the bounding rectangle and intersects with an edge of a picture or a video frame. Accordingly, in such a case, portions of the edge of the picture are treated as parts of the current contour; and a new contour including those portions and the contour of the object inside the bounding rectangle is used in obtaining the optimum contour. However, no method or apparatus has heretofore been implemented to detect the portions of the edge of the picture which intersect with the current contour.
It is, therefore, a primary object of the present invention to provide a method and apparatus capable of detecting portions of an edge of a preset bounding rectangle surrounding a VOP which intersect with a contour of an object within the preset bounding rectangle.
In accordance with one aspect of the invention, there is provided a method for detecting portions of an edge of a preset bounding rectangle surrounding a video object plane (VOP), which intersect with a contour of an object in a video signal, the method comprising the steps of: (a) receiving VOP data having contour data representing positions of contour pixels of the VOP and information relating to the preset bounding rectangle, detecting whether or not a contour in the VOP data intersects with an edge of the preset bounding rectangle and, if the contour intersects with the edge, finding intersection points on the edge of the preset bounding rectangle where the contour intersects; (b) sequentially obtaining position information of points located along sides of the preset bounding rectangle between the intersection points based on the contour data, the intersection points and the bounding rectangle information, and deriving an extra contour based on the position information to generate extra contour information for the extra contour, wherein the extra contour corresponds to portions consisting of points located between the intersection points along the corresponding sides of the preset bounding rectangle; and (c) combining the contour data with the extra contour information to provide a closed contour within the preset bounding rectangle.
In accordance with another aspect of the invention, there is provided an apparatus for detecting portions of an edge of a preset bounding rectangle surrounding a video object plane (VOP), which intersect with a contour of an object in a video signal, the apparatus comprising: intersection point detection means for receiving VOP data having contour data representing positions of contour pixels of the VOP and information relating to the preset bounding rectangle, detecting whether or not a contour in the VOP data intersects with an edge of the preset bounding rectangle and, if the contour intersects with the edge, finding intersection points on the edge of the preset bounding rectangle where the contour intersects therewith; extra contour information generation means for sequentially obtaining position information of points located along sides of the preset bounding rectangle between the intersection points based on the contour data, the intersection points and the bounding rectangle information, and deriving an extra contour based on the position information to generate extra contour information for the extra contour, wherein the extra contour corresponds to portions consisting of points located between the intersection points along the corresponding sides of the preset bounding rectangle; and means for combining the contour data with the extra contour information to provide a closed contour within the preset bounding rectangle.
The above and other objects and features of the present invention will become apparent from the following description of preferred embodiments given in conjunction with the accompanying drawings, in which:
Fig. 1 describes positions of luminance and chrominance data in the 4:2:0 format;
Fig. 2 provides an illustrative diagram showing a VOP represented by a bounding rectangle;
Fig. 3 shows luminance shape information in the form of a binary mask;
Fig. 4 represents a schematic block diagram of an apparatus for encoding VOP data in accordance with the present invention;
Fig. 5 offers a detailed block diagram of a motion estimation unit shown in Fig. 4; and
Figs. 6A and 6B are exemplary diagrams for explaining the detection of portions of an edge of a preset bounding rectangle surrounding a VOP which intersect with a contour of an object in a video signal.
Referring to Fig. 4, there is shown a schematic block diagram of an inventive apparatus 900 for encoding a VOP in a video signal. A VOP is inputted, as a current VOP, to a contour detector 100 and via a line L50 to a motion estimation unit 300 in the form of a segmentation mask, wherein the current VOP includes contour data representing positions of contour pixels of the current VOP, and position and size information of a preset bounding rectangle surrounding it. Each pixel in the segmentation mask has a label identifying the region to which it belongs. For instance, a pixel in the background has a label "0" and each pixel within the current VOP data is labeled by one of the non-zero values.
The contour detector 100 detects a contour from the current VOP, assigns index data to the contour, and outputs contour information having the contour data and the index data about the contour. The contour information is then applied via a line L10 to an inter-coding unit 200 and the motion estimation unit 300. The motion estimation unit 300 detects a contour most similar to the contour in the current VOP on the line L50 based on the contour information on the line L10 and a previous reconstructed contour image signal coupled thereto from a frame memory 700 via a line L30. The previous reconstructed contour image signal is also in the form of a segmentation mask, each pixel therein having a label identifying the region to which it belongs. Outputs of the motion estimation unit 300 on lines L20 and L40 are index data of the most similar contour and motion information representing a displacement between the contour in the current VOP and the most similar contour. Details of the motion estimation unit 300 will be provided with reference to Figs. 5, 6A and 6B hereinafter.
Referring to Fig. 5, the motion estimation unit 300 includes an intersection point detector 310, a start and end point determinator 312, a side selection device 314, a position information generator 316, a closed contour generator 320, a controller 330 and an optimum contour determinator 350.
The intersection point detector 310, with the contour information on the line L10 and the current VOP on the line L50, detects whether or not the contour detected from the current VOP intersects with an edge of the bounding rectangle surrounding the current VOP and issues a signal representing the detection result. For instance, if there are intersection portions, e.g., A and B, between the contour 115 and the preset bounding rectangle, as illustrated in Fig. 6A, the intersection point detector 310 issues a detection result signal of a logic high level; and, if otherwise, it issues the detection result signal of a logic low level. The detection result signal of the logic high or logic low level is then provided to a switch 332 and the start and end point determinator 312.
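The intersection test performed by the detector 310 can be sketched as below. This is a software illustration of the described behaviour, not the patent's circuit; the function name, the representation of the contour as an ordered list of (x, y) pixels, and the edge coordinates are my own assumptions.

```python
def find_intersection_points(contour, left, top, right, bottom):
    """Return the contour pixels lying on an edge of the bounding
    rectangle, i.e. candidate intersection points such as A and B
    in Fig. 6A. An empty result corresponds to the logic-low
    detection signal; a non-empty one to the logic-high signal."""
    return [(x, y) for x, y in contour
            if x in (left, right) or y in (top, bottom)]
```

For a contour touching the top and left sides of a rectangle with corners (0, 0) and (4, 4), the function returns exactly those two touching pixels, and truth-testing the result plays the role of the detection result signal.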
In response to the detection result signal of the logic high or logic low level, at the switch 332, the current VOP on the line L50 is selectively coupled to the start and end point determinator 312, the side selection device 314 and the position information generator 316 via a line L11 or to the optimum contour determinator 350 via a line L12. To be more specific, in response to the detection result signal of the logic low level, the current VOP is coupled to the optimum contour determinator 350 via the line L12; and in response to the detection result signal of the logic high level, the current VOP is coupled via the line L11 to the start and end point determinator 312, the side selection device 314 and the position information generator 316. At the start and end point determinator 312, which is responsive to the detection result signal from the intersection point detector 310, as shown in Fig. 6B, the intersection points A and B on the current VOP provided from the switch 332 through the line L11 are assigned as a start point and an end point, respectively. In a preferred embodiment of the invention, the former intersection point A, which appears first toward outside from inside the bounding rectangle counterclockwise, is assigned as the start point and the latter intersection point B is assigned as the end point. The position information on each of the two points, which can be seen from the contour information of the contour in the bounding rectangle, is applied to the side selection device 314 and the controller 330.
Thereafter, the side selection device 314 selects one side, e.g., a top side 111, among the four sides 111-114 of the bounding rectangle of the current VOP under the control of the controller 330, which is responsive to the position information representing the start point A in Fig. 6B, provided from the start and end point determinator 312. And the side selection device 314 issues information representing the selected top side 111 to the position information generator 316. The position information generator 316, using the current VOP provided through the line L11 from the switch 332, the contour information on the line L10 and the side information from the side selection device 314, starts to produce information representing positions of all points residing between the start point A and a coordinate (Xmin+1, Ymin) on the top side 111 of the bounding rectangle as illustrated in Fig. 6B. As can be seen from the above, such position information can be easily obtained based on the contour information and the position and size information of the bounding rectangle. This position information is provided to the controller 330 and a buffer 318 for temporary storage thereof.
Based on the position information from the position information generator 316, the controller 330 controls the operation of the side selection device 314. Specifically, when all the position information between the start point A and the coordinate (Xmin+1, Ymin) from the position information generator 316 is received by the controller 330, it issues a control signal to select the side following the top side 111 of the bounding rectangle counterclockwise, i.e., the left side 112 thereof, and sends same to the side selection device 314.
In response to the control signal, the side selection device 314 selects the left side 112 of the bounding rectangle and issues side information corresponding thereto to the position information generator 316. Similarly, the position information generator 316, in response to the side information, starts to output information corresponding to positions of all points residing between the coordinate (Xmin, Ymin) and the end point B located at the left side 112 of the bounding rectangle. And this position information is also applied to the controller 330 and the buffer 318 for temporary storage therein. When all the position information of the points between the start point A and the end point B is received by the controller 330, it issues a read control signal to the buffer 318 to control the operation thereof.
In response to the read control signal, the buffer 318 retrieves all the position information stored therein and provides it to the closed contour generator 320. Although only the start and end points A and B on the sides 111 and 112 are illustratively shown and described for the sake of simplicity, it should be appreciated that the side selection and position information generation for points on the other sides can be performed in a similar manner as described above.
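The side-by-side walk performed by the side selection device 314, position information generator 316 and controller 330 can be sketched as a single traversal of the rectangle's perimeter. This is an illustrative reading under stated assumptions: the image y axis grows downward, so "counterclockwise" here means top side right-to-left, then left side top-to-bottom, then bottom, then right; all names are mine, not the patent's.

```python
def edge_points_between(start, end, left, top, right, bottom):
    """Return the extra-contour points lying strictly between `start`
    and `end` along the bounding rectangle's sides, walking
    counterclockwise (screen coordinates, y downward)."""
    # Enumerate the full perimeter counterclockwise from the top-right
    # corner: top (right->left), left (top->bottom),
    # bottom (left->right), right (bottom->top).
    perimeter = []
    perimeter += [(x, top) for x in range(right, left, -1)]
    perimeter += [(left, y) for y in range(top, bottom)]
    perimeter += [(x, bottom) for x in range(left, right)]
    perimeter += [(right, y) for y in range(bottom, top, -1)]
    i, j = perimeter.index(start), perimeter.index(end)
    if i < j:
        return perimeter[i + 1:j]
    return perimeter[i + 1:] + perimeter[:j]  # wrap past the start corner
```

For a start point on the top side and an end point on the left side, as in Fig. 6B, the result is the run of edge points covering the remainder of the top side, the corner (Xmin, Ymin), and the part of the left side down to the end point, which the closed contour generator then splices into the contour data.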
At the closed contour generator 320, the position information from the buffer 318 is combined with the contour information on the line L10 to form a new closed contour 115' in the bounding rectangle as depicted in Fig. 6B. A new VOP, having contour information on the new closed contour 115', index data of the new closed contour 115' and the position and size information of the bounding rectangle in the current VOP on the line L50, is then outputted from the closed contour generator 320 to the optimum contour determinator 350 via a line L60.
Inputs to the optimum contour determinator 350 are either the new VOP having the contour information for the new closed contour 115' on the line L60 or the current VOP on the line L12, together with the previous reconstructed contour image signal having VOP's on the line L30. The optimum contour determinator 350 detects an optimum contour, i.e., the predicted contour most similar to either the closed contour information on the line L60 or the contour information in the current VOP on the line L12, among predicted contours residing within a preset search region, e.g., +/- 8 pixels of the contour in the current VOP; and outputs a motion vector (MV) representing a displacement between the contour in the current VOP and the optimum contour, together with the index data of the optimum contour.
Since the optimum contour and the MV at the optimum contour determinator 350 are derived by using a conventional optimum contour determining method well known in the art, details thereof are omitted here for the sake of simplicity. In a preferred embodiment of the invention, the index data of the optimum contour has the same value as a label of the object pixels corresponding to the predicted contour. The MV and the index data of the optimum contour are then provided via a line L40 to a MUX 800 and via a line L20 to a motion compensation unit 400 shown in Fig. 4.
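Since the text only refers to a conventional determining method, the following is a toy stand-in for the optimum contour search, not the patent's method: it exhaustively tries each candidate contour mask and every displacement in the search window, scoring matches by pixel agreement. The function name, the mask representation and the scoring rule are all assumptions for illustration.

```python
def best_match(current, candidates, search=8):
    """Among candidate masks, find the index and displacement (within
    +/- `search` pixels) whose shifted version best matches `current`,
    returning (index data, motion vector)."""
    h, w = len(current), len(current[0])

    def mismatch(cand, dx, dy):
        # Count pixels where the candidate, shifted by (dx, dy),
        # disagrees with the current mask (outside pixels count as 0).
        errs = 0
        for y in range(h):
            for x in range(w):
                sx, sy = x - dx, y - dy
                c = cand[sy][sx] if 0 <= sx < w and 0 <= sy < h else 0
                errs += c != current[y][x]
        return errs

    best = None
    for idx, cand in enumerate(candidates):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                e = mismatch(cand, dx, dy)
                if best is None or e < best[0]:
                    best = (e, idx, (dx, dy))
    return best[1], best[2]
```

A candidate identical to the current mask but shifted left by one pixel is recovered with a motion vector of (1, 0), mirroring the MV/index-data output described above.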
Referring back to Fig. 4, the motion compensation unit 400 generates a predicted current contour by retrieving the optimum contour information from the frame memory 700 via the line L30 based on the MV and the index data of the optimum contour on the line L20, wherein the predicted current contour represents the optimum contour shifted by the MV. The output to the inter-coding unit 200 and a contour reconstruction unit 600 via a line L55 from the motion compensation unit 400 is the predicted current contour information representing position data of contour pixels of the predicted current contour and index data thereof.
At the inter-coding unit 200, using a conventional vertex decision technique well known in the art, primary and secondary vertices are decided based on the predicted current contour information and index data thereof on the line L55 and the contour in the current VOP on the line L10. The inter-coding unit 200 then encodes position data of the primary and the secondary vertices and provides the encoded vertex data to the MUX 800 and an inter-decoding unit 500.
At the MUX 800, the encoded vertex data, the MV and the index data of the optimum contour are multiplexed to provide encoded contour data to a transmitter (not shown) for the transmission thereof.
Meanwhile, at the inter-decoding unit 500, the encoded vertex data is decoded into decoded vertex data representing the decoded primary and secondary vertices and the decoded vertex data is provided to the contour reconstruction unit 600. The decoded vertex data from the inter-decoding unit 500 is utilized in reconstructing the current VOP together with the predicted current contour information fed via the line L55 at the contour reconstruction unit 600. The reconstructed current VOP is stored at the frame memory 700 and is utilized as a reconstructed previous contour image signal for the next VOP.
While the present invention has been described with respect to certain preferred embodiments only, other modifications and variations may be made without departing from the scope of the present invention as set forth in the following claims.

Claims (11)

Claims:
1. A method for detecting portions of an edge of a preset bounding rectangle surrounding a video object plane (VOP), which intersect with a contour of an object in a video signal, the method comprising the steps of: (a) receiving VOP data having contour data representing positions of contour pixels of the VOP and information relating to the preset bounding rectangle, detecting whether or not a contour in the VOP data intersects with an edge of the preset bounding rectangle and, if the contour intersects with the edge, finding intersection points on the edge of the preset bounding rectangle where the contour intersects; (b) sequentially obtaining position information of points located along sides of the preset bounding rectangle between the intersection points based on the contour data, the intersection points and the bounding rectangle information, and deriving an extra contour based on the position information to generate extra contour information for the extra contour, wherein the extra contour corresponds to portions consisting of points located between the intersection points along the corresponding sides of the preset bounding rectangle; and (c) combining the contour data with the extra contour information to provide a closed contour within the preset bounding rectangle.
2. The method of claim 1, wherein the number of the intersection points is 2.
3. The method of claim 2, wherein the bounding rectangle information includes information indicating position and size of the preset bounding rectangle.
4. The method of claim 3, wherein the step (b) includes the steps of: (b1) selecting one of the two intersection points as a start point and the other as an end point, wherein the start point appears counterclockwise first toward outside from inside the preset bounding rectangle; (b2) choosing one of the four sides of the preset bounding rectangle on which the start point is located to assign same as a target side; (b3) if the end point is located on the target side, generating information representing positions of points located between the start point and the end point counterclockwise based on the contour data and the position and size information of the preset bounding rectangle; (b4) if the end point is not located on the target side, generating information representing positions of points residing between the start point and a point prior to the last point of the target side counterclockwise based on the contour data and the position and size information of the preset bounding rectangle and thereafter selecting a next side of the four sides of the bounding rectangle counterclockwise to assign it as a new target side; (b5) if the end point is located on the new target side, generating information representing positions of pixels located between the last point of the target side and the end point counterclockwise based on the contour data and the position and size information of the preset bounding rectangle, wherein the last point of the target side becomes a start point of the new target side; and (b6) if the end point is not located on the new target side, repeatedly performing the steps (b4) and (b5) until the position information of all the points located between the start and the end points has been obtained.
5. The method of claim 4, wherein the step (b) further includes a step of storing sequentially the position information for each of the points between the start and the end points in a buffer and retrieving them simultaneously when all the position information is stored in the buffer.
6. An apparatus for detecting portions of an edge of a preset bounding rectangle surrounding a video object plane (VOP), which intersect with a contour of an object in a video signal, the apparatus comprising: intersection point detection means for receiving VOP data having contour data representing positions of contour pixels of the VOP and information relating to the preset bounding rectangle, detecting whether or not a contour in the VOP data intersects with an edge of the preset bounding rectangle and, if the contour intersects with the edge, finding intersection points on the edge of the preset bounding rectangle where the contour intersects therewith; extra contour information generation means for sequentially obtaining position information of points located along sides of the preset bounding rectangle between the intersection points based on the contour data, the intersection points and the bounding rectangle information, and deriving an extra contour based on the position information to generate extra contour information for the extra contour, wherein the extra contour corresponds to portions consisting of points located between the intersection points along the corresponding sides of the preset bounding rectangle; and means for combining the contour data with the extra contour information to provide a closed contour within the preset bounding rectangle.
7. The apparatus of claim 6, wherein the number of the intersection points is 2.
8. The apparatus of claim 7, wherein the bounding rectangle information includes information indicating position and size of the preset bounding rectangle.
9. The apparatus of claim 8, wherein the extra contour information generation means includes: means for selecting one of the two intersection points as a start point and the other as an end point, wherein the start point appears counterclockwise first toward outside from inside the preset bounding rectangle; means for choosing one of the four sides of the preset bounding rectangle on which the start point is located to assign same as a target side; means, if the end point is located on the target side, for generating information representing positions of points located between the start point and the end point counterclockwise based on the contour data and the position and size information of the preset bounding rectangle; first generation means, if the end point is not located on the target side, for generating information representing positions of points residing between the start point and a point prior to the last point of the target side counterclockwise based on the contour data and the position and size information of the preset bounding rectangle and thereafter selecting a next side of the four sides of the bounding rectangle counterclockwise to assign it as a new target side; second generation means, if the end point is located on the new target side, for generating information representing positions of pixels located between the last point of the target side and the end point counterclockwise based on the contour data and the position and size information of the preset bounding rectangle, wherein the last point of the target side becomes a start point of the new target side; and means, if the end point is not located on the new target side, for performing the processes operated at the first and the second generation means until all the position information of the points located between the start and the end points has been obtained.
10. The apparatus of claim 9, wherein the extra contour information generation means further includes means for sequentially storing the position information for each of the points between the start and the end points in a buffer, and for retrieving the stored position information as a whole once all of it has been stored in the buffer.
11. A contour image signal coding apparatus constructed and arranged substantially as herein described with reference to or as shown in Figs. 4-6B of the accompanying drawings.
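The side-walking procedure of claims 9 and 10 — walk the bounding-rectangle boundary counterclockwise from the start intersection point, side by side, collecting point positions until the end intersection point is reached — can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the coordinate convention (mathematical counterclockwise order with the origin at the rectangle's bottom-left corner) and the helper names `rect_perimeter_ccw` and `boundary_points_between` are assumptions for the sketch.

```python
def rect_perimeter_ccw(x0, y0, w, h):
    """Integer lattice points on the rectangle boundary, counterclockwise.

    Starts at the bottom-left corner (x0, y0); each corner appears once,
    giving 2*(w + h) points in total.
    """
    pts = []
    pts += [(x0 + i, y0) for i in range(w)]          # bottom side, left -> right
    pts += [(x0 + w, y0 + j) for j in range(h)]      # right side, bottom -> top
    pts += [(x0 + w - i, y0 + h) for i in range(w)]  # top side, right -> left
    pts += [(x0, y0 + h - j) for j in range(h)]      # left side, top -> bottom
    return pts

def boundary_points_between(rect, start, end):
    """Points from start to end (inclusive), counterclockwise along the
    boundary of rect = (x0, y0, w, h), wrapping past the origin corner
    when needed — the analogue of buffering positions side by side until
    the end intersection point is reached."""
    x0, y0, w, h = rect
    per = rect_perimeter_ccw(x0, y0, w, h)
    i, j = per.index(start), per.index(end)
    if i <= j:
        return per[i:j + 1]
    return per[i:] + per[:j + 1]  # end lies "behind" start: wrap around
```

In the patent's terms, each slice of the returned list that lies on one rectangle side corresponds to one pass of the first or second generation means, and the full list corresponds to the buffer contents retrieved once complete.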
GB9807621A 1997-05-23 1998-04-08 Encoding contour of an object in a video signal Withdrawn GB2325582A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1019970020212A KR19980084420A (en) 1997-05-23 1997-05-23 Contour Information Detection Device and Method

Publications (2)

Publication Number Publication Date
GB9807621D0 GB9807621D0 (en) 1998-06-10
GB2325582A true GB2325582A (en) 1998-11-25

Family

ID=19506827

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9807621A Withdrawn GB2325582A (en) 1997-05-23 1998-04-08 Encoding contour of an object in a video signal

Country Status (3)

Country Link
JP (1) JPH10336673A (en)
KR (1) KR19980084420A (en)
GB (1) GB2325582A (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100457064B1 (en) * 1997-11-13 2005-05-27 주식회사 대우일렉트로닉스 Method for detecting in contour information
KR100726333B1 (en) * 2004-01-31 2007-06-11 학교법인 인하학원 Method for auto-detecting edges of building by using LIDAR data
KR102633109B1 (en) * 2020-01-09 2024-02-05 현대모비스 주식회사 Data converting system for detecting wide angle image object and method thereof

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4782384A (en) * 1984-04-27 1988-11-01 Utah Scientific Advanced Development Center, Inc. Area isolation apparatus for video signal control system

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0889652A2 (en) * 1997-07-05 1999-01-07 Daewoo Electronics Co., Ltd Method and apparatus for encoding a contour of an object based on a contour motion estimation technique
EP0889652A3 (en) * 1997-07-05 2003-02-26 Daewoo Electronics Co., Ltd Method and apparatus for encoding a contour of an object based on a contour motion estimation technique
EP0942604A2 (en) * 1998-03-10 1999-09-15 Hyundai Electronics Industries Co., Ltd. Method and apparatus for generating a bounding rectangle of a vop for interlaced scan type video signals
EP0942604A3 (en) * 1998-03-10 2001-03-07 Hyundai Electronics Industries Co., Ltd. Method and apparatus for generating a bounding rectangle of a vop for interlaced scan type video signals
WO2001089222A1 (en) * 2000-05-18 2001-11-22 Koninklijke Philips Electronics N.V. Mpeg-4 binary shape transmission
WO2004019618A1 (en) * 2002-08-21 2004-03-04 Koninklijke Philips Electronics N.V. Digital video signal encoding
EP1691334A1 (en) * 2003-04-21 2006-08-16 Obschestvo S Ogranichennoy Otvetstvennostyu " Mir Setei" Method for encoding co-ordinates of a video image moving along the display of a computing device
EP1691334A4 (en) * 2003-04-21 2010-11-17 Obschestvo S Ogranichennoy Otv Method for encoding co-ordinates of a video image moving along the display of a computing device
US7653246B2 (en) 2003-11-18 2010-01-26 Fuji Xerox Co., Ltd. System and method for making a correction to a plurality of images
US8280188B2 (en) 2003-11-18 2012-10-02 Fuji Xerox Co., Ltd. System and method for making a correction to a plurality of images
US20220237916A1 (en) * 2021-01-22 2022-07-28 Beijing Dajia Internet Information Technology Co., Ltd. Method for detecting collisions in video and electronic device

Also Published As

Publication number Publication date
GB9807621D0 (en) 1998-06-10
KR19980084420A (en) 1998-12-05
JPH10336673A (en) 1998-12-18

Similar Documents

Publication Publication Date Title
AU748276B2 (en) Method and apparatus for encoding a motion vector of a binary shape signal
US5973743A (en) Mode coding method and apparatus for use in an interlaced shape coder
US5787199A (en) Apparatus for detecting a foreground region for use in a low bit-rate image signal encoder
US5822460A (en) Method and apparatus for generating chrominance shape information of a video object plane in a video signal
US6094225A (en) Method and apparatus for encoding mode signals for use in a binary shape coder
US5686973A (en) Method for detecting motion vectors for use in a segmentation-based coding system
Tekalp et al. Two-dimensional mesh-based visual-object representation for interactive synthetic/natural digital video
US5978048A (en) Method and apparatus for encoding a motion vector based on the number of valid reference motion vectors
JP3056120B2 (en) Video signal shape information predictive coding method
US6069976A (en) Apparatus and method for adaptively coding an image signal
GB2325582A (en) Encoding contour of an object in a video signal
US20070274687A1 (en) Video Signal Encoder, A Video Signal Processor, A Video Signal Distribution System And Methods Of Operation Therefor
EP0871332B1 (en) Method and apparatus for coding a contour of an object employing temporal correlation thereof
US5978031A (en) Method and apparatus for determining an optimum grid for use in a block-based video signal coding system
JP2001506101A (en) System and method for contour-based movement estimation
US6049567A (en) Mode coding method in a binary shape encoding
US20050259878A1 (en) Motion estimation algorithm
KR19990080991A (en) Binary shape signal encoding and decoding device and method thereof
US6020933A (en) Method and apparatus for encoding a motion vector
JP3974244B2 (en) Binary shape signal encoding device
EP0923250A1 (en) Method and apparatus for adaptively encoding a binary shape signal
GB2341030A (en) Video motion estimation
KR100413980B1 (en) Apparatus and method for estimating motion information of contour in shape information coding
Puri et al. Implicit arbitrary shape visual objects via MPEG-4 scene description
KR100568532B1 (en) Method for filling contour in coding and decoding of moving ficture

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)