CN105872560B - Image encoding method and device - Google Patents

Image encoding method and device

Info

Publication number
CN105872560B
CN105872560B (application CN201510028430.9A)
Authority
CN
China
Prior art keywords
depth
pixel
field value
focal plane
depth map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510028430.9A
Other languages
Chinese (zh)
Other versions
CN105872560A (en)
Inventor
陈增源
李荣彬
李莉华
刘亚辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hong Kong Polytechnic University HKPU
Original Assignee
Hong Kong Polytechnic University HKPU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hong Kong Polytechnic University HKPU filed Critical Hong Kong Polytechnic University HKPU
Priority to CN201510028430.9A priority Critical patent/CN105872560B/en
Publication of CN105872560A publication Critical patent/CN105872560A/en
Priority to HK16111097.1A priority patent/HK1222963A1/en
Application granted granted Critical
Publication of CN105872560B publication Critical patent/CN105872560B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The present invention provides an image encoding method and device. The image encoding method comprises the following steps. Step S1: obtain a depth map, the depth map comprising a plurality of pixels, each pixel having a depth-of-field value; determine the focal plane at the pixel of the object in the depth map that is nearest to the observation point; determine the focal plane at the pixel of the object in the depth map that is farthest from the observation point. Step S2: for a pixel of the object lying between two adjacent, already-determined focal planes, compare its distances to the two focal planes, thereby determining a new focal plane midway between the two focal planes and the depth-of-field values of the pixels of the object lying on the newly determined focal plane. Step S3: store, in sequence, the pixels whose depth-of-field values have been determined together with those depth-of-field values, and generate an encoded stream. The encoding method of the invention is simple and practical and has wide application in fields such as 3D film, 3D television and 3D imaging.

Description

Image encoding method and device
Technical field
The present invention relates to encoding methods, and more particularly to an image encoding method and device.
Background technique
In recent years, the concept of 3D has entered our lives, and all kinds of audio-visual entertainment products have ridden this trend, bringing out 3D films, 3D games, 3D pictures and the like. However, there is at present no universal storage standard for 3D pictures. This causes incompatibility: the content cannot be played on every terminal device, which hinders the popularization of 3D digital content.
Summary of the invention
In view of the problem that there is at present no universal storage standard for 3D pictures, which causes incompatibility, prevents playback on every terminal device and hinders the popularization of 3D digital content, the present invention provides an image encoding method and device.
The present invention proposes the following technical solutions to this technical problem:
The present invention provides an image encoding method comprising the following steps:
Step S1: obtain a depth map, the depth map comprising a plurality of pixels, each pixel having a depth-of-field value; determine the focal plane at the pixel of the object in the depth map that is nearest to the observation point; determine the focal plane at the pixel of the object in the depth map that is farthest from the observation point;
Step S2: for a pixel of the object lying between two adjacent, already-determined focal planes, compare its distances to the two focal planes, thereby determining a new focal plane midway between the two focal planes and the depth-of-field values of the pixels of the object lying on the newly determined focal plane;
Step S3: store, in sequence, the pixels of the object whose depth-of-field values have been determined together with those depth-of-field values, and generate an encoded stream.
In the above image encoding method of the present invention, step S31 is further included between step S2 and step S3:
repeating step S2 until the depth-of-field values of all pixels of the object in the depth map have been determined.
In the above image encoding method of the present invention, step S1 further includes step S11: obtaining a repetition-count threshold;
and between step S2 and step S3 the method further includes: repeating step S2 the number of times given by the repetition-count threshold.
In the above image encoding method of the present invention, step S1 further includes step S12: obtaining a distance threshold;
and step S2 further includes: when the absolute value of the difference between the distances from a pixel of the object, lying between two adjacent already-determined focal planes, to the two focal planes is less than the distance threshold, determining that the focal plane at that pixel lies midway between the two focal planes, and determining the depth-of-field value of the newly determined focal plane as the depth-of-field value of that pixel.
The present invention also provides an image encoding device, comprising:
an encoding-information acquisition module: for obtaining a depth map, the depth map comprising a plurality of pixels, each pixel having a depth-of-field value; also for determining the focal plane at the pixel of the object in the depth map that is nearest to the observation point; and also for determining the focal plane at the pixel of the object in the depth map that is farthest from the observation point;
a depth-of-field value determination module: for comparing, for a pixel of the object lying between two adjacent already-determined focal planes, its distances to the two focal planes, thereby determining a new focal plane midway between the two focal planes and the depth-of-field values of the pixels of the object lying on the newly determined focal plane;
a storage module: for storing, in sequence, the pixels of the object whose depth-of-field values have been determined together with those depth-of-field values, and generating an encoded stream.
In the above image encoding device of the present invention, the encoding-information acquisition module is also used to obtain a distance threshold;
and the depth-of-field value determination module is also used to: when the absolute value of the difference between the distances from a pixel of the object, lying between two adjacent already-determined focal planes, to the two focal planes is less than the distance threshold, determine that the focal plane at that pixel lies midway between the two focal planes, and determine the depth-of-field value of the newly determined focal plane as the depth-of-field value of that pixel.
The present invention limits the time spent on encoding by using the repetition-count threshold, and the repetition-count threshold also determines the accuracy of the depth map. The encoding method of the invention is simple and practical and has wide application in fields such as 3D film, 3D television and 3D imaging.
Brief description of the drawings
The present invention will be further explained below with reference to the accompanying drawings and embodiments, in which:
Fig. 1 is a schematic diagram of a hierarchical structure;
Fig. 2 is a schematic diagram of the depth map file format;
Fig. 3 is a schematic diagram of the structure of a unit;
Fig. 4 is a schematic diagram of the detailed process of encoding a pixel P;
Fig. 5 is a schematic diagram of the process of decoding a unit D;
Fig. 6 is a schematic diagram of a decoding case for a unit whose target level number is 4.
Specific embodiment
In the field of 3D computer graphics, a depth map is an image that records the observed distance from an observation point to the surface of an object. Depth maps can be applied in many situations, such as defocusing, rendering of 3D scenes, echoes and other applications related to observed distance.
In this application, we disclose an encoding method for depth maps. The encoding method uses a hierarchical structure and binary search to determine the focal plane nearest to a pixel from the observation point and thereby to compute the depth-of-field value of that pixel in the depth map. Further, we describe the encoding and decoding processes of this encoding method in detail with the relevant flow charts. Finally, we also discuss the advantages and potential applications of the encoding method of this application.
Fig. 1 is a schematic diagram of a hierarchical structure.
As shown in Fig. 1, F(x) is defined as the focal plane whose depth-of-field value in the depth map is x. Further, F(x=0) is defined as the focal plane at the surface of the object in the depth map that is nearest to the observation point, and F(x=1) is defined as the focal plane at the surface of the object in the depth map that is farthest from the observation point. In this way, every focal plane at the surface of the object can be expressed as F(0 < x < 1).
For example, F(0.5) denotes the focal plane midway between F(0) and F(1), and F(0.25) denotes the focal plane midway between F(0) and F(0.5), and so on. In this way, by comparing positions against the focal planes, we can determine the depth-of-field value in the depth map of any pixel on the object, as illustrated by the sketch below.
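To make the interval halving concrete, the following minimal Python sketch (not part of the patent; the function name focal_plane_interval is ours) shows how a string of 0/1 digits selects nested focal-plane intervals between F(0) and F(1), with 0 meaning "nearer the left boundary" and 1 meaning "nearer the right boundary".

```python
def focal_plane_interval(digits):
    # Interval of depth-of-field values selected by a digit string.
    # Each digit halves the current interval: 0 keeps the half nearer F(0),
    # 1 keeps the half nearer F(1).
    left, right = 0.0, 1.0
    for d in digits:
        mid = (left + right) / 2.0
        if d == 0:
            right = mid
        else:
            left = mid
    return left, right

print(focal_plane_interval([0]))     # (0.0, 0.5)  - bounded by F(0) and F(0.5)
print(focal_plane_interval([0, 1]))  # (0.25, 0.5) - bounded by F(0.25) and F(0.5)
```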
Specifically, the encoding method of the invention comprises the following steps:
Step S1: obtain a depth map, the depth map comprising a plurality of pixels, each pixel having a depth-of-field value; determine the focal plane at the pixel of the object in the depth map that is nearest to the observation point; determine the focal plane at the pixel of the object in the depth map that is farthest from the observation point;
In the first layer (Level 1) shown in Fig. 1, the focal plane at the surface of the object nearest to the observation point is F(0), and the focal plane at the surface of the object farthest from the observation point is F(1);
we may take F(0) as the first-level left boundary of the depth map and F(1) as the first-level right boundary of the depth map.
Step S2: for a pixel of the object lying between two adjacent, already-determined focal planes, compare its distances to the two focal planes, thereby determining a new focal plane midway between the two focal planes and the depth-of-field values of the pixels of the object lying on the newly determined focal plane;
In the present invention, to facilitate comparing the distances from a pixel of the object, lying between two adjacent already-determined focal planes, to the two focal planes, step S1 further includes step S12: obtaining a distance threshold;
and step S2 further includes: when the absolute value of the difference between the distances from a pixel of the object, lying between two adjacent already-determined focal planes, to the two focal planes is less than the distance threshold, determining that the focal plane at that pixel lies midway between the two focal planes, and determining the depth-of-field value of the newly determined focal plane as the depth-of-field value of that pixel.
Specifically, in this step, for each pixel of the object lying between F(0) and F(1), the distances from the pixel to the first-level left boundary and to the first-level right boundary are compared. If the pixel is closer to the first-level left boundary, the digit 0 is appended to its depth-of-field value in the depth map; if the pixel is closer to the first-level right boundary, the digit 1 is appended to its depth-of-field value in the depth map. Among the pixels of the object lying between F(0) and F(1), if two adjacent pixels have the digits 0 and 1 respectively appended to their depth-of-field values in the depth map, then the focal plane at either of the two pixels can be determined to be the focal plane F(0.5) midway between F(0) and F(1); if a pixel is equidistant from the first-level left boundary and the first-level right boundary, the focal plane at that pixel is determined to be the focal plane F(0.5) midway between F(0) and F(1). In this way, the focal plane F(0.5) midway between F(0) and F(1) is determined, giving the second layer shown in Fig. 1 (Level 2).
Similarly, by taking F(0) as the second-level left boundary and F(0.5) as the second-level right boundary, and comparing the distances from the pixels of the object lying between F(0) and F(0.5) to the second-level left boundary and to the second-level right boundary, the focal plane F(0.25) midway between F(0) and F(0.5) is determined, giving the third layer shown in Fig. 1 (Level 3).
Similarly, in the third layer shown in Fig. 1 (Level 3), by taking F(0.5) as the third-level left boundary and F(1) as the third-level right boundary, and comparing the distances from the pixels of the object lying between F(0.5) and F(1) to the third-level left boundary and to the third-level right boundary, the focal plane F(0.75) midway between F(0.5) and F(1) is determined.
Following the above method, the depth-of-field values of all pixels of the object in the depth map can be determined; a sketch of this per-pixel procedure is given below.
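As an illustration only, the layer-by-layer procedure described above can be sketched as a per-pixel binary search. The Python below is our own reading of steps S11, S12 and S2 under the assumption that the pixel's depth-of-field value is normalised so that F(0) = 0.0 and F(1) = 1.0; it is not taken from the patent.

```python
def encode_depth_value(p, max_repeats, dist_threshold):
    # p             : depth-of-field value of the pixel, normalised to [0, 1]
    # max_repeats   : repetition-count threshold of step S11
    # dist_threshold: distance threshold of step S12
    # Returns the appended 0/1 digits and the depth value of the final focal plane.
    left, right = 0.0, 1.0
    digits = []
    for _ in range(max_repeats):
        dist_l = abs(p - left)
        dist_r = abs(p - right)
        # Step S12: the pixel is (almost) midway between the two boundaries,
        # so the new focal plane at the pixel is their midpoint.
        if abs(dist_l - dist_r) < dist_threshold:
            break
        if dist_l <= dist_r:
            digits.append(0)              # closer to the left boundary
            right = (left + right) / 2.0
        else:
            digits.append(1)              # closer to the right boundary
            left = (left + right) / 2.0
    return digits, (left + right) / 2.0

digits, depth = encode_depth_value(0.3, max_repeats=3, dist_threshold=1e-3)
print(digits, depth)   # [0, 1, 0] 0.3125
```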
From the foregoing description it can be seen that the encoding method of the invention uses a hierarchical structure, and the encoding process adopts the idea of binary search. As the hierarchy deepens, the depth map becomes finer, and more fine detail can be represented in the depth map; of course, encoding then takes more time. We can therefore control how deep the hierarchy goes to trade off the fineness of the depth map against the time spent on encoding.
Step S3: store, in sequence, the pixels of the object whose depth-of-field values have been determined together with those depth-of-field values, and generate an encoded stream.
Further, in this embodiment, step S31 is included between step S2 and step S3: repeating step S2 until the depth-of-field values of all pixels of the object in the depth map have been determined.
For certain extremely complex depth maps, step S31 may consume a great deal of time. To avoid this, in another embodiment step S1 further includes step S11: obtaining a repetition-count threshold; and between step S2 and step S3 the method further includes: repeating step S2 the number of times given by the repetition-count threshold.
In this way, by using the repetition-count threshold to limit how many times step S2 is repeated, the time taken to encode the depth map can be reduced.
From the foregoing description, the number of levels of the hierarchical structure equals the repetition-count threshold for repeating step S2 plus 1.
So that those skilled in the art can clearly understand and practice the present invention, the present invention is described in detail below in conjunction with embodiments.
According to the encoding method provided above, a depth map can be formatted into a depth map file, as shown in Fig. 2 and Fig. 3. The depth map file comprises a fixed-size structure (the file header) and a variable-size structure that stores the array of depth-of-field values in a predetermined order. The file header contains the information of the depth map file. Specifically, in Fig. 2 and Fig. 3, File_Type indicates that the file is in the depth map format; File_Size indicates the size of the file; Image_Width and Image_Height indicate the width and height of the depth map respectively; Depth_Level indicates the number of levels the depth map has reached; Depth_Flag indicates whether all pixels of the object have reached their respective target level numbers; Data_Offset indicates the storage starting point of the depth-of-field value array. In this embodiment, each unit of the depth map array stores the depth-of-field value information of one pixel of the depth map; each unit comprises 4 fixed-size bits and a field whose size varies between 1 and N bits, where N is a natural number.
In a unit, the first bit is named flagA and records whether the pixel stored in this unit has reached its target level number. If the value of flagA is 0, the pixel has not reached its target level number, and the focal plane at the pixel needs to be determined further so that the pixel can reach its target level number; if the value of flagA is 1, the pixel has already reached its target level number, and the number of levels of the pixel is fixed. The second to fourth bits are named flagB, a binary number that records the number of levels the pixel has currently reached (if the pixel has reached its target level number, the number of levels it has currently reached equals its target level number; if the pixel has not reached its target level number, the number of levels it has currently reached equals the number of levels the depth map has reached), and thus gives the length of dataB. The bits from the fifth bit to the last bit (dataB) represent the depth value of the pixel and also store the determination process of the pixel in each level of the depth map.
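For illustration, a unit as described above might be modelled as follows; this representation (a Python dataclass with flagA, flagB and dataB fields) is our assumption and is not part of the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DepthUnit:
    # One unit of the depth-of-field value array (our model of Fig. 3):
    #   flagA - 1 bit: the pixel has reached its target level number (1) or not (0)
    #   flagB - 3 bits: number of levels currently reached; also the length of dataB
    #   dataB - flagB digits: the 0/1 digits appended level by level
    flagA: int
    flagB: int
    dataB: List[int] = field(default_factory=list)

    def to_bits(self) -> List[int]:
        # flagA, then flagB as a 3-bit binary number, then the variable-size dataB
        return [self.flagA] + [(self.flagB >> k) & 1 for k in (2, 1, 0)] + list(self.dataB)

unit = DepthUnit(flagA=1, flagB=3, dataB=[0, 1, 0])
print(unit.to_bits())   # [1, 0, 1, 1, 0, 1, 0]
```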
Fig. 4 shows the detailed process of encoding a pixel P. According to the definition of the depth map file format, the processing state of pixel P can be read from flagA, the current number of levels of pixel P can be read from flagB, and the depth-of-field value of pixel P can be read from dataB. The target level number is N.
With the current level number c of pixel P set to flagB+1, according to the hierarchical structure, the depth-of-field value of the focal plane adjoining pixel P on the left is L = dataB/2^(c-1), and the depth-of-field value of the focal plane adjoining pixel P on the right is R = (1+dataB)/2^(c-1). The encoding process comprises the following steps:
Step 100: obtain the position parameter of pixel P in the depth map;
Step 110: if c > N or flagA = 1, assign c-1 to flagB, store, and terminate this encoding process; otherwise proceed to step 120;
Step 120: calculate the distance DistL = |P - F(L)| from pixel P to the focal plane adjoining it on the left, and the distance DistR = |P - F(R)| from pixel P to the focal plane adjoining it on the right; if min(DistL, DistR) is less than the threshold, assign 1 to flagA and c-1 to flagB, and terminate this encoding process; otherwise proceed to step 130;
Step 130: append a bit at the tail end of dataB; if DistL ≤ DistR, the appended bit is 0, the depth-of-field value of the focal plane adjoining pixel P on the right is changed to L + (0.5)^c, and the left boundary is kept; otherwise, the appended bit is 1, the depth-of-field value of the left boundary is changed to R - (0.5)^c, and the right boundary is kept; then proceed to step 140;
Step 140: compute c+1, assign the result to c, and return to step 110.
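A rough Python rendering of the Fig. 4 flow is given below. It assumes the depth-of-field value p of pixel P is normalised to [0, 1] and treats dataB as a list of digits; these assumptions, and reading the step 110 condition as c > N, go beyond what the patent text specifies.

```python
def encode_pixel(p, flagA, flagB, dataB, N, threshold):
    # p: depth-of-field value of pixel P in [0, 1]; N: target level number;
    # threshold: distance threshold; dataB: list of 0/1 digits appended so far.
    c = flagB + 1                                   # current level number
    value = 0
    for b in dataB:                                 # dataB read as a binary number
        value = (value << 1) | b
    L = value / 2 ** (c - 1)                        # left adjoining focal plane
    R = (1 + value) / 2 ** (c - 1)                  # right adjoining focal plane
    while True:
        if c > N or flagA == 1:                     # step 110
            return flagA, c - 1, dataB
        dist_l, dist_r = abs(p - L), abs(p - R)     # step 120
        if min(dist_l, dist_r) < threshold:
            return 1, c - 1, dataB                  # pixel reached its target level
        if dist_l <= dist_r:                        # step 130: keep the left boundary
            dataB.append(0)
            R = L + 0.5 ** c
        else:                                       # keep the right boundary
            dataB.append(1)
            L = R - 0.5 ** c
        c += 1                                      # step 140
```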
Fig. 5 shows the process of decoding a unit D.
The decoding process of unit D comprises:
Step 200: acquire unit D, obtain the target level number M of unit D, and read the flagB of unit D and convert it into a decimal number N;
Step 210: if N ≥ M, output the first to the M-th bit of the dataB of unit D, take the output value as the depth-of-field value, and terminate this decoding process; otherwise proceed to step 220;
Step 220: append M-N bits to the dataB of unit D; if the N-th bit of dataB is 0, assign 0 to all newly added bits; otherwise assign 1 to all newly added bits; then proceed to step 230;
Step 230: output the modified dataB.
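For comparison, the Fig. 5 decoding flow might look as follows; the bit-list representation of flagB and dataB is again our assumption.

```python
def decode_unit(flagB_bits, dataB, M):
    # flagB_bits: the three flagB bits, most significant first;
    # dataB: stored 0/1 digits; M: target level number of the unit.
    N = 0
    for b in flagB_bits:                    # step 200: flagB -> decimal N
        N = (N << 1) | b
    if N >= M:                              # step 210: enough digits already stored
        return dataB[:M]
    pad = dataB[N - 1] if N > 0 else 0      # step 220: repeat the N-th digit
    return dataB + [pad] * (M - N)          # step 230: output the modified dataB

print(decode_unit([0, 1, 0], [1, 0], 4))    # [1, 0, 0, 0]
```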
Fig. 6 shows a decoding case for a unit whose target level number is 4.
Further, the present invention also provides an image encoding device, comprising:
an encoding-information acquisition module: for obtaining a depth map, the depth map comprising a plurality of pixels, each pixel having a depth-of-field value; also for determining the focal plane at the pixel of the object in the depth map that is nearest to the observation point; and also for determining the focal plane at the pixel of the object in the depth map that is farthest from the observation point;
a depth-of-field value determination module: for comparing, for a pixel of the object lying between two adjacent already-determined focal planes, its distances to the two focal planes, thereby determining a new focal plane midway between the two focal planes and the depth-of-field values of the pixels of the object lying on the newly determined focal plane;
a storage module: for storing, in sequence, the pixels of the object whose depth-of-field values have been determined together with those depth-of-field values, and generating an encoded stream.
In this embodiment, the depth-of-field value determination module stops working after the depth-of-field values of all pixels of the object in the depth map have been determined.
In another embodiment, the encoding-information acquisition module is also used to obtain a repetition-count threshold; the depth-of-field value determination module stops working after repeating its work the number of times given by the repetition-count threshold.
Here, the encoding-information acquisition module is also used to obtain a distance threshold;
and the depth-of-field value determination module is also used to: when the absolute value of the difference between the distances from a pixel of the object, lying between two adjacent already-determined focal planes, to the two focal planes is less than the distance threshold, determine that the focal plane at that pixel lies midway between the two focal planes, and determine the depth-of-field value of the newly determined focal plane as the depth-of-field value of that pixel.
The present invention limits the time spent on encoding by using the repetition-count threshold, and the repetition-count threshold also determines the accuracy of the depth map. The encoding method of the invention is simple and practical and has wide application in fields such as 3D film, 3D television and 3D imaging.
It should be understood that those of ordinary skill in the art may make modifications or variations in light of the above description, and all such modifications and variations shall fall within the protection scope of the appended claims of the present invention.

Claims (5)

1. An image encoding method, characterized by comprising the following steps:
Step S1: obtain a depth map, the depth map comprising a plurality of pixels, each pixel having a depth-of-field value; determine the focal plane at the pixel of the object in the depth map that is nearest to the observation point; determine the focal plane at the pixel of the object in the depth map that is farthest from the observation point;
Step S2: for a pixel of the object lying between two adjacent, already-determined focal planes, compare its distances to the two focal planes, thereby determining a new focal plane midway between the two focal planes and the depth-of-field values of the pixels of the object lying on the newly determined focal plane;
Step S3: store, in sequence, the pixels of the object whose depth-of-field values have been determined together with those depth-of-field values, and generate an encoded stream; step S1 further includes step S11: obtaining a repetition-count threshold;
between step S2 and step S3, the method further includes: repeating step S2 the number of times given by the repetition-count threshold.
2. The image encoding method according to claim 1, characterized in that step S1 further includes step S12: obtaining a distance threshold;
step S2 further includes: when the absolute value of the difference between the distances from a pixel of the object, lying between two adjacent already-determined focal planes, to the two focal planes is less than the distance threshold, determining that the focal plane at that pixel lies midway between the two focal planes, and determining the depth-of-field value of the newly determined focal plane as the depth-of-field value of that pixel.
3. An image encoding method, characterized by comprising the following steps:
Step S1: obtain a depth map, the depth map comprising a plurality of pixels, each pixel having a depth-of-field value; determine the focal plane at the pixel of the object in the depth map that is nearest to the observation point; determine the focal plane at the pixel of the object in the depth map that is farthest from the observation point;
Step S2: for a pixel of the object lying between two adjacent, already-determined focal planes, compare its distances to the two focal planes, thereby determining a new focal plane midway between the two focal planes and the depth-of-field values of the pixels of the object lying on the newly determined focal plane;
Step S3: store, in sequence, the pixels of the object whose depth-of-field values have been determined together with those depth-of-field values, and generate an encoded stream;
between step S2 and step S3, the method further includes step S31: repeating step S2 until the depth-of-field values of all pixels of the object in the depth map have been determined.
4. An image encoding device, characterized by comprising:
an encoding-information acquisition module: for obtaining a depth map, the depth map comprising a plurality of pixels, each pixel having a depth-of-field value; also for determining the focal plane at the pixel of the object in the depth map that is nearest to the observation point; and also for determining the focal plane at the pixel of the object in the depth map that is farthest from the observation point;
a depth-of-field value determination module: for comparing, for a pixel of the object lying between two adjacent already-determined focal planes, its distances to the two focal planes, thereby determining a new focal plane midway between the two focal planes and the depth-of-field values of the pixels of the object lying on the newly determined focal plane;
a storage module: for storing, in sequence, the pixels of the object whose depth-of-field values have been determined together with those depth-of-field values, and generating an encoded stream;
wherein the encoding-information acquisition module is also used to obtain a repetition-count threshold, and the depth-of-field value determination module stops working after repeating its work the number of times given by the repetition-count threshold.
5. The image encoding device according to claim 4, characterized in that the encoding-information acquisition module is also used to obtain a distance threshold;
and the depth-of-field value determination module is also used to: when the absolute value of the difference between the distances from a pixel of the object, lying between two adjacent already-determined focal planes, to the two focal planes is less than the distance threshold, determine that the focal plane at that pixel lies midway between the two focal planes, and determine the depth-of-field value of the newly determined focal plane as the depth-of-field value of that pixel.
CN201510028430.9A 2015-01-20 2015-01-20 Image encoding method and device Active CN105872560B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510028430.9A CN105872560B (en) 2015-01-20 2015-01-20 Image encoding method and device
HK16111097.1A HK1222963A1 (en) 2015-01-20 2016-09-21 Image coding method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510028430.9A CN105872560B (en) 2015-01-20 2015-01-20 Image encoding method and device

Publications (2)

Publication Number Publication Date
CN105872560A CN105872560A (en) 2016-08-17
CN105872560B true CN105872560B (en) 2019-08-06

Family

ID=56622895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510028430.9A Active CN105872560B (en) 2015-01-20 2015-01-20 Image encoding method and device

Country Status (2)

Country Link
CN (1) CN105872560B (en)
HK (1) HK1222963A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7002571B2 (en) * 2002-06-04 2006-02-21 Intel Corporation Grid-based loose octree for spatial partitioning
CN103024408A (en) * 2011-09-22 2013-04-03 株式会社东芝 Stereoscopic image converting apparatus and stereoscopic image output apparatus
CN102982579A (en) * 2011-12-14 2013-03-20 微软公司 Image three-dimensional (3D) modeling
CN103841396A (en) * 2012-11-23 2014-06-04 财团法人工业技术研究院 Coding method and system for stereo video

Also Published As

Publication number Publication date
CN105872560A (en) 2016-08-17
HK1222963A1 (en) 2017-07-14

Similar Documents

Publication Publication Date Title
EP3991437B1 (en) Context determination for planar mode in octree-based point cloud coding
US8265407B2 (en) Method for coding and decoding 3D data implemented as a mesh model
EP4307684A2 (en) Planar mode in octree-based point cloud coding
JP7431742B2 (en) Method and apparatus for encoding/decoding a point cloud representing a three-dimensional object
US9819964B2 (en) Limited error raster compression
WO2021258374A1 (en) Method for encoding and decoding a point cloud
CN111684808A (en) Point cloud data encoding method, encoding device, decoding method, and decoding device
CN103067715A (en) Encoding and decoding methods and encoding and decoding device of range image
WO2021140354A1 (en) Context determination for planar mode in octree-based point cloud coding
CN104869399A (en) Information processing method and electronic equipment.
CN104184980A (en) Data processing method and electronic device
WO2019199513A1 (en) A method and apparatus for encoding and decoding metadata associated with patched projection of point clouds
CN105872560B (en) Image encoding method and device
CN116724212A (en) Method and apparatus for entropy encoding/decoding point cloud geometry data captured by a spin sensor head
US20140285487A1 (en) Method and Apparatus for Generating a Bitstream of Repetitive Structure Discovery Based 3D Model Compression
US20150288973A1 (en) Method and device for searching for image
CN110800301A (en) Control method and device of coding equipment and storage medium
WO2022141453A1 (en) Point cloud encoding method and apparatus, point cloud decoding method and apparatus, and encoding and decoding system
CN111684804B (en) Data encoding method, data decoding method, equipment and storage medium
CN107124613B (en) Method for recoding second-class product data of Doppler weather radar
CN110875744B (en) Coding method and device
CN106331720B (en) Video decoding related information storage method and device
CN102055978B (en) Methods and devices for coding and decoding frame motion compensation
US20170188035A1 (en) Transcoding method and electronic apparatus
CN111919445B (en) System and method for image compression and decompression using triangulation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1222963

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant