CN107454418A - 360-degree panoramic video coding method based on motion attention model - Google Patents

360-degree panoramic video coding method based on motion attention model Download PDF

Info

Publication number
CN107454418A
CN107454418A CN201710122773.0A CN201710122773A
Authority
CN
China
Prior art keywords
motion vector
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710122773.0A
Other languages
Chinese (zh)
Other versions
CN107454418B (en
Inventor
虞启铭
胡强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Plex VR Digital Technology Shanghai Co Ltd
Original Assignee
Plex VR Digital Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Plex VR Digital Technology Shanghai Co Ltd filed Critical Plex VR Digital Technology Shanghai Co Ltd
Priority to CN201710122773.0A priority Critical patent/CN107454418B/en
Publication of CN107454418A publication Critical patent/CN107454418A/en
Priority to PCT/CN2018/077730 priority patent/WO2018157835A1/en
Application granted granted Critical
Publication of CN107454418B publication Critical patent/CN107454418B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/527Global motion vector estimation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a 360-degree panoramic video coding method based on a motion attention model, comprising: extracting motion vectors to obtain a motion vector field and computing the reliability of each motion vector; pre-processing the field with reliability-weighted filtering to reduce noise; applying global motion compensation to the motion vector field; building a motion attention model to obtain the motion attention of each coding block; and adaptively allocating codewords according to the motion attention of the coding blocks. Beneficial effects of the present invention: the motion attention is computed on the basis of the motion vector field, so no extra computational complexity is needed; the influence of noise is reduced; motion vector intensity, spatial motion vector contrast, and temporal motion vector contrast are jointly considered to build the motion attention model; more codewords are allocated to moving regions of interest and fewer to unchanging video regions, improving the video quality of motion-attended scenes while reducing the codewords spent on useless regions.

Description

360-degree panoramic video coding method based on motion attention model
Technical field
The present invention relates to the technical field of 360-degree panoramic video coding, and specifically to a 360-degree panoramic video coding method based on a motion attention model.
Background technology
Traditional live broadcasting brings viewers real-time enjoyment of events. With the addition of 360-degree panoramic live-broadcast technology, not only can a viewing atmosphere with a stronger sense of presence be created, but the limitation of physical seating is also overcome, greatly widening the audience. 360-degree panoramic live broadcasting can be used not only for events such as concerts and sports competitions, but also in the medical field, real-estate showings and sales, and so on. Generally, such broadcasts take place outdoors, where the network at the acquisition end is extremely unstable, affecting the quality users experience when watching the 360-degree panoramic stream. Even for indoor broadcasts, the stalling encountered in network transmission is a problem that 360-degree panoramic live broadcasting must overcome.
As users demand ever higher realism from virtual reality, currently common video coding schemes cannot sufficiently reduce the compressed bit rate of 360-degree panoramic video while guaranteeing the same subjective quality. Because network bandwidth is limited, only a relatively low-bit-rate video stream can be used; yet when the key target must be seen clearly, region-of-interest coding can sacrifice the image quality of non-interesting regions and concentrate coding resources on the region of interest, so that key target information is obtained without raising the bit rate, effectively avoiding increases in storage and bandwidth cost. The variable-quality transmission property of region-of-interest video coding is very useful in surveillance. The pictures collected by 360-degree panoramic cameras all, more or less, contain useless regions that equally consume transmission bandwidth and storage; therefore, only the video information of the attended region is transmitted at high quality, while unchanging video is transmitted sparsely or not at all, improving the video quality of attended scenes while reducing the codewords spent on useless regions.
The content of the invention
The object of the present invention is to provide a 360-degree panoramic video coding method based on a motion attention model, which can reuse the motion vector information already present in the encoder to propose motion attention regions, and during encoding allocate more codewords to moving regions of interest and fewer codewords to unchanging video regions, thereby effectively saving the bandwidth of 360-degree panoramic video transmission while guaranteeing the same subjective quality.
The technical solution adopted by the present invention comprises the following steps:
Step 1: extract motion vectors to obtain a motion vector field, and compute the reliability of each motion vector;
Step 2: pre-process the field with reliability-weighted filtering according to the reliability, to reduce noise;
Step 3: apply global motion compensation to the motion vector field corrected in step 2;
Step 4: build the motion attention model and obtain the motion attention of each coding block;
Step 5: adaptively allocate codewords according to the motion attention of the coding blocks obtained in step 4.
Further, in step 1, the motion vector reliability is defined as:
g(v) = exp(-MAD/6 - ||v - μ_v||^2 / 50)
where v is the motion vector of the current block, MAD is the mean absolute difference between the current block and its matching block, and μ_v is the average motion vector of the current block's 8-neighborhood blocks.
Further, reliability-weighted filtering is performed in step 2: if g(v) is greater than 0.1, the motion vector of the current block is considered reliable and no processing is done; if g(v) is less than 0.1, the motion vector of the current block is considered unreliable, and reliability-weighted vector median filtering is applied to it so that surrounding reliable motion vectors replace the current unreliable motion vector.
Further, in step 3, global motion compensation is applied to the motion vector field corrected in step 2: the mean of the motion vectors of all SKIP-mode blocks in the current frame is computed, and this mean is subtracted from all motion vectors of the current frame, yielding the globally motion-compensated motion vector field.
Further, in step 4, the motion attention model comprises three components: motion vector intensity, spatial motion vector contrast, and temporal motion vector contrast.
The motion vector intensity is defined as:
MI = sqrt(v_x^2 + v_y^2) / NF
where v_x and v_y are the x- and y-axis components of motion vector v, and NF is a normalization factor. The spatial motion vector contrast is defined as:
MC_s = 1 - exp(-Σ_{i=1..8} ||v_i - v||^2 / 50)
where v is the current block's motion vector and v_i are the 8 spatial-neighborhood block motion vectors. The temporal motion vector contrast is defined as:
MC_t = 1 - exp(-Σ_{i=1..3} ||v_{t-i} - v_t||^2 / 60)
where v_t is the current block's motion vector and v_{t-i} are the temporal neighboring block motion vectors. The motion attention of each coding block is computed as:
MA = MI + MC_s + MC_t
Further, in step 5: codewords are adaptively allocated according to the motion attention obtained in step 4, i.e., more codewords are allocated to moving regions of interest and fewer to unchanging video regions, improving the video quality of motion-attended scenes while reducing the codewords spent on useless regions.
The codeword allocated to the n-th coding block is computed as:
R_n = (MA_n / Σ_{k=1..N} MA_k) · R_frame
where MA_n is the motion attention of the n-th coding block and R_frame is the total codeword budget of the whole frame.
Compared with the prior art, the beneficial effects of the present invention are:
1) the motion attention is computed on the basis of the motion vector field, which is obtained directly from the encoder, so no extra computational complexity is needed;
2) a median filtering method weighted by motion vector reliability is proposed to filter the vector field, reducing the influence of noise;
3) following the mechanism by which attention forms, motion vector intensity, spatial motion vector contrast, and temporal motion vector contrast are jointly considered to build the motion attention model;
4) finally, codewords are adaptively allocated to each coding block according to its motion attention: more codewords to moving regions of interest and fewer to unchanging video regions, improving the video quality of motion-attended scenes while reducing the codewords spent on useless regions.
Brief description of the drawings
Other features, objects, and advantages of the present invention will become more apparent upon reading the detailed description of non-limiting embodiments made with reference to the following drawings:
Fig. 1 is a flow chart of the present invention;
Fig. 2 is the original motion vector field obtained in encoder;
Fig. 3 is the motion vector field after reliability weighted filtering;
Fig. 4 is the motion vector field after global motion compensation;
Fig. 5 is the saliency map obtained by the motion attention model.
Embodiment
The present invention is further described below in conjunction with the accompanying drawings.
Referring to Fig. 1, which shows the HEVC encoder framework of one embodiment of the present invention: the purpose of this embodiment is to provide a 360-degree panoramic video coding method based on a motion attention model, into which the attention model is added. It mainly comprises the following steps:
Step 1: extract motion vectors to obtain the motion vector field, and compute the reliability of each motion vector.
In this embodiment, motion vectors are extracted in the HEVC reference encoder HM16.0, yielding the motion vector field. Referring to Fig. 2, each arrow shown is a motion vector: the relative displacement of a coding block, within a certain search range, relative to the reference frame. The motion vectors scattered over the video together constitute the motion vector field.
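As a stand-in for the HM16.0 motion estimation that the method reuses, the following toy full-search block matcher illustrates what a motion vector field and the per-block MAD look like. This is an illustrative sketch under assumed inputs (grayscale frames as numpy arrays), not the HM implementation:

```python
import numpy as np

def block_matching(cur, ref, block=8, search=4):
    """Toy full-search block matching: for each block of the current frame,
    find the displacement (within +/- `search` pixels) into the reference
    frame that minimises the mean absolute difference (MAD).
    Returns an (H//block, W//block, 2) motion vector field storing (dx, dy),
    plus the matching MAD per block (used later by the reliability measure)."""
    H, W = cur.shape
    by, bx = H // block, W // block
    mvf = np.zeros((by, bx, 2))
    mad = np.zeros((by, bx))
    for i in range(by):
        for j in range(bx):
            y0, x0 = i * block, j * block
            cb = cur[y0:y0 + block, x0:x0 + block]
            best = (np.inf, 0, 0)  # (cost, dx, dy)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = y0 + dy, x0 + dx
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue  # candidate falls outside the reference frame
                    cost = np.abs(cb - ref[y:y + block, x:x + block]).mean()
                    if cost < best[0]:
                        best = (cost, dx, dy)
            mad[i, j] = best[0]
            mvf[i, j] = (best[1], best[2])
    return mvf, mad
```

Shifting a frame by a known offset recovers that offset as the motion vector of interior blocks, with a MAD of zero.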
Step 2: pre-process the field with reliability-weighted filtering according to the reliability, to reduce noise.
Because the motion of an object is continuous over time and space, the points of the motion vector map are not independent but mutually correlated and constrained. In the spatial domain, connected blocks belonging to the same object should have similar motion vectors; in the temporal domain, the same object should also have similar motion vectors at the same position at different moments. Based on this, the present invention proposes the concept of motion vector reliability and computes the reliability of each motion vector; according to the obtained reliability, reliability-weighted vector median filtering is applied to unreliable motion vectors, so that surrounding reliable motion vectors replace the current unreliable motion vector.
The motion vector reliability g(v) is defined as:
g(v) = exp(-MAD/6 - ||v - μ_v||^2 / 50)
where v is the motion vector of the current block, MAD is the mean absolute difference between the current block and its matching block, and μ_v is the average motion vector of the current block's 8-neighborhood blocks.
The motion vectors obtained in the HEVC reference encoder HM16.0 are the vectors optimal for coding, not true motion vectors (although most motion vectors in the encoder are close to the true motion). The motion vector field is therefore filtered in step 2 to reduce the influence of noise: the larger g(v) is, the more reliable the current block's motion vector, and no processing is done; if g(v) is less than 0.1, the current block's motion vector is considered unreliable, and reliability-weighted vector median filtering is applied to it so that surrounding reliable motion vectors replace the current unreliable motion vector.
Referring to Fig. 3, the motion vector field after reliability-weighted filtering: the motion vectors in the five circled portions of Fig. 2 are judged unreliable, while the motion vectors around the circles are judged reliable; after step 2, the surrounding reliable motion vectors have replaced the unreliable motion vectors inside the circles. Meanwhile, the motion vectors elsewhere, outside the circles, are determined to be reliable and are retained without any processing.
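The reliability measure and the filtering rule of step 2 can be sketched as follows. The patent does not spell out the exact form of the weighted vector median, so the candidate-selection rule below (pick the neighbourhood vector minimising the reliability-weighted sum of distances to the other candidates) and the toy array layout are assumptions:

```python
import numpy as np

def reliability(v, mad, mu_v):
    """Patent's reliability measure: g(v) = exp(-MAD/6 - ||v - mu_v||^2 / 50)."""
    return np.exp(-mad / 6.0 - np.sum((v - mu_v) ** 2) / 50.0)

def filter_field(mvf, mad, thresh=0.1):
    """Reliability-weighted vector median filtering of a motion vector field.
    mvf: (H, W, 2) per-block motion vectors; mad: (H, W) per-block MAD from
    motion estimation. Vectors with g(v) below `thresh` are replaced."""
    H, W, _ = mvf.shape
    g = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            block = mvf[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].reshape(-1, 2)
            # mean of the neighbourhood, excluding the centre vector itself
            mu_v = (block.sum(axis=0) - mvf[y, x]) / (len(block) - 1)
            g[y, x] = reliability(mvf[y, x], mad[y, x], mu_v)
    out = mvf.copy()
    for y in range(H):
        for x in range(W):
            if g[y, x] >= thresh:
                continue  # reliable vector: leave untouched
            ys, xs = np.mgrid[max(y - 1, 0):min(y + 2, H),
                              max(x - 1, 0):min(x + 2, W)]
            cand = mvf[ys, xs].reshape(-1, 2)
            w = g[ys, xs].ravel()
            # weighted vector median: candidate minimising the reliability-
            # weighted sum of distances to all other candidates
            costs = [np.sum(w * np.linalg.norm(cand - c, axis=1)) for c in cand]
            out[y, x] = cand[int(np.argmin(costs))]
    return out
```

On a uniform field with one outlier vector, the outlier is detected as unreliable and replaced by the surrounding consistent vector, while consistent vectors pass through unchanged.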
Step 3: apply global motion compensation to the motion vector field corrected in step 2.
When the video source introduces global motion due to camera movement, the extracted motion vectors are affected accordingly. When the degree of global motion is low, its effect on the motion vectors is small; but when it is high, the influence of global motion on the motion vectors can no longer be ignored. It is therefore necessary to apply global motion compensation to the motion vector map. The method used in this embodiment is to compute the mean of the motion vectors of all SKIP-mode blocks in the current frame and subtract this mean from all motion vectors of the current frame. Referring to Fig. 4: because the background content of the image is mostly static, its motion vectors are coded in SKIP mode; after the mean of the SKIP-mode motion vectors is computed, it is subtracted from every motion vector in the image, yielding Fig. 4. As can be seen in Fig. 4, most of the motion vectors originally scattered over the picture become inconspicuous "points" or "short arrows" after the mean is subtracted, while the remaining motion vectors (those on the three moving human bodies) instead stand out more after global motion compensation, forming a further distinction with obvious contrast.
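The mean-of-SKIP compensation in this step is simple to express. In the sketch below, the `skip_mask` input is an assumption standing in for the encoder's per-block mode decisions, which the patent takes from the bitstream rather than from an explicit mask:

```python
import numpy as np

def global_motion_compensation(mvf, skip_mask):
    """Subtract the mean SKIP-mode motion vector from every vector in the frame.
    mvf: (H, W, 2) motion vector field; skip_mask: (H, W) boolean array marking
    blocks coded in SKIP mode (the mostly static background)."""
    if not skip_mask.any():
        return mvf.copy()  # no SKIP blocks: nothing to estimate (an assumption)
    global_mv = mvf[skip_mask].mean(axis=0)  # estimate of camera-induced motion
    return mvf - global_mv
```

After compensation, background vectors collapse toward zero while genuinely moving blocks retain their relative displacement.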
Step 4: build the motion attention model and obtain the motion attention of each coding block. After the global motion compensation above, the motion attention model is built from the corrected motion vectors and comprises three components: motion vector intensity, spatial motion vector contrast, and temporal motion vector contrast. The motion vector intensity is defined as:
MI = sqrt(v_x^2 + v_y^2) / NF
where v_x and v_y are the x- and y-axis components of motion vector v, and NF is a normalization factor. The spatial motion vector contrast is defined as:
MC_s = 1 - exp(-Σ_{i=1..8} ||v_i - v||^2 / 50)
where v is the current block's motion vector and v_i are the 8 spatial-neighborhood block motion vectors. The temporal motion vector contrast is defined as:
MC_t = 1 - exp(-Σ_{i=1..3} ||v_{t-i} - v_t||^2 / 60)
where v_t is the current block's motion vector and v_{t-i} are the temporal neighboring block motion vectors. The motion attention of each coding block is computed as:
MA = MI + MC_s + MC_t
In the above, regions with a larger motion vector intensity MI attract more attention. When the motion intensity is small, the spatio-temporal motion vector contrasts compensate for this deficiency: on the one hand, the spatial contrast MC_s over the motion vector's spatial neighborhood describes the local degree of motion attention; on the other hand, because the temporal motion vector contrast MC_t is very sensitive to low-energy motion, it compensates well for the motion vector intensity.
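Combining the three components defined above (MI, MC_s, MC_t, summed per block), a minimal sketch might look like the following. The normalization factor NF is left unspecified by the patent, so its value here, like the toy array layout, is an assumption:

```python
import numpy as np

def motion_attention(mvf_t, mvf_prev, nf=16.0):
    """Per-block motion attention MA = MI + MC_s + MC_t (patent formulas).
    mvf_t: (H, W, 2) compensated motion vector field of the current frame;
    mvf_prev: list of the 3 previous frames' fields (temporal neighbours);
    nf: the unspecified normalization factor NF (placeholder value)."""
    H, W, _ = mvf_t.shape
    ma = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            v = mvf_t[y, x]
            mi = np.sqrt(v[0] ** 2 + v[1] ** 2) / nf
            # spatial contrast over the 8-neighbourhood (centre term is zero)
            nb = mvf_t[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].reshape(-1, 2)
            mc_s = 1.0 - np.exp(-np.sum((nb - v) ** 2) / 50.0)
            # temporal contrast against the 3 previous frames at the same block
            t = sum(np.sum((f[y, x] - v) ** 2) for f in mvf_prev)
            mc_t = 1.0 - np.exp(-t / 60.0)
            ma[y, x] = mi + mc_s + mc_t
    return ma
```

For a uniform, temporally static field, both contrast terms vanish and MA reduces to the intensity term alone.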
Step 5: adaptively allocate codewords according to the motion attention obtained in step 4. Finally, the codeword of each coding block is adaptively allocated according to its motion attention: more codewords to moving regions of interest, fewer to unchanging video regions, improving the video quality of motion-attended scenes while reducing the codewords spent on useless regions. The codeword allocated to the n-th coding block is computed as:
R_n = (MA_n / Σ_{k=1..N} MA_k) · R_frame
where MA_n is the motion attention of the n-th coding block and R_frame is the total codeword budget of the whole frame.
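The proportional allocation R_n = MA_n / Σ_k MA_k · R_frame is a one-liner over the attention map; the uniform fallback for an all-zero attention map is an added assumption, since the patent does not address that degenerate case:

```python
import numpy as np

def allocate_codewords(ma, r_frame):
    """Distribute the frame's codeword budget in proportion to motion attention:
    R_n = MA_n / sum_k MA_k * R_frame (the patent's allocation formula).
    ma: per-block motion attention; r_frame: total codeword budget of the frame."""
    total = ma.sum()
    if total == 0:
        # all-zero attention: fall back to a uniform split (an assumption)
        return np.full(ma.shape, r_frame / ma.size)
    return ma / total * r_frame
```

By construction the allocations sum exactly to the frame budget, so high-attention blocks gain codewords only at the expense of low-attention ones.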
Referring to Fig. 5: whiter regions in the figure denote larger computed motion attention, for example the contour edges of the moving human bodies; these areas attract more attention from the human eye, and more codewords are allocated to them during encoding. Darker regions denote smaller computed motion attention, for example most of the background image, to which the human eye is less sensitive because it is static, and which is allocated relatively fewer codewords during encoding.
It has been verified that, under the HEVC reference encoder HM16.0, this embodiment reduces the bit rate by 11% for multiple video sequences at the same subjective quality. The embodiment can reuse the motion vector information already present in the encoder to propose motion attention regions, and during encoding allocate more codewords to moving regions of interest and fewer to unchanging video regions, thereby effectively saving the bandwidth of 360-degree panoramic video transmission while guaranteeing the same subjective quality.
The above is only an embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any transformation or replacement that a person familiar with the art could conceive within the technical scope disclosed herein shall be covered within the scope of the present invention.

Claims (6)

  1. A 360-degree panoramic video coding method based on a motion attention model, characterised in that it comprises the following steps:
    Step 1: extract motion vectors to obtain a motion vector field, and compute the reliability of each motion vector;
    Step 2: pre-process the field with reliability-weighted filtering according to the reliability, to reduce noise;
    Step 3: apply global motion compensation to the motion vector field corrected in step 2;
    Step 4: build the motion attention model and obtain the motion attention of each coding block;
    Step 5: adaptively allocate codewords according to the motion attention of the coding blocks obtained in step 4.
  2. The 360-degree panoramic video coding method based on a motion attention model according to claim 1, characterised in that in step 1 the motion vector reliability is defined as:
    g(v) = exp(-MAD/6 - ||v - μ_v||^2 / 50)
    where v is the motion vector of the current block, MAD is the mean absolute difference between the current block and its matching block, and μ_v is the average motion vector of the current block's 8-neighborhood blocks.
  3. The 360-degree panoramic video coding method based on a motion attention model according to claim 2, characterised in that reliability-weighted filtering is performed in step 2: if g(v) is greater than 0.1, the motion vector of the current block is reliable and no processing is done; if g(v) is less than 0.1, the motion vector of the current block is unreliable, and reliability-weighted vector median filtering is applied to it so that surrounding reliable motion vectors replace the current unreliable motion vector.
  4. The 360-degree panoramic video coding method based on a motion attention model according to claim 1, characterised in that in step 3 global motion compensation is applied to the motion vector field corrected in step 2: the mean of the motion vectors of all SKIP-mode blocks in the current frame is computed, and this mean is subtracted from all motion vectors of the current frame, yielding the globally motion-compensated motion vector field.
  5. The 360-degree panoramic video coding method based on a motion attention model according to claim 1, characterised in that in step 4 the motion attention model comprises three components: motion vector intensity, spatial motion vector contrast, and temporal motion vector contrast;
    the motion vector intensity is defined as:
    MI = sqrt(v_x^2 + v_y^2) / NF
    where v_x and v_y are the x- and y-axis components of motion vector v, and NF is a normalization factor; the spatial motion vector contrast is defined as:
    MC_s = 1 - exp(-Σ_{i=1..8} ||v_i - v||^2 / 50)
    where v is the current block's motion vector and v_i are the 8 spatial-neighborhood block motion vectors; the temporal motion vector contrast is defined as:
    MC_t = 1 - exp(-Σ_{i=1..3} ||v_{t-i} - v_t||^2 / 60)
    where v_t is the current block's motion vector and v_{t-i} are the temporal neighboring block motion vectors; the motion attention of each coding block is computed as:
    MA = MI + MC_s + MC_t
  6. The 360-degree panoramic video coding method based on a motion attention model according to claim 1, characterised in that in step 5 codewords are adaptively allocated according to the motion attention obtained in step 4, i.e., more codewords are allocated to moving regions of interest and fewer to unchanging video regions, improving the video quality of motion-attended scenes while reducing the codewords spent on useless regions; the codeword allocated to the n-th coding block is computed as:
    R_n = (MA_n / Σ_{k=1..N} MA_k) · R_frame
    where MA_n is the motion attention of the n-th coding block and R_frame is the total codeword budget of the whole frame.
CN201710122773.0A 2017-03-03 2017-03-03 360-degree panoramic video coding method based on motion attention model Active CN107454418B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710122773.0A CN107454418B (en) 2017-03-03 2017-03-03 360-degree panoramic video coding method based on motion attention model
PCT/CN2018/077730 WO2018157835A1 (en) 2017-03-03 2018-03-01 360-degree panoramic video coding method based on motion attention model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710122773.0A CN107454418B (en) 2017-03-03 2017-03-03 360-degree panoramic video coding method based on motion attention model

Publications (2)

Publication Number Publication Date
CN107454418A true CN107454418A (en) 2017-12-08
CN107454418B CN107454418B (en) 2019-11-22

Family

ID=60486227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710122773.0A Active CN107454418B (en) 2017-03-03 2017-03-03 360-degree panoramic video coding method based on motion attention model

Country Status (2)

Country Link
CN (1) CN107454418B (en)
WO (1) WO2018157835A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108271020A (*) 2018-04-24 2018-07-10 福州大学 A panoramic video quality evaluation method based on a visual attention model
WO2018157835A1 (en) * 2017-03-03 2018-09-07 叠境数字科技(上海)有限公司 360-degree panoramic video coding method based on motion attention model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050140781A1 (en) * 2003-12-29 2005-06-30 Ming-Chieh Chi Video coding method and apparatus thereof
CN101282479A (en) * 2008-05-06 2008-10-08 武汉大学 Method for encoding and decoding airspace with adjustable resolution based on interesting area
CN102572380A (en) * 2010-12-29 2012-07-11 中国移动通信集团公司 Video monitoring coding method and device
CN103765898A (en) * 2011-09-02 2014-04-30 索尼公司 Image processing device, image processing method, and program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107454418B (en) * 2017-03-03 2019-11-22 叠境数字科技(上海)有限公司 360-degree panoramic video coding method based on motion attention model

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050140781A1 (en) * 2003-12-29 2005-06-30 Ming-Chieh Chi Video coding method and apparatus thereof
CN101282479A (en) * 2008-05-06 2008-10-08 武汉大学 Method for encoding and decoding airspace with adjustable resolution based on interesting area
CN102572380A (en) * 2010-12-29 2012-07-11 中国移动通信集团公司 Video monitoring coding method and device
CN103765898A (en) * 2011-09-02 2014-04-30 索尼公司 Image processing device, image processing method, and program

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018157835A1 (en) * 2017-03-03 2018-09-07 叠境数字科技(上海)有限公司 360-degree panoramic video coding method based on motion attention model
CN108271020A (*) 2018-04-24 2018-07-10 福州大学 A panoramic video quality evaluation method based on a visual attention model
CN108271020B (*) 2018-04-24 2019-08-09 福州大学 A panoramic video quality evaluation method based on a visual attention model

Also Published As

Publication number Publication date
CN107454418B (en) 2019-11-22
WO2018157835A1 (en) 2018-09-07

Similar Documents

Publication Publication Date Title
CN110969577B (en) Video super-resolution reconstruction method based on deep double attention network
CN112348766B (en) Progressive feature stream depth fusion network for surveillance video enhancement
KR101633893B1 (en) Apparatus and Method for Image Fusion
CN111028150B (en) Rapid space-time residual attention video super-resolution reconstruction method
US7295616B2 (en) Method and system for video filtering with joint motion and noise estimation
CN102077572B (en) Method and apparatus for motion blur and ghosting prevention in imaging system
US10003768B2 (en) Apparatus and methods for frame interpolation based on spatial considerations
KR102210415B1 (en) Motion-compensated frame interpolation using smoothness constraints
CN104041046B (en) The method and apparatus that high dynamic is encoded together with low-dynamic range video, the method and apparatus for reconstructing high dynamic range video
CN103501441B (en) A kind of multi-description video coding method based on human visual system
JPH07203435A (en) Method and apparatus for enhancing distorted graphic information
CN111709896A (en) Method and equipment for mapping LDR video into HDR video
US20130279598A1 (en) Method and Apparatus For Video Compression of Stationary Scenes
CN113066022B (en) Video bit enhancement method based on efficient space-time information fusion
CN110225260B (en) Three-dimensional high dynamic range imaging method based on generation countermeasure network
Cheng et al. A dual camera system for high spatiotemporal resolution video acquisition
CN107155112A (en) A kind of compressed sensing method for processing video frequency for assuming prediction more
CN112750092A (en) Training data acquisition method, image quality enhancement model and method and electronic equipment
CN107454418A (en) 360 degree of panorama video code methods based on motion attention model
CN113610707B (en) Video super-resolution method based on time attention and cyclic feedback network
Lee et al. A new framework for measuring 2D and 3D visual information in terms of entropy
CN115760663A (en) Method for synthesizing high dynamic range image from low dynamic range image based on multi-frame multi-exposure
Zhao et al. Multiframe joint enhancement for early interlaced videos
CN115661452A (en) Image de-occlusion method based on event camera and RGB image
CN107071447A (en) A kind of correlated noise modeling method based on two secondary side information in DVC

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Yu Jingyi

Inventor after: Hu Qiang

Inventor before: Yu Qiming

Inventor before: Hu Qiang

CB03 Change of inventor or designer information
GR01 Patent grant
GR01 Patent grant