CN101360246B - Video error masking method combined with 3D human face model - Google Patents

Video error masking method combined with 3D human face model

Info

Publication number
CN101360246B
CN101360246B, CN200810046012, CN200810046012A
Authority
CN
China
Prior art keywords
frame
face
macro block
face model
human face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200810046012
Other languages
Chinese (zh)
Other versions
CN101360246A (en)
Inventor
范小九
彭强
夏旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN 200810046012 priority Critical patent/CN101360246B/en
Publication of CN101360246A publication Critical patent/CN101360246A/en
Application granted granted Critical
Publication of CN101360246B publication Critical patent/CN101360246B/en

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a video error concealment method that incorporates a 3D human face model. The method can be used in video telephony applications to conceal errors that occur during video transmission and thereby improve the subjective visual quality of damaged regions of a face image. Key facial feature points are extracted and used to adjust the 3D face model: position and shape information is determined from the key feature points, and the model's texture information is updated from the current frame during the adjustment, yielding a specific 3D face model that matches the face in the current frame. By combining a pre-concealment step with this specific 3D face model, the face region is concealed more accurately, while the background region is still concealed with a conventional method; finally, the boundary between the face and the background is smoothed. Together these steps significantly improve concealment performance and subjective quality.

Description

Video error concealment method combining a 3D human face model
Technical field
The invention belongs to the field of video transmission error recovery, and specifically relates to methods that recover, at the video decoder, face image information lost during transmission in order to improve decoded video quality.
Background art
With the spread and development of the Internet and wireless communication networks, more and more digital video is transmitted over such channels. Because of channel and network errors, compressed video data can be damaged or lost during transmission or storage, so error-resilience mechanisms must be introduced to guarantee robust video transmission. Error concealment (also called error masking or error hiding) is an error-handling technique applied at the decoder. It exploits the fact that the human eye can tolerate a certain amount of image distortion: using the temporal and spatial correlation of the video signal, it repairs the signal and reconstructs the lost information as well as possible, so as to improve the overall subjective quality of the decoded picture. Because this is a post-processing approach, it does not require changing the structure of the video encoder or the channel transmission mode, is largely independent of them, is applicable to a wide range of video coding standards and methods, and can be combined with other techniques, and is therefore widely used in practice.
Error concealment techniques can be divided into spatial-domain and temporal-domain methods. In a typical real-time transmission environment, the inter-frame and intra-frame correlation can be used to repair errors as they occur, so that image quality acceptable to the human eye is recovered in real time and error propagation is avoided. For errors in I frames, spatial-domain methods are generally used: the spatial correlation of neighbouring blocks and the information of correctly received blocks are exploited, and the lost macroblock is repaired by spatial interpolation. For errors in P frames, temporal-domain methods are used instead: the temporal correlation between consecutive frames and the similarity of motion between neighbouring blocks are used to recover the motion vector of the lost block, which is then concealed by motion compensation. However, temporal-domain methods sometimes give unsatisfactory concealment results for video with scene changes or violently moving objects, while traditional spatial-domain methods interpolate only from the pixels on the macroblock boundaries around the damaged area; this lack of reference information easily causes loss of edges and blurring of the image.
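The spatial interpolation mentioned above can be illustrated with a minimal sketch (not part of the patent text): each pixel of a lost 16x16 macroblock is interpolated from the boundary pixels of its four neighbouring macroblocks, weighted by the inverse of the distance to each boundary. The macroblock size and the weighting scheme are assumptions for illustration only.

```python
import numpy as np

MB = 16  # macroblock size assumed for this sketch

def conceal_spatial(frame, x0, y0):
    """Fill the lost macroblock whose top-left corner is (x0, y0) in-place,
    assuming the four neighbouring macroblocks were received correctly."""
    top    = frame[y0 - 1,     x0:x0 + MB].astype(np.float64)
    bottom = frame[y0 + MB,    x0:x0 + MB].astype(np.float64)
    left   = frame[y0:y0 + MB, x0 - 1    ].astype(np.float64)
    right  = frame[y0:y0 + MB, x0 + MB   ].astype(np.float64)
    for dy in range(MB):
        for dx in range(MB):
            # distances from this pixel to the four boundaries (at least 1)
            d_t, d_b, d_l, d_r = dy + 1, MB - dy, dx + 1, MB - dx
            w = np.array([1.0 / d_t, 1.0 / d_b, 1.0 / d_l, 1.0 / d_r])
            v = np.array([top[dx], bottom[dx], left[dy], right[dy]])
            frame[y0 + dy, x0 + dx] = np.dot(w, v) / w.sum()
    return frame
```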
Three-dimensional modelling is a fundamental problem in computer vision and computer graphics. With the rapid development of computer graphics techniques, 3D face models and expression-animation design are finding ever wider application, for example in games and animation, face-to-face communication and virtual reality. In particular, the growth of network bandwidth and the appearance of real-time, low-bit-rate face coding systems for visual applications have made 3D face technology a research focus in video telephony and web conferencing. Nevertheless, errors occurring during video transmission in these video telephony applications are at present still concealed with traditional error concealment methods. Because the particular characteristics of this class of application are not considered, the concealment results are unsatisfactory; the face region in particular shows large errors, and the results are even worse when motion is violent or other disturbances are present (for example a microphone moving in front of, or occluding, the face). If this key characteristic is fully taken into account and a 3D face model is mapped onto the face region to conceal it, the error concealment performance for decoded face video of this kind can be improved.
Summary of the invention
In view of the above shortcomings of the prior art, the object of the invention is to develop a temporal-domain error concealment method for video transmission that incorporates a 3D human face model and improves the subjective quality of the damaged region of the face image. The object of the invention is achieved by the following means.
The video error concealment method combining a 3D human face model conceals errors that occur during video transmission in video telephony applications so as to improve the subjective visual quality of the damaged region of the face image, and comprises the following steps:
1) performing face detection and feature point marking on a correctly received frame, computing the position and shape information of the corresponding 3D face model, automatically adjusting the 3D face model, and performing texture repair and mapping using the video image;
2) performing 3D-face-model-based error concealment on the erroneous frame: making full use of the correlation between macroblocks of consecutive frames, the corresponding position in the previous frame and the motion vectors (MVs) of the current frame are used to predict the loss situation of the other macroblocks in the same region of the current erroneous frame, and the face region is judged to be either mostly lost or only partially lost;
3) when the face region is entirely or mostly lost, the current frame cannot provide enough information to determine the position and shape of the specific face; in that case the position and shape estimates obtained from motion estimation of the current-frame background are used to adjust and update the 3D face model, or the 3D face model of the previous frame is reused directly; then proceed to step 5);
4) when the face region is only partially lost, the damaged region is first pre-concealed in the temporal domain; face detection and feature point localisation algorithms are applied to the pre-concealed result to extract the key feature points, keeping the feature points on correctly received macroblocks, while those on macroblocks that were not correctly received are estimated from the corresponding feature points of the previous frame; position and shape information is then determined from the key feature points, and the 3D face model is further adjusted and updated using the available texture information; then proceed to step 5);
5) the 3D face model is moved to the corresponding position of the current erroneous frame, adjusted appropriately, and mapped for concealment, so that the original damaged macroblocks are replaced by new macroblocks;
6) after concealment, the boundary between the face and the background is smoothed.
With the method of the invention, a specific 3D face model matching the face image of the current frame is obtained; by combining the pre-concealment idea with this specific 3D face model, the face region is concealed more accurately, while the background region is still concealed with a conventional method, and finally a smoothing measure is applied at the boundary between the face and the background; the concealment performance and subjective quality are thereby significantly improved.
Description of drawings
Fig. 1 schematic diagram of the facial feature point definition used by the invention.
Fig. 2 the Candide-3 3D face wireframe model.
Fig. 3 schematic diagram of estimating the face position and shape information.
Fig. 4 schematic diagram of the automatic adjustment of the 3D face model.
Fig. 5 schematic diagram of 3D face model texture mapping.
Fig. 6 schematic diagram of 3D face model texture repair.
Fig. 7 schematic diagram of 3D face model texture update.
Fig. 8 schematic diagram of predicting the loss situation of the face region from the previous and current frames.
Fig. 9 schematic diagram of the extrapolated motion vectors of the sub-macroblocks.
Fig. 10 schematic diagram of mapping-based concealment.
Fig. 11 schematic diagram of the smoothing process.
Fig. 12 the two error cases: a correctly received frame and an erroneous current frame.
Fig. 13 schematic diagram of face detection.
Fig. 14 schematic diagram of key feature point extraction.
Fig. 15 schematic diagram of automatic adjustment.
Fig. 16 schematic diagram of texture mapping.
Fig. 17 multi-angle views of the basic 3D face model.
Fig. 18 face region marking in the previous frame and prediction of the current-frame face region.
Fig. 19 marking of the face position in the erroneous frame.
Fig. 20 the correspondingly adjusted 3D face model.
Fig. 21 schematic diagram of the texture update region.
Fig. 22 extrapolated motion vectors of the damaged sub-macroblocks.
Fig. 23 schematic diagram of boundary matching.
Fig. 24 comparison before and after temporal pre-concealment.
Fig. 25 face detection and feature point extraction on the pre-concealed current frame.
Fig. 26 schematic diagram of error concealment with the 3D model.
Fig. 27 result of smoothing.
Detailed description of the embodiments
The present invention is described in further detail below in conjunction with the accompanying drawings and a specific embodiment.
The specific steps of the method are as follows:
In the first step, face detection and key feature point extraction are performed on a correctly received frame. The face detection result gives the approximate face region; after face verification the correct face region is obtained, and a suitable facial feature point extraction algorithm is applied inside this region to extract the key facial feature points. In the invention the facial feature points are the inner and outer corners of the left and right eyes, the left and right mouth corners, the base of the nose, and the intersections of the vertical mid-line of the face with the eye-corner line and with the mouth-corner line, as shown in Fig. 1. Face detection may use methods such as skin-colour detection or AdaBoost; verification may use criteria such as the length-width ratio of the face region or its fill rate; and facial feature point extraction may use methods such as template matching, ASM or AAM. On this basis, the position and shape information of the corresponding face are determined, the 3D face model is adjusted automatically, and texture repair and mapping are performed using the video image. The 3D face model is used to represent the specific face in the video frame; it is usually obtained by adjusting one of the widely used generic face models, and should be as simple as possible while remaining undistorted and realistic. The embodiment of the invention adopts a simplified Candide-3 wireframe model, shown in Fig. 2. Other models can also be used; the Mike model, for example, is also a good choice.
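As an illustration of the detection and verification step just described, the following hedged sketch uses an AdaBoost (Haar cascade) detector and a simple length-width ratio check; the cascade file name and the accepted ratio range are assumptions for the sketch, not values specified by the patent.

```python
import cv2

def detect_face_region(gray_frame,
                       cascade_path="haarcascade_frontalface_default.xml",
                       ratio_range=(0.6, 1.4)):
    """Return candidate face rectangles that pass a length-width ratio check."""
    detector = cv2.CascadeClassifier(cascade_path)
    candidates = detector.detectMultiScale(gray_frame, scaleFactor=1.1,
                                           minNeighbors=5)
    verified = []
    for (x, y, w, h) in candidates:
        ratio = w / float(h)
        if ratio_range[0] <= ratio <= ratio_range[1]:  # length-width check
            verified.append((x, y, w, h))
    return verified
```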
A standard 3D face model must be rotated, scaled and translated before it matches the face in the video frame, and these rotation, scaling and translation parameters have to be derived by analysing the position and shape information of the face in the frame. Position estimation usually refers to determining the orientation of the corresponding three-dimensional object from the input two-dimensional image; shape estimation establishes the correspondence in shape between the input two-dimensional image and the 3D face model. A face generally involves six degrees of freedom: rotation about the X, Y and Z axes and translation along the X, Y and Z axes. In the invention, position estimation considers only the rotation part, while shape estimation considers only the translation part together with the global and local adjustment of the face model. Both parts are obtained mainly by analysing the geometric relations of the key facial feature points and the near symmetry of the face, as shown in Fig. 3.
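The following is a minimal, hedged sketch of how the rotation part could be read off the key feature points: the roll follows from the slope of the eye-corner line, and a rough yaw estimate follows from the left-right asymmetry of the eye corners about the base of the nose. The formulas are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def estimate_rotation(left_eye_outer, right_eye_outer, nose_base):
    le = np.asarray(left_eye_outer,  dtype=float)
    re = np.asarray(right_eye_outer, dtype=float)
    nb = np.asarray(nose_base,       dtype=float)
    # roll: in-plane rotation of the eye-corner line
    roll = np.arctan2(re[1] - le[1], re[0] - le[0])
    # yaw: compare distances from the nose base to each outer eye corner;
    # a frontal, near-symmetric face gives dl ~ dr and yaw ~ 0
    dl, dr = np.linalg.norm(nb - le), np.linalg.norm(nb - re)
    yaw = np.arctan2(dr - dl, dr + dl)
    return roll, yaw
```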
Automatic adjustment of the 3D face model means adjusting the model according to the determined position and shape information so that it matches the face in the video frame, as shown in Fig. 4. Texture mapping is the process of obtaining the texture values of the corresponding regions of the 3D model from the video frame in order to produce a realistic model, as shown in Fig. 5.
Because the face in the video frame is in many cases not frontal or nearly frontal, texture stretching appears when the mapped 3D face model is rotated. To increase realism and improve the quality of the subsequent concealment, the unmapped part must be repaired by some method; the invention uses the symmetry of the face and copies the texture-rich side onto the other side, as shown in Fig. 6. Even between two adjacent frames, illumination and other environmental factors can make the texture of the face model differ considerably, especially in regions of interest such as the eyes, nose and mouth, so the texture information of the 3D face model must be updated regularly. Based on the regions marked on the 3D model, the texture of the regions of interest can be updated frequently, while other parts such as the cheeks and beard are updated as circumstances require, for example every 5 or every 10 frames in the invention, as shown in Fig. 7.
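The update schedule just described can be sketched as follows; the 5-frame interval is one of the two intervals mentioned above, and the region names are placeholders standing in for the model's marked triangle groups.

```python
# Regions of interest refresh on every correctly received frame; the rest of
# the texture refreshes only every UPDATE_INTERVAL_OTHER frames.
REGIONS_OF_INTEREST = {"eyes", "nose", "mouth"}
UPDATE_INTERVAL_OTHER = 5

def regions_to_update(frame_index, all_regions):
    update = {r for r in all_regions if r in REGIONS_OF_INTEREST}
    if frame_index % UPDATE_INTERVAL_OTHER == 0:
        update |= set(all_regions)  # full refresh this frame
    return update
```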
In the second step, 3D-face-model-based error concealment is performed on the erroneous frame.
This is the core of the invention and consists mainly of the following components.
1. Analysing the erroneous frame and judging whether the face region is mostly lost or only partially lost. This judgement makes full use of the correlation between macroblocks of consecutive frames. One way to determine the set of face-region macroblocks, for example, is to take the four sub-macroblocks at the upper-left, lower-left, upper-right and lower-right corners of the face region; their positions in the previous frame, combined with the motion vectors (MVs) of the current frame, are used to predict the loss situation of the other macroblocks in the same region of the current erroneous frame. The decision process is illustrated in Fig. 8.
2. Determining the 3D face model used to conceal the damaged macroblocks. When the face region is entirely or mostly lost, the current frame cannot provide enough information to determine the position and shape of the specific face; in that case the position and shape estimates obtained from motion estimation of the current-frame background are used to adjust and update the 3D face model, or the 3D face model of the previous frame is reused directly. When the face region is only partially lost, the damaged region is first pre-concealed in the temporal domain; face detection and feature point localisation algorithms are applied to the pre-concealed result to extract the key feature points, keeping the feature points on correctly received macroblocks, while those on macroblocks that were not correctly received are estimated from the corresponding feature points of the previous frame. Position and shape information is then determined from the key feature points, and the 3D face model is further adjusted and updated using the available texture information. Several temporal pre-concealment methods can be used; one of them is pre-concealment by motion vector extrapolation: each sub-macroblock of the previous frame is assumed to move into the current frame along its MV, so that each sub-macroblock of the current frame may be covered by several sub-macroblocks moved from the previous frame, and the extrapolated motion vector of a current sub-macroblock is taken as the motion vector of the previous-frame sub-macroblock that covers the largest number of its pixels. The definition of the extrapolated motion vector is illustrated in Fig. 9.
3. Concealing the face region by mapping the 3D face model, while the background is concealed with a conventional method. The 3D face model is moved to the corresponding position of the current erroneous frame, adjusted appropriately, and mapped for concealment, so that the original damaged macroblocks are replaced by new macroblocks; the remaining damaged background macroblocks are still concealed by conventional methods such as spatial- or temporal-domain error concealment. The mapping and concealment process is shown in Fig. 10.
4. Smoothing the boundary between the face and the background after concealment. Because the position and shape estimates above may be inaccurate, block artefacts that degrade the subjective visual quality may appear at the boundary between the concealed face and the background, so some smoothing measure is needed. The smoothing makes full use of the pixel values of the face edge macroblocks and their neighbouring macroblocks in the previous and current frames. One possible method is: compare the SAD between an edge macroblock, the co-located macroblock of the previous frame and the surrounding macroblocks; if it is below a threshold, the two macroblocks are close and the co-located previous-frame macroblock is used for smoothing; if the difference is large, smoothing is performed using the neighbouring macroblocks of the current frame. The smoothing process is illustrated in Fig. 11, and a sketch of the decision is given below.
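A hedged sketch of the SAD-based smoothing decision described in point 4; the threshold value and the simple blending toward the chosen reference are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def smooth_edge_mb(cur_mb, prev_mb, neighbour_mbs, sad_thresh=1000):
    """Smooth one face/background edge macroblock after concealment."""
    sad = np.abs(cur_mb.astype(int) - prev_mb.astype(int)).sum()
    if sad < sad_thresh:
        # close to the previous frame: smooth with the co-located macroblock
        reference = prev_mb.astype(float)
    else:
        # otherwise smooth with the neighbouring macroblocks of this frame
        reference = np.mean([m.astype(float) for m in neighbour_mbs], axis=0)
    # blend the concealed macroblock toward the chosen reference
    return (0.5 * cur_mb + 0.5 * reference).astype(cur_mb.dtype)
```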
Embodiment:
To illustrate the different processing measures of the invention in the two cases, this example assumes that the previous frame is received correctly; the two error cases of the current erroneous frame are shown in Fig. 12. All remaining macroblocks are received correctly. The steps of the method for this example are described below.
1. Processing of the correctly received frame
1.1 Face detection
Face detection is performed on the frame to obtain the approximate face region. To make the detection result more useful for the subsequent steps, the face region is verified and only the correct face region is kept. Fig. 13 shows the face region obtained after face detection and verification on the correctly received frame.
1.2 Key feature point extraction
Based on the face detection result, the positions of the key feature points inside the verified face region are analysed further. The goal is to extract the inner and outer corners of the left and right eyes, the left and right mouth corners, the base of the nose, and the intersections of the vertical mid-line of the face with the eye-corner line and with the mouth-corner line. These points lie on relatively large feature areas of the face and are comparatively easy to extract. Fig. 14 shows the key feature point extraction result for the face region.
1.3 Automatic adjustment
From the coordinates of the key feature points, combined with the 3D face position and shape estimation method, the position and shape parameters of the model relative to the actual face are computed, and the 3D face model is adjusted automatically. Fig. 15 shows the automatic adjustment result of the 3D face model.
1.4 Texture mapping and repair
The automatic adjustment establishes the correspondence between the face texture and the model, so texture mapping can be carried out, with repair performed at the same time as mapping. Fig. 16 shows the texture mapping result of the 3D face model.
1.5 Generation of the basic 3D face model
After texture mapping and repair, the vertices, triangular patches and corresponding texture information of the 3D model are stored, forming the basic 3D face model, which can then be rotated, translated and scaled in three-dimensional space. Fig. 17 shows multi-angle views of the basic 3D face model.
2. Processing of the current erroneous frame (case 1)
2.1 Detecting the loss situation of the face-region macroblocks
For convenience of explanation, assume the correctly received frame is the previous frame of the current frame. The face label range of the correctly received frame is narrowed according to the normal proportion of the face within the head-and-shoulder region (generally in [0.6, 2]) so that it contains only the face and avoids other interference. The left part of Fig. 18 shows the resulting face region marking of the previous frame.
From this narrowed region marking, the four sub-macroblocks at the upper-left, lower-left, upper-right and lower-right corners of the face region are obtained; combined with the motion vectors computed from the non-erroneous macroblocks of the current frame, the positions of these four sub-macroblocks in the current erroneous frame are obtained, and the loss situation of the macroblocks in the current-frame face region can then be estimated, as sketched below. The right part of Fig. 18 shows how the loss situation of the current erroneous frame is judged using the previous frame.
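A hedged sketch of this loss-extent decision follows; the 0.5 threshold separating "mostly lost" from "partially lost" is an assumption, since the patent only names the two categories without fixing a numeric boundary.

```python
def face_loss_extent(face_rect_prev, corner_mvs, lost_mask, mb_size=16,
                     major_loss_thresh=0.5):
    """Project the previous-frame face rectangle into the current frame using
    the corner sub-macroblock motion vectors, then classify the loss extent."""
    x, y, w, h = face_rect_prev
    # average the corner motion vectors to shift the face rectangle
    dx = sum(mv[0] for mv in corner_mvs) / len(corner_mvs)
    dy = sum(mv[1] for mv in corner_mvs) / len(corner_mvs)
    x, y = int(round(x + dx)), int(round(y + dy))
    total = lost = 0
    for my in range(y // mb_size, (y + h) // mb_size + 1):
        for mx in range(x // mb_size, (x + w) // mb_size + 1):
            if 0 <= my < len(lost_mask) and 0 <= mx < len(lost_mask[0]):
                total += 1
                lost += lost_mask[my][mx]   # 1 if that macroblock is lost
    return "major" if total and lost / total > major_loss_thresh else "minor"
```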
2.2 Determining the 3D face model used for concealment
From the loss situation of the macroblocks in the predicted face region, it is judged that most of the macroblocks are lost, so the concealment strategy is to conceal with the 3D face model of the previous frame.
2.3 Model adjustment strategy
Because there are still differences between the current frame and the previous frame, such as a shift of the model centre point and a change of the model orientation, while the model size can be assumed to remain essentially unchanged, the adjustment takes the centre of the predicted current-frame face rectangle as the model mid-point and adjusts the model orientation from the previous-frame face region and the predicted current-frame region. Fig. 19 shows, for the current erroneous frame, the position of the face in the previous frame together with its predicted position in this frame; the lower rectangle is the predicted position in this frame. Fig. 19 and formula (1) give one adjustment method.
Suppose the face region in the figure has length a and width b, the left boundaries of the two rectangles are a distance l apart, and the lower boundaries are a distance s apart. From the relative position of the two rectangles, the face can be judged to be rotating towards the lower left, and the rotation angles about the Y and Z axes are computed as in formula (1).
θ = arctan(l/a),  δ = arctan(s/b)    (1)
Other motion cases are handled analogously; a worked form of formula (1) is given below. After this adjustment, Fig. 20 shows the corresponding model.
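The small helper below applies formula (1) directly; variable names mirror the text, and the function itself is only an illustration of the computation.

```python
import math

def rotation_adjustment(a, b, l, s):
    """Rotation adjustments from formula (1): a, b are the face-rectangle
    length and width; l, s are the offsets between the previous-frame
    rectangle and the predicted current-frame rectangle."""
    theta = math.atan(l / a)  # rotation angle about the Y axis
    delta = math.atan(s / b)  # rotation angle about the Z axis
    return theta, delta
```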
2.4 Texture update
When the adjusted 3D face model is mapped onto the damaged face region of the current frame, some macroblocks of that region have been received correctly, and using their texture is important for improving the realism of the 3D face model when this frame is processed. Texture update is therefore performed for a triangular patch of the model only when it lies entirely on a single, correctly received macroblock; patches that straddle macroblocks are not updated. The darker triangular region at the chin in Fig. 21 illustrates the texture update region (the updated region is small in this example; other cases update more).
3. Processing of the current erroneous frame (case 2)
3.1 Detecting the loss situation of the face-region macroblocks
The processing here is the same as in section 2.1.
3.2 Determining the 3D face model used for concealment
From the loss situation of the macroblocks in the predicted face region, it is judged that only a small part of the face region is lost.
3.2.1 Temporal pre-concealment
For each damaged sub-macroblock of the current frame, the motion vectors of the previous-frame sub-macroblocks that move onto the same position are examined and the number of pixels each of them covers is computed; the motion vector of the sub-macroblock covering the most pixels is chosen as the pre-concealment motion vector of the current sub-macroblock, and if the computed coverage is 0 the pre-concealment motion vector of the current sub-macroblock is set to 0, as sketched below. Fig. 22 shows the pre-concealment selection for the damaged sub-macroblocks; the motion vectors are drawn as dashed lines.
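A compact sketch of this motion vector extrapolation follows; the 8x8 sub-macroblock size and the forward-projection convention are assumptions made for the sketch.

```python
import numpy as np

def extrapolate_mvs(prev_mvs, frame_h, frame_w, blk=8):
    """prev_mvs has shape (rows, cols, 2); returns one extrapolated MV per
    current-frame sub-macroblock (zero where nothing projects onto it)."""
    rows, cols = frame_h // blk, frame_w // blk
    coverage = np.zeros((rows, cols), dtype=int)
    best_mv = np.zeros((rows, cols, 2), dtype=int)
    for by in range(prev_mvs.shape[0]):
        for bx in range(prev_mvs.shape[1]):
            mvx, mvy = int(prev_mvs[by, bx, 0]), int(prev_mvs[by, bx, 1])
            x0, y0 = bx * blk + mvx, by * blk + mvy  # shifted block position
            # the shifted block overlaps at most four current sub-macroblocks
            for cy in (y0 // blk, y0 // blk + 1):
                for cx in (x0 // blk, x0 // blk + 1):
                    if not (0 <= cy < rows and 0 <= cx < cols):
                        continue
                    ox = max(0, min(x0 + blk, (cx + 1) * blk) - max(x0, cx * blk))
                    oy = max(0, min(y0 + blk, (cy + 1) * blk) - max(y0, cy * blk))
                    if ox * oy > coverage[cy, cx]:
                        coverage[cy, cx] = ox * oy
                        best_mv[cy, cx] = (mvx, mvy)
    return best_mv
```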
The damaged macroblocks of the current frame are then pre-concealed according to the obtained pre-concealment motion vectors. Several concealment methods are possible; this embodiment uses multi-weighted boundary matching, illustrated in Fig. 23, where D_U, D_D, D_L and D_R are the boundary matching errors of the block on its top, bottom, left and right sides. The temporal pre-concealment result is shown in Fig. 24.
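A hedged sketch of the boundary matching error D_U + D_D + D_L + D_R used to score one candidate replacement block; equal weights and the availability of all four neighbouring boundaries in the current frame are assumptions of the sketch.

```python
import numpy as np

def boundary_match_error(cur, prev, x0, y0, mv, blk=16):
    """Score a candidate MV for the damaged block at (x0, y0): compare the
    outer boundary of the candidate block (taken from the previous frame)
    with the inner boundary pixels of the neighbouring current-frame blocks."""
    dx, dy = mv
    cand = prev[y0 + dy:y0 + dy + blk, x0 + dx:x0 + dx + blk].astype(int)
    d_u = np.abs(cand[0, :]  - cur[y0 - 1,       x0:x0 + blk].astype(int)).sum()
    d_d = np.abs(cand[-1, :] - cur[y0 + blk,     x0:x0 + blk].astype(int)).sum()
    d_l = np.abs(cand[:, 0]  - cur[y0:y0 + blk,  x0 - 1     ].astype(int)).sum()
    d_r = np.abs(cand[:, -1] - cur[y0:y0 + blk,  x0 + blk   ].astype(int)).sum()
    return d_u + d_d + d_l + d_r  # the candidate with the smallest sum wins
```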
3.2.2 Face detection and feature point extraction
Although the face part of the pre-concealed result is of limited quality, it is still clearly distinguishable from the background, so an ordinary face detection algorithm can still locate the face position fairly correctly. For feature point extraction, the feature point extraction algorithms mentioned above are used; only the feature points lying on correctly received macroblocks are kept, and the remaining feature points are predicted by motion estimation from the last correct frame. Fig. 25 shows the result of pre-concealing the current erroneous frame with the motion vector extrapolation method, together with the face detection and feature point extraction results on the pre-concealed frame; the solid dark points are those computed from the motion vectors (MVs) predicted from the correctly received feature points.
3.3 Model adjustment strategy
The processing here is the same as in section 1.3.
3.4 Texture update
The processing here is the same as in section 2.4.
4. Error concealment
The most recently updated 3D face model is the model currently available for concealment. It is moved to the appropriate position of the current frame and the whole face is mapped to carry out the error concealment. Fig. 26 shows the error concealment results of the current erroneous frame with the 3D model in the two error cases.
5. Smoothing
As can be seen from Fig. 26, because other concealment methods are used for the non-face region, obvious block artefacts inevitably appear at the boundary, and smoothing is then carried out according to the process of Fig. 11. Fig. 27 shows the result of smoothing the concealed current frame in the two error cases, i.e. the final concealment result of the method combining the 3D face model. It can be seen that the concealment method of the invention recovers the face region well and reflects the objective content of the video image more faithfully than a traditional temporal-domain concealment method.
Based on the basic scheme of the method of the invention, many derived variants and combinations can be produced in practice according to the actual circumstances; in view of the predictability of these variants and combinations, they are not described further here.

Claims (3)

1. A video error concealment method combining a 3D human face model, which conceals errors that occur during video transmission in video telephony applications so as to improve the subjective visual quality of the damaged region of the face image, comprising the following steps:
1) performing face detection and feature point marking on a correctly received frame, computing the position and shape information of the corresponding 3D face model, automatically adjusting the 3D face model, and performing texture repair and mapping using the video image;
2) performing 3D-face-model-based error concealment on the erroneous frame: making full use of the correlation between macroblocks of consecutive frames, the corresponding position in the previous frame and the motion vectors (MVs) of the current frame are used to predict the loss situation of the other macroblocks in the same region of the current erroneous frame, and the face region is judged to be either mostly lost or only partially lost;
3) when the face region is entirely or mostly lost, the current frame cannot provide enough information to determine the position and shape of the specific face; the position and shape estimates obtained from motion estimation of the current-frame background are used to adjust and update the 3D face model, or the 3D face model of the previous frame is reused directly; then proceed to step 5);
4) when the face region is only partially lost, the damaged region is first pre-concealed in the temporal domain; face detection and feature point localisation algorithms are applied to the pre-concealed result to extract the key feature points, keeping the feature points on correctly received macroblocks, while those on macroblocks that were not correctly received are estimated from the corresponding feature points of the previous frame; position and shape information is then determined from the key feature points, and the 3D face model is further adjusted and updated using the available texture information; then proceed to step 5);
5) the 3D face model is moved to the corresponding position of the current erroneous frame, adjusted appropriately, and mapped for concealment, so that the original damaged macroblocks are replaced by new macroblocks;
6) after concealment, the boundary between the face and the background is smoothed.
2. The video error concealment method combining a 3D human face model according to claim 1, characterised in that the temporal pre-concealment of the damaged region when the face region is only partially lost uses motion vector extrapolation.
3. The video error concealment method combining a 3D human face model according to claim 1, characterised in that the smoothing of the boundary between the face and the background after concealment compares the SAD between an edge macroblock, the co-located macroblock of the previous frame and the surrounding macroblocks; if the SAD is below a predetermined threshold, smoothing is performed with the co-located previous-frame macroblock; if the difference is large, smoothing is performed using the neighbouring macroblocks of the current frame.
CN 200810046012 2008-09-09 2008-09-09 Video error masking method combined with 3D human face model Expired - Fee Related CN101360246B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200810046012 CN101360246B (en) 2008-09-09 2008-09-09 Video error masking method combined with 3D human face model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200810046012 CN101360246B (en) 2008-09-09 2008-09-09 Video error masking method combined with 3D human face model

Publications (2)

Publication Number Publication Date
CN101360246A CN101360246A (en) 2009-02-04
CN101360246B true CN101360246B (en) 2010-06-02

Family

ID=40332567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200810046012 Expired - Fee Related CN101360246B (en) 2008-09-09 2008-09-09 Video error masking method combined with 3D human face model

Country Status (1)

Country Link
CN (1) CN101360246B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102497530A (en) * 2011-05-09 2012-06-13 苏州阔地网络科技有限公司 Secure transmission method and system for image in community network
WO2021108171A1 (en) * 2019-11-27 2021-06-03 Sony Interactive Entertainment Inc. Systems and methods for decoding and displaying lost image frames using motion compensation

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102005036B (en) * 2010-12-08 2012-05-09 上海杰图软件技术有限公司 Method and device for automatically removing tripod afterimage in panoramic image
CN102005060B (en) * 2010-12-08 2012-11-14 上海杰图软件技术有限公司 Method and device for automatically removing selected images in pictures
CN102238362A (en) * 2011-05-09 2011-11-09 苏州阔地网络科技有限公司 Image transmission method and system for community network
CN103999096B (en) * 2011-12-16 2017-12-08 英特尔公司 Picture quality for the reduction of video data background area
CN102447910A (en) * 2012-01-06 2012-05-09 南京邮电大学 H.264 coding video data wireless transmission method and wireless video monitoring system
EP2960864B1 (en) * 2014-06-23 2018-12-05 Harman Becker Automotive Systems GmbH Device and method for processing a stream of video data
CN106709404B (en) * 2015-11-16 2022-01-04 佳能株式会社 Image processing apparatus and image processing method
CN108377359B (en) * 2018-03-14 2020-08-04 苏州科达科技股份有限公司 Video error code resisting method and device, electronic equipment and storage medium
CN108629333A (en) * 2018-05-25 2018-10-09 厦门市美亚柏科信息股份有限公司 A kind of face image processing process of low-light (level), device, equipment and readable medium
CN110581974B (en) * 2018-06-07 2021-04-02 中国电信股份有限公司 Face picture improving method, user terminal and computer readable storage medium
CN109448093B (en) * 2018-10-25 2023-01-06 广东智媒云图科技股份有限公司 Method and device for generating style image
CN111243099B (en) * 2018-11-12 2023-10-27 联想新视界(天津)科技有限公司 Method and device for processing image and method and device for displaying image in AR (augmented reality) equipment
CN110213521A (en) * 2019-05-22 2019-09-06 创易汇(北京)科技有限公司 A kind of virtual instant communicating method
CN110536095A (en) * 2019-08-30 2019-12-03 Oppo广东移动通信有限公司 Call method, device, terminal and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1325662A (en) * 2001-07-13 2001-12-12 清华大学 Method for detecting moving human face
CN1731417A (en) * 2005-08-19 2006-02-08 清华大学 Method of robust human face detection in complicated background image

Also Published As

Publication number Publication date
CN101360246A (en) 2009-02-04

Similar Documents

Publication Publication Date Title
CN101360246B (en) Video error masking method combined with 3D human face model
CN101937578B (en) Method for drawing virtual view color image
CN109792520A (en) For the method and apparatus using omnidirectional's video coding of most probable mode in adaptive frame
JPH0670301A (en) Apparatus for segmentation of image
JP2008535116A (en) Method and apparatus for three-dimensional rendering
CN104850847B (en) Image optimization system and method with automatic thin face function
CN101593022A (en) A kind of quick human-computer interaction of following the tracks of based on finger tip
CN104602028B (en) A kind of three-dimensional video-frequency B frames entire frame loss error concealing method
TW201328315A (en) 2D to 3D video conversion system
CN106101726B (en) A kind of adaptive hypermedia system restorative procedure that time-space domain combines and system
CN109758756A (en) Gymnastics video analysis method and system based on 3D camera
CN101188772B (en) A method for hiding time domain error in video decoding
CN109274883A (en) Posture antidote, device, terminal and storage medium
JPH0662385A (en) Refresh-corrected image-encoding sub- assembly of data to be encoded and decoding subassembly of image encoded by above subassembly
Fukuhara et al. 3-D motion estimation of human head for model-based image coding
CN111582036A (en) Cross-view-angle person identification method based on shape and posture under wearable device
KR101440620B1 (en) Method and apparatus for estimation of object boundaries in moving picture
CN107293162A (en) Move teaching auxiliary and device, terminal device
Xu et al. Detecting head pose from stereo image sequence for active face recognition
CN104093034B (en) A kind of H.264 video flowing adaptive hypermedia system method of similarity constraint human face region
Wang et al. Nerfcap: Human performance capture with dynamic neural radiance fields
CN107767393A (en) A kind of scene flows method of estimation towards mobile hardware
CN103220533A (en) Method for hiding loss errors of three-dimensional video macro blocks
CN101370145B (en) Shielding method and apparatus for image frame
Rurainsky et al. Template-based eye and mouth detection for 3D video conferencing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100602

Termination date: 20120909