CN103345619A - Self-adaption correcting method of human eye natural contact in video chat - Google Patents

Self-adaption correcting method of human eye natural contact in video chat Download PDF

Info

Publication number
CN103345619A
CN103345619A CN2013102609043A CN201310260904A
Authority
CN
China
Prior art keywords
eyes
eye
image
looking
human eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013102609043A
Other languages
Chinese (zh)
Inventor
郭燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI YONGCHANG INFORMATION TECHNOLOGY CO LTD
Original Assignee
SHANGHAI YONGCHANG INFORMATION TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI YONGCHANG INFORMATION TECHNOLOGY CO LTD filed Critical SHANGHAI YONGCHANG INFORMATION TECHNOLOGY CO LTD
Priority to CN2013102609043A priority Critical patent/CN103345619A/en
Publication of CN103345619A publication Critical patent/CN103345619A/en
Pending legal-status Critical Current

Links

Images

Abstract

The invention discloses an adaptive correction method for natural eye contact in video chat. In the correction method, the eye regions of a template picture in which the eyes are in a natural eye-contact state are used to replace, in real time, the eye regions of the video images during the video chat, thereby achieving the correction. The adaptive correction method can correct the eye contact of both parties during a video chat without adding any other hardware, so that natural eye contact is achieved; the whole method is low in implementation cost, simple to operate, and broad in range of use and practicality.

Description

Adaptive correction method for natural eye contact in video chat
Technical field
The present invention relates to the technical fields of computer vision and image processing, and in particular to an adaptive correction method for natural eye contact in video chat.
Background technology
With the development of society and the economy, the Internet has gradually become dominant worldwide; networks have grown increasingly mature and have shortened the distance between people. Video chat provides remote face-to-face communication and is widely used because it is instant, convenient and efficient. However, because users usually look at the window showing the other party's picture rather than directly at the camera, it is difficult during a video chat for the two parties to achieve natural, eye-to-eye contact; the exchange is instead often side-faced, looking down or looking up. Video chat therefore fails to deliver a true face-to-face exchange and lacks the warmth of natural eye contact.
At present little research has been carried out on this problem in the field, and the available technical solutions are few. In 2010 the company Iris released a mirror baffle carrying a camera-reflection system; placed on the computer display, it uses the reflective framing property of the mirror baffle so that the video chat parties can look at each other and achieve the effect of eye contact. The product is relatively expensive, and the mirror baffle is bulky, inconvenient to carry, and may interfere with typing when placed on the display of a notebook computer.
The few existing solutions all start from the hardware side, for example modifying the camera or adding a baffle. Although these methods can achieve an eye-contact effect to some extent, they are inconvenient to carry, expensive, and bring other troubles to the user.
Summary of the invention
The present invention addresses the problem that in current video chat the two parties cannot achieve eye contact, and provides an adaptive correction method for natural eye contact in video chat. Without adding any other hardware devices, the method achieves natural eye contact between both parties during the video chat and the effect of face-to-face communication.
In order to achieve the above object, the present invention adopts following technical scheme:
In the adaptive correction method for natural eye contact in video chat, the correction method replaces, in real time, the eye regions in the video images during the video chat with the eye regions of a template picture in which the eyes are in a natural eye-contact state, thereby achieving the correction.
In a preferred embodiment of the present invention, the concrete steps of the correction method are as follows:
(1) take a photograph in which the eyes are in the natural eye-contact state as the template picture, and detect the approximate eye regions in the template picture;
(2) within the detected approximate eye regions, accurately extract the eye regions to obtain the precise eye regions, which serve as the standard template for the replacement correction;
(3) take a photograph with the eyes closed, and obtain the eye height when the eyes are closed;
(4) grab a single video frame, and extract the precise eye regions in this image;
(5) judge whether the eyes in the video image are closed; if they are not closed, carry out the next step, otherwise go back to (4) and continue processing;
(6) use the standard template obtained in step (2) to replace the eye regions in the video image, until the video chat ends.
Further, the detailed process of step (1) is as follows:
(11) the eyes gaze naturally at the camera, and the photograph taken at this moment is used as the template picture, in which the eyes are in the natural eye-contact state;
(12) detect the face region in the template image and extract the rectangular area containing the face, narrowing the range of eye detection;
(13) within the extracted rectangular area, according to the position of the eyes in the face, perform eye detection on the upper half of the face region and extract the approximate regions containing the two eyes.
Further, the detailed process of step (2) is as follows:
(21) binarize the approximate eye regions: convert the approximate eye regions to gray scale, introduce histogram specification into the selection of the binarization threshold, first perform specification processing on the gray-level histogram, and then select the threshold with the iterative method;
(22) remove the interference information from the binary image: search the left part and the bottom of the left-eye region and the right part and the bottom of the right-eye region, and remove the black regions found there that are produced by interference such as hair and eyebrows;
(23) obtain the circumscribed rectangle of each eye: perform edge detection on the binary image from which the interference has been removed; search column by column from the left side of the image for the left eye-corner point and column by column from the right side for the right eye-corner point; search downward, row by row, from the first row of the image for the highest point of the upper eyelid and record its row, and search upward from the last row of the image for the lowest point of the lower eyelid and record its row; extract the circumscribed rectangle of the eye from the two corner points and the straight lines through the highest and lowest eyelid points, thereby accurately extracting the eye region.
Further, the detailed process of step (3): obtain the picture in the closed-eye state and extract the approximate eye regions, specifically using the schemes of steps (12) and (13); then use the scheme of step (2) to extract the precise eye regions in the closed-eye state and obtain the eye height L2 in the closed-eye state.
Further, the detailed process of step (4): grab a single frame from the video, extract the approximate eye regions using the schemes of steps (12) and (13), then accurately extract the eye regions using the scheme of step (2), and obtain the eye height L.
Further, in step (5) the state of the eyes in the single frame is judged from the eye height in the frame: if the eye height in the frame is greater than the eye height with the eyes closed, the eyes are judged to be in a non-closed state and the next processing step is carried out; otherwise the eyes are judged to be closed, no processing is performed, and the next frame is handled.
Further again, in step (5) the state of the eyes may be determined from both the eye height and the distance between the left and right eye-corner points in the image, and this determines whether the eye-region replacement is carried out.
Further, in step (6) the standard template used for the replacement is resized to the same size as the eye region in the single frame and then substituted for it, completing the correction of the eye-contact state.
The adaptive correction method for natural eye contact in video chat provided by the present invention can adjust the gaze direction of the eyes in real time during a video chat, so that the two parties appear to gaze at each other. Compared with the prior art, the correction is realized purely by software processing, overcoming the need for hardware beyond an ordinary camera; the accuracy is high, the cost is low, the operation is simple, and the range of use and practicality are expanded.
Description of drawings
The present invention is further described below in conjunction with the drawings and specific embodiments.
Fig. 1 is the general flow chart of the method of the present invention;
Fig. 2 is the flow chart of the accurate extraction of the eye region;
Fig. 3 shows the binarization result of an eye region after histogram specification;
Fig. 4 shows the eye region after the interference information has been removed;
Fig. 5 is the precisely extracted eye region.
Embodiment
To make the technical means, creative features, objects and effects achieved by the present invention easy to understand, the present invention is further described below in conjunction with the specific drawings.
The adaptive correction method for natural eye contact in video chat provided by the present invention can, without adding any other hardware devices, achieve natural eye contact between the two parties during the video chat and the effect of face-to-face communication. By processing the video images, the method replaces, in real time, the eye regions in the video images during the video chat with the eye regions of a template picture in which the eyes are in the natural eye-contact state, thereby achieving the correction.
Referring to Fig. 1, the concrete scheme of the present invention based on the above principle is as follows:
First, a picture is taken with both eyes gazing at the camera so that the natural eye-contact state is reached; this picture serves as the template picture.
Then, face-detection and eye-detection classifiers are used to find the approximate regions of the eyes, and by extracting the upper and lower eyelids and the eye-corner points, the circumscribed rectangles of the left and right eyes are extracted, giving the precise eye regions that serve as the standard template. In the standard template the eye height, that is, the distance between the highest point of the upper eyelid and the lowest point of the lower eyelid, is L1.
At the same time, a picture with the eyes closed is taken as the closed-eye picture, and the closed-eye regions are extracted from it with the same method. In the closed-eye regions the eye height is L2 and the distance between the left and right eye-corner points is D2.
During the video chat, single frames are grabbed and processed, and the eye regions in each frame are accurately extracted with the above scheme.
Whether to replace the eye region is determined by judging the eye height L in the image. To ensure accuracy, the decision may be based on both the eye height L and the distance D between the left and right eye-corner points in the image. Let L3 = L*(D2/D); if L2 < L3, the eyes are not in the closed-eye state, so the standard template is resized to the size of the eye region in the image and then substituted for it. Otherwise the eyes in the image are closed and no replacement is needed.
Based on the above scheme, the present invention is further illustrated by the following example (referring to Fig. 2).
First step: obtaining and processing the template picture in the natural eye-contact state.
During a video chat the natural eye-contact state is reached only when the eyes gaze at the camera, so an eye template obtained while gazing at the camera is needed first. Before the video chat, the user gazes naturally at the camera and a photograph is taken at that moment as the template image; in this template image the eyes are in the natural eye-contact state.
A Haar face classifier is used to perform face detection in the template image, obtaining the face region and narrowing the area for eye detection. A Haar eye classifier is then applied and, according to the general position of the eyes in the face, the approximate region of the left eye (a rectangular area containing the left eye) is extracted from the left side of the upper half of the face region. In the same way, the approximate region of the right eye (a rectangular area containing the right eye) is extracted from the right side of the upper half of the face region, giving the approximate eye regions in the template image.
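By way of illustration only, the following sketch shows one possible realization of this face-then-eye localization with OpenCV's stock Haar cascades; the cascade file names, detection parameters and the simple left/right split of the upper face half are assumptions for the sketch, not part of the disclosed method.

    # Illustrative sketch: Haar-cascade face detection followed by eye detection
    # in the upper half of the face rectangle (cascade files and parameters assumed).
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    def approximate_eye_regions(bgr_image):
        """Return (left_eye_roi, right_eye_roi) as gray-scale sub-images, or None."""
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]                       # rectangular area containing the face
        upper_half = gray[y:y + h // 2, x:x + w]    # the eyes lie in the upper half
        half_w = upper_half.shape[1] // 2
        rois = []
        for side in (upper_half[:, :half_w], upper_half[:, half_w:]):
            eyes = eye_cascade.detectMultiScale(side, scaleFactor=1.1, minNeighbors=3)
            if len(eyes) == 0:
                return None
            ex, ey, ew, eh = eyes[0]                # rectangular area containing one eye
            rois.append(side[ey:ey + eh, ex:ex + ew])
        return rois[0], rois[1]                     # approximate left-eye and right-eye regions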
Second step: gray-scale conversion and binarization of the approximate eye regions.
The colour image of the extracted approximate eye regions is converted to gray scale for easier processing. When binarizing the gray image the most important issue is the choice of the threshold: the binarization threshold directly determines how well the binary image preserves the contour features, and in the present invention the eye contour should be reflected as completely as possible in the binary image. Because the environment and light intensity during a video chat may differ, choosing the threshold is difficult. To solve this problem, approximate eye-region pictures extracted under one lighting condition were first binarized with different thresholding methods; the results showed that binarization with the threshold chosen by the iterative method gave the best image, segmenting the eyes completely while preserving their essential information. The binarization method adopted in the present invention is therefore: introduce histogram specification into the selection of the binarization threshold, first specify the image histogram so that its distribution roughly matches the specified one, then choose the optimal threshold with the iterative method and binarize the processed image. Fig. 3 shows the result of binarizing the approximate left-eye region of a video frame in this way.
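As a rough sketch of this step, the iterative (mean-of-means) threshold selection can be written as follows; the histogram-specification step that precedes it is omitted here, and the convergence tolerance is an assumption.

    # Illustrative sketch: iterative threshold selection and binarization of a
    # gray-scale eye region (histogram specification is assumed to have been applied).
    import cv2

    def iterative_threshold(gray, eps=0.5):
        """Classic iterative method: the threshold converges to the mean of the class means."""
        t = float(gray.mean())                      # initial guess: global mean
        while True:
            low, high = gray[gray <= t], gray[gray > t]
            if low.size == 0 or high.size == 0:
                break
            t_new = 0.5 * (float(low.mean()) + float(high.mean()))
            if abs(t_new - t) < eps:                # converged
                break
            t = t_new
        return t

    def binarize_eye_region(gray_eye):
        t = iterative_threshold(gray_eye)
        # Dark pixels (eye, eyebrow, hair) become 0; the background becomes 255.
        _, binary = cv2.threshold(gray_eye, t, 255, cv2.THRESH_BINARY)
        return binary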
Third step: removal of the interference information in the binary image.
Because the approximate eye regions may contain parts of the eyebrows or hair, which appear as black regions in the binary image, these interference regions must be removed before the eyes can be extracted accurately. Experiments show that the eyebrow information usually appears in the upper part of the picture, the hair information usually appears in the left half of the left eye and the right half of the right eye, and the amount of such information is not large. The method of the present invention is therefore as follows (taking the left eye as an example; the right eye is handled with the same method): with the upper-left corner of the extracted left-eye region as the origin, rows 0 to T-1 of the image are traversed (the size of T is adjusted to the size of the region); if row i (0 <= i < T) contains a pixel of value 0 (a black point in the image), record a1[i] = 1, otherwise a1[i] = 0. Likewise columns 0 to T-1 are traversed; if column i (0 <= i < T) contains a pixel of value 0, record a2[i] = 1, otherwise a2[i] = 0. After the traversal, the first i satisfying a1[i] || a2[i] == 0 is determined, and the first i rows and first i columns of the region are discarded, yielding a region from which interference such as eyebrows and hair has been removed. Fig. 4 shows the result of removing the interference from Fig. 3.
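A minimal sketch of this scan-and-crop removal for the left eye is given below; the function name and the behaviour when no clean row/column index is found within T are assumptions, and the right eye would be handled with the columns mirrored.

    # Illustrative sketch: discard leading rows/columns of the left-eye binary
    # region that still contain black (eyebrow/hair) pixels.
    import numpy as np

    def remove_interference_left(binary, T):
        """binary: 0/255 image of the approximate left-eye region."""
        T = min(T, binary.shape[0], binary.shape[1])
        a1 = [int((binary[i, :] == 0).any()) for i in range(T)]   # row i contains a black pixel
        a2 = [int((binary[:, i] == 0).any()) for i in range(T)]   # column i contains a black pixel
        cut = 0
        for i in range(T):
            if a1[i] == 0 and a2[i] == 0:          # first i with a1[i] || a2[i] == 0
                cut = i
                break
        return binary[cut:, cut:]                  # discard the first i rows and columns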
Fourth step: extraction of the two corner points of each eye.
Edge detection with the Canny algorithm is performed on the binary eye-region image from which the interference has been removed, giving the overall contour of the eye. In the edge-detected image only the eye contour and the iris contour are white and all other areas are black. Based on this characteristic, the method of the present invention is as follows: starting from the first column of the image, the pixels are searched column by column to the right; the first column found to contain a pixel of value 255 gives point Y1, which is the left corner of the eye. Then, starting from the last column of the image, the pixels are searched column by column to the left; the first column found to contain a pixel of value 255 gives point Y2, which is the right corner of the eye.
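A short sketch of this corner search on the Canny edge image follows; the Canny thresholds are assumptions.

    # Illustrative sketch: eye-corner columns from the Canny edge image.
    import cv2
    import numpy as np

    def eye_corner_columns(binary_eye):
        edges = cv2.Canny(binary_eye, 50, 150)           # eye and iris contours become white (255)
        white_cols = np.where((edges == 255).any(axis=0))[0]
        if white_cols.size == 0:
            return None
        y1 = int(white_cols[0])                          # left eye-corner column (Y1)
        y2 = int(white_cols[-1])                         # right eye-corner column (Y2)
        return y1, y2, edges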
Fifth step: obtaining the circumscribed rectangular region of the eye.
Starting from the first row of the image, the pixels are traversed row by row; the first row H1 found to contain a pixel of value 255 is the straight line through the highest point of the upper eyelid. After the traversal, the last row H2 containing a pixel of value 255 is the straight line through the lowest point of the lower eyelid, and the eye height is defined as H2 - H1. From the lines H1 and H2 and the points Y1 and Y2 the circumscribed rectangle of the eye, that is, the precise eye region, is extracted; it is used as the standard template for the replacement correction. Fig. 5 shows the result of accurately extracting the eye of Fig. 4.
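Continuing the sketch above, the eyelid rows and the circumscribed rectangle can be read off the same edge image; the returned tuple layout is an assumption.

    # Illustrative sketch: eyelid rows, eye height and circumscribed rectangle.
    import numpy as np

    def eye_rectangle(edges):
        white_rows = np.where((edges == 255).any(axis=1))[0]
        white_cols = np.where((edges == 255).any(axis=0))[0]
        if white_rows.size == 0 or white_cols.size == 0:
            return None
        h1, h2 = int(white_rows[0]), int(white_rows[-1])   # upper / lower eyelid rows (H1, H2)
        y1, y2 = int(white_cols[0]), int(white_cols[-1])   # left / right eye-corner columns (Y1, Y2)
        return h1, h2, y1, y2

    # The eye height (L1, L2 or L) is h2 - h1, the eye-corner distance (D2 or D) is
    # y2 - y1, and the precise eye region is image[h1:h2 + 1, y1:y2 + 1].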
Sixth step: obtaining and processing the picture in the closed-eye state.
An image is taken in a natural closed-eye state. First the approximate eye regions are obtained with the same processing as in the first step; the approximate eye regions are then processed with the same methods as in the second to fifth steps to extract the precise eye regions, giving the eye height L2 in the closed-eye state and the distance D2 between the left and right eye-corner points.
Seventh step: extraction of the eye regions in the video image and judgement of their state.
During the video chat a single frame is grabbed. At that moment the chat participant's eyes are generally directed at the screen rather than at the camera, so the eyes in the frame are in a non-eye-contact state. First the approximate eye regions in the frame are obtained with the same processing as in the first step; the second to fifth steps are then repeated to extract the precise eye regions of the frame and to obtain the eye height L and the eye-corner distance D in the frame. To preserve the naturalness of the video chat, the present invention does not process closed-eye states; the criterion for a closed eye is whether L is greater than L2, and if L is greater than L2 the eyes are not closed. However, because the distance of the face from the camera varies, the apparent eye size varies, so directly comparing L with L2 can introduce error. To improve the accuracy of the judgement, the present invention uses the formula L3 = L*(D2/D) to obtain the eye height L3 normalized to the same camera distance, where L is the eye height in the frame, D2 is the eye-corner distance in the closed-eye state and D is the eye-corner distance in the frame. If L2 < L3, the eighth step is carried out and then the seventh step is repeated; otherwise this frame is skipped.
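The decision can be summarized in a few lines; the function name and the handling of a degenerate detection (D = 0) are assumptions.

    # Illustrative sketch: closed-eye test with distance normalization.
    def eyes_open(L, D, L2, D2):
        """L, D: eye height and corner distance in the frame; L2, D2: closed-eye values."""
        if D == 0:
            return False                 # degenerate detection, skip this frame
        L3 = L * (D2 / D)                # eye height normalized to the closed-eye camera distance
        return L2 < L3                   # True -> eyes open, perform the replacement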
Eighth step: replacement of the eye regions in the video image.
The eye regions accurately extracted from the template picture in the natural eye-contact state show the eyes in the eye-contact state; they serve as the standard template (the standard template obtained in the first to fifth steps). This standard template is used to replace the eye regions in the single video frame, so that the natural eye-contact state is reached in the video chat. Because the distance between the eyes and the camera varies, the eye size in different images may differ, so the standard template must be scaled to the same size as the eyes in the video image before the replacement; the chat participant's eyes are then in the natural eye-contact state. After the replacement, processing returns to the seventh step and continues until the video chat is ended.
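A minimal sketch of the scaling and substitution, assuming the eye rectangle is given as (x, y, w, h) in frame coordinates and that the frame and template have the same number of channels:

    # Illustrative sketch: resize the standard-template eye and paste it over
    # the precise eye region found in the current frame.
    import cv2

    def replace_eye(frame, eye_rect, template_eye):
        x, y, w, h = eye_rect
        resized = cv2.resize(template_eye, (w, h))   # match the eye size in the frame
        frame[y:y + h, x:x + w] = resized            # in-place substitution of the eye region
        return frame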
The basic principle, principal features and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not restricted to the above embodiment; the embodiment and the description above only illustrate the principle of the present invention, and various changes and improvements may be made without departing from the spirit and scope of the present invention, all of which fall within the claimed scope of the invention. The claimed scope of the present invention is defined by the appended claims and their equivalents.

Claims (9)

1. An adaptive correction method for natural eye contact in video chat, characterized in that the correction method replaces, in real time, the eye regions in the video images during the video chat with the eye regions of a template picture in which the eyes are in a natural eye-contact state, thereby achieving the correction.
2. The adaptive correction method for natural eye contact in video chat according to claim 1, characterized in that the concrete steps of the correction method are as follows:
(1) take a photograph in which the eyes are in the natural eye-contact state as the template picture, and detect the approximate eye regions in the template picture;
(2) within the detected approximate eye regions, accurately extract the eye regions to obtain the precise eye regions, which serve as the standard template for the replacement correction;
(3) take a photograph with the eyes closed, and obtain the eye height when the eyes are closed;
(4) grab a single video frame, and extract the precise eye regions in this image;
(5) judge whether the eyes in the video image are closed; if they are not closed, carry out the next step, otherwise go back to (4) and continue processing;
(6) use the standard template obtained in step (2) to replace the eye regions in the video image, until the video chat ends.
3. The adaptive correction method for natural eye contact in video chat according to claim 2, characterized in that the detailed process of step (1) is as follows:
(11) the eyes gaze naturally at the camera, and the photograph taken at this moment is used as the template picture, in which the eyes are in the natural eye-contact state;
(12) detect the face region in the template image and extract the rectangular area containing the face, narrowing the range of eye detection;
(13) within the extracted rectangular area, according to the position of the eyes in the face, perform eye detection on the upper half of the face region and extract the approximate regions containing the two eyes.
4. The adaptive correction method for natural eye contact in video chat according to claim 2, characterized in that the detailed process of step (2) is as follows:
(21) binarize the approximate eye regions: convert the approximate eye regions to gray scale, introduce histogram specification into the selection of the binarization threshold, first perform specification processing on the gray-level histogram, and then select the threshold with the iterative method;
(22) remove the interference information from the binary image: search the left part and the bottom of the left-eye region and the right part and the bottom of the right-eye region, and remove the black regions found there that are produced by interference such as hair and eyebrows;
(23) obtain the circumscribed rectangle of each eye: perform edge detection on the binary image from which the interference has been removed; search column by column from the left side of the image for the left eye-corner point and column by column from the right side for the right eye-corner point; search downward, row by row, from the first row of the image for the highest point of the upper eyelid and record its row, and search upward from the last row of the image for the lowest point of the lower eyelid and record its row; extract the circumscribed rectangle of the eye from the two corner points and the straight lines through the highest and lowest eyelid points, thereby accurately extracting the eye region.
5. The adaptive correction method for natural eye contact in video chat according to claim 2, characterized in that in step (3) the picture in the closed-eye state is first obtained and the approximate eye regions are extracted; the extracted approximate eye regions are then accurately extracted to obtain the precise closed-eye eye regions, and the eye height in the closed-eye state is obtained.
6. The adaptive correction method for natural eye contact in video chat according to claim 2, characterized in that in step (4) a single frame is first grabbed from the video, the approximate eye regions are detected and extracted from the frame, the extracted approximate eye regions are then accurately extracted to obtain the precise eye regions of the frame, and the eye height is obtained.
7. The adaptive correction method for natural eye contact in video chat according to claim 2, characterized in that in step (5) the state of the eyes in the single frame is judged from the eye height in the frame: if the eye height in the frame is greater than the eye height with the eyes closed, the eyes are judged to be in a non-closed state and the next processing step is carried out; otherwise the eyes are judged to be closed, no processing is performed, and the next frame is handled.
8. The adaptive correction method for natural eye contact in video chat according to claim 2 or 7, characterized in that in step (5) the state of the eyes may be determined from both the eye height and the distance between the left and right eye-corner points in the image, and this determines whether the eye-region replacement is carried out.
9. The adaptive correction method for natural eye contact in video chat according to claim 2, characterized in that in step (6) the standard template used for the replacement is resized to the same size as the eye region in the single frame and then substituted for it, completing the correction of the eye-contact state.
CN2013102609043A 2013-06-26 2013-06-26 Self-adaption correcting method of human eye natural contact in video chat Pending CN103345619A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013102609043A CN103345619A (en) 2013-06-26 2013-06-26 Self-adaption correcting method of human eye natural contact in video chat

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2013102609043A CN103345619A (en) 2013-06-26 2013-06-26 Self-adaption correcting method of human eye natural contact in video chat

Publications (1)

Publication Number Publication Date
CN103345619A true CN103345619A (en) 2013-10-09

Family

ID=49280414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013102609043A Pending CN103345619A (en) 2013-06-26 2013-06-26 Self-adaption correcting method of human eye natural contact in video chat

Country Status (1)

Country Link
CN (1) CN103345619A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130070046A1 (en) * 2010-05-26 2013-03-21 Ramot At Tel-Aviv University Ltd. Method and system for correcting gaze offset
CN103034330A (en) * 2012-12-06 2013-04-10 中国科学院计算技术研究所 Eye interaction method and system for video conference

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANNICK VAN DER HOEST et al.: "Eye Contact in Leisure Video Conferencing", NIK-2012 Conference, 19 November 2012 (2012-11-19), pages 263-267 *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102761705A (en) * 2011-04-25 2012-10-31 奥林巴斯映像株式会社 An image recording device, an image editing device and an image capturing device
CN102761705B (en) * 2011-04-25 2015-07-22 奥林巴斯映像株式会社 An image recording device, an image editing device and an image capturing device
CN110263642B (en) * 2013-10-28 2022-04-19 谷歌有限责任公司 Image cache for replacing portions of an image
CN110263642A (en) * 2013-10-28 2019-09-20 谷歌有限责任公司 For replacing the image buffer storage of the part of image
CN105763829A (en) * 2014-12-18 2016-07-13 联想(北京)有限公司 Image processing method and electronic device
CN105787884A (en) * 2014-12-18 2016-07-20 联想(北京)有限公司 Image processing method and electronic device
WO2016101505A1 (en) * 2014-12-26 2016-06-30 中兴通讯股份有限公司 Photographing correction method, device and terminal of front camera lens
CN105791671A (en) * 2014-12-26 2016-07-20 中兴通讯股份有限公司 Shooting correction method and device for front camera and terminal
CN104809458A (en) * 2014-12-29 2015-07-29 华为技术有限公司 Pupil center positioning method and pupil center positioning device
CN104809458B (en) * 2014-12-29 2018-09-28 华为技术有限公司 Pupil center positioning method and device
WO2016119339A1 (en) * 2015-01-29 2016-08-04 京东方科技集团股份有限公司 Image correction method, image correction device and video system
US9824428B2 (en) 2015-01-29 2017-11-21 Boe Technology Group Co., Ltd. Image correction method, image correction apparatus and video system
US9740938B2 (en) 2015-04-28 2017-08-22 Microsoft Technology Licensing, Llc Eye gaze correction
US9749581B2 (en) 2015-04-28 2017-08-29 Microsoft Technology Licensing, Llc Eye gaze correction
WO2016205979A1 (en) * 2015-06-26 2016-12-29 Intel Corporation Flaw detection and correction in digital images
CN106358005A (en) * 2015-07-17 2017-01-25 联想(北京)有限公司 Image processing method and electronic equipment
CN106358005B (en) * 2015-07-17 2020-01-31 联想(北京)有限公司 Image processing method and electronic device
CN106557730A (en) * 2015-09-30 2017-04-05 北京奇虎科技有限公司 Method and device for correcting a face during a video call
CN106358006A (en) * 2016-01-15 2017-01-25 华中科技大学 Video correction method and video correction device
CN106358006B (en) * 2016-01-15 2019-08-06 华中科技大学 Video correction method and device
CN108009495A (en) * 2017-11-30 2018-05-08 西安科锐盛创新科技有限公司 Fatigue driving method for early warning
CN109712103A (en) * 2018-11-26 2019-05-03 深圳艺达文化传媒有限公司 From the eyes processing method and Related product of the Thunder God picture that shoots the video
US11360555B2 (en) 2019-05-20 2022-06-14 Cyberlink Corp. Systems and methods for automatic eye gaze refinement
CN112533071A (en) * 2020-11-24 2021-03-19 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN112733794A (en) * 2021-01-22 2021-04-30 腾讯科技(深圳)有限公司 Method, device and equipment for correcting sight of face image and storage medium
CN112733794B (en) * 2021-01-22 2021-10-15 腾讯科技(深圳)有限公司 Method, device and equipment for correcting sight of face image and storage medium

Similar Documents

Publication Publication Date Title
CN103345619A (en) Self-adaption correcting method of human eye natural contact in video chat
Merler et al. Diversity in faces
CN102081918B (en) Video image display control method and video image display device
JP2000082147A (en) Method for detecting human face and device therefor and observer tracking display
US20140354947A1 (en) Virtual glasses try-on method and apparatus thereof
CN105913093A (en) Template matching method for character recognizing and processing
Peer et al. An automatic human face detection method
CN107977639A (en) A kind of face definition judgment method
CN104597057A (en) Columnar diode surface defect detection device based on machine vision
JP4206053B2 (en) User interface device and user interface program
CN110008793A (en) Face identification method, device and equipment
CN103226824B (en) Maintain the video Redirectional system of vision significance
CN104050448A (en) Human eye positioning method and device and human eye region positioning method and device
CN106570885A (en) Background modeling method based on brightness and texture fusion threshold value
CN113191216A (en) Multi-person real-time action recognition method and system based on gesture recognition and C3D network
KR20160036375A (en) Fast Eye Detection Method Using Block Contrast and Symmetry in Mobile Device
Sanketi et al. Localizing blurry and low-resolution text in natural images
CN104156689B (en) Method and device for positioning feature information of target object
CN108986156A (en) Depth map processing method and processing device
CN110097523B (en) Video image fog concentration classification and self-adaptive defogging method
Pei et al. Enhanced PCA reconstruction method for eyeglass frame auto-removal
Wang et al. Character extraction and recognition in natural scene images
CN103888749B (en) A kind of method of the many visual frequencies of binocular video conversion
CN105608412A (en) Smiling face image processing method based on image deformation, system and shooting terminal thereof
Zhao et al. Real-time multiple-person tracking system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20131009