CN104850233A - Image processing method - Google Patents

Image processing method

Info

Publication number
CN104850233A
CN104850233A (application CN201510279425.5A; granted as CN104850233B)
Authority
CN
China
Prior art keywords
window
small window
pixel
current
small block
Prior art date
Legal status
Granted
Application number
CN201510279425.5A
Other languages
Chinese (zh)
Other versions
CN104850233B (en)
Inventor
冯志全 (Feng Zhiquan)
冯仕昌 (Feng Shichang)
Current Assignee
University of Jinan
Original Assignee
University of Jinan
Priority date
Filing date
Publication date
Application filed by University of Jinan filed Critical University of Jinan
Priority to CN201510279425.5A priority Critical patent/CN104850233B/en
Publication of CN104850233A publication Critical patent/CN104850233A/en
Application granted granted Critical
Publication of CN104850233B publication Critical patent/CN104850233B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides an image processing method, belonging to the field of computer human-computer interaction interfaces. The method comprises: (1) capturing a user gesture image with a camera, setting the window size, framing the face, and saving the face-frame information, namely the position of the upper-left corner of the face frame and its height and width; (2) setting the image at the face-frame position in the window to the background color, according to the saved upper-left corner position, height, and width; (3) classifying skin-color points and non-skin-color points with a skin-color model; (4) scanning the window sequentially, according to the face-frame information in the window, to find the small windows corresponding to the face-frame pixels; (5) denoising the small window to remove the influence of non-skin colors; (6) obtaining the centroid of the small window; (7) computing the half edge length of the square window of this small window from the centroid and the small-window information; (8) shrinking the small window again, using the half edge length obtained in step (7), to obtain a new square window.

Description

Image processing method
Technical field
The invention belongs to the field of computer human-computer interaction interfaces, and specifically relates to an image processing method.
Background technology
Many existing gesture-extraction approaches do not consider removing the wrist; instead, the wrist and arm are usually covered with an external object so that only the gesture remains. Such methods do not really serve the user but rather force the user to adapt to the computer. To better realize the principle of putting people first, the case where the wrist is exposed during operation must be handled, so removing the wrist is necessary; at the same time, methods that merely cover the wrist and arm cannot truly achieve the effect of retaining only the gesture.
Summary of the invention
The object of the invention is to solve the above problems in the prior art by providing an image processing method that captures a user gesture image with a camera, removes the wrist, and retains only the gesture.
The present invention is achieved by the following technical solution:
An image processing method, comprising:
(1) capturing a user gesture image with a camera, setting the window size, framing the face, and saving the face-frame information, namely the position of the upper-left corner of the face frame and its height and width;
(2) setting the image at the face-frame position in the window to the background color, according to the upper-left corner position, height, and width saved in step (1);
(3) classifying skin-color points and non-skin-color points with a skin-color model;
(4) scanning the window sequentially, according to the face-frame information in the window, to find the small windows corresponding to the face-frame pixels;
(5) denoising the small window to remove the influence of non-skin colors;
(6) obtaining the centroid of the small window;
(7) computing the half edge length of the square window of this small window from the centroid and the small-window information;
(8) shrinking the small window again: obtaining a new square window from the half edge length computed in step (7);
(9) scanning the square window sequentially, according to the square-window information, to find the small windows corresponding to the skin-color pixels;
(10) performing the arm-removal operation so that only the gesture remains.
Step (3) is implemented as follows:
The skin-color model is:
r>95&&g>40&&b>20&&abs(r-g)>15&&r>g&&r>b
If a pixel satisfies the model it is a skin-color point; otherwise it is a non-skin-color point, and each 2*2 non-skin-color block in the window is set to the background color.
Step (5) comprises:
(51) obtaining the start address of the image and the image height, width, and number of bytes per row;
(52) to avoid out-of-bounds access, skipping the pixels on the four borders (leftmost, rightmost, top, and bottom) and traversing the image pixels starting from the 2nd row and 2nd column;
(53) obtaining the b, g, r components of a pixel and judging whether it is the background color or the segmented gesture color; if it is the gesture color, computing the b, g, r components of the pixels in the eight neighboring directions (up, down, left, right, upper-left, lower-left, upper-right, lower-right);
(54) if at least a given number of points in the eight-neighborhood are white points, treating the pixel as a noise point and setting it to the background color;
(55) repeating steps (53)-(54) until all pixels of the original image have been processed.
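The denoising loop of steps (51)-(55) can be sketched on a 2D label grid. This is a sketch: reading the "white points" of step (54) as background pixels that isolate a noise point, and the neighbor-count threshold of 4, follow the embodiment and are assumptions.

```python
def denoise(img, background=0, thresh=4):
    """Steps (51)-(55): remove isolated noise points.
    Border pixels are skipped to avoid out-of-bounds access (step (52)).
    A foreground pixel whose 8-neighborhood contains at least `thresh`
    background pixels is treated as noise and reset to background."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # result grid; scan the original
    for y in range(1, h - 1):              # start from 2nd row
        for x in range(1, w - 1):          # start from 2nd column
            if img[y][x] == background:
                continue                   # only gesture-colored pixels
            neighbors = [img[y + dy][x + dx]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0)]
            if sum(n == background for n in neighbors) >= thresh:
                out[y][x] = background     # isolated point -> noise
    return out
```

A solid foreground region keeps its interior, while lone specks are erased.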
Step (6) is implemented as follows:
(61) obtaining the R, G, B values of the current small block from the frame pointer and the height and width of the small window;
(62) if the RGB of the current small block satisfies the skin-color model, executing step (63); otherwise returning to step (61);
(63) accumulating the coordinate values of the small blocks satisfying the skin-color model;
(64) when all small blocks in the small window have been scanned once, averaging the accumulated coordinates of the blocks satisfying the skin-color model; this average is the centroid.
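Steps (61)-(64) amount to averaging the coordinates of the skin-colored blocks. A minimal sketch (the `blocks` mapping from (x, y) block coordinates to RGB values is an illustrative layout, not the patent's raw BGR buffer):

```python
def is_skin(r, g, b):
    # Skin-color model from step (3).
    return (r > 95 and g > 40 and b > 20
            and abs(r - g) > 15 and r > g and r > b)

def centroid(blocks):
    """Steps (61)-(64): accumulate the coordinates of the small blocks
    whose RGB satisfies the skin-color model, then average them."""
    sx = sy = n = 0
    for (x, y), (r, g, b) in blocks.items():
        if is_skin(r, g, b):
            sx, sy, n = sx + x, sy + y, n + 1
    return (sx / n, sy / n) if n else None  # None: no skin blocks found
```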
Step (7) is implemented as follows:
(71) scanning the pixel values of the small window column by column from left to right (each column from top to bottom); if the number of non-background points in the current column exceeds a threshold, stopping and recording the current column coordinate; otherwise continuing with the next column;
(72) scanning the pixel values of the small window column by column from right to left in the same manner; when the non-background count of the current column exceeds the threshold, stopping and recording the current column coordinate; otherwise continuing with the next column;
(73) scanning the pixel values of the small window row by row from top to bottom (each row from left to right); when the number of non-background points in the current row exceeds the threshold, stopping and recording the current row coordinate; otherwise continuing with the next row;
(74) in the window corresponding to the current pointer, scanning the pixel values row by row from bottom to top; when the number of non-background points in the current row exceeds the threshold, stopping and recording the current row coordinate; otherwise continuing with the next row;
(75) computing the distances from the centroid to each of the coordinates obtained in (71), (72), (73), and (74);
(76) taking the maximum of the four distances from (75) as the half edge length of the next small window to be generated, i.e. the half edge length of the square window.
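The four scans of steps (71)-(76) can be sketched as locating the left, right, top, and bottom extents of the hand and taking the farthest one from the centroid. Interpreting the four scan directions as these four boundaries is an assumption; the threshold is said to be determined experimentally.

```python
def half_edge(img, centroid, background=0, thresh=2):
    """Steps (71)-(76): find the first column from the left, first column
    from the right, first row from the top, and first row from the bottom
    whose non-background count exceeds `thresh`; return the largest
    distance from the centroid to those four coordinates."""
    h, w = len(img), len(img[0])
    cx, cy = centroid

    def col_count(x):
        return sum(img[y][x] != background for y in range(h))

    def row_count(y):
        return sum(v != background for v in img[y])

    left = next(x for x in range(w) if col_count(x) > thresh)
    right = next(x for x in range(w - 1, -1, -1) if col_count(x) > thresh)
    top = next(y for y in range(h) if row_count(y) > thresh)
    bottom = next(y for y in range(h - 1, -1, -1) if row_count(y) > thresh)
    return max(abs(cx - left), abs(cx - right),
               abs(cy - top), abs(cy - bottom))
```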
Step (8) is implemented as follows:
(81) obtaining the number of bytes per row of the current small window and of the square window about to be extracted;
(82) obtaining the data of the square window row by row, according to the position of its upper-left corner.
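Steps (81)-(82) are a row-by-row crop. The patent copies raw bytes using per-row byte counts; this sketch slices a 2D list instead:

```python
def crop_window(src, left, top, size):
    """Steps (81)-(82): extract the square sub-window row by row,
    starting at its upper-left corner (left, top)."""
    return [row[left:left + size] for row in src[top:top + size]]

src = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]
print(crop_window(src, 1, 1, 2))  # [[5, 6], [9, 10]]
```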
Step (10) is implemented as follows:
(101) obtaining the centroid corresponding to the current frame window;
(102) scanning each small block from left to right and top to bottom, where the column scan covers the leftmost Width/4 of the current window; obtaining the pixel value of the current block and of the adjacent block in the next column; if the adjacent pixel value is background, continuing with the next row, otherwise accumulating the count of non-background pixels;
(103) if the accumulated pixel count from (102) is non-zero, obtaining the intersection of the hand with the left boundary of the window;
(104) scanning each small block from right to left and top to bottom, where the column scan covers the rightmost quarter, from Width down to Width-(Width/4); obtaining the pixel value of the current block and of the adjacent block in the next column; if the adjacent pixel value is background, continuing with the next row, otherwise accumulating the count of non-background pixels;
(105) if the accumulated pixel count from (104) is non-zero, obtaining the intersection of the hand with the right boundary of the window;
(106) scanning each small block from top to bottom and left to right, where the row scan starts from Height-(Height/4) of the current window; obtaining the pixel value of the current block and of the adjacent block in the next row; if the adjacent pixel value is background, continuing with the next column, otherwise accumulating the count of non-background pixels;
(107) if the accumulated pixel count from (106) is non-zero, obtaining the intersection of the hand with the lower boundary of the window;
(108) comparing the percentages of pixels on the left, right, and lower boundaries of the window; the boundary accounting for the largest percentage is the side where the arm is located;
(109) obtaining the line connecting the centroid with the arm-side boundary, and obtaining its direction vector;
(110) finding the wrist position on the line from the centroid to the arm-side boundary, and setting the non-hand part to the background color.
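The arm-side selection of steps (102)-(108) and the wrist location of steps (109)-(110) can be sketched as follows. Reading the patent's `t = t/5.0` as "one fifth of the way along the line from the centroid toward the arm-side edge" is an assumption; pixels beyond the wrist would then be set to the background color.

```python
def arm_side(img, background=0):
    """Steps (102)-(108): the arm can only enter through the left, right,
    or lower edge; the edge whose pixels contain the largest fraction of
    non-background (skin) points is taken as the arm side."""
    h, w = len(img), len(img[0])
    edges = {
        "left": [img[y][0] for y in range(h)],
        "right": [img[y][w - 1] for y in range(h)],
        "bottom": list(img[h - 1]),
    }
    ratios = {side: sum(v != background for v in px) / len(px)
              for side, px in edges.items()}
    return max(ratios, key=ratios.get)

def wrist_point(centroid, edge_point, t=1 / 5.0):
    """Steps (109)-(110): take the wrist a fraction t of the way along
    the line from the centroid toward the arm-side boundary point."""
    (cx, cy), (ex, ey) = centroid, edge_point
    return (cx + t * (ex - cx), cy + t * (ey - cy))
```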
The wrist position in step (110) is found as follows:
On the line from the centroid to the arm-side boundary, the wrist position is taken at t = t/5.0.
Compared with the prior art, the invention has the beneficial effect that it retains the gesture well while eliminating the wrist, preparing for subsequent gesture-trajectory acquisition and gesture recognition.
Brief description of the drawings
Fig. 1 is a block diagram of the steps of the method of the invention.
Detailed description of the embodiments
The invention is described in further detail below with reference to the accompanying drawing:
The method is an extension built on top of face framing; the specific flow is shown in Fig. 1.
First assume the large frame (i.e. the image) is 400*300; this size is only an assumption, other sizes may be assumed as well, and the invention applies to large frames of any size.
Specific implementation steps:
(1) First, implement the face-framing function and save the face-position information, i.e. the upper-left corner position, height, and width of the frame;
(2) According to the frame information from step (1) (the upper-left corner position, height, and width), set the image at the frame position in the 400*300 window (pointed to by Back_new, a pointer to the image whose size is set to 400*300) to the background color, black;
(3) Set every 2*2 non-skin-color block in the 400*300 window to the background color, black (each pixel is judged by the skin-color model: if the condition r>95 && g>40 && b>20 && abs(r-g)>15 && r>g && r>b holds, the pixel is a skin-color point, otherwise it is a non-skin-color point);
(4) Extract the corresponding small window pBGR_Buffer from the 400*300 window using the face height and width obtained in (1) (according to the face-frame information in the large window, such as the height, width, and upper-left corner, scan the 400*300 window sequentially and find the small windows corresponding to the face-frame pixels);
(5) Denoise the small window to remove the influence of non-skin colors.
The concrete steps are:
(51) obtaining the start address of the image and the image height, width, and number of bytes per row;
(52) to avoid out-of-bounds access, skipping the pixels on the four borders (leftmost, rightmost, top, and bottom) and traversing the image pixels starting from the 2nd row and 2nd column;
(53) obtaining the b, g, r components of a pixel and judging whether it is the background color or the segmented gesture color; if it is the gesture color, computing the b, g, r components of the pixels in the eight neighboring directions (up, down, left, right, upper-left, lower-left, upper-right, lower-right);
(54) if a certain number of points in the eight-neighborhood (e.g. 4 or more) are white points, treating the pixel as a noise point and setting it to the background color;
(55) repeating steps (53)-(54) until all pixels of the original image have been processed.
(6) Obtain the centroid of the small window corresponding to pBGR_Buffer (implemented by the function GetRedHand_zhixin described below); denote it small_tt;
(7) From the centroid small_tt and the small-window information (the height, width, and upper-left position of the small window), obtain the half edge length of the square window of this small window (the square window has four equal sides; implemented by the function Small_edge described below);
(8) The half edge length from (7) is too large (the square window obtained above is not a minimal bounding box, so its four sides are not guaranteed to intersect the skin pixels), so the window edge length must be reduced in real time. The main idea of wrist removal is to ensure that the boundary of the small window intersects the hand at least once, so the small window is shrunk again below;
(9) According to (8), obtain the new small window (implemented by the function Small_CutImageGesture described below), with attribute information new_imageinformation;
(10) From the small window pointed to by pBGR_Buffer, extract the small window having at least one intersection with the hand, together with its pointer (according to the square-window information such as the height, width, and upper-left corner, scan the square window sequentially and find the small windows corresponding to the skin pixels): Small_pBGR_Buffer;
(11) Execute the arm-removal function (removing the arm prepares for gesture recognition and gesture interaction: when the arm is exposed, only the gesture is retained, which prepares for subsequent gesture recognition and also greatly improves the accuracy of the gesture centroid position):
Remove_arm(Small_pBGR_Buffer, new_imageinformation, 1, 1)
where Small_pBGR_Buffer is the pointer to the small window;
and new_imageinformation is the attribute information of the small window.
The primary functions are described as follows:
1: D2POINT_int GetRedHand_zhixin(BYTE* pIBuffer, int h, int w)
Description: this function obtains the centroid of the skin color in the window.
Input: the pointer pIBuffer of the current frame image, and the height h and width w into which the window is subdivided into small blocks.
Output: the centroid coordinates X and Y of the current frame image.
D2POINT_int is a newly defined structure.
The concrete implementation steps are:
(1) obtain the RGB values of the current small block from the frame pointer and the height and width of the small frame, by scanning rows and columns sequentially to read the R, G, B values of each pixel;
(2) according to the skin-color model r>95 && g>40 && b>20 && abs(r-g)>15 && r>g && r>b (where r, g, b are the red, green, and blue components of the pixel), if the RGB of the current small block satisfies the model, execute step (3), otherwise return to step (1) (this is controlled mainly by loop control variables and accumulators);
(3) accumulate the coordinate values of the small blocks satisfying the skin-color model (the positions are indexed by the row and column control variables);
(4) when all small blocks in the window have been scanned once, average the accumulated coordinates of the blocks satisfying the skin-color model (the average of the accumulated values is the centroid).
2: int Small_edge(BYTE* pImageSrc, D4POINT ImageInformation, D2POINT zhixin)
Description: this function ensures that the current window boundary intersects the hand at least once.
Input: the pointer to the current window pImageSrc, the position information of the current window ImageInformation, and the centroid position zhixin (in the overall procedure these are pBGR_Buffer, imageinformation, and Small_tt1, respectively).
Output: the half edge length of the window computed from the input information, i.e. the minimal boundary length enclosing the hand.
Specific implementation steps:
(1) in the window corresponding to the current pointer, scan the window pixel values column by column from left to right (each column from top to bottom); if the number of non-background points in the current column exceeds a threshold (determined experimentally), stop and record the current column coordinate, otherwise continue with the next column;
(2) in the same window, scan the pixel values column by column from right to left; when the non-background count of the current column exceeds the threshold, stop and record the current column coordinate, otherwise continue with the next column;
(3) in the window corresponding to the current pointer, scan the pixel values row by row from top to bottom; when the number of non-background points in the current row exceeds the threshold, stop and record the current row coordinate, otherwise continue with the next row;
(4) in the window corresponding to the current pointer, scan the pixel values row by row from bottom to top; when the number of non-background points in the current row exceeds the threshold, stop and record the current row coordinate, otherwise continue with the next row;
(5) compute the distances from the parameter zhixin to each of the coordinates obtained in (1), (2), (3), and (4);
(6) take the maximum of the four distances from (5) as the half edge length for generating the next small window.
3: void Small_CutImageGesture(BYTE* pImageSrc, BYTE* imagedataCut, D4POINT ImageInformation, D4POINT New_ImageInformation)
Description: this function extracts a smaller window from the small window.
Input: the pointer pImageSrc of the current frame image, the pointer imagedataCut of the smaller window, the current window position information ImageInformation, and the smaller-window position information New_ImageInformation.
Output: the pointer to the smaller window to be extracted.
Specific implementation steps:
(1) first, obtain the number of bytes per row of the current window and of the small window about to be extracted;
(2) according to the position of the upper-left corner of the small window to be extracted, read the data of the smaller window row by row; at the end, imagedataCut points to the data at the extraction position.
4: void Remove_arm(BYTE* pImageBuffer, D4POINT ImageInformation, int h, int w)
Description: this function removes the arm and retains the hand.
Input: the pointer pImageBuffer of the current window, the position information ImageInformation of the window, and the height h and width w into which the window is subdivided.
Output: the arm removed, the hand retained.
Specific implementation steps:
(1) first, obtain the centroid corresponding to the current frame window;
(2) scan each small block from left to right and top to bottom, where the column scan covers the leftmost quarter, Width/4, of the current window width; obtain the pixel value of the current block and of the adjacent block in the next column; if the adjacent pixel value is background, continue with the next row, otherwise accumulate the count of non-background pixels;
(3) if the accumulated pixel count from (2) is non-zero, obtain the intersection of the hand with the left boundary of the window (by scanning the window pixels row by row and column by column);
(4) scan each small block from right to left and top to bottom, where the column scan covers the rightmost quarter, from Width down to Width-(Width/4); obtain the pixel value of the current block and of the adjacent block in the next column; if the adjacent pixel value is background, continue with the next row, otherwise accumulate the count of non-background pixels;
(5) if the pixel count is non-zero, obtain the intersection of the hand with the right boundary of the window;
(6) scan each small block from top to bottom and left to right, where the row scan starts from Height-(Height/4) of the current window height; obtain the pixel value of the current block and of the adjacent block in the next row; if the adjacent pixel value is background, continue with the next column, otherwise accumulate the count of non-background pixels;
(7) if the pixel count is non-zero, obtain the intersection of the hand with the lower boundary of the window;
(8) compare the percentages of pixels on the left, right, and lower boundaries of the window; the boundary accounting for the largest percentage is the side where the arm is located;
(9) obtain the line connecting the centroid (obtained above) with the arm-side boundary (the boundary accounting for the largest percentage in (8)), and compute its direction vector;
(10) along the line from the centroid to the arm-side boundary, the wrist position is approximately reached at t = t/5.0 (t being a variable); at that point the non-hand part can be set to the background color.
The above technical scheme is one embodiment of the present invention. On the basis of the application methods and principles disclosed by the invention, those skilled in the art can easily make various improvements or variations, and the method is not limited to the embodiment described above; the foregoing description is therefore preferred rather than restrictive.

Claims (8)

1. An image processing method, characterized in that the method comprises:
(1) capturing a user gesture image with a camera, setting the window size, framing the face, and saving the face-frame information, namely the position of the upper-left corner of the face frame and its height and width;
(2) setting the image at the face-frame position in the window to the background color, according to the upper-left corner position, height, and width saved in step (1);
(3) classifying skin-color points and non-skin-color points with a skin-color model;
(4) scanning the window sequentially, according to the face-frame information in the window, to find the small windows corresponding to the face-frame pixels;
(5) denoising the small window to remove the influence of non-skin colors;
(6) obtaining the centroid of the small window;
(7) computing the half edge length of the square window of this small window from the centroid and the small-window information;
(8) shrinking the small window again: obtaining a new square window from the half edge length computed in step (7);
(9) scanning the square window sequentially, according to the square-window information, to find the small windows corresponding to the skin-color pixels;
(10) performing the arm-removal operation so that only the gesture remains.
2. The image processing method according to claim 1, characterized in that step (3) is implemented as follows:
the skin-color model is:
r>95&&g>40&&b>20&&abs(r-g)>15&&r>g&&r>b
if a pixel satisfies the model it is a skin-color point; otherwise it is a non-skin-color point, and each 2*2 non-skin-color block in the window is set to the background color.
3. The image processing method according to claim 2, characterized in that step (5) comprises:
(51) obtaining the start address of the image and the image height, width, and number of bytes per row;
(52) to avoid out-of-bounds access, skipping the pixels on the four borders (leftmost, rightmost, top, and bottom) and traversing the image pixels starting from the 2nd row and 2nd column;
(53) obtaining the b, g, r components of a pixel and judging whether it is the background color or the segmented gesture color; if it is the gesture color, computing the b, g, r components of the pixels in the eight neighboring directions (up, down, left, right, upper-left, lower-left, upper-right, lower-right);
(54) if at least a given number of points in the eight-neighborhood are white points, treating the pixel as a noise point and setting it to the background color;
(55) repeating steps (53)-(54) until all pixels of the original image have been processed.
4. The image processing method according to claim 3, characterized in that step (6) is implemented as follows:
(61) obtaining the R, G, B values of the current small block from the frame pointer and the height and width of the small window;
(62) if the RGB of the current small block satisfies the skin-color model, executing step (63); otherwise returning to step (61);
(63) accumulating the coordinate values of the small blocks satisfying the skin-color model;
(64) when all small blocks in the small window have been scanned once, averaging the accumulated coordinates of the blocks satisfying the skin-color model; this average is the centroid.
5. The image processing method according to claim 4, characterized in that step (7) is implemented as follows:
(71) scanning the pixel values of the small window column by column from left to right (each column from top to bottom); if the number of non-background points in the current column exceeds a threshold, stopping and recording the current column coordinate; otherwise continuing with the next column;
(72) scanning the pixel values of the small window column by column from right to left in the same manner; when the non-background count of the current column exceeds the threshold, stopping and recording the current column coordinate; otherwise continuing with the next column;
(73) scanning the pixel values of the small window row by row from top to bottom; when the number of non-background points in the current row exceeds the threshold, stopping and recording the current row coordinate; otherwise continuing with the next row;
(74) in the window corresponding to the current pointer, scanning the pixel values row by row from bottom to top; when the number of non-background points in the current row exceeds the threshold, stopping and recording the current row coordinate; otherwise continuing with the next row;
(75) computing the distances from the centroid to each of the coordinates obtained in (71), (72), (73), and (74);
(76) taking the maximum of the four distances from (75) as the half edge length of the next small window to be generated, i.e. the half edge length of the square window.
6. The image processing method according to claim 5, characterized in that step (8) is implemented as follows:
(81) obtaining the number of bytes per row of the current small window and of the square window about to be extracted;
(82) obtaining the data of the square window row by row, according to the position of its upper-left corner.
7. The image processing method according to claim 6, characterized in that step (10) is implemented as follows:
(101) obtaining the centroid corresponding to the current frame window;
(102) scanning each small block from left to right and top to bottom, where the column scan covers the leftmost Width/4 of the current window; obtaining the pixel value of the current block and of the adjacent block in the next column; if the adjacent pixel value is background, continuing with the next row, otherwise accumulating the count of non-background pixels;
(103) if the accumulated pixel count from (102) is non-zero, obtaining the intersection of the hand with the left boundary of the window;
(104) scanning each small block from right to left and top to bottom, where the column scan covers the rightmost quarter, from Width down to Width-(Width/4); obtaining the pixel value of the current block and of the adjacent block in the next column; if the adjacent pixel value is background, continuing with the next row, otherwise accumulating the count of non-background pixels;
(105) if the accumulated pixel count from (104) is non-zero, obtaining the intersection of the hand with the right boundary of the window;
(106) scanning each small block from top to bottom and left to right, where the row scan starts from Height-(Height/4) of the current window; obtaining the pixel value of the current block and of the adjacent block in the next row; if the adjacent pixel value is background, continuing with the next column, otherwise accumulating the count of non-background pixels;
(107) if the accumulated pixel count from (106) is non-zero, obtaining the intersection of the hand with the lower boundary of the window;
(108) comparing the percentages of pixels on the left, right, and lower boundaries of the window; the boundary accounting for the largest percentage is the side where the arm is located;
(109) obtaining the line connecting the centroid with the arm-side boundary, and obtaining its direction vector;
(110) finding the wrist position on the line from the centroid to the arm-side boundary, and setting the non-hand part to the background color.
8. The image processing method according to claim 7, characterized in that the position of the wrist in step (110) is found as follows:
the point at parameter t = t/5.0 on the line from the centroid to the arm-side boundary is taken as the position of the wrist.
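One reading of claim 8 (the machine translation is ambiguous) is that the wrist is placed at a fixed fraction of the segment from the hand centroid to the point where the arm crosses the window boundary, with the parameter scaled to t/5.0. A hedged sketch under that assumption, with the hypothetical helper `point_on_segment`:

```python
def point_on_segment(centroid, edge_point, t):
    """Return the point a fraction t of the way from the centroid toward
    the point where the arm crosses the window boundary. Under one reading
    of step (110), the wrist is located at t = t/5.0 along this segment."""
    cx, cy = centroid
    ex, ey = edge_point
    return (cx + t * (ex - cx), cy + t * (ey - cy))

# centroid at (10, 10), arm crossing the boundary at (10, 60), t = 1/5
wrist = point_on_segment((10.0, 10.0), (10.0, 60.0), 1.0 / 5.0)
```

Everything on the far side of the wrist point along this direction vector would then be set to the background color, as step (110) describes.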
CN201510279425.5A 2015-05-27 2015-05-27 A kind of image processing method Active CN104850233B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510279425.5A CN104850233B (en) 2015-05-27 2015-05-27 A kind of image processing method


Publications (2)

Publication Number Publication Date
CN104850233A true CN104850233A (en) 2015-08-19
CN104850233B CN104850233B (en) 2016-04-06

Family

ID=53849929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510279425.5A Active CN104850233B (en) 2015-05-27 2015-05-27 A kind of image processing method

Country Status (1)

Country Link
CN (1) CN104850233B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110599525A (en) * 2019-09-30 2019-12-20 腾讯科技(深圳)有限公司 Image compensation method and apparatus, storage medium, and electronic apparatus

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020120594A1 (en) * 2000-02-24 2002-08-29 Patrick Pirim Method and device for perception of an object by its shape, its size and/or its orientation
CN102831404A (en) * 2012-08-15 2012-12-19 深圳先进技术研究院 Method and system for detecting gestures
CN103353935A (en) * 2013-07-19 2013-10-16 电子科技大学 3D dynamic gesture identification method for intelligent home system




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Feng Zhiquan

Inventor before: Feng Zhiquan

Inventor before: Feng Shichang

COR Change of bibliographic data
C14 Grant of patent or utility model
GR01 Patent grant