CN110334631B - Sitting posture detection method based on face detection and binary operation - Google Patents
- Publication number
- CN110334631B (application number CN201910568748.4A)
- Authority
- CN
- China
- Prior art keywords
- sitting posture
- user
- image
- standard
- head position
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Abstract
The invention discloses a sitting posture detection method based on face detection and binary operation. The method first collects a standard sitting posture picture; judges whether the brightness of the detection environment is within a detectable range, and preprocesses pictures outside that range; locates the head position of the standard sitting posture with the Adaboost face detection algorithm and sets a detection tolerance from this position information; and then detects whether the user's sitting posture exceeds the detection tolerance. When the tolerance is exceeded, the method first detects whether the posture is a forward-leaning or backward-leaning wrong sitting posture; if there is no wrong posture in the front-back direction, the binary images of the standard image and the real-time image are subtracted and divided into blocks to detect whether the posture is a left-leaning or right-leaning wrong sitting posture. The invention compensates for the strong influence of lighting when the Adaboost face detection algorithm is used alone, removes the restriction on head position when the user's standard sitting posture is collected, and, by using different detection methods for wrong sitting postures in the front-back and left-right directions, simplifies the judgment conditions.
Description
Technical Field
The invention belongs to the technical field of image processing methods, and particularly relates to a sitting posture detection method based on face detection and binary operation.
Background
Office workers and teenagers often need to stare at a computer screen at a desk for work, or to bend over a desk for study. Maintaining a poor sitting posture for a long time during work or study easily causes physical discomfort and increases the probability of suffering from myopia and cervical spondylosis. Most teenagers suffer from myopia to some degree, and the age of onset of cervical spondylosis is gradually decreasing, so good sitting posture habits are very important for office workers and teenagers.
At present, existing sitting posture detection methods either extract feature points with a traditional skin color segmentation algorithm or train a neural network to distinguish different wrong sitting postures. However, these methods suffer from strong sensitivity to lighting, difficult feature extraction, and the need for large numbers of training samples.
Disclosure of Invention
The invention aims to provide a sitting posture detection method based on face detection and binary operation, which solves the problems that existing sitting posture detection methods are strongly affected by the lighting environment and rely on complex judgment conditions.
The technical scheme adopted by the invention is as follows: a sitting posture detection method based on face detection and binary operation, comprising the following steps:
Step 1: collecting a standard sitting posture image of the user;
Step 2: calculating the brightness of the standard sitting posture image collected in step 1, preprocessing the image to enhance its brightness if the brightness is low, and leaving the image unprocessed if the brightness is sufficient;
Step 3: positioning the head of the user in the standard sitting posture image obtained in step 2 by using the Adaboost face detection algorithm;
Step 4: setting a detection tolerance by using the head position obtained in step 3;
Step 5: collecting the sitting posture of the user in real time, detecting the head position of the user against the detection tolerance set in step 4, and prompting the user according to the detection result.
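The five steps above can be sketched as a minimal control loop. This is an illustrative sketch, not the patent's implementation: the Adaboost face detector is passed in as a stand-in callable, and the helper names (`luma_mean`, `preprocess`, `monitor`) are hypothetical.

```python
import numpy as np

def luma_mean(rgb):
    """Mean of the Y (luminance) component of an RGB image."""
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return float(y.mean())

def preprocess(rgb, standard=85, a=3):
    """Step 2: brighten dark images by G = a*I, else pass through.
    Clipping to [0, 255] is an assumption the text leaves implicit."""
    if luma_mean(rgb) < standard:
        return np.clip(rgb.astype(np.float32) * a, 0, 255).astype(np.uint8)
    return rgb

def monitor(standard_img, frames, detect_head):
    """Steps 1-5 as a loop. `detect_head` stands in for the Adaboost
    face detector and must return a head box (x, y, w, h)."""
    x, y, w, h = detect_head(preprocess(standard_img))   # steps 1-3
    d = 0.3 * w                                          # step 4: tolerance
    prompts = []
    for frame in frames:                                 # step 5
        bx, by, bw, bh = detect_head(frame)
        inside = (bx >= x - d and by >= y - d and
                  bx + bw <= x + w + d and by + bh <= y + h + d)
        prompts.append(None if inside else "check posture")
    return prompts
```

A real deployment would plug in a cascade face detector and classify the out-of-tolerance cases as described in step 5 below.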
The present invention is also characterized in that,
the step 2 is implemented according to the following steps: converting the standard sitting posture image from an RGB model space to a YCbCr model space by the following conversion formula:
in the formula (1), the Y component, i.e., the luminance component is extracted, and the average value of the gradations of the image under the Y component is calculated, assuming that the size of the image is m × n, and the average value of the gradations isThe calculation formula is as follows:
if the average gray level value is higher than the standard value, directly performing the step 3; if the average value of the gray levels is lower than the standard value, the following preprocessing needs to be performed on the image, wherein the standard value is 85:
the preprocessing is to enhance the brightness of the image, and if the original image is I (x, y) and the image with enhanced brightness is G (x, y), there are
G(x,y)=a×I(x,y) (3)
In the formula (3), a takes the value of 3.
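A numerical sketch of the conversion, gray-average, and enhancement formulas follows. Note the assumptions: the patent text does not print the YCbCr coefficients, so the common full-range BT.601 matrix is used here, and values after enhancement are clipped to [0, 255], which the text leaves implicit.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Formula (1): full-range BT.601 RGB -> YCbCr (assumed variant)."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.4187 * g - 0.0813 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def gray_average(rgb):
    """Formula (2): mean of the Y component over the m x n image."""
    return float(rgb_to_ycbcr(rgb)[..., 0].mean())

def enhance_if_dark(img, standard=85, a=3):
    """Formula (3): G(x, y) = a * I(x, y) when the gray average is below
    the standard value 85; brighter images are returned unchanged."""
    if gray_average(img) < standard:
        return np.clip(img.astype(np.float32) * a, 0, 255).astype(np.uint8)
    return img
```

For a neutral gray pixel the Cb and Cr channels come out near 128, which is a quick sanity check on the coefficients.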
Step 3 specifically comprises: the input standard sitting posture image is detected with the Adaboost face detection algorithm, and the head position of the user is located as D(x, y, w, h) and marked with a rectangular frame, where (x, y) are the coordinates of the upper-left corner of the detected head frame, w is the width of the frame, and h is the height of the frame.
Step 4 specifically comprises: let d = 0.3×w, where d is the detection tolerance of the user's sitting posture; that is, the allowed range of motion of the user's head is between (x-d, y-d) and (x+w+d, y+h+d). Whether the standard sitting posture fits on the screen is checked against d; if the head is too close to an edge of the screen to satisfy the detection tolerance d, the user is reminded to re-collect the standard sitting posture.
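The tolerance band of step 4 can be expressed as a pair of small predicate functions. The function names are hypothetical; only d = 0.3×w and the band endpoints come from the text.

```python
def tolerance(w):
    """d = 0.3 * w, the sitting-posture detection tolerance."""
    return 0.3 * w

def fits_on_screen(box, frame_w, frame_h):
    """Check that the band (x-d, y-d)..(x+w+d, y+h+d) around the standard
    head box lies inside the image; otherwise the standard sitting
    posture should be re-collected."""
    x, y, w, h = box
    d = tolerance(w)
    return (x - d >= 0 and y - d >= 0 and
            x + w + d <= frame_w and y + h + d <= frame_h)

def within_tolerance(standard_box, live_box):
    """Step 5 gate: does the live head box stay inside the allowed band?"""
    x, y, w, h = standard_box
    bx, by, bw, bh = live_box
    d = tolerance(w)
    return (bx >= x - d and by >= y - d and
            bx + bw <= x + w + d and by + bh <= y + h + d)
```

Only when `within_tolerance` fails does the method go on to the front-back and left-right checks of step 5.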
In step 5, the head position of the user in the real-time sitting posture is detected with the Adaboost face detection algorithm. If the head position does not exceed the detection tolerance, the user is not prompted; if it does, the method first judges whether the sitting posture is irregular in the front-back direction, and only when no front-back irregularity is detected does it check the left-right direction. Specifically:
step 5.1: judging whether the user sitting posture is irregular in the front-back direction, setting the head position of the standard sitting posture image to be D (x, y, w, h), and assuming that the head position of the real-time sitting posture image is D ' (x ', y ', w ', h '), when the following judgment condition relations are met:
namely, the head area of the real-time sitting posture is larger than that of the standard sitting posture, and the head position of the user is higher than that of the standard sitting posture, the user is judged to lean forward;
when the following decision condition relationship is satisfied:
if the head area of the real-time sitting posture is smaller than that of the standard sitting posture and the head position of the user is lower and exceeds the detection tolerance, judging that the sitting posture of the user is inclined backwards;
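The forward/backward judgment of step 5.1 reduces to two inequalities on the head boxes. This is a sketch under the assumption that image coordinates grow downward (so a higher head means a smaller y'); the exact inequalities are reconstructed from the prose, not printed in the text.

```python
def front_back_state(standard_box, live_box, d):
    """Classify the front-back posture by comparing head box areas and
    vertical positions, per the description in step 5.1.
    Returns "leaning forward", "leaning backward", or None."""
    x, y, w, h = standard_box
    x2, y2, w2, h2 = live_box
    if w2 * h2 > w * h and y2 < y:
        # larger head, higher in the frame -> closer to the camera
        return "leaning forward"
    if w2 * h2 < w * h and y2 > y + d:
        # smaller head, lower by more than the tolerance d
        return "leaning backward"
    return None
```

When this returns None, the method falls through to the left-right check of step 5.2.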
step 5.2: when no error of the user sitting posture in the front-back direction is detected in the step 5.1, the standard sitting posture image and the real-time sitting posture image are subjected to binarization, and when binarization processing is performed, the gray average value calculated in the step 2 is used asAs a threshold value, setting the binarization standards of the pictures under different brightness to be consistent through the threshold value;
then, carrying out subtraction operation on the two binary images, and dividing the difference image into 3 x 3 blocks according to the head position of the standard sitting posture of the user, wherein the blocks are respectively expressed as S (1,1), S (1,2), S (1,3), S (2,1), S (2,2), S (2,3), S (3,1), S (3,2) and S (3,3), and the position of a central block is consistent with the head position of the standard sitting posture;
calculating the number of pixels with the pixel value of 1 in an S (2,1) block and an S (2,3) block, namely h (2,1) and h (2,3), and if h (2,1) < h (2,3) is met, considering that the sitting posture of the user inclines to the left; if h (2,1) > h (2,3) is satisfied, the user sitting posture is considered to be inclined rightward.
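Step 5.2 can be sketched as follows. The exact geometry of the 3×3 grid is an assumption (here the grid is anchored to the head box, with S(2,1) and S(2,3) as the equal-sized blocks immediately left and right of it), and the function name is hypothetical.

```python
import numpy as np

def left_right_state(std_gray, live_gray, head_box, thresh):
    """Binarize both grayscale images with one shared threshold, take the
    absolute difference, and compare changed-pixel counts in the blocks
    left (S(2,1)) and right (S(2,3)) of the head.
    Returns "leaning left", "leaning right", or None."""
    x, y, w, h = head_box
    b1 = (std_gray > thresh).astype(np.uint8)
    b2 = (live_gray > thresh).astype(np.uint8)
    diff = np.abs(b1.astype(np.int16) - b2.astype(np.int16))
    left = diff[y:y + h, max(x - w, 0):x]       # S(2,1)
    right = diff[y:y + h, x + w:x + 2 * w]      # S(2,3)
    h21, h23 = int(left.sum()), int(right.sum())
    if h21 < h23:
        return "leaning left"    # captured pictures are mirrored
    if h21 > h23:
        return "leaning right"
    return None
```

Because a shared threshold is used for both images, a global brightness change contributes roughly equally to both side blocks and cancels out of the comparison.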
The invention has the following beneficial effects: the sitting posture detection method based on face detection and binary operation adopts an improved Adaboost face detection procedure that decides from the image brightness whether to preprocess the picture, improving the inaccurate detection of the original method in dim light; and it uses different methods for the front-back and left-right directions of the sitting posture, which simplifies the judgment conditions and clearly distinguishes wrong sitting postures in four directions.
Drawings
FIG. 1 is a flow chart of a sitting posture detection method based on face detection and binary operation according to the present invention;
FIG. 2 is a schematic diagram illustrating the setting of detection tolerance in a sitting posture detection method based on face detection and binary operation according to the present invention;
FIG. 3 is a schematic diagram of binary image block division in the sitting posture detection method based on face detection and binary operation according to the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention provides a sitting posture detection method based on face detection and binary operation, which is implemented according to the following steps as shown in figure 1:
step 1: collecting a standard sitting posture image of a user, and setting a detection tolerance coefficient in a range of 0.2-0.5;
Step 2: the standard sitting posture image is converted from the RGB color space to the YCbCr color space by the following conversion formulas:
Y = 0.299×R + 0.587×G + 0.114×B
Cb = -0.1687×R - 0.3313×G + 0.5×B + 128
Cr = 0.5×R - 0.4187×G - 0.0813×B + 128 (1)
The Y component, i.e., the luminance component, is extracted and the gray average of the image under the Y component is calculated. Assuming the image size is m×n, the gray average Ȳ is computed as:
Ȳ = (1/(m×n)) × ΣΣ Y(i,j), i = 1,…,m, j = 1,…,n (2)
Through a large number of experiments, 85 is taken as the threshold for judging whether the gray average is high or low, and this threshold is taken as the standard value. If the gray average is higher than the standard value, step 3 is performed directly; if the gray average is lower than the standard value, the image must be preprocessed:
the preprocessing is to enhance the brightness of the image, and if the original image is I (x, y) and the image with enhanced brightness is G (x, y), there are
G(x,y)=a×I(x,y) (3)
where a is a value greater than 1 used to enhance the image brightness; through multiple experiments, a takes the value 3;
Step 3: the input standard sitting posture image is detected with the Adaboost face detection algorithm, and the head position of the user is located as D(x, y, w, h) and marked with a rectangular frame, where (x, y) are the coordinates of the upper-left corner of the detected head frame, w is the width of the frame, and h is the height of the frame;
and 4, step 4: as shown in fig. 2, let d be 0.3 × w, define d as the detection tolerance of the user's sitting posture, that is, the head range of motion of the user is between (x-d, y-d) and (x + w + d, y + h + d), detect whether the standard sitting posture of the user exceeds the screen according to the size of d, and prompt the user to re-collect the standard sitting posture if the standard sitting posture of the user is too close to a certain edge of the screen and the detection tolerance d cannot be met;
and 5: the head position of the user in a real-time sitting posture is detected by using an Adaboost algorithm. If the head position of the user does not exceed the detection tolerance, no prompt is given to the user; if the head position of the user exceeds the detection tolerance, whether the user sitting posture is out of specification or not in the front-back direction is judged, and when the user does not detect out of specification in the front-back direction, whether the user is out of specification or not is detected in the left-right direction.
Step 5.1: front-back direction: let the head position in the standard sitting posture image be D(x, y, w, h) and assume the head position in the real-time sitting posture image is D'(x', y', w', h'). When the following judgment condition is satisfied:
w'×h' > w×h and y' < y (4)
that is, the head area in the real-time sitting posture is larger than that in the standard sitting posture and the head position of the user is higher than in the standard sitting posture, the user is judged to be leaning forward.
When the following judgment condition is satisfied:
w'×h' < w×h and y' > y + d (5)
that is, the head area in the real-time sitting posture is smaller than that in the standard sitting posture and the head position of the user is lower and exceeds the detection tolerance, the user is judged to be leaning backward;
Step 5.2: left-right direction: first, the standard sitting posture picture and the real-time sitting posture picture are binarized. With adaptive-threshold binarization, different lighting can yield inconsistent results: in one image the foreground pixels may be set to 1, while in another they may be set to 0. Therefore, the gray average Ȳ calculated in step 2 is used as the threshold (through a large number of experiments, this value takes 50), so that the binarization standards of pictures under different brightness are kept consistent.
And then carrying out subtraction operation on the two binary images. The difference image is divided into 3 × 3 blocks according to the standard sitting posture head position of the user, which are respectively denoted as S (1,1), S (1,2), S (1,3), S (2,1), S (2,2), S (2,3), S (3,1), S (3,2), and S (3,3), and the position of the center block is consistent with the standard sitting posture head position, and the schematic diagram is shown in fig. 3.
The numbers of pixels with value 1 in blocks S(2,1) and S(2,3), namely h(2,1) and h(2,3), are counted: if h(2,1) < h(2,3), the user's sitting posture is considered to lean left (the captured pictures are mirrored); if h(2,1) > h(2,3), the user's sitting posture is considered to lean right.
Analysis of results
(1) Comparing the classic Adaboost algorithm with the Adaboost algorithm with preprocessing: 100 face pictures were collected in a backlit environment, both algorithms were used for detection, and the number of pictures in which each algorithm accurately detected the face (excluding missed and false detections) was counted. The results are shown in Table 1:
TABLE 1
As can be seen from Table 1, preprocessing images with low brightness before running the Adaboost face detection algorithm improves the detection accuracy.
(2) Comparing the sitting posture detection method that extracts feature points by skin color segmentation with the sitting posture detection method of the present invention: the detection accuracy for the four wrong sitting posture directions was measured in a backlit environment, an environment with a complex background and normal light, and an environment with a simple background and normal light (the total number of pictures for each wrong sitting posture direction was fixed, and it was verified whether the corresponding algorithm correctly detected the error). The results are shown in Table 2:
TABLE 2
Because the skin color segmentation algorithm is strongly affected by light and background, and the skin color clustering model cannot accurately separate skin color from non-skin color, the sitting posture detection method based on skin color segmentation performs poorly on pictures captured under the detection environments of this experiment.
Claims (1)
1. A sitting posture detection method based on face detection and binary operation is characterized by comprising the following steps:
step 1: collecting a standard sitting posture image of the user;
step 2: calculating the brightness of the standard sitting posture image collected in step 1, preprocessing the image to enhance its brightness if the brightness is low, and leaving the image unprocessed if the brightness is sufficient;
step 3: positioning the head of the user in the standard sitting posture image obtained in step 2 by using the Adaboost face detection algorithm;
step 4: setting a detection tolerance by using the head position obtained in step 3;
step 5: collecting the sitting posture of the user in real time, detecting the head position of the user against the detection tolerance set in step 4, and prompting the user according to the detection result;
step 2 is specifically implemented as follows: the standard sitting posture image is converted from the RGB color space to the YCbCr color space by the following conversion formulas:
Y = 0.299×R + 0.587×G + 0.114×B
Cb = -0.1687×R - 0.3313×G + 0.5×B + 128
Cr = 0.5×R - 0.4187×G - 0.0813×B + 128 (1)
In formula (1), the Y component, i.e., the luminance component, is extracted, and the gray average of the image under the Y component is calculated. Assuming the image size is m×n, the gray average Ȳ is computed as:
Ȳ = (1/(m×n)) × ΣΣ Y(i,j), i = 1,…,m, j = 1,…,n (2)
If the gray average is higher than the standard value, step 3 is performed directly; if the gray average is lower than the standard value, the image is preprocessed as follows, where the standard value is 85:
The preprocessing enhances the brightness of the image: if the original image is I(x, y) and the brightness-enhanced image is G(x, y), then
G(x, y) = a×I(x, y) (3)
In formula (3), a takes the value 3, G(x, y) is the gray value of the pixel at row x and column y of the brightness-enhanced image, and I(x, y) is the gray value of the pixel at row x and column y of the original image;
step 3 specifically comprises: the input standard sitting posture image is detected with the Adaboost face detection algorithm, and the head position feature vector of the user is located as D = [x, y, w, h] and marked with a rectangular frame, where (x, y) are the coordinates of the upper-left corner of the detected head frame, w is the width of the frame, and h is the height of the frame;
step 4 specifically comprises: let d = 0.3×w, where d is the detection tolerance of the user's sitting posture; that is, the allowed range of motion of the user's head is between (x-d, y-d) and (x+w+d, y+h+d). Whether the standard sitting posture fits on the screen is checked against d; if the head is too close to an edge of the screen to satisfy the detection tolerance d, the user is prompted to re-collect the standard sitting posture;
in step 5, the head position of the user in the real-time sitting posture is detected with the Adaboost face detection algorithm. If the head position does not exceed the detection tolerance, the user is not prompted; if it does, the method first judges whether the sitting posture is irregular in the front-back direction, and only when no front-back irregularity is detected does it check the left-right direction. Specifically:
step 5.1: judging whether the user's sitting posture is irregular in the front-back direction. The head position feature vector of the standard sitting posture image is D = [x, y, w, h]; assume the head position feature vector of the real-time sitting posture image is D' = [x', y', w', h'], where (x', y') are the coordinates of the upper-left corner of the detected real-time head frame, w' is its width, and h' is its height. When the following judgment condition is satisfied:
w'×h' > w×h and y' < y (4)
that is, the head area in the real-time sitting posture is larger than that in the standard sitting posture and the head position of the user is higher than in the standard sitting posture, the user is judged to be leaning forward;
When the following judgment condition is satisfied:
w'×h' < w×h and y' > y + d (5)
that is, the head area in the real-time sitting posture is smaller than that in the standard sitting posture and the head position of the user is lower and exceeds the detection tolerance, the user is judged to be leaning backward;
step 5.2: when step 5.1 detects no error of the user's sitting posture in the front-back direction, the standard sitting posture image and the real-time sitting posture image are binarized, using the gray average Ȳ calculated in step 2 as the threshold, so that the binarization standards of pictures under different brightness are kept consistent;
then the two binary images are subtracted, and the difference image is divided into 3×3 blocks according to the head position of the user's standard sitting posture, denoted S(1,1), S(1,2), S(1,3), S(2,1), S(2,2), S(2,3), S(3,1), S(3,2) and S(3,3), with the center block S(2,2) aligned with the head position of the standard sitting posture;
the numbers of pixels with value 1 in blocks S(2,1) and S(2,3), namely h(2,1) and h(2,3), are counted; if h(2,1) < h(2,3), the user's sitting posture is considered to lean left; if h(2,1) > h(2,3), the user's sitting posture is considered to lean right.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910568748.4A CN110334631B (en) | 2019-06-27 | 2019-06-27 | Sitting posture detection method based on face detection and binary operation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110334631A CN110334631A (en) | 2019-10-15 |
CN110334631B true CN110334631B (en) | 2021-06-15 |
Family
ID=68144621
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111734974B (en) * | 2020-01-22 | 2022-06-03 | 中山明易智能家居科技有限公司 | Intelligent desk lamp with sitting posture reminding function |
CN113313917B (en) * | 2020-02-26 | 2022-12-16 | 北京君正集成电路股份有限公司 | Method for solving false alarm generated when no target exists in front of detector in sitting posture detection |
CN113837044B (en) * | 2021-09-13 | 2024-01-23 | 深圳市罗湖医院集团 | Organ positioning method based on ambient brightness and related equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103488980A (en) * | 2013-10-10 | 2014-01-01 | 广东小天才科技有限公司 | Sitting posture judging method and device based on camera |
KR101722131B1 (en) * | 2015-11-25 | 2017-03-31 | 국민대학교 산학협력단 | Posture and Space Recognition System of a Human Body Using Multimodal Sensors |
CN107392860A (en) * | 2017-06-23 | 2017-11-24 | 歌尔科技有限公司 | Image enchancing method and equipment, AR equipment |
CN108095333A (en) * | 2017-12-18 | 2018-06-01 | 珠海瞳印科技有限公司 | Sitting posture correcting device and its bearing calibration |
CN109523755A (en) * | 2018-12-17 | 2019-03-26 | 石家庄爱赛科技有限公司 | Stereoscopic vision sitting posture reminder and based reminding method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101706953B (en) * | 2009-11-13 | 2015-07-01 | 北京中星微电子有限公司 | Histogram equalization based image enhancement method and device |
Non-Patent Citations (1)
Title |
---|
Sitting posture behavior monitoring based on face detection and skin color statistics; Zhang Yu; Computer & Network (《计算机与网络》); 2017-04-30; full text * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||