CN110619320A - Intelligent control method for intelligent bathing machine and bathing machine


Info

Publication number
CN110619320A
CN110619320A (application CN201910928601.1A)
Authority
CN
China
Prior art keywords
face
feature
matching
intelligent
machine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910928601.1A
Other languages
Chinese (zh)
Inventor
徐先哲
陈德芳
李想
顾倩
李潘
王嵘
杨超翔
黄河
Current Assignee
East China University of Science and Technology
Original Assignee
East China University of Science and Technology
Priority date
Filing date
Publication date
Application filed by East China University of Science and Technology filed Critical East China University of Science and Technology
Priority to CN201910928601.1A priority Critical patent/CN110619320A/en
Publication of CN110619320A publication Critical patent/CN110619320A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Abstract

The invention relates to an intelligent control method for an intelligent bathing machine, and to the bathing machine itself, wherein the method comprises the following steps: step S1: detecting whether a human face is present in the acquired video, and proceeding to step S2 when a face is detected; step S2: extracting feature information of the face from a frame containing the face, matching the face against the face data of each user stored in a database based on the extracted feature information, and proceeding to step S3 if the matching succeeds; step S3: acquiring all user parameters of the matched user; step S4: controlling the bathing machine according to the acquired user parameters. Compared with the prior art, the invention uses the image recognition result to provide a personalized bathing machine parameter configuration for the user, improving user comfort while ensuring convenience of use.

Description

Intelligent control method for intelligent bathing machine and bathing machine
Technical Field
The invention relates to the technical field of biometric recognition and smart homes, and in particular to an intelligent control method for an intelligent bathing machine and to the bathing machine.
Background
Face recognition technology gradually entered public view in the 20th century. Face recognition is a form of biometric technology: it draws on pattern recognition, computer vision, psychology, physiology, cognitive science, and related fields, performs identity recognition with the aid of a computer, and is an effective means of identity verification based on characteristics unique to each person.
Face recognition technology has important applications in many areas, such as security-permission verification, financial payment transactions, identity matching, and human-computer interaction in smart homes.
At present, most face recognition technology is used for permission verification, unlocking, and the like; it cannot provide follow-up customized services tailored to a specific user's personal factors such as living habits and personal preferences.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an intelligent control method for an intelligent bathing machine and the bathing machine.
The purpose of the invention can be realized by the following technical scheme:
an intelligent control method for an intelligent bathing machine comprises the following steps:
step S1: detecting whether a human face exists in the acquired video, and executing the step S2 when the human face is detected;
step S2: extracting feature information of a face in a frame containing the face, matching the face with face data of each user stored in a database based on the extracted feature information, and executing step S3 if matching is successful;
step S3: acquiring all user parameters of the matched users;
step S4: and controlling the bathing machine to work according to the obtained user parameters.
The process of performing face detection on a certain frame in the video in step S1 specifically includes:
step S11: calculating an integral image of the current frame;
step S12: extracting Haar-like edge characteristics based on the obtained integral graph;
step S13: and matching the extracted Haar-like edge features with a preset human face model, calculating a difference value, and if the difference value is within a confidence interval, determining that the human face exists in the current frame.
The confidence interval is specifically:
Pr(c1≤μ≤c2)=1-α
wherein: pr denotes probability, an abbreviation for the word probability, c1 and c2 are the two interval endpoints of the confidence interval, respectively, α denotes the significance level, 1- α denotes the confidence level.
The step S2 specifically includes:
step S21: setting a circle with a radius of 3.4 pixels and 16 pixels on the periphery as a template for screening feature points, carrying out corner detection to obtain feature corners, and generating feature corner descriptors;
step S23: comparing the obtained feature corner set with a primary classified image set in a database, if the feature corner set is matched with the primary classified image set in the database, executing the step S24, otherwise, returning an error prompt;
step S24: setting a circle with the radius of 2.6 pixels and 8 pixels on the periphery as a template to screen characteristic points, carrying out corner detection to obtain characteristic corners, and generating characteristic corner descriptors;
step S25: and comparing the retrieved feature corner set with a secondary classified image set in the database, outputting a matching result and executing the step S3 if the corresponding image set is matched, and otherwise, returning an error prompt.
For the primary classified image set, the matching degree of the M feature points of each sample image is calculated; if the result is within the threshold range, the two images are assigned to the same primary classified image set.
The matching process in step S25 specifically includes: and respectively carrying out similarity matching calculation on the obtained feature corner set and a feature template library in the database, taking the largest one as a matching result, and if the matching result is consistent with the result in the step S24, judging that the matching is correct.
For the secondary classified image set, within each primary classified image set, the matching degree S of the N feature points of each image in the set is calculated, and the data records are finally stored in the database, wherein each set holds only one group of data, corresponding to one face.
The mathematical expression of the similarity is:

d(J, J′) = Σ_j α_j α′_j cos(Φ_j − Φ′_j − (d_x k_jx + d_y k_jy)) / sqrt(Σ_j α_j² · Σ_j α′_j²)

wherein: d(·) is the similarity calculation, J is the feature vector of a standard face target sample in the database, J′ is the feature vector of the face sample currently to be recognized, α_j is the amplitude of the feature vector of the standard face target sample in the database, α′_j is the amplitude of the feature vector of the face sample currently to be recognized, Φ_j is the phase of the feature vector of the standard face target sample in the database, Φ′_j is the phase of the feature vector of the face sample currently to be recognized, (d_x, d_y) is the relative displacement of the two vector sets, and k_j = (k_jx, k_jy) is the wave vector of the j-th feature.
The generation process of the feature corner descriptor is specifically as follows: the coordinate axes are rotated to the direction of the feature corner; the gradient magnitude and direction are computed for the pixels of a 16×16 window centered on the feature point; the pixels in the window are divided into 16 blocks, each block being an 8-direction histogram statistic of its pixels, which yields a 128-dimensional feature vector.
An intelligent bathing machine comprises a bathing machine body and a control device for controlling the bathing machine body, the control device being coupled to the bathing machine body; it further comprises a camera device connected to the control device. The control device comprises a memory, a processor, and a program stored in the memory and executed by the processor, the processor implementing the following steps when executing the program:
step S1: detecting whether a human face exists in the acquired video, and executing the step S2 when the human face is detected;
step S2: extracting feature information of a face in a frame containing the face, matching the face with face data of each user stored in a database based on the extracted feature information, and executing step S3 if matching is successful;
step S3: acquiring all user parameters of the matched users;
step S4: and controlling the bathing machine body to work according to the obtained user parameters.
Compared with the prior art, the invention has the following beneficial effects:
1) the image recognition result can be utilized to provide personalized bathing machine parameter configuration for the user, the comfort level of the user is improved, and meanwhile the use convenience is guaranteed.
2) The face is detected firstly, and then the face is identified when the face is detected, so that the power consumption can be reduced.
3) Face detection is performed using the integral-image approach, so the sum over a rectangular area of any size in the image can be computed in constant time. This greatly reduces the amount of computation and increases speed for image blurring, edge extraction, and object detection.
4) A confidence interval is designed. The confidence interval obtained through experiments can reduce the operation cost as much as possible on the basis of ensuring the accuracy, thereby reducing the requirements on hardware and ensuring the realization of the face recognition technology on the bathing machine.
5) Features are extracted and matched in the FAST manner with two-stage matching, which maximizes recognition speed while preserving accuracy. In addition, during first-stage matching the matched feature vectors are low-dimensional, i.e. few features are extracted and recorded, so the method can adapt to many different face data models; the specific features of different faces are extracted and stored at the second stage, which reduces data redundancy, lowers the demands on hardware storage and data processing, and reduces the batch-processing data volume.
6) Rotating the coordinate axes to the direction of the feature corner eliminates errors that image rotation might introduce, remedies a typical shortcoming of the FAST algorithm, and improves matching accuracy while keeping the matching speed high.
Drawings
FIG. 1 is a schematic diagram of the structure of the process of the present invention;
FIG. 2 is a schematic view of an image model;
FIG. 3 is a schematic diagram of an integral calculation;
FIG. 4 is a schematic diagram of corner extraction;
fig. 5 is a diagram of the effect of identification.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
An intelligent bathing machine comprises a bathing machine body and a control device for controlling the bathing machine body, the control device being coupled to the bathing machine body; it further comprises a camera device connected to the control device. The control device comprises a memory, a processor, and a program stored in the memory and executed by the processor; as shown in FIG. 1, the processor implements the following steps when executing the program:
step S1: detecting whether a human face exists in the acquired video, and executing step S2 when the human face is detected, wherein the process of performing the human face detection on a certain frame in the video specifically includes:
step S11: calculating an integral map of the current frame, wherein the process specifically comprises the following steps:
the current frame image is integrated, i.e. pixels in the convolution kernel 3 x 3 or 4 x 4 are summed. The value of each point in the image integral map is the sum of all pixel values in the upper left corner of the point in the original image. As shown in FIG. 2, an array A is first created as an integral image having a width and height equal to the original image, and then assigned with a value, each point stores the sum of all pixels in a rectangle formed by the point and the origin of the image:
SAT(x, y) = Σ_{x_i ≤ x, y_i ≤ y} I(x_i, y_i)
wherein: i (x, y) represents the pixel value of the image (x, y) location, and the integral image can be computed in increments:
SAT(x,y)=SAT(x,y-1)+SAT(x-1,y)-SAT(x-1,y-1)+I(x,y)
initial boundary: SAT(-1, y) = SAT(x, -1) = SAT(-1, -1) = 0.
In the incremental computation, the doubly counted overlap region SAT(x-1, y-1) is subtracted once, and finally the pixel at the current coordinate is added.
As shown in fig. 3, the integral is:
Sum(Rd)=SAT1+SAT4-SAT2-SAT3
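The summed-area table and the constant-time rectangle sum described above can be sketched as follows; the function names and the NumPy formulation are illustrative, not part of the patent.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: SAT[y, x] holds the sum of img[0..y, 0..x]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(sat, y0, x0, y1, x1):
    """Sum over the inclusive rectangle (y0, x0)..(y1, x1) via
    Sum(Rd) = SAT1 + SAT4 - SAT2 - SAT3; out-of-range SAT entries
    are treated as zero (the SAT(-1, y) = SAT(x, -1) = 0 boundary)."""
    total = int(sat[y1, x1])
    if y0 > 0:
        total -= int(sat[y0 - 1, x1])
    if x0 > 0:
        total -= int(sat[y1, x0 - 1])
    if y0 > 0 and x0 > 0:
        total += int(sat[y0 - 1, x0 - 1])
    return total
```

Once the table is built, any rectangle sum costs four lookups, which is what makes the Haar-feature extraction of the next step cheap.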
step S12: after the integral image is obtained, Haar-like edge features are extracted based on it;
Haar_{A-B} = Sum(A) − Sum(B) = [SAT_4 + SAT_1 − SAT_2 − SAT_3] − [SAT_6 + SAT_3 − SAT_4 − SAT_5]
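A two-rectangle edge feature of this kind can be sketched directly as the difference of two adjacent block sums; the helper below recomputes the sums for clarity, whereas in the patent's scheme each Sum(·) would be four SAT lookups.

```python
import numpy as np

def haar_edge(img, y, x, h, w):
    """Haar-like edge feature Haar(A-B): sum over the left h-by-w block A
    minus sum over the adjacent right h-by-w block B."""
    a = img[y:y + h, x:x + w].sum()
    b = img[y:y + h, x + w:x + 2 * w].sum()
    return int(a - b)
```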
step S13: and matching the extracted Haar-like edge features with a preset human face model, calculating a difference value, and if the difference value is within a confidence interval, determining that the human face exists in the current frame.
The confidence interval is specifically:
Pr(c1≤μ≤c2)=1-α
wherein: pr denotes probability, an abbreviation for the word probability, c1 and c2 are the two interval endpoints of the confidence interval, respectively, α denotes the significance level, 1- α denotes the confidence level.
Step S2: extracting feature information of a face in a frame containing the face, matching the face with face data of each user stored in a database based on the extracted feature information, and if the matching is successful, executing step S3, specifically including:
step S21: setting a circle with a radius of 3.4 pixels and 16 pixels on the periphery as a template for screening feature points, carrying out corner detection to obtain feature corners, and generating feature corner descriptors;
for corner extraction, a FAST method is adopted,
(1) First, a segmentation test is performed on the pixels of a circle with fixed radius, and a large number of non-feature candidate points are removed by a logical test. As shown in FIG. 4, a circle with a radius of 3.4 pixels and 16 pixels on its periphery is defined as the template for screening feature points, where p is the center pixel and the 16 points are the pixels connected by the arc in the figure. t is a threshold (8 in this step), I_p denotes the pixel value of the center pixel, and I_{p→x} denotes a pixel value within the circular template. The number of occurrences of state d or b in the circular area is counted; as long as d or b occurs more than 12 times, the point is considered a candidate corner. As a quick rejection, the pixel values of points 1, 5, 9, and 13 (i.e. the 4 points in the horizontal and vertical directions) are first compared with the center pixel value: if at least 3 of these four points are brighter than I_p + t or darker than I_p − t, the point is kept as a candidate corner, otherwise it cannot be a corner:
S_{p→x} = d (darker), if I_{p→x} < I_p − t
S_{p→x} = s (similar), if I_p − t ≤ I_{p→x} ≤ I_p + t
S_{p→x} = b (brighter), if I_{p→x} > I_p + t
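The segmentation test of step (1) can be sketched as follows, using the standard 16-point Bresenham circle of radius 3 as a discrete form of the 3.4-pixel template; the offsets and the function name are our illustration, not the patent's.

```python
import numpy as np

# Offsets of the 16 circle pixels around the center p (radius-3 Bresenham circle)
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def fast_candidate(img, y, x, t=8):
    """p is a candidate corner when at least 12 of the 16 circle pixels
    are darker than Ip - t (state d) or brighter than Ip + t (state b)."""
    ip = int(img[y, x])
    darker = brighter = 0
    for dy, dx in CIRCLE:
        v = int(img[y + dy, x + dx])
        if v < ip - t:
            darker += 1
        elif v > ip + t:
            brighter += 1
    return darker >= 12 or brighter >= 12
```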
(2) Corner feature detection based on classification: an ID3 classifier judges from the 16 features whether a candidate point is a corner. The state of each feature is one of d, s, b, dividing the pixels in the template into three subsets, recorded respectively as P_d, P_s, P_b; each S_{p→x} thus belongs to one of P_d, P_s, P_b. Let K_p be true if p is a corner and false otherwise. The ID3 algorithm selects the pixel with the largest information gain to decide whether a pixel is a corner. The entropy of K_p is calculated with the following equation:
H(P) = (c + c̄) log2(c + c̄) − c log2 c − c̄ log2 c̄
In the above formula, c represents the number of corner points and c̄ represents the number of non-corner points. The information gain of a given pixel is expressed by the following formula:
H(P) − H(P_d) − H(P_s) − H(P_b)
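The entropy and information-gain computation of step (2) reduces to a few lines; the helper names are ours.

```python
import math

def kp_entropy(c, c_bar):
    """H(P) = (c + cbar) log2(c + cbar) - c log2 c - cbar log2 cbar,
    where c is the number of corners and cbar the number of non-corners
    in a subset; 0 log2 0 is taken as 0."""
    def xlog2(v):
        return v * math.log2(v) if v > 0 else 0.0
    return xlog2(c + c_bar) - xlog2(c) - xlog2(c_bar)

def information_gain(p, pd, ps, pb):
    """Gain of splitting subset P into Pd, Ps, Pb by one circle pixel:
    H(P) - H(Pd) - H(Ps) - H(Pb); each subset is a (corners, non-corners) pair."""
    return kp_entropy(*p) - kp_entropy(*pd) - kp_entropy(*ps) - kp_entropy(*pb)
```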
(3) Finally, the corner features are verified using non-maximum suppression, for which a corner response function V is defined. Considering the requirements of the segmentation-test feature and of computation speed, the corner response function is defined as:

V = max( Σ_{x∈S_bright} |I_{p→x} − I_p| − t, Σ_{x∈S_dark} |I_p − I_{p→x}| − t )

where S_bright and S_dark are the sets of circle pixels classified as b and d, respectively.
the process of generating the feature corner descriptor comprises the following steps: the coordinate axis is rotated as the direction of the characteristic corner point to eliminate the error possibly caused by image rotation, the gradient amplitude and the direction of the pixel of a 16 × 16 window taking the characteristic point as the center divide the pixel in the window into 16 blocks, each block is the histogram statistics of 8 directions in the pixel, and a 128-dimensional characteristic vector can be formed.
Step S23: comparing the obtained feature corner set with a primary classified image set in a database, if the feature corner set is matched with the primary classified image set in the database, executing the step S24, otherwise, returning an error prompt;
The primary classified image set is obtained by calculating the matching degree of the M feature points of each sample image; if the result is within the threshold range, the two images are assigned to the same primary classified image set.
Step S24: setting a circle with the radius of 2.6 pixels and 8 pixels on the periphery as a template to screen characteristic points, carrying out corner detection to obtain characteristic corners, and generating characteristic corner descriptors;
step S25: and comparing the retrieved feature corner set with a secondary classified image set in the database, outputting a matching result and executing the step S3 if the corresponding image set is matched, and otherwise, returning an error prompt.
For the secondary classified image set, within each primary classified image set, the matching degree S of the N feature points of each image in the set is calculated, and the data records are finally stored in the database, wherein each set holds only one group of data, corresponding to one face.
The matching process in step S25 specifically includes: and respectively carrying out similarity matching calculation on the obtained feature corner set and a feature template library in the database, taking the largest one as a matching result, and if the matching result is consistent with the result in the step S24, judging that the matching is correct.
The mathematical expression of the similarity is:

d(J, J′) = Σ_j α_j α′_j cos(Φ_j − Φ′_j − (d_x k_jx + d_y k_jy)) / sqrt(Σ_j α_j² · Σ_j α′_j²)

wherein: d(·) is the similarity calculation, J is the feature vector of a standard face target sample in the database, J′ is the feature vector of the face sample currently to be recognized, α_j is the amplitude of the feature vector of the standard face target sample in the database, α′_j is the amplitude of the feature vector of the face sample currently to be recognized, Φ_j is the phase of the feature vector of the standard face target sample in the database, Φ′_j is the phase of the feature vector of the face sample currently to be recognized, (d_x, d_y) is the relative displacement of the two vector sets, and k_j = (k_jx, k_jy) is the wave vector of the j-th feature.
The relative displacement (d_x, d_y) of the two vector sets can also be estimated as:

(d_x, d_y) = 1 / (Γ_xx Γ_yy − Γ_xy Γ_yx) · (Γ_yy Φ_x − Γ_yx Φ_y, Γ_xx Φ_y − Γ_xy Φ_x)

if its denominator Γ_xx Γ_yy − Γ_xy Γ_yx is not zero, wherein: d_x is the distance of the two feature vectors in the x direction, d_y is the distance of the two feature vectors in the y direction, Γ_xx is the relative displacement term in the x direction, Γ_yy is the relative displacement term in the y direction, Γ_xy is the term pairing the x component of the former vector with the y component of the latter, Γ_yx is the term pairing the y component of the former with the x component of the latter, Φ_x is the phase difference term in the x direction, and Φ_y is the phase difference term in the y direction, with:

Φ_x = Σ_j α_j α′_j k_jx (Φ_j − Φ′_j)
Γ_xy = Σ_j α_j α′_j k_jx k_jy

and the remaining Φ and Γ terms defined analogously.
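Step S25's rule of taking the largest similarity as the matching result can be sketched as below; cosine similarity stands in for the amplitude/phase similarity above, so this is an illustrative substitute rather than the patent's exact measure.

```python
import numpy as np

def best_match(query, templates):
    """Score the query descriptor against every template in the library
    and return the identity with the highest similarity."""
    best_id, best_score = None, -1.0
    for ident, tmpl in templates.items():
        score = float(np.dot(query, tmpl) /
                      (np.linalg.norm(query) * np.linalg.norm(tmpl)))
        if score > best_score:
            best_id, best_score = ident, score
    return best_id, best_score
```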
step S3: acquiring all user parameters of the matched user, the user parameters including parameters such as water temperature and water volume;
step S4: and controlling the bathing machine body to work according to the obtained user parameters.
In a specific implementation, the algorithm first detects in a loop whether a human face appears: it samples the current video stream, extracts the Haar features of the sampled image, matches them against the existing feature set in the database, and thereby judges whether a face is present. If a face is determined to appear, the algorithm proceeds to face recognition: feature points are extracted from the face in the image by the above two-stage, step-by-step feature point extraction algorithm, after which the algorithm enters face matching. In the face matching stage, the feature point matching algorithm first matches the first-stage feature points with the feature points of the primary classified image sets in the database, narrowing the face matching range. After this matching succeeds, the algorithm matches the second-stage feature points with the feature points of the secondary classified images in the database, as shown in fig. 5, and finally determines the identity of the current face. Once the identity of the current user is determined, the algorithm retrieves that user's most frequently used setting parameters from the database, returns the result, and guides the subsequent hardware operation to complete the system parameter configuration.
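The overall control flow of the embodiment (steps S1 through S4) can be sketched as a loop over video frames; all four callables are placeholders for the patent's detection, recognition, database, and hardware hooks.

```python
def bathing_machine_loop(detect_face, recognize, get_params, apply_params, frames):
    """Detect a face (S1), recognize the user (S2), look up the user's
    parameters (S3), and configure the machine (S4)."""
    for frame in frames:
        if not detect_face(frame):       # step S1: no face, keep sampling
            continue
        user = recognize(frame)          # step S2: two-stage matching
        if user is None:
            continue                     # matching failed: error prompt
        apply_params(get_params(user))   # steps S3 and S4
        return user
    return None
```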

Claims (10)

1. An intelligent control method for an intelligent bathing machine is characterized by comprising the following steps:
step S1: detecting whether a human face exists in the acquired video, and executing the step S2 when the human face is detected;
step S2: extracting feature information of a face in a frame containing the face, matching the face with face data of each user stored in a database based on the extracted feature information, and executing step S3 if matching is successful;
step S3: acquiring all user parameters of the matched users;
step S4: and controlling the bathing machine to work according to the obtained user parameters.
2. The intelligent control method for an intelligent bathing machine as claimed in claim 1, wherein the step S1 of performing face detection on a certain frame in the video specifically comprises:
step S11: calculating an integral image of the current frame;
step S12: extracting Haar-like edge characteristics based on the obtained integral graph;
step S13: and matching the extracted Haar-like edge features with a preset human face model, calculating a difference value, and if the difference value is within a confidence interval, determining that the human face exists in the current frame.
3. The intelligent control method for the intelligent bathing machine as claimed in claim 2, wherein the confidence interval is specifically:
Pr(c1≤μ≤c2)=1-α
wherein: pr denotes probability, an abbreviation for the word probability, c1 and c2 are the two interval endpoints of the confidence interval, respectively, α denotes the significance level, 1- α denotes the confidence level.
4. The intelligent control method for an intelligent bathing machine as claimed in claim 1, wherein said step S2 specifically includes:
step S21: setting a circle with a radius of 3.4 pixels and 16 pixels on the periphery as a template for screening feature points, carrying out corner detection to obtain feature corners, and generating feature corner descriptors, namely three-dimensional feature vectors of the feature corners;
step S23: comparing the obtained feature corner set with a primary classified image set in a database, if the feature corner set is matched with the primary classified image set in the database, executing the step S24, otherwise, returning an error prompt;
step S24: setting a circle with the radius of 2.6 pixels and 8 pixels on the periphery as a template to screen characteristic points, carrying out corner detection to obtain characteristic corners, and generating characteristic corner descriptors;
step S25: and comparing the retrieved feature corner set with a secondary classified image set in the database, outputting a matching result and executing the step S3 if the corresponding image set is matched, and otherwise, returning an error prompt.
5. The intelligent control method for an intelligent bathing machine as claimed in claim 4, wherein the primary classified image set is obtained by performing matching degree calculation on M feature points of each sample image, and if the result is within a threshold value range, the two images are specified to belong to the same primary classified image set.
6. The intelligent control method for an intelligent bathing machine as claimed in claim 4, wherein the matching process in step S25 is specifically as follows: and respectively carrying out similarity matching calculation on the obtained feature corner set and a feature template library in the database, taking the largest one as a matching result, and if the matching result is consistent with the result in the step S24, judging that the matching is correct.
7. An intelligent control method for an intelligent bathing machine as claimed in claim 4 or 6, wherein the secondary classified image set is obtained by calculating, within each primary classified image set, the matching degree S of the N feature points of each image in the set, and storing the data records in the database, wherein each set has only one group of data, corresponding to one human face.
8. The intelligent control method for the intelligent bathing machine as claimed in claim 5 or 6, wherein the mathematical expression of the similarity is specifically:

d(J, J′) = Σ_j α_j α′_j cos(Φ_j − Φ′_j − (d_x k_jx + d_y k_jy)) / sqrt(Σ_j α_j² · Σ_j α′_j²)

wherein: d(·) is the similarity calculation, J is the feature vector of a standard face target sample in the database, J′ is the feature vector of the face sample currently to be recognized, α_j is the amplitude of the feature vector of the standard face target sample in the database, α′_j is the amplitude of the feature vector of the face sample currently to be recognized, Φ_j is the phase of the feature vector of the standard face target sample in the database, Φ′_j is the phase of the feature vector of the face sample currently to be recognized, (d_x, d_y) is the relative displacement of the two vector sets, and k_j = (k_jx, k_jy) is the wave vector of the j-th feature.
9. The intelligent control method for the intelligent bathing machine as claimed in claim 4, wherein the generation process of the feature corner descriptor is specifically as follows: the coordinate axes are rotated to the direction of the feature corner; the gradient magnitude and direction are computed for the pixels of a 16×16 window centered on the feature point; the pixels in the window are divided into 16 blocks, each block being an 8-direction histogram statistic of its pixels, which yields a 128-dimensional feature vector.
10. An intelligent bathing machine, comprising a bathing machine body and a control device for controlling the bathing machine body, the control device being coupled to the bathing machine body, characterized by further comprising a camera device connected to the control device, wherein the control device comprises a memory, a processor, and a program stored in the memory and executed by the processor, the processor implementing the following steps when executing the program:
step S1: detecting whether a human face exists in the acquired video, and executing step S2 when a human face is detected;
step S2: extracting feature information of the face from a frame containing the face, matching the face against the face data of each user stored in a database based on the extracted feature information, and executing step S3 if the matching succeeds;
step S3: acquiring all user parameters of the matched user;
step S4: controlling the bathing machine body to work according to the acquired user parameters.
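The S1–S4 pipeline above (detect → match → fetch parameters → control) can be sketched as a small dispatch routine. Everything here is illustrative: the in-memory `USER_DB`, the cosine-similarity matcher, the threshold value, and the parameter names (`water_temp_c`, `spray_level`) are my own placeholders, not the patent's.

```python
import numpy as np

# Hypothetical database: user name -> (stored face feature, bathing parameters)
USER_DB = {
    "alice": (np.array([0.9, 0.1, 0.3]), {"water_temp_c": 38, "spray_level": 2}),
    "bob":   (np.array([0.2, 0.8, 0.5]), {"water_temp_c": 41, "spray_level": 3}),
}

def match_user(feature, threshold=0.95):
    """Step S2: cosine-match the extracted face feature against every
    stored user; return the best user only if the score clears the threshold."""
    best_name, best_score = None, -1.0
    for name, (ref, _params) in USER_DB.items():
        score = float(feature @ ref / (np.linalg.norm(feature) * np.linalg.norm(ref)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

def control_cycle(feature):
    """Steps S2-S4: match the face, fetch the matched user's parameters,
    and return the control command for the bathing machine body."""
    user = match_user(feature)
    if user is None:
        return None  # no match: fall back to S1 and keep watching the video
    _ref, params = USER_DB[user]          # step S3: all stored user parameters
    return {"user": user, **params}       # step S4: drive the machine with them
```

A feature close to a stored one resolves to that user's parameters; an unrecognized feature returns `None`, so the machine does nothing until a registered face appears, which is the gating behavior the claims describe.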
CN201910928601.1A 2019-09-28 2019-09-28 Intelligent control method for intelligent bathing machine and bathing machine Pending CN110619320A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910928601.1A CN110619320A (en) 2019-09-28 2019-09-28 Intelligent control method for intelligent bathing machine and bathing machine

Publications (1)

Publication Number Publication Date
CN110619320A true CN110619320A (en) 2019-12-27

Family

ID=68924687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910928601.1A Pending CN110619320A (en) 2019-09-28 2019-09-28 Intelligent control method for intelligent bathing machine and bathing machine

Country Status (1)

Country Link
CN (1) CN110619320A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111306803A (en) * 2020-03-01 2020-06-19 苏州淘喜网络科技有限公司 Water source supply control system and method based on big data

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102654903A (en) * 2011-03-04 2012-09-05 井维兰 Face comparison method
CN103616879A (en) * 2013-12-06 2014-03-05 青岛金讯网络工程有限公司 Informationized smart home life control system
CN103745240A (en) * 2013-12-20 2014-04-23 许雪梅 Method and system for retrieving human face on the basis of Haar classifier and ORB characteristics
CN104536389A (en) * 2014-11-27 2015-04-22 苏州福丰科技有限公司 3D face identification technology based intelligent household system and realization method thereof
CN105956512A (en) * 2016-04-19 2016-09-21 安徽工程大学 Air conditioner temperature automatic regulating system and air conditioner temperature automatic regulating method based on machine vision
CN107644191A (en) * 2016-07-21 2018-01-30 中兴通讯股份有限公司 A kind of face identification method and system, terminal and server
CN107784263A (en) * 2017-04-28 2018-03-09 新疆大学 Based on the method for improving the Plane Rotation Face datection for accelerating robust features
CN108625098A (en) * 2018-04-04 2018-10-09 青岛海尔洗衣机有限公司 A kind of washing machine and method of identification facial image selection washing procedure
CN108647608A (en) * 2018-04-28 2018-10-12 东莞市华睿电子科技有限公司 A kind of implementation method of the smart home burglary-resisting system based on Identification of Images
CN109839828A (en) * 2019-01-03 2019-06-04 深圳壹账通智能科技有限公司 Intelligent home furnishing control method and device, storage medium and electronic equipment

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
LHANCHAO: "Feature Point Matching - FAST Feature Point Detection", https://blog.csdn.net/lhanchao/article/details/52732947 *
LIU LIANG: "Application of Face Recognition in Intelligent Access Control Systems", Electronic Technology & Software Engineering *
LIU DAIZHI: "Remote Sensing Geophysics and National Security", Xi'an Map Press *
WU DINGXUE et al.: "A New Method for Rotated Image Matching Based on Point Features", Computer Science *
ZOU JIANCHENG et al.: "Mathematics and Its Applications in Image Processing", Beijing University of Posts and Telecommunications Press *

Similar Documents

Publication Publication Date Title
Chen et al. An end-to-end system for unconstrained face verification with deep convolutional neural networks
Thornton et al. A Bayesian approach to deformed pattern matching of iris images
Liu et al. Learning the spherical harmonic features for 3-D face recognition
Kroon et al. Eye localization for face matching: is it always useful and under what conditions?
US9208375B2 (en) Face recognition mechanism
CN108416338B (en) Non-contact palm print identity authentication method
JP2013012190A (en) Method of approximating gabor filter as block-gabor filter, and memory to store data structure for access by application program running on processor
Zakaria et al. Hierarchical skin-adaboost-neural network (h-skann) for multi-face detection
Wang et al. Weber local descriptors with variable curvature gabor filter for finger vein recognition
Liu et al. Finger vein recognition with superpixel-based features
TW202006630A (en) Payment method, apparatus, and system
CN114359998B (en) Identification method of face mask in wearing state
Oldal et al. Hand geometry and palmprint-based authentication using image processing
Yakovleva et al. Face Detection for Video Surveillance-based Security System.
Lee et al. Robust Face Detection Based on Knowledge‐Directed Specification of Bottom‐Up Saliency
Dai et al. Iris center localization using energy map with image inpaint technology and post-processing correction
TW201635197A (en) Face recognition method and system
CN110619320A (en) Intelligent control method for intelligent bathing machine and bathing machine
CN116342968B (en) Dual-channel face recognition method and device
CN116453230A (en) Living body detection method, living body detection device, terminal equipment and storage medium
CN111753583A (en) Identification method and device
CN115690803A (en) Digital image recognition method and device, electronic equipment and readable storage medium
Liu-Jimenez et al. FPGA implementation for an iris biometric processor
Vera et al. Iris recognition algorithm on BeagleBone Black
CN111428679B (en) Image identification method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2019-12-27