WO2019051777A1 - A reminder method and reminder system based on a smart terminal - Google Patents

A reminder method and reminder system based on a smart terminal

Info

Publication number
WO2019051777A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
reminder
smoke
threshold
smart terminal
Prior art date
Application number
PCT/CN2017/101893
Other languages
English (en)
French (fr)
Inventor
黄文菲
Original Assignee
深圳传音通讯有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳传音通讯有限公司 filed Critical 深圳传音通讯有限公司
Priority to PCT/CN2017/101893 priority Critical patent/WO2019051777A1/zh
Priority to CN201780094925.9A priority patent/CN111163650A/zh
Publication of WO2019051777A1 publication Critical patent/WO2019051777A1/zh

Classifications

    • A HUMAN NECESSITIES
    • A24 TOBACCO; CIGARS; CIGARETTES; SIMULATED SMOKING DEVICES; SMOKERS' REQUISITES
    • A24F SMOKERS' REQUISITES; MATCH BOXES; SIMULATED SMOKING DEVICES
    • A24F47/00 Smokers' requisites not otherwise provided for
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to the field of intelligent control, and in particular, to a reminding method and a reminding system based on a smart terminal.
  • the object of the present invention is to provide a reminding method and a reminding system based on a smart terminal, which can pop up a smoking cessation prompt in real time, remind the user of the danger of smoking, and automatically count the smoking data of the user and analyze the smoking habits of the user.
  • the invention discloses a reminding method based on a smart terminal, comprising the following steps:
  • the spacing is compared with a distance threshold preset in the smart terminal, and when the spacing is smaller than the distance threshold, reminder information is displayed in the display interface of the smart terminal.
  • the reminding method further comprises the following steps:
  • the number and frequency of the prompt message and/or the prompt voice are counted within a preset period, and the counted number and frequency are displayed.
  • the step of comparing the spacing with the distance threshold preset in the smart terminal, and displaying reminder information in the display interface of the smart terminal when the spacing is less than the distance threshold, further includes:
  • controlling the reminder information to stop being displayed within a time period whose length is the reminder period threshold and whose starting point is the current time.
  • the step of detecting a person in the scene and identifying a portrait element possessed by the character comprises:
  • the presence of the portrait element is determined by comparing the location of the portrait element with a location threshold of a portrait element preset to the smart terminal.
  • the step of detecting the area brightness and the area chrominance in the scene, and determining whether the scene has smoke elements in the scene includes:
  • the invention also discloses a reminder system based on a smart terminal, comprising:
  • a camera for acquiring, in real time, the scene it is facing;
  • a person detecting module connected to the camera, detecting a person in the scene, and identifying a portrait element possessed by the character;
  • a smoke detecting module connected to the camera, detecting the area brightness and the area chromaticity in the scene, and determining whether there is a smoke element in the scene;
  • a calculation module connected to the person detection module and the smoke detection module, calculating the spacing between the portrait element and the smoke element;
  • a control module connected to the calculation module, receiving the spacing, comparing the spacing with a distance threshold preset in the smart terminal, and, when the spacing is less than the distance threshold, controlling reminder information to be displayed in the display interface of the smart terminal.
  • the reminding system further comprises:
  • a database which is disposed in the smart terminal, and is provided with a prompt message and/or a prompt voice;
  • a recording module connected to the calling module, when the prompt message and/or the prompt voice is called, collecting a call time for calling the prompt message and/or the prompt voice;
  • the statistics module is connected to the recording module, and based on the calling time, counts and displays the number and frequency of the prompt message and/or the prompt voice within a preset period.
  • the control module presets a reminder time threshold and a reminder period threshold, and includes:
  • an operation unit configured to display the reminder information within the reminder time threshold;
  • the operation unit further controls the reminder information to stop being displayed within a time period whose length is the reminder period threshold and whose starting point is the current time.
  • the person detection module comprises:
  • a positioning unit that positions the portrait element to mark a position of the portrait element
  • the identification unit compares the position of the portrait element with a position threshold of a portrait element preset to the smart terminal to determine the presence of the portrait element.
  • the smoke detecting module comprises:
  • a dividing unit that divides the image of the scene according to the contour of each element of the scene, acquiring at least one scene element;
  • An extracting unit is connected to the dividing unit to extract brightness, chromaticity and contrast of each of the scene elements;
  • the difference detecting unit detects a difference in transmittance between each of the scene elements and an adjacent scene element to determine whether there is a smoke element in the scene.
  • FIG. 1 is a schematic flow chart of a method for reminding a smart terminal based on a preferred embodiment of the present invention
  • FIG. 2 is a schematic flow chart of a method for reminding a smart terminal based on another preferred embodiment of the present invention
  • FIG. 3 is a flow chart showing the display of reminder information in accordance with a preferred embodiment of the present invention.
  • FIG. 4 is a flow chart showing the recognition of a portrait element in accordance with a preferred embodiment of the present invention.
  • Figure 5 is a flow chart showing the determination of smoke elements in accordance with a preferred embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a reminder system based on a smart terminal in accordance with a preferred embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a reminder system based on a smart terminal according to another preferred embodiment of the present invention.
  • FIG. 8 is a schematic structural view of a control module in accordance with a preferred embodiment of the present invention.
  • FIG. 9 is a schematic structural view of a person detecting module in accordance with a preferred embodiment of the present invention.
  • FIG. 10 is a block diagram showing the structure of a smoke detecting module in accordance with a preferred embodiment of the present invention.
  • first, second, third, etc. may be used in the present disclosure to describe various information, such information should not be limited to these terms. These terms are only used to distinguish the same type of information from each other.
  • first information may also be referred to as second information without departing from the scope of the present disclosure.
  • second information may also be referred to as first information.
  • the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining"
  • a smoking cessation reminding method for a smoking user includes the following steps:
  • S100 Calling the camera of the smart terminal to acquire a scene facing the camera in real time.
  • using the camera of the smart terminal, all scenes within the camera's field of view can be detected actively and in real time without user operation. Specifically, the camera is always running (whether in the foreground or in the background) and captures the scene it is facing (optionally without saving it), acquiring only the current scene.
  • the advantage of controlling real-time acquisition by the camera is that the user need not operate the camera at all; it is controlled entirely by the smart terminal.
  • the portrait elements possessed by the person are recognized, for example the facial features of the face, the limbs of the body, and the clothes worn by the person; the subsequent steps are taken only when such portrait elements are present.
  • detecting portrait elements can rule out cases where the camera is facing a photograph or a non-living person in a video and would otherwise misidentify a person, reducing the chance of false reminders when the user is not actually smoking.
  • the area brightness detection and the area chromaticity detection are performed on other areas in the scene.
  • a conventional smoke detector uses the infrared scattering principle to detect smoke: when the smoke reaches a predetermined threshold, alarm data is sent to the gateway and an alarm sounds.
  • the infrared emitting tube installed in such a smoke detector emits an infrared beam, which is scattered by the smoke particles; the intensity of the scattered light is proportional to the concentration of the smoke. When the photosensitive tube receives the infrared beam, changes in its intensity are converted into an electrical signal, and an alarm signal is formed through the transmitting circuit and the receiving circuit.
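The proportionality between scattered-light intensity and smoke concentration described above can be sketched as follows. This is an illustrative sketch only: the function name, the proportionality constant `k`, and the threshold value are hypothetical and do not appear in the disclosure.

```python
def smoke_alarm(scattered_intensity, k=0.02, concentration_threshold=3.0):
    """Infrared-scattering principle: treat scattered-light intensity as
    proportional to smoke concentration (k is an illustrative constant),
    and raise the alarm once the inferred concentration crosses the
    preset threshold."""
    concentration = scattered_intensity * k
    return concentration >= concentration_threshold

print(smoke_alarm(200))  # True  (200 * 0.02 = 4.0 >= 3.0)
print(smoke_alarm(50))   # False (50 * 0.02 = 1.0 < 3.0)
```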
  • the method used here instead detects the area brightness and the area chromaticity of each block area in the scene to determine whether there is smoke in the scene.
  • smoke in the scene that the camera is facing appears in the collected picture as a grey area: the brightness of the area where the smoke is located is lower than that of the other areas in the scene, and similarly its chromaticity is darker. Based on these two characteristics, the presence of a smoke element can be judged by detecting the area brightness and the area chromaticity in the scene.
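The brightness-and-chromaticity judgement above can be sketched as follows, assuming the scene has already been split into block areas, each with a measured brightness and chromaticity. The function name, margins, and sample values are hypothetical.

```python
def find_smoke_regions(regions, brightness_margin=30, chroma_margin=20):
    """Flag candidate smoke regions: smoke shows up as a grey patch whose
    brightness AND chromaticity fall well below the scene averages.
    regions: list of (region_id, brightness, chromaticity) tuples."""
    avg_b = sum(b for _, b, _ in regions) / len(regions)
    avg_c = sum(c for _, _, c in regions) / len(regions)
    return [rid for rid, b, c in regions
            if b < avg_b - brightness_margin and c < avg_c - chroma_margin]

scene = [("sky", 200, 120), ("wall", 180, 100), ("smoke", 90, 40)]
print(find_smoke_regions(scene))  # ['smoke']
```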
  • the position of the person in the current scene is obtained, and if the position of the smoke element is close to the mouth, the person in the scene can basically be judged to be in a smoking state.
  • S400 Calculating the spacing between the portrait element and the smoke element when there is a smoke element in the scene.
  • the smoke may be second-hand smoke, or the smoking user may not be facing the camera; in such cases, even if a reminder is issued, it cannot reach the actual smoking user. Therefore, after confirming that there is a smoke element in the scene, the spacing between the portrait element and the smoke element in the scene captured by the camera must be calculated, to determine whether the smoke element belongs directly to the person element or whether the source of the smoke element is the person element. The user is alerted only if the smoke element belongs directly to the person element or the source of the smoke element is the person element.
  • S500 Compare the spacing with a distance threshold preset in the smart terminal, and display a reminder information in the display interface of the smart terminal when the spacing is less than the distance threshold
  • a distance threshold is preset in the smart terminal, and the calculated distance between the portrait element and the smoke element is compared with the distance threshold.
  • when the spacing between the portrait element and the smoke element is less than the distance threshold, it can be determined that the smoke represented by the smoke element is emitted by the person having the portrait element, and a reminder should be given. Therefore, on the display interface of the smart terminal, for example at the top or on the right side, a reminder message is displayed in a striking manner such as sliding, scrolling, or jumping, informing the smoking user of the smart terminal of the current smoking hazard and warning them to stop the smoking behavior.
  • the reminder information can be configured to remain displayed no matter how the smart terminal is operated, which interface it switches to, or even whether the screen is turned off, until the user extinguishes the cigarette and the camera can no longer capture the smoke element.
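Steps S400 and S500 together amount to a Euclidean-distance computation followed by a threshold comparison, which can be sketched as follows. The coordinate values and the 50-pixel threshold are hypothetical.

```python
import math

def spacing(p, q):
    """Euclidean distance between two (x, y) pixel coordinates."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def should_remind(mouth_xy, smoke_xy, distance_threshold=50.0):
    # Display the reminder only when the smoke element is close enough
    # to the portrait element to be attributed to that person.
    return spacing(mouth_xy, smoke_xy) < distance_threshold

print(should_remind((100, 200), (110, 195)))  # True  (spacing ~11.2)
print(should_remind((100, 200), (400, 50)))   # False (spacing ~335.4)
```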
  • the smart terminal-based reminding method for a smoking user may further include:
  • S600 preset a prompt message and/or a prompt voice in a database of the smart terminal.
  • a prompt message and/or a prompt voice may be pre-set in the smart terminal.
  • the prompt message may be text in the form of a barrage, such as "smoking is harmful to health, smoking shortens life, and second-hand smoke is even more deadly", or the text may be accompanied by a picture of lungs damaged by smoking, together forming the prompt message.
  • a prompt voice may also be preset in the database, for example by recording pre-prepared voice audio, or by downloading and storing external prompt audio; such audio can later be retrieved and played when the user smokes, prompting the user to stop smoking in a more direct way.
  • a call instruction may be issued to the database of the smart terminal to call the stored prompt message and/or prompt voice from the database, and display and/or play out.
  • the time at which the prompt message is displayed is also recorded. For example, if the user of the smart terminal is found smoking at 8:08 a.m. on August 8, 2017, then in addition to the prompt message being displayed, its display time is recorded; this happens every time the prompt message is displayed, so that the user's smoking habits can be obtained from the collected data and warning messages can be issued to the user in advance.
  • S900 Count the number and frequency of the prompt message and/or the prompt voice in a preset period based on the calling time, and display the number and frequency of statistics.
  • the number and frequency of the reminder information are counted, for example the number of times the reminder information is displayed within a preset period of one day, or within a preset period of one week. The statistics, in the form of a line chart or a histogram, are recorded in the notebook of the smart terminal. The statistical content in the notebook can be periodically displayed to the user, helping the user understand how often smoking behavior occurred within a certain period and so better understand the harm smoking causes.
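The statistics step can be sketched as grouping the recorded call times by preset period (here, one calendar day). The timestamps are hypothetical; the 8:08 a.m. example from above is reused for illustration.

```python
from collections import Counter
from datetime import datetime

def count_per_day(call_times):
    """Group the recorded call timestamps by calendar day and count
    how many reminders were triggered on each day."""
    days = Counter(t.date() for t in call_times)
    return {day.isoformat(): n for day, n in sorted(days.items())}

log = [datetime(2017, 8, 8, 8, 8), datetime(2017, 8, 8, 14, 0),
       datetime(2017, 8, 9, 9, 30)]
print(count_per_day(log))  # {'2017-08-08': 2, '2017-08-09': 1}
```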
  • when the reminder information is displayed, it may preferably be set that the reminder information is stopped during a period equal to the reminder period threshold.
  • a reminder time threshold and a reminder period threshold are set; the reminder time threshold is used to control how long the reminder information is displayed. For example, if the display time is 10 seconds, then when the reminder information is first displayed the current time is recorded, and with that time as the starting point the reminder information is displayed for the duration of the reminder time threshold.
  • the reminder period threshold is used to control the frequency of displaying the reminder information.
  • for example, the reminder period threshold may be set to 10 minutes: within 10 minutes of a reminder, no further reminder message is displayed, even if the user's smoking behavior is detected again. It can be understood that, as an optional function, whether the reminder period threshold is set, and its size, are user-adjustable, and the setting is changed according to personal preference and usage.
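The interplay of the two thresholds can be sketched as a small throttle object: show a reminder, then suppress further reminders until the reminder period threshold (10 minutes, i.e. 600 seconds in this sketch) has elapsed. The class name and the plain-number time representation are illustrative.

```python
class ReminderThrottle:
    """Show a reminder for `show_seconds`, then suppress further
    reminders for `cooldown_seconds` (the reminder period threshold)."""
    def __init__(self, show_seconds=10, cooldown_seconds=600):
        self.show_seconds = show_seconds
        self.cooldown_seconds = cooldown_seconds
        self._last_shown = None

    def try_show(self, now):
        if (self._last_shown is not None
                and now - self._last_shown < self.cooldown_seconds):
            return False  # still inside the cooldown window
        self._last_shown = now
        return True

t = ReminderThrottle()
print(t.try_show(0))    # True  - first reminder is displayed
print(t.try_show(300))  # False - suppressed, within 10-minute cooldown
print(t.try_show(700))  # True  - cooldown elapsed
```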
  • the identification of the portrait elements in the character can be achieved by the following steps:
  • the presence of the portrait element is determined by comparing the location of the portrait element with a location threshold of a portrait element preset to the smart terminal.
  • the process of positioning and determining the portrait elements mainly includes four components: face image acquisition and detection, face image preprocessing, face image feature extraction, and matching and recognition.
  • Face image acquisition: different face images, such as static images, dynamic images, and images from different positions and with different expressions, can be captured by the camera.
  • the camera automatically searches for and captures the user's face image.
  • Face detection In practice, face detection is mainly used for pre-processing of face recognition, that is, the position and size of the face are accurately calibrated in the image.
  • the pattern features contained in a face image are very rich, such as histogram features, color features, template features, structural features, and Haar features; face detection picks out the useful ones among them and uses these features to detect faces.
  • the AdaBoost algorithm is used to select the rectangular features (weak classifiers) that best represent the face; the weak classifiers are combined into a strong classifier by weighted voting, and several trained strong classifiers are then connected in series to form a cascade classifier, which effectively increases the detection speed.
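The weighted-voting idea behind AdaBoost can be sketched as follows. This is not a trained detector: the weak classifiers are simple one-feature threshold tests with hand-picked, hypothetical weights, shown only to illustrate how weak classifiers combine into a strong classifier.

```python
def strong_classify(features, weak_classifiers):
    """Weighted majority vote of weak classifiers, the combination rule
    used by AdaBoost. weak_classifiers: (feature_index, threshold,
    weight) triples; each votes +weight or -weight."""
    score = sum(w if features[i] > t else -w
                for i, t, w in weak_classifiers)
    return score > 0

weak = [(0, 0.5, 1.2), (1, 0.3, 0.8), (2, 0.7, 0.5)]
print(strong_classify([0.9, 0.6, 0.2], weak))  # True  (score +1.5)
print(strong_classify([0.1, 0.1, 0.9], weak))  # False (score -1.5)
```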
  • Face Image Preprocessing is based on face detection results, processing the image and ultimately serving the feature extraction process.
  • the original image acquired by the system is often not directly used due to various conditions and random interference. It must be pre-processed with grayscale correction and noise filtering in the early stage of image processing.
  • the preprocessing process mainly includes light compensation, grayscale transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening of the face image.
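Of the preprocessing operations listed, histogram equalization can be sketched for a flat list of grey values as follows. A minimal sketch: it assumes 8-bit values and at least two distinct grey levels in the input.

```python
def equalise(grey_values, levels=256):
    """Histogram equalisation: remap grey values through the normalised
    cumulative histogram so the output spreads over the full range."""
    n = len(grey_values)
    hist = [0] * levels
    for g in grey_values:
        hist[g] += 1
    cdf, total = [], 0
    for h in hist:               # cumulative histogram
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c)  # first non-zero CDF entry
    return [round((cdf[g] - cdf_min) / (n - cdf_min) * (levels - 1))
            for g in grey_values]

print(equalise([52, 55, 61, 59, 79, 61, 76, 61]))
# [0, 36, 182, 73, 255, 182, 219, 182]
```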
  • Face image feature extraction The features that can be used are generally divided into visual features, pixel statistical features, face image transform coefficient features, face image algebra features, and the like. Face feature extraction is performed on certain features of the face. Face feature extraction, also known as face representation, is a process of character modeling a face. The methods of face feature extraction are summarized into two categories: one is based on knowledge representation methods; the other is based on algebraic features or statistical learning.
  • the knowledge-based representation method mainly obtains the feature data which is helpful for face classification according to the shape description of the face organs and the distance characteristics between them.
  • the feature components usually include the Euclidean distance, curvature and angle between the feature points.
  • the human face is composed of parts such as eyes, nose, mouth, chin, etc. The geometric description of these parts and the structural relationship between them can be used as important features for recognizing human faces. These features are called geometric features.
  • Knowledge-based face representation mainly includes geometric feature-based methods and template matching methods.
  • Face image matching and recognition The feature data of the extracted face image is searched and matched with the feature template stored in the database. By setting a threshold, when the similarity exceeds the threshold, the result of the matching is output. Face recognition is to compare the face features to be recognized with the obtained face feature templates, and judge the identity information of the faces according to the degree of similarity. This process is divided into two categories: one is confirmation, one-to-one image comparison process, and the other is recognition, which is a one-to-many image matching process.
  • the face is composed of portrait elements such as the eyes, nose, mouth, and chin. Because these portrait elements differ in shape, size, and structure from face to face, faces vary widely, so the geometric description of the shapes of these elements and of the structural relationships between them can be used as an important feature for face recognition.
  • the geometric feature was first used for the description and recognition of the side profile of the face. First, several significant points were determined according to the side profile curve, and a set of feature metrics such as distance, angle, etc. for identification were derived from these significant points.
  • the use of geometric features for frontal face recognition is generally performed by extracting the location of important feature points such as the human eye, mouth, nose, and the geometry of important organs such as the eye as classification features.
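The extraction of geometric features from feature-point locations can be sketched as the vector of pairwise Euclidean distances between landmarks. The landmark names and coordinates are hypothetical.

```python
import math

def geometric_features(landmarks):
    """Pairwise distances between facial landmarks, used as a simple
    geometric feature vector. landmarks: {name: (x, y)}."""
    names = sorted(landmarks)
    return [math.dist(landmarks[a], landmarks[b])
            for i, a in enumerate(names) for b in names[i + 1:]]

landmarks = {"left_eye": (30, 40), "right_eye": (70, 40),
             "nose": (50, 60), "mouth": (50, 80)}
features = geometric_features(landmarks)
print(len(features))  # 6 pairwise distances for 4 landmarks
```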
  • the deformable templating method can be regarded as an improvement of the geometric feature method.
  • the basic idea is to design an organ model with adjustable parameters (ie, deformable template), define an energy function, and minimize the energy function by adjusting the model parameters.
  • the model parameters at this time are taken as the geometric features of the organ.
  • the weighting coefficients of various costs in the energy function can only be determined by experience, which is difficult to generalize.
  • the energy function optimization process is very time consuming and difficult to apply.
  • Parameter-based face representation can achieve an efficient description of the salient features of the face, but it requires a lot of pre-processing and fine parameter selection.
  • general geometric features describe only the basic shape and structural relationships of the components and ignore local fine features, so part of the information is lost; they are therefore more suitable for coarse classification. In addition, existing feature-point detection techniques fall far short of the required accuracy, and the amount of calculation is large.
  • the principal-subspace representation is compact and greatly reduces the feature dimension, but it is non-localized (the support of the kernel function extends over the entire coordinate space) and non-topological (points that are adjacent after projection onto an axis need not be adjacent in the original space).
  • This method has achieved good results in practical applications, and it forms the basis of FaceIt's face recognition software.
  • the eigenface method, proposed by Turk and Pentland in the early 1990s, is one of the most popular algorithms; it is simple and effective, and is also called face recognition based on principal component analysis (PCA).
  • the basic idea of eigenface technology is to find, from a statistical point of view, the principal eigenvectors of the covariance matrix of the face image set and to use them to approximately characterize the face images; these eigenvectors are called eigenfaces.
  • the eigenface reflects the information that is implicit in the set of face samples and the structural relationship of the face.
  • the eigenvectors of the sample-set covariance matrices of the eyes, cheeks, and lower jaw are called eigen-eyes, eigen-jaws, and eigen-lips, collectively referred to as eigenfaces.
  • the eigenfaces span a subspace of the corresponding image space, called the face subspace.
  • the projection distance of the test image window in the sub-face space is calculated, and if the window image satisfies the threshold comparison condition, it is determined to be a human face.
  • in the method based on feature analysis, the relative ratios of face reference points and other shape or class parameters describing the facial features are combined to form the recognition feature vector. Whole-face recognition retains not only the topological relationships between the facial parts but also the information of each part itself, whereas component-based recognition designs a specific recognition algorithm by extracting local contour information and grayscale information.
  • the method first determines the size, position, distance and other attributes of the facial iris, nose, mouth angle and the like, and then calculates their geometric feature quantities, and these feature quantities form a feature vector describing the image.
  • the core of the technology is "local feature analysis" and the "graphic/neural recognition algorithm". This method uses the various organs and features of the human face: identification parameters formed from data on the corresponding geometric relationships are compared with all the original parameters in the database, and then judged and confirmed.
  • building on the traditional eigenface, researchers noticed that eigenvectors with large eigenvalues (i.e., eigenfaces) are not necessarily directions of good classification performance; accordingly, various feature (subspace) selection methods, such as Peng's, have been developed.
  • the eigenface method is an explicit principal component analysis of face modeling.
  • some linear auto-associative and linear compression BP networks are implicit principal component analysis methods. They all represent faces as vectors; the weighted sum of these vectors is the principal eigenvector of the cross-product matrix of the training set.
  • the eigenface method is a simple, fast, and practical algorithm based on transform coefficient features, but because it essentially depends on the grayscale correlation between the training-set and test-set images, and requires the test image to be reasonably close to the training set, it has significant limitations.
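The eigenface construction described above can be sketched with NumPy: centre the training vectors, take the leading right-singular vectors of the centred data (the eigenvectors of the covariance matrix), and project a face into the resulting subspace. The data here is random, standing in for real flattened face images.

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))   # 20 training "faces", 64 pixels each
mean_face = faces.mean(axis=0)
centred = faces - mean_face

# Rows of vt are the eigenvectors of the covariance matrix (eigenfaces).
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:5]                 # keep the 5 leading components

def project(face):
    """Coordinates of a face in the 5-dimensional eigenface subspace."""
    return eigenfaces @ (face - mean_face)

weights = project(faces[0])
print(weights.shape)  # (5,)
```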
  • the eigenface recognition method is based on the KL transform, an optimal orthogonal transform in image compression; used for statistical feature extraction, it forms the basis of subspace-method pattern recognition. When the KL transform is used for face recognition, it is assumed that the faces lie in a low-dimensional linear space and that different faces are separable. Since the KL transform of the high-dimensional image space yields a new set of orthogonal bases, part of those orthogonal bases can be used to generate a low-dimensional face subspace, whose basis is obtained by analyzing the statistical characteristics of the face training sample set.
  • the generating matrix of the KL transform can be the overall scatter matrix of the training sample set, or its inter-class scatter matrix. Training with the average of several images of the same person eliminates interference such as lighting to some extent, reduces the amount of calculation, and does not lower the recognition rate.
  • a dynamic link model (DLA) is proposed for object recognition with distortion invariance.
  • the object is described by sparse graphs.
  • the vertices are marked by multi-scale description of the local energy spectrum, and the edges represent topological connections and are marked by geometric distance.
  • elastic pattern-matching techniques are applied to find the closest known pattern.
  • surface deformation is performed by the finite element method, and whether two pictures show the same person is judged from the deformation. This method is characterized by placing the spatial coordinates (x, y) and the grayscale I(x, y) together in a 3D space and considering them jointly. Experiments show that the recognition result is significantly better than the eigenface method.
  • the face is encoded into 83 model parameters by automatically locating the salient features of the face, and the face recognition based on the shape information is performed by the method of discrimination analysis.
  • Elastic image matching technology is a recognition algorithm based on geometric features and wavelet texture analysis for gray distribution information. Because the algorithm makes good use of face structure and gray distribution information, it also has automatic and precise positioning. The function of the facial feature points has a good recognition effect, and the adaptive recognition rate is high.
  • Artificial neural network is a nonlinear dynamic system with good self-organization and self-adaptation ability.
  • research on neural network methods in face recognition is in the ascendant. One approach first extracts 50 principal components of the face, maps them to a 5-dimensional space with an auto-associative neural network, and then uses a common multi-layer perceptron for discrimination; this works well for some simple test images;
  • a hybrid neural network for face recognition in which unsupervised neural networks are used for feature extraction and supervised neural networks are used for classification.
  • the application of neural network methods in face recognition has certain advantages over the methods above, because it is quite difficult to explicitly describe the many rules of face recognition, while the neural network method can obtain implicit expressions of these laws and rules through learning; it is more adaptable and generally easier to implement. Artificial neural network recognition is therefore fast, but its recognition rate is low.
  • the neural network method usually needs to input the face as a one-dimensional vector, so the input node is huge, and one of the important targets for recognition is dimension reduction processing.
  • the Gabor filter limits the Gaussian network function to the shape of a plane wave, and has a preference for the orientation and frequency in the filter design, which is characterized by sensitivity to line edge responses.
  • the template-matching method stores a number of standard face image templates, or face-organ templates, in a library.
  • the sample face image is matched against all templates in the library using a normalized correlation metric.
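The normalized correlation metric used in template matching can be sketched for two equal-length grey-value vectors; the template and sample values are hypothetical.

```python
import math

def ncc(a, b):
    """Normalised cross-correlation between two equal-length grey
    vectors: 1.0 for a perfect match, near 0 for unrelated patterns."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

template = [10, 50, 90, 50, 10]
sample   = [12, 48, 95, 52, 8]
print(round(ncc(template, sample), 3))  # 0.998 - near-perfect match
```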
  • the eigenface method treats the image as a matrix, and calculates the eigenvalues and corresponding eigenvectors as algebraic features for recognition. Its advantage is that geometric features such as the eyes, nose, and mouth need not be extracted, but the recognition rate is not high with a single sample, and the amount of calculation is large when the number of face patterns is large.
  • This technique is derived from, but essentially different from, the traditional "feature face” face recognition method.
  • in the eigenface method all people share one face subspace, whereas this method establishes a private face subspace for each individual face, thereby not only better describing the differences between individual faces but also discarding, as far as possible, the intra-class differences and noise that are unfavorable for recognition; it therefore has better discriminating ability than the traditional eigenface algorithm.
  • a technique for generating multiple training samples from a single sample has also been proposed, so that the individual face subspace method, which requires multiple training samples, can be applied to the single-training-sample face recognition problem.
  • Since singular value features are stable in describing an image and possess important properties such as transposition invariance, rotation invariance, displacement invariance, and mirror transformation invariance, they can serve as an effective algebraic feature description of an image.
  • Singular value decomposition has accordingly been widely used in image data compression, signal processing, and pattern analysis.
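The transposition invariance claimed above (a matrix and its transpose share the same singular values) can be checked directly. The 2x2 helper below is an illustrative sketch, not part of the patent; it obtains the singular values from the eigenvalues of M^T M.

```python
import math

def singular_values_2x2(m):
    """Singular values of a 2x2 matrix, from the eigenvalues of M^T M."""
    a, b = m[0]
    c, d = m[1]
    p = a * a + c * c          # (M^T M)[0][0]
    q = a * b + c * d          # (M^T M)[0][1] == (M^T M)[1][0]
    r = b * b + d * d          # (M^T M)[1][1]
    tr, det = p + r, p * r - q * q
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    eigs = ((tr + disc) / 2, max((tr - disc) / 2, 0.0))
    return sorted((math.sqrt(eigs[0]), math.sqrt(eigs[1])), reverse=True)

sv_a = singular_values_2x2([[1.0, 2.0], [3.0, 4.0]])
sv_at = singular_values_2x2([[1.0, 3.0], [2.0, 4.0]])  # the transpose
# transposition invariance: both matrices yield the same singular values
assert all(abs(x - y) < 1e-9 for x, y in zip(sv_a, sv_at))
```

The same invariance is what makes the sorted singular value vector usable as a stable algebraic feature of an image block.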
  • The detection of smoke elements includes: segmenting the image containing the scene according to the contour of each element in the scene to obtain at least one scene element; extracting the brightness, chromaticity, and contrast of each scene element; detecting the difference in light transmittance between each scene element and its adjacent scene elements; and determining whether a smoke element is present in the scene.
  • Specifically, the acquired scene is first segmented, for example using a grayscale threshold.
  • A grayscale threshold is set in the smart terminal. Once the threshold is determined, the grayscale value of each pixel in the scene is extracted and compared against the threshold one by one; this comparison can be performed for all pixels in parallel. Pixels whose grayscale value is below the threshold are grouped to form a first image unit, and pixels whose grayscale value is greater than or equal to the threshold are grouped to form a second image unit.
  • The advantages of segmentation based on a grayscale threshold are simple calculation, high computational efficiency, and speed, so it is widely used where computational efficiency matters (for example, in hardware implementations).
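The two-unit split described above can be sketched in a few lines; the function name and the list-of-rows image representation are assumptions for illustration only.

```python
def segment_by_threshold(gray, threshold):
    """Split a grayscale image (a list of rows of 0-255 values) into two
    image units: pixel coordinates below the threshold, and those at or
    above it, as in the description."""
    first_unit, second_unit = [], []
    for y, row in enumerate(gray):
        for x, value in enumerate(row):
            (first_unit if value < threshold else second_unit).append((x, y))
    return first_unit, second_unit

scene = [[12, 200],
         [45, 230]]
dark, bright = segment_by_threshold(scene, threshold=128)
assert dark == [(0, 0), (0, 1)] and bright == [(1, 0), (1, 1)]
```

Because each pixel is compared against the threshold independently, the loop body is trivially parallelizable, which is the efficiency property the text emphasizes.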
  • The grayscale threshold may be chosen as a global threshold, an adaptive threshold, an optimal threshold, and so on.
  • A global threshold segments the entire image with one and the same threshold, and is suitable for images with obvious contrast between background and foreground; it is determined from the whole image. However, this approach considers only the grayscale value of each pixel itself and generally ignores spatial features, and is therefore sensitive to noise.
  • Commonly used global threshold selection methods include the peak-valley method based on the image grayscale histogram, the minimum error method, the maximum between-class variance method, the maximum entropy automatic threshold method, and others.
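The maximum between-class variance method named above is commonly known as Otsu's method. The pure-Python sketch below, over a 256-bin histogram, is an illustrative implementation, not the patent's own:

```python
def otsu_threshold(gray):
    """Global threshold by the maximum between-class variance (Otsu)
    method, chosen from the grayscale histogram of the whole image."""
    pixels = [v for row in gray for v in row]
    n = len(pixels)
    hist = [0] * 256
    for v in pixels:
        hist[v] += 1
    total = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, s0 = 0, -1.0, 0, 0.0
    for t in range(256):
        w0 += hist[t]          # pixel count of the class at or below t
        s0 += t * hist[t]      # grayscale sum of that class
        w1 = n - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = s0 / w0, (total - s0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

scene = [[10, 10, 10, 10], [200, 200, 200, 200]]
t = otsu_threshold(scene)
assert 10 <= t < 200   # the threshold falls between the two histogram peaks
```

Like all global methods, it uses only the histogram, so it inherits the noise sensitivity noted in the text.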
  • In many cases, however, the contrast between object and background varies across the image, and a single uniform threshold cannot separate them; different grayscale thresholds may then be applied according to local features of the image.
  • The grayscale threshold in that case is an adaptive threshold: in practice, the image is divided into sub-regions with a threshold chosen for each, or the threshold at each point is chosen dynamically from a surrounding neighborhood.
  • The choice of grayscale threshold depends on the specific problem and is generally determined by experiment.
  • For a given image, the best grayscale threshold can be found by analyzing the histogram; for example, when the histogram clearly shows two peaks, the midpoint between the peaks can be chosen as the optimal threshold.
  • After segmentation, the internal brightness, chromaticity, contrast, and so on of each acquired scene element are extracted to determine whether the transmittance of each segmented region differs from that of its adjacent regions. For example, if the brightness of a region is lower than that of the adjacent scene elements and its contrast is darker, it can essentially be determined that a smoke element is present in that region.
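The brightness-and-contrast comparison described above reduces to a simple rule. In the sketch below, the margin values are invented for illustration; the patent does not specify numeric margins.

```python
def looks_like_smoke(region_brightness, neighbor_brightness,
                     region_contrast, neighbor_contrast,
                     brightness_margin=20, contrast_margin=10):
    """Flag a region as possible smoke when it is both dimmer and flatter
    (lower contrast) than its neighbours, per the description.
    The margin defaults are illustrative assumptions."""
    dimmer = region_brightness + brightness_margin < neighbor_brightness
    flatter = region_contrast + contrast_margin < neighbor_contrast
    return dimmer and flatter

smoky = looks_like_smoke(90, 160, 15, 40)    # dim, low-contrast region
clear = looks_like_smoke(150, 160, 35, 40)   # barely differs from neighbours
assert smoky is True and clear is False
```

Requiring both conditions at once keeps single-feature outliers (for example, a merely shadowed region) from being flagged as smoke.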
  • FIGS. 6-10 show several embodiments of the smart-terminal-based reminder system according to preferred embodiments of the present invention, together with schematic structural diagrams of the modules in some of those embodiments.
  • In a basic embodiment, the reminder system includes:
  • a camera, which is always running (whether in the foreground or the background), capturing the frame it faces (optionally without saving it) and acquiring only the current scene.
  • The advantage of having the camera capture in real time is that the user's operation of the camera can be omitted entirely; acquisition is controlled by the smart terminal itself;
  • a person detection module, connected to the camera, which detects a person in the scene and recognizes the portrait elements that person possesses, for example the facial features of a face, the limbs of a body, or the clothes worn; the subsequent steps are performed only when such portrait elements are recognized.
  • Detecting portrait elements excludes cases such as the camera facing a photograph, a non-live person in a video, or a misidentified person, reducing the chance of a false reminder when the user is not smoking;
  • a smoke detection module, connected to the camera, which detects regional brightness and regional chromaticity in the scene and determines whether a smoke element is present in the scene;
  • a calculation module, connected to the person detection module and the smoke detection module. After a smoke element in the scene is confirmed, the distance between the portrait element and the smoke element in the scene captured by the camera must be calculated, to determine whether the smoke element belongs directly to the person element or whether its source is the person element. Only in those cases is the user alerted;
  • a control module, connected to the calculation module, which receives the calculated distance between the portrait element and the smoke element and compares it with the distance threshold.
  • When the distance is smaller than the distance threshold, the control module can determine that the smoke represented by the smoke element was emitted by the person possessing that person element, and a reminder should then be issued.
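The control module's decision reduces to a Euclidean distance test against the preset threshold. A minimal sketch, with illustrative pixel coordinates and threshold value:

```python
import math

def should_remind(portrait_pos, smoke_pos, distance_threshold):
    """True when the smoke element is close enough to the portrait element
    to be attributed to that person (distance below the threshold)."""
    dx = portrait_pos[0] - smoke_pos[0]
    dy = portrait_pos[1] - smoke_pos[1]
    return math.hypot(dx, dy) < distance_threshold

near = should_remind((100, 120), (110, 125), distance_threshold=50)   # True
far = should_remind((100, 120), (400, 300), distance_threshold=50)    # False
assert near is True and far is False
```

Using the mouth position as `portrait_pos`, as the description suggests, makes the attribution of smoke to a smoking person more precise than using the whole-body position.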
  • In a further embodiment, the reminder system further comprises:
  • a database, disposed in the smart terminal, in which prompt messages and/or prompt voices are preset.
  • A prompt message may be text in bullet-screen form, such as "smoking is harmful to health, smoking shortens life, second-hand smoke is even more deadly", or such text supplemented with pictures of the effects of smoking on the lungs, together forming the prompt message.
  • Prompt voices may likewise be preset in the database, for example by recording prepared voice audio or by downloading and storing external prompt audio; such audio can later be retrieved and played when the user smokes, prompting the user to stop smoking in a more direct way.
  • a calling module, connected to the database, which retrieves the stored prompt message and/or prompt voice from the database and displays and/or plays it.
  • a recording module, connected to the calling module.
  • The display time of each prompt message is recorded, and the module is configured so that every display of a prompt message is logged; through the collection of this data, the user's smoking habits can be learned and reminder information can be sent to the user in advance.
  • a statistics module, connected to the recording module, which, based on the calling times, counts and displays the number and frequency of prompt messages and/or prompt voices within a preset period. For example, it may count how many times the reminder was displayed within each daily period, or on which days of a weekly period the reminder was displayed more often, presenting the statistics as a line chart or histogram recorded in the notepad of the smart terminal. The statistics in the notepad can be shown to the user periodically, helping the user understand the number and frequency of smoking episodes within a given period and better appreciate the harm smoking does.
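The per-day and per-weekday counting that the statistics module performs can be sketched with the standard library. The function name and the sample timestamps are illustrative (the first matches the 08:08 on 8 August 2017 example used in the description):

```python
from collections import Counter
from datetime import datetime

def reminder_stats(call_times):
    """Count reminder displays per calendar day and per weekday name,
    as raw material for the daily/weekly charts the module produces."""
    per_day = Counter(t.date().isoformat() for t in call_times)
    per_weekday = Counter(t.strftime("%A") for t in call_times)
    return per_day, per_weekday

calls = [datetime(2017, 8, 8, 8, 8), datetime(2017, 8, 8, 20, 30),
         datetime(2017, 8, 9, 9, 0)]
per_day, per_weekday = reminder_stats(calls)
assert per_day["2017-08-08"] == 2 and per_day["2017-08-09"] == 1
```

The resulting counters map directly onto the line-chart or histogram presentation the text describes.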
  • A reminder time threshold and a reminder period threshold are preset in the control module, which includes:
  • an operation unit, which controls the reminder information to be displayed within the reminder time threshold; and an acquisition unit, which acquires the current time at which the reminder information is displayed. The operation unit further stops the display of reminder information during the period of the reminder period threshold that starts from that current time.
  • For example, if the display time of the reminder information is set to 10 seconds, then when reminder information is displayed, the current time of the initial display is recorded, and with that time as the starting point the reminder information is shown for the duration of the reminder time threshold.
  • The reminder period threshold controls how frequently reminder information is displayed.
  • For example, the reminder period threshold may be set to 10 minutes, so that after one reminder has been displayed, no further reminder is shown within the following 10 minutes even if smoking is detected again.
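Combining the 10-second reminder time threshold with the 10-minute reminder period threshold gives a simple gate. The class below is an illustrative sketch using plain seconds as timestamps; the class and method names are assumptions.

```python
class ReminderGate:
    """Enforces the reminder time threshold (how long a reminder is shown)
    and the reminder period threshold (quiet time after each reminder)."""

    def __init__(self, display_seconds=10, quiet_seconds=600):
        self.display_seconds = display_seconds  # reminder time threshold
        self.quiet_seconds = quiet_seconds      # reminder period threshold
        self.last_shown = None

    def try_show(self, now):
        """Return the (start, end) display window if a reminder is allowed
        at time `now`, or None while the quiet period is still running."""
        if self.last_shown is not None and now - self.last_shown < self.quiet_seconds:
            return None
        self.last_shown = now
        return (now, now + self.display_seconds)

gate = ReminderGate()
first = gate.try_show(0)      # shown for 10 s starting at t=0
blocked = gate.try_show(300)  # suppressed: inside the 10-minute quiet period
second = gate.try_show(700)   # allowed again once the quiet period has passed
```

As the text notes, whether the quiet period is used at all, and how long it is, can be left to user preference by exposing the two constructor parameters as settings.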
  • FIGS. 9-10 are structural diagrams of the person detection module and the smoke detection module, respectively. The person detection module includes: a positioning unit, which locates the portrait element so as to mark its position; and an identification unit, which compares the position of the portrait element with a portrait element position threshold preset in the smart terminal to determine the presence of the portrait element.
  • The positioning unit and the identification unit, and the algorithms they rely on, can be implemented by loading the methods described above.
  • The smoke detection module includes: a segmentation unit, which segments the image containing the scene according to the contour of each element of the scene to obtain at least one scene element; an extraction unit, connected to the segmentation unit, which extracts the brightness, chromaticity, and contrast of each scene element; and a difference detection unit, which detects the difference in transmittance between each scene element and its adjacent scene elements to determine whether a smoke element is present in the scene.
  • Once segmentation is complete, the extraction unit extracts the internal brightness, chromaticity, contrast, and so on of each scene element to determine whether the transmittance of each segmented region differs from that of its adjacent regions; for example, if the brightness of a smoke region is lower than that of the adjacent scene elements and its contrast is darker, the difference detection unit can essentially determine that a smoke element is present in that region.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

本发明提供了一种基于智能终端的提醒方法,包括以下步骤:调用所述智能终端的摄像头,实时获取正对所述摄像头的场景;检测所述场景内的人物,并识别所述人物具有的人像要素;检测所述场景内的区域亮度及区域色度,判断所述场景内是否具有烟雾要素;当所述场景内具有烟雾要素时,计算所述人像要素和烟雾要素的间距;将所述间距与预设于所述智能终端内的一距离阈值比较,当所述间距小于所述距离阈值时,于所述智能终端的显示界面内显示一提醒信息。采用上述技术方案后,能够实时弹出戒烟提示语,提醒用户吸烟的危害性,并自动统计用户吸烟数据,分析用户吸烟习惯,起到有效地帮助用户戒烟的功能。

Description

一种基于智能终端的提醒方法和提醒系统 技术领域
本发明涉及智能控制领域,尤其涉及一种基于智能终端的提醒方法和提醒系统。
背景技术
智能终端经过数十年的发展，也已成为人们生活中不可或缺的一部分。很难想象没有智能终端的日子是黑暗还是昏暗。时至今日，我国已经拥有近八亿的智能终端用户，如此庞大的客户群体使众多智能终端厂商对自己的前途无比的自信和疯狂的坚持，也就有多年前第一批国产智能终端品牌攻城略地占据国内近半江山，而目前的智能终端厂商更注重在不同方面的功能迭代，以做到与其他厂商的差异化。
由于智能终端的崛起,大多数烟民都习惯边吸烟边玩手机。据不完全统计,目前全球约有13亿烟民,中国约3.5亿,每天有约1.34万人因吸烟离世。可见吸烟危害非常严重,因此戒烟也成为世界大难题之一。以智能终端为载体,目前市面上有很多戒烟的方式,如“戒烟军团”、“戒客”、“种子习惯”等戒烟软件。这些软件的方式如下:
1、服务器社区互动,互相分享经验心得帮助戒烟;
2、类似打卡的方式,用签到的形式帮助戒烟;
3、阅读书籍、文章等方式帮助戒烟。
但上述戒烟软件各具有缺陷:
1、网络社区互动,涉及到网络安全方面,有几率遇上骗子,烟没戒成反遭财物损失;
2、打卡签到、阅读书籍的方式提醒戒烟,比较枯燥难以坚持,实用性不大;
3、上述技术都没有起到实时提醒戒烟的作用。
因此,需要一种可警醒烟民,通过简单有效的方式向吸烟用户提醒吸烟的危害的提醒方法和提醒系统。
发明内容
为了克服上述技术缺陷,本发明的目的在于提供一种基于智能终端的提醒方法和提醒系统,能够实时弹出戒烟提示语,提醒用户吸烟的危害性,并自动统计用户吸烟数据,分析用户吸烟习惯,起到有效地帮助用户戒烟的功能。
本发明公开了一种基于智能终端的提醒方法,包括以下步骤:
调用所述智能终端的摄像头,实时获取正对所述摄像头的场景;
检测所述场景内的人物,并识别所述人物具有的人像要素;
检测所述场景内的区域亮度及区域色度,判断所述场景内是否具有烟雾要素;
当所述场景内具有烟雾要素时,计算所述人像要素和烟雾要素的间距;
将所述间距与预设于所述智能终端内的一距离阈值比较,当所述间距小于所述距离阈值时,于所述智能终端的显示界面内显示一提醒信息。
优选地,所述提醒方法还包括以下步骤:
于所述智能终端的数据库内预设提示消息和/或提示语音;
调用所述提示消息和/或提示语音作为所述提醒信息;
当所述提示消息和/或提示语音被调用时,搜集调用所述提示消息和/或提示语音的调用时间;
基于所述调用时间,统计一预设周期内调用所述提示消息和/或提示语音的次数和频率,并显示统计的次数和频率。
优选地,将所述间距与预设于所述智能终端内的一距离阈值比较,当所述间距小于所述距离阈值时,于所述智能终端的显示界面内显示一提醒信息的步骤后还包括:
于所述智能终端内预设一提醒时间阈值及提醒周期阈值;
控制所述提醒信息于所述提醒时间阈值内显示;
获取显示所述提醒信息的当前时间;
控制自所述当前时间为起始点后的所述提醒周期阈值的时间段内,停止显示所述提醒信息。
优选地,检测所述场景内的人物,并识别所述人物具有的人像要素的步骤包括:
定位所述人像要素,以标记所述人像元素的位置;
将所述人像元素的位置与预设于智能终端的人像元素位置阈值比较,确定所述人像要素的存在。
优选地,检测所述场景内的区域亮度及区域色度,判断所述场景内是否具有烟雾要素的步骤包括:
根据所述场景的每一要素的轮廓分割具有所述场景的图像,获取至少一个场景要素;
提取每一所述场景要素的亮度、色度、对比度;
检测每一所述场景要素与相邻的场景要素的透光度的差异;
确定所述场景内是否具有烟雾要素。
本发明还公开了一种基于智能终端的提醒系统,包括:
摄像头,用于实时获取正对的场景;
人物检测模块,与所述摄像头连接,检测所述场景内的人物,并识别所述人物具有的人像要素;
烟雾检测模块,与所述摄像头连接,检测所述场景内的区域亮度及区域色度,判断所述场景内是否具有烟雾要素;
计算模块,与所述人物检测模块及烟雾检测模块连接,计算所述人像要素和烟雾要 素的间距;
控制模块,与所述计算模块连接,接收所述间距,并将所述间距与预设于所述智能终端内的一距离阈值比较,当所述间距小于所述距离阈值时,控制所述智能终端的显示界面内显示一提醒信息。
优选地,所述提醒系统还包括:
数据库,设于所述智能终端内,预设有提示消息和/或提示语音;
调用模块,与所述数据库连接,调用所述提示消息和/或提示语音作为所述提醒信息;
记录模块,与所述调用模块连接,当所述提示消息和/或提示语音被调用时,搜集调用所述提示消息和/或提示语音的调用时间;
统计模块,与所述记录模块连接,基于所述调用时间,统计一预设周期内调用所述提示消息和/或提示语音的次数和频率并显示。
优选地,所述控制模块内预设有一提醒时间阈值及提醒周期阈值,其包括:
操作单元,控制所述提醒信息于所述提醒时间阈值内显示;
获取单元,获取显示所述提醒信息的当前时间;
所述操作单元还自所述当前时间为起始点后的所述提醒周期阈值的时间段内,控制停止显示所述提醒信息。
优选地,所述人物检测模块包括:
定位单元,对所述人像要素进行定位,以标记所述人像元素的位置;
识别单元,将所述人像元素的位置与预设于智能终端的人像元素位置阈值比较,确定所述人像要素的存在。
优选地,所述烟雾检测模块包括:
分割单元,根据所述场景的每一要素的轮廓分割具有所述场景的图像,获取至少一个场景要素;
提取单元,与所述分割单元连接,提取每一所述场景要素的亮度、色度、对比度;
差异检测单元,检测每一所述场景要素与相邻的场景要素的透光度的差异,以确定所述场景内是否具有烟雾要素。
采用了上述技术方案后,与现有技术相比,具有以下有益效果:
1.可实时弹出戒烟提示语，提醒用户吸烟的危害性；
2.自动统计用户吸烟数据,分析用户吸烟习惯,周期性地给予用户吸烟报告,并指导用户戒烟措施;
3.减少用户操作,提高戒烟效率。
附图说明
图1为符合本发明一优选实施例中基于智能终端的提醒方法的流程示意图;
图2为符合本发明另一优选实施例中基于智能终端的提醒方法的流程示意图;
图3为符合本发明一优选实施例中显示提醒信息的流程示意图;
图4为符合本发明一优选实施例中识别人像要素的流程示意图;
图5为符合本发明一优选实施例中判断烟雾要素的流程示意图;
图6为符合本发明一优选实施例中基于智能终端的提醒系统的结构示意图;
图7为符合本发明另一优选实施例中基于智能终端的提醒系统的结构示意图;
图8为符合本发明一优选实施例中控制模块的结构示意图;
图9为符合本发明一优选实施例中人物检测模块的结构示意图;
图10为符合本发明一优选实施例中烟雾检测模块的结构示意图。
具体实施方式
以下结合附图与具体实施例进一步阐述本发明的优点。
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本公开相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本公开的一些方面相一致的装置和方法的例子。
在本公开使用的术语是仅仅出于描述特定实施例的目的,而非旨在限制本公开。在本公开和所附权利要求书中所使用的单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其他含义。还应当理解,本文中使用的术语“和/或”是指并包含一个或多个相关联的列出项目的任何或所有可能组合。
应当理解,尽管在本公开可能采用术语第一、第二、第三等来描述各种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本公开范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息。取决于语境,如在此所使用的词语“如果”可以被解释成为“在……时”或“当……时”或“响应于确定”
在本发明的描述中,需要理解的是,术语“纵向”、“横向”、“上”、“下”、“前”、“后”、“左”、“右”、“竖直”、“水平”、“顶”、“底”“内”、“外”等指示的方位或位置关系为基于附图所示的方位或位置关系,仅是为了便于描述本发明和简化描述,而不是指示或暗示所指的装置或元件必须具有特定的方位、以特定的方位构造和操作,因此不能理解为对本发明的限制。
在本发明的描述中,除非另有规定和限定,需要说明的是,术语“安装”、“相连”、“连接”应做广义理解,例如,可以是机械连接或电连接,也可以是两个元件内部的连通,可以是直接相连,也可以通过中间媒介间接相连,对于本领域的普通技术人员而言,可以根据具体情况理解上述术语的具体含义。
参阅图1,为符合本发明一优选实施例中基于智能终端的提醒方法的流程示意图,在该实施例中,对吸烟用户的戒烟提醒方法包括以下步骤:
S100:调用所述智能终端的摄像头,实时获取正对所述摄像头的场景
相比于原有需要用户主动地启动软件对自身的吸烟行为进行督促和监控的方式,本 实施例中利用智能终端的摄像头可无需用户操作,主动且实时地检测摄像头可拍摄画面内的全部场景。具体地,将摄像头始终处于运行状态(无论是否前台运行或后台运行),对其正对的画面进行捕捉(可选择不保存),仅获取当前场景。控制摄像头实时获取的好处在于,用户对摄像头的操作也可省略,完全由智能终端自行控制。
S200:检测所述场景内的人物,并识别所述人物具有的人像要素
调用摄像头获取当前场景后,将对场景内是否有人物进行检测。可以理解的是,当用户边吸烟边操作智能终端时,其恰好正对智能终端的前置摄像头或后置摄像头,场景内应当具有用户人物,若场景内无人物,可理解为用户当前未使用手机,或手机正对的场景内无人物,自然无法检测用户是否在吸烟。
为了提高检测精度,当检测到当前场景内具有人物时,将对该人物具有的人像要素进行识别,例如,人脸部的五官、人体的四肢、人物穿戴的衣物等,仅当识别到上述的人像要素时才将执行后续步骤。人像要素的检测,可排除部分如摄像头正对的是照片、视频内的非活体人物、人物误识别的情况发生,减少在用户在未吸烟的情况下依然作提醒的误提醒几率。
S300:检测所述场景内的区域亮度及区域色度,判断所述场景内是否具有烟雾要素
在人像要素识别后,将对场景内的其他区域进行区域亮度检测及区域色度检测。由于智能终端的摄像头无法像专用的烟雾报警器般,采用红外散射原理来探测烟雾,即当烟雾达到预定阈值时,发送报警数据到网关,并且发出报警提示声。具体地,目前安装在烟雾探测器内的红外发射管会发出红外光束,该红外光束将被烟尘粒子散射,其散射光的强弱与烟的浓度成正比,当光敏管接收到的红外光束的强弱发生变化时,将转化为电信号,通过发射电路和接收电路形成报警信号。但由于摄像头捕捉画面时无法发出红外线来对烟雾进行感测,因此,本实施例中,所采用的方法,为检测场景内各块区域的区域亮度和区域色度,来判断场景内是否有烟雾要素。可以理解的是,当用户在吸烟时,摄像头正对的场景内所具有的烟雾在采集的画面内反应的是,烟雾所在区域呈灰色,即该区域的区域亮度相较场景内的其他亮度将偏低,同样地,烟雾所在区域的色度也将偏暗,根据这两个特点,可通过对场景内的区域亮度和区域色度进行检测,以判断烟雾要素的存在。
例如,通过识别人物要素中的人脸上的嘴部的定位,获得在当前场景下人物的位置,烟雾要素的位置若靠近嘴部,则基本可判断场景内的人物处于吸烟状态。
S400:当所述场景内具有烟雾要素时,计算所述人像要素和烟雾要素的间距;
考虑到可能存在场景内有多个人物,而烟雾为二手烟的烟雾的情况,其实并不存在有用户正在吸烟,或可以理解为吸烟用户未正对摄像头,在这样的情况下,即使发出提醒也无法落实到实际的吸烟用户处。因此,在确认场景内具有烟雾要素后,需要计算摄像头捕捉的场景内人像要素与烟雾要素的间距,来判断该烟雾要素是否直属于该人物要素,或是该烟雾要素的来源是否为该人物要素。只有当烟雾要素直属于该人物要素,或是该烟雾要素的来源为该人物要素时,才会向用户发出提醒。
S500:将所述间距与预设于所述智能终端内的一距离阈值比较,当所述间距小于所述距离阈值时,于所述智能终端的显示界面内显示一提醒信息
为判断烟雾要素是否直属于该人物要素,或是该烟雾要素的来源是否为该人物要素,在智能终端内预设有一距离阈值,计算所得的人像要素与烟雾要素的间距将与该距离阈值比较,当人像要素与烟雾要素的间距小于距离阈值时,可判断为烟雾要素所代表的烟雾是由具有该人物要素的人物发出的,此时应当提醒。因此,在智能终端的显示界面上,比如顶部、右侧等位置,以滑动、滚动、跳跃等醒目的方式显示一提醒信息,告知当前正对智能终端的吸烟用户吸烟的危害,并告诫其停止吸烟行为。
通过上述配置,用户在整个提醒过程中,一旦产生了吸烟行为,将通过上述实施例中的提醒方法向其告知提醒语言、提醒信号等,无需再去对智能终端的操作。甚至为了增加戒烟的有效性,可将提醒信息配置为无论对智能终端如何操作,切换至何种界面,甚至息屏,仍然显示的显示方式,直至用户将烟头熄灭,摄像头无法再捕捉到烟雾要素为止。
参阅图2,在上文所述的实施例的基础上,基于智能终端的对吸烟用户的提醒方法还可包括:
S600:于所述智能终端的数据库内预设提示消息和/或提示语音
无论是否已按照上文所述的内容执行了步骤S100-S500,均可在智能终端内预设有提示消息和/或提示语音。提示消息可以是例如“吸烟有害健康、吸烟减少寿命、二手烟更为致命”等弹幕形式的文字,也可以是在文字的基础上加以吸烟引起的对肺部影响的图片等,形成提示消息。同样地,也可以在数据库内预设有提示语音,例如,通过录音的方式采集预先准备好的语音音频,或是通过下载的方式存储外部提示音频,此类音频可在后续被调取出,并在用户吸烟时播放,以更为直接的方式提示用户停止吸烟。
S700:调用所述提示消息和/或提示语音作为所述提醒信息
准备发布提醒信息时,可向智能终端的数据库发出调取指令,从数据库内调用已存储的提示消息和/或提示语音,并显示和/或播放而出。
S800:当所述提示消息和/或提示语音被调用时,搜集调用所述提示消息和/或提示语音的调用时间
一旦在智能终端的显示界面上显示提示消息时,将记录下显示该提示消息的时间,例如,在2017年8月8日的早上8点08分,发现智能终端的用户吸烟时,除了在显示界面上弹出“吸烟有害健康”的提示消息外,还将记录下该提示消息的显示时间,且配置为每次显示提示消息均将记录下,由此,通过大数据的采集,可获得用户的吸烟习惯,预先向用户发出提醒信息。
S900:基于所述调用时间,统计一预设周期内调用所述提示消息和/或提示语音的次数和频率,并显示统计的次数和频率
在数据库内,统计调用提醒信息的次数和频率,例如,以天为周期的预设周期内,提醒信息显示的次数,以周为周期的预设周期内,在哪几天提醒信息显示的次数略高, 以折线图或柱状图的方式统计等,记录在智能终端的记事本内,记事本内的统计内容,可定期地向用户展示,帮助用户了解到在某一时间段内,其吸烟行为实施的次数和频率,更好地了解吸烟对自身的危害。
参阅图3,在显示提醒信息时,可优选地设置:
于所述智能终端内预设一提醒时间阈值及提醒周期阈值;控制提醒信息于所述提醒时间阈值内显示;获取显示提醒信息的当前时间;控制自当前时间为起始点后的所述提醒周期阈值的时间段内,停止显示所述提醒信息。具体地,对提醒信息显示的设置处,设置提醒时间阈值与提醒周期阈值,提醒时间阈值用于控制提醒信息显示的时间,例如,控制提醒信息显示的时间在10秒,则当显示提醒信息时,记录下初始显示的当前时间,以该当前时间为起始点,往后提醒时间阈值的时间内,显示提醒信息。提醒周期阈值用于控制显示提醒信息的频率,例如,可设置提醒周期阈值为10分钟,当显示有一提醒信息,并获取了显示提醒信息的当前时间后,控制在10分钟内不再显示下一提醒信息,即即便再次发现了用户吸烟的行为,也不作提醒。可以理解的是,作为可选的功能,是否需要设置提醒周期阈值及设置提醒周期阈值的大小为用户可自行调整的,根据个人喜好及使用情况改变设置。
参阅图4,对人物内人像要素的识别,可通过以下步骤实现:
定位所述人像要素,以标记所述人像元素的位置;
将所述人像元素的位置与预设于智能终端的人像元素位置阈值比较,确定所述人像要素的存在。
具体地,人像要素的定位及确定的流程主要包括四个组成部分,分别为:人脸图像采集及检测、人脸图像预处理、人脸图像特征提取以及匹配与识别。
人脸图像采集:不同的人脸图像都能通过摄像头采集下来,比如静态图像、动态图像、不同的位置、不同表情等方面都可以得到很好的采集。当用户在摄像头的拍摄范围内时,摄像头会自动搜索并拍摄用户的人脸图像。
人脸检测:人脸检测在实际中主要用于人脸识别的预处理,即在图像中准确标定出人脸的位置和大小。人脸图像中包含的模式特征十分丰富,如直方图特征、颜色特征、模板特征、结构特征及Haar特征等。人脸检测就是把这其中有用的信息挑出来,并利用这些特征实现人脸检测。
人脸检测过程中使用Adaboost算法挑选出一些最能代表人脸的矩形特征(弱分类器),按照加权投票的方式将弱分类器构造为一个强分类器,再将训练得到的若干强分类器串联组成一个级联结构的层叠分类器,有效地提高分类器的检测速度。
人脸图像预处理:对于人脸的图像预处理是基于人脸检测结果,对图像进行处理并最终服务于特征提取的过程。系统获取的原始图像由于受到各种条件的限制和随机干扰,往往不能直接使用,必须在图像处理的早期阶段对它进行灰度校正、噪声过滤等图像预处理。对于人脸图像而言,其预处理过程主要包括人脸图像的光线补偿、灰度变换、直方图均衡化、归一化、几何校正、滤波以及锐化等。
人脸图像特征提取:可使用的特征通常分为视觉特征、像素统计特征、人脸图像变换系数特征、人脸图像代数特征等。人脸特征提取就是针对人脸的某些特征进行的。人脸特征提取,也称人脸表征,它是对人脸进行特征建模的过程。人脸特征提取的方法归纳起来分为两大类:一种是基于知识的表征方法;另外一种是基于代数特征或统计学习的表征方法。
基于知识的表征方法主要是根据人脸器官的形状描述以及他们之间的距离特性来获得有助于人脸分类的特征数据,其特征分量通常包括特征点间的欧氏距离、曲率和角度等。人脸由眼睛、鼻子、嘴、下巴等局部构成,对这些局部和它们之间结构关系的几何描述,可作为识别人脸的重要特征,这些特征被称为几何特征。基于知识的人脸表征主要包括基于几何特征的方法和模板匹配法。
人脸图像匹配与识别:提取的人脸图像的特征数据与数据库中存储的特征模板进行搜索匹配,通过设定一个阈值,当相似度超过这一阈值,则把匹配得到的结果输出。人脸识别就是将待识别的人脸特征与已得到的人脸特征模板进行比较,根据相似程度对人脸的身份信息进行判断。这一过程又分为两类:一类是确认,是一对一进行图像比较的过程,另一类是辨认,是一对多进行图像匹配对比的过程。
实现时,可通过以下算法实现:
1.基于几何特征的方法
正是由于人脸由眼睛、鼻子、嘴巴、下巴等人像要素构成,因为这些人像要素的形状、大小和结构上的各种差异才使得世界上每个人脸千差万别,因此对这些人像要素的形状和结构关系的几何描述,可以做为人脸识别的重要特征。几何特征最早是用于人脸侧面轮廓的描述与识别,首先根据侧面轮廓曲线确定若干显著点,并由这些显著点导出一组用于识别的特征度量如距离、角度等。
采用几何特征进行正面人脸识别一般是通过提取人眼、口、鼻等重要特征点的位置和眼睛等重要器官的几何形状作为分类特征。可变形模板法可以视为几何特征方法的一种改进,其基本思想是:设计一个参数可调的器官模型(即可变形模板),定义一个能量函数,通过调整模型参数使能量函数最小化,此时的模型参数即做为该器官的几何特征。
这种方法存在两个问题,一是能量函数中各种代价的加权系数只能由经验确定,难以推广,二是能量函数优化过程十分耗时,难以实际应用。基于参数的人脸表示可以实现对人脸显著特征的一个高效描述,但它需要大量的前处理和精细的参数选择。同时,采用一般几何特征只描述了部件的基本形状与结构关系,忽略了局部细微特征,造成部分信息的丢失,更适合于做粗分类,而且目前已有的特征点检测技术在精确率上还远不能满足要求,计算量也较大。
2.局部特征分析方法(Local Face Analysis)
主元子空间的表示是紧凑的,特征维数大大降低,但它是非局部化的,其核函数的支集扩展在整个坐标空间中,同时它是非拓扑的,某个轴投影后临近的点与原图像空间中点的临近性没有任何关系,而局部性和拓扑性对模式分析和分割是理想的特性,似乎 这更符合神经信息处理的机制,因此寻找具有这种特性的表达十分重要。这种方法在实际应用取得了很好的效果,它构成了FaceIt人脸识别软件的基础。
3.特征脸方法(Eigenface或PCA)
特征脸方法是90年代初期由Turk和Pentland提出的目前最流行的算法之一,具有简单有效的特点,也称为基于主成分分析(principal component analysis,简称PCA)的人脸识别方法。
特征子脸技术的基本思想是:从统计的观点,寻找人脸图像分布的基本人像元素,即人脸图像样本集协方差矩阵的特征向量,以此近似地表征人脸图像。这些特征向量称为特征脸(Eigenface)。
实际上,特征脸反映了隐含在人脸样本集合内部的信息和人脸的结构关系。将眼睛、面颊、下颌的样本集协方差矩阵的特征向量称为特征眼、特征颌和特征唇,统称特征子脸。特征子脸在相应的图像空间中生成子空间,称为子脸空间。计算出测试图像窗口在子脸空间的投影距离,若窗口图像满足阈值比较条件,则判断其为人脸。
基于特征分析的方法,也就是将人脸基准点的相对比率和其它描述人脸脸部特征的形状参数或类别参数等一起构成识别特征向量,这种基于整体脸的识别不仅保留了人脸部件之间的拓扑关系,而且也保留了各部件本身的信息,而基于部件的识别则是通过提取出局部轮廓信息及灰度信息来设计具体识别算法。该方法是先确定眼虹膜、鼻翼、嘴角等面像五官轮廓的大小、位置、距离等属性,然后再计算出它们的几何特征量,而这些特征量形成一描述该面像的特征向量。其技术的核心实际为“局部人体特征分析”和“图形/神经识别算法。”这种算法是利用人体面部各器官及特征部位的方法。如对应几何关系多数据形成识别参数与数据库中所有的原始参数进行比较、判断与确认。在传统特征脸的基础上,研究者注意到特征值大的特征向量(即特征脸)并不一定是分类性能好的方向,据此发展了多种特征(子空间)选择方法,如Peng的双子空间方法、Weng的线性歧义分析方法、Belhumeur的FisherFace方法等。事实上,特征脸方法是一种显式主元分析人脸建模,一些线性自联想、线性压缩型B P网则为隐式的主元分析方法,它们都是把人脸表示为一些向量的加权和,这些向量是训练集叉积阵的主特征向量。总之,特征脸方法是一种简单、快速、实用的基于变换系数特征的算法,但由于它在本质上依赖于训练集和测试集图像的灰度相关性,而且要求测试图像与训练集比较像,所以它有着很大的局限性。
基于KL变换的特征人脸识别方法,是图象压缩中的一种最优正交变换,人们将它用于统计特征提取,从而形成了子空间法模式识别的基础,若将KL变换用于人脸识别,则需假设人脸处于低维线性空间,且不同人脸具有可分性,由于高维图象空间KL变换后可得到一组新的正交基,因此可通过保留部分正交基,以生成低维人脸空间,而低维空间的基则是通过分析人脸训练样本集的统计特性来获得,KL变换的生成矩阵可以是训练样本集的总体散布矩阵,也可以是训练样本集的类间散布矩阵,即可采用同一人的数张图象的平均来进行训练,这样可在一定程度上消除光线等的干扰,且计算量也得到减 少,而识别率不会下降。
4.基于弹性模型的方法
针对畸变不变性的物体识别提出了动态链接模型(DLA),将物体用稀疏图形来描述,其顶点用局部能量谱的多尺度描述来标记,边则表示拓扑连接关系并用几何距离来标记,然后应用塑性图形匹配技术来寻找最近的已知图形。将人脸图像(I)(x,y)建模为可变形的3D网格表面(x,y,I(x,y)),从而将人脸匹配问题转化为可变形曲面的弹性匹配问题。利用有限元分析的方法进行曲面变形,并根据变形的情况判断两张图片是否为同一个人。这种方法的特点在于将空间(x,y)和灰度I(x,y)放在了一个3D空间中同时考虑,实验表明识别结果明显优于特征脸方法。
通过自动定位人脸的显著特征点将人脸编码为83个模型参数,并利用辨别分析的方法进行基于形状信息的人脸识别。弹性图匹配技术是一种基于几何特征和对灰度分布信息进行小波纹理分析相结合的识别算法,由于该算法较好的利用了人脸的结构和灰度分布信息,而且还具有自动精确定位面部特征点的功能,因而具有良好的识别效果,适应性强识别率较高。
5.神经网络方法(Neural Networks)
人工神经网络是一种非线性动力学系统,具有良好的自组织、自适应能力。目前神经网络方法在人脸识别中的研究方兴未艾。首先提取人脸的50个主元,然后用自相关神经网络将它映射到5维空间中,再用一个普通的多层感知器进行判别,对一些简单的测试图像效果较好;还提出了一种混合型神经网络来进行人脸识别,其中非监督神经网络用于特征提取,而监督神经网络用于分类。神经网络方法在人脸识别上的应用比起前述几类方法来有一定的优势,因为对人脸识别的许多规律或规则进行显性的描述是相当困难的,而神经网络方法则可以通过学习的过程获得对这些规律和规则的隐性表达,它的适应性更强,一般也比较容易实现。因此人工神经网络识别速度快,但识别率低。而神经网络方法通常需要将人脸作为一个一维向量输入,因此输入节点庞大,其识别重要的一个目标就是降维处理。
6.其它方法:
除了以上几种方法，人脸识别还有其它若干思路和方法，包括以下一些：
1)隐马尔可夫模型方法(Hidden Markov Model)
2)Gabor小波变换+图形匹配
(1)精确抽取面部特征点以及基于Gabor引擎的匹配算法,具有较好的准确性,能够排除由于面部姿态、表情、发型、眼镜、照明环境等带来的变化。
(2)Gabor滤波器将Gaussian网络函数限制为一个平面波的形状,并且在滤波器设计中有优先方位和频率的选择,表现为对线条边缘反应敏感。
(3)但该算法的识别速度很慢,只适合于录象资料的回放识别,对于现场的适应性很差。
3)人脸等密度线分析匹配方法
(1)多重模板匹配方法
该方法是在库中存贮若干标准面像模板或面像器官模板,在进行比对时,将采样面像所有象素与库中所有模板采用归一化相关量度量进行匹配。
(2)线性判别分析方法(Linear Discriminant Analysis,LDA)
(3)本征脸法
本征脸法将图像看做矩阵,计算本征值和对应的本征向量作为代数特征进行识别,具有无需提取眼嘴鼻等几何特征的优点,但在单样本时识别率不高,且在人脸模式数较大时计算量大
(4)特定人脸子空间(FSS)算法
该技术来源于但在本质上区别于传统的″特征脸″人脸识别方法。″特征脸″方法中所有人共有一个人脸子空间,而该方法则为每一个体人脸建立一个该个体对象所私有的人脸子空间,从而不但能够更好的描述不同个体人脸之间的差异性,而且最大可能地摈弃了对识别不利的类内差异性和噪声,因而比传统的″特征脸算法″具有更好的判别能力。另外,针对每个待识别个体只有单一训练样本的人脸识别问题,提出了一种基于单一样本生成多个训练样本的技术,从而使得需要多个训练样本的个体人脸子空间方法可以适用于单训练样本人脸识别问题。
(5)奇异值分解(singular value decomposition,简称SVD)
是一种有效的代数特征提取方法.由于奇异值特征在描述图像时是稳定的,且具有转置不变性、旋转不变性、位移不变性、镜像变换不变性等重要性质,因此奇异值特征可以作为图像的一种有效的代数特征描述。奇异值分解技术已经在图像数据压缩、信号处理和模式分析中得到了广泛应用。
参阅图5,对烟雾要素的检测,包括:
根据所述场景的每一要素的轮廓分割具有所述场景的图像,获取至少一个场景要素;提取每一所述场景要素的亮度、色度、对比度;检测每一所述场景要素与相邻的场景要素的透光度的差异;确定所述场景内是否具有烟雾要素。
具体地,首先需将获取的场景进行分割,例如,可采用灰度阈值的方式分割。在智能终端内设置一灰度阈值。灰度阈值确定后,提取场景内的每一像素点的灰度值。将灰度阈值与像素点的灰度值逐个进行比较,而且像素点分割可对各像素并行地进行。比较结果下,集成低于灰度阈值的灰度值对应的像素点,形成第一图像单元,集成大于或等于灰度阈值的灰度值对应的像素点,形成第二图像单元。上述依托灰度阈值分割的优点是计算简单、运算效率较高、速度快。在重视运算效率的应用场合(如用于硬件实现),它得到了广泛应用。在该实施例中,灰度阈值的选取包括全局阈值、自适应阈值、最佳阈值等等。其中,全局阈值是指整幅待投影图像使用同一个阈值做分割处理,适用于背景和前景有明显对比的图像。它是根据整幅待投影图像确定的。但是这种方法只考虑像素点本身的灰度值,一般不考虑空间特征,因而对噪声很敏感。常用的全局阈值选取包括利用图像灰度直方图的峰谷法、最小误差法、最大类间方差法、最大熵自动阈值法以及 其它一些方法。
另外,考虑到在许多情况下,物体和背景的对比度在图像中的各处不是一样的,这时很难用一个统一的灰度阈值将物体与背景分开。这时可以根据待投影图像的局部特征分别采用不同的灰度阈值进行分割。实际处理时,需要按照具体问题将待投影图像分成若干子区域分别选择灰度阈值,或者动态地根据一定的邻域范围选择每点处的灰度阈值,进行图像分割。这时的灰度阈值为自适应阈值。
此外,灰度阈值的选择需要根据具体问题来确定,一般通过实验来确定。对于给定的图像,可以通过分析直方图的方法确定最佳的灰度阈值,例如当直方图明显呈现双峰情况时,可以选择两个峰值的中点作为最佳的灰度阈值。
分割完成后,获取的每个场景要素提取内部的亮度、色度、对比度等,来确定每块分割区域与相邻区域的透光度是否有差异,比如,烟雾区域的亮度较相邻的场景要素的亮度偏低,对比度偏暗,则基本可确定区域内具有烟雾要素。
参阅图6-10,分别示出了符合本发明一优选实施例中基于智能终端的提醒系统的多个实施例及部分实施例中各模块的结构示意图。
一基本实施例中,提醒系统包括:
摄像头,始终处于运行状态(无论是否前台运行或后台运行),对其正对的画面进行捕捉(可选择不保存),仅获取当前场景。控制摄像头实时获取的好处在于,用户对摄像头的操作也可省略,完全由智能终端自行控制;
人物检测模块,与所述摄像头连接,检测所述场景内的人物,将对该人物具有的人像要素进行识别,例如,人脸部的五官、人体的四肢、人物穿戴的衣物等,仅当识别到上述的人像要素时才将执行后续步骤。人像要素的检测,可排除部分如摄像头正对的是照片、视频内的非活体人物、人物误识别的情况发生,减少在用户在未吸烟的情况下依然作提醒的误提醒几率;
烟雾检测模块,与所述摄像头连接,检测所述场景内的区域亮度及区域色度,判断所述场景内是否具有烟雾要素;
计算模块,与所述人物检测模块及烟雾检测模块连接,在确认场景内具有烟雾要素后,需要计算摄像头捕捉的场景内人像要素与烟雾要素的间距,来判断该烟雾要素是否直属于该人物要素,或是该烟雾要素的来源是否为该人物要素。只有当烟雾要素直属于该人物要素,或是该烟雾要素的来源为该人物要素时,才会向用户发出提醒;
控制模块,与所述计算模块连接,接收所述间距,并计算所得的人像要素与烟雾要素的间距将与该距离阈值比较,当人像要素与烟雾要素的间距小于距离阈值时,可判断为烟雾要素所代表的烟雾是由具有该人物要素的人物发出的,此时应当提醒。
一进一步的实施例中,提醒系统还包括:
数据库,设于所述智能终端内,预设有提示消息和/或提示语音,提示消息可以是例如“吸烟有害健康、吸烟减少寿命、二手烟更为致命”等弹幕形式的文字,也可以是在文字的基础上加以吸烟引起的对肺部影响的图片等,形成提示消息。同样地,也可以在 数据库内预设有提示语音,例如,通过录音的方式采集预先准备好的语音音频,或是通过下载的方式存储外部提示音频,此类音频可在后续被调取出,并在用户吸烟时播放,以更为直接的方式提示用户停止吸烟。
调用模块,与所述数据库连接,从数据库内调用已存储的提示消息和/或提示语音,并显示和/或播放而出。
记录模块,与所述调用模块连接,当发现智能终端的用户吸烟时,除了在显示界面上弹出“吸烟有害健康”的提示消息外,还将记录下该提示消息的显示时间,且配置为每次显示提示消息均将记录下,由此,通过大数据的采集,可获得用户的吸烟习惯,预先向用户发出提醒信息。
统计模块,与所述记录模块连接,基于所述调用时间,统计一预设周期内调用所述提示消息和/或提示语音的次数和频率并显示。例如,以天为周期的预设周期内,提醒信息显示的次数,以周为周期的预设周期内,在哪几天提醒信息显示的次数略高,以折线图或柱状图的方式统计等,记录在智能终端的记事本内,记事本内的统计内容,可定期地向用户展示,帮助用户了解到在某一时间段内,其吸烟行为实施的次数和频率,更好地了解吸烟对自身的危害。
对于控制模块而言,其内预设有一提醒时间阈值及提醒周期阈值,并包括了:
操作单元,控制所述提醒信息于所述提醒时间阈值内显示;获取单元,获取显示所述提醒信息的当前时间;所述操作单元还自所述当前时间为起始点后的所述提醒周期阈值的时间段内,控制停止显示所述提醒信息。例如,控制提醒信息显示的时间在10秒,则当显示提醒信息时,记录下初始显示的当前时间,以该当前时间为起始点,往后提醒时间阈值的时间内,显示提醒信息。提醒周期阈值用于控制显示提醒信息的频率,例如,可设置提醒周期阈值为10分钟,当显示有一提醒信息,并获取了显示提醒信息的当前时间后,控制在10分钟内不再显示下一提醒信息。
参阅图9-10,分别为人物检测模块和烟雾检测模块的结构示意图,其中,人物检测模块包括:定位单元,对所述人像要素进行定位,以标记所述人像元素的位置;识别单元,将所述人像元素的位置与预设于智能终端的人像元素位置阈值比较,确定所述人像要素的存在。定位单元和识别单元的实现和依托算法,可根据上文所述的方法加载后实现。而烟雾检测模块包括:分割单元,根据所述场景的每一要素的轮廓分割具有所述场景的图像,获取至少一个场景要素;提取单元,与所述分割单元连接,提取每一所述场景要素的亮度、色度、对比度;差异检测单元,检测每一所述场景要素与相邻的场景要素的透光度的差异,以确定所述场景内是否具有烟雾要素。分割单元分割完成后,提取单元获取的每个场景要素提取内部的亮度、色度、对比度等,来确定每块分割区域与相邻区域的透光度是否有差异,比如,烟雾区域的亮度较相邻的场景要素的亮度偏低,对比度偏暗,则差异检测单元基本可确定区域内具有烟雾要素。
应当注意的是,本发明的实施例有较佳的实施性,且并非对本发明作任何形式的限制,任何熟悉该领域的技术人员可能利用上述揭示的技术内容变更或修饰为等同的有效 实施例,但凡未脱离本发明技术方案的内容,依据本发明的技术实质对以上实施例所作的任何修改或等同变化及修饰,均仍属于本发明技术方案的范围内。

Claims (10)

  1. 一种基于智能终端的提醒方法,其特征在于,包括以下步骤:
    调用所述智能终端的摄像头,实时获取正对所述摄像头的场景;
    检测所述场景内的人物,并识别所述人物具有的人像要素;
    检测所述场景内的区域亮度及区域色度,判断所述场景内是否具有烟雾要素;
    当所述场景内具有烟雾要素时,计算所述人像要素和烟雾要素的间距;
    将所述间距与预设于所述智能终端内的一距离阈值比较,当所述间距小于所述距离阈值时,于所述智能终端的显示界面内显示一提醒信息。
  2. 如权利要求1所述的提醒方法,其特征在于,
    所述提醒方法还包括以下步骤:
    于所述智能终端的数据库内预设提示消息和/或提示语音;
    调用所述提示消息和/或提示语音作为所述提醒信息;
    当所述提示消息和/或提示语音被调用时,搜集调用所述提示消息和/或提示语音的调用时间;
    基于所述调用时间,统计一预设周期内调用所述提示消息和/或提示语音的次数和频率,并显示统计的次数和频率。
  3. 如权利要求1所述的提醒方法,其特征在于,
    将所述间距与预设于所述智能终端内的一距离阈值比较,当所述间距小于所述距离阈值时,于所述智能终端的显示界面内显示一提醒信息的步骤后还包括:
    于所述智能终端内预设一提醒时间阈值及提醒周期阈值;
    控制所述提醒信息于所述提醒时间阈值内显示;
    获取显示所述提醒信息的当前时间;
    控制自所述当前时间为起始点后的所述提醒周期阈值的时间段内,停止显示所述提醒信息。
  4. 如权利要求1所述的提醒方法,其特征在于,
    检测所述场景内的人物,并识别所述人物具有的人像要素的步骤包括:
    定位所述人像要素,以标记所述人像元素的位置;
    将所述人像元素的位置与预设于智能终端的人像元素位置阈值比较,确定所述人像要素的存在。
  5. 如权利要求1所述的提醒方法,其特征在于,
    检测所述场景内的区域亮度及区域色度,判断所述场景内是否具有烟雾要素的步骤包括:
    根据所述场景的每一要素的轮廓分割具有所述场景的图像,获取至少一个场景要素;
    提取每一所述场景要素的亮度、色度、对比度;
    检测每一所述场景要素与相邻的场景要素的透光度的差异;
    确定所述场景内是否具有烟雾要素。
  6. 一种基于智能终端的提醒系统,其特征在于,所述提醒系统包括:
    摄像头,用于实时获取正对的场景;
    人物检测模块,与所述摄像头连接,检测所述场景内的人物,并识别所述人物具有的人像要素;
    烟雾检测模块,与所述摄像头连接,检测所述场景内的区域亮度及区域色度,判断所述场景内是否具有烟雾要素;
    计算模块,与所述人物检测模块及烟雾检测模块连接,计算所述人像要素和烟雾要素的间距;
    控制模块,与所述计算模块连接,接收所述间距,并将所述间距与预设于所述智能终端内的一距离阈值比较,当所述间距小于所述距离阈值时,控制所述智能终端的显示界面内显示一提醒信息。
  7. 如权利要求6所述的提醒系统,其特征在于,所述提醒系统还包括:
    数据库,设于所述智能终端内,预设有提示消息和/或提示语音;
    调用模块,与所述数据库连接,调用所述提示消息和/或提示语音作为所述提醒信息;
    记录模块,与所述调用模块连接,当所述提示消息和/或提示语音被调用时,搜集调用所述提示消息和/或提示语音的调用时间;
    统计模块,与所述记录模块连接,基于所述调用时间,统计一预设周期内调用所述提示消息和/或提示语音的次数和频率并显示。
  8. 如权利要求6所述的提醒系统,其特征在于,
    所述控制模块内预设有一提醒时间阈值及提醒周期阈值,其包括:
    操作单元,控制所述提醒信息于所述提醒时间阈值内显示;
    获取单元,获取显示所述提醒信息的当前时间;
    所述操作单元还自所述当前时间为起始点后的所述提醒周期阈值的时间段内,控制停止显示所述提醒信息。
  9. 如权利要求6所述的提醒系统,其特征在于,所述人物检测模块包括:
    定位单元,对所述人像要素进行定位,以标记所述人像元素的位置;
    识别单元,将所述人像元素的位置与预设于智能终端的人像元素位置阈值比较,确定所述人像要素的存在。
  10. 如权利要求6所述的提醒系统,其特征在于,所述烟雾检测模块包括:
    分割单元,根据所述场景的每一要素的轮廓分割具有所述场景的图像,获取至少一个场景要素;
    提取单元,与所述分割单元连接,提取每一所述场景要素的亮度、色度、对比度;
    差异检测单元,检测每一所述场景要素与相邻的场景要素的透光度的差异,以确定所述场景内是否具有烟雾要素。
PCT/CN2017/101893 2017-09-15 2017-09-15 一种基于智能终端的提醒方法和提醒系统 WO2019051777A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/101893 WO2019051777A1 (zh) 2017-09-15 2017-09-15 一种基于智能终端的提醒方法和提醒系统
CN201780094925.9A CN111163650A (zh) 2017-09-15 2017-09-15 一种基于智能终端的提醒方法和提醒系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/101893 WO2019051777A1 (zh) 2017-09-15 2017-09-15 一种基于智能终端的提醒方法和提醒系统

Publications (1)

Publication Number Publication Date
WO2019051777A1 true WO2019051777A1 (zh) 2019-03-21

Family

ID=65723152

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/101893 WO2019051777A1 (zh) 2017-09-15 2017-09-15 一种基于智能终端的提醒方法和提醒系统

Country Status (2)

Country Link
CN (1) CN111163650A (zh)
WO (1) WO2019051777A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832346A (zh) * 2019-04-17 2020-10-27 北京嘀嘀无限科技发展有限公司 人脸识别方法、装置、电子设备及可读存储介质
CN113761980A (zh) * 2020-06-04 2021-12-07 杭州海康威视系统技术有限公司 吸烟检测方法、装置、电子设备及机器可读存储介质
CN114098170A (zh) * 2021-11-29 2022-03-01 深圳市汉清达科技有限公司 一种具有烟雾浓度调控能力的智能电子烟及其使用方法
CN114939211A (zh) * 2022-04-28 2022-08-26 中国人民解放军陆军军医大学第一附属医院 一种智能雾化系统
CN115630644A (zh) * 2022-11-09 2023-01-20 哈尔滨工业大学 基于lda主题模型的直播用户弹幕的话题挖掘方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10512286B2 (en) * 2017-10-19 2019-12-24 Rai Strategic Holdings, Inc. Colorimetric aerosol and gas detection for aerosol delivery device
CN113688725A (zh) * 2021-08-24 2021-11-23 李畅杰 盥洗用具无人维护平台

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100238036A1 (en) * 2009-03-20 2010-09-23 Silicon Laboratories Inc. Use of optical reflectance proximity detector for nuisance mitigation in smoke alarms
CN103000005A (zh) * 2012-09-29 2013-03-27 徐州东方传动机械有限公司 一种吸烟提醒装置
CN105120215A (zh) * 2015-08-19 2015-12-02 苏州市新瑞奇节电科技有限公司 基于图像分析的车间防抽烟监控方法
CN105394813A (zh) * 2015-10-17 2016-03-16 深圳市易特科信息技术有限公司 智能戒烟监控系统及方法
CN105976570A (zh) * 2016-05-20 2016-09-28 山东师范大学 一种基于车载视频监控的驾驶员吸烟行为实时监测方法

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5245185B2 (ja) * 2009-03-31 2013-07-24 サクサ株式会社 歩きたばこ監視装置
CN102013009A (zh) * 2010-11-15 2011-04-13 无锡中星微电子有限公司 烟雾图像识别方法及装置
CN103876290B (zh) * 2014-03-27 2016-12-07 沈洁 一种智能戒烟的方法及其装置和终端
CN104270602A (zh) * 2014-09-16 2015-01-07 深圳市九洲电器有限公司 一种健康管理方法及装置
CN104598934B (zh) * 2014-12-17 2018-09-18 安徽清新互联信息科技有限公司 一种驾驶员吸烟行为监控方法
CN204466891U (zh) * 2015-01-22 2015-07-15 深圳西红柿科技有限公司 一种可链接移动终端监控吸烟记录的烟盒
CN105844863A (zh) * 2016-04-25 2016-08-10 上海斐讯数据通信技术有限公司 一种烟雾提醒方法、系统及智能终端
CN106225012B (zh) * 2016-09-23 2019-02-26 成都九十度工业产品设计有限公司 一种辅助戒烟打火机及其控制方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100238036A1 (en) * 2009-03-20 2010-09-23 Silicon Laboratories Inc. Use of optical reflectance proximity detector for nuisance mitigation in smoke alarms
CN103000005A (zh) * 2012-09-29 2013-03-27 徐州东方传动机械有限公司 一种吸烟提醒装置
CN105120215A (zh) * 2015-08-19 2015-12-02 苏州市新瑞奇节电科技有限公司 基于图像分析的车间防抽烟监控方法
CN105394813A (zh) * 2015-10-17 2016-03-16 深圳市易特科信息技术有限公司 智能戒烟监控系统及方法
CN105976570A (zh) * 2016-05-20 2016-09-28 山东师范大学 一种基于车载视频监控的驾驶员吸烟行为实时监测方法

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832346A (zh) * 2019-04-17 2020-10-27 Beijing Didi Infinity Technology and Development Co., Ltd. Face recognition method and apparatus, electronic device, and readable storage medium
CN113761980A (zh) * 2020-06-04 2021-12-07 Hangzhou Hikvision System Technology Co., Ltd. Smoking detection method and apparatus, electronic device, and machine-readable storage medium
CN113761980B (zh) * 2020-06-04 2024-03-01 Hangzhou Hikvision System Technology Co., Ltd. Smoking detection method and apparatus, electronic device, and machine-readable storage medium
CN114098170A (zh) * 2021-11-29 2022-03-01 Shenzhen Hanqingda Technology Co., Ltd. Smart electronic cigarette with smoke concentration regulation capability and method of use thereof
CN114098170B (zh) * 2021-11-29 2024-04-12 Shenzhen Hanqingda Technology Co., Ltd. Smart electronic cigarette with smoke concentration regulation capability and method of use thereof
CN114939211A (zh) * 2022-04-28 2022-08-26 First Affiliated Hospital of Army Medical University of the Chinese PLA Intelligent atomization system
CN115630644A (zh) * 2022-11-09 2023-01-20 Harbin Institute of Technology Topic mining method for live-streaming user bullet-screen comments based on an LDA topic model

Also Published As

Publication number Publication date
CN111163650A (zh) 2020-05-15

Similar Documents

Publication Publication Date Title
WO2019051777A1 (zh) Reminder method and reminder system based on a smart terminal
Huang et al. Robust face detection using Gabor filter features
WO2019051665A1 (zh) Startup control method and startup control system for a smart terminal
Agarwal et al. Face recognition using principle component analysis, eigenface and neural network
Ma et al. Local intensity variation analysis for iris recognition
KR101185525B1 (ko) Automatic biometric identification based on support vector machines and face recognition
Gunawan et al. Development of face recognition on raspberry pi for security enhancement of smart home system
CN111597955A (zh) Smart home control method and device based on deep-learning facial expression and emotion recognition
US11257226B1 (en) Low-overhead motion classification
Zhang et al. A survey on face anti-spoofing algorithms
Zou et al. Face Recognition Using Active Near-IR Illumination.
Mady et al. Efficient real time attendance system based on face detection case study “MEDIU staff”
Yingxin et al. A robust hand gesture recognition method via convolutional neural network
WO2019090503A1 (zh) Image capturing method and image capturing system for a smart terminal
US11423762B1 (en) Providing device power-level notifications
Heo et al. Performance evaluation of face recognition using visual and thermal imagery with advanced correlation filters
Lin et al. A gender classification scheme based on multi-region feature extraction and information fusion for unconstrained images
CN117095471A (zh) Face forgery tracing method based on multi-scale features
Balasuriya et al. Frontal view human face detection and recognition
Rajalakshmi et al. A review on classifiers used in face recognition methods under pose and illumination variation
Guo et al. Human face recognition using a spatially weighted Hausdorff distance
Gupta et al. Unsupervised biometric anti-spoofing using generative adversarial networks
KR100711223B1 (ko) Face recognition method using Zernike/linear discriminant analysis (LDA), and recording medium recording the method
US11163097B1 (en) Detection and correction of optical filter position in a camera device
Peng et al. A software framework for PCA-based face recognition

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 17925118

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 17925118

Country of ref document: EP

Kind code of ref document: A1