Smoking monitoring system based on posture recognition
Technical Field
The invention belongs to the technical field of image monitoring, and relates to a smoking monitoring system based on posture recognition.
Background
At present, smoking is regarded as a killer threatening human health and draws increasing attention from all sectors of society; according to statistics, there are about 320 million smokers in China.
Smoking seriously harms the safety of smokers and of the people around them; in particular, smoking in hazardous environments such as gas stations and explosive storage sites can easily cause dangerous accidents. At present, however, alarms are generally triggered by smoke concentration: if the concentration stays below a set parameter value, no alarm is raised, so safety detection is poor and the residual danger is high. To improve the monitoring of smokers, the present invention detects the smoking state of a smoker by means of posture recognition.
Disclosure of Invention
The invention aims to provide a smoking monitoring system based on posture recognition. Through an image acquisition module, an acquisition and division module and a posture feature simulation module, the system extracts and recognizes the feature points of a smoker in the collected images; through a screening and positioning module, a posture correction module, a posture recognition and comparison module and a posture evaluation processing module, it computes the distances between feature points in the smoking posture, judges the smoking frequency per unit time from those distances, and further analyzes the smoker's smoking addiction coefficient, thereby solving the problem of poor smoking monitoring of smokers in the prior art.
The purpose of the invention can be realized by the following technical scheme:
the smoking monitoring system based on posture recognition comprises an image acquisition module, an acquisition and division module, a posture feature simulation module, a screening and positioning module, a posture correction module, a posture recognition and comparison module, a human body posture model database, a posture evaluation processing module and a display terminal;
the image acquisition module is connected with the acquisition and division module, the posture characteristic simulation module is respectively connected with the acquisition and division module, the screening and positioning module and the posture recognition and comparison module, the posture correction module is respectively connected with the screening and positioning module and the human body posture model database, the posture evaluation processing module is respectively connected with the posture recognition and comparison module, the posture correction module, the human body posture model database and the display terminal, and the posture recognition and comparison module is connected with the human body posture model database.
The image acquisition module is used for acquiring image information of personnel in real time, preprocessing the acquired images, screening out the images whose definition is greater than a preset definition threshold, and sending those screened images to the acquisition and division module;
the acquisition and division module is used for receiving the images sent by the image acquisition module, dividing each received image into a number of sub-images of equal size, numbering the sub-images 1, 2, ..., i, ..., n, and sending the divided and numbered sub-images to the posture feature simulation module in sequence;
the posture feature simulation module is used for receiving the sub-image information sent by the acquisition and division module, extracting posture feature points from each received sub-image, computing simulated position coordinates for the extracted feature points in each sub-image, sending those simulated position coordinates to the screening and positioning module, and sending the extracted feature points of each sub-image to the posture recognition and comparison module;
the screening and positioning module is used for receiving the simulated position coordinates of the feature points sent by the posture feature simulation module, computing the distance between each pair of feature points from those coordinates, determining the distance between the finger joint feature point and the lip feature point, and sending the distances between the feature points to the posture correction module;
the posture correction module is used for receiving the distances between feature points sent by the screening and positioning module, screening out the feature points whose mutual distances have changed, counting the number of times the distance between any two feature points changes, computing the distance between the finger joint feature point and the lip feature point, and comparing that distance with a set standard distance threshold; it judges whether the absolute value of the difference between the finger-joint-to-lip distance and the standard distance threshold lies within a preset interval threshold range, and if so, corrects the finger-joint-to-lip distance in the sub-image according to the standard distance between the finger joint feature point and the lip feature point during smoking stored in the human body posture model database;
the human body posture model database is used for storing each posture feature point of a smoker in the smoking state and the position corresponding to each posture feature point, together with the standard distance threshold and the interval threshold between the finger joint feature point and the lip feature point; it also stores the number of times per unit time that the finger-joint-to-lip distance falls below the standard distance threshold, where different counts correspond to different smoking frequencies and each smoking frequency range has a corresponding smoking addiction coefficient;
the gesture recognition and comparison module is used for receiving the feature points of the sub-images sent by the gesture feature simulation module, comparing the feature points in the sub-images with the gesture feature points in the human body gesture model database one by one, judging whether the feature points in the sub-images are gesture feature points or not, and extracting the gesture feature points in the sub-images.
Further, the posture evaluation processing module is used for receiving the compared posture feature points sent by the posture recognition and comparison module and the corrected finger-joint-to-lip distance sent by the posture correction module, and judging whether the received posture feature points are finger joint feature points and lip feature points; if they are, the corrected distance between the finger joint feature point and the lip feature point is compared with the standard distance threshold, the number of times this distance falls below the standard distance threshold is accumulated, and the accumulated count is compared one by one with the smoking addiction coefficients stored for different counts in the human body posture model database to obtain the smoker's addiction coefficient.
Further, the system also comprises a display terminal, which is used for receiving and displaying the smoking count per unit time and the smoking addiction coefficient of the smoker sent by the posture evaluation processing module.
Further, the simulated position coordinates of a feature point consist of the number of the sub-image in which the feature point is located and the distances from the feature point to the four sides of that sub-image.
The invention has the beneficial effects that:
according to the smoking monitoring system based on posture recognition, the combination of the image acquisition module, the acquisition and division module and the posture feature simulation module extracts and recognizes the feature points of a smoker in the acquired images. The distances between feature points in the smoking posture are computed through the screening and positioning module, the posture correction module, the posture recognition and comparison module and the posture evaluation processing module; the smoking frequency per unit time is judged from those distances and the smoker's smoking addiction coefficient is analyzed. The smoking frequency and addiction state of the smoker can thus be known conveniently and intuitively, and the system can monitor the smoking state of a smoker from the smoking posture itself.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic diagram of a smoking monitoring system based on posture recognition according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the smoking monitoring system based on posture recognition includes an image acquisition module, an acquisition and division module, a posture feature simulation module, a screening and positioning module, a posture correction module, a posture recognition and comparison module, a human body posture model database, a posture evaluation processing module and a display terminal;
the image acquisition module is connected with the acquisition and division module; the posture feature simulation module is respectively connected with the acquisition and division module, the screening and positioning module and the posture recognition and comparison module; the posture correction module is respectively connected with the screening and positioning module and the human body posture model database; the posture evaluation processing module is respectively connected with the posture recognition and comparison module, the posture correction module, the human body posture model database and the display terminal; and the posture recognition and comparison module is connected with the human body posture model database.
The image acquisition module comprises a plurality of high-definition cameras arranged indoors and is used for acquiring image information of personnel in real time; by preprocessing the acquired images, it screens out the images whose definition is greater than a preset definition threshold so as to ensure image clarity, and sends those screened images to the acquisition and division module;
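The definition screening step above can be sketched as follows. The variance-of-Laplacian score used here is one common sharpness measure; the function names and the threshold value are illustrative assumptions, not part of the claimed system.

```python
import numpy as np

def definition_score(image: np.ndarray) -> float:
    """Estimate image definition (sharpness) as the variance of a
    discrete 4-neighbour Laplacian; blurry images score low."""
    img = image.astype(np.float64)
    # Laplacian via array shifts over the interior pixels (no external deps)
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def screen_images(images, threshold):
    """Keep only the images whose definition score exceeds the preset threshold."""
    return [im for im in images if definition_score(im) > threshold]
```

In practice the threshold would be calibrated against the cameras actually deployed; a uniform (featureless) frame scores exactly zero under this measure.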
the acquisition and division module is used for receiving the images sent by the image acquisition module, dividing each received image into a number of sub-images of equal size, numbering the sub-images 1, 2, ..., i, ..., n, and sending the divided and numbered sub-images to the posture feature simulation module in sequence;
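A minimal sketch of this division step, assuming a row-major grid layout (the grid shape and function name are illustrative assumptions):

```python
import numpy as np

def divide_image(image: np.ndarray, rows: int, cols: int):
    """Divide an image into rows*cols equally sized sub-images,
    numbered 1, 2, ..., n in row-major order."""
    h, w = image.shape[:2]
    if h % rows or w % cols:
        raise ValueError("image dimensions must divide evenly into the grid")
    th, tw = h // rows, w // cols
    tiles = {}
    number = 1
    for r in range(rows):
        for c in range(cols):
            tiles[number] = image[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            number += 1
    return tiles  # {sub-image number: sub-image array}
```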
the gesture feature simulation module is used for receiving the information of each sub-image sent by the acquisition and division module, extracting gesture feature points of each received sub-image, performing simulated position coordinates on the feature points in each extracted sub-image, sending the simulated position coordinates of the feature points in each extracted sub-image to the screening and positioning module, and sending the feature points in each extracted sub-image to the gesture recognition and comparison module, wherein the simulated position coordinates of the feature points are the number of the current feature point in the sub-image and the distance between the feature point and the four sides of the sub-image with the number, and the gesture feature points comprise wrist joints, elbow joints, finger joints, lips and the like;
the screening and positioning module is used for receiving the simulated position coordinates of the feature points sent by the posture feature simulation module, computing the distance between each pair of feature points from those coordinates, determining the distance between the finger joint feature point and the lip feature point, and sending the distances between the feature points to the posture correction module;
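Computing a distance between two feature points that may lie in different sub-images requires mapping each simulated coordinate back to a global position first. The sketch below assumes row-major sub-image numbering and a coordinate dictionary with `sub_image`, `left` and `top` entries (illustrative assumptions):

```python
import math

def to_global(coord, cols, tile_height, tile_width):
    """Convert a simulated coordinate (sub-image number + side distances)
    back to global pixel coordinates, assuming row-major numbering."""
    idx = coord["sub_image"] - 1
    row, col = divmod(idx, cols)
    return (col * tile_width + coord["left"], row * tile_height + coord["top"])

def feature_distance(c1, c2, cols, tile_height, tile_width):
    """Euclidean distance between two feature points, e.g. a finger joint
    point and a lip point that may lie in different sub-images."""
    x1, y1 = to_global(c1, cols, tile_height, tile_width)
    x2, y2 = to_global(c2, cols, tile_height, tile_width)
    return math.hypot(x2 - x1, y2 - y1)
```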
the posture correction module is used for receiving the distances between feature points sent by the screening and positioning module, screening out the feature points whose mutual distances have changed, counting the number of times the distance between any two feature points changes, computing the distance between the finger joint feature point and the lip feature point, and comparing that distance with a set standard distance threshold; it judges whether the absolute value of the difference between the finger-joint-to-lip distance and the standard distance threshold lies within a preset interval threshold range, and if so, corrects the finger-joint-to-lip distance in the sub-image according to the standard distance between the finger joint feature point and the lip feature point during smoking stored in the human body posture model database, and sends the corrected finger-joint-to-lip distance to the posture evaluation processing module;
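The correction rule reduces to a simple snap-to-standard check. This is a minimal sketch of that rule, with the standard distance and interval threshold as assumed parameters supplied from the human body posture model database:

```python
def correct_distance(measured, standard, interval):
    """If |measured - standard| falls within the preset interval threshold,
    snap the measured finger-joint-to-lip distance to the standard
    distance stored in the human body posture model database."""
    if abs(measured - standard) <= interval:
        return standard  # corrected to the database's standard distance
    return measured      # outside the interval: leave unchanged
```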
the human body posture model database is used for storing each posture feature point of a smoker in the smoking state and the position corresponding to each posture feature point, together with the standard distance threshold and the interval threshold between the finger joint feature point and the lip feature point; it also stores the number of times per unit time that the finger-joint-to-lip distance falls below the standard distance threshold, where different counts correspond to different smoking frequencies and each smoking frequency range has a corresponding smoking addiction coefficient;
the posture recognition and comparison module is used for receiving the feature points of each sub-image sent by the posture feature simulation module, comparing the feature points in the sub-images one by one with the posture feature points in the human body posture model database, judging whether each feature point in a sub-image is a posture feature point, extracting the posture feature points in the sub-images and sending them to the posture evaluation processing module;
the posture evaluation processing module is used for receiving the compared posture feature points sent by the posture recognition and comparison module and the corrected finger-joint-to-lip distance sent by the posture correction module, and judging whether the received posture feature points are finger joint feature points and lip feature points; if they are, it counts the number of times that the corrected distance between the finger joint feature point and the lip feature point falls below the standard distance threshold, compares the accumulated count one by one with the smoking addiction coefficients stored for different counts in the human body posture model database, and sends the smoking count per unit time and the smoking addiction coefficient of the smoker to the display terminal.
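The counting-and-lookup step above can be sketched as follows. One assumption beyond the text: each continuous run of distances below the threshold is counted as a single event (one hand-to-lips approach), so the count is edge-triggered rather than per frame. The band table format is likewise an illustrative stand-in for the database contents:

```python
def addiction_coefficient(distances, standard, bands):
    """Count how many times per unit time the finger-joint-to-lip distance
    drops below the standard threshold (one count per approach), then look
    up the smoking addiction coefficient for that count.
    `bands` maps inclusive (low, high) count ranges to coefficients."""
    puffs = 0
    below = False
    for d in distances:
        if d < standard and not below:
            puffs += 1  # a new approach of the hand to the lips
        below = d < standard
    for (low, high), coeff in bands.items():
        if low <= puffs <= high:
            return puffs, coeff
    return puffs, None  # count outside all stored ranges
```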
The display terminal is used for receiving and displaying the smoking count per unit time and the smoking addiction coefficient of the smoker sent by the posture evaluation processing module, so that the smoking frequency and addiction state of the smoker can be known conveniently and intuitively.
The foregoing is merely exemplary and illustrative of the principles of the present invention and various modifications, additions and substitutions of the specific embodiments described herein may be made by those skilled in the art without departing from the principles of the present invention or exceeding the scope of the claims set forth herein.