CN116110129A - Intelligent evaluation method, device, equipment and storage medium for dining quality of infants - Google Patents
- Publication number
- CN116110129A (Application CN202310196586.2A)
- Authority
- CN
- China
- Prior art keywords
- infant
- dining
- infants
- score
- position information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/18—Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/778—Active pattern-learning, e.g. online learning of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Pure & Applied Mathematics (AREA)
- Mathematical Optimization (AREA)
- Computational Mathematics (AREA)
- Mathematical Analysis (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Human Computer Interaction (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computational Linguistics (AREA)
- Operations Research (AREA)
- Probability & Statistics with Applications (AREA)
- Psychiatry (AREA)
- Algebra (AREA)
- Social Psychology (AREA)
- General Engineering & Computer Science (AREA)
- Medical Treatment And Welfare Office Work (AREA)
Abstract
The invention relates to the field of intelligent nursing and provides an intelligent evaluation method, device, equipment and storage medium for the dining quality of infants, solving the problem that the prior art cannot effectively evaluate infant dining quality. The method comprises the following steps: acquiring a real-time video stream in an infant care scene and decomposing the video stream into multi-frame images; presetting a target detection model, inputting the multi-frame images into the target detection model, identifying the hand position information of a person and the mouth position information of an infant in the images, and identifying whether the infant is dining according to the hand position information of the person and the mouth position information of the infant; and, when the infant is identified as dining, analyzing the infant's dining behavior according to preset behavior analysis rules and evaluating the dining quality. The invention assists a user in guiding an infant's dining, improves the standardization of infant dining, and improves the nursing experience of parents.
Description
Technical Field
The invention relates to the field of intelligent nursing, in particular to an intelligent evaluation method, device and equipment for dining quality of infants and a storage medium.
Background
With the development and popularization of intelligent terminals, intelligent nursing equipment has become increasingly widespread and is gradually becoming part of everyday life.
In the prior art, infant dining chairs mainly provide physical convenience for dining, and the analysis of an infant's dining behavior comes mainly from the parents' manual observation. Some intelligent infant dining chairs are equipped with a camera that records the infant's dining habits, but the dining behavior is analyzed only simply and crudely; the analysis results are far from complete, parents cannot be guided to adjust and correct the infant's dining behavior based on them, and the nursing experience parents actually obtain is poor.
Therefore, how to effectively evaluate the quality of infant dining behavior is a problem to be solved.
Disclosure of Invention
In view of the above, the embodiment of the invention provides an intelligent assessment method, device, equipment and storage medium for dining quality of infants, which are used for solving the problem that the dining quality of infants cannot be assessed effectively in the prior art.
In a first aspect, an embodiment of the present invention provides an intelligent assessment method for dining quality of an infant, where the method includes:
S1: acquiring a real-time video stream in an infant care scene, and decomposing the video stream into multi-frame images;
S2: a target detection model is preset, the multi-frame images are input into the target detection model, the hand position information of a person and the mouth position information of an infant in the multi-frame images are identified, and whether the infant is dining or not is identified according to the hand position information of the person and the mouth position information of the infant;
S3: when the infant is identified to be dining, the dining behaviors of the infant are analyzed according to the preset behavior analysis rules, and the dining quality of the infant is evaluated.
Preferably, the S3 includes:
S31: when the infant is identified to be dining, extracting each frame of target image corresponding to the infant dining;
S32: counting to obtain infant feeding times according to the hand position information of the person and the infant mouth position information;
S33: and comprehensively analyzing each target image according to a preset dining quality evaluation rule and combining the infant eating times to evaluate the dining quality of the infant.
Preferably, the S33 includes:
S331: carrying out autonomous analysis on the dining behaviors of the infants in the target image, and synthesizing autonomous analysis results and the infant eating times to obtain autonomous scores;
S332: carrying out pleasure degree analysis on the dining behaviors of the infants in the target image, and synthesizing pleasure degree analysis results and the infant eating times to obtain pleasure degree scores;
S333: performing compactness analysis on the dining behaviors of the infants in the target image, and synthesizing a compactness analysis result and the infant feeding times to obtain a compactness score;
S334: and carrying out weighted calculation on the autonomy score, the pleasure score and the compactness score to obtain a comprehensive score, and giving an evaluation result according to the comprehensive score.
Preferably, the S331 includes:
S3311: presetting a first classification model, and classifying hands of a person into infant hands and non-infant hands by using the first classification model;
S3312: acquiring the occurrence times of the hands of the infants, and counting the independent feeding times of the infants according to the occurrence times of the hands of the infants;
S3313: and calculating the autonomy score by integrating the infant feeding times and the infant autonomy feeding times.
Preferably, the S332 includes:
S3321: performing face detection on the target image, and identifying infant face information in the target image;
S3322: presetting a second classification model, inputting the infant facial information into the second classification model, and identifying infant expression information;
S3323: according to the expression information, respectively counting the smile times and crying times of the infants;
S3324: and calculating the pleasure score by integrating the smile times, crying times and feeding times of the infants.
Preferably, the S333 includes:
S3331: acquiring a time node of infant feeding;
S3332: calculating average interval time and time sequence according to the time node;
S3333: and calculating a compactness score according to the average interval time, the time sequence and the infant feeding times.
Preferably, the S334 includes:
S3341: obtaining the age of an infant, and presetting a score threshold value and an age threshold value;
S3342: weighting and calculating the autonomy score, the pleasure score and the compactness score to obtain a comprehensive score;
S3343: and comparing the comprehensive score with a score threshold value, and comparing the infant age with an age threshold value to obtain an evaluation result of the infant dining quality.
In a second aspect, an embodiment of the present invention further provides an intelligent dining quality assessment device for infants, where the device includes:
the image acquisition module is used for acquiring a real-time video stream in an infant care scene and decomposing the video stream into multi-frame images;
the behavior judging module is used for presetting a target detection model, inputting the multi-frame images into the target detection model, identifying hand position information of a person and mouth position information of an infant in the multi-frame images, and identifying whether the infant is dining or not according to the hand position information of the person and the mouth position information of the infant;
the behavior evaluation module is used for analyzing the dining behaviors of the infants according to preset behavior analysis rules when the infants are identified to be dining, and evaluating the dining quality of the infants.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: at least one processor, at least one memory and computer program instructions stored in the memory, which when executed by the processor, implement the method as in the first aspect of the embodiments described above.
In a fourth aspect, embodiments of the present invention also provide a storage medium having stored thereon computer program instructions which, when executed by a processor, implement a method as in the first aspect of the embodiments described above.
In summary, the beneficial effects of the invention are as follows:
According to the intelligent assessment method, device, equipment and storage medium for infant dining quality, a real-time video stream in the infant nursing scene is acquired and decomposed into multi-frame images; a target detection model is preset, the multi-frame images are input into the target detection model, the hand position information of a person and the mouth position information of an infant in the multi-frame images are identified, and whether the infant is dining is identified according to the hand position information of the person and the mouth position information of the infant; when the infant is identified as dining, the dining behavior of the infant is analyzed according to the preset behavior analysis rules and the dining quality of the infant is evaluated. When infant dining behavior occurs, it is comprehensively analyzed and evaluated through the preset behavior analysis rules to obtain a detailed evaluation result, which assists the user in guiding the infant's dining, improves the standardization of infant dining, and also improves the parents' nursing experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the embodiments are briefly described below; a person skilled in the art can obtain other drawings from these drawings without inventive effort, and such drawings also fall within the scope of the present invention.
FIG. 1 is a flow chart of an intelligent evaluation method for dining quality of infants in embodiment 1 of the invention;
FIG. 2 is a schematic view of the infant on a dining chair in embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of determining that an infant is dining in embodiment 1 of the present invention;
FIG. 4 is a schematic flow chart of the evaluation of dining behavior of infants in embodiment 1 of the present invention;
FIG. 5 is a flow chart of the weighted calculation of the score in embodiment 1 of the present invention;
FIG. 6 is a schematic flow chart of determining a meal autonomy score in embodiment 1 of the invention;
FIG. 7 is a schematic diagram of determining that an infant is eating autonomously in embodiment 1 of the present invention;
FIG. 8 is a flow chart of determining a dining pleasure score in embodiment 1 of the present invention;
FIG. 9 is a schematic flow chart of determining a meal compactness score in embodiment 1 of the present invention;
FIG. 10 is a flow chart showing the result of comprehensive evaluation in embodiment 1 of the present invention;
FIG. 11 is a block diagram showing the construction of an intelligent assessment device for dining quality of infants in embodiment 2 of the present invention;
fig. 12 is a schematic structural diagram of an electronic device in embodiment 3 of the present invention;
the labels in the figures are as follows:
1 - minimum circumscribed rectangular frame of the infant's figure; 2 - minimum circumscribed rectangular frame of the dining chair; 3 - minimum circumscribed rectangular frame of the hand; 4 - minimum circumscribed rectangular frame of the infant's mouth; 5 - minimum circumscribed rectangular frame of the infant's hand.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below. In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely intended to illustrate the invention and are not intended to limit it. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the invention by showing examples of the invention.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
Example 1
Referring to fig. 1, embodiment 1 of the present invention provides an intelligent evaluation method for dining quality of infants, which includes:
S1: acquiring a real-time video stream in an infant care scene, and decomposing the video stream into multi-frame images;
Specifically, a real-time video stream in the infant nursing scene is acquired, where the video stream comprises color video captured in the daytime and infrared video captured at night, enabling twenty-four-hour, day-and-night nursing of the infant; the video stream is then decomposed into multi-frame images.
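As a minimal sketch of this frame-decomposition step (not part of the patent text; the OpenCV wiring, function name and subsampling interval are illustrative assumptions):

```python
import cv2

def decompose_stream(source, every_n=5):
    """Decompose a video stream into multi-frame images.

    `source` may be a camera index, RTSP URL or file path; the value and
    the subsampling interval `every_n` are illustrative assumptions.
    """
    cap = cv2.VideoCapture(source)
    frames = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:  # keep every n-th frame to limit downstream work
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```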
S2: a target detection model is preset, the multi-frame images are input into the target detection model, the hand position information of a person and the mouth position information of an infant in the multi-frame images are identified, and whether the infant is dining or not is identified according to the hand position information of the person and the mouth position information of the infant;
Specifically, the multi-frame images obtained by decomposing the real-time video stream are first judged: images in which an infant is judged to be dining are passed to the next step for further evaluation of the dining behavior, while images without infant dining are discarded without processing. A real-time video stream may decompose into a large number of images, and analyzing and evaluating all of them would waste resources and lower efficiency; discarding the images without infant dining reduces unnecessary work and improves the efficiency of the subsequent quality evaluation.
A large number of images of infant nursing scenes are collected in advance, the infant figures and dining chairs in the images are labeled, and a deep learning algorithm is used for training to output a first detection model, based on yolov5s, that can identify the infant figure position and the dining chair position. The multi-frame images are input into the first detection model to obtain the infant figure position Pos1(x1, y1, w1, h1) and the dining chair position Pos3(x3, y3, w3, h3), where x1 and x3 respectively represent the abscissa of the center point of the minimum circumscribed rectangular frame of the infant figure (reference number 1) and of the dining chair (reference number 2); y1 and y3 respectively represent the ordinates of those center points; w1 and w3 respectively represent the widths of the two frames; and h1 and h3 respectively represent their heights. Pos1 and Pos3 are then judged as follows: if x1 > x3 and x1 < x3 + w3 and y1 > y3 are all satisfied, the infant is considered to be sitting on the dining chair; otherwise, the infant is considered not to be sitting on the dining chair. Judging whether the infant is sitting on the dining chair before judging whether the infant is dining avoids falsely starting the dining quality assessment equipment when no infant is seated, effectively saving resources and reducing unnecessary work.
When an infant is sitting on the dining chair, the hands of persons and the infant mouths in the large number of collected infant nursing scene images are labeled, and a deep learning algorithm is used for training to output a second detection model, based on yolov5s, that can identify the hand position and the infant mouth position. The multi-frame images decomposed from the real-time video stream are input into the second detection model and, referring to fig. 3, the hand position Pos2(x2, y2, w2, h2) and the infant mouth position Pos4(x4, y4, w4, h4) are identified, where x2 and x4 respectively represent the abscissa of the center point of the minimum circumscribed rectangular frame of the hand (reference number 3) and of the infant mouth (reference number 4); y2 and y4 respectively represent the ordinates of those center points; w2 and w4 respectively represent the widths of the two frames; and h2 and h4 respectively represent their heights. Pos2 and Pos4 are then further judged. For example, setting the transverse distance threshold to 5 and the longitudinal distance threshold to 4: if the absolute value of the difference between x2 and x4 is less than 5 and the absolute value of the difference between y2 and y4 is less than 4, the infant is considered to be dining on the dining chair; if either absolute difference is not less than its threshold, the infant is considered to be on the dining chair but not dining. This further judgment of the hand and infant mouth positions determines more accurately whether the infant is dining, avoids mistaking a seated but non-dining infant for a dining one, and improves the effectiveness of the subsequent dining quality assessment.
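As a minimal illustrative sketch (not part of the patent disclosure), the two geometric judgments above can be written as follows; the box fields follow the Pos definitions in the text, the thresholds 5 and 4 follow the example values, and all names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Minimum circumscribed rectangle Pos(x, y, w, h): center (x, y), width w, height h."""
    x: float
    y: float
    w: float
    h: float

def is_sitting(infant: Box, chair: Box) -> bool:
    # Seating test as given in the text (coordinates are box centers):
    # x1 > x3 and x1 < x3 + w3 and y1 > y3.
    return chair.x < infant.x < chair.x + chair.w and infant.y > chair.y

def is_feeding(hand: Box, mouth: Box, tx: float = 5, ty: float = 4) -> bool:
    # Dining test: hand center within the transverse/longitudinal thresholds
    # of the infant mouth center (|x2 - x4| < 5 and |y2 - y4| < 4).
    return abs(hand.x - mouth.x) < tx and abs(hand.y - mouth.y) < ty
```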
S3: when the infant is identified to be dining, the dining behaviors of the infant are evaluated according to the preset behavior analysis rules, and the evaluation result is output.
Specifically, when the infant is identified as dining, the infant's dining behavior is comprehensively and effectively evaluated according to the preset behavior analysis rules, and an evaluation result that the user can consult is output. In this process, comprehensively and effectively evaluating the dining behavior through the preset behavior analysis rules helps parents correct improper infant dining habits and improves the infant's dining health while the infant is being nursed.
In one embodiment, referring to fig. 4, the step S3 includes:
S31: when the infant is identified to be dining, extracting each frame of target image corresponding to the infant dining;
Specifically, among the multi-frame images decomposed from the real-time video stream and input to the model, the infant is identified as dining in some images and not in others; each frame image in which the infant is identified as dining is extracted as a target image.
S32: counting to obtain infant feeding times according to the hand position information of the person and the infant mouth position information;
Specifically, the infant feeding count is obtained by statistics over each frame of target image: an infant feeding count N1 is preset with an initial value of 0; if the absolute value of the difference between x2 and x4 is less than 5 and the absolute value of the difference between y2 and y4 is less than 4, N1 is increased by 1; the final N1 obtained by statistics is the infant feeding count.
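A short sketch of this count, under the same example thresholds; `detect_hand_and_mouth` is an assumed callable wrapping the second detection model, not the patent's API:

```python
def count_feedings(frames, detect_hand_and_mouth, tx=5, ty=4):
    """Count N1 over the target frames.

    Per frame, `detect_hand_and_mouth` is assumed to return
    ((x2, y2), (x4, y4)) for the hand and infant mouth centers,
    or None when either is missing.
    """
    n1 = 0
    for frame in frames:
        det = detect_hand_and_mouth(frame)
        if det is None:
            continue
        (x2, y2), (x4, y4) = det
        if abs(x2 - x4) < tx and abs(y2 - y4) < ty:
            n1 += 1
    return n1
```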
S33: and comprehensively analyzing each target image according to a preset dining quality evaluation rule and combining the infant eating times to evaluate the dining quality of the infant.
Specifically, only the target images in which infant dining behavior occurs are extracted for behavior analysis, and the remaining images, in which no infant dining behavior is recognized, are not processed; this effectively improves the efficiency of the dining behavior quality evaluation, shortens the workflow and saves time.
In one embodiment, referring to fig. 5, the step S33 includes:
S331: carrying out autonomous analysis on the dining behaviors of the infants in the target image, and synthesizing autonomous analysis results and the infant eating times to obtain autonomous scores;
in one embodiment, referring to fig. 6, the step S331 includes:
S3311: presetting a first classification model, and classifying hands of a person into infant hands and non-infant hands by using the first classification model;
Specifically, in the large number of images collected from infant nursing scenes, the infant hands and non-infant hands in the images are labeled in advance, a deep learning algorithm is used for training, and a first classification model, based on Resnet, that classifies hands in images into infant hands and non-infant hands is output; the hands of persons are then classified into infant hands and non-infant hands using the first classification model.
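A hedged sketch of how such a classifier might be applied at inference time with torchvision; the ResNet-18 backbone, checkpoint path, class order and the omission of input normalization are all illustrative assumptions:

```python
import torch
from torchvision import models, transforms

# Assumed: a ResNet-18 fine-tuned to two classes {0: infant hand, 1: non-infant hand};
# the checkpoint path is hypothetical.
model = models.resnet18(num_classes=2)
model.load_state_dict(torch.load("hand_classifier.pt", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),       # expects an RGB uint8 crop of shape (H, W, 3)
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def is_infant_hand(crop) -> bool:
    """Classify a hand crop; True if predicted as an infant hand."""
    with torch.no_grad():
        logits = model(preprocess(crop).unsqueeze(0))
    return logits.argmax(dim=1).item() == 0
```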
S3312: acquiring the occurrence times of the hands of the infants, and counting the independent feeding times of the infants according to the occurrence times of the hands of the infants;
Specifically, referring to fig. 7, if the infant hand (reference number 5) appears, the infant is considered to be putting food into its own mouth rather than being fed by a guardian or another adult, i.e. the infant is feeding autonomously at this time. An infant autonomous feeding count N2 is preset with an initial value of 0; each time the infant hand appears during feeding, N2 is increased by 1; the final N2 obtained by statistics is the infant autonomous feeding count.
S3313: and calculating the autonomy score by integrating the infant feeding times and the infant autonomy feeding times.
Specifically, the autonomy score P1 is calculated from the infant feeding count N1 and the infant autonomous feeding count N2, where 0 <= P1 <= 10, with the calculation formula P1 = N2 / N1 * 10. A higher value of P1 indicates higher dining autonomy, meaning most feedings are probably performed by the infant itself and suggesting a good appetite; a lower value of P1 indicates lower autonomy, meaning most feedings may be performed by others. By effectively evaluating the infant's dining autonomy, parents can be reminded to feed less when autonomy is low, cultivating the good habit of independent dining, which benefits the infant's physical and mental development.
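A minimal sketch of this formula, assuming N1 and N2 have been counted as described; the zero-feeding guard and the clipping are added safeguards, not part of the patent text:

```python
def autonomy_score(n1: int, n2: int) -> float:
    """P1 = N2 / N1 * 10, guarded against N1 == 0 and clipped into [0, 10]."""
    if n1 == 0:
        return 0.0
    return min(max(n2 / n1 * 10, 0.0), 10.0)
```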
S332: carrying out pleasure degree analysis on the dining behaviors of the infants in the target image, and synthesizing pleasure degree analysis results and the infant eating times to obtain pleasure degree scores;
in one embodiment, referring to fig. 8, the step S332 includes:
S3321: performing face detection on the target image, and identifying infant face information in the target image;
Specifically, in the large number of images collected from infant nursing scenes, images of infant faces are labeled in advance, a deep learning algorithm is used for training, and a third detection model capable of detecting infant faces is output. The target image is input into the third detection model to identify the infant facial information in the target image; the infant facial information comprises at least one of the following: key point information of the left eye, right eye, nose and mouth.
S3322: presetting a second classification model, inputting the infant facial information into the second classification model, and identifying infant expression information;
Specifically, in the large number of images collected from infant nursing scenes, the key point information of infants in the crying state, the smiling state and other states is labeled in advance, a deep learning algorithm is used for training, and a second classification model that classifies the infant facial information into a smiling state, a crying state or other states is output. The infant facial information is classified into these states using the second classification model, and the information in the smiling state and the crying state is taken as the infant expression information.
S3323: according to the expression information, respectively counting the smile times and crying times of the infants;
Specifically, an infant smile count M1 is preset with an initial value of 0; if expression information in the infant smiling state appears, M1 is increased by 1, and the final M1 obtained by statistics is the infant smile count. Likewise, an infant crying count M2 is preset with an initial value of 0; if expression information in the infant crying state appears, M2 is increased by 1, and the final M2 obtained by statistics is the infant crying count.
S3324: and calculating the pleasure score by integrating the smile times, crying times and feeding times of the infants.
Specifically, the pleasure score P2 is calculated from the infant smile count M1, the crying count M2 and the infant feeding count N1, where -10 <= P2 <= 10, with the calculation formula P2 = (M1 - M2) / N1 * 10. If -10 <= P2 < 0, the infant is mostly crying during dining and the pleasure is poor; if P2 >= 0, the larger the value of P2, the higher the pleasure and the more the infant is smiling during dining. By detecting the changes in the infant's expression during dining, the infant's pleasure is effectively monitored; low pleasure may be caused by picky eating or scalding food, in which case the parents need to adjust the type and temperature of the infant's food appropriately.
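A minimal sketch of the pleasure score, assuming M1, M2 and N1 have been counted as above; the zero guard and clipping into [-10, 10] are added safeguards:

```python
def pleasure_score(m1: int, m2: int, n1: int) -> float:
    """P2 = (M1 - M2) / N1 * 10, guarded against N1 == 0 and clipped into [-10, 10]."""
    if n1 == 0:
        return 0.0
    return min(max((m1 - m2) / n1 * 10, -10.0), 10.0)
```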
S333: performing compactness analysis on the dining behaviors of the infants in the target image, and synthesizing a compactness analysis result and the infant feeding times to obtain a compactness score;
in one embodiment, referring to fig. 9, S333 includes:
S3331: acquiring a time node of infant feeding;
Specifically, whenever an infant feeding is recognized, the dining quality assessment device automatically records the time node at that moment, giving T1, T2, ..., T(N1), which represent the time nodes of each feeding from the first to the N1-th.
S3332: calculating average interval time and time sequence according to the time node;
Specifically, the average interval time Tavg of infant feeding is calculated from the time nodes with the formula Tavg = (T(N1) - T1) / (N1 - 1). Then the time sequence T(i, j) between two adjacent feedings is calculated, where 1 <= i < j <= N1 and j = i + 1: T(1, 2) = T2 - T1, T(2, 3) = T3 - T2, ..., T(N1 - 1, N1) = T(N1) - T(N1 - 1).
S3333: and calculating a compactness score according to the average interval time, the time sequence and the infant feeding times.
Specifically, a compactness count num is preset with an initial value of 0. If T(i, j) is less than or equal to Tavg, the interval between the two consecutive feedings is no longer than the average interval, meaning the infant's dining is more compact and more active at that time, and num is increased by 1. From the final compactness count num and the infant feeding count N1, the compactness score P3 is calculated with the formula P3 = num / (N1 - 1) * 10, where 0 <= P3 <= 10. The larger the value of P3, the more compact the infant's dining process and the higher the dining enthusiasm; conversely, the smaller the value of P3, the less compact the dining process and the lower the enthusiasm, in which case parents need to raise the infant's enthusiasm for dining.
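A minimal sketch combining S3331 to S3333, assuming the recorded time nodes are available as a list of timestamps in seconds:

```python
def compactness_score(time_nodes: list) -> float:
    """P3 from the feeding time nodes T1..T(N1).

    num counts adjacent intervals T(i, i+1) that do not exceed the average
    interval Tavg; P3 = num / (N1 - 1) * 10.
    """
    n1 = len(time_nodes)
    if n1 < 2:
        return 0.0  # compactness is undefined for fewer than two feedings
    tavg = (time_nodes[-1] - time_nodes[0]) / (n1 - 1)
    num = sum(1 for a, b in zip(time_nodes, time_nodes[1:]) if (b - a) <= tavg)
    return num / (n1 - 1) * 10
```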
S334: and carrying out weighted calculation on the autonomy score, the pleasure score and the compactness score to obtain a comprehensive score, and giving an evaluation result according to the comprehensive score.
In one embodiment, referring to fig. 10, the step S334 includes:
S3341: obtaining the age of an infant, and presetting a score threshold value and an age threshold value;
Specifically, the apparatus acquires the infant age Age input by the user through a mobile terminal, presets a score threshold Pthreshold, taking Pthreshold = 6 as an example, and presets an infant age threshold AgeThreshold, taking AgeThreshold = 3 as an example.
S3342: weighting and calculating the autonomy score, the pleasure score and the compactness score to obtain a comprehensive score;
Specifically, the autonomy score, the pleasure score and the compactness score are weighted to obtain the comprehensive score P, with the calculation formula P = a * P1 + b * P2 + c * P3, where a, b and c are the weighting coefficients of the three scores, 0 < a, b, c < 1 and a + b + c = 1. Considering that different parents have different requirements for evaluating infant dining quality, the three weighting coefficients a, b and c can be set by the user, thereby satisfying different use requirements: for example, if the user prefers to evaluate the autonomy of the infant's dining, the weighting coefficient a can be set to 0.6; similarly, if the user prefers to evaluate the pleasure or the compactness, the weighting coefficients b and c can be raised respectively.
S3343: and comparing the comprehensive score with a score threshold value, and comparing the infant age with an age threshold value to obtain an evaluation result of the infant dining quality.
Specifically, take the user-set weighting coefficients a = 0.6, b = 0.3 and c = 0.1 as an example, meaning the user prefers to evaluate the infant's dining autonomy. If the calculated comprehensive score P is 5, smaller than the preset score threshold Pthreshold, and the infant's age is 4, larger than the preset age threshold AgeThreshold, then the infant is relatively old but its autonomous feeding ability during dining is poor; parents need to take care to reduce the frequency of feeding and let the infant feed itself. If the user sets a = 0.1, b = 0.6 and c = 0.3, the user prefers to evaluate the pleasure of dining; if the calculated comprehensive score P is 4, smaller than Pthreshold, the infant's pleasure during dining is poor, and parents need to look into the reason, for example an unsuitable food type or temperature, and adjust the type and temperature of the food accordingly. If the user sets a = 0.1, b = 0.3 and c = 0.6, the user prefers to evaluate the compactness of dining; if the calculated comprehensive score P is 5, smaller than Pthreshold, and the infant's age is 5, larger than AgeThreshold, the infant is relatively old but dines with low compactness, and parents need to improve the infant's attention during dining and fix the infant's dining time. The specific evaluation result presented to the user according to the comprehensive score makes the whole process intelligent and digital, assists the user in guiding the infant's dining behavior, and helps improve the standardization of the infant's dining process.
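A minimal sketch of the weighted evaluation in S3342 and S3343; the default weights mirror the first example above, while the threshold defaults and the advice strings are illustrative assumptions:

```python
def evaluate(p1: float, p2: float, p3: float, age: float,
             a: float = 0.6, b: float = 0.3, c: float = 0.1,
             p_threshold: float = 6.0, age_threshold: float = 3.0) -> str:
    """P = a*P1 + b*P2 + c*P3, then compare with Pthreshold and AgeThreshold."""
    assert abs(a + b + c - 1.0) < 1e-9, "weights must sum to 1"
    p = a * p1 + b * p2 + c * p3
    if p >= p_threshold:
        return f"P={p:.1f}: dining quality is satisfactory"
    # Below the threshold: point at the dimension the user weighted most heavily.
    dominant = max((a, "autonomy"), (b, "pleasure"), (c, "compactness"))[1]
    note = "; the infant is older than the age threshold" if age > age_threshold else ""
    return f"P={p:.1f}: low {dominant}{note}; parents should adjust accordingly"
```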
Example 2
Referring to fig. 11, embodiment 2 of the present invention further provides an intelligent assessment device for dining quality of infants, the device comprising:
the image acquisition module is used for acquiring a real-time video stream in an infant care scene and decomposing the video stream into multi-frame images;
the behavior judging module is used for presetting a target detection model, inputting the multi-frame images into the target detection model, identifying hand position information of a person and mouth position information of an infant in the multi-frame images, and identifying whether the infant is dining or not according to the hand position information of the person and the mouth position information of the infant;
the behavior evaluation module is used for evaluating the dining behavior of the infant according to a preset behavior analysis rule when the infant is identified to be dining, and outputting an evaluation result.
Specifically, in the intelligent infant dining quality assessment device of embodiment 2 of the present invention, the image acquisition module acquires a real-time video stream in the infant nursing scene and decomposes the video stream into multi-frame images; the behavior judging module presets a target detection model, inputs the multi-frame images into the target detection model, identifies the hand position information of persons and the mouth position information of the infant in the images, and identifies whether the infant is dining according to that position information; and the behavior evaluation module, when the infant is identified as dining, evaluates the infant's dining behavior according to the preset behavior analysis rules and outputs an evaluation result. When infant dining behavior occurs, it is comprehensively analyzed and evaluated through the preset behavior analysis rules to obtain a detailed evaluation result, which assists the user in guiding the infant's dining, improves the standardization of infant dining, and also improves the parents' nursing experience.
Example 3
In addition, the intelligent evaluation method for dining quality of infants in accordance with embodiment 1 of the present invention described in connection with fig. 1 may be implemented by an electronic device. Fig. 12 shows a schematic hardware structure of an electronic device according to embodiment 3 of the present invention.
The electronic device may include a processor and memory storing computer program instructions.
In particular, the processor may comprise a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present invention.
The memory may include mass storage for data or instructions. By way of example, and not limitation, the memory may comprise a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a universal serial bus (USB) drive, or a combination of two or more of these. The memory may include removable or non-removable (or fixed) media, where appropriate. The memory may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory is a non-volatile solid-state memory. In a particular embodiment, the memory includes read-only memory (ROM). The ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these, where appropriate.
The processor reads and executes the computer program instructions stored in the memory to implement any of the infant dining quality intelligent assessment methods in the above embodiments.
In one example, the electronic device may also include a communication interface and a bus. The processor, the memory, and the communication interface are connected by a bus and complete communication with each other, as shown in fig. 12.
The communication interface is mainly used for realizing communication among the modules, the devices, the units and/or the equipment in the embodiment of the invention.
The bus includes hardware, software, or both that couple the components of the device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), or another suitable bus, or a combination of two or more of these. The bus may include one or more buses, where appropriate. Although embodiments of the invention have been described and illustrated with respect to a particular bus, the invention contemplates any suitable bus or interconnect.
Example 4
In addition, in combination with the intelligent assessment method for dining quality of infants in the above embodiment 1, embodiment 4 of the present invention may also provide a computer readable storage medium for implementation. The computer readable storage medium has stored thereon computer program instructions; the computer program instructions, when executed by the processor, implement any of the infant meal quality intelligent assessment methods of the embodiments described above.
In summary, the embodiment of the invention provides an intelligent evaluation method, device and equipment for dining quality of infants and a storage medium.
It should be understood that the invention is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and shown, and those skilled in the art can make various changes, modifications and additions, or change the order between steps, after appreciating the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
In the foregoing, only the specific embodiments of the present invention are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present invention is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present invention, and they should be included in the scope of the present invention.
Claims (10)
1. An intelligent assessment method for dining quality of infants, which is characterized by comprising the following steps:
S1: acquiring a real-time video stream in an infant care scene, and decomposing the video stream into multi-frame images;
S2: a target detection model is preset, the multi-frame images are input into the target detection model, the hand position information of a person and the mouth position information of an infant in the multi-frame images are identified, and whether the infant is dining or not is identified according to the hand position information of the person and the mouth position information of the infant;
S3: when the infant is identified to be dining, the dining behaviors of the infant are analyzed according to the preset behavior analysis rules, and the dining quality of the infant is evaluated.
2. The intelligent assessment method for dining quality of infants according to claim 1, wherein S3 comprises:
S31: when the infant is identified to be dining, extracting each frame of target image corresponding to the infant dining;
S32: counting to obtain infant feeding times according to the hand position information of the person and the infant mouth position information;
S33: and comprehensively analyzing each target image according to a preset dining quality evaluation rule and combining the infant eating times to evaluate the dining quality of the infant.
3. The intelligent assessment method for dining quality of infants according to claim 2, wherein S33 comprises:
S331: carrying out autonomous analysis on the dining behaviors of the infants in the target image, and synthesizing autonomous analysis results and the infant eating times to obtain autonomous scores;
S332: carrying out pleasure degree analysis on the dining behaviors of the infants in the target image, and synthesizing pleasure degree analysis results and the infant eating times to obtain pleasure degree scores;
S333: performing compactness analysis on the dining behaviors of the infants in the target image, and synthesizing a compactness analysis result and the infant feeding times to obtain a compactness score;
S334: and carrying out weighted calculation on the autonomy score, the pleasure score and the compactness score to obtain a comprehensive score, and giving an evaluation result according to the comprehensive score.
4. The intelligent assessment method for dining quality of infants according to claim 3, wherein S331 comprises:
S3311: presetting a first classification model, and classifying hands of a person into infant hands and non-infant hands by using the first classification model;
S3312: acquiring the occurrence times of the hands of the infants, and counting the independent feeding times of the infants according to the occurrence times of the hands of the infants;
S3313: and calculating the autonomy score by integrating the infant feeding times and the infant autonomy feeding times.
5. The intelligent assessment method for dining quality of infants according to claim 3, wherein S332 comprises:
S3321: performing face detection on the target image, and identifying infant face information in the target image;
S3322: presetting a second classification model, inputting the infant facial information into the second classification model, and identifying infant expression information;
S3323: according to the expression information, respectively counting the smile times and crying times of the infants;
S3324: and calculating the pleasure score by integrating the smile times, crying times and feeding times of the infants.
6. The intelligent assessment method according to claim 3, wherein S333 comprises:
S3331: acquiring a time node of infant feeding;
S3332: calculating average interval time and time sequence according to the time node;
S3333: and calculating a compactness score according to the average interval time, the time sequence and the infant feeding times.
7. The intelligent assessment method according to claim 3, wherein S334 comprises:
S3341: obtaining the age of an infant, and presetting a score threshold value and an age threshold value;
S3342: weighting and calculating the autonomy score, the pleasure score and the compactness score to obtain a comprehensive score;
S3343: and comparing the comprehensive score with a score threshold value, and comparing the infant age with an age threshold value to obtain an evaluation result of the infant dining quality.
8. An intelligent assessment device for dining quality of infants, which is characterized by comprising:
the image acquisition module is used for acquiring a real-time video stream in an infant care scene and decomposing the video stream into multi-frame images;
the behavior judging module is used for presetting a target detection model, inputting the multi-frame images into the target detection model, identifying hand position information of a person and mouth position information of an infant in the multi-frame images, and identifying whether the infant is dining or not according to the hand position information of the person and the mouth position information of the infant;
the behavior evaluation module is used for analyzing the dining behaviors of the infants according to preset behavior analysis rules when the infants are identified to be dining, and evaluating the dining quality of the infants.
9. An electronic device, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory, which when executed by the processor, implement the method of any one of claims 1-7.
10. A storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310196586.2A | 2023-03-03 | 2023-03-03 | Intelligent evaluation method, device, equipment and storage medium for dining quality of infants
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310196586.2A | 2023-03-03 | 2023-03-03 | Intelligent evaluation method, device, equipment and storage medium for dining quality of infants
Publications (1)
Publication Number | Publication Date |
---|---|
CN116110129A (en) | 2023-05-12
Family
ID=86261669
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310196586.2A | Intelligent evaluation method, device, equipment and storage medium for dining quality of infants (CN116110129A, pending) | 2023-03-03 | 2023-03-03
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116110129A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116580427A (en) * | 2023-05-24 | 2023-08-11 | 武汉星巡智能科技有限公司 | Method, device and equipment for manufacturing electronic album containing interaction content of people and pets |
CN116580427B (en) * | 2023-05-24 | 2023-11-21 | 武汉星巡智能科技有限公司 | Method, device and equipment for manufacturing electronic album containing interaction content of people and pets |
Similar Documents
Publication | Title
---|---
US8593523B2 | Method and apparatus for capturing facial expressions
CN107767874B | Infant crying recognition prompting method and system
US20200285842A1 | Method and apparatus for child state analysis, vehicle, electronic device, and storage medium
CN113194359B | Method, device, equipment and medium for automatically grabbing baby wonderful video highlights
CN116110129A | Intelligent evaluation method, device, equipment and storage medium for dining quality of infants
CN110427923B | Infant milk vomiting behavior recognition method and device, computer equipment and storage medium
CN116580427B | Method, device and equipment for manufacturing electronic album containing interaction content of people and pets
CN113709562B | Automatic editing method, device, equipment and storage medium based on baby action video
CN112686211A | Fall detection method and device based on attitude estimation
CN115862115B | Infant respiration detection area positioning method, device and equipment based on vision
CN108710820A | Infantile state recognition methods, device and server based on recognition of face
CN113378762A | Sitting posture intelligent monitoring method, device, equipment and storage medium
CN116682176A | Method, device, equipment and storage medium for intelligently generating infant video tag
CN117373110A | Visible light-thermal infrared imaging infant behavior recognition method, device and equipment
CN116386671B | Infant crying type identification method, device, equipment and storage medium
CN117173784B | Infant turning-over action detection method, device, equipment and storage medium
CN113591520A | Image identification method, intrusion object detection method and device
CN115170870A | Deep learning-based infant behavior feature classification method and system
CN116761035B | Video intelligent editing method, device and equipment based on maternal and infant feeding behavior recognition
CN111402987B | Medicine reminding method, device, equipment and storage medium based on visible light video
CN112036293A | Age estimation method, and training method and device of age estimation model
CN115590477B | Sleep staging method and device based on self-supervision, electronic equipment and storage medium
CN115953389B | Strabismus judging method and device based on face key point detection
CN113591535B | Facial feature point-based method for identifying chewing motion of old people in feeding process
US20240221764A1 | Sound detection method and related device
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination