CN116382468A - Automatic triggering method, system and storage medium for interaction function of intelligent mirror - Google Patents

Automatic triggering method, system and storage medium for interaction function of intelligent mirror Download PDF

Info

Publication number
CN116382468A
CN116382468A (application CN202310165330.5A)
Authority
CN
China
Prior art keywords
user
preset
weight
image
standard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310165330.5A
Other languages
Chinese (zh)
Inventor
李剑
赵建洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangyin Baohong Electric Appliance Co ltd
Original Assignee
Jiangyin Baohong Electric Appliance Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangyin Baohong Electric Appliance Co ltd filed Critical Jiangyin Baohong Electric Appliance Co ltd
Priority to CN202310165330.5A priority Critical patent/CN116382468A/en
Publication of CN116382468A publication Critical patent/CN116382468A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

The application relates to an automatic triggering method, system and storage medium for the interaction function of a smart mirror, in the technical field of image processing, comprising the following steps: acquiring a user image; recognizing the user image to obtain a plurality of confidences; calculating a comprehensive weight from all the confidences and the preset weights corresponding one-to-one to the standard states; matching, from a preset weight set, the weight range into which the comprehensive weight falls; matching the corresponding action instruction from a correspondence table of weight ranges and action instructions; and executing the matched action instruction. User information is acquired by means of image recognition, the user's current state is estimated from that information, a suitable action instruction is matched to that state, and the instruction is executed proactively, so that the smart mirror's functions take effect automatically and conform to the user's current state even when the user does not interact, providing a better user experience.

Description

Automatic triggering method, system and storage medium for interaction function of intelligent mirror
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an automatic triggering method and system for an interaction function of an intelligent mirror, and a storage medium.
Background
Mirrors are among the most common household items. A traditional mirror can only assist with dressing and grooming, and its function is limited to that. With the progress of technology, smart home devices have begun to enter people's lives. By adding functions such as voice, display, communication, light supplementing and defogging to an ordinary mirror, the ordinary mirror is intelligently extended into a smart mirror.
The smart mirror provides more convenient services and meets richer user demands. For example, a user can not only tidy their appearance in front of the smart mirror but also query internet information through voice control. For another example, when the ambient light is dim, light is supplemented automatically so that the user can use the smart mirror normally.
The current functions of the smart mirror can therefore be divided into two types: interactive and autonomous. Functions such as light supplementing and defogging, which are triggered by environmental parameters and require no user operation, are classified as autonomous; functions that must be actively triggered by the user, such as querying the weather forecast or current traffic conditions, are classified as interactive.
In practical use, many users find it hard to think of interacting (e.g., when just awake) or simply have no habit of interacting with the smart mirror at all, which leaves many of its functions unused; the smart mirror then holds no real advantage over a traditional mirror. There is therefore a need to optimize the triggering conditions of the smart mirror's functions so that it can trigger the corresponding functions more proactively.
Disclosure of Invention
In order to enable the interaction functions of the intelligent mirror to be triggered automatically in suitable scenes, the present application provides an automatic triggering method, system and storage medium for the interaction function of an intelligent mirror.
In a first aspect, the present application provides an automatic triggering method for an interaction function of an intelligent mirror, which adopts the following technical scheme:
an automatic triggering method for an interactive function of an intelligent mirror comprises the following steps:
acquiring a user image;
the user image is identified to obtain a plurality of confidence degrees, the confidence degrees correspond to different standard states, and the confidence degrees are used for representing the possibility that the current user state is the corresponding standard state;
calculating comprehensive weights according to all the confidence degrees and preset weights corresponding to the standard states one by one;
matching a weight range in which the comprehensive weight falls from a preset weight set;
matching corresponding action instructions from a corresponding table of the weight range and the action instructions, wherein different action instructions correspond to different intelligent mirror functions;
and executing the matched action instruction.
According to the above technical scheme, user information is acquired by means of image recognition, the user's current state is estimated from that information, a suitable action instruction is matched to that state, and the instruction is executed proactively, so that the smart mirror's functions take effect automatically and conform to the user's current state even when the user does not interact, providing a better user experience.
Optionally, the identifying the user image to obtain a plurality of confidence degrees includes the following steps:
identifying the user image to obtain a dressing image and a face image;
respectively comparing the dressing image with a plurality of preset dressing standard images to obtain corresponding dressing similarity;
respectively comparing the face image with a plurality of preset face standard images to obtain corresponding face similarity;
combining dressing similarity and face similarity in pairs and calculating confidence;
and determining the correspondence between the confidences and the standard states according to the preset correspondence between the standard states and the dressing and face standard images.
Optionally, the calculating the comprehensive weight according to all the confidence degrees and the preset weights corresponding to the standard states one by one includes the following steps:
obtaining initial weights according to the confidence degrees and preset weights;
acquiring the current time, and determining the correction value corresponding to the current time from a preset work and rest table, wherein the preset work and rest table stores the correspondence between time periods and correction values;
adding the correction value and the initial weight to obtain the comprehensive weight.
Optionally, the obtaining the initial weight according to the confidence level and the preset weight includes the following steps:
based on the same corresponding standard state, adjusting the preset weight according to the confidence coefficient to obtain the actual weight corresponding to each standard state;
all the actual weights are added to get the initial weights.
Optionally, the work and rest table comprises a personal work and rest table and a public work and rest table;
the step of determining the correction value corresponding to the current time from the preset work and rest table comprises the following steps:
determining user information from the user image;
judging whether a personal work and rest table corresponding to the user information is preset;
if so, determining the correction value according to the personal work and rest table;
if not, determining the correction value according to the public work and rest table.
Optionally, before executing the matched action instruction, the method includes the following steps:
extracting the historical execution time of the corresponding action from a preset database according to the matched action instruction, wherein different action instructions and the historical execution time corresponding to the corresponding action instruction are stored in the database;
calculating the time interval between the historical execution time and the current time, judging whether the time interval exceeds a preset threshold value,
if yes, executing an action instruction and updating the current time into a database to replace the corresponding historical execution time;
if not, the action instruction is not executed.
In a second aspect, the present application provides an automatic triggering system for an interaction function of an intelligent mirror, which adopts the following technical scheme:
an automatic triggering system for an interactive function of a smart mirror, comprising:
the image acquisition module is used for acquiring the user image;
the image analysis module is used for acquiring a plurality of confidence degrees by identifying the user image;
the weight calculation module is used for calculating comprehensive weights according to all the confidence degrees and preset weights corresponding to the standard states one by one;
the weight matching module is used for matching a weight range in which the comprehensive weight falls from a preset weight set;
the instruction determining module is used for matching corresponding action instructions from the weight range and the corresponding table of the action instructions;
and the instruction execution module is used for executing the matched action instruction.
In a third aspect, the present application provides a readable storage medium storing a computer program capable of being loaded by a processor to perform the automatic triggering method for an interaction function of an intelligent mirror as described above.
In summary, the present application provides at least the following beneficial technical effect: user information is acquired by means of image recognition, the user's current state is estimated from that information, a suitable action instruction is matched to that state, and the instruction is executed proactively, so that the smart mirror's functions take effect automatically and conform to the user's current state even when the user does not interact, providing a better user experience.
Drawings
Fig. 1 is a block diagram of the overall steps of an embodiment of the present application.
FIG. 2 is a block diagram of the specific steps of how the confidences are obtained in an embodiment of the present application.
FIG. 3 is a block diagram of the specific steps of how the composite weights are calculated in an embodiment of the present application.
Fig. 4 is a block diagram of steps of how correction values are determined in an embodiment of the present application.
FIG. 5 is a block diagram of steps after determining an action instruction in an embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to the accompanying drawings.
The embodiment of the application discloses an automatic triggering method for an interactive function of an intelligent mirror, referring to fig. 1, comprising the following steps:
s100, acquiring a user image.
The user image is captured by a camera provided on the smart mirror. The camera works in conjunction with an infrared sensor on the smart mirror: when a user approaches, the sensor detects the infrared radiation emitted by the human body and thereby triggers the camera to start working automatically.
In addition, so that the user image is sufficiently clear and complete, the infrared sensor may be arranged on the side where the mirror surface of the smart mirror is located, with shields arranged around it, so that the user triggers the sensor only when facing the mirror surface. This arrangement activates the camera, as far as possible, only when the user is actually looking into the mirror.
S200, identifying the user image to obtain a plurality of confidence degrees.
The plurality of confidence levels correspond to different standard states. The confidence is used to characterize the likelihood that the current user state is the corresponding standard state.
In theory, recognizing the user image should yield an exact user state, for example whether the user is just awake, in a vague (half-asleep) state, or ready to go out. However, the user's expression, clothing and other factors vary widely and are difficult to reduce to a single unified criterion, so recognition rarely produces a single definite result; instead, several candidate states appear, each with its own likelihood.
Therefore, in this embodiment, a plurality of criteria are defined manually for the various user states; that is, a plurality of standard states are pre-stored. After the user image is recognized, the recognition result is compared with each standard state, and each comparison result is a confidence.
S300, calculating comprehensive weights according to all the confidence degrees and preset weights corresponding to the standard states one by one.
The preset weights are set by staff, and each standard state corresponds to different preset weights.
The integrated weight is a weight obtained by integrating the likelihood of all standard states.
S400, matching a weight range in which the comprehensive weight falls from a preset weight set.
The preset weight set stores a plurality of weight ranges, which correspond respectively to the plurality of standard states and are contiguous in sequence. Each preset weight is the midpoint of the weight range of its corresponding standard state.
For example, there are three standard states A, B, C, then the corresponding preset weights are 1, 3, 5, and the corresponding weight ranges are 0-2, 2-4, 4-6, respectively.
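Using the three example states above, the range matching of S400 can be sketched as follows. This is an illustrative sketch only: the state labels, the half-open boundary convention, and the function name are assumptions, not taken from the patent text.

```python
# Illustrative sketch of matching a comprehensive weight to its weight range.

PRESET_WEIGHTS = {"A": 1.0, "B": 3.0, "C": 5.0}

# Each weight range is centered on its state's preset weight (the midpoint).
WEIGHT_SET = [("A", 0.0, 2.0), ("B", 2.0, 4.0), ("C", 4.0, 6.0)]

def match_range(comprehensive_weight):
    """Return the standard state whose range contains the weight."""
    for state, lo, hi in WEIGHT_SET:
        if lo <= comprehensive_weight < hi:
            return state
    return None

# Sanity check: each preset weight is indeed the midpoint of its range.
assert all(PRESET_WEIGHTS[s] == (lo + hi) / 2 for s, lo, hi in WEIGHT_SET)
```

A comprehensive weight of 2.2 would thus match standard state B, while 1.7 would match standard state A.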
Matching the weight range into which the comprehensive weight falls means judging the current user state to be the standard state corresponding to that weight range.
S500, matching corresponding action instructions from the weight range and the corresponding table of the action instructions.
Wherein, different action instructions correspond to different smart mirror functions.
The correspondence between weight ranges and action instructions is, in effect, a correspondence between standard states and action instructions. For example, when the standard state is the just-awake state, the corresponding action instruction is to play light music; when the standard state is the vague state, the corresponding action instruction is no action; and when the standard state is the ready-to-go state, the corresponding action instruction is to play the weather forecast.
S600, executing the matched action instruction.
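The six steps S100 to S600 can be sketched end to end as follows. All concrete values (confidences, preset weights, action names) are illustrative assumptions, and `identify` is a placeholder standing in for the real image recognition of S200.

```python
# End-to-end sketch of S100-S600 with assumed example values.

PRESET_WEIGHTS = {"vague": 1.0, "just_awake": 3.0, "ready_to_go": 5.0}
WEIGHT_RANGES = [((0.0, 2.0), "vague"), ((2.0, 4.0), "just_awake"),
                 ((4.0, 6.0), "ready_to_go")]
ACTIONS = {"vague": None,                      # no action
           "just_awake": "play_light_music",
           "ready_to_go": "play_weather_forecast"}

def identify(user_image):
    # Placeholder for S200: one confidence per standard state.
    return {"vague": 0.28, "just_awake": 0.42, "ready_to_go": 0.30}

def trigger(user_image):
    confidences = identify(user_image)                       # S200
    comprehensive = sum(c * PRESET_WEIGHTS[s]                # S300
                        for s, c in confidences.items())
    for (lo, hi), state in WEIGHT_RANGES:                    # S400
        if lo <= comprehensive < hi:
            return ACTIONS[state]                            # S500/S600
    return None
```

With the assumed confidences, the comprehensive weight is about 3.04, which falls in the just-awake range, so the light-music instruction would be returned.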
In one embodiment, referring to FIG. 2, recognizing the user image to obtain a plurality of confidences comprises the following steps:
s210, identifying the user image to acquire a dressing image and a face image.
The user image is recognized to separate the human body image from it, and the human body image is then divided into a face image containing the user's face and a dressing image containing the clothing worn by the user.
The face image is acquired in order to judge, from the user's facial expression, whether the user is alert or drowsy.
The dressing image is acquired in order to judge, from the clothing worn, whether the user is wearing pajamas or normal wear.
Depending on the camera angle, the user image may be a full-body or a half-body shot; however, since the smart mirror is only used with the user's face toward it, the captured image necessarily contains the user's face and upper garment, so the dressing image and the face image can both be split out.
Of course, an acquired user image may fail to contain the user's face; in that case the image is discarded and the camera shoots again, until either an image containing the user's face is acquired or the infrared sensor no longer senses a human body.
As for the case where the upper garment is missing from the image, the user is simply treated as wearing pajamas.
S220, respectively comparing the dressing image with a plurality of preset dressing standard images to obtain corresponding dressing similarity, and respectively comparing the face image with a plurality of preset face standard images to obtain corresponding face similarity.
There are two categories of dressing standard images: images of various normal clothes and images of various pajamas. Correspondingly, the face standard images also fall into two categories: images of various drowsy expressions and images of various alert expressions.
Two dressing similarities are obtained, a pajama similarity and a normal-wear similarity, as follows: the dressing image is compared with each dressing standard image to obtain a set of similarity rates; the highest rate is taken as one dressing similarity, and the other dressing similarity is obtained by subtracting it from 1.
For example, suppose the dressing standard images are pajamas A, pajamas B, normal wear A and normal wear B, and the similarity rates after comparison are 0.42, 0.34, 0.55 and 0.57 respectively. The highest rate, 0.57, comes from a normal-wear image, so it is taken as the normal-wear similarity. If the pajama similarity were likewise selected directly from the compared rates, the sum of the two similarities could easily exceed 1, producing an obvious deviation in the comprehensive weight. Since pajamas and normal wear are mutually exclusive, a high normal-wear similarity implies a low pajama similarity; therefore 1 - 0.57 = 0.43 is used directly as the pajama similarity.
The method for obtaining the facial similarity is similar to the method for obtaining the dressing similarity, and will not be described in detail here.
S230, combining the dressing similarity and the face similarity in pairs, and calculating the confidence.
There are two dressing similarities and two face similarities, giving 4 combinations in total, i.e., four confidences are calculated. Each confidence is the product of the combined dressing similarity and face similarity; for example, a dressing similarity of 0.6 and a face similarity of 0.3 give a confidence of 0.18.
Continuing the example, the other dressing similarity is 0.4 and the other face similarity is 0.7, so the remaining three confidences are 0.42, 0.28 and 0.12; the four confidences sum to 1.
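The derivation in S220 and S230 can be sketched with the example numbers above. The category labels, dictionary layout and helper names are assumptions made for illustration.

```python
# Sketch of S220 (complementary dressing similarities) and
# S230 (confidences as pairwise products of similarities).

def dressing_similarities(rates):
    """rates maps each dressing standard image to (category, similarity rate).
    The highest rate is kept; the opposite category gets its complement."""
    best_cat, best_rate = max(rates.values(), key=lambda cr: cr[1])
    other_cat = "pajamas" if best_cat == "normal_wear" else "normal_wear"
    return {best_cat: best_rate, other_cat: 1.0 - best_rate}

def confidences(dress_sims, face_sims):
    """S230: one confidence per (dressing, face) combination."""
    return {(d, f): dress_sims[d] * face_sims[f]
            for d in dress_sims for f in face_sims}

# The example comparison results from the text.
rates = {"pajamas_A": ("pajamas", 0.42), "pajamas_B": ("pajamas", 0.34),
         "normal_A": ("normal_wear", 0.55), "normal_B": ("normal_wear", 0.57)}
```

Running `dressing_similarities(rates)` reproduces the text's 0.57 / 0.43 split, and because the two similarities in each pair are complements, the four confidences always sum to 1.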
S240, determining the correspondence between the confidences and the standard states according to the preset correspondence between the standard states and the dressing and face standard images.
The correspondence between the standard states and the dressing and face standard images is set by staff. For example, the ready-to-go standard state corresponds to the normal-wear dressing standard images and the alert face standard images; the just-awake standard state corresponds to pajamas with an alert expression, and also to normal wear with a drowsy expression; and the vague standard state corresponds to pajamas with a drowsy expression.
In one embodiment, referring to fig. 3, the calculation of the comprehensive weight according to all the confidence levels and the preset weights corresponding to the standard states one by one includes the following steps:
s310, obtaining initial weights according to the confidence degrees and preset weights.
S320, acquiring the current time, and determining the correction value corresponding to the current time from a preset work and rest table.
The preset work and rest table stores the correspondence between time periods and correction values.
S330, adding the correction value and the initial weight to obtain the comprehensive weight.
Because image recognition and image comparison both carry a certain error, the time of day is used as additional evidence of the theoretical user state: the correction value reflects that evidence and influences the final state judgment through the weight.
Assume the preset work and rest table stores three time periods A (5:00-9:00), B (9:00-20:00) and C (20:00-5:00), with correction values 0, +0.5 and -0.5 respectively. Suppose the calculated initial weight is 2.2 and the weight ranges of the three standard states A, B, C are 0-2, 2-4 and 4-6; the initial weight alone corresponds to standard state B. But if the current time is 22:00, the matched correction value is -0.5, the comprehensive weight becomes 1.7, and the corresponding standard state is A.
In practical application, standard state A is the vague state, standard state B is the just-awake state, and standard state C is the ready-to-go state. The three states form a progression that matches the ordering of their weight ranges. Under a conventional routine, all three states may occur in period A, so its correction value is 0, i.e., no correction; in period B the user is most likely preparing to go out, so the correction value is positive, biasing the weight toward standard state C; and in period C the user may be up at night, drinking water and so on, which better matches standard state A, so the correction value is negative, biasing the weight toward standard state A.
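Under the example table (periods A, B, C with corrections 0, +0.5, -0.5), the lookup in S320 and the addition in S330 might look like this; the hour boundaries follow the example and the function names are assumptions.

```python
# Sketch of S320/S330 using the example work-and-rest table.

def correction_for_hour(hour):
    if 5 <= hour < 9:      # period A: all three states plausible
        return 0.0
    if 9 <= hour < 20:     # period B: likely preparing to go out
        return 0.5
    return -0.5            # period C (20:00-5:00): likely up at night

def comprehensive_weight(initial_weight, hour):
    return initial_weight + correction_for_hour(hour)
```

With the text's numbers, an initial weight of 2.2 at 22:00 yields a comprehensive weight of 1.7, moving the match from standard state B to standard state A. Note that period C wraps around midnight, which is why it is handled as the fall-through case.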
Further, obtaining the initial weight according to the confidences and the preset weights comprises the following steps:
s311, based on the same corresponding standard state, adjusting the preset weight according to the confidence coefficient to obtain the actual weight corresponding to each standard state.
Adjusting a preset weight according to its confidence means multiplying the confidence by the preset weight; the product is the actual weight.
The confidence represents the probability that the user state is the corresponding standard state, and the preset weight is the numeric value representing that standard state; the actual weight is therefore the numeric contribution of the possibility that the user is in that state.
S312, adding all the actual weights to obtain initial weights.
Since all confidences sum to 1, adding all the actual weights yields the initial weight.
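S311 and S312 amount to a confidence-weighted sum of the preset weights. A minimal sketch, with the state labels and numbers assumed for illustration:

```python
# Sketch of S311 (actual weight = confidence x preset weight) and
# S312 (initial weight = sum of all actual weights).

def initial_weight(confidences, preset_weights):
    actual = {s: confidences[s] * preset_weights[s] for s in confidences}
    return sum(actual.values())
```

For example, confidences of 0.2, 0.3, 0.5 against preset weights 1, 3, 5 give actual weights 0.2, 0.9, 2.5 and an initial weight of 3.6.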
In addition, the preset work and rest tables include personal work and rest tables and a public work and rest table. There may be multiple personal work and rest tables, each corresponding to a different user. The public work and rest table is preset directly by staff, whereas a personal work and rest table is formed by the smart mirror automatically collecting the user's habits over long-term use. Users who have just begun to use the smart mirror can therefore often only use the public work and rest table.
Referring to fig. 4, determining the correction value corresponding to the current time from the preset work and rest table comprises the following steps:
s321, determining user information according to the user image.
S322, judging whether a personal work and rest table corresponding to the user information is preset.
S323, if it exists, determining the correction value according to the personal work and rest table.
S324, if it does not exist, determining the correction value according to the public work and rest table.
After a correction value has been determined from the public work and rest table and the corresponding action instruction has been matched from the correspondence table of weight ranges and action instructions, the current time and the standard state corresponding to that action instruction are stored as a personal work and rest fragment in an initial work and rest table bound to the user. Once the personal work and rest fragments in the initial work and rest table continuously cover a preset time period, the initial work and rest table is converted into a personal work and rest table. The initial work and rest table is created automatically when the smart mirror recognizes new user information and is bound to that information. The preset time period may be one week, two weeks or even one month, covering both working days and rest days so as to record the user's routine more fully.
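A hypothetical sketch of the promotion condition described above: an initial work and rest table becomes a personal one once its fragments continuously cover the preset period. The 7-day period and the date-list representation are assumptions.

```python
# Sketch: do the recorded fragment dates include a run of
# `period_days` consecutive days?

from datetime import date, timedelta

def covers_continuously(fragment_dates, period_days=7):
    """True if some run of `period_days` consecutive days is fully covered."""
    days = sorted(set(fragment_dates))
    run = 1
    for prev, cur in zip(days, days[1:]):
        # Extend the run if the next recorded day is exactly one day later.
        run = run + 1 if cur - prev == timedelta(days=1) else 1
        if run >= period_days:
            return True
    return period_days <= 1 and len(days) >= 1
```

When this check passes, the smart mirror could switch the user from the public table to the newly completed personal table.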
In one embodiment, before executing the matched action instruction, see fig. 5, the method comprises the following steps:
s510, extracting the historical execution time of the corresponding action from a preset database according to the matched action instruction.
The database stores different action instructions and the corresponding historical execution time of the corresponding action instructions.
S520, calculating the time interval between the historical execution time and the current time, and judging whether the time interval exceeds a preset threshold.
And S530, if the time interval exceeds the preset threshold, executing the action instruction and updating the current time into the database to replace the corresponding historical execution time.
If the time interval is less than or equal to the preset threshold value, the action instruction is not executed.
The preset threshold is set by staff and may be, for example, 10, 20 or 30 minutes. A user may pass in front of the smart mirror repeatedly within a short time, triggering the camera several times; without the preset threshold, the same action instruction could easily be executed repeatedly and disturb the user's normal life. With the threshold set, a minimum interval is enforced between repeated executions of the same action instruction.
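The interval check of S510 to S530 can be sketched as a small throttle. The in-memory dictionary standing in for the preset database and the 20-minute default threshold are assumptions.

```python
# Sketch of S510-S530: skip an action instruction if it was executed
# within the preset threshold window.

from datetime import datetime, timedelta

class ActionThrottle:
    def __init__(self, threshold=timedelta(minutes=20)):
        self.threshold = threshold
        self.history = {}   # action instruction -> historical execution time

    def should_execute(self, action, now):
        last = self.history.get(action)
        if last is not None and now - last <= self.threshold:
            return False                 # within threshold: do not execute
        self.history[action] = now       # update historical execution time
        return True
```

A first trigger executes and records its time; a repeat within the threshold is suppressed, and a later trigger past the threshold executes again and refreshes the record.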
The embodiment of the application also discloses an automatic triggering system for the interaction function of an intelligent mirror, comprising:
an image acquisition module, configured to acquire a user image;
an image analysis module, configured to obtain a plurality of confidence degrees by recognizing the user image;
a weight calculation module, configured to calculate a comprehensive weight according to all the confidence degrees and the preset weights corresponding one-to-one to the standard states;
a weight matching module, configured to match, from a preset weight set, the weight range into which the comprehensive weight falls;
an instruction determination module, configured to match the corresponding action instruction from the correspondence table of weight ranges and action instructions;
and an instruction execution module, configured to execute the matched action instruction.
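Taken together, the modules above amount to a pipeline that could look roughly like this sketch (all function and parameter names are assumptions, not from the patent; the recognition step is passed in as a callable):

```python
def trigger(image, recognize, preset_weights, weight_ranges):
    """Hypothetical end-to-end pipeline mirroring the modules above.

    recognize(image)  -> {standard_state: confidence}     (image analysis)
    preset_weights    -> {standard_state: preset_weight}  (weight calculation)
    weight_ranges     -> [((low, high), action_instruction), ...]
                         (preset weight set + correspondence table)
    Returns the matched action instruction, or None if no range matches.
    """
    confidences = recognize(image)
    # Comprehensive weight: confidence-scaled preset weights summed over states.
    comprehensive = sum(confidences[s] * preset_weights[s] for s in confidences)
    # Weight matching: find the range the comprehensive weight falls into.
    for (low, high), instruction in weight_ranges:
        if low <= comprehensive < high:
            return instruction  # the instruction execution module would run this
    return None
```

The multiplicative combination of confidence and preset weight is one plausible reading of "adjusting the preset weight according to the confidence coefficient" in claim 4; the patent does not fix the exact formula.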
The embodiment of the application also discloses a readable storage medium storing a computer program that can be loaded by a processor to execute the above automatic triggering method for the interaction function of an intelligent mirror.
The foregoing are all preferred embodiments of the present application and are not intended to limit the scope of protection of the present application in any way; therefore, all equivalent changes made according to the structure, shape, and principle of the present application shall be covered by the scope of protection of the present application.

Claims (8)

1. An automatic triggering method for an interactive function of an intelligent mirror is characterized by comprising the following steps:
acquiring a user image;
recognizing the user image to obtain a plurality of confidence degrees, wherein the confidence degrees correspond to different standard states, and each confidence degree represents the possibility that the current user state is the corresponding standard state;
calculating comprehensive weights according to all the confidence degrees and preset weights corresponding to the standard states one by one;
matching a weight range in which the comprehensive weight falls from a preset weight set;
matching corresponding action instructions from a corresponding table of the weight range and the action instructions, wherein different action instructions correspond to different intelligent mirror functions;
and executing the matched action instruction.
2. The method for automatically triggering interactive functions of a smart mirror according to claim 1, wherein the identifying the user image to obtain a plurality of confidence levels comprises the steps of:
identifying the user image to obtain a dressing image and a face image;
respectively comparing the dressing image with a plurality of preset dressing standard images to obtain corresponding dressing similarity;
respectively comparing the face image with a plurality of preset face standard images to obtain corresponding face similarity;
combining the dressing similarities and the face similarities in pairs and calculating a confidence degree for each combination;
and determining the correspondence between the confidence degrees and the standard states according to the preset correspondence between the standard states and the dressing standard images and face standard images.
3. The automatic triggering method for the interactive function of the smart mirror according to claim 1, wherein the calculating the comprehensive weight according to all confidence levels and the preset weight corresponding to the standard state one by one comprises the following steps:
obtaining initial weights according to the confidence degrees and preset weights;
acquiring the real time, and determining a correction value corresponding to the real time from a preset work and rest table, wherein the preset work and rest table stores the correspondence between time periods and correction values;
adding the correction value and the initial weight to obtain the comprehensive weight.
4. The automatic triggering method for an interactive function of an intelligent mirror according to claim 3, wherein the obtaining an initial weight according to the confidence level and the preset weight comprises the following steps:
based on the same corresponding standard state, adjusting the preset weight according to the confidence coefficient to obtain the actual weight corresponding to each standard state;
adding all the actual weights to obtain the initial weight.
5. The automatic triggering method for an interactive function of a smart mirror according to claim 3, wherein the work and rest table comprises a personal work and rest table and a public work and rest table;
the determining of the correction value corresponding to the real time from the preset work and rest table comprises the following steps:
determining user information from the user image;
judging whether a personal work and rest table corresponding to the user information is preset;
if so, determining the correction value from the personal work and rest table;
if not, determining the correction value from the public work and rest table.
6. An automatic triggering method for interactive functions of a smart mirror according to claim 1, comprising the steps of, before executing the matched action instruction:
extracting the historical execution time of the corresponding action from a preset database according to the matched action instruction, wherein different action instructions and the historical execution time corresponding to the corresponding action instruction are stored in the database;
calculating the time interval between the historical execution time and the current time, judging whether the time interval exceeds a preset threshold value,
if yes, executing an action instruction and updating the current time into a database to replace the corresponding historical execution time;
if not, the action instruction is not executed.
7. An automatic interactive function triggering system for a smart mirror, comprising:
the image acquisition module is used for acquiring the user image;
the image analysis module is used for acquiring a plurality of confidence degrees by identifying the user image;
the weight calculation module is used for calculating comprehensive weights according to all the confidence degrees and preset weights corresponding to the standard states one by one;
the weight matching module is used for matching a weight range in which the comprehensive weight falls from a preset weight set;
the instruction determining module is used for matching corresponding action instructions from the weight range and the corresponding table of the action instructions;
and the instruction execution module is used for executing the matched action instruction.
8. A readable storage medium, characterized in that a computer program is stored which can be loaded by a processor and which performs an automatic triggering method for interactive functions of a smart mirror as claimed in any one of claims 1 to 6.
CN202310165330.5A 2023-02-24 2023-02-24 Automatic triggering method, system and storage medium for interaction function of intelligent mirror Pending CN116382468A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310165330.5A CN116382468A (en) 2023-02-24 2023-02-24 Automatic triggering method, system and storage medium for interaction function of intelligent mirror


Publications (1)

Publication Number Publication Date
CN116382468A true CN116382468A (en) 2023-07-04

Family

ID=86972087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310165330.5A Pending CN116382468A (en) 2023-02-24 2023-02-24 Automatic triggering method, system and storage medium for interaction function of intelligent mirror

Country Status (1)

Country Link
CN (1) CN116382468A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102842102A (en) * 2012-06-29 2012-12-26 惠州Tcl移动通信有限公司 Intelligent auxiliary dressing device and method
CN106125929A (en) * 2016-06-23 2016-11-16 中国地质大学(武汉) The people's mirror exchange method fed back with color emotion based on expression recognition and system
CN107340857A (en) * 2017-06-12 2017-11-10 美的集团股份有限公司 Automatic screenshot method, controller, Intelligent mirror and computer-readable recording medium
CN107563395A (en) * 2017-09-22 2018-01-09 北京小米移动软件有限公司 The method and apparatus that dressing management is carried out by Intelligent mirror
CN108337361A (en) * 2017-12-25 2018-07-27 福州领头虎软件有限公司 A kind of method and terminal prejudging behavior by gyro sensor
CN110658744A (en) * 2018-06-30 2020-01-07 珠海格力电器股份有限公司 Control method, device and system of intelligent equipment, electronic equipment and storage medium
CN111121809A (en) * 2019-12-25 2020-05-08 上海博泰悦臻电子设备制造有限公司 Recommendation method and device and computer storage medium
CN111339979A (en) * 2020-03-04 2020-06-26 平安科技(深圳)有限公司 Image recognition method and image recognition device based on feature extraction
CN114431684A (en) * 2022-01-11 2022-05-06 茹柚智能科技(上海)有限公司 Intelligent mirror
CN114900538A (en) * 2022-03-28 2022-08-12 青岛海尔科技有限公司 Control method and device of intelligent mirror, storage medium and electronic device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination