CN112462622A - Intelligent home control method and intelligent control equipment based on biological feature recognition - Google Patents


Info

Publication number
CN112462622A
Authority
CN
China
Prior art keywords
information
target
recognition result
determining
emotion
Prior art date
Legal status
Withdrawn
Application number
CN202011419124.5A
Other languages
Chinese (zh)
Inventor
张瑞华
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN202011419124.5A
Publication of CN112462622A

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00 Systems controlled by a computer
    • G05B15/02 Systems controlled by a computer electric
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/418 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/26 Pc applications
    • G05B2219/2642 Domotique, domestic, home control, automation, smart house
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a smart home control method and an intelligent control device based on biometric recognition. With this scheme, when target biometric information is detected in the collected target information, feature recognition can be performed on that information to obtain a feature recognition result comprising a facial emotion recognition result and a voice emotion recognition result. A corresponding control strategy can then be generated from the feature recognition result and issued to the smart home devices, without the user actively inputting control instructions. In addition, by analyzing the distance between the information acquisition device and the target residence together with the corresponding road condition information, the time period in which the user will arrive at the target residence can be determined accurately, enabling adaptive adjustment of the control strategy and avoiding single, rigid control of the smart home.

Description

Intelligent home control method and intelligent control device based on biometric recognition
Technical Field
The application relates to the technical field of smart home control, and in particular to a smart home control method and an intelligent control device based on biometric recognition.
Background
A smart home (home automation) is a modern living system built on a traditional residence with the aid of Internet of Things communication, automatic control, and artificial intelligence technologies. The growing maturity of smart homes brings great convenience to modern fast-paced life and can provide users with a pleasant, comfortable, and intelligent living environment. However, operating a smart home today still requires active remote or local control by the user.
Disclosure of Invention
The application provides a smart home control method and an intelligent control device based on biometric recognition, aiming to solve the above technical problems in the prior art.
In a first aspect, a smart home control method based on biometric recognition is provided, applied to an intelligent control device communicating with an information acquisition device. The method includes:
acquiring target information collected by the information acquisition device, analyzing the target information, and determining whether the target information contains target biometric information matching preset biometric information, wherein the information acquisition device is one or a combination of a user terminal, an office terminal, and a vehicle-mounted controller;
when the target information is determined to contain target biometric information matching the preset biometric information, determining the distance between the information acquisition device corresponding to the target biometric information and a target residence;
performing feature recognition on the target biometric information to obtain a feature recognition result;
and determining a control strategy for controlling the smart home based on the feature recognition result, issuing the control strategy to a target smart home device in the target residence, and adaptively adjusting the issued control strategy based on the distance and the road condition information corresponding to the distance.
Preferably, the step of performing feature recognition on the target biometric information to obtain a feature recognition result specifically includes:
determining a category of the target biometric information;
determining a corresponding feature identification thread according to the category;
and carrying out feature recognition on the target biological feature information of the corresponding category based on the feature recognition thread to obtain a feature recognition result.
Preferably, the step of performing feature recognition on the target biometric information of the corresponding category based on the feature recognition thread to obtain a feature recognition result specifically includes:
if the target biological characteristic information is face image information, determining a pixel key point sequence of the face image information and a pixel boundary value sequence of the face image information;
acquiring an initial element attribute corresponding to any sequence element of the face image information in the pixel key point sequence; determining a sequence element with the largest sequence weight in the pixel boundary value sequence as a target sequence element;
generating a mirror image element attribute corresponding to the initial element attribute in the target sequence element, and determining a sequence conversion relation between the pixel key point sequence and the pixel boundary value sequence according to the initial element attribute and the mirror image element attribute;
acquiring a target element attribute in the target sequence element by taking the initial element attribute as a reference; according to the sequence conversion relationship, mapping the target element attribute to the sequence element corresponding to the initial element attribute, obtaining a mirror image sequence element corresponding to the target element attribute from the sequence element corresponding to the initial element attribute, and determining the mirror image sequence element as a current sequence element corresponding to the face image information and used for representing a current emotional state;
and determining the current emotion state corresponding to the face image information according to the target element attribute in the current sequence element, and determining the current emotion state as a face emotion recognition result corresponding to the face image information.
Preferably, the step of performing feature recognition on the target biometric information of the corresponding category based on the feature recognition thread to obtain a feature recognition result specifically includes:
if the target biological characteristic information is voice information, performing natural language processing on the voice information to obtain text information corresponding to the voice information, extracting keywords of the text information, identifying semantics of the keywords, and obtaining subject information of the text information according to the semantics;
inputting the voice information into a trained neural network to obtain a current intonation feature and a current tone feature corresponding to the voice information;
determining a voice emotion state corresponding to the voice information based on the current intonation feature and the current tone feature;
determining an expected emotion state corresponding to the theme information according to a preset database;
judging whether the voice emotion state is similar to the expected emotion state; if so, generating a voice emotion recognition result of the voice information according to the expected emotion state; if not, weighting the voice emotion state and the expected emotion state to obtain an actual emotion state of the voice information, and generating a voice emotion recognition result of the voice information according to the actual emotion state.
Preferably, before the step of determining a control strategy for controlling the smart home based on the feature recognition result, the method further includes:
if the feature recognition result comprises a facial emotion recognition result and a voice emotion recognition result, respectively determining emotion description information of the facial emotion recognition result and the voice emotion recognition result;
determining a first correlation coefficient between the facial emotion recognition result and the voice emotion recognition result according to the emotion description information;
acquiring a plurality of identification parameters of the facial emotion identification result;
performing identification adjustment on at least part of identification parameters in the facial emotion recognition result according to the first correlation coefficient;
generating a corrected recognition result corresponding to the facial emotion recognition result according to the recognition parameters which finish the identification adjustment in the facial emotion recognition result;
and fusing the corrected recognition result and the voice emotion recognition result by taking the first correlation coefficient as reference to obtain a comprehensive emotion recognition result.
Preferably, the step of determining a control strategy for controlling the smart home based on the feature recognition result specifically includes:
determining an expected use coefficient of each smart home in the target house from the comprehensive emotion recognition result;
and ranking each smart home device in the target residence in order of expected use coefficient, and determining control strategies for the top-ranked smart home devices according to a set voltage.
Preferably, the step of issuing the control strategy to the target smart home device in the target residence and adaptively adjusting the issued control strategy according to the distance and the road condition information corresponding to the distance specifically includes:
performing format conversion on an initial control strategy corresponding to the target smart home device according to that device's information receiving format to obtain first control instruction information in the receiving format, and issuing the first control instruction information to the target smart home device;
determining a target time period in which the user reaches the target residence according to the distance and the road condition information corresponding to the distance; judging whether the target smart home device meets its operation requirement within the target time period, and if not, generating a corrected control strategy for the target smart home device based on the comprehensive emotion recognition result and the target time period; determining second control instruction information corresponding to the corrected control strategy; and issuing the second control instruction information to the target smart home device to overwrite the first control instruction information.
In a second aspect, an intelligent control device is provided, comprising a processor, and a memory and a network interface connected to the processor, the network interface also being connected to a nonvolatile memory in the intelligent control device. At runtime, the processor calls a computer program from the nonvolatile memory through the network interface and runs it in the memory to perform the above method.
In a third aspect, a computer-readable storage medium is provided, on which a computer program is burned; when the computer program runs in the memory of an intelligent control device, the above method is implemented.
When the above smart home control method and intelligent control device based on biometric recognition are applied, once target biometric information is detected in the collected target information, feature recognition can be performed on it to obtain a feature recognition result comprising a facial emotion recognition result and a voice emotion recognition result. A corresponding control strategy can then be generated from the feature recognition result and issued to the smart home devices, without the user actively inputting control instructions. In addition, by analyzing the distance between the information acquisition device and the target residence together with the corresponding road condition information, the time period in which the user will arrive at the target residence can be determined accurately, enabling adaptive adjustment of the control strategy and avoiding single, rigid control of the smart home.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic view of an application scenario of a biometric smart home control system 100 according to an exemplary embodiment of the present application.
Fig. 2 is a flowchart illustrating a biometric smart home control method according to an exemplary embodiment of the present application.
Fig. 3 is a block diagram illustrating an embodiment of a biometric smart home control device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
In order to solve the above technical problems of existing smart homes, an embodiment of the present invention provides a smart home control method and a smart home control apparatus based on biometric recognition.
To describe the smart home control method provided by the embodiment of the present invention, an application scenario of the method is first described. It should be understood that the following scenario is only an example and does not limit the present solution. In practical applications, the number and types of smart home devices in the scenario can be increased or decreased as appropriate.
Fig. 1 is a schematic view of an application scenario of a biometric smart home control system 100 according to an embodiment of the present invention, which can be applied to a residence. Residences include, but are not limited to, bungalows, Western-style houses, high-rise buildings, apartments, and villas.
Further, the system comprises an intelligent control device 200 and a plurality of smart home devices 300 distributed in different areas of the residence, with the intelligent control device 200 communicating with each smart home device 300. In this embodiment, the intelligent control device 200 may be a main control computer disposed in the residence, or a cloud server deployed in the cloud, which is not limited here.
Further, the smart home devices 300 may be home appliances of different types, such as a television, a refrigerator, an air conditioner, a water heater, and lighting. In a specific implementation, smart home devices 300 can be added or removed according to actual needs.
With continued reference to fig. 1, the system may further include an in-vehicle controller 400, a user terminal 500, and an office terminal 600 in communication with the intelligent control device 200. The in-vehicle controller 400 may be a controller for controlling the vehicle to run in a private car of a user, the user terminal 500 may be a mobile terminal (e.g., a mobile phone) of the user, and the office terminal 600 may be an office computer installed in an office of the user.
It can be understood that, through communication with the smart home devices 300 in the user's residence, the vehicle-mounted controller 400 in the user's private car, and the office terminal 600 in the user's office, the intelligent control device 200 can accurately track the user's working and living states, and can actively and adaptively adjust and control the smart home devices 300 based on those states at different times, sparing the user from manually adjusting the devices after returning home.
Fig. 2 is a flowchart illustrating a biometric smart home control method according to an embodiment of the present invention, where the method is applied to the smart control device 200 in fig. 1, and the method may specifically include the following steps.
Step 210, acquiring target information acquired by information acquisition equipment, analyzing the target information, and judging whether the target information contains target biological characteristic information matched with preset biological characteristic information; the information acquisition equipment is one or a combination of a user terminal, an office terminal and a vehicle-mounted controller.
In the embodiment of the present invention, the intelligent control device 200 acquires, in real time, target information collected by the vehicle-mounted controller 400, the user terminal 500, and the office terminal 600. In this embodiment, the target information may be image information, voice information, or a combination of the two.
In the embodiment of the present invention, the preset biometric information may be the user's own biometric information, imported into the intelligent control device 200 by the user in advance, so that the intelligent control device 200 can later analyze and judge the acquired target information against it.
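The matching test itself is not spelled out in the patent. Below is a minimal sketch, assuming the target and preset biometric information are compared as feature embeddings under a cosine-similarity threshold; the embedding representation, the helper name, and the 0.9 threshold are all illustrative assumptions.

```python
import numpy as np

def contains_target_biometric(candidate_vec, preset_vec, threshold=0.9):
    """Return True if the candidate biometric features match the preset ones.

    Hypothetical sketch: the patent only requires a match test; representing
    both sides as feature embeddings and using a 0.9 cosine-similarity
    threshold are assumptions made here for illustration.
    """
    a = np.asarray(candidate_vec, dtype=float)
    b = np.asarray(preset_vec, dtype=float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return denom > 0 and float(a @ b) / denom >= threshold

# Example: a near-duplicate embedding counts as a match.
print(contains_target_biometric([0.2, 0.9, 0.4], [0.21, 0.88, 0.41]))
```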
Step 220, when it is determined that the target information includes the target biometric information matched with the preset biometric information, determining a distance between an information acquisition device corresponding to the target biometric information and the target residence.
In the embodiment of the present invention, the smart home devices 300 are disposed in a target residence, which may be understood as the user's own residence. The distance between the information acquisition device and the target residence can be understood as the geographical distance between them. For example, if the information acquisition device is the office terminal 600, the distance between the office terminal 600 and the target residence may be xxxxkm (i.e., the straight-line distance from the office to the target residence).
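The description treats this distance as a straight-line geographical distance. A minimal sketch of that computation follows, assuming the information acquisition device can report GPS coordinates (an assumption; the patent does not say how positions are obtained) and using the standard haversine formula; the example coordinates are invented.

```python
import math

def straight_line_distance_km(device_coord, residence_coord):
    """Great-circle (haversine) distance in km between two (lat, lon) pairs.

    Hypothetical helper: the patent only speaks of a 'geographical distance';
    obtaining GPS coordinates from the acquisition device is assumed here.
    """
    lat1, lon1 = map(math.radians, device_coord)
    lat2, lon2 = map(math.radians, residence_coord)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))  # mean Earth radius in km

# Illustrative coordinates for an office terminal and a target residence.
print(round(straight_line_distance_km((31.2304, 121.4737), (31.1443, 121.8083)), 1))
```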
And step 230, performing feature identification on the target biological feature information to obtain a feature identification result.
In the embodiment of the invention, feature recognition can be performed on the target biometric information through a pre-trained neural network. Further, the feature recognition result may include a facial emotion recognition result and a voice emotion recognition result, both of which are described in detail later.
And 240, determining a control strategy for controlling the smart home based on the feature recognition result, issuing the control strategy to a target smart home in the target house, and performing adaptive adjustment on the issued control strategy according to the distance and the road condition information corresponding to the distance.
It is understood that, when the above steps 210 to 240 are applied, once target biometric information is detected in the target information, feature recognition can be performed on it to obtain a feature recognition result comprising a facial emotion recognition result and a voice emotion recognition result. A corresponding control strategy can then be generated from the feature recognition result and issued to the smart home devices, without the user actively inputting control instructions. In addition, by analyzing the distance between the information acquisition device and the target residence together with the corresponding road condition information, the time period in which the user will arrive at the target residence can be determined accurately, enabling adaptive adjustment of the control strategy and avoiding single, rigid control of the smart home.
In a specific implementation, the target biometric information may belong to multiple categories, and to ensure the accuracy of the feature recognition result, correlation analysis needs to be performed across the different categories.
To this end, in step 230, performing feature recognition on the target biometric information to obtain a feature recognition result may specifically include: first determining the category of the target biometric information, then determining a corresponding feature recognition thread for that category, and finally performing feature recognition on each category of target biometric information in its own recognition thread, as sketched below. Of course, when the target biometric information spans multiple categories (this embodiment uses face image information and voice information as examples), the correlation between the different categories must also be considered.
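A minimal sketch of this per-category dispatch, assuming two categories ("face_image" and "voice") and placeholder recognizer functions; none of these names come from the patent.

```python
from concurrent.futures import ThreadPoolExecutor

def recognize_facial_emotion(image):
    return "happiness"   # stub; the facial pipeline is sketched further below

def recognize_speech_emotion(audio):
    return "fatigue"     # stub; the speech pipeline is sketched further below

def recognize_features(samples):
    """Run one recognition thread per category of target biometric
    information, mirroring the per-category 'feature recognition thread'
    described above. Category names and recognizer stubs are assumptions."""
    recognizers = {"face_image": recognize_facial_emotion,
                   "voice": recognize_speech_emotion}
    with ThreadPoolExecutor(max_workers=len(recognizers)) as pool:
        futures = {cat: pool.submit(fn, samples[cat])
                   for cat, fn in recognizers.items() if cat in samples}
        return {cat: f.result() for cat, f in futures.items()}

print(recognize_features({"face_image": b"...", "voice": b"..."}))
```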
To explain the whole feature recognition process in more detail, feature recognition for a single category of target biometric information is described first, followed by the correlation analysis across multiple categories.
(1) The target biological characteristic information is face image information.
Firstly, a pixel key point sequence and a pixel boundary value sequence of the face image information are determined. The pixel key point sequence is obtained by splitting the face image into pixel points and collecting their pixel values, while the pixel boundary value sequence is obtained by computing the difference between the pixel values of every two adjacent pixels. The two sequences each contain sequence elements with different sequence weights: elements of the pixel key point sequence are pixel values, and elements of the pixel boundary value sequence are boundary values.
Secondly, acquiring an initial element attribute corresponding to any sequence element of the face image information in the pixel key point sequence; determining a sequence element with the largest sequence weight in the pixel boundary value sequence as a target sequence element; the element attributes are used for representing face emotion parameters corresponding to the sequence elements, the face emotion parameters are used for representing emotion categories, and the face emotion parameters corresponding to different emotion categories are different.
Then, generating a mirror image element attribute corresponding to the initial element attribute in the target sequence element, and determining a sequence conversion relation between the pixel key point sequence and the pixel boundary value sequence according to the initial element attribute and the mirror image element attribute; wherein the sequence transformation relationship is used for mutually transforming the sequence elements in the pixel key point sequence and the pixel boundary value sequence.
Further, acquiring a target element attribute in the target sequence element by taking the initial element attribute as a reference; and mapping the target element attribute to the sequence element corresponding to the initial element attribute according to the sequence conversion relationship, obtaining a mirror image sequence element corresponding to the target element attribute from the sequence element corresponding to the initial element attribute, and determining the mirror image sequence element as a current sequence element corresponding to the face image information and used for representing the current emotional state.
And finally, determining the current emotion state corresponding to the face image information according to the target element attribute in the current sequence element, and determining the current emotion state as a face emotion recognition result corresponding to the face image information.
For example, facial emotion recognition results may include, but are not limited to, happiness, fatigue, and impatience. It is understood that different emotional states correspond to different feature recognition results.
In a specific implementation, the above steps enable in-depth analysis and recognition of the face image information, so that the corresponding facial emotion recognition result can be determined accurately.
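The construction of the two sequences is the only part of this pipeline the patent defines concretely; a minimal sketch of it follows. The row-major flattening and the use of boundary-value magnitude as the "sequence weight" are assumptions, since the patent leaves both choices open.

```python
import numpy as np

def build_face_sequences(face_image):
    """Build the two sequences defined above from a grayscale face image:
    the pixel key point sequence (the collected pixel values) and the pixel
    boundary value sequence (differences between adjacent pixels)."""
    pixels = np.asarray(face_image, dtype=np.int16).ravel()  # split into pixel points
    keypoint_seq = pixels                 # sequence elements are pixel values
    boundary_seq = np.diff(pixels)        # difference of every two adjacent pixels
    # Assumed: the target sequence element is the boundary value with the
    # largest magnitude, i.e. the strongest local edge.
    target_element = int(boundary_seq[np.argmax(np.abs(boundary_seq))])
    return keypoint_seq, boundary_seq, target_element

# Example on a tiny 2x3 "image": prints the assumed target sequence element.
print(build_face_sequences([[10, 60, 20], [20, 200, 30]])[2])
```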
(2) The target biometric information is voice information.
Firstly, natural language processing is carried out on the voice information to obtain text information corresponding to the voice information, keywords of the text information are extracted, semantics of the keywords are identified, and subject information of the text information is obtained according to the semantics.
For example, the topic information may characterize the user's intent when speaking; different topic information represents different speaking intentions, which are not limited here.
Secondly, the voice information is input into a trained neural network to obtain a current intonation feature and a current tone feature corresponding to the voice information, the neural network having been trained on sample speech with different intonation and tone features.
Then, a voice emotion state corresponding to the voice information is determined based on the current intonation feature and the current tone feature.
Then, determining an expected emotion state corresponding to the theme information according to a preset database; wherein, different expected emotional states corresponding to different subject information are prestored in the database.
For example, the expected emotion state corresponding to the topic information "I am going home to prepare for overtime work" may be "excited" or "depressed". That is, the same topic information may match different expected emotion states.
Finally, it is judged whether the voice emotion state is similar to the expected emotion state. If so, a voice emotion recognition result of the voice information is generated according to the expected emotion state; if not, the voice emotion state and the expected emotion state are weighted to obtain an actual emotion state of the voice information, and the voice emotion recognition result is generated according to the actual emotion state.
In this embodiment, the voice emotion state and the expected emotion state may both be represented by binary strings, with different binary strings corresponding to different emotion states. Whether the two states are similar may be determined by computing the cosine distance between the feature vectors corresponding to their binary strings.
Further, weighting the voice emotion state and the expected emotion state may be understood as weighting the binary strings corresponding to the two states, as in the sketch below.
It can be understood that, through the above process, the voice information can be recognized comprehensively and accurately, so that the corresponding voice emotion recognition result is determined.
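A minimal sketch of both operations on binary-string emotion states; the 0.8 similarity threshold, the equal weighting, and the 0.5 re-binarization cut-off are assumed values not given in the patent.

```python
import numpy as np

def states_similar(voice_state, expected_state, threshold=0.8):
    """Cosine-similarity test between two emotion states encoded as binary
    strings (e.g. '1011'), as the description suggests; the 0.8 threshold
    is an assumed value."""
    a = np.array([int(bit) for bit in voice_state], dtype=float)
    b = np.array([int(bit) for bit in expected_state], dtype=float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return denom > 0 and float(a @ b) / denom >= threshold

def weighted_actual_state(voice_state, expected_state, w=0.5):
    """Bit-wise weighting of the two binary strings followed by
    re-binarization; equal weights and the 0.5 cut-off are assumptions."""
    mixed = (w * np.array([int(bit) for bit in voice_state])
             + (1 - w) * np.array([int(bit) for bit in expected_state]))
    return "".join("1" if m >= 0.5 else "0" for m in mixed)

voice, expected = "1010", "1110"
result = expected if states_similar(voice, expected) else weighted_actual_state(voice, expected)
print(result)  # similar enough here, so the expected state is used
```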
On this basis, if the feature recognition result corresponding to the target biometric information covers only one of the two cases above, the subsequent steps can be executed directly with that result. If it covers both cases, then before the step of determining the control strategy for controlling the smart home based on the feature recognition result in step 240, the method may further include the following steps.
First, emotion description information of the facial emotion recognition result and the voice emotion recognition result is respectively determined, and a first association coefficient between the facial emotion recognition result and the voice emotion recognition result is determined according to the emotion description information.
Secondly, a plurality of recognition parameters of the facial emotion recognition result are acquired.
Then, at least part of the recognition parameters in the facial emotion recognition result are subjected to identification adjustment according to the first correlation coefficient.
And then, generating a corrected recognition result corresponding to the facial emotion recognition result according to the recognition parameters which finish the identification adjustment in the facial emotion recognition result.
And finally, fusing the corrected recognition result and the voice emotion recognition result by taking the first correlation coefficient as a reference to obtain a comprehensive emotion recognition result.
For ease of understanding, the above steps are described in more detail below as steps 310 to 380.
Step 310, determining first emotion description information corresponding to the facial emotion recognition result and second emotion description information corresponding to the voice emotion recognition result.
In step 310, the emotion description information is used to characterize the user's real-time emotional state, and it may be recorded as preset character codes so that the intelligent control device 200 can analyze and process it digitally.
Step 320, determining a first correlation coefficient between the facial emotion recognition result and the voice emotion recognition result according to the first emotion description information and the second emotion description information.
In step 320, the first correlation coefficient is used to characterize the consistency and matching degree of the facial emotion recognition result and the voice emotion recognition result. The larger the first correlation coefficient is, the higher the consistency and matching degree of the facial emotion recognition result and the voice emotion recognition result are. The smaller the first correlation coefficient is, the lower the degree of coincidence and matching of the facial emotion recognition result and the speech emotion recognition result is.
Step 330, acquiring a plurality of identification parameters of the facial emotion identification result; the recognition parameters are used for characterizing the generation logic of the facial emotion recognition result.
Step 340, if the first correlation coefficient indicates that the facial emotion recognition result contains an instantaneous facial change identifier, determining, based on the recognition parameters under the instantaneous facial change identifier and their parameter types, a first parameter difference between each recognition parameter under the static facial identifier and each recognition parameter under the instantaneous facial change identifier; recognition parameters under the static facial identifier whose first parameter difference is less than or equal to a set threshold are adjusted to fall under the instantaneous facial change identifier.
Step 350, if multiple recognition parameters remain under the static facial identifier, determining, based on the recognition parameters under the instantaneous facial change identifier and their parameter types, second parameter differences among the recognition parameters under the static facial identifier, and screening those parameters according to the second parameter differences.
Step 360, assigning an adjustment parameter to each recognition parameter retained by the screening, again based on the recognition parameters under the instantaneous facial change identifier and their parameter types, and moving the retained parameters under the instantaneous facial change identifier in order of adjustment parameter magnitude.
Step 370, generating a corrected recognition result corresponding to the facial emotion recognition result according to the recognition parameters remaining under the static facial identifier.
It can be understood that adjusting the recognition parameters in this way removes the noise introduced by instantaneously changing facial expressions, so that the corrected recognition result is generated only from the recognition parameters of the facial emotion recognition result under the static facial identifier.
And 380, fusing the corrected recognition result and the voice emotion recognition result by taking the first correlation coefficient as a reference to obtain a comprehensive emotion recognition result.
In the embodiment, the comprehensive emotion recognition result can accurately and reliably represent the real-time emotion state of the user, so that a data basis is provided for the follow-up generation of the control strategy of the smart home.
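A minimal sketch of the step 380 fusion, assuming both recognition results are dictionaries of emotion scores and that the first correlation coefficient acts directly as the facial-side weight; the patent does not give the fusion formula, so this is only one plausible reading.

```python
def fuse_results(corrected_facial, speech, correlation):
    """Fuse the corrected facial result with the voice result using the
    first correlation coefficient as the reference, per step 380.

    Assumptions: both results map emotion labels to scores, and the
    correlation coefficient (in [0, 1]) weights the facial side.
    """
    labels = set(corrected_facial) | set(speech)
    fused = {label: correlation * corrected_facial.get(label, 0.0)
                    + (1.0 - correlation) * speech.get(label, 0.0)
             for label in labels}
    return max(fused, key=fused.get)  # comprehensive emotion recognition result

# Example: highly consistent results (correlation 0.9) lean on the facial side.
print(fuse_results({"happiness": 0.7, "fatigue": 0.3},
                   {"happiness": 0.5, "fatigue": 0.5}, 0.9))
```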
On the basis of the above, determining a control strategy for controlling the smart home based on the feature recognition result, as described in step 240, may be implemented through the following steps.
Step 2411, determining expected use coefficients of each smart home in the target house from the comprehensive emotion recognition result.
In this embodiment, the expected use coefficient represents the user's desire to use a given smart home device. For example, if the comprehensive emotion recognition result yields a larger expected use coefficient for the cooling air conditioner than for the electric heater, this indicates that the user wants a cooler environment after returning home; in that case, the user's desire to use the air conditioner exceeds the desire to use the heater.
Step 2412, ranking the smart home devices in the target residence in order of their expected use coefficients, and determining control strategies for the top-ranked devices according to the set voltage.
In this embodiment, the set voltage may be the maximum load the target residence can withstand without tripping. To avoid tripping, the number of smart home devices in the starting state must be limited; by sorting the expected use coefficients and determining control strategies only for the top-ranked devices, user needs can be met to the greatest extent while avoiding the situation where the user still has to switch devices on manually after arriving at the target residence.
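A minimal sketch of steps 2411 to 2412; treating the "set voltage" trip limit as a summed power budget, along with the device names and wattages, are illustrative assumptions rather than details from the patent.

```python
def select_startup_strategies(expected_coeffs, power_draw, capacity):
    """Rank devices by expected use coefficient and keep turning on the
    highest-ranked ones while the assumed power budget allows, so the
    circuit is never pushed past its trip limit."""
    ranked = sorted(expected_coeffs, key=expected_coeffs.get, reverse=True)
    strategies, load = {}, 0.0
    for device in ranked:
        if load + power_draw[device] <= capacity:
            strategies[device] = "on"
            load += power_draw[device]
        else:
            strategies[device] = "off"   # deferred to avoid tripping the breaker
    return strategies

# Hypothetical coefficients (from the comprehensive emotion result) and wattages.
coeffs = {"air_conditioner": 0.9, "water_heater": 0.7, "heater": 0.2}
draw = {"air_conditioner": 1500.0, "water_heater": 2000.0, "heater": 1800.0}
print(select_startup_strategies(coeffs, draw, capacity=3600.0))
```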
On this basis, issuing the control strategy to the target smart home device in the target residence and adaptively adjusting it according to the distance and the corresponding road condition information, as described in step 240, may specifically include the following steps.
Step 2421, performing format conversion on the initial control strategy corresponding to the target smart home device according to that device's information receiving format, obtaining first control instruction information in the receiving format, and issuing it to the target smart home device.
Step 2422, determining a target time period in which the user will reach the target residence according to the distance and the corresponding road condition information; judging whether the target smart home device can meet its operation requirement within the target time period; if not, generating a corrected control strategy for the device based on the comprehensive emotion recognition result and the target time period, determining second control instruction information corresponding to the corrected strategy, and issuing the second control instruction information to the device to overwrite the first control instruction information.
It can be understood that through the steps 2421 to 2422, the control strategy can be adaptively adjusted according to the real-time distance between the user and the target residence, so that single and rigid control of the smart home is avoided.
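A minimal sketch of the step 2422 adjustment, assuming road conditions are summarized as an average travel speed and that readiness is judged by a device warm-up time; the field names, the five-minute arrival window, and the "boost" override payload are invented for illustration.

```python
def adjust_strategy(distance_km, avg_speed_kmh, device, initial_instruction):
    """Estimate the arrival window from distance and road conditions, and
    override the first instruction if the device cannot meet its operating
    requirement in time (second instruction overwrites the first)."""
    eta_min = distance_km / avg_speed_kmh * 60     # road conditions via avg speed
    target_window = (eta_min - 5, eta_min + 5)     # assumed arrival window
    if device["warmup_minutes"] > target_window[0]:
        # Not ready in time: issue second control instruction information.
        return {"mode": "boost", "deadline_min": target_window[1]}
    return initial_instruction

# A slow-warming water heater forces a corrected ("boost") strategy.
water_heater = {"name": "water_heater", "warmup_minutes": 40}
print(adjust_strategy(distance_km=12.0, avg_speed_kmh=30.0,
                      device=water_heater, initial_instruction={"mode": "normal"}))
```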
On the basis, please refer to fig. 3 in combination, a schematic module diagram of the smart home control apparatus 201 for biometric feature recognition is also provided, and the smart home control apparatus 201 is described in detail as follows.
A1. A smart home control apparatus based on biometric recognition, applied to an intelligent control device communicating with an information acquisition device, the apparatus comprising the following functional modules.
The information analysis module 2011 is configured to obtain target information acquired by the information acquisition device, analyze the target information, and determine whether the target information includes target biometric information that matches preset biometric information; the information acquisition equipment is one or a combination of a user terminal, an office terminal and a vehicle-mounted controller.
A distance determining module 2012, configured to determine, when it is determined that the target information includes the target biometric information matched with the preset biometric information, a distance between an information acquisition device corresponding to the target biometric information and the target residence.
And the feature recognition module 2013 is configured to perform feature recognition on the target biological feature information to obtain a feature recognition result.
The home control module 2014 is configured to determine a control policy for controlling the smart home based on the feature recognition result, issue the control policy to a target smart home in the target home, and perform adaptive adjustment on the issued control policy according to the distance and road condition information corresponding to the distance.
A2. According to the smart home control device described in a1, the feature identification module 2013 is specifically configured to:
determining a category of the target biometric information;
determining a corresponding feature identification thread according to the category;
and carrying out feature recognition on the target biological feature information of the corresponding category based on the feature recognition thread to obtain a feature recognition result.
A3. According to the smart home control device described in a2, the feature identification module 2013 is specifically configured to:
if the target biological characteristic information is face image information, determining a pixel key point sequence of the face image information and a pixel boundary value sequence of the face image information;
acquiring an initial element attribute corresponding to any sequence element of the face image information in the pixel key point sequence; determining a sequence element with the largest sequence weight in the pixel boundary value sequence as a target sequence element;
generating a mirror image element attribute corresponding to the initial element attribute in the target sequence element, and determining a sequence conversion relation between the pixel key point sequence and the pixel boundary value sequence according to the initial element attribute and the mirror image element attribute;
acquiring a target element attribute in the target sequence element by taking the initial element attribute as a reference; according to the sequence conversion relationship, mapping the target element attribute to the sequence element corresponding to the initial element attribute, obtaining a mirror image sequence element corresponding to the target element attribute from the sequence element corresponding to the initial element attribute, and determining the mirror image sequence element as a current sequence element corresponding to the face image information and used for representing a current emotional state;
and determining the current emotion state corresponding to the face image information according to the target element attribute in the current sequence element, and determining the current emotion state as a face emotion recognition result corresponding to the face image information.
A4. According to the smart home control device described in a2, the feature identification module 2013 is specifically configured to:
if the target biological characteristic information is voice information, performing natural language processing on the voice information to obtain text information corresponding to the voice information, extracting keywords of the text information, identifying semantics of the keywords, and obtaining subject information of the text information according to the semantics;
inputting the voice information into a trained neural network to obtain a current intonation feature and a current tone feature corresponding to the voice information;
determining a voice emotion state corresponding to the voice information based on the current intonation feature and the current tone feature;
determining an expected emotion state corresponding to the theme information according to a preset database;
judging whether the voice emotion state is similar to the expected emotion state; if so, generating a voice emotion recognition result of the voice information according to the expected emotion state; if not, weighting the voice emotion state and the expected emotion state to obtain an actual emotion state of the voice information, and generating a voice emotion recognition result of the voice information according to the actual emotion state.
A5. The smart home control device according to a1, wherein the feature identification module 2013 is further configured to:
if the feature recognition result comprises a facial emotion recognition result and a voice emotion recognition result, respectively determining emotion description information of the facial emotion recognition result and the voice emotion recognition result;
determining a first correlation coefficient between the facial emotion recognition result and the voice emotion recognition result according to the emotion description information;
acquiring a plurality of identification parameters of the facial emotion identification result;
performing identification adjustment on at least part of identification parameters in the facial emotion recognition result according to the first correlation coefficient;
generating a corrected recognition result corresponding to the facial emotion recognition result according to the recognition parameters which finish the identification adjustment in the facial emotion recognition result;
and fusing the corrected recognition result and the voice emotion recognition result by taking the first correlation coefficient as reference to obtain a comprehensive emotion recognition result.
A6. According to the smart home control apparatus described in a5, the home control module 2014 is specifically configured to:
determining an expected use coefficient of each smart home in the target house from the comprehensive emotion recognition result;
and ranking each smart home device in the target residence in order of expected use coefficient, and determining control strategies for the top-ranked smart home devices according to a set voltage.
A7. According to the smart home control apparatus described in a6, the home control module 2014 is specifically configured to:
performing format conversion on an initial control strategy corresponding to the target smart home device according to that device's information receiving format to obtain first control instruction information in the receiving format, and issuing the first control instruction information to the target smart home device;
determining a target time period in which the user reaches the target residence according to the distance and the road condition information corresponding to the distance; judging whether the target smart home device meets its operation requirement within the target time period, and if not, generating a corrected control strategy for the target smart home device based on the comprehensive emotion recognition result and the target time period; determining second control instruction information corresponding to the corrected control strategy; and issuing the second control instruction information to the target smart home device to overwrite the first control instruction information.
On this basis, an embodiment of the present invention further provides an intelligent control device, comprising a processor, and a memory and a network interface connected to the processor, the network interface also being connected to a nonvolatile memory in the intelligent control device. At runtime, the processor calls a computer program from the nonvolatile memory through the network interface and runs it in the memory to perform the above method.
Further, a computer-readable storage medium is provided, on which a computer program is burned; when the computer program runs in the memory of the intelligent control device, the above method is implemented.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (8)

1. A smart home control method based on biometric recognition, applied to an intelligent control device communicating with an information acquisition device, the method comprising the following steps:
acquiring target information acquired by information acquisition equipment, analyzing the target information, and judging whether the target information contains target biological characteristic information matched with preset biological characteristic information; the information acquisition equipment is one or a combination of a user terminal, an office terminal and a vehicle-mounted controller;
wherein the target information is image information, voice information, or a combination of image information and voice information;
when the target information is judged to contain the target biological characteristic information matched with the preset biological characteristic information, determining the distance between the information acquisition equipment corresponding to the target biological characteristic information and a target house;
wherein:
the distance between the information acquisition equipment and the target house is the geographical distance between the information acquisition equipment and the target house;
performing feature recognition on the target biological feature information to obtain a feature recognition result;
wherein:
the feature recognition result comprises a facial emotion recognition result and a voice emotion recognition result;
and determining a control strategy for controlling the smart home based on the feature recognition result, issuing the control strategy to a target smart home device in the target residence, and adaptively adjusting the issued control strategy based on the distance and the road condition information corresponding to the distance.
2. The smart home control method according to claim 1, wherein the step of performing feature recognition on the target biometric information to obtain a feature recognition result specifically includes:
determining a category of the target biometric information;
determining a corresponding feature identification thread according to the category;
and carrying out feature recognition on the target biological feature information of the corresponding category based on the feature recognition thread to obtain a feature recognition result.
3. The smart home control method according to claim 2, wherein the step of performing feature recognition on the target biometric information of the corresponding category based on the feature recognition thread to obtain a feature recognition result specifically comprises:
if the target biometric information is face image information, determining a pixel key point sequence and a pixel boundary value sequence of the face image information;
acquiring an initial element attribute corresponding to any sequence element in the pixel key point sequence of the face image information, and determining the sequence element with the largest sequence weight in the pixel boundary value sequence as a target sequence element;
generating, in the target sequence element, a mirror element attribute corresponding to the initial element attribute, and determining a sequence conversion relation between the pixel key point sequence and the pixel boundary value sequence according to the initial element attribute and the mirror element attribute;
acquiring a target element attribute in the target sequence element with the initial element attribute as a reference; mapping the target element attribute, according to the sequence conversion relation, onto the sequence element corresponding to the initial element attribute; obtaining, from that sequence element, a mirror sequence element corresponding to the target element attribute; and determining the mirror sequence element as the current sequence element of the face image information that characterizes the current emotional state;
and determining the current emotional state corresponding to the face image information according to the target element attribute in the current sequence element, and determining the current emotional state as the facial emotion recognition result of the face image information.
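
The sequence-mirroring steps of claim 3 are abstract, so the following toy rendering should be read loosely: it walks the literal steps (key point sequence, max-weight boundary element, mirror attribute, conversion relation, mapped current element) over a dict-based data model that is entirely hypothetical and not defined by the claim.

# Toy rendering of claim 3's sequence-mirroring steps; the data model
# ("sequence elements" as dicts with "attrs" and "weight") is invented.

def facial_emotion(keypoint_seq: list, boundary_seq: list) -> str:
    # Initial element attribute from any key-point sequence element.
    initial_attr = keypoint_seq[0]["attrs"][0]
    # Target sequence element: the boundary element with the largest weight.
    target = max(boundary_seq, key=lambda e: e["weight"])
    # Mirror attribute in the target element, and a conversion relation
    # pairing key-point attributes with their mirrored counterparts.
    mirror_attr = "mirror:" + initial_attr
    conversion = {initial_attr: mirror_attr}
    # Target element attributes, taken relative to the initial attribute,
    # mapped back through the conversion relation onto the key-point side.
    mapped = [conversion.get(a, a) for a in target["attrs"]]
    current_element = {"attrs": mapped}   # the "current sequence element"
    # Read the current emotional state off the mapped attributes.
    return "smiling" if any("raised" in a for a in current_element["attrs"]) else "neutral"

keypoints = [{"attrs": ["mouth_corner_raised"]}]
boundaries = [{"weight": 0.9, "attrs": ["mouth_corner_raised"]},
              {"weight": 0.2, "attrs": ["brow_flat"]}]
print(facial_emotion(keypoints, boundaries))   # -> smiling
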
4. The smart home control method according to claim 2, wherein the step of performing feature recognition on the target biometric information of the corresponding category based on the feature recognition thread to obtain a feature recognition result specifically comprises:
if the target biometric information is voice information, performing natural language processing on the voice information to obtain text information corresponding to the voice information, extracting keywords from the text information, identifying the semantics of the keywords, and obtaining the topic information of the text information from the semantics;
inputting the voice information into a trained neural network to obtain current tone features and current intonation features corresponding to the voice information;
determining a speech emotion state corresponding to the voice information based on the current tone features and the current intonation features;
determining an expected emotion state corresponding to the topic information according to a preset database;
and judging whether the speech emotion state is similar to the expected emotion state: if so, generating a speech emotion recognition result for the voice information according to the expected emotion state; if not, weighting the speech emotion state and the expected emotion state to obtain an actual emotion state of the voice information, and generating the speech emotion recognition result according to the actual emotion state.
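
Claim 4's speech branch splits into a semantic path (text, keywords, topic, expected emotion) and a prosodic path (the trained network), joined by a similarity check. In the sketch below, ASR, keyword extraction, the tone network, and the preset database are all stubbed with hypothetical stand-ins, and "similar" is reduced to label equality:

# Sketch of claim 4's speech branch; every component is a stand-in.

EXPECTED_BY_TOPIC = {"work": "stressed", "vacation": "happy"}   # preset database

def transcribe(voice: bytes) -> str:
    return "another long day at work"          # stand-in for real ASR

def topic_of(text: str) -> str:
    keywords = [w for w in text.split() if w in EXPECTED_BY_TOPIC]
    return keywords[0] if keywords else "general"

def tone_emotion(voice: bytes) -> str:
    return "tired"                             # stand-in for the trained network

def speech_emotion(voice: bytes) -> str:
    topic = topic_of(transcribe(voice))
    state = tone_emotion(voice)
    expected = EXPECTED_BY_TOPIC.get(topic, state)
    if state == expected:                      # "similar" reduced to equality here
        return expected
    # Dissimilar: weight the two states. A real system would combine
    # scores rather than labels, so this join is purely illustrative.
    return f"{state}+{expected}"

print(speech_emotion(b"..."))
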
5. The smart home control method according to claim 1, wherein before the step of determining a control strategy for controlling the smart home based on the feature recognition result, the method further comprises:
if the feature recognition result comprises both a facial emotion recognition result and a speech emotion recognition result, determining emotion description information for each of the facial emotion recognition result and the speech emotion recognition result;
determining a first correlation coefficient between the facial emotion recognition result and the speech emotion recognition result according to the emotion description information;
acquiring a plurality of recognition parameters of the facial emotion recognition result;
performing identification adjustment on at least some of the recognition parameters in the facial emotion recognition result according to the first correlation coefficient;
generating a corrected recognition result corresponding to the facial emotion recognition result from the recognition parameters whose identification adjustment is complete;
and fusing the corrected recognition result and the speech emotion recognition result, with the first correlation coefficient as a reference, to obtain a comprehensive emotion recognition result.
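
One way to picture claim 5's correction-and-fusion step is as score arithmetic: compute a correlation between the two modalities, damp the facial scores by it, then blend with the speech scores. The correlation measure, the score dictionaries, and the blending weights below are illustrative choices, not the claimed computation:

# Sketch of claim 5's fusion; the correlation measure and weights are invented.

def correlation(face_scores: dict, voice_scores: dict) -> float:
    # First correlation coefficient: overlap of probability mass on
    # emotions both recognizers report (a simple stand-in measure).
    shared = set(face_scores) & set(voice_scores)
    return sum(min(face_scores[e], voice_scores[e]) for e in shared)

def fuse(face_scores: dict, voice_scores: dict) -> dict:
    rho = correlation(face_scores, voice_scores)
    # Identification adjustment: damp facial scores when the two
    # modalities disagree (low rho), then fuse with rho as the weight.
    corrected = {e: s * rho for e, s in face_scores.items()}
    emotions = set(corrected) | set(voice_scores)
    return {e: rho * corrected.get(e, 0.0) + (1 - rho) * voice_scores.get(e, 0.0)
            for e in emotions}

face = {"happy": 0.7, "neutral": 0.3}
voice = {"happy": 0.4, "tired": 0.6}
print(fuse(face, voice))   # comprehensive emotion recognition result
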
6. The smart home control method according to claim 5, wherein the step of determining a control strategy for controlling the smart home based on the feature recognition result specifically comprises:
determining an expected use coefficient for each smart home appliance in the target residence from the comprehensive emotion recognition result;
and ranking the smart home appliances in the target residence in descending order of expected use coefficient, and determining the control strategies of the several top-ranked smart home appliances according to the set voltage.
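
Claim 6 reduces to a rank-and-truncate over per-appliance expected use coefficients. A minimal sketch, assuming a hypothetical emotion-to-coefficient lookup table and an illustrative top_n cutoff:

# Sketch of claim 6: rank appliances by expected use coefficient.

USE_COEFFICIENTS = {          # emotion -> appliance -> expected use coefficient
    "tired": {"lights": 0.9, "ac": 0.8, "tv": 0.3},
    "happy": {"lights": 0.6, "tv": 0.9, "ac": 0.5},
}

def top_appliances(emotion: str, top_n: int = 2) -> list:
    coeffs = USE_COEFFICIENTS.get(emotion, {})
    ranked = sorted(coeffs.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:top_n]]

print(top_appliances("tired"))   # -> ['lights', 'ac']
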
7. The smart home control method according to claim 6, wherein the step of issuing the control strategy to the target smart home appliance in the target residence and adaptively adjusting the issued control strategy according to the distance and the traffic information corresponding to the distance specifically comprises:
performing format conversion on an initial control strategy corresponding to the target smart home appliance according to the information receiving format of the target smart home appliance to obtain first control instruction information in that receiving format, and issuing the first control instruction information to the target smart home appliance;
determining a target time period in which the user will reach the target residence according to the distance and the traffic information corresponding to the distance; judging whether the target smart home appliance can meet the operation requirement within the target time period, and if not, generating a corrected control strategy for the target smart home appliance based on the comprehensive emotion recognition result and the target time period; determining second control instruction information corresponding to the corrected control strategy; and issuing the second control instruction information to the target smart home appliance to override the first control instruction information.
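
Claim 7 pairs a format-converted first instruction with a conditional second instruction that overrides it when the appliance cannot be ready within the estimated arrival window. The sketch below assumes JSON as the receiving format and a constant traffic factor for the ETA; both are placeholders for whatever the appliance and traffic service actually provide:

# Sketch of claim 7's issue-then-correct flow; format and ETA model assumed.
import json

def to_receiving_format(strategy: dict) -> str:
    return json.dumps(strategy)                 # first control instruction

def arrival_minutes(distance_km: float, traffic_factor: float) -> float:
    # Target time period: simple ETA model; real traffic data would
    # replace the constant factor.
    return distance_km * traffic_factor

def issue(strategy: dict, warmup_min: float, distance_km: float, traffic: float) -> str:
    first = to_receiving_format(strategy)
    eta = arrival_minutes(distance_km, traffic)
    if warmup_min <= eta:
        return first                            # appliance is ready in time
    # Correction: a second instruction that overrides the first,
    # here by switching to a faster (less thorough) startup mode.
    corrected = dict(strategy, mode="fast_start")
    return to_receiving_format(corrected)

print(issue({"appliance": "ac", "target_temp_c": 24}, warmup_min=20,
            distance_km=5.0, traffic=2.0))
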
8. An intelligent control device, comprising:
a processor, and
a memory and a network interface connected with the processor;
the network interface is connected to a non-volatile memory within the intelligent control device;
when running, the processor retrieves a computer program from the non-volatile memory via the network interface and executes it through the memory to perform the method of any one of claims 1 to 7.
CN202011419124.5A 2020-04-02 2020-04-02 Intelligent home control method and intelligent control equipment based on biological feature recognition Withdrawn CN112462622A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011419124.5A CN112462622A (en) 2020-04-02 2020-04-02 Intelligent home control method and intelligent control equipment based on biological feature recognition

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010254822.8A CN111447124B (en) 2020-04-02 2020-04-02 Intelligent household control method and intelligent control equipment based on biological feature recognition
CN202011419124.5A CN112462622A (en) 2020-04-02 2020-04-02 Intelligent home control method and intelligent control equipment based on biological feature recognition

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010254822.8A Division CN111447124B (en) 2020-04-02 2020-04-02 Intelligent household control method and intelligent control equipment based on biological feature recognition

Publications (1)

Publication Number Publication Date
CN112462622A true CN112462622A (en) 2021-03-09

Family

ID=71649780

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202011419098.6A Withdrawn CN112631137A (en) 2020-04-02 2020-04-02 Intelligent household control method and intelligent control equipment applied to biological feature recognition
CN202011419124.5A Withdrawn CN112462622A (en) 2020-04-02 2020-04-02 Intelligent home control method and intelligent control equipment based on biological feature recognition
CN202010254822.8A Active CN111447124B (en) 2020-04-02 2020-04-02 Intelligent household control method and intelligent control equipment based on biological feature recognition

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202011419098.6A Withdrawn CN112631137A (en) 2020-04-02 2020-04-02 Intelligent household control method and intelligent control equipment applied to biological feature recognition

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202010254822.8A Active CN111447124B (en) 2020-04-02 2020-04-02 Intelligent household control method and intelligent control equipment based on biological feature recognition

Country Status (1)

Country Link
CN (3) CN112631137A (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215927B (en) * 2020-09-18 2023-06-23 腾讯科技(深圳)有限公司 Face video synthesis method, device, equipment and medium
CN112904451A (en) * 2021-01-20 2021-06-04 浙江洁特智慧科技有限公司 Presence type inductor
CN113569634B (en) * 2021-06-18 2024-03-26 青岛海尔科技有限公司 Scene characteristic control method and device, storage medium and electronic device
CN113852524A (en) * 2021-07-16 2021-12-28 天翼智慧家庭科技有限公司 Intelligent household equipment control system and method based on emotional characteristic fusion
CN118136010B (en) * 2024-03-13 2024-09-27 浙江康巴赫科技股份有限公司 Electrical appliance working mode switching method and system based on voice interaction

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101917585A (en) * 2010-08-13 2010-12-15 宇龙计算机通信科技(深圳)有限公司 Method, device and terminal for regulating video information sent from visual telephone to opposite terminal
CN105785770A (en) * 2014-12-26 2016-07-20 北京奇虎科技有限公司 Onboard control system capable of controlling intelligent cooking electric appliance
CN105780378A (en) * 2014-12-26 2016-07-20 北京奇虎科技有限公司 Vehicle-mounted control system capable of controlling intelligent washing machine
CN106972991A (en) * 2016-10-25 2017-07-21 上海赫千电子科技有限公司 Smart home interacted system based on QNX onboard operations systems
CN106570496B (en) * 2016-11-22 2019-10-01 上海智臻智能网络科技股份有限公司 Emotion identification method and apparatus and intelligent interactive method and equipment
US10878831B2 (en) * 2017-01-12 2020-12-29 Qualcomm Incorporated Characteristic-based speech codebook selection
CN107220591A (en) * 2017-04-28 2017-09-29 哈尔滨工业大学深圳研究生院 Multi-modal intelligent mood sensing system
CN107272607A (en) * 2017-05-11 2017-10-20 上海斐讯数据通信技术有限公司 A kind of intelligent home control system and method
CN107133368B (en) * 2017-06-09 2020-11-03 上海思依暄机器人科技股份有限公司 Human-computer interaction method and system and robot
CN107235045A (en) * 2017-06-29 2017-10-10 吉林大学 Consider physiology and the vehicle-mounted identification interactive system of driver road anger state of manipulation information
CN108039988B (en) * 2017-10-31 2021-04-30 珠海格力电器股份有限公司 Equipment control processing method and device
CN109087670B (en) * 2018-08-30 2021-04-20 西安闻泰电子科技有限公司 Emotion analysis method, system, server and storage medium
CN110262413A (en) * 2019-05-29 2019-09-20 深圳市轱辘汽车维修技术有限公司 Intelligent home furnishing control method, control device, car-mounted terminal and readable storage medium storing program for executing
CN110399837B (en) * 2019-07-25 2024-01-05 深圳智慧林网络科技有限公司 User emotion recognition method, device and computer readable storage medium
CN110246519A (en) * 2019-07-25 2019-09-17 深圳智慧林网络科技有限公司 Emotion identification method, equipment and computer readable storage medium
CN110491415A (en) * 2019-09-23 2019-11-22 河南工业大学 A kind of speech-emotion recognition method based on convolutional neural networks and simple cycle unit
CN110673503A (en) * 2019-10-31 2020-01-10 重庆长安汽车股份有限公司 Intelligent household equipment control method and device, cloud server and computer readable storage medium
CN110941332A (en) * 2019-11-06 2020-03-31 北京百度网讯科技有限公司 Expression driving method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112631137A (en) 2021-04-09
CN111447124B (en) 2021-03-23
CN111447124A (en) 2020-07-24

Similar Documents

Publication Publication Date Title
CN111447124B (en) Intelligent household control method and intelligent control equipment based on biological feature recognition
CN109582793A (en) Model training method, customer service system and data labeling system, readable storage medium storing program for executing
CN114678014A (en) Intention recognition method, device, computer equipment and computer readable storage medium
US20190164566A1 (en) Emotion recognizing system and method, and smart robot using the same
CN110597082A (en) Intelligent household equipment control method and device, computer equipment and storage medium
CN112541738B (en) Examination and approval method, device, equipment and medium based on intelligent conversation technology
WO2023184942A1 (en) Voice interaction method and apparatus and electric appliance
WO2023273776A1 (en) Speech data processing method and apparatus, and storage medium and electronic apparatus
CN114639379A (en) Interaction method and device of intelligent electric appliance, computer equipment and medium
CN110674276B (en) Robot self-learning method, robot terminal, device and readable storage medium
CN111105798B (en) Equipment control method based on voice recognition
CN113220828B (en) Method, device, computer equipment and storage medium for processing intention recognition model
CN117238322B (en) Self-adaptive voice regulation and control method and system based on intelligent perception
CN110895936A (en) Voice processing method and device based on household appliance
CN117350411A (en) Large model training and task processing method and device based on federal learning
CN117456995A (en) Interactive method and system of pension service robot
CN112669836A (en) Command recognition method and device and computer readable storage medium
CN112052686A (en) Voice learning resource pushing method for user interactive education
CN109976703B (en) Guidance instruction method, computer-readable storage medium, and cooking apparatus
CN116757855A (en) Intelligent insurance service method, device, equipment and storage medium
CN116956856A (en) Data processing method and device, storage medium and electronic equipment
CN116955602A (en) Text processing method and device and electronic equipment
CN112860870B (en) Noise data identification method and equipment
CN117691918B (en) Control method and system for swimming pool pump motor
CN112634874B (en) Automatic tuning terminal equipment based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20210309)