WO2022227991A1 - Control method for smart home system (用于智能家居系统的控制方法) - Google Patents

Control method for smart home system (用于智能家居系统的控制方法)

Info

Publication number
WO2022227991A1
WO2022227991A1 (PCT/CN2022/083701, CN2022083701W)
Authority
WO
WIPO (PCT)
Prior art keywords
attribute information
target image
server
clothing
image
Prior art date
Application number
PCT/CN2022/083701
Other languages
English (en)
French (fr)
Inventor
张信耶
许升
万文鑫
Original Assignee
青岛海尔洗衣机有限公司
海尔智家股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 青岛海尔洗衣机有限公司 and 海尔智家股份有限公司
Publication of WO2022227991A1

Classifications

    • D - TEXTILES; PAPER
    • D06 - TREATMENT OF TEXTILES OR THE LIKE; LAUNDERING; FLEXIBLE MATERIALS NOT OTHERWISE PROVIDED FOR
    • D06F - LAUNDERING, DRYING, IRONING, PRESSING OR FOLDING TEXTILE ARTICLES
    • D06F33/00 - Control of operations performed in washing machines or washer-dryers
    • D06F33/30 - Control of washing machines characterised by the purpose or target of the control
    • D06F34/00 - Details of control systems for washing machines, washer-dryers or laundry dryers
    • D06F34/04 - Signal transfer or data transmission arrangements
    • D06F34/14 - Arrangements for detecting or measuring specific parameters
    • D06F34/18 - Condition of the laundry, e.g. nature or weight
    • D06F2103/00 - Parameters monitored or detected for the control of domestic laundry washing machines, washer-dryers or laundry dryers
    • D06F2103/02 - Characteristics of laundry or load
    • D06F2103/04 - Quantity, e.g. weight or variation of weight
    • D06F2103/06 - Type or material

Definitions

  • The invention relates to the technical field of smart homes, and in particular provides a control method for a smart home system.
  • In home life, taking washing machines as an example, as living standards rise and users' pursuit of quality of life keeps growing, washing machines are usually equipped with a variety of washing and care programs. These programs correspond to clothes with different attributes, such as material, category and color, and can provide more precise washing and care for clothes with different attributes.
  • In actual use, however, the attribute composition of clothing is complex and changeable, and users often cannot make an accurate washing and care judgment, which may lead to an inaccurate choice of washing and care program.
  • To address this, a camera is installed on the washing machine; the camera captures an image of the clothes, the image is recognized by fuzzy recognition, the attribute information of the clothes is determined, and a suitable washing program is recommended according to that attribute information.
  • However, the accuracy of the fuzzy recognition method is not high, and the determined attribute information may deviate from the real attributes of the clothes. The recommended washing program is then inaccurate, it is difficult to provide reasonable care for the clothes to be washed, and the clothes may even be damaged, for example by color bleeding or shrinkage and deformation, which degrades the user experience.
  • To solve this problem, the present invention provides a control method for a smart home system.
  • The smart home system includes a clothes treatment device and a server that can communicate with each other. The control method includes the following steps: the clothes treatment device acquires a target image; the clothes treatment device sends the target image to the server; the server judges the type of the target image; according to the judgment result, the server selectively calls different preset models to recognize the target image so as to determine clothing attribute information; and, when the clothing attribute information is recognized, the server selectively determines a laundry program according to the clothing attribute information.
  • Preferably, the preset models include a character recognition model pre-stored on the server. The step of "according to the judgment result, the server selectively calls different preset models to recognize the target image so as to determine clothing attribute information" specifically includes: if the target image is a clothing identification image, the server calls the character recognition model to recognize the target image and recognizes the text information in it; the server then determines the clothing attribute information according to the text information.
  • Preferably, the step of "the server calls the character recognition model to recognize the target image and recognizes the text information in the target image" specifically includes: the character recognition model detects the target image to obtain a text segmentation probability matrix and a text distance probability matrix; the character recognition model calculates a text probability matrix from the text segmentation probability matrix and the text distance probability matrix; the character recognition model extracts a text region from the target image according to the text probability matrix; and the character recognition model recognizes the text region and recognizes the text information.
  • The text probability matrix is calculated according to the following method (the formula itself appears only as an image in the original), where P_ij denotes the text probability matrix and p_ij its (i, j)-th element; S_ij denotes the text segmentation probability matrix and s_ij its (i, j)-th element; D_ij denotes the text distance probability matrix and d_ij its (i, j)-th element; and e and K are constants.
  • Preferably, the server has a database, and the preset models further include a deep learning model pre-stored on the server. The step of "according to the judgment result, the server selectively calls different preset models to recognize the target image so as to determine clothing attribute information" further includes: if the target image is a clothing image, the server compares the target image with the standard clothing images stored in the database; according to the comparison result, the server judges whether the standard clothing images contain an image identical or similar to the target image; and, according to the judgment result, the server selectively calls the deep learning model to recognize the target image and recognize the clothing attribute information.
  • Preferably, the step of "according to the judgment result, the server selectively calls the deep learning model to recognize the target image and recognize the clothing attribute information" specifically includes: if the standard clothing images contain no image identical or similar to the target image, the server calls the deep learning model to recognize the target image and recognizes the clothing attribute information.
  • This step further includes: if the standard clothing images contain an image identical or similar to the target image, the server determines the laundry program corresponding to that standard clothing image as the laundry program for the current load.
  • Preferably, the smart home system further includes a user client that can communicate with the clothes treatment device and the server. The step of "the server selectively determines a washing and care program according to the clothing attribute information" specifically includes: the server sends the clothing attribute information to the client; it is judged whether the clothing attribute information has been modified; and, according to the judgment result, the server selectively determines the washing and care program according to the clothing attribute information or the modified clothing attribute information.
  • The step of "according to the judgment result, the server selectively determines the laundry program according to the clothing attribute information or the modified clothing attribute information" specifically includes: if the clothing attribute information has not been modified, the server determines the laundry program according to the clothing attribute information.
  • This step further includes: if the clothing attribute information has been modified, the server stores the modified clothing attribute information and determines the washing and care program according to the modified clothing attribute information.
  • In a preferred solution, the smart home system includes a washing machine and a server; the washing machine acquires a target image and sends it to the server; the server judges the type of the target image; according to the judgment result, the server selectively calls different preset models to recognize the target image so as to determine clothing attribute information; and, when the clothing attribute information is recognized, the server selectively determines the washing and care program according to the clothing attribute information.
  • After the washing machine acquires the target image, the server judges the type of the target image and, according to the judgment result, selectively calls different preset models to recognize it.
  • Because the preset models recognize images with very high accuracy, and because the model matched to the image type is the one called, the precision and accuracy of image recognition are further improved.
  • The clothing attribute information can therefore be determined more accurately, the recommended washing and care program is more accurate, more reasonable washing and care can be provided for the clothes to be washed, damage to the clothes is avoided, and the user experience is improved.
  • If the target image is a clothing identification image, it records the attribute information of the clothes in textual form.
  • In that case the server calls a character recognition model dedicated to recognizing text, so the text information in the target image can be recognized accurately and the corresponding clothing attribute information can be determined from it, allowing the washing and care program to be recommended more accurately and more reasonable care to be provided for the clothes, thereby improving the user experience.
  • If the target image is a clothing image, it records the attribute information of the clothes in pictorial form.
  • In that case the server compares the target image with the standard clothing images stored in the database and judges whether they contain an image identical or similar to the target image; according to the judgment result, it selectively calls a deep learning model dedicated to recognizing images, so the clothing attribute information recorded in the target image can be recognized accurately.
  • Moreover, when the standard clothing images contain an image identical or similar to the target image, recognizing the target image again is avoided, which improves the running speed of the smart home system and allows the washing and care program to be determined quickly, thereby improving the user experience.
  • Fig. 1 is the structural diagram of the smart home system of the present invention
  • Fig. 2 is the main flow chart of the control method of the present invention.
  • FIG. 3 is a flowchart of a control method for selectively calling different preset models of the present invention to identify a target image
  • Fig. 4 is the flow chart of the control method that calls character recognition model of the present invention to recognize target image
  • FIG. 5 is a flowchart of a control method for selectively calling a deep learning model to identify a target image of the present invention
  • Fig. 6 is the first flow chart of the control method of the present invention for selectively determining the washing and care program according to the clothing attribute information
  • Fig. 7 is the second flow chart of the control method for selectively determining the washing and care program according to the clothing attribute information of the present invention.
  • FIG. 8 is a logic diagram of the control method of the present invention.
  • The term "arrangement" should be understood in a broad sense: it may be a fixed connection, a detachable connection or an integral connection; a mechanical connection or an electrical connection; a direct connection or an indirect connection through an intermediate medium; or an internal communication between two components.
  • The specific meanings of the above terms in the present invention can be understood according to the specific situation.
  • Based on the above problem, the present invention provides a control method for a smart home system in which, after the washing machine acquires a target image, the server judges the type of the target image and, according to the judgment result, selectively calls different preset models to recognize it.
  • Because the preset models recognize images with very high accuracy, and because the model matched to the image type is the one called, the precision and accuracy of image recognition are further improved; the clothing attribute information can be determined more accurately, the recommended washing and care program is more accurate, more reasonable washing and care can be provided for the clothes to be washed, damage to the clothes is avoided, and the user experience is improved.
  • FIG. 1 is a structural diagram of the smart home system of the present invention.
  • the smart home system of the present invention includes a washing machine 1, a server 2 and a client 3.
  • the washing machine 1 can communicate with the server 2 and the client 3, and the client 3 can communicate with the server 2; wherein, the washing machine 1 includes communication module, image acquisition module and control module. Both the communication module and the image acquisition module are connected to the control module.
  • the washing machine 1 communicates with the server 2 and the user terminal 3 through the communication module.
  • the image acquisition module is used to capture target images, and the control module controls the image acquisition module.
  • the captured target image is sent to the server 2 through the communication module.
  • the control module can be any type of controller, such as a programmable controller, a combinational logic controller, or the like.
  • the communication module may be, but not limited to, a Bluetooth module, a WiFi module, an NFC module, a ZigBee module, and the like.
  • the image acquisition module may be, but is not limited to, a camera module, a still camera, and the like.
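To make the data path concrete, the following is a minimal washer-side sketch in Python of steps S100 and S200 (capture the target image and send it to the server). The HTTP endpoint, form field name, and the use of OpenCV and requests are illustrative assumptions; the patent only requires that the image acquisition module captures the image and the communication module (e.g. WiFi) transmits it.

```python
# Hypothetical washer-side sketch: capture a target image and upload it to the
# server. The endpoint URL and form field are assumptions, not from the patent.
import cv2
import requests

SERVER_URL = "https://laundry-cloud.example.com/api/target-image"  # placeholder

def capture_and_upload(camera_index: int = 0) -> dict:
    cap = cv2.VideoCapture(camera_index)      # image acquisition module
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("failed to capture target image")
    ok, jpeg = cv2.imencode(".jpg", frame)    # compress before transmission
    if not ok:
        raise RuntimeError("failed to encode target image")
    resp = requests.post(
        SERVER_URL,
        files={"image": ("target.jpg", jpeg.tobytes(), "image/jpeg")},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. recognized attributes or a recommended program
```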
  • the target image may be a clothing identification image or a clothing image.
  • the clothing identification image may be, but not limited to, a clothing washing label, a clothing information label, a clothing electronic label, and the like.
  • a database and a preset model are set on the server 2, and standard clothing images are stored in the database;
  • the preset model includes a character recognition model and a deep learning model, and the character recognition model is used for recognizing clothing identification images, and the deep learning model uses for recognizing clothing images.
  • The standard clothing image may be an image of clothing produced by a manufacturer and entered by the manufacturer through the manufacturer's client, an image of the user's own clothing entered by the user through a personal client, an image of laundry captured by the washing machine 1 during a previous wash, and so on.
  • the character recognition model is an OCR model, namely an Optical Character Recognition model.
  • the character recognition model can also be other models such as CV model, ResNeXt model, VGG16 model, etc., no matter what model is adopted, as long as the clothing attribute information can be recognized.
  • the deep learning model can be but not limited to CNN model, FasterR-CNN model, SPPNet model, DeeplabV3+ model, YOLO model and HRNet model.
  • the server 2 may be, but not limited to, a cloud server and a background server.
  • the user terminal 3 may be an APP installed on the washing machine 1 or an APP installed on a terminal device of a user of the washing machine 1 .
  • the terminal device of the user of the washing machine 1 may be a mobile smart terminal such as a mobile phone, a tablet computer, a smart bracelet, and a smart watch, or a non-mobile smart terminal such as a computer and a smart speaker.
  • FIG. 2 is the main flowchart of the control method of the present invention.
  • control method for the smart home system of the present invention comprises the following steps:
  • the washing machine acquires a target image
  • the washing machine sends the target image to the server
  • the server determines the type of the target image
  • the server selectively invokes different preset models to identify the target image, so as to determine the clothing attribute information
  • the server selectively determines the washing and care program according to the clothing attribute information.
  • the target image may be a clothing identification image such as a clothing washing label, a clothing information label, and a clothing electronic label, or a clothing image.
  • the clothing attribute information includes the type, material and color of the clothing.
  • the clothing attribute information may only include other information such as weight and size, which can be flexibly adjusted and set by those skilled in the art according to actual usage requirements.
  • The washing and care programs include a washing program, a rinsing program, a spin-drying program, a quick-wash program, a drying program, a sterilization program and others; the washing and care parameters include washing time, washing water level, washing temperature, number of rinses, drying temperature, drying time, sterilization temperature, sterilization time, sterilization method and other parameters.
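The patent does not spell out how recognized attributes map to a specific program and parameter set, so the following rule-based sketch is purely illustrative; the attribute keys, program names and parameter values are assumptions.

```python
# Illustrative rule-based mapping from recognized clothing attributes to a
# washing and care program; all names and values here are assumptions.
def recommend_program(attrs: dict) -> dict:
    material = attrs.get("material", "")
    color = attrs.get("color", "")
    if material in ("wool", "silk"):
        return {"program": "delicate", "wash_temp_c": 30, "spin_rpm": 600, "rinses": 2}
    if material == "cotton":
        return {"program": "cotton", "wash_temp_c": 40, "spin_rpm": 1000, "rinses": 3}
    if color in ("red", "dark blue", "black"):
        return {"program": "color_care", "wash_temp_c": 30, "spin_rpm": 800, "rinses": 2}
    return {"program": "standard", "wash_temp_c": 30, "spin_rpm": 800, "rinses": 2}
```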
  • In step S100, the washing machine captures the target image through an image acquisition module such as a camera module or a still camera.
  • In step S300, the server may call a pre-stored classification model to analyze the target image and determine its type according to the analysis result.
  • The classification model can be a SENet model, a Keras model, a VGG model, an AtoC model or another model; those skilled in the art can flexibly select different classification models according to actual usage requirements, and whatever classification model is used, its specific analysis method should not constitute any limitation to the present invention.
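The dispatch in steps S300 and S400 can be pictured as follows. This is a minimal sketch with the classifier and recognition models injected as callables; the assumption that the classifier returns either "label" (clothing identification image) or "garment" (clothing image) is an interface choice made for illustration, not something the patent specifies.

```python
from typing import Callable, Dict

# Sketch of the server-side dispatch (S300-S400). All models are passed in as
# callables; their names and return conventions are illustrative assumptions.
def determine_attributes(
    image_bytes: bytes,
    classify_type: Callable[[bytes], str],          # pre-stored classification model
    ocr_model: Callable[[bytes], str],              # character recognition model
    attrs_from_text: Callable[[str], Dict[str, str]],
    deep_model: Callable[[bytes], Dict[str, str]],  # deep learning model
) -> Dict[str, str]:
    kind = classify_type(image_bytes)               # S300: judge the image type
    if kind == "label":                             # clothing identification image
        text = ocr_model(image_bytes)               # S410: recognize text
        return attrs_from_text(text)                # S420: text -> attributes
    return deep_model(image_bytes)                  # S430-S450: clothing image path
```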
  • a control method for selectively calling different preset models to identify a target image of the present invention will be described below.
  • Fig. 3 is the flow chart of the control method of selectively calling different preset models of the present invention to identify the target image
  • Fig. 4 is the flow chart of the control method of the present invention calling the character recognition model to identify the target image
  • 5 is a flowchart of a control method for selectively calling a deep learning model to identify a target image of the present invention.
  • step S400 "according to the judgment result, the server selectively calls different preset models to identify the target image to determine the clothing attribute information"
  • the steps include:
  • the server calls a character recognition model to recognize the target image, and identifies the text information in the target image;
  • the server determines clothing attribute information according to the text information.
  • In steps S410 to S420, taking the character recognition model being an OCR model as an example: if the target image is a clothing identification image, the target image records the attribute information of the clothes in textual form.
  • The server therefore calls the OCR model, which is dedicated to recognizing text, to recognize the target image, so the text information can be recognized accurately and the clothing attribute information corresponding to the target image can be determined from it, allowing the washing and care program to be recommended more accurately and more reasonable care to be provided for the clothes, thereby improving the user experience.
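The patent only states that the server "determines the clothing attribute information according to the text information" (step S420) without fixing the mechanism; a simple keyword lookup is one possible reading. The vocabulary below is an assumption made for illustration.

```python
# Illustrative keyword lookup from recognized wash-label text to attributes.
MATERIAL_KEYWORDS = {"cotton": "cotton", "wool": "wool", "silk": "silk",
                     "polyester": "polyester", "linen": "linen"}
CATEGORY_KEYWORDS = {"shirt": "shirt", "coat": "coat", "jeans": "jeans",
                     "dress": "dress", "sweater": "sweater"}

def attributes_from_text(label_text: str) -> dict:
    text = label_text.lower()
    attrs: dict = {}
    for keyword, material in MATERIAL_KEYWORDS.items():
        if keyword in text:
            attrs["material"] = material
            break
    for keyword, category in CATEGORY_KEYWORDS.items():
        if keyword in text:
            attrs["category"] = category
            break
    if "do not tumble dry" in text:
        attrs["tumble_dry"] = "no"
    return attrs
```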
  • step S410 the step of "the server invokes the character recognition model to recognize the target image and recognizes the text information in the target image” specifically includes:
  • the character recognition model detects the target image, and obtains a text segmentation probability matrix and a text distance probability matrix;
  • the character recognition model calculates the text probability matrix according to the text segmentation probability matrix and the text distance probability matrix
  • the character recognition model extracts the text area from the target image according to the text probability matrix
  • the character recognition model recognizes the text area, and recognizes text information.
  • the text segmentation probability matrix represents the predicted text area in the target image
  • the text distance probability matrix represents the minimum distance from each pixel in the target image to the text area
  • the text probability matrix represents the actual text area in the target image.
  • In step S412, the text probability matrix can be calculated according to formula (1), which appears only as an image in the original; in formula (1), P_ij is the text probability matrix and p_ij its (i, j)-th element, S_ij is the text segmentation probability matrix and s_ij its (i, j)-th element, D_ij is the text distance probability matrix and d_ij its (i, j)-th element, and e and K are constants.
  • e and K can be determined according to the computing power and computing accuracy of the OCR model.
  • Alternatively, in step S412 the text probability matrix can be calculated according to formula (2), also shown only as an image in the original; in formula (2), P_ij is the text probability matrix and p_ij its (i, j)-th element, S_ij is the text segmentation probability matrix and s_ij its (i, j)-th element, and e and K are constants.
  • The calculation of the text probability matrix P_ij is not limited to the two methods listed above; it can also be calculated from the text distance probability matrix D_ij alone. Whatever calculation method is adopted, it suffices that the text probability matrix P_ij can be obtained.
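Since formulas (1) and (2) are reproduced only as images in this text, the fusion used in the sketch below, p_ij = s_ij * exp(-K * d_ij) clipped below by e, is a stand-in chosen purely for illustration and is not the patented formula; only the shape of the computation (combine S and D element-wise, then threshold to obtain the actual text area) follows the description.

```python
import numpy as np

# Placeholder fusion of the text segmentation matrix S and distance matrix D
# (steps S411-S413). The exact formula in the patent is shown only as an image;
# this particular combination is an illustrative assumption.
def text_probability(S: np.ndarray, D: np.ndarray, K: float = 1.0, e: float = 1e-6) -> np.ndarray:
    P = S * np.exp(-K * D)        # suppress pixels far from any predicted text
    return np.clip(P, e, 1.0)

def extract_text_mask(P: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binary mask of pixels treated as the actual text area (step S413)."""
    return P >= threshold
```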
  • step S400 the step of "according to the judgment result, the server selectively invokes different preset models to identify the target image to determine the clothing attribute information" further includes:
  • the server compares the target image with the standard clothing image stored in the database
  • the server determines whether there is an image identical or similar to the target image in the standard clothing image
  • the server selectively invokes the deep learning model to identify the target image, and identifies clothing attribute information.
  • "Identical" means that the standard clothing images contain an image completely consistent with the target image, i.e. the similarity is 100%; "similar" means that they contain an image whose similarity with the target image is above a preset similarity, which may preferably be 95%, 90%, 85%, etc.
  • The similarity can be calculated as follows: the server extracts a first vector matrix of the target image; the server extracts a second vector matrix of each standard clothing image; the server calculates the similarity between the first vector matrix and each second vector matrix.
  • For example, if the first vector matrix A is {1, 2, 3} and the second vector matrix B1 is {1, 2, 3}, the two are exactly the same, so the similarity X1 is 100% and the target image is identical to that standard clothing image.
  • As another example, if the first vector matrix A is {1, 2, 3} and the second vector matrix B2 is {1, 1, 3}, one element differs, so the similarity X2 is 67% (2 ÷ 3 × 100%), which is below the preset similarity, and the target image is not similar to that standard clothing image.
  • Alternatively, the cosine distance can be used to represent the similarity, with a preset cosine distance of, say, 0.9.
  • If the cosine distance between the first vector matrix A and the second vector matrix B1 is 0.98, which is greater than the preset cosine distance, the target image is similar to that standard clothing image; if the cosine distance is 0.6, which is smaller than the preset cosine distance, the target image is not similar to it.
  • The preset cosine distance can also be 0.8, 0.95, etc.
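A minimal sketch of the comparison against stored standard clothing images follows, assuming feature vectors have already been extracted (the patent does not fix the feature extractor); the 0.9 threshold mirrors the preset cosine distance in the example above.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def find_matching_standard(target_vec, standard_vecs, threshold: float = 0.9):
    """Return the index of the first standard clothing image whose similarity to
    the target exceeds the preset threshold, or None if there is no match."""
    target = np.asarray(target_vec, dtype=float)
    for idx, vec in enumerate(standard_vecs):
        if cosine_similarity(target, np.asarray(vec, dtype=float)) >= threshold:
            return idx
    return None
```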
  • step S450 the step of "according to the judgment result, the server selectively invokes the deep learning model to identify the target image and identifies the clothing attribute information" specifically includes:
  • the server invokes the deep learning model to identify the target image, and identifies the clothing attribute information;
  • the server determines the laundry procedure corresponding to the standard garment image identical or similar to the target image as the laundry procedure of the current garment.
  • In step S451, taking the deep learning model being a CNN model as an example: if the standard clothing images contain no image identical or similar to the target image (for example, the similarity X2 of 67% listed above is below the preset similarity, such as 95%, or the cosine distance of 0.6 between the target image and a standard clothing image is below the preset cosine distance, such as 0.9), the target image is not similar to the standard clothing images.
  • This indicates that the database does not store a clothing image matching the clothes in the target image, nor attribute information such as their type, material and color, nor a laundry program matching those clothes.
  • In order to determine the attribute information and the washing and care program more accurately, the server calls the CNN model, which is dedicated to recognizing images, to recognize the target image and identify attribute information such as the type, material and color of the clothes.
  • From the recognized type, material, color and other attribute information, the washing program can then be determined accurately.
  • In step S452, if the standard clothing images contain an image identical or similar to the target image (for example, the similarity X1 of 100% listed above means the target image is identical to a standard clothing image, or a cosine distance of 0.98, greater than the preset cosine distance of 0.9, means the target image is similar to one), this indicates that the database stores a clothing image matching the clothes in the target image, attribute information such as their type, material and color, and a laundry program matching those clothes, and that program can provide reasonable care for the clothes in the target image.
  • In this case the server can directly determine the washing and care program corresponding to the standard clothing image that matches the target image as the washing and care program for the current load, which improves the running speed of the smart home system, allows the program to be determined quickly, and thereby improves the user experience.
  • The first vector matrix, second vector matrix and similarity values listed above are only exemplary, not limiting; in practical applications, those skilled in the art can determine the first and second vector matrices from actual clothing pictures and calculate the similarity.
  • Steps S410 and S430 have no particular order and are parallel alternatives; they depend only on the judgment of the type of the target image, and the corresponding step is executed according to the judgment result.
  • Steps S451 and S452 likewise have no particular order and are parallel alternatives; they depend only on the judgment of whether the standard clothing images contain an image identical or similar to the target image, and the corresponding step is executed according to the judgment result.
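Steps S451 and S452 amount to "reuse the stored program if a match exists, otherwise fall back to the deep learning model". The sketch below assumes the database is a list of (feature_vector, laundry_program) pairs and that a matcher like the one sketched earlier is supplied; both are illustrative assumptions.

```python
from typing import Callable, Dict, List, Optional, Sequence, Tuple

# Sketch of the S451/S452 branch: reuse a stored program when a standard image
# matches, otherwise call the deep learning model. The data layout is assumed.
def laundry_plan(
    image_bytes: bytes,
    target_vec: Sequence[float],
    database: List[Tuple[Sequence[float], str]],      # (feature vector, program)
    matcher: Callable[[Sequence[float], List[Sequence[float]]], Optional[int]],
    deep_model: Callable[[bytes], Dict[str, str]],
) -> Dict[str, object]:
    idx = matcher(target_vec, [vec for vec, _ in database])
    if idx is not None:                               # S452: reuse stored program
        return {"program": database[idx][1], "source": "database"}
    attrs = deep_model(image_bytes)                   # S451: recognize attributes
    return {"attributes": attrs, "source": "deep_model"}
```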
  • FIG. 6 is the first flow chart of the control method of the present invention for selectively determining the laundry program according to the clothing attribute information
  • FIG. 7 is the flow chart of the present invention’s control method for selectively determining the laundry program according to the clothing attribute information two.
  • step S500 the step of "the server selectively determines the washing and care program according to the clothing attribute information" specifically includes:
  • the server sends the clothing attribute information to the client
  • the server selectively determines the washing and care program according to the clothing attribute information or the modified clothing attribute information.
  • In step S520, the server judges whether the clothing attribute information has been modified only after the client has received the clothing attribute information and a preset time, such as 2 min, 3 min, 4 min or 5 min, has elapsed, giving the user enough time to decide whether to modify the clothing attribute information and to make the modification.
  • Although step S520 lists the server as the entity that performs the work of "judging whether the clothing attribute information has been modified", this is only an example, not a limitation; the work can also be completed by the washing machine, a terminal device, etc., and those skilled in the art can flexibly adjust and set the executing body of this work, which is not limited by the present invention.
  • step S530 the step of "according to the judgment result, the server selectively determines the washing and care program according to the clothing attribute information or the modified clothing attribute information" specifically includes:
  • the server determines the washing and nursing program according to the clothing attribute information
  • the server stores the modified clothing attribute information, and determines a washing and care program according to the modified clothing attribute information.
  • The modification of the clothing attribute information is performed on the basis of the clothing attribute information recognized by the server, rather than discarding the recognized information outright.
  • "Modified" means that a parameter of the clothing attribute information is changed, some attributes are deleted, or attributes are added; the changed parameter can be color, material, type, etc., and the deleted or added attribute can be color, material, type, weight, size, etc.
  • In step S531, if the clothing attribute information has not been modified after the client received it and the preset time has elapsed (for example, the preset time is 2 minutes and, 2 minutes after the client received the clothing attribute information, the color, material, type and other information have not been changed), this indicates that the color, material, type and other information recognized by the server are consistent with the real attributes of the clothes. The server then determines the washing and care program directly from the recognized clothing attribute information such as color, material and type, thereby improving the user experience.
  • In step S532, if the clothing attribute information has been modified after the client received it and the preset time has elapsed (for example, the recognized color is red and the user changes it to orange-red; the recognized type is a trench coat and the user changes it to a men's trench coat; the user considers all the clothes to be light-colored with no risk of color bleeding and deletes the color; or the user considers the garment large and hard to wash clean and adds its size, changing the entry to a men's long trench coat), then in any of these cases the server determines the washing and care program according to the modified clothing attribute information so as to meet the user's requirements, thereby improving the user experience.
  • The preset times and the modification methods listed above are only exemplary, not limiting; those skilled in the art can flexibly adjust and set the preset time and the way the clothing attribute information is changed according to actual usage requirements.
  • step S531 and step S532 are not in order, but are in parallel, only related to the judgment result of whether the clothing attribute information has been modified, and the corresponding steps can be performed according to different judgment results.
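A sketch of the confirmation loop in steps S510 to S532 follows: push the recognized attributes to the client, wait a preset interval, then use the edited attributes if the user changed anything. The transport callables and the 2-minute default are stand-ins; the patent leaves both open.

```python
import time
from typing import Callable, Dict, Optional

# Sketch of S510-S532: confirm or correct the recognized attributes with the user.
def confirm_attributes(
    attrs: Dict[str, str],
    send_to_client: Callable[[Dict[str, str]], None],        # hypothetical push channel
    fetch_client_edits: Callable[[], Optional[Dict[str, str]]],
    wait_seconds: int = 120,                                  # preset time, e.g. 2 min
) -> Dict[str, str]:
    send_to_client(attrs)            # S510 / S613: send attributes to the client
    time.sleep(wait_seconds)         # give the user time to review and edit
    edits = fetch_client_edits()     # None means nothing was modified
    if edits:                        # S532 / S615: store and use modified attributes
        return {**attrs, **edits}
    return attrs                     # S531 / S616: use the attributes as recognized
```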
  • FIG. 8 is a logic diagram of the control method of the present invention.
  • the washing machine acquires a target image
  • the washing machine sends the target image to the server
  • the server determines the type of the target image
  • if the target image is a clothing identification image, step S604 is performed;
  • if the target image is a clothing image, step S609 is performed;
  • the character recognition model detects the target image, and obtains a text segmentation probability matrix S ij and a text distance probability matrix D ij ;
  • the character recognition model calculates and obtains the text probability matrix P ij according to S ij and D ij ;
  • the character recognition model extracts the text area from the target image according to P ij ;
  • the character recognition model recognizes the text area, and recognizes the text information
  • the server determines clothing attribute information according to the text information
  • the server compares the target image with the standard clothing image stored in the database
  • the server determines whether there is an image identical or similar to the target image in the standard clothing image; if so, execute step S611; if not, execute step S612;
  • the server determines the washing and care program corresponding to the standard clothing image similar to the target image as the washing and care program of the current clothing;
  • the server invokes the deep learning model to identify the target image, and identifies the clothing attribute information
  • after step S608 or step S612, step S613 is executed;
  • the server sends the clothing attribute information to the client;
  • in step S614, it is judged whether the clothing attribute information has been modified; if yes, step S615 is executed; if not, step S616 is executed;
  • the server stores the modified clothing attribute information, and determines a washing and care program according to the modified clothing attribute information
  • the server determines a washing and care program according to the clothing attribute information.

Landscapes

  • Engineering & Computer Science (AREA)
  • Textile Engineering (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A control method for a smart home system, relating to the technical field of smart homes and aimed at solving the problem that the clothing attribute information determined by existing smart home systems is inaccurate, so that the recommended washing and care program is inaccurate. The control method comprises the following steps: a clothes treatment device acquires a target image; the clothes treatment device sends the target image to a server; the server judges the type of the target image; according to the judgment result, the server selectively calls different preset models to recognize the target image so as to determine clothing attribute information; and, in the case where the clothing attribute information is recognized, the server selectively determines a washing and care program according to the clothing attribute information. The clothing attribute information can thus be recognized more accurately, so the washing and care program can be recommended more accurately, reasonable washing and care is provided for the clothes to be washed, damage to the clothes is avoided, and the user experience is improved.

Description

Control method for a smart home system

Technical Field
The present invention relates to the technical field of smart homes, and in particular provides a control method for a smart home system.
Background Art
In home life, taking the washing machine as an example, as living standards rise and users' pursuit of quality of life keeps growing, washing machines are usually provided with a variety of washing and care programs corresponding to clothes with different attributes, such as material, category and color, so that more precise washing and care can be provided for clothes with different attributes. In actual use, however, the attribute composition of clothes is complex and changeable, and users often cannot make an accurate washing and care judgment, which may lead to an inaccurate choice of washing and care program.
In order to solve this problem, a camera is installed on the washing machine; the camera captures an image of the clothes, the image is recognized by fuzzy recognition, the attribute information of the clothes is determined, and a suitable washing program is recommended according to that attribute information. However, the accuracy of fuzzy recognition is not high, and the determined attribute information may deviate from the real attributes of the clothes, so the recommended washing program is inaccurate, it is difficult to provide reasonable care for the clothes to be washed, and the clothes may even be damaged, for example by color bleeding or shrinkage and deformation, which degrades the user experience.
Accordingly, there is a need in the art for a new control method for a smart home system that solves the above problem.
Summary of the Invention
In order to solve the above problem in the prior art, namely that the clothing attribute information determined by existing smart home systems is inaccurate and the recommended washing and care program is therefore inaccurate, the present invention provides a control method for a smart home system. The smart home system comprises a clothes treatment device and a server, the clothes treatment device being capable of communicating with the server. The control method comprises the following steps: the clothes treatment device acquires a target image; the clothes treatment device sends the target image to the server; the server judges the type of the target image; according to the judgment result, the server selectively calls different preset models to recognize the target image so as to determine clothing attribute information; and, in the case where the clothing attribute information is recognized, the server selectively determines a washing and care program according to the clothing attribute information.
In a preferred technical solution of the above control method, the preset models include a character recognition model pre-stored on the server, and the step of "according to the judgment result, the server selectively calls different preset models to recognize the target image so as to determine clothing attribute information" specifically includes: if the target image is a clothing identification image, the server calls the character recognition model to recognize the target image and recognizes the text information in the target image; and the server determines the clothing attribute information according to the text information.
In a preferred technical solution of the above control method, the step of "the server calls the character recognition model to recognize the target image and recognizes the text information in the target image" specifically includes: the character recognition model detects the target image to obtain a text segmentation probability matrix and a text distance probability matrix; the character recognition model calculates a text probability matrix from the text segmentation probability matrix and the text distance probability matrix; the character recognition model extracts a text region from the target image according to the text probability matrix; and the character recognition model recognizes the text region and recognizes the text information.
In a preferred technical solution of the above control method, the text probability matrix is calculated according to the following method:
[Formula shown as an image in the original: Figure PCTCN2022083701-appb-000001]
where P_ij is the text probability matrix and p_ij is the (i, j)-th element of the text probability matrix P_ij; S_ij is the text segmentation probability matrix and s_ij is the (i, j)-th element of S_ij; D_ij is the text distance probability matrix and d_ij is the (i, j)-th element of D_ij; e is a constant; and K is a constant.
In a preferred technical solution of the above control method, the server has a database; the preset models further include a deep learning model pre-stored on the server; and the step of "according to the judgment result, the server selectively calls different preset models to recognize the target image so as to determine clothing attribute information" further includes: if the target image is a clothing image, the server compares the target image with standard clothing images stored in the database; according to the comparison result, the server judges whether the standard clothing images contain an image identical or similar to the target image; and, according to the judgment result, the server selectively calls the deep learning model to recognize the target image and recognizes the clothing attribute information.
In a preferred technical solution of the above control method, the step of "according to the judgment result, the server selectively calls the deep learning model to recognize the target image and recognizes the clothing attribute information" specifically includes: if the standard clothing images contain no image identical or similar to the target image, the server calls the deep learning model to recognize the target image and recognizes the clothing attribute information.
In a preferred technical solution of the above control method, the step of "according to the judgment result, the server selectively calls the deep learning model to recognize the target image and recognizes the clothing attribute information" further includes: if the standard clothing images contain an image identical or similar to the target image, the server determines the washing and care program corresponding to that standard clothing image as the washing and care program for the current clothes.
In a preferred technical solution of the above control method, the smart home system further includes a user client capable of communicating with the clothes treatment device and the server, and the step of "the server selectively determines a washing and care program according to the clothing attribute information" specifically includes: the server sends the clothing attribute information to the user client; it is judged whether the clothing attribute information has been modified; and, according to the judgment result, the server selectively determines the washing and care program according to the clothing attribute information or the modified clothing attribute information.
In a preferred technical solution of the above control method, the step of "according to the judgment result, the server selectively determines the washing and care program according to the clothing attribute information or the modified clothing attribute information" specifically includes: if the clothing attribute information has not been modified, the server determines the washing and care program according to the clothing attribute information.
In a preferred technical solution of the above control method, the step of "according to the judgment result, the server selectively determines the washing and care program according to the clothing attribute information or the modified clothing attribute information" further includes: if the clothing attribute information has been modified, the server stores the modified clothing attribute information and determines the washing and care program according to the modified clothing attribute information.
In a preferred technical solution of the control method of the present invention, the smart home system includes a washing machine and a server; the washing machine acquires a target image and sends it to the server; the server judges the type of the target image; according to the judgment result, the server selectively calls different preset models to recognize the target image so as to determine clothing attribute information; and, in the case where the clothing attribute information is recognized, the server selectively determines a washing and care program according to the clothing attribute information.
Compared with the prior-art solution of recognizing an image of the clothes by fuzzy recognition and determining the attribute information of the clothes in that way, in the present invention, after the washing machine acquires the target image, the server judges the type of the target image and, according to the judgment result, selectively calls different preset models to recognize the target image. Because the preset models recognize images with very high accuracy, and because the preset model matched to the image type is the one called, the precision and accuracy of image recognition are further improved, the clothing attribute information can be determined more accurately, the recommended washing and care program is more accurate, more reasonable washing and care can be provided for the clothes to be washed, damage to the clothes is avoided, and the user experience is improved.
Further, if the target image is a clothing identification image, the target image records the attribute information of the clothes in textual form. In order to determine the attribute information more accurately, the server calls a character recognition model dedicated to recognizing text to recognize the target image, so the text information in the target image can be recognized accurately and the clothing attribute information corresponding to the target image can be determined from the recognized text, allowing the washing and care program to be recommended more accurately and more reasonable washing and care to be provided for the clothes, thereby improving the user experience.
Further, if the target image is a clothing image, the target image records the attribute information of the clothes in pictorial form. In order to determine the attribute information and the washing and care program more accurately, the server compares the target image with the standard clothing images stored in the database, judges whether the standard clothing images contain an image identical or similar to the target image, and, according to the judgment result, selectively calls a deep learning model dedicated to recognizing images to recognize the target image, so the clothing attribute information recorded in the target image can be recognized accurately; moreover, when the standard clothing images do contain an image identical or similar to the target image, recognizing the target image again is avoided, which improves the running speed of the smart home system and allows the washing and care program to be determined quickly, thereby improving the user experience.
Brief Description of the Drawings
The smart home system and the control method of the present invention are described below with reference to the accompanying drawings and in connection with a washing machine, in which:
FIG. 1 is a structural diagram of the smart home system of the present invention;
FIG. 2 is the main flowchart of the control method of the present invention;
FIG. 3 is a flowchart of the control method of the present invention for selectively calling different preset models to recognize the target image;
FIG. 4 is a flowchart of the control method of the present invention for calling the character recognition model to recognize the target image;
FIG. 5 is a flowchart of the control method of the present invention for selectively calling the deep learning model to recognize the target image;
FIG. 6 is the first flowchart of the control method of the present invention for selectively determining the washing and care program according to the clothing attribute information;
FIG. 7 is the second flowchart of the control method of the present invention for selectively determining the washing and care program according to the clothing attribute information;
FIG. 8 is a logic diagram of the control method of the present invention.
List of Reference Numerals
1. washing machine; 2. server; 3. user client.
Detailed Description of Embodiments
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. Those skilled in the art should understand that these embodiments are only used to explain the technical principles of the present invention and are not intended to limit its scope of protection. For example, although the present application is described in connection with a washing machine, the technical solution of the present invention is not limited thereto; the smart home system and control method can obviously also be applied to other clothes treatment devices such as dryers, washer-dryers, shoe washers and garment care machines, and such changes do not depart from the principle and scope of the present invention.
It should be noted that, in the description of the present invention, unless otherwise expressly specified and limited, the term "provided" should be understood in a broad sense: it may be a fixed connection, a detachable connection or an integral connection; a mechanical connection or an electrical connection; a direct connection or an indirect connection through an intermediate medium; or an internal communication between two elements. Those skilled in the art can understand the specific meaning of the above term in the present invention according to the specific situation.
Based on the technical problem raised in the Background Art, the present invention provides a control method for a smart home system in which, after the washing machine acquires a target image, the server judges the type of the target image and, according to the judgment result, selectively calls different preset models to recognize the target image. Because the preset models recognize images with very high accuracy, and because the preset model matched to the image type is the one called, the precision and accuracy of image recognition are further improved, the clothing attribute information can be determined more accurately, the recommended washing and care program is more accurate, more reasonable washing and care can be provided for the clothes to be washed, damage to the clothes is avoided, and the user experience is improved.
Referring first to FIG. 1, the smart home system of the present invention is described. FIG. 1 is a structural diagram of the smart home system of the present invention.
As shown in FIG. 1, the smart home system of the present invention includes a washing machine 1, a server 2 and a user client 3. The washing machine 1 can communicate with the server 2 and the user client 3, and the user client 3 can communicate with the server 2. The washing machine 1 includes a communication module, an image acquisition module and a control module; the communication module and the image acquisition module are both connected to the control module; the washing machine 1 communicates with the server 2 and the user client 3 through the communication module; the image acquisition module is used to capture the target image; and the control module controls the image acquisition module and sends the captured target image to the server 2 through the communication module. In physical form, the control module may be any type of controller, for example a programmable controller or a combinational logic controller.
The communication module may be, but is not limited to, a Bluetooth module, a WiFi module, an NFC module, a ZigBee module or the like.
The image acquisition module may be, but is not limited to, a camera module, a still camera or the like.
The target image may be a clothing identification image or a clothing image. Further, the clothing identification image may be, but is not limited to, a clothing wash label, a clothing information label, a clothing electronic tag or the like.
Preferably, a database and preset models are provided on the server 2, and standard clothing images are stored in the database; the preset models include a character recognition model and a deep learning model, the character recognition model being used to recognize clothing identification images and the deep learning model being used to recognize clothing images.
The standard clothing images may be images of clothes produced by a manufacturer and entered by the manufacturer through the manufacturer's client, images of the user's own clothes entered by the user through a personal client, images of clothes to be washed captured by the washing machine 1 during previous washes, and so on.
Preferably, the character recognition model is an OCR (Optical Character Recognition) model. Of course, the character recognition model may also be another model such as a CV model, a ResNeXt model or a VGG16 model; whatever model is adopted, it suffices that the clothing attribute information can be recognized.
The deep learning model may be, but is not limited to, a CNN model, a Faster R-CNN model, an SPPNet model, a DeeplabV3+ model, a YOLO model or an HRNet model.
The server 2 may be, but is not limited to, a cloud server or a back-end server.
The user client 3 may be an APP installed on the washing machine 1 or an APP installed on a terminal device of the user of the washing machine 1. The terminal device of the user of the washing machine 1 may be a mobile smart terminal such as a mobile phone, a tablet computer, a smart bracelet or a smart watch, or a non-mobile smart terminal such as a computer or a smart speaker.
It should be noted that, although the above embodiment is described in connection with the user client 3, not all of the above features are essential. Those skilled in the art will understand that, provided the smart home system can operate normally, the above embodiment may be appropriately pruned to combine new embodiments; for example, the user client 3 may be omitted to form a new smart home system.
The control method for the smart home system of the present invention is described below with reference to FIG. 2, which is the main flowchart of the control method of the present invention.
As shown in FIG. 2, the control method for the smart home system of the present invention includes the following steps:
S100: the washing machine acquires a target image;
S200: the washing machine sends the target image to the server;
S300: the server judges the type of the target image;
S400: according to the judgment result, the server selectively calls different preset models to recognize the target image so as to determine clothing attribute information;
S500: in the case where the clothing attribute information is recognized, the server selectively determines a washing and care program according to the clothing attribute information.
The target image may be a clothing identification image such as a clothing wash label, a clothing information label or a clothing electronic tag, or may be a clothing image.
The clothing attribute information includes the type, material and color of the clothes. Of course, the clothing attribute information may also include only other information such as weight and size, which those skilled in the art can flexibly adjust and set according to actual usage requirements.
The washing and care programs include a washing program, a rinsing program, a spin-drying program, a quick-wash program, a drying program, a sterilization program and other programs; the washing and care parameters include washing time, washing water level, washing temperature, number of rinses, drying temperature, drying time, sterilization temperature, sterilization time, sterilization method and other parameters.
In step S100, the washing machine captures the target image through an image acquisition module such as a camera module or a still camera.
In step S300, the server may call a pre-stored classification model to analyze the target image and judge the type of the target image according to the analysis result. The classification model may be a SENet model, a Keras model, a VGG model, an AtoC model or another model; those skilled in the art can flexibly select different classification models to analyze the target image according to actual usage requirements, and whatever classification model is adopted, the specific analysis method corresponding to any classification model should not constitute any limitation to the present invention.
The control method of the present invention for selectively calling different preset models to recognize the target image is described below with reference to FIGS. 3 to 5. FIG. 3 is a flowchart of the control method for selectively calling different preset models to recognize the target image; FIG. 4 is a flowchart of the control method for calling the character recognition model to recognize the target image; FIG. 5 is a flowchart of the control method for selectively calling the deep learning model to recognize the target image.
As shown in FIG. 3, the character recognition model and the deep learning model are pre-stored on the server. In step S400, the step of "according to the judgment result, the server selectively calls different preset models to recognize the target image so as to determine clothing attribute information" specifically includes:
S410: if the target image is a clothing identification image, the server calls the character recognition model to recognize the target image and recognizes the text information in the target image;
S420: the server determines the clothing attribute information according to the text information.
In steps S410 to S420, taking the character recognition model being an OCR model as an example: if the target image is a clothing identification image, the target image records the attribute information of the clothes in textual form. In order to determine the attribute information more accurately, the server calls the OCR model, which is dedicated to recognizing text, to recognize the target image, so the text information in the target image can be recognized accurately and the clothing attribute information corresponding to the target image can be determined from the recognized text, allowing the washing and care program to be recommended more accurately and more reasonable washing and care to be provided for the clothes, thereby improving the user experience.
Of course, other character recognition models such as a CV model, a ResNeXt model or a VGG16 model may also be used to recognize the target image and the text information in it; whatever character recognition model is adopted, the specific recognition method corresponding to any character recognition model should not constitute any limitation to the present invention.
As shown in FIG. 4, in step S410, the step of "the server calls the character recognition model to recognize the target image and recognizes the text information in the target image" specifically includes:
S411: the character recognition model detects the target image to obtain a text segmentation probability matrix and a text distance probability matrix;
S412: the character recognition model calculates a text probability matrix from the text segmentation probability matrix and the text distance probability matrix;
S413: the character recognition model extracts a text region from the target image according to the text probability matrix;
S414: the character recognition model recognizes the text region and recognizes the text information.
The text segmentation probability matrix represents the predicted text region in the target image; the text distance probability matrix represents the minimum distance from each pixel in the target image to the text region; the text probability matrix represents the actual text region in the target image.
Further, in step S412, the text probability matrix may be calculated according to the following formula (1):
[Formula (1) shown as an image in the original: Figure PCTCN2022083701-appb-000002]
In formula (1), P_ij is the text probability matrix and p_ij is the (i, j)-th element of P_ij; S_ij is the text segmentation probability matrix and s_ij is the (i, j)-th element of S_ij; D_ij is the text distance probability matrix and d_ij is the (i, j)-th element of D_ij; e is a constant; K is a constant.
Here, e and K may be determined according to the computing power and computing accuracy of the OCR model.
Alternatively, in step S412, the text probability matrix may be calculated according to the following formula (2):
[Formula (2) shown as an image in the original: Figure PCTCN2022083701-appb-000003]
In formula (2), P_ij is the text probability matrix and p_ij is the (i, j)-th element of P_ij; S_ij is the text segmentation probability matrix and s_ij is the (i, j)-th element of S_ij; e is a constant; K is a constant.
Of course, the method for calculating the text probability matrix P_ij is not limited to the two methods listed above; the text probability matrix P_ij may also be calculated from the text distance probability matrix D_ij. Whatever calculation method is adopted, it suffices that the text probability matrix P_ij can be obtained.
Referring again to FIG. 3, in step S400, the step of "according to the judgment result, the server selectively calls different preset models to recognize the target image so as to determine clothing attribute information" further includes:
S430: if the target image is a clothing image, the server compares the target image with the standard clothing images stored in the database;
S440: according to the comparison result, the server judges whether the standard clothing images contain an image identical or similar to the target image;
S450: according to the judgment result, the server selectively calls the deep learning model to recognize the target image and recognizes the clothing attribute information.
"Identical" means that the standard clothing images contain an image completely consistent with the target image, i.e. the similarity is 100%; "similar" means that the standard clothing images contain an image whose similarity with the target image is above a preset similarity; preferably, the preset similarity may be 95%, 90%, 85% or the like.
Preferably, the similarity may be calculated as follows: the server extracts a first vector matrix of the target image; the server extracts a second vector matrix of each standard clothing image; the server calculates the similarity between the first vector matrix and each second vector matrix. For example, if the first vector matrix A is {1, 2, 3} and the second vector matrix B1 is {1, 2, 3}, the two are exactly the same, so the similarity X1 is 100% and the target image is identical to that standard clothing image; as another example, if the first vector matrix A is {1, 2, 3} and the second vector matrix B2 is {1, 1, 3}, one element differs by comparison, so the similarity X2 is 67% (2 ÷ 3 × 100% = 67%), which is below the preset similarity, and the target image is not similar to that standard clothing image.
Alternatively, the cosine distance may be used to represent the similarity, with a preset cosine distance of 0.9. For example, if the cosine distance between the first vector matrix A and the second vector matrix B1 is 0.98, which is greater than the preset cosine distance, the target image is similar to that standard clothing image; or, if the cosine distance between the first vector matrix A and the second vector matrix B1 is 0.6, which is smaller than the preset cosine distance, the target image is not similar to that standard clothing image. Of course, the preset cosine distance may also be 0.8, 0.95 or the like.
As shown in FIG. 5, in step S450, the step of "according to the judgment result, the server selectively calls the deep learning model to recognize the target image and recognizes the clothing attribute information" specifically includes:
S451: if the standard clothing images contain no image identical or similar to the target image, the server calls the deep learning model to recognize the target image and recognizes the clothing attribute information;
S452: if the standard clothing images contain an image identical or similar to the target image, the server determines the washing and care program corresponding to that standard clothing image as the washing and care program for the current clothes.
In step S451, taking the deep learning model being a CNN model as an example: if the standard clothing images contain no image identical or similar to the target image (for example, the similarity X2 of 67% listed above is below the preset similarity, such as 95%, or the cosine distance of 0.6 between the target image and a standard clothing image is below the preset cosine distance, such as 0.9), the target image is not similar to the standard clothing image, which indicates that the database stores no clothing image matching the clothes in the target image, no attribute information such as the type, material and color of those clothes, and no washing and care program matching those clothes. In order to determine the attribute information and the washing and care program more accurately, the server calls the CNN model, which is dedicated to recognizing images, to recognize the target image and recognize attribute information such as the type, material and color of the clothes, so that the washing and care program can be determined accurately from the recognized attribute information.
Of course, other deep learning models such as a Faster R-CNN model, an SPPNet model, a DeeplabV3+ model, a YOLO model or an HRNet model may also be used to recognize the target image and recognize the clothing attribute information; whatever deep learning model is adopted, the specific recognition method corresponding to any deep learning model should not constitute any limitation to the present invention.
In step S452, if the standard clothing images contain an image identical or similar to the target image (for example, the similarity X1 of 100% listed above means the target image is identical to a standard clothing image, or the cosine distance of 0.98 between the target image and a standard clothing image, greater than the preset cosine distance of 0.9, means the target image is similar to it), this indicates that the database stores a clothing image matching the clothes in the target image, attribute information such as the type, material and color of those clothes, and a washing and care program matching those clothes, and that program can provide reasonable care for the clothes in the target image. Therefore, there is no need to re-determine the clothing attribute information and the washing and care program from the target image; the server directly determines the washing and care program corresponding to the standard clothing image similar to the target image as the washing and care program for the current clothes, which improves the running speed of the smart home system and allows the washing and care program to be determined quickly, thereby improving the user experience.
It should be noted that the first vector matrix, second vector matrix and similarity listed above are only exemplary, not limiting; in practical applications, those skilled in the art can determine the first and second vector matrices from actual clothing pictures and calculate the similarity.
It should also be noted that, in the above process, steps S410 and S430 have no particular order and are parallel; they depend only on the judgment result of the type of the target image, and the corresponding step is executed according to the judgment result. Steps S451 and S452 likewise have no particular order and are parallel; they depend only on the judgment of whether the standard clothing images contain an image identical or similar to the target image, and the corresponding step is executed according to the judgment result.
The control method of the present invention for selectively determining the washing and care program according to the clothing attribute information is described below with reference to FIGS. 6 and 7. FIG. 6 is the first flowchart of this control method; FIG. 7 is the second flowchart of this control method.
As shown in FIG. 6, in step S500, the step of "the server selectively determines the washing and care program according to the clothing attribute information" specifically includes:
S510: the server sends the clothing attribute information to the user client;
S520: it is judged whether the clothing attribute information has been modified;
S530: according to the judgment result, the server selectively determines the washing and care program according to the clothing attribute information or the modified clothing attribute information.
In step S520, the server judges whether the clothing attribute information has been modified only after the user client has received the clothing attribute information and a preset time, for example 2 min, 3 min, 4 min or 5 min, has elapsed, giving the user enough time to decide whether to modify the clothing attribute information and to make the modification. Although step S520 lists the server as performing the work of "judging whether the clothing attribute information has been modified", this is only exemplary, not limiting; the work may also be completed by the washing machine, a terminal device, etc., and those skilled in the art can flexibly adjust and set the executing body of this work, which is not limited by the present invention.
As shown in FIG. 7, in step S530, the step of "according to the judgment result, the server selectively determines the washing and care program according to the clothing attribute information or the modified clothing attribute information" specifically includes:
S531: if the clothing attribute information has not been modified, the server determines the washing and care program according to the clothing attribute information;
S532: if the clothing attribute information has been modified, the server stores the modified clothing attribute information and determines the washing and care program according to the modified clothing attribute information.
Here, modifying the clothing attribute information means modifying it on the basis of the clothing attribute information recognized by the server, rather than directly discarding the recognized information. The clothing attribute information being modified means that a parameter of the clothing attribute information is modified, some attributes are deleted, attributes are added, and so on; the modified parameter may be color, material, type, etc., and the deleted or added attribute may be color, material, type, weight, size, etc.
In step S531, if the clothing attribute information has not been modified after the user client received it and the preset time has elapsed (for example, the preset time is 2 min and, 2 min after the user client received the clothing attribute information, the color, material, type and other information have not been modified), this indicates that the color, material, type and other information recognized by the server are consistent with the real color, material, type and other information of the clothes and reflect the real attributes of the clothes; the server determines the washing and care program directly according to the recognized clothing attribute information such as color, material and type, thereby improving the user experience.
In step S532, if the clothing attribute information has been modified after the user client received it and the preset time has elapsed (for example, the preset time is 2 min and, 2 min after the user client received the clothing attribute information, at least one of the color, material, type and other information has been modified; for instance, the recognized color of the clothes is red and the user changes it to orange-red; or the recognized type is a trench coat and the user changes it to a men's trench coat; or the user considers the colors of the clothes to be all light with no risk of color bleeding and deletes the color; or the user considers the clothes large and hard to wash clean and adds the size, changing the entry to a men's long trench coat), then in any of the above cases the server determines the washing and care program according to the modified clothing attribute information so as to meet the user's requirements, thereby improving the user experience.
It should be noted that the preset times and the ways of modifying the clothing attribute information listed above are only exemplary, not limiting; those skilled in the art can flexibly adjust and set the preset time and the way of modifying the clothing attribute information according to actual usage requirements.
It should also be noted that, in the above process, steps S531 and S532 have no particular order and are parallel; they depend only on the judgment of whether the clothing attribute information has been modified, and the corresponding step is executed according to the judgment result.
A possible control flow of the present invention is described below with reference to FIG. 8, which is a logic diagram of the control method of the present invention.
As shown in FIG. 8, a possible complete flow of the control method of the present invention is:
S601: the washing machine acquires a target image;
S602: the washing machine sends the target image to the server;
S603: the server judges the type of the target image;
if the target image is a clothing identification image, step S604 is performed;
if the target image is a clothing image, step S609 is performed;
S604: the character recognition model detects the target image to obtain a text segmentation probability matrix S_ij and a text distance probability matrix D_ij;
S605: the character recognition model calculates the text probability matrix P_ij from S_ij and D_ij;
S606: the character recognition model extracts a text region from the target image according to P_ij;
S607: the character recognition model recognizes the text region and recognizes the text information;
S608: the server determines the clothing attribute information according to the text information;
S609: the server compares the target image with the standard clothing images stored in the database;
S610: according to the comparison result, the server judges whether the standard clothing images contain an image identical or similar to the target image; if yes, step S611 is performed; if not, step S612 is performed;
S611: the server determines the washing and care program corresponding to the standard clothing image similar to the target image as the washing and care program for the current clothes;
S612: the server calls the deep learning model to recognize the target image and recognizes the clothing attribute information;
after step S608 or step S612, step S613 is performed;
S613: the server sends the clothing attribute information to the user client;
S614: it is judged whether the clothing attribute information has been modified; if yes, step S615 is performed; if not, step S616 is performed;
S615: the server stores the modified clothing attribute information and determines the washing and care program according to the modified clothing attribute information;
S616: the server determines the washing and care program according to the clothing attribute information.
It should be pointed out that the above embodiment is only a preferred embodiment of the present invention, used merely to explain the principle of the method of the present invention, and is not intended to limit its scope of protection. In practical applications, those skilled in the art may allocate the above functions to different steps as required, i.e. the steps in the embodiments of the present invention may be further decomposed or combined; for example, the steps of the above embodiment may be merged into one step, or further split into several sub-steps, to complete all or part of the functions described above. The names of the steps involved in the embodiments of the present invention are only used to distinguish the steps and are not regarded as limiting the present invention.
The technical solution of the present invention has thus been described with reference to the preferred embodiments shown in the accompanying drawings; however, those skilled in the art will readily understand that the scope of protection of the present invention is obviously not limited to these specific embodiments. Without departing from the principle of the present invention, those skilled in the art may make equivalent changes or substitutions to the relevant technical features, and the technical solutions after such changes or substitutions will all fall within the scope of protection of the present invention.

Claims (10)

  1. A control method for a smart home system, characterized in that the smart home system comprises a clothes treatment device and a server, the clothes treatment device being capable of communicating with the server;
    the control method comprises the following steps:
    the clothes treatment device acquires a target image;
    the clothes treatment device sends the target image to the server;
    the server judges the type of the target image;
    according to the judgment result, the server selectively calls different preset models to recognize the target image so as to determine clothing attribute information;
    in the case where the clothing attribute information is recognized, the server selectively determines a washing and care program according to the clothing attribute information.
  2. The control method according to claim 1, characterized in that the preset models comprise a character recognition model, the character recognition model being pre-stored on the server;
    the step of "according to the judgment result, the server selectively calls different preset models to recognize the target image so as to determine clothing attribute information" specifically comprises:
    if the target image is a clothing identification image, the server calls the character recognition model to recognize the target image and recognizes text information in the target image;
    the server determines the clothing attribute information according to the text information.
  3. The control method according to claim 2, characterized in that the step of "the server calls the character recognition model to recognize the target image and recognizes text information in the target image" specifically comprises:
    the character recognition model detects the target image to obtain a text segmentation probability matrix and a text distance probability matrix;
    the character recognition model calculates a text probability matrix from the text segmentation probability matrix and the text distance probability matrix;
    the character recognition model extracts a text region from the target image according to the text probability matrix;
    the character recognition model recognizes the text region and recognizes the text information.
  4. The control method according to claim 3, characterized in that the text probability matrix is calculated according to the following method:
    [Formula shown as an image in the original: Figure PCTCN2022083701-appb-100001]
    where P_ij is the text probability matrix and p_ij is the (i, j)-th element of the text probability matrix P_ij; S_ij is the text segmentation probability matrix and s_ij is the (i, j)-th element of S_ij; D_ij is the text distance probability matrix and d_ij is the (i, j)-th element of D_ij; e is a constant; K is a constant.
  5. The control method according to claim 1, characterized in that the server has a database; the preset models further comprise a deep learning model, the deep learning model being pre-stored on the server;
    the step of "according to the judgment result, the server selectively calls different preset models to recognize the target image so as to determine clothing attribute information" further comprises:
    if the target image is a clothing image, the server compares the target image with standard clothing images stored in the database;
    according to the comparison result, the server judges whether the standard clothing images contain an image identical or similar to the target image;
    according to the judgment result, the server selectively calls the deep learning model to recognize the target image and recognizes the clothing attribute information.
  6. The control method according to claim 5, characterized in that the step of "according to the judgment result, the server selectively calls the deep learning model to recognize the target image and recognizes the clothing attribute information" specifically comprises:
    if the standard clothing images contain no image identical or similar to the target image, the server calls the deep learning model to recognize the target image and recognizes the clothing attribute information.
  7. The control method according to claim 5, characterized in that the step of "according to the judgment result, the server selectively calls the deep learning model to recognize the target image and recognizes the clothing attribute information" further comprises:
    if the standard clothing images contain an image identical or similar to the target image, the server determines the washing and care program corresponding to the standard clothing image identical or similar to the target image as the washing and care program for the current clothes.
  8. The control method according to any one of claims 1 to 6, characterized in that the smart home system further comprises a user client, the user client being capable of communicating with the clothes treatment device and the server;
    the step of "the server selectively determines a washing and care program according to the clothing attribute information" specifically comprises:
    the server sends the clothing attribute information to the user client;
    it is judged whether the clothing attribute information has been modified;
    according to the judgment result, the server selectively determines the washing and care program according to the clothing attribute information or the modified clothing attribute information.
  9. The control method according to claim 8, characterized in that the step of "according to the judgment result, the server selectively determines the washing and care program according to the clothing attribute information or the modified clothing attribute information" specifically comprises:
    if the clothing attribute information has not been modified, the server determines the washing and care program according to the clothing attribute information.
  10. The control method according to claim 8, characterized in that the step of "according to the judgment result, the server selectively determines the washing and care program according to the clothing attribute information or the modified clothing attribute information" further comprises:
    if the clothing attribute information has been modified, the server stores the modified clothing attribute information and determines the washing and care program according to the modified clothing attribute information.
PCT/CN2022/083701 2021-04-29 2022-03-29 用于智能家居系统的控制方法 WO2022227991A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110476297.9 2021-04-29
CN202110476297.9A CN115262162A (zh) 2021-04-29 2021-04-29 用于智能家居系统的控制方法

Publications (1)

Publication Number Publication Date
WO2022227991A1 true WO2022227991A1 (zh) 2022-11-03

Family

ID=83744769

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/083701 WO2022227991A1 (zh) 2021-04-29 2022-03-29 用于智能家居系统的控制方法

Country Status (2)

Country Link
CN (1) CN115262162A (zh)
WO (1) WO2022227991A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002224486A (ja) * 2001-02-01 2002-08-13 Toshiba Corp 洗濯機
CN106884278A (zh) * 2017-04-17 2017-06-23 东华大学 一种多功能智能衣物护理机
CN111379119A (zh) * 2018-12-26 2020-07-07 Lg电子株式会社 洗涤物处理装置及其洗涤程序确定方法
CN111893704A (zh) * 2019-05-05 2020-11-06 青岛海尔智能技术研发有限公司 衣物护理方法和装置


Also Published As

Publication number Publication date
CN115262162A (zh) 2022-11-01

Similar Documents

Publication Publication Date Title
WO2020134991A1 (zh) 纸质表单的自动录入方法、装置、计算机设备和存储介质
US20190228211A1 (en) Au feature recognition method and device, and storage medium
WO2019033573A1 (zh) 面部情绪识别方法、装置及存储介质
WO2021114814A1 (zh) 人体属性识别方法、装置、电子设备以及存储介质
KR102056806B1 (ko) 영상 통화 서비스를 제공하는 단말과 서버
Althubiti et al. Circuit manufacturing defect detection using VGG16 convolutional neural networks
CN110741377A (zh) 人脸图像处理方法、装置、存储介质及电子设备
CN107633205A (zh) 嘴唇动作分析方法、装置及存储介质
CN112489143A (zh) 一种颜色识别方法、装置、设备及存储介质
WO2021189911A1 (zh) 基于视频流的目标物位置检测方法、装置、设备及介质
WO2021042518A1 (zh) 基于人脸识别的字体调整方法、装置、设备及介质
WO2020026643A1 (ja) 情報処理装置、情報処理方法及び情報処理プログラム
CN105121620A (zh) 图像处理设备、图像处理方法、程序和存储介质
JP7018408B2 (ja) 画像検索装置および教師データ抽出方法
JP2009068946A (ja) 欠陥分類装置および方法並びにプログラム
CN111126147B (zh) 图像处理方法、装置和电子系统
TW202201275A (zh) 手部作業動作評分裝置、方法及電腦可讀取存儲介質
WO2022227991A1 (zh) 用于智能家居系统的控制方法
CN113762163B (zh) 一种gmp车间智能化监控管理方法及系统
CN108021921A (zh) 图像特征点提取系统及其应用
CN106676821A (zh) 一种自动选择洗衣模式的方法及终端
CN112257491B (zh) 自适应调度人脸识别和属性分析方法及装置
CN113159876B (zh) 服装搭配推荐装置、方法及存储介质
CN113435353A (zh) 基于多模态的活体检测方法、装置、电子设备及存储介质
TWI696959B (zh) 機台參數擷取裝置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22794464

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22794464

Country of ref document: EP

Kind code of ref document: A1