CN111549486B - Detergent dosage determining method and device, storage medium and washing machine - Google Patents


Info

Publication number
CN111549486B
CN111549486B · Application CN201910069982.2A
Authority
CN
China
Prior art keywords
clothes
image
feature map
pixel
clothing
Prior art date
Legal status
Active
Application number
CN201910069982.2A
Other languages
Chinese (zh)
Other versions
CN111549486A (en)
Inventor
肖文轩
Current Assignee
Gree Electric Appliances Inc of Zhuhai
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Priority date
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN201910069982.2A
Publication of CN111549486A
Application granted
Publication of CN111549486B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147Distances to closest patterns, e.g. nearest neighbour classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a detergent dosage determining method and device, a storage medium and a washing machine, wherein the method comprises the following steps: collecting a clothes image of the clothes to be washed in a washing machine; inputting the clothes image into a pre-established clothes recognition neural network model to recognize the clothes amount in the clothes image; and determining the amount of detergent required for washing the clothes according to the recognized clothes amount. The scheme provided by the invention enables the washing machine to determine the amount of detergent to be added according to the amount of the clothes to be washed.

Description

Detergent dosage determining method and device, storage medium and washing machine
Technical Field
The invention relates to the field of control, in particular to a method and a device for determining the dosage of detergent, a storage medium and a washing machine.
Background
The washing machine is a household appliance designed to relieve people of manual labor. Most existing washing machines are not intelligent enough: during use, the user must add detergent according to the amount of laundry, yet cannot accurately judge how much detergent to add. When too much detergent is added, residue remains on the clothes after washing; when too little is added, the clothes are not cleaned thoroughly. Moreover, current washing machines still require detergent to be added manually, which makes the whole washing process cumbersome.
Disclosure of Invention
The main purpose of the present invention is to overcome the above-mentioned drawbacks of the prior art by providing a detergent dosage determining method and device, a storage medium and a washing machine, so as to solve the problem in the prior art that a washing machine is not intelligent enough because the user must estimate the amount of detergent to add according to the amount of laundry.
In one aspect of the present invention, there is provided a detergent dosage determination method comprising: collecting clothes images of clothes to be washed in a washing machine; inputting the clothes image into a pre-established clothes recognition neural network model to recognize the clothes amount in the clothes image; and determining the amount of the detergent required for washing the laundry according to the recognized amount of the laundry.
Optionally, the clothing recognition neural network model is built by: respectively acquiring a first image when clothes exist in the washing machine and/or a second image when no clothes exist in the washing machine; inputting the first image and/or the second image as a training sample into a preset neural network for model training to obtain the clothing recognition neural network model; wherein each pixel of the first image and/or the second image is pre-labeled as a clothing pixel or a non-clothing pixel.
Optionally, the clothing recognition neural network model comprises: an input layer, a convolutional encoding network, a deconvolution decoding network and a pixel classification layer; the convolutional encoding network comprises: a convolutional layer, a batch regularization layer, an activation function and a pooling layer; and/or, the deconvolution decoding network comprises: an upsampling layer, a convolutional layer, a batch regularization layer and an activation function; and/or, the pixel classification layer comprises: a SoftMax classifier.
Optionally, inputting the clothing image into a pre-trained clothing recognition neural network model to recognize the clothing amount in the clothing image, including: inputting the clothing image into the clothing recognition neural network model through the input layer; carrying out convolutional coding processing on the clothes image through the convolutional coding network to obtain a processed first feature map; performing deconvolution decoding processing on the first feature map through the deconvolution decoding network to obtain a processed second feature map; classifying each pixel in the second feature map through the pixel classification layer to obtain a classification result of each pixel, where the classification result includes: clothing pixels and non-clothing pixels.
Optionally, performing convolutional encoding processing on the clothing image through the convolutional encoding network to obtain a processed first feature map, including: performing feature extraction on the clothes image through a convolution operation; performing batch regularization operation on the extracted features, and performing nonlinear mapping by using an activation function to obtain a first feature map; and performing pooling operation on the first feature map obtained after the nonlinear mapping is performed, and recording the index position of the maximum feature value in the process of the pooling operation.
Optionally, performing a deconvolution decoding process on the first feature map by the deconvolution decoding network to obtain a processed second feature map includes: performing, according to the index position of the maximum feature value, upsampling processing on the first feature map to obtain a corresponding sparse feature map; performing a convolution operation on the upsampled sparse feature map to obtain a corresponding dense feature map; and performing a batch regularization operation on the obtained dense feature map and a nonlinear mapping with an activation function to obtain a second feature map of the clothes image.
Optionally, classifying each pixel in the second feature map through the pixel classification layer to obtain a classification result of each pixel, including: and classifying each pixel in the second feature map through a SoftMax classifier, and outputting a classification result of each pixel.
Optionally, the amount of laundry includes: a ratio of a number of clothing pixels to a total number of pixels in the clothing image; determining an amount of detergent required for washing the laundry according to the recognized laundry amount, including: and determining the amount of the detergent required for washing the clothes to be washed according to the ratio of the number of the clothes pixels to the total number of the pixels.
Another aspect of the present invention provides a detergent usage amount determining apparatus including: the washing machine comprises a collecting unit, a control unit and a control unit, wherein the collecting unit is used for collecting clothes images of clothes to be washed in the washing machine; the identification unit is used for inputting the clothes image into a pre-established clothes identification neural network model so as to identify the clothes amount in the clothes image; and a determining unit for determining an amount of detergent required for washing the laundry according to the recognized laundry amount.
Optionally, the clothing recognition neural network model is built by: respectively acquiring a first image when clothes exist in the washing machine and/or a second image when no clothes exist in the washing machine; inputting the first image and/or the second image as a training sample into a preset neural network for model training to obtain the clothing recognition neural network model; wherein each pixel of the first image and/or the second image is pre-labeled as a clothing pixel or a non-clothing pixel.
Optionally, the clothing recognition neural network model comprises: an input layer, a convolutional encoding network, a deconvolution decoding network and a pixel classification layer; the convolutional encoding network comprises: a convolutional layer, a batch regularization layer, an activation function and a pooling layer; and/or, the deconvolution decoding network comprises: an upsampling layer, a convolutional layer, a batch regularization layer and an activation function; and/or, the pixel classification layer comprises: a SoftMax classifier.
Optionally, the identification unit includes: an input subunit, configured to input the clothing image into the clothing recognition neural network model through the input layer; the coding subunit is used for carrying out convolutional coding processing on the clothes image through the convolutional coding network so as to obtain a processed first feature map; the decoding subunit is configured to perform deconvolution decoding processing on the first feature map through the deconvolution decoding network to obtain a processed second feature map; a classification subunit, configured to classify, by the pixel classification layer, each pixel in the second feature map to obtain a classification result of each pixel, where the classification result includes: clothing pixels and non-clothing pixels.
Optionally, the encoding subunit performs convolutional encoding processing on the clothing image through the convolutional encoding network to obtain a processed first feature map, and includes: performing feature extraction on the clothes image through a convolution operation; performing batch regularization operation on the extracted features, and performing nonlinear mapping by using an activation function to obtain a first feature map; and performing pooling operation on the first feature map obtained after the nonlinear mapping is performed, and recording the index position of the maximum feature value in the process of the pooling operation.
Optionally, the performing, by the decoding subunit, a deconvolution decoding process on the first feature map by using the deconvolution decoding network to obtain a processed second feature map includes: performing, according to the index position of the maximum feature value, upsampling processing on the first feature map to obtain a corresponding sparse feature map; performing a convolution operation on the upsampled sparse feature map to obtain a corresponding dense feature map; and performing a batch regularization operation on the obtained dense feature map and a nonlinear mapping with an activation function to obtain a second feature map of the clothes image.
Optionally, the classifying subunit classifies each pixel in the second feature map through the pixel classification layer to obtain a classification result of each pixel, including: and classifying each pixel in the second feature map through a SoftMax classifier, and outputting a classification result of each pixel.
Optionally, the amount of laundry includes: a ratio of a number of clothing pixels to a total number of pixels in the clothing image; the determination unit determines an amount of detergent required to wash the laundry according to the recognized laundry amount, including: and determining the amount of the detergent required for washing the clothes to be washed according to the ratio of the number of the clothes pixels to the total number of the pixels.
A further aspect of the invention provides a storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of any of the methods described above.
In a further aspect, the invention provides a washing machine comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any of the methods described above when executing the program.
In a further aspect, the present invention provides a washing machine comprising a detergent usage amount determining apparatus as defined in any one of the preceding claims.
According to the technical scheme of the invention, a clothes recognition neural network model is used to recognize the clothes image of the clothes to be washed in the washing machine, so as to recognize the amount of the clothes to be washed, and the amount of detergent required is determined according to the recognized clothes amount. This enables the washing machine to adjust the amount of detergent intelligently, determining the amount to be added according to the amount of clothes to be washed, so that detergent can be added automatically. The washing machine thus becomes more intelligent, the user's operation steps are reduced, and the user experience is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a process schematic of one embodiment of a detergent dosage determination process provided by the present invention;
FIG. 2 is a schematic view showing a mounting position of a camera of the washing machine;
FIG. 3a is a schematic diagram of a network structure of the clothing recognition neural network model according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of a convolutional encoding network according to an embodiment of the present invention;
FIG. 3c is a schematic diagram of a structure of a deconvolution decoding network according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating one embodiment of the step of inputting the clothing image into a pre-trained clothing recognition neural network model to recognize the amount of clothing in the clothing image;
FIG. 5 is a schematic structural view of an embodiment of a detergent use amount determining apparatus provided by the present invention;
fig. 6 is a schematic structural diagram of an embodiment of an identification unit according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the specific embodiments of the present invention and the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 is a process diagram showing an example of the method for determining the amount of detergent provided by the present invention.
As shown in fig. 1, according to one embodiment of the present invention, the detergent usage amount determining method includes at least step S110, step S120, and step S130.
Step S110, collecting the clothes image of the clothes to be washed in the washing machine.
In particular, a laundry image of the laundry to be washed inside the washing machine (e.g., inside the inner tub of the washing machine) may be captured by a camera. The camera may be disposed on the door of the washing machine, for example as shown in fig. 2; fig. 2 is a schematic view of the mounting position of the camera of the washing machine.
Step S120, inputting the clothes image into a pre-established clothes recognition neural network model to recognize the clothes amount in the clothes image.
The pre-established clothing recognition neural network model can be specifically established in the following way:
(1) a first image when clothes exist in the washing machine and/or a second image when no clothes exist in the washing machine are/is respectively collected.
Wherein each pixel of the first image and/or the second image is pre-labeled as a clothing pixel or a non-clothing pixel. Specifically, a data set is prepared for the model training process: a number of images are collected, which may be captured by a camera installed at the position shown in fig. 2. The collected images may include images with clothes and images without clothes. The data set is then labeled, dividing the pixels into 2 categories, namely clothing pixels and non-clothing pixels; for convenience of reading by a computer, these may be labeled 0 (non-clothing) and 1 (clothing), respectively.
(2) And inputting the first image and/or the second image as a training sample into a preset neural network for model training to obtain the clothing recognition neural network model.
And inputting the training sample into a preset neural network for model training to obtain a clothes recognition neural network model capable of recognizing the category of each pixel in the image. That is, the class of each pixel in the clothes image can be identified through the clothes recognition neural network model, i.e., whether each pixel is a clothes pixel or a non-clothes pixel is identified.
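The pixel-level labeling scheme described in step (1) can be illustrated with a minimal sketch. The mask values below are invented for illustration; the patent only fixes the two labels 0 (non-clothing) and 1 (clothing):

```python
import numpy as np

# Toy ground-truth mask for a 4x4 image patch: 1 marks clothing pixels,
# 0 marks non-clothing pixels, matching the labeling scheme in the text.
label_mask = np.array([
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
], dtype=np.uint8)

num_clothes = int(label_mask.sum())                    # pixels labeled 1
num_non_clothes = int(label_mask.size - num_clothes)   # pixels labeled 0
print(num_clothes, num_non_clothes)                    # 7 clothing, 9 non-clothing
```

A full training set would pair each camera image with one such mask of the same height and width.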
Fig. 3a is a schematic diagram of a network structure of the clothing recognition neural network model according to an embodiment of the present invention, that is, a schematic diagram of a network structure of the predetermined neural network.
In a specific embodiment, as shown in fig. 3a, the network structure of the clothing recognition neural network model is a deep convolutional neural network; that is, model training is performed by using a deep convolutional neural network algorithm. The network may specifically include an input layer (Input images), a convolutional encoding network (code module), a deconvolution decoding network (decode module), and a pixel classification layer (output images). The convolutional encoding network and the deconvolution decoding network are both multilayer and have the same number of layers.
The Input images layer inputs the image into the network. The convolutional encoding network (code module) is a fully convolutional neural network in which each convolutional layer is followed by a downsampling layer (i.e., a pooling layer). Fig. 3b is a schematic structural diagram of the convolutional encoding network according to the embodiment of the present invention, and fig. 3c is a schematic structural diagram of the deconvolution decoding network according to the embodiment of the present invention. As shown in figs. 3a and 3b, the convolutional encoding network may specifically include a convolutional layer (Conv), a batch regularization layer (BN), an activation function (ReLU), and a pooling layer (Pooling); as shown in figs. 3a and 3c, the deconvolution decoding network may specifically include an upsampling layer (Upsampling), a convolutional layer (Conv), a batch regularization layer (BN), and an activation function (ReLU). The convolutional encoding network comprises 16 convolutional layers, with a structure similar to the first 16 convolutional layers of the VGG-19 network designed for object classification; the fully connected layers of VGG-19 are discarded, so that the deepest encoder outputs a high-resolution feature map and the number of network parameters is reduced, which in turn reduces the training time of the network. The corresponding deconvolution decoding network (decode module) likewise comprises 16 convolutional layers, so that the whole neural network reaches 32 layers. The pixel classification layer (output images), i.e., the output layer, comprises a SoftMax classifier, which assigns each pixel at the corresponding position in the image to a category and computes the probability of each category.
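The mirrored 16 + 16 layer layout described above can be sketched as a plain layer-specification list. The per-stage channel counts below follow VGG-19's convolutional stages and are an assumption, since the patent only fixes the total layer count:

```python
# Hypothetical channel plan per VGG-19-style stage: (num_convs, out_channels).
# Each encoder stage ends with max pooling; each decoder stage starts with
# unpooling that reuses the pooling indices recorded by the encoder.
ENCODER_STAGES = [(2, 64), (2, 128), (4, 256), (4, 512), (4, 512)]

encoder = []
for num_convs, channels in ENCODER_STAGES:
    for _ in range(num_convs):
        encoder.append(("conv3x3", channels))   # Conv -> BN -> ReLU
    encoder.append(("maxpool", channels))       # records argmax indices

# The decoder mirrors the encoder: unpool with saved indices, then convolve.
decoder = []
for num_convs, channels in reversed(ENCODER_STAGES):
    decoder.append(("unpool", channels))
    for _ in range(num_convs):
        decoder.append(("conv3x3", channels))   # Conv -> BN -> ReLU

conv_layers = sum(1 for kind, _ in encoder + decoder if kind == "conv3x3")
print(conv_layers)  # 32 convolutional layers: 16 encoder + 16 decoder
```

Discarding the fully connected layers, as the text notes, is what keeps the deepest encoder output a spatial feature map rather than a flat vector.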
Fig. 4 is a flowchart illustrating a specific implementation of the step of inputting the clothing image into a pre-trained clothing recognition neural network model to recognize the amount of clothing in the clothing image according to an embodiment of the present invention. As shown in fig. 4, step S120 may specifically include step S121, step S122, step S123, and step S124.
Step S121, inputting the clothing image into the clothing recognition neural network model through the input layer.
And S122, carrying out convolutional coding processing on the clothes image through the convolutional coding network to obtain a processed first feature map.
Specifically, feature extraction is performed on the clothing image through convolution processing; performing batch regularization operation on the extracted features, and performing nonlinear mapping by using an activation function to obtain a first feature map; and performing pooling operation on the first feature map obtained after the nonlinear mapping is performed, and recording the index position of the maximum feature value in the process of the pooling operation.
Referring to fig. 3b, in the convolutional encoding network each convolution operation extracts features from the output of the previous layer through a 3 × 3 convolution kernel; Batch Normalization is then applied to the extracted features, the ReLU activation function performs a nonlinear mapping, and finally a pooling operation is performed. Preferably, the pooling operation uses max pooling (Max-Pooling), after which the length and width of each feature map become half of their original values. Max pooling makes the image features invariant to small spatial translations, and repeated max pooling yields features that are more robust for the classifier. However, successive pooling and downsampling progressively distort and lose the boundary information of the image, which is detrimental to the segmentation task. A decoding network is therefore provided for the subsequent process, and in order to restore the image information as faithfully as possible, the index of the maximum feature value must be recorded during each pooling operation.
Optionally, after the nonlinear mapping is performed, a Dropout layer deactivates nodes in the convolutional encoding network with a preset probability, which helps prevent overfitting of the model; the pooling operation is then performed.
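The index-recording max pooling described above can be sketched in numpy; this is an illustrative toy (single channel, invented values), not the patent's implementation:

```python
import numpy as np

def max_pool_2x2_with_indices(x):
    """2x2 max pooling that also records the flat index of each maximum,
    so a decoder can later restore values to their original positions."""
    h, w = x.shape
    pooled = np.zeros((h // 2, w // 2), dtype=x.dtype)
    indices = np.zeros((h // 2, w // 2), dtype=np.int64)
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            window = x[i:i + 2, j:j + 2]
            k = int(np.argmax(window))               # position within the 2x2 window
            pooled[i // 2, j // 2] = window.flat[k]
            indices[i // 2, j // 2] = (i + k // 2) * w + (j + k % 2)
    return pooled, indices

x = np.array([[1., 3., 2., 0.],
              [4., 2., 1., 5.],
              [0., 1., 7., 2.],
              [3., 2., 1., 0.]])
pooled, idx = max_pool_2x2_with_indices(x)
print(pooled)  # each entry is the max of one 2x2 window: 4, 5, 3, 7
```

As the text notes, the pooled map has half the length and width of the input; the recorded indices carry the positional information that pooling would otherwise discard.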
And step S123, performing deconvolution decoding processing on the first feature map through the deconvolution decoding network to obtain a processed second feature map.
Specifically, according to the index position of the maximum feature value, upsampling processing is performed on the first feature map to obtain a corresponding sparse feature map, where the sparse feature map is an image feature represented by a sparse matrix; a convolution operation is performed on the upsampled sparse feature map to obtain a corresponding dense feature map, where the dense feature map is an image feature represented by a dense matrix; and a batch regularization operation is performed on the obtained dense feature map, followed by nonlinear mapping with an activation function, to obtain the second feature map of the clothes image.
Referring to fig. 3c, each decoder upsamples its input features using the indices of the maximum feature values recorded during the pooling operation, then performs a convolution operation on the upsampled sparse feature map with a trainable convolution kernel to obtain a dense feature map. As in the encoding process, a batch regularization operation is applied to the dense feature map, followed by nonlinear mapping with the ReLU activation function, yielding the second feature map used for pixel classification.
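The decoder's unpooling step can be sketched as the inverse scatter operation: each pooled value is placed back at the recorded index of its maximum, and every other position stays zero, producing the sparse feature map the text describes. The values below are invented toy inputs:

```python
import numpy as np

def max_unpool_2x2(pooled, indices, out_shape):
    """Upsampling step of the decoder: scatter each pooled value back to the
    recorded flat index of its maximum, leaving zeros elsewhere (sparse map)."""
    sparse = np.zeros(out_shape, dtype=pooled.dtype)
    sparse.flat[indices.ravel()] = pooled.ravel()
    return sparse

# Toy pooled map and the flat indices a matching 2x2 max-pool would record.
pooled = np.array([[4., 5.],
                   [3., 7.]])
indices = np.array([[4, 7],
                    [12, 10]])
sparse = max_unpool_2x2(pooled, indices, (4, 4))
print(sparse)  # only 4 nonzero entries, each restored to its original position
```

A subsequent convolution (not shown) would then turn this sparse map into the dense feature map mentioned in the text.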
Step S124, classifying each pixel in the second feature map through the pixel classification layer to obtain a classification result of each pixel, where the classification result includes: clothing pixels and non-clothing pixels.
And classifying each pixel in the second feature map through a SoftMax classifier, and outputting a classification result of each pixel. The specific output is the probability of each pixel in each class, and the class with the highest probability of each pixel is the classification of the pixel.
In the deep neural network structure, in order to overcome the difficulty of training deep networks and to accelerate the training process, a batch regularization layer is added after each convolutional layer. This alleviates the vanishing-gradient problem that deep networks are prone to during training and improves both the convergence speed and the model accuracy; Dropout can additionally be used to prevent overfitting of the model.
And step S130, determining the quantity of the detergent required for washing the clothes to be washed according to the recognized quantity of the clothes.
The laundry amount may specifically include a ratio of a number of laundry pixels to a total number of pixels in the laundry image; accordingly, the amount of detergent required for washing the laundry is determined according to the recognized laundry amount, and particularly, the amount of detergent required for washing the laundry may be determined according to a ratio of the number of laundry pixels to the total number of pixels.
All pixels are counted: if the number of pixels identified as clothing is m and the number identified as non-clothing is n, the clothes amount is P = m/(m + n), with P in the range [0, 1]. In one embodiment, a correspondence table between the clothes amount (the ratio of clothing pixels to total pixels) and the detergent amount may be preset, and the detergent amount required for washing may be determined by looking up the table according to the currently recognized clothes amount.
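The ratio P = m/(m + n) and the table lookup described above can be sketched as follows. The table thresholds and milliliter doses are invented placeholders, since the patent does not specify the correspondence table's contents:

```python
def laundry_amount(num_clothes_px, num_non_clothes_px):
    """P = m / (m + n), in [0, 1], as defined in the description."""
    m, n = num_clothes_px, num_non_clothes_px
    return m / (m + n)

# Hypothetical correspondence table: upper bound on P -> detergent dose (ml).
# Both columns are invented for illustration only.
DOSAGE_TABLE = [(0.25, 20), (0.50, 35), (0.75, 50), (1.01, 65)]

def detergent_dose_ml(p):
    """Look up the preset table with the recognized laundry amount P."""
    for upper, dose in DOSAGE_TABLE:
        if p < upper:
            return dose
    return DOSAGE_TABLE[-1][1]

p = laundry_amount(6000, 4000)   # e.g. 6000 clothing px, 4000 non-clothing px
print(p, detergent_dose_ml(p))   # P = 0.6 falls in the 0.50-0.75 band
```

With a real model, m and n would come from counting the per-pixel classification results of step S124.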
Fig. 5 is a schematic structural view of an embodiment of the detergent amount determining apparatus according to the present invention. As shown in fig. 5, the detergent dosage device 100 includes a collecting unit 110, a recognizing unit 120, and a determining unit 130.
The collecting unit 110 is used for collecting a laundry image of laundry to be washed in the washing machine; the recognition unit 120 is configured to input the laundry image into a pre-established laundry recognition neural network model to recognize a laundry amount in the laundry image; the determination unit 130 is configured to determine an amount of detergent required for washing the laundry, based on the recognized amount of laundry.
The collecting unit 110 collects laundry images of laundry to be washed in the washing machine.
Specifically, the collecting unit 110 may collect a laundry image of laundry to be washed inside the washing machine (e.g., inside an inner tub of the washing machine) through a camera. The camera may be arranged on a washing machine door of the washing machine, for example as shown with reference to fig. 2.
The recognition unit 120 inputs the laundry image into a pre-established laundry recognition neural network model to recognize the amount of laundry in the laundry image.
The pre-established clothing recognition neural network model can be specifically established in the following way:
(1) a first image when clothes exist in the washing machine and/or a second image when no clothes exist in the washing machine are/is respectively collected.
Wherein each pixel of the first image and/or the second image is pre-labeled as a clothing pixel or a non-clothing pixel. Specifically, a data set is prepared for the model training process: images are collected by a camera installed at a position such as that shown in fig. 2, and may include images with clothes and images without clothes. The data set is then labeled: each pixel is assigned to one of two categories, clothes pixel or non-clothes pixel, and for convenience of reading the data by computer, the pixels may be labeled 0 (non-clothes) and 1 (clothes), respectively.
(2) And inputting the first image and/or the second image as a training sample into a preset neural network for model training to obtain the clothing recognition neural network model.
And inputting the training sample into a preset neural network for model training to obtain a clothes recognition neural network model capable of recognizing the category of each pixel in the image. That is, the class of each pixel in the clothes image can be identified through the clothes recognition neural network model, i.e., whether each pixel is a clothes pixel or a non-clothes pixel is identified.
Fig. 3a is a schematic diagram of a network structure of the clothing recognition neural network model according to an embodiment of the present invention, that is, a schematic diagram of a network structure of the predetermined neural network. In a specific embodiment, as shown in fig. 3a, the network structure of the clothing recognition neural network model is a deep convolutional neural network, that is, model training is performed by using a deep convolutional neural network algorithm, which may specifically include an Input layer (Input images), a convolutional coding network (code module), a deconvolution decoding network (decode module), and pixel classification layers (output images). The convolutional coding network and the deconvolution decoding network are both multilayer and have the same number of layers.
The Input images layer inputs images into the network. The convolutional coding network (code module) is a fully convolutional neural network in which each convolutional layer is followed by a downsampling layer (i.e., a Pooling layer). As shown in fig. 3a and 3b, the convolutional coding network may specifically include a convolutional layer (Conv), a batch regularization layer (BN), an activation function (ReLu), and a pooling layer (Pooling); as shown in fig. 3a and 3c, the deconvolution decoding network may specifically include an upsampling layer (Upsampling), a convolutional layer (Conv), a batch regularization layer (BN), and an activation function (ReLu). The convolutional coding network comprises 16 convolutional layers, whose structure is similar to the first 16 convolutional layers of the VGG-19 network designed for object classification; the fully connected layers of VGG-19 are discarded so that the deepest encoder outputs a high-resolution feature map and the number of network parameters is reduced, thereby shortening training time. The corresponding deconvolution decoding network (decode module) likewise comprises 16 convolutional layers, so that the whole neural network structure reaches 32 layers. The pixel classification layer (output images), i.e., the output layer, comprises a SoftMax classifier, which classifies the pixel at each position of the image into one of the categories and computes the probability of each category.
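The layer bookkeeping above (16 encoder convolutions mirrored by 16 decoder convolutions, 32 in total) can be sanity-checked with a short sketch; the per-block channel widths are taken from VGG-19's convolutional stack and are an assumption, since this disclosure does not list them explicitly:

```python
# VGG-19-style encoder: (number of conv layers, output channels) per block.
# Channel widths are VGG-19's, assumed here for illustration only.
ENCODER_BLOCKS = [(2, 64), (2, 128), (4, 256), (4, 512), (4, 512)]

# Flatten blocks into a per-layer channel list, then mirror it for the decoder.
encoder_layers = [ch for convs, ch in ENCODER_BLOCKS for _ in range(convs)]
decoder_layers = list(reversed(encoder_layers))

total_conv_layers = len(encoder_layers) + len(decoder_layers)  # 16 + 16 = 32
```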
Fig. 6 is a schematic structural diagram of an embodiment of an identification unit according to an embodiment of the present invention. As shown in fig. 6, the identifying unit 120 may specifically include an input subunit 121, an encoding subunit 122, a decoding subunit 123, and a classifying subunit 124.
The input subunit 121 is configured to input the clothing image into the clothing recognition neural network model through the input layer.
The coding subunit 122 is configured to perform a convolutional coding process on the clothing image through the convolutional coding network to obtain a processed first feature map.
Specifically, the encoding subunit 122 performs feature extraction on the laundry image by convolution; applies a batch regularization operation to the extracted features followed by a nonlinear mapping with an activation function, obtaining a first feature map; and performs a pooling operation on the first feature map obtained after the nonlinear mapping, recording the index position of the maximum feature value during the pooling operation.
Referring to fig. 3b, in the convolutional coding network each convolution operation performs feature extraction on the output of the previous layer with a 3 × 3 convolution kernel; Batch Normalization is then applied to the extracted features, the activation function ReLu performs a nonlinear mapping, and finally Pooling is performed. Preferably, the pooling operation adopts Max-Pooling, after which the length and width of each feature map are halved. Max pooling gives the image translation invariance to small spatial displacements, and repeated max pooling yields features that are more robust for the classifier. However, successive pooling downsampling progressively distorts and loses the boundary information of the image, which is unfavorable for the segmentation task; a decoding network is therefore provided for the subsequent process, and, in order to restore the image information as far as possible, the index of the maximum feature value must be recorded during pooling.
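A minimal NumPy sketch of 2 × 2 max pooling that records the index of each maximum, as the encoder described above requires (illustrative only, not the patented implementation):

```python
import numpy as np

def max_pool_with_indices(x):
    """2x2 max pooling with stride 2 over a 2-D feature map.
    Returns the pooled map and the flat index (into x) of each maximum,
    which the decoder later uses for unpooling."""
    h, w = x.shape
    pooled = np.empty((h // 2, w // 2), dtype=x.dtype)
    indices = np.empty((h // 2, w // 2), dtype=np.int64)
    for i in range(h // 2):
        for j in range(w // 2):
            window = x[2*i:2*i+2, 2*j:2*j+2]
            k = int(np.argmax(window))           # position inside the 2x2 window
            pooled[i, j] = window.flat[k]
            indices[i, j] = (2*i + k // 2) * w + (2*j + k % 2)  # flat index in x
    return pooled, indices

x = np.array([[1., 3., 2., 0.],
              [4., 2., 1., 1.],
              [0., 1., 5., 2.],
              [2., 0., 1., 3.]])
pooled, idx = max_pool_with_indices(x)  # 4x4 -> 2x2, halving each dimension
```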
Optionally, after the nonlinear mapping, the coding subunit 122 uses a Dropout layer to deactivate nodes of the convolutional coding network with a preset probability, which helps prevent the model from overfitting; the pooling operation is then performed.
The decoding subunit 123 is configured to perform deconvolution decoding processing on the first feature map through the deconvolution decoding network to obtain a processed second feature map;
specifically, the decoding subunit 123 performs upsampling processing on the first feature map according to the index position of the maximum feature value to obtain a corresponding sparse feature map, where the sparse feature map is an image feature represented by a sparse matrix; performing convolution operation on the sparse feature map subjected to the up-sampling processing to obtain a corresponding dense feature map, wherein the dense feature map is an image feature represented by a dense matrix; and carrying out batch regularization operation on the obtained dense feature map, and carrying out nonlinear mapping by using an activation function to obtain a second feature map of the clothes image.
Referring to fig. 3c, each decoder performs upsampling (upsampling) on the input features by using the index of the maximum feature value recorded in the pooling operation process, performs convolution operation on the upsampled sparse feature map by using a trainable convolution kernel to obtain a dense feature map, performs batch regularization operation on the dense feature map similarly to the encoding process, and performs nonlinear mapping by using a ReLu activation function to obtain a second feature map for pixel classification.
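The unpooling step of the decoder can be sketched as follows: each pooled value is written back at its recorded flat index and every other position stays zero, which is exactly the sparse feature map described above (an illustrative sketch, with inputs matching the pooling example's shapes):

```python
import numpy as np

def max_unpool(pooled, indices, out_shape):
    """Upsample by placing each pooled value at its recorded flat index;
    all other positions remain zero, giving a sparse feature map."""
    out = np.zeros(out_shape, dtype=pooled.dtype)
    out.flat[indices.ravel()] = pooled.ravel()
    return out

pooled = np.array([[4., 2.],
                   [2., 5.]])
indices = np.array([[4, 2],
                    [12, 10]])          # flat indices recorded during pooling
sparse = max_unpool(pooled, indices, (4, 4))
# Only 4 of the 16 positions are nonzero; subsequent trainable convolutions
# densify this map into the dense feature map described above.
```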
The classification subunit 124 is configured to classify each pixel in the second feature map through the pixel classification layer to obtain a classification result of each pixel, where the classification result includes: clothing pixels and non-clothing pixels.
Specifically, the classification subunit 124 classifies each pixel in the second feature map by a SoftMax classifier and outputs a classification result for each pixel. The output is, for each pixel, the probability of belonging to each class; the class with the highest probability is taken as the pixel's classification.
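Per-pixel SoftMax classification as described can be sketched as follows (a generic softmax-plus-argmax, not the patented classifier; the sample logits are invented):

```python
import numpy as np

def classify_pixels(logits):
    """logits: (H, W, C) class scores from the last decoder layer.
    Returns per-pixel class probabilities and the argmax class map
    (0 = non-clothes, 1 = clothes in the two-class case)."""
    z = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    probs = e / e.sum(axis=-1, keepdims=True)
    return probs, probs.argmax(axis=-1)

logits = np.array([[[0.2, 1.8],     # pixel (0,0): clothes more likely
                    [2.0, -1.0]]])  # pixel (0,1): non-clothes more likely
probs, classes = classify_pixels(logits)
```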
In the deep neural network structure, in order to overcome the difficulty of training deep networks and to accelerate training, a batch regularization layer is added after each convolutional layer. This alleviates the vanishing-gradient problem of deep networks during training and improves both convergence speed and model accuracy; Dropout may meanwhile be used to prevent the model from overfitting.
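The batch regularization operation used throughout can be sketched as a standard batch-normalization forward pass (a simplified training-time version with fixed gamma and beta; the sample batch is invented):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature across the batch to zero mean and unit
    variance, then scale and shift by the learnable gamma and beta."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 10.0],
              [3.0, 30.0],
              [5.0, 50.0]])  # batch of 3 samples, 2 features
y = batch_norm(x)
# Each column of y now has (approximately) zero mean and unit variance,
# which stabilizes gradients in deep networks and speeds up convergence.
```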
The determination unit 130 determines an amount of detergent required to wash the laundry according to the recognized laundry amount.
The laundry amount may specifically include a ratio of a number of laundry pixels to a total number of pixels in the laundry image; accordingly, the determination unit 130 may determine an amount of detergent required to wash the laundry according to a ratio of the number of the laundry pixels to the total number of pixels.
For example, all pixels are counted; assuming m pixels are identified as clothes and n as non-clothes, the clothes amount is P = m/(m + n), with P in the range [0, 1]. In one embodiment, a correspondence table between the amount of laundry (the ratio of clothes pixels to total pixels) and the amount of detergent may be preset, and the determining unit 130 determines the amount of detergent required by looking up the table according to the currently recognized amount of laundry.
The present invention also provides a storage medium corresponding to the detergent dosage determining method, having stored thereon a computer program which, when executed by a processor, carries out the steps of any of the methods described above.
The invention also provides a washing machine corresponding to the detergent dosage determination method, which comprises a processor, a memory and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the steps of any one of the methods.
The invention also provides a washing machine corresponding to the detergent dosage determining device, comprising any one of the detergent dosage determining devices.
According to the scheme provided by the invention, the clothes image of the clothes to be washed in the washing machine is processed by the clothes recognition neural network model to identify the amount of clothes to be washed, and the required amount of detergent is determined from the identified amount. Detergent dosing of the washing machine can thus be adjusted intelligently according to the amount of clothes to be washed, making the washing machine more intelligent, reducing the user's operation steps, and improving the user experience.
The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope and spirit of the invention and the following claims. For example, due to the nature of software, the functions described above may be implemented using software executed by a processor, hardware, firmware, hardwired, or a combination of any of these. In addition, each functional unit may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and the parts serving as the control device may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The above description is only an example of the present invention, and is not intended to limit the present invention, and it is obvious to those skilled in the art that various modifications and variations can be made in the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (17)

1. A method for determining a dose of a detergent, comprising:
collecting a clothes image of clothes to be washed in a washing machine;
inputting the clothes image into a pre-established clothes recognition neural network model to recognize the clothes amount in the clothes image;
determining the amount of detergent required for washing the laundry according to the recognized amount of the laundry;
the amount of laundry, comprising: a ratio of a number of clothing pixels to a total number of pixels in the clothing image;
determining an amount of detergent required for washing the laundry according to the recognized laundry amount, including:
and determining the amount of the detergent required for washing the clothes to be washed according to the ratio of the number of the clothes pixels to the total number of the pixels.
2. The method of claim 1, wherein the clothing recognition neural network model is built by:
respectively acquiring a first image when clothes exist in the washing machine and/or a second image when no clothes exist in the washing machine;
inputting the first image and/or the second image as a training sample into a preset neural network for model training to obtain the clothing recognition neural network model;
wherein each pixel of the first image and/or the second image is pre-labeled as a clothing pixel or a non-clothing pixel.
3. The method of claim 1 or 2, wherein the clothing recognition neural network model comprises: the system comprises an input layer, a convolutional coding network, a deconvolution decoding network and a pixel classification layer;
the convolutional encoding network, comprising: a convolutional layer, a batch regularization layer, an activation function and a pooling layer;
and/or,
the deconvolution decoding network, comprising: an upper sampling layer, a convolution layer, a batch regularization layer and an activation function;
and/or,
the pixel classification layer includes: SoftMax classifier.
4. The method of claim 3, wherein inputting the clothing image into a pre-trained clothing recognition neural network model to recognize the amount of clothing in the clothing image comprises:
inputting the clothing image into the clothing recognition neural network model through the input layer;
carrying out convolutional coding processing on the clothes image through the convolutional coding network to obtain a processed first feature map;
performing deconvolution decoding processing on the first feature map through the deconvolution decoding network to obtain a processed second feature map;
classifying each pixel in the second feature map through the pixel classification layer to obtain a classification result of each pixel, where the classification result includes: clothing pixels and non-clothing pixels.
5. The method according to claim 4, wherein performing convolutional encoding processing on the clothes image through the convolutional encoding network to obtain a processed first feature map comprises:
performing feature extraction on the clothes image through a convolution operation;
performing batch regularization operation on the extracted features, and performing nonlinear mapping by using an activation function to obtain a first feature map;
and performing pooling operation on the first feature map obtained after the nonlinear mapping is performed, and recording the index position of the maximum feature value in the process of the pooling operation.
6. The method of claim 5, wherein performing a deconvolution decoding process on the first feature map by the deconvolution decoding network to obtain a processed second feature map comprises:
according to the index position of the maximum characteristic value, performing up-sampling processing on the first characteristic diagram to obtain a corresponding sparse characteristic diagram;
performing convolution operation on the sparse feature map subjected to the up-sampling processing to obtain a corresponding dense feature map;
and carrying out batch regularization operation on the obtained dense feature map, and carrying out nonlinear mapping by using an activation function to obtain a second feature map of the clothes image.
7. The method according to any one of claims 4-6, wherein classifying each pixel in the second feature map by the pixel classification layer to obtain a classification result of each pixel comprises:
and classifying each pixel in the second feature map through a SoftMax classifier, and outputting a classification result of each pixel.
8. A detergent dosage determining apparatus, comprising:
the washing machine comprises a collecting unit, a control unit and a control unit, wherein the collecting unit is used for collecting clothes images of clothes to be washed in the washing machine;
the identification unit is used for inputting the clothes image into a pre-established clothes identification neural network model so as to identify the clothes amount in the clothes image;
a determining unit for determining an amount of detergent required for washing the laundry according to the recognized amount of the laundry;
the amount of laundry, comprising: a ratio of a number of clothing pixels to a total number of pixels in the clothing image;
the determination unit determines an amount of detergent required to wash the laundry according to the recognized laundry amount, including:
and determining the amount of the detergent required for washing the clothes to be washed according to the ratio of the number of the clothes pixels to the total number of the pixels.
9. The apparatus of claim 8, wherein the clothing recognition neural network model is built by:
respectively acquiring a first image when clothes exist in the washing machine and/or a second image when no clothes exist in the washing machine;
inputting the first image and/or the second image as a training sample into a preset neural network for model training to obtain the clothing recognition neural network model;
wherein each pixel of the first image and/or the second image is pre-labeled as a clothing pixel or a non-clothing pixel.
10. The apparatus of claim 8 or 9, wherein the clothing recognition neural network model comprises: the system comprises an input layer, a convolutional coding network, a deconvolution decoding network and a pixel classification layer;
the convolutional encoding network, comprising: a convolutional layer, a batch regularization layer, an activation function and a pooling layer;
and/or,
the deconvolution decoding network, comprising: an upper sampling layer, a convolution layer, a batch regularization layer and an activation function;
and/or,
the pixel classification layer includes: SoftMax classifier.
11. The apparatus of claim 10, wherein the identification unit comprises:
an input subunit, configured to input the clothing image into the clothing recognition neural network model through the input layer;
the coding subunit is used for carrying out convolutional coding processing on the clothes image through the convolutional coding network so as to obtain a processed first feature map;
the decoding subunit is configured to perform deconvolution decoding processing on the first feature map through the deconvolution decoding network to obtain a processed second feature map;
a classification subunit, configured to classify, by the pixel classification layer, each pixel in the second feature map to obtain a classification result of each pixel, where the classification result includes: clothing pixels and non-clothing pixels.
12. The apparatus according to claim 11, wherein the encoding subunit performs a convolutional encoding process on the clothes image through the convolutional coding network to obtain a processed first feature map, and includes:
performing feature extraction on the clothes image through a convolution operation;
performing batch regularization operation on the extracted features, and performing nonlinear mapping by using an activation function to obtain a first feature map;
and performing pooling operation on the first feature map obtained after the nonlinear mapping is performed, and recording the index position of the maximum feature value in the process of the pooling operation.
13. The apparatus according to claim 12, wherein the decoding subunit performs a deconvolution decoding process on the first feature map through the deconvolution decoding network to obtain a processed second feature map, and includes:
according to the index position of the maximum characteristic value, performing up-sampling processing on the first characteristic diagram to obtain a corresponding sparse characteristic diagram;
performing convolution operation on the sparse feature map subjected to the up-sampling processing to obtain a corresponding dense feature map;
and carrying out batch regularization operation on the obtained dense feature map, and carrying out nonlinear mapping by using an activation function to obtain a second feature map of the clothes image.
14. The apparatus according to any one of claims 11-13, wherein the classification subunit classifies each pixel in the second feature map by the pixel classification layer to obtain a classification result of each pixel, and includes:
and classifying each pixel in the second feature map through a SoftMax classifier, and outputting a classification result of each pixel.
15. A storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
16. A washing machine comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the processor when executing the program implementing the steps of the method of any one of claims 1 to 7.
17. A washing machine comprising a detergent dosage determination device as claimed in any one of claims 8 to 14.
CN201910069982.2A 2019-01-24 2019-01-24 Detergent dosage determining method and device, storage medium and washing machine Active CN111549486B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910069982.2A CN111549486B (en) 2019-01-24 2019-01-24 Detergent dosage determining method and device, storage medium and washing machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910069982.2A CN111549486B (en) 2019-01-24 2019-01-24 Detergent dosage determining method and device, storage medium and washing machine

Publications (2)

Publication Number Publication Date
CN111549486A CN111549486A (en) 2020-08-18
CN111549486B true CN111549486B (en) 2021-08-31

Family

ID=72003673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910069982.2A Active CN111549486B (en) 2019-01-24 2019-01-24 Detergent dosage determining method and device, storage medium and washing machine

Country Status (1)

Country Link
CN (1) CN111549486B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113463327B (en) * 2021-06-30 2024-03-12 无锡小天鹅电器有限公司 Clothing display method and device, electronic equipment and computer readable storage medium
US11725322B2 (en) 2021-10-06 2023-08-15 Whirlpool Corporation Better dosing with a virtual and adaptive low cost doser

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017004787A1 (en) * 2015-07-07 2017-01-12 深圳市赛亿科技开发有限公司 Intelligent washing machine and control method thereof
CN106930055A (en) * 2017-01-22 2017-07-07 无锡小天鹅股份有限公司 Washing machine and for washing machine foam volume detection method and device
CN107341805A (en) * 2016-08-19 2017-11-10 北京市商汤科技开发有限公司 Background segment and network model training, image processing method and device before image
CN107893309A (en) * 2017-10-31 2018-04-10 珠海格力电器股份有限公司 Washing method and device and washing method and device
CN108823900A (en) * 2018-09-05 2018-11-16 深圳市猎搜有限公司 The long-range control method and control system of washing machine inflow and feed liquor


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Road Scene Understanding Based on Deep Convolutional Neural Networks; Wu Zongsheng, Fu Weiping, Han Gaining; Computer Engineering and Applications; 2017-11-15; Vol. 53, No. 22; 8-15 *

Also Published As

Publication number Publication date
CN111549486A (en) 2020-08-18

Similar Documents

Publication Publication Date Title
CN110363182B (en) Deep learning-based lane line detection method
CN109033954B (en) Machine vision-based aerial handwriting recognition system and method
CN110458059B (en) Gesture recognition method and device based on computer vision
CN109977262A (en) The method, apparatus and processing equipment of candidate segment are obtained from video
CN110472082B (en) Data processing method, data processing device, storage medium and electronic equipment
CN111160249A (en) Multi-class target detection method of optical remote sensing image based on cross-scale feature fusion
CN103383732B (en) Image processing method and device
JP6897749B2 (en) Learning methods, learning systems, and learning programs
CN111549486B (en) Detergent dosage determining method and device, storage medium and washing machine
CN111954250B (en) Lightweight Wi-Fi behavior sensing method and system
CN111340820B (en) Image segmentation method and device, electronic equipment and storage medium
CN109285147B (en) Image processing method and device for breast molybdenum target calcification detection and server
CN111210417B (en) Cloth defect detection method based on convolutional neural network
CN111310531B (en) Image classification method, device, computer equipment and storage medium
CN110647897B (en) Zero sample image classification and identification method based on multi-part attention mechanism
CN113284122B (en) Roll paper packaging defect detection method and device based on deep learning and storage medium
CN113191352A (en) Water meter pointer reading identification method based on target detection and binary image detection
CN117408959A (en) Model training method, defect detection method, device, electronic equipment and medium
CN116740460A (en) Pcb defect detection system and detection method based on convolutional neural network
Imamura et al. MLF-SC: Incorporating multi-layer features to sparse coding for anomaly detection
CN116188906A (en) Method, device, equipment and medium for identifying closing mark in popup window image
CN109739840A (en) Data processing empty value method, apparatus and terminal device
CN112668399B (en) Image processing method, fingerprint information extraction method, device, equipment and medium
CN112241954B (en) Full-view self-adaptive segmentation network configuration method based on lump differentiation classification
CN114187292A (en) Abnormality detection method, apparatus, device and storage medium for cotton spinning paper tube

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant