CN112764349A - Clothes hanger control method, clothes hanger, system and storage medium - Google Patents


Info

Publication number
CN112764349A
Authority
CN
China
Prior art keywords
gesture
clothes hanger
control
user
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911061473.1A
Other languages
Chinese (zh)
Inventor
陈小平
熊德林
陈应文
周晓京
康明吉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Viomi Electrical Technology Co Ltd
Original Assignee
Foshan Viomi Electrical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Viomi Electrical Technology Co Ltd filed Critical Foshan Viomi Electrical Technology Co Ltd
Priority to CN201911061473.1A
Publication of CN112764349A
Legal status: Pending

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00 Systems controlled by a computer
    • G05B15/02 Systems controlled by a computer electric
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/418 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/26 Pc applications
    • G05B2219/2642 Domotique, domestic, home control, automation, smart house
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the field of smart homes and discloses a clothes hanger control method, a clothes hanger, an intelligent clothes hanger system, and a storage medium. The clothes hanger is provided with a camera, and the control method comprises the following steps: starting the camera and acquiring, through the camera, an image containing a gesture action of a user; inputting the image into a pre-trained gesture recognition model to obtain the corresponding gesture action; judging whether the gesture action matches a preset control gesture; and, if the gesture action matches a preset control gesture, calling the corresponding control instruction according to the preset control gesture to control the clothes hanger body to ascend or descend. Because the lifting of the clothes hanger is controlled by gestures, the clothes hanger is convenient for users to use.

Description

Clothes hanger control method, clothes hanger, system and storage medium
Technical Field
The application relates to the field of smart home, in particular to a clothes hanger control method, a clothes hanger, an intelligent clothes hanger system and a storage medium.
Background
With the popularization of smart homes, the clothes drying rack is also under continuous development. Compared with the traditional fixed clothes hanger, novel clothes hangers such as the hand-cranked lifting clothes hanger, the switch-type electric lifting clothes hanger, and the remote-control lifting clothes hanger have been widely accepted by the market for their convenience and usability. In recent years, owing to the continuous development of machine vision, voice technology, and chip computing power, non-contact control technology has been applied more and more in the field of smart homes. However, installing the control switch of a switch-type electric lifting clothes hanger requires cutting into the wall and running wiring, which is unattractive, while users of a remote-control lifting clothes hanger often find themselves unable to locate the remote controller when they need it.
Therefore, how to control the lifting of the clothes hanger in a way that is convenient for the user has become an urgent problem to be solved.
Disclosure of Invention
The application provides a clothes hanger control method, a clothes hanger, an intelligent clothes hanger system and a storage medium, which are used for controlling the lifting of the clothes hanger by utilizing gestures, so that the use of a user is convenient.
In a first aspect, the application provides a clothes hanger control method, the clothes hanger is provided with a camera, and the control method comprises the following steps:
starting the camera, and acquiring an image containing gesture actions of a user through the camera;
inputting the image into a pre-trained gesture recognition model to obtain a corresponding gesture action;
judging whether the gesture action is matched with a preset control gesture;
if the gesture action is matched with a preset control gesture, calling a corresponding control instruction according to the preset control gesture to control the clothes hanger body to ascend or descend.
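The four claimed steps form a simple capture-recognize-match-actuate pipeline. As a minimal sketch (all callables and names are illustrative stand-ins; the patent does not define a programming interface), one control pass might look like:

```python
def control_step(capture, recognize, presets, actuate):
    """One pass of the claimed method: capture an image, recognize the
    gesture action, match it against the preset control gestures, and call
    the mapped control instruction. All arguments are hypothetical stubs."""
    image = capture()                 # step 1: image from the camera
    gesture = recognize(image)       # step 2: pre-trained recognition model
    if gesture in presets:           # step 3: match against preset gestures
        actuate(presets[gesture])    # step 4: call the mapped instruction
        return presets[gesture]
    return None                      # unmatched gestures are ignored
```

Here `presets` maps a recognized gesture name to a control instruction such as "ascend" or "descend".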
In a second aspect, the application also provides a laundry rack, which comprises a camera, a memory and a processor;
the camera is used for acquiring an image containing the gesture action of the user;
the memory is used for storing a computer program;
the processor is used for executing the computer program and realizing the clothes hanger control method when the computer program is executed.
In a third aspect, the present application further provides an intelligent laundry rack system, which includes: the clothes hanger is in communication connection with the washing machine;
the washing machine is used for sending an opening instruction to the clothes hanger after clothes are washed;
the clothes hanger is used for receiving the opening instruction and executing the steps of the clothes hanger control method.
In a fourth aspect, the present application also provides a computer-readable storage medium, which stores a computer program, which, when executed by a processor, causes the processor to implement the laundry rack control method as described above.
The application discloses a clothes hanger control method, a clothes hanger, an intelligent clothes hanger system and a storage medium, wherein a camera is arranged on the clothes hanger, and an image containing user gesture actions is acquired by starting the camera; inputting the image into a pre-trained gesture recognition model to obtain a corresponding gesture action; and then judging whether the gesture action is matched with a preset control gesture, if so, calling a corresponding control instruction according to the preset control gesture to control the clothes hanger body to ascend or descend. The user can control the lifting of the clothes hanger through gesture actions, and the clothes hanger is convenient to use.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a schematic view of an intelligent laundry rack system provided by the present application;
FIG. 2 is a schematic diagram illustrating the steps of a gesture recognition model training method according to an embodiment of the present application;
FIG. 3 is a schematic step diagram of a clothes hanger control method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of several recognized gesture actions provided by an embodiment of the present application;
FIG. 5 is a schematic step diagram of a clothes hanger control method provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a gesture recognition model training apparatus according to an embodiment of the present application;
FIG. 7 is a schematic structural block diagram of a clothes hanger provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It is to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic view of an intelligent laundry rack system provided in the present application. The intelligent laundry rack system in the embodiment of the present application will be described below with reference to fig. 1.
As shown in fig. 1, the intelligent laundry rack system 10 includes a laundry rack 11 and a washing machine 12. A signal transmission device 121 is disposed on the washing machine 12, and after finishing washing clothes the washing machine 12 sends an opening instruction to the laundry rack through the signal transmission device 121, where the opening instruction is an instruction for controlling the camera to be turned on.
The signal transmission device 121 may transmit signals over, for example, a 5G/4G network, a WIFI network, Bluetooth, or ZigBee.
The laundry rack 11 includes a laundry rack body 111, a camera 112, a gesture recognition device 113, and a function control device 114, and an image including a gesture action of a user may be collected by the camera, and the collected image may be sent to the gesture recognition device for gesture recognition.
The clothes hanger body 111 is used for ascending or descending to allow a user to take clothes. It can be understood that the laundry rack 11 is further provided with a motor to drive the laundry rack body to ascend or descend.
The camera 112 is disposed on the laundry stand 11, and in some embodiments, the camera 112 may be disposed on the laundry stand body 111 for capturing an image containing a gesture action of the user.
In embodiments of the present application, the camera may be a 2D camera, in other embodiments, the camera may be a depth camera, or the like.
The gesture recognition device 113 is disposed on the laundry rack 11, and the gesture recognition device 113 is connected to the camera, and is configured to receive an image which includes a gesture action of the user and is acquired by the camera, and perform gesture recognition on the image to generate a recognition result. The gesture recognition device may be specifically a gesture recognition chip.
The function control device 114 is disposed on the laundry rack 11, the function control device 114 includes a signal receiving module, a processor and a motor driving module, and the function control device can be respectively communicated with the washing machine, the camera and the gesture recognition device, for example, the camera is turned on according to an opening instruction sent by the washing machine, a recognition result generated by the gesture recognition device is obtained, and the laundry rack body is controlled to ascend or descend according to the recognition result.
The Processor may be a Central Processing Unit (CPU), or may be other general-purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In order to reduce the energy consumption of the clothes hanger, the opening state of the camera needs to be controlled, namely the camera is opened when a user needs to dry clothes and use the clothes hanger, so that the energy consumption of the camera is reduced.
Therefore, in the intelligent clothes hanger system, the washing machine sends a starting instruction to the clothes hanger after washing clothes, and the clothes hanger receives the starting instruction and turns on the camera accordingly. Controlling the opening and closing of the camera according to the washing state of the washing machine reduces the energy consumption of the camera.
In one embodiment, the clothes hanger can be further provided with an infrared sensor. The infrared sensor is in signal connection with the function control device and used for detecting whether a user approaches the clothes hanger or not, the infrared sensor sends a detection result to the function control device, and when the user approaches the clothes hanger, the function control device starts the camera.
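The two camera triggers described above (the washing machine's opening instruction and the infrared proximity detection) can be sketched as a small gating object. Class and method names are illustrative assumptions, not the patent's interface:

```python
class CameraGate:
    """Power-gating sketch: the camera opens on the washing machine's
    opening instruction or when the infrared sensor reports a nearby user,
    and closes again when the user leaves."""

    def __init__(self):
        self.camera_on = False

    def on_open_instruction(self):
        # Washing machine finished a cycle -> open the camera.
        self.camera_on = True

    def on_ir(self, user_nearby: bool):
        # Infrared proximity: open while a user is near, close otherwise,
        # keeping the rack's standby energy consumption low.
        self.camera_on = user_nearby
```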
It is to be understood that the intelligent laundry rack system in fig. 1 and the above-mentioned naming of the components of the intelligent laundry rack system are for identification purposes only and are not intended to limit the embodiments of the present application.
The clothes hanger control method provided by the embodiment of the application will be described in detail based on the intelligent clothes hanger system in fig. 1.
Referring to fig. 2, fig. 2 is a schematic diagram illustrating the steps of a gesture recognition model training method according to an embodiment of the present application. The gesture recognition model is obtained by model training based on a convolutional neural network.
It should be noted that in this embodiment the model training is performed with a MobileNetv2-SSD network to obtain the gesture recognition model, but other networks may also be used, for example VGG-SSD or MobileNetv2-SSDLite. The following description takes the MobileNetv2-SSD network as an example.
As shown in FIG. 2, the training method of the gesture recognition model is used for training the gesture recognition model to be applied to the clothes hanger control method. The training method includes steps S101 to S103.
S101, obtaining a sample gesture image.
Specifically, the sample gesture images are captured images of a plurality of gesture actions. In some embodiments, the sample gesture images may be gesture action images taken from different angles: a plurality of different gesture actions are selected and photographed from different angles, and the captured images are taken as the sample gesture images, which together form a sample gesture image set for training the gesture recognition model.
And S102, labeling the sample gesture image according to the category identification corresponding to the gesture category to construct sample data.
The gesture type comprises an ascending control gesture and a descending control gesture, and the corresponding type identification comprises an ascending identification and a descending identification. For example, the up control gesture may be a thumb-up gesture and the down control gesture may be a thumb-down gesture.
In some embodiments, the gesture categories may include a stop control gesture in addition to the up control gesture and the down control gesture, and the category corresponding to the stop control gesture is identified as a stop identifier. For example, the stop control gesture is an OK gesture.
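The labeling in step S102 amounts to attaching a category identifier to each sample image. A minimal sketch, where the concrete gesture names and identifier values are assumptions (the patent only fixes the three categories and their example gestures):

```python
# Illustrative map from gesture class to category identifier:
# thumb-up -> ascend, thumb-down -> descend, OK gesture -> stop.
GESTURE_LABELS = {
    "thumb_up": "ASCEND",
    "thumb_down": "DESCEND",
    "ok": "STOP",
}

def annotate(sample_images, gesture_name):
    """Attach the category identifier to every sample image of one gesture,
    producing (image, label) pairs as training sample data."""
    label = GESTURE_LABELS[gesture_name]
    return [(img, label) for img in sample_images]
```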
In some embodiments, to improve the accuracy of the gesture recognition model, after annotating the sample gesture image, image processing may be performed on the sample gesture image to change picture parameters of the sample gesture image.
The image processing operations include resizing, cropping, rotation, and image algorithm processing. The image algorithm processing includes a color temperature adjustment algorithm, an exposure adjustment algorithm, a contrast adjustment algorithm, a highlight recovery algorithm, a low-light compensation algorithm, a white balance algorithm, a sharpness adjustment algorithm, a defogging algorithm, and a natural saturation adjustment algorithm. These image processing operations increase the diversity of the sample data and bring the samples closer to really captured pictures.
Accordingly, the picture parameters include size information, pixel size, color temperature, exposure, contrast, white balance, sharpness, fog parameter, natural saturation, and the like.
It should be noted that performing an image processing operation on a sample gesture image to change its picture parameters means performing one or more of the above-mentioned operations on the image. This increases the diversity of the samples and makes them more representative of the real environment, thereby improving the recognition accuracy of the model.
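As a small sketch of this kind of augmentation (parameter ranges are assumptions; a real pipeline would also apply the geometric and color operations listed above), random exposure and contrast perturbation of one sample image could look like:

```python
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """One random augmentation pass over a grayscale/color sample image:
    exposure adjustment, then contrast adjustment about the mean.
    The 0.8-1.2 and 0.9-1.1 ranges are illustrative assumptions."""
    out = img.astype(np.float32)
    out *= rng.uniform(0.8, 1.2)                        # exposure adjustment
    mean = out.mean()
    out = (out - mean) * rng.uniform(0.9, 1.1) + mean   # contrast adjustment
    return np.clip(out, 0, 255).astype(np.uint8)
```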
S103, based on the convolutional neural network, performing model training according to the sample data to obtain a gesture recognition model, and taking the obtained gesture recognition model as a pre-trained gesture recognition model.
Specifically, model training is performed through the MobileNetv2-SSD network using the constructed sample data, where the convolutional neural network comprises an input layer, a plurality of convolutional layers, a pooling layer, a fully connected layer, and an output layer. A feature extraction layer is added to the convolutional neural network, the sizes of the feature maps from which default boxes are extracted are redesigned, the layers in the network that generate default boxes are defined, and the aspect ratios of the default boxes generated by each layer are specified. Default box layers with different aspect ratios are selected to predict gestures of different scales in the sample gesture image, which improves the accuracy of gesture recognition: when the user's gesture is far from the camera, the hand occupies only a small proportion of the acquired image, and this design improves the recognition accuracy of the model in that case.
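SSD-style default boxes of the kind described here are generated per feature-map cell and per aspect ratio. A simplified sketch (the actual MobileNetv2-SSD layer sizes, scales, and ratios are not given in the text, so the parameters below are illustrative):

```python
import math

def default_boxes(fmap_size, scale, aspect_ratios):
    """Generate SSD-style default boxes as (cx, cy, w, h) tuples in
    normalized [0, 1] coordinates for one square feature map. A wider
    aspect-ratio set lets the detector cover hands of different scales."""
    boxes = []
    for i in range(fmap_size):
        for j in range(fmap_size):
            cx = (j + 0.5) / fmap_size   # cell center, normalized
            cy = (i + 0.5) / fmap_size
            for ar in aspect_ratios:
                boxes.append((cx, cy,
                              scale * math.sqrt(ar),   # width grows with ar
                              scale / math.sqrt(ar)))  # height shrinks with ar
    return boxes
```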
In some embodiments, since the clothes hanger control method is applied to the clothes hanger, the trained model can be stored in the gesture recognition device of the clothes hanger, so that the data processing speed and the reaction speed of the model are increased, the interaction speed is increased, and the real-time experience is brought to the user.
In some embodiments, in order to ensure the normal operation of the clothes hanger and quickly identify the type of the gesture motion, the trained gesture recognition model needs to be compressed, and the compressed model is stored in the gesture recognition device of the clothes hanger.
The compression processing specifically includes quantizing the gesture recognition model and the like to reduce its size, so that it can be conveniently stored in the small-capacity gesture recognition device of the clothes hanger.
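The text does not specify the quantization scheme, so as a generic sketch: affine uint8 quantization of a weight tensor cuts storage by 4x relative to float32, at the cost of a bounded rounding error.

```python
import numpy as np

def quantize_weights(w: np.ndarray):
    """Affine uint8 quantization of one float weight tensor. Returns the
    quantized tensor plus (scale, offset) needed to dequantize. This is a
    generic post-training scheme, not the patent's exact method."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0          # guard against a constant tensor
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q: np.ndarray, scale: float, lo: float) -> np.ndarray:
    # Reconstruct approximate float weights for inference.
    return q.astype(np.float32) * scale + lo
```

Frameworks such as TensorFlow Lite offer post-training quantization along these lines for deployment on small devices.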
According to the training method provided by the embodiment, the images of a plurality of gesture actions are shot, and the shot images are processed by utilizing an image processing operation to increase the diversity of sample data; after the obtained sample gesture image is labeled, sample data is constructed, model training is performed according to the constructed sample data based on a convolutional neural network to obtain a gesture recognition model, and the obtained gesture recognition model is used as a pre-trained gesture recognition model to be applied to the clothes hanger control method, so that the recognition accuracy of the gesture action of a user can be improved.
Referring to fig. 3, fig. 3 is a schematic step diagram of a clothes hanger control method according to an embodiment of the present application. The clothes hanger control method can be applied to a clothes hanger of an intelligent clothes hanger system, and the user can control the lifting of the clothes hanger by utilizing gestures through gesture recognition of an image containing the gesture actions of the user, so that the use of the user is convenient.
As shown in fig. 3, the clothes hanger control method specifically includes: step S201 to step S204.
S201, starting the camera, and acquiring an image containing gesture actions of a user through the camera.
Specifically, a camera on the clothes hanger is started, and an image containing gesture actions of a user is acquired through the camera.
In some embodiments, the camera may be turned on when a person is detected approaching the laundry rack. In the embodiment, the clothes hanger is provided with the infrared sensor, and when the infrared sensor detects that a person approaches the clothes hanger, the camera is started. The opening and closing of the camera are controlled, and the energy consumption efficiency of the clothes hanger is optimized.
In some embodiments, in order to improve the recognition accuracy of the gesture recognition model for the input image and reduce the computational complexity, after the image containing the gesture action of the user is acquired, the acquired image may be further preprocessed. The method specifically comprises the following steps: and adjusting the size of the image to obtain an image with a preset size. And carrying out format conversion on the image with the preset size to obtain the image with the preset format.
Specifically, the size of the acquired images is adjusted, so that the acquired images with different sizes are adjusted to be uniform in size. After the size is adjusted, format conversion is performed on the image with the preset size, wherein different sampling formats of the cameras may cause different formats of the acquired image including the gesture action of the user, such as a YUV format or an RGB format, and the format conversion is performed on the image with the preset size by using a conversion formula to convert the image into a uniform format, such as unifying the image with the preset size into an RGB format. The processing of the image by the gesture recognition model is facilitated, and the recognition accuracy is improved.
It should be noted that, when performing size adjustment and format adjustment on the acquired image, the adjustment sequence of size adjustment and format adjustment is not limited, and the size adjustment and then the format adjustment may be performed first, or the format adjustment and then the size adjustment may be performed first.
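The preprocessing above (resize to a preset size, convert to a uniform format) can be sketched with a nearest-neighbour resize; the 224x224 target is an assumption (a common MobileNet input size), and real code would typically use OpenCV's cv2.resize and cv2.cvtColor for the YUV-to-RGB conversion:

```python
import numpy as np

def preprocess(img: np.ndarray, size=(224, 224)) -> np.ndarray:
    """Resize an image of any size to a fixed preset size with
    nearest-neighbour sampling, then scale pixel values to float32 in
    [0, 1] as a uniform model input format. Minimal stand-in only."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row per target row
    cols = np.arange(size[1]) * w // size[1]   # source col per target col
    resized = img[rows][:, cols]
    return resized.astype(np.float32) / 255.0
```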
S202, inputting the image into a pre-trained gesture recognition model to obtain a corresponding gesture action.
Specifically, an image which is acquired by a camera and contains the gesture action of the user is input into a pre-trained gesture recognition model, the gesture action is recognized, and a recognition result is output, wherein the recognition result may include the gesture action contained in the image. As shown in fig. 4, fig. 4 is a schematic diagram of several recognized gesture actions provided in the embodiment of the present application.
S203, judging whether the gesture action is matched with a preset control gesture.
Specifically, the preset control gesture may be a preset gesture for controlling the clothes hanger body to ascend or descend. The preset control gesture can be set by a software developer or can be set by a user independently. And after the corresponding gesture action is recognized, judging whether the gesture action is matched with a preset control gesture.
In some embodiments, the preset control gesture may be one gesture or a plurality of gestures. In the case of multiple gestures, the gesture actions may be ordered or unordered. Taking multiple ordered gestures as an example, after gesture actions are recognized, whether the recognized gesture actions include multiple preset gestures can be judged according to the sequence.
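For the ordered multi-gesture case just described, matching reduces to checking that the preset gestures appear as an in-order subsequence of the recognized gesture actions. A sketch (gesture names are illustrative):

```python
def matches_preset(recognized, preset):
    """Return True if every gesture in `preset` appears in `recognized`
    in the same order (other gestures may be interleaved)."""
    it = iter(recognized)
    # `g in it` advances the iterator until g is found, so order is enforced.
    return all(g in it for g in preset)
```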
And S204, if the gesture action is matched with a preset control gesture, calling a corresponding control instruction according to the preset control gesture to control the clothes hanger body to ascend or descend.
Specifically, if the function control module judges that the gesture action is matched with the preset control gesture, the corresponding control instruction is called according to the preset control gesture to control the rising or falling of the clothes hanger body.
In some embodiments, invoking a corresponding control instruction according to the preset control gesture to control the clothes hanger body to ascend or descend may specifically include:
detecting the current state of the clothes hanger, wherein the current state comprises a first state for airing clothes and a second state for carrying or taking up clothes; calling a corresponding control instruction and the current state according to the preset control gesture to control the clothes hanger body to ascend or descend; and when the clothes hanger is in a second state, acquiring a target image through the camera, and if the target image does not contain the user, controlling the clothes hanger to rise from the second state to the first state.
Specifically, the first state is a high position in which the clothes hanger is drying clothes, and the second state is a low position in which clothes can be hung on or taken off the clothes hanger. After receiving the recognition result sent by the gesture recognition module, the processor detects the current state of the clothes hanger and sends a corresponding instruction to the motor driving module according to the current state and the gesture recognition result, so as to control the clothes hanger body to ascend or descend.
For example, when the clothes hanger is in the first state and the preset control gesture is a gesture for controlling the clothes hanger body to descend, the processor sends a descending instruction to the motor driving module to drive the motor to rotate forwards to drive the clothes hanger body to descend.
When the clothes hanger is in the second state and the target image acquired by the camera does not contain the user, this indicates that the user has finished collecting or hanging the clothes, and the processor controls the clothes hanger to ascend from the second state back to the first state, returning it to its original position to dry the clothes. After collecting or hanging the clothes, the user does not need to make any further gesture action: once the user leaves the clothes hanger, it automatically rises from the second state to the first state, which improves convenience for the user.
In an embodiment, when the laundry rack is in the second state, acquiring a target image through the camera, and if the target image does not include the user, controlling the laundry rack to ascend from the second state to the first state may specifically be: when the clothes hanger is in a second state, continuously acquiring target images through the camera, and if the target images do not comprise the user within preset time, controlling the clothes hanger to rise from the second state to the first state.
In particular, the preset time may be five minutes. And continuously acquiring a target image by the camera within five minutes, and controlling the clothes hanger to rise from the second state to the first state if the target image within five minutes does not comprise a user. The preset time is set, so that the change of the state of the clothes hanger caused by temporary leaving of a user in a short time is avoided.
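The absence-timer behaviour above can be sketched as a small state object: the rack rises from the second (low) state to the first (high) state only after no user has been seen for the full preset interval (five minutes in the example). Names and the per-frame interface are illustrative assumptions:

```python
class AutoRaise:
    """Auto-raise sketch: each camera frame reports whether the user is
    visible; any sighting resets the absence timer, and only a continuous
    absence of PRESET seconds triggers the rise to the first state."""

    PRESET = 5 * 60  # preset time: five minutes, in seconds

    def __init__(self):
        self.state = "LOW"       # second state: hanging/collecting clothes
        self.last_seen = 0.0     # time the user was last in frame

    def on_frame(self, now: float, user_in_frame: bool) -> str:
        if user_in_frame:
            self.last_seen = now                     # reset absence timer
        elif self.state == "LOW" and now - self.last_seen >= self.PRESET:
            self.state = "HIGH"                      # rise to first state
        return self.state
```

A brief absence shorter than the preset time leaves the rack in place, matching the rationale given above.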
In one embodiment, when the current state of the clothes hanger is inconsistent with the control instruction corresponding to the preset control gesture call, a prompt is sent to the user to remind the user to change the instruction.
Specifically, the current state of the clothes hanger is inconsistent with the preset control gesture, and the preset control gesture calls the corresponding control instruction to control the clothes hanger body to descend, for example, the current state of the clothes hanger is the second state, and the preset control gesture calls the corresponding control instruction to control the clothes hanger body to descend; or the current state of the clothes hanger is the first state, and the preset control gesture calls the corresponding control command to control the clothes hanger body to ascend.
The reminder may be given by an indicator light, by an alarm sound, or in other ways. In this embodiment, the clothes hanger may be provided with a red-green dual-color indicator light: when the current state of the clothes hanger is inconsistent with the control instruction invoked by the preset control gesture, the red light is turned on, and when the current state is consistent with the invoked control instruction, the green light is turned on.
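The consistency check and the dual-color indicator can be sketched as two small functions; the state and command names below are illustrative, not taken from the patent:

```python
RAISED, LOWERED = "first_state", "second_state"  # airing vs. hanging/collecting


def command_is_consistent(current_state: str, command: str) -> bool:
    """A 'descend' command only makes sense when raised, 'ascend' when lowered."""
    return (command == "descend" and current_state == RAISED) or \
           (command == "ascend" and current_state == LOWERED)


def indicator_colour(current_state: str, command: str) -> str:
    # green = command accepted, red = prompt the user to change the instruction
    return "green" if command_is_consistent(current_state, command) else "red"
```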
According to the clothes hanger control method provided by this embodiment, an image containing the user's gesture action is collected through the camera and input into a pre-trained gesture recognition model to obtain the corresponding gesture action, and when the recognized gesture action matches a preset control gesture, the corresponding control instruction is invoked according to the preset control gesture to control the clothes hanger body to ascend or descend. The user can thus control the lifting of the clothes hanger with gestures, which is convenient to use.
Referring to fig. 5, fig. 5 is a schematic step diagram of a clothes hanger control method according to an embodiment of the present application. The clothes hanger control method can be applied to the clothes hanger of the intelligent clothes hanger system shown in fig. 1, and, by analyzing images containing the user's gesture actions, enables the user to control the lifting of the clothes hanger through gestures, facilitating its use.
As shown in fig. 5, the clothes hanger control method specifically includes: step S301 to step S306.
S301, receiving a starting instruction sent by the washing machine after clothes are washed, starting the camera according to the starting instruction, and acquiring images containing gesture actions of a user through the camera.
Specifically, in the intelligent laundry rack system provided in fig. 1, the washing machine and the laundry rack are networked through a wireless connection. When the washing machine finishes washing clothes, it sends an opening instruction to the laundry rack; the laundry rack receives the opening instruction and turns on the camera accordingly, so that the camera collects images containing the user's gesture actions. The opening and closing of the camera is thus linked to the working state of the washing machine: after the washing machine finishes washing, the user needs the laundry rack to dry the clothes, and the camera is turned on at that moment. Opening the camera only when the user needs it avoids keeping the camera always on and optimizes the energy consumption of the drying rack.
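The washer-to-rack linkage can be sketched as a small event handler. The message payload shape here is an assumption, since the patent only specifies that an opening instruction is sent over the wireless link once washing completes:

```python
class RackCameraController:
    """Keeps the camera off until the networked washer reports a finished cycle."""

    def __init__(self):
        self.camera_on = False

    def on_washer_message(self, message: dict):
        # Hypothetical payload: the patent only specifies an 'opening instruction'
        # sent over the wireless link once washing completes.
        if message.get("event") == "wash_finished":
            self.camera_on = True


ctl = RackCameraController()
ctl.on_washer_message({"event": "wash_finished"})  # camera goes on only now
```

Any other event leaves the camera off, which is the "not always on" behaviour the passage describes.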
S302, inputting the image into a pre-trained gesture recognition model to obtain a corresponding gesture action.
Specifically, an image which is acquired by a camera and contains the gesture action of the user is input into a pre-trained gesture recognition model, the gesture action is recognized, and a recognition result is output, wherein the recognition result may include the gesture action contained in the image.
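Step S302 reduces to running the model on one frame and taking the top-scoring class. A framework-agnostic sketch, where the model is any callable returning one score per gesture class and the label set is hypothetical:

```python
GESTURE_LABELS = ["raise", "lower", "stop"]  # illustrative class set, not from the patent


def recognise(image, model) -> str:
    """Run the pre-trained model on one frame and return the top-1 gesture label."""
    probs = model(image)  # model outputs one score per class
    top = max(range(len(probs)), key=probs.__getitem__)
    return GESTURE_LABELS[top]
```

In a real deployment `model` would be the trained CNN's inference call; here a stub returning a probability vector is enough to show the flow.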
S303, detecting whether the awakening operation of the clothes hanger by the user is received.
Specifically, in order to avoid erroneous operations caused by unintentional gesture motions when the user is near the laundry rack, the user first needs to wake up the laundry rack before using it.
In one embodiment, the detecting whether the user awakening operation on the clothes hanger is received comprises the following steps: continuously collecting multi-frame images including gesture actions of a user, judging whether the gesture actions of the user change or not according to the multi-frame images, and taking the changed gesture actions as awakening operations if the gesture actions of the user change.
Specifically, the wake-up operation may be an operation in which the user's gesture changes within a preset time. The camera continuously collects multiple frames of images including the user's gesture actions, and it is then judged whether the gesture changes across these frames. If it does, the user is considered to have performed the wake-up operation, and a detection result indicating that the wake-up operation on the clothes hanger has been received is output.
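The change-across-frames test can be sketched over the per-frame recognition labels; the gesture names used below are illustrative:

```python
def changed_gesture(frame_labels: list) -> bool:
    """True when the recognised gesture differs between consecutive frames,
    which the rack treats as a wake-up operation."""
    return any(a != b for a, b in zip(frame_labels, frame_labels[1:]))
```

A static hand held in front of the camera produces identical labels frame after frame and never wakes the rack, which is the point of requiring a change.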
In one embodiment, the detecting whether the user awakening operation on the clothes hanger is received comprises the following steps: continuously acquiring multi-frame images including gesture actions of a user, and taking the gesture actions matched with a preset awakening gesture as awakening operation if the gesture actions of the user are matched with the preset awakening gesture.
Specifically, the preset wake-up gesture may be a specific change of the user's gesture within a preset time, for example a change from one gesture to another. Multiple frames of images including the user's gesture actions are continuously collected, and if the change in the user's gesture matches the preset wake-up gesture, the user is considered to have performed the wake-up operation, and a detection result indicating that the wake-up operation on the clothes hanger has been received is output.
In one embodiment, the detecting whether the user awakening operation on the clothes hanger is received comprises the following steps: if the gesture actions are matched with a preset awakening gesture, judging whether the sequence of the gesture actions is a preset sequence or not; and if the sequence of the plurality of gesture actions is a preset sequence, taking the plurality of gesture actions as awakening actions.
Specifically, if the multiple gesture actions collected by the camera match the preset wake-up gestures, it is judged whether their collection order conforms to the preset sequence. Only when the collection order of the gesture actions matches the preset sequence are they taken as the wake-up action, and a detection result indicating that the wake-up operation on the clothes hanger has been received is output.
For example, when the preset sequence is a first gesture followed by a second gesture, only when the collected gesture actions include both the first gesture and the second gesture, collected in that order, is it judged that the sequence of gesture actions matches the preset sequence, and the first gesture and the second gesture are taken as the wake-up action to wake up the clothes hanger.
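The ordered-sequence check can be sketched as an in-order subsequence match over the collected gesture labels (names illustrative); unrelated gestures between the preset ones are tolerated here, which is an assumption about the intended strictness:

```python
def is_wake_sequence(observed: list, preset: list) -> bool:
    """Wake only when the preset gestures appear in the observed stream,
    in the preset order (other gestures may be interleaved)."""
    it = iter(observed)
    # 'g in it' advances the iterator, so matches must occur left to right
    return all(g in it for g in preset)
```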
S304, if the awakening operation of the clothes hanger by the user is received, awakening the clothes hanger to be in the working state according to the awakening operation.
Specifically, if the wake-up operation on the clothes hanger is received from the user, the clothes hanger is woken according to the wake-up operation so that it is in the working state. Once the clothes hanger is in the working state, the user can control its lifting with gestures; misoperation caused by unintentional gesture actions near the clothes hanger is avoided, and the user experience is improved.
S305, judging whether the gesture action is matched with a preset control gesture.
Specifically, the preset control gesture may be a preset gesture for controlling the clothes hanger body to ascend or descend. The preset control gesture can be set by a software developer or can be set by a user independently. And after the corresponding gesture action is recognized, judging whether the gesture action is matched with a preset control gesture.
S306, if the gesture action is matched with a preset control gesture, calling a corresponding control instruction according to the preset control gesture to control the clothes hanger body to ascend, descend or stop.
Specifically, if the gesture action is matched with the preset control gesture, the corresponding control instruction is called according to the preset control gesture to control the clothes hanger body to ascend, descend or stop.
In some embodiments, invoking a corresponding control instruction according to the preset control gesture to control the clothes hanger body to ascend or descend may specifically include:
calling a corresponding control instruction according to the preset control gesture to adjust the working state of the motor, wherein the working state of the motor comprises reverse rotation, forward rotation and stop; and controlling the clothes hanger body to ascend, descend or stop based on the working state of the motor.
Specifically, when the gesture action is judged to be matched with the preset control gesture, the corresponding control instruction is called according to the preset control gesture, so that the working state of the motor is adjusted based on the control instruction, and the motor controls the clothes hanger body to ascend, descend or stop.
For example, when the gesture action is judged to be matched with a preset control gesture for controlling the clothes hanger body to descend, a corresponding control instruction is called according to the preset control gesture, and the motor is controlled to rotate forwards based on the control instruction, so that the motor drives the clothes hanger body to descend in the process of rotating forwards.
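The gesture-to-motor mapping described above can be sketched as a lookup table. The patent states that forward rotation lowers the rack; mapping reverse rotation to raising is an assumption, and the gesture names are illustrative:

```python
# Assumed mapping: forward rotation lowers the rack (stated in the text),
# so reverse rotation is taken here to raise it.
GESTURE_TO_MOTOR = {"lower": "forward", "raise": "reverse", "stop": "stop"}


def motor_command(gesture: str) -> str:
    """Translate a matched control gesture into a motor working state."""
    try:
        return GESTURE_TO_MOTOR[gesture]
    except KeyError:
        return "stop"  # unknown gesture: fail safe by halting the motor
```

Defaulting to "stop" for an unrecognised gesture is a design choice for safety, not something the patent specifies.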
According to the clothes hanger control method provided by this embodiment, the camera is turned on according to the opening instruction sent by the washing machine after it finishes washing clothes, and collects images containing the user's gesture actions, which avoids keeping the camera always on and optimizes its energy consumption. After a gesture action is recognized, it is detected whether a wake-up operation on the clothes hanger has been received from the user; if so, the clothes hanger is woken into the working state according to the wake-up operation, which avoids misoperation caused by unintentional gestures performed near the clothes hanger and improves the user experience. When the recognized gesture action matches a preset control gesture, the corresponding control instruction is invoked according to the preset control gesture to control the clothes hanger body to ascend, descend or stop. The user can thus control the lifting of the clothes hanger with gestures, which is convenient to use.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a gesture recognition model training apparatus according to an embodiment of the present disclosure. The gesture recognition model training device can be a deep learning computer. The gesture recognition device receives a gesture recognition model trained in advance by the gesture recognition model training device so as to perform gesture recognition on the received image which is acquired by the camera and contains the gesture action of the user.
The gesture recognition model training apparatus 400 includes a sample acquisition module 401, a sample labeling module 402, and a model training module 403.
The sample acquiring module 401 is configured to acquire a sample gesture image, where the sample gesture image is an image of a plurality of captured gesture actions.
The sample labeling module 402 is configured to label the sample gesture image according to the category identifier corresponding to the gesture category to construct sample data.
The model training module 403 is configured to perform model training according to the sample data based on a convolutional neural network to obtain a gesture recognition model, and use the obtained gesture recognition model as a pre-trained gesture recognition model.
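The sample labeling module (402) essentially pairs each captured gesture image with a class identifier before the CNN training in module 403 consumes them. A minimal sketch of that labeling step, assuming images arrive grouped by gesture category (the grouping and the stable sorted class-id scheme are assumptions):

```python
def build_sample_data(images_by_category: dict) -> list:
    """Label each captured gesture image with its category identifier,
    mirroring the sample-labelling module: output is (image, class_id) pairs."""
    categories = sorted(images_by_category)  # stable, reproducible class ids
    class_id = {c: i for i, c in enumerate(categories)}
    return [(img, class_id[c])
            for c in categories
            for img in images_by_category[c]]
```

The resulting pairs are what a convolutional-network training loop would iterate over; the network itself is out of scope here.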
Referring to fig. 7, fig. 7 is a schematic block diagram of a structure of a clothes hanger according to an embodiment of the present application. The laundry stand 11 includes a laundry stand body 111, a camera 112, a gesture recognition device 113, a function control device 114, and a memory 115. The camera 112, the gesture recognition device 113, the function control device 114, and the memory 115 are connected via a bus, such as an I2C (Inter-integrated Circuit) bus.
Specifically, the function control device 114 includes a signal receiving module, a processor, and a motor driving module, wherein the processor may be a Micro-controller Unit (MCU), a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or the like.
Specifically, the memory 115 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB disk, or a removable hard disk.
Wherein the processor is configured to run a computer program stored in the memory and to implement the following steps when executing the computer program:
starting the camera, and acquiring an image containing gesture actions of a user through the camera; inputting the image into a pre-trained gesture recognition model to obtain a corresponding gesture action; judging whether the gesture action is matched with a preset control gesture; if the gesture action is matched with a preset control gesture, calling a corresponding control instruction according to the preset control gesture to control the clothes hanger body to ascend or descend.
In some embodiments, when the processor implements the turning on of the camera, the following is specifically implemented:
receiving a starting instruction sent by the washing machine after washing clothes, and starting the camera according to the starting instruction; or when people are detected to approach the clothes hanger, the camera is started.
In some embodiments, when the processor calls the corresponding control instruction according to the preset control gesture to control the clothes hanger body to ascend or descend, the following steps are specifically implemented:
detecting the current state of the clothes hanger, wherein the current state comprises a first state for airing clothes and a second state for hanging or collecting clothes; calling a corresponding control instruction according to the preset control gesture and the current state to control the clothes hanger body to ascend or descend; and when the clothes hanger is in the second state, acquiring a target image through the camera, and if the target image does not contain the user, controlling the clothes hanger to rise from the second state to the first state.
In some embodiments, the processor, prior to implementing the inputting the image into the pre-trained gesture recognition model, is further to implement:
adjusting the size of the image to obtain an image with a preset size; carrying out format conversion on the image with the preset size to obtain an image with a preset format; when the processor is used for inputting the image into the pre-trained gesture recognition model, the following steps are specifically realized: and inputting the images in the preset format into a pre-trained gesture recognition model.
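The resize and format-conversion steps before inference can be sketched without an image library; nearest-neighbour resampling and [0, 1] normalization stand in here for the unspecified "preset size" and "preset format":

```python
def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of a 2-D pixel grid to the preset input size."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
            for r in range(out_h)]


def to_model_format(img):
    """Scale pixel values to floats in [0, 1] — a stand-in for the patent's
    'preset format' conversion, whose exact format is not specified."""
    return [[px / 255.0 for px in row] for row in img]
```

In practice a library call (e.g. an OpenCV resize) would replace `resize_nearest`; the sketch just makes the two-stage pipeline concrete.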
In some embodiments, before the processor calls the corresponding control instruction according to the preset control gesture to control the clothes hanger body to ascend or descend, the processor is further configured to:
detecting whether a user awakening operation on the clothes hanger is received; and if the awakening operation of the clothes hanger by the user is received, awakening the clothes hanger to be in the working state according to the awakening operation.
In some embodiments, when the processor detects whether the user awakens the laundry rack, the processor specifically implements:
continuously collecting multi-frame images including gesture actions of a user, judging whether the gesture actions of the user change or not according to the multi-frame images, and taking the changed gesture actions as awakening operations if the gesture actions of the user change; or continuously acquiring multi-frame images comprising the gesture actions of the user, and taking the gesture actions matched with the preset awakening gesture as awakening operation if the gesture actions of the user are matched with the preset awakening gesture; or if the gesture actions are matched with a preset awakening gesture, judging whether the sequence of the gesture actions is a preset sequence or not; and if the sequence of the plurality of gesture actions is a preset sequence, taking the plurality of gesture actions as awakening actions.
In some embodiments, the processor is further configured to receive a pre-trained gesture recognition model, wherein the training of the gesture recognition model comprises:
acquiring a sample gesture image, wherein the sample gesture image is a plurality of shot gesture motion images; marking the sample gesture image according to the category identification corresponding to the gesture category to construct sample data; and based on a convolutional neural network, performing model training according to the sample data to obtain a gesture recognition model, and taking the obtained gesture recognition model as a pre-trained gesture recognition model.
The embodiment of the application also provides a computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, the computer program comprises program instructions, and the processor executes the program instructions to realize the clothes hanger control method provided by the embodiment of the application.
The computer readable storage medium may be an internal storage unit of the laundry rack described in the foregoing embodiment, for example, a hard disk or a memory of the laundry rack. The computer readable storage medium may also be an external storage device of the laundry rack, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the laundry rack.
While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A clothes hanger control method is characterized in that a camera is arranged on the clothes hanger, and the control method comprises the following steps:
starting the camera, and acquiring an image containing gesture actions of a user through the camera;
inputting the image into a pre-trained gesture recognition model to obtain a corresponding gesture action;
judging whether the gesture action is matched with a preset control gesture;
if the gesture action is matched with a preset control gesture, calling a corresponding control instruction according to the preset control gesture to control the clothes hanger body to ascend or descend.
2. The method for controlling a laundry rack according to claim 1, wherein the turning on the camera comprises:
receiving a starting instruction sent by the washing machine after washing clothes, and starting the camera according to the starting instruction; or
And when a person is detected to approach the clothes hanger, the camera is started.
3. The clothes hanger control method according to claim 1 or 2, wherein the calling of the corresponding control command according to the preset control gesture to control the clothes hanger body to ascend or descend comprises:
detecting the current state of the clothes hanger, wherein the current state comprises a first state for airing clothes and a second state for hanging or collecting clothes;
calling a corresponding control instruction according to the preset control gesture and the current state to control the clothes hanger body to ascend or descend; and
when the clothes hanger is in a second state, a target image is collected through the camera, and if the user is not included in the target image, the clothes hanger is controlled to ascend from the second state to the first state.
4. The method for controlling the laundry rack according to claim 1, wherein before inputting the image into the pre-trained gesture recognition model, the method further comprises:
adjusting the size of the image to obtain an image with a preset size;
carrying out format conversion on the image with the preset size to obtain an image with a preset format;
the inputting the image into a pre-trained gesture recognition model comprises: and inputting the images in the preset format into a pre-trained gesture recognition model.
5. The clothes hanger control method according to claim 1, wherein before calling the corresponding control command according to the preset control gesture to control the clothes hanger body to ascend or descend, the method further comprises:
detecting whether a user awakening operation on the clothes hanger is received;
and if the awakening operation of the clothes hanger by the user is received, awakening the clothes hanger to be in the working state according to the awakening operation.
6. The method for controlling the clothes hanger according to claim 5, wherein the detecting whether the user awakening operation on the clothes hanger is received comprises:
continuously collecting multi-frame images including gesture actions of a user, judging whether the gesture actions of the user change or not according to the multi-frame images, and taking the changed gesture actions as awakening operations if the gesture actions of the user change; or
Continuously acquiring multi-frame images including gesture actions of a user, and taking the gesture actions matched with a preset awakening gesture as awakening operation if the gesture actions of the user are matched with the preset awakening gesture; or
If the gesture actions are matched with a preset awakening gesture, judging whether the sequence of the gesture actions is a preset sequence or not; and if the sequence of the plurality of gesture actions is a preset sequence, taking the plurality of gesture actions as awakening actions.
7. The clothes hanger control method according to claim 4 or 5, further comprising receiving a pre-trained gesture recognition model, wherein the training process of the gesture recognition model comprises:
acquiring a sample gesture image, wherein the sample gesture image is a plurality of shot gesture motion images;
marking the sample gesture image according to the category identification corresponding to the gesture category to construct sample data;
and based on a convolutional neural network, performing model training according to the sample data to obtain a gesture recognition model, and taking the obtained gesture recognition model as a pre-trained gesture recognition model.
8. A clothes hanger is characterized by comprising a camera, a memory and a processor;
the camera is used for acquiring an image containing the gesture action of the user;
the memory is used for storing a computer program;
the processor is used for executing the computer program and realizing the clothes hanger control method according to any one of claims 1 to 7 when the computer program is executed.
9. An intelligent clothes hanger system is characterized by comprising: the clothes hanger is in communication connection with the washing machine;
the washing machine is used for sending an opening instruction to the clothes hanger after clothes are washed;
the clothes rack is used for receiving the opening instruction and executing the steps of the clothes rack control method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, causes the processor to implement the laundry rack control method according to any one of claims 1 to 7.
CN201911061473.1A 2019-11-01 2019-11-01 Clothes hanger control method, clothes hanger, system and storage medium Pending CN112764349A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911061473.1A CN112764349A (en) 2019-11-01 2019-11-01 Clothes hanger control method, clothes hanger, system and storage medium


Publications (1)

Publication Number Publication Date
CN112764349A true CN112764349A (en) 2021-05-07

Family

ID=75692042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911061473.1A Pending CN112764349A (en) 2019-11-01 2019-11-01 Clothes hanger control method, clothes hanger, system and storage medium

Country Status (1)

Country Link
CN (1) CN112764349A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113625625A (en) * 2021-07-30 2021-11-09 珠海格力电器股份有限公司 Intelligent clothes hanger control method, system, device, equipment and storage medium
CN116434559A (en) * 2023-06-14 2023-07-14 杭州立方控股股份有限公司 Intelligent anti-parking management system and method for emergency channel


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734065A (en) * 2017-04-20 2018-11-02 奥克斯空调股份有限公司 A kind of images of gestures acquisition device and method
CN107610284A (en) * 2017-08-23 2018-01-19 移康智能科技(上海)股份有限公司 A kind of gesture identification method, device and intelligent peephole
WO2019080203A1 (en) * 2017-10-25 2019-05-02 南京阿凡达机器人科技有限公司 Gesture recognition method and system for robot, and robot
CN108181992A (en) * 2018-01-22 2018-06-19 北京百度网讯科技有限公司 Voice awakening method, device, equipment and computer-readable medium based on gesture
US20190228217A1 (en) * 2018-01-22 2019-07-25 Beijing Baidu Netcom Science And Technology Co., Ltd. Method, apparatus and device for waking up voice interaction function based on gesture, and computer readable medium
CN108035128A (en) * 2018-01-26 2018-05-15 佛山市洛克威特科技有限公司 A kind of clothes hanger based on data analysis
CN108399009A (en) * 2018-02-11 2018-08-14 易视腾科技股份有限公司 The method and device of smart machine is waken up using human-computer interaction gesture
CN108885813A (en) * 2018-06-06 2018-11-23 深圳前海达闼云端智能科技有限公司 Intelligent sales counter, article identification method, apparatus, server and storage medium
CN109137420A (en) * 2018-09-28 2019-01-04 江苏理工学院 A kind of intelligent elevated clothes-airing hanger based on machine vision
CN109440416A (en) * 2018-12-12 2019-03-08 江阴市友邦家居用品有限公司 A kind of intelligent electric clothes hanger based on gesture control
CN109518436A (en) * 2018-12-12 2019-03-26 江阴市友邦家居用品有限公司 A kind of intelligence system based on multiple sensors and Internet of Things control clothes hanger
CN110381188A (en) * 2019-08-08 2019-10-25 西安易朴通讯技术有限公司 Mobile terminal and its application method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210507